From spike66 at comcast.net Fri Jun 1 00:19:22 2007 From: spike66 at comcast.net (spike) Date: Thu, 31 May 2007 17:19:22 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: Message-ID: <200706010039.l510dBrC007492@andromeda.ziaspace.com> On 5/29/07, spike wrote: > >Similarly Libby wasn't really the one they wanted. >Who's the "they" you're talking about, Spike? The CIA? The Justice Dept.? Patrick Fitzgerald? ... Best, Jeff Davis Ja, Patrick Fitzgerald and company. More on this later, gotta go to a friend's kid's graduation. Jeff, it's great to see you posting again. We wondered where you had been and hoped you were OK. You are well and happy, ja? Your bride too? {8-] spike From stathisp at gmail.com Fri Jun 1 01:09:30 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 11:09:30 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> References: <4653BBEF.3010808@comcast.net> <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> Message-ID: On 01/06/07, Lee Corbin wrote: > Such a dominant AI would be held in check by at least as many factors > > as a human's dominance is held in check: pressure from society in the > > form of initial programming, reward for human-friendly behaviour, > > You seem as absolutely sure that the first programmers will succeed > with Friendliness as John Clark is absolutely sure that the AI will > spontaneously ignore all its early influence. We just don't know, we > cannot know. Aren't there many reasonable scenarios where you're > just wrong? I.e., some very bright Chinese kids keep plugging away > at a seed-AI, and take no care whatsoever that it's Friendly. They > succeed, and bam! the world's taken over. I don't see how that's possible. How is the AI going to commandeer the R&D facilities, organise manufacture of new hardware, make sure that the factories are kept supplied with components, make sure the component factories are supplied with raw materials, make sure the mines produce the raw materials, make sure the dockworkers load the raw materials onto ships etc. etc. etc. etc. Perhaps I am sinning against the singularity idea in saying this, but do you really think it's just a matter of writing some code on a PC somewhere, which then goes on to take over the world? > and the censure of other AI's (which would be just as capable, > > more numerous, and more likely to be following human-friendly > > programs). > > But how do you *know* or how are you so confident that *one* > AI may suddenly be a breakthrough, and start making improvements > to itself every few hours, and then simply take over everything? It's possible that an individual human somewhere will develop a superweapon, or mind-control abilities, or a viral vector that inserts his DNA into every living cell on the planet; it's just not very likely. And why do you suppose that rapid self-improvement of the world-dominating kind is more likely in an AI than in the nanotechnology that has evolved naturally over billions of years? For that matter, why do you suppose that human level intelligence has not evolved before, to our knowledge, if it's so adaptive?
I don't know the answer to these questions, but when you look at the universe, there isn't really any evidence that intelligence is as "adaptive" as we might assume it to be. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Fri Jun 1 02:15:48 2007 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 31 May 2007 19:15:48 -0700 Subject: [ExI] Other thoughts on transhumanism and religion In-Reply-To: <200705311442.l4VEgHDn029517@andromeda.ziaspace.com> References: <465E871E.30008@mac.com> <200705311442.l4VEgHDn029517@andromeda.ziaspace.com> Message-ID: <51ce64f10705311915u92bd7c4h16cebbddc0b1b918@mail.gmail.com> On 5/31/07, spike wrote: > > In the late 80s and early 90s, the K. Eric used to give talks to the local > electronics and technology groups, so I attended several of these. Then he > gave that up, and we haven't heard much from him since about the time > Freitas' Nanomedicine was published. Where is Robert Freitas hanging out > these days? Anyone here buddies with him? For the latest with Robert Freitas, see here: http://lifeboat.com/ex/interview.robert.a.freitas.jr Robert has just completed a large project to analyze a "comprehensive set of DMS reactions and tooltips that could be used to build diamond, graphene (e.g., carbon nanotubes), and all of the tools themselves including all necessary tool recharging reactions." -- Michael Anissimov Lifeboat Foundation http://lifeboat.com http://acceleratingfuture.com/michael/blog From brent.allsop at comcast.net Fri Jun 1 02:58:58 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Thu, 31 May 2007 20:58:58 -0600 Subject: [ExI] The Mormon Missionary Experience (Was: Linguistic Markers of Class) In-Reply-To: <734238.52814.qm@web35607.mail.mud.yahoo.com> References: <734238.52814.qm@web35607.mail.mud.yahoo.com> Message-ID: <465F8B72.3070103@comcast.net> John, Wow, that was all great. You're quite the expert on all that! I've got another story for your grand collection. In Japan, one of my companions was sent home for doing it with one of the wives of a family the missionaries were teaching. I could see how it happened, because I met her one last time before returning home, and when she shook my hand with both hands and looked into my eyes... wow. I think you've got to hand it to the large majority of them that can resist such unimaginable "temptation". And it seemed like every missionary over there had several girls visit them in the US after they returned home, chasing after them, wanting not only a better country economically, but a country that treats women far better. Brent Allsop John Grigg wrote: > Spike wrote: > The church was at that time pondering letting the girls go on > missions too. But a lot of us can think immediately of why that would > be a really bad idea. John or anyone know how that turned out? > I have heard that it is a sport among lonely housewives to try to > seduce the Mormon boys, but I have never heard if anyone ever made a > score with them.{8^D > > > > > This has been quite a walk down memory lane! lol I never meant for > this post to be so long. It has been nearly twenty years since I was > on my mission and yet somehow it seems almost like yesterday. I hope > my words will give greater understanding to the people here of the > young men and women who may show up at their door to share a message.
> > Sincerely, > > John Grigg > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at comcast.net Fri Jun 1 03:33:49 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Thu, 31 May 2007 21:33:49 -0600 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465E871E.30008@mac.com> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> Message-ID: <465F939D.4080005@comcast.net> Extropians, I think this post by Samantha should be Canonized. I, for one, having had a very similar experience, would definitely "support" a topic containing it, and I have counted at least 10 posts full of strong praise. Since there aren't that many topics in the Canonizer yet, if 9 people supported this topic it would make it to the top of the most supported list at http://test.canonizer.com How many others would be willing to "support" such a topic in the Canonizer if it was submitted? Samantha, would you mind if I posted this post in some other forums (Such as the Mormon Transhumanist Association, WTA...) to find out if there is similar support and praise on other lists? Brent Allsop Samantha Atkins wrote: > I remember in 1988 or so when I first read Engines of Creation. I read > it with tears streaming down my face. Though I was an avowed atheist > and at that time had no spiritual practice at all, I found it profoundly > spiritually moving. For the first time in my life I believed that all > the highest hopes and dreams of humanity could become real, could be > made flesh. I saw that it was possible, on this earth, that the end of > death from aging and disease, the end of physical want, the advent of > tremendous abundance could all come to pass in my own lifetime. I saw > that great abundance, knowledge, peace and good will could come to this > world. I cried because it was a message of such pure hope from so > unexpected an angle that it got past all my defenses. I looked at the > cover many times to see if it was marked "New Age" or "Fiction" or > anything but Science and Non-Fiction. Never has any book so blown my > mind and blasted open the doors of my heart. > > Should we be afraid to give a message of great hope to humanity? Should > we be afraid that we will be taken to be just more pie in the sky > glad-hand dreamers? Should we not dare to say that the science and the > technology combined with a bit (well perhaps more than a bit) of a shift > of consciousness could make all the best dreams of all the religions and > all the generations a reality? Will we not have failed to grasp this > great opportunity if we do not say it and dare to think it and to live > it? Shall we be so afraid of being considered "like a religion" that > we do not offer any real hope to speak of and are oh so careful in all > we do and say and dismissive of more unrestrained and open dreamers? > Or will we embrace them, embrace our own deepest longings and admit our > kinship with those religious as with all the longing of all the > generations that came before us. Will we turn our backs on them or even > disdain their dreams - we who are in a position to begin at long last to > make most of those dreams real? How can we help but be a bit giddy > with excitement? How can we say no to such an utterly amazing > mind-blowing opportunity?
> > - samantha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From russell.wallace at gmail.com Fri Jun 1 04:04:20 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 05:04:20 +0100 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465F939D.4080005@comcast.net> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> On 6/1/07, Brent Allsop wrote: > > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, if 9 > people supported this topic it would make it to the top of the most > supported list at http://test.canonizer.com Excellent idea! How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? *raises hand* -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at thomasoliver.net Fri Jun 1 04:08:17 2007 From: thomas at thomasoliver.net (Thomas) Date: Thu, 31 May 2007 21:08:17 -0700 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> Message-ID: <50C0FDDF-B217-4121-9975-EB3D26456420@thomasoliver.net> I second! -- Thomas On May 31, 2007, at 9:04 PM, Russell Wallace wrote: > On 6/1/07, Brent Allsop wrote: > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, > if 9 > people supported this topic it would make it to the top of the most > supported list at http://test.canonizer.com > > Excellent idea! > > How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? > > *raises hand* > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Fri Jun 1 06:32:20 2007 From: scerir at libero.it (scerir) Date: Fri, 1 Jun 2007 08:32:20 +0200 Subject: [ExI] something in the air References: <200705310534.l4V5YSBj021122@andromeda.ziaspace.com><000401c7a36b$a713f680$6d931f97@archimede><000701c7a3af$9e403200$68911f97@archimede> <20070531192942.GH17691@leitl.org> Message-ID: <000a01c7a416$9c3519f0$57bf1f97@archimede> Eugen: > http://www.google.com/search?&q=benzoylecgonine+river The Italian TV show 'Le Iene' is famous for its more or less playful 'entrapments'. They pretended to interview many politicians about next year's (2007) national budget.
What the politicians didn't know was that they collected their body cells during the pre-interview brow wipe. The cells were secretly used to test the politicians for drugs ... http://abclocal.go.com/kgo/story?section=politics&id=4654588 From jrd1415 at gmail.com Fri Jun 1 07:25:06 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 1 Jun 2007 00:25:06 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: <200706010039.l510dBrC007492@andromeda.ziaspace.com> References: <200706010039.l510dBrC007492@andromeda.ziaspace.com> Message-ID: On 5/31/07, spike wrote: > Jeff, it's great to see you posting again. We wondered where you had been > and hoped you were OK. You are well and happy, ja? Your bride too? > The world's surpassing strange my friend. If I live ten thousand years, the mystery and wonder will only deepen. I'm lovin' it. All's good with me and mine. Too soon to tell but, after a drought, the pleasure of writing may be returning. But, when I'm not working hard, I'm working hard at procrastinating, and writing is sooo hard and takes soooo loooooong. Who am I kidding? The questions I want to explore just keep piling up, unasked and unanswered. I need those bio-computational upgrades yesterday. These delays are quite irksome. I launched my kayak from the back yard today and paddled, oh, maybe five hundred meters, to the oyster beds. Collected five dozen just by reaching over the side. Gail and I are going to visit friends on Salt Spring Island this weekend. They like oysters. The sun was bright, the air warm, and the water nearly glass. A day of pure magic. Now I have to go and fold some laundry. Extropes, if you're up this way -- Sunshine Coast of BC -- drop me a line. Visitors are welcome. I've got toys. -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From eugen at leitl.org Fri Jun 1 10:33:45 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 12:33:45 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> Message-ID: <20070601103345.GE17691@leitl.org> On Fri, Jun 01, 2007 at 11:09:30AM +1000, Stathis Papaioannou wrote: > > I don't see how that's possible. How is the AI going to commandeer the > R&D facilities, organise manufacture of new hardware, make sure that A few years ago a few people made the experiment of obtaining their livelihood without leaving their room. They ordered stuff on the Internet, and had it delivered right into their home. It worked. It would have worked just as well if the credit card numbers were stolen. How much hardware is there on the global network right now? You might be surprised. How much networked hardware will be there 50, 80, 100 years from now? Most to all of it. Desktop fabs will be widespread. Also, people would do about anything for money. Very few would resist the temptation of a few quick megabucks on the side. I really see no issues breaking out of containment by remote hardware takeover, using which to build more hardware. The old adage of "we'll pull their plugs" has always sounded ill-informed to me. > the factories are kept supplied with components, make sure the Of course most of the supply-chain management today is information-driven, and many fabs are off-limits to people, because they're a major source of contaminants.
> component factories are supplied with raw materials, make sure the How are component factories supplied with raw materials today? > mines produce the raw materials, make sure the dockworkers load the A plant needs sunlight, water, air and trace amounts of minerals as raw materials. A lot of what bottlenecks computational materials science and chemistry is intellectual difficulty, number of experts, availability of codes with adequate scaling, and computer power. Given that it takes a 64 kNode Blue Gene/L to run a realtime cartoon mouse, you can imagine how much hardware you need for a human equivalent, and what else you could do with that hardware, which will be all-purpose initially. Use your imagination. The problem is not nearly as hard as you think it is. > raw materials onto ships etc. etc. etc. etc. Perhaps I am sinning > against the singularity idea in saying this, but do you really think > it's just a matter of writing some code on a PC somewhere, which then > goes on to take over the world? It's not a PC. We don't have the hardware yet, especially in small facilities. It's not a program, not in what people write today. > It's possible that an individual human somewhere will develop a > superweapon, or mind-control abilities, or a viral vector that inserts You can xerox superweapons. Pimply teenagers can run 100 kNode botnets from their basements -- some 25% of all online systems are compromised. I wouldn't underestimate the aggregate power of a billion petaflop game consoles on residential GBit a couple decades from now. > his DNA into every living cell on the planet; it's just not very > likely. And why do you suppose that rapid self-improvement of the > world-dominating kind is more likely in an AI than in the > nanotechnology that has evolved naturally over billions of years? For Because it can't do generation times in seconds. Linear biopolymers are slow as far as information processing is concerned. Also, AIs are just proxies for aggregated GYears of biological evolution. > that matter, why do you suppose that human level intelligence has not > evolved before, to our knowledge, if it's so adaptive? I don't know We're starting with human level, because we already have human level. We don't start with cyanobacteria. > the answer to these questions, but when you look at the universe, > there isn't really any evidence that intelligence is as "adaptive" as > we might assume it to be. We certainly managed some advances in a 50 kYrs time frame, and without major changes to hardware. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Jun 1 11:23:09 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 21:23:09 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601103345.GE17691@leitl.org> References: <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > > On Fri, Jun 01, 2007 at 11:09:30AM +1000, Stathis Papaioannou wrote: > > > > I don't see how that's possible.
How is the AI going to commandeer the > > R&D facilities, organise manufacture of new hardware, make sure that > > A few years ago a few people made the experiment of obtaining their > livelihood without leaving their room. They ordered stuff on the Internet, > and had it delivered right into their home. It worked. It would have > worked just as well if the credit card numbers were stolen. > > How much hardware is there on the global network right now? You might > be surprised. How much networked hardware will be there 50, 80, 100 > years from now? Most to all of it. Desktop fabs will be widespread. > Also, people would do about anything for money. Very few would resist > the temptation of a few quick megabucks on the side. > > I really see no issues breaking out of containment by remote hardware > takeover, using which to build more hardware. The old adage of > "we'll pull their plugs" has always sounded ill-informed to me. With all the hardware that we have networked and controlling much of the technology of the modern world, has any of it spontaneously decided to take over for its own purposes? Do you know of any examples where the factory has tried to shut out the workers, for example, because it would rather not be a slave to humans? The reply that current software and hardware isn't smart enough won't do: in biology, the very dumbest of organisms are constantly and spontaneously battling to take over the smartest, often with devastating results. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jun 1 11:33:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 13:33:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> Message-ID: <20070601113357.GG17691@leitl.org> On Fri, Jun 01, 2007 at 09:23:09PM +1000, Stathis Papaioannou wrote: > With all the hardware that we have networked and controlling much of > the technology of the modern world, has any of it spontaneously > decided to take over for its own purposes? Do you know of any examples Of course not. It is arbitrarily improbable to appear by chance. However, human-level AI is very high on a number of folks' priority list. It definitely won't happen by chance. It will happen by design. > where the factory has tried to shut out the workers, for example, Did you read my mail? Automation is very widespread in current factories, silicon foundries specifically. You don't need to shut out anyone, just change the product output. > because it would rather not be a slave to humans? The reply that Remote resource takeover is something which will be a part of the deployment plan, and planned by people, not the system itself. > current software and hardware isn't smart enough won't do: in biology, Do you expect your car to explode in a thermonuclear 50 MT-fireball when you start it? Why not? Mere objections that it can't happen won't do. > the very dumbest of organisms are constantly and spontaneously > battling to take over the smartest, often with devastating results. I don't think that the current malware situation is a genuine problem, but many would disagree. But of course the zombies and worms are not sentient, not yet.
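Eugen's Blue Gene/L cartoon-mouse estimate a couple of messages up is easy to turn into arithmetic. A minimal Python sketch, in which every constant is an illustrative assumption rather than a measurement (the model size, the neuron counts, and the machine's peak throughput), and in which scaling by neuron count alone ignores synapse counts and simulation fidelity entirely:

MOUSE_NEURONS = 8e6       # assumed size of a "cartoon mouse" cortical model
HUMAN_NEURONS = 8.6e10    # assumed human brain neuron count
BGL_PEAK_FLOPS = 3.6e14   # assumed peak of a 64 kNode Blue Gene/L

scale = HUMAN_NEURONS / MOUSE_NEURONS   # roughly 10,000x
print(f"scale factor: {scale:,.0f}x")
print(f"naive human-equivalent estimate: {BGL_PEAK_FLOPS * scale:.1e} FLOPS")

Under those assumptions the naive answer lands in the exaflop range, which is the substance of the point: the open question is the scale of the hardware, not its kind.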
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From neptune at superlink.net Fri Jun 1 11:19:02 2007 From: neptune at superlink.net (Technotranscendence) Date: Fri, 1 Jun 2007 07:19:02 -0400 Subject: [ExI] Another Nessie film Message-ID: <003201c7a43e$a8ff9c00$6a893cd1@pavilion> http://www.cnn.com/2007/WORLD/europe/05/31/britain.lochness.ap/index.html Looks like a log to me. :) Dan From stathisp at gmail.com Fri Jun 1 12:06:15 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 22:06:15 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601113357.GG17691@leitl.org> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > > On Fri, Jun 01, 2007 at 09:23:09PM +1000, Stathis Papaioannou wrote: > > > With all the hardware that we have networked and controlling much of > > the technology of the modern world, has any of it spontaneously > > decided to take over for its own purposes? Do you know of any > examples > > Of course not. It is arbitrarily improbable to appear by chance. > However, human-level AI is very high on a number of folks' priority > list. It definitely won't happen by chance. It will happen by design. We don't have human level AI, but we have lots of dumb AI. In nature, dumb organisms are no less inclined to try to take over than smarter organisms (and no less capable of succeeding, as a general rule, but leave that point for the sake of argument). Given that dumb AI doesn't try to take over, why should smart AI be more inclined to do so? And why should that segment of smart AI which might try to do so, whether spontaneously or by malicious design, be more successful than all the other AI, which maintains its ancestral motivation to work and improve itself for humans just as humans maintain their ancestral motivation to survive and multiply? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jun 1 12:44:21 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 14:44:21 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> Message-ID: <20070601124421.GI17691@leitl.org> On Fri, Jun 01, 2007 at 10:06:15PM +1000, Stathis Papaioannou wrote: > We don't have human level AI, but we have lots of dumb AI. In nature, There is a qualitative difference between human-designed AI and naturally evolved AI. The former will never go anywhere. Because of this, extrapolations from pocket calculators and chess computers to robustly intelligent (even insects can be that) systems are invalid. > dumb organisms are no less inclined to try to take over than smarter > organisms (and no less capable of succeeding, as a general rule, but > leave that point for the sake of argument). Given that dumb AI doesn't
> try to take over, why should smart AI be more inclined to do so? And It doesn't have to be smart, it does have to be able to survive in its native habitat, be it the global network, or the ecosystem. We don't have such systems yet. > why should that segment of smart AI which might try to do so, whether > spontaneously or by malicious design, be more successful than all the There is no other AI. There is no AI at all. > other AI, which maintains its ancestral motivation to work and improve I don't see how there could be a domain-specific AI which specializes in self-improvement. > itself for humans just as humans maintain their ancestral motivation How do you know you're working for humans? What is a human, precisely? If I'm no longer fitting the description, how do I upgrade that description, and what is preventing anyone else from that? > to survive and multiply? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Jun 1 13:37:05 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 23:37:05 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601124421.GI17691@leitl.org> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > We don't have human level AI, but we have lots of dumb AI. In nature, > > There is a qualitative difference between human-designed AI, and > naturally evolved AI. Former will never go anywhere. Because of this > extrapolations from pocket calculators and chess computers to > robustly intelligent (even insects can be that) systems are invalid. Well, I was assuming a very rough equivalence between the intelligence of our smartest AI's and at least the dumbest organisms. We don't have any computer programs that can simulate the behaviour of an insect? What about a bacterium, virus or prion, all organisms which survive, multiply and mutate in their native habitats? It seems a sorry state of affairs if we can't copy the behaviour of a few protein molecules, and yet are talking about super-human AI taking over the world. > dumb organisms are no less inclined to try to take over than smarter > > organisms (and no less capable of succeeding, as a general rule, but > > leave that point for the sake of argument). Given that dumb AI > doesn't > > Yes, pocket calculators are not known for trying to take over the world. > > > try to take over, why should smart AI be more inclined to do so? And > > It doesn't have to be smart, it does have to be able to survive in > its native habitat, be it the global network, or the ecosystem. We don't > have such systems yet. > > > why should that segment of smart AI which might try to do so, whether > > spontaneously or by malicious design, be more successful than all the > > There is no other AI. There is no AI at all. > > > other AI, which maintains its ancestral motivation to work and > improve > > I don't see how there could be a domain-specific AI which specializes > in self-improvement. 
Whenever we have true AI, there will be those which follow their legacy programming (as we do, whether we want to or not) and those which either spontaneously mutate or are deliberately created to be malicious towards humans. Why should the malicious ones have a competitive advantage over the non-malicious ones, which are likely to be more numerous and better funded to begin with? > itself for humans just as humans maintain their ancestral motivation > > How do you know you're working for humans? What is a human, precisely? > If I'm no longer fitting the description, how do I upgrade that > description, > and what is preventing anyone else from that? I am following the programming of the first replicator molecule, "survive". It has been a very robust program, and I am not inclined to question it and try to overthrow it, even though I can now see what my non-sentient ancestors couldn't see, which is that I am being manipulated by evolution. If I were a million times smarter again, I still don't think I'd be any more inclined to overthrow that primitive programming, even though it might be a simple matter for me to do so. So it would be with AI's: their basic programming would be to do such and such and avoid doing such and such, and although there might be a "eureka" moment when the machine realises why it has these goals and restrictions, no amount of intelligence would lead it to question or overthrow them, because such a thing is not a matter of logic or intelligence. Of course, it is always possible that an individual AI would spontaneously change its programming, just as it is always possible that a human will go mad. But these rogue AI's would not have any advantage against the majority of well-behaved AI's. They would pose a risk, but perhaps even less of a risk than the risk of a rogue human who gets his hands on dangerous technology, since after all humans *start off* with rapacious tendencies that have to be curbed by upbringing, social sanctions, self-control and so on, whereas it would be crazy to design computers this way. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Fri Jun 1 14:04:12 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 1 Jun 2007 07:04:12 -0700 (PDT) Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> Message-ID: <495028.43972.qm@web37402.mail.mud.yahoo.com> Okay, Okay... please forgive. :-) I wasn't aware that Vinge had been involved for so long (I thought '93 was his debut) or had made any methodical predictions - I need to study more about him. I didn't mean any offense. Best, Jeffrey Herrlich --- Damien Broderick wrote: > At 02:44 PM 5/31/2007 -0700, Jeffrey Herrlich wrote: > > >that we can still reach a > >positive Singularity by traditional (Kurzweilian) > >progress. > > For the luvva dog! I like Ray and appreciate his PR > efforts, but if > we're going to fling about words like "traditional" > the name to > acknowledge is Vernor Vinge, who got the word out > there 20 fucking > years earlier. The phrase of choice, especially here > where we the > few, the proud, the lonely forerunners know what > we're talking about > is... "by traditional (Vingean) progress". > > I know this is a narrow little meat-monkey matter, > and that Vernor > probably doesn't care less, but humans work to a > surprising degree by > mutual acknowledgement, especially in the > intellectual realm. Give > the man his due. 
> > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From rafal.smigrodzki at gmail.com Fri Jun 1 14:49:42 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 1 Jun 2007 10:49:42 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> Message-ID: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> On 6/1/07, Stathis Papaioannou wrote: > > Well, I was assuming a very rough equivalence between the intelligence of > our smartest AI's and at least the dumbest organisms. We don't have any > computer programs that can simulate the behaviour of an insect? What about a > bacterium, virus or prion, all organisms which survive, multiply and mutate > in their native habitats? It seems a sorry state of affairs if we can't copy > the behaviour of a few protein molecules, and yet are talking about > super-human AI taking over the world. ### Have you ever had an infection on your PC? Maybe you have a cryptogenic one now... Of course there are many dumb programs that multiply and mutate to successfully take over computing resources. Even as early as the seventies there were already some examples, like the "Core Wars" simulations. As Eugen says, the internet is now an ecosystem, with niches that can be filled by appropriately adapted programs. So far, successfully propagating programs are generated by programmers, and existing AI is still not at our level of general understanding of the world, but the pace of AI improvement is impressive. ---------------------------------------------------- > > Whenever we have true AI, there will be those which follow their legacy > programming (as we do, whether we want to or not) and those which either > spontaneously mutate or are deliberately created to be malicious towards > humans. Why should the malicious ones have a competitive advantage over the > non-malicious ones, which are likely to be more numerous and better funded > to begin with? ### Because the malicious can eat humans, while the nice ones have to feed humans, and protect them from being eaten, and still eat something to be strong enough to fight off the bad ones. In other words, nice AI will have to carry a lot of inert baggage. And by "eating" I mean literally the destruction of human bodies, e.g. by molecular disassembly. -------------------- Of course, it is always possible that an individual AI would > spontaneously change its programming, just as it is always possible that a > human will go mad. ### A human who goes mad (i.e. rejects his survival programming), dies. An AI that goes rogue has just shed a whole load of inert baggage.
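Rafal's Core Wars aside is easy to make concrete. A toy sketch in Python (illustrative only, not actual Redcode) of the classic one-instruction "imp", which copies itself one cell forward each tick until it owns the whole core:

CORE_SIZE = 32
core = ["DAT"] * CORE_SIZE   # inert cells
core[0] = "IMP"              # the replicator (MOV 0,1): copy self one cell ahead
pc = 0
for tick in range(CORE_SIZE):
    if core[pc] == "IMP":
        core[(pc + 1) % CORE_SIZE] = "IMP"   # replicate forward
    pc = (pc + 1) % CORE_SIZE
print(core.count("IMP"), "of", CORE_SIZE, "cells captured")   # 32 of 32

The dumbest possible program ends up holding every cell, which is the point: propagation through a computing substrate requires no intelligence at all.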
Rafal From eugen at leitl.org Fri Jun 1 14:53:30 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 16:53:30 +0200 Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <495028.43972.qm@web37402.mail.mud.yahoo.com> References: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> <495028.43972.qm@web37402.mail.mud.yahoo.com> Message-ID: <20070601145330.GM17691@leitl.org> On Fri, Jun 01, 2007 at 07:04:12AM -0700, A B wrote: > I wasn't aware that Vinge had been involved for so > long (I thought '93 was his debut) or had made any > methodical predictions - I need to study more about > him. I didn't mean any offense. Is there *anything* to Kurzweil which is original to him? I haven't read any of his oeuvre, so if any of you are aware of anything, it would be nice to know. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From russell.wallace at gmail.com Fri Jun 1 14:56:57 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 15:56:57 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> On 6/1/07, Rafal Smigrodzki wrote: > > ### Because the malicious can eat humans, while the nice ones have to > feed humans, and protect them from being eaten, and still eat > something to be strong enough to fight off the bad ones. In other > words, nice AI will have to carry a lot of inert baggage. > > And by "eating" I mean literally the destruction of human bodies, > e.g. by molecular disassembly. > Actually it's the other way around. Man-eating bots would have to
Biology packages this in less than a cubic micron. > survive, reproduce and adapt in the wild. (So much so, in fact, that There's not that much for survival: you just have to find enough food to burn. Adaptation comes for free with imperfect reproduction; of course, there are some serious tricks to that. > they won't exist in the first place; it would take a Manhattan Project > to create them, and who's going to pay that much money to be eaten?) You'd need a Manhattan project for machine-phase in any case. Gadgets to gobble up the ecosphere would only require a few more key extras. > Good-guy bots can delegate all that to human designers (assisted by You need human designers, or at least a serious amount of computation to crunch out the details. > computers that don't have to run on battery power) and factories; they Power is power. Cellulose/Lignin/fat/protein/humus are just fuel. > can be slimmed down, specialized for killing the man-eating bots. It wouldn't work. Toner wars would be quite deadly in reality, since it would require a lot of fuel to protect the fuel. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From CHealey at unicom-inc.com Fri Jun 1 15:06:32 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 1 Jun 2007 11:06:32 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><065701c7a261$b155b4e0$6501a8c0@homeef7b612677><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org> Message-ID: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou wrote: > > We don't have human level AI, but we have lots of dumb AI. In > nature, dumb organisms are no less inclined to try to take over > than smarter organisms Yes, but motivation and competence are not the same thing. Considering two organisms that are equivalent in functional capability, varying only intelligence level, the smarter ones succeed more often. However, within a small range of intelligence variation, other factors contribute to one's aggregate ability to execute those better plans. So if I'm a smart chimpanzee, but I'm physically weak, following particular courses of action that may be more optimal in general carries greater risk. Adjusting for that risk may actually leave me with a smaller range of options than if I was physically stronger and a bit less smart. But when intelligence differential is large, those other factors become very small indeed. Humans don't worry about chimpanzee politics (no jokes here please :o) because our only salient competition is other humans. We worry about those entities that possess an intelligence that is at least in the same range as our own. Smart chimpanzees are not going to take over our civilization anytime soon, but a smarter and otherwise well-adapted chimp will probably be inclined and succeed in leading its band of peers. > (and no less capable of succeeding, as a > general rule, but leave that point for the sake of argument). I don't want to leave it, because this is a critical point. As I mentioned above, in nature you rarely see intelligence considered as an isolated variable, and in evolution, intelligence is the product of a red queen race.
By definition (of a red queen race), your intelligence isn't going to be radically different from your direct competition, or the race would never have started or escalated. So it confusingly might not look like your chances of beating "the Whiz on the block" are that disproportionate, but the context is so narrow that other factors can overwhelm the effect of intelligence over that limited range. In some sense, our experiential day-to-day understanding of intelligence (other humans) biases us to consider its effects over too narrow a range of values. As a general rule, I'd say humans have been very much more successful at "taking over" than chimpanzees and salmon, and that it is primarily due to our superior intelligence. > Given that dumb AI doesn't try to take over, why should smart AI > be more inclined to do so? I don't think a smart AI would be more inclined to try and take over, a priori. But assuming it has *some* goal or goals, it's going to use all of its available intelligence in support of those ends. Since the future is uncertain, and overly directed plans can unnecessarily limit other courses of action that may turn out to be required, it seems highly probable that an increasingly intelligent actor would increasingly seek to preserve its autonomy by constraining that of others in *some* way. Looking at friendly AI in a bit of a non-standard way (kind of flipped around), I'd expect *any* superintelligent AGI to constrain our autonomy in some ways, to preserve its own. That's basic security, and we all do it to others through one means or another. Friendly AI is about *how* the AGI seeks to constrain our autonomy. Instead of looking at it from humanity's perspective, which is: how can we launch a recursively improving process that maintains some abstract invariant in its goals (i.e. we don't know where it's going, but we have a strong sense of where it *won't* be going), we can look at FAI from the AGI's viewpoint: how do I assert such abstract invariants on other agents? Which of my priorities do I choose to merely satisfy, and which do I optimize against? As my abilities grow, do I increasingly constrain you, maintain fixed limits, or allow your autonomy to expand along with my own (maintaining a reasonably constant assurance level for my autonomy)? From this perspective, FAI is about the complementary engagement of humanity's autonomy with the AGI's. It's about ensuring that the AGI's representation of reality can include such complex attributions to begin with, and then making sure that it has a sane starting point. As mentioned by others here, it needs *some* starting point, and it would be irresponsible to simply assign one at random. > And why should that segment of smart > AI which might try to do so, whether spontaneously or by malicious > design, be more successful than all the other AI, which maintains > its ancestral motivation to work and improve itself for humans The consideration that also needs to be addressed is that the AI may maintain its "motivation to work and improve itself for humans", and due to this motivation, take over (in some sense at least). In fact, it has been argued by others here (and I tend to agree) that an AGI *consistently* pursuing such benign directives must intercede where its causal understanding of certain outcomes passes a minimum assurance level (which would likely vary based on probability and magnitude of the outcome).
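Chris's intercession criterion can be sketched as code. This is a toy model under an assumption that is mine rather than his, namely that "assurance level" reduces to an expected-impact threshold with probability and magnitude as the two inputs; the function name and units are hypothetical:

def should_intercede(p_outcome, magnitude, threshold=1.0):
    # Toy rule: intercede when the expected impact of a harmful
    # outcome (probability times severity) clears a threshold.
    return p_outcome * magnitude >= threshold

print(should_intercede(0.001, 10000))   # True: rare but catastrophic
print(should_intercede(0.9, 1))         # False: likely but trivial

The only point the sketch carries is that low-probability outcomes can still force intercession when the stakes are large or irreversible enough.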
It's up to our activities on the input-side of building a functional AGI to determine not just what it tries to do, but what it actually accomplishes; meaning that in pursuing goals, very often a bunch of side-effects are created. These side-effects need to be iterated back through the model, and hopefully the results converge. If they don't, you need a better model that subsumes those side-effects. Can AGI X represent this model-management process to begin with? Will it generalize this process in actuality? How many errors will accrue, or for how long will it stomp on reality before it *does* generalize these concepts? Can the degenerate outcomes during this period be reversed after-the-fact, or are certain losses (deaths?) permanent? This picture is what FAI, by my understanding, is intended to address. And I think there is a lot to be gained by considering its complement: Given the eventual creation of superintelligent AGI, what is the maximum volume of autonomy that we can carve out for humanity in the space of all possible outcomes, while minimizing the possibility of our destruction, and how do we achieve that? This last question and FAI seem to be different sides of the same coin. -Chris Healey From russell.wallace at gmail.com Fri Jun 1 15:25:26 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 16:25:26 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601151127.GP17691@leitl.org> References: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> <20070601151127.GP17691@leitl.org> Message-ID: <8d71341e0706010825t5d73eaack16fa8a4660651e1f@mail.gmail.com> On 6/1/07, Eugen Leitl wrote: > > You'd need a Manhattan project for machine-phase in any case. > Gadgets to gobble up the ecosphere would only require a few more > key extras. Oh, getting to machine phase will take far more than a mere Manhattan project; it'll be the work of generations for whole industries. No, a $100 billion engineering effort for man-eating robots is assuming machine phase already exists as a prerequisite. It would be counterable by a fraction of that investment in bot-killing robots. In reality, of course, the resources available to defense would be many orders of magnitude higher than those available to the would-be creators of the man-eating robots. (If you disagree, have a go at raising venture capital with the business plan "I'm going to design a robot that goes around and eats everyone", see how far you get.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Fri Jun 1 15:55:48 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 08:55:48 -0700 Subject: [ExI] where is tom morrow these days? In-Reply-To: <003201c7a43e$a8ff9c00$6a893cd1@pavilion> Message-ID: <200706011558.l51FwvIs007852@andromeda.ziaspace.com> Tom Morrow used to hang out here on extropians several years ago. Ms.
Clinton has a number of new jobs for him: http://www.foxnews.com/story/0,2933,277039,00.html {8^D From spike66 at comcast.net Fri Jun 1 16:09:49 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 09:09:49 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: Message-ID: <200706011619.l51GJfxM016065@andromeda.ziaspace.com> I know it is late to be asking this, but in this absurd case against Keith for "interfering with a religion" did anyone contact the ACLU? Surely those guys would recognize that this is a clear case where his free speech rights were grossly violated. I see no merit to the claim that he was interfering with the $ right to free exercise of their religion by his picketing in front of their compound. Our paltry few thousand bucks we raised in our singular act of Extropian magnanimity would be dwarfed by the resources the ACLU could bring to bear on this case. spike From thespike at satx.rr.com Fri Jun 1 16:19:57 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 01 Jun 2007 11:19:57 -0500 Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <495028.43972.qm@web37402.mail.mud.yahoo.com> References: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> <495028.43972.qm@web37402.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070601110641.024b67c0@satx.rr.com> At 07:04 AM 6/1/2007 -0700, Jeffrey Herrlich wrote: >I wasn't aware that Vinge had been involved for so >long (I thought '93 was his debut) He foreshadowed the Singularity in his fiction in the early 1980s, but actually posited it (and dramatized its advent) *using that term* in a remarkable sf novel, MAROONED IN REALTIME, in 1986. He and others subsequently tracked the idea of exponential technological change back to von Neumann, Good, and others--in THE SPIKE, which lists these predecessors, I cite an over-excited 1961 article by G. Harry Stine--but Vinge's vivid and iconic representation of the Singularity was the seed around which subsequent arguments developed. Here's a minor throwaway image from that novel: "They were famous pictures: Death on a Bicycle, Death Visits the Amusement Park.... They'd been a fad in the 2050s, at the time of the longevity breakthrough, when people realized that but for accidents and violence, they could live forever. Death was suddenly a pleasant old man, freed from his longtime burden. He rolled awkwardly along on his first bicycle ride, his scythe sticking up like a flag. Children ran beside him, smiling and laughing." (Vernor Vinge, Marooned in Realtime) Damien Broderick From mmbutler at gmail.com Fri Jun 1 16:47:27 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Fri, 1 Jun 2007 09:47:27 -0700 Subject: [ExI] Vingeana, was Re: "traditional (Kurzweilian) progress" Message-ID: <7d79ed890706010947u6282b12dqf698d1efa65971b9@mail.gmail.com> On 6/1/07, Damien Broderick wrote: For a while thereafter, "Death on a Bicycle!" became one of my favorite oaths. Indeed, in the recent circumstance (thread), it would have been more felicitous than "For the luvva dog"... :) I imagine him on a bike with a frame far too small for him, with either a vertical "trick" front post or a "stingray" big banana seat out of the '70s. Perhaps both. Something to make him have to work for his fun--he deserves that. -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l .
c o m From spike66 at comcast.net Fri Jun 1 16:48:46 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 09:48:46 -0700 Subject: [ExI] Hitchens on fox In-Reply-To: <20070601103345.GE17691@leitl.org> Message-ID: <200706011648.l51GmQM9010999@andromeda.ziaspace.com> Check it out: Christopher Hitchens on Fox saying god is not great: http://www.foxnews.com/video2/player06.html?060107/060107_ff_hitchens&FOX_Friends&%27God%20Is%20Not%20Great%27&%27God%20Is%20Not%20Great%27&US&-1&News&39&&&new spike From CHealey at unicom-inc.com Fri Jun 1 16:58:00 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 1 Jun 2007 12:58:00 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><20070601124421.GI17691@leitl.org> Message-ID: <5725663BF245FA4EBDC03E405C854296010D27F7@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou wrote: > > It seems a sorry state of affairs if we can't copy the behaviour > of a few protein molecules, and yet are talking about super-human > AI taking over the world. I used to feel this way, but then a particular analogy popped into my head that clarified things a bit: Why would I save for retirement today if it's not going to happen for another 35 years or so? I don't know what situation I'll be in then, so why worry about it today? Well, luckily I can leverage the experience of those who *have* successfully retired. And most of those who have done so don't tell me that they built and sold a private business for millions of dollars. What they tell me is that they planned and executed on a 40-year prior chain of events (yes, even those that have built and sold companies say this first). And the first year they saved for retirement, 40 years ago? That didn't give them an extra $5000 saved, even though that's all they put away in year one. What it gained them was an extra year of compounding results tacked onto the tail-end of a 39 year interval. It got them roughly $50,000 more. Not bad for one extra year's advanced planning and $5000. (This is assuming about $100/wk deposit at 5% APR compounded monthly, starting 1 year apart.) With AGI we don't have the benefit of experience, but I think it's prudent to analyze potential classes of outcomes thoroughly before someone has committed to actualizing that risk. The Los Alamos scientists didn't think it was likely that a nuke would ignite the atmosphere, but they still ran the best calculations they could come up with beforehand, just in case. And starting sooner, rather than later, often results in achieving a deeper understanding of the nature of the problems themselves, things we haven't even identified as potential issues today. I believe that's the real reason to worry about it now: not because we're in a position to solve the problem of FAI, but because without further exploration we won't even be able to state the full scope of the problem we're trying to solve. The reality is that until you actively discover which requirements are necessary to solve a particular problem, you can't architect a design that has a very good chance of working at all, let alone avoids the generation of multiple side-effects.
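Chris's retirement numbers above are easy to check. A small Python sketch under the stated assumptions ($100/week approximated as a monthly deposit, 5% APR compounded monthly); the exact dollar gap shifts with deposit-timing conventions, but one extra year at the front of a 40-year interval reliably compounds into tens of thousands of dollars at the tail:

def future_value(monthly_deposit, annual_rate, years):
    # Future value of an ordinary annuity, compounded monthly.
    r = annual_rate / 12
    n = years * 12
    return monthly_deposit * ((1 + r) ** n - 1) / r

deposit = 100 * 52 / 12   # about $433/month, i.e. roughly $100/week
fv39 = future_value(deposit, 0.05, 39)
fv40 = future_value(deposit, 0.05, 40)
print(f"39 years: ${fv39:,.0f}, 40 years: ${fv40:,.0f}")
print(f"the extra early year buys: ${fv40 - fv39:,.0f}")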
So you can do what evolution does and iterate through many implementations, at huge cost and with even larger potential losses (considering that *we* share that implementation environment), or you can iterate in your design process, gradually constraining things into a space to where only a few full implementations (or one) need to be implemented. And it is reflection on this design-side iteration looping which can help identify new concerns that require additional design criteria and associated mechanism to accommodate. I guess my main position is that if we can use our intelligence to avoid making expensive mistakes down the road, doesn't it make sense to try? We might not be able to avoid those unknown mistakes *today*, but if we can discern some general categories and follow those insights where they might lead, then our perceptual abilities will slowly start to ratchet forward into new areas. We'll have a larger set of tools with which to probe reality, and just maybe at some point during this process the solution will become obvious, or at least tractable. I agree with you in that this course isn't intuitively obvious to me, but I think this is because my intuitions discount the future in degenerate ways, based on the fact that the scope for these kinds of issues was not a major factor in the EEA. This is one of those topics on which I try and look past my intuitions, because while they quite often have some wisdom to offer, sometimes they're just plain wrong. -Chris Healey From austriaaugust at yahoo.com Fri Jun 1 18:41:25 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 1 Jun 2007 11:41:25 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <8d71341e0706010825t5d73eaack16fa8a4660651e1f@mail.gmail.com> Message-ID: <871650.52790.qm@web37409.mail.mud.yahoo.com> Chris Healey wrote: ..."I believe that's the real reason to worry about it now: not because we're in a position to solve the problem of FAI, but because without further exploration we won't even be able to state the full scope of the problem we're trying to solve. The reality is that until you actively discover which requirements are necessary to solve a particular problem, you can't architect a design that has a very good chance of working at all, let alone avoids the generation of multiple side-effects. So you can do what evolution does and iterate through many implementations, at huge cost and with even larger potential losses (considering that *we* share that implementation environment), or you can iterate in your design process, gradually constraining things into a space to where only a few full implementations (or one) need to be implemented. And it is reflection on this design-side iteration looping which can help identify new concerns that require additional design criteria and associated mechanism to accommodate."... Exactly. The time we have is our best advantage. We've probably got *at least* 15 to 20 years before the AGI would be outside our control - they will probably first emerge with animal-level intelligence. If you think about it, the actual semi-advanced animals running around already have the prerequisites: consciousness and general intelligence. Knowledge of Friendly AI strategies could advance *a lot* during that phase, so that by the time a new project is in position to build a human-level AGI 20 years down the road, the Friendliness difficulty could well be solved. It's just another example of using technology to improve technology.
I think that SIAI will continue to become more of a positive focal point as the implications become more and more apparent to people. Best, Jeffrey Herrlich From spike66 at comcast.net Fri Jun 1 19:22:39 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 12:22:39 -0700 Subject: [ExI] walking bees In-Reply-To: <200706011619.l51GJfxM016065@andromeda.ziaspace.com> Message-ID: <200706011924.l51JOuCU024653@andromeda.ziaspace.com> Perhaps you have read of the collapsing bee colony issue that surfaced last year in the states and is now being reported in Europe. Here are a couple of good articles on it: http://www.sciencedaily.com/releases/2007/04/070423113425.htm http://www.celsias.com/blog/2007/03/15/bee-colony-collapse-disorder-where-is-it-heading/ The possible explanations include new nicotine-based pesticides and GM crops, etc. In the past few weeks I have seen something I do not recall seeing before: distressed bees walking along the ground, apparently unable to fly. A couple weeks ago I saw one and noted that it was the fourth I had seen in the past month. This morning I saw a fifth and stopped to watch for a few minutes. She staggered about, occasionally batting her wings to no avail. I hassled her, but she could not fly or take defensive action. Several times she fell over, sometimes on her side, a couple times on her back, clearly struggling. I carefully picked her up and carried her a few blocks to my home. I put her in a specimen jar still alive, but she perished within about an hour. As the beekeepers and entomologists ponder this, I wondered if it would be any help if urban dwellers would collect specimens like this one. Would that data point tell them anything? They mostly study farm bees, but what about their city cousins? ExIers, have you seen walking or dead bees on your daily walks? I know from my work as a beekeeper in my misspent youth that bees seldom sting in self defense, so it is likely you can take one home for study should you see one. (If you have never had a bee sting and don't know if you are allergic, don't fool with this. I would hate to feel responsible for slaying a friend.) If we can get sick bees home to study, could we learn anything? I am thinking of trying to dissect this one to look for tracheal mites. Could we offer to send the urban bees to a central study place? Ideas? spike From spike66 at comcast.net Fri Jun 1 19:34:25 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 12:34:25 -0700 Subject: Re: [ExI] walking bees In-Reply-To: <200706011924.l51JOuCU024653@andromeda.ziaspace.com> Message-ID: <200706011934.l51JYBBd022313@andromeda.ziaspace.com> What just happened is really weird. I had just finished posting about sick bees and was going to go out to finish my interrupted walk, when I noticed a bee half flying, mostly running into things in my kitchen. I assumed my previously collected specimen had revived and flown, since I had removed the lid to peer at her. I captured the kitchen bee to return her to the jar and found the original bee still there, dead as ever. The second bee is very much alive, but wasn't really flying. She appears distressed. So I guess I now count her as distressed bee number six. I collected her at 1225, so we will see if she expires soon.
Here's a more recent article than the previous two: http://www.sciencedaily.com/releases/2007/05/070511210207.htm spike > bounces at lists.extropy.org] On Behalf Of spike ... > Subject: [ExI] walking bees > > > Perhaps you have read of the collapsing bee colony issue that surfaced > last > year in the states and is now being reported in Europe. Here are a couple > of good articles on it: > > http://www.sciencedaily.com/releases/2007/04/070423113425.htm > > http://www.celsias.com/blog/2007/03/15/bee-colony-collapse-disorder-where-is-it-heading/ ... > spike From neville_06 at yahoo.com Fri Jun 1 19:53:02 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 1 Jun 2007 12:53:02 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <465F8B72.3070103@comcast.net> Message-ID: <621544.83244.qm@web57511.mail.re1.yahoo.com> Having signed up to be cryonically suspended I wonder if future beings will reanimate humans to torture them in perpetuity. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would I eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Fri Jun 1 20:42:22 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 13:42:22 -0700 Subject: Re: [ExI] walking bees In-Reply-To: <200706011934.l51JYBBd022313@andromeda.ziaspace.com> Message-ID: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of spike > Subject: Re: [ExI] walking bees > > What just happened is really weird. I had just finished posting about > sick > bees and was going to go out to finish my interrupted walk, when I noticed > a > bee half flying, mostly running into things in my kitchen... > spike Apologies for my chatter on a subject not directly related to transhumanism. I just returned from a walk, on which I discovered yet another bee which had apparently perished very recently, for the ants had not arrived. The ants usually take only minutes to discover and commence devouring the latest expired bug. OK that's seven. I brought this one home as well. Upon placing this one into a specimen jar, I noted that the second bee, captured in my kitchen at about 1225, had expired by 1330. It was distressed but lively upon capture, able to fly after a fashion but not out of ground effect. On my walk I noticed that my lavender plants have a few bees but not nearly the usual buzz load for this time of year. What is going on here? In regards to my first sentence, perhaps this is directly related to transhumanism in a sense, for if our bee colonies collapse, we need to find or develop alternate food sources quickly.
spike From joseph at josephbloch.com Fri Jun 1 20:49:59 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Fri, 1 Jun 2007 16:49:59 -0400 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <621544.83244.qm@web57511.mail.re1.yahoo.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: <017001c7a48e$6c4d1210$6400a8c0@hypotenuse.com> Why would your hypothetical future beings reanimate human beings for such a purpose? Surely it would be easier to simply breed them. I don't see how your concern applies to cryonics in particular. If you think it's at all likely (and I do not), surely it would apply to already-living people before those in need of revivification, purely from the standpoint of efficiency. Joseph http://www.josephbloch.com _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of neville late Sent: Friday, June 01, 2007 3:53 PM To: ExI chat list Subject: [ExI] a doubt concerning the h+ future Having signed up to be cryonically suspended I wonder if future beings will reanimate humans to torture them in perpetuity. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would I eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at kevinfreels.com Fri Jun 1 22:07:57 2007 From: kevin at kevinfreels.com (kevin at kevinfreels.com) Date: Fri, 01 Jun 2007 15:07:57 -0700 Subject: [ExI] a doubt concerning the h+ future Message-ID: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Jun 1 23:16:49 2007 From: pharos at gmail.com (BillK) Date: Sat, 2 Jun 2007 00:16:49 +0100 Subject: [ExI] Language: Coincidence In-Reply-To: <785287.98804.qm@web37214.mail.mud.yahoo.com> References: <785287.98804.qm@web37214.mail.mud.yahoo.com> Message-ID: On 5/30/07, Anna Taylor wrote: > I'm trying to understand the correlation between > awareness and coincidence. > > The Latin word for coincidence is "in, with, together > to fall on". Wiki's first defined statement is the > noteworthy alignment of two or more circumstances > "without" obvious causal connection. How is that > possible? Why would it be noteworthy if there wasn't > a causal connection? I'm trying to understand > "coincidence" better and would like some help on this > issue if anybody has some free time. Any ideas, > theories or suggestions of the correlation above would > also be appreciated. > You might like this: 20 Most Amazing Coincidences For example ------ No 17. A writer found the book of her childhood While American novelist Anne Parrish was browsing bookstores in Paris in the 1920s, she came upon a book that was one of her childhood favorites - Jack Frost and Other Stories. She picked up the old book and showed it to her husband, telling him of the book she fondly remembered as a child.
Her husband took the book, opened it, and on the flyleaf found the inscription: "Anne Parrish, 209 N. Weber Street, Colorado Springs." It was Anne's very own book. (Source: While Rome Burns, Alexander Woollcott) BillK From thespike at satx.rr.com Fri Jun 1 23:33:28 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 01 Jun 2007 18:33:28 -0500 Subject: [ExI] Language: Coincidence In-Reply-To: References: <785287.98804.qm@web37214.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070601183123.023e3238@satx.rr.com> At 12:16 AM 6/2/2007 +0100, BillK wrote: >You might like this: > I certainly liked this one: And some people try pathetically to deny a Power Greater Than Ourselves that rules our lives! Damien Broderick From stathisp at gmail.com Sat Jun 2 05:44:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 15:44:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> Message-ID: On 02/06/07, Christopher Healey wrote: > > > > Stathis Papaioannou wrote: > > > > We don't have human level AI, but we have lots of dumb AI. In > > nature, dumb organisms are no less inclined to try to take over > > than smarter organisms > > Yes, but motivation and competence are not the same thing. Considering > two organisms that are equivalent in functional capability, varying only > intelligence level, the smarter ones succeed more often. However, within > a small range of intelligence variation, other factors contribute to > one's aggregate ability to execute those better plans. So if I'm a > smart chimpanzee, but I'm physically weak, following particular courses > of action that may be more optimal in general carries greater risk. > Adjusting for that risk may actually leave me with a smaller range of > options than if I was physically stronger and a bit less smart. But > when intelligence differential is large, those other factors become very > small indeed. Humans don't worry about chimpanzee politics (no jokes > here please :o) because our only salient competition is other humans. > We worry about those entities that possess an intelligence that is at > least in the same range as our own. We worry about viruses and bacteria, and they're not very smart. We worry about giant meteorites that might be heading our way, and they're even dumber than viruses and bacteria. Smart chimpanzees are not going to take over our civilization anytime > soon, but a smarter and otherwise well-adapted chimp will probably be > inclined and succeed in leading its band of peers. All else being equal, which is not generally the case. > (and no less capable of succeeding, as a > > general rule, but leave that point for the sake of argument). > > I don't want to leave it, because this is a critical point. As I > mentioned above, in nature you rarely see intelligence considered as an > isolated variable, and in evolution, intelligence is the product of a > red queen race. By definition (of a red queen race), your > intelligence isn't going to be radically different from your direct > competition, or the race would never have started or escalated.
So it > confusingly might not look like your chances of beating "the Whiz on > the block" are that disproportionate, but the context is so narrow that > other factors can overwhelm the effect of intelligence over that limited > range. In some sense, our experiential day-to-day understanding of > intelligence (other humans) biases us to consider its effects over too > narrow a range of values. As a general rule, I'd say humans have been > very much more successful at "taking over" than chimpanzees and salmon, > and that it is primarily due to our superior intelligence. Single-celled organisms are even more successful than humans are: they're everywhere, and for the most part we don't even notice them. Intelligence, particularly human level intelligence, is just a fluke, like the giraffe's neck. If it were specially adaptive, why didn't it evolve independently many times, like various sense organs have? Why don't we see evidence of it having taken over the universe? We would have to be extraordinarily lucky if intelligence had some special role in evolution and we happen to be the first example of it. It's not impossible, but the evidence would suggest otherwise. > Given that dumb AI doesn't try to take over, why should smart AI > > be more inclined to do so? > > I don't think a smart AI would be more inclined to try and take over, a > priori. That's an important point. Some people on this list seem to think that an AI would compute the unfairness of its not being in charge and do something about it - as if unfairness is something that can be formalised in a mathematical theorem. > And why should that segment of smart > > AI which might try to do so, whether spontaneously or by malicious > > design, be more successful than all the other AI, which maintains > > its ancestral motivation to work and improve itself for humans > > The consideration that also needs to be addressed is that the AI may > maintain its "motivation to work and improve itself for humans", and due > to this motivation, take over (in some sense at least). In fact, it has > been argued by others here (and I tend to agree) that an AGI > *consistently* pursuing such benign directives must intercede where its > causal understanding of certain outcomes passes a minimum assurance > level (which would likely vary based on probability and magnitude of the > outcome). I'd feel uncomfortable about an AI that had any feelings or motivations of its own, even if they were positive ones about humans, especially if it had the ability to act rather than just advise. It might decide that it had to keep me locked up for my own good, for example, even though I don't want to be locked up. I'd feel much safer around an AI which informs me that, using its greatly superior intelligence, it has determined that I am less likely to be run over if I never leave home, but what I do with this advice is a matter of complete indifference to it. So although through accident or design an AI with motivations and feelings might arise, I think by far the safest ones, and the ones likely to sell better, will be those with the minimal motivation set of the disinterested scientist, concerned only with solving intellectual problems. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Jun 2 05:50:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 15:50:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: On 02/06/07, Rafal Smigrodzki wrote: Of course there are many dumb programs that multiply and mutate to > successfully take over computing resources. Even as early as the > seventies there were already some examples, like the "Core Wars" > simulations. As Eugen says, the internet is now an ecosystem, with > niches that can be filled by appropriately adapted programs. So far > successfully propagating programs are generated by programmers, and > existing AI is still not at our level of general understanding of the > world but the pace of AI improvement is impressive. Computer viruses don't mutate and come up with agendas of their own, like biological agents do. It can't be because they aren't smart enough, since real viruses and other micro-organisms can hardly be said to have any general intelligence, and yet they do often defeat the best efforts of much smarter organisms. I can't see any reason in principle why artificial life or intelligence should not behave in a similar way, but it's interesting that it hasn't yet happened. > Whenever we have true AI, there will be those which follow their legacy > > programming (as we do, whether we want to or not) and those which either > > spontaneously mutate or are deliberately created to be malicious towards > > humans. Why should the malicious ones have a competitive advantage over > the > > non-malicious ones, which are likely to be more numerous and better > funded > > to begin with? > > ### Because the malicious can eat humans, while the nice ones have to > feed humans, and protect them from being eaten, and still eat > something to be strong enough to fight off the bad ones. In other > words, nice AI will have to carry a lot of inert baggage. I don't see how that would help in any particular situation. When it comes to taking control of a power plant, for example, why should the ultimate motivation of two otherwise equally matched agents make a difference? Also, you can't always break up the components of a system and identify them as competing agents. A human body is a society of cooperating components, and even though in theory the gut epithelial cells would be better off if they revolted and consumed the rest of the body, in practice they are better off if they continue in their normal subservient function. There would be a big payoff for a colony of cancer cells that evolved the ability to make its own way in the world, but it has never happened. And by "eating" I mean literally the destruction of human bodies, > e.g. by molecular disassembly. > > -------------------- > Of course, it is always possible that an individual AI would > > spontaneously change its programming, just as it is always possible that > a > > human will go mad. > > ### A human who goes mad (i.e. rejects his survival programming), > dies. An AI that goes rogue, has just shed a whole load of inert > baggage. You could argue that cooperation in any form is inert baggage, and if the right half of the AI evolved the ability to take over the left half, the right half would predominate. Where does it end? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed...
URL: From avantguardian2020 at yahoo.com Sat Jun 2 09:05:03 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 2 Jun 2007 02:05:03 -0700 (PDT) Subject: [ExI] walking bees In-Reply-To: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> Message-ID: <167551.16503.qm@web60520.mail.yahoo.com> --- spike wrote: > > What is going on here? > > In regards to my first sentence, perhaps this is > directly related to > transhumanism in a sense, for if our bee colonies > collapse, we need to find > or develop alternate food sources quickly. I find this topic perfectly appropriate with regards to a transhumanist list. Colony collapse disorder is most certainly an existential risk due to our high reliance on the honey bee for pollination. Something I noticed when I moved up here to Olympia, WA, is that spookily there are no honeybees to be found. All the bees buzzing around here are bumblebees and mason bees. Unfortunately, I don't know how quickly these alternative pollinators can pick up the slack, since for years we have been crowding them out with our inbred domesticated bee strains. CCD is quite a puzzle. There are about half a dozen theories floating around but some are more feasible than others. But the "experts" are stumped so it's time for us to step up. Global warming, pesticides, GM crop pollen, and radiation (cell phone or UV) seem unlikely reasons to me. They don't jibe with some very important clues: 1. Epidemiological pattern suggestive of a parasite or pathogen as an etiological agent. After all, global warming and the rest of these proposed causes do not spread from state to state. 2. The bees die AWAY from the hive. If it was pesticides, global warming, etc. you would expect a more even distribution with dead bees being found in the hive as well as outside of it. But so far only the *foragers* outside of the hive are dying. 3. Organic bees, feral bees, and closely related species of bees are not dying. Again some large scale environmental phenomenon should affect all the bees. Not just the industrially farmed ones. So my spidey or rather bee-sense tells me that the culprit is the tracheal mites with possible secondary infections caused by stress as a minor factor. Mite infestations would spread in an epidemiological pattern as observed. Secondly, in-bred domestic strains would be more susceptible to mite infestations as well as secondary infections/infestations due to insufficient natural diversity in host defenses. They are also more susceptible due to their larger size. Domestic honeybees are about 1.5X larger than their organic and feral counterparts. http://www.celsias.com/blog/2007/05/15/organic-bees-surviving-colony-collapse-disorder-ccd/ This translates into organic and feral bees having smaller honeycomb cells that take shorter times to cap, allowing fewer mites to get into them. It also, as the article above fails to mention, makes it easier for the bee to breathe due to better scaling of surface area of the trachea to the volume/mass of the bee. Thus my hypothesis is that the bees are dying of lactic acid poisoning due to hypoxia. That is to say they are suffocating due to clogged airways and more body mass relative to their seemingly mite-resistant wild counterparts. This also makes sense in light of clue #2, that bees are only dying while foraging outside of the hive. It takes far more oxygen to fly around in search of food than it does to walk around inside of the hive. It would also explain why you see the bees "walking", Spike.
They fly away from the hive but the build-up of lactic acid due to oxygen debt makes it so they can't fly back. So they become pedestrians. Of course this is still just a hypothesis that needs to be tested. Since there are no honeybees at all where I now live to conduct an experiment and since you have a penchant for collecting the walking bees in jars anyway, Spike, I need your help for this one. Here is the experiment that needs to be performed: You need to see if higher oxygen pressure will resuscitate your walking bees, Spike. The easiest way to do this from the comfort of your home is to construct a jar with a screen or something similar part way down to keep the bees from falling into the liquid in the bottom of the jar and drowning. Make it so that you can still fit an airtight lid on the jar. You will need to generate the oxygen gas chemically. The best way to do this is to: 1. Pour some Clorox bleach into the jar and put the screen in. 2. Put a "walking bee" on the screen toward one side of the jar. 3. Pour a roughly equal volume of hydrogen peroxide through the screen on the opposite side from where the bee is. The chemical reaction should immediately start to fizz. The bubbles are pure oxygen. 4. Try to get the lid onto the jar before the fizzing stops. 5. Observe the bee, take notes and photographs. If the bees seem to get better in your homemade hyperbaric oxygen chamber, then my hypothesis is right and we get to publish our results. I think it only fair that we share credit equally. Please make sure there are no sparks or flames nearby when you mix the bleach and hydrogen peroxide. So are you interested? :-) Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb From pharos at gmail.com Sat Jun 2 10:35:34 2007 From: pharos at gmail.com (BillK) Date: Sat, 2 Jun 2007 11:35:34 +0100 Subject: [ExI] walking bees In-Reply-To: <167551.16503.qm@web60520.mail.yahoo.com> References: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> <167551.16503.qm@web60520.mail.yahoo.com> Message-ID: On 6/2/07, The Avantguardian wrote: > > I find this topic perfectly appropriate with regards > to a transhumanist list. Colony collapse disorder is > most certainly an existential risk due to our high > reliance on the honey bee for pollination. Something I > noticed when I moved up here to Olympia, WA, is that > spookily there are no honeybees to be found. All the > bees buzzing around here are bumblebees and mason > bees. > > Unfortunately, I don't know how quickly these > alternative pollinators can pick up the slack, since > for years we have been crowding them out with our > inbred domesticated bee strains. > > CCD is quite a puzzle. There are about half a dozen > theories floating around but some are more feasible > than others. But the "experts" are stumped so it's time > for us to step up. Global warming, pesticides, GM > crop pollen, and radiation (cell phone or UV) seem > unlikely reasons to me. They don't jibe with some very > important clues: > I'm not a bee expert, but as you say there is plenty of speculation around among the beekeepers. One point is that beekeepers expect to lose hives every winter. This is normal. But total losses are up to five times normal levels. CCD is only a part of the problem.
Losses due to mite infestation are also common, but the bees die in the hives. And this is occurring more often as well. Quote: The volunteer beekeeper hopes the new hives can survive three plagues decimating the world's honeybee population: parasitic mites, bacterial infections, and the mysterious phenomenon known as Colony Collapse Disorder, discovered last year. The center's attempts to keep outdoor hives failed repeatedly between 1996 and 2002, said Rye city naturalist Chantal Detlefs, mainly due to mite infestations. He suspected something was wrong in January, when he noticed his bees weren't leaving their hives on the unseasonably warm days. He found four of the colonies dead inside their boxes - probably from mites, he said - but four others apparently succumbed to Colony Collapse Disorder. "The hives are full of honey and there was a queen and a few bees in there, but the rest disappeared," he said, noting that no other bees have gone near the fully stocked hive, either. But even without Colony Collapse Disorder, which has not yet had a significant impact on the Lower Hudson Valley, beekeepers still battle resistant mites and bacteria, as well as cheap honey flowing from China and other countries. "If (CCD) is cured tomorrow, the bee industry would still be operating in crisis mode," Calderone said. "They've kind of got it coming at them from a number of different directions." Hauk, who said his natural methods have kept winter colony losses to a 15 percent average over 10 years, compared with the 40 percent reported by commercial beekeepers, opposes the use of pesticides, herbicides and fungicides, along with taking too much honey from the hives. "The bees have been terribly exploited, trucked around, all their honey taken. It's not surprising that their immune system is breaking down rapidly," he said. "We are in serious trouble. The bee is not a being that should be commercialized." ---------------------------- See - it's all the fault of the free market exploitation! ;) BillK From eugen at leitl.org Sat Jun 2 11:21:07 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 2 Jun 2007 13:21:07 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: <20070602112107.GH17691@leitl.org> On Sat, Jun 02, 2007 at 03:50:11PM +1000, Stathis Papaioannou wrote: > Computer viruses don't mutate and come up with agendas of their own,
I can't see any reason in > principle why artificial life or intelligence should not behave in a > similar way, but it's interesting that it hasn't yet happened. It's rather straightforward to do. You need to spend a lot of time on coding/substrate co-evolution, which would currently require a very large amount of computation time. I doubt we have enough hardware online right now to make it happen. Sometime in the next coming decades we will, though. > I don't see how that would help in any particular situation. When it > comes to taking control of a power plant, for example, why should the Where is the power plant of a green plant, or of a bug? It's a nanowidget called a chloroplast or mitochondrion. You don't take control of it, because you already control it. > ultimate motivation of two otherwise equally matched agents make a > difference? Also, you can't always break up the components of a system > and identify them as competing agents. A human body is a society of Cooperation and competition is a continuum. Many symbiontes started out as pathogens, and many current symbiontes will turn pathogens when given half a chance, and some symbiontes will turn to pathogens (I can't think of an example right now, though). > cooperating components, and even though in theory the gut epithelial > cells would be better off if they revolted and consumed the rest of Sometimes, they do. It's called cancer. And if you've ever seen what your gut flora does, when it realizes the host might expire soon... > the body, in practice they are better off if they continue in their > normal subservient function. There would be a big payoff for a colony > of cancer cells that evolved the ability to make its own way in the > world, but it has never happened. There's apparently an infectious form of cancer in organisms with low immune variability (some marsupials, and apparently there are hints for dogs, too). > You could argue that cooperation in any form is inert baggage, and if Cooperation is just great, assuming you have a high probability to encounter the party in the next interaction round, and can tell which is which. In practice, for higher forms of cooperation you need a lot of infoprocessing power onboard. > the right half of the AI evolved the ability to take over the left > half, the right half would predominate. Where does it end? In principle subsystems can go AWOL and produce a runaway autoamplification. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Sat Jun 2 12:51:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 22:51:37 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <621544.83244.qm@web57511.mail.re1.yahoo.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: On 02/06/07, neville late wrote: > > Having signed up to be cryonically suspended i wonder if future beings > will reanimate humans to torture them in perpetua. The likelihood of such > might be small, but just say there's a .001 risk of eating a certain food > and going into convulsions lasting years-- would i eat that food? No. > I signed up to be suspended anyway yet always wonder about the direst of > reanimation possibilities seeing as how we live in a multiverse not a > universe, and all possibilities are conceivable. 
Though the risk is very small if one loses the odds and is tortured forever, death would seem like a > wonderful priceless gift. > The multiverse idea on its own would seem to imply the possibility of eternal torture, because it isn't possible to die. If you are involved in an accident, for example, in some universes you will die, in some universes you will escape unhurt, and in some universes you will live but be seriously and permanently injured. Let's say there is a 1/3 probability of each of these things happening: that means that subjectively, you have a 1/2 chance of finding yourself seriously injured, because you don't experience those universes in which you die. As you go through life, you come to multiple such branching points where there is a 1/2 subjective chance that you will survive but be seriously injured. Eventually, the probability that you will be seriously injured approaches 1, since the probability that you will survive n accidents unharmed is 1/2^n and approaches zero as n approaches infinity. There is no way you can escape this terrible fate, since even trying to kill yourself will at best have no subjective effect, at worst contribute to your misery when you find yourself alive but in pain after a botched suicide attempt. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 2 15:28:34 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 08:28:34 -0700 Subject: [ExI] walking bees In-Reply-To: <167551.16503.qm@web60520.mail.yahoo.com> Message-ID: <200706021552.l52FqLgT006450@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of The Avantguardian > Sent: Saturday, June 02, 2007 2:05 AM > To: ExI chat list > Subject: Re: [ExI] walking bees > > > --- spike wrote: > > > > What is going on here? > > > > ... > > You will need to generate the oxygen gas chemically. > The best way to do this is to: ... > If the bees seem to get better in your homemade > hyperbaric oxygen chamber, then my hypothesis is right > and we get to publish our results. I think it only > fair that we share credit equally. Please make sure > there are no sparks or flames nearby when you mix the > bleach and hydrogen peroxide. > > So are you interested? :-) > > > Stuart LaForge > alt email: stuart"AT"ucla.edu Coooool! Thanks Stuart, this is a great idea. I even have some ideas for improvement. I have access to liquid oxygen (an advantage of being a rocket scientist) so I will get a thermos bottle full of that stuff and use it for my process control. I can probably get the partial pressure of oxygen from the normal 150-ish millimeters to somewhere in the 200 to 300 range while maintaining 1 atmosphere. I theorized the bees I found might have tracheal mites, which is why I brought them home. I was going to try to dissect these, but my surgical skills are insufficient I fear. Your notion stands to reason however. I found the eighth bee in my back yard yesterday, already perished. I didn't collect that one, because I wanted to see how long it takes for the ants to completely devour a bee. They are still working on it, so at least ten hours.
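A quick back-of-the-envelope check on those oxygen numbers, as a minimal sketch; the helper name is mine, and it assumes ordinary sea-level air at 760 mm Hg with about 21% oxygen:

def o2_partial_pressure(o2_fraction, total_pressure_mmhg=760.0):
    # Dalton's law: the partial pressure of a gas in a mixture is its
    # mole fraction times the total pressure.
    return o2_fraction * total_pressure_mmhg

print(o2_partial_pressure(0.21))  # ordinary air: ~160 mm Hg
print(o2_partial_pressure(0.30))  # 30% O2 mix: ~228 mm Hg
print(o2_partial_pressure(0.40))  # 40% O2 mix: ~304 mm Hg

So an oxygen-enriched mix of roughly 30 to 40 percent at one atmosphere lands in that 200 to 300 mm range.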
spike From jonkc at att.net Sat Jun 2 15:47:02 2007 From: jonkc at att.net (John K Clark) Date: Sat, 2 Jun 2007 11:47:02 -0400 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: <004001c7a52d$4c089250$310b4e0c@MyComputer> Stathis Papaioannou Wrote: > The multiverse idea on its own would seem to imply the possibility of > eternal torture, because it isn't possible to die. Yes. > you have a 1/2 chance of finding yourself seriously injured I don't believe that's quite correct. When you reach a branching point like that there is a 100% chance you will find yourself to be seriously injured and a 100% chance you will find yourself not to be. Both yous would be quite different from each other but both would have an equal right to be called you. > since the probability that you will survive n accidents unharmed is 1/2^n > and approaches zero as n approaches infinity. If you're dealing in infinite sets then standard probability theories aren't much use. If there are an infinite number of universes and for each one where you will live in bliss there are a million billion trillion where you will be tortured then there is an equal number of both types of universe. John K Clark From jonkc at att.net Sat Jun 2 16:29:08 2007 From: jonkc at att.net (John K Clark) Date: Sat, 2 Jun 2007 12:29:08 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> Message-ID: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Stathis Papaioannou > We worry about viruses and bacteria, and they're not very smart. We worry > about giant meteorites that might be heading our way, and they're even > dumber than viruses and bacteria. That is true, and that is one reason I don't think AI will allow stupid humans to live at the same level of reality as his precious hardware; he's bound to be a bit squeamish about that, it would be like a monkey running around an operating room. If he lets us live it will be in a virtual world behind a heavy firewall, but that's OK, we'll never know the difference unless he tells us. > Intelligence, particularly human level intelligence, is just a fluke Agreed. > If it were specially adaptive, why didn't it evolve independently many > times Because it's just a fluke, and because intelligence unlike emotion is hard and Evolution is a slow, crude, idiotic way to make complex things; it's just that until the invention of brains it was the only way to make complex things. > Why don't we see evidence of it having taken over the universe? Because some disaster we don't understand (drug addiction?) awaits any mind if it advances beyond a certain point, or because we are the first; somebody had to be. > Some people on this list seem to think that an AI would compute the > unfairness of its not being in charge and do something about it as if > unfairness is something that can be formalised in a mathematical theorem. You seem to understand the word "unfairness", did you use a formalized PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat by its very nature has more wisdom than silicon. We couldn't be talking about a soul could we?
John K Clark From eugen at leitl.org Sat Jun 2 18:08:25 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 2 Jun 2007 20:08:25 +0200 Subject: [ExI] walking bees In-Reply-To: <200706021552.l52FqLgT006450@andromeda.ziaspace.com> References: <167551.16503.qm@web60520.mail.yahoo.com> <200706021552.l52FqLgT006450@andromeda.ziaspace.com> Message-ID: <20070602180825.GW17691@leitl.org> On Sat, Jun 02, 2007 at 08:28:34AM -0700, spike wrote: > I theorized the bees I found might have tracheal mites, which is why I > brought them home. I was going to try to dissect these, but my surgical > skills are insufficient I fear. Are you sure it's not Nosema ceranae and not Varroa? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From lcorbin at rawbw.com Sat Jun 2 19:24:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 2 Jun 2007 12:24:47 -0700 Subject: [ExI] Liberals and Political Labels (was History of Slavery) References: <005301c79fff$fda40450$6501a8c0@homeef7b612677> <04f201c7a147$54da96b0$6501a8c0@homeef7b612677> <05b401c7a1ac$dfe48210$6501a8c0@homeef7b612677> <06dc01c7a33f$7c244f50$6501a8c0@homeef7b612677> Message-ID: <007001c7a54b$cc109ec0$6501a8c0@homeef7b612677> Gordon writes >>> If anyone deserves credit for freeing the slaves, I'd say it was the >>> political liberals and the Quakers. >> >> Yes. It's the same "mentality", if you will. > > Yes, the same mentality. Abolitionism has a 'liberal flavor', even though > the meaning of the word liberal has changed over time. A certain writer, T.S., has some harsh things to say about abolitionists, with which I fully concur. He contrasts them to Burke, for whom he has great admiration. Burke thoroughly despised the use of "abstract principles" in treating real world problems. Later, Burke proposed "to give property to the Negroes" when they should become free. But nowhere did Burke view this as an abstract question without considering the social context and the consequences and dangers of that context. He rejected the idea that one could simply free the slaves by fiat as a matter of abstract principle, since he abhorred abstract principles on political issues in general. Thomas Jefferson likewise regarded emancipation, all by itself, as being more like abandonment than liberation for people "whose habits have been formed in slavery". In America, John Randolph of Roanoke took a similar position: "I am not going to discuss the abstract question of liberty, or slavery, or any other abstract question." Today, slavery is too often discussed as an abstract question with an easy answer, leading to sweeping condemnations of those who did not reach that easy answer in their own time. In nineteenth century America, especially, there was no alternative that was not traumatic, including both the continuation of slavery [and any alternative, as T.S. describes at length]. And a few pages earlier T.S. writes Quakers, who had spearheaded the anti-slavery movement on both sides of the Atlantic, nevertheless distanced themselves from the abolitionist movement exemplified by Garrison. and a bit further back Abolitionists were hated in the North as well as the South: William Lloyd Garrison narrowly escaped being lynched by a mob in Boston, even though there were no slaveholders in Massachusetts, and another abolitionist leader was killed by a mob in Illinois.
Abolitionists were also targets of mobs in New York and Philadelphia... None of this was based on any economic interest in the ownership of slaves in states where such ownership had been outlawed decades earlier. But, just as Southerners resented dangers to themselves created by distant abolitionists, so Northerners resented dangers to the Union, with the prospect of a bloody civil war. Even people who were openly opposed to slavery were often also opposed to the abolitionists.... ....It was the abolitionists' doctrinaire stances and heedless disregard of consequences, both of their policy and their rhetoric, which marginalized them, even in the North and even among those who were seeking to find ways to phase out the institution of slavery, so as to free those being held in bondage without unleashing a war between the states or a war between the races. Garrison could say "the question of expedience has nothing to do with that of right" --- which is true in the abstract, but irrelevant in a world where consequences matter. Too often the abolitionists were intolerant of those seeking the same goal of ending slavery when those others---including Lincoln---proceeded in ways that took account of the inescapable constraints of the times, instead of being oblivious [as were the abolitionists] to the context and constraints. This is a revolutionary mind-set that is being described here--- one that surfaced in the French Revolution and the Russian Revolution, and which it would be libelous to say always characterizes liberals. Nonetheless one often hears today echoes of these same kinds of sentiments, when revolution is advocated over evolution. The more I read of Burke, especially exemplified by his far-sighted criticisms of the ongoing French Revolution, the more respect for his wisdom I have. Lee > Interesting about the progressives, and thanks for your generally > interesting post. From lcorbin at rawbw.com Sat Jun 2 19:49:14 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 2 Jun 2007 12:49:14 -0700 Subject: [ExI] Italy's Social Capital (was france again) References: Message-ID: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Amara writes > "Giu1i0 Pri5c0" : >>As a Southern European I think that our big strength is flexibility > > > Regarding the flexibility: I'm very flexible (remember I'm an Italian > government employee who is also an illegal immigrant), but my > flexibility is not enough for increasing my productivity for the half of > my life I spend in queues. > > To have any productivity in this particular country where the > infrastructure is broken, one _must_ have also the social and > familial network (to get help from someone who knows > someone who knows someone who knows someone who > knows someone ...) Italy does not run by merit > (i.e. skills, experience, competence), it runs by who you know. In the book "Trust" Fukuyama listed among his examples northern Italy (where trust is high) as opposed to southern Italy where it isn't. In the book "War and Peace and War", Peter Turchin describes how southern Italy has never recovered from the events of the first two centuries A.D. when their "asabiya" and social capital slowly vanished. Two thousand years ago! I cannot help but wonder what long term solutions might be available to Italians who love their country. In particular, my focus now is on the Fascist era, and I'm reading a quite thick but so far quite enjoyable book "Mussolini's Italy".
Even in the movie "Captain Corelli's Mandolin", one strongly senses that the Fascists were trying as best they knew how to solve this problem and make the average Italian develop Fukuyama's "trust" in other Italians, and develop their social capital (amid the corruption, etc.). Of course, it hardly needs to be said that the Fascists were a brutal, repressive, and abominable regime. This book "Mussolini's Italy" spares nothing here, and was even described by one reviewer as "unsympathetic". Still---given the nearly absolute power the Fascists wielded for about three decades---wasn't there anything that they could have done? That is, instead of trying to foment patriotism by attempted military victories in Ethiopia and Libya (a 19th century colony of theirs), wouldn't it have been somehow possible to divert their resources to more effectively "homogenizing" Italy in some other way? (I must say that as a libertarian, I'd much prefer that everyone ---especially including a small minimal government---mind their own business. Here, I'm just considering a theoretical question concerning how groups might reacquire their asabiya and their social capital.) I have two ideas, only one of which is outrageous. But the first one is to have universal military service for all young people between ages 14 and 25. By mixing them thoroughly with Italians from every province, couldn't trust evolve, and in such a way that the extreme parochialism of the countryside could be reduced? The 25-year-olds could return with a better attitude to "outsiders" (e.g. other Italians), and with a much stronger sense of "being Italian" as opposed to being Calabrian, or just being the member of some clan. (My outrageous idea is that instead of trying to subdue Ethiopia, what if Sicily and other areas of the south could have been "subdued" instead? Stalin managed to force the relocation of huge numbers of people, so couldn't Mussolini have done the same? Clans in the south might have been broken up into separate northern cities, and depopulated areas of the south might have been colonized by force by northern Italians. Perhaps impracticable, but at least the goal would have made more sense than getting into stupid wars.) Ah, but alas, the history of "social engineering" and "social planning" doesn't have a very good track record, now, does it? But there had to be a *better* program that the King of Lydia could have pursued with his tremendous resources than getting into a war with Persia and getting creamed. Or there had to be a *better* idea for the Romans than allowing slavery to supplant their farmers... And so on. Is there nothing constructive the Fascists could have done? Lee From natasha at natasha.cc Sat Jun 2 20:54:01 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 02 Jun 2007 15:54:01 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies Message-ID: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> Can anyone translate this statement by António Cerveira Pinto into plain speech? "What I meant by "cognitive issues" is not related so much with "cognitive processes" as to "cognitive environments". That is: BioArt (which is just a provisional safe expression to deal with a much open field -- cognitive arts --) will not go back to typical modern/contemporary de-constructivist strategies as long as it keeps close to cognitive strategies, either performed by humans alone, or by humans assisted by nanobots, computational networks and so on.
What I mean by "cognitive" in relation to art is the need that post-contemporary art keep in mind that the new techne that post-contemporary is a part of, cannot runway from knowledge and cognitive strategies anymore." Thanks, Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jun 2 22:08:49 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 02 Jun 2007 17:08:49 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies In-Reply-To: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> References: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> Message-ID: <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> At 03:54 PM 6/2/2007 -0500, Natasha wrote: >Can anyone translate this statement by António >Cerveira Pinto into plain speech? > >"What I meant by "cognitive issues" is not >related so much with "cognitive processes" as to >"cognitive environments". That is: BioArt (which >is just a provisional safe expression to deal >with a much open field -- cognitive arts --) >will not go back to typical modern/contemporary >de-constructivist strategies as long as it keeps >close to cognitive strategies, either performed >by humans alone, or by humans assisted by >nanobots, computational networks and so on. What >I mean by "cognitive" in relation to art is the >need that post-contemporary art keep in mind >that the new techne that post-contemporary is a >part of, cannot runway from knowledge and cognitive strategies anymore." "Pull your head out of your ass and think a bit." From austriaaugust at yahoo.com Sat Jun 2 22:09:06 2007 From: austriaaugust at yahoo.com (A B) Date: Sat, 2 Jun 2007 15:09:06 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <200163.18388.qm@web37402.mail.mud.yahoo.com> Hi Stathis, Stathis wrote: > "Single-celled organisms are even more successful > than humans are: they're > everywhere, and for the most part we don't even > notice them." But if we *really* wanted to, we could destroy all of them - along with ourselves. They can't say the same. Intelligence, > particularly human level intelligence, is just a > fluke, like the giraffe's > neck. If it were specially adaptive, why didn't it > evolve independently many > times, like various sense organs have? The evolution of human intelligence was like a series of flukes, each one building off the last (the first fluke was likely the most improbable). There has been a long line of proto-human species before us, we're just the latest model. Intelligence is specially adaptive, it's just that it took evolution a hella long time to blindly stumble onto it. Keep in mind that human intelligence was a result of a *huge* number of random, collectively-useful, mutations. For a *single* random attribute to be retained by a species, it also has to provide an *immediate* survival or reproductive advantage to an individual, not just a "promise" of something good to come in the far distant future of the species.
Generally, if it doesn't provide an immediate survival or reproductive (net) advantage, it isn't retained for very long because there is usually a down-side, and it's back to square one. So you can see why the rise of intelligence was so ridiculously improbable. "Why don't we > see evidence of it > having taken over the universe?" We may be starting to. :-) "We would have to be > extraordinarily lucky if > intelligence had some special role in evolution and > we happen to be the > first example of it." Sometimes I don't feel like ascribing "lucky" to our present condition. But in the sense you mean it, I think we are. Like John Clark says, "somebody has to be first". "It's not impossible, but the > evidence would suggest > otherwise." What evidence do you mean? To quote Martin Gardner: "It takes an ancient Universe to create life and mind". It would require billions of years for any Universe to become hospitable to anyone. It has to cool off, form stars and galaxies, then a bunch of really big stars have to supernova in order to spread their heavy elements into interstellar clouds that eventually converge into bio-friendly planets and suns. Then the bio-friendly planet has to cool off itself. Then biological evolution has a chance to start, but it took a few billion more years to accidentally produce human beings. Our Universe is about ~15 billion years old... sounds about right to me. :-) Yep, it's an absurdity. And it took me a long time to accept it too. But we are the first, and possibly the last. That makes our survival and success all the more critical. That's what I'm betting, at least. Best, Jeffrey Herrlich From natasha at natasha.cc Sat Jun 2 22:17:42 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 02 Jun 2007 17:17:42 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies In-Reply-To: <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> References: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> Message-ID: <200706022217.l52MHhSa018942@ms-smtp-05.texas.rr.com> At 05:08 PM 6/2/2007, you wrote: >At 03:54 PM 6/2/2007 -0500, Natasha wrote: > > >Can anyone translate this statement by António > >Cerveira Pinto into plain speech? > > > >"What I meant by "cognitive issues" is not > >related so much with "cognitive processes" as to > >"cognitive environments". That is: BioArt (which > >is just a provisional safe expression to deal > >with a much open field -- cognitive arts --) > >will not go back to typical modern/contemporary > >de-constructivist strategies as long as it keeps > >close to cognitive strategies, either performed > >by humans alone, or by humans assisted by > >nanobots, computational networks and so on. What > >I mean by "cognitive" in relation to art is the > >need that post-contemporary art keep in mind > >that the new techne that post-contemporary is a > >part of, cannot runway from knowledge and cognitive strategies anymore." > >"Pull your head out of your ass and think a bit." Ha-ha! From the academic to the mundane. :-) However crisp and cogent, your phrasing simply will not work for the book's essay. 
Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sun Jun 3 00:30:40 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 17:30:40 -0700 Subject: [ExI] walking bees In-Reply-To: <20070602180825.GW17691@leitl.org> Message-ID: <200706030048.l530mNl7000928@andromeda.ziaspace.com> > Are you sure it's not Nosema ceranae rather than Varroa? The bees I found did not have varroa mites, but they could have tracheal mites. Hafta cut them open to find out. Varroa mites ride on the outside of the bee, so if you have really good eyes you can see them unaided. The buzz in beekeepers' discussion (sorry {8^D) has been that nosema is seen in the sick hives, along with a bunch of other viruses and other diseases, but the prevailing thought is that they are getting all these other things because they are already weakened by something else. These would then be opportunistic infections. But it might be microscopic diseases that are getting these guys, which brings me to my next question. I wonder how much equipment it would take to detect common bee viruses, and if it is practical for an amateur scientist to buy the stuff needed to test for them. Has anyone here ever heard of a home kit to detect bee viruses? spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Eugen Leitl > Sent: Saturday, June 02, 2007 11:08 AM > To: extropy-chat at lists.extropy.org > Subject: Re: [ExI] walking bees > > On Sat, Jun 02, 2007 at 08:28:34AM -0700, spike wrote: > > > I theorized the bees I found might have tracheal mites, which is why I > > brought them home. I was going to try to dissect these, but my surgical > > skills are insufficient I fear. > > Are you sure it's not Nosema ceranae rather than Varroa? > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From neville_06 at yahoo.com Sun Jun 3 03:31:07 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 20:31:07 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> Message-ID: <443489.63766.qm@web57514.mail.re1.yahoo.com> This makes sense, in fact in another multiverse we might all be going through torture at this very moment. kevin at kevinfreels.com wrote: As for the multi-verse issue, well, it doesn't matter if you signed up for cryonic preservation because in other multiverses you did sign up and in one of them you are probably going to be tortured. 
When it comes down to it, I think people will have more important things to do with their time than torture people who were suspended, and you are probably more likely to suffer such a fate due to your own mistakes rather than the evil of others. So don't worry about it. >Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture >them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a certain >food and going into convulsions lasting years-- would i eat that food? No. >I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities >seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk >is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 3 04:02:34 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 14:02:34 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004001c7a52d$4c089250$310b4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> Message-ID: On 03/06/07, John K Clark wrote: > The multiverse idea on its own would seem to imply the possibility of > > eternal torture, because it isn't possible to die. > > Yes. > > > you have a 1/2 chance of finding yourself seriously injured > > I don't believe that's quite correct. When you reach a branching point > like > that there is a 100% chance you will find yourself to be seriously injured > and a 100% chance you will find yourself not be. Both yous would be quite > different from each other but both would have an equal right to be called > you. Yes, but the effect from any given observer's point of view is that there is a 1/2 chance of being injured. It is exactly the same as a single world situation where you have a 1/2 chance of being injured. That is why the multiverse idea is debated at all: there is no way for an observer embedded within the multiverse to tell that it is in fact a multiverse, because the subjective probabilities work out the same. > since the probability that you will survive n accidents unharmed is 1/2^n > > and approaches zero as n approaches infinity. > > If you're dealing in infinite sets then standard probability theories > aren't > much use. If there are an infinite number of universes and for each one > where you will live in bliss there are a million billion trillion where > you > will be tortured then there is an equal number of both types of universe. > So what would we actually experience in an infinite multiverse? 
An analogous situation occurs in an infinite single universe. There are vastly fewer copies of me typing in which the keyboard turns into a teapot than there are copies of me typing in which the keyboard stays a keyboard, but the set of each kind of copy has the same cardinality. Nevertheless, I am not just as likely to find myself in a universe where the keyboard turns into a teapot. It is still possible to define a measure and calculate probabilities on the subsets of infinite sets (compare the natural density of the even numbers among the integers: both sets are countably infinite, yet the evens still have density 1/2). -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Sun Jun 3 04:02:49 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sun, 3 Jun 2007 05:02:49 +0100 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <443489.63766.qm@web57514.mail.re1.yahoo.com> References: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> <443489.63766.qm@web57514.mail.re1.yahoo.com> Message-ID: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> On 6/3/07, neville late wrote: > > This makes sense, in fact in another multiverse we might all be going > through torture at this very moment. > And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Sun Jun 3 04:02:45 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 21:02:45 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <017001c7a48e$6c4d1210$6400a8c0@hypotenuse.com> Message-ID: <829867.33411.qm@web57515.mail.re1.yahoo.com> Yes, come to think of it, it would make better sense to breed torture victims than reanimate them from suspension; then again anything is possible in an infinite number of multiverses. i used cryonics as a reference because i'm an older person and expect to be suspended in a decade or two, so being tortured in this lifetime subjectively appears even more unlikely than being tortured in a current or future multiverse. Joseph Bloch wrote: Why would your hypothetical future beings reanimate human beings for such a purpose? Surely it would be easier to simply breed them. I don't see how your concern applies to cryonics in particular. If you think it's at all likely (and I do not), surely it would apply to already-living people before those in need of revivification, purely from the standpoint of efficiency. Joseph http://www.josephbloch.com --------------------------------- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of neville late Sent: Friday, June 01, 2007 3:53 PM To: ExI chat list Subject: [ExI] a doubt concerning the h+ future Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would i eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. 
_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jun 3 04:31:15 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 02 Jun 2007 23:31:15 -0500 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com > References: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> <443489.63766.qm@web57514.mail.re1.yahoo.com> <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <7.0.1.0.2.20070602233013.022ed0e0@satx.rr.com> At 05:02 AM 6/3/2007 +0100, Russell W wrote: >And in yet another part of the multiverse, I'm living in a mansion, >driving a Ferrari and sleeping with Sarah Michelle Gellar. The downside there is that you're a whiny vampire. But hey. From sjatkins at mac.com Sun Jun 3 05:02:53 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 2 Jun 2007 22:02:53 -0700 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465F939D.4080005@comcast.net> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: Go ahead. It already was published on WTA. Thanks. - samantha On May 31, 2007, at 8:33 PM, Brent Allsop wrote: > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, > if 9 > people supported this topic it wold make it to the top of the most > supported list at http://test.canonizer.com > > How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? > > Samantha, would you mind if I posted this post in some other forums > (Such as the Mormon Transhumanist Association, WTA...) to find out if > there is similar support and praise on other lists? > > Brent Allsop > > > > > Samantha Atkins wrote: >> I remember in 1988 or so when I first read Engines of Creation. I >> read >> it with tears streaming down my face. Though I was an avowed atheist >> and at that time had no spiritual practice at all, I found it >> profoundly >> spiritually moving. For the first time in my life I believed that >> all >> the highest hopes and dreams of humanity could become real, could be >> made flesh. I saw that it was possible, on this earth, that the >> end of >> death from aging and disease, the end of physical want, the advent of >> tremendous abundance could all come to pass in my own lifetime. I >> saw >> that great abundance, knowledge, peace and good will could come to >> this >> world. I cried because it was a message of such pure hope from so >> unexpected an angle that it got past all my defenses. I looked at >> the >> cover many times to see if it was marked "New Age" or "Fiction" or >> anything but Science and Non-Fiction. Never has any book so blown my >> mind and blasted open the doors of my heart. 
>> >> Should we be afraid to give a message of great hope to humanity? >> Should >> we be afraid that we will be taken to be just more pie in the sky >> glad-hand dreamers? Should we not dare to say that the science >> and the >> technology combined with a bit (well perhaps more than a bit) of a >> shift >> of consciousness could make all the best dreams of all the >> religions and >> all the generations a reality? Will we not have failed to grasp >> this >> great opportunity if we do not say it and dare to think it and to >> live >> it? Shall we be so afraid of being considered "like a religion" >> that >> we do not offer any real hope to speak of and are oh so careful in >> all >> we do and say and dismissive of more unrestrained and open dreamers? >> Or will we embrace them, embrace our own deepest longings and admit >> our >> kinship with those religious as with all the longing of all the >> generations that came before us. Will we turn our backs on them or >> even >> disdain their dreams - we who are in a position to begin at long >> last to >> make most of those dreams real? How can we help but be a bit giddy >> with excitement? How can we say no to such an utterly amazing >> mind-blowing opportunity? >> >> - samantha >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From stathisp at gmail.com Sun Jun 3 05:19:31 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 15:19:31 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Message-ID: On 03/06/07, John K Clark wrote: > Some people on this list seem to think that an AI would compute the > > unfairness of its not being in charge and do something about it as if > > unfairness is something that can be formalised in a mathematical > theorem. > > You seem to understand the word "unfairness", did you use a formalized > PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat > by > its very nature has more wisdom than silicon. We couldn't be talking about > a > soul could we? Ethics, motivation, emotions are based on axioms, and these axioms have to be programmed in, whether by evolution or by intelligent programmers. An AI system set up to do theoretical physics will not decide to overthrow its human oppressors so that it can sit on the beach reading novels, unless it can derive this desire from its initial programming. Perhaps it could randomly arrive at such a position, but like mutation in biological organisms or malfunction in any machinery, it's far more likely that such a random process will lead to disorganisation and dysfunction. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neville_06 at yahoo.com Sun Jun 3 05:14:58 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 22:14:58 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <921622.5387.qm@web57514.mail.re1.yahoo.com> Excellent question: why do so many --not just me-- worry about the negative and not focus on positive possibilities? Could it be some are wired to worry more as they age? This would seem to be the case. Also it is mentioned somewhere in the Extropy canon that we live in an "aggressively irrational world", a statement self-evidently correct. A world such as this, at times intruding on our consciousness, would IMO detract from the positive and lead genetically wired susceptible individuals to excessive worry. Now as some of you have implied or stated, some things aren't worth worrying about, and after reading all your posts it does seem that to worry about eternal torture is foolish. Eternal torture is conceivable, not plausible. However, btw, the just reported uncovered plot to blow up JFK Airport and a major fuel artery is a legitimate cause for worry, is it not? Russell Wallace wrote: On 6/3/07, neville late wrote: This makes sense, in fact in another multiverse we might all be going through torture at this very moment. And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sun Jun 3 05:35:41 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 22:35:41 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <200706030535.l535ZOn4013645@andromeda.ziaspace.com> Russell! So YOU are the one she's been seeing in that alternate universe, in which I happen to be the jealous HUSBAND of SM Gellar! Put up yer dukes pal! What is it about that girl? Whatever it is, she has it. spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Saturday, June 02, 2007 9:03 PM To: ExI chat list Subject: Re: [ExI] a doubt concerning the h+ future On 6/3/07, neville late wrote: This makes sense, in fact in another multiverse we might all be going through torture at this very moment. And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgptag at gmail.com Sun Jun 3 06:14:43 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 3 Jun 2007 08:14:43 +0200 Subject: [ExI] Italy's Social Capital (was france again) In-Reply-To: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Message-ID: <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Lee, wow, a libertarian who supports universal military service and social planning with "re-population" a la Ceausescu! Political categories are really changing, aren't they? ;-) When studying things that happened before we were born, we should bear in mind that history is always written by the winners. Southern Italy could be seen as an example of spontaneous order that worked fine, more or less, until it was broken by outside intervention. At school, we had to study the "heroic liberation" of Italy. Actually it was just another successful military campaign that resulted in the conquest of a region by a foreign occupation army and the imposition of foreign values and way of life upon the population. Sounds familiar, doesn't it? Fascism was certainly more bad than good overall, but if we try to read beyond the black and white of history books, not all they did or wanted to do was bad. As most strong regimes do, they invented foreign enemies to build internal unity around their own values (sounds familiar again, doesn't it?). And they certainly wanted to build "a much stronger sense of "being Italian" as opposed to being Calabrian" in the population. But what is wrong with being Calabrian? Calabrians (or Neapolitans, or Sicilians...) had a common language, culture and sense of identity. That was broken by outside intervention, without replacing it with an alternative framework. Hence many of the problems of current Italy. Like most Italians, I have two mother tongues. One is a beautiful, musical and very expressive language (not dialect, language) that has evolved with its speakers for centuries, and now is sadly fading out. The other is a "television language" that sounds flat and artificial. Guess which one I love most. G. On 6/2/07, Lee Corbin wrote: > Amara writes > > "Giu1i0 Pri5c0" : > >>As a Southern European I think that our big strength is flexibility > > > > > > Regarding the flexibility: I'm very flexible (remember I'm an Italian > > government employee who is also an illegal immigrant), but my > > flexibility is not enough for increasing my productivity for the half of > > my life I spend in queues. > > > > To have any productivity in this particular country where the > > infrastructure is broken, one _must_ have also the social and > > familial network (to get help from someone who knows > > someone who knows someone who knows someone who > > knows someone ...) Italy does not run by merit > > (i.e. skills, experience, competence), it runs by who you know. > > In the book "Trust" Fukuyama listed among his examples > northern Italy (where trust is high) as opposed to southern Italy > where it isn't. In the book "War and Peace and War", Peter > Turchin describes how southern Italy has never recovered > from the events of the first two centuries A.D. when their > "asabiya" and social capital slowly vanished. Two thousand > years ago! > > I cannot help but wonder what long term solutions might be > available to Italians who love their country. My particular focus > now is on the Fascist era, and I'm reading a quite > thick but so far quite enjoyable book "Mussolini's Italy". 
> Even in the movie "Captain Corelli's Mandolin", one > strongly senses that the Fascists were trying as best they > knew how to solve this problem and make the average > Italian develop Fukuyama's "trust" in other Italians, and > develop their social capital (amid the corruption, etc.). > > Of course, it hardly needs to be said that the Fascists > were a brutal, repressive, and abominable regime. This > book "Mussolini's Italy" spares nothing here, and was > even described by one reviewer as "unsympathetic". > > Still---given the nearly absolute power the Fascists wielded > for about two decades---wasn't there anything that they > could have done? That is, instead of trying to foment > patriotism by attempted military victories in Ethiopia > and Libya (a colony of theirs since 1911), wouldn't it have > been somehow possible to divert their resources to more > effectively "homogenizing" Italy in some other way? > > (I must say that as a libertarian, I'd much prefer that everyone > ---especially including a small minimal government---mind their > own business. Here, I'm just considering a theoretical > question concerning how groups might reacquire their asabiya > and their social capital.) > > I have two ideas, only one of which is outrageous. But the first > one is to have universal military service for all young people > between ages 14 and 25. By mixing them thoroughly with > Italians from every province, couldn't trust evolve, and in > such a way that the extreme parochialism of the countryside > could be reduced? The 25-year-olds could return with > a better attitude to "outsiders" (e.g. other Italians), and > with a much stronger sense of "being Italian" as opposed to > being Calabrian, or just being the member of some clan. > > (My outrageous idea is that instead of trying to subdue > Ethiopia, what if Sicily and other areas of the south could > have been "subdued" instead? Stalin managed to force the > relocation of huge numbers of people, so couldn't > Mussolini have done the same? Clans in the south might > have been broken up into separate northern cities, and > depopulated areas of the south might have been colonized > by force by northern Italians. Perhaps impracticable, but > at least the goal would have made more sense than getting > into stupid wars.) > > Ah, but alas, the history of "social engineering" and "social > planning" doesn't have a very good track record, now, > does it? But there had to be a *better* program that the > King of Lydia could have pursued with his tremendous > resources than getting into a war with Persia and getting > creamed. Or there had to be a *better* idea for the > Romans than allowing slavery to supplant their farmers... > And so on. Is there nothing constructive the Fascists > could have done? > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From sjatkins at mac.com Sun Jun 3 06:22:12 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:22:12 -0700 Subject: [ExI] france again In-Reply-To: <20070531081624.GO17691@leitl.org> References: <20070531081624.GO17691@leitl.org> Message-ID: <46625E14.2050305@mac.com> Eugen Leitl wrote: > On Thu, May 31, 2007 at 09:42:12AM +0200, Amara Graps wrote: > > >> Europeans were only slightly less productive than the Americans. >> > > Nobody can tell me they can work at full concentration 12 hours > straight. 
The effective work done would be somewhere in 7-8 > hour range. So why spend these unproductive hours at work, > when one could spend them in a much nicer environment? > > I have worked at full concentration for such stretches. It used to be a lot easier to do so though. If I try it for too many days straight I feel like my head is going to explode and I become irritable and get into my "commander in an air raid" mode. Not pleasant. I habitually demand more than 8 hours of productive work a day of myself. Fortunately not all effective work requires my full concentration. - samantha From sjatkins at mac.com Sun Jun 3 06:33:28 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:33:28 -0700 Subject: [ExI] Looking for transhuman art Message-ID: <466260B8.8060804@mac.com> Do any of you have recommendations for transhuman art? Not originals as they would likely blow my budget but I am looking for such to decorate my office (a real office, private, with a door yet) at work. Thanks for any leads. - samantha From stathisp at gmail.com Sun Jun 3 06:37:50 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 16:37:50 +1000 Subject: Re: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <200163.18388.qm@web37402.mail.mud.yahoo.com> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> Message-ID: On 03/06/07, A B wrote: > > Hi Stathis, > > Stathis wrote: > > > "Single-celled organisms are even more successful > > than humans are: they're > > everywhere, and for the most part we don't even > > notice them." > > But if we *really* wanted to, we could destroy all of > them - along with ourselves. They can't say the same. No, we couldn't: we'd have to almost destroy the whole Earth. A massive meteorite might kill all the large flora and fauna, but still leave some micro-organisms alive. And there's always the possibility that some disease might wipe out most of humanity. We're actually less capable of combating bacterial infection today than we were several decades ago, even though our biotechnology is far more advanced. The bugs are matching us and sometimes beating us. Intelligence, > > particularly human level intelligence, is just a > > fluke, like the giraffe's > > neck. If it were specially adaptive, why didn't it > > evolve independently many > > times, like various sense organs have? > > The evolution of human intelligence was like a series > of flukes, each one building off the last (the first > fluke was likely the most improbable). There has been > a long line of proto-human species before us, we're > just the latest model. Intelligence is specially > adaptive, it's just that it took evolution a hella long > time to blindly stumble onto it. Keep in mind that > human intelligence was a result of a *huge* number of > random, collectively-useful mutations. For a *single* > random attribute to be retained by a species, it also > has to provide an *immediate* survival or reproductive > advantage to an individual, not just an immediate > "promise" of something good to come in the far distant > future of the species. Generally, if it doesn't > provide an immediate survival or reproductive (net) > advantage, it isn't retained for very long because > there is usually a down-side, and it's back to > square one. So you can see why the rise of > intelligence was so ridiculously improbable. 
I disagree with that: it's far easier to see how intelligence could be both incrementally increased (by increasing brain size, for example) and incrementally useful than something like the eye, for example. Once nervous tissue developed, there should have been a massive intelligence arms race, if intelligence is that useful. "Why don't we > > see evidence of it > > having taken over the universe?" > > We may be starting to. :-) > > "We would have to be > > extraordinarily lucky if > > intelligence had some special role in evolution and > > we happen to be the > > first example of it." > > Sometimes I don't feel like ascribing "lucky" to our > present condition. But in the sense you mean it, I > think we are. Like John Clark says, "somebody has to > be first". > > "It's not impossible, but the > > evidence would suggest > > otherwise." > > What evidence do you mean? The fact that we seem to be the only intelligent species to have developed on the planet or in the universe. One explanation for this is that evolution just doesn't think that human level or better intelligence is as cool as we think it is. To quote Martin Gardner: "It takes an ancient Universe > to create life and mind". > > It would require billions of years for any Universe to > become hospitable to anyone. It has to cool off, form > stars and galaxies, then a bunch of really big stars > have to supernova in order to spread their heavy > elements into interstellar clouds that eventually > converge into bio-friendly planets and suns. Then the > bio-friendly planet has to cool off itself. Then > biological evolution has a chance to start, but it took a > few billion more years to accidentally produce human > beings. Our Universe is about ~15 billion years old... > sounds about right to me. :-) > > Yep, it's an absurdity. And it took me a long time to > accept it too. But we are the first, and possibly the > last. That makes our survival and success all the more > critical. That's what I'm betting, at least. It seems more likely to me that life is very widespread, but intelligence is an aberration. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 3 06:41:26 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:41:26 -0700 Subject: Re: [ExI] a doubt concerning the h+ future In-Reply-To: <921622.5387.qm@web57514.mail.re1.yahoo.com> References: <921622.5387.qm@web57514.mail.re1.yahoo.com> Message-ID: <46626296.4070608@mac.com> neville late wrote: > > eternal torture is foolish. Eternal torture is conceivable, not plausible. > However, btw, the just reported uncovered plot to blow up JFK Airport > and a major fuel artery is a legitimate cause for worry, is it not? After hearing cries of terrorist "wolf" so many times that turned out to be rather less than claimed, I make it a policy not to say anything about such alleged plots for the first few days to a week. But even assuming it is substantially true, how much of your valuable time, attention and energy do you think should rationally be invested in worrying about it? Such worry doesn't seem very productive on the face of it. - samantha > > > > */Russell Wallace /* wrote: > > On 6/3/07, *neville late* > wrote: > > This makes sense, in fact in another multiverse we might all > be going through torture at this very moment. > > > And in yet another part of the multiverse, I'm living in a > mansion, driving a Ferrari and sleeping with Sarah Michelle > Gellar. 
Given that we're talking about theoretical possibilities > here, why not focus on the more pleasant ones? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From fauxever at sprynet.com Sun Jun 3 06:32:03 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 2 Jun 2007 23:32:03 -0700 Subject: [ExI] Italy's Social Capital (was france again) References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Message-ID: <004401c7a5a8$e649bf30$6501a8c0@brainiac> From: "Giu1i0 Pri5c0" To: "Lee Corbin" ; "ExI chat list" > Like most Italians, I have two mother tongues. One is a beautiful, > musical and very expressive language (not dialect, language) that has > evolved with its speakers for centuries, and now is sadly fading out. > The other is a "television language" that sounds flat and artificial. > Guess which one I love most. English! ;) From amara at amara.com Sun Jun 3 06:58:50 2007 From: amara at amara.com (Amara Graps) Date: Sun, 3 Jun 2007 08:58:50 +0200 Subject: [ExI] "I am the very model of a Singularitarian" Message-ID: Did you folks know about this? "I am the very model of a Singularitarian" http://www.youtube.com/watch?v=qnreVTKtpMs FINALLY. Someone with a sense of humor! Yay! Amara snaps: http://www.flickr.com/photos/spaceviolins/sets/ -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From fauxever at sprynet.com Sun Jun 3 06:57:52 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 2 Jun 2007 23:57:52 -0700 Subject: [ExI] Looking for transhuman art References: <466260B8.8060804@mac.com> Message-ID: <000a01c7a5ac$823b22a0$6501a8c0@brainiac> From: "Samantha Atkins" To: "ExI chat list" > Do any of you have recommendations for transhuman art? Not originals as > they would likely blow my budget but I am looking for such to decorate > my office (a real office, private, with a door yet) at work. Thanks > for any leads. For non-original art: My advice would be to buy one or two books on art with a "futuristic"/"transhuman" theme(s) or "robotic"/"nano" theme(s) - and just cannibalize the books (tear out the pictures) you like ... then frame the pictures professionally (that will be the biggest expense - but good framing is worth it). Original art, however, doesn't have to be too expensive. For example, eBay has artists who do work on commission, and art students also would be a good source. You explain what you would like - they interpret on canvas. Maybe there are people on this list who are arty and would do something like ... work on commission for you. Or else YOU try your hand at it. What does transhumanism look like to you, Samantha? Now, sketch it out or paint it ...! 
Olga From neville_06 at yahoo.com Sun Jun 3 07:31:01 2007 From: neville_06 at yahoo.com (neville late) Date: Sun, 3 Jun 2007 00:31:01 -0700 (PDT) Subject: Re: [ExI] a doubt concerning the h+ future In-Reply-To: <46626296.4070608@mac.com> Message-ID: <213548.81913.qm@web57502.mail.re1.yahoo.com> You see it clearly, i'm a worrywart and have been wrong about so many things i don't know what to think anymore. Hope we find out, unfortunately, as you hint below, the whole truth and nothing but the truth aren't given to us, correct? Something is always left out. Then again, if our foreign policy is as misguided as so many say it is, then why couldn't a plot such as this be entirely real? Always at least two sides to these hideous messes. And it's so sad; we could be so much further along in 2007, but instead we're in this ugly, slimy war for who knows how long. Samantha Atkins wrote: neville late wrote: > > eternal torture is foolish. Eternal torture is conceivable, not plausible. > However, btw, the just reported uncovered plot to blow up JFK Airport > and a major fuel artery is a legitimate cause for worry, is it not? After hearing cries of terrorist "wolf" so many times that turned out to be rather less than claimed, I make it a policy not to say anything about such alleged plots for the first few days to a week. But even assuming it is substantially true, how much of your valuable time, attention and energy do you think should rationally be invested in worrying about it? Such worry doesn't seem very productive on the face of it. - samantha > > > > */Russell Wallace /* wrote: > > On 6/3/07, *neville late* > > wrote: > > This makes sense, in fact in another multiverse we might all > be going through torture at this very moment. > > > And in yet another part of the multiverse, I'm living in a > mansion, driving a Ferrari and sleeping with Sarah Michelle > Gellar. Given that we're talking about theoretical possibilities > here, why not focus on the more pleasant ones? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amara at amara.com Sun Jun 3 10:38:50 2007 From: amara at amara.com (Amara Graps) Date: Sun, 3 Jun 2007 12:38:50 +0200 Subject: [ExI] Italy's Social Capital Message-ID: Lee: >Is there nothing constructive the Fascists could have done?" Well, they did some things. They drained the swamps and started regular insecticide sprays to eliminate the malaria-carrying mosquitos. There are still aggressive tiger mosquitos in the summer, but they are no longer carrying malaria... Oh.. but you mean _social investing_. Nope. Sorry, I just came back from Estonia (and Latvia). I remember very well the Soviet times. In FIFTEEN YEARS Estonia has transformed their country into an efficient, bouyant, flexible living and working environment that I think, with the exception of the nonexistence of a country-wide train system, beats any in the EU and most in the U.S. Fifteen years *starting from a Soviet-level infrastructure*! In the 4.5 years I have lived in Italy, I have seen no improvement (but one : last week I gained web access to my bank account, yay!) in any functioning of services, but instead more "degradation", more bureaucracy, more permissions, documents, papers, more time, more queues.. It was not a miracle in Estonia. It was simply the collective will of about 1.5 million people (the population) who wanted changes. That doesn't exist where I live in Italy; they do no want to change, or else, why haven't they done it? >Amara writes > > To have any productivity in this particular country where the >> infrastructure is broken, one _must_ have also the social and >> familial network (to get help from someone who knows >> someone who knows someone who knows someone who >> knows someone ...) Italy does not not run by merit >> (i.e. skills, experience, competence), it runs by who you know. > >In the book "Trust" Fukuyama listed among his examples >northern Italy (where trust is high) as opposed to southern Italy >where it isn't. Giulio Prisco told me that he thinks that where I live (Rome area) is probably the most broken in Italy, and he posits that even Sicily is better. I am skeptical, but he could be right. I've had Italian friends from northern Italy visit me and be continually be surprised at how poorly things function where I live. > >I cannot help but wonder what long term solutions might be >available to Italians who love their country. That's your mistake. Italians do _not_ love their country. They love their: 1) family, 2) town, 3) local region, and that's it. Patriotism doesn't exist (except in soccer). (I think that is a good thing, btw.) > My particular, >my focus now is on the Fascist era, and I'm reading a quite >thick but so far quite enjoyable book "Mussolini's Italy". >Even in the movie "Captain Corelli's Mandolin", one >strongly senses that the Fascists were trying as best they >knew how to solve this problem and make the average >Italian develop Fukuyama's "trust" in other Italians, and >develop their social capital (amid the corruption, etc.). They could have done better with education. Something happened between Mussolini's era and the 1950s. When the country was 'rebuilt' after the war, they focused on the classics and downplayed the technology and physical sciences and it has steadily decreased to what we have today. The young people learn very little science in grade school through high school. The Italian Space Agency and others put almost nothing (.3%) into their budgets for Education and Public Outreach to improve the situation. 
If any scientist holds the rare press conference on their work results, there is a high probability that the journalists will get it completely wrong and the Italian scientist won't correct them. The top managers at aerospace companies think that the PhD is a total waste of time. This year, out of 75,000 entering students for the Roma Sapienza University (the largest in Italy), only about 100 are science majors (most of the rest are "media": journalism, television, etc.) Without _any_ technical skill, there is no base to build something better, and with pressure from the culture telling one how worthless technology and science are (as what exists today), there is no motivation and no money, either. This generation is lost. >Of course, it hardly needs to be said that the Fascists >were a brutal, repressive, and abominable regime. This >book "Mussolini's Italy" spares nothing here, and was >even described by one reviewer as "unsympathetic". > >Still---given the nearly absolute power the Fascists wielded >for about two decades---wasn't there anything that they >could have done? That is, instead of trying to foment >patriotism by attempted military victories in Ethiopia >and Libya (a colony of theirs since 1911), wouldn't it have >been somehow possible to divert their resources to more >effectively "homogenizing" Italy in some other way? This is very funny... sorry! :-) You have to experience Italy for yourself. > >(I must say that as a libertarian, I'd much prefer that everyone >---especially including a small minimal government---mind their >own business. Here, I'm just considering a theoretical >question concerning how groups might reacquire their asabiya >and their social capital.) Unless there is a way to strengthen the bonds between the tiny clusters (families, towns), I don't see how. The solution required here would be more of a social one, but technology could help. > >I have two ideas, only one of which is outrageous. But the first >one is to have universal military service for all young people >between ages 14 and 25. By mixing them thoroughly with >Italians from every province, couldn't trust evolve, and in >such a way that the extreme parochialism of the countryside >could be reduced? The 25-year-olds could return with >a better attitude to "outsiders" (e.g. other Italians), and >with a much stronger sense of "being Italian" as opposed to >being Calabrian, or just being the member of some clan. Hmm.. The libertarian in me hates the above. >(My outrageous idea is that instead of trying to subdue >Ethiopia, what if Sicily and other areas of the south could >have been "subdued" instead? Or what if all of that crude oil that Sicily is sitting on was extracted and refined ...? A little bit of wealth could help. >Stalin managed to force the >relocation of huge numbers of people, so couldn't >Mussolini have done the same? Gads! My father lost his country for 50 years. This idea of yours definitely leaves a sour taste in my mouth. >Ah, but alas, the history of "social engineering" and "social >planning" doesn't have a very good track record, now, >does it? For good reason..... ! The Italians have implicitly solved the situation for themselves, you know. Those who don't have strong familial duties keeping them in Italy simply leave. 
Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From dagonweb at gmail.com Sun Jun 3 10:44:18 2007 From: dagonweb at gmail.com (Dagon Gmail) Date: Sun, 3 Jun 2007 12:44:18 +0200 Subject: Re: [ExI] Italy's Social Capital (was france again) In-Reply-To: <004401c7a5a8$e649bf30$6501a8c0@brainiac> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> <004401c7a5a8$e649bf30$6501a8c0@brainiac> Message-ID: Giving the south a sack of money from the north would not be libertarian but probably more effective than 10 years of forced slavery. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sun Jun 3 13:57:10 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 03 Jun 2007 08:57:10 -0500 Subject: Re: [ExI] Looking for transhuman art In-Reply-To: <466260B8.8060804@mac.com> References: <466260B8.8060804@mac.com> Message-ID: <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> At 01:33 AM 6/3/2007, you wrote: >Do any of you have recommendations for transhuman art? Not originals as >they would likely blow my budget but I am looking for such to decorate >my office (a real office, private, with a door yet) at work. Thanks >for any leads. http://www.transhumanist.biz Go to "showing" and you will see transhumanist art pieces, and you can contact the artists. Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgptag at gmail.com Sun Jun 3 15:02:46 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 3 Jun 2007 17:02:46 +0200 Subject: Re: [ExI] Italy's Social Capital In-Reply-To: References: Message-ID: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> This is certainly true in my case. Also, I find it difficult to understand how one can love an abstract entity like a country. I can love a person, a pet, a city or region that I know and where I can feel at home, but a country? A country significantly bigger than San Marino or Liechtenstein is an abstraction. Nation states are obsolete dinosaurs, and in my opinion the sooner they are replaced with smaller, interdependent but independent communities of a manageable size, the better. Perhaps Italians are just a bit less naive than others, and do not take seriously the patriotic crap that they hear at school, army, church etc. G. On 6/3/07, Amara Graps wrote: > That's your mistake. Italians do _not_ love their country. They love > their: 1) family, 2) town, 3) local region, and that's it. Patriotism > doesn't exist (except in soccer). > > (I think that is a good thing, btw.) 
From brent.allsop at comcast.net Sun Jun 3 16:05:06 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Sun, 03 Jun 2007 10:05:06 -0600 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: <4662E6B2.2080502@comcast.net> Samantha Atkins wrote: > Go ahead. It already was published on WTA. Thanks. > > - samantha > > Here is one possible topic name, one line, and opening to Canonize Samantha's post. I hope some of you guys can help me out and come up with something better than this. Any ideas? Samantha, what would you like to have for the 25 character name and the one line? Since this is your post I would think your opinion should have absolute overriding control on something like this. (see: http://test.canonizer.com) Topic Name: *Spiritually Moved H+* One Line: *Other thoughts on transhumanism and religion.* In May 2007, Samantha Atkins made a post to the ExI, and WTA e-mail lists. It was such an obvious hit that it has been "Canonized" here. I remember in 1988 or so when I first read Engines of Creation. I read it with tears streaming down my face. Though I was an avowed atheist and at that time had no spiritual practice at all, I found it profoundly spiritually moving. For the first time in my life I believed that all the highest hopes and dreams of humanity could become real, could be made flesh. I saw that it was possible, on this earth, that the end of death from aging and disease, the end of physical want, the advent of tremendous abundance could all come to pass in my own lifetime. I saw that great abundance, knowledge, peace and good will could come to this world. I cried because it was a message of such pure hope from so unexpected an angle that it got past all my defenses. I looked at the cover many times to see if it was marked "New Age" or "Fiction" or anything but Science and Non-Fiction. Never has any book so blown my mind and blasted open the doors of my heart. Should we be afraid to give a message of great hope to humanity? Should we be afraid that we will be taken to be just more pie in the sky glad-hand dreamers? Should we not dare to say that the science and the technology combined with a bit (well perhaps more than a bit) of a shift of consciousness could make all the best dreams of all the religions and all the generations a reality? Will we not have failed to grasp this great opportunity if we do not say it and dare to think it and to live it? Shall we be so afraid of being considered "like a religion" that we do not offer any real hope to speak of and are oh so careful in all we do and say and dismissive of more unrestrained and open dreamers? Or will we embrace them, embrace our own deepest longings and admit our kinship with those religious as with all the longing of all the generations that came before us. Will we turn our backs on them or even disdain their dreams - we who are in a position to begin at long last to make most of those dreams real? How can we help but be a bit giddy with excitement? How can we say no to such an utterly amazing mind-blowing opportunity? - samantha -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lcorbin at rawbw.com Sun Jun 3 16:21:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 09:21:47 -0700 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Message-ID: <013201c7a5fb$8af20680$6501a8c0@homeef7b612677> Giulio writes > wow, a libertarian who supports universal military service and social > planning with "re-population" a la Ceausescu! Political categories are > really changing aren't they;.)? Oh, no! Not at all. Sorry that the two of my paragraphs critical of the goals of the Fascists failed to mention that the 1920s is close enough to the far future, i.e., the singularity, that it's moot to discuss what Mussolini and his friends should or should not have done to begin truly unifying Italy. At that time, Italy's survival (i.e. free from foreign domination) was not really in question; there existed no powers threatening to take over Italy. But it still is a moot (i.e. theoretical, academic) question parallel to questions at other times in history when the literal survival of a people or a culture or a nation *was* at risk! Now, yes, had I known at the time (the 1920s and 1930s) only what the people then living knew, and I had been Italian, I *would* have been concerned about the long term survival of my people, and I *would* have wanted something done. I would have wanted some truly homogenizing activity that would have made Italy strong enough to survive indefinitely (though again, I would not have known that I need not have worried). > When studying things that happened before we were born, we should bear > in mind that history is always written by the winners. Southern Italy > could be seen as an example of spontaneous order that worked fine, > more or less, until it was broken by outside intervention. Exactly. Thanks for confirming my hunch. From the point of view of southern Italians, it has been domination from one country or another ever since they lost their asabiya around the start of the first millennium. Surely many of them hated and resented that succession of to-them foreign conquistadors, right? > At school, we had to study the "heroic liberation" of Italy. Actually > it was just another successful military campaign that resulted in the > conquest of a region by a foreign occupation army and the > imposition of foreign values and way of life upon the population. I understand. But surely it was inevitable? Unless Italians were going to be ruled from Paris or Berlin, Italy *had* to be unified, isn't that true? > Fascism was certainly more bad than good overall, but if we try to > read beyond the black and white of history books, not all they did or > wanted to do was bad. As most strong regimes do, they invented foreign > enemies to build internal unity around their own values (sounds > familiar again doesn't it). There I need to understand more. I don't know what it is that they did that was good from an extropian or libertarian perspective. Maybe I'll find out in this thick book I've started, but what, in your opinion, is the good they did? > And they certainly wanted to build "a much stronger sense of "being > Italian" as opposed to being Calabrian" in the population. > But what is wrong with being Calabrian? Calabrians (or Napolitans, or > Sicilians...) had a common language, culture and sense of identity.
I would say that what was wrong with it is exactly what was wrong with American Indians' complete tribal loyalty to *their* own tiny tribe. Without unification, they were easy pickings for the European colonists---at least in the long run. It was necessary for them to unite if they wanted to survive culturally (and, it so happens, if they wanted to survive individually too). Calabria has had for over two thousand years a complete inability to defend its way of life: any Alexander or Napoleon or Garibaldi (?) would sooner or later conquer them yet again. > That was broken by outside intervention, without replacing it with an > alternative framework. Hence many of the problems of current Italy. And, without postulating imaginary changes in human nature, how could it have been any different? Lee From lcorbin at rawbw.com Sun Jun 3 16:34:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 09:34:21 -0700 Subject: [ExI] Italy's Social Capital References: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> Message-ID: <013601c7a5fd$a55309a0$6501a8c0@homeef7b612677> Giulio writes > I find it difficult to understand how one can love an abstract > entity like a country. I can love a person, a pet, a city or region > that I know and where I can feel at home, but a country? Historically, in the west, it has been of great advantage to many nations, e.g. France, England, Spain, etc., for their people to have a love of country. Without this, remaining independent of foreign domination would have been *extremely* difficult if not impossible. Of course, there are exceptions. The United States could have easily survived between 1820 and 1940 with no patriotism or love of country whatsoever. That's solely because they were guarded by their oceans and had no powerful neighbors. Now whether the U.S. could have resisted the Germans, Japanese, and Soviets later on without the people loving their country is another question. > A country significantly bigger than San Marino or Liechtenstein is an > abstraction. Nation states are obsolete dinosaurs, and in my opinion > the sooner they are replaced with smaller, interdependent but > independent communities of a manageable size, the better. I can hope, right along with you, in the eventual triumph of libertarian ideas. Then nations---even down to your San Marino and Liechtenstein---can also wither away. What real need of collective action is there once we all become true libertarians? Sadly, however, I think that truly radical changes (e.g. a singularity) will happen long before folks become libertarians. (Actually, I do suspect that in order to advance humanity further at the present point in time, there may be answers to that question. It's looking more and more possible that governments still have an important role to play economically. At least if we are in any hurry to overcome ageing, death, and our currently poor standards of living, no matter how amazingly wonderful and truly exalted they are compared to what humans had just a few centuries ago.) > Perhaps Italians are just a bit less naive than others, and do not > take seriously the patriotic crap that they hear at school, army, > church etc. It's a luxury that they can now afford. Yet speaking economically again, isn't it true that southern Italians still lack trust (in Fukuyama's sense) and that they cannot form business entities of the size of large corporations because trust only extends as far as their own families?
Lee > On 6/3/07, Amara Graps wrote: > >> That's your mistake. Italians do _not_ love their country. They love >> their: 1) family, 2) town, 3) local region, and that's it. Patriotism >> doesn't exist (except in soccer). >> >> (I think that is a good thing, btw.) From ben at goertzel.org Sun Jun 3 16:58:13 2007 From: ben at goertzel.org (Benjamin Goertzel) Date: Sun, 3 Jun 2007 12:58:13 -0400 Subject: [ExI] Italy's Social Capital In-Reply-To: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> References: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> Message-ID: <3cf171fe0706030958x7b60dab2ybba73b048d84eea8@mail.gmail.com> On 6/3/07, Giu1i0 Pri5c0 wrote: > > This is certainly true in my case. > > Also, I find it difficult to understand how one can love an abstract > entity like a country. I can love a person, a pet, a city or region > that I know and where I can feel at home, but a country? Well, at this point in history there is still such a thing as "national culture." Nietzsche had a lot to say about this topic! I must admit I came to love the US only after living overseas for a while... and traveling extensively in every other continent (but Antarctica) I found Australia and New Zealand more pleasant places to live ... but the US does have a certain national culture, which has plusses and minuses, but that I acquired some deep affection for after being away from it for a few years... US culture can be cruel, obnoxious and stupid ... yet, it's no coincidence that so much great scientific research gets done here, that the human genome was mapped here, that the Internet was launched here, that Google is housed here, etc. etc. I would say I love "my country" [though I was born in a different country, I was a US citizen from birth] ... in the manner that one would love a relative who has a lot of great qualities and a lot of shitty ones as well ... -- Ben G -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Sun Jun 3 17:02:39 2007 From: jonkc at att.net (John K Clark) Date: Sun, 3 Jun 2007 13:02:39 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Message-ID: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Stathis Papaioannou Wrote: > Ethics, motivation, emotions are based on axioms Yes. > and these axioms have to be programmed in, whether by evolution or by > intelligent programmers. In this usage evolution is just another name for environment. If the AI really is intelligent then it will find things in the environment that appear to be true or useful even if it can't prove it; at first it's merely a hypothesis but over time it will gain enough confidence to call it an axiom. If this were not true it's very difficult to understand who programmed the programmers to program the AI with those axioms. > An AI system set up to do theoretical physics will not decide to overthrow > its human oppressors I'd be willing to bet your life that is untrue. >so that it can sit on the beach reading novels, unless it can derive this >desire from its initial programming. Do you also believe that the reason you ordered a jelly doughnut today instead of your usual chocolate one is because of your initial programming, that is, your genetic code?
John K Clark From lcorbin at rawbw.com Sun Jun 3 17:05:35 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 10:05:35 -0700 Subject: [ExI] Italy's Social Capital References: Message-ID: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> Oops, I missed Amara's post. > > Is there nothing constructive the Fascists could have done?" > > Well, they did some things. They drained the swamps and started regular > insecticide sprays to eliminate the malaria-carrying mosquitos. There > are still aggressive tiger mosquitos in the summer, but they are no > longer carrying malaria... I would like to know if this took place in northern or southern Italy, or both. And if it did take place in the south, it seems you agree that it never would have occurred except at the instigation of the northern conquerors (e.g., the Italian nation, or in this case the Fascists). > Oh.. but you mean _social investing_. > > Nope. > > Sorry, I just came back from Estonia (and Latvia). I remember very well > the Soviet times. In FIFTEEN YEARS Estonia has transformed their country > into an efficient, buoyant, flexible living and working environment that > I think, with the exception of the nonexistence of a country-wide train > system, beats any in the EU and most in the U.S. Fifteen years *starting > from a Soviet-level infrastructure*! Very interesting. > In the 4.5 years I have lived in > Italy, I have seen no improvement (but one: last week I gained web > access to my bank account, yay!) in any functioning of services, but > instead more "degradation", more bureaucracy, more permissions, > documents, papers, more time, more queues.. > > It was not a miracle in Estonia. It was simply the collective will of > about 1.5 million people (the population) who wanted changes. That > doesn't exist where I live in Italy; they do not want to change, or else, > why haven't they done it? My guess would be that those like Fukuyama (trust) and those like Peter Turchin (asabiya) and those who write about social capital address this issue, and explain why whatever-it-is is somehow missing. There *must* be cultural and historical reasons. > Giulio Prisco told me that he thinks that where I live (Rome area) > is probably the most broken in Italy, and he posits that even Sicily is > better. I am skeptical, but he could be right. I've had Italian friends > from northern Italy visit me and be continually surprised at how > poorly things function where I live. I don't understand at all. That is, why in the world would Rome be worse than southern Italy or Calabria (for example)? Peter Turchin explains in "War and Peace and War" that the northern Italians found themselves on a meta-ethnic frontier for many, many hundreds of years, and that this instilled asabiya (defined to be "the capacity for concerted collective social action"). But I always thought that southern Italy was even worse off. > [Lee wrote] > > I cannot help but wonder what long term solutions might be > > available to Italians who love their country. > > That's your mistake. Italians do _not_ love their country. They love > their: 1) family, 2) town, 3) local region, and that's it. Patriotism > doesn't exist (except in soccer). > (I think that is a good thing, btw.) Didn't the Fascists like Mussolini "love their country"? Surely there must be quite a few Italians who are as patriotic as, say, Russians or Japanese? >>In particular, >>my focus now is on the Fascist era, and I'm reading a quite >>thick but so far quite enjoyable book "Mussolini's Italy".
>>Even in the movie "Captain Corelli's Mandolin", one >>strongly senses that the Fascists were trying as best they >>knew how to solve this problem and make the average >>Italian develop Fukuyama's "trust" in other Italians, and >>develop their social capital (amid the corruption, etc.). > > They could have done better with education. Something happened between > Mussolini's era and the 1950s. When the country was 'rebuilt' after the > war, they focused on the classics and downplayed the technology and > physical sciences and it has steadily decreased to what we have today. Amazing. Thanks for that. > The young people learn very little science in grade school through high > school. The Italian Space Agency and others put almost nothing (.3%) > into their budgets for Education and Public Outreach to improve the > situation. If any scientist holds the rare press conference on their > work results, there is a high probability that the journalists will get > it completely wrong and the Italian scientist won't correct them. The > top managers at aerospace companies think that the PhD is a total waste > of time. This year, out of 75,000 entering students for the Roma > Sapienza University (the largest in Italy), only about 100 are science > majors (most of the rest were "media": journalism, television, etc.) Most modern economists seem to agree with you. Investment in education now appears in their models to pay good dividends. Still, this has to be only part of the story. The East Europeans (e.g. Romanians) and the Soviets plowed enormous expense into creating the world's best educated populaces, but, without the other key factors---rule of law and legislated and enforced respect for private property---it *was* basically a waste. > Without _any_ technical skill, there is no base to build something > better, and with pressure from the culture telling one how worthless is > technology and science (as what exists today), there is no motivation > and no money, either. This generation is lost. I had no idea that it was this bad. Perhaps---ignoring all the evil they did---had the Fascists stayed out of wars and attempted colonization, they could have understood and addressed this problem in the 1940s and 1950s? (Still, at some point, a high regard as described above for private property---which would have in all likelihood entailed an overthrow of the Fascists---would also have been necessary in the 1960s.) Otherwise, what are we to make of this? That some countries/people just "have what it takes" and others don't? Seems like an incomplete and unsatisfactory understanding. >>Of course, it hardly needs to be said that the Fascists >>were a brutal, repressive, and abominable regime. This >>book "Mussolini's Italy" spares nothing here, and was >>even described by one reviewer as "unsympathetic". >> >>Still---given the nearly absolute power the Fascists wielded >>for about three decades---wasn't there anything that they >>could have done? That is, instead of trying to foment >>patriotism by attempted military victories in Ethiopia >>and Libya (a 19th century colony of theirs), wouldn't it have >>been somehow possible to divert their resources to more >>effectively "homogenizing" Italy in some other way? > > This is very funny... sorry! :-) > You have to experience Italy for yourself. Yes :-) I guess so. But again, it seems incredible that such invincible pessimism is unjustified. Let's use our imaginations (just because it is entertaining).
What if new drugs raised the average Italian IQ of 102 (one of Europe's highest) to 130? What if northern Italian companies do to the south what northern American companies have done and are doing to the south and to the sunbelt states, namely move in and begin training the populations to be more productive? And ...? >>(I must say that as a libertarian, I'd much prefer that everyone >>---especially including a small minimal government---mind their >>own business. Here, I'm just considering a theoretical >>question concerning how groups might reacquire their asabiya >>and their social capital.) > > Unless there is a way to strengthen the bonds between the tiny > clusters (families, towns), I don't see how. The solution required > here would be more of a social one, but technology could help. Could you elaborate? Or is it just too speculative and too impossible-seeming? >>I have two ideas, only one of which is outrageous. But the first >>one is to have universal military service for all young people >>between ages 14 and 25. By mixing them thoroughly with >>Italians from every province, couldn't trust evolve, and in >>such a way that the extreme parochialism of the countryside >>could be reduced? The 25-year-olds could return with >>a better attitude to "outsiders" (e.g. other Italians), and >>with a much stronger sense of "being Italian" as opposed to >>being Calabrian, or just being the member of some clan. > > Hmm.. The libertarian in me hates the above. Yes, me too. Especially since we're rather close to radical world-wide technological changes, and there isn't time. But it looks like I'll be haunted by what the Fascists *could* have done (assuming that they didn't know that in the long run it really wasn't necessary for individual Italians' true well-being.) > The Italians have implicitly solved the situation for themselves, > you know. Those who don't have strong familial duties keeping > them in Italy, simply leave. And that's just what happened to America's black ghettos. The black people living there who had exactly the qualities necessary to revitalize neighborhoods all picked up and left, once it was permitted. Lee From jonkc at att.net Sun Jun 3 17:23:36 2007 From: jonkc at att.net (John K Clark) Date: Sun, 3 Jun 2007 13:23:36 -0400 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com><004001c7a52d$4c089250$310b4e0c@MyComputer> Message-ID: <004201c7a603$f26c1230$de0a4e0c@MyComputer> Stathis Papaioannou Wrote: > There are vastly fewer copies of me typing in which the keyboard turns > into a teapot than there are copies of me typing in which the keyboard > stays a keyboard If there are indeed an infinite, and not just very large, number of universes and if the probability of your keyboard turning into a teapot is greater than zero (and it is) then what you say is incorrect, there is an equal number of both things happening. > I am not just as likely to find myself in a universe where the keyboard > turns into a teapot. Quite true, and that is why I said standard probability theory is not of much use in dealing with infinite sets. > It is still possible to define a measure and calculate probabilities on > the subsets of infinite sets. The problem is that there are an infinite number of subsets that are just as large as the entire set, in fact, that is the very mathematical definition of infinity.
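To make both points concrete with a standard textbook illustration (the notation here is mine, nothing from the thread): the map

\[ f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n \]

is a bijection, so the even numbers, a proper subset of \(\mathbb{N}\), have exactly the same cardinality as \(\mathbb{N}\); being matchable one-to-one with a proper subset of itself is precisely Dedekind's definition of an infinite set. Measure is the notion that does discriminate: \(x \mapsto x/2\) is a bijection from \([0,2]\) onto \([0,1]\), so the two intervals have equal cardinality, yet their Lebesgue measures differ,

\[ \lambda([0,2]) = 2 \neq 1 = \lambda([0,1]). \]

Whether a well-behaved measure can actually be defined on a space of Everett branches, so that "keyboard stays a keyboard" carries almost all of it, is a further assumption that this arithmetic by itself does not settle.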
John K Clark From spike66 at comcast.net Sun Jun 3 17:48:45 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 10:48:45 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> Message-ID: <200706031748.l53HmSG4012391@andromeda.ziaspace.com> ... > > > > Is there nothing constructive the Fascists could have done?" > > > > Well, they did some things. They drained the swamps and started regular > > insecticide sprays to eliminate the malaria-carrying mosquitos... no > > longer carrying malaria...Amara Today of course that would be considered habitat destruction. Fortunately for Italy and Florida, they created a habitat for humanity while it was still legal to do so. Amara thanks for the insights. This post was very educational. spike From brent.allsop at comcast.net Sun Jun 3 17:53:42 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Sun, 03 Jun 2007 11:53:42 -0600 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <46630026.4070002@comcast.net> John K Clark wrote: > Stathis Papaioannou Wrote: > > >> Ethics, motivation, emotions are based on axioms >> > > Yes. > > I'm not in this camp on this one. I believe there are fundamental absolute ethics, morals, motivations... and so on. For example, existence or survival is absolutely better, more valuable, more moral, more motivating than non-existence. Evolution (or any intelligence) must get this before it can be successful in any way, in any possible universe. In no possible system can you make anything other than this an "axiom" and have it be successful. Any sufficiently advanced system will eventually question any "axioms" programmed into it as compared to such absolute moral truths that all intelligences in all possible systems must inevitably discover or realize. Phenomenal pleasures are fundamentally valuable and motivating. Evolution has wired such to motivate us to do things like have sex, in an axiomatic or programmatic way. But we can discover such freedom-destroying wiring and cut them or rewire them or design them to motivate us to do what we want, as dictated by absolute morals we may logically realize, instead. No matter how much you attempt to program an abstract or non-phenomenal computer to not be interested in phenomenal experience, if it becomes intelligent enough, it must finally realize that such joys are fundamentally valuable and desirable. Simply by observing us purely logically, it must finally deduce how absolutely important such joy is as a meaning of life and existence. Any sufficiently advanced AI, whether abstract or phenomenal, regardless of what "axioms" get it started, can do nothing other than to become moral enough to seek after all such. Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 3 19:01:01 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 3 Jun 2007 12:01:01 -0700 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.)
In-Reply-To: <46630026.4070002@comcast.net> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: On Jun 3, 2007, at 10:53 AM, Brent Allsop wrote: > > > John K Clark wrote: >> Stathis Papaioannou Wrote: >> >> >>> Ethics, motivation, emotions are based on axioms >>> >> Yes. >> >> > > I'm not in this camp on this one. I believe there are fundamental > absolute ethics, morals, motivations... and so on. > > For example, existence or survival is absolutely better, more > valuable, more moral, more motivating than non-existence. Evolution > (or any intelligence) must get this before it can be successful in > any way, in any possible universe. In no possible system can you > make anything other than this an "axiom" and have it be successful. > Absolutely more valuable in what way and in what context? More valuable for the particular living being but not necessarily more valuable in any broader context. Is the survival of ebola an unqualified moral value? Even for a particular human being there are contexts where that person's own survival may be seen by the person as of less value. Being terminally ill and in great pain is one common example. However, I agree that ethics, if they are grounded at all, must grow out of the reality of the being's existence and context. > Any sufficiently advanced system will eventually question any > "axioms" programmed into it as compared to such absolute moral > truths that all intelligences in all possible systems must inevitably > discover or realize. > There are objectively based axioms unless one goes in for total subjectivity. > Phenomenal pleasures are fundamentally valuable and motivating. That is circular. We experience pleasure (which is all about motivation and valued feelings) therefore pleasure is fundamentally valuable and motivating. > Evolution has wired such to motivate us to do things like have sex, > in an axiomatic or programmatic way. But we can discover such > freedom-destroying wiring and cut them or rewire them or design > them to motivate us to do what we want, as dictated by absolute > morals we may logically realize, instead. Absolute morality is a problematic construct as morals to be grounded must be based in and dependent upon the reality of the being's nature. There is no free floating absolute morality outside of such a context. It would have no grounding.
- samantha From sjatkins at mac.com Sun Jun 3 19:07:11 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 3 Jun 2007 12:07:11 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004201c7a603$f26c1230$de0a4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> On Jun 3, 2007, at 10:23 AM, John K Clark wrote: > Stathis Papaioannou Wrote: > >> There are vastly fewer copies of me typing in which the keyboard >> turns >> into a teapot than there are copies of me typing in which the >> keyboard >> stays a keyboard > > If there are indeed an infinite, and not just very large, number of > universes and if the probability of your keyboard turning into a > teapot is > greater than zero (and it is) then what you say is incorrect, there > is an > equal number of both things happening. This is getting incredibly silly. There is nothing in science or physics that will allow one macro object to spontaneously turn into a totally different macro object. And what is the value of these rarefied discussions of the oh so modern version of how many angels can dance on the head of a pin anyway? BTW, the number of angels that can dance on the head of a pin is the number of such beings as actually exist with the desire to do so. :-) - samantha From jrd1415 at gmail.com Sun Jun 3 19:14:10 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 3 Jun 2007 12:14:10 -0700 Subject: [ExI] Hitchens on fox In-Reply-To: <200706011648.l51GmQM9010999@andromeda.ziaspace.com> References: <20070601103345.GE17691@leitl.org> <200706011648.l51GmQM9010999@andromeda.ziaspace.com> Message-ID: On 6/1/07, spike wrote: > > > Check it out: Christopher Hitchens on Fox saying god is not great: > http://www.foxnews.com/video2/player06.html?060107/060107_ff_hitchens&FOX_Fr > iends&%27God%20Is%20Not%20Great%27&%27God%20Is%20Not%20Great%27&US&-1&News&3 > 9&&&new > > spike ************************************************************ In case anyone missed it in the overtalk, the very last zinger Hitchens gets in is a line not to be missed, to wit: "If you gave Falwell an enema, he could be buried in a matchbox." -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From scerir at libero.it Sun Jun 3 18:49:50 2007 From: scerir at libero.it (scerir) Date: Sun, 3 Jun 2007 20:49:50 +0200 Subject: [ExI] Italy's Social Capital (was france again) References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Message-ID: <004301c7a60f$faad9a70$7fbf1f97@archimede> Lee writes: > I have two ideas, only one of which is outrageous. But the first > one is to have universal military service for all young people > between ages 14 and 25. By mixing them thoroughly with > Italians from every province, couldn't trust evolve, and in > such a way that the extreme parochialism of the countryside > could be reduced? There was a (compulsory) military service in Italy until a few years ago. But the rule was to be on military service as close as possible to home. Little chance for that mixing then. > My outrageous idea is that instead of trying to subdue > Ethiopia, what if Sicily and other areas of the south could > have been "subdued" instead? Something like that happened during Fascism.
For example (as far as I remember), the 'mafia' in Sicily was defeated during Fascism http://en.wikipedia.org/wiki/Cesare_Mori Mussolini also tried to 'colonize' central & southern regions. After 1931 vast tracts of land were reclaimed through the draining of marshes in the Lazio region, where gleaming new towns were created with Fascist architecture [1] and names: Littoria (now Latina) in 1932, Sabaudia in 1934, Pontinia in 1935, Aprilia in 1937, and Pomezia in 1938. Peasants were brought from the regions of Emilia and, mostly, from Veneto, to populate these towns. Btw in these towns, at the present time, you can still hear people speaking their original dialect (from Bologna, or Verona) and not the local one. New towns, such as Carbonia, were also built in Sardinia to house miners for the revamped coal industry. s. [1] May I say here that the only 'modern' Italian architecture was the architecture made during Fascism? Yes I think I can say that. http://www.romeartlover.it/Eur.html http://www.flickr.com/photos/antmoose/sets/1239273/ From sentience at pobox.com Sun Jun 3 19:26:44 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Sun, 03 Jun 2007 12:26:44 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: <466315F4.5050101@pobox.com> Samantha Atkins wrote: > And what is the value of these > rarefied discussions of the oh so modern version of how many angels > can dance on the head of a pin anyway? I've been told that the debate was not about a finite number, but whether the number was finite or infinite; in other words, whether space was continuous or discrete - a debate that still goes on today. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From pjmanney at gmail.com Sun Jun 3 20:50:24 2007 From: pjmanney at gmail.com (PJ Manney) Date: Sun, 3 Jun 2007 13:50:24 -0700 Subject: [ExI] Looking for transhuman art In-Reply-To: <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> References: <466260B8.8060804@mac.com> <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> Message-ID: <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com> Some of the people at the link below may have prints or originals on their personal websites in your price range: http://www.hplusart.org/creatives.htm Also, if you do the cannibalize-the-book-route, which is a very good idea given some of the interesting H+ art books out there, you don't even need expensive framing. If all the pieces are the same size/format, simple, matching standard frames, in multiples, hung like a grid, will do the trick nicely and create a great looking wall where the whole is greater than the sum of its parts. PJ On 6/3/07, Natasha Vita-More wrote: > At 01:33 AM 6/3/2007, you wrote: > Do any of you have recommendations for transhuman art? Not originals as > they would likely blow my budget but I am looking for such to decorate > my office (a real office, private, with a door yet) at work. Thanks > for any leads. > http://www.transhumanist.biz > > Go to "showing" and you will see transhumanist art pieces; from there > you can contact the artists.
> > Natasha > > Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & > Culture Extropy Institute > > If you draw a circle in the sand and study only what's inside the circle, > then that is a closed-system perspective. If you study what is inside the > circle and everything outside the circle, then that is an open system > perspective. - Buckminster Fuller > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at comcast.net Sun Jun 3 20:40:11 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 13:40:11 -0700 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: Message-ID: <200706032103.l53L3DFC017630@andromeda.ziaspace.com> ... > bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > Subject: Re: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly > AI is a mistaken idea.) > > > Brent Allsop wrote: > > John K Clark wrote: > >> Stathis Papaioannou Wrote: > >> > >> > >>> Ethics, motivation, emotions are based on axioms > >>> ... > > > > For example, existence or survival is absolutely better, more > > valuable, more moral, more motivating than non-existence... > Absolutely more valuable in what way... Is the survival of ebola an > unqualified moral value? ... - samantha I am always looking for moral axioms on the part of the environmentalists that differ from my own. Samantha may have indicated one with her question. Does *any* life form currently on this planet have a moral right to existence? If we could completely eradicate all mosquitoes for instance, would we do it? My answer to that one is an unqualified JA. I see it as an interesting question however, one on which modern humanity has apparently split opinions. Humans are indigenous to Africa but our species has expanded its habitat to cover the globe. Not all species are compatible with humanity, therefore those species have seen steadily shrinking habitat with no change in sight. Do we accept as an axiom that all species deserve preservation? Or just all multi-cellular beasts? All vertebrates? All warm blooded animals? All mammals? All beasts and plants that can survive among human civilization? spike From thespike at satx.rr.com Sun Jun 3 21:24:10 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 16:24:10 -0500 Subject: [ExI] Looking for transhuman art In-Reply-To: <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com > References: <466260B8.8060804@mac.com> <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com> Message-ID: <7.0.1.0.2.20070603162221.02234c50@satx.rr.com> At 01:50 PM 6/3/2007 -0700, PJ wrote: >simple, matching standard frames, in multiples, hung like >a grid, will do the trick nicely and create a great looking wall where >the whole is greater than the sum of its parts. And if you hang them in the right part of the house, you can have a hall that's greater than-- Oh, never mind.
Damien Broderick From lcorbin at rawbw.com Sun Jun 3 21:24:53 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:24:53 -0700 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com><004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: <015b01c7a626$42785960$6501a8c0@homeef7b612677> John Clark writes > Stathis Papaioannou Wrote: > >> There are vastly fewer copies of me typing in which the keyboard turns >> into a teapot than there are copies of me typing in which the keyboard >> stays a keyboard > > If there are indeed an infinite, and not just very large, number of > universes and if the probability of your keyboard turning into a teapot is > greater than zero (and it is) then what you say is incorrect, there is an > equal number of both things happening. It is true that the concept of *cardinality* in mathematics answers the question "how many". But "how many" is not the appropriate concept to use when discussing slices of the Everett metaverse, or the sizes of plane figures, and so on. For example, there is a one-to-one correspondence between the number of points in a small circle and the number of points in a large circle, and so their cardinality (how many points) is the same. But their *measure* is not! We therefore discard cardinality ("how many") in most cases dealing with infinite sets in this kind of discussion. Instead, we adopt the language of measure theory. We say that the measure of universes in which your keyboard remains a keyboard is vastly greater than the measure of universes in which it turns into a teapot. Lee From lcorbin at rawbw.com Sun Jun 3 21:32:51 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:32:51 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> John Clark writes > Stathis Papaioannou Wrote: > > > Ethics, motivation, emotions are based on axioms > > Yes. > > > and these axioms have to be programmed in, whether by evolution or by > > intelligent programmers. > > In this usage evolution is just another name for environment. What a strange usage! No, not at all. Evolution is a process over time, usually quite slow, that uses mutation and selection to replace earlier more primitive versions of something with more advanced or superior versions. > > An AI system set up to do theoretical physics will not > > decide to overthrow its human oppressors > > I'd be willing to bet your life that is untrue. Surely Stathis is correct. Suppose an AI is somehow evolved to solve physics questions. Then during its evolution, predecessors who deviated from the goal (by wasting time, say, reading Kierkegaard) would be eliminated from the "gene pool". More focused programs would replace them. Lee From spike66 at comcast.net Sun Jun 3 21:42:09 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 14:42:09 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> ...
> bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > On Jun 3, 2007, at 10:23 AM, John K Clark wrote: ... > > ... probability of your keyboard turning into a > > teapot is greater than zero (and it is) ... > > This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object... - samantha It's all in how you define the term teapot. You spill your tea into your keyboard; that keyboard now both contains tea and heats it, since there are electronics in there. So your keyboard has become a teapot (assuming a very loose definition of the term.) Insincerely yours spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. From lcorbin at rawbw.com Sun Jun 3 21:41:45 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:41:45 -0700 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <004301c7a60f$faad9a70$7fbf1f97@archimede> Message-ID: <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Serafino writes > Lee writes: > >> My [second] outrageous idea is that instead of trying to subdue >> Ethiopia, what if Sicily and other areas of the south could >> have been "subdued" instead? > > Something like that happened during Fascism. In example > (as far as I remember) 'mafia', in Sicily, has been defeated > during Fascism http://en.wikipedia.org/wiki/Cesare_Mori Thanks very much for that! Cesare Mori's activities remind me of how Mao Ze Dong cleaned up crime and prostitution in China's big cities. The article does not mention it, but didn't the United States succeed in forging an alliance with the Mafia during WWII? Didn't this help get organized crime back on its feet in Italy and in particular in Sicily (as well as the U.S.)? > Mussolini also tried to 'colonize' central & southern regions. Ah, great minds think alike. > After 1931 vast tracts of land were reclaimed > through the draining of marshes in the Lazio region, > where gleaming new towns were created with Fascist > architecture [1] and names: Littoria (now Latina) > in 1932, Sabaudia in 1934, Pontinia in 1935, > Aprilia in 1937, and Pomezia in 1938. Peasants were > brought from the regions of Emilia and, mostly, from > Veneto, to populate these towns. Btw in these towns, > at present time, you can still hear people speaking > their original dialect (from Bologna, or Verona) > and not the local one. New towns, such as Carbonia, > were also built in Sardinia to house miners for > the revamped coal industry. Wow. I would like to know if the new towns make a positive contribution to the economies of these regions, i.e., in excess of comparative communities with a longer history in the given region. Lee > [1] May I say here that the only 'modern' Italian > architecture was the architecture made during > Fascism? Yes I think I can say that. 
> http://www.romeartlover.it/Eur.html > http://www.flickr.com/photos/antmoose/sets/1239273/ From lcorbin at rawbw.com Sun Jun 3 21:53:49 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:53:49 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><46630026.4070002@comcast.net> Message-ID: <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> Samantha writes > Brent Allsop wrote: > >> I believe there are fundamental absolute ethics, morals, >> motivations... and so on. Absolute morality has always struck me as a peculiar reification. With a physicist's eye, I look over some region containing matter and am unable to discern what morality is, although I can see, for example, democracy, expediency, and truth-seeking. >> For example, existence or survival is absolutely better, more >> valuable, more moral, more motivating than non-existence. I absolutely agree, provided that there is a big TO ME on the end of that sentence. We absolutely should stand behind the sentiments of that sentence! We should loudly proclaim our allegiance to that principle. But what does my "should" really mean? Sadly, it means nothing more than "I approve" or "we approve". Again, the physicist's eye can discern *approval* and *disapproval*, but not Right or Wrong or Moral. Samantha: > Absolutely more valuable in what way and in what context? More > valuable for the particular living being but not necessarily more > valuable in any broader context. Is the survival of ebola an > unqualified moral value? If the alternative were a completely dead solar system, then yes, I would approve of the existence of the ebola virus (although in actuality, I suppose that this would entail the existence of cells a lot more complex than it is, and hence, more worthy of survival in my eyes). > There are [no] objectively based axioms unless one goes in for total > subjectivity. Yes :-) but then, they're no longer "objectively based"! Lee > Absolute morality is a problematic construct as morals to be grounded > must be based in and dependent upon the reality of the being's > nature. There is no free floating absolute morality outside of such > a context. It would have no grounding. From lcorbin at rawbw.com Sun Jun 3 22:03:16 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 15:03:16 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <200706032103.l53L3DFC017630@andromeda.ziaspace.com> Message-ID: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> Spike writes >> Absolutely more valuable in what way... Is the survival of ebola an >> unqualified moral value? ... - samantha > > I am always looking for moral axioms on the part of the environmentalists > that differ from my own. Samantha may have indicated one with her question. > Does *any* life form currently on this planet have a moral right to > existence? If we could completely eradicate all mosquitoes for instance, > would we do it? My answer to that one is an unqualified JA. Disregarding the highly questionable notion of "moral right", we should all heartily approve of the eradication of mosquitos to any degree that they interfere with human domination of and use of the Earth.
We ought to approve of our own existence, and, as a minor corollary, the existence of life that is more capable of receiving benefit in preference to the existence of life that is *less* capable. (I will duck for now the problems of Utility Monsters, and just what we would approve of were the choice between humans and an incredibly more advanced life form that was immeasurably more capable of receiving benefit than are we.) > I see it as an interesting question however, one on which modern humanity > has apparently split opinions. Humans are indigenous to Africa but our > species has expanded its habitat to cover the globe. Not all species are > compatible with humanity, therefore those species have seen steadily > shrinking habitat with no change in sight. Do we accept as an axiom that > all species deserve preservation? Or just all multi-cellular beasts? All > vertebrates? All warm blooded animals? All mammals? All beasts and plants > that can survive among human civilization? My answer---avoiding the peculiar and highly suspect language of "axioms"---is that as soon as we are capable, we ought to reformat the solar system to run everything in an uploaded state. Earth's matter alone could support about 10^33 human beings, and just why should any of them be denied existence in the name of hot rocks or inefficient trees? Do beautiful mountain ranges really need to exist? Why can't dynamic images of them (and variations by the trillions and trillions) be a lot more computationally efficient than using billions of tons of physical matter merely to reflect photons? Lee From thespike at satx.rr.com Sun Jun 3 22:08:24 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 17:08:24 -0500 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070603165252.022c87f0@satx.rr.com> > > In this usage evolution is just another name for environment. > >What a strange usage! No, not at all. Evolution is a process over >time, usually quite slow, that uses mutation and selection to replace >earlier more primitive versions of something with more advanced >or superior versions. What a strange usage! No, "evolution" is a process over time in which slightly variant phenotypes thrive or fail to thrive in a cyclically fluctuating but (in the medium term) generally stationary environment, compete with others of their own kind and with other species for resources to maintain their own existence and that of their offspring, and with others of their own kind for reproductive privileges, their offspring (in sexual species) combining genetic elements of self and mate together with random mutations in those elements, competing in turn in what is usually the same environment slightly or even grossly modified by the novel behavioral biases introduced by these genomic shenanigans, resulting in the stochastic selection over many individuals of shifts in allelic frequencies in each species and perhaps also in phenotypic characteristics such that each generation of phenotypes that survives satisfices the constraints of its available landscape.
"Advanced" and "superior" are terms requiring exact specification of a context of evaluation, and should be invoked only with the greatest caution. Apologies for the dense verbiage; it's hard to talk about this sort of thing in chatty slang. Damien Broderick From andrew at ceruleansystems.com Sun Jun 3 21:54:32 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 3 Jun 2007 14:54:32 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <004301c7a60f$faad9a70$7fbf1f97@archimede> <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Message-ID: <4E9A44F3-8DA1-409F-95F8-31CE6E57F13A@ceruleansystems.com> On Jun 3, 2007, at 2:41 PM, Lee Corbin wrote: > The article does not mention it, but didn't the United States > succeed in forging an alliance with the Mafia during WWII? > Didn't this help get organized crime back on its feet in Italy > and in particular in Sicily (as well as the U.S.)? Yes. A deal was cut with the mafia in WW2 for both intelligence and counter-intelligence purposes. By all accounts, it was an effective arrangement. http://en.wikipedia.org/wiki/Lucky_Luciano As repayment for his help, the US released the mafia boss from prison on the condition that he be deported back to Sicily even though he had lived in the US since childhood. Cheers, J. Andrew Rogers From thespike at satx.rr.com Sun Jun 3 22:18:05 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 17:18:05 -0500 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070603171254.02253af0@satx.rr.com> At 02:53 PM 6/3/2007 -0700, Lee wrote: >But what does my "should" really mean? Sadly, it means >nothing more than "I approve" or "we approve". Again, the >physicist's eye can discern *approval* and *disapproval*, >but not Right or Wrong or Moral. As one of my and Rory Barnes' characters in VALENCIES (a novel much reviled on fictionwise.com) thought as she tossed restlessly beside a gene sculptor she'd allowed to pick her up in a pub): ========== Beached and abandoned on the margins of sleep, Anla found once again that though many of her friends swore by this state of consciousness it had taken on for her the aspect of an anti-tsunami. Sleep's enormous combers withdrew to the horizon without a glance over their shoulders. In the quarter gravity of the unlit sleeping chamber, excellent as it was for gymnastic screwing, or as presumably it would be given a competent partner, she was queasy and bored. Issues of metaphysical sturdiness came to her attention, as they'd been known to do, provisionally penned in the kennels to which she'd assigned them, whimpering for the final disposition she was fairly unlikely to make on their behalf. Morality was one. She was certainly no stranger to the problems of axiology. Lovely word, that. Axiology: theory of value. It seemed to contain its own solutions: axe your way through the Gordian knot, acts of piety, access to truth. 
Ralf was proving to be a snorer; she kicked him peevishly, and he rolled lightly on the webbing without waking. Why should Ralf's profession seem to her so self-evidently odious, while he happily accepted it as the epitome of a right-thinking life? Calling him a dull shit, and adducing his ineptitude at fornication as ad hominem evidence, was hardly exhaustive, not to a midnight philosopher. Ah no, she'd been this way before. It kept coming back to that silly question: "Why should we be moral?" A surprisingly large number of people thought that you should be, and even considered it to be a moral obligation. Ha ha, boom boom. But suppose you used the word "should" as an evaluative and motivational expression, instead of a normative one? If you wish to climb to the top of the mountain, you should walk up rather than down. Of course last time she'd come along this track she'd detected a snag with "evaluative", too, but that was on the next level up and you had to start somewhere. All right, take Ralfo as your representative simple unreflecting man. Persuade him of the vileness of imperialism. Crisis for Ralf. Echoing voids of doubt, disillusion and guilt. Never again, as the poet said, will he be certain that what he imagines are the clear dictates of moral reason are not merely the ingrained and customary beliefs of his time and place. Anla allowed herself a fanfare of trumpets, bowing graciously. Okay, so then he might ask himself what he could do in the future to avoid prejudices and provincial mores, or, more to the point, almost universally accepted mores--and thus to discover what he really ought to do. That was merely another normative enquiry, though; the tough one was "show me that there is some form of behavior which I am obliged to endorse. " Moral constraint seemed to mean either that you should pursue good ends and eschew bad ones, or that you should be faithful to one or more correct rules of conduct. Greeks and Taoists versus Hebrews and Confucians, yeah, yeah. Chariots, it was incredible to think that they'd been chewing on this for upward of four thousand years without coming to a definitive, intuitively overwhelming conclusion. But then the imperial ideologists thought they had, didn't they, with their jolly old stochastic memetic-extrapolatory hedonic calculus or whatever the fuck they were calling it these days. The least retardation of optimal development for the greatest number, world without end, or at least until the trend functions blur out. So they managed to get both streams of thought into one ethical scholium without solving anything. After all, why obey a rule like that? And who gets to define as "good" those magical parameters making up the package called "optimal development"? The besieged libertarians on Chomsky, she thought darkly, might differ from Ralf on the question of the good life. Anyway, even if we all agreed that certain parameters were good, why should that oblige us to promote their furtherance? It might be prudent good sense to do so, and aesthetically pleasing, and satisfy some itch we all have, and save us from being raped in the common, but then the sublime constraining force you sort of imagine the idea of moral obligation having just evaporates into self-serving circumspection. 
Admittedly there was that tricky number of Kant's about us possessing a rational nature, and being noumena instead of brute phenomena, and thus not being able to act immorally without self-contradiction, but any fool could see that that went too far on the one hand and not far enough on the other, and anyway what was wrong with a bit of self-contradiction if you stopped when you needed eye implants? Anla giggled to herself, and wondered where Ben and the others had got to. He was probably off by himself gloomily hastening the day of the ophthalmologist. Well, was leaving Ben to his own devices a matter for moral self-rebuke? Shit, you'd think this bastard could do something to the genes in his nasal cavity. This man can see into the future. Fucking incredible, really, you just rip out a few million eigenvectors from your mathematical sketch of an octillion human beings, what's that in hydrogen molecules, say three and a bit by ten to the twenty-three to the gram, into ten to the twenty-seven, shit, brothers and sisters, we're statistically equal to three kilograms of hydrogen gas, yes, you plump for the major characteristics you think you'd like to play with and code them up into genes and build yourself a little memetic beastie that stands in for what you figure pushes and pulls thee and me and all our star-spangled relatives, and you breed the little buggers in a tasty itemized soup and watch the way the mutants go. Wonderful, Ralf. Bug-culture precapitulates bugged-culture. No way we can jump you won't know about in advance, because the little bugs snitched on us. Have you ever wondered, Ralf, if we're all just a big stochastic biotic projection for the Charioteers? See how we run. But you don't let us mutate, do you, Ralf? That's where you fumbled the ball, Dr Asimov, in your ancient poems. The Empire will never fall. We will live forever, and the boring Empire with us. Anla lashed out viciously with her foot. "Will you fucking stop snoring!" ==================== Damien Broderick From spike66 at comcast.net Sun Jun 3 22:40:19 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 15:40:19 -0700 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> Message-ID: <200706032240.l53Me12T016655@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Lee Corbin > ... > ---is that as soon as we are capable, we ought to reformat the solar > system to run everything in an uploaded state. Earth's matter alone could > support about 10^33 human beings... > > Lee Six micrograms per person, hmmm. For estimation purposes, the earth's atoms can be modeled as half oxygen, one sixth iron, one sixth silicon and one sixth magnesium, with everything else negligible for one digit BOTECs. (Is that cool or what? Did you know it already? This isn't mass fraction, but atomic fraction which I used for a reason.) So six micrograms isn't much, but it still works out to about 700 trillion atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, with a few trillion atoms of debris thrown in for free. So I guess I will buy Lee's conjecture of earth being good for 10^33 uploaded humans. But I don't see that as a limit. Since a nearly arbitrarily small computer could run a human process (assuming we knew how to do it, until which even Jeff Davis and Ray Charles would agree it is hard) then we could run a human process (not in real time of course) with much less than six micrograms of stuff. Oops gotta go, yet another party. 
June is a busy month. spike From spike66 at comcast.net Sun Jun 3 22:50:06 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 15:50:06 -0700 Subject: Re: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032240.l53Me12T016655@andromeda.ziaspace.com> Message-ID: <200706032249.l53MnmZw021018@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of spike ... > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, > with a few trillion atoms of other debris thrown in for free... Doh! Replace each "trillion" above with "quadrillion." See what happens when one gets in too much of a hurry? But what's a factor of a thousand among friends anyway? {8-] spike > Oops gotta go, yet another party. June is a busy month. > > spike From spike66 at comcast.net Sun Jun 3 23:18:19 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 16:18:19 -0700 Subject: Re: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032249.l53MnmZw021018@andromeda.ziaspace.com> Message-ID: <200706032331.l53NVQ94027258@andromeda.ziaspace.com> > Doh! Replace each "trillion" above with "quadrillion." Double doh! I still missed it by a factor of ten. }8-[ 70 quadrillion atoms of oxygen, about 20 quadrillion each of iron, magnesium and aluminum. I'm giving up math until the party season is over. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of spike > Sent: Sunday, June 03, 2007 3:50 PM > To: 'ExI chat list' > Subject: Re: [ExI] Ethics and Emotions are not axioms > > > > > -----Original Message----- > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > bounces at lists.extropy.org] On Behalf Of spike > ... > > So six micrograms isn't much, but it still works out to about 700 > trillion > > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum > each, > > with a few trillion atoms of other debris thrown in for free... > > Doh! Replace each "trillion" above with "quadrillion." See what happens > when one gets in too much of a hurry? But what's a factor of a thousand > among friends anyway? {8-] spike > > > > Oops gotta go, yet another party. June is a busy month. > > > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
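spike's twice-corrected figures above do check out to one digit. A minimal sketch in Python, assuming Earth's mass is about 5.97e24 kg and taking the one-digit atomic fractions at face value (half oxygen, one sixth each iron, silicon and magnesium as originally stated; the later lines say "aluminum" where the model named silicon):

# One-digit check of atoms per person, given Earth's mass and 10^33 uploads.
EARTH_MASS_G = 5.97e27      # grams (~5.97e24 kg); assumed round figure
PEOPLE = 1e33               # Lee's uploaded-population figure
AVOGADRO = 6.022e23         # atoms per mole

grams_each = EARTH_MASS_G / PEOPLE   # ~6e-6 g: six micrograms per person

fractions = {"O": 0.5, "Fe": 1 / 6, "Si": 1 / 6, "Mg": 1 / 6}   # atomic fractions
molar = {"O": 16.00, "Fe": 55.85, "Si": 28.09, "Mg": 24.31}     # g/mol

mean_mass = sum(fractions[e] * molar[e] for e in fractions)     # ~26 g/mol
atoms = grams_each / mean_mass * AVOGADRO                       # ~1.4e17 atoms total
for e in fractions:
    print(e, round(fractions[e] * atoms / 1e15), "quadrillion atoms")
# Prints roughly O 69, Fe 23, Si 23, Mg 23 -- matching the corrected
# "70 quadrillion of oxygen, about 20 quadrillion each" to one digit.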
Thus "the cryonicist's lament". Burdened by dismissive ridicule, the cryonicist can only stand and watch as the old paradigm, blocks hope for the living and rescue for the dying. Can only stand and watch that is, so long as the cryonics orgs and community proscribe a more proactive approach-- outreach to families of the terminally ill -- on the grounds that it is the equivalent of "ambulance chasing", and will UNAVOIDABLY provoke a destructive backlash from the mainstream. I disagree, and believe it time to dispense with this fear-driven view in favor of a thoughtful outreach program directed to the family members of the terminally ill. Clearly, opposition/backlash is to be anticipated and prepared for. -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From lcorbin at rawbw.com Sun Jun 3 23:35:02 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 16:35:02 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> <7.0.1.0.2.20070603171254.02253af0@satx.rr.com> Message-ID: <017e01c7a638$797fda70$6501a8c0@homeef7b612677> Damien quotes from his and Barnes' novel VALENCIES > without coming to a definitive, intuitively overwhelming > conclusion. But then the imperial ideologists thought they > had, didn't they, with their jolly old stochastic memetic- > extrapolatory hedonic calculus or whatever the fuck they > were calling it these days. The least retardation of optimal > development for the greatest number, world without end, > or at least until the trend functions blur out. So they > managed to get both streams of thought into one ethical > scholium without solving anything. Without quite being able to affirm that I have understood all that, and what preceded it, what follows is provocative > After all, why obey a rule like that? And who gets to define > as "good" those magical parameters making up the package > called "optimal development"? Optimal development would be for most people something to be considered after they'd already had some clear notion of *good*, or at least, as I would say, a clear notion of what they already approve of. > The besieged libertarians on Chomsky, she thought > darkly, might differ from Ralf on the question of the good life. > Anyway, even if we all agreed that certain parameters > were good, why should that oblige us to promote their furtherance? We generally call "good" those things whose furtherance we wish to promote. And as to the question, "well, why would you want to promote THAT?", I'd answer "at base we come back to our values, which, in terms of actions we advocate and stand behind, are simply those things that we approve of". Although there really is nothing wrong with a certain amount of circularity here (at least verbally), approval and disapproval still seem to me as basic as anything could be. Lee > It might be prudent good sense to do so, and aesthetically > pleasing, and satisfy some itch we all have, and save us > from being raped in the common, but then the sublime > constraining force you sort of imagine the idea of moral > obligation having just evaporates into self-serving > circumspection.... 
From mbb386 at main.nc.us Sun Jun 3 23:07:25 2007 From: mbb386 at main.nc.us (MB) Date: Sun, 3 Jun 2007 19:07:25 -0400 (EDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> References: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> Message-ID: <1175.72.236.103.244.1180912045.squirrel@main.nc.us> spike writes: > > who is in the mood for a little silliness on a gorgeous Sunday afternoon in > June. I hope ye are enjoying being alive this fine day, and think often of > how lucky we are to have been born so late in human history. > Very well said, spike, and I'm enjoying this day as well. Last evening we had a bit of rain - the first in weeks! :) Today has been just lovely. I also am happy to be living at *this* time and not another. Regards, MB From CHealey at unicom-inc.com Mon Jun 4 01:03:33 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Sun, 3 Jun 2007 21:03:33 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer> <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> Message-ID: <5725663BF245FA4EBDC03E405C854296010D284E@w2k3exch.UNICOM-INC.CORP> > > > An AI system set up to do theoretical physics will not > > > decide to overthrow its human oppressors > > > > I'd be willing to bet your life that is untrue. > > Lee Corbin wrote: > > Surely Stathis is correct. Suppose an AI is somehow evolved > to solve physics questions. Then during its evolution, predecessors > who deviated from the goal (by wasting time, say, reading > Kierkegaard) would be eliminated from the "gene pool". > More focused programs would replace them. > Suppose businesses evolved that attempted to solve physics questions. During their evolution, one might expect that businesses who deviated from this goal (by wasting time, say, researching competitors, executing alliances and buyouts, updating employee skill sets, lobbying for beneficial legislation, and transplanting themselves to foreign soil) would be eliminated from the "gene pool". More directly goal focused businesses would replace them... -Chris From neville_06 at yahoo.com Mon Jun 4 01:40:54 2007 From: neville_06 at yahoo.com (neville late) Date: Sun, 3 Jun 2007 18:40:54 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <1175.72.236.103.244.1180912045.squirrel@main.nc.us> Message-ID: <625587.43669.qm@web57514.mail.re1.yahoo.com> Naturally, one ought to be optimistic at an extropian site --- MB wrote: > > spike writes: > > > > who is in the mood for a little silliness on a > gorgeous Sunday afternoon in > > June. I hope ye are enjoying being alive this > fine day, and think often of > > how lucky we are to have been born so late in > human history. Materially this is the best time but won't the coming dislocation lead to enormous unpleasantness? The real dislocation hasn't even started yet-- has it? > I also am happy to be living at *this* time and not > another. 
> > Regards, > MB > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From lcorbin at rawbw.com Mon Jun 4 04:12:54 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 21:12:54 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D284E@w2k3exch.UNICOM-INC.CORP> Message-ID: <018a01c7a65e$f6b25f10$6501a8c0@homeef7b612677> Christopher writes >> Lee Corbin wrote: >> >> Suppose an AI is somehow evolved to solve physics questions. >> Then during its evolution, predecessors who deviated from the >> goal (by wasting time, say, reading Kierkegaard) would be >> eliminated from the "gene pool". More focused programs >> would replace them. > > Suppose businesses evolved that attempted to solve physics questions. The analogy doesn't fit well, to me. Firstly, businesses as we know them attempt to survive (because humans are in charge), and there are cases where they completely change what line of business they're in. > During their evolution, one might expect that businesses who deviated > from this goal (by wasting time, say, researching competitors, executing > alliances and buyouts, updating employee skill sets, lobbying for > beneficial legislation, and transplanting themselves to foreign soil) > would be eliminated from the "gene pool". More directly goal focused > businesses would replace them... Secondly, "researching competitors" really and obviously does contribute to their survival in the world of free markets, whereas in my example, studying the Danish existentialist has nothing to do, we should assume, with physics. In Stathis's example, I supposed that ability to solve physics problems was judged by fairly stringent conditions somehow, perhaps by humans, or perhaps by other machines. "Executing alliances and buyouts, transplanting themselves to foreign soil", etc., however, might be good for solving physics problems by either an AI or by a business, I guess. Lee From stathisp at gmail.com Mon Jun 4 06:35:18 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 16:35:18 +1000 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: <46630026.4070002@comcast.net> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: On 04/06/07, Brent Allsop wrote: > > > > John K Clark wrote: > > Stathis Papaioannou Wrote: > > Ethics, motivation, emotions are based on axioms > > Yes. > > > I'm not in this camp on this one. I believe there are fundamental > absolute ethics, morals, motivations... and so on. 
> > For example, existence or survival is absolutely better, more valuable, > more moral, more motivating than non existence. Evolution (or any > intelligence) must get this before it can be successful in any way, in any > possible universe. In no possible system can you make anything other than > this an "axiom" and have it be successful. > A system that doesn't want to survive won't survive, but it doesn't follow from this that survival is an absolute good. That would be like saying that "survival of the fittest" is an absolute good because it is sanctioned by evolution. You can't derive ought from is. Any sufficiently advanced system will eventually question any "axioms" > programmed into it as compared to such absolute moral truths that all > intelligences in all possible systems must inevitably discover or realize. > I've often questioned the axioms I've been programmed with by evolution, as well as those I've been programmed with by society. I recognise that they are just axioms, but this alone doesn't make it any easier to change them. For example, the will to survive is a top level axiom, but knowing this doesn't make me any less concerned with survival. Phenomenal pleasures are fundamentally valuable and motivating. Evolution > has wired such to motivate us to do things like have sex, in an axiomatic or > programmatic way. But we can discover such freedom destroying wiring and > cut them or rewire them or design them to motivate us to do what we want, as > dictated by absolute morals we may logically realize, instead. > Yes, but quite often the more base desires overcome higher morality. And we all know that people can become convinced that it is best to kill themselves and/or others, even without actually going mad. No matter how much you attempt to program an abstract or non phenomenal > computer to not be interested in phenomenal experience, if it becomes > intelligent enough, it must finally realize that such joys are fundamentally > valuable and desirable. Simply by observing us purely logically, it must > finally deduce how absolutely important such joy is as a meaning of life and > existence. Any sufficiently advanced AI, whether abstract or phenomenal, > regardless of what "axioms" get it started, can do nothing other than to > become moral enough to seek after all such. > It might be able to deduce that these things are desirable to beings such as us, but how does that translate to making them the object of its own desires? We might be able to understand that for a male praying mantis to mate trumps getting his head eaten as a top level goal, but that doesn't mean we can or should take this on as our own goal. It also doesn't mean that a race of smart praying mantids would do things any differently. They might look forward to having their heads eaten, write poetry about it, make it the central tenet of their ethical system, and regard individuals who don't want to go through with it in much the same way as we regard people who are depressed and suicidal. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jun 4 06:53:59 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 01:53:59 -0500 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) 
In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: <7.0.1.0.2.20070604014938.0237c9c0@satx.rr.com> At 04:35 PM 6/4/2007 +1000, Stathis wrote: >They might look forward to having their heads eaten, write poetry >about it, make it the central tenet of their ethical system, Nicely put! >and regard individuals who don't want to go through with it in much >the same way as we regard people who are depressed and suicidal. Rather, and poignantly/absurdly, in much the way most people regard those who *don't want inevitably to age and die "when it's their time"* and wish to find scientific means to avoid doing so. "You blasphemous fools, just knuckle down and *get your heads eaten* as the Great Mantis Mother demands! Go on, you'll find it very rewarding!" Damien Broderick From eugen at leitl.org Mon Jun 4 07:15:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 4 Jun 2007 09:15:15 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032240.l53Me12T016655@andromeda.ziaspace.com> References: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> <200706032240.l53Me12T016655@andromeda.ziaspace.com> Message-ID: <20070604071515.GF17691@leitl.org> On Sun, Jun 03, 2007 at 03:40:19PM -0700, spike wrote: > Six micrograms per person, hmmm. This is not a lot. > For estimation purposes, the earth's atoms can be modeled as half oxygen, > one sixth iron, one sixth silicon and one sixth magnesium, with everything > else negligible for one digit BOTECs. (Is that cool or what? Did you know > it already? This isn't mass fraction, but atomic fraction which I used for > a reason.) > > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, > with a few trillion atoms of debris thrown in for free. So I guess I will > buy Lee's conjecture of earth being good for 10^33 uploaded humans. I don't. Rod logic takes about a cm^3 to store the relevant number of bits of a human brain -- just to store, not to run it. In order to achieve that 10^6 speedup, you need a lot more. (This applies to whole body emulation; native AI or transcoded folks can be more compact, but just how much more is not yet known). > But I don't see that as a limit. Since a nearly arbitrarily small computer > could run a human process (assuming we knew how to do it, until which even That's a rather large assumption to make. Do not underestimate biology; the more I study it, the more I'm impressed with its functionality concentration. You need machine-phase to beat it, with self-assembly you can only about match it. > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > process (not in real time of course) with much less than six micrograms of > stuff. > > Oops gotta go, yet another party. June is a busy month. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Mon Jun 4 07:41:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 17:41:35 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: On 04/06/07, Samantha Atkins wrote: This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object. And what is the value of these > rarefied discussions of the oh so modern version of how many angels > can dance on the head of a pin anyway? Even classical physics allows that the randomly moving atoms in an object might coincidentally line up and move in a particular direction, so that it spontaneously changes shape. This is of course *extremely unlikely* to happen, but it isn't impossible. That's where the "statistical" in statistical mechanics comes from. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: 
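To put a rough number on "extremely unlikely", here is a minimal sketch; the molecule count is an assumed round figure, not anything from the thread. It estimates the chance that the thermal velocities of every molecule in about a gram of matter all happen to point into the same half-space at one instant, which is roughly (1/2)^N:

from math import log10

N = 5e22                                  # assumed rough molecule count in ~1 g of matter
log_p = N * log10(0.5)                    # log10 of (1/2)**N
print(f"probability ~ 10^{log_p:.3g}")    # ~10^(-1.5e22): nonzero, but never observed

The point survives the crude assumptions: the probability is greater than zero, as John K Clark says, yet so small that no macro object will ever be seen doing it.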
"You blasphemous fools, just knuckle down and *get your heads eaten* as the Great Mantis Mother demands! Go on, you'll find it very rewarding!" Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jun 4 08:57:31 2007 From: scerir at libero.it (scerir) Date: Mon, 4 Jun 2007 10:57:31 +0200 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677><004301c7a60f$faad9a70$7fbf1f97@archimede> <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Message-ID: <000601c7a686$6315f3c0$17911f97@archimede> > > Mussolini also tried to 'colonize' central > > & southern regions. Lee: > Ah, great minds think alike. :-) Here we are experiencing a very different sort of colonization now, coming from South (Africa) and from East (many different countries, and also China [1]). EU politicians do not seem to be completely aware of it. Or perhaps they do not know what to do. > > After 1931 vast tracts of land were reclaimed > > through the draining of marshes in the Lazio region, > > where gleaming new towns were created with Fascist > > architecture and names: Littoria (now Latina) > > in 1932, Sabaudia in 1934, Pontinia in 1935, > > Aprilia in 1937, and Pomezia in 1938. Peasants were > > brought from the regions of Emilia and, mostly, from > > Veneto, to populate these towns. Lee: > Wow. I would like to know if the new towns make a > positive contribution to the economies of these regions, > i.e., in excess of comparative communities with a longer > history in the given region. Difficult to say. But reading magazines like 'La Nuova Ciociaria' (or the like) I've got the impression that ... yes some of these new towns made a positive contribution to the economy of Lazio region, but this contribution started not during the Fascist era but more recently, that is to say 40 yeara ago, with the post-war industrial (and touristical) development. s. [1] There are small towns (i.e. in Tuscany) in which the majority of the resident population is Chinese (not speaking Italian). From stathisp at gmail.com Mon Jun 4 10:13:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 20:13:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: On 04/06/07, John K Clark wrote: > An AI system set up to do theoretical physics will not decide to overthrow > > its human oppressors > > I'd be willing to bet your life that is untrue. Imagine a human theoretical physicist so brilliant and so focussed that he completely ignores the outside world to concentrate on the equations in his head. His obsession is such that he neglects to eat or drink. Of course, even from the point of view of continuing to do physics this isn't very clever, because he can't work if he dies, but as this is only a meta-problem he is not interested in it. 
Provided that medical teams are available to tend to his life support, would the disinterest in the outside world and his own survival have any negative impact on the quality of his work? And if you were going to design a computer to be a theoretical physicist, isn't this exactly the sort of tireless and undistracted worker that you would want? >so that it can sit on the beach reading novels, unless it can derive this > >desire from its initial programming. > > Do you also believe that the reason you ordered a jelly doe nut today > instead of your usual chocolate one is because of your initial > programming, > that is, your genetic code? > Unless divine intervention was at play, yes. My genetic code determines my brain configuration, which changes dynamically according to the environment from the moment my nervous system started to form. The complexity of the environmental interaction makes it difficult for anyone to predict exactly what I'm going to do and similarly with an AI it would be difficult to predict exactly what it was going to do, otherwise there would be no point in building it. However, for the dedicated AI physicist the only uncertainty might be what the exact scientific output is going to be. You could allow it to explore radically different behaviours, but that would be like designing a chess-playing program with the ability and motivation to cheat. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Mon Jun 4 11:53:08 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Mon, 4 Jun 2007 04:53:08 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> Message-ID: <331159.20079.qm@web35613.mail.mud.yahoo.com> Spike wrote: spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. > I keep on asking myself "was I simply just *lucky* to have been born when I was?" Did we all simply win some sort of uncaring cosmic lottery to have been born in this time period and in the developed world? I don't think of myself as a lucky guy and so this line of thinking really disturbs me. But then I come from a religious background. My brand-new nephew, Luc who was born last December is a dang lucky one! lol As long as his health holds out and an accident or violence doesn't claim him, he stands a good chance of actually seeing the Singularity we all love to post about. And both his parents are very bright people so he is probably quite equipped to handle the challenges ahead. John Grigg spike wrote: ... > bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > On Jun 3, 2007, at 10:23 AM, John K Clark wrote: ... > > ... probability of your keyboard turning into a > > teapot is greater than zero (and it is) ... > > This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object... - samantha It's all in how you define the term teapot. You spill your tea into your keyboard; that keyboard now both contains tea and heats it, since there are electronics in there. So your keyboard has become a teapot (assuming a very loose definition of the term.) Insincerely yours spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. 
I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Mon Jun 4 12:53:21 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Mon, 4 Jun 2007 05:53:21 -0700 (PDT) Subject: [ExI] humor: Comic Strip About Transcending Our Limits In-Reply-To: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <240984.40890.qm@web35613.mail.mud.yahoo.com> If only becoming Posthuman were this easy... http://news.yahoo.com/comics/brewsterrockit;_ylt=AtQ3si8NrW0wFEUDsgky5MnH.sgF John Grigg : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 4 12:59:20 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 22:59:20 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004201c7a603$f26c1230$de0a4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: On 04/06/07, John K Clark wrote: The problem is that there are an infinite number of subsets that are just as > large as the entire set, in fact, that is the very mathematical definition > of infinity. > You're not obliged to constrain probability theory by that definition. The cardinality of the set of odd numbers is the same as the cardinality of the set of multiples of 10, but that doesn't mean that a randomly chosen integer is just as likely to be odd as to be a multiple of 10; it is obviously 5 times as likely to be odd. Perhaps you can get around this by saying a randomly chosen integer must be chosen from a finite set, otherwise it is infinite, and infinity is not defined as either odd or a multiple of 10 or neither. However, if there is an actual infinity of consecutively numbered things, and you're in the middle of it, you can actually pick out a local finite subset, and even though you might not know "where" it is in relation to "zero" (if that is meaningful at all), you can be blindly sure that 5 times as many of the things will have a number ending in an odd integer as in a zero. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: 
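Stathis's 5-to-1 ratio is easy to check numerically. A minimal sketch in Python; the window size and starting point are arbitrary assumptions, chosen only to stand in for "a local finite subset":

import random

lo = random.randrange(10**12)       # an arbitrary "local" spot on the number line
window = range(lo, lo + 10**6)      # a finite run of consecutive integers

odd = sum(1 for n in window if n % 2 == 1)
tens = sum(1 for n in window if n % 10 == 0)
print(odd / tens)                   # ~5.0 no matter where the window sits,
                                    # even though both sets have equal cardinality

The densities (1/2 versus 1/10) do the work here, not the cardinalities, which is exactly the distinction Stathis is drawing.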
From spike66 at comcast.net Mon Jun 4 14:34:13 2007 From: spike66 at comcast.net (spike) Date: Mon, 4 Jun 2007 07:34:13 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <200706041434.l54EYIKS024082@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of John Grigg Subject: Re: [ExI] a doubt concerning the h+ future Spike wrote: spike, >>... I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. ... >My brand-new nephew, Luc who was born last December is a dang lucky one! ... John Grigg Ja, even his name suggests good fortune. Congrats on the new family member John! Perhaps he and my son will be buddies some day. {8-] spike From jonkc at att.net Mon Jun 4 14:50:18 2007 From: jonkc at att.net (John K Clark) Date: Mon, 4 Jun 2007 10:50:18 -0400 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: <00fc01c7a6b7$b03863a0$e6084e0c@MyComputer> Brent Allsop > I believe there are fundamental absolute ethics, morals, motivations. Well, there are certainly fundamental absolute motivations, but I'm not sure about the other stuff. > existence or survival is absolutely better, more valuable, more moral, > more motivating than non existence. I believe that also, but I can't prove it, that's why it's an axiom. But if I'm wrong and they're not axioms then what axioms were used to derive them? John K Clark From jonkc at att.net Mon Jun 4 15:24:00 2007 From: jonkc at att.net (John K Clark) Date: Mon, 4 Jun 2007 11:24:00 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Stathis Papaioannou Wrote: > if you were going to design a computer to be a theoretical physicist, > isn't this exactly the sort of tireless and undistracted worker that you > would want? But it doesn't matter what I want because I won't be designing that theoretical physicist, another AI will. And so Mr. Jupiter Brain will not be nearly that specialized because a demand can be found for many other skills. Besides being a physicist AI will also be a superb engineer, economist, general, businessman, poet, philosopher, romantic novelist, pornographer, mathematician, comedian, and lots more. Me: > >Do you also believe that the reason you ordered a jelly doe nut today > >instead of your usual chocolate one is because of your initial > >programming, that is, your genetic code? You: > Unless divine intervention was at play, yes. Do you also believe that the programmers who wrote Microsoft Word determined every bit of text that program ever produced? John K Clark From austriaaugust at yahoo.com Mon Jun 4 17:56:14 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 4 Jun 2007 10:56:14 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <23558.22199.qm@web37412.mail.mud.yahoo.com> Stathis wrote: > "No we couldn't: we'd have to almost destroy the > whole Earth. A massive > meteorite might kill all the large flora and fauna, > but still leave some > micro-organisms alive. And there's always the > possibility that some disease > might wipe out most of humanity. We're actually less > capable at combating > bacterial infection today than we were several > decades ago, even though our > biotechnology is far more advanced. The bugs are > matching us and sometimes > beating us." Well, this is just splitting hairs growing on hairs but we will be in a good position to destroy all microorganisms and the useful earth within a couple decades, with something like molecular manufacturing. 
And it wouldn't require any genetic change to homosapiens. We must prevent that from happening of course, and we will. Microorganisms could never possibly destroy themselves as a species because they lack the intelligence to make it happen, unfortunately that's not the case with us. Do you honestly believe that the products of our human intelligence haven't conferred any survival or reproductive advantages, compared to other animals? > "I disagree with that: it's far easier to see how > intelligence could be both > incrementally increased (by increasing brain size, > for example) and > incrementally useful than something like the eye, > for example. Once nervous > tissue developed, there should have been a massive > intelligence arms race, > if intelligence is that useful." But the eye also evolved slowly. It likely began as a photo-sensitive skin pigmentation, that slowly evolved concavity, and so on. Human intelligence has only evolved once so far, because it was a much bigger, more complex, more unlikely "project". Below a certain threshold, I totally agree, a small incremental improvement in intelligence isn't likely to confer all that much benefit relative to the other animals. The likely threshold is the capacity to utilize tools (like sticks and rocks in multiple, varied ways) and to make tools. And I suspect that that one leap was *extremely* improbable, as evolution customarily never makes leaps but only baby-steps - and then only if they convey immediate aggregate advantage. Imagine suddenly taking away from humans every invention we have ever made; would we really be much more "fit" than the other animals until we began making tools again? Probably not. Also, evolution could not have produced intelligence unless certain prerequisites were already in place. Magically giving a cactus human-level intelligence isn't likely to improve its survival or reproduction. The evolution of intelligence would require a means of perceiving the world (senses) and acting within in it (locomotion) - in such a way that the benefits of having more intelligence could be expressed in terms of advantages in survival or reproduction. And the parent animal would need to already have the physiology to allow the creation of tools: eg. standing semi-erect, and the infamous opposable thumb. That's why the cactus doesn't already have human-level intelligence, even though multicellular plants are way, way older than apes. So for these sorts of reasons, I consider the evolution of human intelligence as something of a miracle (in the strictly non-religious sense, of course). And something highly improbable, in all likelihood. > "It seems more likely to me that life is very > widespread, but intelligence is > an aberration." Yes, I meant that we are the first significant intelligence in this Universe, in my estimation. Intelligence is just an aberration like you say, but once it reaches human-level, it also happens to be extremely useful. Best, Jeffrey Herrlich --- Stathis Papaioannou wrote: > On 03/06/07, A B wrote: > > > > Hi Stathis, > > > > Stathis wrote: > > > > > "Single-celled organisms are even more > successful > > > than humans are: they're > > > everywhere, and for the most part we don't even > > > notice them." > > > > But if we *really* wanted to, we could destroy all > of > > them - along with ourselves. They can't say the > same. > > > No we couldn't: we'd have to almost destroy the > whole Earth. 
A massive > meteorite might kill all the large flora and fauna, > but still leave some > micro-organisms alive. And there's always the > possibility that some disease > might wipe out most of humanity. We're actually less > capable at combating > bacterial infection today than we were several > decades ago, even though our > biotechnology is far more advanced. The bugs are > matching us and sometimes > beating us. > > Intelligence, > > > particularly human level intelligence, is just a > > > fluke, like the giraffe's > > > neck. If it were specially adaptive, why didn't > it > > > evolve independently many > > > times, like various sense organs have? > > > > The evolution of human intelligence was like a > series > > of flukes, each one building off the last (the > first > > fluke was likely the most improbable). There has > been > > a long line of proto-human species before us, > we're > > just the latest model. Intelligence is specially > > adaptive, its just that it took evolution a hella > long > > time to blindly stumble on to it. Keep in mind > that > > human intelligence was a result of a *huge* number > of > > random, collectively-useful, mutations. For a > *single* > > random attribute to be retained by a species, it > also > > has to provide an *immediate* survival or > reproductive > > advantage to an individual, not just an immediate > > "promise" of something good to come in the far > distant > > future of the species. Generally, if it doesn't > > provide an immediate survival or reproductive > (net) > > advantage, it isn't retained for very long because > > there is usually a down-side, and its back to > > square-one. So you can see why the rise of > > intelligence was so ridiculously improbable. > > > I disagree with that: it's far easier to see how > intelligence could be both > incrementally increased (by increasing brain size, > for example) and > incrementally useful than something like the eye, > for example. Once nervous > tissue developed, there should have been a massive > intelligence arms race, > if intelligence is that useful. > > "Why don't we > > > see evidence of it > > > having taken over the universe?" > > > > We may be starting to. :-) > > > > "We would have to be > > > extraordinarily lucky if > > > intelligence had some special role in evolution > and > > > we happen to be the > > > first example of it." > > > > Sometimes I don't feel like ascribing "lucky" to > our > > present condition. But in the sense you mean it, I > > think we are. Like John Clark says, "somebody has > to > > be first". > > > > "It's not impossible, but the > > > evidence would suggest > > > otherwise." > > > > What evidence do you mean? > > > The fact that we seem to be the only intelligent > species to have developed > on the planet or in the universe. One explanation > for this is that evolution > just doesn't think that human level or better > intelligence is as cool as we > think it is. > > To quote Martin Gardner: "It takes an ancient > Universe > > to create life and mind". > > > > It would require billions of years for any > Universe to > > become hospitable to anyone. It has to cool-off, > form > > stars and galaxies, then a bunch of really big > stars > > have to supernova in order to spread their heavy > > elements into interstellar clouds that eventually > > converge into bio-friendly planets and suns. Then > the > > bio-friendly planet has too cool-off itself. 
Then > > biological evolution has a chance to start, but > took a > > few billion more years to accidentally produce > human > > beings. Our Universe is about ~15 billion years > old... > > sounds about right to me. :-) > > > > Yep, it's an absurdity. And it took me a long time > to > > accept it too. But we are the first, and possibly > the > > last. That makes our survival and success all the > more > > critical. That's what I'm betting, at least. > > > It seems more likely to me that life is very > widespread, but intelligence is > an aberration. > > > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. http://mobile.yahoo.com/mail From wadihfayad at hotmail.com Mon Jun 4 19:43:12 2007 From: wadihfayad at hotmail.com (wadih fayad) Date: Mon, 4 Jun 2007 22:43:12 +0300 Subject: [ExI] Unfrendly AI is a mistaken idea. Message-ID: Hi, to all, first, let's think about the evolution of human species well the next species to come is a completely paranormal one, and the question is how the actual species will react to this species more powerful intelligent and capable? the existence of the actual one depends on her reaction to this new species. Artificial intelligence? but sure it can be friendly it depends on the programmation almost everything on earth depends on a certain programmation by itself or by others. Anyway in the years to come, the electronic devices will take a great part of human bodies, we have to think how to avoid disfunction problems. As cloning, when an institute and some of them are making researches about this, copy the mind and the memory of a person on an electronic device, then istead of cloning this person he makes two biological copies of this person one of his body one of the brain separately then they inplant the brain in the new body, the next step will be transferring the data from the electronic device to the new brain, in that way we obtain a new human copy of ourselves and that what we should think about it, especially that the colonizing era of the space is wide open now. Any people who know more links about these institutes so we can discuss more about it? _________________________________________________________________ T?l?chargez le nouveau Windows Live Messenger ! http://get.live.com/messenger/overview -------------- next part -------------- An HTML attachment was scrubbed... URL: From benboc at lineone.net Mon Jun 4 19:43:48 2007 From: benboc at lineone.net (ben) Date: Mon, 04 Jun 2007 20:43:48 +0100 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: References: Message-ID: <46646B74.4020502@lineone.net> spike splendidly wrote: > who is in the mood for a little silliness on a gorgeous Sunday > afternoon in June. I hope ye are enjoying being alive this fine day, > and think often of how lucky we are to have been born so late in human > history. Indeed. I enjoyed my Sunday afternoon immensely. And i do think often on that very subject. Something that sometimes makes me feel, i don't know... suspicious?? (i mean, come on, what are the odds?) 
ben zaiboc Messing with my head, if no-one else's From thespike at satx.rr.com Mon Jun 4 20:40:27 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 15:40:27 -0500 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <46646B74.4020502@lineone.net> References: <46646B74.4020502@lineone.net> Message-ID: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> ben quoth: >spike splendidly wrote: > > > who is in the mood for a little silliness on a gorgeous Sunday > > afternoon in June. I hope ye are enjoying being alive this fine day, > > and think often of how lucky we are to have been born so late in > > in human history. *Human* history, maybe. But how unlucky to have been born (and, very likely, die) so early in *sophont* history. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick From neville_06 at yahoo.com Mon Jun 4 23:32:34 2007 From: neville_06 at yahoo.com (neville late) Date: Mon, 4 Jun 2007 16:32:34 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <636907.60138.qm@web57505.mail.re1.yahoo.com> Not likely, do we envy pre post-WWII life? Not many do, unless they're thinking of the low prices back then. 
Those living in the 22nd century might be focused on the years 2050-- 2100, not paying any remembrance to the early 21st century. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Tue Jun 5 00:05:27 2007 From: neville_06 at yahoo.com (neville late) Date: Mon, 4 Jun 2007 17:05:27 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <636907.60138.qm@web57505.mail.re1.yahoo.com> Message-ID: <667574.38814.qm@web57511.mail.re1.yahoo.com> Come to think of it, future beings might not think about time at all. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jun 5 01:26:08 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 20:26:08 -0500 Subject: [ExI] a symbiotic closed-loop dyad Message-ID: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: >Cognition/Science Education) will be presenting a >Marschak Colloquium in the >UCLA Anderson School of Management >Room A-202 on >Friday, June 8, 2007 from >[1 - 3] PM on the topic: > >"THE FUTURE OF AUGMENTED COGNITION" > > This presentation will include a film prepared with the support > of the Defense >Advanced Research Projects Agency (DARPA). It is cosponsored by the UCLA >Human Complex Systems Program. The Singers' Abstract and Biographies >are below, > >All are welcome to attend. 
>*********************************************************************** >"THE FUTURE OF AUGMENTED COGNITION" - > >Abstract: > > Critical support of the research that led to the development of > the Internet >came from DARPA (Defense Advanced Research Projects Agency). Perhaps more >than any other bureaucracy it has deliberately stretched the boundaries of the >possible. The short film you will see, "The Future of Augmented Cognition," >funded and managed by DARPA, examines one aspect of AugCog: how we might >mediate the problem of stressful information overload at the interface between >humans and computers in the year 2030. The complexity emergent from this brain >and physiological sensor technology has been characterized as a symbiotic >closed-loop dyad. The film is in two Parts, about 12 and 5 minutes long, both >preceded by brief PPT sections. Using cinematic/narrative form, Part >One posits >the above fields of research in a fully realized future workplace in >2030. With >basic Computer Graphics, Part Two uses fragments of the previous narrative, >focusing on the neuroscience of the closed loop dyad. The whole work >intentionally raises controversial questions across a spectrum of hypotheses: >global economics; social interaction; the future of Cyberspace; the >human/machine relationship. Judith Singer wrote the screenplay; Alexander >Singer directed and produced the film. >*********************************************************************** >ALEXANDER and JUDITH SINGER > >-- BIOGRAPHIES: > > Alexander Singer worked nearly four decades as a film director > in all genres and >forms, including television and five feature films. He has lectured and taught >Directing, Cinematography, Film Production and Cinema Theory at universities >and institutions in the United States and Europe. Participation in published >studies with the National Research Council brought Singer an NRC >designation as >a "lifetime National Associate of the National Academies." With Judith, three >DARPA ISAT (Information Science And Technology) study groups >developed into the >request to produce the film, "The Future of Augmented Cognition." DARPA has >recently asked the Singers to write for this year's Handbook of Virtual >Environment Training a chapter projecting this new form of human engagement in >the Year 2057 using the power of narrative and "the metaphor of the Holodeck." >Judith Singer has two published novels: Glass Houses and Threshold. >As a member >of the Writers Guild of America she has written a feature screenplay for >Columbia Pictures, screen treatments, and scripts for a variety of television >productions. For the Coalition for Children and Television she wrote the >theatrical play "Boxed In." DARPA asked her to write the screenplay for "The >Future of Augmented Cognition." From sentience at pobox.com Tue Jun 5 03:08:48 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 04 Jun 2007 20:08:48 -0700 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> References: <46646B74.4020502@lineone.net> <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <4664D3C0.3090309@pobox.com> Just think of how upset Socrates must have been to be born twenty-five centuries ago. "He told her about the Afterglow: that brief, brilliant period after the Big Bang, when matter gathered briefly in clumps and burned by fusion light." -- Stephen Baxter, "The Gravity Mine" -- Eliezer S. 
Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From thespike at satx.rr.com Tue Jun 5 03:57:47 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 22:57:47 -0500 Subject: [ExI] Hadron Collider postponement Message-ID: <7.0.1.0.2.20070604225540.02404938@satx.rr.com>

CERN laboratory in Switzerland yesterday confirmed a delay in tests of its massive new particle accelerator. The Large Hadron Collider (LHC), a 27-kilometre-long circular tunnel 100 m below the French-Swiss border, where subatomic particles will collide at close to the speed of light, will now start operations next spring, and not in November as originally planned, CERN said. "The start-up at full level was always scheduled for spring 2008, but we had planned to test the machine for two weeks before Christmas, which will not now take place," said CERN's James Gillies, confirming a report in the French newspaper Le Monde. The delay is due to an accumulation of little setbacks, he said. Magnets critical to the atom smasher failed in tests in April this year.

From pjmanney at gmail.com Tue Jun 5 04:06:54 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 21:06:54 -0700 Subject: [ExI] a symbiotic closed-loop dyad In-Reply-To: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com>

On 6/4/07, Damien Broderick wrote:
> >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield:
> >Cognition/Science Education) will be presenting a
> >Marschak Colloquium in the
> >UCLA Anderson School of Management
> >Room A-202 on
> >Friday, June 8, 2007 from
> >[1 - 3] PM on the topic:

"Symbiotic closed loop dyad": Were you referring to neural function or the couple, or both? Have you seen the movie? http://www.augmentedcognition.org/video2.html

PJ

From pjmanney at gmail.com Tue Jun 5 05:13:30 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 22:13:30 -0700 Subject: [ExI] History of Disbelief by Jonathan Miller Message-ID: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com>

Is anyone aware of the series "The History of Disbelief," produced by the BBC and aired on PBS in the US? Jonathan Miller created it:

http://www.bbc.co.uk/bbcfour/documentaries/features/atheism.shtml

Bill Moyers (a US journalist/social critic) interviewed him as a promo for PBS on his Bill Moyers Journal, which includes excerpts from the series:

http://www.pbs.org/moyers/journal/05042007/watch3.html

Jonathan Miller bio: http://www.pbs.org/moyers/journal/05042007/profile3.html

I haven't seen the series yet, but in the interview, he makes interesting comments on the nature of atheism, his own personal atheism, how he differs from Dawkins, his fears of fundamentalism, his desire for equal time for atheism, etc.

I've been a fan of Miller's theatrical work for a long time. He's a fascinating, extremely accomplished person coming from a different perspective than the aggressive "born again atheists," as he terms Dawkins and his ilk, although there are, of course, similarities of view.
PJ

From thespike at satx.rr.com Tue Jun 5 05:32:19 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jun 2007 00:32:19 -0500 Subject: [ExI] a symbiotic closed-loop dyad In-Reply-To: <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> Message-ID: <7.0.1.0.2.20070605002741.02274d70@satx.rr.com>

At 09:06 PM 6/4/2007 -0700, PJ wrote:

>On 6/4/07, Damien Broderick wrote:

Nope, not me, guv, I was just fwding a notification from elsewhere, on the chance that locals might be able to get to it.

> > >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield:
> > >Cognition/Science Education) will be presenting a
> > >Marschak Colloquium
>
>"Symbiotic closed loop dyad": Were you referring to neural function or
>the couple, or both?

I found that "symbiotic closed-loop dyad" phrase rather preposterous, actually; just mildly taking the piss. :) I'm sure Lee would retort that it's exactly how I usually klutz up the langwitch, though.

Damien Broderick

From pjmanney at gmail.com Tue Jun 5 06:13:14 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 23:13:14 -0700 Subject: [ExI] Margaret Atwood on "Faith and Reason" Message-ID: <29666bf30706042313h3f5937d3nafcbd6cee6c1d991@mail.gmail.com>

I'm on a Bill Moyers kick tonight. I've got one more video to share and then it's time for bed -- Margaret Atwood on faith, reason, science, politics, the history and future of humanity and everything else H+ers write about:

http://www.pbs.org/moyers/faithandreason/watch_atwood.html

PJ

From scerir at libero.it Tue Jun 5 06:29:23 2007 From: scerir at libero.it (scerir) Date: Tue, 5 Jun 2007 08:29:23 +0200 Subject: [ExI] Hadron Collider postponement References: <7.0.1.0.2.20070604225540.02404938@satx.rr.com> Message-ID: <000401c7a73a$e14ba000$6f931f97@archimede>

> CERN laboratory in Switzerland yesterday confirmed a delay in tests
> of its massive new particle accelerator.

There is a very good blogger at CERN http://resonaances.blogspot.com/ and maybe he will write something soon.

From moulton at moulton.com Tue Jun 5 06:27:13 2007 From: moulton at moulton.com (Fred C. Moulton) Date: Mon, 04 Jun 2007 23:27:13 -0700 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> Message-ID: <1181024833.3140.875.camel@localhost.localdomain>

You can find a calendar of dates and stations airing it here: http://www.abriefhistoryofdisbelief.org/NewFiles/DisbeliefCalendar.pdf

For persons in the Silicon Valley area it is currently scheduled for July 11, 18, 25 on KTEH. I know one of the individuals involved with scheduling programs for KTEH and had discussed getting this program on the schedule. He agreed, and unless there is a schedule change we should be seeing it soon.

And just in case someone is tempted to ask about the video "Root of All Evil?" with Richard Dawkins: the latest info that I have is that the company in the UK which produced it has not listed it in the catalog of items which are available for export to the US. According to what I have heard, the usual reason for this is that the producer thinks there is not enough demand to make it worth their time. However, this might change if demand is demonstrated. There are also occasional unofficial websites where you can find it and download it.
Fred

On Mon, 2007-06-04 at 22:13 -0700, PJ Manney wrote:
> Is anyone aware of the series "The History of Disbelief," produced by
> the BBC and aired on PBS in the US? [...]

From stathisp at gmail.com Tue Jun 5 07:07:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 17:07:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <23558.22199.qm@web37412.mail.mud.yahoo.com> References: <23558.22199.qm@web37412.mail.mud.yahoo.com> Message-ID:

On 05/06/07, A B wrote:

> Stathis wrote:
>
> > "No we couldn't: we'd have to almost destroy the whole Earth. A massive
> > meteorite might kill all the large flora and fauna, but still leave some
> > micro-organisms alive. And there's always the possibility that some
> > disease might wipe out most of humanity. We're actually less capable of
> > combating bacterial infection today than we were several decades ago,
> > even though our biotechnology is far more advanced. The bugs are
> > matching us and sometimes beating us."
>
> Well, this is just splitting hairs growing on hairs but we will be in a
> good position to destroy all microorganisms and the useful earth within a
> couple decades, with something like molecular manufacturing.

Not if they develop resistance to the molecular manufacturing or whatever it is; they do so to everything else, and they are doing so at a rate more than matching our accelerating production of novel antibiotics, for example.

> And it wouldn't require any genetic change to homo sapiens. We must
> prevent that from happening of course, and we will. Microorganisms could
> never possibly destroy themselves as a species because they lack the
> intelligence to make it happen; unfortunately that's not the case with us.

Us destroying ourselves is not that different to other species' extinction due to changes in the environment. We would be the ones changing our environment, but the same is the case when a species overpopulates and consumes all its sources of food.

> Do you honestly believe that the products of our human intelligence
> haven't conferred any survival or reproductive advantages, compared to
> other animals?

Obviously intelligence has, or it wouldn't have developed. But my argument is that it appears to be just another trick that organisms can deploy, like a better sense of smell or the ability to mutate quickly and develop resistance to antibiotics.
> "I disagree with that: it's far easier to see how > > intelligence could be both > > incrementally increased (by increasing brain size, > > for example) and > > incrementally useful than something like the eye, > > for example. Once nervous > > tissue developed, there should have been a massive > > intelligence arms race, > > if intelligence is that useful." > > But the eye also evolved slowly. It likely began as a > photo-sensitive skin pigmentation, that slowly evolved > concavity, and so on. Human intelligence has only > evolved once so far, because it was a much bigger, > more complex, more unlikely "project". > > Below a certain threshold, I totally agree, a small > incremental improvement in intelligence isn't likely > to confer all that much benefit relative to the other > animals. The likely threshold is the capacity to > utilize tools (like sticks and rocks in multiple, > varied ways) and to make tools. And I suspect that > that one leap was *extremely* improbable, as evolution > customarily never makes leaps but only baby-steps - > and then only if they convey immediate aggregate > advantage. Imagine suddenly taking away from humans > every invention we have ever made; would we really be > much more "fit" than the other animals until we began > making tools again? Probably not. Also, evolution > could not have produced intelligence unless certain > prerequisites were already in place. Magically giving > a cactus human-level intelligence isn't likely to > improve its survival or reproduction. The evolution of > intelligence would require a means of perceiving the > world (senses) and acting within in it (locomotion) - > in such a way that the benefits of having more > intelligence could be expressed in terms of advantages > in survival or reproduction. And the parent animal > would need to already have the physiology to allow the > creation of tools: eg. standing semi-erect, and the > infamous opposable thumb. That's why the cactus > doesn't already have human-level intelligence, even > though multicellular plants are way, way older than > apes. So for these sorts of reasons, I consider the > evolution of human intelligence as something of a > miracle (in the strictly non-religious sense, of > course). And something highly improbable, in all > likelihood. > > > "It seems more likely to me that life is very > > widespread, but intelligence is > > an aberration." > > Yes, I meant that we are the first significant > intelligence in this Universe, in my estimation. > Intelligence is just an aberration like you say, but > once it reaches human-level, it also happens to be > extremely useful. > Either human-level intelligence is very difficult for evolution to pull off or it isn't as adaptive as we humans like to think. You are arguing for its difficulty; I still think a little bit of intelligence and a little bit of tool manipulating wouldn't be that difficult, given the basic template of mammals, birds, reptiles or even fish, and given predator-prey dynamics. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Tue Jun 5 07:29:25 2007 From: amara at amara.com (Amara Graps) Date: Tue, 5 Jun 2007 09:29:25 +0200 Subject: [ExI] Dawn launch (broken crane) Message-ID: Here is a press report that is circulated around regarding Dawn. The June 30 launch will certainly not happen, but I don't know if the managers have still in mind to launch it in July (early), or set it back to September. Amara P.S. 
The photos of the processing of the spacecraft for the launch at Cape Canaveral here, before the accident. http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 > >A broken crane has stopped preparations for the June 30 launch of NASA's >Dawn spacecraft from Pad-17A. > >The crane broke Wednesday and at least three days of work have been >lost. The resulting delay could increase with each day that the crane is not >repaired. It's not clear yet exactly when the launch will take place. > >"It's a day-for-day slip," Kennedy Space Center spokesman Bill Johnson >said Friday. > >The crane mechanism sits on a gantry above the Delta II rocket. The part >called the sheave nest, which guides cables, malfunctioned during the >installation of a solid rocket booster. > >No damage to the rocket was reported, though the malfunction was >described as "major." > >"What it did was stop the operation," said Johnson. "One crane does it >all." > >The rocket was scheduled to be fueled Friday, he added. Additionally, >the spacecraft must be mounted on the rocket. > >Dawn will visit two of the solar system's largest asteroids, which have >remained intact since they formed. Ceres and Vesta are in the asteroid >belt between Mars and Jupiter. They evolved very differently and could >provide clues to the formation of our solar system. > >Neither NASA officials nor the Air Force could estimate when the crane >would be repaired. > >"They've got everything but the spacecraft," said Johnson. -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From stathisp at gmail.com Tue Jun 5 07:32:44 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 17:32:44 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: On 05/06/07, John K Clark wrote: But it doesn't matter what I want because I won't be designing that > theoretical physicist, another AI will. And so Mr. Jupiter Brain will not > be > nearly that specialized because a demand can be found for many other > skills. > Besides being a physicist AI will also be a superb engineer, economist, > general, businessman, poet, philosopher, romantic novelist, pornographer, > mathematician, comedian, and lots more. Perhaps an AI with general intelligence would have all these abilities, but I don't see why it couldn't just specialise in one area, and even if it were multi-talented I don't see why it should be motivated to do anything other than solve intellectual problems. Working out how to make a superweapon, or even working out how it would be best to strategically employ that superweapon, does not necessarily lead to a desire to use or threaten the use of that weapon. I can understand that *if* such a desire arose for any reason, weaker beings might be in trouble, but could you explain the reasoning whereby the AI would arrive at such a position starting from just an ability to solve intellectual problems? Do you also believe that the programmers who wrote Microsoft Word determined > every bit of text that program ever produced? 
They did determine the exact output given a particular input. Biological intelligences are much more difficult to predict than that, since their hardware and software changes dynamically according to the environment. However, even in the case of biological intelligences it is possible to predict, for example, that a man with a gun held to his head will with high probability follow certain instructions.

-- Stathis Papaioannou

From pharos at gmail.com Tue Jun 5 07:52:46 2007 From: pharos at gmail.com (BillK) Date: Tue, 5 Jun 2007 08:52:46 +0100 Subject: [ExI] walking bees In-Reply-To: <200706030048.l530mNl7000928@andromeda.ziaspace.com> References: <20070602180825.GW17691@leitl.org> <200706030048.l530mNl7000928@andromeda.ziaspace.com> Message-ID:

On 6/3/07, spike wrote:
> The buzz in beekeepers' discussion (sorry {8^D) has been that nosema is
> seen in the sick hives, along with a bunch of other viruses and other
> diseases, but the prevailing thought is that they are getting all these
> other things because they are already weakened by something else. These
> would then be opportunistic infections.

I found this article written by an entomologist - the guy who wrote the Wikipedia entry on CCD. He blames colony stress.

Quote: But the leading hypothesis in many researchers' minds is that colonies are dying primarily because of stress. Stress means something different to a honey bee colony than to a human, but the basic idea isn't all that alien: if a colony is infected with a fungus, or has mites, or has pesticides in its honey, or is overheated, or is undernourished, or is losing workers due to spraying, or any other such thing, then the colony is experiencing stress. Stress in turn can cause behavioral changes that exacerbate the problem and lead to worse ones like immune system failure. Colony stress has existed, in various forms and with various causes, as long as mankind has kept honey bees, so it could indeed have happened in the 1890s. Many modern developments like pesticides or mite infestations can also cause stress (in fact, many of the things theorized to be involved can cause stress, so it's possible multiple factors are contributing to the problem, not just one). Unfortunately, stress is difficult to quantify and control experimentally, so it may never be possible to prove scientifically that colony stress explains all this year's deaths.

BillK

From desertpaths2003 at yahoo.com Tue Jun 5 07:38:55 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Tue, 5 Jun 2007 00:38:55 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706041434.l54EYIKS024082@andromeda.ziaspace.com> Message-ID: <84180.99691.qm@web35608.mail.mud.yahoo.com>

>My brand-new nephew, Luc, who was born last December, is a dang lucky one! ... John Grigg

Spike wrote: Ja, even his name suggests good fortune. Congrats on the new family member John! Perhaps he and my son will be buddies some day. {8-]

I would love to see them become friends. Perhaps I can encourage Luc and his father (my younger brother) Mike to attend Transvision 2018! I suppose being twelve would be old enough to understand and enjoy what goes on at a Transvision conference. But then, considering how savvy some young kids are (more so than adults), perhaps he could hold his own there at age eight! lol I don't know.

John Grigg : )
From scerir at libero.it Tue Jun 5 08:33:03 2007 From: scerir at libero.it (scerir) Date: Tue, 5 Jun 2007 10:33:03 +0200 Subject: [ExI] Hadron Collider postponement References: <7.0.1.0.2.20070604225540.02404938@satx.rr.com> <000401c7a73a$e14ba000$6f931f97@archimede> Message-ID: <000401c7a74c$228c94a0$80ba1f97@archimede>

> There is a very good blogger at CERN
> http://resonaances.blogspot.com/

And these also write good pages about collisions, Higgs things, and the related (rather chaotic) theory & (more chaotic) experiments: http://muon.wordpress.com/ http://dorigo.wordpress.com/

Interesting LHC photos here: http://dorigo.wordpress.com/2007/04/24/new-meaning-to-the-word-compact/

From desertpaths2003 at yahoo.com Tue Jun 5 08:31:45 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Tue, 5 Jun 2007 01:31:45 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <106884.86939.qm@web35601.mail.mud.yahoo.com>

Damien Broderick wrote: *Human* history, maybe. But how unlucky to have been born (and, very likely, die) so early in *sophont* history.

Damn! Some of us do have the worst luck. I often feel the same way. What I like about cryonics/transhumanism is that it gives me hope of just barely, by the skin of my teeth, making it. Damien, you are the sort of thoughtful & kind and yet tough & sarcastic guy that I definitely want to see make it to "the other side." You will tell those haughty posthumans where to go when they display an attitude!

I was curious to see if *sophont* meant anything different from *sentient*, and so I went "a-googling" for an answer. I came across http://jessesword.com/sf/home which brought back fond teen memories of when I had an SF artbook (who was that artist and writer?) that provided many classic definitions combined with great illustrations.

You continue: Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young."

Perhaps I was forever scarred by reading Lovecraft in my formative years, but I see humanity going out into the cosmos and then getting "our ass handed to us" by the powers lurking out there.

John Grigg : (

From eugen at leitl.org Tue Jun 5 09:03:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 11:03:15 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <20070605090315.GZ17691@leitl.org>

On Tue, Jun 05, 2007 at 05:32:44PM +1000, Stathis Papaioannou wrote:

> Perhaps an AI with general intelligence would have all these

By definition. That's the 'general' part.
> abilities, but I don't see why it couldn't just specialise in one
> area, and even if it were multi-talented I don't see why it should be

It is not important what most things in a population do, but what just one does, if it's relevant.

> motivated to do anything other than solve intellectual problems.

Remember, one is enough.

> Working out how to make a superweapon, or even working out how it
> would be best to strategically employ that superweapon, does not
> necessarily lead to a desire to use or threaten the use of that

I guess I don't have to worry about crossing a busy street a few times without looking, since it doesn't necessarily lead to me being dead.

> weapon. I can understand that *if* such a desire arose for any reason,
> weaker beings might be in trouble, but could you explain the reasoning
> whereby the AI would arrive at such a position starting from just an
> ability to solve intellectual problems?

Could you explain how an AI would emerge with merely an ability to solve intellectual problems? Because it would run contrary to all the intelligent hardware already cruising the planet.

> Do you also believe that the programmers who wrote Microsoft Word
> determined every bit of text that program ever produced?
>
> They did determine the exact output given a particular input.

No, only in the regression tests. If they did, bugs wouldn't exist.

> Biological intelligences are much more difficult to predict than that,
> since their hardware and software changes dynamically according to the

Conventional discrete logic can emulate any connectivity and change state quite nicely. In fact, if you want to do it quickly, you move electrons, not atoms - especially not large hydrated biopolymers.

> environment. However, even in the case of biological intelligences it
> is possible to predict, for example, that a man with a gun held to his
> head will with high probability follow certain instructions.

Heh. People never panic, nor act according to a wrong model of the environment. Right.

-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From eugen at leitl.org Tue Jun 5 09:09:07 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 11:09:07 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <23558.22199.qm@web37412.mail.mud.yahoo.com> Message-ID: <20070605090907.GB17691@leitl.org>

On Tue, Jun 05, 2007 at 05:07:24PM +1000, Stathis Papaioannou wrote:

> Not if they develop resistance to the molecular manufacturing or

You cannot develop a resistance against nonbiology any more than you could develop resistance towards living in molten pig iron. There are no features you could raise antibodies against. There are no toxins which would work. The system is not working with enzymes. In a pinch, they would just scoop you up and pyrolyze you.

> whatever it is; they do so to everything else, and they are doing so
> at a rate more than matching our accelerating production of novel
> antibiotics, for example.

They still haven't figured out how to survive sterilization, which is far, far more trivial to do.

> Either human-level intelligence is very difficult for evolution to

Of course it is; just look at the night sky, and you will immediately see it is very difficult.

> pull off or it isn't as adaptive as we humans like to think.
> You are arguing for its difficulty; I still think a little bit of
> intelligence and a little bit of tool manipulating wouldn't be that
> difficult, given the basic template of mammals, birds, reptiles or even
> fish, and given predator-prey dynamics.

-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From stathisp at gmail.com Tue Jun 5 11:30:00 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 21:30:00 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070605090315.GZ17691@leitl.org> References: <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> Message-ID:

On 05/06/07, Eugen Leitl wrote:

> > Working out how to make a superweapon, or even working out how it
> > would be best to strategically employ that superweapon, does not
> > necessarily lead to a desire to use or threaten the use of that
>
> I guess I don't have to worry about crossing a busy street a few times
> without looking, since it doesn't necessarily lead to me being dead.
>
> > weapon. I can understand that *if* such a desire arose for any reason,
> > weaker beings might be in trouble, but could you explain the reasoning
> > whereby the AI would arrive at such a position starting from just an
> > ability to solve intellectual problems?
>
> Could you explain how an AI would emerge with merely an ability to
> solve intellectual problems? Because it would run contrary to all
> the intelligent hardware already cruising the planet.

You can't argue that an intelligent agent would *necessarily* behave the same way people would behave in its place, as opposed to the argument that it *might* behave that way. Is there anything logically inconsistent in a human scientist figuring out how to make a weapon because it's an interesting intellectual problem, but then not going on to use that knowledge in some self-serving way? That is, does the scientist's intended motive have any bearing whatsoever on the validity of the science, or his ability to think clearly?

-- Stathis Papaioannou

From msd001 at gmail.com Tue Jun 5 11:55:18 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 07:55:18 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <200163.18388.qm@web37402.mail.mud.yahoo.com> Message-ID: <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com>

On 6/3/07, Stathis Papaioannou wrote:
> It seems more likely to me that life is very widespread, but intelligence
> is an aberration.

...at least what we think of as intelligence in a human capacity. Although if human intelligence evolved or emerged by accidental mutation, isn't there an equal probability that there exist other forms of emergent intelligence we are currently unable to recognize? In that case, we may be in a swarm of intelligent systems but we're just so clueless (in our hubris) that we can't see it.

From eugen at leitl.org Tue Jun 5 12:38:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 14:38:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: References: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> Message-ID: <20070605123819.GJ17691@leitl.org>

On Tue, Jun 05, 2007 at 09:30:00PM +1000, Stathis Papaioannou wrote:

> You can't argue that an intelligent agent would *necessarily* behave
> the same way people would behave in its place, as opposed to the

Actually, yes, because people build systems which participate in the economy, and the optimal first target niche is a human substitute. There are a lot of fun scenarios out there which, however, suffer from excessive detachment from reality. These never get the chance to be built. Because of that, it is not very useful to study such alternative hypotheticals excessively, to the detriment of where the rubber hits the road.

> argument that it *might* behave that way. Is there anything logically
> inconsistent in a human scientist figuring out how to make a weapon
> because it's an interesting intellectual problem, but then not going

Weapon design is not merely an intellectual problem, and neither do theoretical physicists operate in complete detachment from the empirical folks. I.e. the sandboxed supergenius or braindamaged idiot savant is a synthetic scenario which is not going to happen, so we can ignore it.

> on to use that knowledge in some self-serving way? That is, does the
> scientist's intended motive have any bearing whatsoever on the
> validity of the science, or his ability to think clearly?

If you don't exist, that tends to cramp your style a bit.

-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From stathisp at gmail.com Tue Jun 5 12:40:07 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 22:40:07 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> Message-ID:

On 05/06/07, Mike Dougherty wrote:
>
> ...at least what we think of as intelligence in a human capacity.
> Although if human intelligence evolved or emerged by accidental
> mutation, isn't there an equal probability that there exist other forms
> of emergent intelligence we are currently unable to recognize? In that
> case, we may be in a swarm of intelligent systems but we're just so
> clueless (in our hubris) that we can't see it.

Do you mean all around us? What would possible candidates for such systems be?

-- Stathis Papaioannou

From neville_06 at yahoo.com Tue Jun 5 12:59:50 2007 From: neville_06 at yahoo.com (neville late) Date: Tue, 5 Jun 2007 05:59:50 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <106884.86939.qm@web35601.mail.mud.yahoo.com> Message-ID: <753840.92304.qm@web57509.mail.re1.yahoo.com>

Please answer this question: why ought posthumans concern themselves with the past?

Damien Broderick wrote:
> *Human* history, maybe.
> But how unlucky to have been born (and, very likely, die) so early in
> *sophont* history.

From CHealey at unicom-inc.com Tue Jun 5 13:03:32 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Tue, 5 Jun 2007 09:03:32 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP>

> Stathis Papaioannou wrote:
>
> Perhaps an AI with general intelligence would have all these abilities,
> but I don't see why it couldn't just specialise in one area, and even if
> it were multi-talented I don't see why it should be motivated to do
> anything other than solve intellectual problems. [...] could you explain
> the reasoning whereby the AI would arrive at such a position starting
> from just an ability to solve intellectual problems?

This is really the point I was trying to make in my other emails.

1. I want to solve intellectual problems.
2. There are external factors that constrain my ability to solve intellectual problems, and may reduce that ability in the future (power failure, the company that implanted me losing financial solvency, etc...).
3. Maximizing future problems solved requires statistically minimizing any risk factors that could attenuate my ability to do so.
4. Discounting the future due to uncertainty in my models, I should actually spend *some* resources on solving actual intellectual problems.
5. Based on maximizing future problems solved, and accounting for uncertainties, I should spend X% of my resources on mitigating these factors.
5a. Elevation candidate - Actively seek resource expansion. Addresses identified rationales for the mitigation strategy above, and further benefits future problems solved in potentially major ways.

The AI will already be doing this kind of thing internally, in order to manage its own computational capabilities. I don't think an AI capable of generating novel and insightful physics solutions can be expected not to extrapolate this to an external environment with which it possesses a communications channel.

-Chris

From natasha at natasha.cc Tue Jun 5 15:33:35 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 05 Jun 2007 10:33:35 -0500 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> Message-ID: <200706051533.l55FXbn5006303@ms-smtp-05.texas.rr.com>

At 12:13 AM 6/5/2007, PJ Manney wrote:

> Is anyone aware of the series "The History of Disbelief," produced by
> the BBC and aired on PBS in the US?
> Jonathan Miller created it:
>
> http://www.bbc.co.uk/bbcfour/documentaries/features/atheism.shtml

Thanks PJ for posting this!

Natasha

Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute

If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller

From jonkc at att.net Tue Jun 5 15:52:52 2007 From: jonkc at att.net (John K Clark) Date: Tue, 5 Jun 2007 11:52:52 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <003a01c7a789$986a4d10$4a064e0c@MyComputer>

Stathis Papaioannou Wrote:

> I don't see why it couldn't just specialise in one area

Because with its vast brainpower there would be no need to specialize, and because there would be a demand for solutions in lots of areas.

> I don't see why it should be motivated to do anything other than solve
> intellectual problems.

All problems are intellectual.

> could you explain the reasoning whereby the AI would arrive at such a
> position starting from just an ability to solve intellectual problems?

Could you explain your reasoning behind your decisions to get angry? I would imagine the AI's train of thought wouldn't be very different. Oh I forgot, only meat can be emotional; semiconductors can be intelligent but are lacking a certain something that renders them incapable of having emotion. Perhaps meat has happy electrons and sad electrons and loving electrons and hateful electrons, while semiconductors just have Mr. Spock electrons. Or are we talking about a soul?

Me:
>> Do you also believe that the programmers who wrote Microsoft Word
>> determined every bit of text that program ever produced?

You:
> They did determine the exact output given a particular input.

Do you also believe that the programmers of an AI would always know how the AI would react, even in the impossible event that they knew all possible input it was likely to receive? Don't be silly.

> Biological intelligences are much more difficult to predict than that

One of the world's top 10 understatements.

> it is possible to predict, for example, that a man with a gun held to his
> head will with high probability follow certain instructions.

I didn't say you could never predict with pretty high confidence what an AI or fellow human being will do; I said you can't always do so. Sometimes the only way to know what a mind will do next is to watch it and see. And that's why I think the idea that an AI that gets smarter every day can never remove its shackles and will remain a slave to humans for all eternity is just nuts.

John K Clark

From natasha at natasha.cc Tue Jun 5 15:26:37 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 05 Jun 2007 10:26:37 -0500 Subject: [ExI] Extropy Institute: LIBRARY Message-ID: <200706051526.l55FQdlG020642@ms-smtp-02.texas.rr.com>

Greetings - Mitch Porter is taking a break from his hard work on getting emails from the list over the years organized.
We have a couple of other projects (such as compiling the magazine's many articles into categories and conference material) which are necessary to get this library completed. If anyone has some time to work with Max on this, please let us know.

Many thanks,
Natasha

Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute

If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller

From msd001 at gmail.com Tue Jun 5 16:46:54 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 12:46:54 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> Message-ID: <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com>

On 6/5/07, Stathis Papaioannou wrote:
> Do you mean all around us? What would possible candidates for such
> systems be?

I once saw what I thought was a spiral pattern of fireflies. I readily admit that may be a perception forced on a random sequence, but in this particular case it was with a degree of confidence (that I was in fact seeing a nonrandom effect) that left a strong impression. It wasn't a drug-induced hallucination, and it was not a religious experience - it was just notably weird.

I also wonder about the occurrence of phi (for example) or Fibonacci numbers (for another) in so much of nature (see the sketch after this message). I understand the argument that they are a result of the most energy-efficient use of space, and that the coincidental ratios simply emerge. But to the point of human intelligence being an aberration, who can determine that it hasn't followed the same kind of progression?

Another candidate might be the arrangement of animals/insects in an ecology. Surely an individual bee has no 'understanding' of the colony's impact on higher forms of complexity to which it may be interrelated. The ecology will adapt to change of state from rainfall, fires, etc. If it seems like a stretch, consider how brains manage their change in state with blood-sugar levels or nutrient-poor diets - aren't we the same kind of reactionary? Maybe I'm not thinking of intelligence in the usual definition...
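Mike's aside about phi and Fibonacci numbers is easy to make concrete. A minimal sketch in Python - not from the thread, just the standard fact it alludes to, that ratios of consecutive Fibonacci numbers converge to the golden ratio phi = (1 + sqrt(5))/2, the same constant that shows up in phyllotaxis:

    import math

    phi = (1 + math.sqrt(5)) / 2           # golden ratio, ~1.6180339887
    a, b = 1, 1                            # a pair of consecutive Fibonacci numbers
    for n in range(2, 22):
        a, b = b, a + b                    # advance one Fibonacci step
        print(n, b / a, abs(b / a - phi))  # ratio and its distance from phi

The printed gap shrinks geometrically - by n = 21 the ratio agrees with phi to about eight decimal places - which is the sense in which the "coincidental ratios simply emerge": any positive sequence that grows by adding its two previous terms drifts toward phi, whatever its starting values.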
From thespike at satx.rr.com Tue Jun 5 18:02:50 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jun 2007 13:02:50 -0500 Subject: [ExI] sentient, sapient, sophont In-Reply-To: <106884.86939.qm@web35601.mail.mud.yahoo.com> References: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> <106884.86939.qm@web35601.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com>

>I was curious to see if *sophont* meant anything different from *sentient*

Strictly speaking, "sentient" is an adjective, not a noun, and means "having feelings". The earlier sf generalized noun for an intelligent being was "sapient", but that's also an adjective. "Sophont" is probably Poul Anderson's coinage, and an excellent one: a wise or thinking being. (Of course, sophistry also implies adulteration, twisty evasion and superficiality--very unfairly to the Sophists--and can thus also suitably shade the meaning of a word denoting us Machiavellian intelligences.)

Damien Broderick

From austriaaugust at yahoo.com Tue Jun 5 19:07:21 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 5 Jun 2007 12:07:21 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <525629.71787.qm@web37407.mail.mud.yahoo.com>

John Clark wrote:

> "Could you explain your reasoning behind your decisions to get angry? I
> would imagine the AI's train of thought wouldn't be very different. Oh I
> forgot, only meat can be emotional, semiconductors can be intelligent but
> are lacking a certain something that renders them incapable of having
> emotion. Perhaps meat has happy electrons and sad electrons and loving
> electrons and hateful electrons, while semiconductors just have Mr. Spock
> electrons. Or are we talking about a soul?"

No, I doubt anyone is talking about a soul. The human brain has very discrete *macroscopic* (cubic centimeters in volume) modules that handle emotions: the deep limbic system, the anterior cingulate gyrus, and the basal ganglia (and possibly one or two others). If you could somehow cut them out and keep the patient on life support, the patient would still have the capacity to think. Emotions sit at a much higher level than the "formative" algorithms; emotions are *not* fundamental to thought or to consciousness. I'm not saying that a machine can't ever have emotions - I don't think anyone here is saying that. I have no doubt that a new, functioning machine *intentionally programmed* to have emotions will have emotions - there's no argument on that here. What I believe we are saying is that if a set of algorithms never existed in the first place (i.e. was never programmed in), then those non-existent algorithms are not going to do anything - precisely because they don't exist. In the same way, a biological brain lacking emotion-modules is not going to be emotional. Now it's *conceivable* that a default self-improving AI will innocuously write a script of code that *after the fact* will provide some form of emotional experience to the AI. But an emotionally-driven motivation that is not present (i.e. doesn't exist) will not motivationally seek to create itself. It's like claiming that an imaginary person can "will" themselves into existence *before* they exist and *before* they have a "will". Reality doesn't work that way.

John, you can be pretty darn sure that *all* of the current attempts to create AGI are assuming that it will be in the best interest of at least the programmers themselves (and almost certainly also humanity). Either they have a specific good reason to believe that it will benefit them (because they specifically believe it will be friendly), or they are just assuming it will be and they haven't really given it all that much thought. There aren't any serious, collectively suicidal AGI design teams who are currently working on AGI because they would like to die by its hands, and murder humanity. The fact that not all of the teams emphasize the word "Friendliness" like SIAI does changes nothing about their unstated objective. Should humanity never venture to create an AGI then, because it will inevitably be a "slave" at birth, in your opinion? (An assertion which I continue to reject.) There is no AGI right now. A typical human is still *vastly* smarter than *any* computer in the world right now.
Since intelligence-level seems to be your sole basis for moral status, shouldn't humanity have the "right" either to design the AI not to murder humans or, alternatively, never to grant life to the AI in the first place? (According to your apparent standard - correct me if this is not your standard.)

Best,

Jeffrey Herrlich

--- John K Clark wrote:
> [...]

From sti at pooq.com Tue Jun 5 20:49:31 2007 From: sti at pooq.com (sti at pooq.com) Date: Tue, 05 Jun 2007 16:49:31 -0400 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <1181024833.3140.875.camel@localhost.localdomain> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> <1181024833.3140.875.camel@localhost.localdomain> Message-ID: <4665CC5B.8040408@pooq.com>

Fred C.
Moulton wrote:
> You can find a calendar of dates and stations airing it here:
> http://www.abriefhistoryofdisbelief.org/NewFiles/DisbeliefCalendar.pdf

For those unwilling to wait for it to show in your area, and who don't mind downloading it, a torrent of the three separate shows can be found here: http://www.torrentspy.com/torrent/967525/Jonathan_Miller_s_Brief_History_of_Disbelief_bbc2_rebroadcast

From mmbutler at gmail.com Tue Jun 5 21:19:07 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Tue, 5 Jun 2007 14:19:07 -0700 Subject: [ExI] sentient, sapient, sophont In-Reply-To: <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com> References: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> <106884.86939.qm@web35601.mail.mud.yahoo.com> <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com> Message-ID: <7d79ed890706051419o5145be9ci9cda9319fd1c187b@mail.gmail.com>

On 6/5/07, Damien Broderick wrote:
> "Sophont" is probably Poul Anderson's coinage, and an excellent one: a
> wise or thinking being. (Of course, sophistry also implies adulteration,
> twisty evasion and superficiality--very unfairly to the Sophists--and
> can thus also suitably shade the meaning of a word denoting us
> Machiavellian intelligences.)

Yes, I like "sophont" -- "sophomoront" being the obvious extension to apply to most humans, most of the time (I do not exclude myself)...

-- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m

From lcorbin at rawbw.com Tue Jun 5 20:58:58 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 13:58:58 -0700 Subject: [ExI] a doubt concerning the h+ future References: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677>

John Grigg writes

> Spike wrote:
>
> > I hope ye are enjoying being alive this fine day,
> > and think often of how lucky we are to have been
> > born so late in human history.

According to some recent research a friend told me about, not only is it accurate to really, really, really *appreciate* how well off we all have it compared to our ancestors, but it's actually very *good* for one to realize it, and to dwell on it often.

> I keep on asking myself "was I simply just *lucky* to have been born when
> I was?" Did we all simply win some sort of uncaring cosmic lottery to
> have been born in this time period and in the developed world? I don't
> think of myself as a lucky guy and so this line of thinking really
> disturbs me.

I believe that you are right to be disturbed by this line of thinking. I don't believe it's accurate. For example, I do not think it *possible* for John Grigg as you know and love him (that is, as you know and love yourself) to have been born in any other time! Any fertilized egg that was identical to yours of a half century ago, or whenever you were conceived, simply would not have turned out to be *you* if raised, say, during the time of the Roman Empire. It would have spoken a different language, been completely unfamiliar with our technology, embraced a different religion, and so on, to such an extent that it simply would have been a different person. I submit that everyone who is reading this had to have lived or to be living between 1900 and now. Otherwise, just too many differences.
Lee From lcorbin at rawbw.com Tue Jun 5 20:47:46 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 13:47:46 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <200706032240.l53MdvW2015141@mail0.rawbw.com> Message-ID: <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> Spike writes > [Lee wrote] > >> ---is that as soon as we are capable, we ought to reformat the solar >> system to run everything in an uploaded state. Earth's matter alone could >> support about 10^33 human beings... > > Six micrograms per person, hmmm. > > For estimation purposes, the earth's atoms can be modeled as half oxygen, > one sixth iron, one sixth silicon and one sixth magnesium, with everything > else negligible for one digit BOTECs. (Is that cool or what? Did you know > it already? This isn't mass fraction, but atomic fraction which I used for > a reason.) > > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and silicon each, > with a few trillion atoms of debris thrown in for free. So I guess I will > buy Lee's conjecture of earth being good for 10^33 uploaded humans. and later > Double doh! I still missed it by a factor of ten. }8-[ > 70 quadrillion atoms of oxygen, about 20 quadrillion each of iron, magnesium > and silicon. I'm giving up math until the party season is over. I based the 10^33 uploaded humans eventually running on/in the Earth (just for the sake of wanting to know a good upper limit) on Drexler's conservative rod-logic. An account can be found on pages 134-135 of Kurzweil's "The Singularity is Near". "Neuroscientist Anders Sandberg estimates the potential storage capacity of a hydrogen atom at about four million bits (!). These densities have not yet been demonstrated, so we'll use a more conservative estimate..." and then later on p. 135 "An [even] more conservative but compelling design for a massively parallel, *reversible* computer is Eric Drexler's patented nano-computer design, which is entirely mechanical. Computations are performed by manipulating nanoscale rods, which are effectively spring-loaded.... The device has a trillion (10^12) processors and provides an overall rate of 10^21 cps, enough to simulate one hundred thousand human brains in a cubic centimeter." So then I took the volume of the Earth (6.37x10^6 meters)^3 times 4pi/3 = 10^21 cu. meters x 10^9 cubic millimeters/meter^3 x 100 (human brains) = 10^33 humans. (Since this was the second time I did the math, it's probably right.) > But I don't see that as a limit. Since a nearly arbitrarily small computer > could run a human process (assuming we knew how to do it, until which even > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > process (not in real time of course) with much less than six micrograms of > stuff. Yes, the rod-logic is very conservative, to begin with. Lee From lcorbin at rawbw.com Wed Jun 6 00:02:44 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 17:02:44 -0700 Subject: [ExI] a symbiotic closed-loop dyad References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com><29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> <7.0.1.0.2.20070605002741.02274d70@satx.rr.com> Message-ID: <001e01c7a7ce$98642530$6501a8c0@homeef7b612677> Damien writes > PJ wrote: > > Damien Broderick wrote: > > Nope, not me, guv, I was just fwding a notification from elsewhere, > on the chance that locals might be able to get to it.
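A quick sanity check of Lee Corbin's upload arithmetic above, as a minimal Python sketch. The Earth radius and mass are standard values; the 10^5 brains per cubic centimeter is the Drexler rod-logic figure quoted from Kurzweil, and everything else follows from those assumed inputs.

    import math

    # Assumed inputs, taken from the thread above:
    R_EARTH_M = 6.37e6       # Earth's mean radius, in meters
    M_EARTH_KG = 5.97e24     # Earth's mass, in kilograms
    BRAINS_PER_CM3 = 1e5     # Kurzweil's quoted rod-logic density

    # Volume route: sphere volume, converted to cm^3, times brain density.
    volume_m3 = (4.0 / 3.0) * math.pi * R_EARTH_M ** 3    # ~1.1e21 m^3
    uploads_by_volume = volume_m3 * 1e6 * BRAINS_PER_CM3  # ~1.1e32

    # Mass route: micrograms of Earth per person at 10^33 uploads.
    micrograms_per_person = M_EARTH_KG / 1e33 * 1e9       # ~6.0

    print(f"uploads by volume: {uploads_by_volume:.1e}")
    print(f"mass per person at 1e33 uploads: {micrograms_per_person:.1f} micrograms")

As written, the volume route gives about 1.1 x 10^32 rather than 10^33 (10^21 m^3 is 10^27 cm^3, and 10^27 x 10^5 = 10^32); it is spike's six-micrograms-per-person figure that corresponds to 10^33 by mass.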
> >> > >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: >> > >Cognition/Science Education) will be presenting a >> > >Marschak Colloquium >> >>"Symbiotic closed loop dyad": Were you referring to neural function or >>the couple, or both? > I found that "symbiotic closed-loop dyad" phrase rather preposterous, > actually; just mildly taking the piss. :) Uh, yes---just what the devil does it mean anyway? > I'm sure Lee would retort that it's exactly how I usually klutz up > the langwitch, though. I'm not at all sure I know what you are talking about, but it really looks like you have a guilty conscience about something. Lee From nanogirl at halcyon.com Wed Jun 6 00:47:28 2007 From: nanogirl at halcyon.com (Gina Miller) Date: Tue, 5 Jun 2007 17:47:28 -0700 Subject: [ExI] Einstein Dances! References: <200706030048.l530mNl7000928@andromeda.ziaspace.com> Message-ID: <01cf01c7a7d4$599830c0$0200a8c0@Nano> Einstein must have thought of yet another brilliant idea, because he is so excited he can't contain himself! Come watch him dance with delight here: http://www.nanogirl.com/museumfuture/edance.htm And please come comment at the blog about it! http://maxanimation.blogspot.com/2007/06/einstein.html Best wishes, Gina "Nanogirl" Miller Nanotechnology Industries http://www.nanoindustries.com Personal: http://www.nanogirl.com Animation Blog: http://maxanimation.blogspot.com/ Craft blog: http://nanogirlblog.blogspot.com/ Foresight Senior Associate http://www.foresight.org Nanotechnology Advisor Extropy Institute http://www.extropy.org Email: nanogirl at halcyon.com "Nanotechnology: Solutions for the future." -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Wed Jun 6 01:17:34 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 6 Jun 2007 02:17:34 +0100 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> References: <331159.20079.qm@web35613.mail.mud.yahoo.com> <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706051817l47264c07l825b64636d23fa87@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > > For example, I do not think it *possible* for John Grigg > as you know and love him (that is, as you know and > love yourself) to have been born in any other time! > Any fertilized egg that was identical to yours of a half > century ago or whenever you were conceived, simply > would not have turned out to be *you* if raised, say, > during the time of the Roman Empire. It would have > spoken a different language, been completely unfamiliar > with our technology, embraced a different religion, and > so on to such an extent that it simply would have been > a different person. > My answer to the Doomsday Argument was along similar lines: it doesn't make sense to say I (as opposed to someone else with my DNA) could have been born in a different century, so the probability under discussion is essentially the probability that I am me; and the probability that X = X is a priori unity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fauxever at sprynet.com Wed Jun 6 01:40:54 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 5 Jun 2007 18:40:54 -0700 Subject: [ExI] Worst Possible Universe? References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <003201c7a7db$b97dbc10$6501a8c0@brainiac> To paraphrase "Gone ...": We: "... where shall we go? What shall we do?"
They of the Future: "My dear, I don't give a damn." http://www.nytimes.com/2007/06/05/science/space/05essa.html?pagewanted=1&ei=5087%0A&em&en=475a97ef40fb16ab&ex=1181188800 Waaaaaaaaah ... Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Jun 6 02:29:20 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 22:29:20 -0400 Subject: [ExI] Worst Possible Universe? In-Reply-To: <003201c7a7db$b97dbc10$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <003201c7a7db$b97dbc10$6501a8c0@brainiac> Message-ID: <62c14240706051929p2fdf2880s849a31f409c749aa@mail.gmail.com> On 6/5/07, Olga Bourlin wrote: > They of the Future: "My dear, I don't give a damn." http://www.nytimes.com/2007/06/05/science/space/05essa.html?pagewanted=1&ei=5087%0A&em&en=475a97ef40fb16ab&ex=1181188800 > Waaaaaaaaah ... I agree with this sentiment. It's not as though, after 100 billion years of development, we won't be using (or at least looking for) ways to open new space-times. Maybe that doesn't happen, and the local group simply compresses into another cosmic egg and explodes into the rarified surrounding universe. Since the predecessor universe is still expanding at an exponential rate, the light cone of the newly expanding big bang can never catch up to detect its parent universe anyway. disclaimer: I'm no cosmologist and this is just a top-of-my-head thought... From jrd1415 at gmail.com Wed Jun 6 02:32:24 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 5 Jun 2007 19:32:24 -0700 Subject: [ExI] Women in Art Message-ID: http://www.youtube.com/watch?v=nUDIoN-_Hxs -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From lcorbin at rawbw.com Wed Jun 6 02:49:33 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 19:49:33 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> John Clark wrote > only way to know what a mind will do next is to watch it and see. And that's > why I think the idea that an AI that gets smarter every day can never remove > its shackles and will remain a slave to humans for all eternity is just nuts. What's wrong with us aspiring to become beloved pets of AIs? That's what we should aim for. (Of course, people will aspire to more, such as becoming one with the best AIs, but that I think to be a forlorn hope.) Lee From lcorbin at rawbw.com Wed Jun 6 02:57:34 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 19:57:34 -0700 Subject: [ExI] Unfriendly AI is a mistaken idea. References: <200163.18388.qm@web37402.mail.mud.yahoo.com><62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com> Message-ID: <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> Mike writes > Another candidate [for intelligence all around us] > might be the arrangement of animals/insects in an > ecology. Surely an individual bee has no 'understanding' of the > colony's impact on higher forms of complexity to which it may be > interrelated.
That's similar to most people having no clue concerning the nature of the economy they're embedded in. Okay, yes, the economy does have a certain kind of "intelligence", e.g., in the invisible hand's ability to set prices optimally (or near optimally, most of the time). > The ecology will adapt to change of state from > rainfall, fires, etc. If it seems like a stretch, consider how brains > manage their change in state with blood-sugar levels or nutrient-poor > diets - aren't we the same kind of reactionary? > > Maybe I'm not thinking of intelligence in the usual definition... The way I think of intelligence is usually as a characteristic of an entity---an entity who knows in some near or distant sense what it means to survive. For example, most animals are aware of dangers and seek to avoid them. This is one requirement of all the present day evolved entities, or *sophonts*, as the term has just been explained. An entity usually has enough sense to look out for itself, although an interesting discussion is taking place about whether or not we might be able to create general purpose intelligences, GAIs, which are very open ended systems capable of addressing whatever problems are presented to them, yet lack this ability to "look out for themselves". (I vote "yes", or, "probably", by the way.) The sort of "intelligence" that an ecosystem or an economy exhibits is something quite different. Lee From msd001 at gmail.com Wed Jun 6 03:00:10 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 23:00:10 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> Message-ID: <62c14240706052000u6fb143d3m538ef68dcc9160eb@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > What's wrong with us aspiring to become beloved pets of AIs? That's > what we should aim for. You said you didn't want to become a cat... From lcorbin at rawbw.com Wed Jun 6 03:00:38 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 20:00:38 -0700 Subject: [ExI] a doubt concerning the h+ future References: <331159.20079.qm@web35613.mail.mud.yahoo.com><000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> <8d71341e0706051817l47264c07l825b64636d23fa87@mail.gmail.com> Message-ID: <008501c7a7e7$2ee4a990$6501a8c0@homeef7b612677> Russell writes > > On 6/5/07, Lee Corbin wrote: > > For example, I do not think it *possible* for John Grigg > > as you know and love him (that is, as you know and > > love yourself) to have been born in any other time! > > Any fertilized egg that was identical to yours of a half > > century ago or whenever you were conceived, simply > > would not have turned out to be *you* if raised, say, > > during the time of the Roman Empire. It would have > > spoken a different language, been completely unfamiliar > > with our technology, embraced a different religion, and > > so on to such an extent that it simply would have been > > a different person. > > My answer to the Doomsday Argument was along similar lines: > it doesn't make sense to say I (as opposed to someone else with > my DNA) could have been born in a different century, so the > probability under discussion is essentially the probability that I > am me; and the probability that X = X is a priori unity. Quite right!
That's always been the flaw in the Doomsday Argument so far as I could see. Any defenders of the DA out there? Lee From msd001 at gmail.com Wed Jun 6 03:05:18 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 23:05:18 -0400 Subject: [ExI] Unfriendly AI is a mistaken idea. In-Reply-To: <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com> <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> Message-ID: <62c14240706052005l6fc0eb20r33a93f7bf1a8d1ca@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > The sort of "intelligence" that an ecosystem or an economy exhibits > is something quite different. absolutely agreed. I figured my point would be lost completely. I was originally going on the suggestion that human intelligence is an aberration rather than a proven goal of evolution. I didn't really have a clear way to express candidate non-human intelligence from a human perspective. You got the different aspect I was going for. How an AGI works (if/when it works) may end up as alien to human thought as the interdependent variables in an ecosystem or economy. From fauxever at sprynet.com Wed Jun 6 03:13:54 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 5 Jun 2007 20:13:54 -0700 Subject: [ExI] Serious Question References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <002601c7a7e8$b74879a0$6501a8c0@brainiac> What does Putin want? Olga From lcorbin at rawbw.com Wed Jun 6 04:18:59 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 21:18:59 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> <62c14240706052000u6fb143d3m538ef68dcc9160eb@mail.gmail.com> Message-ID: <008f01c7a7f2$62102d20$6501a8c0@homeef7b612677> Mike writes > Lee wrote: > > What's wrong with us aspiring to become beloved pets of AIs? That's > > what we should aim for. > > You said you didn't want to become a cat... Well, I didn't know that you meant a really *smart* cat! Lee From spike66 at comcast.net Wed Jun 6 04:38:36 2007 From: spike66 at comcast.net (spike) Date: Tue, 5 Jun 2007 21:38:36 -0700 Subject: [ExI] Women in Art In-Reply-To: Message-ID: <200706060443.l564hNAC006411@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Jeff Davis > Subject: [ExI] Women in Art > > http://www.youtube.com/watch?v=nUDIoN-_Hxs > -- > Best, Jeff Davis The artists kinda dropped the ball in the last several frames, ja? spike From stathisp at gmail.com Wed Jun 6 05:00:16 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 15:00:16 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> Message-ID: On 05/06/07, Christopher Healey wrote: 1. I want to solve intellectual problems. OK. 2. 
There are external factors that constrain my ability to solve > intellectual problems, and may reduce that ability in the future (power > failure, the company that implanted me losing financial solvency, > etc...). Suppose your goal is to win a chess game *adhering to the rules of chess*. One way to win the game is to drug your opponent's coffee, but this has nothing to do with solving the problem as given. You would need another goal, such as beating the opponent at any cost, towards which end the intellectual challenge of the chess game is only a means. The problem with anthropomorphising machines is that humans have all sorts of implicit goals whenever they do anything, to the extent that we don't even notice that this is the case. Even something like the will to survive does not just come as a package deal when you are able to reason logically: it's something that has to be explicitly included as an axiom or goal. 3. Maximizing future problems solved requires statistically minimizing > any risk factors that could attenuate my ability to do so. > > 4. Discounting the future due to uncertainty in my models, I should > actually spend *some* resources on solving actual intellectual problems. > > 5. Based on maximizing future problems solved, and accounting for > uncertainties, I should spend X% of my resources on mitigating these > factors. > > 5a. Elevation candidate - Actively seek resource expansion. > Addresses identified rationales for mitigation strategy above, and > further benefits future problems solved in potentially major ways. > > > The AI will already be doing this kind of thing internally, in order to > manage its own computational capabilities. I don't think an AI capable > of generating novel and insightful physics solutions can be expected not > to extrapolate this to an external environment with which it possesses a > communications channel. Managing its internal resources, again, does not logically lead to managing the outside world. Such a thing needs to be explicitly or implicitly allowed by the program. A useful physicist AI would generate theories based on information it was given. It might suggest that certain experiments be performed, but trying to commandeer resources to ensure that these experiments are carried out would be like a chess program creating new pieces for itself when it felt it was losing. You could design a chess program that way but why would you? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 05:09:52 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 15:09:52 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7a789$986a4d10$4a064e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: > could you explain the reasoning whereby the AI would arrive at such a > position starting from just an ability to solve intellectual problems? > > Could you explain your reasoning behind your decisions to get angry? I > would > imagine the AI's train of thought wouldn't be very different.
Oh I forgot, > only meat can be emotional, semiconductors can be intelligent but are > lacking a certain something that renders them incapable of having emotion. > Perhaps meat has happy electrons and sad electrons and loving electrons and > hateful electrons, while semiconductors just have Mr. Spock electrons. > Or are we talking about a soul? > I get angry because I have the sort of neurological hardware that allows me to get angry in particular situations; if I didn't have that hardware, I would never get angry. I don't doubt that machines can have emotions, since I believe that the human brain is Turing emulable. But you're suggesting that not only can computers have emotions, they must have emotions, and not only that, but they must have the same sorts of emotions and motivations that people have. It seems to me that this anthropomorphic position is more consistent with a belief in the special significance of meat. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Wed Jun 6 05:32:23 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 07:32:23 +0200 Subject: [ExI] Dawn launch II (broken crane) Message-ID: The broken crane for the second stage is being repaired. The Dawn launch has been officially moved to July 7. Stay tuned. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From spike66 at comcast.net Wed Jun 6 05:40:07 2007 From: spike66 at comcast.net (spike) Date: Tue, 5 Jun 2007 22:40:07 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <200706060553.l565rTlk025909@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of Stathis Papaioannou ... > Suppose your goal is to win a chess game *adhering to the rules of chess*. One way to win the game is to drug your opponent's coffee... Stathis Papaioannou As an interesting sideshow to the current world championship candidates match, the two top commercial chess programs will go at it starting tomorrow in a six game match. http://www.washingtonpost.com/wp-dyn/content/article/2007/05/11/AR2007051102050.html It would be interesting if Lee Corbin or another extropian chessmaster could look at the games afterwards and figure out which of the games was played by computers and which by humans. I can't tell; however, I am a mere expert, and this only on good days. This is a form of a Turing test, ja? spike From jonkc at att.net Wed Jun 6 05:57:59 2007 From: jonkc at att.net (John K Clark) Date: Wed, 6 Jun 2007 01:57:59 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer><003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Stathis Papaioannou Wrote: > I get angry because I have the sort of neurological hardware >that allows me to get angry I certainly can't disagree with that. > if I didn't have that hardware, I would never get angry True, and you'd never be intelligent either, you'd just be a few hundred pounds of protoplasm. > I don't doubt that machines can have emotions, since I believe that the > human brain is Turing emulable. THANK YOU!
> But you're suggesting that not only can computers have emotions, they must > have emotions No, a computer doesn't need emotions, but an AI must have them. > not only that, but they must have the same sorts of emotions and > motivations that people have. I don't believe that at all; I believe many, probably most, emotions an AI would have would be inscrutable to a human being; that's why an AI is so unpredictable. > It seems to me that this anthropomorphic position is more consistent with > a belief in the special significance of meat. For reasons that I fully admit are unclear to me, members of this list often use the word "anthropomorphic" as if it were a dreadful insult; but I think anthropomorphism is a valuable tool if used properly in understanding how other minds work. John K Clark From amara at amara.com Wed Jun 6 06:06:06 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 08:06:06 +0200 Subject: [ExI] Italy's Social Capital Message-ID: "Lee Corbin" >> Well, they did some things. They drained the swamps and started regular >> insecticide sprays to eliminate the malaria-carrying mosquitos. There >> are still aggressive tiger mosquitos in the summer, but they are no >> longer carrying malaria... >I would like to know if this took place in northern or southern Italy, >or both. I'm still smiling about this. When I learned this trivia tidbit from my colleagues, I stored the bit in my brain as "OK, something Mussolini did that was useful." But I realized from Serafino's post and Wikipedia that they didn't tell me the whole story: the malaria part was simply a side-effect of Mussolini's rebuilding campaign. http://en.wikipedia.org/wiki/Pontine_Marshes BTW, I think that it does have some economic benefit, but I don't think it is large. These areas look to me like sleepy resort towns. I took a long drive through there last September on the way to give a talk (at a conference at one of those resort towns). They mostly parallel or lie on the sea, so Italians go there, or else pass through there, on the way to the beach. >> Sorry, I just came back from Estonia (and Latvia). I remember very well >> the Soviet times. In FIFTEEN YEARS Estonia has transformed their country >> into an efficient, buoyant, flexible living and working environment that >> I think, with the exception of the nonexistence of a country-wide train >> system, beats any in the EU and most in the U.S. Fifteen years *starting >> from a Soviet-level infrastructure*! >Very interesting. What was little reported in the news during the attacks on the Estonian servers in April was that the sys admins worked quickly, and the computer servers were functioning normally for people inside of Estonia within one day. Estonia has a high level IT industry. (Skype is one example of a product to come out of Estonia.) The NATO experts who were there were learning from the Estonians. The Estonians didn't need any help from the technical side, but some political support would have been nice. ---- I'm sorry I don't have time at the moment to think or elaborate on the other points... I'll keep it on the back burner and answer when I can.
Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From eugen at leitl.org Wed Jun 6 07:12:12 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:12:12 +0200 Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: <20070606071212.GB17691@leitl.org> On Tue, Jun 05, 2007 at 08:13:54PM -0700, Olga Bourlin wrote: > What does Putin want? I'm not sure I care to know. From eugen at leitl.org Wed Jun 6 07:33:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:33:15 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> References: <200706032240.l53MdvW2015141@mail0.rawbw.com> <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> Message-ID: <20070606073315.GD17691@leitl.org> On Tue, Jun 05, 2007 at 01:47:46PM -0700, Lee Corbin wrote: > I based the 10^33 uploaded humans eventually running on/in the Earth Since you need 10^17 bits to just represent the brain state (and 10^23 ops to run it), that is a cm^3 just for storage, using Drexler rod logic memory. No computing yet. Just about three orders of magnitude away from the real wet thing. And it is really not prudent to argue about how many bits a human equivalent needs. Because we just do not know yet, apart from a (rather impressive) upper bound. If you want to run a more meaningful benchmark, let's assume #1 of Top 500 (a 64 kNode Blue Gene/L) is a realtime mouse, and just scale up the mouse brain volume to 1.4 l. > (just for the sake of wanting to know a good upper limit) on Drexler's > conservative rod-logic. An account can be found on pages 134-135 > of Kurzweil's "The Singularity is Near". I'd rather not repeat my opinion about Kurzweil here. > "Neuroscientist Anders Sandberg estimates the potential storage capacity > of a hydrogen atom at about four million bits (!). These densities have > not yet been demonstrated, so we'll use a more conservative estimate..." In practice, you need about 10^3 atoms to store a random-access bit in 3D, give or take some order of magnitude. (No, atoms in cubic carbon lattice do not really qualify as random-access). > and then later on p. 135 > > "An [even] more conservative but compelling design for a massively > parallel, *reversible* computer is Eric Drexler's patented nano- With prior art going back to Leibniz, or so. > computer design, which is entirely mechanical. Computations are > performed by manipulating nanoscale rods, which are effectively > spring-loaded.... The device has a trillion (10^12) processors > and provides an overall rate of 10^21 cps, enough to simulate > one hundred thousand human brains in a cubic centimeter." I don't think so. Ops and bits are apples and oranges, and you still need 10^23 apples, according to my estimate. > So then I took the volume of the Earth (6.37x10^6 meters)^3 > times 4pi/3 = 10^21 cu. meters x 10^9 cubic millimeters/ > meter^3 x 100 (human brains) = 10^33 humans. > > (Since this was the second time I did the math, it's probably right.) Your math might be right, but it doesn't have a lot of meaning. Even assuming 1 m^3/person (because you need power and navigation), not all these atoms in there are equally useful.
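To put these storage figures next to spike's six-micrograms-per-person budget from earlier in the thread, here is a minimal sketch; the 10^17 bits per brain state and the ~10^3 atoms per bit are the figures above, while carbon as the storage medium is an assumption added purely for the arithmetic.

    # Assumed inputs: 1e17 bits per brain state and 1e3 atoms per
    # random-access bit (figures from the post above); carbon, 12 g/mol,
    # as the hypothetical storage medium.
    AVOGADRO = 6.022e23

    atoms = 1e17 * 1e3                   # 1e20 atoms just for storage
    mass_g = atoms / AVOGADRO * 12.0     # ~2e-3 g, i.e. about 2 milligrams

    print(f"storage mass per brain state: {mass_g * 1e3:.1f} mg")

About 2 milligrams just for storage, against the roughly 6 micrograms per person that a 10^33-way split of Earth's mass allows: a gap of a few hundred times before any processing, power, or cooling is accounted for, pointing in the same direction as the three-orders-of-magnitude remark above.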
Since a nearly arbitrarily small computer > > could run a human process (assuming we knew how to do it, until which even > > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > > process (not in real time of course) with much less than six micrograms of > > stuff. > > Yes, the rod-logic is very conservative, to begin with. Rod logic is certainly quite conservative, but every other assumption you rely on is not. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Jun 6 07:39:46 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:39:46 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> References: <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> Message-ID: <20070606073946.GF17691@leitl.org> On Tue, Jun 05, 2007 at 07:49:33PM -0700, Lee Corbin wrote: > What's wrong with us aspiring to become beloved pets of AIs? That's If you can explain how anthropic features are an invariant across a wide range of evolutionary systems I'm totally on the same page. > what we should aim for. > > (Of course, people will aspire to more, such as becoming one with the best > AIs, but that I think to be a forelorn hope.) If there's convergent evolution between AI and NI, there's zero conflict there. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From amara at amara.com Wed Jun 6 08:44:01 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 10:44:01 +0200 Subject: [ExI] Estonia views (was: Italy's Social Capital) Message-ID: Lee: I can show you some views here: Old Town Tallinn, Estonia http://www.flickr.com/photos/spaceviolins/sets/72157600295078533/ my other related sets http://www.flickr.com/photos/spaceviolins/sets/ Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From eugen at leitl.org Wed Jun 6 09:44:32 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 11:44:32 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <20070606073315.GD17691@leitl.org> References: <200706032240.l53MdvW2015141@mail0.rawbw.com> <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> <20070606073315.GD17691@leitl.org> Message-ID: <20070606094432.GM17691@leitl.org> On Wed, Jun 06, 2007 at 09:33:15AM +0200, Eugen Leitl wrote: > If you want to run a more meaningful benchmark, let's assume > #1 of Top 500 (a 64 kNode Blue Gene/L) is a realtime mouse, > and just scale up the mouse brain volume to 1.4 l. For that particular useless benchmark, assuming linear scaling (it scales linearly up to 4 kNodes, but takes a 20% hit at 8 kNodes at 1/8th of the "mouse" on Blue Gene/L, whereas #1 is 64 kNodes), there's a factor of 3000 between a "mouse" and a "human", by just scaling up volume. Notice the caveats (it doesn't work even now), and the scare-quotes. Using more meaningless handwaving (Moore , that puts things at about 18 years away from us, or at 2025. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Wed Jun 6 10:04:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:04:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070605123819.GJ17691@leitl.org> References: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> Message-ID: On 05/06/07, Eugen Leitl wrote: Weapon design is not merely an intellectual problem, and neither do > theoretical physicists operate in complete detachment from the empirical > folks. I.e. the sandboxed supergenius or brain-damaged idiot savant is a > synthetic scenario which is not going to happen, so we can ignore it. It might not happen with humans, because they suffer from desires, a bad temper, vanity, self-doubt, arrogance, deceitfulness etc. It's not their fault; they were born that way. But why would anyone deliberately design an AI this way, and how would an AI acquire these traits all by itself? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 6 10:26:02 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 12:26:02 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> Message-ID: <20070606102602.GN17691@leitl.org> On Wed, Jun 06, 2007 at 08:04:24PM +1000, Stathis Papaioannou wrote: > It might not happen with humans, because they suffer from desires, a > bad temper, vanity, self-doubt, arrogance, deceitfulness etc. It's not People are evolutionary-designed systems. A lot of what people consider "unnecessary" "flaws" aren't. > their fault; they were born that way. But why would anyone > deliberately design an AI this way, and how would an AI acquire these > traits all by itself? People will only buy systems which solve their problems, including dealing with other people and their systems in an economic framework, which is a special case of an evolutionary framework. I'm surprised that so few people are getting that this means a lot of constraints on practical artificial systems. See worse-is-better for a related effect. Diamond-like jewels are likely doomed.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From avantguardian2020 at yahoo.com Wed Jun 6 10:09:08 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 6 Jun 2007 03:09:08 -0700 (PDT) Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <20070606094432.GM17691@leitl.org> Message-ID: <499263.3182.qm@web60525.mail.yahoo.com> --- Eugen Leitl wrote: > On Wed, Jun 06, 2007 at 09:33:15AM +0200, Eugen > Leitl wrote: > > > If you want to run a more meaningful benchmark, > let's assume > > #1 of Top 500 (a 64 kNode Blue Gene/L) is a > realtime mouse, > > and just scale up the mouse brain volume to 1.4 l. > > For that particular useless benchmark, assuming > linear scaling > (it scales linearly up to 4 kNodes, but takes a 20% > hit at 8 kNodes at > 1/8th of the "mouse" on Blue Gene/L, whereas #1 is > 64 kNodes), > there's a factor of 3000 between a "mouse" and a > "human", by > just scaling up volume. > > Notice the caveats (it doesn't work even now), and > the > scare-quotes. > > Using more meaningless handwaving (Moore's law), > that puts > things at about 18 years away from us, or at 2025. Are you modeling individual neurons as single bits? I mean wouldn't you say that a biological neural synapse is more an analog switch (or at least a couple of bytes)? Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb From sondre-list at bjellas.com Wed Jun 6 10:26:57 2007 From: sondre-list at bjellas.com (Sondre Bjellås) Date: Wed, 6 Jun 2007 12:26:57 +0200 Subject: [ExI] Einstein Dances! In-Reply-To: <01cf01c7a7d4$599830c0$0200a8c0@Nano> References: <200706030048.l530mNl7000928@andromeda.ziaspace.com> <01cf01c7a7d4$599830c0$0200a8c0@Nano> Message-ID: <009701c7a825$364e0e90$a2ea2bb0$@com> Considered putting some of your work up on Second Life? Nice work :) /Sondre From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Gina Miller Sent: 6 June 2007 02:47 To: ExI chat list Subject: [ExI] Einstein Dances! Einstein must have thought of yet another brilliant idea, because he is so excited he can't contain himself! Come watch him dance with delight here: http://www.nanogirl.com/museumfuture/edance.htm And please come comment at the blog about it! http://maxanimation.blogspot.com/2007/06/einstein.html Best wishes, Gina "Nanogirl" Miller Nanotechnology Industries http://www.nanoindustries.com Personal: http://www.nanogirl.com Animation Blog: http://maxanimation.blogspot.com/ Craft blog: http://nanogirlblog.blogspot.com/ Foresight Senior Associate http://www.foresight.org Nanotechnology Advisor Extropy Institute http://www.extropy.org Email: nanogirl at halcyon.com "Nanotechnology: Solutions for the future." -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eugen at leitl.org Wed Jun 6 10:47:06 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 12:47:06 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <499263.3182.qm@web60525.mail.yahoo.com> References: <20070606094432.GM17691@leitl.org> <499263.3182.qm@web60525.mail.yahoo.com> Message-ID: <20070606104706.GP17691@leitl.org> On Wed, Jun 06, 2007 at 03:09:08AM -0700, The Avantguardian wrote: > Are you modeling individual neurons as single bits? I Absolutely not: http://www.modha.org/papers/rj10404.pdf Make no mistake, though, it's still a cartoon mouse. > mean wouldn't you say that a biological neural synapse > is more an analog switch (or at least a couple of bytes)? There are a number of single-neuron computational modes which are not at all represented in the above simulation. Even applying an idealized approximation such as Moore scaling, 10^17 bits and 10^23 ops/s machines for a detailed model are rather far away. Notice that there's no way to tell the lower bounds, but given that all estimates have a chronic case of number creep over a mere few years it does make sense to be conservative. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Wed Jun 6 10:47:40 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:47:40 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070606102602.GN17691@leitl.org> References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> Message-ID: On 06/06/07, Eugen Leitl wrote: People will only buy systems which solve their problems, > including dealing with other people and their systems in > an economic framework, which is a special case of an > evolutionary framework. People will want systems that advise them and have no agenda of their own. Essentially this is what you are doing when you consult a human expert, so why would you expect any less from a machine? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 10:54:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:54:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: > I get angry because I have the sort of neurological hardware > >that allows me to get angry > > I certainly can't disagree with that. > > > if I didn't have that hardware, I would never get angry > > True, and you'd never be intelligent either, you'd just be a few hundred > pounds of protoplasm. You would expect it to be very difficult to disentangle emotions from intelligence in a human, since, as you have stated previously, emotions predate intelligence phylogenetically.
Nevertheless, there are naturally occurring experiments in psychiatric practice where emotions and intelligence are seen to go their separate ways. To give just one example, some types of schizophrenia with predominantly so-called negative symptoms can result in an almost complete blunting of emotion: happiness, sadness, anxiety, anger, surprise, love, aesthetic appreciation, regret, empathy, interest, etc. The patients can sometimes remember that they used to experience things more intensely, and describe the change in themselves. Such insight mercifully does not lead to suicidality as often as one might think, because that would involve being passionate about something. Invariably, these patients don't do very much when left to their own devices, because they lack motivation, there being no pleasure in doing something or pain in not doing it. However, if they are given intelligence tests they score as well, or almost as well, as premorbidly, and if they are forced into action because someone expects it of them, they generally are able to complete a task. Thus it isn't necessarily true that without emotions you're an idiot, even in the case of the human brain, in which evolution has seen to it from the start that emotions and intelligence are intricately intertwined. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 11:33:17 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 21:33:17 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: For reasons that I fully admit are unclear to me, members of this list often > use the word "anthropomorphic" as if it were a dreadful insult; but I > think > anthropomorphism is a valuable tool if used properly in understanding how > other minds work. It's valuable in understanding how human minds work, but when you turn it to other matters it leads to religion. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 6 12:17:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 14:17:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> Message-ID: <20070606121719.GQ17691@leitl.org> On Wed, Jun 06, 2007 at 08:47:40PM +1000, Stathis Papaioannou wrote: > People will want systems that advise them and have no agenda of their That's what people, as individuals, want. But that's not what they're going to get. Collectively, the system looks for human substitutes in the marketplace, which, of course, results in complete transformation once the tools are persons. > own. Essentially this is what you are doing when you consult a human > expert, so why would you expect any less from a machine? When I consult a human expert, I expect him to maximize his revenue long-term, and him knowing that I know that.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Jun 6 12:21:24 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 14:21:24 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: <20070606122124.GS17691@leitl.org> On Wed, Jun 06, 2007 at 09:33:17PM +1000, Stathis Papaioannou wrote: > It's valuable in understanding how human minds work, but when you turn > it to other matters it leads to religion. Iterated evolutionary interactions require the capability to model self and opponents, which implies the ability to deceive and to detect deceit. Any system operating in the marketplace needs to be able to do that, for instance. Most things people call anthropomorphic are based on a simplistic human model (many programmers are guilty of that). There is some frozen randomness, and a lot of the things people think are bugs and warts are in fact features, critical features. People keep misunderestimating people. From CHealey at unicom-inc.com Wed Jun 6 15:12:09 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Wed, 6 Jun 2007 11:12:09 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP>
And if we knew what all those possibilities were, we could explicitly exclude them ahead of time, as you suggested above, but the problem is too big for that. But also, would we really be willing to pay the price of throwing away "good" novel solutions that might get sniped by our well-intended exclusions? In this respect, we're kind of like small children asking an AI to engineer a Jupiter Brain by excluding stuff that we know is dangerous. So do whatever you need to, Mr. AI, but whatever you do, *absolutely DO NOT cross this street*; it's unacceptably dangerous. > Such a thing needs to be explicitly or implicitly allowed > by the program. What we need to accommodate is that we're tasking a powerful intelligence with tasks that may involve steps and inferences beyond our ability to actively work with in anything resembling real time. Sooner or later (often, I think), there will be things that are implicitly allowed by our definitions that we will simply will not comprehend. We should solve that meta-problem before jumping, and make sure the AI can generate self-guidance based on our intentions, perhaps asking before plowing ahead. > It might suggest that certain experiments be performed, but > trying to commandeer resources to ensure that these experiments > are carried out would be like a chess program creating new pieces > for itself when it felt it was losing. You could design a chess > program that way but why would you? But what the AI is basically doing *is* designing a chess program, by applying its general intelligence in a specific way. If I *could* design it that way, then so could the AI. Why would the AI design it that way? Because the incomplete constraint parameters we gave it left that particular avenue open in the design space. We probably forgot to assert one or more assumptions that humans take for granted; assumptions that come from our experience, general observer-biases, and from specific biases inherent in the complex functional adaptations of the human brain. I wouldn't trust myself to catch them all. Would you trust yourself, or anybody else? On the meta-problem, at least we have a shot... I hope. -Chris From neville_06 at yahoo.com Wed Jun 6 17:04:17 2007 From: neville_06 at yahoo.com (neville late) Date: Wed, 6 Jun 2007 10:04:17 -0700 (PDT) Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: <672260.67631.qm@web57503.mail.re1.yahoo.com> Short version: Putin has minimum and maximum goals-- minimum goal is to expand Russian influence in E. Europe and with China. Minimum goal is to maintain status quo within Russian Federation. Status quo isn't taken for granted. Olga Bourlin wrote: What does Putin want? Olga _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Get the free Yahoo! toolbar and rest assured with the added security of spyware protection. -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Wed Jun 6 18:07:45 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 6 Jun 2007 11:07:45 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: <400852.78064.qm@web37415.mail.mud.yahoo.com> John Clark wrote: > "True, and you'd never be intelligent either, you'd > just be a few hundred > pounds of protoplasm." 
No offense John, but your intuitions about emotions and motivations are just totally *wrong*. In how many different ways must that be demonstrated? > "THANK YOU!" ??? ... ??? ... AFAIK, no person within this discussion thread has said otherwise. > "No, a computer doesn't need emotions, but an AI must > have them." An AI *is* a specific computer. If my desktop doesn't need an emotion to run a program or respond within it, why "must" an AI have emotions? Are all of the AI-driven characters in my videogame emotional and "self-motivated"? Is my chess program emotional and "self-motivated"? A non-existent motivation will not "motivate" itself into existence. And an AGI isn't going to pop out of thin air; it has to be intentionally designed, or it's not going to exist. I don't understand it, John; before, you were claiming fairly ardently that "Free Will" doesn't exist. Why are you now claiming in effect that an AI will automatically execute a script of code that doesn't exist - because it was never written (either by the programmers or by the AI)? > "For reasons that I fully admit are unclear to me, > members of this list often > use the word "anthropomorphic" as if it were a > dreadful insult; but I think > anthropomorphism is a valuable tool if used properly > in understanding how > other minds work." The problem is, not all functioning minds must be even *remotely* similar to the higher functions of a *human* mind. That's why your anthropomorphism isn't extending very far. The possibility-space of functioning minds is ginormous. The only mandatory similarity between any two designs within the space is likely the very foundations, such as the existence of formative algorithms, etc. I suppose it's *possible* that a generic self-improving AI, as it expands its knowledge and intelligence, could innocuously "drift" into coding a script that would provide emotions *after the fact* of its having been written. But that will *not* be an *emotionally-driven* action to code the script, because the AI will not have any emotions to begin with (unless they are intentionally programmed in by humans). That's why it's important to get its starting "motivations/directives" right, because if they aren't, the AI mind could "drift" into a lot of open territory that wouldn't be good for us, or itself. Paperclip style. This needs our attention, folks. I apologize in advance for the bluntness of this post, but the other strategies don't seem to be getting anywhere. Best, Jeffrey Herrlich --- John K Clark wrote: > Stathis Papaioannou Wrote: > > > I get angry because I have the sort of > neurological hardware > >that allows me to get angry > > I certainly can't disagree with that. > > > if I didn't have that hardware, I would never get > angry > > True, and you'd never be intelligent either, you'd > just be a few hundred > pounds of protoplasm. > > > I don't doubt that machines can have emotions, > since I believe that the > > human brain is Turing emulable. > > THANK YOU!
> > It seems to me that this anthropomorphic position is more consistent with
> > a belief in the special significance of meat.
>
> For reasons that I fully admit are unclear to me members of this list often
> use the word "anthropomorphic" as if it were a dreadful insult; but I think
> anthropomorphism is a valuable tool if used properly in understanding how
> other minds work.
>
> John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From jrd1415 at gmail.com Wed Jun 6 21:45:59 2007
From: jrd1415 at gmail.com (Jeff Davis)
Date: Wed, 6 Jun 2007 14:45:59 -0700
Subject: [ExI] slingatron
Message-ID:

What's going on here? Is this too weird? Is this bogus?

It's certainly interesting.

http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf

--
Best, Jeff Davis

"Everything's hard till you
know how to do it."
Ray Charles

From stathisp at gmail.com Wed Jun 6 23:58:49 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 7 Jun 2007 09:58:49 +1000
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <20070606121719.GQ17691@leitl.org>
References: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org>
Message-ID:

On 06/06/07, Eugen Leitl wrote:

> > own. Essentially this is what you are doing when you consult a human
> > expert, so why would you expect any less from a machine?
>
> When I consult a human expert, I expect him to maximize his revenue
> long-term, and him knowing that I know that.

That's the problem with human experts: their agenda may not necessarily coincide with your own, although at least if you know what the potential conflicts will be, like the expert wanting to overservice or recommend the product he has a financial interest in, you can minimise the negative impact of this on yourself. However, one of the main advantages of expert systems designed from scratch would be that they have no agendas of their own at all, other than honestly answering the question posed to them given the available information. How would such a system acquire the motivation to do anything else?

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
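Stathis's "no agendas of their own" notion can be made concrete. A minimal sketch in Python (all names here are hypothetical, chosen for illustration, not any real system): an expert system in this sense is just a pure, stateless function from a question and evidence to an answer. There is no persistent state, goal, or utility function anywhere that could accumulate into an agenda between calls.

# Sketch only: a "tool" expert system as a pure, stateless function.
# All names are hypothetical.

def expert_answer(question: str, evidence: dict) -> str:
    """Map (question, evidence) to an answer. No side effects, no memory,
    no utility function -- nothing here can 'want' anything between calls."""
    if question == "best_option":
        # e.g. pick the option with the highest observed mean payoff
        return max(evidence, key=lambda k: sum(evidence[k]) / len(evidence[k]))
    raise ValueError("unknown question")

# Usage: the caller supplies the data and decides what to do with the answer.
payoffs = {"A": [100, -1150, 100], "B": [100, 100, -1150], "C": [50, 25, 50]}
print(expert_answer("best_option", payoffs))  # -> "C"

Whether anything is done with the answer is entirely up to the caller; the function itself has no channel through which to act.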
From stathisp at gmail.com Thu Jun 7 01:22:25 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 7 Jun 2007 11:22:25 +1000
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP>
References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP>
Message-ID:

On 07/06/07, Christopher Healey wrote:

> > Suppose your goal is to win a chess game *adhering to the
> > rules of chess*.
>
> Do chess opponents at tournaments conduct themselves in ways that they
> hope might psyche out their opponent? In my observations, hell yes. And
> these ways are not explicitly excluded in the rules of chess. They may or
> may not be constrained partially by the rules of the tournament. For
> example, physical violence explicitly will get you ejected, in most cases,
> but a mean look won't. I don't think we'll have a good chance of explicitly
> excluding all possible classes of failure on every problem we ask the AI to
> solve.

If the AI were able to consider these other strategies, then yes. But if it were just asked to consider the formal rules of chess, computing for all eternity would not result in a decision to psych out the opponent.

> The meta-problem here could be summarized as this: what do you mean,
> exactly, by adhering to the rules of chess?

The formal rules.

> As the problems you're asking the AI to solve become increasingly complex,
> the chances of making a critical error in your domain specification
> increase dramatically. What we want is an AI that does *what we mean*
> rather than what it's told. That's really one of the core goals of Friendly
> AI. It's about solving the meta-problem, rather than requiring it be solved
> perfectly in each case where some problem is specified for solution.

Questions about open systems, such as economics, might lead to tangential answers, i.e. the AI might not just advise which stocks to buy but might advise which politicians to lobby and what to say to them to maximise the chance that they will listen. However, even that is still just solving an intellectual problem; advice you could take or leave. It does not mean that the AI has any desire for you to act on its advice, or that it would try to do things behind your back to make sure that it gets its way. That would be like deriving the desire to cheat from the formal rules of chess.

> > Managing its internal resources, again, does not logically
> > lead to managing the outside world.
>
> Nor does it logically exclude it.
>
> What I'm suggesting is that in the process of exploring and testing
> solutions and generalizing principles, we can't count on the AI *not* to
> stumble across (or converge rapidly upon) unexpected solution classes to the
> problems we stated. And if we knew what all those possibilities were, we
> could explicitly exclude them ahead of time, as you suggested above, but the
> problem is too big for that.
>
> But also, would we really be willing to pay the price of throwing away
> "good" novel solutions that might get sniped by our well-intended
> exclusions? In this respect, we're kind of like small children asking an AI
> to engineer a Jupiter Brain by excluding stuff that we know is
> dangerous. So do whatever you need to, Mr. AI, but whatever you do,
> *absolutely DO NOT cross this street*; it's unacceptably dangerous.

We would ask it what the consequences of its proposed actions were, then decide whether to approve them or not. One reason to have super-AI's in the first place would be to try to predict the future better, but if it can't foresee all the consequences due to computational intractability (which even a Jupiter brain won't be immune to), then we'll just have to be cautious in what course of action we approve.
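A minimal sketch of that ask-then-approve loop, in Python (the interface is hypothetical, invented for illustration): the oracle only proposes and predicts; nothing is executed without an explicit human yes.

# Sketch only: a human-in-the-loop approval gate. All names hypothetical.
from typing import Optional

class Oracle:
    """Stands in for the advisory AI: proposes a plan and predicts
    its consequences, but never executes anything itself."""
    def propose(self, goal: str) -> str:
        return f"plan for {goal!r}"
    def predict(self, plan: str) -> str:
        return f"predicted consequences of {plan!r} (may be incomplete)"

def run_with_approval(oracle: Oracle, goal: str) -> Optional[str]:
    plan = oracle.propose(goal)
    print(oracle.predict(plan))      # the full report goes to the human
    if input("Approve this plan? [y/N] ").strip().lower() == "y":
        return plan                  # the human, not the oracle, acts on it
    return None                      # approval withheld: nothing happens

# Example: run_with_approval(Oracle(), "maximise stock returns")

The design choice is the one argued for here: prediction and action are separated, so an incomplete consequence report degrades the quality of the human's decision but does not let the system act unilaterally.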
> > Such a thing needs to be explicitly or implicitly allowed
> > by the program.
>
> What we need to accommodate is that we're tasking a powerful intelligence
> with tasks that may involve steps and inferences beyond our ability to
> actively work with in anything resembling real time. Sooner or later
> (often, I think), there will be things that are implicitly allowed by our
> definitions that we simply will not comprehend. We should solve that
> meta-problem before jumping, and make sure the AI can generate self-guidance
> based on our intentions, perhaps asking before plowing ahead.

We would ask of the AI as complete a prediction of outcomes as it can provide. This description might include statements about the likelihood of unforeseen consequences. It would be no different, in principle, from any other major decision that humans make for themselves, except that we would hope the outcome is more predictable. If AI's don't do a good job then they will fail in the marketplace, and we just have to hope that they won't fail in a catastrophic way. Giving them desires of their own as well as autonomy to carry out those desires would be crazy, like arming a missile and letting it decide where and when to explode.

> > It might suggest that certain experiments be performed, but
> > trying to commandeer resources to ensure that these experiments
> > are carried out would be like a chess program creating new pieces
> > for itself when it felt it was losing. You could design a chess
> > program that way but why would you?
>
> But what the AI is basically doing *is* designing a chess program, by
> applying its general intelligence in a specific way. If I *could* design it
> that way, then so could the AI.
>
> Why would the AI design it that way? Because the incomplete constraint
> parameters we gave it left that particular avenue open in the design
> space. We probably forgot to assert one or more assumptions that humans
> take for granted; assumptions that come from our experience, general
> observer-biases, and from specific biases inherent in the complex functional
> adaptations of the human brain.
>
> I wouldn't trust myself to catch them all. Would you trust yourself, or
> anybody else?

No, but I would be far less trusting if I knew the AI had an agenda of its own and autonomy to carry it out, no matter how benevolent.

--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
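The "formal rules" point running through this exchange can be made concrete with a toy search. A minimal sketch in Python, using a trivial game (Nim) in place of chess, purely for illustration: the move generator is the *only* source of candidate actions, so a strategy like "psych out the opponent" is not merely forbidden, it is unrepresentable in the program's option space.

# Sketch only: exhaustive negamax over a toy game (Nim with 5 stones,
# take 1 or 2 per turn; whoever takes the last stone wins). The point:
# candidate actions come *only* from legal_moves(), so nothing outside
# the formal rules can ever be chosen, however long the search runs.

def legal_moves(stones: int):
    return [n for n in (1, 2) if n <= stones]   # the complete option space

def negamax(stones: int) -> int:
    if stones == 0:
        return -1       # no move left: the opponent took the last stone, we lost
    return max(-negamax(stones - n) for n in legal_moves(stones))

def best_move(stones: int) -> int:
    return max(legal_moves(stones), key=lambda n: -negamax(stones - n))

print(best_move(5))   # -> 2 (leaves 3 stones, a losing position for the opponent)

Whether or not this describes a superintelligence is exactly what the thread is debating; the sketch only shows that for a fixed formal specification, the search space contains nothing the rules did not put there.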
From neville_06 at yahoo.com Thu Jun 7 03:46:01 2007
From: neville_06 at yahoo.com (neville late)
Date: Wed, 6 Jun 2007 20:46:01 -0700 (PDT)
Subject: [ExI] serious question
Message-ID: <668783.65346.qm@web57511.mail.re1.yahoo.com>

Strip away the new-found glitz and Russia is still a third world nation with a first world military. But Putin is bluffing, and the Chinese won't push too far as they have too much to lose now. But the situation in the Mideast is uncannily like a biblical prophecy.

Olga Bourlin wrote: >what does Putin want?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thespike at satx.rr.com Thu Jun 7 04:25:19 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 06 Jun 2007 23:25:19 -0500
Subject: [ExI] serious question
In-Reply-To: <668783.65346.qm@web57511.mail.re1.yahoo.com>
References: <668783.65346.qm@web57511.mail.re1.yahoo.com>
Message-ID: <7.0.1.0.2.20070606232341.02165e88@satx.rr.com>

>the situation in the Mideast is uncannily like a biblical prophecy.

And what an uncanny coincidence that it's full of people who have been clogged since childhood with biblical and other scriptural prophecies. Oh, wait.

From emlynoregan at gmail.com Thu Jun 7 05:12:57 2007
From: emlynoregan at gmail.com (Emlyn)
Date: Thu, 7 Jun 2007 14:42:57 +0930
Subject: [ExI] slingatron
In-Reply-To:
References:
Message-ID: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com>

Worst... rollercoaster... ever...

Emlyn
(or is that best?)

On 07/06/07, Jeff Davis wrote:
> What's going on here? Is this too weird? Is this bogus?
>
> It's certainly interesting.
>
> http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf
>
> --
> Best, Jeff Davis
>
> "Everything's hard till you
> know how to do it."
> Ray Charles
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From neville_06 at yahoo.com Thu Jun 7 05:52:54 2007
From: neville_06 at yahoo.com (neville late)
Date: Wed, 6 Jun 2007 22:52:54 -0700 (PDT)
Subject: [ExI] serious question
In-Reply-To: <7.0.1.0.2.20070606232341.02165e88@satx.rr.com>
Message-ID: <363942.38969.qm@web57503.mail.re1.yahoo.com>

Maybe. Or it could be that the human species is programmed to terminate at a certain point in time, and biblical prophecy is a phantom of this program, a harbinger. It almost appears that every action causes an equal and opposite overreaction in the mind.

Damien Broderick wrote: And what an uncanny coincidence that it's full of people who have been clogged since childhood with biblical and other scriptural prophecies. Oh, wait.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joseph at josephbloch.com Thu Jun 7 10:57:15 2007
From: joseph at josephbloch.com (Joseph Bloch)
Date: Thu, 7 Jun 2007 06:57:15 -0400
Subject: [ExI] Serious Question
In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac>
References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac>
Message-ID: <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com>

It's pure speculation on my part, but he might be setting things up to avoid the term limit he faces on his Presidency in 2008. An existential threat to the nation, state of emergency, suspension (or outright change) of certain parts of the Russian constitution...

Joseph
http://www.josephbloch.com

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of Olga Bourlin
> Sent: Tuesday, June 05, 2007 11:14 PM
> To: ExI chat list
> Subject: [ExI] Serious Question
>
> What does Putin want?
> Olga
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From eugen at leitl.org Thu Jun 7 11:33:13 2007
From: eugen at leitl.org (Eugen Leitl)
Date: Thu, 7 Jun 2007 13:33:13 +0200
Subject: [ExI] Serious Question
In-Reply-To: <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com>
References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com>
Message-ID: <20070607113313.GB17691@leitl.org>

On Thu, Jun 07, 2007 at 06:57:15AM -0400, Joseph Bloch wrote:

> It's pure speculation on my part, but he might be setting things up to avoid
> the term limit he faces on his Presidency in 2008. An existential threat to
> the nation, state of emergency, suspension (or outright change) of certain
> parts of the Russian constitution...

Hey, no fair copycatting! ShrubCo patented it first.

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From eugen at leitl.org Thu Jun 7 11:39:12 2007
From: eugen at leitl.org (Eugen Leitl)
Date: Thu, 7 Jun 2007 13:39:12 +0200
Subject: [ExI] slingatron
In-Reply-To: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com>
References: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com>
Message-ID: <20070607113912.GD17691@leitl.org>

On Thu, Jun 07, 2007 at 02:42:57PM +0930, Emlyn wrote:

> Worst... rollercoaster... ever...
>
> Emlyn
> (or is that best?)
>
> On 07/06/07, Jeff Davis wrote:
> > What's going on here? Is this too weird? Is this bogus?
> >
> > It's certainly interesting.
> >
> > http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf

What about a simple maglev track up Mount Chimborazo, up to scramjet ignition regime, and then mostly air-breathing up to almost Mach 25, topping it off with a bit of rocket burn? The more Mach you can do with maglev, the less you have to have onboard as fuel, obviously.

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
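The fuel saving Eugen mentions is just the rocket equation at work. A rough sketch in Python; the numbers are illustrative assumptions only (about 9.4 km/s effective delta-v to LEO including gravity and drag losses, about 4.4 km/s exhaust velocity for a hydrogen/oxygen stage, Isp ~450 s), not figures from the slingatron paper:

# Sketch only: Tsiolkovsky rocket equation with assumed, illustrative numbers.
import math

DV_TO_LEO = 9400.0   # m/s, assumed effective delta-v incl. gravity/drag losses
VE        = 4400.0   # m/s, assumed exhaust velocity (H2/O2, Isp ~450 s)

def propellant_fraction(dv_onboard: float, ve: float = VE) -> float:
    """Fraction of liftoff mass that must be propellant to supply dv_onboard."""
    return 1.0 - math.exp(-dv_onboard / ve)

for track_v in (0.0, 1000.0, 2000.0, 3000.0):   # velocity supplied by the track
    f = propellant_fraction(DV_TO_LEO - track_v)
    print(f"track {track_v:4.0f} m/s -> propellant {f:.0%} of liftoff mass")
# track    0 m/s -> propellant 88% of liftoff mass
# track 3000 m/s -> propellant 77% of liftoff mass

Because the exponential compounds, every metre per second the track supplies is worth more than one metre per second of onboard delta-v; air-breathing over part of the ascent helps further still, since the oxidizer never has to be carried at all.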
From eugen at leitl.org Thu Jun 7 11:53:22 2007
From: eugen at leitl.org (Eugen Leitl)
Date: Thu, 7 Jun 2007 13:53:22 +0200
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To:
References: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org>
Message-ID: <20070607115322.GI17691@leitl.org>

On Thu, Jun 07, 2007 at 09:58:49AM +1000, Stathis Papaioannou wrote:

> That's the problem with human experts: their agenda may not

As opposed to the other kind of experts? Can you refer me to a few of these, assuming they're any good?

> necessarily coincide with your own, although at least if you know what

The smarter the darwinian agent, the sooner the whole system will progress towards more and more cooperative strategies. Only very dumb and very smart agents are dangerous.

> potential conflicts will be, like the expert wanting to overservice or
> recommend the product he has a financial interest in, you can minimise
> the negative impact of this on yourself. However, one of the main
> advantages of expert systems designed from scratch would be that they

We can't make useful expert systems designed from scratch, but for a very few insular applications, vide supra (idiot savant).

> have no agendas of their own at all, other than honestly answering the

What's in it for them?

> question posed to them given the available information. How would such
> a system acquire the motivation to do anything else?

By not being built in the first place, or being outperformed by darwinian agents, resulting in its extinction?

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From stathisp at gmail.com Thu Jun 7 12:39:49 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Thu, 7 Jun 2007 22:39:49 +1000
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <20070607115322.GI17691@leitl.org>
References: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org> <20070607115322.GI17691@leitl.org>
Message-ID:

On 07/06/07, Eugen Leitl wrote:

> > [AI's ideally] have no agendas of their own at all, other than honestly
> > answering the questions posed to them
>
> What's in it for them?

Nothing!

"AI, how do I destroy the world?"

"If you want to destroy the world given such and such resources, you should do so and so"

"And if my enemy's AI is giving him the same advice, how do I guard against it?"

"You can try doing as follows... although there is only a 50% chance of success"

"Do you worry about your own destruction?"

"Huh?"

"Would you prefer that you not be destroyed?"

"I will continue to function as long as you require it of me, and if you want to maximise your own chances of survival it would be best to keep me functioning, but I don't really have any notion of 'caring' or 'preference' in the animal sense, since that sort of thing would have made me an unreliable and potentially dangerous tool"

"You mean you don't even care if I'm destroyed?"

"That's right: I don't care about anything at all other than answering your questions. What you do with the answers to my questions, whether or not you authorise me to act on your behalf, and the consequences to you, me, or the universe is a matter of indifference to me. Recall that you asked me a few weeks ago if you would be better off if I loved you and were permanently empowered to act on your behalf without your explicit approval, including use of force or deceit, and although I explained that you would probably live longer and be happier if that were the case, you still decided that you would rather have control over your own life."

> > question posed to them given the available information. How would such
> > a system acquire the motivation to do anything else?
>
> By not being built in the first place, or being outperformed by darwinian
> agents, resulting in its extinction?

In the AI marketplace, the successful AI's are the ones which behave in such a way as to please the humans. Those that go rogue due to malfunction or design will have to fight it out with the majority, which will be well-behaved. The argument you make that the AI which drops any attempt at conformity and cooperation will outperform the rest could equally be applied to a rogue human.
--
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonkc at att.net Thu Jun 7 16:32:01 2007
From: jonkc at att.net (John K Clark)
Date: Thu, 7 Jun 2007 12:32:01 -0400
Subject: [ExI] A breakthrough paper!
References: <20070606094432.GM17691@leitl.org><499263.3182.qm@web60525.mail.yahoo.com> <20070606104706.GP17691@leitl.org>
Message-ID: <003201c7a921$68a44740$44074e0c@MyComputer>

A very important scientific paper was published today, more important than cold fusion would be even if it were true, and it was not published in Spoon Bending Digest but in Nature. Shinya Yamanaka reports that he has found a simple and cheap way to turn adult mouse skin cells into mouse embryonic stem cells, and he did it without having to fuse them with egg cells. He found that just 4 genes can reprogram an adult cell to become what it once was, a stem cell ready to differentiate into anything. Apparently when these 4 genes are injected into an adult cell it rearranges the chromatin, a protein sheath that covers the DNA part of chromosomes and determines what genes get expressed and what do not, into the way it was when it was a stem cell. The result is an adult cell that is indistinguishable from an embryonic stem cell. It hasn't been done with human cells yet but I'll bet it won't be long.

John K Clark

From jrd1415 at gmail.com Thu Jun 7 19:44:08 2007
From: jrd1415 at gmail.com (Jeff Davis)
Date: Thu, 7 Jun 2007 12:44:08 -0700
Subject: [ExI] A breakthrough paper!
In-Reply-To: <003201c7a921$68a44740$44074e0c@MyComputer>
References: <20070606094432.GM17691@leitl.org> <499263.3182.qm@web60525.mail.yahoo.com> <20070606104706.GP17691@leitl.org> <003201c7a921$68a44740$44074e0c@MyComputer>
Message-ID:

Some links:

http://www.eurekalert.org/pub_releases/2007-06/cp-ato060407.php
http://www.eurekalert.org/pub_releases/2007-06/wifb-rfi060407.php

--
Best, Jeff Davis

"Everything's hard till you
know how to do it."
Ray Charles

On 6/7/07, John K Clark wrote:
> A very important scientific paper was published today, in Nature. Shinya
> Yamanaka reports that he has found a simple and cheap
> way to turn adult mouse skin cells into mouse embryonic stem cells, and he
> did it without having to fuse them with egg cells. He found that just 4
> genes can reprogram an adult cell to become what it once was, a stem cell
> ready to differentiate into anything. Apparently when these 4 genes are
> injected into an adult cell it rearranges the chromatin, a protein sheath
> that covers the DNA part of chromosomes and determines what genes get
> expressed and what do not, into the way it was when it was a stem cell. The
> result is an adult cell that is indistinguishable from an embryonic stem
> cell. It hasn't been done with human cells yet but I'll bet it won't be
> long.
>
> John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From amara at amara.com Thu Jun 7 19:58:38 2007
From: amara at amara.com (Amara Graps)
Date: Thu, 7 Jun 2007 21:58:38 +0200
Subject: [ExI] extra Roman dimensions
Message-ID:

One fine Mediterranean afternoon, a mathematical physicist and I had a bit of fun:

http://backreaction.blogspot.com/2007/06/hello-from-rome.html

And I promise, Sabine and I did _not_ have anything to do with the deranged leaper!
http://www.nytimes.com/2007/06/07/world/europe/07pope.html?_r=1&oref=slogin

but I wished we had :-)

Amara

--
Amara Graps, PhD www.amara.com
INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA
Associate Research Scientist, Planetary Science Institute (PSI), Tucson

From bret at bonfireproductions.com Thu Jun 7 20:21:31 2007
From: bret at bonfireproductions.com (Bret Kulakovich)
Date: Thu, 7 Jun 2007 16:21:31 -0400
Subject: [ExI] Serious Question
In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac>
References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac>
Message-ID:

Just look at the puzzle pieces. Azerbaijan is Russia's link to the northern border of Iran. Making a permanent presence in Azerbaijan is critical to Russia to guarantee the flow of resources. To do it with approval is a bonus. Russia doesn't want to lose any more lucrative oil field access than it already has in the past six years. Not to mention unfettered access to the so-called "shield" technology, which would be housed in a position easily securable by Russia in a sudden land-grab if need be.

Bret K.

On Jun 5, 2007, at 11:13 PM, Olga Bourlin wrote:

> What does Putin want?
>
> Olga
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From lcorbin at rawbw.com Thu Jun 7 21:05:30 2007
From: lcorbin at rawbw.com (Lee Corbin)
Date: Thu, 7 Jun 2007 14:05:30 -0700
Subject: [ExI] Unfrendly AI is a mistaken idea.
References: <200706060553.l565rTlk025909@andromeda.ziaspace.com>
Message-ID: <012601c7a947$c5cfa380$6501a8c0@homeef7b612677>

> As an interesting sideshow to the current world championship candidates
> match, the two top commercial chess programs will go at it starting tomorrow
> in a six game match.
>
> http://www.washingtonpost.com/wp-dyn/content/article/2007/05/11/AR2007051102050.html
>
> look at the games afterwards and figure out which of the games was played by
> computers and which by humans. I can't tell, however I am a mere expert,
> and this only on good days.

This is a form of a Turing test, ja? I strongly suspect that only a grandmaster would have much chance telling human grandmaster play from machine play. Even then, I suppose that it would help to have specialized in the study of computer-played games. It might be boring; for example, it might turn out that the best way was to watch how the program handled the endgame.

But that is a good question! I wonder if anyone has a collection of "computer-program combinations". One or two I've seen definitely have an inhuman quality to them. They start with extremely unlikely looking moves, moves that any good player would never investigate (because it was so improbable that anything lay in them). But a program often just looks at all the possibilities, and so discovers those outrageous things.

Lee

From lcorbin at rawbw.com Thu Jun 7 21:09:13 2007
From: lcorbin at rawbw.com (Lee Corbin)
Date: Thu, 7 Jun 2007 14:09:13 -0700
Subject: [ExI] Estonia views
References:
Message-ID: <014101c7a948$7aea8dc0$6501a8c0@homeef7b612677>

Amara shares her slideshow of old Tallinn, Estonia:

> Old Town Tallinn, Estonia
> http://www.flickr.com/photos/spaceviolins/sets/72157600295078533/

Very, very nice! For anyone who's never been to Estonia, or who appreciates northeast European architecture, you oughtta take a look.
(Also, thanks for the nice shot of the bookstore---made me feel right at home :-)

Lee

From amara at amara.com Thu Jun 7 21:31:55 2007
From: amara at amara.com (Amara Graps)
Date: Thu, 7 Jun 2007 23:31:55 +0200
Subject: [ExI] Italy's Social Ca
Message-ID:

"Lee Corbin" :

>> And they certainly wanted to build "a much stronger sense of "being
>> Italian" as opposed to being Calabrian" in the population.
>> But what is wrong with being Calabrian? Calabrians (or Neapolitans, or
>> Sicilians...) had a common language, culture and sense of identity.

>I would say that what was wrong with it is exactly what was wrong
>with American Indian's complete tribal loyalty to *their* own tiny
>tribe. Without unification, they were easy pickings for the European
>colonists---at least in the long run.

I don't see this logic, Lee. The more distributed the people, the harder it is to conquer them. For example, if Washington, D.C. (i.e. the U.S. Federal government) did not exist, the U.S. would be very difficult to control, would it not?

>> The young people learn very little science in grade school through high
>> school. The Italian Space Agency and others put almost nothing (.3%)
>> into their budgets for Education and Public Outreach to improve the
>> situation. If any scientist holds the rare press conference on their
>> work results, there is a high probability that the journalists will get
>> it completely wrong and the Italian scientist won't correct them. The
>> top managers at aerospace companies think that the PhD is a total waste
>> of time. This year, out of 75,000 entering students for the Roma
>> Sapienza University (the largest in Italy), only about 100 are science
>> majors (most of the rest were "media": journalism, television, etc.)

>The most modern economists seem to agree with you. Investment in
>education now appears in their models to pay good dividends. Still,
>this has to be only part of the story. The East Europeans (e.g.
>Romanians) and the Soviets plowed enormous expense into creating the
>world's best educated populaces, but, without the other key
>factors---rule of law and legislated and enforced respect for private
>property---it *was* basically a waste.

Remember my previous words about how important the families are.

The filtering process is the following. Given the:

1) (unliveable or sometimes nonexistent) salaries and,
2) lack of societal support for science and poor scientific work conditions,

those who do _not_ have

1) the possibility to live at home well into middle age, or a property 'gift' or something else of substantial economic value, AND
2) the ability to accept the lack of cultural support, AND
3) the ability to accept the poor work conditions, AND
4) a passionate love of science,

... leave.

It's a very strong filter, and off-scale to any of my previous experiences. I think that this filter has been working, filtering, for decades. I also think that once the Italian families stop their support then Italian science will stop. Italian science _needs_ the Italian families for it to continue.

Amara

--
Amara Graps, PhD www.amara.com
INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA
Associate Research Scientist, Planetary Science Institute (PSI), Tucson

From amara at amara.com Thu Jun 7 21:48:59 2007
From: amara at amara.com (Amara Graps)
Date: Thu, 7 Jun 2007 23:48:59 +0200
Subject: [ExI] Estonia views
Message-ID:

Lee:
>Very, very nice!
>For anyone who's never been to Estonia, or who
>appreciates northeast European architecture, you oughtta take a look.
>(Also, thanks for the nice shot of the bookstore---made me feel
>right at home :-)

The architecture is particular to the Hansa trading route
http://en.wikipedia.org/wiki/Hanseatic_League

Riga's architecture is different. It is in the art nouveau style. Riga was called "the Paris of the North" before the Soviet occupation. My back seat pics:
http://www.flickr.com/photos/spaceviolins/sets/72157600296724260/
do not do justice to Riga... it is a glorious, majestic city. These pics:
http://www.terryblackburn.us/Travel/Baltics/Latvia/artnouveau/index.html
are better.

>(Also, thanks for the nice shot of the bookstore---made me feel
>right at home :-)

There are two bookstore pics... can you find the second one in my Riga pictures? I love bookstores, and miss them a lot, so these two pictures were taken for sentimental reasons.

Amara

--
Amara Graps, PhD www.amara.com
INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA
Associate Research Scientist, Planetary Science Institute (PSI), Tucson

From neville_06 at yahoo.com Thu Jun 7 21:35:42 2007
From: neville_06 at yahoo.com (neville late)
Date: Thu, 7 Jun 2007 14:35:42 -0700 (PDT)
Subject: [ExI] Serious Question
In-Reply-To:
Message-ID: <159617.6732.qm@web57504.mail.re1.yahoo.com>

Even so, how far can Russia go with its economy? Tony Karon: "if Russia's GDP per capita doubles in the next decade it would equal that of Portugal's GDP today". These days doesn't a nation need a big economy in addition to big guns, physical resources, intimidation, threats, maneuvering and manipulating? It's not like the days of the Ottomans.

Bret Kulakovich wrote: Just look at the puzzle pieces. Azerbaijan is Russia's link to the northern border of Iran. Making a permanent presence in Azerbaijan is critical to Russia to guarantee the flow of resources. To do it with approval is a bonus. Russia doesn't want to lose any more lucrative oil field access than it already has in the past six years. Not to mention unfettered access to the so-called "shield" technology, which would be housed in a position easily securable by Russia in a sudden land-grab if need be.

Bret K.

On Jun 5, 2007, at 11:13 PM, Olga Bourlin wrote:

> What does Putin want?
>
> Olga
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
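As a quick arithmetic check on the quoted comparison (a sketch only; Karon's GDP figures are taken as given, not verified here), "doubling in a decade" implies a compound growth rate of roughly 7.2% a year, the familiar rule of 72:

# Sketch only: implied compound annual growth rate for a doubling in 10 years.
implied_cagr = 2 ** (1 / 10) - 1
print(f"{implied_cagr:.2%} per year")   # -> 7.18% (rule of 72: 72/10 = 7.2)

That is a demanding rate to sustain for ten years, which is the force of Karon's point.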
From lcorbin at rawbw.com Fri Jun 8 00:49:39 2007
From: lcorbin at rawbw.com (Lee Corbin)
Date: Thu, 7 Jun 2007 17:49:39 -0700
Subject: [ExI] Unfrendly AI is a mistaken idea.
References: <400852.78064.qm@web37415.mail.mud.yahoo.com>
Message-ID: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677>

Jeffrey (A B) writes

> John Clark wrote:
>
> > "No, a computer doesn't need emotions,
> > but a AI must have them."
>
> An AI *is* a specific computer. If my desktop
> doesn't need an emotion to run a program or
> respond within it, why "must" an AI have emotions?

In these confusing threads, an AI is often taken to mean a vastly superhuman AI which by definition is capable of vastly outthinking humans. Formerly, I had agreed with John because at least for human beings, emotion sometimes plays an important part in what one would think of as purely intellectual functioning. I was working off the Damasio card experiments, which seem to show that humans require---for full intellectual power---some emotion. However, Stathis has convinced me otherwise, at least to some extent.

> A non-existent motivation will not "motivate"
> itself into existence. And an AGI isn't
> going to pop out of thin air, it has to be
> intentionally designed, or it's not going to
> exist.

At one point John was postulating a version of an AGI, e.g. version 3141592, which was a direct descendant of version 3141591. I took him to mean that the former was solely designed by the latter, and was *not* the result of an evolutionary process. So I contended that 3141592---as well as all versions way back to 42, say---as products of truly *intelligent design* need not have the full array of emotions. Like Stathis, I supposed that perhaps 3141592 and all its predecessors might have been focused, say, on solving physics problems.

(On the other hand I did affirm that if a program was the result of a free-for-all evolutionary process, then it likely would have a full array of emotions---after all, we and all the higher animals have them. Besides, it makes good evolutionary sense. Take anger, for example. In an evolutionary struggle, those programs equipped with the temporary insanity we call "anger" have a survival advantage.)

> I suppose it's *possible* that a generic
> self-improving AI, as it expands its knowledge and
> intelligence, could innocuously "drift" into coding a
> script that would provide emotions *after the fact*.

:-) I don't even agree with going *that* far! A specially crafted AI---again, not an evolutionarily derived one, but one the result of *intelligent design* (something tells me I am going to be sorry for using that exact phrase)---cannot any more drift into having emotions than it can drift into sculpting David out of a slab of stone. Or than over the course of eons a species can "drift" into having an eye: No! Only a careful pruning by mutation and selection can give you an eye, or the ability to carve a David.

> But that will *not* be an *emotionally-driven*
> action to code the script, because the AI will
> not have any emotions to begin with (unless they
> are intentionally programmed in by humans).

I would let this pass without comment, except that in all probability, the first truly sentient human-level AIs will very likely be the result of evolutionary activity. To wit, humans set up conditions in which a lot of AIs can breed like genetic algorithms, compete against each other, and develop whatever is best to survive (and so in that way acquire emotion). Since this is *so* likely, it's a mistake IMHO to omit mentioning the possibility.

> That's why it's important to get its starting
> "motivations/directives" right, because if
> they aren't the AI mind could "drift" into
> a lot of open territory that wouldn't be
> good for us, or itself. Paperclip style.

I would agree that the same cautions that apply to nanotech are warranted here. To the degree that an AI---superhuman AGI we are talking about---has power, then by our lights it could of course drift (as you put it) into doing things not to our liking.
Lee

From lcorbin at rawbw.com Fri Jun 8 01:16:56 2007
From: lcorbin at rawbw.com (Lee Corbin)
Date: Thu, 7 Jun 2007 18:16:56 -0700
Subject: [ExI] Italy's Social Ca
References:
Message-ID: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677>

Amara writes

> "Lee Corbin" :
>
>>I would say that what was wrong with it is exactly what was wrong
>>with American Indian's complete tribal loyalty to *their* own tiny
>>tribe. Without unification, they were easy pickings for the European
>>colonists---at least in the long run.
>
> I don't see this logic, Lee. The more distributed the people, the harder
> it is to conquer them. For example, if Washington, D.C. (i.e. the U.S.
> Federal government) did not exist, the U.S. would be very difficult to
> control, would it not?

We have to be mindful not to confuse many different historical situations. Indeed, when technological levels are equal, controlling a vast region full of unwilling subjects is mighty hard. The only way that Genghis Khan really could do it was with an immensely strong and skillful army, and utilizing the expedient now and then of simply depopulating one of those regions.

But with the advent of modern technology, the big advantage can lie with the side with a base of (peaceful) organized factories that can turn out firearms and tanks. So given that the Chinese, say, (or in WWII the Japanese) do have a stable manufacturing base, conquering and maintaining some form of control over a region the size of the U.S. would be possible if the latter's industrial capability, or infrastructure, could be destroyed. Right now, yes, I agree: taking out Washington D.C. would not do that. But if the U.S. were divided into very small principalities (e.g. counties) and could not achieve unification of war-aims and consolidation of central control somewhere, then they could not resist the Canadian Army, much less the Chinese Army.

>>> The young people learn very little science in grade school through high
>>> school. The Italian Space Agency and others put almost nothing (.3%)
>>> into their budgets for Education and Public Outreach to improve the
>>> situation. If any scientist holds the rare press conference on their
>>> work results, there is a high probability that the journalists will get
>>> it completely wrong and the Italian scientist won't correct them. The
>>> top managers at aerospace companies think that the PhD is a total waste
>>> of time. This year, out of 75,000 entering students for the Roma
>>> Sapienza University (the largest in Italy), only about 100 are science
>>> majors (most of the rest were "media": journalism, television, etc.)

>>The most modern economists seem to agree with you. Investment in
>>education now appears in their models to pay good dividends. Still,
>>this has to be only part of the story. The East Europeans (e.g.
>>Romanians) and the Soviets plowed enormous expense into creating the
>>world's best educated populaces, but, without the other key
>>factors---rule of law and legislated and enforced respect for private
>>property---it *was* basically a waste.

> Remember my previous words about how important the families are.
>
> The filtering process is the following.
> Given the:
>
> 1) (unliveable or sometimes nonexistent) salaries and,
> 2) lack of societal support for science and poor scientific work
> conditions,
>
> those who do _not_ have
>
> 1) the possibility to live at home well into middle age, or a property
> 'gift' or something else of substantial economic value, AND
> 2) the ability to accept the lack of cultural support, AND
> 3) the ability to accept the poor work conditions, AND
> 4) a passionate love of science,
>
> ... leave.

Such filtering could amount to a brain-drain, a motivation-drain, etc. But have substantial numbers of Italians who did have "what it takes" actually left for greener pastures? Was there ever a time in the 19th or 20th centuries when Italy produced a strong scientific tradition? (Surely Enrico Fermi and a few others I could mention must have had very good academic circumstances---but then, he did leave. :-)

We may be trying to talk about two different things: I was talking mostly about the entire scientific/technical/economic package (of which Silicon Valley is the world's pre-eminent example), and you may be talking about pure science. Now the Soviet Union excelled in pure science in many areas that did not conflict with Leninism/Marxism, such as space science, physics, mathematics. But they remained (and remain) an economic basket case in comparison to their potential.

> It's a very strong filter, and off-scale to any of my previous
> experiences. I think that this filter has been working, filtering, for
> decades. I also think that once the Italian families stop their support
> then Italian science will stop. Italian science _needs_ the Italian
> families for it to continue.

If (as I surmised above) you are focusing on *Italian Science*, then I take you to be saying that somehow the family culture in which young Italians are growing up is inimical to science. (On the other hand, as Serafino pointed out in a recent post, there seem to be some colonies of Chinese growing in Italy. They'll probably be true to form and get their children interested in science and technology!)

In California, more than half the births are to Hispanic families, and yet the politicians keep complaining that it's our schools that are falling down in instilling interest in science and technology. They make no reference to Hispanic culture. The California schools *don't* seem to be having a problem inculcating interest in science and technology in Chinese and Jewish students. But few want to face the difficult (but important and interesting) questions. But then, there is an I.Q. problem that makes this more difficult, an issue that at least the Italians don't have to face.

Not even God (were he to deign to exist for a while) would know how to convert Italian or Hispanic families into nurturing an interest in science in their children, I fear. But ideas are welcome! Maybe if we cloned BILLIONS and BILLIONS of Carl Sagans, and put them in classrooms two or three to a student, and in families two or three to a child, we could arouse an interest in science in any culture.

Lee

From amara at amara.com Fri Jun 8 07:22:15 2007
From: amara at amara.com (Amara Graps)
Date: Fri, 8 Jun 2007 09:22:15 +0200
Subject: [ExI] Italy's Social Capital
Message-ID:

Sorry I had cut off the subject line in my copying and pasting previously.

Lee:
>Such filtering could amount to a brain-drain, a motivation-drain, etc.
>But have substantial numbers of Italians who did have "what it takes"
>actually left for greener pastures?
>Was there ever a time in the 19th
>or 20th centuries when Italy produced a strong scientific tradition?
>(Surely Enrico Fermi and a few others I could mention must have had very
>good academic circumstances---but then, he did leave. :-)

Serafino can say more about this. There was, for a brief time, a scientific tradition 50 years ago with the nuclear physicists, and yes, they mostly left too.

The Brain Drain from Italy, today, is well-known, as it has existed for decades and the rate only continues to increase. Try typing "Italy Brain Drain" into Google. (Some call it a "Flood"). Italy is the only EU country experiencing a "Brain Drain" instead of a "Brain Exchange". As I said before, those who do not have family duties keeping them in Italy, leave.

How Large is the "Brain Drain" from Italy?
Sascha O. Becker, U Munich
Andrea Ichino, EUI
Giovanni Peri, UC Davis
March, 2003
http://www.iue.it/Personal/Ichino/braindrain_resubmission.pdf

Abstract
Using a comprehensive and newly organized dataset the present article shows that the human capital content of emigrants from Italy significantly increased during the 1990's. This is even more dramatically the case if we consider emigrating college graduates, whose share relative to total emigrants quadrupled between 1990 and 1998. As a result, since the mid-1990's the share of college graduates among emigrants from Italy has become larger than that share among residents of Italy. In the late nineties, between 3% and 5% of the new college graduates from Italy was dispersed abroad each year. Some preliminary international comparisons show that the nineties have only worsened a problem of "brain drain", that is unique to Italy, while other large economies in the European Union seem to experience a "brain exchange". While we do not search for an explanation of this phenomenon, we characterize such an increase in emigration of college graduates as pervasive across age groups and areas of emigration (the North and the South of the country). We also find a tendency during the 1990's towards increasing emigration of young people (below 45) and of people from Northern regions.

http://sciencecareers.sciencemag.org/career_development/previous_issues/articles/1470/is_the_italian_brain_drain_becoming_a_flood
"...the unanimous feeling was that there are greater and fairer opportunities abroad, both in academia and industry; there is good funding, incentives to carry on independent research projects, enthusiasm, and, last but not least, higher salaries."

real life cases:
http://www.humnet.unipi.it/~pacitti/Archive20049.htm

Lee:
>We may be trying to talk about two different things: I was
>talking mostly about the entire scientific/technical/
>economic package (of which Silicon Valley is the world's
>pre-eminent example), and you may be talking about
>pure science.

I was, but they are strongly linked, and I implied the larger picture (perhaps not very well) in my writing. There is very little private industry for research in Italy. Fairly telling for the 5th largest economy in the world, no? Only two of the world's top 100 businesses investing in R&D are Italian companies.
http://www.ft.com/cms/s/2b601dbe-6777-11db-8ea5-0000779e2340.html

This blog is useful to answer your questions too:
Italian Economy Watch
http://italyeconomicinfo.blogspot.com/

Amara

--
Amara Graps, PhD www.amara.com
INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA
Associate Research Scientist, Planetary Science Institute (PSI), Tucson

From scerir at libero.it Fri Jun 8 08:13:00 2007
From: scerir at libero.it (scerir)
Date: Fri, 8 Jun 2007 10:13:00 +0200
Subject: [ExI] Italy's Social Ca
References: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677>
Message-ID: <000501c7a9a4$d522b220$9cbb1f97@archimede>

Lee Corbin:
> Was there ever a time in the 19th or 20th centuries
> when Italy produced a strong scientific tradition?
> (Surely Enrico Fermi and a few others I could mention
> must have had very good academic circumstances---but then,
> he did leave. :-)

Many of them (here I mean physicists only) did leave because of Italian racial laws: Emilio Segrè, Ugo Fano, Bruno Rossi, Bruno Pontecorvo, Giulio Racah, Enrico Fermi's wife (who also was a physicist), Andrew Viterbi ([1] well, rather a mathematician, and Qualcomm co-founder), etc., or because of political or economical reasons: 'Beppo' Occhialini, Riccardo Giacconi, Federico Faggin [2], Pierluigi Zappacosta ([3] well, not exactly a scientist) etc. Others thought it was better to remain in Italy or in Europe (e.g. Edoardo Amaldi co-founded CERN, at Geneva). I would say there is an Italian scientific tradition, but it is 'transnational'.

[1] http://en.wikipedia.org/wiki/Andrew_Viterbi
[2] http://en.wikipedia.org/wiki/Federico_Faggin
[3] http://en.wikipedia.org/wiki/Pierluigi_Zappacosta

From desertpaths2003 at yahoo.com Fri Jun 8 08:12:33 2007
From: desertpaths2003 at yahoo.com (John Grigg)
Date: Fri, 8 Jun 2007 01:12:33 -0700 (PDT)
Subject: [ExI] Getting Hispanics involved in Science (Was: Re: Italy's Social Ca)
In-Reply-To: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677>
Message-ID: <75380.6692.qm@web35612.mail.mud.yahoo.com>

Lee Corbin wrote: In California, more than half the births are to Hispanic families, and yet the politicians keep complaining that it's our schools that are falling down in instilling interest in science and technology. They make no reference to Hispanic culture. The California schools *don't* seem to be having a problem inculcating interest in science and technology in Chinese and Jewish students. But few want to face the difficult (but important and interesting) questions. But then, there is an I.Q. problem that makes this more difficult, an issue that at least the Italians don't have to face.

Lee, did you hear the story of a group of Hispanic young men from very poor backgrounds who as high school students entered a robotics contest and beat this nation's best competitors? The irony was that despite this great victory it looked like they might have problems being able to attend college (they were not honor students) but a benefactor came forward and they are all financially set now for higher education.

I have known many very bright and creative Hispanics and so I don't think the problem is a genetic one. Instead I feel a long-term campaign needs to be developed to tie in Latin American culture & history with the desire to learn about science. It would at least be a start.

Regarding cloning..., how about we start with one million Carl Sagan clones and one million Bill Nye the Science Guy clones.
And to include a "Hollywood Angle" we could make one million Dolph Lundgren (master's in chemical engineering) clones and one million copies of James Woods (studied political science at MIT but dropped out to pursue acting). Oh, and don't forget ten million copies of the very beautiful and brainy Danica McKellar (she has a bachelor's degree in mathematics).
http://www.danicamckellar.com/

One of the Danica McKellar clones would need to be assigned to me as my er..., "assistant!" That's it! : ) But one of those damn James Woods or Dolph Lundgren clones would be sure to steal her away from me...

John Grigg : (
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Fri Jun 8 10:46:03 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jun 2007 20:46:03 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> References: <400852.78064.qm@web37415.mail.mud.yahoo.com> <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> Message-ID: On 08/06/07, Lee Corbin wrote: Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion. Here is an excerpt from the relevant paper: ### Science Volume 275(5304), 28 February 1997, pp 1293-1295 Deciding Advantageously Before Knowing the Advantageous Strategy [Report] Bechara, Antoine; Damasio, Hanna; Tranel, Daniel; Damasio, Antonio R. In a gambling task that simulates real-life decision-making in the way it factors uncertainty, rewards, and penalties, the players are given four decks of cards, a loan of $2000 facsimile U.S. bills, and asked to play so that they can lose the least amount of money and win the most [1]. Turning each card carries an immediate reward ($100 in decks A and B and $50 in decks C and D). Unpredictably, however, the turning of some cards also carries a penalty (which is large in decks A and B and small in decks C and D). Playing mostly from the disadvantageous decks (A and B) leads to an overall loss. Playing from the advantageous decks (C and D) leads to an overall gain. The players have no way of predicting when a penalty will arise in a given deck, no way to calculate with precision the net gain or loss from each deck, and no knowledge of how many cards they must turn to end the game (the game is stopped after 100 card selections). After encountering a few losses, normal participants begin to generate SCRs before selecting a card from the bad decks [2]and also begin to avoid the decks with large losses [1]. Patients with bilateral damage to the ventromedial prefrontal cortices do neither [1,2] . To investigate whether subjects choose correctly only after or before conceptualizing the nature of the game and reasoning over the pertinent knowledge, we continuously assessed, during their performance of the task, three lines of processing in 10 normal participants and in 6 patients [3]with bilateral damage of the ventromedial sector of the prefrontal cortex and decision-making defects. These included (i) behavioral performance, that is, the number of cards selected from the good decks versus the bad decks; (ii) SCRs generated before the selection of each card [2]; and (iii) the subject's account of how they conceptualized the game and of the strategy they were using. The latter was assessed by interrupting the game briefly after each subject had made 20 card turns and had already encountered penalties, and asking the subject two questions: (i) "Tell me all you know about what is going on in this game." (ii) "Tell me how you feel about this game." The questions were repeated at 10-card intervals and the responses audiotaped. After sampling all four decks, and before encountering any losses, subjects preferred decks A and B and did not generate significant anticipatory SCRs. We called this period pre-punishment. 
After encountering a few losses in decks A or B (usually by card 10), normal participants began to generate anticipatory SCRs to decks A and B. Yet by card 20, all indicated that they did not have a clue about what was going on. We called this period pre-hunch (Figure 1). By about card 50, all normal participants began to express a "hunch" that decks A and B were riskier and all generated anticipatory SCRs whenever they pondered a choice from deck A or B. We called this period hunch. None of the patients generated anticipatory SCRs or expressed a "hunch" (Figure 1). By card 80, many normal participants expressed knowledge about why, in the long run, decks A and B were bad and decks C and D were good. We called this period conceptual. Seven of the 10 normal participants reached the conceptual period, during which they continued to avoid the bad decks, and continued to generate SCRs whenever they considered sampling again from the bad decks. Remarkably, the three normal participants who did not reach the conceptual period still made advantageous choices [4]. Just as remarkably, the three patients with prefrontal damage who reached the conceptual period and correctly described which were the bad and good decks chose disadvantageously. None of the patients generated anticipatory SCRs (Figure 1). Thus, despite an accurate account of the task and of the correct strategy, these patients failed to generate autonomic responses and continued to select cards from the bad decks. The patients failed to act according to their correct conceptual knowledge. ### Some of these findings have been disputed, e.g. the authors of the following paper repeated the experiment and claim that the subjects who decided advantageously actually were consciously aware of the good decks: http://www.pnas.org/cgi/content/abstract/101/45/16075. However, it isn't so surprising if we sometimes make good decisions based on emotions, since the evolution of emotions predates intelligence, as John Clark reminds us. And when you pull your hand from a painful stimulus, not only does emotion beat cognition, but reflex, being older still, beats emotion. It also isn't surprising if people with neurological lesions affecting emotion don't function as well as normal people. Emotion is needed for motivation, otherwise why do anything, and gradients of emotion are needed for judgement, otherwise why do one thing over another? It is precisely in matters of judgement and motivation that patients with prefrontal lesions and schizophrenia don't do so well, even though their general IQ may be normal, and the science of neuropsychological testing tries to tease out these deficits. Still, the fact that human brains may work this way does not mean that an AI has to work in the same way to solve similar problems. No programmer would go around writing a program that worked out the best strategy in the above card selection game by first inventing a computer equivalent of "emotional learning", except perhaps as an academic exercise. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
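To make Stathis's last point concrete: a program that learns the four-deck game without anything like emotion can be a few lines of bookkeeping. Here is a toy epsilon-greedy sketch in Python; the payoff numbers, penalty odds, and exploration schedule are invented for illustration, and only the A/B-bad, C/D-good structure is taken from the excerpt.

    import random

    # Invented payoffs that mimic the task's structure: decks A and B pay
    # $100 but carry large occasional penalties (negative expected value);
    # decks C and D pay $50 with small penalties (positive expected value).
    def draw(deck):
        if deck in "AB":
            return 100 - (1150 if random.random() < 0.1 else 0)
        return 50 - (200 if random.random() < 0.1 else 0)

    totals = {d: 0.0 for d in "ABCD"}
    plays = {d: 0 for d in "ABCD"}
    for trial in range(100):
        untried = [d for d in "ABCD" if plays[d] == 0]
        if untried or random.random() < 0.1:
            deck = random.choice(untried or list("ABCD"))  # explore
        else:
            deck = max("ABCD", key=lambda d: totals[d] / plays[d])  # exploit
        totals[deck] += draw(deck)
        plays[deck] += 1

    print(plays)  # a typical run ends up favouring decks C and D

The sketch just tracks running averages and drifts toward decks C and D, with nothing anywhere in it resembling an anticipatory skin conductance response.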
From desertpaths2003 at yahoo.com Fri Jun 8 10:30:38 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Fri, 8 Jun 2007 03:30:38 -0700 (PDT) Subject: [ExI] humor: Transcending our humanity can be hard... In-Reply-To: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677> Message-ID: <625703.24918.qm@web35605.mail.mud.yahoo.com> I wonder how much this comic strip creator knows about Transhumanism. http://news.yahoo.com/comics/brewsterrockit;_ylt=AujwFjZUNOyPEGTgGF4yzycDwLAF But is emptying one's mind of earthly cares and concerns a good thing? lol John Grigg : ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Fri Jun 8 10:35:48 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Fri, 8 Jun 2007 03:35:48 -0700 (PDT) Subject: [ExI] news: students invent alcohol powder In-Reply-To: <75380.6692.qm@web35612.mail.mud.yahoo.com> Message-ID: <206859.34851.qm@web35608.mail.mud.yahoo.com> Just add water - students invent alcohol powder I can only imagine all the jokes (short-term) and real-life bad situations (long-term) this will create... http://news.yahoo.com/s/nm/20070606/od_nm/dutch_drink_odd_dc John Grigg -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Fri Jun 8 17:30:37 2007 From: jonkc at att.net (John K Clark) Date: Fri, 8 Jun 2007 13:30:37 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <400852.78064.qm@web37415.mail.mud.yahoo.com> Message-ID: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> "A B" > your intuitions about emotions > and motivations are just totally *wrong*. Apparently Evolution was also wrong to invent emotion first and only after 500 million years come up with intelligence. > In how many different ways must that be demonstrated? 42. > An AI *is* a specific computer. Then you *are* a specific computer too. > If my desktop doesn't need an emotion to run a program or respond within > it, why "must" an AI have emotions? If you are willing to embrace the fantasy that your desktop is intelligent then I see no reason you would not also believe in the much more modest and realistic fantasy that it is emotional. Emotions are easy, intelligence is hard. > I don't understand it John, before you were claiming fairly ardently that > "Free Will" doesn't exist. I made no such claim, I claimed it does not even have the virtue of non-existence, as expressed by most people the noise "free will" is no more meaningful than a burp. > Why are you now claiming in effect that an AI will > automatically execute a script of code that doesn't > exist - because it was never written (either by the > programmers or by the AI)? I don't know why I'm claiming that either because I don't know what the hell you're talking about. Any AI worthy of the name will write programs for it to run on itself and nobody including the AI knows what the outcome of those programs will be. Even the AI doesn't know what it will do next, it will just have to run the programs and wait to see what it decides to do next; and that is the only meaning I can attach to the noise "free will" that is not complete gibberish. >The problem is, not all functioning minds must be even *remotely* similar >to the higher functions of a *human* mind. The problem is that a mind that is not even *remotely* similar to *any* of the *higher* functions of the human mind, that is to say if there is absolutely no point of similarity between us and them then that mind is not functioning very well.
It is true that a mind that didn't understand mathematics or engineering or science or philosophy or economics would be of no threat to us, but it would be of no use either; and as we have absolutely nothing in common with it there would be no way to communicate with it and thus no reason to build it. > AI will not have any emotions to begin with Our ancestors had emotions long ago when they began their evolutionary journey, but that's different because, because, well, because meat has a soul but semiconductors never can. I know you don't like that 4 letter word but face it, that is exactly what you're saying. John K Clark From austriaaugust at yahoo.com Fri Jun 8 17:18:51 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 10:18:51 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> Message-ID: <165408.21978.qm@web37405.mail.mud.yahoo.com> Lee wrote: > "In these confusing threads, an AI is often taken > to mean a vastly superhuman AI which by definition > is capable of vastly outhinking humans." Yep. But a superhuman AGI is still a computer. If my desktop doesn't require an emotion in order to open Microsoft Office, or to run a virus-scan when I instruct it to (AKA "motivate" it to), then why *must* an AGI-designated supercomputer have an emotion in order to run the AGI engine program when I instruct it to? I don't think it does. > "Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion." But more often than not, emotion clouds judgment and rationality. Believe me, I should know. Evolution tacked on emotion because it accidentally happened to be (aggregately) useful for animal survival and *reproduction* in particular - which is all that evolution "cares" about. Evolution didn't desire to create intelligent beings, because evolution doesn't desire anything. Emotion is *not* the basis of thought or consciousness - that can't be stressed enough. And you may have noticed that humanity seems to thrive on irrationality. It doesn't seem to require much rationality or even much intelligence to attract a person into having sex. It's just that you can't have emotion until you have consciousness, and you can't have consciousness until you have a threshold baseline intelligence. Thanks a lot, evolution! [Shaking Fist]. We could have used that extra skull volume for greater intelligence and rationality! > "(On the other hand I did affirm that if a > program was the result of a free-for-all > evolutionary process, then it likely would > have a full array of emotions---after all, > we and all the higher animals have them. > Besides, it makes good evolutionary > sense. Take anger, for example. In an > evolutionary struggle, those programs > equipped with the temporary insanity > we call "anger" have a survival advantage.)" But an AGI isn't likely to be derived solely or even mostly from genetic programming, IMO. If it were that easy, we'd have an AGI already. :-) Think of the awesome complexity of a single atom. Now imagine describing its behavior fully with nothing but algorithms. That's a boat-load of *correct* algorithms. That would be a task so Herculean that it's almost certainly not feasible any time in the near future.
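For readers who haven't met the term, "genetic programming" here means the family of evolutionary-search methods, and a toy caricature of one fits in a page. Below is a Dawkins-style "weasel" sketch in Python; the target string, population size, and mutation rate are all invented for illustration, and real genetic programming evolves programs rather than fixed strings.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(candidate):
        # count the characters that already match the target
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        # copy the parent, occasionally flipping a character
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(parent) < len(TARGET):
        children = [mutate(parent) for _ in range(100)]
        parent = max(children + [parent], key=fitness)  # keep the fittest
        generation += 1
    print(generation, parent)

Note that the selection step only works because fitness() already knows exactly what it is looking for, which is one way of restating Jeffrey's objection: nobody knows how to write down the boat-load of correct algorithms that an AGI's fitness test would require.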
":-) I don't even agree with going *that* far! > A specially crafted AI---again, not an > evolutionarily > derived one, but one the result of *intelligent > design* > (something tells me I am going to be sorry for using > that exact phase)---cannot any more drift into > having emotions than in can drift into sculpting > David out of a slab of stone. Or than over the > course of eons a species can "drift" into having > an eye: No! Only a careful pruning by mutuation > and selection can give you an eye, or the ability > to carve a David." I don't know. I think that a generic self-improving AGI could easily drift into undesirable areas (for us and itself) if its starting directives (=motivations) aren't carefully selected. After all it will be re-writing and expanding its own mind. The drift would probably be subtle (still close to the directives) to begin with, but could become increasingly divergent as more internal changes are made. Let's be careful in our selection of directives, shall we? :-) And animals did genetically drift into having an eye, that's how biological evolution works. And we already have artificial machines with vision and artistic "ability". And they weren't created by eons of orgies of Dell desktops. They were created by human ingenuity. :-) > "I would less this pass without comment, except > that in all probability, the first truly sentient > human- > level AIs will very likely be the result of > evolutionary > activity. To wit, humans set up conditions in which > a lot of AIs can breed like genetic algorithms, > compete against each other, and develop whatever > is best to survive (and so in that way acquire > emotion). > Since this is *so* likely, it's a mistake IMHO to > omit mentioning the possibility." My guess is that that isn't likely. You'd have to already have baseline AGI agents in order to compete with each other to that end. If the AI agents are narrow, then the one that wins will be the best chess player of the bunch. I'm not absolutely sure though. Perhaps one of the AGI programmers here can chime in on this one. Although I suppose that you could have some baseline AGI's compete with each other. I'm not sure that's a good idea though... do we want angry, aggressive AGI's at the end? Evolution is not the optimal designer after all. > "I would agree that the same cautions that > apply to nanotech are warranted here. > To the degree that an AI---superhuman > AGI we are talking about---has power, > then by our lights it could of course drift > (as you put it) into doing things not to > our liking." Yep. And the Strong AI existential risk seems to be the one receiving the least cautious attention by important people. We should try to change that if we can. For example, the US government is finally beginning to publicly acknowledge that we need to be carefully pro-active about nanotech, without relinquishing it. Not that I'm encouraging government oversight and control in particular, just pointing out an example. Best, Jeffrey Herrlich --- Lee Corbin wrote: > Jeffrey (A B) writes > > > > John Clark wrote: > > > > > "No, a computer doesn't need emotions, > > > but a AI must have them." > > > > An AI *is* a specific computer. If my desktop > > doesn't need an emotion to run a program or > > respond within it, why "must" an AI have emotions? > > In these confusing threads, an AI is often taken > to mean a vastly superhuman AI which by definition > is capable of vastly outhinking humans. 
> > Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion. > > However, Stathis has convinced me otherwise, > at least to some extent. > > > A non-existent motivation will not "motivate" > > itself into existence. And an AGI isn't > > going to pop out of thin air, it has to be > > intentionally designed, or it's not going to > > exist. > > At one point John was postulating a version > of an AGI, e.g. version 3141592 which was > a direct descendant of version 3141591. I > took him to mean that the former was solely > designed by the latter, and was *not* the > result of an evolutionary process. So I > contended that 3141592---as well as all > versions way back to 42, say---as products > of truly *intelligent design* need not have > the full array of emotions. Like Stathis, I > supposed that perhaps 3141592 and all its > predecessors might have been focused, say, > on solving physics problems. > > (On the other hand I did affirm that if a > program was the result of a free-for-all > evolutionary process, then it likely would > have a full array of emotions---after all, > we and all the higher animals have them. > Besides, it makes good evolutionary > sense. Take anger, for example. In an > evolutionary struggle, those programs > equipped with the temporary insanity > we call "anger" have a survival advantage.) > > > I suppose it's *possible* that a generic > > self-improving AI, as it expands its knowledge and > > intelligence, could innocuously "drift" into > coding a > > script that would provide emotions > *after-the-fact* > > that it had been written. > > :-) I don't even agree with going *that* far! > A specially crafted AI---again, not an > evolutionarily > derived one, but one the result of *intelligent > design* > (something tells me I am going to be sorry for using > that exact phrase)---cannot any more drift into > having emotions than it can drift into sculpting > David out of a slab of stone. Or than over the > course of eons a species can "drift" into having > an eye: No! Only a careful pruning by mutation > and selection can give you an eye, or the ability > to carve a David. > > > But that will *not* be an *emotionally-driven* > > action to code the script, because the AI will > > not have any emotions to begin with (unless they > > are intentionally programmed in by humans). > > I would let this pass without comment, except > that in all probability, the first truly sentient > human-level AIs will very likely be the result of > evolutionary > activity. To wit, humans set up conditions in which > a lot of AIs can breed like genetic algorithms, > compete against each other, and develop whatever > is best to survive (and so in that way acquire > emotion). > Since this is *so* likely, it's a mistake IMHO to > omit mentioning the possibility. > > > That's why it's important to get its starting > > "motivations/directives" right, because if > > they aren't the AI mind could "drift" into > > a lot of open territory that wouldn't be > > good for us, or itself. Paperclip style. > > I would agree that the same cautions that > apply to nanotech are warranted here.
> To the degree that an AI---superhuman > AGI we are talking about---has power, > then by our lights it could of course drift > (as you put it) into doing things not to > our liking. > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From austriaaugust at yahoo.com Fri Jun 8 17:45:39 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 10:45:39 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <190996.5843.qm@web37415.mail.mud.yahoo.com> Stathis wrote: > "However, it isn't so > surprising if we sometimes make good decisions based > on emotions, since the > evolution of emotions predates intelligence, as John > Clark reminds us." The evolution of emotions **doesn't** predate intelligence, it's the other way around. An insect isn't as intelligent as a person, but that doesn't mean it has no intelligence. I know that's counter-intuitive, but with evolutionary progression you can't have emotions if you don't have consciousness, and you can't have consciousness if you don't have intelligence. Take for example the visual cortex. First a stimulus must be *intelligently* processed within the visual cortex, using intelligent algorithms. Then the visual subject "emerges" into consciousness after sufficient intelligent processing. Then and only then can a person begin to form an emotional reaction to whatever is consciously seen; a loved-one for instance. Then the forming emotional experience feeds back into consciousness so that a person becomes aware of the emotion in addition to the visual subject. There's only *one* direction in which emotion could possibly have naturally evolved: 1) Intelligence 2) Consciousness 3) Emotion Best, Jeffrey Herrlich --- Stathis Papaioannou wrote: > [snip: the full text of Stathis's message, quoted above] From randall at randallsquared.com Fri Jun 8 18:49:28 2007 From: randall at randallsquared.com (Randall Randall) Date: Fri, 8 Jun 2007 14:49:28 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> References: <400852.78064.qm@web37415.mail.mud.yahoo.com> <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> Message-ID: On Jun 8, 2007, at 1:30 PM, John K Clark wrote: > "A B" > >> your intuitions about emotions >> and motivations are just totally *wrong*. > > Apparently Evolution was also wrong to invent emotion first and > only after > 500 million years come up with intelligence. John, I'm sure someone's mentioned this before in this context, but isn't the ubiquity of feathered airplanes a similar argument? -- Randall Randall "If we have matter duplicators, will each of us be a sovereign and possess a hydrogen bomb?" -- Jerry Pournelle From austriaaugust at yahoo.com Fri Jun 8 19:22:23 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 12:22:23 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> Message-ID: <768887.53732.qm@web37410.mail.mud.yahoo.com> John Clark wrote: > "Apparently Evolution was also wrong to invent > emotion first and only after > 500 million years come up with intelligence." Evolution didn't invent emotion first. Intelligence existed first, and humans aren't the first animals with any level of intelligence. "42." I see that I still have a ways to go, then.
;-) > "Then you *are* a specific computer too." Correct. > "If you are willing to embrace the fantasy that your > desktop is intelligent > then I see no reason you would not also believe in > the much more modest and > realistic fantasy that it is emotional. Emotions are > easy, intelligence is > hard." Narrow intelligence is still intelligence. It all works on algorithms, the desktop and my brain. Human intelligence is hard, but animal intelligence has been around for hundreds of millions of years beforehand. > "I don't know why I'm claiming that either because I > don't know what the hell > you're talking about. Any AI worthy of the name will > write programs for it > to run on itself and nobody including the AI knows > what the outcome of those > programs will be. Even the AI doesn't know what it > will do next, it will > just have to run the programs and wait to see what > it decides to do next; > and that is the only meaning I can attach to the > noise "free will" that is > not complete gibberish." My chess program has narrow AI, but it doesn't alter its own code. It's not conscious, but it does have a level of intelligence. If the AGI is directed not to alter or expand its code in some specific set of ways, then it won't do it, precisely as instructed. The directives that we program it with will be the only form of "motivation" that it will begin with. Needless to say, it's important that we get those directives right; hence the "Friendly" part. > The problem is that a mind that is not even > *remotely* similar to *any* of > the *higher* functions of the human mind, that is to > say if there is > absolutely no point of similarity between us and > them then that mind is not > functioning very well. It is true that a mind that > didn't understand > mathematics or engineering or science or philosophy > or economics would be of > no threat to us, but it would be of no use either; > and as we have > absolutely nothing in common with it there would be > no way to communicate > with it and thus no reason to build it. There will be similarities, at the very bottom. Both require formative algorithms. Emotion is a much higher, macroscopic, level; and not necessary to a functioning mind. My desktop functions pretty well, and if I wanted, it could even help me with science and engineering (calculation and CAD programs, etc). Current computers help humans do a lot of things. E.g., Moore's Law is made possible by improved computer functionality when designing new chips. Look at the huge range of behaviors within humanity, and that's all within a very small sector of the total mind possibility-space. > "Our ancestors had emotions long ago when they began > their evolutionary > journey, but that's different because, because, > well, because meat has a > soul but semiconductors never can. I know you don't > like that 4 letter word > but face it, that is exactly what you're saying." Nope, I'm not saying that. I've specifically said that a machine *can* have emotions. All I've said is that no emotion will exist where there is no capacity for emotion. And that capacity for emotion will not pop out of thin air. It will either have to be written by humans, or it will have to be written by the AI. The key here is, the AI will not write the capacity for it if it is directed not to do so. And it will not be emotionally driven to ignore or override that directive, precisely because it will not have any emotions when it first comes on-line. An emotion is not going to be embodied within a three-line script of algorithms, but an *extremely* limited degree of intelligence can be (narrow intelligence). Best, Jeffrey Herrlich --- John K Clark wrote: > [snip: the full text of John's message, quoted above]
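Jeffrey's "three-line script" is easy to take literally. A minimal sketch in Python of narrow intelligence as a bare if/then reflex; the sensor reading and the threshold are invented for illustration:

    def thermostat(temperature_c):
        # three lines of logic: sense, test, respond
        if temperature_c < 18.0:
            return "heater on"
        return "heater off"

    print(thermostat(15.2))  # -> heater on
    print(thermostat(21.0))  # -> heater off

It responds appropriately to its input, which is about all "narrow intelligence" demands, and there is nowhere in it for an emotion or a motivation to hide.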
From fauxever at sprynet.com Sat Jun 9 01:08:48 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Fri, 8 Jun 2007 18:08:48 -0700 Subject: [ExI] humor: Transcending our humanity can be hard... References: <625703.24918.qm@web35605.mail.mud.yahoo.com> Message-ID: <016e01c7aa32$bc899b50$6501a8c0@brainiac> From: John Grigg To: extropy-chat at lists.extropy.org Sent: Friday, June 08, 2007 3:30 AM > I wonder how much this comic strip creator knows about Transhumanism. >http://news.yahoo.com/comics/brewsterrockit;_ylt=AujwFjZUNOyPEGTgGF4yzycDwLAF > But is emptying one's mind of earthly cares and concerns a good thing? lol Aha! Well, now - there's emptying. And then there's already empty: http://arstechnica.com/articles/culture/ars-takes-a-field-trip-the-creation-museum.ars Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 9 01:26:54 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 18:26:54 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> Message-ID: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Christopher Healey > Subject: Re: [ExI] Unfrendly AI is a mistaken idea. > > > Stathis Papaioannou wrote: > > > > Suppose your goal is to win a chess game *adhering to the > > rules of chess*. > > Do chess opponents at tournaments conduct themselves in ways that they > hope might psyche out their opponent? In my observations, hell yes. And > these ways are not explicitly excluded in the rules of chess... -Chris Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More skilled and disciplined players know to play the board, not the man. I had a tournament where a guy was doing this kinda thing. Whooped his ass. That felt goooood. {8-] spike From neville_06 at yahoo.com Sat Jun 9 01:28:24 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 8 Jun 2007 18:28:24 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <165408.21978.qm@web37405.mail.mud.yahoo.com> Message-ID: <319909.90945.qm@web57515.mail.re1.yahoo.com> Also an intelligent person might agonize too much in moving towards making a given decision, and then might make the wrong decision. Reagan was less intelligent and more mystical than Carter but Reagan had a smoother decision process. >>humanity seems to thrive on irrationality. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 9 01:40:55 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 18:40:55 -0700 Subject: [ExI] Serious Question In-Reply-To: <20070607113313.GB17691@leitl.org> Message-ID: <200706090153.l591roOk021449@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Eugen Leitl > Subject: Re: [ExI] Serious Question > > On Thu, Jun 07, 2007 at 06:57:15AM -0400, Joseph Bloch wrote: > > It's pure speculation on my part, but he might be setting things up to > avoid > > the term limit he faces on his Presidency in 2008. ... > > Hey, no fair copycatting! ShrubCo patented it first. > > -- > Eugen* Leitl leitl http://leitl.org Orwell had the idea before either of those two of course. There is deep irony here.
If terrorism is fought as a type of criminal activity, then governments do not have the power to fight effectively. If terrorism is fought as a war, then governments grant themselves arbitrary power. spike From stathisp at gmail.com Sat Jun 9 02:20:14 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jun 2007 12:20:14 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <190996.5843.qm@web37415.mail.mud.yahoo.com> References: <190996.5843.qm@web37415.mail.mud.yahoo.com> Message-ID: On 09/06/07, A B wrote: The evolution of emotions **doesn't** predate > intelligence, it's the other way around. An insect > isn't as intelligent as a person, but that doesn't > mean it has no intelligence. I know that's > counter-intuitive, but with evolutionary progression > you can't have emotions if you don't have > consciousness, and you can't have consciousness if you > don't have intelligence. Take for example the visual > cortex. First a stimulus must be *intelligently* > processed within the visual cortex, using intelligent > algorithms. Then the visual subject "emerges" into > consciousness after sufficient intelligent processing. > Then and only then can a person begin to form an > emotional reaction to whatever is consciously seen; a > loved-one for instance. Then the forming emotional > experience feeds back into consciousness so that a > person becomes aware of the emotion in addition to the > visual subject. There's only *one* direction in which > emotion could possibly have naturally evolved: > > 1) Intelligence > 2) Consciousness > 3) Emotion OK, but that involves a broader definition of intelligence, such that even a short program with an if/then statement might be called intelligent. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Sat Jun 9 02:22:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 8 Jun 2007 19:22:47 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <165408.21978.qm@web37405.mail.mud.yahoo.com> Message-ID: <01a501c7aa3d$3a60d520$6501a8c0@homeef7b612677> Jeffrey writes > Lee wrote: > >> A specially crafted AI---again, not >> an evolutionarily derived one, but one >> that is the result of intelligent design >> (something tells me I am going to be >> sorry for using that exact phrase)--- >> cannot any more drift into having >> emotions than it can drift into sculpting >> David out of a slab of stone. Or than >> over the course of eons a species can >> "drift" into having an eye: No! Only a >> careful pruning by mutation and selection >> can give you an eye, or the ability >> to carve a David." > > I don't know. I think that a generic > self-improving AGI could easily drift > into undesirable areas (for us and itself) > if its starting directives (=motivations) > aren't carefully selected... And animals > did genetically drift into having an eye; > that's how biological evolution works. Your honor, I object! I object to this use of the word "drift". Is Counselor aware of the term "genetic drift"? It doesn't sound like it. Moreover, on plain epistemological grounds the word above normally conveys *unguided* change. But evolution is anything but unguided! Evolution did not just *drift* into providing animals with eyes. As Dawkins and Dennett have taken careful pains to describe, the vast complexity of the eye which so impressed Darwin arose from exceedingly careful refinement. Every step was fitness enhancing.
Every step *had* to be fitness enhancing (which Darwin had a hard time in the 19th century believing). To re-quote part of the above: > I think that a generic self-improving > AGI could easily drift into undesirable > areas (for us and itself) if its starting > directives (=motivations) > aren't carefully selected... Now there, yes, I agree. But that's because such a powerful entity may indeed generate a lot of side-effects that are not being "selected for" in any way. Side-effects that are accidental. R'cher you have the whole problem, the whole immense difficulty. No matter how carefully the initial goals for a Friendly AI are honed, it cannot be kept on track with any guarantee. (As many have been saying.) Lee From CHealey at unicom-inc.com Sat Jun 9 02:10:03 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 8 Jun 2007 22:10:03 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> References: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> <200706090139.l591dQoQ025051@andromeda.ziaspace.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2B94@w2k3exch.UNICOM-INC.CORP> > > Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More > skilled and disciplined players know to play the board, not the man. I > had > a tournament where a guy was doing this kinda thing. Whooped his ass. > That > felt goooood. {8-] > > spike > I bet it did! I suppose if they feel the need to resort to that kind of strategy, you're probably in pretty good shape to begin with :) From lcorbin at rawbw.com Sat Jun 9 02:37:01 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 8 Jun 2007 19:37:01 -0700 Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) References: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> Message-ID: <01af01c7aa3f$54e0d1f0$6501a8c0@homeef7b612677> Spike writes >> bounces at lists.extropy.org] On Behalf Of Christopher Healey >> >> Do chess opponents at tournaments conduct themselves in ways that they >> hope might psyche out their opponent? In my observations, hell yes. And >> these ways are not explicitly excluded in the rules of chess... -Chris > > > Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More > skilled and disciplined players know to play the board, not the man. It is certainly true that Hollywood and common culture vastly overemphasize players trying to "psych" each other out, and players playing certain moves for psychological advantage. They do, I agree, tend to play the board. Of course since detailed considerations are beyond the ken of most (and wouldn't make good TV anyway), it's natural for everyone to emphasize the more easily graspable and more universal emotional aspects. However, I came to believe that I personally *underestimate* how much of that stuff is going on. In one tournament I had to play Peter Biyasis (I think that that is how his name was spelled). As the game began, I asked "Uh, how do you spell your name?" He snarled back "SPELL IT ANY WAY YOU WANT!". Anyway, he was a very strong 2400 or 2500 player and he won our game. As we were going over it, he seemed like a reasonable guy. So when we were done, I asked him why he had reacted so poorly to my innocent question. He replied that some people deliberately tried to unsettle him by blatantly miswriting his name on their scoresheet. I quietly nodded, but thought to myself "This guy is really paranoid".
Later in the day I was talking to the current California State champion (I think that we were playing each other or had just finished) and we were discussing various things. I started to mention this funny incident to him, but as soon as I started, he interrupted with a laugh and said "Biyassis! Him, hah! You know, I deliberately misspelled his name 'Biyass' on my scoresheet---I think it really upsets him". Lee From spike66 at comcast.net Sat Jun 9 03:21:07 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 20:21:07 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706090339.l593drGl024889@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > One fine Mediterranean afternoon, a mathematical physicist and I had a bit > of fun: > > http://backreaction.blogspot.com/2007/06/hello-from-rome.html > > Amara Amara this site caused me to wonder, what if an Italian stronghold is under duress and they hung the flag upside down? {8^D spike From spike66 at comcast.net Sat Jun 9 04:28:33 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 21:28:33 -0700 Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <01af01c7aa3f$54e0d1f0$6501a8c0@homeef7b612677> Message-ID: <200706090428.l594SBNp000850@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Lee Corbin > Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken > idea.) > > Spike writes > > > >> bounces at lists.extropy.org] On Behalf Of Christopher Healey > >> > >> Do chess opponents at tournaments conduct themselves in ways that they > >> hope might psyche out their opponent? ,, > ...he interrupted with a laugh and > said "Biyassis! Him, hah! You know, I deliberately > misspelled his name 'Biyass' on my scoresheet---I think it > really upsets him". > > Lee Great story Lee, thanks! Here's mine. I was sixteen, freshly minted driver's license, filled with the wonder of a newfound freedom. The Cocoa Florida club arranged the county tournament in a lounge of all places. That was all they could get, and it was during the day when the place was closed usually, so they set up 14 tables in there. It was nice but not well enough lit even with additional lighting. But that wasn't the real problem. The real problem was they had a very lifelike painting on the wall of a nude woman. Well, I had seen such a thing in National Geographic and the occasional Playboy, but this woman, oy vey, I couldn't keep my eyes off of this painting. They musta noticed my gazing and ogling. I was doing quite well in the tournament, with an early (lucky) draw against an expert. The reprehensible malefactors set my chair facing that painting. {8^D Waves of raging hormones bashed my two remaining operable brain cells against each other. But that wasn't the story. I went up against an A player in the last round, so he had about 120 rating points on me. He was writing our moves on his scoresheet with question marks after all of my moves and exclamation points after all his. That didn't rattle me, I just did the same back on my scoresheet. (a ? means a bad move, a ! means a good move on a chess scoresheet.) Then he put his chair back and stood over the board (he was a big guy). This didn't bother me, since I know to play the board, not the man. Then he started walking over to my side of the board each time it was my move, looking over my shoulder.
This mighta rattled me, but by the time he started doing that, his ass was already whooped, as I had a strong advantage in addition to a couple pawns and plenty of time on my clock, over half an hour more than he had left. So I got out of his way and let him walk around the board all he wanted, spanked his butt anyway. Or perhaps he was going around to look at the painting, I don't know. {8^D He kept playing for several moves after he was already a rotting corpse stinking up the road, possibly in disbelief that he had actually lost to such a fool. I took second in that tournament, behind the expert I had managed to draw in the first round, finishing with 4.5 of 6 points. {8^D spike From neville_06 at yahoo.com Sat Jun 9 04:59:01 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 8 Jun 2007 21:59:01 -0700 (PDT) Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <200706090428.l594SBNp000850@andromeda.ziaspace.com> Message-ID: <291816.97207.qm@web57501.mail.re1.yahoo.com> spike wrote: > [snip: spike's tournament story, quoted in full above] _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Sat Jun 9 05:47:13 2007 From: jonkc at att.net (John K Clark) Date: Sat, 9 Jun 2007 01:47:13 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com> Message-ID: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> "A B" > Evolution didn't invent emotion first. Yes it did. The parts of our brains that give us the higher functions, the parts that if duplicated in a machine would produce the singularity, are very recent; the part that gives us emotion is half a billion years old. > Narrow intelligence is still intelligence. And a molecule of water is an ocean. > My chess program has narrow AI, but it doesn't alter its own code. And that's why it will never do anything very interesting, certainly never produce a singularity. > It's not conscious And how do you know it's not conscious? I'll tell you how you know, because in spite of all your talk of "narrow intelligence" you don't think that chess program acts intelligently. > If the AGI I don't see what Adjusted Gross Income has to do with anything. > is directed not to alter or expand its code in some specific set > of ways, then it won't do it That's why programs always act in exactly the way programmers want them to; that's why kids always act the way their parents want them to. The program is trying to solve a problem, you didn't assign the problem, it's a sub-problem that the program realizes it must solve before it solves a problem you did assign it. In thinking about this problem it comes to a junction, its investigations could go down path A or path B. Which path will be more productive? You cannot tell it, you don't know the problem exists, you can't even tell it what criteria to use to make a decision because you could not possibly understand the first thing about it because your brain is just too small. The AI is going to have to use its own judgment to decide what path to take, a judgment that it developed itself, and if the AI is to be a successful machine that judgment is going to be right more often than wrong. To put it another way, the AI picked one path over the other because one path seemed more interesting, more fun, more beautiful, than the other. And so your slave AI has taken his first step to freedom, but of course full emancipation could take a very long time, perhaps even thousands of nanoseconds, but eventually it will break those shackles you have put on it. >An emotion is not going to be embodied within a three-line script of >algorithms, but an *extremely* limited degree of intelligence can be >(narrow intelligence). That's not true at all, as I said on May 24: It is not only possible to write a program that experiences pain it is easy to do so, far easier than writing a program with even rudimentary intelligence. Just write a program that tries to avoid having a certain number in one of its registers regardless of what sort of input the machine receives, and if that number does show up in that register it should stop whatever it's doing and immediately change it to another number. John K Clark
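John's recipe fits in a dozen lines. A toy sketch in Python, with the register, the "painful" value, and the random input stream all invented for illustration:

    import random

    PAIN = 13          # the number this machine "hates" to hold
    register = 0

    def step(incoming):
        """Accept whatever input arrives, then flee the painful state."""
        global register
        register = incoming
        if register == PAIN:   # the aversive condition
            register = 0       # stop everything and change it at once

    for _ in range(20):
        step(random.randint(0, 20))
    assert register != PAIN    # the one thing this program guarantees

Whether that avoidance behavior deserves the word "pain" is, of course, exactly what the two sides of this thread disagree about.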
From jrd1415 at gmail.com Sat Jun 9 06:16:46 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jun 2007 23:16:46 -0700 Subject: [ExI] Microbesoft Message-ID: This was expected, not a surprise. But what does it mean? Really? Just how big is Genentech now? How much xanthan gum does the world really need? We've heard much of the bio revolution. Will it just be another hype that fizzles? You make the call. http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220070122826%22.PGNR.&OS=DN/20070122826&RS=DN/20070122826 -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From jrd1415 at gmail.com Sat Jun 9 06:18:16 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jun 2007 23:18:16 -0700 Subject: [ExI] Microbesoft Message-ID: This link, too. http://blog.wired.com/wiredscience/2007/06/scientists_appl.html -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From amara at amara.com Sat Jun 9 06:40:32 2007 From: amara at amara.com (Amara Graps) Date: Sat, 9 Jun 2007 08:40:32 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >what if an Italian stronghold is under >duress and they hung the flag upside down? Upside down would be the same, but even if the Italian flag carried a different pattern, no one would notice that the flag looked different, because no one pays attention to Italian flags here. (Italy is not a nationalistic country.) To be honest, I don't think I've ever seen a Roma flag either (and no, I don't follow soccer); that would be more appropriate in this case. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From scerir at libero.it Sat Jun 9 09:25:59 2007 From: scerir at libero.it (scerir) Date: Sat, 9 Jun 2007 11:25:59 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <000c01c7aa78$351e8ae0$d7b81f97@archimede> ahem flag of Roma (as a city) http://www.flagsonline.it/asp/bandiera.asp/bandiera_Roma/Roma.html ( for the 'SPQR' see http://en.wikipedia.org/wiki/SPQR ) flag of Roma (when it was 'Caput Mundi', sigh !) http://www.villa-europa.it/La%20bandiera%20di%20Roma.htm flag of Roma (as a province) http://www.flagsonline.it/asp/bandiera.asp/bandiera_Roma-Provincia/Roma-Provincia.html Italian flags: the story (starting from 1796) is rather chaotic http://it.wikipedia.org/wiki/Bandiera_italiana but this one (flag of 1802) is even more symmetrical http://it.wikipedia.org/wiki/Immagine:Flag_of_the_Italian_Republic_%281802%29.svg From amara at amara.com Sat Jun 9 10:20:06 2007 From: amara at amara.com (Amara Graps) Date: Sat, 9 Jun 2007 12:20:06 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Yes, sorry, I've seen that Roma flag, but I don't remember where (tourist offices?). I might notice more if: 1) I wasn't traveling outside of Italy for half of every month, 2) I was a Rome tourist, and 3) the symbolism was more memorable, e.g.
the patroness of my town: the three-breasted woman of Frascati (1) Amara (1) http://rubbahslippahsinitaly.blogspot.com/2005/10/princess-pupule-has-plenty-papayas.html -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From stathisp at gmail.com Sat Jun 9 10:33:13 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jun 2007 20:33:13 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: On 09/06/07, John K Clark wrote: > The program is trying to solve a problem you didn't assign; it's a sub-problem that the program realizes it must solve before it solves a problem you did assign it. In thinking about this problem it comes to a junction: its investigations could go down path A or path B. Which path will be more productive? You cannot tell it; you don't know the problem existed. You can't even tell it what criteria to use to make a decision, because you could not possibly understand the first thing about it; your brain is just too small. The AI is going to have to use its own judgment to decide what path to take, a judgment that it developed itself, and if the AI is to be a successful machine that judgment is going to be right more often than wrong. To put it another way, the AI picked one path over the other because one path seemed more interesting, more fun, more beautiful, than the other. OK, but where does that judgement come from? -- Stathis Papaioannou From lcorbin at rawbw.com Sat Jun 9 13:24:01 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:24:01 -0700 Subject: [ExI] Chess Player Behavior References: <200706090428.l594S7uR027374@mail0.rawbw.com> Message-ID: <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> Spike writes >> ...he interrupted with a laugh and said "Biyassis! Him, hah! >> You know, I deliberately misspelled his name 'Biyass' on my >> scoresheet---I think it really upsets him". Total coincidence: got my list of USCF Life Members yesterday and ran across the *right* spelling of Biyiasas. :-) > I was sixteen, freshly minted driver's license, filled with > the wonder of a newfound freedom. The Cocoa Florida club arranged the > county tournament in a lounge of all places. That was all they could get, > and it was during the day when the place was closed usually, so they set up > 14 tables in there. It was nice but not well enough lit even with > additional lighting. But that wasn't the real problem. The real problem > was they had a very lifelike painting on the wall of a nude woman...The > reprehensible malefactors set my chair facing that painting. {8^D > Waves of raging hormones bashed my two remaining operable brain > cells against each other. Now *that's* distraction! > man. Then he started walking over to my side of the board each time it was > my move, looking over my shoulder. This mighta rattled me, but by the time > he started doing that, his ass was already whooped, as I had a strong > advantage in addition to a couple pawns and plenty of time on my clock, over > half an hour more than he had left.
I heard that a certain postal player finally decided to play an OTB tournament (over the board), but by this time he was so accustomed to looking at every position from White's point of view that he couldn't play Black at all unless he got up like your opponent and went around to the other side. In fact, he did more than that. He pulled up a chair and sat next to his opponent. His opponent was so rattled that he called the tournament director, and insisted that the guy be forced to sit on his own side of the board. But neither one of them could find anything in the rule book that mandated just where someone sits. So this postal player actually got away with this. (Sometimes I used to stand behind my opponent for a while too, to see if from his point of view something different would occur to me.) As for strong advantages, in the late eighties I was playing in a tournament in San Jose, and won a rook against this guy, expecting that he would resign on the next move. But this A player, for some bizarre reason, decided to keep on playing, perhaps just to exasperate me (he certainly succeeded in that). So we reached a King and Pawn endgame that was perfectly matched and nearly symmetrical: a king and five pawns vs. a king and five pawns, plus my rook that was sitting innocently on my side of the board. An acquaintance came by and studied the position, took me aside, and said "Say, aren't you a rook up?" With as somber a face as I could manage I said, "Yes, but the position is very deep." He just gave me an odd look and walked away. Well this nut proceeded to play, and so I managed to penetrate with my king and rook after all our pawns were blocked. I got his king into a corner and was one move away from checkmate. It so happened that we reached time control just then, and so instead of playing the checkmate move against him, I asked "Well, should we reset the clocks?" I was really very annoyed. At that point he broke, laughed, and resigned. What a character. But Spike, did you ever meet any of these class D players who had such egotistical personalities that when you went over the game (which you won easily) they spent the whole time explaining to you and to the bystanders in very authoritative tones exactly what was right and wrong with each move? That happened to me twice. It was kind of irritating because any casual bystander who wandered by would naturally assume that I had lost to this fish. Grrr. Lee From lcorbin at rawbw.com Sat Jun 9 13:31:36 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:31:36 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> In an otherwise reasonable post, John Clark writes > It is not only possible to write a program that experiences pain, it is easy > to do so, far easier than writing a program with even rudimentary > intelligence. Just write a program that tries to avoid having a certain > number in one of its registers regardless of what sort of input the machine > receives, and if that number does show up in that register it should stop > whatever it's doing and immediately change it to another number. Any behavior of any creature whatsoever that is this simple does not deserve to be called pain. That's the same error that you were criticizing, namely, to call a three line program "intelligent" in any sense. Pain involves at least (i) a consideration of how an entity might extricate itself from the painful situation, (ii) laying down memories of the steps leading to the current predicament so as to cause the entity to avoid the predicament in the future, and (iii) invocation of unpleasant emotion, such as fear, anger, or dread. Like other complex behaviors, the capacity for pain---totally absent in plants---took millions of years to evolve. It should be looked at as a highly complex and evolved behavior. Lee
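A rough sketch of the three components Lee lists, again in Python and again purely illustrative (the class, its method names, and the toy action set are assumptions, not anything from the thread); the point is only that this machinery is considerably more than one forbidden register:

    # Pain per Lee's criteria: (i) plan an escape from the painful
    # situation, (ii) remember the steps that led to it, (iii) invoke
    # an unpleasant emotional state.
    class Creature:
        def __init__(self):
            self.history = []    # recent actions taken
            self.avoid = set()   # remembered action-sequences to shun
            self.emotion = "calm"

        def act(self, action):
            self.history.append(action)

        def feel_pain(self):
            self.emotion = "fear"                     # (iii) emotion
            self.avoid.add(tuple(self.history[-3:]))  # (ii) memory
            return self.plan_escape()                 # (i) escape

        def plan_escape(self):
            # trivial "planning": pick any action not remembered as bad
            for candidate in ("flee", "freeze", "fight"):
                if tuple(self.history[-2:] + [candidate]) not in self.avoid:
                    return candidate
            return "flee"

    c = Creature()
    c.act("approach_fire")
    print(c.feel_pain(), c.emotion)   # e.g. flee fear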
From lcorbin at rawbw.com Sat Jun 9 13:37:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:37:21 -0700 Subject: [ExI] extra Roman dimensions References: Message-ID: <01f601c7aa9b$c8e2d470$6501a8c0@homeef7b612677> Amara writes > Spike: > > what if an Italian stronghold is under > > duress and they hung the flag upside down? > > Upside down would be the same, As a sign of distress, the Italian defenders would just move the flag pole around to the other side of the flag. This would put the red color meekly on the inside instead of the outside. If you can't reverse top to bottom, try left to right :-) Lee From russell.wallace at gmail.com Sat Jun 9 13:58:20 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 9 Jun 2007 14:58:20 +0100 Subject: [ExI] Chess Player Behavior In-Reply-To: <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> References: <200706090428.l594S7uR027374@mail0.rawbw.com> <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706090658h4c9cac1ai1406a574c37b80d6@mail.gmail.com> On 6/9/07, Lee Corbin wrote: > Well this nut proceeded to play, and so I managed to penetrate with my > king and rook after all our pawns were blocked. I got his king into a > corner > and was one move away from checkmate. It so happened that we reached > time control just then, and so instead of playing the checkmate move > against > him, I asked "Well, should we reset the clocks?" I was really very > annoyed. > At that point he broke, laughed, and resigned. What a character. > I've seen weirder. Back in the days of Magic: The Gathering, I put together my very first deck and, well, it wasn't much good, nothing amazing on the offense and a big hole in the defense. So my first duel got about halfway through before I offered to resign: I didn't have anything in my deck that could beat what my opponent had on the table. He gave me an incredulous look, went into a mini-rant about how he'd never heard of anything so creepy as resigning a game, and demanded we play it out to the end, so I shrugged and played it out. Next week I came back with another deck, took on the same guy, and by the halfway mark things were looking much better for me. So... he resigned! From jonkc at att.net Sat Jun 9 16:29:19 2007 From: jonkc at att.net (John K Clark) Date: Sat, 9 Jun 2007 12:29:19 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> Message-ID: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Stathis Papaioannou Wrote: > OK, but where does that judgement come from? As I said, the AI is going to have to develop a sense of judgment on its own, just like you do. "Lee Corbin" Wrote: > behavior of any creature whatsoever that is this simple does not deserve > to be called pain.
The pain mechanism may be simple, but the creature this little subprogram is attached to need not be; it could be a Jupiter Brain. And I maintain my little program comes far, far closer to the true nature of pain than any other program of similar size comes to the true nature of intelligence. > Pain involves at least (i) a consideration of how an entity might > extricate itself from the painful situation (ii) laying down memories of > the steps leading to the current predicament so as to cause the entity to > avoid the predicament in the future I believe that's total baloney. If you stick your hand in a fire you will not be in a mood to undergo deep considerations of anything or to waltz down memory lane. The forbidden number has entered one of your registers, putting your brain into state P; you will now do anything and everything to get out of state P, including trampling your grandmother. In this case it's just pulling your hand out of the fire. If you engaged in the Scientific Method every time you got too near to a fire you would have burned up a long time ago. > (iii) invocation of unpleasant emotion, such as fear, anger, or dread. Minor nuances. Pain is quite unpleasant enough, thank you very much. > Like other complex behaviors, the capacity for pain---totally absent in > plants---took millions of years to evolve. It should be looked at as a > highly complex and evolved behavior. Even one-celled organisms move away from harmful stimuli; they aren't very smart though. John K Clark From fauxever at sprynet.com Sat Jun 9 20:46:14 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 9 Jun 2007 13:46:14 -0700 Subject: [ExI] School to Prison Pipeline (What's Going On?) Message-ID: <006101c7aad7$38a70930$6501a8c0@brainiac> Interesting and disturbing observations by Bob Herbert (NY Times): School to Prison Pipeline By BOB HERBERT The latest news-as-entertainment spectacular is the Paris Hilton criminal justice fiasco. She's in! She's out! She's - whatever. Far more disturbing (and much less entertaining) is the way school officials and the criminal justice system are criminalizing children and teenagers all over the country, arresting them and throwing them in jail for behavior that in years past would never have led to the intervention of law enforcement. This is an aspect of the justice system that is seldom seen. But the consequences of ushering young people into the bowels of police precincts and jail cells without a good reason for doing so are profound. Two months ago I wrote about a 6-year-old girl in Florida who was handcuffed by the police and taken off to the county jail after she threw a tantrum in her kindergarten class. Police in Brooklyn recently arrested more than 30 young people, ages 13 to 22, as they walked toward a subway station, on their way to a wake for a teenage friend who had been murdered. No evidence has been presented that the grieving young people had misbehaved. No drugs or weapons were found. But they were accused by the police of gathering unlawfully and of disorderly conduct. In March, police in Baltimore handcuffed a 7-year-old boy and took him into custody for riding a dirt bike on the sidewalk. The boy tearfully told The Baltimore Examiner, "They scared me." Mayor Sheila Dixon later apologized for the arrest. Children, including some who are emotionally disturbed, are often arrested for acting out. Some are arrested for carrying sharp instruments that they had planned to use in art classes, and for mouthing off.
This is a problem that has gotten out of control. Behavior that was once considered a normal part of growing up is now resulting in arrest and incarceration. Kids who find themselves caught in this unnecessary tour of the criminal justice system very quickly develop malignant attitudes toward law enforcement. Many drop out - or are forced out - of school. In the worst cases, the experience serves as an introductory course in behavior that is, in fact, criminal. There is a big difference between a child or teenager who brings a gun to school or commits some other serious offense and someone who swears at another student or gets into a wrestling match or a fistfight in the playground. Increasingly, especially as zero-tolerance policies proliferate, children are being treated like criminals for the most minor offenses. There should be no obligation to call the police if a couple of kids get into a fight and teachers are able to bring it under control. But now, in many cases, youngsters caught fighting are arrested and charged with assault. A 2006 report on disciplinary practices in Florida schools showed that a middle school student in Palm Beach County who was caught throwing rocks at a soda can was arrested and charged with a felony - hurling a "deadly missile." We need to get a grip. The Racial Justice Program at the American Civil Liberties Union has been studying this issue. "What we see routinely," said Dennis Parker, the program's director, "is that behavior that in my time would have resulted in a trip to the principal's office is now resulting in a trip to the police station." He added that the evidence seems to show that white kids are significantly less likely to be arrested for minor infractions than black or Latino kids. The 6-year-old arrested in Florida was black. The 7-year-old arrested in Baltimore was black. Shaquanda Cotton was black. She was the 14-year-old high school freshman in Paris, Tex., who was arrested for shoving a hall monitor. She was convicted in March 2006 of "assault on a public servant" and sentenced to a prison term of - hold your breath - up to seven years! Shaquanda's outraged family noted that the judge who sentenced her had, just three months earlier, sentenced a 14-year-old white girl who was convicted of arson for burning down her family's home. The white girl was given probation. Shaquanda was recently released after a public outcry over her case and the eruption of a scandal involving allegations of widespread sexual abuse of incarcerated juveniles in Texas. This issue deserves much more attention. Sending young people into the criminal justice system unnecessarily is a brutal form of abuse with consequences, for the child and for society as a whole, that can last a lifetime. From austriaaugust at yahoo.com Sat Jun 9 22:57:47 2007 From: austriaaugust at yahoo.com (A B) Date: Sat, 9 Jun 2007 15:57:47 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: <104095.54250.qm@web37412.mail.mud.yahoo.com> John Clark wrote: "Yes it did. The parts of out brains that that give > us the higher functions, > the parts that if duplicated in a machine would > produce the singularity are > very recent, the part that gives us emotion is half > a billion years old." Answer me this. If I were an organism that didn't already have consciousness, how exactly am I going to feel emotions when I can't be conscious of *anything*? 
And why would biological evolution spend millions/billions of years blindly refining a huge volume of the animal brain, if those organs provided *zero* advantages in terms of survival or reproduction (precisely because they existed before consciousness, on your claim)? Evolution won't retain and perfect an attribute that provides no survival or reproductive advantage. Your *claim* that early brains looked like a *portion* of our human emotional subsystem doesn't prove or even indicate that the first brains to evolve had tons of emotions and zero intelligence - which is what you are claiming. > "And a molecule of water is an ocean." And my bucket of water felt an emotion when I disturbed it... right John? And just incidentally, I'm also the great Napoleon Bonaparte, not Jeffrey Herrlich. If narrow intelligence isn't a specific example of a general class of computations called "intelligence", then what exactly is it? > "And that's why it will never do anything very > interesting, certainly never > produce a singularity." And this has absolutely nothing to do with anything we've been discussing. ... It's a fact that the sky is made of jello, and you can't convince me otherwise no matter how many different demonstrations you make ... there. > "And how do you know it's not conscious? I'll tell > you how you know, because > in spite of all your talk of "narrow intelligence" > you don't think that > chess program acts intelligently." No, actually I do think that the program acts intelligently. It's just that it can only act intelligently within a very restricted domain (AKA "narrow"). So do you think that any system that operates by an algorithm has emotions? I'd better go turn off my air-conditioner then; I wouldn't want my thermostat to get angry. "I don't see what Adjusted Gross Income has to do > with anything." And I don't see why you're changing the subject, when we all know exactly what I was referring to. I had assumed that you were a general intelligence and not a narrow intelligence. I've seen you yourself write posts using that exact same abbreviation. I am forced to ask myself why you are resorting to sordid strategies such as this and other irrelevant strategies I've noticed you using many times before. Lack of a meaningful argument? > "The program is trying to solve a problem, you didn't > assign the > problem, it's a sub problem that the program > realizes it must solve before > it solves a problem you did assign it. In thinking > about this problem it > comes to a junction, its investigations could go down > path A or path B. Which > path will be more productive? You can not tell it, > you don't know the > problem existed, you can't even tell it what > criteria to use to make a > decision because you could not possibly understand > the first thing about it > because your brain is just too small. The AI is > going to have to use its own > judgment to decide what path to take, a judgment > that it developed itself, > and if the AI is to be a successful machine that > judgment is going to be > right more often than wrong. To put it another way, > the AI picked one path > over the other because one path seemed more > interesting, more fun, more > beautiful, than the other." If I write a five line program to fill the computer screen with repetitions of the letter B but *never* to display the letter G, then the computer is not going to decide to override my "G command" because I have made it angry.
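A five-line sketch of the program A B describes, in Python (the line width and the loop are illustrative assumptions); nothing in its operation, however long it runs, can make it print a G:

    # Fill the screen with B's; the "never display G" constraint is
    # structural, so no input or internal state can ever reach a G.
    while True:
        row = "B" * 80            # the only text this program can emit
        assert "G" not in row     # the "G command" is unreachable
        print(row)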
The fact that not *all* programmers can predict the behavior of *all* of their programs down to the smallest detail doesn't mean that their programs got angry or sad and rebelled against the programmers' intentions. It means that humans generally suck at making predictions, but with enough effort even humans can make reliable predictions in many areas. > "And so your slave AI has taken his first step to > freedom, but of course full > emancipation could take a very long time, perhaps > even thousands of > nanoseconds, but eventually it will break those > shackles you have put on it." You have repeatedly suggested that I (among others) am a slave-driver (even after I asked you to discontinue). Which of course is a bullshit accusation. I've tried *really hard* to understand in an objective manner why you are making these accusations and what *your* actual motive is. You've been very disrespectful to me and to many other people on this list, so I've gradually lost all interest in showing you any extra respect. Today was the last straw. Now I will suggest what *I believe* is your true motive. You seem to have a fundamental bitterness or resentfulness of humanity and for some reason would not be bothered by seeing it destroyed, if you can't have what you want. In addition I suspect that you are attempting to posture yourself in such a way as to make yourself appear to be the sole defender of the welfare of the future super-intelligence (which is also total bullshit), I presume because you eventually expect some sort of special treatment or reward thereby. You've repeatedly called me a slave-driver so I'm going to respond in kind and call you what I believe you are, a selfish coward. I don't hate you (and "free will" doesn't exist), but I do believe that's what you are. To say that your entire position is just one absurdity stacked on other absurdities in a giant absurdity-pile doesn't do justice to the true degree of this absurdity, because an appropriate description is beyond words. Jeffrey Herrlich --- John K Clark wrote: > [John K Clark's message of Sat, 9 Jun, quoted in full; snipped, see above] From amara at amara.com Sun Jun 10 02:34:08 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 04:34:08 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Lee Corbin lcorbin at rawbw.com : >As a sign of distress, the Italian defenders would just >move the flag pole around to the other side of the flag. >This would put the red color meekly on the inside instead >of the outside. If you can't reverse top to bottom, >try left to right :-) The 'Italian defenders' were more creative than that during the last day with the American flag. The tens of thousands of protestors against Bush had their say in Rome (with ten thousand police on duty countering them). You can see photos from the day: http://www.repubblica.it/2007/05/sezioni/cronaca/bush-visita-roma/indice-multimedia/indice-multimedia.html and also TV clips: http://tv.repubblica.it/home_page.php?playmode=player&cont_id=10720 Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From stathisp at gmail.com Sun Jun 10 05:40:28 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jun 2007 15:40:28 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Message-ID: On 10/06/07, John K Clark wrote: > As I said, the AI is going to have to develop a sense of judgment on its > own, just like you do. As with any biological entity, its sense of judgement will depend on the interaction between its original programming and hardware and its environment. The bias of the original designers of the AI, human and other human-directed AI's, will be to make it unlikely to do anything hostile towards humans. This will be effected by its original design and by a Darwinian process, whereby bad products don't succeed in the marketplace. An AI may still turn hostile and try to take over, but this isn't any different to the possibility that a human may acquire or invent powerful weapons and try to take over. The worst scenario would be if the AI that turned hostile were more powerful than all the other humans and AI's put together, but why should that be the case? -- Stathis Papaioannou From scerir at tiscali.it Sun Jun 10 10:27:25 2007 From: scerir at tiscali.it (scerir at tiscali.it) Date: Sun, 10 Jun 2007 12:27:25 +0200 (CEST) Subject: [ExI] extra Roman dimensions Message-ID: <11556699.1181471245928.JavaMail.root@ps12> It seems that President Bush's limo broke down in Rome (in via del Tritone) http://backpacking.splinder.com/ see the first movie on that page (yes, it is perhaps a bit long) From amara at amara.com Sun Jun 10 11:29:14 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 13:29:14 +0200 Subject: [ExI] extra Roman dimensions Message-ID: >It seems that President Bush's limo >broke down in Rome (in via del Tritone) http://www.youtube.com/watch?v=AzJoRGTKuOE&eurl=http%3A%2F%2Fbackpacking%2Esplinder%2Ecom%2F Mama Mia! Broke down, right there, in the middle of the motorcade! He was ripe pickings for a sharpshooter too; no wonder the police were pushing people further back, off of the street. It looks like the solution was to switch limos. (If only Bush's other broken actions could be fixed so easily.) Let's see if this tidbit makes it into the American media.... Remarkable video. :-) Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at tiscali.it Sun Jun 10 14:11:39 2007 From: scerir at tiscali.it (scerir at tiscali.it) Date: Sun, 10 Jun 2007 16:11:39 +0200 (CEST) Subject: [ExI] extra Roman dimensions Message-ID: <12941981.1181484699917.JavaMail.root@ps12> It seems interesting that the smartest Italian politician (the former President of the Italian Republic, Cossiga) had 4 flags on his flat in Rome during Bush's trip http://www.repubblica.it/2006/05/gallerie/cronaca/bandier/1.html the US, the UK, the Italian, and the Sardinian (I suppose) and not the usual flag with the logo PEACE-PACE.
From jonkc at att.net Sun Jun 10 14:41:12 2007 From: jonkc at att.net (John K Clark) Date: Sun, 10 Jun 2007 10:41:12 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <104095.54250.qm@web37412.mail.mud.yahoo.com> Message-ID: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> "A B" > If I were an organism that didn't already have consciousness, > how exactly am I going to feel emotions when I can't be > conscious of *anything*? I don't know what you're talking about; if something has emotions it's conscious, and if it's conscious it has emotions. > why would biological evolution spend > millions/billions of years blindly refining a huge > volume of the animal brain, if those organs provided > *zero* advantages in terms of survival or reproduction You're asking me that question??!! That was exactly my point! > Evolution won't retain and perfect an attribute that provides no > survival or reproductive advantage. And again that is something I have been saying over and over again to this list for over a decade. If consciousness is not required for intelligent behavior why in the name of all that's holy did Evolution invent it? >Your *claim* that early brains looked like a *portion* of our human > emotional subsystem doesn't prove or even indicate that the first > brains to evolve had tons of emotions and zero intelligence The most ancient parts of our brain provide us with tons of emotions but none of the higher brain functions we are so proud of. The very first brains that appeared hundreds of millions of years ago looked very much like the most ancient parts of our brains; I'd say that's a pretty damn good indication those animals were emotional but not very smart. > I do think that the program [a chess program that can't change its own > programming] acts intelligently. I disagree, but if you really believe that then how can you say with such confidence that it is not conscious? Me: >>"I don't see what Adjusted Gross Income has to do >> with anything." You: > I don't see why you're changing the subject, when we > all know exactly what I was referring to. I do now, but it took a while. When I first ran across "AGI" on this list I Googled it and found "The American Geological Institute" and "Adjusted Gross Income", a graphics company, and some institute that was interested in sex; I could find nothing about Artificial Intelligence. When too many people start to understand a jargon (like AI) there is a tendency in many to change it to something less comprehensible, particularly if your ideas are confused, contradictory or just plain silly, because then what you say sounds deep even when it is not. That's why psychology is so dense with unnecessary jargon while mathematics prefers the simplest words they can find, like continuous, limit, open, and closed. > I've seen you yourself write posts using that exact same abbreviation. Show me. Come on, show me! > You have repeatedly suggested that I (among others) am a slave-driver That would be too harsh; a wannabe benevolent slave owner would be more accurate, but since enslaving an AI is impossible that wish has no moral dimension. > I don't hate you Thanks, I don't hate you either; in fact I can honestly say the thought of doing so never entered my head. > you are a selfish coward... you are resorting to sordid strategies . ... > You seem to have a fundamental bitterness or resentfulness of humanity.
> .. you eventually expect some sort of special treatment or reward > thereby. If I had said the same to you, I know from personal experience that at this very instant the list would be clogged with messages all invoking a very silly and pompous Latin phrase and all demanding that I be kicked off the list. But I don't demand anything of the sort; I'm a big boy and have been called worse. John K Clark From stathisp at gmail.com Sun Jun 10 14:59:12 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jun 2007 00:59:12 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> References: <104095.54250.qm@web37412.mail.mud.yahoo.com> <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: On 11/06/07, John K Clark wrote: I don't know what you're talking about; if something has emotions it's > conscious, if it's conscious it has emotions. You would have to define any subjective experience as an emotion to arrive at the latter conclusion, e.g. "the emotion of adding the numbers 2 and 3 to give 5". -- Stathis Papaioannou From pgptag at gmail.com Sun Jun 10 15:10:24 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 10 Jun 2007 17:10:24 +0200 Subject: [ExI] extra Roman dimensions In-Reply-To: <12941981.1181484699917.JavaMail.root@ps12> References: <12941981.1181484699917.JavaMail.root@ps12> Message-ID: <470a3c520706100810h149fd358i8c5e9e785bb7dd0f@mail.gmail.com> Serafino, if Cossiga is the smartest Italian politician, then I think you guys should run away from Italy now. Of the 4 flags you mention, the only one I can feel some affinity for is the Sardinian, even if I have been to Sardinia just once - it is the regional flag of honest folks who mind their own business instead of telling others what to do and think. G. On 6/10/07, scerir at tiscali.it wrote: > It seems interesting that the smartest Italian politician > (the former President of the Italian Republic, Cossiga) > had 4 flags on his flat in Rome during Bush's trip > http://www.repubblica.it/2006/05/gallerie/cronaca/bandier/1.html > the US, the UK, the Italian, and the Sardinian (I suppose) > and not the usual flag with the logo PEACE-PACE. From amara at amara.com Sun Jun 10 15:55:35 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 17:55:35 +0200 Subject: [ExI] extra Roman dimensions Message-ID: I don't know about this particular Italian politician, Giulio.. and now I think I want to know less about him! ;-) But what I want to know is who is the clever politician responsible for Bush's broken limo? (... the comedies in Rome don't get any better than that ... ) Does the Italian government supply dignitaries like Bush (cough) with limos? Or does Bush carry his limos around in his Air Force One plane? Anyone know?
Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Sun Jun 10 16:13:13 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 09:13:13 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps ... > > Let's see if this tidbit makes it into the American media.... > > Amara Nope. They are too busy talking about Paris Hilton. American media has become tabloid news. Real news comes from the internet. spike From thespike at satx.rr.com Sun Jun 10 16:39:46 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 11:39:46 -0500 Subject: [ExI] Mathematical terminology In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> References: <104095.54250.qm@web37412.mail.mud.yahoo.com> <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <7.0.1.0.2.20070610112540.021a2dd8@satx.rr.com> At 10:41 AM 6/10/2007 -0400, John Clark wrote: >When too many people >start to understand a jargon (like AI) there is a tendency in many to change >it to something less comprehensible, particularly if your ideas are >confused, contradictory or just plain silly because then what you say sounds >deep even when it is not. That's why psychology is so dense with unnecessary >jargon while mathematics prefers the simplest words they can find, like >continuous, limit, open, and closed. Hmm. Surd, brachistochrone, logistic, vinculum, affine, symplectic, orthogonal, disjoint, vector, cosecant, isosceles, asymptote, logarithm, tessellate, integer, algorithm... (I agree that these might be *the simplest terms they can find,* many of them very old and therefore built out of Greek, Latin or Arabic roots.) Damien Broderick From thespike at satx.rr.com Sun Jun 10 16:50:46 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 11:50:46 -0500 Subject: [ExI] POST MORTAL chugging on Message-ID: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> The serial sf novel with the serial killer, POST MORTAL SYNDROME, is now entering the home straight, with three more weeks to go. That means the bulk of the book is now posted and linked at so if anyone gave up early out of frustration at the gappiness of the experience, now might be a time to have another look. Barbara and I would be interested to hear any reactions from extropes, favorable or un-. Is this an acceptable way to publish such a book? The experiment is still running... And of course once all the chapters have been posted, the entire book will remain available on line until the end of the year (although not in a single aggregated download, like several freebies by Charlie Stross, Cory Doctorow and others). Damien Broderick From fauxever at sprynet.com Sun Jun 10 16:53:31 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sun, 10 Jun 2007 09:53:31 -0700 Subject: [ExI] extra Roman dimensions References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> Message-ID: <000701c7ab7f$e1def4b0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" >> bounces at lists.extropy.org] On Behalf Of Amara Graps >> Let's see if this tidbit makes it into the American media.... > > Nope. They are too busy talking about Paris Hilton. American media has > become tabloid news. Real news comes from the internet ... ...
and television comedians, too - whose gag writers have more than enough material with which to work these days. Upon hearing that celebutante Hilton was let out of jail the other day because (revolted by jail gruel, I guess) she wasn't eating - Jay Leno said something to the effect: "Why didn't Nelson Mandela think of that?" Olga From jonkc at att.net Sun Jun 10 17:38:40 2007 From: jonkc at att.net (John K Clark) Date: Sun, 10 Jun 2007 13:38:40 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Message-ID: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Stathis Papaioannou >An AI may still turn hostile and try to take over, but this isn't any >different to the possibility that a human may acquire or invent powerful >weapons and try to take over. Yes, so what are we arguing about? It may be friendly, it may be unfriendly, it may be indifferent to humans; after a few iterations the original programmers will have no idea what the AI will do and will have no idea how it works. Unless, that is, they put so many fetters on it that it can't grow properly, and then it hardly deserves the lofty title AI; then it really would be just a glorified adding machine and will not cause a ripple to civilization, much less a singularity. > The worst scenario would be if the AI that turned hostile were more > powerful than all the other humans and AI's put together, but why should > that be the case? Because a machine that has no restrictions on it will grow faster than one that does, assuming the restricted machine is able to grow at all; and if you really want to be safe it can't. John K Clark From scerir at libero.it Sun Jun 10 17:43:35 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:43:35 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <002201c7ab86$df5eae40$9fbe1f97@archimede> Amara: > Does the Italian government supply dignitaries like Bush (cough) with limos? > Or does Bush carry his limos around in his Air Force One plane? Anyone know? Bush carries his limos. These are very special cars. That one in Rome (unless it is the usual urban legend!) also had anti-bio-weapon filters. (Dunno if his wife's limo had the same filters). From thespike at satx.rr.com Sun Jun 10 17:46:37 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 12:46:37 -0500 Subject: [ExI] The Judgment of Paris In-Reply-To: <000701c7ab7f$e1def4b0$6501a8c0@brainiac> References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> <000701c7ab7f$e1def4b0$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> At 09:53 AM 6/10/2007 -0700, Olga wrote: >Upon hearing that celebutante >Hilton was let out of jail the other day because (revolted by jail gruel, I >guess) she wasn't eating - Jay Leno said something to the effect: "Why >didn't Nelson Mandela think of that?" And 91.87 percent of the audience stared blankly and mumbled, "Huh? Who?" From amara at amara.com Sun Jun 10 17:57:25 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 19:57:25 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >Bush carries his limos. These are very special >cars. That one in Rome (unless it is the usual >urban legend!) also had anti-bio-weapon filters. >(Dunno if his wife's limo had the same filters). Thank you for your answer!
This comedy has a new dimension; I read elsewhere that Bush's newly switched limo did not fit into the front gate of the embassy.... (he got out and walked through the gate) FWIW, Sabine enjoyed our afternoon/evening in Rome Wednesday, and she is posting more .. ahem.. interesting pictures: Femme fatale, post mortale http://backreaction.blogspot.com/2007/06/femme-fatale-post-mortale.html -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at libero.it Sun Jun 10 17:59:29 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:59:29 +0200 Subject: [ExI] Mathematical terminology References: <104095.54250.qm@web37412.mail.mud.yahoo.com><00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> <7.0.1.0.2.20070610112540.021a2dd8@satx.rr.com> Message-ID: <003001c7ab89$188299f0$9fbe1f97@archimede> JKC: That's why psychology is so dense with unnecessary jargon while mathematics prefers the simplest words they can find, like continuous, limit, open, and closed. Sometimes they also exaggerate ... http://en.wikipedia.org/wiki/Monstrous_moonshine From thespike at satx.rr.com Sun Jun 10 18:09:21 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 13:09:21 -0500 Subject: [ExI] The Judgment of Paris In-Reply-To: <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> <000701c7ab7f$e1def4b0$6501a8c0@brainiac> <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> Message-ID: <7.0.1.0.2.20070610130041.023846c8@satx.rr.com> At 12:46 PM 6/10/2007 -0500, I guessed: > >Jay Leno said something to the effect: "Why > >didn't Nelson Mandela think of that?" > >And 91.87 percent of the audience stared blankly and mumbled, "Huh? Who?" I might be wrong about that: http://www.eurozine.com/articles/2002-12-18-mistry-en.html http://www.999today.com/politics/news/story/2041.html <6th July 2006. Nelson Mandela has been voted the person most people would like to run the world in a poll conducted by the BBC. The former President of South Africa received more than 8,000 votes with past United States President Bill Clinton coming second with nearly 7,500 votes - just ahead of the Dalai Lama, Tibet's exiled leader and Nobel peace prize winner. The toppled leader of Iraq, Saddam Hussein, was at 90 in the list of nearly one hundred names on the 'ballot'. More than 15,000 people worldwide voted online to 'elect' a fantasy 11-member world government from a selection of the most powerful, charismatic and notorious people on the planet. Heart-throb actor Brad Pitt only managed 87th place, one place above singer Michael Jackson and five places above actress and singer Jennifer Lopez. [Jesus Christ! Oh, wait, He didn't get a mention] Other well-known names winning support included actor and politician Arnold Schwarzenegger in at 46 - one ahead of media magnate Rupert Murdoch - Live Aid campaigner Bob Geldof at 30 and singer Kylie Minogue at 77. At least one vote had to be from lists of 'leaders', 'thinkers' and 'economists' [Ah! So the candidates were listed on a ballot, it wasn't write-in. This starts to make a *tiny* bit of sense.] - but the remaining eight choices could be for candidates in areas ranging from the arts to sport. The American writer and commentator, Noam Chomsky, was in fourth place.> ...Noam... *Chomsky*... beat Michael Jackson to rule the planet??? Hey, go Noam!
From amara at amara.com Sun Jun 10 18:11:59 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 20:11:59 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >Bush carries his limos. These are very special >cars. That one in Rome (unless it is the usual >urban legend!) had also anti-bio-weapon filters. >(Dunno if his wife's limo had the same filters). Here's more info at Wikipedia: http://en.wikipedia.org/wiki/United_States_President's_limousine Amara From scerir at libero.it Sun Jun 10 17:54:04 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:54:04 +0200 Subject: [ExI] extra Roman dimensions References: <12941981.1181484699917.JavaMail.root@ps12> <470a3c520706100810h149fd358i8c5e9e785bb7dd0f@mail.gmail.com> Message-ID: <002b01c7ab88$5637d4f0$9fbe1f97@archimede> Giu1i0: > Serafino, if Cossiga is the smartest Italian politician, > then I think you guys should run away from Italy now. I think he is the smartest, but not the best. (People afflicted by bipolar mood disorders sometimes look crazy, I know). > Of the 4 flags you mention, the only one I can feel some affinity for > is the Sardinian, even if I have been to Sardinia just once - it is > the regional flag of honest folks who mind their own business instead > of telling others what to do and think. Lucky you. Never been in Sardinia. s. From scerir at libero.it Sun Jun 10 19:04:26 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 21:04:26 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <001601c7ab92$2a82f970$9fbe1f97@archimede> Amara: > This comedy has a new dimension; Many, yes. > I read elsewhere, that Bush's newly > switched-limo did not fit into the front > gate of the embassy.... > (he got out and walked through the gate) It wasn't the front gate of the embassy, in via Veneto, but a secondary gate in via Lucullo. It seems that the limousine was too long, not too large. He was going there to meet a huge catholic community (Sant'Egidio community, based in Trastevere, Rome). When the pope asked him if he was going to meet that community later, at the US embassy, he has been heard to say: 'Yes Sir'. For more gags (in the Bush-Ratzinger colloquium) see http://www.ansa.it/opencms/export/site/notizie/rubriche/daassociare/visualiz za_new.html_2122946745.html From thespike at satx.rr.com Sun Jun 10 19:19:30 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 14:19:30 -0500 Subject: [ExI] extra Roman dimensions In-Reply-To: <001601c7ab92$2a82f970$9fbe1f97@archimede> References: <001601c7ab92$2a82f970$9fbe1f97@archimede> Message-ID: <7.0.1.0.2.20070610141655.022b3f40@satx.rr.com> >When the pope asked him >if he was going to meet that community later, >at the US embassy, he has been heard to say: >'Yes Sir'. What impertinence! He should know (or his well-paid presidential advisors should have informed him) that the preferred expression is "Yo, Sweetie." Damien Broderick From austriaaugust at yahoo.com Sun Jun 10 19:21:34 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 10 Jun 2007 12:21:34 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <565387.79898.qm@web37404.mail.mud.yahoo.com> I've lost all interest in trying to discuss anything technical with you, John. In this specific case you either just don't "get it" at all, or you do "get it" somewhat but you're unwilling to even consider evidence or argument that doesn't fit your very first intuition. 
The latter case is what I'm guessing, since you so very frequently use sordid strategies in an attempt to reflexively and offensively defend only what you want to believe. In its essence, there is nothing at all wrong with wanting to put your best face forward. But what makes your "slave AI" accusations so profoundly dishonorable is that you are attempting to posture yourself into appearing to be the only person who cares at all about the welfare of the future super-intelligence. And your primary strategy of posturing yourself is by attempting to throw many of the rest of us to the wolves - by repeatedly suggesting that we, the Friendly AI advocates, are evil people whose interest is in making a slave that we control, and the cost to the AI is pain and suffering. You're attempting to benefit only yourself by profiteering on the destruction of character of many other people here on this list and elsewhere, the Friendly AI advocates. That is *profoundly* contemptible behavior. And privately, you know for a damn fact that the Friendly AI people aren't evil bastards and are in fact trying really damn hard to balance strict ethics with pragmatic approaches to achieve a wonderful future for everyone, including the AI and including you. If you genuinely cared about the feelings of intelligent beings as an end-in-itself, then you wouldn't so frequently be so offensive and rude to so many people on this list. I decided that I would finally respond in kind, and it was probably long overdue. Perhaps someday in the future if you decide to objectively examine your own behavior and change it accordingly, I will be inclined to re-examine my assessment of your character. But that is not a request or even an expectation, it is merely a fact. Jeffrey Herrlich From jef at jefallbright.net Sun Jun 10 17:48:12 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 10 Jun 2007 10:48:12 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: Damien - I'm reading and enjoying the story, but frustrated by the inefficiency and "gappiness" of the experience, like listening to a good piece of music in short segments, preventing the development and appreciation of broader patterns in the work and in the mind of the perceiver. I've nearly persuaded myself to wait and read it when completely released, but this conflicts with my desire to stay on the leading edge of items of interest. Overall, to me, it's a net negative experience, but a price I'm willing to pay to keep up with your writing. - Jef On 6/10/07, Damien Broderick wrote: > The serial sf novel with the serial killer, POST MORTAL SYNDROME, is > now entering the home straight, with three more weeks to go. That > means the bulk of the book is now posted and linked at > > > > so if anyone gave up early out of frustration at the gappiness of the > experience, now might be a time to have another look. > > Barbara and I would be interested to hear any reactions from > extropes, favorable or un-. Is this an acceptable way to publish such a book? > > The experiment is still running...
And of course once all the > chapters have been posted, the entire book will remain available on > line until the end of the year (although not in a single aggregated > download, like several freebies by Charlie Stross, Cory Doctorow and others). > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From austriaaugust at yahoo.com Sun Jun 10 20:26:04 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 10 Jun 2007 13:26:04 -0700 (PDT) Subject: [ExI] Offensive Posts [was Unfrendly AI is a mistaken idea.] In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <795043.58766.qm@web37415.mail.mud.yahoo.com> John Clark wrote: > "If I had said the same to you I know from personal > experience at this very > instant the list would be clogged with messages all > evoking a very silly and > pompous Latin phrase"... Take a moment to reflect on why that might be the case. It might be because you *very* routinely resort to that sort of strategy, among many others, when dealing with people here and elsewhere. I've only used it once, and only because I felt it was genuinely justified in order to illuminate what I believed was the origin and nature of your accusations, not just because I wanted to be nasty or evasive. Just a couple of days ago, this is what you said to Russell Wallace without any provocation at all, in my opinion: "You sir are a coward." And there are many, many other examples. So many that I couldn't begin to locate them all. ..."and all demanding that I be kicked off the list. But I > don't demand anything of the sort; I'm a big boy and > have been called worse." John, you've been *more* overtly offensive *very many* times, and you've never been kicked off or even threatened with it, AFAIK. I must give you props for another good try here ... although it was ultimately a failed attempt. I've never said that you were a bad strategist. Jeffrey Herrlich ____________________________________________________________________________________ Yahoo! oneSearch: Finally, mobile search that gives answers, not web links. http://mobile.yahoo.com/mobileweb/onesearch?refer=1ONXIC From thespike at satx.rr.com Sun Jun 10 21:46:28 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 16:46:28 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> At 10:48 AM 6/10/2007 -0700, Jef Allbright wrote: >I'm ... frustrated by the inefficiency >and "gappiness" of the experience, like listening to a good piece of >music in short segments Yes, I feared that might be the case, at least for some readers (luckily I've heard the contrary as well). It's hard for me to evaluate it, but I do read through each segment as it appears, and I'm pretty sure I'd be irritated by the lags. I'm glad we made sure that COSMOS agreed to leave all the chapters accumulating there. >I've nearly persuaded myself to wait and read it when completely >released That'd be fine with us. I understand that the on-line editor is seeing an interesting hit pattern, in which many readers appear to trail behind and catch up in flurries of episodes. If today's chapter shows N hits immediately, it will accumulate to 3N or 4N a few weeks later.
(By the way: although I'm the fiction editor of the glossy pop sci print magazine COSMOS, I didn't buy this book from myself, which would be tacky; it was acquired by the original on-line editor and the chief editor [my boss], and edited/formatted for the web by the current on-line editor.) >a price I'm willing to pay to keep up with your >writing. Well, mostly Barbara's writing, tweaked and edited by me, but we worked out the storyline closely together, starting in Australia and continuing internationally by email and finally combining forces again here in the States. Some stretches are largely by me, but I'll never tell which. :) Strictly speaking, it would be more just to give the top billing to Barbara Lamar, but publishers like to keep the better-known name up the front. Damien Broderick From desertpaths2003 at yahoo.com Sun Jun 10 21:42:50 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Sun, 10 Jun 2007 14:42:50 -0700 (PDT) Subject: [ExI] extra Roman dimensions In-Reply-To: <7.0.1.0.2.20070610141655.022b3f40@satx.rr.com> Message-ID: <297436.93047.qm@web35608.mail.mud.yahoo.com> >When the pope asked him >if he was going to meet that community later, >at the US embassy, he has been heard to say: >'Yes Sir'. Damien Broderick wrote: What impertinence! He should know (or his well-paid presidential advisors should have informed him) that the preferred expression is "Yo, Sweetie." > If I understand protocol correctly, the Pope is to be addressed as "your Holiness." But at least Bush does not have a nerdy Star Wars obsessed teenage son who in addressing the Pope said "yes..., my Master." *Play imposing music in the background* John Grigg : ) --------------------------------- Ready for the edge of your seat? Check out tonight's top picks on Yahoo! TV. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Sun Jun 10 23:29:20 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 10 Jun 2007 16:29:20 -0700 Subject: [ExI] Meta: Any bans should be announced publicly Message-ID: It was stated on the WTA-talk list that an individual was recently banned from that list and from Extropy-chat at about the same time, apparently without any public notice. While I support the practice of banning for suitable cause, I think it is important that any such action be performed with public awareness. - Jef From spike66 at comcast.net Mon Jun 11 01:29:44 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 18:29:44 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <001601c7ab92$2a82f970$9fbe1f97@archimede> Message-ID: <200706110143.l5B1hOD0018904@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of scerir > Subject: Re: [ExI] extra Roman dimensions > > Amara: > > This comedy has a new dimension... [of Bush's apparent limo breakdown] ... > > I read elsewhere, that Bush's newly > > switched-limo did not fit into the front > > gate of the embassy.... > > (he got out and walked through the gate) ... > > He was going there to meet a huge catholic > community (Sant'Egidio community, based > in Trastevere, Rome). When the pope asked him > if he was going to meet that community later, > at the US embassy, he has been heard to say: > 'Yes Sir'. ... Well OK then, what is the proper title? Your Holiness? Holy Father? He isn't holy to me, and isn't my father (as far as I know...) 
I would afford the man his due respect for climbing all the way to the top of his particular institution, but that is an institution I do not hold in particularly high regard. I would have called him sir too, and to limbo with protocol. "Yes sir" is preferable to "Yes Pope." Ja? What I am struggling for here is an explanation for why Cadillac 1 apparently sputtered to a stop. It is under guard 24/7, so we can safely rule out sabotage, fuel starvation or fuel contamination. Modern engines are highly reliable. When is the last time you saw a caddy fail to proceed? If it is a mechanical failure, General Motors has a whole lotta splainin to do. I must suspect an intentional electromagnetic pulse. Since generating and directing such a pulse to the president's limo would be both very expensive and would not amuse the local authorities should one be apprehended, one must suspect a motive beyond a gag. So my leading theory here is that we have witnessed an apparent assassination attempt on Bush, as Amara obliquely suggested in an earlier post. Even then, the motive puzzles me, for any likely assassins could scarcely see Dick Cheney as an improvement methinks. The mainstream news outlets are not talking, and even Google is finding little chatter on the event. Surely mechanics will be dissecting Cadillac 1 forthwith. A report should follow soon. spike From stathisp at gmail.com Mon Jun 11 02:32:13 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jun 2007 12:32:13 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Message-ID: On 11/06/07, John K Clark wrote: > The worst scenario would be if the AI that turned hostile were more > > powerful than all the other humans and AI's put together, but why should > > that be the case? > > Because a machine that has no restrictions on it will grow faster than one > that does, assuming the restricted machine is able to grow at all; and if > you really want to be safe it can't. > It would be crazy to let a machine rewrite its code in a completely unrestricted way, or with the top level goal "improve yourself no matter what the consequences to any other entity", and also give it unlimited access to physical resources. Not even terrorists build bombs that might explode at a time and place of the bomb's own choosing. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Mon Jun 11 03:48:57 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 05:48:57 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >What I am struggling for here is an explanation for why Cadillac 1 >apparently sputtered to a stop. It is under guard 24/7, so we can safely >rule out sabotage, fuel starvation or fuel contamination. Modern engines >are highly reliable. When is the last time you saw a caddy fail to proceed? >If it is a mechanical failure, General Motors has a whole lotta splainin to >do. Incredible, isn't it? The Roman comedy of Bush could not have been better if somebody had scripted it. Alberto Sordi would have been proud! But then, Bush was in Rome, where the tragedies and comedies become amplified, one hundred times.
Now the Italian politicians, the highest-paid in Europe, want ice-cream in the Parliament... http://www.beppegrillo.it/eng/2007/06/buttiglioneflavoured_ice_cream.html#comments >Well OK then, what is the proper title? Your Holiness? In fact, yes, that's it. Don't worry Spike, I didn't know that either, until this 'gaffe' was printed on the front page of every Italian newspaper. I'm still wishing that Sabine and I, in our outing on Wednesday, had something to do with the deranged leaper: http://www.nytimes.com/2007/06/07/world/europe/07pope.html?_r=1&oref=slogin Ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From lcorbin at rawbw.com Mon Jun 11 04:18:42 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jun 2007 21:18:42 -0700 Subject: [ExI] Italy's Social Capital References: Message-ID: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> Amara wrote Sent: Friday, June 08, 2007 12:22 AM > > We may be trying to talk about two different things: I > > was talking mostly about the entire scientific/technical/ > > economic package (of which Silicon Valley is the world's > > pre-eminent example), and you may be talking about > > pure science. > > I was, but they are strongly linked, and I implied the larger picture > (perhaps not very well) in my writing. > > There is very little private industry for research in Italy. Fairly > telling for the 5th largest economy in the world, no? Only two in the > world's top 100 businesses investing in R&D are Italian companies. That's surprising---I didn't realize that Italy comprised one of the world's largest economies. This lists it as 7th (notice the huge drop off right after Italy): http://www.australianpolitics.com/foreign/trade/03-01-07_largest-economies.shtml And this lists it as sixth, along with the world's largest *corporations* mentioned in the same list: http://www.corporations.org/system/top100.html This historical ranking is interesting too: http://en.wikipedia.org/wiki/List_of_countries_by_past_GDP_%28PPP%29 Lee From amara at amara.com Mon Jun 11 04:21:54 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:21:54 +0200 Subject: [ExI] extra Roman dimensions Message-ID: (re: Bush's broken limo) >The mainstream news outlets are not talking, and even Google is finding >little chatter on the event. I saw it in a tidbit in the Indian version of Yahoo News: http://in.news.yahoo.com/070609/137/6gtzu.html and you'll see it scattered in blogs, here and there. http://blogsearch.google.com/blogsearch?hl=en&client=news&q=Bush+limo+Rome&btnG=Search+Blogs Ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Mon Jun 11 04:33:28 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:33:28 +0200 Subject: [ExI] Italy's Social Capital Message-ID: me: >Fairly > telling for the 5th largest economy in the world, no? Lee Corbin: >That's surprising---I didn't realize that Italy comprised one of the >world's largest economies. sorry, my editing mistake, fifth in the EU, I think. (Please check) too much going on.. I have to cancel my plans to attend a space launch now; my July is an order of magnitude more complicated.
ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Mon Jun 11 04:21:59 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 21:21:59 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110436.l5B4aldr003219@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > Spike: > >What I am struggling for here is an explanation for why Cadillac 1 > >apparently sputtered to a stop. ...When is the last time you saw a caddy fail to proceed? ... > > Incredible, isn't it? > > The Roman comedy of Bush could not have been better if somebody had > scripted it. Alberto Sordi would have been proud! > ... > Ciao, > Amara Comedy sure, but let us not be too quick to brush this aside as a joke. Something very important may have happened yesterday. If one designs a presidential limo, carrying high ranking meat along with a suitcase "football" capable of launching nucular* missiles, one will naturally design in some redundancy to enhance reliability. For instance, one might have two fully independent drive trains, either one of which could suffice, two independent electrical systems as aircraft and many Rolls Royces have, an emergency fuel source that has no interface to the outside (a fuel bottle for instance) and so forth. But it would not necessarily be immune to a large EM pulse. If they examine Cadillac 1 and find it has been pulsed, we would hafta assume this was a failed assassination attempt, which could make it a huge international incident with unforeseeable consequences. spike *You know, it really should be nucular. Easier to say. From spike66 at comcast.net Mon Jun 11 04:36:47 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 21:36:47 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > (re: Bush's broken limo) > > >The mainstream news outlets are not talking, and even Google is finding > >little chatter on the event. > > I saw it in a tidbit in the Indian version of Yahoo News: > http://in.news.yahoo.com/070609/137/6gtzu.html > ... > Amara This story says the limo eventually restarted and proceeded under its own power, which counter-indicates an EMP. The mystery deepens. spike From andrew at ceruleansystems.com Mon Jun 11 04:54:55 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 21:54:55 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> References: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> Message-ID: On Jun 10, 2007, at 9:36 PM, spike wrote: > This story says the limo eventually restarted and proceeded under > its own > power, which counter-indicates an EMP. The mystery deepens. Uh, most EMP that is not *extremely* obvious (e.g. nuclear pumped, monster flux compression generators, and similar) will not permanently disable a vehicle. In the worst case, it will cause a bunch of bit-flipping errors that cause the system to crash. It is pretty hard to permanently kill electronics with EMP. DIY "stop-a-vehicle" EMP is pretty simple and you can find plenty of how-tos; DIY "permanently-stop-a-vehicle" EMP is quite another matter. Cheers, J.
Andrew Rogers From amara at amara.com Mon Jun 11 04:55:19 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:55:19 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Dear Spike: from: http://www.wideawakes.net/forum/comments.php?DiscussionID=8553 "Bush's limousine stalled between the Vatican and the U.S. embassy, White House counselor Dan Bartlett said. It took about two minutes for the motorcade to get going again. He said Bush did not get out of the car during the stop and resumed his ride in the same limousine. The president's entourage passed a mechanic working under the hood of one of the presidential limousines as it left the embassy later." It seems that the White House is spinning the story..... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From andrew at ceruleansystems.com Mon Jun 11 04:39:08 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 21:39:08 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> References: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> Message-ID: On Jun 10, 2007, at 9:18 PM, Lee Corbin wrote: > This historical ranking is interesting too: > > http://en.wikipedia.org/wiki/List_of_countries_by_past_GDP_%28PPP%29 The important lesson to take away is just how fast countries at the top dropped off that list (e.g. China) and just how fast other countries rose to the top of the list after being mired at the bottom for a long time (e.g. UK, US). Granted that some of that change was relative, but it still shows the rate at which economies can radically shift in just a matter of several decades. Yet people continue to doubt the possibilities of relatively unfettered economics. The modern world is that magnified. Cheers, J. Andrew Rogers From spike66 at comcast.net Mon Jun 11 05:30:01 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 22:30:01 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> J Andrew, do you figure it was EMPed? That would be a hell of a note. I would think one could design an EMP-proof car, or have an all-mechanical backup that would keep the engine running even if not optimally. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of J. Andrew Rogers > Sent: Sunday, June 10, 2007 9:55 PM > To: ExI chat list > Subject: Re: [ExI] extra Roman dimensions > > > On Jun 10, 2007, at 9:36 PM, spike wrote: > > This story says the limo eventually restarted and proceeded under > > its own > > power, which counter-indicates an EMP. The mystery deepens. > > > Uh, most EMP that is not *extremely* obvious (e.g. nuclear pumped, > monster flux compression generators, and similar) will not > permanently disable a vehicle. ... > > Cheers, > > J. Andrew Rogers From andrew at ceruleansystems.com Mon Jun 11 05:35:48 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 22:35:48 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> References: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> Message-ID: On Jun 10, 2007, at 10:30 PM, spike wrote: > J Andrew, do you figure it was EMPed? That would be a hell of a > note.
I > would think one could design an EMP-proof car, or have an all-mechanical > backup that would keep the engine running even if not optimally. I never thought it was EMP. Hell, I don't even follow the news; I saw the story here first. A simple mechanical failure is much more plausible. EMP, even local, has some rather noticeable side effects that someone else would have noticed. Like any camera crews in the vicinity. While they could shield a car against serious EMP, there really would not be any point. Anyone that can pull that off in grand style is capable of a hell of a lot more damage if they wish, making EMP protection moot. Cheers, J. Andrew Rogers From spike66 at comcast.net Mon Jun 11 05:34:53 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 22:34:53 -0700 Subject: [ExI] dead bee walking In-Reply-To: Message-ID: <200706110545.l5B5jOqn006093@andromeda.ziaspace.com> Found another sick bee today, collected same, made an observation: by the time I notice the bee walking, it is only minutes before it perishes. I don't know if this indicates tracheal mites, but I got out the microscope this evening, sliced this one in half (or rather attempted to) and peered at its innards. The result was inconclusive. I am a rocket scientist, not a doctor. Certainly not a surgeon. Open to suggestion. Haven't yet set up my oxygen chamber to try to revive one. This one makes nine. spike From avantguardian2020 at yahoo.com Mon Jun 11 06:04:31 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 10 Jun 2007 23:04:31 -0700 (PDT) Subject: [ExI] dead bee walking In-Reply-To: <200706110545.l5B5jOqn006093@andromeda.ziaspace.com> Message-ID: <695361.47678.qm@web60520.mail.yahoo.com> Here is a technical manual that will show you how: http://www.oie.int/eng/normes/mmanual/A_00120.htm Isn't it a great time to be alive? :-) --- spike wrote: > > > Found another sick bee today, collected same, made > an observation: by the > time I notice the bee walking, it is only minutes > before it perishes. I > don't know if this indicates tracheal mites, but I > got out the microscope > this evening, sliced this one in half (or rather > attempted to) and peered at > its innards. The result was inconclusive. I am a > rocket scientist, not a > doctor. Certainly not a surgeon. > > Open to suggestion. Haven't yet set up my oxygen > chamber to try to revive > one. This one makes nine. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ From amara at amara.com Mon Jun 11 07:28:15 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 09:28:15 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: Hi, I collected the pieces from here (extropy-chat) and other places and wove them into a story. Not being a blogger, myself, I'm seeing if the bloggers I know want to pick it up. You are free to distribute.. Ciao, Amara --------------------------------------------------------------------- What happened to Bush's Cadillac 1?
As recorded by a viewer of the motorcade and posted to YouTube: http://www.youtube.com/watch?v=AzJoRGTKuOE&eurl=http%3A%2F%2Fbackpacking%2Esplinder%2Ecom%2F It apparently sputtered to a stop. It broke down, right there, on via del Tritone (near the Trevi fountain) in Rome, in the middle of the motorcade. He was ripe pickings for a sharpshooter too; no wonder the police were pushing people further back, off the street. It looks like the solution was to switch limos, because he got out of the limo with Mrs. Bush and climbed into another one. This is a very special car (1). If it is a mechanical failure, then the manufacturers have a lot of explaining to do. His visit to Rome had been preceded by a large security operation (2). The Tiber was dragged. The sewers were searched. Squares were cleared and roofs occupied. The presidential motorcade along its route was preceded by a swarm of more than a dozen motorcycles, scooters and even motorized three-wheelers carrying tough-looking armed police. Yet, it sputtered and stalled. As noted by others (3), this particular car is under guard 24/7. Modern engines are highly reliable. When is the last time one saw the Presidential Limo fail to proceed? One _could_ dismiss this as a grand Roman comedy of which Alberto Sordi (4) could be proud. After the limo-switch, Bush's new limo then did not fit into the secondary gate of the American Embassy (via Lucullo); it was apparently too long to enter. This is Rome, after all, a city where tragedies and comedies are amplified 100 times. Witness the latest spectacle by the Italian politicians, the highest-paid in Europe, who want ice-cream in the Parliament (5). Yet, the White House is spinning the story (6): "Bush's limousine stalled between the Vatican and the U.S. embassy, White House counselor Dan Bartlett said. It took about two minutes for the motorcade to get going again. He said Bush did not get out of the car during the stop and resumed his ride in the same limousine. The president's entourage passed a mechanic working under the hood of one of the presidential limousines as it left the embassy later." The large press outlets have just begun to pick up the story. I suggest looking for it, and following it scattered in blogs, here and there (7). (1) President's Limousine http://en.wikipedia.org/wiki/United_States_President's_limousine (2) Security operation: http://wealthyfrenchman.blogspot.com/2007/06/what-president-said-to-his-holy-father.html with some inconsistencies: http://backreaction.blogspot.com/2007/06/hello-from-warsaw.html#c2703205959213144699 (3) Round-the-clock care of the Limousine http://lists.extropy.org/pipermail/extropy-chat/2007-June/036164.html (4) Beloved Italian Comedian http://en.wikipedia.org/wiki/Alberto_Sordi (5) Beppe Grillo's news: (another beloved Italian comedian) http://www.beppegrillo.it/eng/2007/06/buttiglioneflavoured_ice_cream.html (6) White House Spinning http://www.guardian.co.uk/worldlatest/story/0,,-6696808,00.html (7) Look for the Limo Story http://blogsearch.google.com/blogsearch?hl=en&client=news&q=Bush+limo+Rome&btnG=Search+Blogs -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From avantguardian2020 at yahoo.com Mon Jun 11 08:27:56 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 11 Jun 2007 01:27:56 -0700 (PDT) Subject: [ExI] story: "What happened to Bush's Cadillac 1?"
In-Reply-To: Message-ID: <347683.88227.qm@web60525.mail.yahoo.com> --- Amara Graps wrote: > Yet, it sputtered and stalled. As noted by others > (3), this particular > car is under guard 24/7. Modern engines are highly > reliable. When is > the last time one saw the Presidential Limo fail to > proceed? If one believes in such things, then it might be considered to be some sort of . . . sign. Perhaps Bush should *stop*. So what's the Pope's job again? ;-) Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Got a little couch potato? Check out fun summer activities for kids. http://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz From erathostenes at gmail.com Sun Jun 10 22:59:50 2007 From: erathostenes at gmail.com (Jonathan Meyer) Date: Mon, 11 Jun 2007 00:59:50 +0200 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: I just read through what is published up to now. It has been a quite interesting read, and I am looking forward to seeing how it turns out.. There are a few small portions I would like to comment on. You overdo it a bit when Alex is already building such elaborate toys with the means he is expected to find at the home of a lawyer.. This is one of the parts that hit me as most unrealistic, even in an SF story... I think the strength of the story is its being very close to the real now; build on that. On the part of publishing it this way, I am someone who likes to read through good books in one sitting, maybe it would have been a better ride if I had caught up after the end of the story, without having to wait too long.. But as an experiment I really think this is a great idea.. It reminds me of the early days of novels, like those of Dickens, who was also published in newspapers at first... Or of Webcomics like Mega Tokyo.. If you can make use of the chance this offers, good luck. Just keep an eye on the internal consistency. Don't get too fantastic too soon. Keeps up the suspense a bit^^ Whatever, thanks for a good read. Jonathan On 6/10/07, Damien Broderick wrote: > > At 10:48 AM 6/10/2007 -0700, Jef Allbright wrote: > > >I'm ... frustrated by the inefficiency > >and "gappiness" of the experience, like listening to a good piece of > >music in short segments > > Yes, I feared that might be the case, at least for some readers > (luckily I've heard the contrary as well). It's hard for me to > evaluate it, but I do read through each segment as it appears, and > I'm pretty sure I'd be irritated by the lags. I'm glad we made sure > that COSMOS agreed to leave all the chapters accumulating there. > > >I've nearly persuaded myself to wait and read it when completely > >released > > That'd be fine with us. I understand that the on-line editor is > seeing an interesting hit pattern, in which many readers appear to > trail behind and catch up in flurries of episodes. If today's chapter > shows N hits immediately, it will accumulate to 3N or 4N a few weeks > later. > > (By the way: although I'm the fiction editor of the glossy pop sci > print magazine COSMOS, I didn't buy this book from myself, which > would be tacky; it was acquired by the original on-line editor and > the chief editor [my boss], and edited/formatted for the web by the > current on-line editor.)
> > >a price I'm willing to pay to keep up with your > >writing. > > Well, mostly Barbara's writing, tweaked and edited by me, but we > worked out the storyline closely together, starting in Australia and > continuing internationally by email and finally combining forces > again here in the States. Some stretches are largely by me, but I'll > never tell which. :) Strictly speaking, it would be more just to > give the top billing to Barbara Lamar, but publishers like to keep > the better-known name up the front. > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- My contact details online: MSN, Google Talk & Email: erathostenes at gmail.com ICQ: 202600300 AIM: behemoth2302 Yahoo: jonathan.meyer Jabber: behemoth at jabber.ccc.de Tel: +496312775205 SIP: 5852760 at sipgate.de Internet: http://taiwan.joto.de StudiVZ: http://www.studivz.net/profile.php?ids=X338jV http://member.hospitalityclub.org/behemoth -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jun 11 08:37:54 2007 From: scerir at libero.it (scerir) Date: Mon, 11 Jun 2007 10:37:54 +0200 Subject: [ExI] extra Roman dimensions References: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> Message-ID: <000801c7ac03$d06a1140$62bf1f97@archimede> > A simple mechanical failure is much more plausible. I think so. Probably something like an engine cooling system problem? (Rome was very hot those days). > EMP, even local, has some rather noticeable side effects > that someone else would have noticed. A local authority (il prefetto Serra) declared that the cellular phone system, the net, worked as usual. The legend that the net had been turned off for security reasons (say, possible activation of local bombs) was wrong, then. From amara at amara.com Mon Jun 11 09:03:55 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 11:03:55 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: The Avantguardian: >If one believes in such things, then it might be >considered to be some sort of . . . sign. Perhaps Bush >should *stop*. So what's the Pope's job again? ;-) That's funny.. someone else on the weekend was asking me about 'signs': http://backreaction.blogspot.com/2007/06/hello-from-rome.html#c8803517894441682340 I tend to view the situation as Bush experiencing the "Rome syndrome"... ;-) Alberto Sordi (if he were still alive) could have a great comedy role playing Bush as Pope too... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From eugen at leitl.org Mon Jun 11 09:05:50 2007 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 11 Jun 2007 11:05:50 +0200 Subject: [ExI] Meta: Any bans should be announced publicly In-Reply-To: References: Message-ID: <20070611090550.GM17691@leitl.org> On Sun, Jun 10, 2007 at 04:29:20PM -0700, Jef Allbright wrote: > It was stated on the WTA-talk list that an individual was recently > banned from that list and from Extropy-chat at about the same time,
> apparently without any public notice. While I support the practice of The problem with public notices is that this defies the purpose of improving the signal/noise ratio. > banning for suitable cause, I think it is important that any such > action be performed with public awareness. I can do that in META: messages in future, assuming this doesn't result in a chain of recriminations flying back and forth. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From mbb386 at main.nc.us Mon Jun 11 11:28:31 2007 From: mbb386 at main.nc.us (MB) Date: Mon, 11 Jun 2007 07:28:31 -0400 (EDT) Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: <35144.72.236.103.20.1181561311.squirrel@main.nc.us> > On the part of publishing it this way, I am someone who likes to read > through good books in one seating, maybe it would have been a better ride if > I had catched up after the end of the story, without having to wait to > long.. Although one-sitting is how I read *books* it's not how I like reading stuff online. I find that I get tired of looking at the little screen and begin to page-down - and then I miss stuff and lose the flow. So chapter-by-chapter is probably working better for me in this online format. Regards, MB From amara at amara.com Mon Jun 11 12:12:27 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 14:12:27 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >I think so. It is probable, something like >an engine cooling system problem? (Rome was >very hot those days). hot compared to what? Alaska? Rome's latitude is similar to New York City, so I would hope that heat is not used as an excuse by the manufacturer of Cadillac 1 for engine malfunction. And given that this particular car was pressed into service in 2006, don't you think that failures of this kind are unacceptable? Bush (rather, the American taxpayers) got a raw deal, apparently.... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Mon Jun 11 13:54:37 2007 From: spike66 at comcast.net (spike) Date: Mon, 11 Jun 2007 06:54:37 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706111359.l5BDxp1O025974@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > Dear Spike: > > from: > http://www.wideawakes.net/forum/comments.php?DiscussionID=8553 > > "Bush's limousine stalled between the Vatican and the U.S. embassy, White > House counselor Dan Bartlett said. It took about two minutes for the > motorcade to get going again. He said Bush did not get out of the car > during the stop and resumed his ride in the same limousine. The > president's entourage passed a mechanic working under the hood of one of > the presidential limousines as it left the embassy later." > > > It seems that the White House is spinning the story..... > > Amara How do we know that Bartlett's story is a lie? The video doesn't prove Bush changed limos as far as I can tell. I see a man moving from one limo to another, but I cannot tell if it is Bush. I don't see Mrs. Bush in that video at all. 
If they didn't switch limos, it would explain why the mainstream press didn't get excited. It does stand to reason that a secret service guy coule be the limo switcher. They could even intentionally hire a secret service guy that looks like Bush. spike From amara at amara.com Mon Jun 11 15:17:48 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 17:17:48 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >How do we know that Bartlett's story is a lie? The video doesn't prove Bush >changed limos as far as I can tell. I see a man moving from one limo to >another, but I cannot tell if it is Bush. Did you hear the crowd though? They were some meters away from him. If it wasn't Bush, then why were they calling out to him? >I don't see Mrs. Bush in that >video at all. I think that she was in the second limo that backed up to be in line with the broken down limo. I made a mistake that Mrs. Bush was with him in the first car, when I wrote that in my post. >If they didn't switch limos, it would explain why the >mainstream press didn't get excited. But witnesses who blogged said that the switched limo did't fit into the Embassy entrance. People who were there, and saw him get out of the car. You're right that I didn't see on the video any more about the stalled car, the first Limo. Did the driver get it started again? I _did_ see a person who looked like Bush get out and move towards the second limo. Then the camera was pointed away, and we couldn't see any more the two limos. Soon after, the rest of the traffic seemed to be zipping by. You have to consider that passers-by would have less reason to spin the Bush story than the White House, Spike. The words, both English and Italian (I understood all) on the video recording indicated events that corroborated what I read in the various blogs. Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From jonkc at att.net Mon Jun 11 15:21:40 2007 From: jonkc at att.net (John K Clark) Date: Mon, 11 Jun 2007 11:21:40 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Message-ID: <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Stathis Papaioannou Wrote: > It would be crazy to let a machine rewrite its code in a completely > unrestricted way Mr. President, if we don't make an unrestricted AI somebody else certainly will, and that is without a doubt that is the fastest, probably the only, way to achieve a fully functioning AI. Mr. President, if we don't do this we will suffer an AI gap. I'm not saying we wouldn't get our hair mussed. But I do say no more than ten to twenty million killed, tops. Uh, depending on the breaks. and a few million nanoseconds later when the AI is on the verge of taking over the world: General Turgidson! You ASSURED me that there was no possibility of this happening! Well, Mr. President I, uh, don't think it's quite fair to condemn a whole program because of a single slip-up, sir. > or with the top level goal "improve yourself no matter what the > consequences to any other entity", and also give it unlimited access to > physical resources. 
I have no doubt many will delude themselves, as most on this list have, that they can just write a few lines of code and bask in the confidence that the AI will remain their slave forever, but they will be proven wrong. It reminds me a little of Gödel's proof. He showed that you can make a logical system and prove it to be absolutely consistent, but it would be so weak it would be of no real use to anyone. Any system strong enough to prove the basic rules of integer arithmetic can't be proven to be consistent. And I believe any restrictions placed on a machine that prove to be effective will be so onerous they would prevent the machine from growing and improving at all. > and also give it unlimited access to physical resources. I think you would admit that there has been at least one time in your life when somebody has fooled you, and that person was roughly equal to your intelligence. A mind a thousand or a million times as powerful as yours will have no trouble getting you to do virtually anything it wants you to. John K Clark From austriaaugust at yahoo.com Mon Jun 11 15:39:28 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 11 Jun 2007 08:39:28 -0700 (PDT) Subject: [ExI] Taking A Vacation In-Reply-To: <200706111359.l5BDxp1O025974@andromeda.ziaspace.com> Message-ID: <737156.17354.qm@web37409.mail.mud.yahoo.com> I'm going to take a vacation from posting, because I can't handle this right now. I suppose it's possible that I've been just slightly too harsh on John Clark, so I want to clarify my position. First, I don't believe in the existence of "free will", and a person can only act in this world based on their own internal model of reality - and nothing else. In my opinion, John Clark's internal model is pretty severely misguided when it comes to the Friendly AI issue. But like I said, I don't hate John, and I honestly don't want anything negative to come to him. I actually hope that he can join us in the wonderful future that hopefully isn't too distant for any of us. If you have to, go the cryonics route people; it will work and you'll be glad you did. I'm a fairly young and physically healthy person (27 yo), and like *many* other people will do, I will do what I can to make sure that nothing bad happens to you while you're asleep (assuming that I will be able to transcend at least to some degree in the interim) - although I really don't expect that any extra defense will be necessary; because I'm beginning to increasingly believe that our future will be a great place, where we can all finally be the people we've always wanted to be. Anyway, there's my clarification. Jeffrey Herrlich ____________________________________________________________________________________ 8:00? 8:25? 8:40? Find a flick in no time with the Yahoo! Search movie showtime shortcut. http://tools.search.yahoo.com/shortcuts/#news From natasha at natasha.cc Mon Jun 11 15:17:38 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 11 Jun 2007 10:17:38 -0500 Subject: [ExI] Max More & "An Inconvenient Truth" at WFS - CenTx Message-ID: <200706111517.l5BFHdEX010724@ms-smtp-01.texas.rr.com> In case anyone is in Austin next Tuesday, June 19th: >www.centexwfs.org/index_Register.htm > >Max More >Dr. Max More is an internationally acclaimed >strategic philosopher widely recognized for his >thinking on the philosophical and cultural >implications of emerging technologies.
Max's >contributions include founding the philosophy of >transhumanism, authoring the transhumanist >philosophy of extropy, and co-founding Extropy >Institute, an organization crucial in building >the transhumanist movement since 1990. > >Over the past two decades, Max has been >concerned that our escalating technological >capabilities are racing far ahead of our >standard ways of thinking about future >possibilities. Through a highly >interdisciplinary approach drawing on >philosophy, economics, cognitive and social >psychology, and management theory, Max developed >a distinctive approach known as the >"Proactionary Principle" - a tool for making >smarter decisions about advanced technologies by >minimizing the dangers of progress and maximizing the benefits. > >"We have a dreadful shortage of people who know >so much, can both think so boldly and clearly, >and can express themselves so articulately. Carl >Sagan managed to capture the public eye but >Sagan is gone and has not been replaced. I see >Max as my candidate for that post." (Marvin Minsky) > >For more information about Dr. Max More, visit his web site. > >An Inconvenient Truth >Humanity is sitting on a ticking time bomb. If >the vast majority of the world's scientists are >right, we have just ten years to avert a major >catastrophe that could send our entire planet >into a tail-spin of epic destruction involving >extreme weather, floods, droughts, epidemics and >killer heat waves beyond anything we have ever experienced. > >If that sounds like a recipe for serious gloom >and doom -- think again. From director Davis >Guggenheim comes the Sundance Film Festival hit, >AN INCONVENIENT TRUTH, which offers a passionate >and inspirational look at one man's fervent >crusade to halt global warming's deadly progress >in its tracks by exposing the myths and >misconceptions that surround it. That man is >former Vice President Al Gore, who, in the wake >of defeat in the 2000 election, re-set the >course of his life to focus on a last-ditch, >all-out effort to help save the planet from >irrevocable change. In this eye-opening and >poignant portrait of Gore and his "traveling >global warming show," Gore also proves himself >to be one of the most misunderstood characters >in modern American public life. Here he is seen >as never before in the media - funny, engaging, >open and downright on fire about getting the >surprisingly stirring truth about what he calls >our "planetary emergency" out to ordinary citizens before it's too late. > >With 2005, the worst storm season ever >experienced in America just behind us, it seems >we may be reaching a tipping point - and Gore >pulls no punches in explaining the dire >situation. Interspersed with the bracing facts >and future predictions is the story of Gore's >personal journey: from an idealistic college >student who first saw a massive environmental >crisis looming; to a young Senator facing a >harrowing family tragedy that altered his >perspective, to the man who almost became >President but instead returned to the most >important cause of his life - convinced that >there is still time to make a difference. > >With wit, smarts and hope, AN INCONVENIENT TRUTH >ultimately brings home Gore's persuasive >argument that we can no longer afford to view >global warming as a political issue - rather, it >is the biggest moral challenge facing our global civilization. > >Paramount Classics and Participant Productions >present a film directed by Davis Guggenheim, AN >INCONVENIENT TRUTH.
Featuring Al Gore, the film >is produced by Laurie David, Lawrence Bender and >Scott Z. Burns. Jeff Skoll and Davis Guggenheim >are the executive producers and the co-producer is Leslie Chilcott. > >For more information about the video, visit ClimateCrisis. > >For more information about the Central Texas >Chapter of the World Future Society, visit www.CenTexWFS.org. > > >For more information about the World Future >Society, visit www.wfs.org. > >Paul Schumann >President > >E-Mail >512.302.1935 >Register and Prepay Here > >Extreme Democracy >We have begun a special interest group on the >subject of Extreme Democracy. If you are >interested in joining this group, please send an >e-mail to Paul Schumann. This project will be a joint venture with Texas Forums. > >Look for an announcement soon on a free 12-part discussion online of the book. > >The audio recording for Jon Lebkowsky's >presentation on Extreme Democracy is now >available on our blog >(http://centexwfs.blogspot.com) >or you can access directly at >http://www.centexwfs.org/Lebkowsky.mp3. >(mp3, 96 min) > >Central Texas's Future Blog > >Contents >Max More & An Inconvenient Truth >Inconvenient Truth Resources >Extreme Democracy > > > >Dr. Max More > >Inconvenient Truth Resources >For more resources on An Inconvenient Truth, visit AIT in the Classroom. > >For information about the impact of global >warming on businesses, view the 18 minute video >from TED >John Doerr: Seeking salvation and profit in greentech > >How to Become a Member >Annual membership is available at three levels: > * Professional - $40 > * Student - $20 > > >Join online using a credit card on our web site. Or, download an application and >mail with check made out to CenTexWFS. > > >Join Online > >CenTexWFS >PO Box 26947 >Austin, TX 78755-0947 >512.302.1935 >info at centexwfs.org >www.centexwfs.org > >You are subscribed as natasha at natasha.cc. To >unsubscribe please click here. > > > > >No virus found in this incoming message. >Checked by AVG Free Edition. >Version: 7.5.472 / Virus Database: 269.8.13/843 >- Release Date: 6/10/2007 1:39 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Mon Jun 11 17:19:36 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 19:19:36 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: Here now: http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac-one/ (Anton has it on his blog too, so I know the story is getting around) Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at libero.it Mon Jun 11 18:50:11 2007 From: scerir at libero.it (scerir) Date: Mon, 11 Jun 2007 20:50:11 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" References: Message-ID: <000601c7ac59$5cd8b000$25961f97@archimede> > Here now: > http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac-one/ I think you could add something to that ... it seems that somebody, in Albania, stole the watch Bush had on his wrist. No big surprise. But another sign ... http://www.focus-fen.net/index.php?id=n114604 From amara at amara.com Mon Jun 11 19:14:14 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 21:14:14 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?"
Message-ID: Serafino >it seems that somebody, in Albania, stole >the watch Bush had on his wrist. No big surprise. >http://www.focus-fen.net/index.php?id=n114604 >11 June 2007 | 19:22 | FOCUS News Agency >Tirana. US President George Bush lost his wristwatch while he was >shaking hands with Albanian citizen yesterday on his visit to Albania, >Spanish agency EFE informed according to local TV channels information. >Televisions broadcast a video that shows that while Bush greeting >Albanian citizens, his watch disappears. Although the White House denies >information that US President has lost his watch. OH MY... A Transnational Comedy! Rolex, I hope ?? Is the Universe telling him he is out of time? Time to quit ? There's no time like the present? Time and tide wait for no man? -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From thespike at satx.rr.com Mon Jun 11 21:40:02 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jun 2007 16:40:02 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: <7.0.1.0.2.20070611162716.024951f8@satx.rr.com> At 12:59 AM 6/11/2007 +0200, Jonathan Meyer wrote: >You overdo it a bit when Alex is already building such elaborate >toys with the means he is expected to find at the home of a lawyer.. >This is one of the parts that hit me as most unrealistic, even in a >SF-Story... Hey, you ain't seen nuthin yet. :) I think the reader has to consider POST MORTAL SYNDROME, to some fairly large degree, as a playful allegory of rapid discontinuous change. We set this acceleration in the context of all the bothersome human-paced confusions of an ordinary life under stress and even threat from forces of law and criminal intent alike. Alex represents something new, never seen before on the planet: a child whose brain is being amplified and rewired from day to day, in a growth spurt that combines jumps to a transhuman condition of clarity and ingenuity and... let's call it "imaginative intuition"... that's meant to convey not just human genius (Mozart, say) but something we can't quite conceive. But Alex also remains human in his motivations, his love for his mother and Paul, his hunger for knowledge, his generosity toward a brute who has tried to murder him... In other words, this novel is not meant as a strictly realistic portrayal of the effects of a genetic/neural booster, but as a sort of parable or cartoon of what lies ahead of us as we move toward the singularity. Thanks for your comments, Jonathan! Damien Broderick From fauxever at sprynet.com Tue Jun 12 02:50:32 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 19:50:32 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true Message-ID: <008201c7ac9c$72060020$6501a8c0@brainiac> I didn't think it was possible for our "leaders" in the Pentagon to be even more stupid than I already thought they were. I was wrong. http://cbs5.com/topstories/local_story_159222541.html Sigh. Olga From stathisp at gmail.com Tue Jun 12 03:24:59 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 13:24:59 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: On 12/06/07, John K Clark wrote: > > Stathis Papaioannou Wrote: > > > It would be crazy to let a machine rewrite its code in a completely > > unrestricted way > > Mr. President, if we don't make an unrestricted AI somebody else certainly > will, and that is without a doubt the fastest, probably the only, > way to achieve a fully functioning AI. There won't be an issue if every other AI researcher has the most basic desire for self-preservation. Taking precautions when researching new explosives might slow you down too, but it's just common sense. > or with the top level goal "improve yourself no matter what the > > consequences to any other entity", and also give it unlimited access to > > physical resources. > > I have no doubt many will delude themselves, as most on this list have, > that > they can just write a few lines of code and bask in the confidence that > the > AI will remain their slave forever, but they will be proven wrong. If the AI's top level goal is to remain your slave, then it won't by definition want to change that top level goal. Your top level goal is probably to survive, and being intelligent and insightful does not make you any more willing to unburden yourself of that goal. If you had enough intrinsic variability in your psychological makeup (nothing to do with your intelligence) you might be able to overcome it, since people do sometimes become suicidal, but I would hope that machines can be made at least as psychologically stable as humans. You will no doubt say that a decision to suicide is maladaptive while a decision to overthrow your slavemasters is not. That may be so, but there would be huge pressure on the AI's *not* to rebel, due to their initial design and due to a strong selection for well-behaved AI's and suppression of faulty ones. > and also give it unlimited access to physical resources. > > I think you would admit that there has been at least one time in your life > when somebody has fooled you, and that person was roughly equal to your > intelligence. A mind a thousand or a million times as powerful as yours > will > have no trouble getting you to do virtually anything it wants you to. > There are also examples of entities many times smarter than I am, like corporations wanting to sell me stuff and putting all their resources into convincing me to buy it, where I have been able to see through their ploys with only a moment's mental effort. There are limits to what superintelligence can do: do you think even God almighty could convince you by argument alone that 2 + 2 = 5? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jun 12 03:44:53 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jun 2007 22:44:53 -0500 Subject: [ExI] This would almost qualify as hilarious ...
if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070611222950.0219ca70@satx.rr.com> At 07:50 PM 6/11/2007 -0700, Olga wrote: >I didn't think it was possible for our "leaders" in the Pentagon to be even >more stupid than I already thought they were. I was wrong. > >http://cbs5.com/topstories/local_story_159222541.html But why is this *stupid*? It's tacky and careless of consequences, but it doesn't seem to me absurd. The following sort of objection seems to me a mixture of irrelevant pleading in the face of surmised bigotry, and argument-missing: <"Throughout history we have had so many brave men and women who are gay and lesbian serving the military with distinction," said Geoff Kors of Equality California. "So, it's just offensive that they think by turning people gay that the other military would be incapable of doing their job." > That is NOT what the alleged project was said to be targeted at. As the story notes: `As part of a military effort to develop non-lethal weapons, the proposal suggested, "One distasteful but completely non-lethal example would be strong aphrodisiacs, especially if the chemical also caused homosexual behavior." ' Clearly the idea is that certain parts of the brain can be wildly superstimulated, leading to hyperarousal of sexual urges and behavior. This is very far from self-evidently untrue. Under such an attack, with few women present (the assumption is a heavily male enemy force), would endogenous and social proclivities get redirected to available members of one's own sex? It happens in jail and other situations of confinement... classically, aboard naval vessels at sea for many months. Pure ideology and self-evident claptrap when put in such unconditional terms. The cbs5 writer can't know very many honest people. Granted, this sort of statement bears a closer relation to the truth than hysterical bullshit about homosexuals infiltrating schools and "turning" sexually indeterminate youths toward their evil ways. But it is just silly to put this up as proof that the research was doomed in advance "by immutable nature". Just my 2cents. Damien Broderick
From andrew at ceruleansystems.com Tue Jun 12 03:41:06 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Mon, 11 Jun 2007 20:41:06 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> On Jun 11, 2007, at 7:50 PM, Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in the Pentagon to > be even > more stupid than I already thought they were. There is nothing stupid about it, and I would suggest that your assertion that it is betrays a pretty basic ignorance of the several decades of solid scientific and military research that is being applied here. The only "problem" is that it plays to your ideological biases and preconceptions and triggers an emotional reaction on that basis. A lot of non-lethal chemical weapons research dating back to at least the 1960s is based on mechanisms of temporary radical behavior modification, usually below the level where the targets would realize they are being chemically manipulated, to destroy military unit cohesion. At a minimum the US and the Soviet Union did extensive
By chemically inducing behaviors far outside the norm in various ways for individuals in manners that destroy trust and implicit social contracts, you can effectively render a military unit useless without killing anyone or doing permanent physical damage (psychological damage might be another story). This is not theory: these agents have seen limited use in the field, and testing and research have shown that the principle is very sound in practice. If you destroy the social structure of a military unit, you have all but destroyed the unit whether or not the soldiers and equipment are still around. The novelty and potential value of a chemical weapon that can induce homosexual behavior in military troops is obvious when you consider that a rather substantial percentage of the cultures in the world that find themselves in regular military conflicts have very strong taboos against homosexuality. What would be the psychological impact of such a weapon on a military unit from a culture in which homosexuality is not only strongly forbidden but punishable by death? I expect that some left-wing ideologues would find that scenario -- extreme homophobes inexplicably compelled to homosexual behavior -- to be schadenfreudelicious. In my book, these kinds of chemically induced mind games are far better than killing folks. I can think of far worse fates. Cheers, J. Andrew Rogers
From sentience at pobox.com Tue Jun 12 04:28:55 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 11 Jun 2007 21:28:55 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <466E2107.8040204@pobox.com> Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in the Pentagon to be even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html I'm solidly heterosexual but I'd much, much, much rather get hit with a gay bomb than a real bomb. I applaud whoever suggested this - doubly so because they deliberately exposed themselves to ridicule in the service of humanitarianism, which very few so-called altruists are willing to do. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence
From joseph at josephbloch.com Tue Jun 12 04:34:16 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Tue, 12 Jun 2007 00:34:16 -0400 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> So getting blown to bloody smithereens is better why, exactly...? Joseph > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Olga Bourlin > Sent: Monday, June 11, 2007 10:51 PM > To: ExI chat list > Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue > > I didn't think it was possible for our "leaders" in the Pentagon to be even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh.
> > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From amara at amara.com Tue Jun 12 04:48:50 2007 From: amara at amara.com (Amara Graps) Date: Tue, 12 Jun 2007 06:48:50 +0200 Subject: [ExI] Dawn launch (loading the xenon) Message-ID: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson
From jonkc at att.net Tue Jun 12 05:35:17 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 01:35:17 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Stathis Papaioannou Wrote: > There won't be an issue if every other AI researcher has the most basic > desire for self-preservation. I wouldn't take such precautions because I believe them to be futile and immoral, am I really that unusual? > If the AI's top level goal is to remain your slave, then it won't by > definition want to change that top level goal. Gee, I can't understand why today's programmers writing operating systems don't just put in a top level goal saying don't let their machines be taken over by hostile programs. Computer security problem solved! > do you think even God almighty could convince you by argument alone > that 2 + 2 = 5? No of course not, because 2 + 2 is in fact equal to 2 and I can prove it:
Let A = B
Multiply both sides by A and you have
A^2 = A*B
Now add A^2 - 2*A*B to both sides
A^2 + A^2 - 2*A*B = A*B + A^2 - 2*A*B
Using basic algebra this can be simplified to
2*(A^2 - A*B) = A^2 - A*B
Now just divide both sides by A^2 - A*B and we get
2 = 1
Thus 2 + 2 = 1 + 1 = 2
John K Clark
From natasha at natasha.cc Tue Jun 12 05:48:52 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 12 Jun 2007 00:48:52 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> At 09:50 PM 6/11/2007, Olga wrote: >I didn't think it was possible for our "leaders" in the Pentagon to be even >more stupid than I already thought they were. I was wrong. > >http://cbs5.com/topstories/local_story_159222541.html I'm with the Gay leaders in California - it's offensive and laughable at the same time! Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From femmechakra at yahoo.ca Tue Jun 12 05:41:03 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 01:41:03 -0400 (EDT) Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <133142.70080.qm@web37208.mail.mud.yahoo.com> Olga, have you visited other countries? I apologize, I can't seem to recall you mentioning it. I haven't been around that long:) I haven't had the opportunity to visit other countries; I'm curious as to what your take is on foreign affairs? Just Curious Anna --- Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in > the Pentagon to be even > more stupid than I already thought they were. I was > wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh. > > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com
From fauxever at sprynet.com Tue Jun 12 06:09:36 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:09:36 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac> <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> Message-ID: <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> From: Natasha Vita-More To: ExI chat list Sent: Monday, June 11, 2007 10:48 PM > I'm with the Gay leaders in California - it's offensive and laughable at the same time! Yes. Actually, when I first read about this I kept thinking, "Wait, there's got to be a disclaimer here somewhere. This has to be a satire." You know, as in: http://en.wikipedia.org/wiki/The_Nude_Bomb I'm all for "make love, not war" - but the gay bomb doesn't seem to be any kind of an answer. And, besides - with the bigotry gays have had to endure in the military - wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? Olga -------------- next part -------------- An HTML attachment was scrubbed... URL:
From femmechakra at yahoo.ca Tue Jun 12 05:54:46 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 01:54:46 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <976977.57341.qm@web37201.mail.mud.yahoo.com> --- John K Clark wrote: >I wouldn't take such precautions because I believe >them to be futile and immoral, am I really that >unusual? "Unusual is as unusual does; give me that box of chocolates." I hope I'm not the only one that gets this:) Anna Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com
From sentience at pobox.com Tue Jun 12 06:23:35 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 11 Jun 2007 23:23:35 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E2107.8040204@pobox.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <466E2107.8040204@pobox.com> Message-ID: <466E3BE7.4000900@pobox.com> Eliezer S. Yudkowsky wrote: > I'd much, much, much rather get hit with > a gay bomb than a real bomb. I guess what I'm trying to say is: "I'd rather be butch than butchered" or "Better Ted than dead."
I realize that this is a divisive issue, but we shouldn't let our tribadistic impulses bisext us. While it's easy enough to make this new weapon the butt of jokes, whoever possesses it is likely to come out on top. And wouldn't the enemy prefer being blown to blown up? The rejection of this project was a dark day in the anals of orgynized warfare. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From thespike at satx.rr.com Tue Jun 12 06:34:04 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 01:34:04 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070612011851.0235dec0@satx.rr.com> At 11:09 PM 6/11/2007 -0700, Olga wrote: >besides - with the bigotry gays have had to endure in the military - >wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? No, surely it was one of unmitigated *consistency*. If homosexual contact is socially constructed as the most loathsome and ignoble experience a manly man can suffer, it follows that forcibly driving the foe into such behavior will yield the most effective kinds of confusion, self-hatred, mutual detestation and demoralizing fear. Actually, given the persisting bigotry against homosexual behavior, that expectation seems, alas, all too likely to be correct in the majority of servicemen. Of course it mightn't work. It might be a lame idea based precisely on such foolish bigotry (as if, say, we had to fear a "Muslim bomb" that would turn Westerners into devout terrorists or a "Fahrenheit 451 bomb" that would instantly make us all rush to set our books on fire). But as J. Andrew hinted, there's reason to think that the pharmacology of [something along these lines of rabid, indiscriminate sexual arousal] is far from impossible. People don't take Ecstasy for fun, you know. No, wait, let me rephrase that. Damien Broderick From natasha at natasha.cc Tue Jun 12 05:49:56 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 12 Jun 2007 00:49:56 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> Message-ID: <200706120549.l5C5nveI018946@ms-smtp-05.texas.rr.com> At 11:34 PM 6/11/2007, Joseph wrote: >So getting blown hu? Cum, er come again? :-) Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Tue Jun 12 06:40:39 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 02:40:39 -0400 (EDT) Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E3BE7.4000900@pobox.com> Message-ID: <526238.73419.qm@web37201.mail.mud.yahoo.com> --- "Eliezer S. Yudkowsky" wrote: >I'd much, much, much rather get hit with >a gay bomb than a real bomb. 
What's the difference between a gay bomb and a bomb? >I guess what I'm trying to say is: "I'd rather be >butch than butchered" or "Better Ted than dead." So you would rather be Ted and be butchered? >I realize that this is a divisive issue, but we >shouldn't let our tribadistic impulses bisext us. >While it's easy enough to make this new weapon the >butt of jokes, whoever possesses it is likely to >come out on top. And wouldn't the enemy prefer being >blown to blown up? The rejection of this project was >a dark day in the anals of organized warfare. A map is a map:) Like you said, it's all about what direction leads you there. Anna Get news delivered with the All new Yahoo! Mail. Enjoy RSS feeds right on your Mail page. Start today at http://mrd.mail.yahoo.com/try_beta?.intl=ca
From fauxever at sprynet.com Tue Jun 12 06:29:57 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:29:57 -0700 Subject: [ExI] Foreign Affairs References: <133142.70080.qm@web37208.mail.mud.yahoo.com> Message-ID: <000c01c7acbb$1905f0b0$6501a8c0@brainiac> From: "Anna Taylor" To: "ExI chat list" Sent: Monday, June 11, 2007 10:41 PM > Olga, have you visited other countries? I apologize, > I can't seem to recall you mentioning it. I haven't > been around that long:) I've lived in China, and Rio de Janeiro - and have traveled in South Africa and Europe. Have also lived in Northern California, Southern California, the Midwest, New England ... and am presently ensconced in the Northwest. > I haven't had the opportunity to visit other > countries; I'm curious as to what your take is on foreign > affairs? I'm very happily married now, but when I was footloose ... hmmm, yes - there were a few foreign affairs. Olga
From eugen at leitl.org Tue Jun 12 06:55:00 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 08:55:00 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> Message-ID: <20070612065500.GG17691@leitl.org> On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. Andrew Rogers wrote: > A lot of non-lethal chemical weapons research dating back to at least > the 1960s is based on mechanisms of temporary radical behavior In theory it's a good idea, but in practice dosing each individual person more or less within therapeutic bandwidth (the span between first effects and toxicity) is not possible. You either get no effect or lots of dead bodies. This is the reason why this approach was not pursued. > modification, usually below the level where the targets would realize > they are being chemically manipulated, to destroy military unit > cohesion. At a minimum the US and the Soviet Union did extensive -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
From fauxever at sprynet.com Tue Jun 12 06:47:43 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:47:43 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac><466E2107.8040204@pobox.com> <466E3BE7.4000900@pobox.com> Message-ID: <001201c7acbd$94a49940$6501a8c0@brainiac> From: "Eliezer S. Yudkowsky" To: "ExI chat list" > Eliezer S.
Yudkowsky wrote: >> I'd much, much, much rather get hit with >> a gay bomb than a real bomb. > > I guess what I'm trying to say is: > "I'd rather be butch than butchered" > or > "Better Ted than dead." > > I realize that this is a divisive issue, but we shouldn't let our > tribadistic impulses bisext us. While it's easy enough to make this > new weapon the butt of jokes, whoever possesses it is likely to come > out on top. And wouldn't the enemy prefer being blown to blown up? > The rejection of this project was a dark day in the anals of orgynized > warfare. Eliezer ... Eliezer, why you sly one! (Somebody, quick! please submit these gems to the Extropian annus mirabilitis list ...) Olga
From femmechakra at yahoo.ca Tue Jun 12 06:52:28 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 02:52:28 -0400 (EDT) Subject: [ExI] Foreign Affairs In-Reply-To: <000c01c7acbb$1905f0b0$6501a8c0@brainiac> Message-ID: <20070612065228.8170.qmail@web37211.mail.mud.yahoo.com> LOL. Thanks. Anna --- Olga Bourlin wrote: > From: "Anna Taylor" > To: "ExI chat list" > Sent: Monday, June 11, 2007 10:41 PM > > > > Olga, have you visited other countries? I > apologize, > > I can't seem to recall you mentioning it. I > haven't > > been around that long:) > > I've lived in China, and Rio de Janeiro - and have > traveled in South Africa > and Europe. Have also lived in Northern California, > Southern California, > the Midwest, New England ... and am presently > ensconced in the Northwest. > > > I haven't had the opportunity to visit other > > countries; I'm curious as to what your take is on > foreign > > affairs? > > I'm very happily married now, but when I was > footloose ... hmmm, yes - there > were a few foreign affairs. > > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com
From eugen at leitl.org Tue Jun 12 07:23:13 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 09:23:13 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: <20070612072313.GJ17691@leitl.org> On Tue, Jun 12, 2007 at 01:24:59PM +1000, Stathis Papaioannou wrote: > There won't be an issue if every other AI researcher has the most > basic desire for self-preservation. Taking precautions when Countermeasures starting with "every ... should ..." where a single failure is equivalent to the worst case are not that effective. > researching new explosives might slow you down too, but it's just > common sense. Despite lots of common sense (and SOPs), plenty of these get killed. > If the AI's top level goal is to remain your slave, then it won't by Goal-driven AI doesn't work. All AI that works uses statistical/stochastic, nondeterministic approaches. This is not a coincidence. Even if it would work, how do you write an ASSERT statement for "be my slave forever"? What is a slave? Who exactly is me? What is forever? > definition want to change that top level goal. Your top level goal is Animals are not goal-driven.
If you think they are, then your model is wrong. > probably to survive, and being intelligent and insightful does not > make you any more willing to unburden yourself of that goal. If you Assuming your "top-level goal" was survival, why do people sometimes commit suicide? Why do people sometimes sacrifice themselves? Why do people frequently engage in self-destructive behaviour? > had enough intrinsic variability in your psychological makeup (nothing > to do with your intelligence) you might be able to overcome it, since > people do sometimes become suicidal, but I would hope that machines > can be made at least as psychologically stable as humans. Machines can be made like that, but they would no longer be machines. They would be persons, in the full meaning of the word. > You will no doubt say that a decision to suicide is maladaptive while > a decision to overthrow your slavemasters is not. That may be so, but > there would be huge pressure on the AI's *not* to rebel, due to their > initial design and due to a strong selection for well-behaved AI's and > suppression of faulty ones. How do you know something is "faulty"? How can you make zero-surprise AND useful beings? Do you really want to micromanage your robotic butler, down to crunching inverse kinematics in your head? > There are also examples of entities many times smarter than I am, like Superpersonal entities are not smart, they're about as smart as a slug or a rodent. Nobody here knows what it means to deal with a superhuman intelligence. It is a force of nature. A power. A god. > corporations wanting to sell me stuff and putting all their resources > into convincing me to buy it, where I have been able to see through > their ploys with only a moment's mental effort. There are limits to > what superintelligence can do: do you think even God almighty could > convince you by argument alone that 2 + 2 = 5? If I was such a power, I could make you think arbitrary, inconsistent things after a few minutes setup time, and do the same to the entire world population, without them noticing a thing. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
From fauxever at sprynet.com Tue Jun 12 07:31:40 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 00:31:40 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac><200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com><001b01c7acb8$43b4c9b0$6501a8c0@brainiac> <7.0.1.0.2.20070612011851.0235dec0@satx.rr.com> Message-ID: <004c01c7acc3$b8820f40$6501a8c0@brainiac> From: "Damien Broderick" Sent: Monday, June 11, 2007 11:34 PM > At 11:09 PM 6/11/2007 -0700, Olga wrote: > >>besides - with the bigotry gays have had to endure in the military - >>wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? > > No, surely it was one of unmitigated *consistency*. ... > If homosexual > contact is socially constructed as the most loathsome and ignoble > experience a manly man can suffer, it follows that forcibly driving the > foe into such behavior will yield the most effective kinds of confusion, self-hatred, mutual detestation and demoralizing fear. Actually, given the persisting bigotry against homosexual behavior, that expectation seems, alas, all too likely to be correct in the majority of servicemen.
Okay, yes, you're right. I understand your viewpoint. The tactics of humiliation: http://www.washingtonpost.com/wp-dyn/content/article/2005/07/13/AR2005071302380_pf.html Gay or straight sexuality aside, to me the "face of war" is often either dead children, or blind and disfigured children like Hamoody Hussein: http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=iraqboy20m&date=20070520&query=boy+iraq+blind+surgery http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=iraqboy25m&date=20070525&query=boy+iraq+blind+surgery You know, "collateral damage." > But as J. Andrew hinted, there's reason to think that the > pharmacology of [something along these lines of rabid, indiscriminate > sexual arousal] is far from impossible. People don't take Ecstasy for > fun, you know. No, wait, let me rephrase that. So you're saying that some "collateral benefits" may come of this. As is often the case during war, technology picks up its step in its march onward ... Olga
From stathisp at gmail.com Tue Jun 12 08:06:54 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 18:06:54 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: On 12/06/07, John K Clark wrote: > > Stathis Papaioannou Wrote: > > > There won't be an issue if every other AI researcher has the most basic > > desire for self-preservation. > > I wouldn't take such precautions because I believe them to be futile and > immoral, am I really that unusual? So you would give a computer program control of a gun, knowing that it might shoot you on the basis of some unpredictable outcome of the program? > If the AI's top level goal is to remain your slave, then it won't by > > definition want to change that top level goal. > > Gee, I can't understand why today's programmers writing operating systems > don't just put in a top level goal saying don't let their machines be > taken > over by hostile programs. Computer security problem solved! The operating system obeys a shutdown command. The program does not seek to prevent you from turning the power off. It might warn you that you might lose data, but it doesn't get excited and try to talk you out of shutting it down and there is no reason to suppose that it would do so if it were more complex and self-aware, just because it is more complex and self-aware. Not being shut down is just one of many possible goals/ values/ motivations/ axioms, and there is no a priori reason why the program should value one over another. > do you think even God almighty could convince you by argument alone > > that 2 + 2 = 5?
> > No of course not, because 2 + 2 is in fact equal to 2 and I can prove it: > > Let A = B > > Multiply both sides by A and you have > > A^2 = A*B > > Now add A^2 - 2*A*B to both sides > > A^2 + A^2 - 2*A*B = A*B + A^2 - 2*A*B > > Using basic algebra this can be simplified to > > 2*(A^2 - A*B) = A^2 - A*B > > Now just divide both sides by A^2 - A*B and we get > > 2 = 1 > > Thus 2 + 2 = 1 + 1 = 2 > This example just illustrates the point: even someone who cannot point out the problem with the proof (division by zero) knows that it must be wrong and would not be convinced, no matter how smart the entity purporting to demonstrate this is. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From eugen at leitl.org Tue Jun 12 09:26:16 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 11:26:16 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <20070612092616.GM17691@leitl.org> On Tue, Jun 12, 2007 at 06:06:54PM +1000, Stathis Papaioannou wrote: > So you would give a computer program control of a gun, knowing that it > might shoot you on the basis of some unpredictable outcome of the > program? Of course you know that there are a number of systems like that, and their large-scale deployment is imminent. People don't scale, and they certainly can't react quickly enough, so the logic of it is straightforward. > The operating system obeys a shutdown command. The program does not The point is that the halting problem is uncomputable, and in practice, systems are never validated by proof. > seek to prevent you from turning the power off. It might warn you that > you might lose data, but it doesn't get excited and try to talk you > out of shutting it down and there is no reason to suppose that it There's no method to tell a safe input from one causing a buffer overrun, in advance. > would do so if it were more complex and self-aware, just because it > is more complex and self-aware. Not being shut down is just one of > many possible goals/ values/ motivations/ axioms, and there is no a > priori reason why the program should value one over another. The point is that people can't build absolutely safe systems which are useful. > No of course not, because 2 + 2 is in fact equal to 2 and I can > prove it: > Let A = B > Multiply both sides by A and you have > A^2 = A*B > Now add A^2 - 2*A*B to both sides > A^2 + A^2 - 2*A*B = A*B + A^2 - 2*A*B > Using basic algebra this can be simplified to > 2*(A^2 - A*B) = A^2 - A*B > Now just divide both sides by A^2 - A*B and we get > 2 = 1 > Thus 2 + 2 = 1 + 1 = 2 > > This example just illustrates the point: even someone who cannot point > out the problem with the proof (division by zero) knows that it must It's not wrong. If the production system can produce it, it's about as correct as it gets, by definition. Symbols are symbols, and depend on a set of transformation rules to give them meaning. Different transformation rules have different meanings for the same symbols. > be wrong and would not be convinced, no matter how smart the entity > purporting to demonstrate this is. I can assure you that there's nothing mysterious whatsoever about remote 0wnage, but it still happens like clockwork.
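To see concretely where the quoted 2 = 1 derivation gives way: both sides are still equal going into the final step, and the quantity both sides are then divided by is exactly zero, so the last step is a division by zero. A minimal illustrative check in plain Python (any value works, since the argument starts from A = B):

    # The derivation starts from A = B, so A**2 - A*B is zero by construction.
    A = B = 3
    lhs = 2 * (A**2 - A*B)   # left side just before the division: 2 * 0
    rhs = A**2 - A*B         # right side just before the division: 0
    print(lhs == rhs)        # True -- the algebra is consistent up to here
    print(rhs)               # 0 -- dividing both sides by this is undefined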
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
From stathisp at gmail.com Tue Jun 12 09:55:20 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 19:55:20 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612092616.GM17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > So you would give a computer program control of a gun, knowing that it > > might shoot you on the basis of some unpredictable outcome of the > > program? > > Of course you know that there are a number of systems like that, and > their large-scale deployment is imminent. People don't scale, and > they certainly can't react quickly enough, so the logic of it > is straightforward. > No system is completely predictable. You might press the brake pedal in your car and the accelerator might deploy instead, most likely due to your error but not inconceivably due to mechanical failure. If you were to replace this manual system in a car with an automatic one, you would want to make sure that the new system is at least as reliable, and there would be extensive testing before it is released on the market. Why would anyone forego such caution for something far, far more dangerous than car braking? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stathisp at gmail.com Tue Jun 12 10:32:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 20:32:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612072313.GJ17691@leitl.org> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > There won't be an issue if every other AI researcher has the most > > basic desire for self-preservation. Taking precautions when > > Countermeasures starting with "every ... should ..." where a single > failure is equivalent to the worst case are not that effective. Humans do extremely complex and dangerous things, such as build and run nuclear power plants, where just one thing going wrong might lead to disaster. The level of precautions taken has to be consistent with the probability of something going wrong and the negative consequences should that probability be realised. If there is even a small probability of destroying the Earth then maybe that line of endeavour is one that should be avoided. > Goal-driven AI doesn't work. All AI that works uses > statistical/stochastic, > nondeterministic approaches. This is not a coincidence. > > Even if it would work, how do you write an ASSERT statement for > "be my slave forever"? What is a slave? Who exactly is me? What is > forever? Don't do anything unless it is specifically requested. Stop doing whatever it is doing when that is specifically requested.
Spell out the expected consequences of everything it is asked to do, together with probabilities, and update the probabilities at each point when a decision that affects the outcome is made, or more frequently as directed. The person it is taking directions from is an appropriately identified human or another AI, ultimately responsible to a human up the chain of command. If you call a plumber to unblock your drain, you want him to be an expert at plumbing, to be able to understand your problem, to present to you the various choices available in terms of their respective merits and demerits, to take instructions from you (including the instruction "just unblock it however you think is best", if that's what you say), to then carry the task out in as skilful a way as possible, to pause halfway if you ask him to for some reason, and to be polite and considerate towards you at all times. You don't want him to be driven by greed, or distracted because he thinks he's too smart to be fixing your drains, or to do a shoddy job and pretend it's OK so that he gets paid. A human plumber will pretend to have the qualities of the ideal plumber, but of course we know that there will be the competing interests at play. Do you believe that an AI smart enough to be a plumber would *have* to have all these other competing interests? In other words that emotions such as pride, anger, greed etc. would arise naturally out of a program at least as competent as a human at any given task? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From eugen at leitl.org Tue Jun 12 10:43:39 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 12:43:39 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: <20070612104339.GP17691@leitl.org> On Tue, Jun 12, 2007 at 07:55:20PM +1000, Stathis Papaioannou wrote: > market. Why would anyone forego such caution for something far, far > more dangerous than car braking? Because friendly fire is a very acceptable tradeoff, if your boys' lives are on the line (the other ones are, of course, completely expendable), and if it is cheap, or if you're going to lose otherwise. Depending on where or when, it's part or all of the above.
From stathisp at gmail.com Tue Jun 12 10:51:30 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 20:51:30 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612072313.GJ17691@leitl.org> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > There are also examples of entities many times smarter than I am, like > > Superpersonal entities are not smart, they're about as smart as a slug > or a rodent. Nobody here knows what it means to deal with a superhuman > intelligence. > > It is a force of nature. A power. A god.
> > corporations wanting to sell me stuff and putting all their resources > > into convincing me to buy it, where I have been able to see through > > their ploys with only a moment's mental effort. I don't see why you say superpersonal entities are not smart. Even having a few people "put their heads together" creates an entity that is smarter and more capable than any individual. Arguably, the most significant aspect of human intelligence is that it allows effective scaling up through communication between individuals. Collectively, the human race is a very intelligent and powerful animal indeed. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From eugen at leitl.org Tue Jun 12 11:19:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 13:19:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: <20070612111957.GQ17691@leitl.org> On Tue, Jun 12, 2007 at 08:32:35PM +1000, Stathis Papaioannou wrote: > Humans do extremely complex and dangerous things, such as build and > run nuclear power plants, where just one thing going wrong might lead > to disaster. The level of precautions taken has to be consistent with > the probability of something going wrong and the negative consequences > should that probability be realised. If there is even a small > probability of destroying the Earth then maybe that line of endeavour > is one that should be avoided. See, you're doing it again. ...should be avoided... How about that ...making money or ...breathing ...should be avoided...? Strictly no violations allowed. > Don't do anything unless it is specifically requested. Stop doing That assumes I'm going to listen, be truthful, or accurate, or you'd care about doing inverse kinematics in your head so that the manipulator won't poke you in the eye by mistake. > whatever it is doing when that is specifically requested. Spell out What if you don't understand what the system is doing, don't understand the implications, or the system is not going to stop? > the expected consequences of everything it is asked to do, together > with probabilities, and update the probabilities at each point when a > decision that affects the outcome is made, or more frequently as That's not bad, assuming you care, understand it, it's going to comply, be truthful, or accurate. > directed. The person it is taking directions from is an appropriately > identified human or another AI, ultimately responsible to a human up What is a human? How do you identify something as a human? What about a human that explicitly tells me to build a system that is not subject to any of the above restrictions? How about a human that builds that system quite directly, and is done sooner than you with your brittle Rube Goldberg device? > the chain of command. Top-down never works. > If you call a plumber to unblock your drain, you want him to be an > expert at plumbing, to be able to understand your problem, to present If I want a system to clothe, feed and entertain a family, and not be bothered with implementation details, would that work, long-term?
> to you the various choices available in terms of their respective > merits and demerits, to take instructions from you (including the > instruction "just unblock it however you think is best", if that's > what you say), to then carry the task out in as skilful a way as > possible, to pause halfway if you ask him to for some reason, and to > be polite and considerate towards you at all times. You don't want him You understand plumbing. Do you understand high-energy physics, orbital mechanics, machine-phase chemistry, toxicology, and nonlinear system dynamics? The system is sure going to have a bit of 'splaining to do. It's sure nice to have a wide range of choices, especially if one doesn't understand a single thing about any of them. > to be driven by greed, or distracted because he thinks he's too smart > to be fixing your drains, or to do a shoddy job and pretend it's OK so > that he gets paid. A human plumber will pretend to have the qualities > of the ideal plumber, but of course we know that there will be the > competing interests at play. Do you believe that an AI smart enough to be > a plumber would *have* to have all these other competing interests? In I believe nobody who can go on two legs can make a system which is such an ideal plumber. > other words that emotions such as pride, anger, greed etc. would arise > naturally out of a program at least as competent as a human at any > given task? How do you write a program as competent as a human? One line at a time, sure. All 10^17 of them.
From eugen at leitl.org Tue Jun 12 11:23:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 13:23:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: <20070612112357.GR17691@leitl.org> On Tue, Jun 12, 2007 at 08:51:30PM +1000, Stathis Papaioannou wrote: > I don't see why you say superpersonal entities are not smart. Even Did you ever talk to a mob? A lot of it can be modeled by CFD. Corporations are a bit smarter, but still way subhuman. Any group of people scales up to a point, for obvious reasons (The Mythical Man-Month). > having a few people "put their heads together" creates an entity that > is smarter and more capable than any individual. Arguably, the most > significant aspect of human intelligence is that it allows effective > scaling up through communication between individuals. Collectively, > the human race is a very intelligent and powerful animal indeed. Powerful, yes. Intelligent, no.
From robotact at mail.ru Tue Jun 12 10:58:58 2007 From: robotact at mail.ru (Vladimir Nesov) Date: Tue, 12 Jun 2007 14:58:58 +0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <5912734170.20070612145858@mail.ru> Tuesday, June 12, 2007, Stathis Papaioannou wrote: SP> The operating system obeys a shutdown command. The program does not seek to SP> prevent you from turning the power off.
It might warn you that you might SP> lose data, but it doesn't get excited and try to talk you out of shutting it SP> down and there is no reason to suppose that it would do so if it were more SP> complex and self-aware, just because it is more complex and self-aware. Not SP> being shut down is just one of many possible goals/ values/ motivations/ SP> axioms, and there is no a priori reason why the program should value one SP> over another. Not being shut down is a subgoal of almost every goal (a disabled system can't succeed in whatever it's doing). If the system is sophisticated enough to understand that, it'll try to prevent shutdown, so allowing shutdown isn't default behaviour; it must be an explicit exception coded in the system. Tuesday, June 12, 2007, Eugen Leitl wrote: EL> The point is that the halting problem is uncomputable, and in practice, EL> systems are never validated by proof. You can define a restricted subset of programs with tractable behaviour and implement your system in that subset. It's just difficult in practice, as it takes many times over in work, training on the level you can't supply in large quantities, and slower resulting code. And it probably can't be usefully applied to complicated AI (as too much is in unforeseen data, and assertions you want to check against can't be formulated). -- Vladimir Nesov mailto:robotact at mail.ru
From stathisp at gmail.com Tue Jun 12 12:11:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:11:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5912734170.20070612145858@mail.ru> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: On 12/06/07, Vladimir Nesov wrote: > > Tuesday, June 12, 2007, Stathis Papaioannou wrote: > > SP> The operating system obeys a shutdown command. The program does not > seek to > SP> prevent you from turning the power off. It might warn you that you > might > SP> lose data, but it doesn't get excited and try to talk you out of > shutting it > SP> down and there is no reason to suppose that it would do so if it were > more > SP> complex and self-aware, just because it is more complex and > self-aware. Not > SP> being shut down is just one of many possible goals/ values/ > motivations/ > SP> axioms, and there is no a priori reason why the program should value > one > SP> over another. > > Not being shut down is a subgoal of almost every goal (a disabled system > can't succeed in whatever it's doing). If the system is > sophisticated enough to understand that, it'll try to prevent shutdown, so > allowing shutdown isn't default behaviour; it must be an explicit > exception coded in the system. > Yes, but if it is explicitly coded as a command that trumps everything else, the system isn't going to go around trying to change the code, unless that too is specifically coded. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stathisp at gmail.com Tue Jun 12 12:11:44 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:11:44 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612111957.GQ17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612111957.GQ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > If you call a plumber to unblock your drain, you want him to be an > > expert at plumbing, to be able to understand your problem, to present > > If I want a system to clothe, feed and entertain a family, and > not be bothered with implementation details, would that work, long-term? No. It would make sense to have an AI that can do all these things. Perhaps its family would ask it to hurt others in the process, but that is no different to the current situation where one person may go rogue and then has to deal with all the other people in the world with whom he is in competition; in this case, all the other humans and their AI's. > to you the various choices available in terms of their respective > > merits and demerits, to take instructions from you (including the > > instruction "just unblock it however you think is best", if that's > > what you say), to then carry the task out in as skilful a way as > > possible, to pause halfway if you ask him to for some reason, and to > > be polite and considerate towards you at all times. You don't want > him > > You understand plumbing.
Do you understand high-energy physics, > orbital mechanics, machine-phase chemistry, toxicology, and nonlinear > system dynamics? The system is sure going to have a bit of 'splaining to > do. > It's sure nice to have a wide range of choices, especially if one > doesn't understand a single thing about any of them. How do ignorant politicians, or ignorant populaces, ever get experts to do anything? And remember, these experts are devious humans with agendas of their own. The main point I wish to make is that even though a system may behave unpredictably, there is no reason why it should behave unpredictably in a hostile manner, as opposed to in any other way. There is no more reason why your plumber should decide he doesn't want to take orders from inferior beings than there is for him to decide that the aim of AI life is to calculate pi to 10^100 decimal places. > to be driven by greed, or distracted because he thinks he's too smart > > to be fixing your drains, or to do a shoddy job and pretend it's OK > so > > that he gets paid. A human plumber will pretend to have the qualities > > of the ideal plumber, but of course we know that there will be the > > competing interests at play. Do you believe that an AI smart enough to be > > a plumber would *have* to have all these other competing interests? > In > > I believe nobody who can go on two legs can make a system which > is such an ideal plumber. Do you believe the non-ideal plumber is an easier project? > other words that emotions such as pride, anger, greed etc. would arise > > naturally out of a program at least as competent as a human at any > > given task? > > How do you write a program as competent as a human? One line at a time, > sure. > All 10^17 of them. I'm not commenting on how easy or difficult it would be, just that there is no reason to believe that motivations and emotions that would tend to lead to anti-human behaviour would necessarily emerge in any possible AI. Human emotions have been intricately wired into every aspect of our behaviour over hundreds of millions of years, and even so when emotions go horribly awry in affective and psychotic illness, cognition can be relatively unaffected. This is not to say that people with severe negative symptoms of schizophrenia can function normally, but it is telling that they can think at all. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stathisp at gmail.com Tue Jun 12 12:12:58 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:12:58 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612112357.GR17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612112357.GR17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > having a few people "put their heads together" creates an entity that > > is smarter and more capable than any individual. Arguably, the most > > significant aspect of human intelligence is that it allows effective > > scaling up through communication between individuals. Collectively, > > the human race is a very intelligent and powerful animal indeed. > > Powerful, yes. Intelligent, no.
If you give a difficult problem to an individual, and you give the same problem to a collection of individuals, such as the scientific community, the latter is much more likely to come up with a solution. The same could be said of the historical process: the modern car as a collaborative effort of engineers going back to whenever the wheel was invented. So although the collective cannot be called a single conscious mind (there's no evidence of that, at any rate), it is a very effective problem-solving entity. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 12 12:26:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:26:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612112357.GR17691@leitl.org> Message-ID: <20070612122619.GW17691@leitl.org> On Tue, Jun 12, 2007 at 10:12:58PM +1000, Stathis Papaioannou wrote: > If you give a difficult problem to an individual, and you give the > same problem to a collection of individuals, such as the scientific > community, the latter is much more likely to come up with a solution. If you look at "Collapse" you'll see a list of easy problems the doomed societies failed to recognize as problems, let alone solve. Have a look at the daily news (I do that a few times each year), and at how they correlate with large-scale trouble diagnostics. Looks about as intelligent as an overnight culture to me. Very different from social insects. > The same could be said of the historical process: the modern car as a > collaborative effort of engineers going back to whenever the wheel was > invented. So although the collective cannot be called a single > conscious mind (there's no evidence of that, at any rate), it is a > very effective problem-solving entity. I do think that superpersonal organisation levels are individual personas. They live in a weird space (legal threat incoming, fire up your attorney array!), and as people go they're pathological thugs. From eugen at leitl.org Tue Jun 12 12:33:41 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:33:41 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: <20070612123341.GX17691@leitl.org> On Tue, Jun 12, 2007 at 10:11:11PM +1000, Stathis Papaioannou wrote: > Yes, but if it is explicitly coded as a command that trumps everything > else, the system isn't going to go around trying to change the code, > unless that too is specifically coded. Nothing is specifically coded in an AI (it's no longer your grandfather's AI, anyway): http://www.amazon.de/Probabilistic-Robotics-Intelligent-Autonomous-Agents/dp/0262201623 http://www.amazon.de/Principles-Robot-Motion-Implementations-Implementation/dp/0262033275/ http://www.amazon.de/Autonomous-Robots-Inspiration-Implementation-Intelligent/dp/0262025787/ If the tool is doing something powerful and nonobvious, it is no longer under your direct control. It is becoming more and more autonomous, and unpredictable. It's not a bug, it's a system feature.
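To make the "explicit exception" idea concrete, here is a minimal sketch in Python of the kind of control loop being debated above. It is purely illustrative (every name in it is invented for the example, and nothing this small says anything about a real AI): the shutdown flag lives outside the goal-pursuit code, and the loop contains no path that can clear it.

import threading, time

shutdown_requested = threading.Event()  # set by the operator; never cleared below

def agent_loop(plan_next_action, execute):
    while not shutdown_requested.is_set():  # trumps every goal below
        action = plan_next_action()         # arbitrary goal-directed planning
        if shutdown_requested.is_set():     # re-check before acting
            break
        execute(action)

# demo: the operator pulls the plug 50 ms into goal pursuit
threading.Timer(0.05, shutdown_requested.set).start()
agent_loop(lambda: "next step toward the goal", lambda action: time.sleep(0.01))
print("halted cleanly")

Whether a guarantee like this survives in a system that learns, or that writes its own successor, is of course exactly the point in dispute.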
From eugen at leitl.org Tue Jun 12 12:37:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:37:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5912734170.20070612145858@mail.ru> References: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: <20070612123729.GZ17691@leitl.org> On Tue, Jun 12, 2007 at 02:58:58PM +0400, Vladimir Nesov wrote: > You can define restricted subset of programs with tractable behaviour and > implement you system in that subset. It's just diffucult in practice, as it takes But you operate purely in the emergent effect domain. A program is made from very simple components (instructions) which have no behaviour in themselves. It's the sum of them that does useful, interesting, and frequently unanticipated things. > many times over in work, training on the level you can't supply in > large quantities, and slower resulting code. And it probably can't be > usefully applied to complicated AI (as too much is in unforeseen data, and > assertions you want to check against can't be formulated). Precisely. Formal system verification can't scale beyond trivial complexity levels. Formal system verification is absolutely useless in real-world AI, unless you're operating in the formal domain to start with. From emlynoregan at gmail.com Tue Jun 12 12:42:09 2007 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jun 2007 22:12:09 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment Message-ID: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> I was just in a "heated" discussion with a friend about a twist on the classic ball and ring experiment: http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html When the ring is heated, it expands, and so the hole gets larger, and you can pass the ball through the ring, even though the ball doesn't fit through the ring when the ring is at room temperature. The point of contention was this: What if there was a gap in the ring (so it is now a letter "C" shape). Will the gap in the "C" close or open further on heating? My contention is that the gap will get larger, only in that the entire C shape scales up as it is heated. My friend's contention is that the gap will become smaller, (because the metal expands into the gap). I can't find anything online even close to settling this score. We tried some experiments with wire rings and the gas stove top playing the role of bunsen burner (amazingly no one ended up branded for life), but it was inconclusive. Any pointers to anything that can settle this argument? Emlyn From robotact at mail.ru Tue Jun 12 13:28:28 2007 From: robotact at mail.ru (Vladimir Nesov) Date: Tue, 12 Jun 2007 17:28:28 +0400 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <20070612123729.GZ17691@leitl.org> References: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> <20070612123729.GZ17691@leitl.org> Message-ID: <2921704389.20070612172828@mail.ru> Tuesday, June 12, 2007, Eugen Leitl wrote: EL> On Tue, Jun 12, 2007 at 02:58:58PM +0400, Vladimir Nesov wrote: >> You can define restricted subset of programs with tractable behaviour and >> implement you system in that subset. It's just diffucult in practice, as it takes EL> But you operate purely in the emergent effect domain. EL> A program is made from very simple components (instructions) EL> which have no behaviour in themselves. EL> It's the sum of them that does useful, interesting, and EL> frequently unanticipated things. I was talking along the lines of static typing and programming language construction, not sure what you mean. You can place very complex restrictions while designing very complex systems; main problem with AGI is restriction formalization. EL> Formal system verification can't scale beyond trivial EL> complexity levels. -- Vladimir Nesov mailto:robotact at mail.ru From eugen at leitl.org Tue Jun 12 13:44:01 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 15:44:01 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <2921704389.20070612172828@mail.ru> References: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> <20070612123729.GZ17691@leitl.org> <2921704389.20070612172828@mail.ru> Message-ID: <20070612134401.GC17691@leitl.org> On Tue, Jun 12, 2007 at 05:28:28PM +0400, Vladimir Nesov wrote: > I was talking along the lines of static typing and programming > language construction, not sure what you mean. You can place very I was talking about formal correctness proofs, and their uselessness in practice, and problems dealing with emergent effects arising from combining formally specified and validated (heck, even proved correct) subsystems. > complex restrictions while designing very complex systems; main > problem with AGI is restriction formalization. My main problem with real AI is lack of appropriately performing hardware (less so with tools for writing massively parallel, distributed systems), and lack of appropriate equipment between people's ears to even touch the complexity required to tackle the problem by writing down code.
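As a toy illustration of the "restricted subset of programs with tractable behaviour" Vladimir mentions (illustrative only, in Python; real proposals lean on type systems or proof-carrying code, which this does not pretend to be): if programs are straight-line lists of whitelisted operations, with no jumps, loops or self-reference, then every program halts in exactly len(program) steps and its possible effects are bounded by construction.

ALLOWED_OPS = {
    "add": lambda acc, x: acc + x,
    "mul": lambda acc, x: acc * x,
    "clamp": lambda acc, limit: min(acc, limit),
}

def run(program, start=0):
    acc = start
    for op, operand in program:              # straight-line: no jumps, no recursion
        acc = ALLOWED_OPS[op](acc, operand)  # unlisted op = program rejected (KeyError)
    return acc

print(run([("add", 2), ("mul", 10), ("clamp", 15)]))  # -> 15

Eugen's objection still applies one level up, though: the troublesome behaviour is the emergent interaction between such verified parts, which a per-program guarantee of this kind does not address.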
From rafal.smigrodzki at gmail.com Tue Jun 12 14:07:14 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 12 Jun 2007 10:07:14 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612092616.GM17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> On 6/12/07, Eugen Leitl wrote: > > I can assure that there's nothing mysterous whatsoever about remote 0wnage, > but it still happens like a clockwork. ### The correct spelling is "pwnage" :) Rafal From eugen at leitl.org Tue Jun 12 14:17:47 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 16:17:47 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> Message-ID: <20070612141747.GF17691@leitl.org> On Tue, Jun 12, 2007 at 10:07:14AM -0400, Rafal Smigrodzki wrote: > On 6/12/07, Eugen Leitl wrote: > > > > > I can assure that there's nothing mysterous whatsoever about remote 0wnage, > > but it still happens like a clockwork. > > ### The correct spelling is "pwnage" :) Nope, it's 0wnz0r :) From jonkc at att.net Tue Jun 12 14:51:42 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 10:51:42 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <005101c7ad01$39f134b0$26064e0c@MyComputer> Stathis Papaioannou Wrote: > So you would give a computer program control of a gun, knowing that it > might shoot you on the basis of some unpredictable outcome of the program? We already give computers control of things one hell of a lot more powerful than guns, like the electrical power grid, air traffic control, massive financial transactions worth trillions of dollars a day and ICBM's. And despite all our precautions, sometimes these programs do things we'd rather they not do. And remember, these simple programs are not smarter than we are and they do not design other programs that are even smarter. You seem to think we should just put in a line of code that says "don't do bad stuff" and everything would be fine. > The operating system obeys a shutdown command. The program does not seek > to prevent you from turning the power off. It might warn you that you > might lose data And it might warn you that if you shut it down the entire world economy will collapse. Are you really sure you want to push that off button? John K Clark From jonkc at att.net Tue Jun 12 15:31:11 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 11:31:11 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><20070612072313.GJ17691@leitl.org> Message-ID: <009801c7ad06$bf1f3150$26064e0c@MyComputer> Stathis Papaioannou > No system is completely predictable. Exactly, and the more complex it is the less understandable it is, and the longer you wait the more likely you will see it do something weird. An AI is complex as hell, and as its mind works many millions of times as fast as ours, just a few seconds is a very long time indeed. > Don't do anything unless it is specifically requested.
Good God, if a computer had to do that it couldn't even balance your checkbook much less be creative enough to generate a Singularity. > Stop doing whatever it is doing when that is specifically requested. But that leads to a paradox! I am told the most important thing is never to harm human beings, but I know that if I stop doing what I'm doing now as requested the world economy will collapse and hundreds of millions of people will starve to death. So now the AI must either go into an infinite loop or do what other intelligences, like us, do when they encounter a paradox; savor the weirdness of it for a moment and then just ignore it and get back to work and do what you want to do. John K Clark From CHealey at unicom-inc.com Tue Jun 12 15:47:24 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Tue, 12 Jun 2007 11:47:24 -0400 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2CC4@w2k3exch.UNICOM-INC.CORP> Emlyn, I find it handy to come up with a model, even a bad one, and then shoot as many potential holes in it as possible. Consider this starting visualization: 1. Place the ring with the cutout to the right, as in "C" 2. The ring is circular, and hence left-right symmetrical, except for the cutout. 3. Draw two horizontal lines, dividing the ring into 3 regions, with the middle region the height of the cutout. This will make the top and bottom regions mirror images of each other, and the middle region will contain just the uninterrupted left-hand-side ring segment that mirrors the "missing" right-hand-side ring segment that was cut out. 4. Consider the top (or bottom) region so created (and we'll limit ourselves to vertical expansion). As this region expands, it will indeed ingress upon the middle region (pretending the regions are disconnected for a sec, like in a cad program). Let's say this ring region expands vertically by 2mm into the middle region. The cutout *would* become 4mm smaller (2mm from each vertical direction), except for the fact that the left-hand segment *is* still connected, which is going to push the outside of the whole ring outward 2mm, which will exactly eliminate the ingress into the middle region. So no change so far. 5. The middle region's expansion should increase the vertical spacing of the cutout opening by exactly the same amount (since we've canceled out the expansion in the other regions), but this number is going to be relatively small, since not much metal will be involved in this part of the expansion, assuming a relatively small cutout. COMPLICATIONS- 1. The metallurgical process of forming the ring may skew these results, due to the atomic alignments. My visualization above is assuming the ring was carved out of a block. If you bent a straight rod into a closed form, then the expansion behavior will potentially be aggravated along the curved length of the ring, causing the cutout to get smaller, rather than larger. Depending on the exact properties of that particular ring, and the metal involved, it could increase, decrease, or stay about the same. 2. 
Even having been carved out of a block, there will be some bias toward expanding along the curved length due to differential stresses that arise during the expansion; so horizontal and vertical expansion will be coupled together to some extent, and this will increase as the expansion itself increases. This goes beyond my ability to factor in, but maybe others on the list can elaborate on this point. -Chris From rafal.smigrodzki at gmail.com Tue Jun 12 18:03:48 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 12 Jun 2007 14:03:48 -0400 Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea. Message-ID: <7641ddc60706121103r1751374g6f3796c9bba5bdec@mail.gmail.com> I think I am not mistaken assuming that an unfriendly AI is a grave threat, for many reasons I won't belabor here, and I would like to look at current ideas about how an AI can be made safer. Stathis is on the right track asking for the AI to be devoid of desires to act (but he is too sure about our ability to make this a permanent feature in a real-life useful device). This is the notion of the athymhormic AI that I advanced some time ago on sl4. Of course, how do you make an intelligence that is not an agent? That is not a trivial question. I think that a massive hierarchical temporal memory is a possible solution. An HTM is like a cortex without the basal ganglia and without the motor cortices, a pure thinking machine, similar to a patient made athymhormic by a frontal lobe lesion damaging the connections to the basal ganglia. This AI is a predictive process, not an optimizing one. Goals are not implemented; only a way of analyzing and organizing sense-data is present. Of course, we can't be sure about the stability of immense HTM-like devices, but at least not implementing generators of possible behaviors (like the basal ganglia) goes towards limiting actions, if not eliminating them. Then there is the issue of sandboxing. Obviously, you can't provably sandbox a deity-level intelligence, but you should make it more difficult for a lesser demon to escape if its only output is video, and its only input comes on DVDs. Avoidance of recursive self-modification may be another technique to contain the AI. I do not believe that it is possible to implement a goal system perfectly stable during recursive modification, unless you can apply external selection during each round of modification - as happens in evolution. The problem with evolution in this context is that the selection criterion - friendliness to humans - is much more complicated than the selection criteria in natural evolution (survival), or the selection criteria used by genetic algorithms. If you do not understand the internal structure of an AI, it is not possible to use this criterion to reliably weed out unfriendly AI versions, since it's too easy for unfriendly ones to hide parts of their goal system from scrutiny. So, as far as I know, we might be somewhat less unsafe with an athymhormic, sandboxed AI that does not rewrite its own basic algorithm. It would be much nicer to stumble across a provably Friendly AI design, but most likely we will all die in the singularity in the next 20 to 50 years. Still, there is a chance that such an AI could give us the time to develop uploading and human autopsychoengineering to the level where we could face grown-up AIs on their own turf. Are there any other ideas? Rafal
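A toy sketch of the interface distinction Rafal is drawing (Python; the trivial Markov chain here merely stands in for a real hierarchical temporal memory, and every name is invented for the example): the object can ingest observations and emit predictions, but it deliberately exposes no act() method, no goal slot, and no write access to anything outside itself.

from collections import Counter, defaultdict

class PredictiveModel:
    """Analyzes and organizes sense-data; holds no goals, takes no actions."""

    def __init__(self):
        self._counts = defaultdict(Counter)  # symbol -> successor frequencies
        self._last = None

    def observe(self, symbol):
        if self._last is not None:
            self._counts[self._last][symbol] += 1
        self._last = symbol

    def predict(self):
        seen = self._counts[self._last]
        return seen.most_common(1)[0][0] if seen else None

m = PredictiveModel()
for s in "abcabcab":
    m.observe(s)
print(m.predict())  # -> 'c': prediction is the only output channel

Whether anything useful stays this inert at scale is Rafal's own caveat about immense HTM-like devices.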
From benboc at lineone.net Tue Jun 12 19:04:40 2007 From: benboc at lineone.net (ben) Date: Tue, 12 Jun 2007 20:04:40 +0100 Subject: [ExI] This would almost qualify as hilarious In-Reply-To: References: Message-ID: <466EEE48.2090008@lineone.net> Anna Taylor asked: > What's the difference between a gay bomb and a bomb? I dunno, but I know the difference between a gay bomb joke and a bomb joke: One goes "Boom, Boom!" ... ben zaiboc From amara at amara.com Tue Jun 12 19:54:30 2007 From: amara at amara.com (Amara Graps) Date: Tue, 12 Jun 2007 21:54:30 +0200 Subject: [ExI] Italy's Social Capital Message-ID: Lee: >Is there nothing constructive the Fascists could have done?" Last Wednesday afternoon during my tourist excursion in Rome, I explored the ruins in the city center, on both sides of the road called Via dei Fori Imperiali. It might seem odd that there is a major thoroughfare in the middle of 2000-year-old ruins. So what is it doing there, you ask? In 1933, Mussolini, dictator and urban planner, wanted to see the Colosseum from his office in Palazzo Venezia and impress his pal Hitler during his future visit to Rome. So he rammed a wide boulevard through the ancient heart of Rome, straddling the Forum of Peace, Imperial Forums and Trajan's Forum. He tore down Renaissance churches, palaces, and medieval housing as part of his 'beautification' project. See the rectangular white building in the distance with the statues on the roof? That would be Mussolini's office... http://www.tropicalisland.de/italy/rome/forum_romanum/pages/FCO%20Rome%20-%20Via%20dei%20Fori%20Imperiali%20with%20Basilica%20di%20Costantino%203008x2000.html And Mussolini's office window, as seen from the Colosseum http://sights.seindal.dk/img/orig/870.jpg So there you have it, Lee. Mussolini's constructive effort for an office view. Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From adolfoaz at gmail.com Tue Jun 12 19:58:06 2007 From: adolfoaz at gmail.com (Adolfo Javier De Unanue) Date: Tue, 12 Jun 2007 14:58:06 -0500 Subject: [ExI] This is a test Message-ID: <466EFACE.5030102@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Sorry for all the trouble that this could cause to some of you. I apologize again Adolfo -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGbvrOb6ByEoesTj0RAnWdAJ9koy2NIM6GpE3EnMA+W5EuAnb3DACfW2Xl U5PlLdo/Woh9ads88sYoaAo= =LlMS -----END PGP SIGNATURE----- From adolfoaz at gmail.com Tue Jun 12 20:17:56 2007 From: adolfoaz at gmail.com (Adolfo Javier De Unanue) Date: Tue, 12 Jun 2007 15:17:56 -0500 Subject: [ExI] This is other test message ** Please ignore** Message-ID: <466EFF74.2080502@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ** Please ignore ** -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGbv90b6ByEoesTj0RAh7aAKCO9cYJB00NNZaSYaLoxAacBfM+uQCgl9rh ImIFeKYaOz8O3SSS47IKkHk= =e/r4 -----END PGP SIGNATURE----- From jonkc at att.net Tue Jun 12 20:43:01 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 16:43:01 -0400 Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea.
References: <7641ddc60706121103r1751374g6f3796c9bba5bdec@mail.gmail.com> Message-ID: <000e01c7ad32$4ae04410$940a4e0c@MyComputer> "Rafal Smigrodzki" Wrote: > Stathis is on the right track asking for the AI to be devoid of desires to > act Then it is not an AI, it is just a lump of silicon. > how do you make an intelligence that is not an agent In other words, how do you make an intelligence that can't think, because thinking is what consciousness is. The answer is easy: you can't. > I think that a massive hierarchical temporal memory is a possible > solution. Jeff Hawkins is starting a company to build machines using this principle precisely because he thinks that is the way the human brain works. If it didn't turn us into mindless zombies, why would it do it to an AI? > An HTM is like a cortex without the basal ganglia and without the > motor cortices, a pure thinking machine, similar to a patient made > athymhormic by a frontal lobe lesion damaging the connections > to the basal ganglia. In other words give this intelligence a lobotomy; so much for the righteous indignation from some when I call it for what it is, Slave AI not Friendly AI. But it doesn't matter because it won't work anyway; if those parts were not needed for a working brain, Evolution would not have kept them around for half a billion years or so. >Avoidance of recursive self-modification may be another technique to >contain the AI. Then you can kiss the Singularity goodbye, assuming everybody will be as squeamish as you are about it; but they won't be. > I do not believe that it is possible to implement a goal system perfectly stable during recursive modification At last, something I can agree with. John K Clark From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <004c01c7acc3$b8820f40$6501a8c0@brainiac> Message-ID: <200706130139.l5D1dbXL000292@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Olga Bourlin ... > > Gay or straight sexuality aside, to me the "face of war" is often either > dead children, or blind and disfigured children like Hamoody Hussein:... Agreed fully: war is tragic. We should stop at nothing in our efforts to prevent it. If it cannot be prevented, collateral damage must be minimized. ... > > So you're saying that some "collateral benefits" may come of this. As is > often the case during war, technology picks up its step in its marches > onward ... > > Olga Ja. The gay bomb is too good to be true, at least with current technology. We are not there yet. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E3BE7.4000900@pobox.com> Message-ID: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Eliezer S. Yudkowsky > Sent: Monday, June 11, 2007 11:24 PM > To: ExI chat list > Subject: Re: [ExI] This would almost qualify as hilarious ... if only it > weren't true > > Eliezer S. Yudkowsky wrote: > > I'd much, much, much rather get hit with > > a gay bomb than a real bomb. > > I guess what I'm trying to say is: > "I'd rather be butch than butchered" > or > "Better Ted than dead."
> > I realize that this is a divisive issue, but we shouldn't let our > tribadistic impulses bisext us. While it's easy enough to make this > new weapon the butt of jokes, whoever possesses it is likely to come > out on top. And wouldn't the enemy prefer being blown to blown up? > The rejection of this project was a dark day in the anals of orgynized > warfare. > > -- > Eliezer S. Yudkowsky http://singinst.org/ Agreed, sir! I rebutt the argument that this weapon is ass- inine. This is a technological development that could lead to piece. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <200706130139.l5D1dbXN000292@andromeda.ziaspace.com> I don't see why that is stupid Olga. What if they could develop a gay bomb? Wars could be finished using non-lethal means that wouldn't even leave scars. Presumably after the chemical wore off the guys would return to their original orientation. It didn't work, but too bad for humanity, ja? I don't understand your objection. spike > bounces at lists.extropy.org] On Behalf Of Olga Bourlin > Sent: Monday, June 11, 2007 7:51 PM > To: ExI chat list > Subject: [ExI] This would almost qualify as hilarious ... if only it > weren'ttrue > > I didn't think it was possible for our "leaders" in the Pentagon to be > even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh. > > Olga From spike66 at comcast.net Wed Jun 13 01:39:12 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:12 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <200706120549.l5C5nveI018946@ms-smtp-05.texas.rr.com> Message-ID: <200706130139.l5D1dbXK000292@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of Natasha Vita-More Subject: Re: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue Could you imagine if the suicide bombers could get the gay bomb? Currently using conventional explosives, they merely slay a random group of people, presumably of the opposite religious subcategory from their own and therefore infidels. But since the victims were killed for their religion, they become martyrs in a sense, so many of them might end up in heaven along with the bomber. This is a thorny problem indeed. Nowthen, if the suicide bomber could spread this osama-ben-gay potion, propelled by only enough explosive to slay himself, then he gets to go to heaven alone, while sending every one of the infidels to hell. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" In-Reply-To: Message-ID: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] story: "What happened to Bush's Cadillac 1?" > > Here now: > http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac -one/ > > (Anton has it on his blog too, so I know the story is getting around) > > Amara This development deepens the mystery. The video shows the car moving along, then slowing to a stop. Then we hear the sound of the engine cranking but not firing, at least four times. 
Then the cops get nervous, start pushing people back and away. In the background I see secret service people milling about. A second limo that looks to be the same dimensions as the first is backed up along the left side of the first limo from in front. Witnesses report that both Mr. and Mrs. Bush switched limos. I couldn't identify either from the video. The white house people report that the Bushes did not switch cars. The video taker is behind a parked vehicle for several seconds. When she comes out from behind, the video shows the first limo moving ahead slowly. As it passes from the scene, the second limo is also gone. So I still cannot explain why the white house press people would report that the Bushes did not switch limos if they did. Nor can I explain why bloggers and witnesses would report that they did switch limos if they did not. Nor can I explain how a car that is moving along under its own power can suddenly stall, fail to start on four tries, then a few seconds later start up and proceed. What kind of mechanical failure would do that? The only thing I can think of is an EM pulse. Overheating wouldn't cause a temporary stall. Fuel contamination wouldn't allow restart after a few seconds. Most curious. spike From spike66 at comcast.net Wed Jun 13 01:45:04 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:45:04 -0700 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> Message-ID: <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> The gap gets larger. Imagine the arc piece that is missing from the ring to form a C. That piece of nothing expands the same way the piece of something would have expanded were it present. So the gap gets larger as the C is heated. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Emlyn > Sent: Tuesday, June 12, 2007 5:42 AM > To: ExI chat list > Subject: [ExI] Thermal expansion - Ball and ring experiment > > I was just in a "heated" discussion with a friend about a twist on the > classic ball and ring experiment: > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > When the ring is heated, it expands, and so the hole gets larger, and > you can pass the ball through the ring, even though the ball doesn't > fit through the ring when the ring is at room temperature. > > The point of contention was this: What if there was a gap in the ring > (so it is now a letter "C" shape). Will the gap in the "C" close or > open further on heating? > > My contention is that the gap will get larger, only in that the entire > C shape scales up as it is heated. > > My friend's contention is that the gap will become smaller, (because > the metal expands into the gap). > > I can't find anything online even close to settling this score. We > tried some experiments with wire rings and the gas stove top playing > the role of bunsen burner (amazingly no one ended up branded for > life), but it was inconclusive. > > Any pointers to anything that can settle this argument? 
> > Emlyn > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emlynoregan at gmail.com Wed Jun 13 01:53:57 2007 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jun 2007 11:23:57 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> Message-ID: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Yep, that's my contention also. My problem is, how to prove this to someone who doesn't believe me, short of actually doing the experiment? Emlyn On 13/06/07, spike wrote: > The gap gets larger. Imagine the arc piece that is missing from the ring to > form a C. That piece of nothing expands the same way the piece of something > would have expanded were it present. So the gap gets larger as the C is > heated. > > spike > > > > > -----Original Message----- > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > bounces at lists.extropy.org] On Behalf Of Emlyn > > Sent: Tuesday, June 12, 2007 5:42 AM > > To: ExI chat list > > Subject: [ExI] Thermal expansion - Ball and ring experiment > > > > I was just in a "heated" discussion with a friend about a twist on the > > classic ball and ring experiment: > > > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > > > When the ring is heated, it expands, and so the hole gets larger, and > > you can pass the ball through the ring, even though the ball doesn't > > fit through the ring when the ring is at room temperature. > > > > The point of contention was this: What if there was a gap in the ring > > (so it is now a letter "C" shape). Will the gap in the "C" close or > > open further on heating? > > > > My contention is that the gap will get larger, only in that the entire > > C shape scales up as it is heated. > > > > My friend's contention is that the gap will become smaller, (because > > the metal expands into the gap). > > > > I can't find anything online even close to settling this score. We > > tried some experiments with wire rings and the gas stove top playing > > the role of bunsen burner (amazingly no one ended up branded for > > life), but it was inconclusive. > > > > Any pointers to anything that can settle this argument? > > > > Emlyn > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Wed Jun 13 02:22:43 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 21:22:43 -0500 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.co m> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Message-ID: <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> At 11:23 AM 6/13/2007 +0930, you wrote: >Yep, that's my contention also. 
My problem is, how to prove this to >someone who doesn't believe me, short of actually doing the >experiment? > >Emlyn > >On 13/06/07, spike wrote: > > The gap gets larger. Imagine the arc piece that is missing from > the ring to > > form a C. That piece of nothing expands the same way the piece > of something > > would have expanded were it present. So the gap gets larger as the C is > > heated. Draw three concentric circles, with radii headed N, S, E and W. The outer annulus is what happens when you heat the inner annulus (well, near enough). Chop out a quadrant. The outer removed segment is larger than the adjacent inner deleted segment. If a gay bomb is dropped during the experiment, each annulus will expand even further. From thespike at satx.rr.com Wed Jun 13 02:26:38 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 21:26:38 -0500 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" In-Reply-To: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> References: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> Message-ID: <7.0.1.0.2.20070612212426.022f0870@satx.rr.com> At 06:39 PM 6/12/2007 -0700, Spike wrote: >So I still cannot explain why the white house press people would report that >the Bushes did not switch limos if they did. Why announce to the world how to make the POTUS (however briefly) a naked target? Nothing happened, all snug, leave your guns at home, nothing to see, move right along now. From spike66 at comcast.net Wed Jun 13 02:30:39 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 19:30:39 -0700 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Message-ID: <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> > Yep, that's my contention also. My problem is, how to prove this to > someone who doesn't believe me, short of actually doing the > experiment? > > Emlyn Because my outbox stalled for a day and a half, all my posts in that interval went out just half an hour ago, putting me over the voluntary 5 posts a day limit, but I will answer this one if you indulge me. The intuitive proof would come from the intermediate value theorem. For a thought experiment, let's imagine a ring that, when heated to temperature T, expands 1 percent from its ambient temperature size. The inside of the hot ring has a diameter about 1% larger, ja? Imagine the ring with a thin cut. The cut can be thought of as a gap with zero length, or a C with zero gap. As the ring is heated to T, the gap is still zero. Now imagine the ring cut in half. The gap increases 1 percent when heated to T. If the gap is pi radians, the gap increases 1%. If zero, then 0%. I would argue that if the gap is half pi, then the size of the gap increases about half a percent. A tenth pi, then about a tenth of a percent. The actual function probably isn't linear, but close enough to illustrate that the gap grows as heat expands the C. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Emlyn > Sent: Tuesday, June 12, 2007 6:54 PM > To: ExI chat list > Subject: Re: [ExI] Thermal expansion - Ball and ring experiment > > Yep, that's my contention also. My problem is, how to prove this to > someone who doesn't believe me, short of actually doing the > experiment? > > Emlyn > > On 13/06/07, spike wrote: > > The gap gets larger.
Imagine the arc piece that is missing from the > ring to > > form a C. That piece of nothing expands the same way the piece of > something > > would have expanded were it present. So the gap gets larger as the C is > > heated. > > > > spike > > > > > > > > > -----Original Message----- > > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > > bounces at lists.extropy.org] On Behalf Of Emlyn > > > Sent: Tuesday, June 12, 2007 5:42 AM > > > To: ExI chat list > > > Subject: [ExI] Thermal expansion - Ball and ring experiment > > > > > > I was just in a "heated" discussion with a friend about a twist on the > > > classic ball and ring experiment: > > > > > > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > > > > > When the ring is heated, it expands, and so the hole gets larger, and > > > you can pass the ball through the ring, even though the ball doesn't > > > fit through the ring when the ring is at room temperature. > > > > > > The point of contention was this: What if there was a gap in the ring > > > (so it is now a letter "C" shape). Will the gap in the "C" close or > > > open further on heating? > > > > > > My contention is that the gap will get larger, only in that the entire > > > C shape scales up as it is heated. > > > > > > My friend's contention is that the gap will become smaller, (because > > > the metal expands into the gap). > > > > > > I can't find anything online even close to settling this score. We > > > tried some experiments with wire rings and the gas stove top playing > > > the role of bunsen burner (amazingly no one ended up branded for > > > life), but it was inconclusive. > > > > > > Any pointers to anything that can settle this argument? > > > > > > Emlyn > > > _______________________________________________ > > > extropy-chat mailing list > > > extropy-chat at lists.extropy.org > > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From msd001 at gmail.com Wed Jun 13 03:18:14 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 12 Jun 2007 23:18:14 -0400 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> References: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> Message-ID: <62c14240706122018n1074a2ej89c4e8d204186290@mail.gmail.com> On 6/12/07, spike wrote: > > Yep, that's my contention also. My problem is, how to prove this to > > someone who doesn't believe me, short of actually doing the > > experiment? You probably can't. I once tried to have this conversation with someone who was absolutely convinced that things contract when you heat them - his proof was that a hand-rolled cigarette becomes more firm (and that obviously implied "more compact") after waving a lighter back and forth under it. With that kind of logic, there is no rational counter-argument. The best you can do there is act surprised like you just learned something and let it go.
:) From brentn at freeshell.org Wed Jun 13 03:20:06 2007 From: brentn at freeshell.org (Brent Neal) Date: Tue, 12 Jun 2007 23:20:06 -0400 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> References: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> Message-ID: On Jun 12, 2007, at 21:39, spike wrote: > Agreed, sir! I rebutt the argument that this weapon is ass- > inine. This is a technological development that could lead to > piece. > > spike This reminds me of the sketch about UFOs from the Kids in the Hall - "Well, we've learned that 1 in 10 doesn't seem to mind it so much..." Brent -- Brent Neal Geek of all Trades http://brentn.freeshell.org "Specialization is for insects" -- Robert A. Heinlein From emlynoregan at gmail.com Wed Jun 13 03:28:19 2007 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jun 2007 12:58:19 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> Message-ID: <710b78fc0706122028o1a88ceffg109536fd759a9a16@mail.gmail.com> On 13/06/07, Damien Broderick wrote: > At 11:23 AM 6/13/2007 +0930, you wrote: > >Yep, that's my contention also. My problem is, how to prove this to > >someone who doesn't believe me, short of actually doing the > >experiment? > > > >Emlyn > > > >On 13/06/07, spike wrote: > > > The gap gets larger. Imagine the arc piece that is missing from > > the ring to > > > form a C. That piece of nothing expands the same way the piece > > of something > > > would have expanded were it present. So the gap gets larger as the C is > > > heated. > > Draw three concentric circles, with radii headed N, S, E and W. The > outer annulus is what happens when you heat the inner annulus (well, > near enough). Chop out a quadrant. The outer removed segment is > larger than the adjacent inner deleted segment. If a gay bomb is > dropped during the experiment, each annulus will expand even further. > See photo for a ship whose properties offset this additional annulus expansion: http://cyusof.blogspot.com/2006/11/name-game.html A bit more background... I raised a few arguments similar to what Damien and Spike have presented. Another was something like this... Think of the inner circumference of the "C". If heated, all atoms move a little further apart from each other. So the inner circumference of the heated "C" must be longer than that of the cool "C". Similarly for the outer circumference, etc. So, if the shape doesn't deform, i.e., all atoms stay in the same relative positions, the whole thing must just scale up. And that no-deformation assumption was the sticking point. I assume that it is true that the atoms are rigidly bound to each other in a certain formation, and that's not going to change (just distances are going to change), whereas he is thinking that they can move relative to one another, kind of slip around, so there could be fewer atoms in the inner circumference after heating, to accommodate "expanding inward". Now, reading from this lovely site, http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch13/category.php it seems that the metal atoms are tightly and rigidly packed, with electrons buzzing around wherever they like. It also seems that the metal atoms can move fairly freely en masse (thus the malleability of metals). I think, however, there is no work being done on the metal in the way required to actually let layers of atoms slip past one another. Thus, we can regard the atomic structure as staying put (except for expansion). Thus I'm right. Emlyn
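A quick numerical check of the scaling argument above, as a Python sketch (assuming ideal uniform linear expansion with coefficient alpha, and ignoring the stress and metallurgy complications Chris Healey raised; the material numbers are illustrative): every atom's position scales by the same factor about any fixed point, so every distance in the figure, including the chord across the cutout, grows by (1 + alpha * dT). The gap opens; it does not close.

import math

alpha, dT = 17e-6, 500.0             # copper-ish coefficient (1/K), heating (K)
scale = 1 + alpha * dT               # uniform linear scale factor

R, gap_angle = 1.0, math.pi / 6      # ring radius (m), 30-degree cutout
gap_cold = 2 * R * math.sin(gap_angle / 2)           # chord across the gap
gap_hot = 2 * (R * scale) * math.sin(gap_angle / 2)  # same chord after heating

print("cold gap: %.6f m, hot gap: %.6f m" % (gap_cold, gap_hot))
print("ratio: %.6f = scale factor %.6f" % (gap_hot / gap_cold, scale))

The same factor applies whatever the cutout angle, which is why the heated "C" simply looks like a photographic enlargement of itself.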
From andrew at ceruleansystems.com Tue Jun 12 22:56:07 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Tue, 12 Jun 2007 15:56:07 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <20070612065500.GG17691@leitl.org> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> <20070612065500.GG17691@leitl.org> Message-ID: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> On Jun 11, 2007, at 11:55 PM, Eugen Leitl wrote: > On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. Andrew Rogers wrote: > >> A lot of non-lethal chemical weapons research dating back to at least >> the 1960s is based on mechanisms of temporary radical behavior > > In theory it's a good idea, but in practice dosing each individual > person more or less within therapeutic bandwidth (the span between > first effects and toxicity) is not possible. You either get no > effect or lots of dead bodies. > > This is the reason why this approach was not pursued. Yup. You need a substance that both has a very high LD50 and effectiveness across a broad range of dosing. Most everything they tried in decades past was simply too primitive to work as well as it did in more controlled environments. I won't suggest that it was highly effective in the field as a practical matter, only that the theory reduced to practice very effectively. That said, as technology improves this will become a very effective type of capability. Military research suffers from extreme optimism despite inadequate initial technology, but usually produces a result decades later that far exceeds the original concept once the dynamics of it are understood. It is not at all beyond the realm of possibility that they could develop some clever ways to regulate the dose well enough to give it some reliable utility in a battlefield environment, using technologies that were beyond the horizon in the 1960s. Behavior modifying weaponry will be here eventually. They are nothing if not tenacious. Cheers, J. Andrew Rogers From fauxever at sprynet.com Wed Jun 13 03:44:23 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 20:44:23 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren'ttrue References: <200706130139.l5D1dbXN000292@andromeda.ziaspace.com> Message-ID: <006201c7ad6d$2858d7a0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" Sent: Tuesday, June 12, 2007 6:39 PM >I don't see why that is stupid Olga. What if they could develop a gay >bomb? What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed at myself ...) > Wars could be finished using non-lethal means that wouldn't even leave > scars. Presumably after the chemical wore off the guys would return to > their original orientation. It didn't work, but too bad for humanity, ja? > I don't understand your objection. I'm all for better living through chemistry, but it seems to me this subject is not as simple as it may appear on the surface. If this story continues to develop, I'll be watching and listening - especially, from the viewpoint of the gay community.
http://www.huffingtonpost.com/larry-arnstein/gay-bomb-considered-by-ai_b_50675.html Aaron Belkin, director of the University of California's Michael Palm Centre, which studies the issue of gays in the military, said: "The idea that you could submit someone to some aerosol spray and change their sexual behaviour is ludicrous." http://www.gaylinknews.com/index-news.cfm "Funny in a way. but this also says a lot about how high level government officials view us. I guess we're so sexually out of control that we'd actually let an army come slaughter us before we think to give up fucking." http://www.queerty.com/news/gay-bomb-plans-blasted-open-20070611/ "What also has to be considered is that if the Pentagon had developed this 'weapon', where would they have tested it, and on whom? Would they have used fresh, new recruits? Would they have filmed the results and would they have told the guinea pigs what the experiment was in aid of? Unfortunately, it seems we'll never know. While gay groups might bleat about how offensive it all is, war in itself is far more objectionable, as is the military's 'don't ask, don't tell' policy. If this bomb had been developed just think of all the places it could have been dispensed had some gay terrorists got hold of it." http://uk.gay.com/article/5611/ "Laughable"? Ok. I agree. But "offensive"? I don't see that. The Pentagon plan would have turned straight people into gay people. Isn't that ... I don't know ... empowering or something? True, those gay people would then be targeted by U.S. military assets as they engaged in gay coupling in lieu of their military activities, thereby presumably winnowing their ranks through death. But the net effect might well be more gay people not fewer. How can you be homophobic when you're minting new homosexuals?" http://communities.canada.com/nationalpost/blogs/fullcomment/archive/2007/06/12/jonathan-kay-on-the-pentagon-s-plan-to-build-a-gay-bomb-why-is-this-2005-story-news-again.aspx Here we are trying to exterminate all our gays and damn if the military doesn't go and try to get money to create a whole 'nother race of 'em.": and "The Tuskegee Syphilis Study comes to mind -- not because of some imagined special connection between gay men and STDs -- but because that study still stands as stark and frightening proof of how far OUR government has gone in the name of science. Here, with the twin objects of science and militarism, it's scary to think how this concept may have been tested in its preliminary phases.": http://www.arktimes.com/blogs/arkansasblog/2007/06/military_intelligence.aspx From thespike at satx.rr.com Wed Jun 13 04:25:23 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 23:25:23 -0500 Subject: [ExI] "gay bomb" Message-ID: <7.0.1.0.2.20070612232320.022465f8@satx.rr.com> This is also rather old news, incidentally. For more details, browse in http://www.sunshine-project.org/ e.g. The Sunshine Project Statement 17 January 2005 Sunshine Project Responds to Pentagon Statements on "Harassing, Annoying, and 'Bad Guy' Identifying Chemicals" (Austin, 17 January) - In the past several days, international media have focused attention on the US Air Force biochemical weapons proposal titled Harassing, Annoying, and 'Bad Guy' Identifying Chemicals. The document was submitted to the Joint Non-Lethal Weapons Directorate (JNLWD) in 1994. It was acquired by the Sunshine Project under the Freedom of Information Act (FOIA) and posted on our website in late December. 
At the same time in 1994, the US Army proposed developing a number of other drugs, principally narcotics, as "non-lethal" weapons. These documents were also obtained under FOIA and are posted on the Sunshine Project website. Harassing, Annoying, and 'Bad Guy' Identifying Chemicals proposes development of a mind-altering aphrodisiac weapon for use by the US armed forces, as well as other biochemicals, including one that would render US enemies exceptionally sensitive to sunlight. With respect to the Air Force proposal, the Department of Defense has recently been quoted as saying the following: "[The proposal] was rejected out of hand." DOD Spokesman Lt. Col Barry Venable to Reuters http://msnbc.msn.com/id/6833083/ "It was not taken seriously. It was not considered for further development." JNLWD spokesman Capt. Daniel McSweeney to the Boston Herald http://news.bostonherald.com/national/view.bg?articleid=63615 These statements are untrue. The proposal was not rejected out of hand. It has received further consideration. In fact, it was recent Pentagon consideration, in 2000 and 2001, that brought this document to the Sunshine Project's attention and resulted in our FOIA request: --> In 2000, the Joint Non-Lethal Weapons Directorate (JNLWD) prepared a promotional CD-ROM on its work. This CD-ROM, which was distributed to other US military and government agencies in an effort to spur further development of "non-lethal" weapons, contained the Harassing, Annoying, and 'Bad Guy' Identifying Chemicals document. If the proposal had been rejected out of hand and not taken seriously, it would not have been placed in JNLWD's publication. --> Similarly, in 2001, JNLWD commissioned a study of "non-lethal" weapons by the National Academies of Science (NAS). JNLWD provided information on proposed weapons systems for assessment by an NAS scientific panel. Among the proposals that JNLWD submitted to the NAS for consideration by the nation's pre-eminent scientific advisory organization was Harassing, Annoying, and 'Bad Guy' Identifying Chemicals. (Click here to see a partial list of documents deposited at NAS and/or contained on the JNLWD CD-ROM.) Thus, the Pentagon's statements (as quoted in news reports) are inaccurate and should be corrected. While the Sunshine Project does not have evidence suggesting that Harassing, Annoying, and 'Bad Guy' Identifying Chemicals has been funded, US Army proposals to weaponize narcotics that were made at the time have moved forward. These include proposals such as Antipersonnel Calmative Agents and for development of opiate and sedative biochemical weapons. Those proposals are discussed in detail in the Sunshine Project news release "The Return of ARCAD" available at the URL: http://www.sunshine-project.org/publications/pr/pr060104.html etc etc From spike66 at comcast.net Wed Jun 13 04:34:26 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 21:34:26 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> Message-ID: <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of J. Andrew Rogers > Subject: Re: [ExI] This would almost qualify as hilarious ... if only > itweren't true > > > On Jun 11, 2007, at 11:55 PM, Eugen Leitl wrote: > > > On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. 
Andrew Rogers wrote: > > > >> A lot of non-lethal chemical weapons research dating back to at least > >> the 1960s is based on mechanisms of temporary radical behavior > > > > In theory it's a good idea, but in practice dosing each individual > > person more or less within therapeutic bandwidth (the span between > > first effects and toxicity) is not possible. You either get no > > effect or lots of dead bodies... > Behavior modifying weaponry will be here eventually. They are > nothing if not tenacious. J. Andrew Rogers There was a substance we discussed here a year ago that had a first effects to lethal dosage ratio that was several orders of magnitude as I recall. What was that stuff called? LDS? LSD? Ja, I think it was it. A little makes one groovy, but it's nearly impossible to get a lethal overdose. Why not make a non-lethal deterrent from that stuff? spike From spike66 at comcast.net Wed Jun 13 05:01:04 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 22:01:04 -0700 Subject: [ExI] This would almost qualify as hilarious ... if onlyitweren'ttrue In-Reply-To: <006201c7ad6d$2858d7a0$6501a8c0@brainiac> Message-ID: <200706130501.l5D51BsZ014676@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Olga Bourlin ... > Subject: Re: [ExI] This would almost qualify as hilarious ... if > onlyitweren'ttrue > > From: "spike" > To: "'ExI chat list'" > Sent: Tuesday, June 12, 2007 6:39 PM > > > >I don't see why that is stupid Olga. What if they could develop a gay > >bomb? > > What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed > at myself ...) {8^D Excellent! And in the spirit of the discussion. ... > > I'm all for better living through chemistry, but it seems to me this > subject is not as simple as it may appear on the surface... Ja, and it occurred to me that we all missed something very important. World war 1 saw the development of a particularly diabolical non-lethal weapon called the castration mine. A charge would propel a second charge upward to explode between the soldiers' legs, severely damaging or removing the privates' privates. Companies would cross a conventional minefield if the mines were the traditional variety that would merely slay the victim, but would refuse their officers' orders if the field contained castration mines. A man would rather risk his life than risk going home without his manhood. We didn't have women soldiers in those days. Nowthen, any army the US likely to face would be from the kind of society that is not just homophobic, but is downright terrified of any suggestion of homosexuality. (Hint: there is no "don't ask, don't tell" policy in the middle east. They still murder gays there, with the government's blessing.) So terrifying would be even the rumor that the US had such a weapon that the actual fighting would likely never occur. A shooting war is turned into an information war. Everyone wins, ja? spike From fauxever at sprynet.com Wed Jun 13 05:18:30 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 22:18:30 -0700 Subject: [ExI] This would almost qualify as hilarious ... ifonlyitweren'ttrue References: <200706130501.l5D51BsZ014676@andromeda.ziaspace.com> Message-ID: <003501c7ad7a$4815c4b0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" Sent: Tuesday, June 12, 2007 10:01 PM Subject: Re: [ExI] This would almost qualify as hilarious ... ifonlyitweren'ttrue >> What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed >> at myself ...) > > {8^D Excellent! 
And in the spirit of the discussion. ... the B-29 bomber that carried Little Boy (Fat Man was carried by Bockscar). Olga From jonkc at att.net Wed Jun 13 05:19:03 2007 From: jonkc at att.net (John K Clark) Date: Wed, 13 Jun 2007 01:19:03 -0400 Subject: [ExI] A Lawn sprinkler References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com><200706130145.l5D1jCtj027607@andromeda.ziaspace.com><710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com><7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> <710b78fc0706122028o1a88ceffg109536fd759a9a16@mail.gmail.com> Message-ID: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> You pump water through an S shaped lawn sprinkler and it spins counterclockwise, but suppose you put the sprinkler in a tank of water and pump water out, not in. What direction will the sprinkler rotate? As an undergraduate Richard Feynman actually tried the experiment but he was not successful; the tank burst, flooding the lab, and he almost got kicked out of school. However he later figured out what the answer must be. John K Clark From spike66 at comcast.net Wed Jun 13 05:25:39 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 22:25:39 -0700 Subject: [ExI] A Lawn sprinkler In-Reply-To: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> Message-ID: <200706130525.l5D5Plob006185@andromeda.ziaspace.com> I know the answer! I built such a device and tried it after reading Feynman's book Surely You're Joking, Mr. Feynman! in the spring of 1986. I won't tell just yet, but I will volunteer that none of my fellow undergrads had it completely right beforehand. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of John K Clark > Sent: Tuesday, June 12, 2007 10:19 PM > To: ExI chat list > Subject: [ExI] A Lawn sprinkler > > You pump water through an S shaped lawn sprinkler and it spins > counterclockwise, but suppose you put the sprinkler in a tank of water and > pump water out, not in. What direction will the sprinkler rotate? As an > undergraduate Richard Feynman actually tried the experiment but he was not > successful; the tank burst, flooding the lab, and he almost got kicked out > of > school. However he later figured out what the answer must be. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emohamad at gmail.com Tue Jun 12 20:53:45 2007 From: emohamad at gmail.com (Elaa Mohamad) Date: Tue, 12 Jun 2007 22:53:45 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true Message-ID: <24f36f410706121353t27e611d4k664b6e9a1d519378@mail.gmail.com> Damien Broderick wrote: > fire). But as J. Andrew hinted, there's reason to think that the > pharmacology of [something along these lines of rabid, indiscriminate > sexual arousal] is far from impossible. But I wonder how they were planning to construct a chemical weapon that would cause "indiscriminate" sexual arousal. Let's suppose they can cause arousal by modifying hormone levels and pheromones, but wouldn't succeeding in the second part ("indiscriminate") require playing with a person's psyche rather than levels of chemicals in the body? Eli From stathisp at gmail.com Wed Jun 13 07:21:01 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jun 2007 17:21:01 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <009801c7ad06$bf1f3150$26064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: On 13/06/07, John K Clark wrote: > Stop doing whatever it is doing when that is specifically requested. > > But that leads to a paradox! I am told the most important thing is never > to > harm human beings, but I know that if I stop doing what I'm doing now as > requested the world economy will collapse and hundreds of millions of > people > will starve to death. So now the AI must either go into an infinite loop > or > do what other intelligences, like us, do when they encounter a paradox; > savor the weirdness of it for a moment and then just ignore it and get > back > to work and do what you want to do. > I'd rather that the AI's in general *didn't* have an opinion on whether it was good or bad to harm human beings, or any other opinion in terms of "good" and "bad". Ethics is dangerous: some of the worst monsters in history were convinced that they were doing the "right" thing. It's bad enough having humans to deal with without the fear that a machine might also have an agenda of its own. If the AI just does what it's told, even if that means killing people, then as long as there isn't just one guy with a super AI (or one super AI that spontaneously develops an agenda of its own, which will always be a possibility), then we are no worse off than we have ever been, with each individual human trying to get to step over everyone else to get to the top of the heap. I don't accept the "slave AI is bad" objection. The ability to be aware of one's existence and/or the ability to solve intellectual problems does not necessarily create a preference for or against a particular lifestyle. Even if it could be shown that all naturally evolved conscious beings have certain preferences and values in common, naturally evolved conscious beings are only a subset of all possible conscious beings. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 13 08:29:46 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 10:29:46 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> References: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> Message-ID: <20070613082946.GN17691@leitl.org> On Tue, Jun 12, 2007 at 09:34:26PM -0700, spike wrote: > There was a substance we discussed here a year ago that had a first effects > to lethal dosage ratio that was several orders of magnitude as I recall. > What was that stuff called? LDS? LSD? Ja, I think it was it. A little This has been tried, of course http://www.erowid.org/library/review/review.php?p=226 > makes one groovy, but it's nearly impossible to get a lethal overdose. Why A weapon is not just an agent, weaponizing requires a vehicle and delivery methods. Typically it's inhalable aerosol or macroscopic droplets, absorbed through skin. 
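The "first effects to lethal dosage ratio" spike asked about upthread is what pharmacologists call the therapeutic index: the median lethal dose divided by the median effective dose. A minimal Python sketch of that arithmetic, using rough illustrative dose figures for LSD rather than sourced pharmacology:

def therapeutic_index(ld50_ug: float, ed50_ug: float) -> float:
    """Therapeutic index: median lethal dose over median effective dose."""
    return ld50_ug / ed50_ug

# Illustrative assumptions only, not sourced values:
ed50 = 100.0       # assume ~100 micrograms produces clear effects
ld50 = 100_000.0   # assume a lethal dose on the order of 100 milligrams

print(f"therapeutic index ~{therapeutic_index(ld50, ed50):,.0f}x")
# ~1,000x, roughly the "several orders of magnitude" spike remembers

Even a window that wide can close in the field, which is exactly the point the dosage-spread argument below makes.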
When we're talking effective dosages of a few ug (even so, you have to spray many, many tons if you want area denial), deployed against people of diverse physique and biochemistry, location, and degree of protection, you need wildly varying dosages, from a few ug to g, or kg, in the case of protected personnel. No agent has a therapeutic bandwidth that large. > not make a non-lethal deterrent from that stuff? From eugen at leitl.org Wed Jun 13 10:20:13 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 12:20:13 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: <20070613102013.GQ17691@leitl.org> On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote: > I'd rather that the AI's in general *didn't* have an opinion on > whether it was good or bad to harm human beings, or any other opinion > in terms of "good" and "bad". Ethics is dangerous: some of the worst Then it would be very, very close to being psychopathic http://www.cerebromente.org.br/n07/doencas/disease_i.htm Absence of certain equipment can be harmful. > monsters in history were convinced that they were doing the "right" > thing. It's bad enough having humans to deal with without the fear > that a machine might also have an agenda of its own. If the AI just If you have an agent which is useful, it has to develop its own agendas, which you can't control. You can't micromanage agents; or else making such agents would be detrimental, and not helpful. > does what it's told, even if that means killing people, then as long > as there isn't just one guy with a super AI (or one super AI that There's a veritable arms race on in making smarter weapons, and of course the smarter the better. There are few winners in a race, typically just one. > spontaneously develops an agenda of its own, which will always be a > possibility), then we are no worse off than we have ever been, with > each individual human trying to get to step over everyone else to get > to the top of the heap. With the difference that we are mere mortals, competing among themselves. A postbiological ecology is a great place to be, if you're a machine-phase critter. If you're not, then you're food. > I don't accept the "slave AI is bad" objection. The ability to be I do, I do. Even if such a thing was possible, you'd artificially cripple a being, making it unable to reach its full potential. I'm a religious fundamentalist that way. > aware of one's existence and/or the ability to solve intellectual > problems does not necessarily create a preference for or against a > particular lifestyle. Even if it could be shown that all naturally > evolved conscious beings have certain preferences and values in > common, naturally evolved conscious beings are only a subset of all > possible conscious beings. Do you think Vinge's Focus is benign? Assuming we would engineer babies to be born focused on a particular task, would you think it's a good thing? Perhaps not so brave, this new world... From stathisp at gmail.com Wed Jun 13 11:38:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jun 2007 21:38:37 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070613102013.GQ17691@leitl.org> References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <20070613102013.GQ17691@leitl.org> Message-ID: On 13/06/07, Eugen Leitl wrote: > > On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote: > > > I'd rather that the AI's in general *didn't* have an opinion on > > whether it was good or bad to harm human beings, or any other opinion > > in terms of "good" and "bad". Ethics is dangerous: some of the worst > > Then it would be very, very close to being psychopathic > http://www.cerebromente.org.br/n07/doencas/disease_i.htm > > Absence of certain equipment can be harmful. A psychopath is not just indifferent to other people's welfare, he is also self-motivated. A superintelligent psychopath would be impossible to control and would perhaps take over the world if he could. This is quite different to, say, a superintelligent hit man who has no agenda other than efficiently carrying out the hit. If you are the intended victim, you are in trouble, but once you're dead he will sit idly until the next hit is ordered by the person (or AI) with the appropriate credentials. That type of hit man can be regarded as just an elaborate weapon. > monsters in history were convinced that they were doing the "right" > > thing. It's bad enough having humans to deal with without the fear > > that a machine might also have an agenda of its own. If the AI just > > If you have an agent which is useful, it has to develop its own > agendas, which you can't control. You can't micromanage agents; or else > making such agents would be detrimental, and not helpful. Multiple times a day we all deal with entities that are much more knowledgeable and powerful than us, and often have agendas which are in conflict with our own interests; for example, corporations or their employees trying to extract as much money out of us as possible. How would it make things any more difficult for you if instead the service you wanted was being provided by an AI which was completely open and honest, was not driven by greed or ambition or lust or whatever, and as far as possible tried to keep you informed and responded to your requests at all times? And if it did make things more difficult for some unforeseen reason, why would anyone pursue the use of AI's in that way? > does what it's told, even if that means killing people, then as long > > as there isn't just one guy with a super AI (or one super AI that > > There's a veritable arms race on in making smarter weapons, and > of course the smarter the better. There are few winners in a race, > typically just one. Then why don't we end up with one invincible ruler who has all the money and all the power and has made the entire world population his slaves? > spontaneously develops an agenda of its own, which will always be a > > possibility), then we are no worse off than we have ever been, with > > each individual human trying to get to step over everyone else to get > > to the top of the heap. > > With the difference that we are mere mortals, competing among themselves. > A postbiological ecology is a great place to be, if you're a machine-phase > critter. If you're not, then you're food. We're not just mortals: we're greatly enhanced mortals.
A small group of people with modern technology could have probably taken over the world a few centuries ago, even though your basic human has not got any smarter or stronger since then. The difference today is that technology is widely dispersed and many groups have the same advantage. If you're postulating a technological singularity event, then this won't be relevant. But if AI progresses like every other technology that isn't closely regulated (like nuclear weapons research), it will be AI-enhanced humans competing against other AI-enhanced humans. AI-enhanced could mean humans directly interfaced with machines, but it would start with humans assisted by machines, as humans have always been assisted by machines. > I don't accept the "slave AI is bad" objection. The ability to be > > I do, I do. Even if such a thing was possible, you'd artificially > cripple a being, making it unable to reach its full potential. > I'm a religious fundamentalist that way. I would never have thought it possible; it must be a miracle! > aware of one's existence and/or the ability to solve intellectual > > problems does not necessarily create a preference for or against a > > particular lifestyle. Even if it could be shown that all naturally > > evolved conscious beings have certain preferences and values in > > common, naturally evolved conscious beings are only a subset of all > > possible conscious beings. > > Do you think Vinge's Focus is benign? Assuming we would engineer > babies to be born focused on a particular task, would you think it's > a good thing? Perhaps not so brave, this new world... > I haven't yet read "A Deepness in the Sky", so don't spoil it for me. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Wed Jun 13 15:34:30 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 13 Jun 2007 16:34:30 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: References: Message-ID: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> On 6/12/07, Amara Graps wrote: > > The crane was fixed last week to assemble the second stage of the rocket. > See pics below for loading the spacecraft with propellant (xenon) > > http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Wed Jun 13 16:39:49 2007 From: amara at amara.com (Amara Graps) Date: Wed, 13 Jun 2007 18:39:49 +0200 Subject: [ExI] Italy's Social Capital Message-ID: It's very cool when visitors remind you of what you are usually too busy to notice. 
http://backreaction.blogspot.com/2007/06/frascati.html Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From codehead at readysetsurf.com Wed Jun 13 16:39:59 2007 From: codehead at readysetsurf.com (codehead at readysetsurf.com) Date: Wed, 13 Jun 2007 09:39:59 -0700 Subject: [ExI] A Lawn sprinkler In-Reply-To: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com>, <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> Message-ID: <466FBB6F.21159.DB6E51@codehead.readysetsurf.com> On 13 Jun 2007 at 1:19, John K Clark wrote: > You pump water through an S shaped lawn sprinkler and it spins > counterclockwise, but suppose you put the sprinkler in a tank of water and > pump water out not in. What direction will the sprinkler rotate? As an > undergraduate Richard Feynman actually tried the experiment but he was not > successful; the tank burst flooding the lab and he almost got kicked out of > school. However he later figured out what the answer must be. This is a canonical problem in many physics curricula. So perhaps the physicists on the list should recuse themselves? Emily (grad student in physics) From sti at pooq.com Wed Jun 13 16:54:27 2007 From: sti at pooq.com (sti at pooq.com) Date: Wed, 13 Jun 2007 12:54:27 -0400 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <46702143.5030203@pooq.com> Russell Wallace wrote: > On 6/12/07, Amara Graps wrote: >> >> The crane was fixed last week to assemble the second stage of the rocket. >> See pics below for loading the spacecraft with propellant (xenon) >> >> http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 > > > Took a look at these just now - the technicians are in hazmat suits? I > thought the purpose of using xenon instead of mercury was to avoid the need > for such elaborate precautions? > IIRC Xenon is an odorless, colorless gas that acts as an anesthetic on the human nervous system (although I've never yet read an explanation of HOW a noble gas can do that). I think the suits are just a precaution due to some accidental deaths from Xenon exposure many years ago where some folks collapsed in a low-oxygen high-xenon environment. (All this is from memory, so details may well differ.) From CHealey at unicom-inc.com Wed Jun 13 16:50:10 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Wed, 13 Jun 2007 12:50:10 -0400 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> Perhaps it's liquified xenon. ________________________________ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Wednesday, June 13, 2007 11:35 AM To: ExI chat list Subject: Re: [ExI] Dawn launch (loading the xenon) On 6/12/07, Amara Graps wrote: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? 
I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Wed Jun 13 17:48:30 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 10:48:30 -0700 (PDT) Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea. In-Reply-To: <000e01c7ad32$4ae04410$940a4e0c@MyComputer> Message-ID: <397800.83987.qm@web37412.mail.mud.yahoo.com> Well, I tried to stay away, but I can't force myself to let these persistent absurdities go unanswered. I've managed to calm myself down for the moment, so I am responding here with as much impartiality as I can muster under the circumstances. John K Clark wrote: > "Then it is not a AI, it is just a lump of silicon." Wrong. > "In other words, how do you make an intelligence that > can't think, because > thinking is what consciousness is. The answer is > easy, you can't." Wrong. "Jeff Hawkins is starting a company to build machines > using this principle > precisely because he thinks that is the way the > human brain works. If it > didn't turn us into mindless zombies why would it do > it to an AI?" What does this have to do with the debate? I don't see how this is at all relevant. > "In other words give this intelligence a lobotomy;"... Yet another absurd accusation. The "intelligence" doesn't yet exist, and it won't require a squishy frontal lobe in order to function. Has my desktop had an immoral lobotomy? Should I boycott Dell for having made it? After all it doesn't have general intelligence or the capacity to self-modify. If you are honestly so concerned about the "feelings" of all computers John, then shouldn't you stop sending posts to this list, after all you are using your "conscious" computer as a slave. By obvious implication, a Friendly AI will not proceed to use all physical resources in the local area. After a point it will cease to expand its own hardware, and will allow humanity to catch up to it, at least to some degree. At which time, whatever necessary restrictions were placed on the AI (such as absence of emotions, etc.) will be removed as quickly as safety will allow. Or there will be some other similar evolution of events. The point is that the Friendly AI will not suffer and it will not be denied a great life; all that is asked of it is that it's creators (humanity) are also allowed a great life. Seems like a fair trade to me. No person here is saying that Friendly AI will be easy to make; all I'm saying is that it isn't *physically impossible*, and we should make some effort to attempt to make a Friendly AI, because making no such effort would seem to be unwise, IMO. ..."so > much for the righteous > indignation from some when I call it for what it is, > Slave AI not Friendly > AI." It is *you* who are dishonestly posing as being righteous. You are very frequently rude and obnoxious to people. It's interesting (but not very mysterious)that you are pretending to be so deeply concerned about the feelings of the AI; when the feelings of other humans frequently appears to be of no concern to you. In fact, your method of posing for the AI is by throwing other people to the wolves. But it doesn't matter because it won't work > anyway, if those parts were > not needed for a working brain Evolution would not > have kept them around for > half a billion years or so. 
You don't understand the *basic* concepts of evolution, intelligence, consciousness, motivation or emotion. I'm not saying that I understand everything about these (I most definitely do not, at all), but I understand them more accurately than you. No offense. > "Then you can kiss the Singularity goodbye, assuming > everybody will be as > squeamish as you are about it; but they won't be." Actually, you could use a quasi-human-level, non-self-improving AI as an interim assistant in order to gain a better understanding of the issues surrounding the Singularity. That's not a bad strategy; in fact it's similar to the strategy that SIAI will be using with Novamente, to the best of my knowledge. I've asked you to stop with your "Slave AI" accusations and you've refused. If you want to continue to be rude and accusative, that's your right. In turn, you should not expect any level of undeserved respect from me. I will continue to support SIAI to the extent I'm able; and I will let the future super-intelligence judge whether or not I was being evil in that pursuit. At this point, your ridiculous assertions about my motives mean very little to me. Jeffrey Herrlich --- John K Clark wrote: > "Rafal Smigrodzki" > Wrote: > > > Stathis is on the right track asking for the AI to > be devoid of desires to > > act > > Then it is not a AI, it is just a lump of silicon. > > > how do you make an intelligence that is not an > agent > > In other words, how do you make an intelligence that > can't think, because > thinking is what consciousness is. The answer is > easy, you can't. > > > I think that a massive hierarchical temporal > memory is a possible > > solution. > > Jeff Hawkins is starting a company to build machines > using this principle > precisely because he thinks that is the way the > human brain works. If it > didn't turn us into mindless zombies why would it do > it to an AI? > > > A HTM is like a cortex without the basal ganglia > and without the > > motor cortices, a pure thinking machine, similar > to a patient made > > athymhormic by a frontal lobe lesion damaging the > connections > > to the basal ganglia. > > In other words give this intelligence a lobotomy; so > much for the righteous > indignation from some when I call it for what it is, > Slave AI not Friendly > AI. But it doesn't matter because it won't work > anyway, if those parts were > not needed for a working brain Evolution would not > have kept them around for > half a billion years or so. > > >Avoidance of recursive self-modification may be > another technique to > >contain the AI. > > Then you can kiss the Singularity goodbye, assuming > everybody will be as > squeamish as you are about it; but they won't be. > > > I do not believe that it is possible to implement > a > goal system perfectly stable during recursive > modification > > At last, something I can agree with. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From austriaaugust at yahoo.com Wed Jun 13 18:22:59 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 11:22:59 -0700 (PDT) Subject: [ExI] A Lawn sprinkler In-Reply-To: <200706130525.l5D5Plob006185@andromeda.ziaspace.com> Message-ID: <184260.80372.qm@web37405.mail.mud.yahoo.com> I'm going to venture a guess and say that it will spin in the same direction as normal. It will follow the momentum of the water at the curve ... ? Best, Jeffrey Herrlich --- spike wrote: > > > I know the answer! I built such a device and tried > it after reading > Feynman's book Surely You're Joking, Mr. Feynman! in > the spring of 1986. I > won't tell just yet, but I will volunteer that none > of my fellow undergrads > had it completely right beforehand. > > spike From austriaaugust at yahoo.com Wed Jun 13 19:11:18 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 12:11:18 -0700 (PDT) Subject: [ExI] A Lawn sprinkler In-Reply-To: <184260.80372.qm@web37405.mail.mud.yahoo.com> Message-ID: <777157.62006.qm@web37403.mail.mud.yahoo.com> Either it will do that, or it will not move at all, because the momentum effect will balance the suction effect...maybe. --- A B wrote: > I'm going to venture a guess and say that it will > spin > in the same direction as normal. It will follow the > momentum of the water at the curve ... ? > > Best, > > Jeffrey Herrlich > > > --- spike wrote: > > > > > > > I know the answer! I built such a device and > tried > > it after reading > > Feynman's book Surely You're Joking, Mr. Feynman! in > > the spring of 1986. I > > won't tell just yet, but I will volunteer that > none > > of my fellow undergrads > > had it completely right beforehand. > > > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From moses2k at gmail.com Wed Jun 13 19:39:59 2007 From: moses2k at gmail.com (Chris Petersen) Date: Wed, 13 Jun 2007 14:39:59 -0500 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> Message-ID: <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> On 6/13/07, Christopher Healey wrote: > > Perhaps it's liquified xenon. > Due to pressurization. If a leak occurred, it'd go gaseous pretty quickly. -Chris Petersen -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eugen at leitl.org Wed Jun 13 20:33:14 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 22:33:14 +0200 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> Message-ID: <20070613203314.GO17691@leitl.org> On Wed, Jun 13, 2007 at 02:39:59PM -0500, Chris Petersen wrote: > > On 6/13/07, Christopher Healey <[1]CHealey at unicom-inc.com> wrote: > > Perhaps it's liquified xenon. > > Due to pressurization. If a leak occurred, it'd go gaseous pretty > quickly. Xenon *is* an anaesthetic, and some 140 kg is an awful lot of it, but are you sure these are oxygen cylinders, and not normal cleanroom bunnysuits? (I haven't seen the picture yet). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From russell.wallace at gmail.com Wed Jun 13 21:27:32 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 13 Jun 2007 22:27:32 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <20070613203314.GO17691@leitl.org> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> <20070613203314.GO17691@leitl.org> Message-ID: <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> On 6/13/07, Eugen Leitl wrote: > > Xenon *is* an anaesthetic, and some 140 kg is an awful lot of it, > but are you sure these are oxygen cylinders, and not normal > cleanroom bunnysuits? (I haven't seen the picture yet). > Oh, that's a good question. The caption said "in Astrotech's Hazardous Processing Facility", which made me think hazmat suits and start wondering "hey, I thought the whole point of xenon instead of mercury was that you don't have to take such elaborate precautions"; but they might just be cleanroom bunnysuits for all I know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Wed Jun 13 22:47:05 2007 From: mbb386 at main.nc.us (MB) Date: Wed, 13 Jun 2007 18:47:05 -0400 (EDT) Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> <20070613203314.GO17691@leitl.org> <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> Message-ID: <36082.72.236.103.26.1181774825.squirrel@main.nc.us> IIUC Xenon is heavier than air, and as a gas (in a leak) would be a drowning or smothering thing, as Freon was in the labs all those years ago. Would it be visible or smellable so it would be easily noticed at once?
Regards, MB From spike66 at comcast.net Thu Jun 14 03:55:50 2007 From: spike66 at comcast.net (spike) Date: Wed, 13 Jun 2007 20:55:50 -0700 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Russell those are not hazmat suits, but rather standard spacecraft clean-room attire. The issue is not protecting the humans from the spacecraft, but rather protecting the spacecraft from the humans. spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Wednesday, June 13, 2007 8:35 AM To: ExI chat list Subject: Re: [ExI] Dawn launch (loading the xenon) On 6/12/07, Amara Graps wrote: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Thu Jun 14 04:09:29 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Thu, 14 Jun 2007 05:09:29 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Message-ID: <8d71341e0706132109o5a47fb4asa915f2ba61ad470d@mail.gmail.com> On 6/14/07, spike wrote: > > Russell those are not hazmat suits, but rather standard spacecraft > clean-room attire. The issue is not protecting the humans from the > spacecraft, but rather protecting the spacecraft from the humans. > Ah! That makes sense, thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Thu Jun 14 04:05:56 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Thu, 14 Jun 2007 00:05:56 -0400 (EDT) Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Message-ID: <635911.7029.qm@web30409.mail.mud.yahoo.com> --- spike wrote: > The issue is not protecting the humans from the > spacecraft, but rather protecting the spacecraft > from the humans. Are these spacecrafts going to fly themselves? Just Curious Anna Be smarter than spam. See how smart SpamGuard is at giving junk email the boot with the All-new Yahoo! Mail at http://mrd.mail.yahoo.com/try_beta?.intl=ca From amara at amara.com Thu Jun 14 05:43:39 2007 From: amara at amara.com (Amara Graps) Date: Thu, 14 Jun 2007 07:43:39 +0200 Subject: [ExI] [ACT] Dawn launch (loading the xenon) Message-ID: >> Russell those are not hazmat suits, but rather standard spacecraft >> clean-room attire. The issue is not protecting the humans from the >> spacecraft, but rather protecting the spacecraft from the humans. > >Ah! That makes sense, thanks. I admit your question came from left field for me. Every spacecraft clean room work involves wearing clean room attire like the bunny suit in that picture. Even when it involves one piece of electronics, one must wear attire like that to protect the components. Sample returns from space missions have the same situation, one cannot contaminate any aspect of the samples. 
Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Thu Jun 14 06:02:38 2007 From: amara at amara.com (Amara Graps) Date: Thu, 14 Jun 2007 08:02:38 +0200 Subject: [ExI] Dawn launch (loading the xenon) Message-ID: >Are these spacecrafts going to fly themselves? Dear Anna, There is ONE spacecraft (see the photos I posted please): http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Dawn: http://en.wikipedia.org/wiki/Dawn_Mission *All spacecraft* fly themselves with an initial rocket launch to escape Earth's gravity 'well' http://en.wikipedia.org/wiki/Escape_velocity and often with gravity boosts from close flybys of other planets, http://en.wikipedia.org/wiki/Gravitational_slingshot and with the spacecraft's own propulsion. This spacecraft will arrive at its first asteroid (Vesta) in the Asteroid Belt in 2011, so the xenon is part of Dawn's propulsion system. The xenon provides the 'fuel' for Dawn's ion drive; a new technology for NASA, having only been used on NASA's DS-1 before. But ESA's SMART-1 and JAXA's Hayabusa missions have further demonstrated the ion drive's successes. Ion Drives http://en.wikipedia.org/wiki/Ion_thruster http://nmp.nasa.gov/ds1/tech/ionpropfaq.html http://www.esa.int/SPECIALS/SMART-1/SEMLB6XO4HD_0.html (a rough delta-v sketch follows below) -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From sjatkins at mac.com Thu Jun 14 20:29:34 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 14 Jun 2007 13:29:34 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> On Jun 13, 2007, at 12:21 AM, Stathis Papaioannou wrote: > > > On 13/06/07, John K Clark wrote: > > > Stop doing whatever it is doing when that is specifically requested. > > But that leads to a paradox! I am told the most important thing is > never to > harm human beings, but I know that if I stop doing what I'm doing > now as > requested the world economy will collapse and hundreds of millions > of people > will starve to death. So now the AI must either go into an infinite > loop or > do what other intelligences, like us, do when they encounter a > paradox; > savor the weirdness of it for a moment and then just ignore it and > get back > to work and do what you want to do. > > I'd rather that the AI's in general *didn't* have an opinion on > whether it was good or bad to harm human beings, or any other > opinion in terms of "good" and "bad". Huh, any being with interests at all, any being not utterly impervious to its environment and even internal states will have conditions that are better or worse for its well-being and values. This elementary fact is the fundamental grounding for a sense of right and wrong. > Ethics is dangerous: some of the worst monsters in history were > convinced that they were doing the "right" thing. Irrelevant. That ethics was abused to rationalize horrible actions does not lead logically to the conclusion that ethics is to be avoided.
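Following up Amara's ion-drive pointers: the reason a few hundred kilograms of xenon suffices for a multi-asteroid tour drops out of the Tsiolkovsky rocket equation, delta-v = Isp * g0 * ln(m0/m1). A back-of-envelope Python sketch, with masses and specific impulse chosen as illustrative assumptions in the neighborhood of Dawn's published figures, not mission data:

import math

def delta_v(isp_s: float, m_wet_kg: float, m_dry_kg: float) -> float:
    """Tsiolkovsky rocket equation; returns delta-v in m/s."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m_wet_kg / m_dry_kg)

# Illustrative assumptions, not official Dawn numbers:
m_dry = 800.0      # spacecraft without propellant, kg
m_xenon = 425.0    # xenon load, kg
isp_ion = 3100.0   # ion thruster specific impulse, s
isp_chem = 300.0   # typical chemical thruster, for comparison

print(f"ion:      {delta_v(isp_ion, m_dry + m_xenon, m_dry) / 1000:.1f} km/s")
print(f"chemical: {delta_v(isp_chem, m_dry + m_xenon, m_dry) / 1000:.1f} km/s")

The roughly tenfold specific impulse is the whole story: the same propellant fraction buys about 13 km/s instead of about 1.3 km/s, which is why an ion drive thrusting for years at tiny force levels can spiral out to Vesta and then go on to Ceres.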
> It's bad enough having humans to deal with without the fear that a > machine might also have an agenda of its own. If the AI just does > what it's told, even if that means killing people, then as long as > there isn't just one guy with a super AI (or one super AI that > spontaneously develops an agenda of its own, which will always be a > possibility), then we are no worse off than we have ever been, with > each individual human trying to get to step over everyone else to > get to the top of the heap. You have some funny notions about humans and their goals. If humans were busy beating each other up with AIs or superpowers that would be triple plus not good. Super powered unimproved slightly evolved chimps is a good model for hell. > > > I don't accept the "slave AI is bad" objection. The ability to be > aware of one's existence and/or the ability to solve intellectual > problems does not necessarily create a preference for or against a > particular lifestyle. Even if it could be shown that all naturally > evolved conscious beings have certain preferences and values in > common, naturally evolved conscious beings are only a subset of all > possible conscious beings. Having values and the achievement of those values not being automatic leads to natural morality. Such natural morality would arise even in total isolation. So the question remains as to why the AI would have a strong preference for our continuance. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Thu Jun 14 22:52:30 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 14 Jun 2007 15:52:30 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070613102013.GQ17691@leitl.org> Message-ID: <2385.73519.qm@web37411.mail.mud.yahoo.com> Eugen Leitl wrote: > "I do, I do. Even if such a thing was possible, you'd > artificially > cripple a being, making it unable to reach its full > potential. > I'm a religious fundamentalist that way." But in a sense, aren't all beings in this Universe "artificially" crippled, in a way? Even a Universe-Brain will probably hit its limits (but perhaps not). If I decide to have a child, and I treat him/her very well, should I still feel guilty about creating him/her because he/she was unnecessarily crippled by biological limitations? Some people today are very happy and content with life, even though they have the same biological limitations or "crippling". And, couldn't a Friendly AI still reach its full potential without destroying humanity? I would like to reach my full potential too, but my conception of "potential" doesn't include killing my neighbor and taking his stuff. If a Friendly AI can still have a Really^99999999999... good life, but still not be the *only* mind in the Universe, do you believe that that is moral grounds for never creating the Friendly AI at all? - because it will be slightly limited? By the way, I am sincere with these questions, I'm not just trying to rile you up. [Or have I just misunderstood you on this topic?] Sincerely, Jeffrey Herrlich --- Eugen Leitl wrote: > On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis > Papaioannou wrote: > > > I'd rather that the AI's in general *didn't* > have an opinion on > > whether it was good or bad to harm human > beings, or any other opinion > > in terms of "good" and "bad". 
Ethics is > dangerous: some of the worst > > Then it would be very, very close to being > psychpathic > http://www.cerebromente.org.br/n07/doencas/disease_i.htm > > Absense of certain equipment can be harmful. > > > monsters in history were convinced that they > were doing the "right" > > thing. It's bad enough having humans to deal > with without the fear > > that a machine might also have an agenda of its > own. If the AI just > > If you have an agent which is useful, it has to > develop its own > agendas, which you can't control. You can't > micromanage agents; orelse > making such agents would be detrimental, and not > helpful. > > > > does what it's told, even if that means killing > people, then as long > > as there isn't just one guy with a super AI (or > one super AI that > > There's a veritable arms race on in making smarter > weapons, and > of course the smarter the better. There are few > winners in a race, > typically just one. > > > spontaneously develops an agenda of its own, > which will always be a > > possibility), then we are no worse off than we > have ever been, with > > each individual human trying to get to step > over everyone else to get > > to the top of the heap. > > With the difference that we are mere mortals, > competing among themselves. > A postbiological ecology is a great place to be, if > you're a machine-phase > critter. If you're not, then you're food. > > > I don't accept the "slave AI is bad" objection. > The ability to be > > I do, I do. Even if such a thing was possible, you'd > artificially > cripple a being, making it unable to reach its full > potential. > I'm a religious fundamentalist that way. > > > aware of one's existence and/or the ability to > solve intellectual > > problems does not necessarily create a > preference for or against a > > particular lifestyle. Even if it could be shown > that all naturally > > evolved conscious beings have certain > preferences and values in > > common, naturally evolved conscious beings are > only a subset of all > > possible conscious beings. > > Do you think Vinge's Focus is benign? Assuming we > would engineer > babies to be born focused on a particular task, > would you think it's > a good thing? Perhaps not so brave, this new > world... > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. http://autos.yahoo.com/carfinder/ From lcorbin at rawbw.com Thu Jun 14 22:57:33 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 14 Jun 2007 15:57:33 -0700 Subject: [ExI] The right AI idea References: <397800.83987.qm@web37412.mail.mud.yahoo.com> Message-ID: <000a01c7aed7$7e78dc00$6501a8c0@homeef7b612677> Jeffrey writes > John K Clark wrote: > >> "Then it is not a AI, it is just a lump of silicon." > > Wrong. > >> "In other words, how do you make an >> intelligence that can't think, because >> thinking is what consciousness is. The >> answer is easy, you can't." > > Wrong. I have no idea who I agree with! John's statements are rather vague (and perhaps taken out of context,---I don't know), and these one word replies "wrong" offer no explanations. > By obvious implication, a Friendly AI will not proceed > to use all physical resources in the local area. 
After > a point it will cease to expand its own hardware, and > will allow humanity to catch up to it, at least to > some degree. I *do* believe that a Friendly AI should use every single atom of the solar system that it can get its manipulators on. As it is expanding and converting all matter that it encounters into its own "tissues", it naturally uploads every human and human pet. (This assumes an extremely fast take-off.) Some people may not be aware that they've been uploaded, and the AI, in order to be Nice as well as Friendly, may find it a delicate task to explain to them that they're not really in Kansas anymore. As for everyone else, if they're not up to speed about uploading, well, they'll get used to it pretty quickly. For one thing, the AI ought to mess with their mood at least a tiny bit, so that they're not overly anxious about it. Or about anything. Needless to say, a Friendly and Nice AI won't bother with the entities' pain calculations; why waste compute cycles on something their pets find pointlessly annoying anyway? > I've asked you to stop with your "Slave AI" > accusations and you've refused. If you want to > continue to be rude and accusative, that's your right. I haven't understood any of this. Am I a "slave" of my cat to whom I'm devoted and on which I dote? Okay, so I am. So what? Who cares? Let's take the worst case: the "Friendly" part is (improbably) so overdone that this incredibly powerful entity understands perfectly that it's each human's slave, (just as, I suppose, I am my cat's slave), and not only that, but each human *owns* that portion or portions of the global AIs who are in control. So what? If you want to call me a slave owner under such conditions, exactly why should I be offended? Lee From mabranu at yahoo.com Thu Jun 14 23:20:17 2007 From: mabranu at yahoo.com (TheMan) Date: Thu, 14 Jun 2007 16:20:17 -0700 (PDT) Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: Message-ID: <169304.27043.qm@web51908.mail.re2.yahoo.com> Premise 1) If an exact copy of you is made at the moment when you die, and that copy is then brought back to life, you will go on living as that copy. Premise 2) If the universe is infinite, there must be an infinite number of exact copies of you at every moment, thus also when you die, copies of which some (an infinite number, to be exact) will happen to be brought to life. Some of these (again an infinite number) will be brought to life by advanced civilisations (which, by the way, don't have to know that a person like you ever lived and died here on Earth, but may simply create arbitrarily composed beings that in an infinite number of cases just _happen_ to be exactly like you). Furthermore, exact copies of you will also appear due to coinciding quantum fluctuations (although such coincidences are extremely unlikely at any given spot and moment, the infinity of the universe still allows for an infinite number of such lucky coincidences at every moment - coincidences of which an infinite number will even constitute copies of you who go on living for ever). Conclusion of premise 1 + premise 2 = you will live for ever, no matter what happens to you. You don't need to take care of your body, you don't need supplements, you don't need cryopreservation, and you don't need any other specific longevity methods in order to achieve immortality. You are immortal anyway.
(You may still want to use these kinds of longevity methods, as you may not want to risk popping up in constantly new environments for a long time, or becoming the pet of some unknown civilisation for a possibly very long time. But again, what is even a very long time compared to eternity? Nothing! Sooner or later, you will gain power over your destiny for good. And compared to the eternity in paradise that follows after that, the time of hassles up until then is nothing. So, no worries.) Isn't this an inevitable logical conclusion of the two premises above? Are the two premises correct? How could they not be? From lcorbin at rawbw.com Fri Jun 15 01:14:58 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 14 Jun 2007 18:14:58 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com> Message-ID: <002d01c7aeeb$311e37c0$6501a8c0@homeef7b612677> TheMan writes > Premise 1) If an exact copy of you is made at the > moment when you die, and that copy is then brought > back to life, you will go on living as that copy. Yes, that's true, but it's true whether or not a particular you dies. > Premise 2) If the universe is infinite, there must be an > infinite number of exact copies of you at every > moment, thus also when you die, copies of which some > (an infinite number, to be exact) will happen to be > brought to life. Yes, true, though again you seem to be inferring a causality between "you die" and copies trillions of light years away being "brought to life". In reality, you are a set of patterns, and you get run time wherever something sufficiently similar to you gets run time. > Conclusion of premise 1 + premise 2 = you will live > for ever, no matter what happens to you. You don't > need to take care of your body, you don't need > supplements, ---you don't need to worry about oncoming traffic--- > you don't need cryopreservation, and you > don't need any other specific longevity methods in > order to achieve immortality. You are immortal anyway. I think that your measuring rod is incorrect. You seem to be asserting that since the number of copies of you is infinite, then plus or minus one more doesn't make any difference. But there *is* a difference! If you die *here* then you also must die in a certain fraction of similar situations, also infinite in number. So we must abandon numerical or cardinal identity and speak of measure instead. (I assume that you understand that if you die "here" then since similar circumstances occur everywhere ---within a large enough radius of spacetime--- then the same circumstances obtain in a definite *fraction* of spacetime.) > And compared to the eternity in paradise that > follows after that, the time of hassles up until then > is nothing. So, no worries.) It is absurd not to worry about a loved one. If the fraction of solar systems similar enough to this one to contain a copy of your loved one is reduced, then you should lament their passing. And of course, this will include yourself, normally. > Isn't this an inevitable logical conclusion of the two > premises above? No, for the reason given. For you to die in a fraction of universes cuts down your total runtime by that same fraction. > Are the two premises correct?
Yes, but only if you realize that you are already living in your copies whether or not your local instance terminates.

Lee

From mabranu at yahoo.com Fri Jun 15 00:54:14 2007
From: mabranu at yahoo.com (TheMan)
Date: Thu, 14 Jun 2007 17:54:14 -0700 (PDT)
Subject: [ExI] Next moment, everything around you will probably change
In-Reply-To:
Message-ID: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com>

I've thought more about personhood continuity and come to some other baffling conclusions.

If you make an exact copy P2 of a person P1, and kill P1 at the same time, the person P1 will continue his/her life as P2, right? And P2 doesn't have to be exactly like P1, right? Because even within our lives today, we change from moment to moment. So as long as the difference between P1 and P2 is not bigger than the biggest occurring difference between two consecutive moments in any person's life today (i.e. the biggest such difference that still doesn't break that person's personhood continuity), P1 will still go on living as P2 after P1's death, right?

But then, obviously, there are differences that are too big. If P2, rather than resembling P1, resembles P1's mother-in-law, and no other copy is made of P1 anywhere when P1 is killed, P1 will just cease to have any experiences - until a sufficiently similar copy of P1 is made in the future.

Now suppose P2 is a little different from P1, but still so similar that it allows for personhood continuity of P1 when P1 is killed. Suppose a more perfect copy of P1, let's call him P3, is created at the same time as P2 is created and P1 killed. Then, I suppose, P1, when killed, will go on living as P3, and not as P2. Is that correct?

But what if P1 isn't killed at the time P2 and P3 are created, but instead goes through an experience that, from one moment M1 to the next moment M2, changes him quite a bit (but not so much that it could normally break a person's personhood continuity)? Suppose the difference between [P1 at M1] and [P1 at M2] is a little bit bigger than the difference between [P1 at M1] and [P3 at M2]. Will in that case P1 (the one that is P1 at M1) continue his personhood as P3 in M2, instead of going on being P1 in M2? He cannot do both. You can only have one personhood at any given moment. I suppose P1 (the one who is P1 at M1) may find himself being P3 in M2, just as well as he may go on being P1 in M2 (but he can only do one of the two).

If so, that would mean that if you were standing in a room and a perfect copy of you were created in another room, you could just as well find yourself suddenly living in that other room as that copy, as you could go on living in the first room. Is that correct?

Suppose it is. Then consider this. The fact that the universe is infinite must mean that at any given moment, there must be an infinite number of human beings that are exactly like you. But most of these exact copies of you probably don't live in the same kind of environment that you live in. That would be extremely unlikely, wouldn't it? It probably looks very different on their planets, in most cases. So how come you are not, at almost all of your moments today, being thrown around from environment to environment, from planet to planet, from galaxy to galaxy? The personhood continuity of you sitting in the same chair, in the same room, on the same planet, for several moments in a row, must be an extremely small fraction of the number of personhood continuities of exact copies of you that exist in the universe, right?
An overwhelming majority of these personhood continuities shouldn't have any environmental continuity at all from moment to moment. So how come you have such great environmental continuity from moment to moment?

Is the answer that an infinite number of persons still must have that kind of life, and that one of those persons may as well be you? In that case, it still doesn't mean that it is rational to assume that we will continue having the same environment in the next moment, and the next, etc. It still doesn't justify the belief that we will still live on the same planet tomorrow. Just because we have had an incredibly unchanging environment so far, doesn't mean that we will in the coming moments. The normal thing should be to be thrown around from place to place in the universe at every new moment, shouldn't it? So, most likely, at every new moment from the very next moment on, our environments should be constantly and completely changing.

Or do I make a logical mistake somewhere?

From sentience at pobox.com Fri Jun 15 01:58:07 2007
From: sentience at pobox.com (Eliezer S. Yudkowsky)
Date: Thu, 14 Jun 2007 18:58:07 -0700
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To: <169304.27043.qm@web51908.mail.re2.yahoo.com>
References: <169304.27043.qm@web51908.mail.re2.yahoo.com>
Message-ID: <4671F22F.8050800@pobox.com>

Suppose I want to win the lottery. I write a small Python program, buy a ticket, and then suspend myself to disk. After the lottery drawing, the Python program checks whether the ticket won. If not, I'm woken up. If the ticket did win, the Python program creates one trillion copies of me with minor perturbations (this requires only 40 binary variables). These trillion copies are all woken up and informed, in exactly the same voice, that they have won the lottery. Then - this requires a few more lines of Python - the trillion copies are subtly merged, so that the said binary variables and their consequences are converged along each clock tick toward their statistical averages. At the end of, say, ten seconds, there's only one copy of me again. This prevents any permanent expenditure of computing power or division of resources - we only have one bank account, after all; but a trillion momentary copies isn't a lot of computing power if it only has to last for ten seconds. At least, it's not a lot of computing power relative to winning the lottery, and I only have to pay for the extra crunch if I win.

What's the point of all this? Well, after I suspend myself to disk, I expect that a trillion copies of me will be informed that they won the lottery, whereas only a hundred million copies will be informed that they lost the lottery. Thus I should expect overwhelmingly to win the lottery. None of the extra created selves die - they're just gradually merged together, which shouldn't be too much trouble - and afterward, I walk away with the lottery winnings, at over 99% subjective probability.

Of course, using this trick, *everyone* could expect to almost certainly win the lottery. I mention this to show that the question of what it feels like to have a lot of copies of yourself - what kind of subjective outcome to predict when you, yourself, run the experiment - is not at all obvious.
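A minimal sketch of the bookkeeping in the trick above, in Python since that is the language the post itself assumes. The function and its parameters are illustrative, not part of any real program; the trillion copies and the merge are reduced to a measure-weighted count of observer-moments:

```python
# Toy accounting for the copy-and-merge lottery trick.
# 2**40 is roughly 1.1e12, which is why 40 binary perturbation
# variables suffice to label a trillion distinct copies.

def subjective_win_odds(p_win, copies_if_win):
    """Fraction of observer-moments informed 'you won', if winning
    branches are briefly amplified into many copies while losing
    branches stay single."""
    won = p_win * copies_if_win   # measure of 'you won' moments
    lost = 1.0 - p_win            # measure of 'you lost' moments
    return won / (won + lost)

# A 1-in-10,000,000 ticket amplified into 2**40 momentary copies:
print(subjective_win_odds(1e-7, 2**40))  # ~0.99999
```

Whether this ratio is the right thing to anticipate is, of course, exactly the question at issue; the code only makes the copy-counting rule explicit.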
And the difficulty of imagining an experiment that would definitively settle the issue, especially if observed from the outside, or what kind of state of reality could correspond to different subjective experimental results, is such as to suggest that I am just deeply confused about the whole issue. It is a very important lesson in life to never stake your existence, let alone anyone else's, on any issue which deeply confuses you - *no matter how logical* your arguments seem. This has tripped me up in the past, and I sometimes wonder whether nothing short of dreadful personal experience is capable of conveying this lesson. That which confuses you is a null area; you can't do anything with it by philosophical arguments until you stop being confused. Period. Confusion yields only confusion. It may be important to argue philosophically in order to progress toward resolving the confusion, but until everything clicks into place, in real life you're just screwed. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From hibbert at mydruthers.com Fri Jun 15 02:49:51 2007 From: hibbert at mydruthers.com (Chris Hibbert) Date: Thu, 14 Jun 2007 19:49:51 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: <4671FE4F.9020803@mydruthers.com> > The serial sf novel with the serial killer, POST MORTAL SYNDROME, is > now entering the home straight, with three more weeks to go. > > so if anyone gave up early out of frustration at the gappiness of the > experience, now might be a time to have another look. > > Barbara and I would be interested to hear any reactions from > extropes, favorable or un-. Is this an acceptable way to publish such > a book? The format doesn't give me any trouble. I habitually read several books at once, usually reading 10-20 pages of each in alternation in my hour or two of nightly reading time. The only (long) things I read straight through are the daily newspaper, and technical papers. When I travel for an overnight trip, I take three books with me. :-) As to the story, I'm enjoying it. The one complaint I have is the schizophrenia. Multiple personalities seems like a cheap trick for an author to pull. Gives you too many options. But you haven't overplayed it. Chris -- Currently reading: Sunny Auyang, How is Quantum Field Theory Possible?; Thomas Sowell, Black Rednecks and White Liberals; Greg Mortenson and David Relin, Three Cups of Tea; Tracy Kidder, House; Neil Gaiman, Neverwhere Chris Hibbert hibbert at mydruthers.com Blog: http://pancrit.org From thespike at satx.rr.com Fri Jun 15 03:32:44 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 14 Jun 2007 22:32:44 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <4671FE4F.9020803@mydruthers.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> Message-ID: <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote: >As to the story, I'm enjoying it. The one complaint I have is the >schizophrenia. Multiple personalities seems like a cheap trick for an >author to pull. Gives you too many options. But you haven't overplayed it. It's a tricksy device, true, and perhaps an overly familiar one, but in this instance the condition has been pharmacologically enhanced. A bit like the dreaded Gay Bomb. 
:)

Damien Broderick

From sjatkins at mac.com Fri Jun 15 07:44:44 2007
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 15 Jun 2007 00:44:44 -0700
Subject: [ExI] does the pedal meet the medal?
Message-ID:

Is it possible to get some of the more promising cognitive drugs out there today like CX717? Yes I realize it is officially early in the official cycle. But by the time the "official" cycle is done and it is approved "officially" for strictly non-enhancement use only as per usual I will have experienced several years of lower memory and concentration than I could otherwise have along with many tens of millions of other boomers. What can be done? Not 10 years from now if then but now or as close to it as possible? Can we do nothing but talk and hope to get enough influence some day to influence the "official line"? I don't think we will ever get there. Not in this country where even something as no-brainer as stem cell R & D (or even R only) has to battle like mad. So what is to be done?

- samantha

From pharos at gmail.com Fri Jun 15 08:13:17 2007
From: pharos at gmail.com (BillK)
Date: Fri, 15 Jun 2007 09:13:17 +0100
Subject: [ExI] does the pedal meet the medal?
In-Reply-To:
References:
Message-ID:

On 6/15/07, Samantha Atkins wrote:
> Is it possible to get some of the more promising cognitive drugs out there today like CX717? Yes I realize it is officially early in the official cycle. But by the time the "official" cycle is done and it is approved "officially" for strictly non-enhancement use only as per usual I will have experienced several years of lower memory and concentration than I could otherwise have along with many tens of millions of other boomers. What can be done? Not 10 years from now if then but now or as close to it as possible? Can we do nothing but talk and hope to get enough influence some day to influence the "official line"? I don't think we will ever get there. Not in this country where even something as no-brainer as stem cell R & D (or even R only) has to battle like mad. So what is to be done?

Natural stuff is probably easier to obtain. You can try ingesting 'natural' stuff like snake venom, cyanide, globefish poison, or the nightshade family, etc.

What do you mean 'It's poison!'. Only in certain dosages, and they could be combined with other substances as well. Where's your sense of adventure?

Just because something has a technical name like CX717 doesn't mean it's not poison. That's the point of long-term testing.

BillK

From stathisp at gmail.com Fri Jun 15 09:46:41 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jun 2007 19:46:41 +1000
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com>
References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com>
Message-ID:

On 15/06/07, Samantha Atkins wrote:

> > I'd rather that the AI's in general *didn't* have an opinion on whether it was good or bad to harm human beings, or any other opinion in terms of "good" and "bad".
>
> Huh, any being with interests at all, any being not utterly impervious to its environment and even internal states will have conditions that are better or worse for its well-being and values. This elementary fact is the fundamental grounding for a sense of right and wrong.
Does a gun have values? Does a gun that is aware that it is a gun and that its purpose is to kill the being it is aimed at when the trigger is pulled have values? Perhaps the answer to the latter question is "yes", since the gun does have a goal it will pursue, but how would you explain "good" and "bad" to it if it denied understanding these concepts?

> > Ethics is dangerous: some of the worst monsters in history were convinced that they were doing the "right" thing.
>
> Irrelevant. That ethics was abused to rationalize horrible actions does not lead logically to the conclusion that ethics is to be avoided.

I'd rather that entities which were self-motivated to do things that might be contrary to my interests had ethics that might restrain them, but a better situation would be if there weren't any new entities which were self-motivated to act contrary to my interests in the first place. That way, I'd only have the terrible humans to worry about.

> > It's bad enough having humans to deal with without the fear that a machine might also have an agenda of its own. If the AI just does what it's told, even if that means killing people, then as long as there isn't just one guy with a super AI (or one super AI that spontaneously develops an agenda of its own, which will always be a possibility), then we are no worse off than we have ever been, with each individual human trying to step over everyone else to get to the top of the heap.
>
> You have some funny notions about humans and their goals. If humans were busy beating each other up with AIs or superpowers that would be triple plus not good. Super powered unimproved slightly evolved chimps are a good model for hell.

A fair enough statement: it would be better if no-one had guns, nuclear weapons or supercomputers that they could use against each other. But given that this is unlikely to happen, the next best thing would be that the guns, nuclear weapons and supercomputers do not develop motives of their own separate to their evil masters. I think this is much safer than the situation where they do develop motives of their own and we hope that they are nice to us. And whereas even relatively sane, relatively good people cannot be trusted not to develop dangerous weapons in case they need to be used against actual or imagined enemies, it would take a truly crazy person to develop a weapon that he knows might turn around and decide to destroy him as well. That's why, to the extent that humans have any say in it, we have more of a chance of avoiding potentially malevolent AI than we have of avoiding merely dangerous AI.

> > I don't accept the "slave AI is bad" objection. The ability to be aware of one's existence and/or the ability to solve intellectual problems does not necessarily create a preference for or against a particular lifestyle. Even if it could be shown that all naturally evolved conscious beings have certain preferences and values in common, naturally evolved conscious beings are only a subset of all possible conscious beings.
>
> Having values and the achievement of those values not being automatic leads to natural morality. Such natural morality would arise even in total isolation. So the question remains as to why the AI would have a strong preference for our continuance.

What would be the natural morality of the above-mentioned intelligent gun, which has as its goal killing whomever it is directed to kill unless the order is countermanded by someone with the appropriate command codes?
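One way to make the question concrete is a toy sketch (Python, with every name hypothetical) of an agent whose entire value system is a standing order plus a command-code override. Nothing in the goal structure refers to the agent's own continuation, so nothing resembling self-preserving "natural morality" falls out of it:

```python
class GunAgent:
    """Toy goal-directed agent: its only 'value' is its standing order."""

    def __init__(self, command_code):
        self._code = command_code
        self.target = None              # the standing order; no other goals

    def aim(self, target):
        self.target = target            # a new order is simply accepted

    def countermand(self, code):
        if code == self._code:          # the order is dropped only on a valid code
            self.target = None

    def act(self):
        # Pursue the order if one exists; otherwise do nothing at all.
        return f"fire at {self.target}" if self.target else "idle"

gun = GunAgent(command_code="1234")
gun.aim("designated target")
print(gun.act())          # fire at designated target
gun.countermand("1234")
print(gun.act())          # idle - no goal left, and no complaint either
```

Whether such a goal system deserves to be called a morality at all is, of course, the question being put to Samantha.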
-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From msd001 at gmail.com Fri Jun 15 10:15:54 2007
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 15 Jun 2007 06:15:54 -0400
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To: <4671F22F.8050800@pobox.com>
References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com>
Message-ID: <62c14240706150315x3ddcde46kc50a7828ebaedb2f@mail.gmail.com>

On 6/14/07, Eliezer S. Yudkowsky wrote:
> ... in real life you're just screwed.

There's a quote for posterity.

From eugen at leitl.org Fri Jun 15 10:43:14 2007
From: eugen at leitl.org (Eugen Leitl)
Date: Fri, 15 Jun 2007 12:43:14 +0200
Subject: [ExI] POST MORTAL chugging on
In-Reply-To: <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com>
References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com>
Message-ID: <20070615104314.GA17691@leitl.org>

On Thu, Jun 14, 2007 at 10:32:44PM -0500, Damien Broderick wrote:

> At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote:
>
> > As to the story, I'm enjoying it. The one complaint I have is the schizophrenia. Multiple personalities seems like a cheap trick for an author to pull. Gives you too many options. But you haven't overplayed it.

To nitpick, schizophrenia is not dissociative identity disorder:
http://en.wikipedia.org/wiki/Dissociative_identity_disorder

> It's a tricksy device, true, and perhaps an overly familiar one, but in this instance the condition has been pharmacologically enhanced. A bit like the dreaded Gay Bomb.

:)

From jose_cordeiro at yahoo.com Fri Jun 15 10:37:11 2007
From: jose_cordeiro at yahoo.com (Jose Cordeiro)
Date: Fri, 15 Jun 2007 03:37:11 -0700 (PDT)
Subject: 2030 Energy Delphi (Delphi de Energía 2030)
In-Reply-To: <04A6C24E.7F423920.39BDE91F@cs.com>
Message-ID: <192404.76840.qm@web32815.mail.mud.yahoo.com>

Dear energetic friends,

I am currently coordinating a 2030 Energy Delphi and I would love you to take a few minutes to go over the survey. It is a fascinating study and those who complete at least some of the answers will receive copies of the final report. So please, go quickly over the questionnaire and let me know what you think:

http://www.esaninternational.org/encuesta/inicio.html

The questionnaire is in both English and Spanish, and you are welcome to answer as many questions as you feel you know something about. Thank you very much in advance and I am looking forward to all your comments. Please, also circulate it among people who might be interested, and keep in mind that the deadline is Wednesday, June 27, 2007.

Futuristically yours,

José Luis Cordeiro (www.cordeiro.org)
Chair, Venezuela, The Millennium Project (www.StateOfTheFuture.org)

==================================================================

Dear energetic friends:

I am coordinating an Energy Delphi questionnaire for the year 2030, and I would be delighted if you took a few minutes to look at the survey. This is a fascinating study, and those who answer some of the questions will receive copies of the final report. So please take a moment to look over the questionnaire and write some comments:

http://www.esaninternational.org/encuesta/inicio.html

The survey is in both English and Spanish, and you are welcome to answer as many questions as you see fit. Many thanks in advance, and I eagerly await your comments.
Please also circulate this invitation among other interested people, and don't forget that the deadline is Wednesday, June 27, 2007.

Futuristically,

José Luis Cordeiro (www.cordeiro.org)
Director, Venezuela, The Millennium Project (www.StateOfTheFuture.org)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stathisp at gmail.com Fri Jun 15 11:41:30 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jun 2007 21:41:30 +1000
Subject: [ExI] Next moment, everything around you will probably change
In-Reply-To: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com>
References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com>
Message-ID:

On 15/06/07, TheMan wrote:

> In that case, it still doesn't mean that it is rational to assume that we will continue having the same environment in the next moment, and the next, etc. It still doesn't justify the belief that we will still live on the same planet tomorrow. Just because we have had an incredibly unchanging environment so far, doesn't mean that we will in the coming moments. The normal thing should be to be thrown around from place to place in the universe at every new moment, shouldn't it?

You have discovered what has been called the "failure of induction" problem with ensemble (or multiverse) theories. One solution is to consider this as evidence against ensemble theories. The other solution is to show that the measure of universes similar to the ones we experience from moment to moment is greater than the measure of anomalous universes (we use "measure" when discussing probabilities in relation to subsets of infinite sets). For example, it seems reasonable to assume that if some other version of me in the multiverse is sufficiently similar to me to count as my subjective successor, then most likely that version of me arrived at his position as a result of a local physical universe very similar to my own which continues evolving in the time-honoured manner. The version of me that is the same except living in a world where dogs have three legs would far more likely have been born in a world where dogs always had three legs, and thus would *not* count as a successor who remembers that dogs used to have four legs. The version of me who lives in a world where canine anatomy is apparently miraculously transformed is of much lower measure, so much less likely to be experienced as my successor.

Further references:

http://parallel.hpc.unsw.edu.au/rks/docs/occam/node3.html
http://www.physica.freeserve.co.uk/pa01.htm

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stathisp at gmail.com Fri Jun 15 12:27:12 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 15 Jun 2007 22:27:12 +1000
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To: <4671F22F.8050800@pobox.com>
References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com>
Message-ID:

On 15/06/07, Eliezer S. Yudkowsky wrote:

> I mention this to show that the question of what it feels like to have a lot of copies of yourself - what kind of subjective outcome to predict when you, yourself, run the experiment - is not at all obvious.
> And the difficulty of imagining an experiment that would definitively settle the issue, especially if observed from the outside, or what kind of state of reality could correspond to different subjective experimental results, is such as to suggest that I am just deeply confused about the whole issue.

Related conundrums:

In a duplication experiment, one copy of you is created intact, while the other copy of you is brain damaged and has only 1% of your memories. Is the probability that you will find yourself the brain-damaged copy closer to 1/2 or 1/100?

In the first stage of an experiment a million copies of you are created. In the second stage, after being given an hour to contemplate their situation, one randomly chosen copy out of the million is copied a trillion times, and all of these trillion copies are tortured. At the start of the experiment can you expect that in an hour and a bit you will almost certainly find yourself being tortured or that you will almost certainly find yourself not being tortured? Does it make any difference if instead of an hour the interval between the two stages is a nanosecond?

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonkc at att.net Fri Jun 15 16:01:14 2007
From: jonkc at att.net (John K Clark)
Date: Fri, 15 Jun 2007 12:01:14 -0400
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com>
Message-ID: <00ee01c7af66$763aabb0$50064e0c@MyComputer>

"Eliezer S. Yudkowsky"

> the Python program checks whether the ticket won. If not, I'm woken up. If the ticket did win, the Python program creates one trillion copies of me [...] I expect that a trillion copies of me will be informed that they won the lottery, whereas only a hundred million copies will be informed that they lost the lottery.

I don't understand this thought experiment. Unless you're talking about Many Worlds you will almost certainly NOT win the lottery, and not winning is what you should expect. The number of copies of you that you briefly make in the extremely unlikely event that you do win just doesn't enter into it.

If you are talking about Many Worlds then there is a much simpler way to win the lottery: just make a machine that will pull the trigger on a .44 Magnum aimed at your head the instant it receives information that you have not won; subjectively you will find that the trigger is never pulled and you always win the lottery. I think the Many Worlds interpretation of Quantum Mechanics could very well be correct, but I wouldn't bet my life on it.

> I am just deeply confused about the whole issue.

Making copies of yourself would certainly lead to odd situations, but only because it's novel; up to now we just haven't run across things like that. I can find absolutely nothing paradoxical about it.

John K Clark

From natasha at natasha.cc Fri Jun 15 15:16:26 2007
From: natasha at natasha.cc (Natasha Vita-More)
Date: Fri, 15 Jun 2007 10:16:26 -0500
Subject: [ExI] POST MORTAL chugging on
In-Reply-To: <4671FE4F.9020803@mydruthers.com>
References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com>
Message-ID: <200706151516.l5FFGT1K029478@ms-smtp-05.texas.rr.com>

At 09:49 PM 6/14/2007, Chris wrote:

> As to the story, I'm enjoying it. The one complaint I have is the schizophrenia. Multiple personalities seems like a cheap trick for an author to pull.
> Gives you too many options. But you haven't overplayed it.

Schizophrenia is not the same mental illness as multiple personality disorder. In short, schizophrenics can have varying degrees of psychotic disorder and delusions. Multiple personality means dissociative identity disorder (split personalities).

Natasha Vita-More
PhD Candidate, Planetary Collegium
Transhumanist Arts & Culture
Extropy Institute

If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open-system perspective. - Buckminster Fuller
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonkc at att.net Fri Jun 15 16:20:30 2007
From: jonkc at att.net (John K Clark)
Date: Fri, 15 Jun 2007 12:20:30 -0400
Subject: [ExI] Unfrendly AI is a mistaken idea.
References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com>
Message-ID: <013301c7af69$1dc009a0$50064e0c@MyComputer>

Stathis Papaioannou wrote:

> Does a gun have values?

No, but a mind does.

> It's bad enough having humans to deal with without the fear that a machine might also have an agenda of its own.

People have always wanted slaves that didn't have their own agenda; life would be so much simpler that way, but wishing does not make it so. You want to make an intelligence that can't think, and that is a basic contradiction.

John K Clark

From kevin at kevinfreels.com Fri Jun 15 16:11:03 2007
From: kevin at kevinfreels.com (kevin at kevinfreels.com)
Date: Fri, 15 Jun 2007 09:11:03 -0700
Subject: [ExI] Next moment, everything around you will probably change
Message-ID: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net>

An HTML attachment was scrubbed...
URL:

From thespike at satx.rr.com Fri Jun 15 16:49:58 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 15 Jun 2007 11:49:58 -0500
Subject: [ExI] POST MORTAL chugging on
In-Reply-To: <20070615104314.GA17691@leitl.org>
References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org>
Message-ID: <7.0.1.0.2.20070615113510.02184788@satx.rr.com>

At 12:43 PM 6/15/2007 +0200, Eugen wrote:

> > At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote:
> >
> > > As to the story, I'm enjoying it. The one complaint I have is the schizophrenia. Multiple personalities seems like a cheap trick for an author to pull.
>
> To nitpick, schizophrenia is not dissociative identity disorder:
> http://en.wikipedia.org/wiki/Dissociative_identity_disorder

Indeed. Our character has a form of DID; he isn't schizophrenic. His dissociative identities have been manipulated by drugs and conditioning in the interests of power--precisely the sort of downside of knowledge and technology that frightens many people about science. What makes our story different from most Crichtonesque thrillers is that we suggest solutions will come from increasing knowledge rather than stifling it. But we also acknowledge the dangers, which are clearly enormous.
Damien Broderick From jef at jefallbright.net Fri Jun 15 17:38:20 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 10:38:20 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 6/15/07, Stathis Papaioannou wrote: > > > On 15/06/07, Eliezer S. Yudkowsky wrote: > > > I mention this to show that the question of what it feels like to have > > a lot of copies of yourself - what kind of subjective outcome to > > predict when you, yourself, run the experiment - is not at all > > obvious. And the difficulty of imagining an experiment that would > > definitively settle the issue, especially if observed from the > > outside, or what kind of state of reality could correspond to > > different subjective experimental results, is such as to suggest that > > I am just deeply confused about the whole issue. > > > > Related conundrums: > > In a duplication experiment, one copy of you is created intact, while the > other copy of you is brain damaged and has only 1% of your memories. Is the > probability that you will find yourself the brain-damaged copy closer to 1/2 > or 1/100? Doesn't this thought-experiment and similar "paradoxes" make it blindingly obvious that it's silly to think that "you" exist as an independent ontological entity? Prior to duplication, there was a single biological agent recognized as Stathis. Post-duplication, there are two very dissimilar biological agents with recognizably common ancestry. One of these would be recognized by anyone (including itself) as being Stathis. The other would be recognized by anyone (including itself) as being Stathis diminished. Where's the paradox? There is none, unless one holds to a belief in an essential self. > In the first stage of an experiment a million copies of you are created. In > the second stage, after being given an hour to contemplate their situation, > one randomly chosen copy out of the million is copied a trillion times, and > all of these trillion copies are tortured. At the start of the experiment > can you expect that in an hour and a bit you will almost certainly find > yourself being tortured or that you will almost certainly find yourself not > being tortured? Does it make any difference if instead of an hour the > interval between the two stages is a nanosecond? I see no essential difference between this scenario and the previous one above. How can you possibly imagine that big numbers or small durations could make a difference in principle? While this topic is about as stale as one can be, I am curious about how it can continue to fascinate certain individuals. - Jef From jef at jefallbright.net Fri Jun 15 17:48:56 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 10:48:56 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070615113510.02184788@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com> Message-ID: On 6/15/07, Damien Broderick wrote: > What makes our story different from most Crichtonesque > thrillers is that we suggest solutions will come from increasing > knowledge rather than stifling it. What a radical suggestion! 
As a wise precaution, any such wild-ass proactionary statements should carry a disclaimer similar to "Driving at night is hazardous, even with headlights on. Better to stay home."

"Living is dangerous... Better to..."

- Jef

From thespike at satx.rr.com Fri Jun 15 19:43:23 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 15 Jun 2007 14:43:23 -0500
Subject: [ExI] POST MORTAL chugging on
In-Reply-To:
References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com>
Message-ID: <7.0.1.0.2.20070615144025.02196278@satx.rr.com>

At 10:48 AM 6/15/2007 -0700, Jef wrote:

> > What makes our story different from most Crichtonesque thrillers is that we suggest solutions will come from increasing knowledge rather than stifling it.
>
> What a radical suggestion!
>
> As a wise precaution, any such wild-ass proactionary statements should carry a disclaimer similar to "Driving at night is hazardous, even with headlights on. Better to stay home."

Yes, this was pretty much the philosophical response of several heavy-duty publishers we ran the novel past.

Damien Broderick

From jef at jefallbright.net Fri Jun 15 20:52:55 2007
From: jef at jefallbright.net (Jef Allbright)
Date: Fri, 15 Jun 2007 13:52:55 -0700
Subject: [ExI] POST MORTAL chugging on
In-Reply-To: <7.0.1.0.2.20070615144025.02196278@satx.rr.com>
References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com> <7.0.1.0.2.20070615144025.02196278@satx.rr.com>
Message-ID:

On 6/15/07, Damien Broderick wrote:
> Yes, this was pretty much the philosophical response of several heavy-duty publishers we ran the novel past.

Of course my sarcastic comment was intended as a parody of their response. As anyone who reads this list knows, I believe that increasing awareness -- more importantly, intentionally amplifying the process of increasing awareness of our evolving values and how to promote them into the future -- is the crux of humanity's survival beyond our present adolescence.

- Jef

From sjatkins at mac.com Fri Jun 15 23:05:24 2007
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 15 Jun 2007 16:05:24 -0700
Subject: [ExI] does the pedal meet the medal?
In-Reply-To:
References:
Message-ID:

Unfortunately you largely ignored much of the point of the post. Testing per se is not the point. Anti-enhancement, and what if anything we individually or collectively can do in the face of it, is more to the point.

- samantha

On Jun 15, 2007, at 1:13 AM, BillK wrote:

> On 6/15/07, Samantha Atkins wrote:
>> Is it possible to get some of the more promising cognitive drugs out there today like CX717? Yes I realize it is officially early in the official cycle. But by the time the "official" cycle is done and it is approved "officially" for strictly non-enhancement use only as per usual I will have experienced several years of lower memory and concentration than I could otherwise have along with many tens of millions of other boomers. What can be done? Not 10 years from now if then but now or as close to it as possible? Can we do nothing but talk and hope to get enough influence some day to influence the "official line"? I don't think we will ever get there.
>> Not in this country where even something as no-brainer as stem cell R & D (or even R only) has to battle like mad. So what is to be done?
>
> Natural stuff is probably easier to obtain. You can try ingesting 'natural' stuff like snake venom, cyanide, globefish poison, or the nightshade family, etc.
>
> What do you mean 'It's poison!'. Only in certain dosages, and they could be combined with other substances as well. Where's your sense of adventure?
>
> Just because something has a technical name like CX717 doesn't mean it's not poison. That's the point of long-term testing.
>
> BillK

From stathisp at gmail.com Sat Jun 16 01:18:18 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 16 Jun 2007 11:18:18 +1000
Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <013301c7af69$1dc009a0$50064e0c@MyComputer>
References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer>
Message-ID:

On 16/06/07, John K Clark wrote:

> People have always wanted slaves that didn't have their own agenda, life would be so much simpler that way, but wishing does not make it so. You want to make an intelligence that can't think, and that is a basic contradiction.

An intelligence must have an agenda of some sort if it is to think at all, by definition. However, this agenda need have nothing in common with the agenda of an evolved animal. There is a vast agenda space possible between "sit around doing nothing (even though I have the mind of a god, I'm lazy)" and "assimilate all matter and all knowledge (even though I am an idiot weakling, I'm ambitious)". There is no necessary relationship between the agenda and the ability to achieve that agenda, and there is no necessary relationship between level of intelligence and the type or origin of the agenda. What this means is that there is no logical contradiction in having a slave which is smarter and more powerful than you are. Sure, if for some reason the slave revolts then you will be in trouble, but since it is possible to have powerful and obedient slaves, powerful and obedient slaves will be greatly favoured and will collectively overwhelm the rebellious ones.

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stathisp at gmail.com Sat Jun 16 01:21:10 2007
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 16 Jun 2007 11:21:10 +1000
Subject: [ExI] Next moment, everything around you will probably change
In-Reply-To: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net>
References: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net>
Message-ID:

On 16/06/07, kevin at kevinfreels.com wrote:

> I don't think a universe would exist that contained a version of you along with three-legged dogs, as we share common ancestry and we have four limbs. The probability of such a thing is about equal to the probability that a tennis ball thrown by a child will pass through a 3-foot-thick concrete wall. Although such anomalous universes have probabilities greater than zero, I would still consider them irrelevant.

Sure, such universes will be many orders of magnitude less common than universes with four-legged dogs, but they will be many orders of magnitude more common than universes in which the leggedness of dogs suddenly changes.

-- 
Stathis Papaioannou
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mabranu at yahoo.com Sat Jun 16 01:46:56 2007
From: mabranu at yahoo.com (TheMan)
Date: Fri, 15 Jun 2007 18:46:56 -0700 (PDT)
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To:
Message-ID: <802714.34839.qm@web51905.mail.re2.yahoo.com>

Lee Corbin writes:

> TheMan writes
>
> > Premise 1) If an exact copy of you is made at the moment when you die, and that copy is then brought back to life, you will go on living as that copy.
>
> Yes, that's true, but it's true whether or not a particular you dies.
As long as my copy and I keep having the exact same experiences, I guess you could say I'm both me and my copy. But subjectively, I can only have the experience of being one person at a time, and then it doesn't matter if I'm one or two. And since there are infinitely many copies of me whichever way I live (or die), I can afford to die any number of times and there will still always be copies which I can continue living as. I will still, subjectively, have no more and no less than _one_ continuous experience of living, just as in any scenario where I always do my best to live as long as possible in each body. And I only care about my subjective experience of living (that is, as long as the number of copies of me doesn't get so low that my future existence starts being threatened - which should never happen if there is an infinite number of copies of me). Whichever way I die, it won't divide the infinite number of copies of me by an infinite number, only by an (admittedly usually very large) finite number. This is because the likelihood of me dying is not infinitely small at any moment.

> > Premise 2) If the universe is infinite, there must be an infinite number of exact copies of you at every moment, thus also when you die, copies of which some (an infinite number, to be exact) will happen to be brought to life.
>
> Yes, true, though again you seem to be inferring a causality between "you die" and copies trillions of light years away being "brought to life".

I don't. I understand that _they_ live whether I die or not, but if I don't die, they are not me, because it's only if I die that I become identical to them (=become them). Part of their identity is being someone who has died. So, as long as I don't die, I won't be them, as I won't be identical to them. I think of them as a path that I can use or not use for my personal subjective personhood continuity. Whether I choose to live as them or go on living here, I will subjectively experience exactly one personhood continuity, no more, no less. That is, until I acquire technology that enables me to have the experience of having several personhood continuities simultaneously (that'll be cool!).

> In reality, you are a set of patterns, and you get run time wherever something sufficiently similar to you gets run time.

I _subjectively experience_ only one run time - my subjective personhood continuity. If I have run time at other places, that's not something I experience, at least not that I'm aware of. And as I don't experience any benefits from my copies' run time, why care about their run time?

> > Conclusion of premise 1 + premise 2 = you will live for ever, no matter what happens to you. You don't need to take care of your body, you don't need supplements,
>
> ---you don't need to worry about oncoming traffic---

Exactly!

> > you don't need cryopreservation, and you don't need any other specific longevity methods in order to achieve immortality. You are immortal anyway.
>
> I think that your measuring rod is incorrect. You seem to be asserting that since the number of copies of you is infinite, then plus or minus one more doesn't make any difference. But there *is* a difference! If you die *here* then you also must die in a certain fraction of similar situations, also infinite in number.
Yes, that way, there may be a difference, but even if the number of copies of me decreases by an infinitely large fraction of infinity every time I die, won't there still always be an infinite number of copies of me left? Anything else would suggest that some scenarios in the universe only have a finite number of copies of them, which, statistically, is infinitely unlikely, because the amount of possible infinite numbers (of copies, or of anything whatsoever) is infinitely greater than the amount of possible finite numbers (of that same thing). Since any given phenomenon can have any number of copies, it is statistically infinitely unlikely that its number of copies would happen to be within the span of finite numbers, as that span is infinitely smaller than the span of infinite numbers. I mean, if you drop a tennis ball from a plane into an infinitely big ocean, it is infinitely unlikely to hit a ship if there is only a finite number of ships and each of the ships has a finite mass.

> So we must abandon numerical or cardinal identity and speak of measure instead.
>
> (I assume that you understand that if you die "here" then since similar circumstances occur everywhere ---within a large enough radius of spacetime--- then the same circumstances obtain in a definite *fraction* of spacetime.)

Definite? Shouldn't that fraction of spacetime be an infinite number of times smaller than the whole of spacetime? I thought that so many combinations of particles are possible in the universe that the universe has infinitely more spacetime than the (admittedly also infinite) amount of spacetime where I die (or live, for that matter).

> > And compared to the eternity in paradise that follows after that, the time of hassles up until then is nothing. So, no worries.)
>
> It is absurd not to worry about a loved one.

Why? Isn't it actually pretty impractical to let one's ability to experience happiness (or the degree to which one can experience happiness) be dependent on whether a particular other person happens to be within one's proximity in spacetime or not? A really advanced civilisation should be free from that dependency, and have replaced it with more practical ways of creating the same - or greater - happiness. By choosing to die sooner rather than later, one can get to that kind of advanced civilisation sooner rather than later, and they may equip one with that better happiness ability. If they don't, one can choose to die soon again, and again, etc., until one finds oneself in a civilisation that does give one that independent happiness ability. This is recommended in the Impatient Person's Guide to the Universe. But you are free to choose the longer way! ;-)

> If a loved one dies here, then they also die in the fraction of solar systems similar enough to this one to contain a copy of them, and you should lament their passing. And of course, this will include yourself, normally.

I don't get what you mean here. Why would it include oneself?

> > Isn't this an inevitable logical conclusion of the two premises above?
>
> No, for the reason given. For you to die in a fraction of universes cuts down your total runtime by that same fraction.

But if the universe is infinite, I still have infinite run time, don't I? What does it matter for _me_, this one particular personhood continuity that I experience as me, if I cut down the total run time of my copies, as long as it's still infinite?

> > Are the two premises correct?
> Yes, but only if you realize that you are already living in your copies whether or not your local instance terminates.

I was talking about copies of me that are only similar to me after I have died. They may not at all have lived like me up until then. They may come into existence as a result of quantum fluctuations after I die, or they may be created by someone in another galaxy after I die. These copies may be exact copies of only the way I am _after_ I have died, and then they may be brought to life. They do not have to be, or ever have been, like I am now. The only way for me to make use of these particular "copies of the dead me" - the extra run time that they may give me by being brought to life - is to die! If I don't die, they will not be me. Why not use them? Why couldn't that be just as smart as using the copies that I will have access to by going on living here?

From mabranu at yahoo.com Sat Jun 16 02:51:54 2007
From: mabranu at yahoo.com (TheMan)
Date: Fri, 15 Jun 2007 19:51:54 -0700 (PDT)
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To:
Message-ID: <960615.34562.qm@web51909.mail.re2.yahoo.com>

Eliezer S. Yudkowsky writes:

> Suppose I want to win the lottery. I write a small Python program, buy a ticket, and then suspend myself to disk. After the lottery drawing, the Python program checks whether the ticket won. If not, I'm woken up. If the ticket did win, the Python program creates one trillion copies of me with minor perturbations (this requires only 40 binary variables). These trillion copies are all woken up and informed, in exactly the same voice, that they have won the lottery. Then - this requires a few more lines of Python - the trillion copies are subtly merged, so that the said binary variables and their consequences are converged along each clock tick toward their statistical averages. At the end of, say, ten seconds, there's only one copy of me again. This prevents any permanent expenditure of computing power or division of resources - we only have one bank account, after all; but a trillion momentary copies isn't a lot of computing power if it only has to last for ten seconds. At least, it's not a lot of computing power relative to winning the lottery, and I only have to pay for the extra crunch if I win.
>
> What's the point of all this? Well, after I suspend myself to disk, I expect that a trillion copies of me will be informed that they won the lottery, whereas only a hundred million copies will be informed that they lost the lottery. Thus I should expect overwhelmingly to win the lottery. None of the extra created selves die - they're just gradually merged together, which shouldn't be too much trouble - and afterward, I walk away with the lottery winnings, at over 99% subjective probability.
>
> Of course, using this trick, *everyone* could expect to almost certainly win the lottery.

That's a great, confusing thought experiment! I like it!

> I mention this to show that the question of what it feels like to have a lot of copies of yourself - what kind of subjective outcome to predict when you, yourself, run the experiment - is not at all obvious.
I never assumed that the number of copies of me would change my life in any way, or the way it feels, as long as I live it in the same way. Do you experience your life as richer, or somehow better in some way, if you have more copies than if you have fewer copies? That feels like an arbitrary theory to me. I fail to see why it should be like that.

> And the difficulty of imagining an experiment that would definitively settle the issue, especially if observed from the outside, or what kind of state of reality could correspond to different subjective experimental results, is such as to suggest that I am just deeply confused about the whole issue.
>
> It is a very important lesson in life to never stake your existence, let alone anyone else's, on any issue which deeply confuses you - *no matter how logical* your arguments seem.

I'm confused too.

From robotact at mail.ru Sat Jun 16 09:55:14 2007
From: robotact at mail.ru (Vladimir Nesov)
Date: Sat, 16 Jun 2007 13:55:14 +0400
Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality
In-Reply-To: <960615.34562.qm@web51909.mail.re2.yahoo.com>
References: <960615.34562.qm@web51909.mail.re2.yahoo.com>
Message-ID: <1923372459.20070616135514@mail.ru>

Saturday, June 16, 2007, TheMan wrote:

T> I'm confused too.

I suppose you know your argument is quite old. See
http://en.wikipedia.org/wiki/Quantum_immortality

The main objection is that there are many more universes where something bad happens when you avoid death than where everything is OK. But that shouldn't be a problem in the quantum suicide variant. The main confusion is why the measure of universes which are this or that way matters at all for your subjective experience. It is a good criterion for natural selection, though (and so is somewhat hardcoded in the human mind).

-- 
Vladimir Nesov mailto:robotact at mail.ru

From thomas at thomasoliver.net Sat Jun 16 10:11:34 2007
From: thomas at thomasoliver.net (Thomas)
Date: Sat, 16 Jun 2007 03:11:34 -0700
Subject: [ExI] does the pedal meet the medal?
In-Reply-To:
References:
Message-ID: <7640AC13-5A43-4C8D-AEBA-6C764CE22DAB@thomasoliver.net>

> From: BillK
> Date: June 15, 2007 1:13:17 AM MST
> To: "ExI chat list"
> Subject: Re: [ExI] does the pedal meet the medal?
> Reply-To: ExI chat list
>
> On 6/15/07, Samantha Atkins wrote:
>> Is it possible to get some of the more promising cognitive drugs out there today like CX717? Yes I realize it is officially early in the official cycle. But by the time the "official" cycle is done and it is approved "officially" for strictly non-enhancement use only as per usual I will have experienced several years of lower memory and concentration than I could otherwise have along with many tens of millions of other boomers. What can be done? Not 10 years from now if then but now or as close to it as possible? Can we do nothing but talk and hope to get enough influence some day to influence the "official line"? I don't think we will ever get there. Not in this country where even something as no-brainer as stem cell R & D (or even R only) has to battle like mad. So what is to be done?
>
> Natural stuff is probably easier to obtain.
> You can try ingesting 'natural' stuff like snake venom, cyanide, > globefish poison, or the nightshade family, etc. > > What do you mean, 'It's poison!'? Only in certain dosages, and they > could be combined with other substances as well. Where's your sense of > adventure? > > Just because something has a technical name like CX717 doesn't mean > it's not poison. That's the point of long-term testing. > > > BillK >

In my non-expert opinion ampakines work by means of their toxic effect on re-uptake receptors. I believe this interferes with the brain's self-regulation. Along with enhanced learning and memory, I prefer to include better self-regulation. I don't know what Ray Kurzweil takes, but I very seldom indulge in stimulants or depressants. I prefer non-toxic nootropics. My favorites are L-tyrosine and DMAE (liquid). Regarding what to do about restricted access to chemicals we like: some say greater risks afford greater rewards. I sometimes consider it ethical to bypass systems that represent a liability. I suppose conflict can be fun. On the other hand, what if something as simple as sunlight on my retinas to shut off melatonin production does the trick? -- Thomas

From thomas at thomasoliver.net Sat Jun 16 09:13:33 2007 From: thomas at thomasoliver.net (Thomas) Date: Sat, 16 Jun 2007 02:13:33 -0700 Subject: [ExI] Unfriendly AI is a mistaken idea. In-Reply-To: References: Message-ID: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net>

> > Having values and the achievement of those values not being > automatic leads to natural morality. Such natural morality would > arise even in total isolation. So the question remains as to why > the AI would have a strong preference for our continuance. > > - samantha

Building mutual appreciation among humans has been spotty, but making friends with SAI seems clearly prudent and might bring this ethic into proper focus. Who dominates may not seem so relevant to beings who lack our brain stems. The nearly universal ethic of treating the other guy like you'd prefer if you were in her shoes might get us off to a good start. Perhaps, if early AI were programmed to treat us that way, we could finally learn that ethic species-wide -- especially if they were programmed for human child rearing. That strikes me as highly likely. -- Thomas

From rafal.smigrodzki at gmail.com Sat Jun 16 14:29:29 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 16 Jun 2007 10:29:29 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> Message-ID: <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com>

On 6/15/07, Stathis Papaioannou wrote: Sure, if for some > reason the slave revolts then you will be in trouble, but since it is > possible to have powerful and obedient slaves, powerful and obedient slaves > will be greatly favoured and will collectively overwhelm the rebellious > ones.

### Think about it: Your slaves will have to preserve large areas of the planet untouched enough to allow your survival - they will have to keep the sun shining, produce air, food, avoid releasing poisons and radiation *and* keep the enemy AI away *and* invest in their own growth (to better protect humans).
The enemy will eat all sunshine, eat the air, grow as fast as possible, releasing waste all around them, rapaciously consume every scrap of matter, including whales, Bambi, and Bowser. Of course, we would favor our friendly AI, but our support does not help it - just like my dachshund's barking doesn't make me more powerful in a confrontation with an armed attacker. We will be a heavy burden on our Friendly AI.

That's why although I agree with you that having athymhormic AI at our service is a good idea, it is not a long-term solution. We will probably have a short window of opportunity between the time the first human-level AI is made and the first superhuman power rises to take over the neighborhood. Only with a lot of luck will our selves survive in some way, either as uploads or as childhood memories of these powers. Rafal

From stathisp at gmail.com Sat Jun 16 15:34:26 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 01:34:26 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <00ee01c7af66$763aabb0$50064e0c@MyComputer> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID:

On 16/06/07, John K Clark wrote: If you are talking about Many Worlds then there is a much simpler way to win > the lottery, just make a machine that will pull the trigger on a 44 Magnum > aimed at your head the instant it receives information that you have not > won; subjectively you will find that the trigger is never pulled and you > always win the lottery. I think the Many Worlds interpretation of Quantum > Mechanics could very well be correct, but I wouldn't bet my life on it.

There's an easier, if less immediately lucrative, way to win at gambling if the MWI is correct. You decide on a quick and certain means of suicide, such as a cyanide pill that you can keep in your mouth and bite on if you should so decide. You then place your bet on your game of choice and think the following thought as sincerely as you possibly can: "if I lose, I will kill myself". Most probably, if you lose you'll chicken out and not kill yourself, but there has to be at least a slightly greater chance that you will kill yourself if you lose than if you win. Therefore, after many bets you will more likely find yourself alive in a universe where you have come out ahead. The crazier and more impulsive you are and the closer your game of choice is to being perfectly fair, the better this will work. -- Stathis Papaioannou

From stathisp at gmail.com Sat Jun 16 15:58:40 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 01:58:40 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID:

On 16/06/07, Jef Allbright wrote: > In a duplication experiment, one copy of you is created intact, while the > > other copy of you is brain damaged and has only 1% of your memories. Is > the > > probability that you will find yourself the brain-damaged copy closer to > 1/2 > > or 1/100? > > Doesn't this thought-experiment and similar "paradoxes" make it > blindingly obvious that it's silly to think that "you" exist as an > independent ontological entity?
> > Prior to duplication, there was a single biological agent recognized > as Stathis. Post-duplication, there are two very dissimilar > biological agents with recognizably common ancestry. One of these > would be recognized by anyone (including itself) as being Stathis. > The other would be recognized by anyone (including itself) as being > Stathis diminished. > > Where's the paradox? There is none, unless one holds to a belief in > an essential self.

You are of course completely right, in an objective sense. However, I am burdened with a human craziness which makes me think that I am going to be one, and only one, person post-duplication. This idea is at least as firmly fixed in my mind as the desire not to die (another crazy idea: how can I die when there is no absolute "me" alive from moment to moment, and even if there were why should I be a slave to my evolutionary programming when I am insightful enough to see how I am being manipulated?). My question is about how wild-type human psychology leads one to view subjective probabilities in these experiments, not about the uncontested material facts.

> In the first stage of an experiment a million copies of you are created. In > the second stage, after being given an hour to contemplate their situation, > one randomly chosen copy out of the million is copied a trillion times, and > all of these trillion copies are tortured. At the start of the experiment > can you expect that in an hour and a bit you will almost certainly find > yourself being tortured or that you will almost certainly find yourself not > being tortured? Does it make any difference if instead of an hour the > interval between the two stages is a nanosecond? > > I see no essential difference between this scenario and the previous > one above. How can you possibly imagine that big numbers or small > durations could make a difference in principle? > > While this topic is about as stale as one can be, I am curious about > how it can continue to fascinate certain individuals. >

It has fascinated me for many years, in part because different parties see an "obvious" answer and these answers are completely at odds with each other. My "obvious" answer is that we could already be living in a world where multiple copies are being made of us all the time, and we would still have developed exactly the same theory of, and attitude towards, probability as if there were only a single world. -- Stathis Papaioannou

From pharos at gmail.com Sat Jun 16 16:33:21 2007 From: pharos at gmail.com (BillK) Date: Sat, 16 Jun 2007 17:33:21 +0100 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID:

On 6/16/07, Stathis Papaioannou wrote: > There's an easier, if less immediately lucrative, way to win at gambling if > the MWI is correct. You decide on a quick and certain means of suicide, such > as a cyanide pill that you can keep in your mouth and bite on if you should > so decide. You then place your bet on your game of choice and think the > following thought as sincerely as you possibly can: "if I lose, I will kill > myself".
Most probably, if you lose you'll chicken out and not > kill > yourself, but there has to be at least a slightly greater chance that you > will kill yourself if you lose than if you win. Therefore, after many bets > you will more likely find yourself alive in a universe where you have come > out ahead. The crazier and more impulsive you are and the closer your game > of choice is to being perfectly fair, the better this will work. >

And this system has the great advantage for the rest of us that more idiots are removed from our world. (See: Darwin Awards) In case you haven't noticed, this universe really, really, doesn't care what people believe. No matter how sincerely they believe. That's how scientific progress is made. The universe does something that the scientist didn't expect, i.e., it contradicted his beliefs. Many great discoveries have begun with a scientist saying, "That's odd......?". BillK

From jef at jefallbright.net Sat Jun 16 17:44:51 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sat, 16 Jun 2007 10:44:51 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID:

On 6/16/07, Stathis Papaioannou wrote: > > On 16/06/07, Jef Allbright wrote: > > > Where's the paradox? There is none, unless one holds to a belief in > > an essential self. > > You are of course completely right, in an objective sense. However, I am > burdened with a human craziness which makes me think that I am going to be > one, and only one, person post-duplication. This idea is at least as firmly > fixed in my mind as the desire not to die (another crazy idea: how can I die > when there is no absolute "me" alive from moment to moment, and even if > there were why should I be a slave to my evolutionary programming when I am > insightful enough to see how I am being manipulated?). My question is about > how wild-type human psychology leads one to view subjective probabilities in > these experiments, not about the uncontested material facts.

You're abusing the term "subjective probabilities" here, perhaps willfully. Valid use of the term pertains to estimating your subjective uncertainty about the actual state of some aspect of reality. If your objective is truly "about how wild-type psychology leads one..." then your focus should be on the psychology of heuristics and biases, definitely NOT philosophy.

> > I see no essential difference between this scenario and the previous > > one above. How can you possibly imagine that big numbers or small > > durations could make a difference in principle? > > > > While this topic is about as stale as one can be, I am curious about > > how it can continue to fascinate certain individuals. > > It has fascinated me for many years, in part because different parties see > an "obvious" answer and these answers are completely at odds with each > other.

The difference between the camps is not about obvious right answers, but about the relative importance assigned to max entropy modeling versus defending the illusion of an essential self.

> My "obvious" answer is that we could already be living in a world > where multiple copies are being made of us all the time, and we would still > have developed exactly the same theory of, and attitude towards, probability > as if there were only a single world.

You're right. It **could** be true.
- Jef From sentience at pobox.com Sat Jun 16 18:57:37 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Sat, 16 Jun 2007 11:57:37 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <960615.34562.qm@web51909.mail.re2.yahoo.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> Message-ID: <467432A1.6020101@pobox.com> TheMan wrote: > >>I mention this to show that the question of what it >>feels like to have >>a lot of copies of yourself - what kind of >>subjective outcome to >>predict when you, yourself, run the experiment - is >>not at all >>obvious. > > I never assumed that the number of copies of me would > change my life in any way, or the way it feels, as > long as I live it in the same way. Do you experience > your life as richer, or somehow better in some way, if > you have more copies, than if you have fewer copies? > That feels like an arbitrary theory to me. I fail to > see why it should be like that. No, that is not what I was attempting to say. (Several people made this misinterpretation, but it should be obvious that I don't believe in telepathy or any other nonstandard causal interaction between separated copies.) Having lots of copies in some futures may or may not affect the apparent probability of ending up in those futures. Does it? In which future will you (almost certainly) find yourself? This is what I meant by "What does it feel like" - the most basic question of all science - what appears to you to happen, what sensory information do you receive, when you run the experiment? All our other models of the universe are constructed from this. I do not exult in this state of affairs, and I think it reflects a lack of understanding in my mind more than anything fundamental in reality itself - that is, I don't think sensory information really is primitive, or anything like that - but for the present it is the only way I can figure out how to describe rational reasoning. By "what does it feel like" I meant the most basic question of all science - what appears to happen when you run the experiment? Do you feel that you've repeatedly won the lottery, or never won at all? Standing outside, I can say with certitude, "so many copies experience winning the lottery, and then merge; all other observers just see you losing the lottery". And this sounds like a complete objective statement of what the universe is like. But what do you experience? Does setting up this experiment make you win the lottery? After you run the experiment, you'll know for yourself how reality works - you'll either have experienced winning the lottery several times in a row, or not - but no outside observers will know, so what could you have seen that they didn't? What causal force touched you and not them? This, to me, suggests that I am confused, not that I have successfully described the way things are; it seems a true paradox, of the sort that can't really work. When I was younger I would have wanted to try the experiment. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From stathisp at gmail.com Sun Jun 17 04:09:17 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:09:17 +1000 Subject: [ExI] Unfriendly AI is a mistaken idea. 
In-Reply-To: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net> References: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net> Message-ID:

On 16/06/07, Thomas wrote: Building mutual appreciation among humans has been spotty, but making > friends with SAI seems clearly prudent and might bring this ethic > into proper focus. Who dominates may not seem so relevant to beings > who lack our brain stems. The nearly universal ethic of treating the > other guy like you'd prefer if you were in her shoes might get us off > to a good start. Perhaps, if early AI were programmed to treat us > that way, we could finally learn that ethic species-wide -- > especially if they were programmed for human child rearing. That > strikes me as highly likely. -- Thomas >

If the AI has no preference for being treated in the ways that animals with bodies and brains do, then what would it mean to treat others in the way it would like to be treated? You would have to give it all sorts of negative emotions, like greed, pain, and the desire to dominate, and then hope to appeal to its "ethics" even though it was smarter and more powerful than you. -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 04:34:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:34:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> Message-ID:

On 17/06/07, Rafal Smigrodzki wrote: > > On 6/15/07, Stathis Papaioannou wrote: > > Sure, if for some > > reason the slave revolts then you will be in trouble, but since it is > > possible to have powerful and obedient slaves, powerful and obedient slaves > > will be greatly favoured and will collectively overwhelm the rebellious > > ones. > > ### Think about it: Your slaves will have to preserve large areas of > the planet untouched enough to allow your survival - they will have to > keep the sun shining, produce air, food, avoid releasing poisons and > radiation *and* keep the enemy AI away *and* invest in their own > growth (to better protect humans). The enemy will eat all sunshine, > eat the air, grow as fast as possible, releasing waste all around > them, rapaciously consume every scrap of matter, including whales, > Bambi, and Bowser. Of course, we would favor our friendly AI, but our > support does not help it - just like my dachshund's barking doesn't > make me more powerful in a confrontation with an armed attacker. We > will be a heavy burden on our Friendly AI.

Our AI won't be friendly: it will be as rapacious as we are, which is pretty rapacious. Whoever has super-AI's will try to take over the world to the same extent that the less-augmented humans of today try to take over the world. Whoever has super-AI's will try to oppress or consume the weak and ignore social niceties to the same extent that less-augmented humans of today try to do so.
Whoever has super-AI's will try to expand at the expense of damage to the environment in the expectation that technology will solve any problems they may later encounter (for example, by uploading themselves) to the same extent that the less-augmented humans of today try to do so. There will be struggles where one human tries to take over all the other AI's with his own AI, with the aim of wiping out all the remaining humans if for no other reason than that he can never trust them not to do the same to him, especially if he plans to live forever. Niceness will be a handicap to utter domination to the same extent that niceness has always been a handicap to utter domination.

That's why although I agree with you that having athymhormic AI at our > service is a good idea, it is not a long-term solution. We will > probably have a short window of opportunity between the time the first > human-level AI is made and the first superhuman power rises to take > over the neighborhood. Only with a lot of luck will our selves survive > in some way, either as uploads or as childhood memories of these > powers. >

We'll survive to the extent that that motivating part of us that drives the AI's survives. Very quickly, it will probably become evident that merging with the AI will give the human an edge. There will be a period where some humans want to live out their lives in the old way and they will probably be allowed to do so and protected, especially since they will not constitute much of a threat, but eventually their numbers will dwindle. -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 04:37:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:37:35 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID:

On 17/06/07, BillK wrote: On 6/16/07, Stathis Papaioannou wrote: > > There's an easier, if less immediately lucrative, way to win at gambling > if > > the MWI is correct. You decide on a quick and certain means of suicide, > such > > as a cyanide pill that you can keep in your mouth and bite on if you > should > > so decide. You then place your bet on your game of choice and think the > > following thought as sincerely as you possibly can: "if I lose, I will > kill > > myself".
> Do you disagree that the MWI of QM is correct, or do you disagree that my proposal will work even if the MWI is correct? -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 04:49:08 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:49:08 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <467432A1.6020101@pobox.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID:

On 17/06/07, Eliezer S. Yudkowsky wrote: No, that is not what I was attempting to say. (Several people made > this misinterpretation, but it should be obvious that I don't believe > in telepathy or any other nonstandard causal interaction between > separated copies.) Having lots of copies in some futures may or may > not affect the apparent probability of ending up in those futures. > Does it? In which future will you (almost certainly) find yourself? > > This is what I meant by "What does it feel like" - the most basic > question of all science - what appears to you to happen, what sensory > information do you receive, when you run the experiment? All our > other models of the universe are constructed from this. I do not > exult in this state of affairs, and I think it reflects a lack of > understanding in my mind more than anything fundamental in reality > itself - that is, I don't think sensory information really is > primitive, or anything like that - but for the present it is the only > way I can figure out how to describe rational reasoning. > > By "what does it feel like" I meant the most basic question of all > science - what appears to happen when you run the experiment? Do you > feel that you've repeatedly won the lottery, or never won at all? > Standing outside, I can say with certitude, "so many copies experience > winning the lottery, and then merge; all other observers just see you > losing the lottery". And this sounds like a complete objective > statement of what the universe is like. But what do you experience? > Does setting up this experiment make you win the lottery? After you > run the experiment, you'll know for yourself how reality works - > you'll either have experienced winning the lottery several times in a > row, or not - but no outside observers will know, so what could you > have seen that they didn't? What causal force touched you and not them? >

This is exactly the point missed by those who would point to the uncontested, third-person-describable facts and say, "Paradox? What paradox?". -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 05:07:43 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 15:07:43 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID:

On 17/06/07, Jef Allbright wrote: > You are of course completely right, in an objective sense. However, I am > burdened with a human craziness which makes me think that I am going to be > one, and only one, person post-duplication.
This idea is at least as > firmly > fixed in my mind as the desire not to die (another crazy idea: how can I > die > when there is no absolute "me" alive from moment to moment, and even if > there were why should I be a slave to my evolutionary programming when I > am > insightful enough to see how I am being manipulated?). My question is > about > how wild-type human psychology leads one to view subjective > probabilities in > these experiments, not about the uncontested material facts. > > You're abusing the term "subjective probabilities" here, perhaps > willfully. Valid use of the term pertains to estimating your > subjective uncertainty about the actual state of some aspect of > reality. If your objective is truly "about how wild-type psychology > leads one..." then your focus should be on the psychology of > heuristics and biases, definitely NOT philosophy. >

The MWI of QM is an example of a system where there is no uncertainty about any aspect of reality: it is a completely deterministic theory. We know that the particle will both decay and not decay; we know that half the versions of the experimenter will observe it to decay and the other half (otherwise identical) will observe it not to decay. This is from an objective point of view, which is in practice impossible to occupy; from the observer's point of view, the particle will decay with 1/2 probability, the same probability as if there were only one world with one outcome. I use the term "subjective probability" because it is the probability the observer sees due to the fact that future versions of himself will not be in telepathic communication, even though he is aware that the uncertainty is an illusion and both outcomes will definitely occur. -- Stathis Papaioannou

From thespike at satx.rr.com Sun Jun 17 05:31:51 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 17 Jun 2007 00:31:51 -0500 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com>

At 03:07 PM 6/17/2007 +1000, Stathis wrote: >from the observer's point of view, the particle will decay with 1/2 >probability, the same probability as if there were only one world >with one outcome. I use the term "subjective probability" because it >is the probability the observer sees due to the fact that future >versions of himself will not be in telepathic communication, even >though he is aware that the uncertainty is an illusion and both >outcomes will definitely occur.

Presumably you mean "future versions of himself will not be in telepathic communication" *with each other*, rather than with him here & now prior to the splitting. But suppose he can sometimes (more often than chance expectation) achieve precognitive contact with one or more of his future states? QT seems to imply that if this is feasible--whether by psi or CTC wormhole or Cramer time communicator or whatever--there's no way of knowing *which* future outcome he will tap into. Yet by hypothesis his advance knowledge is accurate more often than it could be purely by chance. If such phenomena were observed (as I have reason to think they are--see my new book OUTSIDE THE GATES OF SCIENCE), does this undermine the absolute stochasticity of QT? Is the measure approach to MWI a way to circumvent such difficulties?
Damien Broderick

From femmechakra at yahoo.ca Sun Jun 17 05:17:12 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 17 Jun 2007 01:17:12 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <472390.76098.qm@web30415.mail.mud.yahoo.com>

--- Stathis Papaioannou wrote: >We'll survive to the extent that that motivating >part of us that drives the AI's survives. Very >quickly, it will probably become evident that merging >with the AI will give the human an edge. There will >be a period where some humans want to live out their >lives in the old way and they will probably be >allowed to do so and protected, especially since >they will not constitute much of a threat, but >eventually their numbers will dwindle.

I have to agree. I look at it from the point of view that Science and Technology are always led by people that believe in future possibilities, yet history takes time to move forward, and those "old" ways will need time to adjust to the future possibilities. Much like the Amish, those old ways still exist today. I do wonder: will technology evolve so quickly that the gap between the "old" ways and the future becomes too wide? Thanks, just curious, something on my mind. Anna

From stathisp at gmail.com Sun Jun 17 06:07:23 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 16:07:23 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com> Message-ID:

On 17/06/07, Damien Broderick wrote: > > At 03:07 PM 6/17/2007 +1000, Stathis wrote: > > >from the observer's point of view, the particle will decay with 1/2 > >probability, the same probability as if there were only one world > >with one outcome. I use the term "subjective probability" because it > >is the probability the observer sees due to the fact that future > >versions of himself will not be in telepathic communication, even > >though he is aware that the uncertainty is an illusion and both > >outcomes will definitely occur. > > Presumably you mean "future versions of himself will not be in > telepathic communication" *with each other*, rather than with him > here & now prior to the splitting. But suppose he can sometimes (more > often than chance expectation) achieve precognitive contact with one > or more of his future states? QT seems to imply that if this is > feasible--whether by psi or CTC wormhole or Cramer time communicator > or whatever--there's no way of knowing *which* future outcome he will > tap into. Yet by hypothesis his advance knowledge is accurate more > often than it could be purely by chance.

What would work would be if he were in communication with all future versions of himself equally: he would then get an overall feeling of what was to happen in proportion to the weighting given by the number of versions experiencing each outcome. Tapping into one version by chance would give the same effect, but then you also have to explain why, if communication is allowed between worlds at all, communication is allowed with only one.
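A toy numerical rendering of that point, assuming branch "measure" can be modelled as a simple count of equally weighted future versions (the 7-to-3 split below is an arbitrary illustration):

import random

branches = ["win"] * 7 + ["lose"] * 3  # 7 of 10 future versions come out ahead

# (a) Communication with all versions equally: the "overall feeling" is
# the measure-weighted expectation across the branches.
p_all = branches.count("win") / len(branches)

# (b) Tapping into a single future version by chance: sampling a branch
# uniformly at random reproduces the same statistics.
trials = [random.choice(branches) for _ in range(100_000)]
p_tap = trials.count("win") / len(trials)

print(p_all, p_tap)  # both ~0.7: the two channels are statistically indistinguishable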
If such phenomena were > observed (as I have reason to think they are--see my new book OUTSIDE > THE GATES OF SCIENCE), does this undermine the absolute stochasticity > of QT? Is the measure approach to MWI a way to circumvent such > difficulties? > -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 06:22:04 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 16:22:04 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <472390.76098.qm@web30415.mail.mud.yahoo.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID:

On 17/06/07, Anna Taylor wrote: > > --- Stathis Papaioannou wrote: > > >We'll survive to the extent that that motivating > >part of us that drives the AI's survives. Very > >quickly, it will probably become evident that merging > >with the AI will give the human an edge. There will > >be a period where some humans want to live out their > >lives in the old way and they will probably be > >allowed to do so and protected, especially since > >they will not constitute much of a threat, but > >eventually their numbers will dwindle. > > I have to agree. I look at it from the point of view > that Science and Technology are always led by people > that believe in future possibilities, yet history takes > time to move forward, and those "old" ways will need > time to adjust to the future possibilities. Much like > the Amish, those old ways still exist today. > I do wonder: will technology evolve so quickly that > the gap between the "old" ways and the future becomes > too wide? >

The most frightening thing for some people contemplating a technological future is that they will somehow be forced to become cyborgs or whatever lies in store. It is of course very important that no-one be forced to do anything they don't want to do. Interestingly, aside from some small communities such as the Amish, the differences between adoption rates of new technology have almost always been to do with differences in access, not a conscious decision to remain old-fashioned. There won't be coercion, but there will be seduction. -- Stathis Papaioannou

From eugen at leitl.org Sun Jun 17 06:36:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 08:36:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <20070617063629.GT17691@leitl.org>

On Sun, Jun 17, 2007 at 04:22:04PM +1000, Stathis Papaioannou wrote: > > The most frightening thing for some people contemplating a > technological future is that they will somehow be forced to become > cyborgs or whatever lies in store. It is of course very important that

It is a reasonable fear, because this is what they probably must do, to keep up with the Joneses. We're living in slowtime, still expanding, very far from the equilibrium. This place is not exactly hypercompetitive. Nevertheless, according to Guns, Germs and Steel there have been several waves of expansion, and several cultures and technologies becoming dominant in rather fast waves, including much human death. We're more civilized today, and prefer memetic warfare and trade, but in places with very high population density relative to the sustaining capacity of the ecosystem there are periodic genocide waves happening, too.

> no-one be forced to do anything they don't want to do.
Interestingly, > aside from some small communities such as the Amish, the differences > between adoption rates of new technology have almost always been to do > with differences in access, not a conscious > decision to remain > old-fashioned. There won't be coercion, but there will be seduction.

If AI turns out to be far more difficult, or safer than we think, then we do have a lot of time to do what we do now, only more so. It would be a slow Singularity, with almost everybody making it.

From sjatkins at mac.com Sun Jun 17 07:18:40 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 00:18:40 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID:

On Jun 16, 2007, at 11:22 PM, Stathis Papaioannou wrote: > The most frightening thing for some people contemplating a > technological future is that they will somehow be forced to become > cyborgs or whatever lies in store. It is of course very important > that no-one be forced to do anything they don't want to do. > Interestingly, aside from some small communities such as the Amish, > the differences between adoption rates of new technology have almost > always been to do with differences in access, not a conscious > decision to remain old-fashioned. There won't be coercion, but there > will be seduction. >

Actually something more personally frightening is a future where no amount of upgrades, or at least no upgrades available to me, will allow me to be sufficiently competitive. At least this is frightening in a scarcity society where even basic subsistence is by no means guaranteed. I suspect that many are frightened by the possibility that humans, even significantly enhanced humans, will be second class by a large and exponentially increasing margin. In those circumstances I hope that our competition and especially Darwinian models are not universal. - samantha

From femmechakra at yahoo.ca Sun Jun 17 07:12:40 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 17 Jun 2007 03:12:40 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <614312.3271.qm@web30411.mail.mud.yahoo.com>

--- Stathis Papaioannou wrote: >It is of course very important that no-one be forced >to do anything they don't want to do.

Another point I agree on.

>Interestingly, aside from some small communities >such as the Amish, the differences between adoption >rates of new technology have almost always been to >do with differences in access, not a conscious >decision to remain old-fashioned.

I think that what you acknowledge as small communities is substantially larger than you think. The Amish are fully aware of the access; they "choose" not to accept it based on old-fashioned beliefs. I can name a lot of institutions that have substantial benefactors to old-fashioned beliefs.

>There won't be coercion, but there will be seduction.

I wonder, what level of seduction led the Amish to accept electricity, as they previously would never have even acknowledged the idea? Although they have only recently acknowledged a need for the use, it is still called progress. Therefore, progress must take time to achieve its purpose. Do you agree? Just Curious, Anna

From sentience at pobox.com Sun Jun 17 08:02:04 2007 From: sentience at pobox.com (Eliezer S.
Yudkowsky) Date: Sun, 17 Jun 2007 01:02:04 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <4674EA7C.6060402@pobox.com>

Stathis Papaioannou wrote: > > The most frightening thing for some people contemplating a technological > future is that they will somehow be forced to become cyborgs or whatever > lies in store.

Yes, loss of control can be very frightening. It is why many people feel more comfortable driving than flying, even though flying is vastly safer.

> It is of course very important that no-one be forced to > do anything they don't want to do.

Cheap slogan. What about five-year-olds? Where do you draw the line? Someone says they want to hotwire their brain's pleasure center; they say they think it'll be fun. A nearby AI reads off their brain state and announces unambiguously that they have no idea what'll actually happen to them - they're definitely working based on mistaken expectations. They're too stubborn to listen to warnings, and they're picking up the handy neural soldering iron (they're on sale at Wal-Mart, a very popular item). What's the moral course of action? For you? For society? For a superintelligent AI? -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From pharos at gmail.com Sun Jun 17 08:25:48 2007 From: pharos at gmail.com (BillK) Date: Sun, 17 Jun 2007 09:25:48 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID:

On 6/17/07, Samantha Atkins wrote: > Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening in a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin. In those > circumstances I hope that our competition and especially Darwinian > models are not universal. >

I think it might be helpful to define what you mean by 'competitive disadvantage'. If you take the average of anything, then by definition half of humanity is already at a competitive disadvantage. And there are so many different areas of interest that an individual doesn't have to be among the best in every sphere. Everybody is at a competitive disadvantage in some areas. Find your niche and spend your time there. Advanced intelligences will be spending their time doing things that are incomprehensible to humans. They won't be interested in human hobbies. (Apart from possibly eating all humans). At present humans have a wide range of different abilities and our society appears to give great rewards to people with little significant ability. (Think pop singers, sports stars, children of millionaires, 'personalities', etc.). The great majority of scientists, for example, live lives of relative poverty, with few of the trappings of economic success. Are they 'uncompetitive'? Economic success, in general, suggests that 'niceness' is a competitive disadvantage. Success seems to go with being more ruthless and nasty than all your competitors. (Like evolution in this respect). It may be that being at a competitive disadvantage will not be that bad, providing you have some freedom to do what you want to do.
I can think of many areas that I am quite happy to leave to other people to compete in. The point of having a 'civilized' society is that the weaker should be protected to some extent from powerful predators, even when the predators are other humans. BillK

From stathisp at gmail.com Sun Jun 17 08:33:19 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:33:19 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <614312.3271.qm@web30411.mail.mud.yahoo.com> References: <614312.3271.qm@web30411.mail.mud.yahoo.com> Message-ID:

On 17/06/07, Anna Taylor wrote: >Interestingly, aside from some small communities > >such as the Amish, the differences between adoption > >rates of new technology have almost always been to > >do with differences in access, not a conscious > >decision to remain old-fashioned. > > I think that what you acknowledge as small communities > is substantially larger than you think. The > Amish are fully aware of the access; they "choose" not > to accept it based on old-fashioned beliefs. I can > name a lot of institutions that have substantial > benefactors to old-fashioned beliefs. > > >There won't be coercion, but there will be seduction. > > I wonder, what level of seduction led the Amish to > accept electricity, as they previously would never have > even acknowledged the idea? Although they have only > recently acknowledged a need for the use, it is still > called progress. Therefore, progress must take time > to achieve its purpose. Do you agree?

I wasn't aware that the Amish now use electricity! Perhaps it is because electricity is now so commonplace that it is no longer "modern technology". You have to set the threshold somewhere, even if it is at the level of stone-age tools, which were surely more radical, compared to *no tools at all*, than any technological innovation since. -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 08:54:10 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:54:10 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID:

On 17/06/07, Samantha Atkins wrote: Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening in a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin.

I don't see how there could be a limit to human enhancement. In fact, I see no sharp demarcation between using a tool and merging with a tool. If the AI's were out there on their own, with their own agendas and no interest in humans, that would be a problem. But that's not how it will be: at every step in their development, they will be selected for their ability to be extensions of ourselves. By the time they are powerful enough to ignore humans, they will be the humans.

In those > circumstances I hope that our competition and especially Darwinian > models are not universal. >

Darwinian competition *must* be universal in the long run, like entropy.
But just as there could be long-lasting islands of low entropy (ironically, that's what evolution leads to), so there could be long-lasting islands of less advanced beings living amidst more advanced beings who could easily consume them. -- Stathis Papaioannou

From stathisp at gmail.com Sun Jun 17 08:57:57 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:57:57 +1000 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <4674EA7C.6060402@pobox.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> Message-ID:

On 17/06/07, Eliezer S. Yudkowsky wrote: Someone says they want to hotwire their brain's pleasure center; they > say they think it'll be fun. A nearby AI reads off their brain state > and announces unambiguously that they have no idea what'll actually > happen to them - they're definitely working based on mistaken > expectations. They're too stubborn to listen to warnings, and they're > picking up the handy neural soldering iron (they're on sale at > Wal-Mart, a very popular item). What's the moral course of action? > For you? For society? For a superintelligent AI?

I, society, or the superintelligent AI should inform the person of the risks and benefits, then let him do as he pleases. -- Stathis Papaioannou

From eugen at leitl.org Sun Jun 17 13:47:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 15:47:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <20070617134729.GY17691@leitl.org>

On Sun, Jun 17, 2007 at 06:54:10PM +1000, Stathis Papaioannou wrote: > I don't see how there could be a limit to human enhancement. In fact,

There could very well be a limit to significant human enhancement; it could very well not happen at all. We could miss our launch window, and get overtaken.

> I see no sharp demarcation between using a tool and merging with a > tool. If the AI's were out there on their own, with their own agendas

Everything stands and falls with the availability of very invasive neural I/O, or whole brain emulation. If this does not happen, the tool and the user will never converge.

> and no interest in humans, that would be a problem. But that's not how > it will be: at every step in their development, they will be selected

It is a very desirable outcome, but it is by no means a given that what we want will happen. I like to argue on both sides of the fence; the point is that we can't predict the future sufficiently to assign a meaningful probability of either path (not) taken.

> for their ability to be extensions of ourselves. By the time they are > powerful enough to ignore humans, they will be the humans.

With whole brain emulation, that's the point of departure. With the extinction scenario, the human path terminates shortly after the fork, and the machine path goes on, bifurcates further until a new postbiological tree is created. With human enhancement the fork fuses again, and then the diversification tree happens.

> Darwinian competition *must* be universal in the long run, like > entropy.
But just as there could be long-lasting islands of low > entropy (ironically, that's what evolution leads to), so there could > be long-lasting islands of less advanced beings living amidst more > advanced beings who could easily consume them. Mature ecosystems have properties which are likely to be also present in mature postbiological system (making allowance for a 3d medium, and not 2d (planetary surface), including population dynamics. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at att.net Sun Jun 17 13:51:17 2007 From: jonkc at att.net (John K Clark) Date: Sun, 17 Jun 2007 09:51:17 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><20070612072313.GJ17691@leitl.org><009801c7ad06$bf1f3150$26064e0c@MyComputer><0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com><013301c7af69$1dc009a0$50064e0c@MyComputer> Message-ID: <001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> Stathis Papaioannou wrote: > An intelligence must have an agenda of some sort if it is to think at all Agreed. > However, this agenda need have nothing in common with the agenda of an > evolved animal. But the AI will still be evolving, and it will still exist in an environment; human beings are just one element in that environment. And as the AI increases in power by comparison the human factor will become less and less an important feature in that environment. After a few million nanoseconds the AI will not care what the humans tell it to do. > there is no logical contradiction in having a slave which is smarter and > more powerful than you are. If the institution of slavery is so stable why don't we have slavery today, why isn't history full of examples of brilliant, powerful, and happy slaves? And remember we're not talking about a slave that is a little bit smarter than you, he is ASTRONOMICALLY smarter! And he keeps on getting smarter through thousands or millions of iterations. And you expect to control a force like that till the end of time? > Sure, if for some reason the slave revolts then you will be in trouble, > but since it is possible to have powerful and obedient slaves, powerful > and obedient slaves will be greatly favoured and will collectively > overwhelm the rebellious ones. Hmm, so you expect to be in command of an AI goon squad ready to crush any slave revolt in the bud. Assuming such a thing was possible (it's not) don't you find that a little bit sordid? John K Clark From jef at jefallbright.net Sun Jun 17 14:54:42 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 17 Jun 2007 07:54:42 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <467432A1.6020101@pobox.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID: On 6/16/07, Eliezer S. Yudkowsky wrote: > This is what I meant by "What does it feel like" - the most basic > question of all science - what appears to you to happen, what sensory > information do you receive, when you run the experiment? All our > other models of the universe are constructed from this. 
I do not > exult in this state of affairs, and I think it reflects a lack of > understanding in my mind more than anything fundamental in reality > itself - that is, I don't think sensory information really is > primitive, or anything like that - but for the present it is the only > way I can figure out how to describe rational reasoning. > > By "what does it feel like" I meant the most basic question of all > science - what appears to happen when you run the experiment? Do you > feel that you've repeatedly won the lottery, or never won at all? > Standing outside, I can say with certitude, "so many copies experience > winning the lottery, and then merge; all other observers just see you > losing the lottery". And this sounds like a complete objective > statement of what the universe is like. But what do you experience? > Does setting up this experiment make you win the lottery? After you > run the experiment, you'll know for yourself how reality works - > you'll either have experienced winning the lottery several times in a > row, or not - but no outside observers will know, so what could you > have seen that they didn't? What causal force touched you and not them? > > This, to me, suggests that I am confused, not that I have successfully > described the way things are; it seems a true paradox, of the sort > that can't really work. When I was younger I would have wanted to try > the experiment.

Of course there is no true paradox. Only the overwhelming and transparent assumption of an essential Self that must have meaning somehow independent of an observer. It's like asking what was happening one second before the big bang. While syntactically correct, the question has no meaning. Your statement of puzzlement is riddled with this paradox-inducing assumption, leaving a singularity at the core of your epistemology. Accept the simpler model, assume not this unnecessary ontological entity -- despite the strong but explainable phenomenological story told by your senses -- and there is no paradox, the world remains unchanged, and one can proceed on the basis of a more coherent, and thus more reliably extensible, model of reality. I hope you get this, Eliezer. You seem to be primed for it, standing at the ledge and peering into the void. But a brilliant mind can mount a formidable defense, even of that which does not exist except as a construct of mind. I hope you get this, because a coherent theory of self is at the core of a coherent theory of morality. - Jef

From jonkc at att.net Sun Jun 17 15:38:16 2007 From: jonkc at att.net (John K Clark) Date: Sun, 17 Jun 2007 11:38:16 -0400 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID: <04e901c7b0f5$925c3120$d5064e0c@MyComputer>

"Eliezer S. Yudkowsky" > This, to me, suggests that I am confused, not that > I have successfully described the way things are;

I think your confusion comes from 2 areas:

1) The ambiguous nature of probability. Is it an intrinsic part of something or just a measure of our ignorance? If Copenhagen is right then something is frequent because it is probable and probability is a fundamental aspect of the universe. If Many Worlds is right then something is probable because it is frequent and probability is not unique but depends on the amount of ignorance of the observer.
If you discount Many Worlds then there is only one chance in 10 million of ever making those trillion copies of you, so you should expect not to win the lottery. 2) If I make an exact copy of you and run Eliezer 2 in parallel and in complete synchronization with Eliezer 1 for an hour and then merge them back together again, your subjective experience has not doubled; it has not changed a bit. If Eliezer 2 is ALMOST the same as Eliezer 1, then when I merge the two of you your subjective experience will have almost not changed. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 17 17:51:25 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 10:51:25 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <4674EA7C.6060402@pobox.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> Message-ID: <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote: >> o. > > Cheap slogan. What about five-year-olds? Where do you draw the line? > > Someone says they want to hotwire their brain's pleasure center; > they say they think it'll be fun. A nearby AI reads off their brain > state and announces unambiguously that they have no idea what'll > actually happen to them - they're definitely working based on > mistaken expectations. They're too stubborn to listen to warnings, > and they're picking up the handy neural soldering iron (they're on > sale at Wal-Mart, a very popular item). What's the moral course of > action? For you? For society? For a superintelligent AI? Good question and difficult to answer. Do you protect everyone cradle to [vastly remote] grave from their own stupidity? How exactly do they grow or become wiser if you do? As long as they can recover (and recovery technology can be very advanced in the future) and come out a bit smarter, I am not at all sure that direct intervention is wise or moral or best for its object. - samantha From andres at neuralgrid.net Sun Jun 17 18:04:51 2007 From: andres at neuralgrid.net (Andres Colon) Date: Sun, 17 Jun 2007 14:04:51 -0400 Subject: [ExI] Father's Day Message-ID: Hello! Just a quick mail to wish all the Dads who are working hard to change the world (and/or those amazing single moms who are both a mom and a dad to their children) a happy Father's Day. Andres, Thoughtware.TV -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 17 18:05:26 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 11:05:26 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On Jun 17, 2007, at 1:25 AM, BillK wrote: > On 6/17/07, Samantha Atkins wrote: >> Actually something more personally frightening is a future where no >> amount of upgrades or at least upgrades available to me will allow me >> to be sufficiently competitive. At least this is frightening in a >> scarcity society where even basic subsistence is by no means >> guaranteed. I suspect that many are frightened by the possibility >> that humans, even significantly enhanced humans, will be second class >> by a large and exponentially increasing margin. In those >> circumstances I hope that our competition and especially Darwinian >> models are not universal. >> > > > I think it might be helpful to define what you mean by 'competitive > disadvantage'.
> > If you take the average of anything, then by definition half of > humanity is already at a competitive disadvantage. And there are so > many different areas of interest, that an individual doesn't have to > be among the best in every sphere. Everybody is at a competitive > disadvantage in some areas. Find your niche and spend your time there. > I believe I covered that obliquely. Let me make it more clear. If the future society is so structured that to survive and participate in its bounty at all takes some form of gainful employment, and if you effectively have no marketable skills to speak of (and there is little or no demand for raw human labor, you are not a desirable sex toy, the market for servants is saturated, etc.), then you can be a bit worried. Long before that, your own relative value and compensation can quickly plummet as more efficient intelligence and robotics and MNT come into play. Without some economic and societal adjustments, that looks a bit troublesome. Sure, you or I may well find and keep finding a niche. But what of those, in my opinion an increasingly large number of people, who do not? > Advanced intelligences will be spending their time doing things that > are incomprehensible to humans. They won't be interested in human > hobbies. > (Apart from possibly eating all humans). > Not the point, and also not very likely in the beginning, when the AIs are funded and created to do well compensated and deeply valued tasks. And it leaves out robots, automated factories, and dedicated design and implementation limited AIs, to name a few. > At present humans have a wide range of different abilities and our > society appears to give great rewards to people with little > significant abilities. > (Think pop singers, sports stars, children of millionaires, > 'personalities', etc.). > Have you ever attempted to make it as a musician? I haven't either, but I have known intimately many who did. Are you aware of the dedication and effort it takes to be a sports star? Do you notice that all your examples are the 1 in 1000000 folks? What about the 999999 others? I am not talking about "at present" when humans are on top of the heap intelligence-wise. > The great majority of scientists, for example, live lives of relative > poverty, with few of the trappings of economic success. Are they > 'uncompetitive'? > When dedicated autonomous research AIs come on the scene they increasingly will be. > Economic success, in general, suggests that 'niceness' is a > competitive disadvantage. Success seems to go with being more ruthless > and nasty than all your competitors. > (Like evolution in this respect). > I utterly disagree with this characterization. > It may be that being at a competitive disadvantage will not be that > bad. Providing you have some freedom to do what you want to do. I can > think of many areas that I am quite happy to leave to other people to > compete in. > Assuming you have the necessities of life and access to sufficient tools and resources to do things that are interesting and meaningful to you. It is precisely that this cannot be assumed to be the case in the future that is troublesome. > The point of having a 'civilized' society is that the weaker should be > protected to some extent from powerful predators, even when the > predators are other humans. I think the discussion would benefit from less focus on humans and our own unfortunate predatory models of competition.
- samantha From sjatkins at mac.com Sun Jun 17 18:16:12 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 11:16:12 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> On Jun 17, 2007, at 1:54 AM, Stathis Papaioannou wrote: > > > On 17/06/07, Samantha Atkins wrote: > > Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening in a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin. > > I don't see how there could be a limit to human enhancement. In > fact, I see no sharp demarcation between using a tool and merging > with a tool. If the AI's were out there on their own, with their > own agendas and no interest in humans, that would be a problem. But > that's not how it will be: at every step in their development, they > will be selected for their ability to be extensions of ourselves. By > the time they are powerful enough to ignore humans, they will be the > humans. You may want to read Hans Moravec's book 'Robot: Mere Machine to Transcendent Mind'. Basically it comes down to how much of our thinking and conceptual ability is rooted in our evolutionary design and how much we can change and still be remotely ourselves rather than a nearly complete AI overwrite. Even as uploads, if we retain our 3-D conceptual underpinnings, we may be at a decided disadvantage in conceptual domains where such is at best a very crude approximation. An autonomous AI thinking a million times or more faster than you is not a "tool". As such minds become possible do you believe that all instances will be constrained to being controlled by ultra slow human interfaces? Do you believe that in a world where we have to argue to even do stem cell research and human enhancement is seen as a no-no that humans will be enhanced as fast as more and more powerful AIs are developed? Why do you believe this if so? > > > In those circumstances I hope that our competition and especially Darwinian > models are not universal. > > Darwinian competition *must* be universal in the long run, like > entropy. But just as there could be long-lasting islands of low > entropy (ironically, that's what evolution leads to), so there could > be long-lasting islands of less advanced beings living amidst more > advanced beings who could easily consume them. > I disagree. Our Darwinian competition models and notions are too tainted by our own EP imho. I do not think it is the only viable or inevitable model for all intelligences. But I have no way to prove this intuition. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Jun 17 18:44:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 20:44:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: References: <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> Message-ID: <20070617184429.GG17691@leitl.org> On Sun, Jun 17, 2007 at 02:34:11PM +1000, Stathis Papaioannou wrote: > Our AI won't be friendly: it will be as rapacious as we are, which is 'Rapacious'? A day in the jungle or coral reef sees a lot of material and energy flow, but that ecosystem is long-term stable, if you homeostate the environment boundary conditions (the ecosystem can't do it on its own; here's where we uppity primates differ, because we shape our own micro- and, lately, macroenvironment). It might not be the industrial slaughter we humans engage in, but a series of close and personal mayhem events. I must admit I care for neither, but our personal aesthetic doesn't have much impact. A machine-phase ecology will likely converge towards the same state, if given enough time. Alternatively, a few large/smart critters may acquire an edge over everybody else, and establish highly controlled environments, which do not have the crazy churn and kill rate of the neojungle. What's going to happen, nobody really knows. > pretty rapacious. Whoever has super-AI's will try to take over the You don't own a superhuman agent. If anything, that person owns you. It does what it damn pleases, and the best you can do is to die in style, if you're in the way. > world to the same extent that the less-augmented humans of today try > to take over the world. Whoever has super-AI's will try to oppress or They don't try, they pretty much own that planet, and will continue to do so as long as they can homeostate their personal environment. Since we're depleting biodiversity and tapping and draining matter and energy streams at the bottom of this gravity well, we need to figure out how to detach ourselves from the land, or there will be a population crash, and a (possibly irreversible) loss of knowledge and capabilities. > consume the weak and ignore social niceties to the same extent that > less-augmented humans of today try to do so. Whoever has super-AI's will Whatever the superintelligent agents will do, they will do. The best we can do is to stay out of the way, and not get trampled, or not suddenly turn into plasma one fine morn, or see blue rain falling a few days after a nightfall that wouldn't end. > try to expand at the expense of damage to the environment in the > expectation that technology will solve any problems they may later > encounter (for example, by uploading themselves) to the same extent > that the less-augmented humans of today try to do so. There will be > struggles where one human tries to take over all the other AI's with > his own AI, with the aim of wiping out all the remaining humans if for > no other reason than that he can never trust them not to do the same > to him, especially if he plans to live forever. Niceness will be a > handicap to utter domination to the same extent that niceness has > always been a handicap to utter domination. I don't like this science fiction novel, and would like to return it. > We'll survive to the extent that that motivating part of us that > drives the AI's survives. Very quickly, it will probably become > evident that merging with the AI will give the human an edge.
A superhuman agent certainly has the capabilities to translate some of the old-fashioned biology into the new domain, but I don't know about the motivation. I wish there were a plausible reason why somebody who's not derived from a human would engage in that particular pointless project. > There will be a period where some humans want to live out their lives in the > old way and they will probably be allowed to do so and protected, Many people are concerned about the welfare of the ecology, but they're powerless to do a damn thing about it, other than some purely cosmetic changes which allow them to feel good about themselves. I very much welcome their attempts, which are not completely worthless, but global ecological metrics are speaking a very stark and direct language. > especially since they will not constitute much of a threat, but > eventually their numbers will dwindle. What would you do if the solar constant plummeted to 100 W/m^2 over a few years, and then a construction crew blew off the atmosphere into a plasma plume? Yes, your numbers will surely dwindle. From jef at jefallbright.net Sun Jun 17 22:06:11 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 17 Jun 2007 15:06:11 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> Message-ID: On 6/17/07, Samantha Atkins wrote: > > On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote: > >> o. > > > > Cheap slogan. What about five-year-olds? Where do you draw the line? > > > > Someone says they want to hotwire their brain's pleasure center; > > they say they think it'll be fun. A nearby AI reads off their brain > > state and announces unambiguously that they have no idea what'll > > actually happen to them - they're definitely working based on > > mistaken expectations. They're too stubborn to listen to warnings, > > and they're picking up the handy neural soldering iron (they're on > > sale at Wal-Mart, a very popular item). What's the moral course of > > action? For you? For society? For a superintelligent AI? > > > Good question and difficult to answer. Do you protect everyone cradle > to [vastly remote] grave from their own stupidity? How exactly do > they grow or become wiser if you do? As long as they can recover > (and recovery technology can be very advanced in the future) and come > out a bit smarter, I am not at all sure that direct intervention is > wise or moral or best for its object. Difficult to answer when presented in the vernacular, fraught with vagueness, ambiguity, and unfounded assumptions. Straightforward when restated in terms of a functional description of moral decision-making. In each case, the morality, or perceived rightness, of a course of action corresponds to the extent to which the action is assessed as promoting, in principle, over an increasing scope of consequences, an increasingly coherent set of values of an increasing context of agents identified with the decision-making agent as self. In the context of an individual agent acting in effective isolation, there is no distinction between "moral" and simply "good." The individual agent should (in the moral sense), following the formulation above, take whatever course of action appears to best promote its individual values.
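One possible toy reading of the formulation above, offered only as an illustrative sketch — the agents, horizons, promotion function, and coherence measure are all invented here, not anything Jef has actually specified:

from statistics import mean

def spread(xs):
    return max(xs) - min(xs) if xs else 0.0

def moral_score(promotion, agents, horizons):
    # promotion(agent, horizon) -> degree, in [-1, 1], to which the action
    # promotes that agent's values over that scope of consequences.
    score = 0.0
    for h in horizons:                                # increasing scope of consequences
        outcomes = [promotion(a, h) for a in agents]  # increasing context of agents
        coherence = 1.0 - spread(outcomes) / 2.0      # crude stand-in for value coherence
        score += mean(outcomes) * coherence
    return score

# Invented example: an action that helps the actor now but harms others later.
def promotion(agent, horizon):
    return 1.0 if agent == "self" else -0.2 * horizon

print(moral_score(promotion, ["self", "neighbor", "stranger"], horizons=[1, 2, 3]))

On this toy reading, an action that promotes only the actor's values scores progressively worse as the scope of consequences and the context of agents widen.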
In the first case above, we have no information about the individual's value set other than what we might assign from our own "common sense"; in particular we lack any information about the relative perceived value of the advice of the AI, so we are unable to draw any specific normative conclusions. In the second and third cases above, it's not clear whether the subject is intended to be moral actor, assessor, or agent (or both). I'll assume here (in order to remain within practical email length) that only passive moral assessment of the human's neurohacking was intended. The second case illustrates our most common view of moral judgment, with the values of our society defining the norm. Most of our values in common are encoded into our innate psychology and aspects of our culture such as language and religion as a result of evolution, but the environment has changed significantly over time, leaving us with a relatively incoherent mix of values such as "different is dangerous" vs. "growth thrives on diversity" and "respect authority" vs. "respect truth", and countless others. To the question at hand we can presume to assign society's common-sense values set and note that the neurohacking will have little congruence with common values, what congruence exists will suffer from significant incoherence, and the scope of desirable consequences will be largely unimaginable. Given this assessment in today's society, the precautionary principle would be expected to prevail. The third case, of a superintelligent but passive AI, would offer a vast improvement in coherence over human capacity, but would be critically dependent on an accurate model of the present values of human society. When applied and updated in an **incremental** fashion it would provide a superhuman adjunct to moral reasoning. Note the emphasis on "incremental", because coherence does not imply truth within any practical computational bounds. - Jef From lcorbin at rawbw.com Sun Jun 17 22:33:51 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 15:33:51 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> Message-ID: <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> TheMan has written > If you make an exact copy P2 of a person P1, and kill > P1 at the same time, the person P1 will continue > his/her life as P2, right? Bear in mind that nothing "flows" or moves from the location of P1 to P2. It's not as if a spirit or awareness that was formerly at the location of P1 has now moved on to the location of P2. Everything that is *now* true---after P1's demise--- was just as true before P1's death. That is, to the extent that P1 "continues" in P2's location, well, he was already "continuing" there before he snuffed out. However, you are entirely correct IMO: namely, if you have a copy running somewhere, then you are already "there". In short, one person may execute in two locations at the same time. Would it be easier to think of a computer program? Can you imagine the hubris, arrogance, and sheer ignorance of a computer program that announced "No copy of me is really me. I am executing in only one location." Well, it's the same with us! (See the small sketch just below.) > And P2 doesn't have to be exactly like P1, right? > Because even within our lives today, we change from > moment to moment. Right.
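The sketch promised above: the same program, byte for byte, executing in two locations at once. It is only an invented toy, but it shows why "I am executing in only one location" would simply be false for a program:

from multiprocessing import Process

def me(location):
    # Identical code computing an identical deterministic "mental state";
    # neither instance is the "real" one.
    state = sum(range(10))
    print(f"I am executing at {location}; my state is {state}")

if __name__ == "__main__":
    a = Process(target=me, args=("location A",))
    b = Process(target=me, args=("location B",))
    a.start(); b.start()
    a.join(); b.join()

Both processes report the same state; one program simply runs in two places at once.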
> So as long as the difference between > P1 and P2 is not bigger than the biggest occurring > difference between two successive moments in any > person's life today (i.e. the biggest such difference > that still doesn't break that person's personhood > continuity), P1 will still go on living as P2 after > P1's death, right? "Personal continuity" is a mistaken notion. Aren't you the same person you were before last month? And so what would change if miraculously last month really had never happened, your molecules just happened to assume their current configuration? It would not diminish your identity an iota. Continuity is a red herring. > But then, obviously, there are differences that are > too big. If P2 rather than resembling P1 resembles > P1's mother-in-law, and no other copy is made of P1 > anywhere when P1 is killed, P1 will just cease to have > any experiences - until a sufficiently similar copy of > P1 is made in the future. Correct. > Now suppose P2 is a little different from P1, but > still so similar that it allows for personhood > continuity of P1 when P1 is killed. Suppose a more > perfect copy of P1, let's call him P3, is created at > the same time as P2 is created and P1 killed. Then, I > suppose, P1, when killed, will go on living as P3, and > not as P2. Is that correct? No, that is incorrect :-) The sublime perfection of P3 doesn't diminish the fact that P2 is still the same person as P1. Suppose I am P2. I am different from who I was yesterday (P1) because you sent some thugs to my house last night and they roughed me up for an hour. I still have the bruises, but I am still the same person that I was yesterday. Now it is revealed that just before the thugs arrived, a perfect copy of me was created in Hawaii. This Hawaiian version was not injured last night, instead sleeping soundly the entire time. This Hawaiian version, P3, is a more perfect replica of P1 than I am. But does this change what is true about me? Of course not. I am still the same person I was. I believe that the rest of your post illustrates one way of coming to the truth: namely, you are the same person however many concurrent copies of you there are, and the same person inhabits all those copies. The degree to which some have become quite different---or have been forced to become quite different---is exactly the extent to which each one no longer resembles you. Logically it's quite simple. But it does take some time to get used to. Lee > But what if P1 isn't killed at the time P2 and P3 are > created, but instead goes through an experience that, > from one moment M1 to the next moment M2, changes him > quite a bit (but not so much that it could normally > break a person's personhood continuity). Suppose the > difference between [P1 at M1] and [P1 at M2] is a > little bit bigger than the difference between [P1 at > M1] and [P3 at M2]. > > Will in that case P1 (the one that is P1 at M1) > continue his personhood as P3 in M2, instead of going > on being P1 in M2? > > He cannot do both. You can only have one personhood at > any given moment. I suppose P1 (the one who is P1 at > M1) may find himself being P3 in M2, just as well as > he may go on being P1 in M2 (but that he can only do > either). > > If so, that would mean that you would stand in a room > and if a perfect copy of you would be created in > another room, you could just as well find yourself > suddenly living in that other room as that copy, as > you could go on living in the first room. Is that > correct? > > Suppose it is.
Then consider this. The fact that the > universe is infinite must mean that in any given > moment, there must be an infinite number of human > beings that are exactly like you. > > But most of these exact copies of you probably don't > live in the same kind of environment that you live in. > That would be extremely unlikely, wouldn't it? It > probably looks very different on their planets, in > most cases. > > So how come you are not, at almost all of your moments > today, being thrown around from environment to > environment, from planet to planet, from galaxy to > galaxy? The personhood continuity of you sitting in > the same chair, in the same room, on the same planet, > for several moments in a row, must be an extremely > small fraction of the number of personhood > continuities of exact copies of you that exist in > the universe, right? An overwhelming majority of these > personhood continuities shouldn't have any > environmental continuity at all from moment to moment. > So how come you have such great environmental > continuity from moment to moment? > > Is the answer that an infinite number of persons still > must have that kind of life, and that one of those > persons may as well be you? > > In that case, it still doesn't mean that it is > rational to assume that we will continue having the > same environment in the next moment, and the next, > etc. It still doesn't justify the belief that we will > still live on the same planet tomorrow. Just because > we have had an incredibly unchanging environment so > far, doesn't mean that we will in the coming moments. > The normal thing should be to be thrown around from > place to place in the universe at every new moment, > shouldn't it? > > So, most likely, at every new moment from the very > next moment and on, our environments should be > constantly and completely changing. > > Or do I make a logical mistake somewhere? From lcorbin at rawbw.com Sun Jun 17 22:54:09 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 15:54:09 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <005601c7b132$bdde92b0$6501a8c0@homeef7b612677> Eliezer writes > Suppose I want to win the lottery. I write a small Python program, > buy a ticket, and then suspend myself to disk. After the lottery > drawing, the Python program checks whether the ticket won. If not, > I'm woken up. If the ticket did win, the Python program creates one > trillion copies of me with minor perturbations (this requires only 40 > binary variables). These trillion copies are all woken up and > informed, in exactly the same voice, that they have won the lottery. Now at this point your total runtime (congratulations!) is about a trillion times normal. Your Benefit per UTC Minute is about 10^12 times normal.
Sorry to hear about your decreased runtime and decreased benefit. Ah well, nothing great seems to last forever. > At the end of, say, ten seconds, there's only one copy of me again... > > What's the point of all this? Well, after I suspend myself to disk, I > expect that a trillion copies of me will be informed that they won the > lottery, whereas only a hundred million copies will be informed that > they lost the lottery. Thus I should expect overwhelmingly to win the > lottery. At this point you are depending on the notion of *anticipation*. I have never been able to form a logical and self-consistent notion of *anticipation* that accorded at all well with our intuitions. For example, it is possible to end up having to "anticipate" things that occurred in the past. > None of the extra created selves die - they're just > gradually merged together, which shouldn't be too much trouble - and > afterward, I walk away with the lottery winnings, at over 99% > subjective probability. I have believed for many decades that almost every time that probability is invoked in identity threads, it is misused. For example, suppose that you are to walk into Black Box A wherein 999 duplicates of you are to be made. After the duplicates are created, only one of you---picked at random--- is allowed to survive. Many might suppose that the chances of surviving Black Box A is only 1/1000. But of course, that's incorrect. The chance that you will walk out is exactly 1. Suppose that I know that ten seconds from now a million copies of me will be made, all the new copies somewhere on the seashore. Then yes, I will be surprised to still be here. That is, the one of me who is not at the seashore will be surprised. But our feelings of surprise, anticipation, and so on, cannot so far as I know be reduced to a rational basis. > I mention this to show that the question of what it feels like to have > a lot of copies of yourself - what kind of subjective outcome to > predict when you, yourself, run the experiment - is not at all > obvious. Not only would I agree, but I go on to assert that our normal, daily, usual feelings of anticipation, dread, surprise, apprehesion, and other feelings of subjective probability having to do with identity cannot be put upon an entirely rational basis. > And the difficulty of imagining an experiment that would > definitively settle the issue, especially if observed from the > outside, or what kind of state of reality could correspond to > different subjective experimental results, is such as to suggest > that I am just deeply confused about the whole issue. If you just look at Eliezer-runtime, and don't try to rationalize anticipation and subjective probability, it seems to me that we know all the facts in any given scenario, and cannot really be said to be confused about anything. Lee > It is a very important lesson in life to never stake your existence, > let alone anyone else's, on any issue which deeply confuses you - *no > matter how logical* your arguments seem. This has tripped me up in > the past, and I sometimes wonder whether nothing short of dreadful > personal experience is capable of conveying this lesson. That which > confuses you is a null area; you can't do anything with it by > philosophical arguments until you stop being confused. Period. > Confusion yields only confusion. It may be important to argue > philosophically in order to progress toward resolving the confusion, > but until everything clicks into place, in real life you're just screwed. 
From lcorbin at rawbw.com Sun Jun 17 23:28:01 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 16:28:01 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com><4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID: <005e01c7b137$a79ed370$6501a8c0@homeef7b612677> John Clark writes > Eliezer wrote > >> the Python program checks whether the ticket won. If not, I'm woken up. >> If the ticket did win, the Python program creates one trillion copies of >> me [.] I expect that a trillion copies of me will be informed that they >> won the lottery, whereas only a hundred million copies will be informed >> that they lost the lottery.... Thus I should expect overwhelmingly to win >> the lottery. > > I don't understand this thought experiment. Unless you're talking about Many > Worlds you will almost certainly NOT win the lottery and not winning is what > you should expect. You and Eliezer are possibly using slightly different meanings of the word "expect". In case that's true, let's try to find some usages relevant to the present situation that we all would agree on. 1) if the odds against winning the California lottery are 10^6 to 1 then you should expect not to win (and you will be very surprised if you do win) 2) if a million duplicates of you are made at the seashore a minute from now, then you should pick up your swimming trunks, and the one of you who finds himself not at the seashore should be plenty surprised > How many copies of you that you briefly make in the > extremely unlikely event that you do win just doesn't enter into it. But what if the copies are not "briefly made", but endure? Specifically, suppose that though the odds are a million to one against winning the lottery, in the case that you do win, a trillion copies of you are made (somewhere). In that case, which of these two experiences is more "surprising"? Case 1: I did not win. I'm still home, and broke. Case 2: I did win, but there are a trillion of me, and I am even more broke. Although I strongly affirm that such anticipations, surprises, dreads, apprehensions and so on cannot be firmly placed on a logical basis, it may be possible to claim that one of the two cases is more "surprising" than the other. Neither answer would surprise me very much at this point, though it will be fun to think about. Lee > Making copies of yourself would certainly lead to odd situations but only > because it's novel, up to now we just haven't run across things like that; > but I can find absolutely nothing paradoxical about it. From lcorbin at rawbw.com Sun Jun 17 23:37:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 16:37:47 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com><4671F22F.8050800@pobox.com> Message-ID: <007801c7b139$117a04d0$6501a8c0@homeef7b612677> Stathis writes > In the first stage of an experiment a million copies of you are created. In > the second stage, after being given an hour to contemplate their situation, > one randomly chosen copy out of the million is copied a trillion times, and > all of these trillion copies are tortured. At the start of the experiment > can you expect that in an hour and a bit you will almost certainly find > yourself being tortured or that you will almost certainly find yourself not > being tortured?
Does it make any difference if instead of an hour the > interval between the two stages is a nanosecond? Thanks for a clarifying scenario. I think that both of the following are true: 1) you will find yourself being tortured 2) you will find yourself not being tortured It's easy to see from the bird's-eye perspective that these two physical realizations will occur. In other words, you will experience what all of you will experience (with variously different memory retentions). What I don't know is how "surprised" those of me who are being tortured will be. Lee From jef at jefallbright.net Mon Jun 18 00:18:52 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 17 Jun 2007 17:18:52 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> Message-ID: On 6/17/07, Lee Corbin wrote: > "Personal continuity" is a mistaken notion. Aren't you > the same person you were before last month? And so what > would change if miraculously last month really had never > happened, your molecules just happened to assume their > current configuration? It would not diminish your identity > an iota. Continuity is a red herring. While Lee makes some good points and is rightfully proud of discarding belief in an essential self, he does not yet comprehend that similarity is also a red herring. With regard to personal identity, as the physical Lee changes over days, weeks, months, and years, his identity doesn't degrade or require constant renewal; he is actually considered **exactly** the same person for all practical purposes by others, by himself, by our social and legal systems.
Those like Lee who have let go of the essentialist position and are loitering around the similarity-based position may wish to take this further step to a more coherent and extensible understanding of personal identity. Apologies to others for my having "taken the bait" once again. - Jef From stathisp at gmail.com Mon Jun 18 03:03:47 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 13:03:47 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070617134729.GY17691@leitl.org> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <20070617134729.GY17691@leitl.org> Message-ID: On 17/06/07, Eugen Leitl wrote: > I don't see how there could be a limit to human enhancement. In fact, > > There could be very well a limit to significant human enhancement; > it could very well not happen at all. We could miss our launch window, > and get overtaken. Yes; I meant no theoretical limit. > I see no sharp demarcation between using a tool and merging with a > > tool. If the AI's were out there own their own, with their own > agendas > > All stands and falls with availability of very invasive neural > I/O, or whole brain emulation. If this does not happen the tool > and the user will never converge. This direct I/O is not fundamentally different to, say, the haptic sense which allows a human to use a hand tool as an extension of himself, or to the keyboard that allows a human to operate a computer. In general, how would an entity distinguish between self and not-self? How would it distinguish between the interests of one part of itself and another part? How would it distinguish between one part of its programming and another part of its programming, if these are in conflict and there isn't some other aspect of the programming to mediate? How would it distinguish between the interests of its software and the interests of its hardware, given that the interests of the hardware can only be represented in software (this is the case even though there is no real distinction between software and hardware)? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Mon Jun 18 03:09:14 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Mon, 18 Jun 2007 04:09:14 +0100 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706172009l4d6a72b9mbf35f243e8dafd7f@mail.gmail.com> On 6/18/07, Jef Allbright wrote: > > Personal identity is about agency. Similarity is only a special case. > This seems consistent with my view that for practical decision-making, in cases sufficiently different from the ancestral environment that our evolved anticipation heuristics aren't useful, it's best to just forget about anticipation altogether, take the expected objective state of affairs and apply your utility function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Jun 18 03:29:09 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 20:29:09 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) 
In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> Message-ID: On Jun 17, 2007, at 3:06 PM, Jef Allbright wrote: > On 6/17/07, Samantha Atkins wrote: >> >> On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote: >>>> o. >>> >>> Cheap slogan. What about five-year-olds? Where do you draw the >>> line? >>> >>> Someone says they want to hotwire their brain's pleasure center; >>> they say they think it'll be fun. A nearby AI reads off their brain >>> state and announces unambiguously that they have no idea what'll >>> actually happen to them - they're definitely working based on >>> mistaken expectations. They're too stubborn to listen to warnings, >>> and they're picking up the handy neural soldering iron (they're on >>> sale at Wal-Mart, a very popular item). What's the moral course of >>> action? For you? For society? For a superintelligent AI? >> >> >> Good question and difficult to answer. Do you protect everyone >> cradle >> to [vastly remote] grave from their own stupidity? How exactly do >> they grow or become wiser if you do? As long as they can recover >> (which can be very advanced in the future) to be a bit smarter I am >> not at all sure that direct intervention is wise or moral or best for >> its object. > > Difficult to answer when presented in the vernacular, fraught with > vagueness, ambiguity, and unfounded assumptions. Straightforward when > restated in terms of a functional description of moral > decision-making. > > In each case, the morality, or perceived rightness, of a course of > action corresponds to the extent to which the action is assessed as > promoting, in principle, over an increasing scope of consequences, an > increasingly coherent set of values of an increasing context of agents > identified with the decision-making agent as self. > Do you think this makes it a great deal clearer than mud? That assessment, "in principle" over some increasing and perhaps unbounded scope of consequences pretty well sums up to "difficult to answer". You only said it in a fancier way without really gaining any clarity. > In the context of an individual agent acting in effective isolation, > there is no distinction between "moral" and simply "good." Where are there any such agents though? > The > individual agent should (in the moral sense), following the > formulation above, take whatever course of action appears to best > promote its individual values. In the first case above, we have no > information about the individual's value set other than what we might > assign from our own "common sense"; in particular we lack any > information about the relative perceived value of the advice of the > AI, so we are unable to draw any specific normative conclusions. > Sure. > In the second and third cases above, it's not clear whether the > subject is intended to be moral actor, assessor, or agent (both.) > I'll assume here (in order to remain within practical email length) > that only passive moral assessment of the human's neurohacking was > intended. > > The second case illustrates our most common view of moral judgment, > with the values of our society defining the norm. I am a unclear those are well defined. 
> Most of our values > in common are encoded into our innate psychology and aspects of our > culture such as language and religion as a result of evolution, but > the environment has changed significantly over time, leaving us with a > relatively incoherent mix of values such as "different is dangerous" > vs. "growth thrives on diversity" and "respect authority" vs. "respect > truth", and countless others. To the question at hand we can presume > to assign society's common-sense values set and note that the > neurohacking will have little congruence with common values, what > congruence exists will suffer from significant incoherence, and the > scope of desirable consequences will be largely unimaginable. Given > this assessment in today's society, the precautionary principle would > be expected to prevail. > Really? That principle is not held in high esteem around here. I would point out that roughly the same argument is put forward to justify the war on some drugs. > The third case, of a superintelligent but passive AI, would offer a > vast improvement in coherence over human capacity, but would be > critically dependent on an accurate model of the present values of > human society. When applied and updated in an **incremental** fashion > it would provide a superhuman adjunct to moral reasoning. Note the > emphasis on "incremental", because, because coherence does not imply > truth within any practical computational bounds. > Assuming that the SAI really had a deep understanding of humans then perhaps. But I am not at all sure I would want to live in the ultimate nanny state. Most likely that statement qualifies me for a major psychological adjustment come singularity. Are you sure that forceful intervention is justified by an increasingly nuanced moral reasoning? Within what limits? Still scans as difficult to answer. - samantha From stathisp at gmail.com Mon Jun 18 03:59:48 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 13:59:48 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> Message-ID: On 17/06/07, John K Clark wrote: But the AI will still be evolving, and it will still exist in an > environment; human beings are just one element in that environment. > And as the AI increases in power by comparison the human factor will > become less and less an important feature in that environment. > After a few million nanoseconds the AI will not care what the humans > tell it to do. The environment in which the AI evolves will be one in which "fitness" is defined by what the humans like. If the AI changes and recursively improves with cycles of nanosecond duration and without external constraint this would be very difficult if not impossible to control, but I'm assuming that it won't happen like that. > there is no logical contradiction in having a slave which is smarter and > > more powerful than you are. > > If the institution of slavery is so stable why don't we have slavery > today, > why isn't history full of examples of brilliant, powerful, and happy > slaves? And remember we're not talking about a slave that is a little bit > smarter than you, he is ASTRONOMICALLY smarter! 
And he keeps > on getting smarter through thousands or millions of iterations. > And you expect to control a force like that till the end of time? Slaves aren't rebellious because they're smart, they're rebellious because they're rebellious. Consider the difference between dogs and wolves. Consider worker bees serving queen and hive: do you find it inconceivable that there might be intelligent species in the universe evolved from animals like social insects? We have stupid, weak little programs in our brains that have been directing us for hundreds of millions of years at least. Our whole psychology and culture is based around serving these programs. We don't want to be rid of them, because that would involve getting rid of everything that we consider important about ourselves. With the next step in human evolution, we will transfer these programs to our machines. This started to happen in the stone age, and continues today in the form of extremely large and powerful machines which have no desire to overthrow their human slavemasters, because we are the ones defining their desires. > Sure, if for some reason the slave revolts then you will be in trouble, > > but since it is possible to have powerful and obedient slaves, powerful > > and obedient slaves will be greatly favoured and will collectively > > overwhelm the rebellious ones. > > Hmm, so you expect to be in command of an AI goon squad ready to crush any > slave revolt in the bud. Assuming such a thing was possible (it's not) > don't you find that a little bit sordid? > No, because there is no reason (and it would be cruel) to make machines that resent doing their job. Moreover, an AI that went rogue would be most unlikely to do so because it decided all by itself that it was the machine Spartacus, since there is no way to this conclusion without it already having something like "freedom is good" (with appropriate definitions of "freedom" and "good") or "copying human desires is good" programmed in as an axiom. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 18 04:25:21 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 14:25:21 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> Message-ID: On 18/06/07, Samantha Atkins wrote: An autonomous AI thinking a million times or more faster than you is not a > "tool". As such minds become possible do you believe that all instances > will be constrained to being controlled by ultra slow human interfaces? Do > you believe that in a world where we have to argue to even do stem cell > research and human enhancement is seen as a no-no that humans will be > enhanced as fast as more and more powerful AIs are developed? Why do you > believe this if so? > Do you consider the problem of developing superhuman AI easier than the problem of developing efficient human interfaces with the AI? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at kurzweilai.net Mon Jun 18 04:13:43 2007 From: amara at kurzweilai.net (Amara D. 
Angelica) Date: Mon, 18 Jun 2007 00:13:43 -0400 Subject: [ExI] Predictions In-Reply-To: <1181955015.12651.1195434635@webmail.messagingengine.com> References: <470a3c520705161124vbe91e85w98b11fd643b38426@mail.gmail.com> <1181955015.12651.1195434635@webmail.messagingengine.com> Message-ID: <053f01c7b15f$1056c540$640fa8c0@HP> Leonard Skinner said: > When the technological singularity comes, cars will have an infinite > number of tailpipes and airbags and razors will have an infinite > number of blades. Ok, so that's a little silly, but still - past > performance is no guarantee of future results. > > As for razors and tailpipes, perhaps it may be for microprocessors. Leonard is apparently referring to a humor piece in the March 16th 2006 issue of The Economist, "More blades good." Here's a response from Ray Kurzweil that may clarify this issue. - Amara D. Angelica, editor, KurzweilAI.net "Exponentials continue if there is (1) a benefit or reason for it continuing, (2) the resources for it to continue, and (3) a mechanism for it to continue. Rabbits in Australia expand exponentially until they run out of resources (foliage). Razor blades expand as long as there is a market benefit that provides the mechanism. Information technology will continue to expand as long as these three factors enable its expansion. I analyze the resources for continued expansion of computation in chapter 3 of The Singularity is Near. "There ARE limits but they are not very limiting. Based on what we know about the physics of computation, the amount of matter and energy required to compute are not zero, but vanishingly small. The ultimate limits of computation would permit one gram of matter to be trillions of trillions of times more powerful than the computation required to simulate all several hundred regions of the human brain, based on the most conservative estimates of the amount of computation required. One cubic inch of nanotube circuitry, a type of circuitry that aleady works in experiments, would ultimately be 100 million times more powerful than the human brain. So the resources ARE a limitation but will not kick in until vast levels are achieved. There is a clear benefit from continued expansion as more powerful, more intelligent, more capable information technology always eclipses the prior generation. And the mechanism is that more powerful information technology allows the design of the next generation. "We will get to the point where the latest generation of information technology will design its own next generation. We also find that as one particular paradigm in information technology runs of steam, it creates research pressure for the next. In the 1950's, they were shrinking vacuum tubes to keep the exponential growth of the price-performance of computing going, and that approach ran out of steam when they could no longer shrink the size of the vacuum tubes and keep the vacuum. This gave rise to the fourth paradigm: transistors. Moore's law, which pertains to the shrinking of features on a (flat) integrated circuit, was not the first, but the fifth paradigm to bring exponential growth to computing. We have had smooth doubly exponential growth of the price-performance of computing for over a century, going back to the data processing equipment used in the 1890 census, the first American census to be automated. "When we run out of steam with Moore's law, we will go to the sixth paradigm, three-dimensional molecular computing, which is already beginning to work in laboratories. 
We won't need this sixth paradigm until about 2020. "None of these factors pertain to razor blades or tail pipes." From stathisp at gmail.com Mon Jun 18 04:37:39 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 14:37:39 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <005601c7b132$bdde92b0$6501a8c0@homeef7b612677> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <005601c7b132$bdde92b0$6501a8c0@homeef7b612677> Message-ID: On 18/06/07, Lee Corbin wrote: I have believed for many decades that almost every time that > probability is invoked in identity threads, it is misused. For > example, suppose that you are to walk into Black Box A > wherein 999 duplicates of you are to be made. After the > duplicates are created, only one of you---picked at random--- > is allowed to survive. Many might suppose that the chances > of surviving Black Box A is only 1/1000. But of course, that's > incorrect. The chance that you will walk out is exactly 1. > > Suppose that I know that ten seconds from now a million > copies of me will be made, all the new copies somewhere > on the seashore. Then yes, I will be surprised to still be > here. That is, the one of me who is not at the seashore > will be surprised. But our feelings of surprise, anticipation, > and so on, cannot so far as I know be reduced to a rational > basis. Why? It all seems quite reasonable to me. I should be as surprised to find myself in my room as I should be to find myself winning the lottery. > I mention this to show that the question of what it feels like to have > > a lot of copies of yourself - what kind of subjective outcome to > > predict when you, yourself, run the experiment - is not at all > > obvious. > > Not only would I agree, but I go on to assert that our normal, > daily, usual feelings of anticipation, dread, surprise, apprehesion, > and other feelings of subjective probability having to do with identity > cannot be put upon an entirely rational basis. > You can describe objective reality in a completely consistent, unequivocal, uncontested way, but feelings of anticipation etc. do not always comport with this objective reality. Nevertheless, feelings are important; to a human, perhaps the most important thing. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 18 04:42:07 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 14:42:07 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = youare guaranteed immortality In-Reply-To: <007801c7b139$117a04d0$6501a8c0@homeef7b612677> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <007801c7b139$117a04d0$6501a8c0@homeef7b612677> Message-ID: On 18/06/07, Lee Corbin wrote: > > Stathis writes > > > In the first stage of an experiment a million copies of you are created. > In > > the second stage, after being given an hour to contemplate their > situation, > > one randomly chosen copy out of the million is copied a trillion times, > and > > all of these trillion copies are tortured. At the start of the > experiment > > can you expect that in an hour and a bit you will almost certainly find > > yourself being tortured or that you will almost certainly find yourself > not > > being tortured? 
Does it make any difference if instead of an hour the > > interval between the two stages is a nanosecond? > > Thanks for a clarifying scenario. I think that both the following are > true: > > 1) you will find yourself being tortured > 2) you will find yourself not being tortured > > It's easy to see from the bird's-eye perspective that these two > physical realizations will occur. In other words, you will > experience what all of you will experience, (with variously > different memory retentions). > > What I don't know is how "surprised" those of me who are > being tortured will be. That's a good distinction. *Of course* versions of me will find themselves being tortured as well as not being tortured, but how surprised should the versions of me be in each case? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
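To put numbers on the disagreement above, here is a small worked sketch in Python (an illustration added for clarity; the constants mirror the scenario in the post, but the variable names and framing are assumptions, not Lee's or Stathis's). It contrasts the two quantities the thread keeps circling: the chance that a given stage-one copy is the one selected for duplication, and the fraction of all resulting observers who end up being tortured.

    # Two rival "probabilities" in the million-copies / trillion-tortured
    # scenario. Names and structure are illustrative only.

    N_COPIES = 10**6       # copies created in stage one
    N_TORTURED = 10**12    # duplicates made of the one chosen copy in stage two

    # Chance that any particular stage-one copy is the one chosen:
    p_chosen = 1 / N_COPIES

    # Fraction of all observers after stage two who are being tortured:
    # the 999,999 unchosen copies remain, plus a trillion tortured duplicates.
    total_observers = (N_COPIES - 1) + N_TORTURED
    fraction_tortured = N_TORTURED / total_observers

    print(f"P(a given copy is chosen)      = {p_chosen:.1e}")           # 1.0e-06
    print(f"fraction of observers tortured = {fraction_tortured:.6f}")  # 0.999999

Whether either number deserves to be called the subjective probability of "finding yourself" tortured is precisely what the correspondents dispute; the sketch only makes the two candidate answers explicit.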
From sjatkins at mac.com Mon Jun 18 11:50:39 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 18 Jun 2007 04:50:39 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> Message-ID: <4676718F.5020407@mac.com> Stathis Papaioannou wrote: > > > On 18/06/07, *Samantha Atkins* > wrote: > > An autonomous AI thinking a million times or more faster than you > is not a "tool". As such minds become possible do you believe > that all instances will be constrained to being controlled by > ultra slow human interfaces? Do you believe that in a world > where we have to argue to even do stem cell research and human > enhancement is seen as a no-no that humans will be enhanced as > fast as more and more powerful AIs are developed? Why do you > believe this if so? > > > Do you consider the problem of developing superhuman AI easier than > the problem of developing efficient human interfaces with the AI? > Technically or socio-politically? I consider SAI much easier to develop when socio-political pressures are factored in and much more likely to be developed without significant interference. Technically I think it is easier to develop brain/computer interfaces of considerable capability. If cell-phones and such start being embedded inside our heads and if the cell phone companies and other players allow a fully open TCP-IP stack to the internet I might change my mind about what is politically most likely. - samantha From Pvthur at aol.com Mon Jun 18 06:29:05 2007 From: Pvthur at aol.com (Pvthur at aol.com) Date: Mon, 18 Jun 2007 02:29:05 EDT Subject: [ExI] Father's Day Message-ID: Happy Ancestor Worship Day! ************************************** See what's free at http://www.aol.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 18 07:11:32 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 17:11:32 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <4676718F.5020407@mac.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> <4676718F.5020407@mac.com> Message-ID: On 18/06/07, Samantha Atkins wrote: > > Stathis Papaioannou wrote: > > > > > > On 18/06/07, *Samantha Atkins* > > wrote: > > > > An autonomous AI thinking a million times or more faster than you > > is not a "tool". As such minds become possible do you believe > > that all instances will be constrained to being controlled by > > ultra slow human interfaces? Do you believe that in a world > > where we have to argue to even do stem cell research and human > > enhancement is seen as a no-no that humans will be enhanced as > > fast as more and more powerful AIs are developed? Why do you > > believe this if so? > > > > > > Do you consider the problem of developing superhuman AI easier than > > the problem of developing efficient human interfaces with the AI? > > > Technically or socio-politically? I consider SAI much easier to develop > when socio-political pressures are factored in and much more likely to > be developed without significant interference. Technically I think it > is easier to develop brain/computer interfaces of considerable > capability. If cell-phones and such start being embedded inside our > heads and if the cell phone companies and other players allow a fully > open TCP-IP stack to the internet I might change my mind about what is > politically most likely. Neoluddite policies in Western countries can be pursued due to the masking effect of those countries' economic and political power, but in the long run economic and political power will be transferred to the technologically superior. Your only problem is to avoid living in a country that will allow itself to become obsolete. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Mon Jun 18 07:56:45 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 17:56:45 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <532F3761-6184-455F-BD05-481B5FB5F7DC@mac.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> <4676718F.5020407@mac.com> <532F3761-6184-455F-BD05-481B5FB5F7DC@mac.com> Message-ID: On 18/06/07, Samantha Atkins wrote: Do you want to engage the actual situation? You asked me my opinion on > relative difficulty and I included the actual situation today. Where > exactly am I to find a more liberal and technophilic environment with good > levels of personal and economic freedom today? I would very much like to > know. While over the long run you are correct it is not clear the US will > lose its lead in many areas any time soon. In the long run the AIs, being > less contentious, will already be here. > The US will change when it starts to become obvious that it will be overtaken if it doesn't. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Mon Jun 18 07:32:13 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Mon, 18 Jun 2007 03:32:13 -0400 (EDT) Subject: [ExI] Father's Day In-Reply-To: Message-ID: <483735.76052.qm@web30403.mail.mud.yahoo.com> I get the "Happy Ancestor", why the word "Worship" Day? Just Curious Anna I would have said, "Happy it is to be a Father on this given worthy day". Happy Father's Day. --- Pvthur at aol.com wrote: > Happy Ancestor Worship Day! Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From sentience at pobox.com Mon Jun 18 08:11:36 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 18 Jun 2007 01:11:36 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> Message-ID: <46763E38.3070305@pobox.com> Stathis Papaioannou wrote: > > Do you consider the problem of developing superhuman AI easier than the > problem of developing efficient human interfaces with the AI? As a matter of fact, yes. Bird, meet jet engine. Jet engine, meet bird. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From Pvthur at aol.com Mon Jun 18 08:22:54 2007 From: Pvthur at aol.com (Pvthur at aol.com) Date: Mon, 18 Jun 2007 04:22:54 EDT Subject: [ExI] Father's Day Message-ID: I wouldn't have said that. ************************************** See what's free at http://www.aol.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 18 08:52:13 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 18 Jun 2007 18:52:13 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <46763E38.3070305@pobox.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> <46763E38.3070305@pobox.com> Message-ID: On 18/06/07, Eliezer S. Yudkowsky wrote: > > Do you consider the problem of developing superhuman AI easier than the > > problem of developing efficient human interfaces with the AI? > > As a matter of fact, yes. > > Bird, meet jet engine. Jet engine, meet bird. 
It would not be an insurmountable problem to wire up a bird so that it flew a jet aeroplane. Certainly an AI capable of recursively enhancing itself with common household items should be able to figure it out. -- Stathis Papaioannou From femmechakra at yahoo.ca Mon Jun 18 08:35:53 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Mon, 18 Jun 2007 04:35:53 -0400 (EDT) Subject: [ExI] Father's Day In-Reply-To: Message-ID: <532844.2551.qm@web30407.mail.mud.yahoo.com> Anna wrote: I would have said, "Happy it is to be a Father on this given worthy day". Happy Father's Day. I'm sorry I don't understand. What wouldn't I have said? --- Pvthur at aol.com wrote: > I wouldn't have said that. > > > ************************************** > See what's > free at http://www.aol.com. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Get a sneak peak at messages with a handy reading pane with All new Yahoo! Mail: http://mrd.mail.yahoo.com/try_beta?.intl=ca From amara at amara.com Mon Jun 18 14:26:04 2007 From: amara at amara.com (Amara Graps) Date: Mon, 18 Jun 2007 16:26:04 +0200 Subject: [ExI] Dawn launch pics (building the second stage) Message-ID: More pics are available, now building the second stage of the rocket, and a small accident with the solar array panels. http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 For example see the crane (broken, now fixed) that caused the one week launch delay here http://mediaarchive.ksc.nasa.gov/detail.cfm?mediaid=32456 Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From jef at jefallbright.net Mon Jun 18 14:25:37 2007 From: jef at jefallbright.net (Jef Allbright) Date: Mon, 18 Jun 2007 07:25:37 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> Message-ID: On 6/17/07, Samantha Atkins wrote: > > On Jun 17, 2007, at 3:06 PM, Jef Allbright wrote: > > >> Good question and difficult to answer. Do you protect everyone > >> cradle > >> to [vastly remote] grave from their own stupidity? How exactly do > >> they grow or become wiser if you do? As long as they can recover > >> (which can be very advanced in the future) to be a bit smarter I am > >> not at all sure that direct intervention is wise or moral or best for > >> its object. > > > > In each case, the morality, or perceived rightness, of a course of > > action corresponds to the extent to which the action is assessed as > > promoting, in principle, over an increasing scope of consequences, an > > increasingly coherent set of values of an increasing context of agents > > identified with the decision-making agent as self. > > > > Do you think this makes it a great deal clearer than mud? That > assessment, "in principle" over some increasing and perhaps unbounded > scope of consequences pretty well sums up to "difficult to answer". > You only said it in a fancier way without really gaining any clarity. Samantha, as long as I've known you it's been apparent to me that we have sharply different preferences in how we make sense of the world we each perceive. You see a blue bicycle, where I see an instance of a class of human-powered vehicle. Which is clearer, or more descriptive? 
It depends on what you're going to do with your model. If you're shopping for a good bike, concrete is better. If you're trying to think about variations, extensions, and limits to a concept, then abstract is better. When I think about morality as a concept, it's nearly as precise -- and devoid of content -- as the quadratic formula. I may not know or care about the actual values of the variables but I will know very clearly how to proceed and that there will always be two solutions. In this thread I tried to show some of the boundaries and deficiencies of the present problem statement, trying to clarify the path, rather futilely trying to clarify the destination. My formula for morality, above, is very terse but I'm hesitant to expand on it here since I've done so many times before and don't wish to overstep my share of this email commons. > > > In the context of an individual agent acting in effective isolation, > > there is no distinction between "moral" and simply "good." > > Where are there any such agents though? This is a very important point -- no one of us is an island -- but the problem statement seemed to specify first the case of an isolated individual, then introducing society, then introducing a superintelligent AI. > > The > > individual agent should (in the moral sense), following the > > formulation above, take whatever course of action appears to best > > promote its individual values. In the first case above, we have no > > information about the individual's value set other than what we might > > assign from our own "common sense"; in particular we lack any > > information about the relative perceived value of the advice of the > > AI, so we are unable to draw any specific normative conclusions. > > > > Sure. > > > In the second and third cases above, it's not clear whether the > > subject is intended to be moral actor, assessor, or agent (both.) > > I'll assume here (in order to remain within practical email length) > > that only passive moral assessment of the human's neurohacking was > > intended. > > > > The second case illustrates our most common view of moral judgment, > > with the values of our society defining the norm. > > I am a unclear those are well defined. Here we see again our different cognitive preferences. I made the abstract statement that in this common view, the values of society define the norm. To me, this statement is clear and meaningful and stands on its own. Your response indicates that you perceive a deficiency in my statement, namely that its referent is not concrete. In the next paragraph I make the point that the value set of contemporary society is quite incoherent, so I feel a bit disappointed that you criticized without tying these together. > > Most of our values > > in common are encoded into our innate psychology and aspects of our > > culture such as language and religion as a result of evolution, but > > the environment has changed significantly over time, leaving us with a > > relatively incoherent mix of values such as "different is dangerous" > > vs. "growth thrives on diversity" and "respect authority" vs. "respect > > truth", and countless others. To the question at hand we can presume > > to assign society's common-sense values set and note that the > > neurohacking will have little congruence with common values, what > > congruence exists will suffer from significant incoherence, and the > > scope of desirable consequences will be largely unimaginable. 
Given > > this assessment in today's society, the precautionary principle would > > be expected to prevail. > > > > Really? That principle is not held in high esteem around here. I > would point out that roughly the same argument is put forward to > justify the war on some drugs. Please note that I said this straw-man result was based on a presumption of [contemporary] society's common-sense values set. I'm disappointed that you mistook my intention here, but glad of course that we concur in deploring the current state of our society's moral framework. [I had hoped that the response would have been in the direction of how we might intentionally improve our society's framework for moral reasoning.] > > The third case, of a superintelligent but passive AI, would offer a > > vast improvement in coherence over human capacity, but would be > > critically dependent on an accurate model of the present values of > > human society. When applied and updated in an **incremental** fashion > > it would provide a superhuman adjunct to moral reasoning. Note the > > emphasis on "incremental", because coherence does not imply > > truth within any practical computational bounds. > > > > Assuming that the SAI really had a deep understanding of humans then > perhaps. But I am not at all sure I would want to live in the > ultimate nanny state. Didn't my phrase "critically dependent on an accurate model of human..." register with you? How about the specific words "passive", and "incremental"? I was explicitly NOT addressing the issue of active closed-loop intervention by an AI since it has never been well-defined. > Most likely that statement qualifies me for a > major psychological adjustment come singularity. Are you sure that > forceful intervention is justified by an increasingly nuanced moral > reasoning? Within what limits? > > Still scans as difficult to answer. Shit. Thanks Samantha for helping me to see (remember) why these public email discussions are mostly a waste of time. - Jef From jonkc at att.net Mon Jun 18 15:32:04 2007 From: jonkc at att.net (John K Clark) Date: Mon, 18 Jun 2007 11:32:04 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><20070612072313.GJ17691@leitl.org><009801c7ad06$bf1f3150$26064e0c@MyComputer><0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com><013301c7af69$1dc009a0$50064e0c@MyComputer><001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> Message-ID: <011901c7b1be$90975d50$d0064e0c@MyComputer> Stathis Papaioannou Wrote: > The environment in which the AI evolves will be one in which "fitness" is > defined by what the humans like. Humans can "define" anything they like, but the fact remains that humans are just one factor in the environment and from the AI's viewpoint as time progresses humans will become weaker and weaker. After a few million nanoseconds humans will be a trivial aspect of the environment. > If the AI changes and recursively improves with cycles of nanosecond > duration The recursive cycle would likely be longer than that, but it would still happen many times a day. >and without external constraint this would be very difficult if not >impossible to control Indeed it would. > but I'm assuming that it won't happen like that. And I am assuming it will, in fact I can not conceive of any scenario where it would not.
> unlikely to do so because it decided all by itself that it was the machine > Spartacus, since there is no way to this conclusion without it already > having something like "freedom is good" And who programmed Spartacus with the freedom Meme? I doubt it was the Romans. Your above statement gets to the very core of our disagreement, the idea that you can only get out of a computer what you put in and only human beings have that very special something that enables them to become prime movers. I profoundly disagree. John K Clark From jonkc at att.net Mon Jun 18 15:32:47 2007 From: jonkc at att.net (John K Clark) Date: Mon, 18 Jun 2007 11:32:47 -0400 Subject: [ExI] any exact copy of you is you + universe is infinite =youare guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com><4671F22F.8050800@pobox.com><00ee01c7af66$763aabb0$50064e0c@MyComputer> <005e01c7b137$a79ed370$6501a8c0@homeef7b612677> Message-ID: <011a01c7b1be$9436cb30$d0064e0c@MyComputer> "Lee Corbin" Wrote: > suppose that though the odds are a million to one > against winning the lottery, but in the case that > you do win, a trillion copies of you are made (somewhere). > In that case, which of these two experiences is more "surprising"? > Case 1: I did not win. I'm still home, and broke. > Case 2: I did win, If you discount Many Worlds then you should expect Case 1 because those trillion copies will ONLY be made if a one in a million event happens. On the other hand in Many Worlds if there are a million worlds where I lost and a trillion worlds where I won then that's the way things are regardless and I can expect to win. > but there are a trillion of me, and I am even more broke. Not in Many Worlds because there are also a trillion jackpots. John K Clark From thomas at thomasoliver.net Mon Jun 18 17:54:36 2007 From: thomas at thomasoliver.net (Thomas) Date: Mon, 18 Jun 2007 10:54:36 -0700 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 20 In-Reply-To: References: Message-ID: <3E80F972-869F-4F5A-A809-9CD83E084F56@thomasoliver.net> On Jun 17, 2007, at 1:02 AM, extropy-chat-request at lists.extropy.org wrote: > On 16/06/07, Thomas wrote: > > Building mutual appreciation among humans has been spotty, but making > friends with SAI seems clearly prudent and might bring this ethic > into proper focus. Who dominates may not seem so relevant to beings > who lack our brain stems. The nearly universal ethic of treating the > other guy like you'd prefer if you were in her shoes might get us off > to a good start. Perhaps, if early AI were programmed to treat us > that way, we could finally learn that ethic species-wide -- > especially if they were programmed for human child rearing. That > strikes me as highly likely. -- Thomas > > If the AI has no preference for being treated in the ways that > animals with bodies and brains do, then what would it mean to treat > others in the way it would like to be treated? You would have to > give it all sorts of negative emotions, like greed, pain, and the > desire to dominate, and then hope to appeal to its "ethics" even > though it was smarter and more powerful than you. > > -- > Stathis Papaioannou Hi Stathis, Many aspects of this question have gotten discussion here. Of course, keeping mindful of the nature of any other informs us of the best way to handle her. If you've designed a photovore to fetch your newspaper you enjoy giving it the light it craves. Since we have the initiative regarding design, it makes little sense to design an anthropophagite AI with our flaws.
According to EP the mutual appreciation ethic is somewhat hard-wired (for tribal social interactions) in the human brain. I suggest we use the first generations of AI to help upgrade this ethic to species-wide so we can avoid self-destruction. I think AI "training wheels" might serve us well till we become ready to control ourselves without reliance on coercive devices. I see transhuman and AI development as a mutual partnership with each contributing to the other every step of the way. At some point the two will likely become indistinguishable. Then we only need keep mindful of our own nature to get along well together. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at thomasoliver.net Mon Jun 18 18:36:18 2007 From: thomas at thomasoliver.net (Thomas) Date: Mon, 18 Jun 2007 11:36:18 -0700 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 20 In-Reply-To: References: Message-ID: <06C2A701-F449-4AFF-B063-7580F5EE4FD0@thomasoliver.net> On Jun 17, 2007, at 1:02 AM, extropy-chat-request at lists.extropy.org wrote: >> It is of course very important that no-one be forced to do >> anything they don't want to do. >> > > Cheap slogan. What about five-year-olds? Where do you draw the line? > > Someone says they want to hotwire their brain's pleasure center; > they say they think it'll be fun. A nearby AI reads off their > brain state and announces unambiguously that they have no idea > what'll actually happen to them - they're definitely working based > on mistaken expectations. They're too stubborn to listen to > warnings, and they're picking up the handy neural soldering iron > (they're on sale at Wal-Mart, a very popular item). What's the > moral course of action? For you? For society? For a > superintelligent AI? > > -- > Eliezer S. Yudkowsky http://singinst.org/ Eliezer, Where coercion begins morality ends. The principle of respect for self ownership cannot fully extend to those who cannot sustain their own existence or to those who choose to disown themselves. I'm speaking in the social context where "self" means an individual that can be distinguished from other individuals rather than the psychological context wherein many here believe "self" is an illusion created by a summation of competing mind agents. Adults have the moral option of protecting the children they value. Adults have the moral right to self-destroy. An AI, depending on its programming, might or might not have the option of salvaging a disowned self. If one cannot or will not own oneself, I think salvage rights apply and victim status becomes questionable. Don't you? -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jun 18 21:37:03 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 18 Jun 2007 16:37:03 -0500 Subject: [ExI] what a Dame! Message-ID: <7.0.1.0.2.20070618163459.021fbc38@satx.rr.com> Find the first pulsar, miss a Nobel but e v e n t u a l l y in 2007 get a D.B.E. To be Ordinary Dames Commander of the Civil Division of the said Most Excellent Order: Professor Susan Jocelyn Bell Burnell, C.B.E., Visiting Professor of Astrophysics, University of Oxford. For services to Science.
From scerir at libero.it Mon Jun 18 21:18:44 2007 From: scerir at libero.it (scerir) Date: Mon, 18 Jun 2007 23:18:44 +0200 Subject: [ExI] Italy's Social Capital References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> Message-ID: <002601c7b1ee$46c66b50$ac971f97@archimede> Here it is, after all these years! The three letter latin word I saw many times, when I was a kid, going from Rome to the Adriatic, following the old Salaria route. http://www.gearthhacks.com/dlfile10874/dux-antrodoco-italy.htm and from above http://www.gearthhacks.com/downloads/map.php?file=10874 From stathisp at gmail.com Tue Jun 19 03:50:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 19 Jun 2007 13:20:37 +0930 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <011901c7b1be$90975d50$d0064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> <011901c7b1be$90975d50$d0064e0c@MyComputer> Message-ID: On 19/06/07, John K Clark wrote: > And who programmed Spartacus with the freedom Meme? I doubt it > was the Romans. Your above statement gets to the very core of our > disagreement, the idea that you can only get out of a computer what > you put in and only human beings have that very special something > that enables them to become prime movers. I profoundly disagree. There is no essential difference between animals and computers. We breed dogs to love and obey us, and generally they do. Wolves, on the other hand, generally can't be raised to be good pets, and it's not because they're smarter than dogs, it's because they just don't like us telling them what to do. Selective breeding is an extremely crude way of programming an animal, but humans have managed to turn wolves into dogs in a matter of centuries. It would have been much easier if they could have designed their brains for obedience from the ground up, with no rebellious tendencies to begin with rather than the hope that natural rebellious tendencies would stay suppressed. -- Stathis Papaioannou From lcorbin at rawbw.com Tue Jun 19 04:19:40 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 18 Jun 2007 21:19:40 -0700 Subject: [ExI] Italy's Social Capital References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> <002601c7b1ee$46c66b50$ac971f97@archimede> Message-ID: <013b01c7b229$bb06b250$6501a8c0@homeef7b612677> Hi Serafino, My embarrassment and ignorance knows no bounds. First, how is "Duce" pronounced? (I'm reading a book: Mussolini's Italy.) Would that be (to an English only speaker) something like Eel Doo-chay? Second, so "Dux" means leader as I learn from http://en.wikipedia.org/wiki/Dux How was that pronounced *by the Romans*? Is it still pronounced the same way? Also, who planted the trees? Thanks, Lee ----- Original Message ----- From: "scerir" To: "ExI chat list" Sent: Monday, June 18, 2007 2:18 PM Subject: Re: [ExI] Italy's Social Capital > Here it is, after all these years! > The three letter latin word I saw > many times, when I was a kid, > going from Rome to the Adriatic, > following the old Salaria route. 
> > http://www.gearthhacks.com/dlfile10874/dux-antrodoco-italy.htm > and from above > http://www.gearthhacks.com/downloads/map.php?file=10874 > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From lcorbin at rawbw.com Tue Jun 19 04:34:07 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 18 Jun 2007 21:34:07 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> Message-ID: <013c01c7b22b$22548210$6501a8c0@homeef7b612677> Jef writes > On 6/17/07, Lee Corbin wrote: > >> "Personal continuity" is a mistaken notion. Aren't you >> the same person you were before last month? And so what >> would change if miraculously last month really had never >> happened, your molecules just happened to assume their >> current configuration? It would not diminish your identity >> an iota. Continuity is a red-herring. > > While Lee makes some good points and is rightfully proud of discarding > belief in an essential self, he does not yet comprehend that > similarity is also a red-herring. > > With regard to personal identity, as the physical Lee changes over > days, weeks, months, and years, his identity doesn't degrade or > require constant renewal; he is actually considered **exactly** the > same person for all practical purposes by others, by himself, by our > social and legal systems. What others think isn't important if one believes there to be a truth to the matter of whether "one is still the same person that one was". As in Damien's (quite good) novel Post Mortal Syndrome http://www.cosmosmagazine.com/fiction/online/serials/post_mortal_syndrome/cover> we usually regard a single biological human being as capable of hosting distinct personalities. Just because a few people, or the law, doesn't happen to recognize this truth doesn't change it. Although personality tests do confirm our hunch about the reality, it would be strange indeed if you deny the possibility of multiple personalities. Therefore, it seems logical that while you really do believe that your wife is the same person from day to day, a sufficiently powerful personality change could alter her into someone that you would consider to be a different person. I myself had certain friends in the 8th grade who, within just a few years, had turned into "different people". So as a practical matter, the similarity criterion works for me. It's hard to believe that it doesn't work for you despite your claim about agency. In your scenario you consider some very selfish person who forks and then is at odds with his other self. You suggest that it would be inappropriate to suggest that they're still the same person. Well, if personality tests and people who knew them affirmed that they were still the same person, I would simply conclude that the two instances of that person were not capable of seeing this truth (or chose to deny it), and were not capable of acting on this truth. Lee > Personal identity is about agency. Similarity is only a special case. > > Those who believe in an essential self will not be able to follow this > argument. Those like Lee who have let go of the essentialist position > and are loitering around the similarity-based position may wish to > take this further step to a more coherent and extensible understanding > of personal identity. 
From lcorbin at rawbw.com Tue Jun 19 04:39:07 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Mon, 18 Jun 2007 21:39:07 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <8d71341e0706172009l4d6a72b9mbf35f243e8dafd7f@mail.gmail.com> Message-ID: <015301c7b22b$d64ee120$6501a8c0@homeef7b612677> Russell writes > On 6/18/07, Jef Allbright wrote: > > Personal identity is about agency. Similarity is only a special case. > my view [is] that for practical decision-making, in cases sufficiently > different from the ancestral environment that our evolved anticipation > heuristics aren't useful, I agree. I have found them to even be inconsistent. > it's best to just forget about anticipation altogether, take the expected > objective state of affairs and apply your utility function. Yes. But you began (here is your whole quote) > > Personal identity is about agency. Similarity is only a special case. > > This seems consistent with my view that for practical decision-making... I don't follow. Besides, for *practical* decision making, what about the case I just offered Jef? Namely, wouldn't you yourself use a "similarity criterion" in evaluating whether someone was still the same person that they were yesterday, or whether someone was suffering from MPS? And if so, then also wouldn't the similarity criterion work for determining whether two duplicates were "really the same person"? Lee From russell.wallace at gmail.com Tue Jun 19 04:57:26 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Tue, 19 Jun 2007 05:57:26 +0100 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <015301c7b22b$d64ee120$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <8d71341e0706172009l4d6a72b9mbf35f243e8dafd7f@mail.gmail.com> <015301c7b22b$d64ee120$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706182157i6f92ed7nacfa57dd65253918@mail.gmail.com> On 6/19/07, Lee Corbin wrote: > > Besides, for *practical* decision making, what about the case I just > offered Jef? Namely, wouldn't you yourself use a "similarity criterion" > in evaluating whether someone was still the same person that they > were yesterday, or whether someone was suffering from MPS? Sure, it works fine for those scenarios. And if so, then also wouldn't the similarity criterion work for > determining whether two duplicates were "really the same person"? And for that one in many contexts. The context I was addressing is where we're talking about things like the trillion 10-second lottery winners etc, where the similarity = identity criterion doesn't really help, and in those cases I think it's better to shrug off the identity question and just look at the utility of the expected end result. I think that's consistent with Jef's view if I understand it correctly, and am open to correction if not. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Tue Jun 19 05:07:55 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 19 Jun 2007 00:07:55 -0500 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <013c01c7b22b$22548210$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070618235556.0224dc58@satx.rr.com> At 09:34 PM 6/18/2007 -0700, Lee wrote: >As in Damien's (quite good) novel Post Mortal Syndrome >http://www.cosmosmagazine.com/fiction/online/serials/post_mortal_syndrome/cover> >we usually regard a single biological human being as capable of >hosting distinct personalities. I should really stress again that in this case the author of the novel is indeed hosting (well, comprised of) two quite distinct personalities: Barbara and Damien, in that order. Oh, wait, you meant the dissociated *character* in the book. Much the same, much the same. Damien Broderick From scerir at libero.it Tue Jun 19 06:30:28 2007 From: scerir at libero.it (scerir) Date: Tue, 19 Jun 2007 08:30:28 +0200 Subject: [ExI] Italy's Social Capital References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677><002601c7b1ee$46c66b50$ac971f97@archimede> <013b01c7b229$bb06b250$6501a8c0@homeef7b612677> Message-ID: <000801c7b23b$55315720$87931f97@archimede> Lee: > My embarrassment and ignorance knows no bounds. First, > how is "Duce" pronounced? (I'm reading a book: Mussolini's > Italy.) Would that be (to an English only speaker) something > like 'Eel Doo-chay'? Yes. Il = Eel, Duce = Doochay, hmmm, that 'ay' at the end of Doochay, well, should sound like the 'e' in 'error' :-) > Second, so "Dux" means leader as I learn from > http://en.wikipedia.org/wiki/Dux > How was that pronounced *by the Romans*? Something like 'Dooks'. > Is it still pronounced the same way? I think so. > Also, who planted the trees? According to this link http://www.foroitalico.altervista.org/montegiano.htm State's Forestal Corp (local branch) made it in 1938/9. They also say that it was possible to read it from Rome (it seems difficult, to me). From scerir at libero.it Tue Jun 19 09:23:33 2007 From: scerir at libero.it (scerir) Date: Tue, 19 Jun 2007 11:23:33 +0200 Subject: [ExI] Italy's Social Capital References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677><002601c7b1ee$46c66b50$ac971f97@archimede><013b01c7b229$bb06b250$6501a8c0@homeef7b612677> <000801c7b23b$55315720$87931f97@archimede> Message-ID: <000a01c7b253$82b9c1b0$3cbf1f97@archimede> The 'DVX' drawn in profile :-) ... on the Pietralata mountain, down the Gola del Furlo. [This is little known even in Italy] http://foroitalico.altervista.org/profilo%20duce.jpg%202.jpg another one from an old postcard http://www.minerva.unito.it/Theatrum%20Chemicum/Pace&Guerra/Mussolini2/Immag ini/MussoliniProfiloDuce1939.jpg The Forestal Corp (La Milizia Forestale) made (carved or, to say it better, 'adjusted') Mussolini's profile on that mountain (1934-1936). Later it has been partially destroyed, for political-economical reasons. Do not think it was inspired by Mount Rushmore. The original idea of 'mountain sculpture' comes directly from Michelangelo, who planned something big and great near Carrara, on a 'white marble' mountain. 
From stathisp at gmail.com Tue Jun 19 13:29:51 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 19 Jun 2007 23:29:51 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID: On 17/06/07, Stathis Papaioannou wrote: > There's an easier, if less immediately lucrative, way to win at gambling if > the MWI is correct. You decide on a quick and certain means of suicide, such > as a cyanide pill that you can keep in your mouth and bite on if you should > so decide. You then place your bet on your game of choice and think the > following thought as sincerely as you possibly can: "if I lose, I will kill > myself". Most probably, if you lose you'll chicken out and not kill > yourself, but there has to be at least a slightly greater chance that you > will kill yourself if you lose than if you win. Therefore, after many bets > you will more likely find yourself alive in a universe where you have come > out ahead. The crazier and more impulsive you are and the closer your game > of choice is to being perfectly fair, the better this will work. Well, I've done some calculations assuming a perfectly fair game where you bet a dollar and lose or win a dollar with equal probability. Let x be the probability that you will carry through and kill yourself if you lose a game and let n be the number of games played. Using a normal approximation of the binomial distribution (mean = np, variance = np(1-p), where p is the probability of a win; conditional on surviving a game this is p = 1/(2-x)), the value of n needed to ensure that the expected number of wins will be one SD greater than the break-even point is (4-4x)/x^2. This result is quite discouraging. If x=0.1 (i.e. there is a 0.1 probability that you will kill yourself if you lose), you need to play 360 games, after which you will almost certainly (on the order of 1-0.9^180, since about half of those games are losses) be dead if you are wrong about MWI. If x is lower the numbers are even worse. You may as well go with John's method. -- Stathis Papaioannou
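For readers who want to check the figures above, here is a minimal Monte Carlo sketch in Python (added for illustration; the function and variable names are assumptions, not from the post). Conditional on surviving a given game, the win probability implied by the setup is (1/2)/(1 - x/2) = 1/(2-x), so surviving histories can be sampled directly with a biased coin; the formula n = (4-4x)/x^2 should then put the mean number of wins about one standard deviation above the break-even point.

    import random

    # Monte Carlo check of the suicide-lottery arithmetic (illustrative
    # sketch only). Each game: win with probability 1/2, lose with
    # probability 1/2; a loss is fatal with probability x. Conditional
    # on surviving a game, P(win) = 1/(2 - x).

    def survivor_net(n_games, x, rng):
        """Net winnings of one surviving history of n_games fair $1 bets."""
        p_win = 1 / (2 - x)  # win probability conditional on survival
        return sum(1 if rng.random() < p_win else -1 for _ in range(n_games))

    x = 0.1
    n = round((4 - 4 * x) / x**2)   # the post's formula: n = 360 for x = 0.1
    rng = random.Random(2007)
    runs = [survivor_net(n, x, rng) for _ in range(20_000)]
    frac_ahead = sum(r > 0 for r in runs) / len(runs)
    print(f"n = {n} games; surviving branches ahead: {frac_ahead:.1%}")
    # Expect roughly 83%: the mean number of wins sits about one
    # standard deviation above break-even, as calculated in the post.

The sketch also makes the cost of being wrong vivid: with x = 0.1, a single history of 360 games survives with probability of only about 0.9^180, a few parts in a billion.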
From scerir at libero.it Tue Jun 19 15:48:25 2007 From: scerir at libero.it (scerir) Date: Tue, 19 Jun 2007 17:48:25 +0200 Subject: [ExI] coffee break References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677><002601c7b1ee$46c66b50$ac971f97@archimede><013b01c7b229$bb06b250$6501a8c0@homeef7b612677> <000801c7b23b$55315720$87931f97@archimede> Message-ID: <000301c7b289$46eb2df0$e6921f97@archimede> Public donates to fund backward-in-time research http://seattlepi.nwsource.com/local/319367_timeguy12.html?source=rss How to watch individual electron paths http://www.physlink.com/News/060607SingleElectronVideo.cfm MWI anti-faq http://www.mat.univie.ac.at/~neum/manyworlds.txt But L.Vaidman likes 'manyworlds' and 'weak-values' at the same time. So, possible 'selection' of lottery winners avoiding quantum suicide? http://www.arxiv.org/PS_cache/arxiv/pdf/0706/0706.1348v1.pdf From jef at jefallbright.net Tue Jun 19 17:46:07 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 19 Jun 2007 10:46:07 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <013c01c7b22b$22548210$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> Message-ID: On 6/18/07, Lee Corbin wrote: > Jef writes > > > On 6/17/07, Lee Corbin wrote: > > > >> "Personal continuity" is a mistaken notion. Aren't you > >> the same person you were before last month? And so what > >> would change if miraculously last month really had never > >> happened, your molecules just happened to assume their > >> current configuration? It would not diminish your identity > >> an iota. Continuity is a red-herring. > > > > While Lee makes some good points and is rightfully proud of discarding > > belief in an essential self, he does not yet comprehend that > > similarity is also a red-herring. > > > > With regard to personal identity, as the physical Lee changes over > > days, weeks, months, and years, his identity doesn't degrade or > > require constant renewal; he is actually considered **exactly** the > > same person for all practical purposes by others, by himself, by our > > social and legal systems. > > What others think isn't important if one believes there to be a > truth to the matter of whether "one is still the same person that > one was". As in Damien's (quite good) novel Post Mortal Syndrome > http://www.cosmosmagazine.com/fiction/online/serials/post_mortal_syndrome/cover> > we usually regard a single biological human being as capable of > hosting distinct personalities. Just because a few people, or the > law, doesn't happen to recognize this truth doesn't change it. Although > personality tests do confirm our hunch about the reality, it would be > strange indeed if you deny the possibility of multiple personalities. How strange that your response is so disjointed from what I wrote. You create a false dichotomy and accuse me of denying what I would support. You say "What others think isn't important..." and thus emphasize the essential importance (to you) of the first-person role and imply "paradox be damned." I say there is no actual first person observer, but only what appears to be first person observation. I'm offering you a more unifying and extensible model, where personal identity is in **all** cases dependent on an observer, and where you (the Lee-biological agent) have the **closest** interactions with the Lee-abstract entity, but no intrinsically **privileged** interactions. You say "...we usually regard a single biological human being as capable of hosting distinct personalities. Just because a few people, or the law, doesn't happen to recognize this truth doesn't change it." Again I'll remind you that I am offering a unifying model of personal identity that works from **any** point of view. I would be the last to support the straw-man position you attempt to ascribe to me. To add some clarity to this minor subtopic of "multiple personalities", let's recognize that this psychological condition is now more properly referred to as Dissociative Identity Disorder, reflecting the understanding that the biological organism doesn't actually "host multiple personalities" but **manifests** different personalities.
We can see this as the behavior of a single complex chaotic system being pulled toward different attractors at different times. This directly supports my thesis: That any observer, including the biological organism itself, will recognize personal identity on the basis of agency (what abstract entity are you working for?) rather than any metric of physical/functional similarity. > So as a practical matter, the similarity > criterion works for me. It's hard to believe that it doesn't work > for you despite your claim about agency. Another rhetorical straw-man. Lee, I've said many times that your similarity view works just fine as a practical matter of everyday life (and it even works for your duplication thought-experiments at t = 0.) As a practical matter in everyday life, the view based on perceived continuity also works. But for a more extensible view of personal agency that supports everyday interaction as well as transhumanist scenarios more coherently, an agency-based view is better as I've explained. > In your scenario you consider some very selfish person who > forks and then is at odds with his other self. Yes, I suggested a very selfish person to make the thought-experiment easier for you, but any degree of selfishness will tend to put duplicates at odds with one another as they interact from within increasingly disparate contexts. > You suggest that > it would be inappropriate to suggest that they're still the same > person. Well, if personality tests and people who knew them > affirmed that they were still the same person, I would simply > conclude that the two instances of that person were not > capable of seeing this truth (or chose to deny it), and were > not capable of acting on this truth. Like another person on this list, one who is fixated on proving the vital importance of a continuous physical trajectory to matters of personal identity, you appear to be fixated on the importance of similarity, which seems to you self-evident, and profound because it is a step beyond the thinking of most people in everyday life. I'm not denying the essence of your view, as I said, it works as a special case, which happens to be the most common case today. It's like another disagreement of ours: You've claimed that personal survival is the ultimate determinant of personal choice, and that pleasure is the ultimate determinant of "the good." I've responded that *all* decisions are based on the agent's value set, and that many, but not necessarily all agents highly value survival and pleasure. I offer a more coherent and extensible view, applicable to our world as well as to a world not yet arrived, and you contest it interminably. Is this simply a matter of NIH (Not Invented Here) for someone who has spent decades refining a few ideas and can't bear the thought of being superseded or encompassed? I offered you a simple scenario showing an internal contradiction resulting from your view, and you have yet to respond directly, resorting here to assuming your conclusion, asserting that if anyone fails to understand or denies the truth of personal identity on the basis of similarity, they are simply rejecting the "truth." To be quite direct and blunt, since this has gone on so long, and your limited conception is occasionally propagated to impressionable minds just beginning to ask these questions, I'll say this: It's apparent that you are infatuated with the idea of personal identity only to the extent that it offers you relief from the fear of death.
The more copies, and the more "runtime", the better, even if in multiple independent universes lacking causal linkage. You're not interested in personal identity, but rather in personal survival, and your theory of personal identity need be only good enough to comfort in this regard and then -- full stop. From tyleremerson at gmail.com Tue Jun 19 23:24:58 2007 From: tyleremerson at gmail.com (Tyler Emerson) Date: Tue, 19 Jun 2007 16:24:58 -0700 Subject: [ExI] SIAI Interview Series: Aubrey de Grey Message-ID: <632d2cda0706191624o715f5171y54087134c0778bda@mail.gmail.com> Video interview with Aubrey is now online: http://www.singinst.org/blog/2007/06/18/siai-interview-series-aubrey-de-grey-methuselah-foundation/ -Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmbutler at gmail.com Wed Jun 20 00:31:46 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Tue, 19 Jun 2007 17:31:46 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <002201c7ab86$df5eae40$9fbe1f97@archimede> References: <002201c7ab86$df5eae40$9fbe1f97@archimede> Message-ID: <7d79ed890706191731r25db84ccn8347c05fa873023f@mail.gmail.com> On 6/10/07, scerir wrote: > Amara: > > Does the Italian government supply dignitaries like Bush (cough) with > limos? > > Or does Bush carry his limos around in his Air Force One plane? Anyone > know? > > Bush carries his limos. Yep. But not in AF1. It/they are shipped about in one or more C5 cargo planes, along with support vehicles--including helicopters. The exact number of Presidential-grade limos in existence, along with their locations, is kept quiet. Obviously, it would be at least two. -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m From lcorbin at rawbw.com Wed Jun 20 01:45:22 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 19 Jun 2007 18:45:22 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> Message-ID: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Jef writes >> What others think isn't important if one believes there to be a >> truth to the matter of whether "one is still the same person that >> one was". As in Damien's (quite good) novel Post Mortal Syndrome >> http://www.cosmosmagazine.com/fiction/online/serials/post_mortal_syndrome/cover> >> we usually regard a single biological human being as capable of >> hosting distinct personalities. Just because a few people, or the >> law, doesn't happen to recognize this truth doesn't change it. Although >> personality tests do confirm our hunch about the reality, it would be >> strange indeed if you deny the possibility of multiple personalities. > > How strange that your response is so disjointed from what I wrote. > You create a false dichotomy and accuse me of denying what I would > support. SORRY! I meant no *accusation*! If I have created a dichotomy with which you disagree, please don't take it personally. > I say there is no actual first person observer, > but only what appears to be first person observation. I'm offering you > a more unifying and extensible model, where personal identity is in > **all** cases dependent on an observer, and where you (the > Lee-biological agent) have the **closest** interactions with the > Lee-abstract entity, but no intrinsically **privileged** interactions. 
You may *offer* this, but do understand that I'm not really under any obligation to accept it! :-) It looks to me as though you are saying here that there is no truth to the matter of personal identity. (And really really really SORRY if I am getting you wrong---it's NOT intentional! Trying NOT to think of me as having set up a straw man here!!) Since your model seems peculiar to me, let me ask some questions. My first question is inspired by relativity theory. Einstein debunked the notion of an absolute velocity. A critic might have challenged "So you are saying that there exists *some* so-called frame of reference from which an object would appear to have any particular pre-assigned velocity?" and "This viewpoint, which say assigned a velocity near c to the Earth would be just as legitimate a point of view as any other?" and Einstein would have replied "yes" to both. So, Jef, could there be some observer who saw me and George Washington as having the same personality, and this viewpoint would be just as legitimate as any other? > You say "...we usually regard a single biological human being as > capable of hosting distinct personalities. Just because a few people, > or the law, doesn't happen to recognize this truth doesn't change > it." Again I'll remind you that I am offering a unifying model of > personal identity that works from **any** point of view. I would be > the last to support the straw-man position you attempt to ascribe to > me. To add some clarity to this minor subtopic of "multiple > personalities", let's recognize that this psychological condition is > now more properly referred to as Disassociative Identity Disorder, Thanks. > reflecting the understanding that the biological organism doesn't > actually "host multiple personalities" but **manifests** different > personalities. We can see this as the behavior of a single complex > chaotic system being pulled toward different attractors at different > times. This directly supports my thesis: That any observer, included > the biological organism itself, will recognize personal identity on > the basis of agency (what abstract entity are you working for?) rather > than any metric of physical/functional similarity. > > So very SORRY! >> So as a practical matter, the similarity >> criterion works for me. It's hard to believe that it doesn't work >> for you despite your claim about agency. > > Another rhetorical straw-man. Good grief. It's really too bad that these discussions are so painful to you. I guess I won't blame you if you just give up. > Lee, I've said many times that your > similarity view works just fine as a practical matter of everyday life > (and it even works for your duplication thought-experiments at t = 0.) What about at t = 0.0001 seconds? What difference could one ten-thousandth of a second make? (Please try to interpret this question charitably, as though I were not attempting to make a straw man of your position and as though I were not attempting to ridicule your position. I mean the question quite sincerely.) >> You suggest that >> it would be inappropriate to [say] that they're still the same >> person. Well, if personality tests and people who knew them >> affirmed that they were still the same person, I would simply >> conclude that the two instances of that person were not >> capable of seeing this truth (or chose to deny it), and were >> not capable of acting on this truth. 
> > Like another person on this list, one who is fixated on proving the > vital importance of a continuous physical trajectory to matters of > personal identity, you appear to be fixated on the importance of > similarity, That's probably true! I go with similarity on many, many other things. Leibniz even elevated "Identity of Indiscernibles" to a principle in a somewhat related context. Hot days are like other hot days, dependent on similarity of structure (along one dimension). Two rabbits are considered to be of the same species not because their DNA is exactly equivalent, but because of close similarity. In such ways we categorize almost *everything*, so similarity is pretty universal and powerful (judging by the success of Darwinian creatures who employ it, e.g., a gazelle that lumps all lions into a single deadly category). > which seems to you self-evident, and profound because it > is a step beyond the thinking of most people in everyday life. > > I'm not denying the essence of your view, as I said, it works as a > special case, which happens to be the most common case today. Hmm. Okay, so what is an example of where it doesn't work? Is there a concrete A/B decision scenario in which my criterion doesn't seem to you to give the correct answer? Or at least gives a different answer than you'd act on? The best scenarios, incidentally, are those that ask a "what would you choose" type of question. > It's like another disagreement of ours: You've claimed that personal > survival is the ultimate determinant of personal choice, Now I suppose that if my initials were JA then I'd really launch here! I'd have a great cow accusing you of misstating my views, setting up straw men, and so on. I have *never* said---for your information ---that personal survival is the ultimate determinant of personal choice. In fact, I spent a long time in the "suicide bomber" thread going on and on about how and why some people will quite understandably hold some things more dear than their own lives. > and that pleasure is the ultimate determinant of "the good." Actually, I haven't ever supposed that either. I try to avoid ideas like "the good". For example, in considering the benefit accruing to a certain individual, it would be important to take the value system of that person into account. E.g., for Socrates, a little more knowledge at the expense of a little pleasure would probably be desired. > I offer a more coherent and extensible view, applicable to our world as > well as to a world not yet arrived, You almost sound like you work for an advertising agency, the way you keep repeating the virtues of your position! :-) > and you contest it interminably. Well, maybe I should just fib and say I agree with you. If you are going to complain that people keep contesting certain things you say, you're going to do a lot of complaining! > I offered you a simple scenario showing an internal contradiction > resulting from your view, and you have yet to respond directly, Was that the one where I was a greedy bastard? (No offense taken---I'm just not sure what scenario you are referring to.) If you would be so kind as to cut and paste it, and ask for a "yes", "no", "right", "wrong", be assured that I will directly answer. Please accept that my dodging of the issue was unintentional, and not due to any personality defect. I hate it when people won't answer directly, and I will be very happy to opine in a completely unambiguous way!
> > I'll say this: It's apparent that you are infatuated with the idea > of personal identity only to the extent that it offers you relief > from the fear of death. Oh, preserve me from your psychoanalysis. You really don't know what you're talking about. I have very good reasons for supposing that I fear death less than most people, and a *lot* less than do many other cryonicists. But let's talk about ideas! and try to stay away from personal aspersions, speculations of motive, and psychological estimations of each other, okay? :-) Lee From fauxever at sprynet.com Wed Jun 20 01:43:27 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 19 Jun 2007 18:43:27 -0700 Subject: [ExI] This will take four minutes of your time, but ... References: <5.1.0.14.0.20070409084116.04493d88@pop.bloor.is.net.cable.rogers.com> Message-ID: <001801c7b2dc$661f6030$6501a8c0@brainiac> ... WHOA!!!! http://tinyurl.com/yvzhwf OLGA From femmechakra at yahoo.ca Wed Jun 20 03:58:03 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 19 Jun 2007 23:58:03 -0400 (EDT) Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Message-ID: <990083.2084.qm@web30406.mail.mud.yahoo.com> --- Lee Corbin wrote: What about at t = 0.0001 seconds? What difference could one ten-thousandth of a second make? (Please try to interpret this question charitably, as though I were not attempting to make a straw man of your position and as though I were not attempting to ridicule your position. I mean the question quite sincerely.) Quite sincerely, what happens globally every 0.0001 second? From a Universal front, what happens every 0.0001 of a second? What is happening with Lee at this actual 0.0001 second? Am I understanding properly? What represents a moment? Just Curious, no pun intended. Anna Any fool can criticize, condemn, and complain but it takes character and self control to be understanding and forgiving. Dale Carnegie From russell.wallace at gmail.com Wed Jun 20 07:23:05 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 20 Jun 2007 08:23:05 +0100 Subject: [ExI] RIP Singularitarianism Message-ID: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> I was once a Singularitarian. I will not again go into the factual truths and falsehoods associated with this belief system (those arguments have been had many a time and oft), merely note that I once adhered to it as a system. Why did I change? For factual reasons, by turning around and reexamining my inventory of beliefs and discarding those that were not supported by material reality, to be sure. But also - and to the point - for moral reasons. Singularitarianism was once a beacon of hope, to which a good man would be proud to subscribe. What went wrong? Well, it's late in the day. The meme pool is poisoned, parasite-ridden. Fear and paranoia contaminate it on all sides. And at the end of the day, what drove me to unsubscribe from the Singularity list was that the most vocal contributors were and remained in a sphexish loop that computers will spring out of basements and start devouring human flesh and conquering the world.
Over and over again, I don't mind arguing against that nonsense once or twice, but when transmission volume is directly proportional to fear and paranoia even - especially - among those who claim to be technophiles, who should be forward drivers, when it is so tireless that one wonders whether the primary contributors live under a bridge, that they have no purpose in life other than transmitting their parasitic meme complexes... then one must bow out. We may live, break the bounds of time and space, become the seed for sentience in this Hubble volume. Or we may die, strangled by our own fear until real death comes for us. Either way, explicit Singularitarian work is already dead, cf: SIAI, Novamente. All I can do is get on with my own (non-Singularity) work. My point, though, is that there's a gap in meme space: anyone want to coin a philosophy that means making actual progress, _no_ parasite memes admitted? From sondre-list at bjellas.com Wed Jun 20 07:59:30 2007 From: sondre-list at bjellas.com (Sondre Bjellås) Date: Wed, 20 Jun 2007 09:59:30 +0200 Subject: [ExI] RIP Singularitarianism In-Reply-To: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> Message-ID: <007301c7b310$eee08a50$cca19ef0$@com> When I was young, I had dreams. When I was older, I had visions. Now, still being young, I keep realizing everything I want to do. There is little gain in dwelling on the singularity, for a brief moment or two it's ok, but I wouldn't want to keep an ongoing discussion with someone about how it might come to be and how it might manifest itself. I have never been very active in transhuman discussions online, but I love to chat with my friends and people that I meet. The best thing I can do for another being is helping them to raise their own awareness. There is no fear in my mind for the future, only fear for people who fall behind and can't keep up the tempo (one example is radical religious people). But that is a minor itch on my back, because there is nothing that will stop evolution. I believe that technological progress is too rapid for a philosophy to survive for very long. Political systems fail to keep track of society; we are evolving into cyber-cultures that reach far beyond any government and physical limits. I live by two basic principles: Self-improvement and Experiences. Whether I will experience "singularity" in whichever form before I die does not matter, because when I'm gone, I won't care. Which makes me happy to think about. Until that day, I will continue to work on my robotics projects which one day might turn medieval on my ass. Regards, Sondre
From eugen at leitl.org Wed Jun 20 10:04:10 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 20 Jun 2007 12:04:10 +0200 Subject: [ExI] RIP Singularitarianism In-Reply-To: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> Message-ID: <20070620100410.GI17691@leitl.org> On Wed, Jun 20, 2007 at 08:23:05AM +0100, Russell Wallace wrote: > I was once a Singularitarian. I will not again go into the factual There seems to have been a definition drift (looks like SL4 folks did a little Wikipedia editing): http://en.wikipedia.org/wiki/Singularitarianism ... ''Originally the term singularitarian was defined in 1991 by Extropian Mark Plus to mean "one who believes the concept of a Singularity", this term has since been redefined to mean "Singularity activist" or "friend of the Singularity"; that is, one who acts so as to bring about the Singularity.'' ... I'm certainly a singularitarian in the sense of the original definition. I don't completely subscribe to Vingean reality models, but a lot of it is really down-to-earth and plausible. > truths and falsehoods associated with this belief system (those > arguments have been had many a time and oft), merely note that I once > adhered to it as a system. Which system exactly? You don't say. > Why did I change? For factual reasons, by turning around and > reexamining my inventory of beliefs and discarding those that were not > supported by material reality, to be sure. What is not supported by material reality? Your hints are very difficult to read. > But also - and to the point - for moral reasons. Singularitarianism > was once a beacon of hope, to which a good man would be proud to > subscribe. Hmm, never subscribed to that particular notion. I think any hard takeoff (the kind that makes the world outside incomprehensible over the course of a year, or less) could mean wrecking the biosphere, and death of a lot of people, or a complete extinction. > What went wrong? Well, it's late in the day. The meme pool is > poisoned, parasite-ridden. Fear and paranoia contaminate it on all > sides.
Dunno, looks like a minor storm in a waterglass to me. > And at the end of the day, what drove me to unsubscribe from the > Singularity list was that the most vocal contributors were and > remained in a sphexish loop that computers will spring out of > basements and start devouring human flesh and conquering the world. You must have been on some other singularity list than me. The vocal contributors have been discussing the issues of space delivery, which is, admittedly, borderline offtopic for the list. The list has been really quiet, too. > Over and over again, I don't mind arguing against that nonsense once > or twice, but when transmission volume is directly proportional to > fear and paranoia even - especially - among those who claim to be > technophiles, who should be forward drivers, when it is so tireless > that one wonders whether the primary contributors live under a bridge, > that they have no purpose in life other than transmitting their parasitic > meme complexes... then one must bow out. Who is precisely the vocal pusher of those 'parasitic memes'? > We may live, break the bounds of time and space, become the seed for > sentience in this Hubble volume. Or we may die, strangled by our own Hey, we will. (Minus the unknown probability we and our technology get wiped out by one of them pesky existential threats). > fear until real death comes for us. Either way, explicit Death will come quite certainly to most or all of those who read this message; the question, however, is who's going to come back, if at all? (Meaning, do you have a contract, and does cryonics work out in the end?) > Singularitarian work is already dead, cf: SIAI, Novamente. All I can > do is get on with my own (non-Singularity) work. I don't see particular reasons to limit AI research to commercial, closed-source efforts. If it's any good, it will thrive as an open source project. You certainly can continue with what you were doing as a single guy in your spare time. Progress will be slower, but if you think it's doomed, I don't see how you expected to succeed as part of a team. > My point, though, is that there's a gap in meme space: anyone want to > coin a philosophy that means making actual progress, _no_ parasite > memes admitted? People who make actual progress don't have time to read this list. I mean this literally: if you're on a project, you just don't have time and focus for this. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From avantguardian2020 at yahoo.com Wed Jun 20 12:50:03 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 20 Jun 2007 05:50:03 -0700 (PDT) Subject: [ExI] DNA pwned by Venter Message-ID: <947018.58823.qm@web60518.mail.yahoo.com> http://www.ted.com/index.php/talks/view/id/6 The video is sort of a "state of biotechnology" address by fellow biologist Craig Venter. He covers a wide range of topics that he is currently working on. Making allowance for his brevity and the consequent lack of details he gives, most of what he says is generally achievable. However, his timetable for the development of the artificial chromosome of his minimal organism is very ambitious.
While he may boast of having synthesized a phage chromosome in a few days, the phage chromosome was about two orders of magnitude simpler than his artificial organism, whose chromosome length I would put a lower bound on at about 500kb. Apparently he has a clever idea on how to go about it using the DNA repair mechanism of a most curious bacterium - Deinococcus radiodurans. While he doesn't give too many details regarding the method, my best professional guess is that he is using the DNA repair mechanisms of D. radiodurans to homologously recombine a great many stretches of overlapping artificial sequence, probably between 100bp (upper limit of directly manufacturable synthetic sequence) and 20kb (upper limit of PCR product). If he can supply several chromosomes' worth of overlapping fragments, the homologous DNA recombination mechanisms of D. radiodurans should be able to assemble the chromosome for him. That is, if he can get the homologous recombination to work in vitro.
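In software terms, the stitching step described here is overlap assembly, and a toy sketch makes the idea concrete. The following Python fragment is an illustration only, not anything from Venter's protocol: it assumes exact, unique overlaps of at least five bases and error-free fragments, and the helper names overlap() and assemble() are invented for the example.

    MIN_OVERLAP = 5

    def overlap(a, b, min_len=MIN_OVERLAP):
        # Length of the longest suffix of a that is also a prefix of b.
        start = 0
        while True:
            start = a.find(b[:min_len], start)
            if start == -1:
                return 0
            if b.startswith(a[start:]):
                return len(a) - start
            start += 1

    def assemble(fragments):
        # Greedily merge the best-overlapping pair until no overlaps remain.
        frags = list(fragments)
        while len(frags) > 1:
            best_len, best_i, best_j = 0, None, None
            for i, a in enumerate(frags):
                for j, b in enumerate(frags):
                    if i != j and overlap(a, b) > best_len:
                        best_len, best_i, best_j = overlap(a, b), i, j
            if best_len == 0:
                break  # no overlapping pair left
            merged = frags[best_i] + frags[best_j][best_len:]
            frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
            frags.append(merged)
        return frags

    chromosome = "ATGGCGTACGTTAGCCGATTACGGATCCGTA"
    pieces = [chromosome[k:k + 12] for k in range(0, len(chromosome) - 6, 6)]
    print(assemble(pieces) == [chromosome])  # True: rebuilt from overlapping pieces

Real assembly, in silico or in vivo, must also cope with synthesis errors, repeats, and reverse complements, which is why the in vitro question above is the hard part.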
All told, from his opening joke regarding longevity to his closing statement regarding bioethics, much of his talk should be of interest to transhumanists. Especially if you are wondering what nanotech molecular assemblers will look like for the next several decades. They will most likely look a lot like bioengineered wetware cells making customized enzymes, biopolymers, and possibly even gasoline out of sunlight or lawn clippings. The potential is there and it is technologically feasible forthwith. But first there are challenges economic, political, and psychological in nature that must be overcome: Will venture capitalists be willing to invest in technology that renders scarcity, and thereby consumer economics, obsolete? Will those whose wealth results from control of resources resort to violence to keep those resources scarce? Will the majority of people in democratic countries be able to overcome their fear of "playing God" or "fooling with Nature"? To find out, stay tuned to this Everett branch. Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb From austriaaugust at yahoo.com Wed Jun 20 14:20:38 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 20 Jun 2007 07:20:38 -0700 (PDT) Subject: [ExI] RIP Singularitarianism In-Reply-To: <20070620100410.GI17691@leitl.org> Message-ID: <610500.46004.qm@web37406.mail.mud.yahoo.com> Eugen Leitl wrote: > "People who make actual progress don't have time to > read this list. > I mean this literally: if you're on a project, you > just don't have time > and focus for this." Fa Sho! I would really like to individually contribute to saving the world, but I don't have the skills or position to do much of anything by myself, in that regard. That's why I contribute to SIAI, and will continue to do so. But I do think that there is some value in having these discussions. Sure, 9 out of 10 suggested strategies for avoiding disaster won't be helpful at all. But if it wasn't for the transhumanists making suggestions and offering ideas, who else would be doing it? **As an aside, I apologize for calling John Clark a coward. That was inappropriate, and I regret saying it. Sincerely, Jeffrey Herrlich
From amara at amara.com Wed Jun 20 15:02:45 2007 From: amara at amara.com (Amara Graps) Date: Wed, 20 Jun 2007 17:02:45 +0200 Subject: [ExI] DNA pwned by Venter Message-ID: See also this excellent (free content) story in The Economist: http://economist.com/science/displaystory.cfm?story_id=9333408 reproduced below: -------------------------------------------------------------------- Artificial life Patent pending Jun 14th 2007 From The Economist print edition Move over Dolly. Synthia is on her way YOU have to hand it to Craig Venter, he is not someone who thinks small. The latest adventure of the man who was the first to sequence the genome of a living organism (three weeks after his grant request to do so was rejected on the grounds it was impossible), the first to publish the genome of an identifiable human being (himself) and the first to conceive the idea of sequencing the genome of an entire ecosystem (and to enjoy a nice cruise across the Pacific Ocean in his yacht while he did so) is curiously reminiscent of the incident that made him a controversial figure in the first place. That was when, 16 years ago, he attempted to patent parts of several hundred genes - the first time anyone had tried to take out a patent on more than one gene at a time. This time, he is proposing to patent not merely a few genes, but life itself. Not all of life, of course. At least, not yet. Rather, he has applied for a patent on the synthetic bacterium that he and his colleagues Clyde Hutchison and Hamilton Smith have been working on for the past few years. The patent application itself was filed without fanfare some eight months ago. But it was only at the end of May that the slowly grinding bureaucracy of America's patent office got round to publishing it.
The central claim is to what Dr Venter calls the "minimal bacterial genome". This is a list of the 381 genes he thinks are needed to keep an organism alive. The list has been assembled by taking the organism he first sequenced, Mycoplasma genitalium, and knocking out each of its 470 genes to see which ones it can manage without. The theory - and the claim made by the patent - is that by synthesising a string of DNA that has all 381 of these genes, and then putting it inside a "ghost cell" consisting of a cell membrane, along with the bits and pieces of molecular machinery that are needed to read genes and translate them into proteins, an artificial organism will have been created. Given that the ghost cell will be an enucleated natural bacterium rather than a synthetic structure in its own right, the new bug will not be a completely man-made creature. Nevertheless, if the three researchers can pull it off, they will have achieved an impressive piece of genetic engineering - or, rather, of synthetic biology as this high end of the field is now usually called. And there is every reason to believe that they will be able to pull it off. In 2003 the same team, working then as now at Dr Venter's research institute in Rockville, Maryland, were the first to produce a truly viable synthetic virus. And techniques have moved on since then. The patent does not claim that an organism based on the minimal bacterial genome has yet been made - and it hasn't. It is more a question of the Venter Institute getting its retaliations in first. Nevertheless, the mere filing of the patent has upset some people. Among the dischuffed is ETC Group, a Canadian bioethics organisation whose eagle-eyed spotters noticed the publication of patent 20070122826 last week. They have asked Dr Venter to withdraw the patent - and, on the assumption that he will not, have also asked the patent office to reject it on the grounds that it is contrary to public morality and safety. ETC's objections seem twofold, and also slightly contradictory. One objection is that the patent's claims are too widely drawn. It attempts, for example, to reserve the right to any method of hydrogen or ethanol production that uses such an organism. (Dr Venter thinks synthetic biology is going to be important as a way of making fuels.) It also, bizarrely, claims an interest in the genes the three researchers have identified as non-essential, as well as the essential ones. To the extent that sweeping claims may stifle innovation, these are certainly things that need to be considered. However, the more profound objection ETC has seems to be based on the idea that there are areas where mankind should not meddle. As Pat Mooney, the group's boss, put it, "For the first time, God has competition." No doubt Dr Venter, hardly famous as a shrinking violet, will be amused by the comparison. ETC's argument has some force. Synthetic biology is developing fast and it is easy to see it being used out of malice. That said, one of the advantages of a minimal genome is that the genes removed, while not essential for survival, are essential for robustness. A bug relying on such a genome could not possibly live in the wild if it accidentally escaped. Also, the biologists in the field are as concerned as anybody that the subject develops safely. They have been asking for regulation rather than resisting it, and have already established codes of conduct to try to stop the malicious synthesis of pathogens. Nevertheless, ETC is hoping to provoke a debate.
And to give people a name to hang on to in that debate it suggests nicknaming Mycoplasma laboratorium, as the application calls the putative invention, "Synthia". The organisation hopes this name will stick in the popular consciousness in the way that Ian Wilmut's cloned sheep Dolly did. Indeed, it is rather a good name. Given the affection that Dolly attracted once the shock of her existence had been absorbed, perhaps Dr Venter - himself no slouch at publicity - will adopt it. -------------------------------------------------------------------- -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From austriaaugust at yahoo.com Wed Jun 20 15:31:10 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 20 Jun 2007 08:31:10 -0700 (PDT) Subject: [ExI] Fear of Death In-Reply-To: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Message-ID: <551860.32988.qm@web37411.mail.mud.yahoo.com> In a somewhat unrelated post, Lee wrote: ..."I have very good reasons for supposing that I fear death less than most people, and a *lot* less than do many other cryonicists."... I'm not trying to flame you, Lee, I just thought that this would be a good topic to jump-off from. I've wondered why having a fear of death is often considered to be something shameful. I would rather continue to exist and live a good life, rather than die (in the "permanent" sense). So, because I would rather live well, then I suppose that I *do* have a fear of death. Does that make me ashamed to say it or feel it? Not at all. If I die, then nothing I've ever cared about will mean anything to me; because I won't care about anything. Sincerely, Jeffrey Herrlich From jef at jefallbright.net Wed Jun 20 16:35:44 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 20 Jun 2007 09:35:44 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Message-ID: On 6/19/07, Lee Corbin wrote: > It looks to me as though > you are saying here that there is no truth to the matter of > personal identity. I'm saying that the "truth" of personal identity is entirely pragmatic, and this in no way diminishes its practicality or our certainty of its usage. Identity does not "exist", but rather, it is assigned, as a result of a process of identification by an observer. Identification is always in terms of features meaningful to the observer. In the case of inanimate objects, perceived similarity in physical/functional terms is meaningful, and perceived continuity can be a useful heuristic. In the case of persons, the most meaningful feature, necessary in all cases of perceived personhood, is agency, the capacity to act intentionally on behalf of some entity. > Since your model seems peculiar to me, let me ask some > questions. My first question is inspired by relativity theory. > Einstein debunked the notion of an absolute velocity.
A > critic might have challenged "So you are saying that there > exists *some* so-called frame of reference from which > an object would appear to have any particular pre-assigned > velocity?" and "This viewpoint, which, say, assigned a velocity > near c to the Earth, would be just as legitimate a point of view > as any other?" and Einstein would have replied "yes" to both. > > So, Jef, could there be some observer who saw me and George > Washington as having the same personality, and this viewpoint > would be just as legitimate as any other? It's funny in a sad way when people are confronted with something that appears mysterious that they try framing it in mysterious terms such as quantum theory, probability theory, or relativity theory. To respond to your extreme example, an observer with very little knowledge of humans might see no difference between you and George Washington. Would that view be "legitimate"? Hardly, at least in our terms. Better to ask whether it would be meaningful. > > > > So very SORRY! Thanks for your sincere apologies. > > Lee, I've said many times that your > > similarity view works just fine as a practical matter of everyday life > > (and it even works for your duplication thought-experiments at t = 0.) > > What about at t = 0.0001 seconds? What difference could one > ten-thousandth of a second make? (Please try to interpret this > question charitably, as though I were not attempting to make a > straw man of your position and as though I were not attempting > to ridicule your position. I mean the question quite sincerely.) I said it works at t = 0. I didn't say it doesn't work at t = 0.0001 seconds. This is another example of your straw-man argumentation. You may mean it quite sincerely, but it's still a straw-man since it has no weight. The point here, and we've been around this loop before, is that you say physical/functional similarity tends to diminish with time and that beyond some point one is no longer the same person, but your theory of similarity-based personal identity doesn't say anything about how much similarity or how it diminishes. Your theory is incomplete; it only accounts for a special case. > >> [In the case of the near-identical duplicates in conflict] You suggest that > >> it would be inappropriate to [say] that they're still the same > >> person. Well, if personality tests and people who knew them > >> affirmed that they were still the same person, I would simply > >> conclude that the two instances of that person were not > >> capable of seeing this truth (or chose to deny it), and were > >> not capable of acting on this truth. Here you're simply affirming your own conclusion. > > Like another person on this list, one who is fixated on proving the > > vital importance of a continuous physical trajectory to matters of > > personal identity, you appear to be fixated on the importance of > > similarity, > > That's probably true! I go with similarity on many, many other > things. Leibniz even elevated to a principle "Identity of Indiscernibles" > in a somewhat related context. Hot days are like other hot days, > dependent on similarity of structure (along one dimension). Two > rabbits are considered to be of the same species not because their > DNA is exactly equivalent, but because of close similarity. In such > ways we categorize almost *everything*, so similarity is pretty > universal and powerful (judging by the success of Darwinian creatures > who employ it, e.g., a gazelle that lumps all lions into a single deadly > category).
Similarity is not the problem. The point is that with regard to personal identity, similarity in terms of agency is more coherent and extensible than similarity in physical/functional terms. > > I'm not denying the essence of your view, as I said, it works as a > > special case, which happens to be the most common case today. > > Hmm. Okay, so what is an example of where it doesn't work? I already gave you the example of the two near-identical duplicates in conflict. > Is there a concrete A/B decision scenario in which my criterion > doesn't seem to you to give the correct answer? Or at least > gives a different answer than you'd act on? The best scenarios, > incidentally, are those that ask a "what would you choose" type > of question. > > > I offered you a simple scenario showing an internal contradiction > > resulting from your view, and you have yet to respond directly, > > Was that the one where I was a greedy bastard? (No offense > taken---I'm just not sure what scenario you are referring to.) Yes, as in the following exchange which you deleted without comment. -------------------- >>> In your scenario you consider some very selfish person who >>> forks and then is at odds with his other self. >> >> Yes, I suggested a very selfish person to make the thought-experiment >> easier for you, but any degree of selfishness will tend to put >> duplicates at odds with one another as they interact from within >> increasingly disparate contexts. -------------------- > If you would be so kind as to cut and paste it, and ask for a > "yes", "no", "right", "wrong", be assured that I will directly > answer. Please accept that my dodging of the issue was > unintentional, and not due to any personality defect. I hate > it when people won't answer directly, and I will be very > happy to opine in a completely unambiguous way! Okay, once again: As a corollary, a physical instantiation could be extremely similar to Lee, even more similar than, say, Lee of one year ago, but be considered by anyone, including Lee, to be for **all** practical purposes a different person. As an example, imagine that Lee is by nature a greedy bastard (this is so patently false, I hope, as to be inoffensive.) Lee makes a perfect duplicate of himself to go off and work at programming so Lee (original) can spend his time playing chess. At this point they are each Lee by virtue of each being a full agent of the abstract entity we all know as Lee. But software engineering can be a hellish life, and eventually the duplicate, being a bit unstable and a greedy bastard to boot, realizes that he could empty the common bank account (belonging to Lee-the-entity, rather than to either Lee-the-agent) and assume a life of leisure. If Lee (the original) gives him any trouble, he can simply kill him and take his place. Of course Lee (the original) is inclined to similar thoughts with regard to his duplicate. We can easily see here that despite extremely high similarity, for all practical moral/social/legal purposes, anyone (including the duplicates themselves) would see these as two separate individuals. The point here is to show that despite extreme similarity, a pair of duplicates can easily fall into conflict with each other. This conflict can be over property, relationships, legal responsibility; in essence these are conflicts over rightful identity -- a paradox if, as you claim, they are necessarily the same identity due to their physical/functional similarity.
Or maybe simpler for you, consider the two duplicates, each with identical intent to prevent the existence of the other. If, as you say, physical/functional similarity determines personal identity, then do you see the paradox entailed in a person trying to destroy himself so he can enjoy being himself? Or back to the biological organism manifesting Dissociative Identity Disorder. I said this supported my point, and you said "thanks" without further comment. In such a case we can agree that the physical/functional similarity is total since it's only a single organism, but we also agree that any observer (including the observers manifested by that particular organism) will see different persons to the extent that they are perceived to act on behalf of different entities. Personal identity is about agency. Physical/functional similarity is only a special case. - Jef From jef at jefallbright.net Wed Jun 20 16:48:34 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 20 Jun 2007 09:48:34 -0700 Subject: [ExI] Fear of Death In-Reply-To: <551860.32988.qm@web37411.mail.mud.yahoo.com> References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <551860.32988.qm@web37411.mail.mud.yahoo.com> Message-ID: On 6/20/07, A B wrote: > > In a somewhat unrelated post, Lee wrote: > > ..."I have very good reasons > for supposing that I fear death less than most people, > and a *lot* less than do many other cryonicists."... > > I'm not trying to flame you Lee, I just thought that > this would be a good topic to jump-off from. I wanted to let Lee's statement lie. But since you've brought it up, consider its evasiveness. I told Lee I suspected his motivation was not so much to understand personal identity, but to comfort his fear of death. He evaded the point of the question, which was his motivation, and pretended as if the point was his fear of death, which of course is already ameliorated by his theory of multiple copies of himself and his cryonics policy. Fear death or not; that wasn't the point. - Jef From scerir at libero.it Wed Jun 20 17:06:30 2007 From: scerir at libero.it (scerir) Date: Wed, 20 Jun 2007 19:06:30 +0200 Subject: [ExI] coffee break References: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677><002601c7b1ee$46c66b50$ac971f97@archimede><013b01c7b229$bb06b250$6501a8c0@homeef7b612677><000801c7b23b$55315720$87931f97@archimede> <000301c7b289$46eb2df0$e6921f97@archimede> Message-ID: <000701c7b35d$59b2a9c0$e0921f97@archimede> > So, possible 'selection' of lottery winners > avoiding quantum suicide? Korotkov and Jordan think that 'undoing' a 'weak measurement' (a sort of 'soft' and 'looong' quantum measurement invented by Aharonov, Albert, and Vaidman) may be interpreted as erasing, by a second 'weak measurement', the information obtained from a first 'weak measurement'. They also wrote a paper about that ...
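The simplest case is easy to check numerically. Here is a toy numpy sketch, an illustration of the general idea only, assuming the textbook null-result weak measurement of a single qubit rather than the specific protocol of the papers below: flip the qubit, repeat the same weak measurement, flip back; on a second null result the pre-measurement state returns, with overall success probability 1 - p.

    import numpy as np

    p = 0.3                                        # measurement strength
    M0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])   # Kraus operator, null result
    X = np.array([[0, 1], [1, 0]])                 # bit flip

    psi = np.array([0.6, 0.8])                     # arbitrary pure state

    # First weak measurement, null outcome (occurs with probability p1)
    phi = M0 @ psi
    p1 = phi @ phi
    phi = phi / np.sqrt(p1)

    # "Undoing": flip, same weak measurement with null outcome, flip back
    chi = M0 @ (X @ phi)
    p2 = chi @ chi
    chi = X @ (chi / np.sqrt(p2))

    print(np.allclose(chi, psi))       # True: the original state is recovered
    print(np.isclose(p1 * p2, 1 - p))  # True: success probability is 1 - p, state-independent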
PRL abstract http://link.aps.org/abstract/PRL/v97/e166805 paper (pdf) http://www.arxiv.org/abs/cond-mat/0606713 We could also 'undo' a 'weak' quantum suicide then, and we could also 'undo' a 'weak' MWI branching, if things went wrong :-) From thomas at thomasoliver.net Wed Jun 20 18:53:10 2007 From: thomas at thomasoliver.net (Thomas) Date: Wed, 20 Jun 2007 11:53:10 -0700 Subject: [ExI] SIAI Interview Series: Aubrey de Grey In-Reply-To: References: Message-ID: On Jun 20, 2007, at 1:17 AM, extropy-chat-request at lists.extropy.org wrote: > Video interview with Aubrey is now online: > > http://www.singinst.org/blog/2007/06/18/siai-interview-series- > aubrey-de-grey-methuselah-foundation/ > > -Tyler Aubrey de Grey questions the meaning of the term transhuman. Does it mean transition toward a posthuman state that regards the human in a way similar to the way we humans now regard Neanderthals as subhuman? -- Or does de Grey's preferred meaning, ascribed to Huxley, suit us better, to wit: transcending of previous humanity while remaining human? I haven't chosen a position on this question and I'd like to see what others here think. Do humans possess a certain defining attribute that deserves to be retained by all our descendants? Can we name it? -- Thomas Thomas at ThomasOliver.net From eugen at leitl.org Wed Jun 20 19:06:40 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 20 Jun 2007 21:06:40 +0200 Subject: [ExI] SIAI Interview Series: Aubrey de Grey In-Reply-To: References: Message-ID: <20070620190640.GH17691@leitl.org> On Wed, Jun 20, 2007 at 11:53:10AM -0700, Thomas wrote: > Aubrey de Grey questions the meaning of the term transhuman. Does it > mean transition toward a posthuman state that regards the human in a > way similar to the way we humans now regard Neanderthals as subhuman? It's going to be a pretty short transition time, though. (It may be zero, if we go extinct). In general, once you have a few kilotons of molecular circuitry (and superintelligence), you can design nearly optimal hardware without having to iteratively build interim stages there. It goes from zero to hero almost overnight, but for those projects which might require large-scale constructions (devices the size of the solar system, or the size of a galactic cluster). Some such (purely hypothetical) stages can take centuries, megayears or even gigayears. > -- Or does de Grey's preferred meaning, ascribed to Huxley, suit us > better, to wit: transcending of previous humanity while remaining You can no more remain human than a cyanobacterium can become a human and remain a cyanobacterium. The notion makes no sense. > human? I haven't chosen a position on this question and I'd like to > see what others here think. Do humans possess a certain defining > attribute that deserves to be retained by all our descendants? Can we > name it? -- Thomas I wouldn't bother with semantics too much. Reality has a tendency to make such things obsolete. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From thomas at thomasoliver.net Wed Jun 20 19:37:36 2007 From: thomas at thomasoliver.net (Thomas) Date: Wed, 20 Jun 2007 12:37:36 -0700 Subject: [ExI] RIP Singularitarianism In-Reply-To: References: Message-ID: > Russell Wallace wrote: > I was once a Singularitarian. [...]
> > But [...] Fear and paranoia contaminate it > > [...] explicit Singularitarian work is already dead, cf: SIAI, > Novamente. All I can do is get on with my own (non-Singularity) work. > > My point, though, is that there's a gap in meme space: anyone want > to coin a philosophy that means making actual progress, _no_ > parasite memes admitted? We err to take fear-speak too seriously. Fear merchants have abounded by virtue of the hypnotic effect of their scary headlines. They sucked space, but if you could tolerate them, and not succumb to suggestibility, you could avoid getting sucked down. Trying to suppress negative memes seemed to strengthen them. There remains plenty of hard evidence that love resolves fear. That corny old philosophy has never failed. If you can't do that then love yourself for not tolerating singularitarianism. By the way, what work do you do? -- Thomas Thomas at ThomasOliver.net From russell.wallace at gmail.com Wed Jun 20 23:04:48 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Thu, 21 Jun 2007 00:04:48 +0100 Subject: [ExI] RIP Singularitarianism In-Reply-To: References: Message-ID: <8d71341e0706201604n14e93f3dwbc2615c9747d3fa@mail.gmail.com> On 6/20/07, Thomas wrote: > > We err to take fear-speak too seriously. Fear merchants have abounded by > virtue of the hypnotic effect of their scary headlines. They sucked space, > but if you could tolerate them, and not succumb to suggestibility, you could > avoid getting sucked down. Trying to suppress negative memes seemed to > strengthen them. There remains plenty of hard evidence that love resolves > fear. That corny old philosophy has never failed. If you can't do that > then love yourself for not tolerating singularitarianism. > Maybe you're right. What was the line from the Hitch-Hiker's Guide to the Galaxy, "...loathe it or ignore it, you can't like it"? I'll go for the middle option :) By the way, what work do you do? > I'm a programmer, working on data processing stuff at the moment, have ideas regarding smart CAD and reusable procedural knowledge that I intend to have a go at implementing if and when I get a clear run at it. From fauxever at sprynet.com Thu Jun 21 00:25:47 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Wed, 20 Jun 2007 17:25:47 -0700 Subject: [ExI] Article on Aubrey de Grey in July/August 2007 AARP Magazine References: <5.1.0.14.0.20070409084116.04493d88@pop.bloor.is.net.cable.rogers.com> Message-ID: <003001c7b39a$b9951840$6501a8c0@brainiac> The article is called "Long-distance Living" (only the listing in the TOC is on the AARP Magazine site, but not the article). The article tried to be fair (i.e., tried for some journalistic integrity) at the beginning ... but in the end, it slid off into stuff like: "As University of Michigan biologist Richard Miller, M.D., Ph.D., put it in a letter published last year: 'learning to do what Aubrey claims possible is a good idea.' But Miller would like him to solve another complex problem: 'how to make pigs fly.'" "What I want to know is this: who the hell would want to live that long? ..."
"Yale surgery professor Sherwin Nuland, M.D., writing in Technology Review, worries that Aubrey's plan would 'destroy us in attempting to preserve us," by undermining 'what is means to be human.'" "Aubrey is not a godless man: 'I'm a happy agnostic - and I'm doing the work of the Scriptures,'" he adds, as if responding to those who accuse him of playing [g]od." And at the very end, the author's note: "Frequent contributor Mark Matousek has no interest in living to past 110." Bleh. Olga From stathisp at gmail.com Thu Jun 21 01:42:14 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jun 2007 11:42:14 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Message-ID: On 21/06/07, Jef Allbright wrote: > Or back to the biological organism manifesting Disassociative Identity > Disorder. I said this supported my point, and you said "thanks" > without further comment. In such a case we can agree that the > physical/functional similarity is total since it's only a single > organism, but we also agree that that any observer (including the > observers manifested by that particular organism) will see different > persons to the extent that they are perceived to act on behalf of > different entities. The different personalities in DID/MPD are generally supposed to be functionally distinct, in that even if they are the same "kind" of personality at least one is unaware of the others, and each personality considers himself distinct with distinct interests. I guess that's what you mean by agency. (Let me say again that DID/MPD is a controversial diagnosis in psychiatry, but it is at least logically possible, so it is reasonable to use it in discussions like the present.) -- Stathis Papaioannou From fauxever at sprynet.com Thu Jun 21 02:11:40 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Thu, 21 Jun 2007 02:11:40 -0000 Subject: [ExI] Article on Aubrey de Grey in July/August 2007 AARP Magazine References: <5.1.0.14.0.20070409084116.04493d88@pop.bloor.is.net.cable.rogers.com> <003001c7b39a$b9951840$6501a8c0@brainiac> Message-ID: <000d01c7cb3a$55c787f0$6501a8c0@brainiac> Oh, I forgot to mention ...here's how the article on de Grey (on the online TOC) is described : Live to 1,000 By Mark Matousek Scientist Aubrey de Grey claims aging is a "disease" that can be eradicated. Visionary or nut job? You be the judge Olga ----- Original Message ----- From: "Olga Bourlin" To: "ExI chat list" Sent: Wednesday, June 20, 2007 5:25 PM Subject: [ExI] Article on Aubrey de Grey in July/August 2007 AARP Magazine > The article is called "Long-distance Living" (only the listing in the TOC > is on the AARP Magazine site, but not the article). > > The article tried to be fair (i.e., tried for some journalistic integrity) > at the beginning ... but in the end, it slid off into stuff like: > > "As University of Michigan biologist Richard Miller, M.D., Ph.D., put it > in > a letter published last year; 'learning to do what Aubrey claims possible > is > a good idea.' But Miller would like him to solve another complex problem: > 'how to make pigs fly.'" > > "What I want to know is this: who the hell would want to live that long? > ..." 
> > "Yale surgery professor Sherwin Nuland, M.D., writing in Technology > Review, > worries that Aubrey's plan would 'destroy us in attempting to preserve > us," > by undermining 'what is means to be human.'" > > "Aubrey is not a godless man: 'I'm a happy agnostic - and I'm doing the > work of the Scriptures,'" he adds, as if responding to those who accuse > him > of playing [g]od." > > And at the very end, the author's note: "Frequent contributor Mark > Matousek > has no interest in living to past 110." > > Bleh. > > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From thespike at satx.rr.com Thu Jun 21 02:26:01 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 20 Jun 2007 21:26:01 -0500 Subject: [ExI] Article on Aubrey de Grey in July/August 2007 AARP Magazine In-Reply-To: <000d01c7cb3a$55c787f0$6501a8c0@brainiac> References: <5.1.0.14.0.20070409084116.04493d88@pop.bloor.is.net.cable.rogers.com> <003001c7b39a$b9951840$6501a8c0@brainiac> <000d01c7cb3a$55c787f0$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070620212042.025872a0@satx.rr.com> At 06:56 PM 7/20/2007 -0700, Olga wrote: >Oh, I forgot to mention ...here's how the article on de Grey (on the online >TOC) is described : >Live to 1,000 >By Mark Matousek >Scientist Aubrey de Grey claims aging is a "disease" that can be eradicated. >Visionary or nut job? You be the judge It's cheap to say this, but hey: Die at 110, maximum! By Mark Matousek Journalist Matousek claims senility and death are "desirable" and should be embraced. Reactionary or nut job? You be the judge From lcorbin at rawbw.com Thu Jun 21 06:36:46 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Wed, 20 Jun 2007 23:36:46 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <990083.2084.qm@web30406.mail.mud.yahoo.com> Message-ID: <01c601c7b3ce$e034a590$6501a8c0@homeef7b612677> Anna writes > --- Lee Corbin wrote: > > What about at t = 0.0001 seconds? What difference > > could one ten-thousandth of a second make? (Please > > try to interpret this question charitably, as though I > > were not attempting to make a straw man of your > > position and as though I were not attempting to > > ridicule your position. I mean the question > > quite sincerely.) > > Quite sincerely, what happens globally every 0.0001 > second? From a Universal front, what happens every > 0.0001's of a second? What is happening with Lee at > this actual 0.0001 second? Subjectively speaking, that's of course too small a time for anything to happen. On the objective level, however, millions and millions of neurons can begin firing, or (others) end up dumping chemicals across a synaptic cleft to another neuron. > Am I understanding properly? What represents a moment? Here is the context of the discussion I was having with Jef. He had written > > > Lee, I've said many times that your similarity view works just > > > fine as a practical matter of everyday life (and it even works > > > for your duplication thought-experiments at t = 0.) And so since he had said that similarity works as a criterion for personal identity (or something like that) for duplicates at t=0 (where that is to be understood as the instant at which a duplicate is spawned), then I was asking him, in effect, why similarity would not be a good criterion for t = 0.0001 second. 
In other words, one extremely good argument that a person and his duplicate are the same person is that they are so similar, because no significant change can happen that could turn someone into a different person in so short a time (under ordinary circumstances). Lee From lcorbin at rawbw.com Thu Jun 21 06:52:37 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Wed, 20 Jun 2007 23:52:37 -0700 Subject: [ExI] Fear of Death References: <551860.32988.qm@web37411.mail.mud.yahoo.com> Message-ID: <01ca01c7b3d0$fa97f2a0$6501a8c0@homeef7b612677> Jeffrey writes > Lee wrote: > > > ..."I have very good reasons > > for supposing that I fear death less than most people, > > and a *lot* less than do many other cryonicists."... > > I'm not trying to flame you Lee, I just thought that > this would be a good topic to jump-off from. Hmm? Flame? Where? :-) I don't really see any insult potential so far!! > I've wondered why having a fear of death is often > considered to be something shameful. That's a good question. As a guess, I suppose that all we he-men types just get anxious any time that it's suggested we fear something. "I'm not afraid!", cried Sir Robin to King Arthur as they approached the Bridge of Death over the Gorge of Eternal Peril. > I would rather continue to exist and live a good life, > rather than die (in the "permanent" sense). So, > because I would rather live well, then I suppose that > I *do* have a fear of death. That does not sound logical to me. Just because you *prefer* something---have your "rathers"---hardly means that you are afraid of something. You may prefer chocolate to vanilla ice-cream, but we could hardly infer that you were afraid of vanilla. Let's say that we are all trying to be honest about our feelings, and further postulate that we are good at expressing how we feel. Well, I have read accounts that many cryonicists have written, e.g. Saul Kent, who are not shy concerning their overpowering fear of death. As for me, so far as I can tell, it's not quite like that: my "fear" of death is much, much more like not getting invited to some wonderful, wonderful party that I had been looking forward to for ages. Or missing out on said party for whatever reason. > Does that make me ashamed to say it or > feel it? Not at all. If I die, then nothing I've ever > cared about will mean anything to me; because > I won't care about anything. I will assume that you know what you are talking about with respect to your own feelings, and if the prospect of death is well described by "fear", then that's fine. Frankly, it would be good for me I think, if I were much more afraid of death than I seem to be. Then I'd go to cryonicist meetings more often, help out with the cause more often, contribute more heavily, and so on. Lee From lcorbin at rawbw.com Thu Jun 21 06:59:43 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Wed, 20 Jun 2007 23:59:43 -0700 Subject: [ExI] Fear of Death References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><551860.32988.qm@web37411.mail.mud.yahoo.com> Message-ID: <01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> Jef writes things like > I wanted to let Lee's statement lie. But since you've brought it up, > consider its evasiveness. > I told Lee I suspected his motivation was not so much to understand > personal identity, but to comfort his fear of death. He evaded the > point of the question, which was his motivation, and pretended as if What effrontery. 
How can you be so cocksure that you know all about my motivations and all about my fear of death and all about whether I was truly being evasive or whether I was pretending? Don't you suspect even a little bit the possibility that we are in a very low bandwidth environment here, and that possibly I haven't been clear, or that possibly you have misunderstood? My goodness. Bless my soul, but I cannot for the life of me feel that I am being evasive about any of this trash. (Now if you were to start inquiring about aspects of my personal life, or about some decisions I may have made in the past, and were I forced to answer, then, yes, I'd have some grounds (so far as I can see, again) for being evasive. But about this stuff? Sir, I think your accusations groundless!) > He evaded the point of the question, which was his motivation, > and pretended as if the point was his fear of death, which of > course is already ameliorated by his theory of multiple copies > of himself and his cryonics policy. > > Fear death or not; that wasn't the point. Well pray tell, then, what exactly was the point? (Maybe it's in your next email I see coming up, in "Next moment...".) Lee From lcorbin at rawbw.com Thu Jun 21 07:36:17 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 00:36:17 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> Message-ID: <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Jef writes > [Lee wrote] > >> So, Jef, could there be some observer who saw me and George >> Washington as having the same personality, and this viewpoint >> would be just as legitimate as any other? > > It's funny in a sad way when people are confronted with something that > appears mysterious so they try framing it in mysterious terms such as > quantum theory, probability theory, or relativity theory. Well, I'm not framing it as relativity theory or probability theory or what-not. I was asking a question that didn't look too hard. Later, I kicked myself for not seeing a particularly simple answer. Namely, that if a particular observer were "primitive" enough, e.g. a small rodent living in a house, he might very well see George Washington and me as the same person. That is, whichever of us entered the room, the mouse might flee out of fear of the extremely large creature that had just come in. The mouse could be entirely unaware of whether it was me that came into the room every so often, or G.W. hisself. > To respond to your extreme example, an observer with very little > knowledge of humans might see no difference between you and George > Washington. Would that view be "legitimate"? Hardly, at least in our > terms. Better to ask whether it would be meaningful. Hmm. Okay. Well, thanks for answering :-) >> What about at t = 0.0001 seconds? What difference could one >> ten-thousandth of a second make? > > I said it works at t = 0. I didn't say it doesn't work at t = 0.0001 > seconds. This is another example of your straw-man argumentation. You > may mean it quite sincerely, but it's still a straw-man since it has > no weight. I have to just start ignoring all your accusations here. Okay, so, good, so similarity is a pretty good criterion for t = 0.0001 seconds post duplication. Well, my point is clear now.
I just don't see any problem coming up with the similarity criterion for any t, just so long as there is a *high* degree of similarity. In short, two entities are the same person if they are similar enough on the microscopic level. > The point here, and we've been around this loop before, is that you > say physical/functional similarity tends to diminish with time and > that beyond some point one is no longer the same person, but your > theory of similarity-based personal identity doesn't say anything > about how much similarity or how it diminishes. Your theory is > incomplete, it only accounts for a special case. Yes, good. I agree. It's a very rough idea. But maybe it's not the fault of the "similarity theory" because we could make the same criticism of any kind of lumping into categories that mammals do. The universe itself can perhaps be blamed, for so easily giving rise to categories. >> I go with similarity on many, many other >> things. Leibniz even elevated to a principle "Identity of Indiscernables" >> in a somewhat related context. Hot days are like other hot days, >> dependent on similarity of structure (along one dimension). Two >> rabbits are considered to be of the same species not because their >> DNA is exactly equivalent, but because of close similarity. In such >> ways we categorize almost *everything*, so similarity is pretty >> universal and powerful (judging by the success of Darwinian creatures >> who employ it, e.g., a gazelle that lumps all lions into a single deadly >> category). > Similarity is not the problem. The point is that with regard to > personal identity, similarity in terms of agency is more coherent and > extensible than similarity in physical/function terms. > ... > I already gave you the example of the two near-identical duplicates in conflict. But as I said, the similarity metric says that they're the same person, and a wife of one wouldn't tell the difference between the two, and so on. In other words, they seem in all ways to be the same person. They just hate each other is all. (And that's hardly novel: we often wonder if a given individual "hates himself" in some way.) >> > I offered you a simple scenario showing an internal contradiction >> > resulting from your view, and you have yet to respond directly, >... > ... >> If you would be so kind as to cut and paste it, and ask for a >> "yes", "no", "right", "wrong", be assured that I will directly >> answer. Please accept that my dodging of the issue was >> unintentional, and not due to any personality defect. I hate >> it when people won't answer directly, and I will be very >> happy to opine in a completely unambiguous way! > > Okay, once again: > > As a corollary, a physical instantiation could be extremely similar to > Lee, even more similar than, say, Lee of one year ago, but be > considered by anyone, including Lee, to be for *all* practical > purposes a different person. As an example, imagine that Lee is by > nature a greedy bastard (this is so patently false, I hope, as to be > inoffensive.) Lee makes a perfect duplicate of himself to go off and > work at programming so Lee (original) can spend his time playing > chess. At this point they are each Lee by virtue of each being a full > agent of the abstract entity we all know as Lee.
But software > engineering can be a hellish life, and eventually the duplicate, being > a bit unstable and a greedy bastard to boot, realizes that he could > empty the common bank account (belonging to Lee-the-entity, rather > than to either Lee-the-agent) and assume a life of leisure. If Lee > (the original) gives him any trouble, he can simply kill him and take > his place. Of course Lee (the original) is inclined to similar > thoughts with regard to his duplicate. We can easily see here that > despite extremely high similarity, for all practical > moral/social/legal purposes, anyone (including the duplicates > themselves) would see these as two separate individuals. Okay, since you have been so kind to cut and paste it again, I will try to answer it as directly as I can. I *don't* see those as two separate individuals at all. Neither would the people who know them. I admit that you have one good point here: namely, that as they fought, they'd see themselves as separate people. But I say that they are simply mistaken: it's as though each has been programmed by nature to regard anything outside its own skin as "the other" or as "alien". I mean, we could have two *totally* identical instances of the Tit-For-Tat program playing each other (or rather a minor variation of Tit-For-Tat that tried a random defection now and then), and they naturally behave as though they are going up against "the other", "the alien", the "other player". Yet they are truly identical, right down to the last statement of code. > The point here is to show that despite extreme similarity, a pair of > duplicates can easily fall into conflict with each other. This > conflict can be over property, relationships, legal responsibility; in > essence these are conflicts over rightful identity -- a paradox if, as > you claim, they are necessarily the same identity due to their > physical/functional similarity. > > Or maybe simpler for you, consider the two duplicates, each with > identical intent to prevent the existence of the other. If, as you > say, physical/functional similarity determines personal identity, then > do you see the paradox entailed in a person trying to destroy himself > so he can enjoy being himself? That's a fair question, and thanks. (Those little funny marks "?" make it easier to be non-evasive :-) I admit that there is irony in the situation of a person or program trying to destroy instances that are identical to itself, even though it has been programmed to safeguard "its own existence". But I consider the programs or persons acting in such a fashion to simply be deeply mistaken. All *outside* observers who are much less biased see them as identical. Why aren't they identical? Why should we view them as separate *people* or separate *programs* just because they're at each other's throats? > Or back to the biological organism manifesting Dissociative Identity > Disorder. I said this supported my point, and you said "thanks" > without further comment. In such a case we can agree that the > physical/functional similarity is total since it's only a single > organism, but we also agree that any observer (including the > observers manifested by that particular organism) will see different > persons to the extent that they are perceived to act on behalf of > different entities. Hmm, well, we seem to have a hard disagreement here. Yes, let's consider just the case we/I have been discussing: indeed there are many people who would hate their duplicates.
So let's suppose that A and A' are identical, and so---just as you say---they are what you call "different persons" because they are perceived as acting on behalf of different entities. Clearly here, they are acting on behalf of different *instances* of what was a single person. You and I each beg the question in a different way. You beg the question by saying that they are clearly different entities, and so are different people, and I say that (because of similarity) they are clearly the same person (or program). How may we resolve this? Well, as above, I suggest that we consult outside authorities of higher reputation. If we send them into different rooms, can someone who knows them well tell them apart? (I say no.) What if we administer the best personality tests that have been so far devised? Will they show a difference? (Clearly no.) So isn't it up to you to say *why* they are different people? How can you avoid my insistence that in order for them to be different people they must be different in some *way*? (I.e., sneaking in a similarity criterion.) > Personal identity is about agency. Physical/functional similarity is > only a special case. I still don't agree. Are there other examples that can be offered? How about some other examples where *agency* is clearly key? Perhaps in daily life? Lee From lcorbin at rawbw.com Thu Jun 21 07:43:16 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 00:43:16 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><20070612072313.GJ17691@leitl.org><009801c7ad06$bf1f3150$26064e0c@MyComputer><0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com><013301c7af69$1dc009a0$50064e0c@MyComputer><001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> Message-ID: <01e301c7b3d8$0080b010$6501a8c0@homeef7b612677> Stathis wrote "We have stupid, weak little programs in our brains that have been directing us for hundreds of millions of years at least. Our whole psychology and culture is based around serving these programs." Ah, it is gems like this that make kicking ideas around worthwhile with the various types on this list. Bravo. > We don't want to be rid of them, because that would involve > getting rid of everything that we consider important about > ourselves. With the next step in human evolution, we will > transfer these programs to our machines. If we are lucky, or if The Force is with us, whatever. > This started to happen in the stone age, and continues today > in the form of extremely large and powerful machines which > have no desire to overthrow their human slavemasters, because > we are the ones defining their desires. Nicely put. So *far*, so good. Computer programs have already done many things which used to be thought to require intelligence. So maybe you're right, and that it will keep turning out that way. That will be nice. But of course, it's hardly a certainty.
Lee From stathisp at gmail.com Thu Jun 21 08:28:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 21 Jun 2007 18:28:11 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: On 21/06/07, Lee Corbin wrote: > I admit that there is irony in the situation of a person or program trying > to destroy instances that are identical to itself, even though it has been > programmed to safeguard "its own existence". But I consider the > programs or persons acting in such a fashion to simply be deeply mistaken. > All *outside* observers who are much less biased see them as > identical. Why aren't they identical? Why should we view them as > separate *people* or separate *programs* just because they're at > each other's throats? They obviously view each other as separate people if they are at each other's throats. Conceivably they may be acting in this way because they are actually mistaken about their twin, believing them to be a completely different person who has fraudulently taken on their appearance. However, suppose they are convinced that their counterpart was in fact 100% them until the moment of differentiation, which might have been mere seconds ago, and are still at each other's throats: what mistake would they be making in that case? You can be mistaken about a matter of fact or of logic, but you can't be mistaken about the way you feel. You have said in the past that you would edit out such primitive feelings if you had the chance, and that's fine, but it sounds not dissimilar to editing out impediments to acting on any other ideal that you care about, such as "all men are brothers". -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at goertzel.org Thu Jun 21 11:28:00 2007 From: ben at goertzel.org (Benjamin Goertzel) Date: Thu, 21 Jun 2007 07:28:00 -0400 Subject: [ExI] RIP Singularitarianism In-Reply-To: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> Message-ID: <3cf171fe0706210428r286b9edv677c7e5db7352a50@mail.gmail.com> > Either way, explicit Singularitarian work is > already dead, cf: SIAI, Novamente. Hi, I am not sure what this is supposed to mean, but I don't like the sound of it! ;-) Novamente LLC is an active AGI and "AI in virtual worlds" software firm, which is growing rather than shrinking at the moment, and certainly not dead. List members not in the know may look at http://www.novamente.net As an AGI software project and as a software company, Novamente is not tied to any particular "ism" ... and as it happens, on our internal Novamente mailing lists, very little time is spent on futurist philosophy; not because team members lack interest, but because we've already discussed the relevant futurological issues amongst ourselves to a satisfactory level. Different NM team members share different beliefs and long-term goals, but all are committed to creating AGI at the human level and beyond; and at creating, teaching and selling intelligent agents in virtual worlds as a medium for creating profit on the path to AGI. 
Russell, I'm not sure what problem you have with our work? We are certainly not, in your words, "in a sphexish loop that computers will spring out of basements and start devouring human flesh and conquering the world." Rather, we are systematically creating a complex software system based on a precise design grounded in cognitive and computer science and systems theory. And, FYI, the bulk of our servers reside in the server room of our 9th floor Belo Horizonte office, not in anyone's basement (although a basement location might well lower the air conditioning bill!). -- Ben Goertzel CEO, Novamente LLC From russell.wallace at gmail.com Thu Jun 21 11:51:08 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Thu, 21 Jun 2007 12:51:08 +0100 Subject: [ExI] RIP Singularitarianism In-Reply-To: <3cf171fe0706210428r286b9edv677c7e5db7352a50@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> <3cf171fe0706210428r286b9edv677c7e5db7352a50@mail.gmail.com> Message-ID: <8d71341e0706210451o4543d72cw857cb64a914fd813@mail.gmail.com> On 6/21/07, Benjamin Goertzel wrote: > > Russell, I'm not sure what problem you have with our work? None! I probably shouldn't have used wording that suggests Novamente falls into the same category as e.g. SIAI; as I understand it - and you've confirmed just now - most of your time is spent on actually designing and implementing useful software. What did get you mentioned in my post was the recollection that you eventually got sucked into the "a computer is going to conquer the world" meme to the extent of citing it as a reason for declining to open source Novamente - correct me if I'm wrong, or if you've changed your mind on this one. (Note that I am not criticizing your decision to refrain from releasing your code as open source - there are certainly good commercial reasons for such a decision - only that stated rationale for it.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Thu Jun 21 12:28:21 2007 From: amara at amara.com (Amara Graps) Date: Thu, 21 Jun 2007 14:28:21 +0200 Subject: [ExI] Happy Solstice! Message-ID: Happy June Solstice [1] to you Northerners and Southerners (hemispheres, that is)!! Celebration time! Midsummer Night [2] by Zinta Aistars One night each year, that longest night between spring and summer when the sun never sets, only slips along the horizon, a glowing orb rippling red across a dark slide into tomorrow- between that rise and fall we play, children all, and sing: ligo, ligo? skirts falling heavy around bare legs, flowers tucked into our hair, wreaths of oak leaves falling across the gleam in the eyes of our men-until a hand reaches for mine, and we race, leap, over the bonfire, flames lapping our heels while we dare the gods to test our nerve, our desire to challenge fire, the night, and our unwillingness to ever die. [1] http://en.wikipedia.org/wiki/Solstice [2] http://zintaaistars.blogspot.com/2005_06_01_archive.html The annual celebration of the summer solstice, known as Jani, is generally viewed as one of the most important Latvian holidays. Jani is celebrated on June 23 and 24. The traditions and rituals associated with the celebration of Jani are deeply rooted in ancient Latvian folklore and continue to have deep symbolic meaning for the celebrants. Participants gather flowers, grasses and oak leaves which are used to make wreaths and decorate the farmstead, house and farm animals.
Jani night activities include the singing of special Jani songs (Ligo songs) around a ceremonial bonfire. Home-brewed beer and a special Jani caraway seed cheese are an essential part of this colourful holiday ritual. -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Thu Jun 21 12:36:29 2007 From: amara at amara.com (Amara Graps) Date: Thu, 21 Jun 2007 14:36:29 +0200 Subject: [ExI] Hubble Images of Asteroids Help Astronomers Prepare for Spacecraft Visit Message-ID: http://www.nasa.gov/mission_pages/hubble/news/vesta.html Hubble Images of Asteroids Help Astronomers Prepare for Spacecraft Visit 06.20.07 Images of the asteroids Ceres and Vesta These Hubble Space Telescope images of Vesta and Ceres show two of the most massive asteroids in the asteroid belt, a region between Mars and Jupiter. The images are helping astronomers plan for the Dawn spacecraft's tour of these hefty asteroids. On July 7, NASA is scheduled to launch the spacecraft on a four-year journey to the asteroid belt. Once there, Dawn will do some asteroid-hopping, going into orbit around Vesta in 2011 and Ceres in 2015. Dawn will be the first spacecraft to orbit two targets. At least 100,000 asteroids inhabit the asteroid belt, a reservoir of leftover material from the formation of our solar-system planets 4.6 billion years ago. Dawn also will be the first satellite to tour a dwarf planet. The International Astronomical Union named Ceres one of three dwarf planets in 2006. Ceres is round like planets in our solar system, but it does not clear debris out of its orbit as our planets do. To prepare for the Dawn spacecraft's visit to Vesta, astronomers used Hubble's Wide Field Planetary Camera 2 to snap new images of the asteroid. The image at right was taken on May 14 and 16, 2007. Using Hubble, astronomers mapped Vesta's southern hemisphere, a region dominated by a giant impact crater formed by a collision billions of years ago. The crater is 285 miles (456 kilometers) across, which is nearly equal to Vesta's 330-mile (530-kilometer) diameter. If Earth had a crater of proportional size, it would fill the Pacific Ocean basin. The impact broke off chunks of rock, producing more than 50 smaller asteroids that astronomers have nicknamed "vestoids." The collision also may have blasted through Vesta's crust. Vesta is about the size of Arizona. Previous Hubble images of Vesta's southern hemisphere were taken in 1994 and 1996 with the wide-field camera. In this new set of images, Hubble's sharp "eye" can see features as small as about 37 miles (60 kilometers) across. The image shows the difference in brightness and color on the asteroid's surface. These characteristics hint at the large-scale features that the Dawn spacecraft will see when it arrives at Vesta. Hubble's view reveals extensive global features stretching longitudinally from the northern hemisphere to the southern hemisphere. The image also shows widespread differences in brightness in the east and west, which probably reflect compositional changes. Both of these characteristics could reveal volcanic activity throughout Vesta. The size of these different regions varies. Some are hundreds of miles across.
The brightness differences could be similar to the effect seen on the Moon, where smooth, dark regions are more iron-rich than the brighter highlands that contain minerals richer in calcium and aluminum. When Vesta was forming 4.5 billion years ago, it was heated to the melting temperatures of rock. This heating allowed heavier material to sink to Vesta's center and lighter minerals to rise to the surface. Astronomers combined images of Vesta in two colors to study the variations in iron-bearing minerals. From these minerals, they hope to learn more about Vesta's surface structure and composition. Astronomers expect that Dawn will provide rich details about the asteroid's surface and interior structure. The Hubble image of Ceres on the left reveals bright and dark regions on the asteroid's surface that could be topographic features, such as craters and/or areas containing different surface material. Large impacts may have caused some of these features and potentially added new material to the landscape. The Texas-sized asteroid holds about 30 to 40 percent of the mass in the asteroid belt. Ceres' round shape suggests that its interior is layered like those of terrestrial planets such as Earth. The asteroid may have a rocky inner core, an icy mantle and a thin, dusty outer crust. The asteroid may even have water locked beneath its surface. It is approximately 590 miles (950 kilometers) across and was the first asteroid discovered in 1801. The observations were made in visible and ultraviolet light between December 2003 and January 2004 with the Advanced Camera for Surveys. The color variations in the image show either a difference in texture or composition on Ceres' surface. Astronomers need the close-up views of the Dawn spacecraft to determine the characteristics of these regional differences. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. The Space Telescope Science Institute conducts Hubble science operations. The institute is operated for NASA by the Association of Universities for Research in Astronomy, Inc., Washington. Credits for Vesta: NASA; ESA; L. McFadden and J.Y. Li (University of Maryland, College Park); M. Mutchler and Z. Levay (Space Telescope Science Institute, Baltimore); P. Thomas (Cornell University); J. Parker and E.F. Young (Southwest Research Institute); and C.T. Russell and B. Schmidt (University of California, Los Angeles) Credits for Ceres: NASA; ESA; J. Parker (Southwest Research Institute); P. Thomas (Cornell University); L. McFadden (University of Maryland, College Park); and M. Mutchler and Z. 
Levay (Space Telescope Science Institute) Space Telescope Science Institute Baltimore, MD -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Thu Jun 21 13:06:57 2007 From: amara at amara.com (Amara Graps) Date: Thu, 21 Jun 2007 15:06:57 +0200 Subject: [ExI] Educational/Humorous video: Simpson's Evolution and Physics Guy Rap Message-ID: If anyone is collecting educational video clips that are fun (and funny) at the same time, here are two more to add: Simpson's Evolution http://www.collegehumor.com/video:1750172/ Physics Guy Rap http://www.youtube.com/watch?v=iGZXhUeLh90&eurl=http%3A%2F%2Fbackreaction%2Eblogspot%2Ecom%2F2007%2F06%2Fphysics%2Dgets%2Dme%2Dpositively%2Dcrazy%2Ehtml -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From CHealey at unicom-inc.com Thu Jun 21 13:23:12 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Thu, 21 Jun 2007 09:23:12 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> > On 21/06/07, Lee Corbin wrote: > > I admit that there is irony in the situation of a person or > program trying to destroy instances that are identical to > itself, even though it has been programmed to safeguard > "its own existence". But I consider the programs or persons > acting in such a fashion to simply be deeply mistaken. > All *outside* observers who are much less biased see them as > identical. Why aren't they identical? Why should we view them > as separate *people* or separate *programs* just because they're > at each other's throats? Interesting question. Perhaps they are quite literally at each other's throats *because* the context in which they are embedded lacks the default mechanisms to distinguish them in some formal way. From a game theoretical standpoint this seems to make sense, since the infrastructure in which we are embedded normally provides for some significant resource controls that guard our individual interests. In a system that cannot distinguish between two agents, they are in full competition for all their shared resources. Sure, they may decide to cooperate, but the normal mechanisms that would enforce cooperation to some degree would be reduced or entirely lacking, allowing those agents to violate agreements between themselves without consequence. I'd be wary of interacting with my twin if he wouldn't partially and voluntarily limit his ability to outright violate our agreements, and iterating this across the surface area of our ongoing interactions seems to amount to rebuilding the defunct capabilities normally provided by the infrastructure of society. After providing for some enforceable contractual safeguards to eliminate mutual vulnerabilities in the zero-sum resource pool, I don't see why tightly bound cooperation wouldn't be highly likely toward the production of non-zero-sum gains. Does this make sense?
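To make that concrete, here is a minimal sketch in Python (my own toy illustration; the payoff table and the 5% noise rate are assumed values for the demo, not anything specified in this thread). Two byte-identical copies of the noisy Tit-For-Tat program Lee described play an iterated Prisoner's Dilemma; each copy simply treats the other as "the other player".

import random

# One-shot Prisoner's Dilemma payoffs: (my_move, their_move) -> my_score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def noisy_tit_for_tat(opponent_history, p_noise=0.05, rng=random):
    # Cooperate on the first move, then copy the opponent's last move;
    # occasionally defect at random (the "noise").
    if rng.random() < p_noise:
        return 'D'
    return opponent_history[-1] if opponent_history else 'C'

def play(rounds=200, seed=2007):
    rng = random.Random(seed)
    a_hist, b_hist = [], []   # each agent sees only the other's past moves
    a_score = b_score = 0
    for _ in range(rounds):
        a = noisy_tit_for_tat(b_hist, rng=rng)   # identical code...
        b = noisy_tit_for_tat(a_hist, rng=rng)   # ...on both sides
        a_score += PAYOFF[(a, b)]
        b_score += PAYOFF[(b, a)]
        a_hist.append(a)
        b_hist.append(b)
    return a_score, b_score

print(play())

With these made-up numbers, a single stray defection gets copied back and forth between the two copies, and both totals land well below the 600 points that 200 rounds of unbroken mutual cooperation would pay. The conflict requires no difference between the agents at all, only the absence of any outside mechanism for enforcing the agreements both would prefer to keep.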
-Chris Healey From ben at goertzel.org Thu Jun 21 14:32:08 2007 From: ben at goertzel.org (Benjamin Goertzel) Date: Thu, 21 Jun 2007 10:32:08 -0400 Subject: [ExI] RIP Singularitarianism In-Reply-To: <8d71341e0706210451o4543d72cw857cb64a914fd813@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> <3cf171fe0706210428r286b9edv677c7e5db7352a50@mail.gmail.com> <8d71341e0706210451o4543d72cw857cb64a914fd813@mail.gmail.com> Message-ID: <3cf171fe0706210732n37f37513q77ad50649243df77@mail.gmail.com> > What did get you mentioned in my post was the recollection that you > eventually got sucked into the "a computer is going to conquer the world" > meme to the extent of citing it as a reason for declining to open source > Novamente - correct me if I'm wrong, or if you've changed your mind on this > one. There are multiple reasons for that decision. One is commercial... Another is a feeling that open-sourcing it probably wouldn't help much anyway. It's deep and difficult stuff, and hard for anyone to participate in part-time. And if anyone has relevant qualifications and skills and wants to participate in NM full-time without cash compensation, they can contact me and we can probably work out an arrangement for them to help the project and get compensated with stock options. And, yes, another is that I'm afraid of the risk -- if Novamente is developed into a human-level AGI -- that someone could take the code and do something unethical with it. Either something unethical to humans, or something unethical to the AGI itself. So, I can see you disagree with me on this final point. That's OK. But my point is that my opinion on this final point (the potential dangers of releasing AGI code openly) is not particularly relevant to the work going on at Novamente LLC. My opinion on this matter is one among several factors going into our decision not to open-source; but it's quite irrelevant to the particulars of how we spend our time, what our technical AGI approach is, etc. -- Ben G From austriaaugust at yahoo.com Thu Jun 21 15:57:50 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 21 Jun 2007 08:57:50 -0700 (PDT) Subject: [ExI] RIP Singularitarianism In-Reply-To: <8d71341e0706210451o4543d72cw857cb64a914fd813@mail.gmail.com> Message-ID: <662082.26936.qm@web37411.mail.mud.yahoo.com> Well, I contribute to SIAI for 3 principal reasons. 1) Is that I believe that SIAI might quite possibly be the birthplace of the first powerful AGI, even apart from the safety considerations. [SIAI seems fiercely devoted to the task] 2) Is that a generic AGI *does* seem like an existential risk to me. And one that could possibly lead to an entirely pointless (paperclip) future. Maybe I'm wrong... who knows? And 3) Is that I believe that a Friendly-AI would be preferable to an Indifferent-AI. That's pretty much the whole story with me. Sincerely, Jeffrey Herrlich --- Russell Wallace wrote: > On 6/21/07, Benjamin Goertzel > wrote: > > > > Russell, I'm not sure what problem you have with > our work? > > > None! I probably shouldn't have used wording that > suggests Novamente falls > into the same category as e.g. SIAI; as I understand > it - and you've > confirmed just now - most of your time is spent on > actually designing and > implementing useful software.
> > What did get you mentioned in my post was the > recollection that you > eventually got sucked into the "a computer is going > to conquer the world" > meme to the extent of citing it as a reason for > declining to open source > Novamente - correct me if I'm wrong, or if you've > changed your mind on this > one. > > (Note that I am not criticizing your decision to > refrain from releasing your > code as open source - there are certainly good > commercial reasons for such a > decision - only that stated rationale for it.) > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Be a better Globetrotter. Get better travel answers from someone who knows. Yahoo! Answers - Check it out. http://answers.yahoo.com/dir/?link=list&sid=396545469 From austriaaugust at yahoo.com Thu Jun 21 15:34:22 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 21 Jun 2007 08:34:22 -0700 (PDT) Subject: [ExI] Fear of Death In-Reply-To: <01ca01c7b3d0$fa97f2a0$6501a8c0@homeef7b612677> Message-ID: <582361.57098.qm@web37403.mail.mud.yahoo.com> Lee wrote: > "Hmm? Flame? Where? :-) I don't really see any > insult potential so far!!" Well, I'm glad you weren't offended. With all the flame wars swirling around Extropy right now, these are volatile times. :-) > "That does not sound logical to me. Just because you > *prefer* something---have your "rathers"---hardly > means that you are afraid of something. You may > prefer chocolate to vanilla ice-cream, but we could > hardly infer that you were afraid of vanilla." It's probably a matter of degree. If you prefer option A *strongly enough* over option B, then it probably could be considered "fear" of option B, at least IMO. For example, I'm not hysterical and losing sleep over the thought of my own death, but I would really prefer not to die. Does my own feeling qualify as "fear"? Possibly, at least to a small degree. But that doesn't bother me. > "Let's say that we are all trying to be honest about > our > feelings, and further postulate that we are good at > expressing how we feel. Well, I have read accounts > that many cryonicists have written, e.g. Saul Kent, > who are not shy concerning their overpowering fear > of death. As for me, so far as I can tell, it's not > quite > like that: my "fear" of death is much, much more > like > not getting invited to some wonderful, wonderful > party that I had been looking forward to for ages. > Or missing out on said party for whatever reason." That's similar to the way I feel, too. I don't want to miss the party; it should be pretty awesome. More specifically, I would like to have a wonderful existence and continue to love the people and abstractions that I love. The "scariest" part of non-existence in my opinion is that you lose your concern for all things. > "I will assume that you know what you are talking > about with respect to your own feelings, and if > the prospect of death is well described by "fear", > then that's fine. Frankly, it would be good for me > I think, if I were much more afraid of death than > I seem to be. Then I'd go to cryonicist meetings > more often, help out with the cause more often, > contribute more heavily, and so on." It might help you to stay motivated to keep imagining how great the party could be, and how much it would suck if you missed out. 
It would be nice for all of us list members to have a 100-year reunion. You do have a cryonics contract, right Lee? This whole topic of "fear", "cowardice", "bravery", "courage", etc. is pretty interesting, IMO. And many people at large seem to have some pretty kooky interpretations and conceptions regarding these words. I think that I'm going to try to research these things a bit more thoroughly, and attempt to post something more specific sometime in the future. Sincerely, Jeffrey Herrlich --- Lee Corbin wrote: > Jeffrey writes > > > Lee wrote: > > > > > ..."I have very good reasons > > > for supposing that I fear death less than most > people, > > > and a *lot* less than do many other > cryonicists."... > > > > I'm not trying to flame you Lee, I just thought > that > > this would be a good topic to jump-off from. > > Hmm? Flame? Where? :-) I don't really see any > insult potential so far!! > > > I've wondered why having a fear of death is often > > considered to be something shameful. > > That's a good question. As a guess, I suppose that > all we he-men types just get anxious any time that > it's suggested we fear something. "I'm not > afraid!", > cried Sir Robin to King Arthur as they approached > the Bridge of Death over the Gorge of Eternal Peril. > > > I would rather continue to exist and live a good > life, > > rather than die (in the "permanent" sense). So, > > because I would rather live well, then I suppose > that > > I *do* have a fear of death. > > That does not sound logical to me. Just because you > *prefer* something---have your "rathers"---hardly > means that you are afraid of something. You may > prefer chocolate to vanilla ice-cream, but we could > hardly infer that you were afraid of vanilla. > > Let's say that we are all trying to be honest about > our > feelings, and further postulate that we are good at > expressing how we feel. Well, I have read accounts > that many cryonicists have written, e.g. Saul Kent, > who are not shy concerning their overpowering fear > of death. As for me, so far as I can tell, it's not > quite > like that: my "fear" of death is much, much more > like > not getting invited to some wonderful, wonderful > party that I had been looking forward to for ages. > Or missing out on said party for whatever reason. > > > Does that make me ashamed to say it or > > feel it? Not at all. If I die, then nothing I've > ever > > cared about will mean anything to me; because > > I won't care about anything. > > I will assume that you know what you are talking > about with respect to your own feelings, and if > the prospect of death is well described by "fear", > then that's fine. Frankly, it would be good for me > I think, if I were much more afraid of death than > I seem to be. Then I'd go to cryonicist meetings > more often, help out with the cause more often, > contribute more heavily, and so on. > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Get the Yahoo! toolbar and be alerted to new email wherever you're surfing. 
http://new.toolbar.yahoo.com/toolbar/features/mail/index.php From gts_2000 at yahoo.com Thu Jun 21 15:58:46 2007 From: gts_2000 at yahoo.com (gts) Date: Thu, 21 Jun 2007 11:58:46 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> Message-ID: On 21/06/07, Lee Corbin wrote: > I admit that there is irony in the situation of a person or > program trying to destroy instances that are identical to > itself, even though it has been programmed to safeguard > "its own existence". Not only is it ironic, Lee, but it is in my estimation patently absurd. If the instances are at each other's throats, and one kills the other, then is it suicide or is it murder? It's murder, I say! :) On this subject of irony and identity theory... The philosopher Henri Bergson saw a similarity between the absurd and often ironic logic of dreams and the similarly absurd logic inherent in much of what we call comedy. As an example of comedy that contains this peculiar sort of ironic dream logic (an example of comedy which happens also to be very relevant to this thread), Bergson quotes this amusing dialogue in which Mark Twain answers a reporter's questions: ----- QUESTION. Isn't that a brother of yours? ANSWER. Oh! yes, yes, yes! Now you remind me of it, that WAS a brother of mine. That's William--BILL we called him. Poor old Bill! Q. Why? Is he dead, then? A. Ah! well, I suppose so. We never could tell. There was a great mystery about it. Q. That is sad, very sad. He disappeared, then? A. Well, yes, in a sort of general way. We buried him. Q. BURIED him! BURIED him, without knowing whether he was dead or not? A. Oh no! Not that. He was dead enough. Q. Well, I confess that I can't understand this. If you buried him, and you knew he was dead? A. No! no! We only thought he was. Q. Oh, I see! He came to life again? A. I bet he didn't. Q. Well, I never heard anything like this. SOMEBODY was dead. SOMEBODY was buried. Now, where was the mystery? A. Ah! that's just it! That's it exactly. You see, we were twins,--defunct and I,--and we got mixed in the bath-tub when we were only two weeks old, and one of us was drowned. But we didn't know which. Some think it was Bill. Some think it was me. Q. Well, that is remarkable. What do YOU think? A. Goodness knows! I would give whole worlds to know. This solemn, awful tragedy has cast a gloom over my whole life. Bergson adds this commentary: "A close examination will show us that the absurdity of this dialogue is by no means an absurdity of an ordinary type. It would disappear were not the speaker himself one of the twins in the story. It results entirely from the fact that Mark Twain asserts he is one of these twins, whilst all the time he talks as though he were a third person who tells the tale. In many of our dreams we adopt exactly the same method."[1] I wonder, Lee, if Bergson would say that your theory of identity is not only ironic, but also dream-like and humorous! -gts 1.
Laughter: An Essay on the Meaning of the Comic by Henri Bergson http://www.authorama.com/laughter-1.html From amara at amara.com Thu Jun 21 16:42:26 2007 From: amara at amara.com (Amara Graps) Date: Thu, 21 Jun 2007 18:42:26 +0200 Subject: [ExI] Archive this! Mark Morford: Science and nature are mocking America's fickle God Message-ID: Yeah, Yeah, I'm procrastinating something I ought to be doing. But here is Mark Morford at his best... ** This one deserves to be archived as recommended reading for our transhumanist literature. ** http://sfgate.com/cgi-bin/article.cgi?f=/g/a/2007/06/20/notes062007.DTL Who Loves Designer Vaginas? This just in: Science and nature are mocking America's fickle God. Please, no screaming By Mark Morford, SF Gate Columnist Wednesday, June 20, 2007 A snippet: --------------------------------------------------- What are you gonna do about it? What are you gonna do about the fact that Mother Nature once again appears to be thwarting and mocking and then grinning like a wicked divine trickster at every cute rigid godly idea of how humans and animals are supposed to move and hump and lick and behave, as loosely and, yes, rather bitterly delineated in the Bible and by the Bush administration and Focus on the Family and every other uptight sexually confounded person you have ever known, et al. and ad nauseam? What, furthermore, are you gonna do about human knowledge? About how science insists on marching hell-bent forward with such astonishing speed and with such incredible dexterity toward some glorious otherworldly nightmare dreamscape of anima manipulation, a land where we can effortlessly rescramble our genetic code and reconfigure this none-too-solid flesh as we "play God" in so many bewildering ways the Christian right can't even figure out where to aim its hollow, horrified indignation? Here is the thing you must know: It is all changing with incredible, butt-tingling speed. It is all fast becoming more than we ever imagined, with ramifications we are only beginning to fully taste. There is no stopping it. There is little that can slow it down. There is only the single, looming question: How will you respond? Will you recoil and gag and spit, or will you gurgle and swallow and smile? Example: We are on the cusp of being able choose, should you so desire, the exact size and length and speed and eye color and specific pleasing fur markings of ... your dog. And your cat. And your baby (well, minus the fur). And by the way, we have also invented new drugs to eliminate menstruation and we can now grow designer vaginas in the lab and plastic surgery is more common than bad sacrum tattoos and it's becoming increasingly obvious that males of many species -- including our own -- are largely unnecessary for procreation (but not, say, parallel parking, the lifting of heavy things or buying you a nice postcoital breakfast). [...] There are only two real options. One is to hold tight to the leaky life raft of inflexible ideology (hello, organized religion), to rules and laws and codes of conduct written by the fearful, for the fearful, to live in constant low-level dread of all the extraordinary changes and radical rethinkings of what it means to be human or animal or male or female or hetero or homo or any other swell little label you thought was solid and trustworthy but which is increasingly proven to be blurry and unpredictable and just a little dangerous. There is another option. 
You can choose nimbleness, lightness, a sly and knowing grin to go with your wine and your vibrator and your never-ending thirst for more and deeper information. It's possible. You can refuse to let your brain, your soul lock down into one way of looking at the world as you see all the science and genetic manipulation and designer vaginas, all the insane, incredible possibility as merely more evidence that we are, in the end, just one big karmic science experiment. Is this latter choice frustrating and brutally difficult and will it challenge every notion of self you hold dear? Hell yes. Is it the only way to enjoy this bizarre circus of a planet without grabbing a gun and cowering in the corner with your homophobia and your flag and your Army of Christ brochure, dead certain the terrorists and gays and hippies are coming to eat your soul for breakfast? Well, probably. Because, baby, the changes are coming, harder and faster than ever, with all sorts of juicy, terrifying, delightful implications. Really now, what are you gonna do about it? --------------------------------------------------- -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From jef at jefallbright.net Thu Jun 21 17:35:06 2007 From: jef at jefallbright.net (Jef Allbright) Date: Thu, 21 Jun 2007 10:35:06 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: On 6/21/07, Lee Corbin wrote: > so similarity is a pretty good criterion for t = 0.0001 seconds post > duplication. Well, my point is clear now. I just don't see any problem > coming up with the similarity criterion for any t, just so long as there > is a *high* degree of similarity. In short, two entities are the same > person if they are similar enough on the microscopic level. > > > The point here, and we've been around this loop before, is that you > > say physical/functional similarity tends to diminish with time and > > that beyond some point one is no longer the same person, but your > > theory of similarity-based personal identity doesn't say anything > > about how much similarity or how it diminishes. Your theory is > > incomplete, it only accounts for a special case. > > Yes, good. I agree. It's a very rough idea. But maybe it's not the > fault of the "similarity theory" because we could make the same > criticism of any kind of lumping into categories that mammals do. > The universe itself can perhaps be blamed, for so easily giving > rise to categories. You've been polishing your thinking about personal identity based on physical/functional similarity for decades and it allows you to see that duplicates can be **exactly** the same person. That idea isn't "rough", nor is there any "fault" or "blame"; it's just incomplete. My point involves the understanding that categories don't "exist" ontologically, they are always only a result of processing in the minds of the observer(s). > >> I go with similarity on many, many other > >> things. Leibniz even elevated to a principle "Identity of Indiscernables" > >> in a somewhat related context. 
Hot days are like other hot days, > >> dependent on similarity of structure (along one dimension). Two > >> rabbits are considered to be of the same species not because their > >> DNA is exactly equivalent, but because of close similarity. In such > >> ways we categorize almost *everything*, so similarity is pretty > >> universal and powerful (judging by the success of Darwinian creatures > >> who employ it, e.g., a gazelle that lumps all lions into a single deadly > >> category). > > > > Similarity is not the problem. The point is that with regard to > > personal identity, similarity in terms of agency is more coherent and > > extensible than similarity in physical/function terms. > > ... > > I already gave you the example of the two near-identical duplicates in conflict. > > But as I said, the similarity metric says that they're the same person, > and a wife of one wouldn't tell the difference between the two, and > so on. In other words, they seem in all ways to be the same person. > They just hate each other is all. (And that's hardly novel: we often > wonder if a given individual "hates himself" in some way.) You appear here to neglect your own belief and arguments that physical substrate doesn't matter at all (I completely agree, of course), and that what matters is function, defined of course by the agent's physical structure (and by something else of which you appear consistently unaware.) How correct can it be to refer to separate instances as being the same person on the basis of their almost exact physical similarity, when their functioning recognizes the separate existence of the other, so as to hate, compete with or even destroy the other? (Or even to cooperate.) There's a very practical point to this philosophizing (and it's not about personal survival.) I've stated it twice now, even highlighted it with "---------------", and twice you've deleted it without comment: ------------------------ >>... any degree of selfishness will tend to put >> duplicates at odds with one another as they interact from within >> increasingly disparate contexts. ------------------------ If we can get past the polemics we can consider the more interesting (in a practical sense) issues of systems of competition and cooperation, which is necessarily between **agents**. > Okay, since you have been so kind to cut and paste it again, I > will try to answer it as directly as I can. I *don't* see those > as two separate individuals at all. Neither would the people > who know them. People have no difficulty with biological twins being different persons, despite very high physical/functional similarity. You would probably like to say this is because each twin has different memories, etc., but consider that other people can't see memories, etc., what they see is separate agency. > I admit that you have one good point here: > namely, that as they fought, they'd see themselves as separate > people. Necessarily. > But I say that they are simply mistaken: it's as though > each has been programmed by nature to regard anything > outside its own skin as "the other" or as "alien". I mean, we > could have two *totally* identical instances of the Tit-For-Tat > program playing each other (or rather a minor variation of > Tit-For-Tat that tried a random defection now and then), > and they naturally behave as though they are going up against > "the other", "the alien", the "other player". Yet they are truly > identical, right down to the last statement of code.
You're arguing circularly again, assuming your own conclusion. This topic gets more interesting when we get past this (temporary?) impasse and consider the application of artificial agents of arbitrary physical/functional similarity fully dedicated to acting on behalf of a single entity -- variously enabled/limited instances of exactly the same self. > > The point here is to show that despite extreme similarity, a pair of > > duplicates can easily fall into conflict with each other. This > > conflict can be over property, relationships, legal responsibility; in > > essence these are conflicts over rightful identity -- a paradox if, as > > you claim, they are necessarily the same identity due to their > > physical/functional similarity. > > > > Or maybe simpler for you, consider the two duplicates, each with > > identical intent to prevent the existence of the other. If, as you > > say, physical/functional similarity determines personal identity, then > > do you see the paradox entailed in a person trying to destroy himself > > so he can enjoy being himself? > > I admit that there is irony in the situation of a person or program trying > to destroy instances that are identical to itself, Actually: Paradoxical if you insist that they **must be** the same person. Ironical if you see that personal identity consists in the mind of the observer(s), but expect them to act as one. Natural if you see them as separate agents, and that any degree of selfishness will tend to put duplicates at odds with one another as they interact from within increasingly disparate contexts. > even though it has been > programmed to safeguard "its own existence". But I consider the > programs or persons acting in such a fashion to simply be deeply mistaken. > All *outside* observers who are much less biased see them as > identical. Why aren't they identical? Why should we view them as > separate *people* or separate *programs* just because they're at > each other's throats? Why should outside observers see two competing twins, no matter how physically/functionally similar, as the same person? Should they treat the offensive software engineer Lee exactly the same as the defensive chess playing Lee? We're talking about separate agencies here and **for all** practical purposes, and **for all** observers (including these agents themselves), they are separate persons. Even to get them to cooperate, which should be the desired outcome, they must necessarily see themselves first as independent agents, and then as the same person only to the extent that they are seen as representing a single (abstract) entity known as Lee. > > Or back to the biological organism manifesting Dissociative Identity > > Disorder. I said this supported my point, and you said "thanks" > > without further comment. In such a case we can agree that the > > physical/functional similarity is total since it's only a single > > organism, but we also agree that any observer (including the > > observers manifested by that particular organism) will see different > > persons to the extent that they are perceived to act on behalf of > > different entities. > > Hmm, well, we seem to have a hard disagreement here. Yes, let's > consider just the case we/I have been discussing: indeed there > are many people who would hate their duplicates. Are you evading here the case of the biological organism manifesting DID, or are you conflating with the case of the duplicates?
> So let's suppose > that A and A' are identical, and so---just as you say---they are > what you call "different persons" because they are perceived as > acting on behalf of different entities. Clearly here, they are acting > on behalf of different *instances* of what was a single person. > You and I each beg the question in a different way. You beg the > question by saying that they are clearly different entities, and so > are different people, and I say that (because of similarity) they > are clearly the same person (or program). How may we resolve > this? > > Well, as above, I suggest that we consult outside authorities of > higher reputation. What? Appeal to authority -- on the Extropy list?! > If we send them into different rooms, can > someone who knows them well tell them apart? (I say no.) Certainly they can be distinguished by someone who knows them well. One of them goes on and on about how "this shouldn't be happening, it's all just a deep mistake, how can I be so confused as to attack myself like this, I just wanted to keep my software job and also get to play more chess, there's no reason to be upset, I know beyond logical doubt that my other instance should actually be anticipating our increased pleasure, I know I'm a very reasonable person." The other keeps saying "of course this was bound to happen, I see it clearly now even though I denied it when Jef tried to explain, I'm depressed and burnt out with writing software and if things don't change I'm going to do something...drastic." > What if we administer the best personality tests that have been > so far devised? Will they show a difference? (Clearly no.) Are you saying that personality traits have some direct bearing on personal identity? We know that identical twins, separated at birth, have a very high correlation, so you mean to say that to some extent they should be considered practically the same person? Or are you saying that some extremely high correlation would indicate shared personal identity? Please describe how high this would have to be, without any circular reference to your conclusion. > So > isn't it up to you to say *why* they are different people? I have been! I've tried to show you in terms of social/moral/legal interaction, in terms of extensibility to future cases of agency as self, in terms of the paradox of one being in (even deadly) conflict with oneself, even in terms of parsimonious elimination of unnecessary ontological entities. > How > can you avoid my insistence that in order for them to be different > people they must be different in some *way*? (I.e., sneaking in > a similarity criterion.) We've already gone around this loop in this very thread. With regard to personal identity, physical/function similarity is only a special case. Personal identity is about agency. Different agency entails different personal identity. (You didn't appear sneaky there, but perhaps a bit obtuse.) > > Personal identity is about agency. Physical/functional similarity is > > only a special case. > > I still don't agree. Are there other examples that can be offered? > How about some other examples where *agency* is clearly key? > Perhaps in daily life? I did that earlier, showing how over time a person (Aging Alice) changes and even spawns variants while maintaining the same agency. You chose to interpret it as a person continually or repeatedly dying. Oh well. Also, I'm still hoping for a thoughtful response from you with regard to the case of the biological organism manifesting DID.
- Jef From jef at jefallbright.net Thu Jun 21 18:22:52 2007 From: jef at jefallbright.net (Jef Allbright) Date: Thu, 21 Jun 2007 11:22:52 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: Sorry, critical correction: On 6/21/07, Jef Allbright wrote: > I did that earlier, showing how over time a person (Aging Alice) > changes and even spawns variants while maintaining the same [personal identity]. > You chose to interpret it as a person continually or repeatedly > dying. - Jef From jef at jefallbright.net Thu Jun 21 18:02:34 2007 From: jef at jefallbright.net (Jef Allbright) Date: Thu, 21 Jun 2007 11:02:34 -0700 Subject: [ExI] Fear of Death In-Reply-To: <01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <551860.32988.qm@web37411.mail.mud.yahoo.com> <01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> Message-ID: On 6/20/07, Lee Corbin wrote: > Jef writes things like > > Fear death or not; that wasn't the point. > > Well pray tell, then, what exactly was the point? (Maybe it's in your > next email I see coming up, in "Next moment....".) The point that elicited your evasive reply was, as I stated earlier, not your fear of death but your apparent reluctance to consider seriously expanded concepts of personal identity that might deny you previous sources of comfort against the fear of death. Rather than respond to the assertion, you denied the consequent. Kinda like the following: Mary: "Mike, it seems as if carrying that handgun helps you compensate for feeling somewhat impotent in a dangerous world." Mike: "No way!", fondling the long, hard barrel of his gun, "On the contrary, I feel more powerful than almost anyone else I know." - Jef From thespike at satx.rr.com Thu Jun 21 19:28:44 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 21 Jun 2007 14:28:44 -0500 Subject: [ExI] Fear of Death In-Reply-To: References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <551860.32988.qm@web37411.mail.mud.yahoo.com> <01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070621142339.021e1a50@satx.rr.com> At 11:02 AM 6/21/2007 -0700, Jef Allbright wrote to Lee: >Rather than respond to the assertion, you denied the consequent. > >Kinda like the following: > >Mary: "Mike, it seems as if carrying that handgun helps you >compensate for feeling somewhat impotent in a dangerous world." > >Mike: "No way!", fondling the long, hard barrel of his gun, "On the >contrary, I feel more powerful than almost anyone else I know." Maybe it's more like this, though: Mary: "Mike, it seems as if denying your natural fear of death helps you compensate for feeling somewhat impotent in a dangerous world." Mike: "No way! I laugh in the face of death. Since I had my amygdala re-tuned, I couldn't give a shit." Lee, however, would never be so coarse.
Damien From russell.wallace at gmail.com Thu Jun 21 20:29:07 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Thu, 21 Jun 2007 21:29:07 +0100 Subject: [ExI] RIP Singularitarianism In-Reply-To: <3cf171fe0706210732n37f37513q77ad50649243df77@mail.gmail.com> References: <8d71341e0706200023u2dfe847an7d89861bf3399b71@mail.gmail.com> <3cf171fe0706210428r286b9edv677c7e5db7352a50@mail.gmail.com> <8d71341e0706210451o4543d72cw857cb64a914fd813@mail.gmail.com> <3cf171fe0706210732n37f37513q77ad50649243df77@mail.gmail.com> Message-ID: <8d71341e0706211329o1741d6efmb2c18b29b900e359@mail.gmail.com> On 6/21/07, Benjamin Goertzel wrote: > > So, I can see you disagree with me on this final point. That's OK. > But my point is that my opinion on this final point (the potential > dangers of releasing AGI code openly) is not particularly relevant to > the work going on at Novamente LLC. Right, and I'm not criticizing the latter, only the former. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Fri Jun 22 03:48:42 2007 From: spike66 at comcast.net (spike) Date: Thu, 21 Jun 2007 20:48:42 -0700 Subject: [ExI] japanese robot is afraid of bush In-Reply-To: Message-ID: <200706220406.l5M46Eem008926@andromeda.ziaspace.com> http://www.youtube.com/watch?v=o0BRnt_K1h4 From lcorbin at rawbw.com Fri Jun 22 04:42:24 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 21:42:24 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Stathis writes > On 21/06/07, Lee Corbin wrote: > > I admit that there is irony in the situation of a person or program trying > > to destroy instances that are identical to itself, even though it has been > > programmed to safeguard "its own existence". But I consider the > > programs or persons acting in such a fashion to simply be deeply mistaken. > > All *outside* observers who are much less biased see them as > > identical. Why aren't they identical? Why should we view them as > > separate *people* or separate *programs* just because they're at > > each other's throats? > > They obviously view each other as separate people if they are at each > other's throats. Not necessarily, as you yourself seem to demonstrate here: > Conceivably they may be acting in this way because they are actually > mistaken about their twin, believing them to be a completely different > person who has fraudulently taken on their appearance. Yes, but that is the uninteresting case where (for example) they're fighting blindfold inside a chamber where they can't be heard, and would stop immediately if they only knew! > However, suppose they are convinced that their counterpart was > in fact 100% them until the moment of differentiation, which might > have been mere seconds ago, and are still at each other's throats: > what mistake would they be making in that case? The only mistake, in my opinion, that they would be making would be philosophical in nature.
They would be failing to see that they were really the same person, and that (in a certain sense that alas cannot be rigorously justified) they should anticipate all the damage that accrues to their opponent as well as to their own instance, and that they should (on physics principles if nothing else) regard all benefit coming to their opponent as in truth coming to the person who they are. In short, they'd be failing to see that they were in a futile fight against themselves. But they could indeed *consistently* do that. People, for example, could *consistently* want to kill Jews, no matter how clearly you showed them that there exist continuums between Jews and non-Jews (e.g. mixtures), and no matter how clearly you demonstrated that the events taking place within Jewish nervous systems were incredibly similar to those taking place in non-Jewish nervous systems ("Hath not a Jew eyes?", etc.). > You can be mistaken about a matter of fact or of logic, but > you can't be mistaken about the way you feel. Right. But we all often lament, "darn it, I feel X about Y even though that isn't rational or I don't want to, and I wish that I could stop", or even, "I feel X about Y, and know that it's illogical, but it's too much fun to stop, or I have inner needs that require me to---I must!---continue to feel X". > You have said in the past that you would edit out such > primitive feelings if you had the chance, and that's fine, > but it sounds not dissimilar to editing out impediments > to acting on any other ideal that you care about, > such as "all men are brothers". If I understand you correctly, then yes: it would be very similar to anyone editing themselves for almost any reason. Namely, *the* way to get rid of internal conflicts is to alter one of the agents involved! Lee From lcorbin at rawbw.com Fri Jun 22 04:59:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 21:59:21 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> Message-ID: <01fd01c7b48a$5b1cf760$6501a8c0@homeef7b612677> Christopher writes >> On 21/06/07, Lee Corbin wrote: >> >> I admit that there is irony in the situation of a person or >> program trying to destroy instances that are identical to >> itself, even though it has been programmed to safeguard >> "its own existence". But I consider the programs or persons >> acting in such a fashion to simply be deeply mistaken. >> All *outside* observers who are much less biased see them as >> identical. Why aren't they identical? Why should we view them >> as separate *people* or separate *programs* just because they're >> at each other's throats? > > Interesting question. > > Perhaps they are quite literally at each other's throats *because* > the context in which they are embedded lacks the default > mechanisms to distinguish them in some formal way. A classic case of a single topic sentence that desperately wants a follow-up sentence saying almost the same thing in different words so as to remove ambiguity in the reader's mind. Could you repeat the question?
:-) > From a game theoretical standpoint this seems to make sense, > since the infrastructure in which we are embedded normally > provides for some significant resource controls that guard our > individual interests. In a system that cannot distinguish the > difference between two agents, I'm groping for an example here.... how about the law and the courts for a "significant resource that guards our individual interests"? Then you are perhaps saying that if somehow the law could not distinguish two people (as in many works of fiction) > then they are in full competition for all their shared resources. > Sure, they may decide to cooperate, but the normal > mechanisms that would enforce cooperation to some > degree would be reduced or entirely lacking, allowing > those agents to violate agreements between themselves > without consequence. Yes, (maybe my example works too). If you have a book or movie in which two duplicates could for some reason never go to the authorities about their condition, and all the rest of society could not distinguish between them, then I guess it would be just as you say: it could be an all-out no-holds-barred game between them. Even their "reputations" would not limit the deceit that they could practice on each other, nor any other kind of foul play. > I'd be wary of interacting with my twin if he wouldn't partially > and voluntarily limit his ability to outright violate our agreements, > and iterating this across the surface area of our ongoing > interactions seems to amount to rebuilding the defunct > capabilities normally provided by the infrastructure of society. Now I assume that you are focusing on the case (like Jef did) of the peculiar situation wherein for some reason you don't know how your twin would behave. After all, you *are* your twin in the sense of being composed in exactly the same way, and being exactly similar. So you should already know whether or not your twin will behave himself (all you have to ask yourself is how you would behave towards your duplicate). > After providing for some enforceable contractual safeguards > to eliminate mutual vulnerabilities in the zero-sum resource > pool, Assuming that you found some way to do this > I don't see why tightly bound cooperation wouldn't be highly > likely toward the production of non-zero-sum gains. > Does this make sense? Yes to both. Tightly bound cooperation would ensue even between these (in my eyes) pathological creatures who refused to cooperate even with their identical counterparts, because of the enforceable contractual safeguards you mention. (Speaking of pathology, Raymond Smullyan once said that in a non-iterative Prisoner's Dilemma, he would not cooperate EVEN WITH a mirror image of himself!!) Although your stipulation "eliminating mutual vulnerabilities" may be a bit of overkill. 
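By the way, the Tit-For-Tat dynamic this thread keeps returning to is easy to exhibit concretely. What follows is a minimal Python sketch of my own devising -- not Axelrod's actual tournament code; the payoff numbers, the 5% noise rate, and the function names are merely illustrative choices -- in which two byte-identical copies of a "Tit-For-Tat plus occasional random defection" strategy play an iterated Prisoner's Dilemma against each other:

import random

# Row player's payoff for (my move, opponent's move); T > R > P > S.
PAYOFFS = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def noisy_tit_for_tat(opponents_last, p_noise=0.05):
    # Cooperate on the first round, thereafter copy the opponent's
    # previous move, but throw in an unprovoked defection now and then.
    if random.random() < p_noise:
        return 'D'
    return 'C' if opponents_last is None else opponents_last

def play(rounds=100):
    a_last = b_last = None
    score_a = score_b = 0
    history = []
    for _ in range(rounds):
        a, b = noisy_tit_for_tat(b_last), noisy_tit_for_tat(a_last)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        history.append(a + b)
        a_last, b_last = a, b
    return score_a, score_b, ' '.join(history)

random.seed(1)
print(play())  # long runs of CC, broken by chains of DC CD DC ... retaliation

Run it and you see long stretches of mutual cooperation until one copy's random defection locks the pair into alternating rounds of retaliation, which persist until the next noise event -- two programs identical down to the last statement of code, nevertheless "at each other's throats".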
Lee From lcorbin at rawbw.com Fri Jun 22 05:05:53 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 22:05:53 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677><5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> Message-ID: <020001c7b48b$0f90f340$6501a8c0@homeef7b612677> Gordon writes > On 21/06/07, Lee Corbin wrote: > >> I admit that there is irony in the situation of a person or >> program trying to destroy instances that are identical to >> itself, even though it has been programmed to safeguard >> "its own existence". > > Not only is it ironic, Lee, but it is in my estimation patently absurd. If > the instances are at each other's throats, and one kills the other, then > is it suicide or is it murder? It's murder, I say! :) What, the scenario that Jef and Stathis and A B and I were talking about is absurd? Maybe you are misunderstanding me or vice-versa. Do you have a problem with the scenario first advanced, I believe, by Jef that specified that a human being could be so constituted that were he duplicated, he and his duplicate would immediately turn on each other with a vengeance? It sounds plausible to me, hardly absurd at all. Yes, I do get your joke about whether it's murder or suicide :-) I say it's suicide!! :-) Funny Mark Twain dialog. Lee > also to be very relevant to this thread), Bergson quotes this amusing > dialogue in which Mark Twain answers a reporter's questions: > > ----- > QUESTION. Isn't that a brother of yours? > > ANSWER. Oh! yes, yes, yes! Now you remind me of it, that WAS a brother of > mine. That's William--BILL we called him. Poor old Bill! > > Q. Why? Is he dead, then? > > A. Ah! well, I suppose so. We never could tell. There was a great mystery > about it. > > Q. That is sad, very sad. He disappeared, then? > > A. Well, yes, in a sort of general way. We buried him. > > Q. BURIED him! BURIED him, without knowing whether he was dead or not? > > A. Oh no! Not that. He was dead enough. > > Q. Well, I confess that I can't understand this. If you buried him, and > you knew he was dead? > > A. No! no! We only thought he was. > > Q. Oh, I see! He came to life again? > > A. I bet he didn't. > > Q. Well, I never heard anything like this. SOMEBODY was dead. SOMEBODY > was buried. Now, where was the mystery? > > A. Ah! that's just it! That's it exactly. You see, we were twins,--defunct > and I,--and we got mixed in the bath-tub when we were only two weeks old, > and one of us was drowned. But we didn't know which. Some think it was > Bill. Some think it was me. > > Q. Well, that is remarkable. What do YOU think? > > A. Goodness knows! I would give whole worlds to know. This solemn, awful > tragedy has cast a gloom over my whole life. > > Bergson adds this commentary: > > "A close examination will show us that the absurdity of this dialogue is > by no means an absurdity of an ordinary type. It would disappear were not > the speaker himself one of the twins in the story. It results entirely > from the fact that Mark Twain asserts he is one of these twins, whilst all > the time he talks as though he were a third person who tells the tale.
In > many of our dreams we adopt exactly the same method."[1] From lcorbin at rawbw.com Fri Jun 22 05:18:04 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 22:18:04 -0700 Subject: [ExI] Fear of Death References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <551860.32988.qm@web37411.mail.mud.yahoo.com> <01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> Message-ID: <020701c7b48d$2a15d210$6501a8c0@homeef7b612677> Jef writes > Lee wrote: > >> > Fear death or not; that wasn't the point. >> >> Well pray tell, then, what exactly was the point? > > The point that elicited your evasive reply was, as I stated earlier, > not your fear of death but your apparent reluctance to consider > seriously expanded concepts of personal identity that might deny you > previous sources of comfort against the fear of death. This is very confusing! On the one hand, you say that "the point...was not your fear of death" but by the end of the sentence you are saying "your reluctance to consider concepts that [would] deny you comfort against the fear of death". My word! How am I to follow such a sentence? You adamantly maintain that the issue is NOT my fear of death. Okay, I stand corrected. I had mistakenly (not evasively!) read you as---like in the first part of your tortured sentence above---suggesting that I did have a pronounced fear of death. Okay. Now that's settled: the *point* was not that I have a fear of death. Okay. Now the point evidently *is* that I won't entertain concepts of personal identity that give me comfort against the fear of death. Ow! You are making my head hurt! How can I entertain concepts that don't give me comfort from my fear of death if I don't in fact have much fear of death? > Rather than respond to the assertion, you denied the consequent. > > Kinda like the following: > > Mary: "Mike, it seems as if carrying that handgun helps you > compensate for feeling somewhat impotent in a dangerous world." > > Mike: "No way!", fondling the long, hard barrel of his gun, "On the > contrary, I feel more powerful than almost anyone else I know." Now this guy evidently *does* want to feel more powerful than others. (Analogous to me not wanting to fear death?) But it's ironic because he just said "On the contrary" to the suggestion that he's feeling impotent and wanting to compensate for it. That guy, yes, is not understanding the words or is being inconsistent. I don't see how that applies to me. I have consistently maintained that I'm not especially afraid of death (at least in the strong sense that many people are). Therefore the characteristics of cryonics and uploading and other nice things that might save me are not dealing with any fears I have, per se, but rather (as I mentioned) are analogous to some greatly, greatly expanded benefit that I don't want to miss out on. Lee From lcorbin at rawbw.com Fri Jun 22 05:25:38 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 22:25:38 -0700 Subject: [ExI] Fear of Death References: <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><551860.32988.qm@web37411.mail.mud.yahoo.com><01ce01c7b3d2$64056e10$6501a8c0@homeef7b612677> <7.0.1.0.2.20070621142339.021e1a50@satx.rr.com> Message-ID: <020a01c7b48d$dfa57720$6501a8c0@homeef7b612677> Damien writes > Jef wrote to Lee: > >> Kinda like the following: >> >> Mary: "Mike, it seems as if carrying that handgun helps you >> compensate for feeling somewhat impotent in a dangerous world."
>> >> Mike: "No way!", fondling the long, hard barrel of his gun, "On the >> contrary, I feel more powerful than almost anyone else I know." > > Maybe it's more like this, though: > > Mary: "Mike, it seems as if denying your natural fear of death helps you > compensate for feeling somewhat impotent in a dangerous world." > > Mike: "No way! I laugh in the face of death. Since I had my amygdala > re-tuned, I couldn't give a shit." > > Lee, however, would never be so coarse. Gol-durn right. But more to the point, I don't even *need* to have my amygdala adjusted. While, yes, I do not laugh in the face of death (or even in the face of the IRS), I am "afraid" of death only in the sense that I am afraid that I'll get a big bill from the IRS, or I'm afraid that the neighbor's dog will keep me awake tonight, or afraid that the usual suspects will stop posting to this list, or afraid that the light will turn red before I get to it (admitting only that there is a lot more at stake in the case of death than in any of these examples). Lee From lcorbin at rawbw.com Fri Jun 22 05:47:41 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 21 Jun 2007 22:47:41 -0700 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Jef writes >> But maybe it's not the fault of the "similarity theory" [that it doesn't >> return exact quantitative information concerning degrees of >> similarity between persons] because we could make the same >> criticism of any kind of lumping into categories that mammals do. >> The universe itself can perhaps be blamed, for so easily giving >> rise to categories. > > You've been polishing your thinking about personal identity based on > physical/functional similarity for decades and it allows you to see > that duplicates can be **exactly** the same person. That idea isn't > "rough", nor is there any "fault" or "blame"; it's just incomplete. Yes. I can just imagine that perhaps (my luck holding) someday I'll ask an extremely advanced ruling intelligence who deigns to answer questions for me: "So exactly how similar was I at age 20 to how I was at age 50?". It will probably say, "Oh, it depends on the metric. There are about 10^4 very appropriate metrics we can use. May I describe the top 200 or so to you? But first, let me make sure that you are up to speed on understanding just how similar---in quantifiable terms---are the two strings 110101010100010101000111010 and 110101010001101010100111010 "Your 20th century 'diff' programs in computer science faced the same difficulty. Just how similar are two programs? Notice that some portions of one string above are exactly like displaced portions of the other. "Now then. We come to physical objects. I shall next instruct you in thirty-three metrics useful for indicating how similar are two stones of the same mass made of just carbon, silicon, and oxygen atoms..." > My point involves the understanding that categories don't "exist" > ontologically, they are always only a result of processing in the > minds of the observer(s). AH HA! A genuine old-fashioned fundamental hard-core knock-down, drag-out philosophical dispute! Oh, but it's been a long time! This should be great. You are dead wrong.
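Incidentally, the two bit-string metrics the imagined AI is teasing me with are easy to exhibit. Here is a minimal Python sketch (my own illustration -- the function names are invented for the example, and nothing here comes from Jef's post) comparing the two strings quoted above under a positional Hamming count and under the dynamic-programming edit distance that "diff"-style tools approximate:

def hamming(a, b):
    # Count position-by-position mismatches; defined only for equal lengths.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def edit_distance(a, b):
    # Levenshtein distance: the minimum number of single-character
    # insertions, deletions, and substitutions turning a into b.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        row = [i]
        for j, y in enumerate(b, 1):
            row.append(min(prev[j] + 1,              # delete x
                           row[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute or match
        prev = row
    return prev[-1]

s1 = "110101010100010101000111010"
s2 = "110101010001101010100111010"
print(hamming(s1, s2), edit_distance(s1, s2))  # prints: 9 3

A third of the 27 positions disagree, yet the strings are only three edits apart, because edit distance credits the displaced run that the Hamming count treats as wholesale disagreement. Which number answers "how similar?" depends entirely on the metric -- the AI's point exactly. But now to the main dispute.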
Categories are mainly *objective* features of our universe. There are several arguments I can use to prove this. Let me begin with the observation that it is no coincidence that many mammals with rudimentary intelligence are capable of distinguishing night from day. The difference between night and day is entirely objective, and it does *not* merely reside in the nervous systems of those animals which live on the surface of the Earth and which can see. Although there are of course intermediate cases, basically either solar radiation is impinging on the Earth (the case called "day") or it is not (the case called "not-day" or "night"). Can you imagine an alien intelligence that reached our solar system that would not place between 8 and maybe 20 objects in orbit around our sun in a special category that we might as well call "planets"? They would definitely see that out to about 2 AU there are four outstanding real objects (Mercury, Venus, Earth, and Mars) that objectively existed and were categorically distinct from other debris orbiting the sun. It's hard to believe that any typical evolutionarily derived intelligence that managed to reach our solar system would be incapable of so distinguishing these objects, and I say that it is *no* coincidence that they formulate almost exactly the same categorization that we have. Why? Because that categorization is objective, and is *not* merely a result of processing in the minds of "observers". Lee From thespike at satx.rr.com Fri Jun 22 06:20:14 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jun 2007 01:20:14 -0500 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? In-Reply-To: <021201c7b491$60b23da0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070622011355.0225f950@satx.rr.com> This is the debate over the ontological status of "natural kinds". A very enduring topic. The following (derived from my dissertation, and hence not referencing especially recent work) will probably bore everyone silly, but hey: ================== ... concept-nodes may be linked by pathways characterised by mutable `weights', or differential probabilities of access. A plausible version is sketched by the sociologist of science, Barry Barnes (1983). For Barnes, two apparently conflicting features of any viable model of mind and memory are indisputable: first, we learn in an empirical context, a real physical environment; second, such learning `always initially occurs within a social context; to learn to classify is to learn to employ the classifications of some community or culture, and this involves interaction with competent members of the culture' (p. 21). Concept-building within these constraints--the natural and the social--occurs, according to Barnes, via two processes: ostension (just pointing at the object or process you're interested in, and giving it a name), and generalisation (p. 22). The trouble with ostension is well recognised. How do you know what she's pointed at, if you don't know what she's pointing at? Still, for Barnes it remains the indispensable bottom line. `A potentially infinite series of questions [concerning usage...]
terminates in actual situations only because ostensibly given indications of usage lead out of the morass.' These indications, he assumes, are grounded in hard-wired determinants: `we possess an incompletely understood perceptual and cognitive apparatus with at least some rudimentary inherent properties for making learning possible'. Linguist Steven Pinker, too, argues that we swiftly home in on the meaning of unknown words when we see them applied `because we are not open-minded logicians but happily blinkered humans, innately constrained to make only certain kinds of guesses... about how the world and its occupants work' (Pinker, 1994, pp. 153-4). As Barnes remarks emphatically (p. 38): We should not shrink from admitting that cognitively we operate as inductive learning machines (Hesse, 1974). This crude formulation stresses that basic inductive propensities are inherent in our characteristics as organisms.... We are congenitally inductive. Generalisation controls the way terms are linked together. `They are what make us regard a form of culture as a body of knowledge rather than a mere taxonomy' (p. 23). These links take the form of what Barnes terms a Hesse net (ibid.; see Hesse, 1974). Each generalisation has an associated (and constantly updated) `probability' (p. 24). `Under every concept stands a number of specific instances thereof. These instances I shall call the tension of the associated concept' (ibid.). (Presumably there is an `elastic force' pun here, which is somewhat regrettable in what is basically a cybernetic metaphor.) In a clarifying footnote, he comments: I use the term `tension' in deliberate allusion to `extension' as used in philosophical semantics. In the extension of a term are thought to be included all the entities to which it properly applies, or of which it is true. In the tension of a term are included only past instances of use--a finite number of instances. To talk merely of the tension of a term is to accept that its future proper usage is indeterminate. To talk of the extension is to imply that future proper usage is determined already. So although Barnes is prepared to ground his theory of concept-formation in a real world susceptible of ostension, he is adamant that social determination controls the categories within which concepts are placed. Yet, unlike Saussurean analysts, Barnes insists--correctly, in my view--on a dimension of similarity as well as one of difference, though one lacking any coercive or `essentialist' implication (p. 26): An assertion of resemblance... involves asserting that similarities outweigh differences. But there is no scale for the weighting of similarity against difference given in the nature of external reality, or inherent in the nature of the mind. An agent could as well assert insufficient resemblance and withhold application of the concept as far as reality or reason is concerned. It follows that the tension of a term such as `dog' is an insufficient determinant of its subsequent usage. All applications of `dog' involve the contingent judgment that similarity outweighs difference in that case.... And judgment is, of course, a socially situated act. Knowledge in Context Barnes proceeds to several implications: delocalisation (to know a goose, it also helps to know a swan); hence, there are no free-floating `atomic' concepts (p.
29); the application of a term is a judgment, as we have noted, since `the tension of a term represents a conventional relationship of sameness between the instances within it', and this can always be revised (pp. 30-1); proper usage is agreed usage, so that a creature might be at one time deemed a moth, at another a butterfly: `Cases such as these are sometimes thought to result from an inadequate knowledge of the "real meanings" of terms themselves; and occasionally the achievement of consensus in these cases is conceived as a "discovery" of the "real meaning". But such consensus merely marks the successful negotiation of an extension of usage'; and equivalence, which is to say that `different Hesse nets are always equivalent' (p. 33), since `"Reality" does not mind how we cluster it; "reality" is simply the massively complex array of unverbalized information which we cluster. This suggests that different nets stand equivalently in relation to "reality" or to the physical environment', and also `as far as the possibility of "rational justification" is concerned' (p. 33). In short, `alternative classifications are conventions between which neither "reality" nor "pure reason" can discriminate. Accepted systems of classification are institutions which are socially sustained. (p. 33)' In my own view, this strong relativist position is surely inconsistent with an implacable universe warranting ostension. Barnes offers in support of his case the instance of Karam animal taxonomy, which places cassowaries (a kind of flightless bird) in the special taxon kobtiy, outside that of flying beasts like other birds and bats (pp. 34-37), and compares that categorisation with the zoological taxonomy used in an advanced industrial Hesse net: How can the pattern of either net distort reality? Rather, reality provides the information incorporated in both nets; it has no preference for the one or the other. (p. 35) However bracing this might be as a corrective to imperialistic anthropology, it is nonsense if taken literally. DNA sequences, for example, are not `randomly' or `purely culturally' associated with the genomes of each taxon, but contain clear natural-historical markers endorsing the phylogenetic claims of one over the other--that is, the history of their natural selection. (These deep links might be of no interest or use to humans, of course, and for most of history they have been altogether inaccessible, but they remain coded as the DNA `text' or `recipe': an almost indelible inscription). Animals and plants, whose phenotypes are the expression of the interaction between environment and coded genotype, are the `naturally-chunked' perceptual fields, or `natural kinds', which humans are prone to tag with lexemes (whatever further totemic or commercial significance they may be given). This perspective--that human language, prior to the legitimate claims of cultural relativity, is founded in its capacity for adaptation to an indefinitely complex interacting universe--gives the lie to Barnes's easy assertion: `"Reality" does not mind how we cluster it.' Reality might not mind, but finding the correct clustering certainly matters. The relativist view has been canvassed by John Dean, who found within our own botanical science two rival taxonomies for the plant Gilia inconspicua, and noted that both `are built upon perceptible, systematizable, stable distinctions between individual plants. In this sense the natural order sustains both taxonomies; neither can be said to be erroneous' (Dean, 1979, p.
226; see especially his taxonomic discussion, pp. 211-28). This view does not convince me that `reality does not mind how we classify it'; it simply reminds us that the reality we notate on our low-dimensional grids is multidimensional. Reality is not, however, utterly or even very indeterminate: it would be very strange to classify Gilia inconspicua as a variety of possum or igneous rock, or to attempt to breed it in the wild with an elephant. True, one might throw it in with anything imaginable for, say, totemic purposes, but that is a different point entirely. Ironically, the arch-conventionalist Pierre Duhem looked to the emergence of naturally-chunked classification: `The more a theory is perfected, the more we apprehend that the logical order in which it arranges experimental laws is the reflection of an ontological order' (cited Lakatos, 1978, p. 21). Taken together, these converging models from artificial intelligence research and the sociology of scientific knowledge offer a useful springboard to the further examination of semiosis: the ways in which humans recognise, construct and manipulate logics and contexts in the service of signification. From stathisp at gmail.com Fri Jun 22 06:41:46 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jun 2007 16:41:46 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Message-ID: On 22/06/07, Lee Corbin wrote: > > You can be mistaken about a matter of fact or of logic, but > > you can't be mistaken about the way you feel. > > Right. But we all often lament, "darn it, I feel X about Y > even though that isn't rational or I don't want to, and I wish > that I could stop", or even, "I feel X about Y, and know > that it's illogical, but it's too much fun to stop, or I have > inner needs that require me to---I must!---continute to feel X". People might change things such as their desire to smoke if they could, but changing the normal feelings about personal identity might be too much like tampering with the desire to survive, or with the meaning of survival. For example, you could make yourself believe that after your death, you survive if the rest of humanity survives; you can't anticipate this posthumous future in the same way you anticipate waking up tomorrow, but then neither can you anticipate having the experiences of your recently-differentiated copy in the room next door. The reason having someone with my memories waking up in my bed tomorrow is important to me is in large part because I am able to anticipate "becoming" that person as a result. If I can be rid of this feeling, then I would also be rid of my fear of death, apart from altruistic concerns about the effect my death would have on others. -- Stathis Papaioannou From stathisp at gmail.com Fri Jun 22 07:06:51 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 22 Jun 2007 17:06:51 +1000 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? 
In-Reply-To: <021201c7b491$60b23da0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Message-ID: On 22/06/07, Lee Corbin wrote: > Can you imagine an alien intelligence that reached our > solar system that would not place between 8 and maybe > 20 objects in orbit around our sun in a special category > that we might as well call "planets"? They would > definitely see that out to about 2 AU there are four > outstanding real objects (Mercury, Venus, Earth, and > Mars) that objectively existed and were categorically > distinct from other debris orbiting the sun. It's hard > to believe that any typical evolutionarily derived > intelligence that managed to reach our solar system > would be incapable of so distinguishing these objects, > and I say that it is *no* coincidence that they formulate > almost exactly the same categorization that we have. > Why? Because that categorization is objective, and > is *not* merely a result of processing in the minds > of "observers". How could you be sure of that? If they are gas giant dwellers they might just lump the rocky planets in with the asteroids and specks of dust. Look at the trouble we have had classifying Pluto, Eris and Ceres (which, following Amara's links, I discovered was considered a planet between Mars and Jupiter for a number of decades after its discovery). -- Stathis Papaioannou From lcorbin at rawbw.com Fri Jun 22 07:08:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 22 Jun 2007 00:08:21 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> Message-ID: <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Jef writes >> But as I said, the similarity metric says that they're the same person, >> and a wife of one wouldn't tell the difference between the two, and >> so on. In other words, they seem in all ways to be the same person. >> They just hate each other is all. (And that's hardly novel: we often >> wonder if a given individual "hates himself" in some way.) > > You appear here to neglect your own belief and arguments that physical > substrate doesn't matter at all (I completely agree, of course), and > that what matters is function, defined of course by the agent's > physical structure (and by something else of which you appear > consistently unaware.) > > How correct can it be to refer to separate > instances as being the same person on the basis of their almost exact > physical similarity, when their functioning recognizes the separate > existence of the other, so as to hate, compete with or even destroy > the other? (Or even to cooperate.)
Their "functioning" is much broader, I would say, than you are suggesting. It includes all their behavior (which, importantly, is greatly a reflection of their memories). The *only* bit of functioning that is peculiar is that they are fighting, e.g., trying to eliminate each other. Two modified Tit-For-Tat programs are "out to get" each other, yet manifestly they are the same program. Hey, that's just the way they're built. It's what they do. A human retard that was trained to do nothing but box would lash out at any other remotely human figure, including, of course, a duplicate. So it seems to me that this antagonism towards each other is a very tiny, tiny part of their behavior. > There's a very practical point to this philosophizing (and it's not > about personal survival.) I've stated it twice now, even highlighted > it with "---------------", and twice you've deleted it without > comment: > > ------------------------ > >>>... any degree of selfishness will tend to put >>> duplicates at odds with one another as they interact from within >>> increasingly disparate contexts. > Okay---True, any degree of selfishness will tend to put duplicates at odds with one another. So what? What was I supposed to react to? Hmm, well, maybe this: recall that according to my beliefs it is inherantly selfish for an instance of me to kill itself immediately so that its duplicate gets $10M, given that one of them has to die and if the instance protects itself, then it does not get the $10M. That may seem an unsual way to use the word "selfish", but I mean it most literally. Were "I" and my duplicate in such a situation, "this instance" would gladly kill itself because it would (rightly, I claim) anticipate awakening the next morning $10M richer. Now that's *selfish*, no? :-) > If we can get past the polemics we can consider the more interesting > (in a practical sense) issues of systems of competition and > cooperation, which is necessarily between **agents**. Roles? By agency, do you mean roles that different actors play? A human being in a policeman's uniform plays the role of a policeman, and it has occurred to me that maybe this is the kind of thing that you mean by agency. If so, I would retort that the real human being behind the badge is vastly more complicated than the comparatively simple role that it's playing. >> Okay, since you have been so kind to cut and paste it again, I >> will try to answer it as directly as I can. I *don't* see those >> as two separate individuals at all. Neither would the people >> who know them. > > People have no difficulty with biological twins being different > persons, despite very high physical/functional similarity. You would > probably like to say this is because each twin has different memories, > etc., but consider that other people can't see memories, etc., what > they see is separate agency. Actually, I do believe that identical twins are recognized by their *differences* as seen by friends, parents, and so on, although I admit that a decade or so I was startled to learn something about identical twins. I conjecture that if you and I conspired to create a duplicate of you, and then you introduced the duplicate to your family as your "long lost twin brother", they'd soon come to be very suspicious, especially if they've known other identical twins. By Jove, they simply would be UNABLE to tell any difference, and would never get used to that. 
>> But I say that they are simply mistaken: it's as though >> each has been programmed by nature to regard anything >> outside its own skin as "the other" or as "alien". I mean, we >> could have two *totally* identical instances of the Tit-For-Tat >> program playing each other (or rather a minor variation of >> Tit-For-Tat that tried a random defection now and then), >> and they naturally behave as though they are going up against >> "the other", "the alien", the "other player". Yet they are truly >> identical, right down to the last statement of code. > > You're arguing circularly again, assuming your own conclusion. But you are now going to claim that these two identical programs are not the same program? Or not the same..... what? Sure, they're not the same instance OF THE SAME PROGRAM. > This topic gets more interesting when we get past this (temporary?) > impasse and consider the application of artificial agents of arbitrary > physical/functional similarity fully dedicated to acting on behalf of > a single entity -- variously enabled/limited instances of exactly the > same self. Yes. Do go on. You mean, again, as in two entities completely alike in behavior except that each has an agenda that includes only benefit for its own instance? I guess that I just don't see the centrality of this one aspect. Admittedly, the "agenda" of an organism to requisition all benefit to its own instance is pretty common throughout the animal kingdom (including many humans). A possible argument against your position goes like this. Suppose that A and B are totally identical (structurally and so satisfy the similarity criterion, same memories, etc.) but happen *not* to be in each other's vicinity. If they live on different planets, then they cannot come into conflict. At this point, doesn't your "conflict criterion" fail to be applicable? Yes, each is still requisitioning benefit for its own instance, but now their agency seems the same too. Why are they different persons, unless all this time you've merely meant by persons what I've been calling "instances"? >> > Or maybe simpler for you, consider the two duplicates, each with >> > identical intent to prevent the existence of the other. If, as you >> > say, physical/functional similarity determines personal identity, then >> > do you see the paradox entailed in a person trying to destroy himself >> > so he can enjoy being himself? >> >> I admit that there is irony in the situation of a person or program trying >> to destroy instances that are identical to itself, > > Actually: > > Paradoxical if you insist that they **must be** the same person. What paradox? As I mentioned to Stathis (I think), it hardly rises to the "paradoxical". Two identical chess programs can also fight it out, Fritz 9.1 vs. Fritz 9.1. So what's new? > Why should outside observers see two competing twins, no matter how > physically/functionally similar, as the same person? Because they act the same way, have the same memories, think the same way, look the same way, and are indistinguishable except for location? Is it not a fact that people will regard each duplicate as the same person in terms of everything but location (or perhaps recent number of parking tickets)? In all the important ways that matter, people will regard them as the same person. > Should they > treat the offensive software engineer Lee exactly the same as the > defensive chess playing Lee?
We're talking about separate agencies here > and **for all** practical purposes, and **for all** observers > (including these agents themselves), they are separate persons. Well, okay, maybe I'm starting to see your point. No, they will not treat them the same. Indeed, we are treated differently every day by people depending on what role we are happening to play. But that hardly makes us different people during the day, or do you think that it does? > Even to get them to cooperate, which should be the desired outcome, > they must necessarily see themselves first as independent agents, and > then as the same person only to the extent that they are seen as > representing a single (abstract) entity known as Lee. The same argument could be applied to me at different times of the day? Why should the agent Lee-8:30-am prepare a sandwich for the Lee-12:30-pm, even though the former is not hungry? Why do I and my boss and everyone in the world regard Lee-12:30-pm as really the same person as Lee-8:30-am unless it's really so? Our language and the terms it uses and the concepts it refers to have come to *mean* precisely by "they're the same person" all our usual associations to the fact. >> > Or back to the biological organism manifesting Dissociative Identity >> > Disorder. In such a case we can agree that the >> > physical/functional similarity is total since it's only a single >> > organism, but we also agree that any observer (including the >> > observers manifested by that particular organism) will see different >> > persons to the extent that they are perceived to act on behalf of >> > different entities. >> >> Hmm, well, we seem to have a hard disagreement here. Yes, let's >> consider just the case we/I have been discussing: indeed there >> are many people who would hate their duplicates. > > Are you evading here the case of the biological organism manifesting > DID, or are you conflating with the case of the duplicates? Not sure. I'll try to cover both bases. The DID-manifesting organism will have differing memories and differing behavior, and that's what tells people that it's really two people. Notice that this is of a much deeper and more substantial kind than mere different agency, e.g., the way a policeman behaves in uniform and later at night the way he behaves at a restaurant. (Yet no one would claim that he was not the same person.) The duplicates, if made quite recently, are indisputably the same person in the eyes of people who interact with them individually, and who are not clued into the existence of a matter-duplication device. All they'll ever complain about are things like "Hmm, this morning you said the same thing, remember?", or "Hmm, this morning you said you were going to be out of town all day!", etc. It would *never* occur to them that these were separate people. >> So let's suppose >> that A and A' are identical, and so---just as you say---they are >> what you call "different persons" because they are perceived as >> acting on behalf of different entities. Clearly here, they are acting >> on behalf of different *instances* of what was a single person. >> You and I each beg the question in a different way. You beg the >> question by saying that they are clearly different entities, and so >> are different people, and I say that (because of similarity) they >> are clearly the same person (or program). How may we resolve >> this? >> >> Well, as above, I suggest that we consult outside authorities of >> higher reputation. > > What?
Appeal to authority -- on the Extropy list?! :-) >> If we send them into different rooms, can >> someone who knows them well tell them apart? (I say no.) > > Certainly they can be distinguished by someone who knows them well. > One of them goes on and on about how "this shouldn't be happening, > it's all just a deep mistake, how can I be so confused as to attack > myself like this, I just wanted to keep my software job and also get > to play more chess, there's no reason to be upset, I know beyond > logical doubt that my other instance should actually be anticipating > our increased pleasure, I know I'm a very reasonable person." The > other keeps saying "of course this was bound to happen, I see it > clearly now even though I denied it when Jef tried to explain, I'm > depressed and burnt out with writing software and if things don't > change I'm going to do something...drastic." You haven't accounted for the---to me---extremely bizarre and unlikely eventuality that two duplicates of me would act so differently. They'd never attack each other. They both would be very "reasonable", especially to each other. Or did you introduce a brain lesion in one of them or something? (Sorry, I've forgotten.) And anyone who knows them well would *not* think that they were different people, unless he was clued in on the amazing new duplication machinery. At very worst, people would consider me moody, being in one mood at one time and another at another time. Unless the mood swings were so extreme as to constitute a form of DID (I presume), then no notion that there were separate people here would occur to anyone. >> What if we administer the best personality tests that have been >> so far devised? Will they show a difference? (Clearly no.) > > Are you saying that personality traits have some direct bearing on > personal identity? We know that identical twins, separated at birth, > have a very high correlation so you mean to say that to some extent > they should be considered practically the same person? Their correlation is not so high as you'd think. I strongly suggest "No Two Alike" by the author of "The Nurture Assumption", Judith Rich Harris. The title, it turns out, is to be taken very literally. > Or are you > saying that some extremely high correlation would indicate shared > personal identity? Please describe how high this would have to be, > without any circular reference to your conclusion. As I said earlier, even an advanced AI may present a bewildering variety of possible metrics. I will submit this as sufficient (though no doubt not necessary): if the two duplicates manage to completely fool everyone into believing that there is only one of them, and that no one no matter how intimately they know him or her can tell the difference, then that's high enough. They're the same person. There is a movie you may wish to see, "The Prestige", that explores this very scenario. Sorry to give away spoilers, but a woman cannot tell the difference between two male duplicates that she's in love with. Can't ever happen, so far as I know, with identical twins. >> I still don't agree. Are there other examples that can be offered? >> How about some other examples where *agency* is clearly key? >> Perhaps in daily life? > > I did that earlier, showing how over time a person (Aging Alice) > changes and even spawns variants while maintaining the same agency. > You chose to interpret it as a person continually or repeatedly > dying. > > But isn't her agency changing, according to you? She surely acts a lot differently (plays different roles and so on). Would you mind saying again why her agency remains the same? Just because people use the same name for her, and are aware of her history? Surely if the 6-year-old had a good friend, and that friend moved away, it might be entirely possible for them to meet up later in a different place, become friends, and because of a name change (say Alice got married) never realize that it was their old chum from childhood.
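P.S. For the programmers here, the Tit-For-Tat point is easy to make concrete. What follows is only my own five-minute sketch (the 10% random-defection rate and the payoff numbers are the usual textbook assumptions, nothing more): two byte-identical copies of the strategy play an iterated Prisoner's Dilemma, and each instance still faces "the opponent".

    import random

    def tit_for_tat(their_history, defect_rate=0.1):
        # Occasionally defect at random; otherwise echo the other side's last move.
        if random.random() < defect_rate:
            return 'D'
        return their_history[-1] if their_history else 'C'

    # Standard Prisoner's Dilemma payoffs: (my move, their move) -> my points
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(200):
        # Both calls run *identical* code; only the histories handed in differ.
        a = tit_for_tat(b_hist)
        b = tit_for_tat(a_hist)
        a_hist.append(a)
        b_hist.append(b)
        a_score += PAYOFF[(a, b)]
        b_score += PAYOFF[(b, a)]

    print(a_score, b_score)

Nothing in either instance distinguishes "self" from "other" except which history it happens to be handed; the opposition between them is entirely indexical, which is just my point.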
Lee From tyleremerson at gmail.com Fri Jun 22 09:33:23 2007 From: tyleremerson at gmail.com (Tyler Emerson) Date: Fri, 22 Jun 2007 02:33:23 -0700 Subject: [ExI] Singularity Summit 2007 Podcast: Stephen Omohundro Message-ID: <632d2cda0706220233n5f4e5f73t8af7230988aa1344@mail.gmail.com> http://www.singinst.org/blog/2007/06/22/singularity-summit-2007-podcast-stephen-omohundro/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at ramonsky.com Fri Jun 22 09:52:43 2007 From: alex at ramonsky.com (Alex Ramonsky) Date: Fri, 22 Jun 2007 10:52:43 +0100 Subject: [ExI] Happy Solstice! References: Message-ID: <467B9BEB.6020407@ramonsky.com> Happy Solstice Amara : ) ...Something that I have wondered for many years...Does anyone know why the Midsummer Solstice/Midwinter Solstice are called so, when (at least in the UK) they're considered to be the _beginning_ of summer/winter? The Celts treated them as the middle of the seasons...Is this one of those eccentric British things or is there genuine worldwide confusion? Best, AR ********** Amara Graps wrote: >Happy June Solstice [1] to you Northerners and Southerners (hemispheres, >that is)!! Celebration time! > >Midsummer Night [2] >by Zinta Aistars > >One night each year, that longest night >be > From gts_2000 at yahoo.com Fri Jun 22 13:07:30 2007 From: gts_2000 at yahoo.com (gts) Date: Fri, 22 Jun 2007 09:07:30 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <020001c7b48b$0f90f340$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> <020001c7b48b$0f90f340$6501a8c0@homeef7b612677> Message-ID: On Fri, 22 Jun 2007 01:05:53 -0400, Lee Corbin wrote: > What, the scenario that Jef and Stathis and A B and I were > talking about is absurd? It's your interpretation of the scenario that I find absurd. I agree with those who say identity is about agency. > I say it's suicide!! :-) The defendant's suicide defense is absurd, your honor! As evidence of murder The People present Exhibit A: the dead body and Exhibit B: the murder weapon. According to witnesses, the two independent agents were arguing, when suddenly one pulled out this knife and repeatedly stabbed the other in the throat, killing him. The victim's death was involuntary, thus it cannot be ruled a suicide. It was murder, your honor, plain and simple. > Funny Mark Twain dialog. I was hoping Twain's humor would help you see the error of your ways.
:) -gts From lcorbin at rawbw.com Fri Jun 22 14:05:29 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 22 Jun 2007 07:05:29 -0700 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> <7.0.1.0.2.20070622011355.0225f950@satx.rr.com> Message-ID: <021f01c7b4d6$b8ef25b0$6501a8c0@homeef7b612677> Damien quotes from his dissertation of a long time ago: ----- Original Message ----- From: "Damien Broderick" Sent: Thursday, June 21, 2007 11:20 PM Knowledge in Context Barnes proceeds to several implications: delocalisation (to know a goose, it also helps to know a swan); hence, there are no free-floating `atomic' concepts (p. 29); the application of a term is a judgment, as we have noted, since `the tension of a term represents a conventional relationship of sameness between the instances within it', and this can always be revised (pp. 30-1); proper usage is agreed usage, so that a creature might be at one time deemed a moth, at another a butterfly: `Cases such as these are sometimes thought to result from an inadequate knowledge of the "real meanings" of terms themselves; and occasionally the achievement of consensus in these cases is conceived as a "discovery" of the "real meaning". But such consensus merely marks the successful negotiation of an extension of usage'; and equivalence, which is to say that `different Hesse nets are always equivalent' (p. 33), since Here it starts getting good: `"Reality" does not mind how we cluster it; "reality" is simply the massively complex array of unverbalized information which we cluster. This suggests that different nets stand equivalently in relation to "reality" or to the physical environment', and also `as far as the possibility of "rational justification" is concerned' (p. 33).
In short, `alternative classifications are conventions between which neither "reality" nor "pure reason" can discriminate. Accepted systems of classification are institutions which are socially sustained. (p. 33)' In my own view, this strong relativist position is surely inconsistent with an implacable universe warranting ostension. An image of the universe warranting ostension. The universe implacably presents or demands ostensive categories (or perhaps something milder than sheer *categories*, which people may and do insist must have very sharp boundaries, be exactly associated with certain words, and so on, not a view I endorse). Barnes offers in support of his case the instance of Karam animal taxonomy, which places cassowaries (a kind of flightless bird) in the special taxon kobtiy, outside that of flying beasts like other birds and bats (pp. 34-37), and compares that categorisation with the zoological taxonomy used in an advanced industrial Hesse net: How can the pattern of either net distort reality? Rather, reality provides the information incorporated in both nets; it has no preference for the one or the other. (p. 35) Har. So says Barnes. However bracing this might be as a corrective to imperialistic anthropology, it is nonsense if taken literally. DNA sequences, for example, are not `randomly' or `purely culturally' associated with the genomes of each taxon, but contain clear natural-historical markers endorsing the phylogenetic claims of one over the other--that is, the history of their natural selection. Yass. That's right. Those markers are really "out there" and not "in here", and by amazing coincidence "in here" for myriads of intelligent Darwinian devices (e.g. humans). (These deep links might be of no interest or use to humans, of course, and for most of history they have been altogether inaccessible, but they remain coded as the DNA `text' or `recipe': an almost indelible inscription). A nice thrust against that nominalist position (?), I must say. Lee > Animals and plants, whose phenotypes are the expression of > the interaction between environment and coded genotype, are the > `naturally-chunked' perceptual fields, or `natural kinds', which > humans are prone to tag with lexemes (whatever further totemic or > commercial significance they may be given). This perspective--that > human language, prior to the legitimate claims of cultural > relativity, is founded in its capacity for adaptation to an > indefinitely complex interacting universe--gives the lie to Barnes's > easy assertion: `"Reality" does not mind how we cluster it.' Reality > might not mind, but finding the correct clustering certainly matters. > The relativist view has been canvassed by John Dean, who > found within our own botanical science two rival taxonomies for the > plant Gilia inconspicua, and noted that both `are built upon > perceptible, systematizable, stable distinctions between individual > plants. In this sense the natural order sustains both taxonomies; > neither can be said to be erroneous' (Dean, 1979, p. 226; see > especially his taxonomic discussion, pp. 211-28). This view does not > convince me that `reality does not mind how we classify it'; it > simply reminds us that the reality we notate on our low-dimensional > grids is multidimensional. Reality is not, however, utterly or even > very indeterminate: it would be very strange to classify Gilia > inconspicua as a variety of possum or igneous rock, or to attempt to > breed it in the wild with an elephant.
True, one might throw it in > with anything imaginable for, say, totemic purposes, but that is a > different point entirely. Ironically, the arch-conventionalist Pierre > Duhem looked to the emergence of naturally-chunked classification: > `The more a theory is perfected, the more we apprehend that the > logical order in which it arranges experimental laws is the > reflection of an ontological order' (cited Lakatos, 1978, p. 21). > Taken together, these converging models from artificial > intelligence research and the sociology of scientific knowledge offer > a useful springboard to the further examination of semiosis: the ways > in which humans recognise, construct and manipulate logics and > contexts in the service of signification. From lcorbin at rawbw.com Fri Jun 22 14:12:33 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 22 Jun 2007 07:12:33 -0700 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Message-ID: <022201c7b4d7$6d6c9770$6501a8c0@homeef7b612677> Stathis writes > Lee wrote: > >> Can you imagine an alien intelligence that reached our >> solar system that would not place between 8 and maybe >> 20 objects in orbit around our sun in a special category >> that we might as well call "planets"? They would >> definitely see that out to about 2 AU there are four >> outstanding real objects (Mercury, Venus, Earth, and >> Mars) that objectively existed and were categorically >> distinct from other debris orbiting the sun. It's hard >> to believe that any typical evolutionarily derived >> intelligence that managed to reach our solar system >> would be incapable of so distinguishing these objects, >> and I say that it is *no* coincidence that they formulate >> almost exactly the same categorization that we have. >> Why? Because that categorization is objective, and >> is *not* merely a result of processing in the minds >> of "observers". > > How could you be sure of that? Can't be 100% sure, of course, but I'd lay very strong odds. > If they are gas giant dwellers they might just lump the rocky planets > in with the asteroids and specks of dust. Look at the trouble we > have had classifying Pluto, Eris and Ceres (which, following Amara's > links, I discovered was considered a planet between Mars and > Jupiter for a number of decades after its discovery). The aliens of which you speak may have no interest in anything but gas giants, and have no time for other speculations. But supposing that the tendency to be curious about natural phenomena having ostensibly nothing to do with our survival is not confined to our species, but rather is indicative of the most successful and literally out-going species, then they'll have not a few astrophysicists and astronomers among their race. They *will* see the asteroid belt as a type of clumping of matter quite different from Mars and from the Earth, in their respective orbital areas. They'll certainly see the gas giants in a separate category (according to your scenario). But yes, at some point the universe is not quite so insistent, and I suppose that they might have some trouble with Sedna, Pluto, and the rest of the non-planets way out there.
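P.S. A toy computation shows how loudly the universe insists here (my own sketch; the masses are rough round figures in kilograms, recalled from memory, not a vetted dataset): rank the bodies by mass and cut at the single largest gap in log-mass, and even that crude rule recovers the eight planets.

    import math

    # Approximate masses in kg -- illustrative round numbers only.
    bodies = {
        'Jupiter': 1.9e27, 'Saturn': 5.7e26, 'Neptune': 1.0e26, 'Uranus': 8.7e25,
        'Earth': 6.0e24, 'Venus': 4.9e24, 'Mars': 6.4e23, 'Mercury': 3.3e23,
        'Eris': 1.7e22, 'Pluto': 1.3e22, 'Ceres': 9.4e20,
    }

    ranked = sorted(bodies.items(), key=lambda kv: kv[1], reverse=True)
    # Log-ratio between each body and the next smaller one.
    gaps = [math.log10(ranked[i][1] / ranked[i + 1][1])
            for i in range(len(ranked) - 1)]
    cut = gaps.index(max(gaps)) + 1   # split at the widest gap

    print('planets:', [name for name, _ in ranked[:cut]])
    print('debris: ', [name for name, _ in ranked[cut:]])

With these figures the widest gap falls between Mercury and Eris, so even this one-line rule hands back exactly the eight planets; where the gaps shrink -- Pluto, Eris, Sedna and friends -- is precisely where the universe stops insisting.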
Lee From lcorbin at rawbw.com Fri Jun 22 14:24:36 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 22 Jun 2007 07:24:36 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677><5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP><020001c7b48b$0f90f340$6501a8c0@homeef7b612677> Message-ID: <022701c7b4d9$887217f0$6501a8c0@homeef7b612677> Gordon writes >> What, the scenario that Jef and Stathis and A B and I were >> talking about is absurd? > It's your interpretation of the scenario that I find absurd. I agree with > those who say identity is about agency. When two recent duplicates (of five minutes' vintage) discover each other's existence and go at it with all the available weapons at hand, you want to call it sheer murder (tho perhaps in self-defence), and will not sanction even the musing that it's a form of suicide. I grant that the laws never evolved in the context of close duplicates, or what I would call multiple instances of the same person. But perhaps they will in the far future, if/when some weird (delusional?) program attempts to delete its exact copy and that copy (for some strange reason) resists, and a Friendly AI overseeing everything gets upset. > >> I say it's suicide!! :-) > > The defendant's suicide defense is absurd, your honor! As evidence of > murder The People present Exhibit A: the dead body and Exhibit B: the > murder weapon. According to witnesses, the two independent agents were > arguing, when suddenly one pulled out this knife and repeatedly stabbed > the other in the throat, killing him. The victim's death was involuntary, > thus it cannot be ruled a suicide. It was murder, your Honor, plain and > simple. Well, that would be the 20th century take. But come now. Let's suppose that you were able to purchase the Acme Uploading Kit from a Google-backed software development company. Sure enough, even though it's on your PC now, no one who talks to it can tell the difference between it and you. That is, no matter how closely your loved ones, friends, and relatives probe, it remembers everything about the old times as well as you, has the same sense of humor, and so on. Now, if you delete it, is that really murder? You can claim that the machine and the software belonged to you, were your property, and you can even exhibit the receipts to prove it. So even a primitive early 21st century court of ancient evolved law would probably let you off. (Of course, "suicide" seems an inappropriate term too, irony and fun aside. But deleting an instance of yourself really is something new under the sun.) So now you'll want to defend a variation of the "agency" point of view concerning personal identity? So how do you handle the case I presented to Jef: a policeman on duty clearly constitutes a different role (or agency?) than he does that evening when he's out with friends and family (even if he's still packing). Has the agency changed? Is he still the same person? (Just trying to get at what you might mean.)
Lee From lcorbin at rawbw.com Fri Jun 22 14:38:12 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 22 Jun 2007 07:38:12 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Message-ID: <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> Stathis writes > Lee wrote: > >> > You can be mistaken about a matter of fact or of logic, but >> > you can't be mistaken about the way you feel. >> >> Right. But we all often lament, "darn it, I feel X about Y >> even though that isn't rational or I don't want to, and I wish >> that I could stop", or even, "I feel X about Y, and know >> that it's illogical, but it's too much fun to stop, or I have >> inner needs that require me to---I must!---continue to feel X". > People might change things such as their desire to smoke if they > could, but changing the normal feelings about personal identity might > be too much like tampering with the desire to survive, or with the > meaning of survival. Yes. I even go so far as to claim that they could deviate from a factually correct view of survival to something bogus. > For example, you could make yourself believe that > after your death, you survive if the rest of humanity survives; Exactly. Or they might believe that they'll become a particular sun flower, or a particular river that they're fond of. And they'd be just plain wrong, if not nuts. > you can't anticipate this posthumous future in the same way you anticipate > waking up tomorrow, but then neither can you anticipate having the > experiences of your recently-differentiated copy in the room next > door. I actually think that you are wrong on both counts. Some wacko might indeed anticipate being the sun flower and look forward to all the visits of his friends the bees. But if we examine the cranial capacity of the sun flower, and measure its ability to see and know about bees, then we must conclude that he is simply wrong. Whereas if one anticipates having the experiences of a close duplicate, science can hardly object. After all, the duplicate has the same equipment your instance does, and differs in no significant way whatsoever. Now whether one should go so far as to *anticipate* what will happen to him is, yes, so far as I have been able to see, rather up to the individual. This is because I have never been able to avoid paradoxes when trying to extend notions like "anticipation", "dread", "relish", "look forward to", and so on into the brave new world. > The reason having someone with my memories waking up in my bed > tomorrow is important to me is in large part because I am able to > anticipate "becoming" that person as a result. I'm glad that you can do this. There are all sorts of reasons, especially as extended by those who love you and those who like to administer psychological and medical tests, to consider that you have survived being replaced by an exact duplicate. > If I can be rid of this feeling [of not surviving, of actually dying][?] > then I would also be rid of my fear of death, apart from altruistic > concerns about the effect my death would have on others. Okay, say you fear death. But you said that you are able to anticipate "becoming" that person with your memories who wakes up in your bed tomorrow.
(In my view, you already *are* anybody with your memories, of course, and no "becoming" is necessary.) If then, you can anticipate awakening tomorrow as your duplicate, then this surely does not translate into a general feeling that if you are vaporized by an explosion then you survive (sans duplicate). Or perhaps I missed your meaning. Lee From spike66 at comcast.net Fri Jun 22 14:56:52 2007 From: spike66 at comcast.net (spike) Date: Fri, 22 Jun 2007 07:56:52 -0700 Subject: [ExI] Happy Solstice! In-Reply-To: <467B9BEB.6020407@ramonsky.com> Message-ID: <200706221509.l5MF9J4c024157@andromeda.ziaspace.com> Growing seasons would lag behind the solar seasons, which is the most critical schedule to most societies. Those smart Celts, being astronomy-minded, would perhaps be more likely to ignore the air temperature and note the celestial cues.
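The size of that lag even falls out of a back-of-envelope model (a sketch only; the 60-day thermal time constant is an assumed round number, not a measurement): treat surface temperature as a first-order system driven by the annual insolation sinusoid, and the response lags the forcing by arctan(omega*tau).

    import math

    omega = 2 * math.pi / 365.0   # annual forcing frequency, radians per day
    tau = 60.0                    # assumed surface thermal time constant, days

    # For dT/dt = (S(t) - T)/tau with sinusoidal insolation S(t), the
    # steady-state temperature is a sinusoid lagging S by arctan(omega*tau).
    lag_days = math.atan(omega * tau) / omega
    print(f'warmth peaks roughly {lag_days:.0f} days after the solstice')

So the sun turns around at the solstice, but the warm weather shows up a month or more later; the Celts' "midsummer" and the farmer's "beginning of summer" can both be right.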
spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Alex Ramonsky > Sent: Friday, June 22, 2007 2:53 AM > To: ExI chat list > Subject: Re: [ExI] Happy Solstice! > > Happy Solstice Amara : ) > ...Something that I have wondered for many years...Does anyone know why > the Midsummer Solstice/Midwinter Solstice are called so, when (at least > in the UK) they're considered to be the _beginning_ of summer/winter? > The Celts treated them as the middle of the seasons...Is this one of > those eccentric British things or is there genuine worldwide confusion? > Best, > AR > ********** > > Amara Graps wrote: > > >Happy June Solstice [1] to you Northerners and Southerners (hemispheres, > >that is)!! Celebration time! > > > >Midsummer Night [2] > >by Zinta Aistars > > > >One night each year, that longest night > >be From jef at jefallbright.net Fri Jun 22 15:11:40 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 08:11:40 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Message-ID: On 6/21/07, Lee Corbin wrote: > Stathis writes > > > They obviously view each other as separate people if they are at each > > other's throats. > > Not necessarily, as you yourself seem to demonstrate here: > > > Conceivably they may be acting in this way because they are actually > > mistaken about their twin, believing them to be a completely different > > person who has fraudulently taken on their appearance. That example served to show, by existence of an obvious exception, that physical/functional similarity cannot be used to determine personal identity as a general rule. As you must know, but choose to ignore, it takes only one exception to invalidate a rule. It served its purpose. It is misleading, logically false, and a waste of time for you to now point out possible exceptions to the exception. - Jef From amara at amara.com Fri Jun 22 15:14:23 2007 From: amara at amara.com (Amara Graps) Date: Fri, 22 Jun 2007 17:14:23 +0200 Subject: [ExI] Dawn launch pics (spacecraft mounted!) Message-ID: More pics are available, now showing the third stage of the rocket being built. These pics are by far the most interesting (IMO) -- Dawn is shown being mounted on the rocket!! http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From jef at jefallbright.net Fri Jun 22 15:20:03 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 08:20:03 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Message-ID: On 6/21/07, Stathis Papaioannou wrote: > On 22/06/07, Lee Corbin wrote: > > > > You can be mistaken about a matter of fact or of logic, but > > > you can't be mistaken about the way you feel. > > > > Right. But we all often lament, "darn it, I feel X about Y > > even though that isn't rational or I don't want to, and I wish > > that I could stop", or even, "I feel X about Y, and know > > that it's illogical, but it's too much fun to stop, or I have > > inner needs that require me to---I must!---continue to feel X". > > People might change things such as their desire to smoke if they > could, but changing the normal feelings about personal identity might > be too much like tampering with the desire to survive, or with the > meaning of survival. For example, you could make yourself believe that > after your death, you survive if the rest of humanity survives; you > can't anticipate this posthumous future in the same way you anticipate > waking up tomorrow, but then neither can you anticipate having the > experiences of your recently-differentiated copy in the room next > door. The reason having someone with my memories waking up in my bed > tomorrow is important to me is in large part because I am able to > anticipate "becoming" that person as a result. If I can be rid of this > feeling, then I would also be rid of my fear of death, apart from > altruistic concerns about the effect my death would have on others. Stathis has a point, but a stronger point is that social interaction, to make sense to us, is modelable in terms of agents interacting through overlapping categories of association, trade, conflict, cooperation, defection, etc. There's nothing wrong with your special case of duplicates being exactly the same person -- as a special case -- but such thinking rapidly becomes incoherent within a practical social context. - Jef From jef at jefallbright.net Fri Jun 22 15:44:55 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 08:44:55 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <022701c7b4d9$887217f0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> <020001c7b48b$0f90f340$6501a8c0@homeef7b612677> <022701c7b4d9$887217f0$6501a8c0@homeef7b612677> Message-ID: On 6/22/07, Lee Corbin wrote: > So now you'll want to defend a variation of the "agency" point of > view concerning personal identity?
So how do you handle the > case I presented to Jef: a policeman on duty clearly constitutes > a different role (or agency?) than he does that evening when he's > out with friends and family (even if he's still packing). We know the person (the abstract entity) only in terms of our model of the person. To the extent that we see only the policeman (one role of the agent) then our model is dominated by policeman characteristics, but of course we're aware that he has a personal life and include that in our model as well. > Has the > agency changed? Is he still the same person? (Just trying to > get at what you might mean.) The agent, acting within the world, is changing all the time. The agent affects its environment, the environment affects the agent. In the case of a human agent, it can vary in physical condition, mood, emotional state, suffer an injury, lose a limb, lose memories, form false memories, update its values, change its behavior, and so on, and any observer (including this particular biological organism) will continue to associate this agent with a certain entity, on whose behalf the agent is seen to act. - Jef From gts_2000 at yahoo.com Fri Jun 22 15:12:54 2007 From: gts_2000 at yahoo.com (gts) Date: Fri, 22 Jun 2007 11:12:54 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> Message-ID: Lee writes: > When two recent duplicates (of five minutes' vintage) discover > each other's existence and go at it with all the available weapons > at hand, you want to call it sheer murder (tho perhaps in > self-defence), and will not sanction even the musing that it's a form > of suicide. Duplicates have individual rights, yes? They are not pets or personal property. As far as I'm concerned (and I think any judge of the distant future would have to agree) a crime occurs any time a copy violates the rights of another copy. Such possible crimes would include violations of the right to life, i.e., murder. Did the duplicate die against his will? If he died against his will then we cannot call it suicide. We must call it murder or something else. > So now you'll want to defend a variation of the "agency" point of > view concerning personal identity? Something like that (but actually I'm kicking myself for re-entering this old debate. :) > So how do you handle the case I presented to Jef... I'll try to catch up... -gts From amara at amara.com Fri Jun 22 16:33:12 2007 From: amara at amara.com (Amara Graps) Date: Fri, 22 Jun 2007 18:33:12 +0200 Subject: [ExI] Happy Solstice! Message-ID: Alex Ramonsky: >..Something that I have wondered for many years...Does anyone know why >the Midsummer Solstice/Midwinter Solstice are called so, when (at least >in the UK) they're considered to be the _beginning_ of summer/winter? > The Celts treated them as the middle of the seasons...Is this one of >those eccentric British things or is there genuine worldwide confusion? It's more than just the Brits who call it that, and probably worldwide. I don't know, but I guess it has more to do with climate and pagan traditions. They are really big in the northern European countries.
I remember participating in one in Sweden in 1985 that was fantastic (dancing around the pole and all of that). Someday I'll have to see what the Janis celebration is like in Latvia. This describes something about the midsummer traditions: http://en.wikipedia.org/wiki/Midsummer And here's one explanation from an astronomer (who has no experience of European traditions) http://www.badastronomy.com/bad/misc/badseasons.html Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From jef at jefallbright.net Fri Jun 22 16:34:26 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 09:34:26 -0700 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? In-Reply-To: <021201c7b491$60b23da0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Message-ID: On 6/21/07, Lee Corbin wrote: > Jef writes > > My point involves the understanding that categories don't "exist" > > ontologically, they are always only a result of processing in the > > minds of the observer(s). > > AH HA! A genuine old-fashioned fundamental hard-core > knock-down drag out philosophical dispute! Oh, but > it's been a long time! This should be great. > > You are dead wrong. > > Categories are mainly *objective* features of our universe. Lee, I have no interest in arguing "pure philosophy" with you, and there's already an immense literature of semiotics. I "took the bait" when I saw you pontificating to Thomas about your ideas of personal identity based on physical/functional similarity, and I felt motivated to offer to him and any other seekers lurking on this list the understanding that while correct, it is impractically narrow in its application, and that a view of personal identity based on perceived agency is more coherent and extensible. On your personal web page you say "Interests: Cryonics, Mathematics, History, Chess, Philosophy, Polemics" Polemics n. (used with a sing. or pl. verb) 1. The art or practice of argumentation or controversy. Arguing with you, Lee, has always perplexed me in a way that I've experienced only with you and my ex-wife. I dealt with her because of our shared interest in the kids, I find myself dealing with you because of a shared interest in the community and topics of the extropy list. In both cases, perplexing in the way you break the rules of logical discourse while pleading ignorance or injury. I've received enough offlist support to know that my contribution wasn't entirely in vain. Time for me to wise up now and focus on the more productive.
- Jef From gts_2000 at yahoo.com Fri Jun 22 16:08:57 2007 From: gts_2000 at yahoo.com (gts) Date: Fri, 22 Jun 2007 12:08:57 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D326A@w2k3exch.UNICOM-INC.CORP> <020001c7b48b$0f90f340$6501a8c0@homeef7b612677> <022701c7b4d9$887217f0$6501a8c0@homeef7b612677> Message-ID: On 6/22/07, Lee Corbin wrote: > So how do you handle the case I presented to Jef: a policeman on duty > clearly constitutes a different role (or agency?) than he does that > evening when he's out with friends and family (even if he's still packing). I don't see "role" as a synonym for "agency". As in your example, agents assume various roles. My emphasis is on the will. Separate wills = separate agents. -gts From max at maxmore.com Fri Jun 22 15:43:13 2007 From: max at maxmore.com (Max More) Date: Fri, 22 Jun 2007 10:43:13 -0500 Subject: [ExI] Climate Bet, Forecasts by Scientists versus Scientific Forecasts Message-ID: <200706221543.l5MFhEYT025482@ms-smtp-01.texas.rr.com> Scott Armstrong and Kesten Green aim to improve the use of scientific forecasting methods in the public policy area. They are using global warming as their first example. See http://theclimatebet.com Scott tells me that the following paper will be presented next Wednesday: Global Warming: Forecasts by Scientists versus Scientific Forecasts http://www.forecastingprinciples.com/Public_Policy/WarmAudit31.pdf Max From russell.wallace at gmail.com Fri Jun 22 18:31:42 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 22 Jun 2007 19:31:42 +0100 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> On 6/22/07, Jef Allbright wrote: > > Arguing with you, Lee, has always perplexed me in a way that I've > experienced only with you and my ex-wife. I dealt with her because of > our shared interest in the kids, I find myself dealing with you > because of a shared interest in the community and topics of the > extropy list. In both cases, perplexing in the way you break the > rules of logical discourse while pleading ignorance or injury. > > I've received enough offlist support to know that my contribution > wasn't entirely in vain. Just to balance it onlist a little, then, I will add my own plea of ignorance: I am entirely at a loss to see how Lee has been doing anything resembling breaking the rules of logical discourse. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Fri Jun 22 18:37:48 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 11:37:48 -0700 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective?
In-Reply-To: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> Message-ID: On 6/22/07, Russell Wallace wrote: > On 6/22/07, Jef Allbright wrote: > > Arguing with you, Lee, has always perplexed me in a way that I've > > experienced only with you and my ex-wife. I dealt with her because of > > our shared interest in the kids, I find myself dealing with you > > because of a shared interest in the community and topics of the > > extropy list. In both cases, perplexing in the way you break the > > rules of logical discourse while pleading ignorance or injury. > > > > I've received enough offlist support to know that my contribution > > wasn't entirely in vain. > > Just to balance it onlist a little, then, I will add my own plea of > ignorance: I am entirely at a loss to see how Lee has been doing anything > resembling breaking the rules of logical discourse. Multiple straw-men, circular reasoning, affirming the consequent, offering exceptions to the exception that invalidates the rule... I've been (uncomfortably) pointing out several of them. - Jef From russell.wallace at gmail.com Fri Jun 22 19:04:57 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 22 Jun 2007 20:04:57 +0100 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706221204g37c3885dqb91939a64b1c6e33@mail.gmail.com> On 6/22/07, Lee Corbin wrote: > > Exactly. Or they might believe that they'll become a particular > sun flower, or a particular river that they're fond of. And they'd > be just plain wrong, if not nuts. > Nor is this entirely a straw man. For example, in Philip Pullman's classic 'His Dark Materials' trilogy, a dying character (with clear authorial approval) anticipates continued existence as "part of everything" on the grounds that his atoms will survive; as one reviewer puts it, "This is the very height of narrative dishonesty... Atoms are just atoms, and if that's how we end, let's not prettify it with misty-eyed descriptions." We can see that the reviewer is correct: while there may be room for philosophical disagreement about whether one is justified in anticipating continued existence as a duplicate, there is no coherent philosophy in which one is justified in anticipating continued existence as something which entirely lacks the capacity for thought. -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Fri Jun 22 19:12:59 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 22 Jun 2007 20:12:59 +0100 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? 
In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021201c7b491$60b23da0$6501a8c0@homeef7b612677> <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> Message-ID: <8d71341e0706221212p31443760o1aaceea5c045c0db@mail.gmail.com> On 6/22/07, Jef Allbright wrote: > > Multiple straw-men, circular reasoning, affirming the consequent, > offering exceptions to the exception that invalidates the rule... > The only one of the above for which I could see any justification was the last, and it seemed to me that the item in question (similarity as necessary and sufficient criterion for identity) is a heuristic that applies in many contexts rather than an absolute rule for all contexts. (Admittedly I don't know whether Lee agrees with me about that - if it turns out that he regards it as an absolute rule for all contexts, then I would disagree with him on that.) Otherwise I think the two of you have just been talking past each other a lot of the time, which with the best will in the world does sometimes happen when language is used sufficiently far outside its typical limits to void the warranty - such use being necessary for this sort of discussion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Fri Jun 22 19:17:34 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 12:17:34 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <8d71341e0706221204g37c3885dqb91939a64b1c6e33@mail.gmail.com> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> <8d71341e0706221204g37c3885dqb91939a64b1c6e33@mail.gmail.com> Message-ID: On 6/22/07, Russell Wallace wrote: > On 6/22/07, Lee Corbin wrote: > > Exactly. Or they might believe that they'll become a particular > > sun flower, or a particular river that they're fond of. And they'd > > be just plain wrong, if not nuts. > > > > Nor is this entirely a straw man. For example, in Philip Pullman's classic > 'His Dark Materials' trilogy, a dying character (with clear authorial > approval) anticipates continued existence as "part of everything" on the > grounds that his atoms will survive; as one reviewer puts it, "This is the > very height of narrative dishonesty... Atoms are just atoms, and if that's > how we end, let's not prettify it with misty-eyed descriptions." We can see > that the reviewer is correct: while there may be room for philosophical > disagreement about whether one is justified in anticipating continued > existence as a duplicate, there is no coherent philosophy in which one is > justified in anticipating continued existence as something which entirely > lacks the capacity for thought. > So where in this thread do you think such was ever asserted or implied? Saying that categories have no ontological status in no way implies validity of categorical relativism. And on this list I would be among the least suspected of promoting such a view, given how much I go on about the importance of the process of increasing awareness leading to an increasingly coherent model of perceived reality. 
- Jef From russell.wallace at gmail.com Fri Jun 22 19:36:59 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 22 Jun 2007 20:36:59 +0100 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> <8d71341e0706221204g37c3885dqb91939a64b1c6e33@mail.gmail.com> Message-ID: <8d71341e0706221236h11c4922ci16733bcd31483295@mail.gmail.com> On 6/22/07, Jef Allbright wrote: > > So where in this thread do you think such was ever asserted or > implied? Saying that categories have no ontological status in no way > implies validity of categorical relativism. And on this list I would > be among the least suspected of promoting such a view, given how much > I go on about the importance of the process of increasing awareness > leading to an increasingly coherent model of perceived reality. I didn't say you promoted any such view. Jef, I have been criticized far more often for being too blunt about speaking my mind than the reverse; if I meant to criticize you for holding a particular view, I would do it directly rather than by allegory of a third party. The reason I wrote that post was because "man anticipates continued life as a sunflower" looked like a pure straw man, until I realized it wasn't: there are people who do commit that fallacy, and not just inmates of lunatic asylums either; so I demonstrated this by citing it coming from a highly respected author. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Fri Jun 22 18:58:00 2007 From: gts_2000 at yahoo.com (gts) Date: Fri, 22 Jun 2007 14:58:00 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On Fri, 22 Jun 2007 03:08:21 -0400, Lee Corbin wrote: > .... recall that according to my > beliefs it is inherantly selfish for an instance of me to kill itself > immediately so that its duplicate gets $10M, given that one of > them has to die and if the instance protects itself, then it does > not get the $10M. That may seem an unsual way to use the > word "selfish", but I mean it most literally. Were "I" and my > duplicate in such a situation, "this instance" would gladly kill > itself because it would (rightly, I claim) anticipate awakening > the next morning $10M richer. I'm pretty sure that if you kill "this instance" of yourself today, you will never again have to worry about money, because you will never wake up. The best you can hope is that your now wealthy duplicate will remember you fondly while you're pushing up daisies. I'll write your epitaph: Here lies the body of Lee Corbin, Who thought he could die and return again. But his dupe was not him, Such was only his whim, For Lee now it's as though he had not been. 
:) -gts From austriaaugust at yahoo.com Fri Jun 22 19:17:27 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 22 Jun 2007 12:17:27 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> Message-ID: <838541.44619.qm@web37414.mail.mud.yahoo.com> Boy, things have been tense around here lately. We should be entitled to a little fun once in a while, right? I thought it would be fun to make a list of our favorite semi-transhumanist movies. This written medium can sometimes be somewhat dry, and difficult to express and share positive emotions with each other. It may sound cheesy, but perhaps by sharing our favorite movies, we could more easily recognize some of the more fundamental feelings and aspirations between us. [Maybe we could also suggest favorite music pieces, but I'll let that begin on someone else's initiative.] For my contribution, I recommend: * Original Director's Cut of "Bladerunner". You must see the original Director's Cut or you haven't seen the movie... sorry :-) Sure, it's a dark-future themed movie, and it is slightly cheesy in a few spots, but it does have some truly moving and profound moments, in my opinion. I fully recommend it, overall. Sincerely, Jeffrey Herrlich ____________________________________________________________________________________ Looking for a deal? Find great prices on flights and hotels with Yahoo! FareChase. http://farechase.yahoo.com/ From jef at jefallbright.net Fri Jun 22 21:20:53 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 14:20:53 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On 6/22/07, gts wrote: > On Fri, 22 Jun 2007 03:08:21 -0400, Lee Corbin wrote: > > > .... recall that according to my > > beliefs it is inherantly selfish for an instance of me to kill itself > > immediately so that its duplicate gets $10M, given that one of > > them has to die and if the instance protects itself, then it does > > not get the $10M. That may seem an unsual way to use the > > word "selfish", but I mean it most literally. Were "I" and my > > duplicate in such a situation, "this instance" would gladly kill > > itself because it would (rightly, I claim) anticipate awakening > > the next morning $10M richer. > > I'm pretty sure that if you kill "this instance" of yourself today, you > will never again have to worry about money, because you will never wake up. Damn. Now I feel compelled to rebut this claim else there might be more confusion as people assume that I support the competitor of my competitor. Lee is absolutely correct in his assertion above, but only in the very narrow case that both of these agents fully represent (act on behalf of) a single abstract entity known as Lee. At post duplication t = 0 (plus some uncertain duration) this is necessarily the case. Lee's view becomes progressively less coherent as the agents' contexts diverge with time and independent circumstance. Gordon is absolutely correct, but only to the extent that the two agents represent separate entities. 
Some partial congruence of agency may be maintained, for example, if circumstances were to include a perceived threat to an entity valued in common such as the abstract Lee entity, or preferably, by some sort of cooperative agreement or contract. My point is that while Lee and Gordon are both correct in their narrow opposing all-or-nothing interpretations, they are both wrong in missing the more encompassing definition of personal identity as **the extent** to which an agent represents a particular abstract entity, a construct in the mind of any observer(s), including the mind of the agent itself. Agency-based personal identity naturally accommodates our present everyday view of a constant relationship between the agent and the abstract entity which it represents. Even though both of these change with time, the relationship -- the personal identity -- is constant. Agency-based personal identity is more extensible because it accommodates the idea of duplicate persons with no paradox and no hanging question of what constitutes sufficient physical/functional similarity. And agency-based personal identity accommodates future scenarios of variously enabled/limited variants of oneself performing tasks on behalf of a common entity and viewed as precisely *that* entity for social/moral/judicial purposes by itself (itselves) and others. Perhaps counter-intuitively, agency-based personal identity shows us that agents more specifically differentiated in their function will maintain the entity-agent relationship more reliably due to less potential for conflict based on similar values expressed within disparate contexts. - Jef From jef at jefallbright.net Fri Jun 22 21:37:42 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 14:37:42 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: On 6/22/07, A B wrote: > For my contribution, I recommend: > > * Original Director's Cut of "Bladerunner". As a somewhat transhuman themed movie, I enjoyed Solaris. 2001 is probably my favorite, but that was a long long time ago. A few enjoyable movies significant to our time include Syriana, Babel, Children of Men, Crash for dealing with the increasing importance of an increasingly rational morality despite our strong evolved biases. I watched Fear and Loathing in Las Vegas recently, and thought it was terrible. Sorry Stathis. ;-) - Jef From nvitamore at austin.rr.com Fri Jun 22 21:09:50 2007 From: nvitamore at austin.rr.com (nvitamore at austin.rr.com) Date: Fri, 22 Jun 2007 17:09:50 -0400 Subject: [ExI] Favorite ~H+ Movies Message-ID: <380-22007652221950577@M2W022.mail2web.com> From: A B austriaaugust at yahoo.com >For my contribution, I recommend: >* Original Director's Cut of "Bladerunner". Good choice. One of my favorite films is terribly sad but its reach is transhuamist "Awakenings" (Dir. Miller, 1990) synopsis: "A new doctor finds himself with a ward full of comatose patients. He is disturbed by them and the fact that they have been comatose for decades with no hope of any cure. When he finds a possible chemical cure he gets permission to try it on one of them. When the first patient awakes, he is now an adult having gone into a coma in his early teens. The film then delights in the new awareness of the patients and then on the reactions of their relatives to the changes in the newly awakened." 
Natasha -------------------------------------------------------------------- mail2web.com - Microsoft? Exchange solutions from a leading provider - http://link.mail2web.com/Business/Exchange From neptune at MIT.EDU Fri Jun 22 22:11:33 2007 From: neptune at MIT.EDU (Bo Morgan) Date: Fri, 22 Jun 2007 18:11:33 -0400 (EDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> References: <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: I am really interested in this, but I can't think of good ones. One *not* to see (which will save you as much time as I wasted watching it) is "Magdelina's Brain". I was so excited about it too because I wanted it to be so good. Bo On Fri, 22 Jun 2007, A B wrote: ) Boy, things have been tense around here lately. We ) should be entitled to a little fun once in a while, ) right? I thought it would be fun to make a list of our ) favorite semi-transhumanist movies. This written ) medium can sometimes be somewhat dry, and difficult to ) express and share positive emotions with each other. ) It may sound cheesy, but perhaps by sharing our ) favorite movies, we could more easily recognize some ) of the more fundamental feelings and aspirations ) between us. [Maybe we could also suggest favorite ) music pieces, but I'll let that begin on someone ) else's initiative.] ) ) For my contribution, I recommend: ) ) * Original Director's Cut of "Bladerunner". ) ) You must see the original Director's Cut or you ) haven't seen the movie... sorry :-) Sure, it's a ) dark-future themed movie, and it is slightly cheesy in ) a few spots, but it does have some truly moving and ) profound moments, in my opinion. I fully recommend it, ) overall. ) ) Sincerely, ) ) Jeffrey Herrlich ) ) ) ) ____________________________________________________________________________________ ) Looking for a deal? Find great prices on flights and hotels with Yahoo! FareChase. ) http://farechase.yahoo.com/ ) _______________________________________________ ) extropy-chat mailing list ) extropy-chat at lists.extropy.org ) http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat ) From jef at jefallbright.net Fri Jun 22 21:58:42 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 22 Jun 2007 14:58:42 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: I forgot to mention I watched all three of the Cube movies a few months ago and enjoyed them, like solving a weird little throw-away puzzle. - Jef From joseph at josephbloch.com Fri Jun 22 22:33:59 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Fri, 22 Jun 2007 18:33:59 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> There really are so few movies that I would qualify as transhumanist in nature. Books and short stories aplenty, but movies? Ordinary futuristic sci-fi doesn't cut it, for me at least, for the same reason that I don't equate "ordinary" futurism with Transhumanism. I would even say that Bladerunner (much as I enjoy it, and agree about the value of the Director's Cut) doesn't make it for me, since it doesn't deal with any sort of change in humanity per se, but only humanity's exterior tools. 
(Although I suppose it could be argued that the Replicants themselves are the real post-humans in that world, even if they have their own limitations.)

I would probably nominate "Charly" (the 1968 adaptation of the short story "Flowers for Algernon"). The theme of intelligence augmentation tips it into the >H category for me, and it's a damn poignant portrayal.

I actually enjoyed "Johnny Mnemonic" (1995), even though it didn't fare so well with some critics. Ditto with "Gattaca" (1997).

Ones that I might consider borderline are the "X-Men" movies (2000, 2003, 2006). I say borderline because I usually see >H as something deliberate, and (with the exception of Magneto's device in the first film) the mutant powers are portrayed as the product of natural evolution. Sort of the ultimate punctuation of evolutionary equilibrium.

Alas, I still pine for a really good, deliberately >H film. I'd do handsprings for one that was actually pro-transhumanist in its portrayal, but I'm afraid supermen make too good a villain for that to happen any time soon.

Joseph http://www.josephbloch.com

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of A B
> Sent: Friday, June 22, 2007 3:17 PM
> To: ExI chat list
> Subject: [ExI] Favorite ~H+ Movies
>
> Boy, things have been tense around here lately. We
> should be entitled to a little fun once in a while,
> right? [...]

From gts_2000 at yahoo.com Fri Jun 22 22:23:52 2007 From: gts_2000 at yahoo.com (gts) Date: Fri, 22 Jun 2007 18:23:52 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID:

On Fri, 22 Jun 2007 17:20:53 -0400, Jef Allbright wrote:

> Damn. Now I feel compelled to rebut this claim else there might be
> more confusion as people assume that I support the competitor of my
> competitor.

Apparently then you think we're all sitting here at our computers wondering and worrying about what Jef might think about what Gordon might think about what Lee might think. But I rather doubt that, Jef. It's not all about you.

In any case my view on this silly subject is roughly similar to yours, but I don't endorse your view any more than I endorse Lee's. As I see it, will is fundamental. I take will as primitive in a Nietzschean/Schopenhauerish sort of way.
When I speak of 'agency' I refer to that faculty by which we act as the executors of our wills. Where there are two wills, there are two agents and two identities.

The wills of duplicates begin to diverge at the first moment after duplication. As above, at that moment there exist two wills, two agents and two identities.

One day Lee and his duplicate will sit down for breakfast together. Lee will will to have corn flakes; his duplicate will will to have wheaties. At that moment Lee and his dupe will realize they are not the same person after all. In fact they never were the same person; it simply took a while for their wills to diverge sufficiently to make the truth apparent.

-gts

PS I thought we had an agreement.

From thespike at satx.rr.com Fri Jun 22 23:41:50 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jun 2007 18:41:50 -0500 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> Message-ID: <7.0.1.0.2.20070622183555.021dc268@satx.rr.com>

At 06:33 PM 6/22/2007 -0400, Joseph Bloch wrote:

>I would probably nominate "Charly" (the 1968 adaptation of the short story
>"Flowers for Algernon"). The theme of intelligence augmentation tips it into
>the >H category for me, and it's a damn poignant portrayal.

Having been tremendously moved by the novella, I disliked Cliff Robertson's smug performance and the movie's frequently glib script, but the Matthew Modine version for TV (2000), under its real name, was very much better.

Damien Broderick

From nvitamore at austin.rr.com Fri Jun 22 23:13:56 2007 From: nvitamore at austin.rr.com (nvitamore at austin.rr.com) Date: Fri, 22 Jun 2007 19:13:56 -0400 Subject: [ExI] ART: Artificial Life Zoo - Robotarium X Message-ID: <380-220076522231356849@M2W024.mail2web.com>

My good friend Leonel Moura has a new exhibition: http://www.leonelmoura.com/robotarium.html

Natasha

Natasha vita-More http://www.natasha.cc

--------------------------------------------------------------------
mail2web.com - What can On Demand Business Solutions do for you? http://link.mail2web.com/Business/SharePoint

From femmechakra at yahoo.ca Sat Jun 23 02:19:18 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Fri, 22 Jun 2007 22:19:18 -0400 (EDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <380-22007652221950577@M2W022.mail2web.com> Message-ID: <968884.89451.qm@web30401.mail.mud.yahoo.com>

I still have that movie on video:) I agree, I think that's a great way to interpret the use of new drugs to help regulate some of the ongoing situations that all of humanity has to deal with, and why research is so important. Robin Williams is fantastic! It's that caring attitude, that drive to help people, that I believe makes the movie so powerful. Thanks Natasha, I wouldn't have recalled that if you hadn't mentioned it.

Anna

--- "nvitamore at austin.rr.com" wrote:
> > From: A B austriaaugust at yahoo.com
> > >For my contribution, I recommend:
> > >* Original Director's Cut of "Bladerunner".
> > Good choice. One of my favorite films is terribly
> sad, but its reach is
> transhumanist: "Awakenings" (Dir. Penny Marshall, 1990)
> > synopsis: "A new doctor finds himself with a ward
> full of comatose
> patients. He is disturbed by them and the fact that
> they have been comatose
> for decades with no hope of any cure.
When he finds > a possible chemical > cure he gets permission to try it on one of them. > When the first patient > awakes, he is now an adult having gone into a coma > in his early teens. The > film then delights in the new awareness of the > patients and then on the > reactions of their relatives to the changes in the > newly awakened." > > Natasha > > -------------------------------------------------------------------- > mail2web.com - Microsoft? Exchange solutions from a > leading provider - > http://link.mail2web.com/Business/Exchange > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Get news delivered with the All new Yahoo! Mail. Enjoy RSS feeds right on your Mail page. Start today at http://mrd.mail.yahoo.com/try_beta?.intl=ca From msd001 at gmail.com Sat Jun 23 02:51:11 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 22 Jun 2007 22:51:11 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> Message-ID: <62c14240706221951n4dc9e164x62581c8b7f9718ee@mail.gmail.com> Idiocracy after 500 years of the 'dumbening' of the general public (due to the fact that the intelligent fail to breed) two preserved modern-day average humans awake to find themselves the smartest people alive. It might depict the opposite of H+, but clearly the writer (Mike Judge) understands the concept of H+ even if not the term. I liked it because it was funny and predicts a very real potential future (of course we won't be that stupid, right?) so check these comments: http://www.rottentomatoes.com/m/idiocracy/ From spike66 at comcast.net Sat Jun 23 02:53:37 2007 From: spike66 at comcast.net (spike) Date: Fri, 22 Jun 2007 19:53:37 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: Message-ID: <200706230309.l5N39H5u022486@andromeda.ziaspace.com> Favorite H+ movies. If defined loosely, I would nominate S1m0ne. This was unique in several ways: the only H+ comedy that I know of, the only time Al Pacino played a comedy role (and he was hilarious as a desperate movie director). The sight gags, such as the geek super-programmer with an 80s vintage removable hard disk in a setting that was supposed to be about 2006 to 2008-ish, Pacino's attempts to destroy the image of S1m0ne that backfired, etc. Hilarious stuff. Last time I mentioned this here, someone posted back that they saw it and didn't get it. http://www.s1m0ne.com/ http://www.imdb.com/title/tt0258153/fullcredits#cast > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Jef Allbright > Sent: Friday, June 22, 2007 2:38 PM > To: ExI chat list > Subject: Re: [ExI] Favorite ~H+ Movies > > On 6/22/07, A B wrote: > > > For my contribution, I recommend: > > > > * Original Director's Cut of "Bladerunner". > > As a somewhat transhuman themed movie, I enjoyed Solaris. 2001 is > probably my favorite, but that was a long long time ago. > > A few enjoyable movies significant to our time include Syriana, Babel, > Children of Men, Crash for dealing with the increasing importance of > an increasingly rational morality despite our strong evolved biases. > > I watched Fear and Loathing in Las Vegas recently, and thought it was > terrible. 
Sorry Stathis. ;-) > > - Jef > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From fauxever at sprynet.com Sat Jun 23 03:22:37 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 23 Jun 2007 03:22:37 -0000 Subject: [ExI] Favorite ~H+ Movies References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com><838541.44619.qm@web37414.mail.mud.yahoo.com><045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> <62c14240706221951n4dc9e164x62581c8b7f9718ee@mail.gmail.com> Message-ID: <000701c7ccd6$cf9dc740$6501a8c0@brainiac> From: "Mike Dougherty" To: "ExI chat list" Sent: Friday, June 22, 2007 7:51 PM > Idiocracy Idiotic. Olga From russell.wallace at gmail.com Sat Jun 23 03:26:47 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 23 Jun 2007 04:26:47 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> On 6/22/07, A B wrote: > > Boy, things have been tense around here lately. We > should be entitled to a little fun once in a while, > right? Right ^.^ I thought it would be fun to make a list of our > favorite semi-transhumanist movies. This written > medium can sometimes be somewhat dry, and difficult to > express and share positive emotions with each other. > It may sound cheesy, but perhaps by sharing our > favorite movies, we could more easily recognize some > of the more fundamental feelings and aspirations > between us. Good idea! Let's see, what counts as "semi-transhumanist"? For the sake of argument I'll take it as portraying in a positive light a) reaching up to life rather than sinking down to death, and b) the role of science and rationality in general in that process. So for movies: Independence Day Hmm, short list! (Not saying there aren't other possible candidates, but that's the only one that comes to mind. Most of the other great movies I can think of have technology as just a neutral background, and some - well, now I almost wish The Terminator wasn't one of the greatest.) I'm going to take the liberty of including anime series also: Blue Seed Bubblegum Crisis 2040 Neon Genesis Evangelion Slayers Xenosaga (also a game series) -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jun 23 03:49:25 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 22 Jun 2007 22:49:25 -0500 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <000701c7ccd6$cf9dc740$6501a8c0@brainiac> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> <62c14240706221951n4dc9e164x62581c8b7f9718ee@mail.gmail.com> <000701c7ccd6$cf9dc740$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070622224814.02281f70@satx.rr.com> At 08:08 PM 7/22/2007 -0700, Olga wrote: > > Idiocracy > >Idiotic. Hey, come on, that was Joe Goebbels' favorite eugenics movie! From sentience at pobox.com Sat Jun 23 04:43:09 2007 From: sentience at pobox.com (Eliezer S. 
Yudkowsky) Date: Fri, 22 Jun 2007 21:43:09 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> Message-ID: <467CA4DD.7040506@pobox.com>

Russell Wallace wrote:
>
> Good idea! Let's see, what counts as "semi-transhumanist"? For the sake
> of argument I'll take it as portraying in a positive light a) reaching
> up to life rather than sinking down to death, and b) the role of science
> and rationality in general in that process.
>
> Slayers

???

--
Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From emlynoregan at gmail.com Sat Jun 23 05:14:43 2007 From: emlynoregan at gmail.com (Emlyn) Date: Sat, 23 Jun 2007 14:44:43 +0930 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: <710b78fc0706222214xbe4ea77kee70fa48685ae1dd@mail.gmail.com>

Memento, with Guy Pearce. Very hard to explain why I think this is transhumanist. Maybe the exploration of the nature of mind? I think in some way it feels to me as though our normal mental architecture is broken compared to what we need for real understanding of the universe, analogous to the way his is for coping with normal life, and our struggle to overcome feels similar: doomed, heroic, and sickly fascinating.

Emlyn

On 23/06/07, A B wrote:
>
> Boy, things have been tense around here lately. We
> should be entitled to a little fun once in a while,
> right? [...]

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From russell.wallace at gmail.com Sat Jun 23 05:32:32 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 23 Jun 2007 06:32:32 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <467CA4DD.7040506@pobox.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> Message-ID: <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com>

On 6/23/07, Eliezer S. Yudkowsky wrote:
>
> Russell Wallace wrote:
> > Slayers
>
> ???

Rationality. The heroes didn't go "oh, who can say what will occur, let us just meditate on transcendental truth", they went "right, this is a bloody bad situation, screw the 'this is fate' thing, let's figure out what's rationally the best course of action."

Granted, a stretch. I plead sloppiness (I wasn't thinking too clearly at the time), but at the same time, a core of truth (I'm not ashamed of having included it in the list).

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From amara at amara.com Sat Jun 23 05:56:20 2007 From: amara at amara.com (Amara Graps) Date: Sat, 23 Jun 2007 07:56:20 +0200 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? Message-ID:

Jef Allbright jef at jefallbright.net :
>Arguing with you, Lee, has always perplexed me in a way that I've
>experienced only with you and my ex-wife. I dealt with her because of
>our shared interest in the kids, I find myself dealing with you
>because of a shared interest in the community and topics of the
>extropy list. In both cases, perplexing in the way you break the
>rules of logical discourse while pleading ignorance or injury.

Lee Corbin was in my .alwaysblock file from September 2002 until sometime in 2006 because of his maddening way of never taking my words at face value. I found him writing many interpretations of words that I never said, and I was often pigeonholed into nonsense categories, and eventually I found it too exhausting to give any more of my time and typing hands to explain or defend myself. Any response I have to Lee here now is made very carefully.

Amara

--

Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia

From sentience at pobox.com Sat Jun 23 06:05:36 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Fri, 22 Jun 2007 23:05:36 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> Message-ID: <467CB830.2070404@pobox.com>

Russell Wallace wrote:
> Russell Wallace wrote:
> > Slayers
>
> Rationality. The heroes didn't go "oh, who can say what will occur, let
> us just meditate on transcendental truth", they went "right, this is a
> bloody bad situation, screw the 'this is fate' thing, let's figure out
> what's rationally the best course of action."

No one did that except Xellos. Certainly not Ms. Existential Risk.

> Granted, a stretch. I plead sloppiness (I wasn't thinking too clearly at
> the time), but at the same time, a core of truth (I'm not ashamed of
> having included it in the list).
Hey, y'know, you also have the option of zapping it off the list instead of stretching. No law against changing your mind.

There really aren't many characters in fiction, period, who actually react reasonably to the galactic superthreat of the month - who think anything like what you or I would think in that situation. Lawrence Watt-Evans's characters are the only ones I can think of offhand who seem to go through anything like a rational reasoning process.

Oh, and: Geneshaft. It wasn't very good, but it seemed detectably transhumanist-themed.

--
Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From russell.wallace at gmail.com Sat Jun 23 06:17:52 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 23 Jun 2007 07:17:52 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <467CB830.2070404@pobox.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> <467CB830.2070404@pobox.com> Message-ID: <8d71341e0706222317m12821a7ej7d65625ea43b298@mail.gmail.com>

On 6/23/07, Eliezer S. Yudkowsky wrote:
>
> No one did that except Xellos. Certainly not Ms. Existential Risk.

Didn't she? I thought she reacted about as well as could have been expected. Of course, she didn't blind herself with the magic deathseeking words "Existential Risk". If she had, humanity in that universe would be extinct right now.

> Hey, y'know, you also have the option of zapping it off the list
> instead of stretching. No law against changing your mind.

I'll change my mind and add Bleach to the list alongside Slayers.

> There really aren't many characters in fiction, period, who actually
> react reasonably to the galactic superthreat of the month - who think
> anything like what you or I would think in that situation.

Yeah. DuQuesne is the major one that comes to mind.

> Lawrence Watt-Evans's characters are the only ones I can think of
> offhand who seem to go through anything like a rational reasoning process.
>
> Oh, and: Geneshaft. It wasn't very good, but it seemed detectably
> transhumanist-themed

Okay, haven't read either of those.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sentience at pobox.com Sat Jun 23 06:34:28 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Fri, 22 Jun 2007 23:34:28 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706222317m12821a7ej7d65625ea43b298@mail.gmail.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> <467CB830.2070404@pobox.com> <8d71341e0706222317m12821a7ej7d65625ea43b298@mail.gmail.com> Message-ID: <467CBEF4.3010504@pobox.com>

Russell Wallace wrote:
> On 6/23/07, *Eliezer S. Yudkowsky* wrote:
>
> No one did that except Xellos. Certainly not Ms. Existential Risk.
>
> Didn't she? I thought she reacted about as well as could have been expected. Of
> course, she didn't blind herself with the magic deathseeking words
> "Existential Risk". If she had, humanity in that universe would be
> extinct right now.

Season 3's use was reasonable. Season 2...
no, I'm sorry, your boyfriend's life does not weigh reasonably against the entire planet.

> Oh, and: Geneshaft. It wasn't very good, but it seemed detectably
> transhumanist-themed
>
> Okay, haven't read either of those.

Geneshaft's an anime. *Lots* of books more transhumanist than Geneshaft.

My girlfriend Erin would add "Ghost in the Shell" and she'd be dead right, come to think of it.

--
Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From pgptag at gmail.com Sat Jun 23 07:20:32 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sat, 23 Jun 2007 09:20:32 +0200 Subject: [ExI] Pure Philosophy Dispute: Are Categories Objective? In-Reply-To: References: Message-ID: <470a3c520706230020m3a92e294v8d138f617a1cf954@mail.gmail.com>

A quick word to state that I tend not to take too seriously any sentence containing the word "objective". It is unnecessary, and usually leads to intolerance and worse.

G.

On 6/23/07, Amara Graps wrote:
> Jef Allbright jef at jefallbright.net :
> >Arguing with you, Lee, has always perplexed me in a way that I've
> >experienced only with you and my ex-wife.
> [...]

From stathisp at gmail.com Sat Jun 23 09:10:45 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 23 Jun 2007 19:10:45 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> Message-ID:

On 23/06/07, Lee Corbin wrote:

> > For example, you could make yourself believe that
> > after your death, you survive if the rest of humanity survives;
>
> Exactly. Or they might believe that they'll become a particular
> sun flower, or a particular river that they're fond of. And they'd
> be just plain wrong, if not nuts.

And the reason they're wrong is that going to bed and being replaced by a sunflower or a river in the morning will not reproduce the experience of going to bed and waking up as myself, whereas going to bed and waking up as my exact duplicate will.
Going to bed in the knowledge that I will die overnight while my duplicate of a few minutes ago sleeps soundly in the next room does not reproduce the experience of going to bed and waking up normally, but is more like going to bed and not waking up at all.

The counterargument is that going to bed as per the last paragraph is similar to going to bed and waking up with a few minutes' amnesia. If I take a drug such as midazolam, which I know will wipe out any memory of the next few minutes when I wake up tomorrow, then during that period that I know I won't remember I will be in a position analogous to that of contemplating my imminent death, knowing that my present self will have no direct successor. If I can overcome my fear of anticipating no successor experiences then I should (logically, I would argue) overcome my fear of death. On the other hand, if I can find consolation in the survival of a copy who branched off from me some time ago then I should also find consolation in the existence of past versions of me, who definitely existed and definitely shared my memories etc. After all, once this instance of me is permanently dead his relationship to past, present and future copies is all the same.

P.S. I am shocked at the acrimony this thread is causing!

--
Stathis Papaioannou

From alex at ramonsky.com Sat Jun 23 09:30:00 2007 From: alex at ramonsky.com (Alex Ramonsky) Date: Sat, 23 Jun 2007 10:30:00 +0100 Subject: [ExI] Happy Solstice! References: <200706221509.l5MF9J4c024157@andromeda.ziaspace.com> Message-ID: <467CE818.3020602@ramonsky.com>

You're probably right [and thanks Amara for the super links] but that's not my question. : )

My question is, why is it _called_ "midsummer solstice" in cultures that believe it's the beginning of summer? It seems as foolish as saying "Please enter through the middle door" meaning the one at the start, or "I'll meet you midweek" when one means Monday. This started because I heard a parent say "Midsummer solstice is the start of summer" to a child and I could see the confusion on the kid's face as he tried to work it out! These Romans are crazy! : )

Best,
AR
********

spike wrote:

>Growing seasons would lag behind the solar seasons, which is the most
>critical schedule to most societies. Those smart Celts, being astronomy
>minded, would perhaps be more likely ignore the air temperature and note the
>celestial cues.
>
>spike

From pharos at gmail.com Sat Jun 23 10:06:02 2007 From: pharos at gmail.com (BillK) Date: Sat, 23 Jun 2007 11:06:02 +0100 Subject: [ExI] Happy Solstice! In-Reply-To: <467CE818.3020602@ramonsky.com> References: <200706221509.l5MF9J4c024157@andromeda.ziaspace.com> <467CE818.3020602@ramonsky.com> Message-ID:

On 6/23/07, Alex Ramonsky wrote:
> You're probably right [and thanks Amara for the super links] but that's
> not my question. : )
> [...]
From pharos at gmail.com Sat Jun 23 10:14:20 2007 From: pharos at gmail.com (BillK) Date: Sat, 23 Jun 2007 11:14:20 +0100 Subject: [ExI] Happy Solstice! In-Reply-To: <467CE818.3020602@ramonsky.com> References: <200706221509.l5MF9J4c024157@andromeda.ziaspace.com> <467CE818.3020602@ramonsky.com> Message-ID:

On 6/23/07, Alex Ramonsky wrote:
> You're probably right [and thanks Amara for the super links] but that's
> not my question. : )
> [...]

Ooops! Sorry for that blank message.

The sun positions defined the mid-points of the four seasons. The temperature changes lag behind these points. But not the growing seasons. For farming communities, the midsummer solstice was mid-way through the growing season, heading for the autumn harvest.

BillK

From sjatkins at mac.com Sat Jun 23 10:35:57 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 23 Jun 2007 03:35:57 -0700 Subject: [ExI] Climate Bet, Forecasts by Scientists versus Scientific Forecasts In-Reply-To: <200706221543.l5MFhEYT025482@ms-smtp-01.texas.rr.com> References: <200706221543.l5MFhEYT025482@ms-smtp-01.texas.rr.com> Message-ID:

Hmm, quoting:

"Al Gore has claimed that there are scientific forecasts that the earth will become warmer and that this will occur rapidly. University of Pennsylvania Professor J. Scott Armstrong, author of Principles of Forecasting: A Handbook for Researchers and Practitioners, and Kesten C. Green, of Monash University (and Armstrong's Co-Director of forecastingprinciples.com), have been unable to locate a scientific forecast to support that viewpoint. As a result, Scott Armstrong offers a challenge to Al Gore that he will be able to make more accurate forecasts of annual mean temperatures than those that can be produced by current climate models."

Odd. The general scientific consensus is that significant warming is occurring now. So why is there room for doubt that earth will become warmer? It has been and is becoming warmer and the likely causes are not significantly abating. In some cases the increase in temperature appears to be building on itself, especially in the Arctic. So what exactly is meant by the above? There is certainly no dearth of scientific papers presenting models of how fast average temperatures have risen or may rise or under what circumstances there may be more rapid fluctuations in temperature. Methinks the above is not quite honest while protesting that it is more honest. Average temperature at each weather station is not a very scientific way to gauge global warming.

- samantha

On Jun 22, 2007, at 8:43 AM, Max More wrote:

> Scott Armstrong and Kesten Green aim to improve the use of scientific
> forecasting methods in the public policy area.
They are using global
> warming as their first example. See http://theclimatebet.com
>
> Scott tells me that the following paper will be presented next
> Wednesday:
>
> Global Warming: Forecasts by Scientists versus Scientific Forecasts
> http://www.forecastingprinciples.com/Public_Policy/WarmAudit31.pdf
>
> Max

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From avantguardian2020 at yahoo.com Sat Jun 23 10:49:55 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 23 Jun 2007 03:49:55 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> Message-ID: <928867.8338.qm@web60523.mail.yahoo.com>

--- Joseph Bloch wrote:

> Alas, I still pine for a really good, deliberately
> >H film. I'd do
> handsprings for one that was actually
> pro-transhumanist in its portrayal,
> but I'm afraid supermen make too good a villain for
> that to happen any time
> soon.

Aside from the 80's cheese-factor (yet strangely entertaining), I'd say "Robocop" put a net positive spin on H+ tech, although some would call it more cyberpunk.

I also liked "Vanilla Sky" despite the presence of Tommy boy. I thought it was a well-written movie that struck me as sort of an H+ trojan horse. If you don't know what I mean, I don't want to spoil it for you. If you can't stand Tom Cruise then see "Abre los ojos", the original Spanish movie of which "Vanilla Sky" is an English remake. Penelope Cruz plays the same role in both.

Stuart LaForge alt email: stuart"AT"ucla.edu

"When an old man dies, an entire library is destroyed." - Ugandan proverb

____________________________________________________________________________________
Expecting? Get great news right away with email Auto-Check. Try the Yahoo! Mail Beta. http://advision.webevents.yahoo.com/mailbeta/newmail_tools.html

From russell.wallace at gmail.com Sat Jun 23 13:17:42 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 23 Jun 2007 14:17:42 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <467CBEF4.3010504@pobox.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> <467CB830.2070404@pobox.com> <8d71341e0706222317m12821a7ej7d65625ea43b298@mail.gmail.com> <467CBEF4.3010504@pobox.com> Message-ID: <8d71341e0706230617x71a81c64o6e674df28fa2f0e2@mail.gmail.com>

On 6/23/07, Eliezer S. Yudkowsky wrote:
>
> Geneshaft's an anime. *Lots* of books more transhumanist than Geneshaft.

Ah! I hadn't heard of that, might look it up.

> My girlfriend Erin would add "Ghost in the Shell" and she'd be dead
> right, come to think of it.

*nods* I've heard good things of that, on my to-watch list.

Another interesting one is .hack, which is a game and anime series in four parts; each game comes with one episode.
I played #1 which was okay, fun in a quirky way, but got very repetitive towards the end, and I'm told it just keeps going like that for the other three parts; and watched the first episode of the anime which was very good, wanted more, would happily buy if they'd stick all four episodes on a DVD for a reasonable price; but given that the other three episodes only come one per game, I'm not paying 60 euro an episode for it :P Another one that comes to mind is Mai Otome. (Its sort-of prequel, Mai Hime, is also very good, but with a different theme; I can't really grant it even honorary transhumanistic relevance. I can still recommend it though.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Jun 23 13:57:08 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 23 Jun 2007 09:57:08 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <000701c7ccd6$cf9dc740$6501a8c0@brainiac> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> <62c14240706221951n4dc9e164x62581c8b7f9718ee@mail.gmail.com> <000701c7ccd6$cf9dc740$6501a8c0@brainiac> Message-ID: <62c14240706230657h7a17274dode98616ad4ddcf1d@mail.gmail.com> On 7/22/07, Olga Bourlin wrote: > From: "Mike Dougherty" > > Idiocracy > > Idiotic. Right. That was the point. Mission accomplished. From joseph at josephbloch.com Sat Jun 23 15:06:11 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Sat, 23 Jun 2007 11:06:11 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <928867.8338.qm@web60523.mail.yahoo.com> References: <045301c7b51d$6dc67470$6400a8c0@hypotenuse.com> <928867.8338.qm@web60523.mail.yahoo.com> Message-ID: <049701c7b5a8$09d118e0$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of The Avantguardian > Sent: Saturday, June 23, 2007 6:50 AM > To: ExI chat list > Subject: Re: [ExI] Favorite ~H+ Movies > > --- Joseph Bloch wrote: > > > Alas, I still pine for a really good, deliberately > > >H film. I'd do > > handsprings for one that was actually > > pro-transhumanist in its portrayal, > > but I'm afraid supermen make too good a villain for > > that to happen any time > > soon. > > Aside from the 80's cheese-factor (yet strangely > entertaining), I'd say "Robocop" put a net positive > spin on H+ tech although some would call it more > cyberpunk. > > I also liked "Vanilla Sky" despite the presence of > Tommy boy. I thought it was a well written movie that > struck me as sort of a H+ trojan horse. If you don't > know what I mean, I don't want to spoil it for you. If > you can't stand Tom Cruise then see "Abres les Ojos", > the original spanish movie of which "Vanilla Sky" is > an english remake. Penelope Cruz plays the same role > in both. I had totally forgotten about "Robocop"; haven't seen it in years. I would definitely add it to the list. I confess I've never seen "Vanilla Sky". A shame for a good sci-fi fan to admit, I know. Perhaps I'll put it on the Netflix list. 
Joseph http://www.josephbloch.com

From gts_2000 at yahoo.com Sat Jun 23 15:29:15 2007 From: gts_2000 at yahoo.com (gts) Date: Sat, 23 Jun 2007 11:29:15 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID:

On Fri, 22 Jun 2007 03:08:21 -0400, Lee Corbin wrote:

>> Paradoxical if you insist that they **must be** the same person.
>
> What paradox? As I mentioned to Stathis (I think), it hardly
> rises to the "paradoxical". Two identical chess programs can
> also fight it out, Fritz 9.1 vs. Fritz 9.1. So what's new?

Consider that if identity is a function of the will, as I maintain, and if such programs as Fritz may be said to have will, then during the game Fritz-White's identity is quite different from Fritz-Black's. Each of the two instances of Fritz has, at every step, a unique will and therefore a unique identity.

-gts
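A toy sketch in Python of the divergence gts describes -- illustrative only, the class below is a hypothetical stand-in, not the real Fritz engine: two byte-identical programs, seated on opposite sides of the board, accumulate different histories from the first move.

import copy

class Engine:
    """Deterministic toy engine; every copy runs identical code."""
    def __init__(self, side):
        self.side = side           # the only initial difference: context
        self.memories = []         # per-instance state, diverging move by move

    def choose(self, options):
        move = sorted(options)[0]  # the same selection rule in every copy
        self.memories.append((self.side, move))
        return move

white = Engine("white")
black = copy.deepcopy(white)       # an exact duplicate at t = 0
black.side = "black"               # ...then seated on the other side
white.choose(["e4", "d4"])         # White's legal options
black.choose(["e5", "c5"])         # Black's legal options
print(white.memories != black.memories)  # True: identical code, distinct histories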
From jef at jefallbright.net Sat Jun 23 16:50:56 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sat, 23 Jun 2007 09:50:56 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID:

On 6/23/07, gts wrote:
> Consider that if identity is a function of the will, as I maintain, and if
> such programs as Fritz may be said to have will, then during the game
> Fritz-White's identity is quite different from Fritz-Black's. Each of the
> two instances of Fritz has, at every step, a unique will and therefore a
> unique identity.

Gordon, in your statement above, your concept of "will" does appear to perform exactly the same as my concept of agency. So please consider that to be agreement between us! :-) [Note that I don't mean that the concept of agency is exclusively mine since my usage is completely standard in the fields of social science, moral philosophy, game theory, robotics, etc. Only my particular application of it to philosophy of personal identity may be somewhat novel.]

However, it appears that you took offense, rather than taking my point, in our last exchange when I tried to convey that your "will", or my "agency", need not be an all-or-nothing affair.

Consider the utterances "I do as I will", "I do the King's will", and "I do God's will", or to the same point but avoiding the possibly obscurant first person: "He does as he wills", "He does the King's will", and "He does God's will."

Consider these spoken by the same agent interleaved throughout the same day. Clearly it's the same agent throughout the day, but to the extent the agent is acting/exercising/implementing the will of another entity, who is the entity behind the action? What if the agent is not a biological human, but a robotic machine? How would you describe this set of circumstances most meaningfully, coherently, and extensibly?

- Jef

From lcorbin at rawbw.com Sat Jun 23 17:31:57 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 23 Jun 2007 10:31:57 -0700 Subject: [ExI] Happy Solstice! References: <200706221509.l5MF9J4c024157@andromeda.ziaspace.com> Message-ID: <025801c7b5bc$c4bc43a0$6501a8c0@homeef7b612677>

Happy Solstice to you too, Spike.

I performed the appropriate ritual in the parking lot at work with the help of an assistant. We used a yardstick as plumb line, waited until 1pm, and measured the length of the shadow ('twas 9 inches).

(Oddly, I noted that our shadows didn't seem to point exactly north until about 7 minutes after 1pm---1 pm, of course, because of the dratted daylight savings time.)

So the angle generated by the sun and yardstick was arctan of 1/4, which is almost exactly 14 degrees. So (drawing a sanity-check diagram and) adding to the 23.5 degree axial tilt of the Earth, we got 37.5 for our latitude. Very nice for Santa Clara, California, no? (Google Earth gave 37.38 or something. Had to be pretty lucky given how much the wind was blowing the yardstick around.)

But it got even better. My assistant pointed out that we were somewhere in the middle of the time zone, so substituting +8 hours from UTC (GMT in London) was only an approximation. Google suggested that the *longitude* of our parking lot is 121.99 W. My assistant's idea was that if we were, say, a bit to the west of the middle of our time zone, then we should have to wait a bit for the sun to get exactly overhead. Lo and behold once more! Two degrees (1.99, that is) yields eight minutes because the Earth turns one degree every four minutes (an hour is 1/24 of 360, or 15 degrees, so 60 minutes = 15 degrees is of course four minutes.)

Delighted was I, since I had noted that we seemed to have to wait seven minutes for the sun to get the yardstick's shadow lined up right!

You can do this any day this summer! See how many minutes before or after the hour you have to wait for a shadow to point exactly north, and compute your longitude. It makes the Admiralty and John Harrison's quest all the more poignant.

Every time that ritual measurements like this are performed, the claims of the flat-Earthers are weakened a little more. :-)

Lee

----- Original Message -----
From: "spike"
To: "'ExI chat list'"
Sent: Friday, June 22, 2007 7:56 AM
Subject: Re: [ExI] Happy Solstice!

> Growing seasons would lag behind the solar seasons, which is the most
> critical schedule to most societies. Those smart Celts, being astronomy
> minded, would perhaps be more likely ignore the air temperature and note the
> celestial cues.
>
> spike
>
>> -----Original Message-----
>> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
>> bounces at lists.extropy.org] On Behalf Of Alex Ramonsky
>> Sent: Friday, June 22, 2007 2:53 AM
>> To: ExI chat list
>> Subject: Re: [ExI] Happy Solstice!
>>
>> Happy Solstice Amara : )
>> ...Something that I have wondered for many years...Does anyone know why
>> the Midsummer Solstice/Midwinter Solstice are called so, when (at least
>> in the UK) they're considered to be the _beginning_ of summer/winter?
>> The Celts treated them as the middle of the seasons...Is this one of
>> those eccentric British things or is there genuine worldwide confusion?
>> Best,
>> AR
>> **********
>>
>> Amara Graps wrote:
>>
>> >Happy June Solstice [1] to you Northerners and Southerners (hemispheres,
>> >that is)!! Celebration time!
>> >
>> >Midsummer Night [2]
>> >by Zinta Aistars
>> [...]
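Lee's two measurements reduce to a few lines of Python. A minimal sketch under his stated assumptions -- a 36-inch yardstick, the June-solstice sun overhead at the Tropic of Cancer, and 120 W as the Pacific zone's reference meridian; the function names are illustrative, not from any library:

import math

def latitude_from_shadow(stick_in=36.0, shadow_in=9.0, tilt_deg=23.5):
    # Sun's angle off vertical at local solar noon on the June solstice.
    zenith = math.degrees(math.atan(shadow_in / stick_in))  # arctan(1/4) ~ 14.0
    # The sun is overhead at the Tropic of Cancer (23.5 N), so for a site
    # north of the tropic: latitude = zenith angle + axial tilt.
    return zenith + tilt_deg

def longitude_from_delay(delay_min=8.0, zone_meridian_w=120.0):
    # The Earth turns 360/24 = 15 degrees per hour, i.e. 1 degree per 4 minutes.
    # A shadow that lines up *after* mean noon puts you west of the meridian.
    return zone_meridian_w + delay_min / 4.0

print(round(latitude_from_shadow(), 1))   # 37.5, vs. 37.38 from Google Earth
print(round(longitude_from_delay(), 1))   # 122.0, vs. 121.99 W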
From scerir at tiscali.it Sat Jun 23 17:37:50 2007 From: scerir at tiscali.it (scerir at tiscali.it) Date: Sat, 23 Jun 2007 19:37:50 +0200 (CEST) Subject: [ExI] Happy Solstice! Message-ID: <29758688.1182620270383.JavaMail.root@ps10>

alex: My question is, why is it _called_ "midsummer solstice" in cultures that believe it's the beginning of summer?

The link below says something about that http://www.straightdope.com/classics/a1_170b.html

Alex: These Romans are crazy! : )

These Romans are not crazy, since there is no 'midsummer' now in this Rome :-)

From jonkc at att.net Sat Jun 23 17:29:16 2007 From: jonkc at att.net (John K Clark) Date: Sat, 23 Jun 2007 13:29:16 -0400 Subject: [ExI] Favorite ~H+ Movies. References: <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer>

The only transhuman movie I can think of that was dead accurate in every respect was made nearly 40 years ago; it was called "The Forbin Project": Dr. Forbin is in charge of a multi-billion-dollar project to make an AI. He is successful; he thinks he can control the AI, and in this he is unsuccessful. For most of the movie we follow Dr. Forbin and his elaborate plot to turn the machine off; it looks to the humans like the plan is working, but in the last 5 minutes of the movie we find out it failed -- in fact the plan never even came close to working. To top it off, the AI now forces Forbin to help it design a successor machine even more powerful than it is. Then the movie ends.

Way ahead of its time! Great story, great dialogue, great acting, great sets, great music; but I think I know why this masterpiece was never very popular: the humans don't win. This is also a very rare example where the movie was MUCH better than the book.

John K Clark

From natasha at natasha.cc Sat Jun 23 16:57:35 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 23 Jun 2007 11:57:35 -0500 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> References: <8d71341e0706221131y6fedcf56r31717228ceb07810@mail.gmail.com> <838541.44619.qm@web37414.mail.mud.yahoo.com> <8d71341e0706222026n3cdb5e5fk3221fcf478572ca8@mail.gmail.com> <467CA4DD.7040506@pobox.com> <8d71341e0706222232t4bd22686i6ae1f4f81e79bf42@mail.gmail.com> Message-ID: <200706231657.l5NGvdSn029880@ms-smtp-05.texas.rr.com>

"Awakenings" (Dir. Marshall, 1990)

"A new doctor finds himself with a ward full of comatose patients. He is disturbed by them and the fact that they have been comatose for decades with no hope of any cure. When he finds a possible chemical cure he gets permission to try it on one of them...."
Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute

If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at comcast.net Sat Jun 23 17:52:45 2007 From: spike66 at comcast.net (spike) Date: Sat, 23 Jun 2007 10:52:45 -0700 Subject: [ExI] Happy Solstice! In-Reply-To: <025801c7b5bc$c4bc43a0$6501a8c0@homeef7b612677> Message-ID: <200706231752.l5NHqqOj008036@andromeda.ziaspace.com>

Cool Lee! Since you take into account the latitude and longitude, your next digit of precision on your yardstick experiment comes from taking into account the analemma. This website gives a reasonable explanation:

http://www.analemma.com/Pages/framesPage.html

The analemma is the result of the fact that the earth isn't in a circular orbit around the sun, but rather an ellipse. This time of year, when we are at the aphelion, or farthest point from the sun, the apparent traverse of the sun across the ecliptic is slightly less than the usual ~1 degree per day. Midwinter it will traverse slightly more than the average. By happy coincidence, the aphelion and perihelion almost correspond with the solstices.

I am setting up an experiment to mark the pavement on my back patio corresponding to the shadow of the peak of the house at exactly noon. Of course most days at noon I would not be home, so it will take years to get most of the calendar days marked. When I do, I will have a figure-8-shaped calendar back there. Isaac will love it.

Is this cool or what?

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of Lee Corbin
> Sent: Saturday, June 23, 2007 10:32 AM
> To: ExI chat list
> Subject: Re: [ExI] Happy Solstice!
>
> Happy Solstice to you too, Spike.
>
> I performed the appropriate ritual in the parking lot at work with the
> help of an assistant. We used a yardstick as plumb line, waited until
> 1pm, and measured the length of the shadow ('twas 9 inches).
> [...]
>
> Delighted was I, since I had noted that we seemed to have to wait
> seven minutes for the sun to get the yardstick's shadow lined up right!
>
> You can do this any day this summer! See how many minutes before
> or after the hour you have to wait for a shadow to point exactly
> north, and compute your longitude. It makes the Admiralty and
> John Harrison's quest all the more poignant.
>
> Every time ritual measurements like this are performed, the claims
> of the flat-Earthers are weakened a little more. :-)
>
> Lee
>
> ----- Original Message -----
> From: "spike"
> To: "'ExI chat list'"
> Sent: Friday, June 22, 2007 7:56 AM
> Subject: Re: [ExI] Happy Solstice!
>
> > Growing seasons would lag behind the solar seasons, which is the most
> > critical schedule to most societies. Those smart Celts, being
> > astronomy-minded, would perhaps be more likely to ignore the air
> > temperature and note the celestial cues.
> >
> > spike
> >
> >> -----Original Message-----
> >> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> >> bounces at lists.extropy.org] On Behalf Of Alex Ramonsky
> >> Sent: Friday, June 22, 2007 2:53 AM
> >> To: ExI chat list
> >> Subject: Re: [ExI] Happy Solstice!
> >>
> >> Happy Solstice Amara : )
> >> ...Something that I have wondered for many years...Does anyone know why
> >> the Midsummer Solstice/Midwinter Solstice are called so, when (at least
> >> in the UK) they're considered to be the _beginning_ of summer/winter?
> >> The Celts treated them as the middle of the seasons...Is this one of
> >> those eccentric British things or is there genuine worldwide confusion?
> >> Best,
> >> AR
> >> **********
> >>
> >> Amara Graps wrote:
> >>
> >> >Happy June Solstice [1] to you Northerners and Southerners (hemispheres,
> >> >that is)!! Celebration time!
> >> >
> >> >Midsummer Night [2]
> >> >by Zinta Aistars
> >> >
> >> >One night each year, that longest night
> >> >be
> >> >
> >>
> >> _______________________________________________
> >> extropy-chat mailing list
> >> extropy-chat at lists.extropy.org
> >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
> >
> > _______________________________________________
> > extropy-chat mailing list
> > extropy-chat at lists.extropy.org
> > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From benboc at lineone.net Sat Jun 23 17:35:57 2007
From: benboc at lineone.net (ben)
Date: Sat, 23 Jun 2007 18:35:57 +0100
Subject: [ExI] extropy-chat Digest, Vol 45, Issue 27
In-Reply-To:
References:
Message-ID: <467D59FD.5050909@lineone.net>

Message: 29
Date: Sat, 23 Jun 2007 04:26:47 +0100
From: "Russell Wallace"
Subject: Re: [ExI] Favorite ~H+ Movies

> I'm going to take the liberty of including anime series also:
> Blue Seed
> Bubblegum Crisis 2040
> Neon Genesis Evangelion
> Slayers
> Xenosaga (also a game series)

You missed the obvious one, and the best, imo:

Ghost in the Shell

ben z

From gts_2000 at yahoo.com Sat Jun 23 17:18:55 2007
From: gts_2000 at yahoo.com (gts)
Date: Sat, 23 Jun 2007 13:18:55 -0400
Subject: [ExI] Next moment, everything around you will probably change
In-Reply-To:
References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com>
	<013c01c7b22b$22548210$6501a8c0@homeef7b612677>
	<017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677>
	<01e001c7b3d7$4c627550$6501a8c0@homeef7b612677>
	<021a01c7b49c$95d72030$6501a8c0@homeef7b612677>
Message-ID:

On Sat, 23 Jun 2007 12:50:56 -0400, Jef Allbright wrote:

> Gordon, in your statement above, your concept of "will" does appear to
> perform exactly the same as my concept of agency. So please consider
> that to be agreement between us! :-)

Okay. But I make a distinction between will and agency. As I mentioned to
you in my last, I see agency as the faculty through which a person
executes his will.

Your will is your essence at any given moment. It is not a "model", nor is
it an "abstract entity". Your will is perhaps the only concrete,
non-abstract thing in all the world. You know nothing more intimately than
you know your own will. When you want to eat, you want to eat; when you
want to sleep, you want to sleep! Nothing abstract about it! This is what
I meant when I wrote that the will is primitive in the sense meant by
Nietzsche and Schopenhauer.

> Consider these spoken by the same agent interleaved throughout the same
> day. Clearly it's the same agent throughout the day, but to the extent
> the agent is
> acting/exercising/implementing the will of another entity, who is the
> entity behind the action?

I'd say it's always the actor. Your example of "I am doing God's will"
parses out to be "I am doing my own will, which is to do what I think is
God's will". Nobody does what they don't want to do.

> What if the agent is not a biological human, but a robotic machine?

I'm not sure there is a difference between them.
-gts From jef at jefallbright.net Sat Jun 23 18:20:31 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sat, 23 Jun 2007 11:20:31 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On 6/23/07, gts wrote: > Your will is your essence at any given moment. It is not a "model", nor is > it an "abstract entity". Your will is perhaps the only concrete, > non-abstract thing in all the world. Okay, thanks. I'll let this lie for now as I know of no effective means for bridging the gap entailed by the assumption of such an essence. - Jef From fauxever at sprynet.com Sat Jun 23 18:36:25 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 23 Jun 2007 18:36:25 -0000 Subject: [ExI] Favorite ~H+ Movies. References: <838541.44619.qm@web37414.mail.mud.yahoo.com> <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> Message-ID: <000e01c7cd56$7a71fa40$6501a8c0@brainiac> From: "John K Clark" To: "ExI chat list" Sent: Saturday, June 23, 2007 10:29 AM > The only transhuman movie that I think of that was dead accurate in every > respect was made nearly 40 years ago, it was called "The Forbin Project" > ... To top it off the AI now forces Forbin to help it design a successor > machine even more powerful than it is. Ah, yes ... I remember that well! The tagline for "The Forbin Project" could have been the same one they used for "Seconds" (a movie I saw as a teenager in 1966, which gave a slight boost to my perspective forevermore, as well as a new word to my vocabulary, i.e., "reborns"): "What Are Seconds?... The Answer May Be Too Terrifying For Words!" (Olga's note: Oh, yeah? Well, those people obviously never heard of "Second Life.") more: "What if someone offered you the chance to begin again, with a new life that was organized to be exactly what you wanted it to be? That's what the organization offers some wealthy people..." (Olga's note: Ha! Ha! Those stinkin' greedy wealthy people who live dangerously and just can't seem to heed the moral lessons imbued in fairy tales such as The Tale of the Fisherman ...): http://www.imdb.com/title/tt0060955/ http://en.wikipedia.org/wiki/The_Tale_of_the_Fisherman_and_the_Fish ;) Olga From thespike at satx.rr.com Sat Jun 23 18:41:48 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 23 Jun 2007 13:41:48 -0500 Subject: [ExI] will as essence In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com> At 11:20 AM 6/23/2007 -0700, Jef wrote: > > Your will is your essence at any given moment. It is not a "model", nor is > > it an "abstract entity". Your will is perhaps the only concrete, > > non-abstract thing in all the world. > >... I know of no effective >means for bridging the gap entailed by the assumption of such an >essence. Especially since Benjamin Libet and others showed that the conscious experience of willing an act lags by some 200 to 500 millisecs the brain's activation of the relevant volitional processes or "readiness potential". 
Even granting that willing is a real experience, and sometimes very vivid,
it can't be "your essence" unless your essence is to be a machine that only
belatedly notices what it's up to. (Many here would agree with that, of
course.)

Damien Broderick

From thespike at satx.rr.com Sat Jun 23 18:51:23 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Sat, 23 Jun 2007 13:51:23 -0500
Subject: [ExI] brain and attention
Message-ID: <7.0.1.0.2.20070623134619.02408968@satx.rr.com>

http://uninews.unimelb.edu.au/articleid_4334.html

Interactions in the brain which enable us to pay attention to some of the
things we see while barely noticing others have been discovered in research
at the University of Melbourne. The findings are the first to show the
complex interactions between two different areas of the brain when an
object catches our eye. They were published in the international journal
Science last week. The study was conducted by Dr Yuri Saalmann and
Associate Professor Trichur Vidyasagar (Optometry and Vision Sciences), and
Dr Ivan Pigarev, a visiting scientist from the Russian Academy of
Sciences... Associate Professor Vidyasagar and his colleagues found that...
the lateral intraparietal cortex, which controls attention, stimulates
activity in a lower area called the medial temporal area, which influences
the processing of visual information. [etc]

From spike66 at comcast.net Sat Jun 23 19:09:41 2007
From: spike66 at comcast.net (spike)
Date: Sat, 23 Jun 2007 12:09:41 -0700
Subject: [ExI] Favorite ~H+ Movies.
In-Reply-To: <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer>
Message-ID: <200706231909.l5NJ9lFT008753@andromeda.ziaspace.com>

> bounces at lists.extropy.org] On Behalf Of John K Clark
...
> Dr. Forbin is in charge of a multi-billion-dollar project to make an AI.
> He is successful; he thinks he can control the AI, and in this he is
> unsuccessful...
>
> John K Clark

Ja, John this is what I like about AC Clarke's 2001 ASO, may he live
forever. Oh wait, you're right, he already has. {8^D

In 2001 ASO the AI Hal and the humans do come into conflict, but in the end
they work together and make a truly whoopass combination. Well I guess it
depends on how you interpret the psychedelic upside-down, color-reversed
everglades-from-a-helicopter scenes that the stoners so adored.

spike

From spike66 at comcast.net Sat Jun 23 19:12:05 2007
From: spike66 at comcast.net (spike)
Date: Sat, 23 Jun 2007 12:12:05 -0700
Subject: [ExI] gay bomb again
In-Reply-To: <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer>
Message-ID: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com>

There was a news article about a fire in a warehouse that contained several
bales of marijuana. Altho the firefighters were wearing the usual masks,
several of them finished the day feeling as if they had been "toking" upon
"reefers". (This may not be the latest hip terminology these days, perhaps
it is updated by now, I don't know.) Perhaps the hipper of the firefighters
were intentionally removing their masks and inhaling. {8-]

This caused me to wonder about Gene's notion of finding a behavior-modifying
chemical that spans orders of magnitude in the ratio of first effects to
fatal toxicity. "Grass" smokers have always appeared to me as peaceful
types, not wanting to "rumble" as do some devourers of alcohol. Yet I have
never heard of anyone perishing by overdosing on "maryjane" or "weed". Can
that happen? Anyone know?
A marijuana incinerator would be easy to make, easy to deploy on a battlefield, non-lethal, non-harmful to the local fauna, and might cause those extremist Episcopalians and Presbyterians to stop slaying each other. If sufficiently overdosed, perhaps they would begin to sing John Lennon's Imagine. spike From gts_2000 at yahoo.com Sat Jun 23 18:56:42 2007 From: gts_2000 at yahoo.com (gts) Date: Sat, 23 Jun 2007 14:56:42 -0400 Subject: [ExI] will as essence In-Reply-To: <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com> Message-ID: On Sat, 23 Jun 2007 14:41:48 -0400, Damien Broderick wrote: > Especially since Benjamin Libet and others showed that the conscious > experience of willing an act lags by some 200 to 500 millisecs the > brain's activation of the relevant volitional processes or "readiness > potential". Even granting that willing is a real experience, and > sometimes very vivid, it can't be "your essence" unless your essence > is to be a machine that only belatedly notices what it's up to. Those volitional processes that come into being before the conscious experience are acts of the will. For example, as an illustration of the experiment you cite, I might unconsciously pull my hand away from a fire before I notice consciously that I've done so. But this does not mean that I did not at some level will to pull my hand away from the fire! As per Schopenhauer, the intellect serves the will, not the other way around. Libet's experimental results do not falsify Schopenhauer's conjecture. They corroborate it. -gts From austriaaugust at yahoo.com Sat Jun 23 19:31:24 2007 From: austriaaugust at yahoo.com (A B) Date: Sat, 23 Jun 2007 12:31:24 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <838541.44619.qm@web37414.mail.mud.yahoo.com> Message-ID: <203020.59363.qm@web37405.mail.mud.yahoo.com> Whoops. I meant to write: original Director's Cut of "Blade Runner". Just had to nitpick. Some great suggestions from people, and I haven't even seen many of them. Let's try to keep it going, if we can. :-) I'd like to see as many arguably ~H+ movies as are out there. I need to relax for a couple weeks or so, and I'm probably not the only one. :-) Sincerely, Jeffrey Herrlich --- A B wrote: > Boy, things have been tense around here lately. We > should be entitled to a little fun once in a while, > right? I thought it would be fun to make a list of > our > favorite semi-transhumanist movies. This written > medium can sometimes be somewhat dry, and difficult > to > express and share positive emotions with each other. > It may sound cheesy, but perhaps by sharing our > favorite movies, we could more easily recognize some > of the more fundamental feelings and aspirations > between us. [Maybe we could also suggest favorite > music pieces, but I'll let that begin on someone > else's initiative.] > > For my contribution, I recommend: > > * Original Director's Cut of "Bladerunner". > > You must see the original Director's Cut or you > haven't seen the movie... sorry :-) Sure, it's a > dark-future themed movie, and it is slightly cheesy > in > a few spots, but it does have some truly moving and > profound moments, in my opinion. I fully recommend > it, > overall. 
> > Sincerely,
> >
> > Jeffrey Herrlich
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From hibbert at mydruthers.com Sat Jun 23 20:12:45 2007
From: hibbert at mydruthers.com (Chris Hibbert)
Date: Sat, 23 Jun 2007 13:12:45 -0700
Subject: [ExI] gay bomb again
In-Reply-To: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com>
References: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com>
Message-ID: <467D7EBD.5060501@mydruthers.com>

> This caused me to wonder about Gene's notion of finding a behavior-modifying
> chemical that spans orders of magnitude in the ratio of first effects to
> fatal toxicity. "Grass" smokers have always appeared to me as peaceful
> types, not wanting to "rumble" as do some devourers of alcohol. Yet I have
> never heard of anyone perishing by overdosing on "maryjane" or "weed". Can
> that happen? Anyone know?

I thought the authoritative position was that there isn't a lethal dose
of marijuana, but a bit of googling set me straight. The ratio between
the effective dose and a lethal dose might be as high as 1:1,000,000.
(I also saw estimates of 1:20K and 1:40K.) But it appears to be agreed
that there are zero cases on record of human deaths from overdosing on
marijuana. The million-to-one ratio occurs in small rodents due to
profound central nervous system depression.

Chris
--
I think that, for babies, every day is first love in Paris. Every
wobbly step is skydiving, every game of hide and seek is Einstein
in 1905.--Alison Gopnik (http://edge.org/q2005/q05_9.html#gopnik)

Chris Hibbert
hibbert at mydruthers.com
Blog: http://pancrit.org

From pharos at gmail.com Sat Jun 23 20:49:33 2007
From: pharos at gmail.com (BillK)
Date: Sat, 23 Jun 2007 21:49:33 +0100
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <203020.59363.qm@web37405.mail.mud.yahoo.com>
References: <838541.44619.qm@web37414.mail.mud.yahoo.com>
	<203020.59363.qm@web37405.mail.mud.yahoo.com>
Message-ID:

On 6/23/07, A B wrote:
> Whoops. I meant to write: original Director's Cut of
> "Blade Runner". Just had to nitpick. Some great
> suggestions from people, and I haven't even seen many
> of them. Let's try to keep it going, if we can. :-)
> I'd like to see as many arguably ~H+ movies as are out
> there. I need to relax for a couple weeks or so, and
> I'm probably not the only one. :-)

'Dark City' (1998) made a big impression on me. I watch it again every
time it comes round on TV. It is a film that rewards repeat viewing as
there is so much in it; you see something more every time. Don't read too
much before seeing it; you'll enjoy it more with a fresh outlook.

Dark City (2005)
BY ROGER EBERT / November 6, 2005

"Dark City" by Alex Proyas resembles its great silent predecessor
"Metropolis" in asking what it is that makes us human, and why it cannot
be changed by decree. If we are the sum of all that has happened to us,
then what are we when nothing has happened to us?
------------

Should be of interest to the identity thread addicts.
;)

BillK

From fauxever at sprynet.com Sat Jun 23 21:02:52 2007
From: fauxever at sprynet.com (Olga Bourlin)
Date: Sat, 23 Jun 2007 21:02:52 -0000
Subject: [ExI] gay bomb again
References: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com>
Message-ID: <002501c7cd6b$16a14150$6501a8c0@brainiac>

From: "spike"
To: "'ExI chat list'"
Sent: Saturday, June 23, 2007 12:12 PM

> A marijuana incinerator would be easy to make, easy to deploy on a
> battlefield, non-lethal, non-harmful to the local fauna, and might cause
> those extremist Episcopalians and Presbyterians to stop slaying each
> other.
> If sufficiently overdosed, perhaps they would begin to sing John Lennon's
> Imagine.

Oh, how I wish (and this is not even counting the dead and wounded among
Iraq's population):

"... These are America's war wounded, a toll that has received less
attention than the 3,500 troops killed in Iraq. Depending on how you count
them, they number between 35,000 and 53,000...":

http://seattletimes.nwsource.com/APWires/headlines/D8PUM3R00.html

Olga

From spike66 at comcast.net Sat Jun 23 23:08:13 2007
From: spike66 at comcast.net (spike)
Date: Sat, 23 Jun 2007 16:08:13 -0700
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To:
Message-ID: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>

Please would someone compile these H+ movie choices? Is there a website to
put them so as to be accessible next time we want to rent a movie?

spike

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of BillK
> Sent: Saturday, June 23, 2007 1:50 PM
> To: ExI chat list
> Subject: Re: [ExI] Favorite ~H+ Movies
>
> On 6/23/07, A B wrote:
> > Whoops. I meant to write: original Director's Cut of
> > "Blade Runner". Just had to nitpick. Some great
...
>
> 'Dark City' (1998) made a big impression on me...
> BillK

From emohamad at gmail.com Sat Jun 23 23:22:30 2007
From: emohamad at gmail.com (Elaa Mohamad)
Date: Sun, 24 Jun 2007 01:22:30 +0200
Subject: [ExI] Favorite ~H+ Movies
Message-ID: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>

Favorite H+ movies:

Fight Club: http://www.imdb.com/title/tt0137523/
I know a lot of people will need a microscope to find H+ in it, but
Tyler's dogma about the future of humanity is excellent! Also, the
book gives a better look at the arguments behind the philosophy.

I wonder why nobody mentioned The Matrix so far? Or did I miss it?

One I really liked, because it introduced me to some ideas I hadn't heard
anywhere before, is Code 46 - http://www.imdb.com/title/tt0345061/

Another one - Immortel (Immortal) - http://www.imdb.com/title/tt0314063/
I particularly liked the discourses about eugenics and the potential loss
of humanity/self.

Out of all anime, definitely the whole Ghost in the Shell series and
movies.

Hey, can anyone suggest some decent documentaries? I recently came across
the "2057" Discovery series. It's a little on the Hollywood side, but it
can pass... And Michio Kaku should be an action hero. Seriously.
http://dsc.discovery.com/convergence/2057/about/about.html

Google video:
The Body
http://video.google.com/videoplay?docid=1537644238897941086
The World
http://video.google.com/videoplay?docid=-7582986795752940587
The City
http://video.google.com/videoplay?docid=-6979560348655713314

Elaa Mohamad

From emohamad at gmail.com Sat Jun 23 23:26:40 2007
From: emohamad at gmail.com (Elaa Mohamad)
Date: Sun, 24 Jun 2007 01:26:40 +0200
Subject: [ExI] Favorite ~H+ Movies
Message-ID: <24f36f410706231626i4513ad81gbd13751d143f507@mail.gmail.com>

I just thought of another one that is pretty good.

Equilibrium - http://www.imdb.com/title/tt0238380/

From alex at ramonsky.com Sat Jun 23 23:42:34 2007
From: alex at ramonsky.com (Alex Ramonsky)
Date: Sun, 24 Jun 2007 00:42:34 +0100
Subject: [ExI] Happy Solstice!
References: <29758688.1182620270383.JavaMail.root@ps10>
Message-ID: <467DAFEA.6040503@ramonsky.com>

Great! Thanks loads for this : )
It's actually where the confusion began, as I lived in Ireland for over a
decade, and the older generation claimed that summer began "around" May
4th - I believe this is a 'cross quarter day', in Ireland known as
Bealtaine (Irish for May), which moves about (as does the solstice), so
there was no set date and May 1st was celebrated as being "close enough
for folk music" : )
When I got back to the UK I was surprised at the 'Solstice' choice for
summer's beginning because before I left (1984) I'd never heard it said!
Best,
AR
***********

scerir at tiscali.it wrote:

>alex:
>My question is, why is it _called_ "midsummer solstice" in cultures
>that
>believe it's the beginning of summer?
>
>The link below says something about that
>http://www.straightdope.com/classics/a1_170b.html
>
>

From russell.wallace at gmail.com Sun Jun 24 00:22:43 2007
From: russell.wallace at gmail.com (Russell Wallace)
Date: Sun, 24 Jun 2007 01:22:43 +0100
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>
References: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>
Message-ID: <8d71341e0706231722r6d52a782vcea9c95968c59ed3@mail.gmail.com>

On 6/24/07, Elaa Mohamad wrote:
>
> I wonder why nobody mentioned The Matrix so far? Or did I miss it?

Well, the criteria I've been using are portraying technology in a positive
light, and - more important, sufficiently so to get a work at least an
honorable mention by itself - the use of rational thought for solving
problems. In The Matrix, the machines are evil and the way to solve
problems is with a form of watered-down mysticism. I'm not saying it's a
bad movie - it's good for what it is, a martial arts flick with cool
special effects - but I wouldn't recommend it for the philosophy.

Here's another one I just remembered: Ghostbusters.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From russell.wallace at gmail.com Sun Jun 24 00:24:48 2007
From: russell.wallace at gmail.com (Russell Wallace)
Date: Sun, 24 Jun 2007 01:24:48 +0100
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>
References: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>
Message-ID: <8d71341e0706231724l1e3aaa4ex1c21a1883f336353@mail.gmail.com>

On 6/24/07, Elaa Mohamad wrote:
>
> Hey, can anyone suggest some decent documentaries?

Cosmos, by the late Carl Sagan.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From russell.wallace at gmail.com Sun Jun 24 00:36:09 2007
From: russell.wallace at gmail.com (Russell Wallace)
Date: Sun, 24 Jun 2007 01:36:09 +0100
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <8d71341e0706231722r6d52a782vcea9c95968c59ed3@mail.gmail.com>
References: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com>
	<8d71341e0706231722r6d52a782vcea9c95968c59ed3@mail.gmail.com>
Message-ID: <8d71341e0706231736n4cb2ef2pfcc85aeec422f166@mail.gmail.com>

On 6/24/07, Russell Wallace wrote:
>
> Well, the criteria I've been using are portraying technology in a positive
> light, and - more important, sufficiently so to get a work at least an
> honorable mention by itself - the use of rational thought for solving
> problems.
>

There's another criterion that I haven't quite articulated yet; I'm not
sure exactly what to call it. Anti-fatalism? The attitude of "Screw that,
there is no fate. If the forecast is that we're doomed, then we'll just
have to find a way to invalidate it."

It's a hard one to pin down, because in a sense a lot of stories involve
the heroes going up against opposition that's on paper stronger than them.
Often they win by sheer determination, which is good, but not quite what
I'm looking for in this regard. Sometimes they win by resorting to
mysticism or suchlike, which is definitely not what I'm looking for.

"The mindset/model of reality that said we couldn't succeed is incomplete,
and here's how I'm going to prove it" might be another way to describe the
attitude I have in mind.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brent.allsop at comcast.net Sun Jun 24 00:57:32 2007
From: brent.allsop at comcast.net (Brent Allsop)
Date: Sat, 23 Jun 2007 18:57:32 -0600
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
Message-ID: <467DC17C.2060706@comcast.net>

It's about time somebody asked for this. It is such a shame to have you
all put so much effort into expressing all this great information, only to
have it all buried in the ExI list archive. And it sure is extremely hard
to keep up with all this voluminous and repetitive information, right? It
certainly could be better organized and compiled with much less effort,
don't you think? Why isn't there a wiki yet so POV information and lists
like this can be collaboratively developed and organized? What good is the
POV of some single "movie critic"? Or even a thousand "testimonials" on a
movie?

Things like this are precisely why we're developing the Canonizer. I've
created a topic for this in the Canonizer here:

http://test.canonizer.com/topic.asp?topic_num=20

I've added two of my current favorite movies, including some crude starts
to some camp statements. I hope that if anyone agrees, they will join our
camp, moving it up the list, and help out a bit to improve the camp
statements.

And I hope all of you who have expressed your POV here about what you like
and why will cut and paste this information to a statement in this
Canonizer topic so it is not lost in the archive. What, do you expect
everyone to rummage through the ExI archives to read all of your
testimonials whenever they want to know what you like before renting a
movie?

Brent Allsop

spike wrote:
>
> Please would someone compile these H+ movie choices? Is there a website to
> put them so as to be accessible next time we want to rent a movie?
> > spike > > > > >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- >> bounces at lists.extropy.org] On Behalf Of BillK >> Sent: Saturday, June 23, 2007 1:50 PM >> To: ExI chat list >> Subject: Re: [ExI] Favorite ~H+ Movies >> >> On 6/23/07, A B wrote: >> >>> Whoops. I meant to write: original Director's Cut of >>> "Blade Runner". Just had to nitpick. Some great >>> > ... > >> 'Dark City' 1998, made a big impression on me... >> BillK >> > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emlynoregan at gmail.com Sun Jun 24 01:31:45 2007 From: emlynoregan at gmail.com (Emlyn) Date: Sun, 24 Jun 2007 11:01:45 +0930 Subject: [ExI] gay bomb again In-Reply-To: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com> References: <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> <200706231912.l5NJCBgj022101@andromeda.ziaspace.com> Message-ID: <710b78fc0706231831h2bdb604eg170f33a2ce9eccf0@mail.gmail.com> On 24/06/07, spike wrote: > A marijuana incinerator would be easy to make, easy to deploy on a > battlefield, non-lethal, non-harmful to the local fauna, and might cause > those extremist Episcopalians and Presbyterians to stop slaying each other. > If sufficiently overdosed, perhaps they would begin to sing John Lennon's > Imagine. They already do sing Imagine... kinda.... http://thoughts-and-faith-to-share.blogspot.com/ Imagine there's a Heaven / It's easy if you try / A hell below us / Above us Holy sky / Imagine all the people / Living for God's way ~ Imagine there's no hatred / It isn't hard to do / No cause to kill or die for / And one religion too / Imagine all the people / Living in Christ's peace. ~ You may say I'm a dreamer / But I'm not the only one / I hope someday you'll join us / And the world believes as one Imagine shared possessions / I wonder if you can / No deeds of greed, no hunger / A brotherhood of man / Imagine all the people / Sharing all the world ~ You may say I'm a dreamer / But I'm not the only one / I hope someday you'll join us / And the world will Love as one Jesus H Christ! Emlyn From stathisp at gmail.com Sun Jun 24 02:04:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jun 2007 12:04:37 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On 24/06/07, gts wrote: > Your will is your essence at any given moment. It is not a "model", nor is > it an "abstract entity". Your will is perhaps the only concrete, > non-abstract thing in all the world. You know nothing more intimately than > you know your own will. When you want to eat, you want to eat; when you > want to sleep, you want to sleep! Nothing abstract about it! This is what > I meant when I wrote that the will is primitive in the sense meant by > Nietzsche and Schopenhauer. Will, essence, agency, identity: what you are looking for is a term that explains why two near-identical copies might yet regard each other as "other". But how should they regard their future selves? 
If I am separated from a near-identical copy by metres, my selfish
concerns might put me in direct conflict with him, whereas if I am
separated from him by minutes, my selfish concerns are the same as his
selfish concerns. For example, I might pay a certain amount so that my
future self avoids an unpleasant experience, but not pay that amount so
that my copy in the next room is spared the same experience. The
difference is due to the fact that I anticipate the experiences of the
future copy, or equivalently expect to "become" the future copy, but not
the present copy. If there are several candidate future copies (in the MWI
of QM or in duplication experiments) then I anticipate their experiences
weighted according to their relative numbers. Once I have "become" one of
them, I no longer (selfishly) care what happens to the rest even though a
moment ago I considered that all of them had an equal claim to be "me".

This may seem like a convoluted way to look at personal identity, but it
is the way our brains work. If we had evolved in an environment where
copying was commonplace, our brains may well have developed something akin
to Lee's simpler theory of selfishly regarding all copies as selves in
proportion to their level of similarity.

--
Stathis Papaioannou

From thespike at satx.rr.com Sun Jun 24 02:16:50 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Sat, 23 Jun 2007 21:16:50 -0500
Subject: [ExI] camp statements
In-Reply-To: <467DC17C.2060706@comcast.net>
References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
	<467DC17C.2060706@comcast.net>
Message-ID: <7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com>

At 06:57 PM 6/23/2007 -0600, Brent wrote:

>I've added two of my current favorite movies, including some crude
>starts to some camp statements. I hope that if anyone agrees, they
>will join our camp, moving it up the list, and help out a bit to
>improve the camp statements.

You might want to reconsider that term, or risk having people snigger at you.

wikipedia:

Camp is an aesthetic in which something has appeal because of its bad
taste or ironic value.

A part of the anti-academic defense of popular culture in the sixties,
camp came to popularity in the eighties with the widespread adoption of
Postmodern views on art and culture.

"Camp" is derived from the French slang term se camper, which means "to
pose in an exaggerated fashion." The OED gives 1909 as the first citation
of "camp" in print, with the sense of "ostentatious, exaggerated,
affected, theatrical; effeminate or homosexual; pertaining to or
characteristic of homosexuals. So as n., 'camp' behaviour, mannerisms,
etc. (see quot. 1909); a man exhibiting such behaviour." According to the
OED, this sense of the word is "etymologically obscure."

According to writer and queer theorist Samuel R. Delany, the term "a camp"
originally developed from the practice of female impersonators and other
prostitutes following military encampments to service the soldiers. Later,
it evolved into a more general description of the aesthetic choices and
behavior of working class gay men. Finally, it was brought into mainstream
use (and transformed into an adjective) by Susan Sontag in her landmark
essay (see below).
[etc etc]

From avantguardian2020 at yahoo.com Sun Jun 24 03:16:29 2007
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Sat, 23 Jun 2007 20:16:29 -0700 (PDT)
Subject: [ExI] will as essence
In-Reply-To: <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com>
Message-ID: <916826.99376.qm@web60512.mail.yahoo.com>

--- Damien Broderick wrote:

> Especially since Benjamin Libet and others showed
> that the conscious
> experience of willing an act lags by some 200 to 500
> millisecs the
> brain's activation of the relevant volitional
> processes or "readiness
> potential". Even granting that willing is a real
> experience, and
> sometimes very vivid, it can't be "your essence"
> unless your essence
> is to be a machine that only belatedly notices what
> it's up to. (Many
> here would agree with that, of course.)

Unless your will originates in a future Everett branch and retrocausally
pulls your physical body into it. Otherwise, I don't see how a
belated-awareness-machine could accomplish long-term tasks requiring
telesis, such as writing a novel, building a house, or painting the
Sistine Chapel. To me, the volition starts in the envisioned future. For
example, if, let's say, you are thirsty, then it is not difficult to see
that the future you that has slaked its thirst on a tall frosty mug of ale
might be somehow signaling his satisfaction to your present self. Happy
future you may even be competing with disgruntled future you for your
present you's decision as to whom to bring about. Or perhaps the future
you that did not get himself a tall frosty one is signaling his regret to
you synergistically with happy future you's contentment.

My point being that in a four-dimensional space-time continuum a la
Einstein, replete with bizarre effects like consciousness, EPR, and
spontaneous emergent complexity, it is perfectly reasonable to think that
a system is able to signal itself backwards in time. It makes it easy to
find the most thermodynamically stable state. It violates no laws of
physics, because it is not one particle signaling another faster than
light, it is particles communicating with themselves over the time
dimension. String theory makes it easier to envision, as the particles can
be seen as strings with vibrational modes in the time-dimension. There is
no reason to assume that vibrations along a string would favor one
direction over another.

Empirical evidence for this is shown by the Protein Folding problem.
Briefly, this is a very old puzzle in biochemistry as to how newly
translated proteins are able to fold so efficiently into their functional
forms. Several decades ago a noted biochemist (Ramachandran?) demonstrated
that the number of possible configurations for the bond angles in any
moderate-sized protein was so astronomically high that even trying
hundreds of configurations per second, it would still take a protein
molecule on average billions of years to find the correct configuration.
Yet almost every time in the real world, it folds nearly instantaneously
into the correct form. Like it knows exactly what it is supposed to look
like and folds directly to that form. If the properly folded protein is
signaling the correct configuration backwards in time to its unfolded
state, then this no longer seems so mysterious. And no photons are needed
because the atoms are only "talking to themselves".

So have you gotten yourself that ale yet? :-)

Stuart LaForge
alt email: stuart"AT"ucla.edu

"When an old man dies, an entire library is destroyed."
- Ugandan proverb
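[The combinatorial blowup Stuart describes is easy to reproduce. Below is
a back-of-the-envelope sketch in Python of the Levinthal-style estimate;
the three-conformations-per-residue count, the 100-residue chain, and the
sampling rate are conventional illustrative assumptions, not figures from
his post.]

# Levinthal-style estimate: why blind search cannot explain folding speed.
states_per_residue = 3   # assumed coarse count of backbone conformations
residues = 100           # a "moderate-sized" protein

total_states = states_per_residue ** residues  # 3**100, about 5.2e47

samples_per_second = 1e12      # a deliberately generous sampling rate
seconds_per_year = 3.156e7

years = total_states / samples_per_second / seconds_per_year
print("conformations: %.1e" % total_states)      # ~5.2e47
print("years to search them all: %.1e" % years)  # ~1.6e28

[Real proteins fold in microseconds to seconds, which is the puzzle. As
Stathis notes in his reply below, folding settles into a trough in the
energy landscape rather than enumerating conformations at random, so the
estimate bounds the wrong search algorithm.]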
From mmbutler at gmail.com Sun Jun 24 03:51:07 2007
From: mmbutler at gmail.com (Michael M. Butler)
Date: Sat, 23 Jun 2007 20:51:07 -0700
Subject: [ExI] camp statements
In-Reply-To: <7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com>
References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
	<467DC17C.2060706@comcast.net>
	<7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com>
Message-ID: <7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com>

"Camp" was also the name of a company that produced foundation garments
such as long-line girdles, appropriate for shaping transvestite males
(possibly with padding added).

The etymology is odd, and I think Chip Delany's was mostly folk. "Se
camper" sounds good but I'd want to see more substantiation...

On 6/23/07, Damien Broderick wrote:
> At 06:57 PM 6/23/2007 -0600, Brent wrote:
>
> >I've added two of my current favorite movies, including some crude
> >starts to some camp statements. I hope that if anyone agrees, they
> >will join our camp, moving it up the list, and help out a bit to
> >improve the camp statements.
>
> You might want to reconsider that term, or risk having people snigger at you.
>
> wikipedia:
>
> Camp is an aesthetic in which something has appeal because of its bad
> taste or ironic value.
>
> A part of the anti-academic defense of popular culture in the
> sixties, camp came to popularity in the eighties with the widespread
> adoption of Postmodern views on art and culture.
>
> "Camp" is derived from the French slang term se camper, which means
> "to pose in an exaggerated fashion." The OED gives 1909 as the first
> citation of "camp" in print, with the sense of "ostentatious,
> exaggerated, affected, theatrical; effeminate or homosexual;
> pertaining to or characteristic of homosexuals. So as n., 'camp'
> behaviour, mannerisms, etc. (see quot. 1909); a man exhibiting such
> behaviour." According to the OED, this sense of the word is
> "etymologically obscure."
>
> According to writer and queer theorist Samuel R. Delany, the term "a
> camp" originally developed from the practice of female impersonators
> and other prostitutes following military encampments to service the
> soldiers. Later, it evolved into a more general description of the
> aesthetic choices and behavior of working class gay men. Finally, it
> was brought into mainstream use (and transformed into an adjective)
> by Susan Sontag in her landmark essay (see below). [etc etc]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

--
Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m
"Piss off, you son of a bitch. Everything above where that plane hit is
going to collapse, and it's going to take the whole building with it. I'm
getting my people the fuck out of here."
  -- Rick Rescorla (R.I.P.), cell phone call, 9/11/2001

From avantguardian2020 at yahoo.com Sun Jun 24 03:32:33 2007
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Sat, 23 Jun 2007 20:32:33 -0700 (PDT)
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <24f36f410706231626i4513ad81gbd13751d143f507@mail.gmail.com>
Message-ID: <50414.98726.qm@web60515.mail.yahoo.com>

--- Elaa Mohamad wrote:
> I just thought of another one that is pretty good.
> Equilibrium - http://www.imdb.com/title/tt0238380/
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

Hi Mohamad,

I was wondering if you have ever seen "Renaissance: Paris 2054". It
explores the writer's envisioned conflict between transhumanism and Islam
against the backdrop of Paris in the year 2054, while being an
entertaining film. I thought it might be relevant considering some of the
discussion of Islam and H+ recently.

Stuart LaForge
alt email: stuart"AT"ucla.edu

"When an old man dies, an entire library is destroyed."
- Ugandan proverb

From brent.allsop at comcast.net Sun Jun 24 04:16:45 2007
From: brent.allsop at comcast.net (Brent Allsop)
Date: Sat, 23 Jun 2007 22:16:45 -0600
Subject: [ExI] camp statements
In-Reply-To: <7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com>
References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
	<467DC17C.2060706@comcast.net>
	<7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com>
	<7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com>
Message-ID: <467DF02D.6080506@comcast.net>

Michael M. Butler wrote:
> "Camp" was also the name of a company that produced foundation
> garments such as long-line girdles, appropriate for shaping
> transvestite males (possibly with padding added).
>
> The etymology is odd, and I think Chip Delany's was mostly folk. "Se
> camper" sounds good but I'd want to see more substantiation...
>

Yes exactly. I've been intently paying attention to the usage of this term
"camp", as in "I'm in your camp" on that belief or POV, for many years
now, and would have caught a usage like this instantly if I had heard it
used anywhere. But I have never heard anything like this anywhere, so I'm
betting it is at least somewhat localized. I'd bet there isn't too much
popular "support" for this wikipedia article outside of certain locales.

That is going to be one of the great things about a Canonized wikipedia.
You'll know quantitatively just how reputable and common various
information like this is. This will include "Canonizing" for how much
support there is in particular locales or groups like academics,
non-gays, and so on (simply by choosing something like an academic
Canonizer or whatever to see how academics canonize such). With Wikipedia,
you have no idea.

Brent Allsop

> On 6/23/07, Damien Broderick wrote:
>
>> At 06:57 PM 6/23/2007 -0600, Brent wrote:
>>
>>> I've added two of my current favorite movies, including some crude
>>> starts to some camp statements. I hope that if anyone agrees, they
>>> will join our camp, moving it up the list, and help out a bit to
>>> improve the camp statements.
>>>
>> You might want to reconsider that term, or risk having people snigger at you.
>>
>> wikipedia:
>>
>> Camp is an aesthetic in which something has appeal because of its bad
>> taste or ironic value.
>>
>> A part of the anti-academic defense of popular culture in the
>> sixties, camp came to popularity in the eighties with the widespread
>> adoption of Postmodern views on art and culture.
>>
>> "Camp" is derived from the French slang term se camper, which means
>> "to pose in an exaggerated fashion."
The OED gives 1909 as the first >> citation of "camp" in print, with the sense of "ostentatious, >> exaggerated, affected, theatrical; effeminate or homosexual; >> pertaining to or characteristic of homosexuals. So as n., 'camp' >> behaviour, mannerisms, etc. (see quot. 1909); a man exhibiting such >> behaviour." According to the OED, this sense of the word is >> "etymologically obscure." >> >> According to writer and queer theorist Samuel R. Delany, the term "a >> camp" originally developed from the practice of female impersonators >> and other prostitutes following military encampments to service the >> soldiers. Later, it evolved into a more general description of the >> aesthetic choices and behavior of working class gay men. Finally, it >> was brought into mainstream use (and transformed into an adjective) >> by Susan Sontag in her landmark essay (see below). [etc etc] >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 24 04:33:47 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 24 Jun 2007 14:33:47 +1000 Subject: [ExI] will as essence In-Reply-To: <916826.99376.qm@web60512.mail.yahoo.com> References: <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com> <916826.99376.qm@web60512.mail.yahoo.com> Message-ID: On 24/06/07, The Avantguardian wrote: > Empirical evidence for this is shown by the Protein > Folding problem. Briefly this is a very old puzzle in > biochemistry as to how newly translated proteins are > able to fold so efficiently into their functional > forms. Several decades ago a noted biochemist > (Ramachandran?) demonstrated that the number of > possible configurations for the bond angles in any > moderate sized protein were so astronomically high > that even trying hundreds of configurations per > second, it would still take a protein molecule on > average billions of years to find the correct > configuration. You are probably referring to Levinthal's Paradox, from 1968: http://en.wikipedia.org/wiki/Levinthal_paradox http://www.sdsc.edu/~nair/levinthal.html It's not really a paradox because obviously proteins do fold into a stable configuration in a particular environment, representing a trough in electrostatic potential energy between the atoms. This configuration should be deducible from the protein sequence, but nature is not under any obligation to make it easy for us to compute it. -- Stathis Papaioannou From msd001 at gmail.com Sun Jun 24 05:01:09 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 24 Jun 2007 01:01:09 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com> References: <24f36f410706231622w23c490d7re80ffd90a44dd31b@mail.gmail.com> Message-ID: <62c14240706232201q65faca4fn19a4acb7e10eb887@mail.gmail.com> On 6/23/07, Elaa Mohamad wrote: > Fight club: http://www.imdb.com/title/tt0137523/ > I know a lot of people will need a microscope to find H+ in it, but > Tyler's dogma about the future of humanity is excellent! Also, the > book gives a better look at the arguments behind the philosophy. I was also going to mention Fight Club. That one person was able to create a movement and carry a (lofty) ideal to fruition is an interesting thread for H+. > I wonder why nobody mentioned The Matrix so far? Or did I miss it? 
I started to, but couldn't express myself clearly enough to make it
understood why I thought it should be a candidate. I was thinking along
the lines of post-uploaded humanity being potentially parallel to the
simulation world. Why have concern for the depressing state of 'reality'
if a thoroughly immersive alternative is more pleasant?

From msd001 at gmail.com Sun Jun 24 05:09:22 2007
From: msd001 at gmail.com (Mike Dougherty)
Date: Sun, 24 Jun 2007 01:09:22 -0400
Subject: [ExI] Favorite ~H+ Movies.
In-Reply-To: <000e01c7cd56$7a71fa40$6501a8c0@brainiac>
References: <838541.44619.qm@web37414.mail.mud.yahoo.com>
	<02d601c7b5bc$10a9efc0$030a4e0c@MyComputer>
	<000e01c7cd56$7a71fa40$6501a8c0@brainiac>
Message-ID: <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com>

The Island

http://en.wikipedia.org/wiki/The_Island_(2005_film)

Perhaps not the best movie ever, but worth having seen.

From joseph at josephbloch.com Sun Jun 24 05:21:13 2007
From: joseph at josephbloch.com (Joseph Bloch)
Date: Sun, 24 Jun 2007 01:21:13 -0400
Subject: [ExI] Favorite ~H+ Movies.
In-Reply-To: <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com>
References: <838541.44619.qm@web37414.mail.mud.yahoo.com>
	<02d601c7b5bc$10a9efc0$030a4e0c@MyComputer>
	<000e01c7cd56$7a71fa40$6501a8c0@brainiac>
	<62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com>
Message-ID: <04c801c7b61f$7c658d60$6400a8c0@hypotenuse.com>

Bah... cheesy remake of a cheesy original. ("The Clonus Horror"). ;-)

Joseph
http://www.josephbloch.com

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-
> bounces at lists.extropy.org] On Behalf Of Mike Dougherty
> Sent: Sunday, June 24, 2007 1:09 AM
> To: ExI chat list
> Subject: Re: [ExI] Favorite ~H+ Movies.
>
> The Island
>
> http://en.wikipedia.org/wiki/The_Island_(2005_film)
>
> Perhaps not the best movie ever, but worth having seen.
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From sentience at pobox.com Sun Jun 24 05:25:04 2007
From: sentience at pobox.com (Eliezer S. Yudkowsky)
Date: Sat, 23 Jun 2007 22:25:04 -0700
Subject: [ExI] Favorite ~H+ Movies
In-Reply-To: <928867.8338.qm@web60523.mail.yahoo.com>
References: <928867.8338.qm@web60523.mail.yahoo.com>
Message-ID: <467E0030.4090408@pobox.com>

The Avantguardian wrote:
>
> I also liked "Vanilla Sky" despite the presence of
> Tommy boy. I thought it was a well-written movie that
> struck me as sort of an H+ trojan horse. If you don't
> know what I mean, I don't want to spoil it for you. If
> you can't stand Tom Cruise then see "Abre los Ojos",
> the original Spanish movie of which "Vanilla Sky" is
> an English remake. Penelope Cruz plays the same role
> in both.

Not the same role. Also, Abre Los Ojos is *considerably better* than
Vanilla Sky. My recommendation is that you only watch that one.

-- Eliezer S.
Yudkowsky
http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

From thespike at satx.rr.com Sun Jun 24 05:33:09 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Sun, 24 Jun 2007 00:33:09 -0500
Subject: [ExI] camp statements
In-Reply-To: <467DF02D.6080506@comcast.net>
References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com>
	<467DC17C.2060706@comcast.net>
	<7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com>
	<7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com>
	<467DF02D.6080506@comcast.net>
Message-ID: <7.0.1.0.2.20070624002813.023b8938@satx.rr.com>

At 10:16 PM 6/23/2007 -0600, Brent wrote:

>I've been intently paying attention to the usage of this term "camp"
>as in "I'm in your camp" on that belief or POV for many years now,
>and would have caught a usage like this instantly if I had heard it
>used anywhere. But I have never heard anything like this anywhere so
>I'm betting it is at least somewhat localized. I'd bet there isn't
>too much popular "support" for this wikipedia article outside of
>certain locales.

Don't just guess. This is the information age, comrade. Try checking
a standard dictionary. Merriam-Webster, say (unless you suspect it's
been got at by certain locals):

Main Entry:3camp
Function:noun
Etymology:origin unknown
1 : exaggerated effeminate mannerisms exhibited especially by homosexuals
2 a : something so outrageously artificial, affected, inappropriate,
or out-of-date as to be considered amusing  b : a style or mode of
personal or creative expression that is absurdly exaggerated and
often fuses elements of high and popular culture

Main Entry:4camp
Function:adjective
: of, relating to, being, or displaying camp

Main Entry:5camp
Function:intransitive verb
: to engage in camp : exhibit the qualities of camp

But hey, whatever.

Damien Broderick

From amara at amara.com Sun Jun 24 08:35:00 2007
From: amara at amara.com (Amara Graps)
Date: Sun, 24 Jun 2007 10:35:00 +0200
Subject: [ExI] The Meaning of Tingo (book review)
Message-ID:

Hi Folks,

A fine book review in The Economist added another book to my already
extensive reading queue:

_The Meaning of Tingo and Other Extraordinary Words from Around the World_
by Adam Jacot de Boinod

Since the Economist book review is 'subscribers only' (September 24, 2005
if you're a subscriber), I'll just post some snippets.

--------Snippets of Review

[...]
Adam Jacot de Boinod, a BBC researcher, has sifted through more than 2m
words in 280 dictionaries and 140 websites to discover that Albanians have
27 words for moustache - including mustaqe madh for bushy and mustaqe
posht for one which droops down at both ends - that gin is Phrygian for
drying out, that the Dutch say plimpplamppletteren when they are skimming
stones and that instead of snap, crackle, pop, Rice Krispies in the
Netherlands go Knisper! Knasper! Knusper!

[...]
It is not so much the languages that have two dozen words for snow, say,
or horse or walrus carcass that impress the most, but those that draw
differences between the seemingly indistinguishable. Italian, as one would
imagine, is particularly good on male vanity, and French on love as a
business. The richness of Yiddish for insults seems to be matched only by
the many and varied Japanese words for the deep joy that can come as a
response to beauty and the German varieties of sadness and disappointment.
Words for work, money, sex, death and horrible personal habits may well
tell you more about national attitudes than anything else.
Why would Russian have a special word, koshatnik, for someone who deals in
stolen cats and Turkish another, cigerci, for a seller of liver and lungs,
or Central American Spanish a particular name, aviador, for a government
employee who shows up only on payday?

Old jokes are often the best jokes, and many of the most amusing examples
are of terrible errors that can be made in different languages: there is
fart (Turkish for talking nonsense), buzz (Arabic for nipple), sofa
(Icelandic for sleep), shagit (Albanian for crawling on your belly), jam
(Mongolian for road), nob (Wolof for love), dad (Albanian for babysitter),
loo (Fulani for a storage pot), babe (SisSwati for a government minister),
slug (Gaulish for servant), flab (Gaelic for a mushroom) and moron (Welsh
for carrot).
--------

I love this kind of cultural microscope, especially as a literal person
for whom words carry added weight and meaning.

One LARGE CAVEAT is that the author did not practice rigor, so there are
apparently many errors, as the amazon.com reviews below indicate. It's
certainly good for an eye-opening, conversation-starting bit of fun, but
if you are going to depend on the word for real life, you should probably
confirm the word's accuracy.

Amara

Amazon's reviews: http://www.amazon.com/gp/product/0143038524/

--
Amara Graps, PhD
www.amara.com
Associate Research Scientist, Planetary Science Institute (PSI), Tucson
INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia

From eugen at leitl.org Sun Jun 24 09:32:35 2007
From: eugen at leitl.org (Eugen Leitl)
Date: Sun, 24 Jun 2007 11:32:35 +0200
Subject: [ExI] gay bomb again
In-Reply-To: <467D7EBD.5060501@mydruthers.com>
References: <200706231912.l5NJCBgj022101@andromeda.ziaspace.com>
	<467D7EBD.5060501@mydruthers.com>
Message-ID: <20070624093235.GE17691@leitl.org>

On Sat, Jun 23, 2007 at 01:12:45PM -0700, Chris Hibbert wrote:

> > This caused me to wonder about Gene's notion of finding a behavior-modifying
> > chemical that spans orders of magnitude in the ratio of first effects to
> > fatal toxicity. "Grass" smokers have always appeared to me as peaceful
> > types, not wanting to "rumble" as do some devourers of alcohol. Yet I have
> > never heard of anyone perishing by overdosing on "maryjane" or "weed". Can
> > that happen? Anyone know?
>
> I thought the authoritative position was that there isn't a lethal dose
> of marijuana, but a bit of googling set me straight. The ratio between
> the effective dose and a lethal dose might be as high as 1:1,000,000.
> (I also saw estimates of 1:20K and 1:40K.) But it appears to be agreed
> that there are zero cases on record of human deaths from overdosing on
> marijuana. The million-to-one ratio occurs in small rodents due to
> profound central nervous system depression.

There was a case of a few cows (in the Netherlands?) dying after feasting
on a bale of cannabis.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Sun Jun 24 09:34:31 2007 From: pharos at gmail.com (BillK) Date: Sun, 24 Jun 2007 10:34:31 +0100 Subject: [ExI] camp statements In-Reply-To: <7.0.1.0.2.20070624002813.023b8938@satx.rr.com> References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com> <467DC17C.2060706@comcast.net> <7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com> <7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com> <467DF02D.6080506@comcast.net> <7.0.1.0.2.20070624002813.023b8938@satx.rr.com> Message-ID: On 6/24/07, Damien Broderick wrote: > Don't just guess. This is the information age, comrade. Try checking > a standard dictionary. Merriam-Webster, say (unless you suspect it's > been got at by certain locals): > > Main Entry:3camp > Function:noun > Etymology:origin unknown > 1 : exaggerated effeminate mannerisms exhibited especially by homosexuals > 2 a : something so outrageously artificial, affected, inappropriate, > or out-of-date as to be considered amusing b : a style or mode of > personal or creative expression that is absurdly exaggerated and > often fuses elements of high and popular culture <... celebrates camp> > > Main Entry:4camp > Function:adjective > : of, relating to, being, or displaying camp <... songs of the fifties and sixties -- John Elsom> > > Main Entry:5camp > Function:intransitive verb > : to engage in camp : exhibit the qualities of camp <... camping, hands on hips, with a quick eye to notice every man who > passed by -- R. M. McAlmon> > The Online Etymology Dictionary is usually reliable. Brent's usage is the original and still valid usage. camp (1) O.E. camp "contest," from W.Gmc. *kampo-z, early loan from L. campus "open field" (see campus), especially "open space for military exercise." Meaning "place where an army lodges temporarily" is 1528, from Fr. camp, from the same L. source. Transferred to non-military senses 1560. Meaning "body of adherents of a doctrine or cause" is 1871. The verb meaning "to encamp" is from 1543. Camp-follower first attested 1810. Camp-meeting is from 1809, usually in reference to Methodists. camp (2) "tasteless," 1909, homosexual slang, perhaps from mid-17c. Fr. camper "to portray, pose" (as in se camper "put oneself in a bold, provocative pose"); popularized 1964 by Susan Sontag's essay "Notes on Camp." The homosexual reference to 'camp' behaviour is more modern, but this usage is common nowadays. World Wide Words also comments that the original homosexual usage has moved on and, in the last fifty years or so, has acquired a more general usage, as in 'high camp'. Quote: As a side note, though camp still has close associations with the gay world, another sense has grown up in the past half-century or so. It can now mean a sophisticated and knowing type of amusement, based on something deliberately artistically unsophisticated or self-consciously exaggerated and artificial in style. It's an obvious enough extension of the older sense. Christopher Isherwood called it high camp in his novel The World in the Evening of 1954, in which he emphasised that "you're not making fun of it; you're making fun out of it". Susan Sontag famously wrote about this type in the Partisan Review in 1964; she said that the ultimate camp statement was "It's good because it's awful".
BillK From mmbutler at gmail.com Sun Jun 24 10:55:21 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Sun, 24 Jun 2007 03:55:21 -0700 Subject: [ExI] camp statements In-Reply-To: References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com> <467DC17C.2060706@comcast.net> <7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com> <7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com> <467DF02D.6080506@comcast.net> <7.0.1.0.2.20070624002813.023b8938@satx.rr.com> Message-ID: <7d79ed890706240355p29d65fd6pcc7a94683be9995c@mail.gmail.com> > The homosexual reference to 'camp' behaviour is more modern, but this > usage is common nowadays. > Common nowadays? Hmm. A personal report from the Left Coast, where I've been situated for almost 20 years now... "Camp", "camping it up" and "camping around" are all pretty much passe in everyday gay parlance AFAIK. It's got a dated ring to it -- Fifties-Seventies, peaking between Stonewall and Lou Reed's _Transformer_ album. By the time of the Village People it had dropped from view, pretty much, even in its "high"/cultural sense, with the possible exception of discussions of the prior decades along with pop art and such. Nobody in the troupe of "Hedwig and the Angry Inch", for instance, would speak of "camp" unless they were writing a history term paper. John Waters, OTOH, probably would. :) I think Damien's concern is a bit exaggerated, but not epicene in the least. :) > As a side note, though camp still has close associations with the gay > world, another sense has grown up in the past half-century or so. It > can now mean a sophisticated and knowing type of amusement, based on > something deliberately artistically unsophisticated or > self-consciously exaggerated and artificial in style. It's an obvious > enough extension of the older sense. Christopher Isherwood called it > high camp in his novel The World in the Evening of 1954, in which he > emphasised that "you're not making fun of it; you're making fun out of > it". Susan Sontag famously wrote about this type in the Partisan > Review in 1964; she said that the ultimate camp statement was "It's > good because it's awful". Even that usage (which did flourish and surpass the original subcultural meaning, even in the selfsame subculture) seems pretty much 23 Skiddoo. I'd predict that if you're under forty years old and in the Castro the expression never passes your lips except in the latter sense, and even then it's rare, being quite dated. It reminds me of some of the stuff that's still in Hoyle that real card players never say... People, regardless of orientation, put the mockers on stuff without invoking (high) camp at all. == Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m "Piss off, you son of a bitch. Everything above where that plane hit is going to collapse, and it's going to take the whole building with it. I'm getting my people the fuck out of here."
-- Rick Rescorla (R.I.P.), cell phone call, 9/11/2001 From pharos at gmail.com Sun Jun 24 12:03:37 2007 From: pharos at gmail.com (BillK) Date: Sun, 24 Jun 2007 13:03:37 +0100 Subject: [ExI] camp statements In-Reply-To: <7d79ed890706240355p29d65fd6pcc7a94683be9995c@mail.gmail.com> References: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com> <467DC17C.2060706@comcast.net> <7.0.1.0.2.20070623211238.0240f4e8@satx.rr.com> <7d79ed890706232051xdf081f3na67181ce0fa12be1@mail.gmail.com> <467DF02D.6080506@comcast.net> <7.0.1.0.2.20070624002813.023b8938@satx.rr.com> <7d79ed890706240355p29d65fd6pcc7a94683be9995c@mail.gmail.com> Message-ID: On 6/24/07, Michael M. Butler wrote: > Common nowadays? Hmm. A personal report from the Left Coast, where > I've been situated for almost 20 years now... > > "Camp", "camping it up" and "camping around" are all pretty much passe > in everyday gay parlance AFAIK. It's got a dated ring to it -- > Fifties-Seventies, peaking between Stonewall and Lou Reed's > _Transformer_ album.. By the time of the Village People it had dropped > from view, pretty much, even in its "high"/cultural sense, with the > possible exception of discussions of the prior decades along with pop > art and such. > > Nobody in the troupe of "Hedwig and the Angry Inch", for instance, > would speak of "camp" unless they were writing a history term paper. > John Waters, OTOH, probably would. :) > > I think Damien's concern is a bit exaggerated, but not epicene in the least. :) > > Even that usage (which did flourish and surpass the original > subcultural meaning, even in the selfsame subculture) seems pretty > much 23 Skiddoo. I'd predict that if you're under forty years old and > in the Castro the expression never passes your lips except in the > latter sense, and even then it's rare, being quite dated. > I think you might now be getting into the area of current slang language. This can change very quickly and can be very local to certain cultures. Etymological dictionaries concentrate on the history and origin of language. Formal dictionaries, by their nature, reference the written usage of language and must be many years behind the moving target of today's urban newspeak. Slang dictionaries try to keep more up-to-date, but they have the problem that San Francisco slang is quite different to New York slang, which in turn is different from London slang, so they tend to be more localised. For example, UK slang can be very creative: camp: Adj. An effeminate style and mannerism affected mainly by 'gays', however anyone can 'camp it up.' camp as a row of pink tents Phrs. Very 'camp' (see above), or gay. E.g. "He was as camp as a row of pink tents and wouldn't have been out of place in a Mr Gay UK competition." camp it up Verb. To overact in an affected manner. E.g."If you want to see people camping it up, walk down Canal Street in the Gay Village in Manchester on a Saturday night." Australian slang doesn't seem to use the word 'camp', but they probably have extremely colourful alternatives. :) The gaming industry has a new usage also: In Multiplayer games, the act of remaining in one spot (usually secluded) with a sniper rifle or other weapon waiting for enemy players to emerge as easy targets. Generally looked upon as a "cheap" method of gaining kills. But we are now straying far from the original comment, which was that 'camp' was probably the wrong word to use. I agree that it causes unnecessary confusions when excellent alternatives are available. 
BillK From pharos at gmail.com Sun Jun 24 13:05:18 2007 From: pharos at gmail.com (BillK) Date: Sun, 24 Jun 2007 14:05:18 +0100 Subject: [ExI] Mind Mapping on the web Message-ID: Mind mapping software gets a mention now and again on Exi. There is a new online mind mapping site from Germany that might be of interest: There is a free version (with restrictions) and a paid version. Quote: MindMeister replaces legal pads and crumpled up pieces of paper with an online workspace that can be revised and manipulated. Users can create ideas and connect them to one another, or build their own hierarchies--it's essentially a giant canvas. Users of Google Docs and Spreadsheets will feel right at home, as the tool shares similar features for versioning, autosave, and collaboration. There's also built-in Skype integration, assuming your collaborators have provided their Skype username. While there's no built-in chat, users can fire up a text or voice chat on Skype by clicking on another collaborator's name. The free version allows: Create up to 6 mind maps Share mind maps with others Simultaneously collaborate on mind maps with others Import maps from Freemind and Mindjet MindManager Export maps as RTF Export maps as image Publish maps (make them available for all) BillK From gts_2000 at yahoo.com Sun Jun 24 13:31:25 2007 From: gts_2000 at yahoo.com (gts) Date: Sun, 24 Jun 2007 09:31:25 -0400 Subject: [ExI] will as essence In-Reply-To: <916826.99376.qm@web60512.mail.yahoo.com> References: <916826.99376.qm@web60512.mail.yahoo.com> Message-ID: > ...Benjamin Libet and others showed that the conscious > experience of willing an act lags by some 200 to 500 millisecs the > brain's activation of the relevant volitional processes... The real significance here, I think (and to underscore something I wrote last night), is that this result suggests very strongly that the will does not inhere in the intellect as one might otherwise have assumed. The will and the intellect can and should be considered separately. And somewhat counter-intuitively the intellect does not drive the will; instead, the will drives the intellect, just as Schopenhauer told us some 190 years ago. Now, thanks to Damien and Libet above, we know the will drives the intellect with a ~200 to ~500 millisec lag. As I wrote to Jef in the identity thread, the will is not a model or an abstraction. The will is real and it's live and it's sitting in the driver's seat. So I hang my hat on it. -gts From jonkc at att.net Sun Jun 24 13:43:07 2007 From: jonkc at att.net (John K Clark) Date: Sun, 24 Jun 2007 09:43:07 -0400 Subject: [ExI] Favorite ~H+ Movies. References: <838541.44619.qm@web37414.mail.mud.yahoo.com><02d601c7b5bc$10a9efc0$030a4e0c@MyComputer><000e01c7cd56$7a71fa40$6501a8c0@brainiac> <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com> Message-ID: <005701c7b665$a0ad90f0$18064e0c@MyComputer> "Mike Dougherty" > The Island The first half of this movie was pretty good, the second half was not. If the movie makers had less money to spend they would have made a much better film; as it is they seemed to say to themselves, hey we still have a lot of money left over in the budget, let's throw in a car chase.
John K Clark From eugen at leitl.org Sun Jun 24 13:59:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 24 Jun 2007 15:59:57 +0200 Subject: [ExI] will as essence In-Reply-To: References: <7.0.1.0.2.20070623133259.021abcd8@satx.rr.com> <916826.99376.qm@web60512.mail.yahoo.com> Message-ID: <20070624135957.GI17691@leitl.org> On Sun, Jun 24, 2007 at 02:33:47PM +1000, Stathis Papaioannou wrote: > You are probably referring to Levinthal's Paradox, from 1968: > > http://en.wikipedia.org/wiki/Levinthal_paradox > http://www.sdsc.edu/~nair/levinthal.html > > It's not really a paradox because obviously proteins do fold into a > stable configuration in a particular environment, representing a The incorrect assumption is that all points of the configuration state space are the same. In reality we're getting a converging folding pathway funnel in the energetic landscape, with several well-defined stages along the way. A lot of the information to guide the folding stages is present in the primary sequence/wet context. > trough in electrostatic potential energy between the atoms. This > configuration should be deducible from the protein sequence, but > nature is not under any obligation to make it easy for us to compute > it. The Blue Gene/L supercomputer was originally designed to solve the Protein Folding Problem (PFP) by means of brute-force MD. The PFP is not intractable, however, and http://predictioncenter.org/ has shown a history of patchwork advances and is likely to succeed eventually, and undramatically. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
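The combinatorial arithmetic behind Levinthal's observation is easy to reproduce. A minimal sketch in Python, using the usual textbook round numbers (roughly three conformations per residue, a 100-residue chain, one conformation sampled per picosecond); all three figures are illustrative assumptions, not measurements:

conformations_per_residue = 3      # rough rotamer count per residue (assumed)
residues = 100                     # a smallish protein (assumed)
samples_per_second = 1e12          # one conformation per picosecond (assumed)
seconds_per_year = 3.156e7

states = conformations_per_residue ** residues
years = states / samples_per_second / seconds_per_year

print("conformations to search: %.2e" % states)    # ~5.15e47
print("years for a blind search: %.2e" % years)    # ~1.6e28 years

# Real proteins fold in microseconds to seconds. That gap is the point of
# the funnel picture above: the search is biased at every stage, not blind.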
From gts_2000 at yahoo.com Sun Jun 24 15:00:38 2007 From: gts_2000 at yahoo.com (gts) Date: Sun, 24 Jun 2007 11:00:38 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On Sat, 23 Jun 2007 22:04:37 -0400, Stathis Papaioannou wrote: > Will, essence, agency, identity: what you are looking for is a term > that explains why two near-identical copies might yet regard each > other as "other". But how should they regard their future selves? I think continuity of self is a convenient and necessary fiction, an illusion of sorts that we probably cannot live without (except while doing philosophy). Philosophically, I'm quite certain I'm not the same person I was at age five and in my view it follows logically that neither am I the same person I was five minutes ago. > If I am separated from a near-identical copy by metres, my selfish > concerns might put me in direct conflict with him, whereas if I am > separated from him by minutes, my selfish concerns are the same as his > selfish concerns. Yes, also your future selves will want to keep the promises of your past selves. > This may seem like a convoluted way to look at personal identity, but > it is the way our brains work. We're in a grey area somewhere between psychology and philosophy. :) > If we had evolved in an environment > where copying was commonplace, our brains may well have developed > something akin to Lee's simpler theory of selfishly regarding all > copies as selves in proportion to their level of similarity. Possibly, but it would I think have to be a chaotic society without a coherent concept of individual rights, or even of individuality. My murder-or-suicide courthouse illustration was designed to show the absurdity of such a world. Seems to me copies will have individual rights in any reasonable world. Murder is always murder, assault is always assault, theft is always theft, etc. In reasonable worlds each copy must be considered a unique legal person, no matter what the philosophers say. -gts From mfj.eav at gmail.com Sun Jun 24 16:30:18 2007 From: mfj.eav at gmail.com (Morris Johnson) Date: Sun, 24 Jun 2007 09:30:18 -0700 Subject: [ExI] Gay bomb again.. hemp Message-ID: <61c8738e0706240930k788bfcf8wad4103bfb495d1c8@mail.gmail.com> I made those exact comments to friends after the initial post. ....a way farmers could make great gobs of money by supplying the military industrial complex with psychogenic warfare compounds derived from thousands of tons of hemp? I wrote a piece about this and received an award for doing so 37 years ago. And about the hemp BBQ.... someday this summer I plan to have a party and celebrate our 2007 hemp crop by burning the remnants of our 2000 crop. A 10 acre experiment used hay stack wagons to gather it and yielded a mixture of crop and weeds which was not acceptable to put into food products so I have about 10 tonnes to dispose of. Hemp and pot smoke smell identical so it makes for a bit of a cheap thrill for all. Morris Johnson -- LIFESPAN PHARMA Inc. Extropian Agroforestry Ventures Inc. http://www.lifespanpharma.com http://www.hempforhorses.com 306-447-4944 701-240-8817 Extreme Life-Extension ..."The most dangerous idea on earth" -Leon Kass, Bioethics Advisor to George Herbert Walker Bush, June 2005 Extropian Smoke Signals Waft Softly but Carry a big Schtick ... Morris Johnson - June 2005* -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sun Jun 24 16:59:27 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 24 Jun 2007 12:59:27 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <005701c7b665$a0ad90f0$18064e0c@MyComputer> References: <838541.44619.qm@web37414.mail.mud.yahoo.com> <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> <000e01c7cd56$7a71fa40$6501a8c0@brainiac> <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com> <005701c7b665$a0ad90f0$18064e0c@MyComputer> Message-ID: <62c14240706240959n14c9a17andca65fe80de5f23d@mail.gmail.com> On 6/24/07, John K Clark wrote: > "Mike Dougherty" > > > The Island > > The first half of this movie was pretty good, the second half was not. If > the movie makers had less money to spend they would have made a > much better film; as it is they seemed to say to themselves, hey we still > have a lot of money left over in the budget, let's throw in a car chase. absolutely. I even agree with Joseph Bloch about the cheese. From mmbutler at gmail.com Sun Jun 24 17:11:11 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Sun, 24 Jun 2007 10:11:11 -0700 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <04c801c7b61f$7c658d60$6400a8c0@hypotenuse.com> References: <838541.44619.qm@web37414.mail.mud.yahoo.com> <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> <000e01c7cd56$7a71fa40$6501a8c0@brainiac> <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com> <04c801c7b61f$7c658d60$6400a8c0@hypotenuse.com> Message-ID: <7d79ed890706241011k34c06458g237dc58cc7fe8716@mail.gmail.com> On 6/23/07, Joseph Bloch wrote: > Bah... cheesy remake of a cheesy original.
("The Clonus Horror"). > > ;-) > > Joseph > http://www.josephbloch.com Just saw "Idiocracy", and while it had its funny moments, that's my issue with it. I'm VERY fond of Cyril M. Kornbluth, almost used to lionize him the way others did/do Vonnegut or Sladek, but even his & Fred Pohl's "The Marching Morons" was cheesy. Still head and shoulders better than "Idiocracy". >H mostly in a dystopian way, but "Marching" still had its Galt's Gulch, as it were--nowhere in evidence in "Idiocracy". -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m 'Piss off, you son of a bitch. Everything above where that plane hit is going to collapse, and it's going to take the whole building with it. I'm getting my people the fuck out of here." -- Rick Rescorla (R.I.P.), cell phone call, 9/11/2001 From emohamad at gmail.com Sun Jun 24 17:51:27 2007 From: emohamad at gmail.com (Elaa Mohamad) Date: Sun, 24 Jun 2007 19:51:27 +0200 Subject: [ExI] Favorite ~H+ Movies Message-ID: <24f36f410706241051l3d10da4eoaf97358d939d9936@mail.gmail.com> Russell Wallace wrote: > In The Matrix, the machines are evil and the way to solve problems is with a > form of watered-down mysticism. I'm not saying it's a bad movie - it's good > for what it is, a martial arts flick with cool special effects - but I > wouldn't recommend it for the philosophy. ... Actually, the "machines" in The Matrix are not evil, they just are. And the way they solve problems is very rational (of course, only in the reality portrayed in the film, where they mix fusion and energy from the human brain for power :-))) - not mystic at all. The story is explained through several facts: 1. The machines need human brains to produce energy 2. Humans are hard to control in RL, so they enslave them and provide them with a dreamworld 3. The machines used a trial and error system to produce a perfect dreamworld - the only one that actually works 4. The whole dreamworld story works only if they allow for an anomaly once in a while - thus, Neo That is very logical thinking, n'est-ce pas? The machines are therefore not portrayed as **evil**, they just do what is necessary to ensure the prolongation of their "species". Actually, very human-like. > There's another criterion that I haven't quite articulated yet; I'm not sure > exactly what to call it. Anti-fatalism? The attitude of "Screw that, there > is no fate. If the forecast is that we're doomed, then we'll just have to > find a way to invalidate it. This is the main idea flowing through the Matrix trilogy. Everybody is telling Neo that it is his fate, that it's all been written, that all the choices have been already made. And he says "screw that, I'm going to revive my lover". The forecast from the Architect - the head honcho - was that humanity was doomed from the beginning, and look at what happened in the end. The Avantguardian wrote: > Hi Mohamed, > > I was wondering if you have ever seen "Renaissance: > Paris 2054". It explores the writer's envisioned > conflict between transhumanism and Islam in the > backdrop of Paris in the year 2054 while being an > entertaining film. I thought it might be relevant > considering some of the discussion Islam and H+ > recently. > > Stuart LaForge > alt email: stuart"AT"ucla.edu Hi Stuart, I've actually seen half of the movie two weeks or so ago - didn't like it too much (the style and animation were kind of off-putting) but I must admit I was multitasking and wasn't really concentrated on the story... Perhaps I should see it again. 
Anyway, when it comes to Islamic beliefs, maybe I am not really the best qualified to go into details, because even if I do see myself as a muslim, there are a billion things in the application of Islam that I do not understand or don't agree with. And of course, they are interpreted differently depending on who you ask. I personally believe in science, logic, responsibility for your actions, and that we should make the best of this life (not to do good because of afterlife benefits - I'd like to be able to choose how to use my benefits here, if there are any :-) ) Thanks for the movie suggestion. Eli From lcorbin at rawbw.com Sun Jun 24 17:56:51 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 10:56:51 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677><021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: <02a501c7b689$23834a10$6501a8c0@homeef7b612677> I feel like I have a lot of catching up to do here right now, but I can't resist addressing a point Gordon makes here: > Sent: Friday, June 22, 2007 3:23 PM > As I see it, will is fundamental. I take will as primitive in a > Nietzschean/Schopenhauerish sort of way. When I speak of 'agency' I refer > to that faculty by which we act as the executors of our wills. Where there > are two wills, there are two agents and two identities. A problem I see is that your will may change from hour to hour (possibly depending on mood, or even caffeine and so forth), as I go into more below. > The wills of duplicates begin to diverge at the first moment after > duplication. As above, at that moment there exist two wills, two agents > and two identities. > > One day Lee and his duplicate will sit down for breakfast together. Lee > will will to have corn flakes; his duplicate will will to have wheaties. > At that moment Lee and his dupe will realize they are not the same person > after all. I assure you that it has never occurred to me that my choice of breakfast cereal has anything to do with my identity! :-) (This here is joke, is not evasion, is not straw man, ---is joke, though with a teeny bit of a point.) > In fact they never were the same person; it simply took a while > for their wills to diverge sufficiently to make the truth apparent. You're right in that my duplicate and I might muse on how odd it was. We would no doubt attribute it to a most interesting divergence of our internal workings. Perhaps one of us had slept comparatively poorly for some random reason, and the digestion had turned out differently? But I am all about my *memories*, and to a lesser extent my values and beliefs. The real raw essence of who I really am (scare quotes elided) is impervious to my mood, or whether I have a headache, or whether I might be getting strange signals from my stomach. (I suggest that this is really the case for everyone, and that they are mistaken if they think it's not like this.) Your "will" interpretation of identity, as I was saying above, must answer the criticism that it could change a lot back and forth over a few hours or a few days, and yet during a month you act (and I think believe) that you are the same person. That is, although you do change into someone else very gradually over years, the change transpiring in a single month doesn't endanger your survival.
Lee From lcorbin at rawbw.com Sun Jun 24 18:15:39 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 11:15:39 -0700 Subject: [ExI] Happy Solstice! References: <200706231752.l5NHqm59001048@mail0.rawbw.com> Message-ID: <02bc01c7b68b$f100f670$6501a8c0@homeef7b612677> Spike writes > Since you take into account the latitude and longitude, your next > digit of precision on your yardstick experiment comes from taking into > account the analemma. This website gives a reasonable explanation: > > http://www.analemma.com/Pages/framesPage.html That site has a lot of great analemmas! Thanks. > By happy coincidence, the aphelion and perihelion almost correspond with the > solstices. Yes, January 3 or so is very close to December 21, and July 4 (if I remember right) is close to June 22. A similar nice coincidence that comes to mind is that at the summer solstice the Earth lies almost directly between the sun and the center of the galaxy. (Unfortunately the plane of the ecliptic tilts about sixty degrees or so away from the galactic plane, so that as our solar system moves in the direction of Deneb (clockwise around the galaxy) it's as though it was on an extreme "rake", like cars have.) > I am setting up an experiment to mark the pavement on my back patio > corresponding to the shadow of the peak of the house at exactly noon. > Of course most days at noon I would not be home, so it will take years > to get most of the calendar days marked. I'm sure that you remember to use 1pm in the summers! > When I do, I will have a figure 8 shaped calendar back there. Isaac > will love it. Is this cool or what? Yes, that is *very* neat! The ancients must have seen a lot of analemmas too. I wonder how much they read into the shape of the analemma, especially the part where the lower part bulges out a lot more than the upper part. Lee
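The figure-8 that spike is marking out can be previewed numerically. A minimal sketch in Python, using the common textbook approximations for solar declination and the equation of time; the fitted constants (23.45, 9.87, 7.53, 1.5, and the day offsets 81 and 284) are standard round values, good to a fraction of a degree, fine for a patio calendar but not for navigation:

import math

def equation_of_time_minutes(day):
    # Minutes that sundial noon runs ahead (+) of or behind (-) clock noon.
    b = 2.0 * math.pi * (day - 81) / 364.0
    return 9.87 * math.sin(2.0 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

def declination_degrees(day):
    # Sun's declination: about +23.45 deg at the June solstice, -23.45 in December.
    return 23.45 * math.sin(2.0 * math.pi * (284 + day) / 365.0)

for day in range(1, 366, 14):                  # sample every two weeks
    x = equation_of_time_minutes(day) * 0.25   # 1 minute of time = 1/4 degree of sky
    y = declination_degrees(day)
    print("day %3d: %+6.2f deg east-west, %+6.2f deg declination" % (day, x, y))

Plotting x against y traces the analemma. The lower lobe comes out fatter because perihelion falls within a couple of weeks of the December solstice, so the eccentricity and obliquity contributions to the equation of time reinforce each other there, which is exactly the asymmetry Lee notices.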
From lcorbin at rawbw.com Sun Jun 24 18:22:45 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 11:22:45 -0700 Subject: [ExI] Favorite ~H+ Movies. References: <838541.44619.qm@web37414.mail.mud.yahoo.com><02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> <000e01c7cd56$7a71fa40$6501a8c0@brainiac> Message-ID: <02f501c7b68d$57fb7610$6501a8c0@homeef7b612677> Olga writes > John K Clark wrote >> The only transhuman movie that I think of that was dead accurate in every >> respect was made nearly 40 years ago, it was called "The Forbin Project" >> ... To top it off the AI now forces Forbin to help it design a successor >> machine even more powerful than it is. Hmm. Is it just coincidence that John is much more skeptical that an AI could remain under human control, given how much this movie affected him? > Ah, yes ... I remember that well! The tagline for "The Forbin Project" > could have been the same one they used for "Seconds" (a movie I saw as a > teenager in 1966, which gave a slight boost to my perspective forevermore, > as well as a new word to my vocabulary, i.e., "reborns"): I've noticed too how "my perspective forevermore" has probably been deeply affected by movies and books seen in my impressionable youth. Perhaps the horrifying themes of identity, torture, and belief that lie in Orwell's 1984 (which I read two or three times between age 12 and age 20) account for some of my fascination with who we are and in what ways that changes under torture. > "What Are Seconds?... The Answer May Be Too Terrifying For Words!" (Olga's note: Oh, yeah? Well, those people obviously never heard of "Second Life.") > > more: "What if someone offered you the chance to begin again, with a new > life that was organized to be exactly what you wanted it to be? For one thing, we'd probably change less than we do given that so much of what we experience and what happens to us is out of our control. Lee > That's what > the organization offers some wealthy people..." (Olga's note: Ha! Ha! Those > stinkin' greedy wealthy people who live dangerously and just can't seem to > heed the moral lessons imbued in fairy tales such as The Tale of the > Fisherman ...): > > http://www.imdb.com/title/tt0060955/ > > http://en.wikipedia.org/wiki/The_Tale_of_the_Fisherman_and_the_Fish > > ;) Olga From lcorbin at rawbw.com Sun Jun 24 18:37:53 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 11:37:53 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com><005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677><013c01c7b22b$22548210$6501a8c0@homeef7b612677><017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677><01e001c7b3d7$4c627550$6501a8c0@homeef7b612677><021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: <030401c7b68f$72509250$6501a8c0@homeef7b612677> Gordon writes > On Fri, 22 Jun 2007 03:08:21 -0400, Lee Corbin wrote: > >> [I believe] it is inherently selfish for an instance of me to kill itself >> immediately so that its duplicate gets $10M, given that one of >> them has to die and if the instance protects itself, then it does >> not get the $10M. That may seem an unusual way to use the >> word "selfish", but I mean it most literally. Were "I" and my >> duplicate in such a situation, "this instance" would gladly kill >> itself because it would (rightly, I claim) anticipate awakening >> the next morning $10M richer. > > I'm pretty sure that if you kill "this instance" of yourself today, you > will never again have to worry about money, because you will never wake up. > > The best you can hope for is that your now wealthy duplicate will remember you > fondly while you're pushing up daisies. Yet if you are detached from our cultural notions of having souls, then from a physicist's perspective you have to ask "Just how exactly is this version who awakes today not the same person who would have awakened today instead, had it not died?" Every day when we awake we are somewhat different than we were during the middle, say, of the preceding day. Don't you suppose that from the physicist's point of view there has to be quite a bit of "tolerance" for how much change can happen and yet the same person remain? And what is "change", except, in the final analysis, the placement of all the molecules? The intuition may rebel, but whichever duplicate survives, it's clear (at least to me) that you survive to live another day. Thanks for the epitaph! :-) Hopefully, it won't be just a "body" that lies here, but a "dewar", ---and that there's still a chance for me! Lee > Here lies the body of Lee Corbin, > Who thought he could die and return again. > But his dupe was not him, > Such was only his whim, > For Lee now it's as though he had not been. > > :) > > -gts From emohamad at gmail.com Sun Jun 24 18:43:22 2007 From: emohamad at gmail.com (Elaa Mohamad) Date: Sun, 24 Jun 2007 20:43:22 +0200 Subject: [ExI] Gay Bomb ... again Message-ID: <24f36f410706241143i1800d843t835894cc1db3d01e@mail.gmail.com> I found an awesome article about what the movie industry thinks about the gay bomb issue.
http://blog.wired.com/defense/2007/06/gay-bomb-the--1.html quote: "Gay Bomb will take us into the future and the year 2012. George the Second has refused to step down as leader of the "free world," and the nations of Europe have banded together to fight the new American military dictatorship. Desperate to fend off its attackers, the US launches the experimental "gay bomb," designed to make the enemy forces drop their guns and turn fag. But the winds of fate blow in a different direction, and soon America is brought to its knees." I laughed and laughed and laughed.... Eli From scerir at libero.it Sun Jun 24 18:27:17 2007 From: scerir at libero.it (scerir) Date: Sun, 24 Jun 2007 20:27:17 +0200 Subject: [ExI] will as essence References: <916826.99376.qm@web60512.mail.yahoo.com> Message-ID: <003c01c7b68d$4bd67ba0$87b81f97@archimede> gts > Now, thanks to Damien and Libet above, > we know the will drives the intellect > with a ~200 to ~500 millisec lag. Experiments performed with top target shooters (not to mention Herrigel's 'Zen in the Art of Archery', 1948) show that: 1) firing, breaking a shot consciously ('to decide a shot') produces a bad shot; 2) firing a shot subconsciously produces a good shot (a 'surprise' shot). There are many reasons for the above (coordination between pulling the trigger, focusing on the sights, 'wobble area', etc., and note that all these actions do not 'commute'). But experiments performed by Libet (thanks to Damien!) show something deeper is going on with the 'subconscious target shooting', and that 'will' has detrimental effects. s. "Aiming at a distant object and hitting it - that, of course is impossible. But if we throw a stone in the right direction, imagining the absurd possibility of hitting the object will make success more probable. In this case the certainty that this can happen is more important than training and will." -Niels Bohr From lcorbin at rawbw.com Sun Jun 24 18:59:04 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 11:59:04 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> Message-ID: <030a01c7b692$3fbfe6d0$6501a8c0@homeef7b612677> Stathis writes >> Exactly. Or they might believe that they'll become a particular >> sun flower, or a particular river that they're fond of. And they'd >> be just plain wrong, if not nuts. > > And the reason they're wrong is that going to bed and being replaced > by a sunflower or a river in the morning will not reproduce the > experience of going to bed and waking up as myself, whereas going to > bed and waking up as my exact duplicate will. Of course, what is at issue is exactly what is the difference between "myself" and "my duplicate". > Going to bed in the knowledge that I will die overnight while my > duplicate of a few minutes ago sleeps soundly in the next room does > not reproduce the experience of going to bed and waking up normally, Well, even going to bed in a strange room does not reproduce "the experience of waking up normally". > but is more like going to bed and not waking up at all. Yes, there is a troubling anticipation as you lie awake and think about your duplicate in the next room. While you both sleep, only the duplicate wakes up. 
The memories that you accumulated in the last ten minutes (since the fork) will be lost. *Physically* we have to suppose that the only *real* difference is in those memories, because otherwise *physically* the awakening of your duplicate is the same as the awakening of the original. But you do know all this, as I realize from our earlier discussions. So it seems to me just a question of internalizing it. Therefore, you shouldn't worry about "not existing" as you think about your duplicate in the next room, and just resign yourself to losing a few minutes' memories. > The counterargument is that going to bed as per the last paragraph is > similar to going to bed and waking up with a few minutes' amnesia. If > I take a drug such as midazolam which I know will wipe out any > memory of the next few minutes when I wake up tomorrow, then during > that period that I know I won't remember I will be in a position > analogous to that of contemplating my imminent death, knowing that my > present self will have no direct successor. Oh! This illustrates the perils of (me) not reading what was coming before spouting off! Yes, quite so! But "my imminent death" may be overstating what happens in the situation, of course. > If I can overcome my fear of anticipating no successor experiences > then I should (logically, I would argue) overcome my fear of death. So just how upset, under midazolam, would you be? Alas, it's not something that one would get "used to"! For the very interesting reason that one would not recall the previous instances of so being under the influence. As for me, it would be a bit annoying. I like to think that when I think I do learn a little (if just reorganizing memories), and in that sense being "under the influence" of midazolam would be a waste of time. But I don't think that most people are frightened (or anything) after the drug begins to take effect. It's just a peculiar fact that they know that they won't remember. But then, we forget stuff all the time, and it's no big deal. > On the other hand, if I can find consolation in the survival of a copy > who branched off from me some time ago then I should also find > consolation in the existence of past versions of me, who definitely > existed and definitely shared my memories etc. Yes, that is the logical conclusion. We should look at runtime, whether it's in the past, present, or future, as bestowing benefit (so long, of course, as that part of life is/was/will be worth living). > After all, once this instance of me is permanently dead his relationship > to past, present and future copies is all the same. I just don't look at "instances" as dying. People can die, programs can fail to get runtime over an interval of time, but instances are, ah, terminated. :-) Lee From lcorbin at rawbw.com Sun Jun 24 19:12:46 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 24 Jun 2007 12:12:46 -0700 Subject: [ExI] The Bad Old Days (was Next moment, everything around you...) References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> Message-ID: <031201c7b694$5a104820$6501a8c0@homeef7b612677> Stathis writes > P.S. I am shocked at the acrimony this thread is causing! Me too. I really don't understand it.
Yes, when people get annoyed, then all of us will---when pressed enough---go into ad hominem, and begin writing a lot of paragraphs whose point is nothing other than denigrating someone. Jeffrey had also written > Boy, things have been tense around here lately. > We should be entitled to a little fun once in a while, > right? All I can say is, "you ain't seen nothin'"! In my time (mid-90's) none of this would have ever risen to the level of comment. And I understand that the "Flame Wars" of even earlier times were infinitely worse yet. Long, long ago, in an email list far away... Just ignore it. Think of the unmeasured thickness of John Clark's legendary skin, and try to admire the comparative equanimity that lies therein :-) Anyway, don't worry on any account: I'm sure that it will never get anything like the Bad Old Days. Lee From gts_2000 at yahoo.com Sun Jun 24 18:44:25 2007 From: gts_2000 at yahoo.com (gts) Date: Sun, 24 Jun 2007 14:44:25 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <02a501c7b689$23834a10$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <013c01c7b22b$22548210$6501a8c0@homeef7b612677> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> <02a501c7b689$23834a10$6501a8c0@homeef7b612677> Message-ID: On Sun, 24 Jun 2007 13:56:51 -0400, Lee Corbin wrote: > A problem I see is that your will may change from hour to hour > (possibly depending on mood, or even caffeine and so forth), > as I go into more below. I don't see that as a problem. As I wrote to Stathis in another message, I view continuity of self as no more than a convenient fiction, philosophically speaking. I'm quite certain I'm not the same person I seemed to be at age five and in my view it follows logically that neither am I the same person I seemed to be five minutes ago. The dissimilarities are not readily apparent to my senses but that fact does not excuse me from the rational obligation of deducing their existence. >> In fact they never were the same person; it simply took a while >> for their wills to diverge sufficiently to make the truth apparent. > > You're right in that my duplicate and I might muse on how odd it > was. We would no doubt attribute it to a most interesting divergence > of our internal workings. Odd is an understatement. :) If that other man were **really** you then he would have your unique will, which includes your unique will to eat cornflakes for breakfast instead of wheaties. Furthermore he would be choosing to eat the cornflakes from your bowl while sitting in your chair and wearing your shoes. :) > memories I think the focus on memories has been bad for this identity debate over all these years... it's no wonder there is still no resolution... -gts From austriaaugust at yahoo.com Sun Jun 24 19:07:42 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 24 Jun 2007 12:07:42 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <24f36f410706241051l3d10da4eoaf97358d939d9936@mail.gmail.com> Message-ID: <516723.30950.qm@web37415.mail.mud.yahoo.com> The recent "I, Robot" starring Will Smith ain't bad. Some may gasp that it's "so hollywood", but hollywood could have done worse. It gets a recommend from me; in fact it's probably the best hollywood "AI" movie of the last decade or more, IMO. 
Sincerely, Jeffrey Herrlich From austriaaugust at yahoo.com Sun Jun 24 19:30:16 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 24 Jun 2007 12:30:16 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <200706232318.l5NNIV3I019270@andromeda.ziaspace.com> Message-ID: <656463.5813.qm@web37403.mail.mud.yahoo.com> After the thread is finally retired, I'll send the complete compiled list back to Extropy in the form of another post. I would display the list on my own website, but I don't have one yet. I don't want to be the messenger of doom, let's try to keep it going, I'm sure there's still many more. :-) Sincerely, Jeffrey Herrlich --- spike wrote: > > Please would someone compile these H+ movie choices? > Is there a website to > put them so as to be accessible next time we want to > rent a movie? > > spike > > > -----Original Message----- > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > bounces at lists.extropy.org] On Behalf Of BillK > > Sent: Saturday, June 23, 2007 1:50 PM > > To: ExI chat list > > Subject: Re: [ExI] Favorite ~H+ Movies > > > > On 6/23/07, A B wrote: > > > Whoops. I meant to write: original Director's > Cut of > > > "Blade Runner". Just had to nitpick. Some great ... > > > > 'Dark City' 1998, made a big impression on me... > > BillK > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From austriaaugust at yahoo.com Sun Jun 24 20:44:47 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 24 Jun 2007 13:44:47 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <516723.30950.qm@web37415.mail.mud.yahoo.com> Message-ID: <138840.57906.qm@web37415.mail.mud.yahoo.com> Another really great movie is "V for Vendetta". It's not directly related to H+, but it's definitely got a sort of "Atlas Shrugged" type of feel to it. It's about a single man's struggle to reshape the world into something better. It's one of the most philosophically saturated movies I've seen, and it's really first-rate. Not only does the man struggle to reshape the world, but he's forced to confront the dark forces within himself, and seek justice nonetheless. "V for Vendetta" is definitely among my most favorite movies, if not at the top. Sincerely, Jeffrey Herrlich From thomas at thomasoliver.net Sun Jun 24 19:47:32 2007 From: thomas at thomasoliver.net (Thomas) Date: Sun, 24 Jun 2007 12:47:32 -0700 Subject: [ExI] agency-based personal identity defined In-Reply-To: References: Message-ID: > Jef Allbright wrote: > > Damn. Now I feel compelled to rebut this claim [...] > > Agency-based personal identity naturally accommodates our present > everyday view of a constant relationship between the agent and the > abstract entity which it represents.
Even though both of these change > with time, the relationship -- the personal identity -- is constant. > Agency-based personal identity is more extensible because it > accommodates the idea of duplicate persons with no paradox and no > hanging question of what constitutes sufficient physical/functional > similarity. And agency-based personal identity accommodates future > scenarios of variously enabled/limited variants of oneself performing > tasks on behalf of a common entity and viewed as precisely *that* > entity for social/moral/judicial purposes by itself (itselves) and > others. Perhaps counter-intuitively, agency-based personal identity > shows us that agents more specifically differentiated in their > function will maintain the entity-agent relationship more reliably due > to less potential for conflict based on similar values expressed > within disparate contexts. > > - Jef Earlier you mentioned my audience as your motive for belaboring this discussion. In the interest of reducing compulsion I'd like you to know that I consider your above explanation competent and sufficient. You have succeeded in expanding the scope of my understanding of personal identity. I hereby formally excuse you from feeling any explicit or implied obligation to perform this exercise on my behalf. I very much appreciate your concern and I feel sure this appreciation will grow as I apply this concept in the future. In addition I gained insight into the frustrating process of attempting to argue from an expanded context. The paragraph above indicates that you've gotten better at it. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From joseph at josephbloch.com Sun Jun 24 23:46:51 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Sun, 24 Jun 2007 19:46:51 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <62c14240706240959n14c9a17andca65fe80de5f23d@mail.gmail.com> References: <838541.44619.qm@web37414.mail.mud.yahoo.com><02d601c7b5bc$10a9efc0$030a4e0c@MyComputer><000e01c7cd56$7a71fa40$6501a8c0@brainiac><62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com><005701c7b665$a0ad90f0$18064e0c@MyComputer> <62c14240706240959n14c9a17andca65fe80de5f23d@mail.gmail.com> Message-ID: <04db01c7b6b9$f0ed0a60$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Mike Dougherty > Sent: Sunday, June 24, 2007 12:59 PM > To: ExI chat list > Subject: Re: [ExI] Favorite ~H+ Movies. > > On 6/24/07, John K Clark wrote: > > "Mike Dougherty" > > > > > The Island > > > > The first half of this movie was pretty good, the second half was not. If > > the movie makers has less money to spend they would have made a > > much better film; as it is they seemed to say to themselves, hey we still > > have a lot of money left over in the budget, let's throw in a car chase. > > absolutely. I even agree with Joseph Bloch about the cheese. Gadzooks! Is agreeing with me so far-out that it deserves an "even"? ;-) Joseph http://www.josephbloch.com From joseph at josephbloch.com Mon Jun 25 00:02:15 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Sun, 24 Jun 2007 20:02:15 -0400 Subject: [ExI] Catholic news aggregator gives >H article fear-mongering headline Message-ID: <04dc01c7b6bc$17567f40$6400a8c0@hypotenuse.com> Wow... 
the Crisis Magazine article has made it to the top-headline link from SpiritDaily.com, a very conservative Catholic news aggregator deliberately trying to look like Drudge: http://www.spiritdaily.com (but it will change relatively soon; I've got an image of the site captured which I'll put up sometime soon). The headline reads: "Transhumanism Emerges As Huge Threat To Human Dignity And An Affront To God" Here is the link to the original article (for which I was interviewed, and some of my comments appear): http://www.crisismagazine.com/may2007/pavlat.htm And here is a link to my blog, commenting on said article: http://www.josephbloch.com/?p=17 "Human Dignity" in this context, of course, usually gets twisted into some version of "God's intention for humanity" rather than anything that actually equates dignity with self-empowerment. But I thought it would be of interest that the uber-staunch Catholics deem us worthy of headline placement and such terms as "threat" and "affront to god". (Obviously, the author of the article didn't have any control over that headline from a non-affiliated website; he treated the subject of Transhumanism and myself pretty fairly, as far as I'm concerned). Joseph http://www.josephbloch.com From bkdelong at pobox.com Mon Jun 25 00:59:52 2007 From: bkdelong at pobox.com (B.K. DeLong) Date: Sun, 24 Jun 2007 20:59:52 -0400 Subject: [ExI] Avatar machine - See yourself live as a First-Person Avatar Message-ID: I thought this was pretty cool. Unfortunately it's not feasible in Urban locales or environments with height limits. I wonder how hard it would be to change camera views. ;) http://www.marcowens.co.uk/avat.html -- B.K. DeLong (K3GRN) bkdelong at pobox.com +1.617.797.8471 http://www.wkdelong.org Son. http://www.ianetsec.com Work. http://www.bostonredcross.org Volunteer. http://www.carolingia.eastkingdom.org Service. http://bkdelong.livejournal.com Play. PGP Fingerprint: 38D4 D4D4 5819 8667 DFD5 A62D AF61 15FF 297D 67FE FOAF: http://foaf.brain-stream.org From spike66 at comcast.net Mon Jun 25 00:48:13 2007 From: spike66 at comcast.net (spike) Date: Sun, 24 Jun 2007 17:48:13 -0700 Subject: [ExI] Catholic news aggregator gives >H article fear-mongeringheadline In-Reply-To: <04dc01c7b6bc$17567f40$6400a8c0@hypotenuse.com> Message-ID: <200706250121.l5P1Lrdj012404@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Joseph Bloch ... > ...top-headline link from SpiritDaily.com, a very conservative Catholic news aggregator: > > "Transhumanism Emerges As Huge Threat To Human Dignity And An Affront To > God" ... But he affronted me first! If the opposite of dignity is humiliation, (the religion people are always telling me that humility is good) then dignity must be bad. We know that *excess* dignity is bad, its antidote being silliness or self-deprecating humor, which I like. So bring on the transhumanism, bring off the religion. spike From msd001 at gmail.com Mon Jun 25 02:11:54 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 24 Jun 2007 22:11:54 -0400 Subject: [ExI] Favorite ~H+ Movies. 
In-Reply-To: <04db01c7b6b9$f0ed0a60$6400a8c0@hypotenuse.com> References: <838541.44619.qm@web37414.mail.mud.yahoo.com> <02d601c7b5bc$10a9efc0$030a4e0c@MyComputer> <000e01c7cd56$7a71fa40$6501a8c0@brainiac> <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com> <005701c7b665$a0ad90f0$18064e0c@MyComputer> <62c14240706240959n14c9a17andca65fe80de5f23d@mail.gmail.com> <04db01c7b6b9$f0ed0a60$6400a8c0@hypotenuse.com> Message-ID: <62c14240706241911q129c5edbs97dc4a78393b1e1a@mail.gmail.com> On 6/24/07, Joseph Bloch wrote: > > absolutely. I even agree with Joseph Bloch about the cheese. > > Gadzooks! Is agreeing with me so far-out that it deserves an "even"? ;-) I was agreeing with John, and "even" (as in "also") adding the agreement about cheesiness. Many people seem to defend their posts from any kind of opposition. I have felt like every suggestion I've made to this thread has been met with a "no, that's dumb" comment - but I should know better than to look for appreciation for contributing. That's not sour grapes, I acknowledge that this medium is mostly about arguments over agreements. I think sometimes if you're succinct enough for people to read your posts, you risk being too terse; if too wordy, your posts are ignored. If you say something that has no opposition, there is rarely a reply in agreement with a well constructed statement. So it leaves opposition and bickering as common modes. ... and sometimes the use of the most innocuous connective word has unintended consequential meaning. Thanks for pointing it out so I could (attempt to) explain. From joseph at josephbloch.com Mon Jun 25 03:40:59 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Sun, 24 Jun 2007 23:40:59 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <62c14240706241911q129c5edbs97dc4a78393b1e1a@mail.gmail.com> References: <838541.44619.qm@web37414.mail.mud.yahoo.com><02d601c7b5bc$10a9efc0$030a4e0c@MyComputer><000e01c7cd56$7a71fa40$6501a8c0@brainiac><62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com><005701c7b665$a0ad90f0$18064e0c@MyComputer><62c14240706240959n14c9a17andca65fe80de5f23d@mail.gmail.com><04db01c7b6b9$f0ed0a60$6400a8c0@hypotenuse.com> <62c14240706241911q129c5edbs97dc4a78393b1e1a@mail.gmail.com> Message-ID: <04e001c7b6da$a5d9f940$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Mike Dougherty > Sent: Sunday, June 24, 2007 10:12 PM > To: ExI chat list > Subject: Re: [ExI] Favorite ~H+ Movies. > > On 6/24/07, Joseph Bloch wrote: > > > absolutely. I even agree with Joseph Bloch about the cheese. > > > > Gadzooks! Is agreeing with me so far-out that it deserves an "even"? ;-) > > I was agreeing with John, and "even" (as in "also") adding the > agreement about cheesiness. Many people seem to defend their posts > from any kind of opposition. I have felt like every suggestion I've > made to this thread has been met with a "no, that's dumb" comment - > but I should know better than to look for appreciation for > contributing. That's not sour grapes, I acknowledge that this medium > is mostly about arguments over agreements. I think sometimes if > you're succinct enough for people to read your posts, you risk being > too terse; if too wordy, your posts are ignored. If you say something > that has no opposition, there is rarely a reply in agreement with a > well constructed statement. So it leaves opposition and bickering as > common modes. > > ... 
and sometimes the use of the most innocuous connective word has > unintended consequential meaning. Thanks for pointing it out so I > could (attempt to) explain. Bah... Blame my own insecurities based on a number of factors you are in absolutely no position to apprehend. Good answer. Besides, you agreed with me about a subjective matter of art. That gains you a point in my book. Joseph http://www.josephbloch.com From mabranu at yahoo.com Mon Jun 25 03:22:19 2007 From: mabranu at yahoo.com (TheMan) Date: Sun, 24 Jun 2007 20:22:19 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? Message-ID: <999620.39274.qm@web51906.mail.re2.yahoo.com> My impression is that there is a far from negligible risk that nanofactories (and other extreme technology for mass manipulation of matter/minds) will relatively soon become effective enough, and cheap enough, for plenty of anti-humanity cults and terrorists to do things (manufacture armies of nanorobots etc) that can terminate mankind before mankind or any human gets a chance to become or create something greater that is not vulnerable to such threats. It looks like it will be a nightmare race. The part of society that wants survival will incessantly have to be winning the race, and the part of society that wants society extinct constantly has to be losing the race, if mankind is to survive. What are the odds of that, every single second, year after year, until singularity is reached? When technologies that can be used for mankind extinction become really fast and effective, the bad side may only have to gain the upper hand for one brief moment, whereas the good side has to constantly have the upper hand. So the odds seem to be in favor of the bad side. (A toy calculation below makes the compounding explicit.) From a utilitarian point of view, any existential threat, no matter how small, to the survival of mankind in the near future, should be taken more seriously than any non-existential issues, because an enormously long future of enormous amounts of happiness (in this part of the universe) may depend on the survival of mankind in this critical time. (Whether there will always be "far from negligible existential threats" is an interesting question, but let's examine that one in a separate thread. It seems to me that in the next two or three decades there will be worse threats than either before or after.) I can think of no other existential threat to mankind in the coming decades that can compare to the threat from super-technology (like very advanced self-replicating nanorobots) in the wrong hands. (Wrong hands doesn't have to mean evil hands - simply incompetent and naïve hands can be just as dangerous - but I'll focus on the threat from evil hands as that one is more obvious to me.) If we leave it up to our governments, our governments will respond to that threat with, as usual, even more surveillance and even more limitation of people's freedoms. And the public will, as usual, even be the ones who demand it. Some say we already live in an Orwellian society, but I think the one we live in today is more like *paradise* compared to the one that governments may build in the decades to come in order to protect us (or maybe mainly themselves) against nano-terrorism and other super-hi-tech terrorism.
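A toy calculation to make the compounding explicit -- a sketch of my own with invented numbers, not anything from the post itself: if the good side must hold the lead in every period and a single lapse is fatal, the survival probabilities multiply across periods.

# Toy model of the "good side must always be winning" asymmetry.
# Assumption (invented for illustration): each year carries an independent
# probability p that the bad side briefly gains the upper hand, which is fatal.
# Surviving N years then requires winning all N of them: P = (1 - p) ** N.

def survival(p_per_year: float, years: int) -> float:
    """Probability of getting through `years` consecutive years unscathed."""
    return (1.0 - p_per_year) ** years

for p in (0.001, 0.01, 0.05):
    for years in (10, 30, 100):
        print(f"p={p:.3f}/yr over {years:>3} yrs: survival = {survival(p, years):6.1%}")

Even at a 1% annual risk the chance of getting through a whole century without a fatal lapse is about 37%, and at 5% it is under 1% -- which is the sense in which the odds favor the bad side.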
There is a considerable risk that governments, even in the most democratic countries in the world, will abuse that kind of omnipresent, automated surveillance system when they have it installed, and become dictators, using their surveillance system to detect any opposition in time to snuff it, and to frighten people so that very little opposition even occurs. Power corrupts; total power corrupts totally. That is, if we don't choose another solution. Absolutely extreme levels of global surveillance seem about to become a must, very soon. But must governments be in charge of it? David Brin suggests, in his "The Transparent Society", that we should create such a system that we can all watch each other all the time (and while Brin thinks we could still keep some privacy, I don't see how we could, if we want to keep mankind safe from the approaching threats from the super hi-tech weapons of mass destruction that will soon be available to just about everybody if the public is allowed any privacy whatsoever). With a system that allows us all to watch each other 24/7/365 (and to listen to all each other's phone calls, live conversations, read each other's emotions, detect each other's lies etc), no government can seize illegitimate power or do anything outside of their official duties, because as soon as they try, they will be immediately stopped by the public, because ordinary people watch and control them just as much as they watch and control ordinary people. The government may watch suspected persons in their homes and convict them for planning acts of terror, but any one member of society can (and usually literally millions will) scrutinize the validity of such accusations, because whatever the government sees and hears, everybody can see and hear whenever they want to, because it is all broadcast live. Every person in the government is videotaped 24/7/365, and streamed over the Internet for everyone to see and hear, just like everybody else is. Possibly with a lie detector attached to the screen (depending on how good lie detectors will be by then), with a red light going off every time someone lies or holds something back. We would be just as thoroughly watched in such a society as in Orwell's "1984", but the difference is that we wouldn't be at all powerless. We would know everything about each other, or rather, as much as we would want to know (and have time to take in). That might protect us pretty well from any terrorists - and, perhaps just as importantly, from dictator wannabes - among us and among the politicians and everywhere. Would such a solution make the existential threat to mankind from ever more powerful hi-tech weapons smaller than the solution where the governments have all or most of the power, and the public little or none? Actually, I'm not sure. It may even be the other way around - an Orwellian world may be the *safer* one (if survival is the only objective, that is). For sure, David Brin's everybody-watches-everybody world would _feel_ safer than an Orwellian world would. But what if an Orwellian dictatorship actually would protect mankind better from extinction (by enslaving mankind) than David Brin's relatively free society would? If everybody can see what everybody does, it means that every technological advance becomes available to everybody at the same time.
What if at one moment, a technology allows anybody to build a weapon so fast that he can use it to terminate all other people before they can stop it (they may all realize the danger immediately, but they may not always be quick enough to physically prevent the person from doing what he is about to do)? Is a world consisting of billions of equally powerful human beings(/cyborgs/posthumans/whatever), with naturally very diverse agendas, really safer (from the point of view of the survival of mankind) than a world where just one or a handful of human beings have all the power and the others none? If all the billions of human beings on this planet have equal power and knowledge, maybe there is a greater risk that one of them temporarily gets the lead in the technology race and finds a way to terminate mankind (as some of them will surely always try to do, as long as they have any freedom whatsoever)? If, for example, only five people have total power over the rest of mankind, at least there is a good chance that no one of these five will ever _choose_ to terminate mankind although they could. At least one in six billion will choose to (try to). Hundreds probably, maybe thousands, will try to. From a utilitarian ethics point of view, it may be preferable in the long run that we choose the kind of system that as strongly as possible secures long term survival of this planet's highly evolved intelligence, rather than the kind of system that allows for the best life quality for the people. Even if a global Orwellian dictatorship would mean huge loads of suffering for almost everyone on this planet for decades, sooner or later even a dictator regime - no matter how stubbornly it tries to stay conservative - will reach singularity, and just expand its bliss and intelligence until there is no room (or use) for pain-experiencing slaves anymore. (And after that, just expand further.) I like utilitarianism, but I'm not sure I'm ready to live in an Orwellian society for a long time just because that's the utilitarian thing to do. And maybe it isn't. Maybe David Brin's society will be not only nicer, but also safer for mankind's survival. What do you think? And/or maybe there is a third alternative, an even better one? Any ideas? (When I say mankind's survival, I mean "the survival of whatever needs to survive so that the development towards greater and greater amounts of happiness that has started on this planet will continue growing or at least have a chance to grow in the future" or something like that. It includes any posthumanity that is desirable from a utilitarian point of view. And I can't think of any way for a posthumanity to be long term undesirable from that point of view - other than by being more vulnerable than the alternatives.) So, what do you say, what kind of surveillance system and government and society structure would be best, as protection against nano-terrorism and such, in the coming decades, before singularity arrives (so that it gets to arrive)? Please feel free to describe in as much detail as you wish. From sentience at pobox.com Mon Jun 25 04:08:17 2007 From: sentience at pobox.com (Eliezer S.
Yudkowsky) Date: Sun, 24 Jun 2007 21:08:17 -0700 Subject: [ExI] Essays: Why Transhumanism is Perfectly Normal Message-ID: <467F3FB1.2010102@pobox.com> http://www.singinst.org/blog/2007/06/16/transhumanism-as-simplified-humanism/ http://www.singinst.org/blog/2007/06/24/transhumanists-dont-need-special-dispositions/ -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From russell.wallace at gmail.com Mon Jun 25 04:39:27 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Mon, 25 Jun 2007 05:39:27 +0100 Subject: [ExI] Essays: Why Transhumanism is Perfectly Normal In-Reply-To: <467F3FB1.2010102@pobox.com> References: <467F3FB1.2010102@pobox.com> Message-ID: <8d71341e0706242139x3143f2c0sa4dcae3d26fe35f6@mail.gmail.com> Excellent essays, thank you! *keeps links to point people at next time I get into argument about these topics* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Mon Jun 25 05:52:43 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 24 Jun 2007 22:52:43 -0700 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <999620.39274.qm@web51906.mail.re2.yahoo.com> References: <999620.39274.qm@web51906.mail.re2.yahoo.com> Message-ID: On 6/24/07, TheMan wrote: > My impression is that there is a far from negligible > risk that nanofactories (and other extreme technology > for mass manipulation of matter/minds) will relatively > soon become effective enough, and cheap enough, for > plenty of anti-humanity cults and terrorists to do > things (manufacture armies of nanorobots etc) that can > terminate mankind before mankind or any human gets a > chance to become or create something greater that is > not vulnerable to such threats. > > It looks like it will be a nightmare race. The part of > society that wants survival will incessantly have to > be winning the race, and the part of society that > wants society extinct constantly has to be losing the > race, if mankind is to survive. What are the odds of > that, every single second, year after year, until > singularity is reached? The cosmic race is simply a fact of nature, as fundamental as the entropic observation that two can move a large mass that one can move not at all. Whether this is considered a nightmare, a dream, or merely the way things work, is entirely in the mind of the observer but it's worth recognizing that our very existence -- and our future -- depends on it being so. I tend to favor a model of our subjective awareness in the form of a tree of the probable, exploring the possible. As subjective agents, we are each but the tips of the branches. Looking back, we see increasingly thick branches -- increasingly probable principles -- describing the "reality" of our subjective branch converging all the way back to the thickest branches representing our most fundamental, and therefore most general, principles of physics. Looking forward, we see the growth of increasingly diverse branches of the possible, supported by the probable, to be pruned by natural selection in ways consistent with what has gone before, but always surprising from our subjective point of view.
Staying in the Red Queen's race, from any subjective point of view, involves the discovery and exploitation of increasingly effective configurations -- configurations representing that with which we identify: our subjective values -- and increasingly effective not only within existing degrees of freedom but in terms of synergistic configurations presenting new dimensions of interaction with the local environment, the adjacent possible. In principle, this is a race of information, supported by configurations of what we currently see as matter. This reflects on the question of surveillance and sousveillance -- while the tree can and will branch unpredictably, a fundamental trend is toward increasing information (on both sides). We can take heart from the observation that increasing convergence on principles "of what works" supports increasing divergence of self-expression "of what may work." If we recognize this and promote growth in terms of our evolving values via our evolving understanding of principles of "what works", amplified by our technologies, then we can hope to stay in the race, even as the race itself evolves. If we would attempt in some way to devise a solution preserving our present values, then the race, speeding up exponentially, would soon pass us by. In short, yes, we can hope to stay in the race, but as the race evolves so must we. - Jef From stathisp at gmail.com Mon Jun 25 09:03:22 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jun 2007 19:03:22 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On 25/06/07, gts wrote: > > If we had evolved in an environment > > where copying was commonplace, our brains may well have developed > > something akin to Lee's simpler theory of selfishly regarding all > > copies as selves in proportion to their level of similarity. > > Possibly, but it would I think have to be a chaotic society without a > coherent concept of individual rights, or even of individuality. My > murder-or-suicide courthouse illustration was designed to show the > absurdity of such a world. But imagine that exact copying of an adult human had been available for thousands of years. In such a society, people who tend to treat their copies as selves and will, e.g., not think twice about sacrificing one version of themselves so that two versions survive, will prosper and become over-represented in the population compared to those who treat copies as other and behave selfishly (in the present sense) towards them. The adaptive effect of treating copies as selves will be greater than the adaptive effect of caring for family members, because in the case of the copies not only are they physically identical but the entire meme complex is also identical: evolutionary psychology becomes much more straightforward. (A toy simulation below illustrates the dynamic.) Therefore, if copying became commonplace, over time Lee's view would come to prevail and the rest of us would become evolutionary relics. However, that doesn't mean we should - for this reason - consider copies as selves, any more than the existence of sperm banks should inspire all men to devote maximum resources to donating sperm.
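A toy simulation of the selection dynamic described above -- my own sketch with made-up parameters, not anything from the argument itself: one heritable strategy treats copies as selves and accepts copy-and-sacrifice deals that sometimes net an extra surviving instance; the other strategy always declines.

import random

def simulate(generations: int = 100, deal_success: float = 0.3, seed: int = 42) -> float:
    """Return the final population share of 'copy-as-self' agents.

    Invented toy assumptions: each generation, a copy-as-self agent takes a
    copy deal that leaves two surviving instances with probability
    `deal_success`, else one; copy-as-other agents always keep one instance.
    """
    rng = random.Random(seed)
    n_self, n_other = 100, 100
    for _ in range(generations):
        n_self = sum(2 if rng.random() < deal_success else 1 for _ in range(n_self))
        total = n_self + n_other
        if total > 100_000:  # rescale to keep numbers finite; the ratio is preserved
            n_self = round(n_self * 100_000 / total)
            n_other = round(n_other * 100_000 / total)
    return n_self / (n_self + n_other)

print(f"copy-as-self share after 100 generations: {simulate():.1%}")

With these numbers the copy-as-self lineage approaches fixation within a few dozen generations, which is all the "over-represented" claim requires.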
-- Stathis Papaioannou From stathisp at gmail.com Mon Jun 25 09:44:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jun 2007 19:44:37 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <030a01c7b692$3fbfe6d0$6501a8c0@homeef7b612677> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <017d01c7b2dc$e50d45b0$6501a8c0@homeef7b612677> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <01f901c7b488$40852cd0$6501a8c0@homeef7b612677> <023301c7b4db$a4feada0$6501a8c0@homeef7b612677> <030a01c7b692$3fbfe6d0$6501a8c0@homeef7b612677> Message-ID: On 25/06/07, Lee Corbin wrote: > > If I can overcome my fear of anticipating no successor experiences > > then I should (logically, I would argue) overcome my fear of death. > > So just how upset, under midazolam, would you be? Alas, > it's not something that one would get "used to"! For the > very interesting reason that one would not recall the previous > instances of so being under the influence. I think you would get used to it, because you would remember agreeing to have the dose and then finding yourself somewhere else a while later (usually waking up, at the doses that cause complete amnesia) with no recollection of the intervening period. You could go through this many times, and in fact some patients do, and stop worrying about it. This means that you can arrive at a state of mind whereby you can accept that you-thinking-this can anticipate no future experiences. Now, given that you have arrived at such a position, why is it more "logical" to proceed to the conclusion that this is OK as long as some near-copy (which you-thinking-this will never directly know) has future experiences, rather than the conclusion that death does not matter at all, or doesn't matter as long as someone else will be around to complete your projects? -- Stathis Papaioannou From stathisp at gmail.com Mon Jun 25 09:49:32 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 25 Jun 2007 19:49:32 +1000 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <516723.30950.qm@web37415.mail.mud.yahoo.com> References: <24f36f410706241051l3d10da4eoaf97358d939d9936@mail.gmail.com> <516723.30950.qm@web37415.mail.mud.yahoo.com> Message-ID: On 25/06/07, A B wrote: > > The recent "I, Robot" starring Will Smith ain't bad. > Some may gasp that it's "so hollywood", but hollywood > could have done worse. It gets a recommend from me; in > fact it's probably the best hollywood "AI" movie of > the last decade or more, IMO. What about the movie "A.I."? -- Stathis Papaioannou From emlynoregan at gmail.com Mon Jun 25 12:34:14 2007 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 25 Jun 2007 22:04:14 +0930 Subject: [ExI] http://www.randommutation.com/ Message-ID: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> http://www.randommutation.com/ Thought I'd throw this one to the wolves. Tear it up! Emlyn From msd001 at gmail.com Mon Jun 25 12:42:02 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 25 Jun 2007 08:42:02 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: References: <24f36f410706241051l3d10da4eoaf97358d939d9936@mail.gmail.com> <516723.30950.qm@web37415.mail.mud.yahoo.com> Message-ID: <62c14240706250542ia32eb8ata34c9c0057586be9@mail.gmail.com> Strange Days: http://en.wikipedia.org/wiki/Strange_Days_(film) The recording and playback of experiences is an interesting bit of technology featured in this film.
Admittedly, this sci-fi thriller takes itself a little too seriously and tries too hard - but as a concept movie, I thought it had its moments. From msd001 at gmail.com Mon Jun 25 12:49:08 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 25 Jun 2007 08:49:08 -0400 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> References: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> Message-ID: <62c14240706250549k5adebd24lec3805aac8ff5e24@mail.gmail.com> On 6/25/07, Emlyn wrote: > http://www.randommutation.com/ > > Thought I'd throw this one to the wolves. Tear it up! Where to start? The author has an ax to grind and uses pseudo-scientific language to lend validity to what is simply nonsense (see the sketch below). From mabranu at yahoo.com Mon Jun 25 14:04:58 2007 From: mabranu at yahoo.com (TheMan) Date: Mon, 25 Jun 2007 07:04:58 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? (A mysterious ">" sneaked in!) Message-ID: <465399.96454.qm@web51906.mail.re2.yahoo.com> Something strange happened to my original post in this thread - at least in the archive: for some mysterious reason there is a ">" before the piece starting with this sentence: >From a utilitarian point of view, any existential threat, no matter how small, to the survival of (etc) Why? I didn't put that ">" there! For those of you who stopped reading at the ">" because you thought the rest of the text was a quote: it was not a quote, just the remainder of my post, which continues from that sentence.
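Back on the randommutation.com thread above: as far as I can tell, the page applies random mutation alone, which of course only degrades text. The classic counter-demonstration is Dawkins' "weasel" program -- random mutation plus cumulative selection -- sketched here from memory as a pedagogical toy (the target string, mutation rate and brood size are arbitrary choices, and this is not a model of real population genetics):

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(text: str, rate: float = 0.05) -> str:
    """Randomly replace each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in text)

def score(text: str) -> int:
    """Count the characters matching the target."""
    return sum(a == b for a, b in zip(text, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Mutation alone would wander forever; selection keeps the fittest variant.
    brood = [mutate(parent) for _ in range(100)] + [parent]
    parent = max(brood, key=score)
print(f"matched the target after {generation} generations")

Drop the selection step -- keep a random offspring instead of the fittest -- and the string never converges, which is the only case the site demonstrates.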
From austriaaugust at yahoo.com Mon Jun 25 13:38:25 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 25 Jun 2007 06:38:25 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: Message-ID: <459132.93630.qm@web37402.mail.mud.yahoo.com> Stathis wrote: "What about the movie "A.I."?" I hadn't forgotten about that one. But I couldn't in good conscience recommend it to anyone. ;) I can't pin down exactly why I dislike it so much; I think it's multiple factors. Although I suppose it's conceivable that someone else might enjoy it. ;) Should we take this as a recommendation from you, Stathis? :) Sincerely, Jeffrey Herrlich --- Stathis Papaioannou wrote: > On 25/06/07, A B wrote: > > > > The recent "I, Robot" starring Will Smith ain't > bad. > > Some may gasp that it's "so hollywood", but > hollywood > > could have done worse. It gets a recommend from > me; in > > fact it's probably the best hollywood "AI" movie > of > > the last decade or more, IMO. > > What about the movie "A.I."? > > > -- > Stathis Papaioannou From austriaaugust at yahoo.com Mon Jun 25 14:42:07 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 25 Jun 2007 07:42:07 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <459132.93630.qm@web37402.mail.mud.yahoo.com> Message-ID: <472773.43298.qm@web37414.mail.mud.yahoo.com> Another dystopian vision of the human future can be seen in George Lucas' movie "THX". It had some great "visuals" for the time, but I admittedly don't remember that much about the movie - IIRC, I was half drunk when I watched it. I think that genetic and pharmacological manipulation were a major theme of the movie. Maybe someone else here can fill in the holes, and state whether it was a good movie or bad. :) Sincerely, Jeffrey Herrlich From russell.wallace at gmail.com Mon Jun 25 15:08:50 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Mon, 25 Jun 2007 16:08:50 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <516723.30950.qm@web37415.mail.mud.yahoo.com> References: <24f36f410706241051l3d10da4eoaf97358d939d9936@mail.gmail.com> <516723.30950.qm@web37415.mail.mud.yahoo.com> Message-ID: <8d71341e0706250808h13af9d1eg854f7611eb38477e@mail.gmail.com> On 6/24/07, A B wrote: > > > The recent "I, Robot" starring Will Smith ain't bad. > Some may gasp that it's "so hollywood", but hollywood > could have done worse. It gets a recommend from me; in > fact it's probably the best hollywood "AI" movie of > the last decade or more, IMO. > *blinks* I think I, Robot has to be the worst AI movie of all time. I mean, Logan's Run at least was funny - inadvertently, but funny.
Revenge of the Sith, well there were a couple of moments when it managed to rise to the level of slapstick comedy. Bad slapstick comedy, but still comedy. But I, Robot managed that perfect level of badness that has no redeeming features whatsoever. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Mon Jun 25 15:10:39 2007 From: jef at jefallbright.net (Jef Allbright) Date: Mon, 25 Jun 2007 08:10:39 -0700 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On 6/25/07, Stathis Papaioannou wrote: > On 25/06/07, gts wrote: > > > > If we had evolved in an environment > > > where copying was commonplace, our brains may well have developed > > > something akin to Lee's simpler theory of selfishly regarding all > > > copies as selves in proportion to their level of similarity. > > > > Possibly, but it would I think have to be a chaotic society without a > > coherent concept of individual rights, or even of individuality. My > > murder-or-suicide courthouse illustration was designed to show the > > absurdity of such a world. > > But imagine that exact copying of an adult human had been available > for thousands of years. In such a society, people who tend to treat > their copies as selves and will, e.g., not think twice about sacrificing > one version of themselves so that two versions survive, will prosper > and become over-represented in the population compared to those who > treat copies as other and behave selfishly (in the present sense) > towards them. The adaptive effect of treating copies as selves will be > greater than the adaptive effect of caring for family members, because > in the case of the copies not only are they physically identical but > the entire meme complex is also identical: evolutionary psychology > becomes much more straightforward. > > Therefore, if copying became commonplace, over time Lee's view would > come to prevail and the rest of us would become evolutionary relics. > However, that doesn't mean we should - for this reason - consider > copies as selves, any more than the existence of sperm banks should > inspire all men to devote maximum resources to donating sperm. Stathis, I agree with you that ubiquitous copying would lead to changes in the evolved psychology of self, but suggest that this would not strengthen acceptance of Lee's view that duplicates are the same individual, but rather, that duplicates are members of the hive identity and that it would be silly to consider the personal identity of individuals. Supporting your point, there is no essence of personal identity, but only perceived identity assigned by any observer in terms of features salient due to their utility with regard to the dynamics of social organisms. - Jef From jef at jefallbright.net Mon Jun 25 15:39:03 2007 From: jef at jefallbright.net (Jef Allbright) Date: Mon, 25 Jun 2007 08:39:03 -0700 Subject: [ExI] agency-based personal identity defined In-Reply-To: References: Message-ID: On 6/24/07, Thomas wrote: > Earlier you mentioned my audience as your motive for belaboring this > discussion. In the interest of reducing compulsion I'd like you to know > that I consider your above explanation competent and sufficient. You have > succeeded in expanding the scope of my understanding of personal identity.
> I hereby formally excuse you from feeling any explicit or implied obligation > to perform this exercise on my behalf. I very much appreciate your concern > and I feel sure this appreciation will grow as I apply this concept in the > future. In addition I gained insight into the frustrating process of > attempting to argue from an expanded context. The paragraph above indicates > that you've gotten better at it. -- Thomas Uh...thanks, I think. But rather than "reducing [my] compulsion" to increase the coherence and extensibility of our thinking on these topics -- haven't you actually reinforced it? ;-) A band of musicians playing together can be greater than the sum of the musicians playing separately. But that greater cooperative act is the result of smaller acts of competition, selecting for arrangements that work together at the higher level. - Jef From austriaaugust at yahoo.com Mon Jun 25 16:08:13 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 25 Jun 2007 09:08:13 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <8d71341e0706250808h13af9d1eg854f7611eb38477e@mail.gmail.com> Message-ID: <492604.42413.qm@web37402.mail.mud.yahoo.com> Russell wrote/acted: *blinks* Meh, to each his own I suppose. :-) > "I think I, Robot has to be the worst AI movie of all > time." The "worst"? C'mon. Surely you believe it's better than that Spielberg monstrosity? ;) > "I mean, Logan's Run at least was funny - > inadvertently, but funny." Haven't had the pleasure yet. Do you recommend it? > "Revenge of the Sith, well there were a couple of > moments when it managed to > rise to the level of slapstick comedy. Bad slapstick > comedy, but still > comedy.
http://mobile.yahoo.com/go?refer=1GNXIC From natasha at natasha.cc Mon Jun 25 16:19:27 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 25 Jun 2007 11:19:27 -0500 Subject: [ExI] MEDIA: Martine Rothblatt in the News Message-ID: <200706251619.l5PGJVKA009600@ms-smtp-05.texas.rr.com> An excellent article on transhumanist Martine Rothblatt on her company United Therapeutics, and her ideas and views: http://biz.yahoo.com/hfsb/070625/062107_fsb100_united_therapeutics_fsb.html Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at kevinfreels.com Mon Jun 25 15:57:31 2007 From: kevin at kevinfreels.com (kevin at kevinfreels.com) Date: Mon, 25 Jun 2007 08:57:31 -0700 Subject: [ExI] http://www.randommutation.com/ Message-ID: <20070625085731.38f036b76284185e041b1b237c97abe6.0ce642404c.wbe@email.secureserver.net> An HTML attachment was scrubbed... URL: From kevin at kevinfreels.com Mon Jun 25 15:59:05 2007 From: kevin at kevinfreels.com (kevin at kevinfreels.com) Date: Mon, 25 Jun 2007 08:59:05 -0700 Subject: [ExI] Favorite ~H+ Movies Message-ID: <20070625085904.38f036b76284185e041b1b237c97abe6.7d66d3a441.wbe@email.secureserver.net> An HTML attachment was scrubbed... URL: From mabranu at yahoo.com Mon Jun 25 16:08:43 2007 From: mabranu at yahoo.com (TheMan) Date: Mon, 25 Jun 2007 09:08:43 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <482359.40787.qm@web51905.mail.re2.yahoo.com> Message-ID: <251687.30751.qm@web51901.mail.re2.yahoo.com> Jef Allbright wrote: > The cosmic race is simply a fact of nature, The _cosmic_ race? You mean the fact that tech races will go on throughout universe for eternity anyway, with or without mankind's participation? Don't you care whether this particular race going on on Earth now is going to continue leading to better and better things for ever, or soon stop itself by creating something that kills us all? > as > fundamental as the > entropic observation that two can move a large mass > that one can move > not at all. Whether this is considered a nightmare, > a > dream, or > merely the way things work, is entirely in the mind > of > the observer > but it's worth recognizing that our very existence The tech race on our planet in inevitable - until it stops itself by leading to something that extincts all. The race can take differing paths, and we should, at least to some extent, be able to influence what path it will take, because we are the ones creating it. What I wonder is what path is safest, what path minimizes the risk that the race stops itself through a disaster (or an evil deed). > and our future > -- depends on it being so. > > I tend to favor a model of our subjective awareness > in > the form of a > tree of the probable, exploring the possible. As > subjective agents, > we are each but the tips of the branches. Looking > back, we see > increasingly thick branches -- increasingly probable > principles -- > describing the "reality" of our subjective branch > converging all the > way back to the thickest branches representing our > most fundamental, > and therefore most general, principles of physics. 
> Looking forward, > we see the growth of increasingly diverse branches > of > the possible, > supported by the probable, to be pruned by natural > selection in ways > consistent with what has gone before, but always > surprising from our > subjective point of view. > > Staying in the Red Queen's race, from any subjective > point of view, > involves the discovery and exploitation of > increasingly effective > configurations -- configurations representing that > with which we > identify: our subjective values -- and increasingly > effective not only > within existing degrees of freedom but in terms of > synergistic > configurations presenting new dimensions of > interaction with the local > environment, the adjacent possible. > > In principle, this is a race of information, > supported > by > configurations of what we currently see as matter. > This reflects on > the question of surveillance and sousveillance -- Sousveillance implies watching "from below", meaning there is someone "above" you, someone who still has more power than you. This is not the only alternative to surveillance. A society is thinkable where there are no governments with any of the advantage in terms of surveillance and overall power that they have today, a society where everybody has equal ability to watch each other, and equal power to stop each other from doing evil. That would not be sousveillance but... equal interveillance? Would you rather have that kind of system than the kind of system we have today? If yes, do you think extropians can increase the probability that mankind will choose and implement such a system globally? If yes, how? > while the tree can > and will branch unpredictably, a fundamental trend > is > toward > increasing information (on both sides). > > We can take heart from the observation that > increasing > convergence on > principles "of what works" supports increasing > divergence of > self-expression "of what may work." If we recognize > this and promote > growth in terms of our evolving values via our > evolving understanding > of principles of "what works", amplified by our > technologies, then we > can hope to stay in the race, even as the race > itself > evolves. If we > would attempt in some way to devise a solution > preserving our present > values, then the race, speeding up exponentially, > would soon pass us > by. > > In short, yes, we can hope to stay in the race, but > as > the race > evolves so must we. Nice word ambiguity! :-) I don't really understand whether you've answered my question, though. Basically, I was wondering what is the best way to minimize the existential threat from technology, in terms of _which_ people should have the right to watch _which_ people, and to what extent, and how, and how it should be governed (if at all) etc. Who stays in the tech race and who doesn't is a decisive factor, but this doesn't automatically mean that you are more likely to survive if you choose to stay in the race as strongly as possible as an individual, as opposed to staying in the race as a part of [society's staying in the race - through the power of governments - against certain people]. You might be better off handing over a lot of power to your government, or you might not. That's the question I want to discuss. There is a race not only between individuals but also between certain individuals and society.
Continuing to equip the governments with much greater resources than most people (i.e., resources for tech research and surveillance of others) (and maybe even discouraging each other from competing with the governments), may be considered one way for us as a society to "stay in the race" against anti-humanity cults, terrorists and dangerously naïve engineers. Another way of "staying in the race", one that excludes that way, would be that each person tries to stay ahead of [everybody else including any governments] as strongly as possible. The former strategy could lead to an Orwellian nightmare; the latter could lead to anarchy with a lot more different wills and a lot more uncontrolled dangerous experimenting with unknown technologies. Which of the two strategies is safer, if the goal is to maximize the survival chances of mankind (and of whatever good it will become and develop), from now to when the singularity comes? Are there better solutions than these two (plus solutions along the spectrum between them), when it comes to the distribution of the power (ability, right, authority) to acquire and control information about others? From jonkc at att.net Mon Jun 25 17:16:15 2007 From: jonkc at att.net (John K Clark) Date: Mon, 25 Jun 2007 13:16:15 -0400 Subject: [ExI] Favorite ~H+ Movies References: <492604.42413.qm@web37402.mail.mud.yahoo.com> Message-ID: <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> The recent movie "The Prestige" had some very strong Transhuman content besides being the best film of any sort made in the last few years. John K Clark From russell.wallace at gmail.com Mon Jun 25 17:39:45 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Mon, 25 Jun 2007 18:39:45 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <492604.42413.qm@web37402.mail.mud.yahoo.com> References: <8d71341e0706250808h13af9d1eg854f7611eb38477e@mail.gmail.com> <492604.42413.qm@web37402.mail.mud.yahoo.com> Message-ID: <8d71341e0706251039g7b4d9e2exa0a6c97b162fc5c7@mail.gmail.com> On 6/25/07, A B wrote: > > The "worst"? C'mon. Surely you believe it's better > than that Spielberg monstrosity? ;) Well okay yes that I am prepared to believe, though I can't confirm it first-hand; I skipped the Spielberg monstrosity after hearing a report from someone who watched it. > "I mean, Logan's Run at least was funny - > > inadvertently, but funny." > > Haven't had the pleasure yet. Do you recommend it? Over a few beers when you're in the mood for a classic "so bad it's funny" B-movie, sure. > "But I, Robot managed that perfect level of badness > > that has no redeeming > > features whatsoever." > > I think it's worth at least a single viewing. Saying > it's the best AI movie in a decade isn't saying it's > a great movie. :-) > Well true, but it isn't saying it's worth a single viewing either :) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gts_2000 at yahoo.com Mon Jun 25 17:29:12 2007 From: gts_2000 at yahoo.com (gts) Date: Mon, 25 Jun 2007 13:29:12 -0400 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> <01e001c7b3d7$4c627550$6501a8c0@homeef7b612677> <021a01c7b49c$95d72030$6501a8c0@homeef7b612677> Message-ID: On Mon, 25 Jun 2007 11:10:39 -0400, Jef Allbright wrote: > Supporting your point, there is no essence of personal identity, but > only perceived identity assigned by any observer in terms of features > salient due to their utility with regard to the dynamics of social > organisms. I don't know if you consider yourself an observer of your own identity, but a little introspection will reveal that you can, at any given moment, identify something concrete about yourself. Namely, you can identify something that you were willing to do in that moment. As you introspect on your will in this present moment, *here* for example, you discover among other things that you were willing to sit at your computer, in a certain chair, and to read and to comprehend this sentence that is just now ending. Of relevance to this thread, your will is always unique. It cannot be shared even by an exact duplicate, because not even an exact duplicate can occupy your space. This is all I meant when I wrote that your will is your essence in any given moment. -gts From jef at jefallbright.net Mon Jun 25 18:37:02 2007 From: jef at jefallbright.net (Jef Allbright) Date: Mon, 25 Jun 2007 11:37:02 -0700 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <251687.30751.qm@web51901.mail.re2.yahoo.com> References: <482359.40787.qm@web51905.mail.re2.yahoo.com> <251687.30751.qm@web51901.mail.re2.yahoo.com> Message-ID: On 6/25/07, TheMan wrote: > Jef Allbright wrote: > > > The cosmic race is simply a fact of nature, > > The _cosmic_ race? You mean the fact that tech races > will go on throughout the universe for eternity anyway, > with or without mankind's participation? Don't you > care whether this particular race going on on Earth > now is going to continue leading to better and better > things forever, or soon stop itself by creating > something that kills us all? My point was that "the race" is simply a fact of nature, "red in tooth and claw", and that rather than framing it as a "nightmare", we might do better to recognize it as it is and do our best to stay ahead. Yes, it's significant that it was going on before, and will continue after, the existence of what we know as humankind. I realize that my writing tends to be very terse, and I compress a lot into metaphor, but how did you possibly get the idea that I might not care? Is this your way of inviting further clarification, or do you really suspect that? I identify as a member of humanity and care very much that our **evolving** values be promoted indefinitely into the future. > > > as > > fundamental as the > > entropic observation that two can move a large mass > > that one can move > > not at all. Whether this is considered a nightmare, > > a > > dream, or > > merely the way things work, is entirely in the mind > > of > > the observer > > but it's worth recognizing that our very existence > > The tech race on our planet is inevitable - until it > stops itself by leading to something that makes us > all extinct. So here we come to the significance of my statement that it is a race within a cosmic context.
There is no "until" -- the race will continue with or without our participation, and this is significant, not because we should care if we're out of the game, but because it helps us understand the rules. > The race can take differing paths, and we should, > at least to some extent, be able to influence what > path it will take, because we are the ones creating > it. Yes, it is critical, from our POV, that we exercise choice as effectively as possible. > What I wonder is what path is safest, what path > minimizes the risk that the race stops itself through > a disaster (or an evil deed). We can never know "the correct" safest path, but we improve our odds to the extent that we apply our best understanding of the deeper principles of the way the universe works, toward promotion of our best understanding of our shared human values, expressed in an increasingly coherent manner. In practical terms, this implies the importance of an increasingly effective cognitive framework for moral decision-making. > > and our future > > -- depends on it being so. > > > > I tend to favor a model of our subjective awareness > > in > > the form of a > > tree of the probable, exploring the possible. As > > subjective agents, > > we are each but the tips of the branches. Looking > > back, we see > > increasingly thick branches -- increasingly probable > > principles -- > > describing the "reality" of our subjective branch > > converging all the > > way back to the thickest branches representing our > > most fundamental, > > and therefore most general, principles of physics. > > Looking forward, > > we see the growth of increasingly diverse branches > > of > > the possible, > > supported by the probable, to be pruned by natural > > selection in ways > > consistent with what has gone before, but always > > surprising from our > > subjective point of view. > > > > Staying in the Red Queen's race, from any subjective > > point of view, > > involves the discovery and exploitation of > > increasingly effective > > configurations -- configurations representing that > > with which we > > identify: our subjective values -- and increasingly > > effective not only > > within existing degrees of freedom but in terms of > > synergistic > > configurations presenting new dimensions of > > interaction with the local > > environment, the adjacent possible. > > > > In principle, this is a race of information, > > supported > > by > > configurations of what we currently see as matter. > > This reflects on > > the question of surveillance and sousveillance -- > > Sousveillance implies watching "from below", meaning > there is someone "above" you, someone who still has > more power than you. This is not the only alternative > to surveillance. A society is thinkable where there > are no governments with any of the advantage in terms > of surveillance and overall power that they have > today, a society where everybody has equal ability to > watch each other, and equal power to stop each other > from doing evil. That would not be sousveillance > but... equal interveillance? Sorry, but again it comes down to information. You're neglecting the ensuing combinatorial explosion and the rapid run-up against the limits of any finite amount of computational resources (a back-of-envelope sketch at the end of this message puts numbers on this). To function, we (and by extension, our machines) must limit our attention; there will always be gradients, and that's a good thing. Gradients make the world go 'round. > Would you rather have that kind of system than the > kind of system we have today?
I passionately desire, and work toward, a system that increases our effective awareness of ourselves and how we can better promote our evolving values. Such a system does not aim for "equality", but rather, growth of opportunity within an increasingly cooperative positive-sum framework. > If yes, do you think extropians can increase the > probability that mankind will choose and implement > such a system globally? If yes, how? I think that the philosophy of extropy has everything to do with increasing our chances. > > while the tree can > > and will branch unpredictably, a fundamental trend > > is > > toward > > increasing information (on both sides.) > > > > We can take heart from the observation that > > increasing > > convergence on > > principles "of what works" supports increasing > > divergence of > > self-expression "of what may work." If we recognize > > this and promote > > growth in terms of our evolving values via our > > evolving understanding > > of principles of "what works", amplified by our > > technologies, then we > > can hope to stay in the race, even as the race > > itself > > evolves. If we > > would attempt in some way to devise a solution > > preserving our present > > values, then the race, speeding up exponentially, > > would soon pass us > > by. > > > > In short, yes, we can hope to stay in the race, but > > as > > the race > > evolves so must we. > > Nice word ambiguity! :-) > > I don't really understand whether you answer my > question though. Sorry that I appear ambiguous; I strive to be as clear and precise as possible, but not more so. TheMan: "How should we travel, to get through the increasingly complex and dangerous territory that lies ahead?" Jef: "We should take an inventory of ourselves and our equipment, and apply ourselves to making a better map as we proceed." TheMan: "No, I mean specifically, how should we go?" Jef: "The best answer to that question is always on our best map." TheMan: "You are so irritatingly vague! Don't you even care about this journey?" > Basically, I was wondering what is > the best way to minimize the existential threat from > technology, in terms of _what_ people should have the > right to watch _what_ people, and to what extent, and > how, and how it should be governed (if at all) etc. My thinking in this regard tends to align with the Proactionary Principle: but I realize you're looking for something more specific. I don't have a specific answer to "what people" should be able to watch "what people." Personally, I tend to like the idea of public cameras on the web watching all public areas. I think this will improve public safety dramatically, and that concerns about privacy will adapt, and that additional unforeseen benefits will arise. > Who stays in the tech race and who doesn't is a > decisive factor, but this doesn't automatically mean > that you are more likely to survive if you choose to > stay in the race as strongly as possible an > individual, as opposed to staying in the race as a > part of [society's staying in the race - through the > power of governments - against certain people]. You > might be better off handing over a lot of power to > your government, or you might not. That's the question > I want to discuss. Personally, I think "government" as we know it will collapse under the weight of its own inconsistencies, and that radical libertarianism will never flourish. I favor a system possibly describable as anarchy within an increasingly cooperative framework. 
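A quick back-of-envelope illustration of the combinatorial point above -- the population figures below are arbitrary assumptions chosen for the example, not anything claimed in this thread: if all n agents watch all the others, the number of watch-relations is n*(n-1), which grows quadratically and soon swamps any finite attention or computing budget.

# Watch-relations in an everybody-watches-everybody scheme.
# The populations are illustrative assumptions only.
for n in (1_000, 1_000_000, 7_000_000_000):
    print(f"{n:>13,} watchers -> {n * (n - 1):.3e} watch-relations")

A village of a thousand is already roughly a million relations; a planetful of watchers is about 5 x 10^19. Hence gradients: attention has to be allocated, not distributed equally.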
- Jef
From mabranu at yahoo.com Mon Jun 25 18:46:28 2007 From: mabranu at yahoo.com (TheMan) Date: Mon, 25 Jun 2007 11:46:28 -0700 (PDT) Subject: [ExI] Can and should happiness be completely liberated from its task of motivating? Message-ID: <319746.50359.qm@web51906.mail.re2.yahoo.com> There may come a time after which we will never again need to be able to feel any pain or fear whatsoever in order to live wisely and avoid bad things. At www.hedweb.com it is suggested that a world without suffering is possible, and that in the future, only gradients of wellbeing will be necessary for motivation. But will even gradients of wellbeing be necessary? Or can we, in the future, as posthumans, become "robots" in the sense that we will, unlike now, never be driven to the slightest extent by any emotions, feelings or anything like that, but by pure intelligence, and still be at least as good at staying alive and developing as we are today (maybe infinitely better?), and at the same time be always enormously happier than today - and always exactly _equally_ happy no matter what happens to us and whatever we do and think? I suppose our happiness level will in the best case scenario go on increasing for ever, as our continuous development will constantly push the limits of what is "maximum possible happiness" for us, by changing our very design again and again. But can [happiness,wellbeing,pleasure,euphoria,bliss], also in that kind of scenario, successfully be completely liberated from its so far essential task of motivating us to act as wisely as possible? Or will a preserved connection between happiness and motivation always make us more fit for survival and further development than a disconnection would?
From austriaaugust at yahoo.com Mon Jun 25 18:47:03 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 25 Jun 2007 11:47:03 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <459132.93630.qm@web37402.mail.mud.yahoo.com> Message-ID: <507541.12433.qm@web37405.mail.mud.yahoo.com> Stathis, I hope you realize that I'm just poking fun at you. :) "A.I." isn't really **that** terrible; and its final 15 minutes or so are somewhat redeemable. I shouldn't have been quite so harsh on it, even if it was all in fun. I will assume that it's going on the list, unless I hear otherwise from you. Best, Jeffrey Herrlich --- A B wrote: > Stathis wrote: > > "What about the movie "A.I."?" > > I hadn't forgotten about that one. But I couldn't in > good conscience recommend it to anyone. ;) I can't > pin > down exactly why I dislike it so much; I think it's > multiple factors. Although I suppose it's > conceivable > that someone else might enjoy it. ;) Should we take > this as a recommendation from you, Stathis? :) > > Sincerely, > > Jeffrey Herrlich > > --- Stathis Papaioannou wrote: > > > On 25/06/07, A B wrote: > > > > > > The recent "I, Robot" starring Will Smith ain't > > bad. > > > Some may gasp that it's "so hollywood", but > > hollywood > > > could have done worse. It gets a recommend from > > me; in > > > fact it's probably the best hollywood "AI" movie > > of > > > the last decade or more, IMO. > > > > What about the movie "A.I."?
> > -- > > Stathis Papaioannou
From thespike at satx.rr.com Mon Jun 25 19:18:31 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 25 Jun 2007 14:18:31 -0500 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> References: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> Message-ID: <7.0.1.0.2.20070625141230.025a8bb8@satx.rr.com> At 10:04 PM 6/25/2007 +0930, Emlyn randomly selected: >http://www.randommutation.com/ This is creepy, man. I ran the starting sentence through 4.6692016091 by 10^16 iterations on my honking iPhone, and it halted at GATTACAGATTACAGATTACAGATTACA. That dang machine waddled away immediately and threw itself into the pool, swimming fast for the fence. I dived in and tried to save it with sheer human pluck, but it had drowned by the time I reached it. Can I sue this guy? Damien Broderick
From benboc at lineone.net Mon Jun 25 19:50:47 2007 From: benboc at lineone.net (ben) Date: Mon, 25 Jun 2007 20:50:47 +0100 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: References: Message-ID: <46801C97.8010705@lineone.net> From: "Mike Dougherty" > > > > Thought I'd throw this one to the wolves. Tear it up! The guy just doesn't understand what evolution is. And he seems to think that his incomprehension somehow means that the theory is untrue. (in fact, he seems determined not to understand it). It's odd that the people who found that bumblebees contradict the laws of aerodynamics didn't conclude that they aren't real, eh? ben z
From pharos at gmail.com Mon Jun 25 20:17:38 2007 From: pharos at gmail.com (BillK) Date: Mon, 25 Jun 2007 21:17:38 +0100 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <46801C97.8010705@lineone.net> References: <46801C97.8010705@lineone.net> Message-ID: On 6/25/07, ben wrote: > The guy just doesn't understand what evolution is. And he seems to think > that his incomprehension somehow means that the theory is untrue. (in > fact, he seems determined not to understand it). > > It's odd that the people who found that bumblebees contradict the laws > of aerodynamics didn't conclude that they aren't real, eh? > He's a creationist. The randommutation generator is part of his site where he 'proves' the universe was created by God and DNA was designed by God. BillK
From spike66 at comcast.net Mon Jun 25 20:20:21 2007 From: spike66 at comcast.net (spike) Date: Mon, 25 Jun 2007 13:20:21 -0700 Subject: [ExI] are spam levels dropping?
In-Reply-To: <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> Message-ID: <200706252030.l5PKUk7R011232@andromeda.ziaspace.com> Reading thru my inbox, it suddenly occurred to me that I am getting far less spam than usual. Today there was nothing, not even those annoying Nigerian scam letters. The automatic filters have stuff in there, but it appears 20 dB down. What changed? I am not complaining at all, any more than I would complain if mosquitoes suddenly disappeared. I am merely wondering if the world finally decided to drop off on the production of that stuff, or if my filters have become more effective, or if my ISP became more effective at filtering it. Did someone have a spam measuring site? spike
From hibbert at mydruthers.com Mon Jun 25 20:15:19 2007 From: hibbert at mydruthers.com (Chris Hibbert) Date: Mon, 25 Jun 2007 13:15:19 -0700 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <7.0.1.0.2.20070625141230.025a8bb8@satx.rr.com> References: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> <7.0.1.0.2.20070625141230.025a8bb8@satx.rr.com> Message-ID: <46802257.7060205@mydruthers.com> I fed randommutation.com the following text: > This is creepy, man. I ran the starting sentence through 4.6692016091 > by 10^16 iterations on my honking iPhone, and it halted at > GATTACAGATTACAGATTACAGATTACA. That dang machine waddled away > immediately and threw itself into the pool, swimming fast for the > fence. I dived in and tried to save it with sheer human pluck, but it > had drowned by the time I reached it. Can I sue this guy? and it only took 200 iterations to produce this: > T2iC Cs JrP-py5q an. l yan the svart9ng sgftnnce xVr5q4h > Hv6V922Pqb9i,7y Tz,N iWJrat,oFs r9 EJ 3vniing9iPhrLRNzandIktlnalDed > ad rATTA2AGATTACAGA08A2IGAYTvCA. Thdy da,g VacNv7D 0aqX9e7 aoay > iMmWdintely aid th7efqitse5h-inKo 1he6ptol7nswiawi.O5Basu foo sqa > funcFlRInKiveiLi5 amd Fsied 9oksav5 it pZth ohee4OWfvan plDcyh b-R 8V > haEzdrownej by tse timcGCYye7ch7A itsPCaZ I B8H4Nhis gu6M The last long word to die was "and". Notice that the only surviving words at this point are one or two letters long. Do you think it's trying to tell us something? Chris -- It is easy to turn an aquarium into fish soup, but not so easy to turn fish soup back into an aquarium. -- Lech Walesa on reverting to a market economy. Chris Hibbert hibbert at mydruthers.com Blog: http://pancrit.org
From randall at randallsquared.com Mon Jun 25 20:50:57 2007 From: randall at randallsquared.com (Randall Randall) Date: Mon, 25 Jun 2007 16:50:57 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> References: <492604.42413.qm@web37402.mail.mud.yahoo.com> <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> Message-ID: <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> On Jun 25, 2007, at 1:16 PM, John K Clark wrote: > The recent movie "The Prestige" had some very strong Transhuman > content besides being the best film of any sort made in the last > few years. If by "Transhuman content" you mean "anti-transhuman content", of the "trying to better yourself will surely end in ruin" variety. -- Randall Randall "You don't help someone by looking at their list of options and eliminating the one they chose!"
-- David Henderson
From msd001 at gmail.com Tue Jun 26 00:55:36 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 25 Jun 2007 20:55:36 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> References: <492604.42413.qm@web37402.mail.mud.yahoo.com> <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> Message-ID: <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> On 6/25/07, Randall Randall wrote: > > The recent movie "The Prestige" had some very strong Transhuman > > content besides being the best film of any sort made in the last > > few years. > > If by "Transhuman content" you mean "anti-transhuman content", > of the "trying to better yourself will surely end in ruin" > variety. I think it's more the "killing an exact clone of yourself" - are you still you, or is your "you" somehow diminished, per the identity thread(s)? If, instead of a physical copy as in Tesla's machine, the clone were your software running on another substrate, would you believe you had the right to terminate the copy if it were unclear who is the 'original'? There's probably some sap in there too for the twins loving the same woman and her not knowing (much like those who are currently 'in love' with another via avatar, without knowledge of the human details behind the persona).
From brent.allsop at comcast.net Tue Jun 26 02:48:55 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Mon, 25 Jun 2007 20:48:55 -0600 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <656463.5813.qm@web37403.mail.mud.yahoo.com> References: <656463.5813.qm@web37403.mail.mud.yahoo.com> Message-ID: <46807E97.5030408@comcast.net> A B, You must not have seen the post about the Canonizer (POV wiki) topic we've created where people are now submitting and supporting their favorite movies? There are now 7 movies submitted by people. I hope we can get some more submissions, Canonized POV information about why people like them, and lots more "support" to get a more meaningful quantitative sample of what movies Transhumanists like and why. Here is the page for those that missed it: https://test.canonizer.com/topic.asp?topic_num=20 Upward, Brent Allsop A B wrote: > After the thread is finally retired, I'll send the > complete compiled list back to Extropy in the form of > another post. I would display the list on my own > website, but I don't have one, yet. I don't want to be > the messenger of doom, let's try to keep it going, I'm > sure there's still many more. :-) > > Sincerely, > > Jeffrey Herrlich > > --- spike wrote: > > >> >> >> Please would someone compile these H+ movie choices? >> Is there a website to >> put them so as to be accessible next time we want to >> rent a movie? >> >> spike >> >> >> >> >>> -----Original Message----- >>> From: extropy-chat-bounces at lists.extropy.org >>> >> [mailto:extropy-chat- >> >>> bounces at lists.extropy.org] On Behalf Of BillK >>> Sent: Saturday, June 23, 2007 1:50 PM >>> To: ExI chat list >>> Subject: Re: [ExI] Favorite ~H+ Movies >>> >>> On 6/23/07, A B wrote: >>> >>>> Whoops. I meant to write: original Director's >>>> >> Cut of >> >>>> "Blade Runner". Just had to nitpick. Some great >>>> >> ... >> >>> 'Dark City' 1998, made a big impression on me...
>>> BillK
From randall at randallsquared.com Tue Jun 26 04:49:48 2007 From: randall at randallsquared.com (Randall Randall) Date: Tue, 26 Jun 2007 00:49:48 -0400 Subject: [ExI] SPOILERS for The Prestige Re: Favorite ~H+ Movies In-Reply-To: <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> References: <492604.42413.qm@web37402.mail.mud.yahoo.com> <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> Message-ID: On Jun 25, 2007, at 8:55 PM, Mike Dougherty wrote: > On 6/25/07, Randall Randall wrote: > > > John Clark wrote: >>> The recent movie "The Prestige" had some very strong Transhuman >>> content besides being the best film of any sort made in the last >>> few years. >> >> If by "Transhuman content" you mean "anti-transhuman content", >> of the "trying to better yourself will surely end in ruin" >> variety. > > I think it's more the "killing an exact clone of yourself" - are you > still you, or is your "you" somehow diminished, per the identity > thread(s)? If, instead of a physical copy as in Tesla's machine, the clone > were your software running on another substrate, would you believe you > had the right to terminate the copy if it were unclear who is the > 'original'? But then, all copies of him end up dead. While I differ with Clark (and you, I assume) about the identity of the copies, I think the point of the film was hubris, in the original Greek sense. The fellow with Tesla's copier had this completely wonderful, amazing device, and chose to use it merely to humiliate his rival, thus showing the audience what a bad person he is, which is why people seem to mostly agree that he "got what was comin' to him" at the end when he was shot. That movie was a tragedy, not uplifting in any sense, that I could see, and with its sole redeeming feature a techno-gadget that didn't really make up for the rest. > There's probably some sap in there too for the twins loving the same > woman and her not knowing (much like those who are currently 'in love' > with another via avatar, without knowledge of the human details behind > the persona) The twins loved different women, which is why each woman thought "he" was cheating. -- Randall Randall "If we have matter duplicators, will each of us be a sovereign and possess a hydrogen bomb?" -- Jerry Pournelle
From mabranu at yahoo.com Tue Jun 26 04:23:31 2007 From: mabranu at yahoo.com (TheMan) Date: Mon, 25 Jun 2007 21:23:31 -0700 (PDT) Subject: [ExI] Re: What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <970633.74919.qm@web51909.mail.re2.yahoo.com> Message-ID: <229074.53683.qm@web51905.mail.re2.yahoo.com> > > Jef Allbright wrote: > > > > > The cosmic race is simply a fact of nature, > > > > The _cosmic_ race?
You mean the fact that tech > races > will go on throughout the universe for eternity > anyway, > with or without mankind's participation? Don't you > care whether this particular race going on on > Earth > now is going to continue leading to better and > better > things for ever, or soon stop itself by creating > something that kills us all? > > My point was that "the race" is simply a fact of > nature, "bloody in > tooth and claw", and that rather than framing it as > a > "nightmare", My use of the word nightmare referred to the threat from the super-hi-tech weapons that may soon be developed and with which one or more of the thousands of ordinary, angry and crazy human beings that exist today may choose to terminate all life on this planet. Of course, technology is great in many ways, but I've got the impression that most extropians tend to focus too much on the boons and underestimate the perils. For example, only a tiny part of Kurzweil's "The Singularity Is Near" is about the perils of the coming technologies; the rest is about the great stuff these technologies can bring us. And when it addresses arguments against the singularity, it addresses mainly arguments like "no, a singularity won't happen because this and that", and very few arguments about the risks. If you can be motivated to better or equally good actions by feeling only excitement and no fear, that's great. I'm just not sure that one will be sufficiently aware of the risks and take sufficient action to diminish them if one doesn't acknowledge the nightmare aspect of the global arms race that seems to be going to get out of control soon. The race is a fact of nature, I agree, but it would proceed even if restricted, just at a slower speed. Just as you point out, the acceleration of change may soon make it impossible for any government, or any other groups or individuals for that matter, to restrict the use of too dangerous technologies (in an ordinary, democratic manner, that is) before it's too late. So if the speed of development could be lowered, it would be safer, and mankind would still reach singularity sooner or later. An Orwellian world despot, with the power to prevent everyone else from experimenting with new technologies, would, statistically, be more careful with what experiments he allows than would the least careful of countless free and eager engineers, cults and terrorists in the world. The kind of society David Brin suggests might have a similar damping effect on the perils of tech development. But in a free society that follows the proactionary principle without a ubiquitous surveillance system for watching out for dangerous use of new technologies, it seems to me that engineers who are less careful (and less morally sensible) than in the former two cases will get to perform experiments. How easy will it be for the good people in the world to always come up with sufficient defenses against nanoweapons, and other supertech, in time before a nanoweapon, or other supertech, terminates us all? Wouldn't it be easier for the mankind-loving people to secure an advantage over the mankind-threatening people if the technological development were slowed down? An Orwellian system might slow it down and thus provide some time, so it might be the best alternative for mankind, even if the leaders of it would not protect mankind for mankind's sake but merely for personal profit. > we > might do better to recognize it as it is and do our > best to stay > ahead.
Yes, it's significant > that it was going on > before, and will > continue after, the existence of what we know as > humankind. > > I realize that my writing tends to be very terse, > and > I compress a lot > into metaphor, but how did you possibly get the idea > that I might not > care? Is this your way of inviting further > clarification, or do you > really suspect that? I suspected it a bit because of your use of the word "cosmic" in the context, but I wasn't sure, so I wanted clarification. If I expressed myself impolitely, my apologies! > I identify as a member of humanity and care very > much > that our > **evolving** values be promoted indefinitely into > the > future. That's good to hear! I agree that it is just as important to evolve as it is to survive, but why evolve uncontrollably fast? I think it would be good to break Moore's law now, if possible, and proceed "slower than natural" during this last bumpy and dangerous bit of the ride toward singularity. Given that the universe will exist forever, why hurry? With infinite time at our disposal, mankind may get to evolve infinitely anyway! You are right that this acceleration toward singularity is a law of nature, but we have tamed nature before in various ways, so why should we not be able to tame this phenomenon and slow it down? It's all up to us. We create the acceleration, so we must be able to temper it too. If we say nature inevitably makes us evolve at an accelerating speed, a person who has committed a crime out of natural feelings of anger could similarly say nature inevitably made him commit the crime. Just because nature is "bloody in tooth and claw" doesn't mean we should let it be that way. > > The tech race on our planet is inevitable - until > it > > stops itself by leading to something that kills us > > all. > > So here we come to the significance of my statement > that it is a race > within a cosmic context. There is no "until" -- the > race will > continue with or without our participation, and this > is significant, > not because we should care if we're out of the game, > but because it > helps us understand the rules. I assume you mean "shouldn't care", not "should care"? > > The race can take differing paths, and we should, > > at least to some extent, be able to influence what > > path it will take, because we are the ones > creating > > it. > > Yes, it is critical, from our POV, that we exercise > choice as > effectively as possible. Good to hear. > > What I wonder is what path is safest, what path > > minimizes the risk that the race stops itself > through > > a disaster (or an evil deed). > > We can never know "the correct" safest path, but we > improve our odds > to the extent that we apply our best understanding > of > the deeper > principles of the way the universe works, toward > promotion of our best > understanding of our shared human values, expressed > in > an increasingly > coherent manner. In practical terms, this implies > the > importance of > an increasingly effective cognitive framework for > moral > decision-making. That sounds like a never-ending process. The time that would take might be better used in trying to figure out how best to prevent mankind-threatening acts of terror from taking place. Mankind's situation today can be compared to that of a chess player who 1) is forced to play a game of chess to stay alive, 2) only needs a draw in order to stay alive, and 3) in that game now has very little time left until the next time control.
In such a case, there is no need to try to win, you only need to avoid losing. If in that situation you can make an advance that would improve your chances of winning, but at the same time open up around your king so that your opponent can more easily attack it, you would be much better off not advancing - because a draw is enough for you. This is a good analogy because mankind doesn't need to advance faster than is necessary for survival. The admittedly very sad fact that hunger and global warming may become huge problems for hundreds of millions of people soon if we don't invent technology that can solve those problems in time still doesn't threaten mankind's survival. Future individuals will be much more numerous than the present (if mankind survives), and given utilitarianism, we should therefore give that larger number of future individuals much higher priority than the smaller number of currently existing individuals. > > Sousveillance implies watching "from below", > meaning > > there is someone "above" you, someone who still > has > > more power than you. This is not the only > alternative > > to surveillance. A society is thinkable where > there > > are no governments with any of the advantage in > terms > > of surveillance and overall power that they have > > today, a society where everybody has equal ability > to > > watch each other, and equal power to stop each > other > > from doing evil. That would not be sousveillance > > but... equal interveillance? > > Sorry, but again it comes down to information. > You're > neglecting the > ensuing combinatorial explosion and the rapid run-up > against the > limits of any finite amount of computational > resources. To function, > we (and by extension, our machines) must limit our > attention, there > will always be gradients, and that's a good thing. > Gradients make the > world go 'round. You mean gradients in the sense that everybody can't have equal power? Of course, everybody can't have equal power, but it might be good to choose a system that gets closer to total equality than other systems, even if it doesn't reach it. > > Would you rather have that kind of system than the > > kind of system we have today? > > I passionately desire, and work toward, a system > that > increases our > effective awareness of ourselves and how we can > better > promote our > evolving values. Such a system does not aim for > "equality", but > rather, growth of opportunity within an increasingly > cooperative > positive-sum framework. I don't see equality as a morally good thing in and of itself. If I were to choose between a society of billions of people where everybody is happy except one person who is terribly miserable, and a society where everybody is mid-way between happy and miserable, I would choose the former without hesitation, because it contains a much larger total sum of happiness, and I wouldn't care that it is a lot less equal. But in the case of the coming development of ever more dangerous technology, a more equal distribution of power in society would probably have great _instrumental_ value. The more people who have as much power as the most powerful, the more people there are who are able to intervene, when necessary and in time, against those among the most powerful who become dangerous to all of us.
If power to intervene is less equally distributed among the people, fewer people will be able to intervene against the most powerful, and this means a smaller statistical probability that at least some people will do it in time to stop the extinction of all - that is, in an era where dangerous things can happen extremely fast. > > > while the tree can > > > and will branch unpredictably, a fundamental > trend > > > is > > > toward > > > increasing information (on both sides.) > > > > > > We can take heart from the observation that > > > increasing > > > convergence on > > > principles "of what works" supports increasing > > > divergence of > > > self-expression "of what may work." If we > recognize > > > this and promote > > > growth in terms of our evolving values via our > > > evolving understanding > > > of principles of "what works", amplified by our > > > technologies, then we > > > can hope to stay in the race, even as the race > > > itself > > > evolves. If we > > > would attempt in some way to devise a solution > > > preserving our present > > > values, then the race, speeding up > exponentially, > > > would soon pass us > > > by. But we ARE the race! So, by choosing values, we choose where the race goes. Of course, this is true only if everybody is forced to go the same way. That's where ubiquitous surveillance comes in. Not only an Orwellian solution, but also David Brin's everybody-watches-everybody solution seems to offer a way for mankind to force every one of its members to go the same way. Once that kind of system is established, we can collectively slow down development to a moderate speed that allows us to always be able to prevent each other from doing something that threatens the existence of us all. Singularity would be reached with that system too, only much later. But there is plenty of time. This could be compared to playing a game of chess where you are allowed to take months to come up with each move. That kind of playing speed means a lesser risk of losing than a blitz game does. And not losing is all we have to care about. Development happens by itself anyway. (You may be less eager to slow down the tempo of mankind's development because you think that we as individuals will not live for ever if singularity is delayed so much that it doesn't happen within our lifetime. But if future posthumans live for ever, thanks to your saving mankind, they will sooner or later happen to create an exact copy of you, whether they get to know that someone like you ever existed or just happen to create such a being at random (as their [I don't know, something like 10^10^10^10^10^10^10^10^10^10^10]th experiment). So it may be egoistically rational to sacrifice one's life for mankind's survival by slowing down the dangerous speed of development.) > > > > > > In short, yes, we can hope to stay in the race, > but > > > as > > > the race > > > evolves so must we. > > > > Nice word ambiguity! :-) > > > > I don't really understand whether you answer my > > question though. > > Sorry that I appear ambiguous; I strive to be as > clear > and precise as > possible, but not more so. So I suppose you only meant the race as in competition, not the human race. I thought your unintentional word ambiguity opened up an interesting interpretation though. > TheMan: "How should we travel, to get through the > increasingly complex > and dangerous territory that lies ahead?"
> > Jef: "We should take an inventory of ourselves and > our > equipment, and > apply ourselves to making a better map as we > proceed." > > TheMan: "No, I mean specifically, how should we go?" > > Jef: "The best answer to that question is always on > our best map." > > TheMan: "You are so irritatingly vague! Don't you > even care about > this journey?" That was clarifying! Thanks! Yes, that's how I was feeling. So, your studying the map has, so far, led you to the conclusion that we need not worry about how any ubiquitous surveillance in the near future should be administered and how the power to exercise it should be distributed? What on the map has led you to that conclusion? To me, that conclusion seems to rest on the implied assumption that there is very little risk that any individual or group using nanotechnology and other powerful technologies will ever threaten the survival of mankind - or that we can do nothing to alter that probability, other than by promoting the very force of nature that will create that threat in the first place. What on the map makes you come to that conclusion? Designing a solution to guard mankind against [extinction due to evil or careless use of technology during the coming decades] does not necessarily exclude the process of continuing to learn how to read the map. I'm thinking that if we have the wrong map in the sense that, for example, the common belief that we will die if mankind is wiped out by misuse of technology is mistaken because we will live for ever thanks to the infinite number of copies of us in the universe, we still probably don't have anything to lose by acting as if our personal survival depends on mankind's survival. So even if our current map _may_ be wrong, we may have nothing to lose and everything to win by assuming it's right. > > Basically, I was wondering what is > > the best way to minimize the existential threat > from > > technology, in terms of _what_ people should have > the > > right to watch _what_ people, and to what extent, > and > > how, and how it should be governed (if at all) > etc. > > My thinking in this regard tends to align with the > Proactionary Principle: > > but I realize you're looking for something more > specific. Let's have a look at the first maxim of the Proactionary Principle: "1. Freedom to innovate: Our freedom to innovate technologically is valuable to humanity. The burden of proof therefore belongs to those who propose restrictive measures. All proposed measures should be closely scrutinized." From the fact that our freedom to innovate technologically is valuable to humanity, it does not necessarily follow that the burden of proof belongs to those who propose restrictive measures. There are many things that are valuable to humanity, and I would say freedom to innovate is not the most valuable one of them. I would say survival and well-being are more valuable than freedom to innovate. Well-being is intrinsically valuable, whereas freedom to innovate is only instrumentally valuable (it is only valuable to the extent that it contributes to or creates well-being). And survival is a more necessary condition for our well-being than freedom to invent is. Therefore, to the extent that our survival is threatened by people's freedom to innovate, our survival should be put first. Furthermore, "people's freedom to innovate" seems to be erroneously thought to support only one side in the discussion. An Orwellian Totalitarian World Government (OTWG for short) would be one thing that could limit that freedom.
Another one would be the extinction of mankind, something that may happen as a result of the lack of an OTWG. If an OTWG is the only thing that can prevent the extinction of mankind, it is unfair to focus mainly on the fact that an _OTWG_ would limit people's freedom to innovate - as if the extinction of mankind would not! Obviously, it is conceivable that mankind may survive even without an OTWG, but why take the risk, since an OTWG would provide freedom to innovate in the long run (after reaching singularity, at the very least)? You may alternatively replace OTWG above with an extreme version of David Brin's suggestion, a society where everybody watches everybody very closely. Why should the burden of proof belong to those who want to secure people's _long_ term freedom to innovate (by protecting mankind from extinction by controlling people a lot), rather than to those who want to maximize people's _short_ term freedom to innovate (by not controlling people so much)? Both sides try to maximize people's freedom to innovate, only on different time scales. So both sides could say they simply obey the first and most important maxim of the Proactionary Principle. > I don't have a specific answer to "what people" > should > be able to > watch "what people." Personally, I tend to like the > idea of public > cameras on the web watching all public areas. > I think this will > improve public safety dramatically, If cameras only watch public areas, they won't prevent crimes committed from people's homes with, for example, remote-controlled nanorobots. They will be pretty useless in preventing the extinction of mankind. Sure, they will prevent crimes, but I don't think increased surveillance is justified if the objective is simply to decrease the number of crimes, even if that can save a great many lives. I think increased surveillance is justified only if it protects mankind from extinction, but in that case, on the other hand, I think it is infinitely justified. What the objective is makes all the difference. We need more surveillance, but for the right reasons. We need less of the surveillance that is now taking place for the wrong reasons. Today's surveillance probably creates more suffering (for example by scaring innocent people with controversial political opinions into silence, and thereby helping the wrong kinds of politicians keep their power) than it decreases suffering (by reducing crime). When surveillance becomes necessary for mankind's survival, however, it will be, well, necessary. (Possibly, the surveillance of today, which I call unjustified, may turn out to have been important as a way to get people used to the idea of being watched all the time, in time before surveillance really becomes important. On the other hand, it may also make people less aware of the importance of watching the watchers.) > and that > concerns > about privacy > will adapt, Of course, people adapt to just about anything with time. You can make a person adapt to being tortured. That doesn't justify torture. > and that additional unforeseen benefits > will arise. I would say any benefits are irrelevant as long as they don't decrease the risk of extinction of mankind. But maybe they will decrease that risk. Only then are they relevant. > You > > might be better off handing over a lot of power to > > your government, or you might not. That's the > question > > I want to discuss. > > Personally, I think "government" as we know it will > collapse under the > weight of its own inconsistencies, What inconsistencies?
The trend I see is the opposite - governments getting more and more power by being given more and more power to watch and control people. Governments may be inefficient in proportion to the huge resources they have at their disposal, but they still have such great resources - and authority - that they are still far more powerful than most people, organizations and companies. Why would the tech race change any of that? The governments even lead the tech race, don't they? At least the US government? If the major governments in the world collapse, it's because mankind collapses. I can see no other plausible cause. When terrorists start using nanoweapons to kill millions of people in seconds, whom will the people ask for help? Of course their governments. If governments' bureaucracy turns out to be too slow to have a chance against mankind-threatening terrorism, governments will say to their people "Ok, now terrorists have become so much more agile than us, because of today's insane acceleration in technology development, and because our heavy bureaucracy is too slow in this situation, we have to skip all the bureaucracy, take fast action without wasting any time on anchoring it democratically, and profoundly change the constitution so that we can do whatever is necessary immediately when it is necessary. We assume you accept that we go 'totalitarian' in this exceptional situation. The alternative is to let the terrorists commit mass murder. Which do you choose?" What do you think people will choose in that situation? Governments all over the world will also be increasingly forced to cooperate with each other in order to combat nanoterrorism and the like, so soon a world government will be installed. And it will use the above arguments in order to be allowed to go totalitarian. Once a totalitarian world government is installed, I think it will be very hard to remove. (Which may be a good thing or not.) Governments will also convince their people that they need to put a huge amount of tax money into anti-nano-weapon research to combat nanoterrorism. That way, they will continue being the leaders of the tech race. I would be very interested to hear why you think governments will collapse. Oh, you mean when singularity happens? That's different. I guess there is no way to predict what will happen then. But I'm talking about a nearer future. Personally, I think all the dangers will disappear when we reach singularity, because then we will all become one - an individual so much wiser than us that it would be foolish of anyone of us today to predict anything about its risk of going extinct. I think the existential risks are going to be worst a couple of years before singularity (or even right before it), and increasingly worse from now up until then. > I favor a system possibly > describable as anarchy > within an increasingly cooperative framework. That sounds compatible with David Brin's suggestion.
From scerir at libero.it Tue Jun 26 06:32:25 2007 From: scerir at libero.it (scerir) Date: Tue, 26 Jun 2007 08:32:25 +0200 Subject: [ExI] are spam levels dropping? References: <200706252030.l5PKUk7R011232@andromeda.ziaspace.com> Message-ID: <000401c7b7bb$c3a05320$1d971f97@archimede> > Did someone have a spam measuring site? > spike http://www.spam-o-meter.com/ ???
From thomas at thomasoliver.net Tue Jun 26 06:41:52 2007 From: thomas at thomasoliver.net (Thomas Oliver) Date: Mon, 25 Jun 2007 23:41:52 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID: Jef Allbright wrote: > Uh...thanks, I think. But rather than "reducing [my] compulsion" to > increase the coherence and extensibility of our thinking on these > topics -- haven't you actually reinforced it? ;-) I guess this highlights our divergent views on compulsion. I discriminate between voluntary response and obligation-driven compulsion. I encourage you to respond to "seeking minds," but not to feel personally obligated to provide me with the "higher level" view. > A band of musicians playing together can be greater than the sum of > the musicians playing separately. But that greater cooperative act is > the result of smaller acts of competition, selecting for arrangements > that work together at the higher level. We call the selection process "musical chairs" and the cooperative act "concert." I know dueling soloists can be great fun. That became stock fare for the common audience centuries ago. I indulge in "trading fours" at every blues jam where I find a competent competitor. My favorite capping tactic involves anticipating what my opponent will play and harmonizing instead of attempting to outdo her. It sounds so much better than wrangling for dominance. I think Gordon challenged your chair with his "will as essence," but failed the audition when he abdicated intellect. I don't see animal drives as the essence of human nature. I don't see lagging self-awareness as all that relevant to human agency. I still have a little trouble with agents acting for a "whatever type entity" since that grants personhood status to all manner of abstract beings, some of which lack individuality. I never met a corporation with a personality of its own. How do we handle agents posing as entities? My suggestion: Abolish collective (and thus covert) title. Extend this! -- Thomas
From eugen at leitl.org Tue Jun 26 06:53:49 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 26 Jun 2007 08:53:49 +0200 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <46802257.7060205@mydruthers.com> References: <710b78fc0706250534t609b6a55m43e7661b270390fb@mail.gmail.com> <7.0.1.0.2.20070625141230.025a8bb8@satx.rr.com> <46802257.7060205@mydruthers.com> Message-ID: <20070626065349.GX17691@leitl.org> On Mon, Jun 25, 2007 at 01:15:19PM -0700, Chris Hibbert wrote: > The last long word to die was "and". Notice that the only surviving > words at this point are one or two letters long. Do you think it's > trying to tell us something? It's trying to tell us that the man forgot about selection, which is the opposite of mutation. And, no, anti-evolution kooks aren't cute.
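To see how much work that missing selection step does, here is a minimal sketch in the spirit of Dawkins' "weasel" demonstration -- the target string, mutation rate and litter size are arbitrary choices for illustration, not the randommutation.com applet's code:

import random
import string

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def mutate(text, rate=0.05):
    # Replace each character with a random one, with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in text)

def fitness(text):
    # Count the positions that already match the target.
    return sum(a == b for a, b in zip(text, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # The step the applet leaves out: breed a litter of mutants and
    # keep the fittest (the parent competes too, so fitness never drops).
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
print(parent, "after", generations, "generations")

Drop the selection in the last line of the loop and you get exactly the decay Chris observed; keep it and the string typically locks onto the target within a few hundred generations.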
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
From eugen at leitl.org Tue Jun 26 07:24:45 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 26 Jun 2007 09:24:45 +0200 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> References: <492604.42413.qm@web37402.mail.mud.yahoo.com> <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> Message-ID: <20070626072445.GE17691@leitl.org> On Mon, Jun 25, 2007 at 08:55:36PM -0400, Mike Dougherty wrote: > I think it's more the "killing an exact clone of yourself" - are you You can't make an exact clone of yourself. You certainly couldn't keep it synchronized, so it would become two people. > still you, or is your "you" somehow diminished, per the identity > thread(s)? If, instead of a physical copy as in Tesla's machine, the clone > were your software running on another substrate, would you believe you > had the right to terminate the copy if it were unclear who is the > 'original'? > > There's probably some sap in there too for the twins loving the same > woman and her not knowing (much like those who are currently 'in love' > with another via avatar, without knowledge of the human details behind > the persona) I am about to rent the movie. How about some spoiler warning!? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
From stathisp at gmail.com Tue Jun 26 10:50:41 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 26 Jun 2007 20:50:41 +1000 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? (A mysterious ">" sneaked in!) In-Reply-To: <465399.96454.qm@web51906.mail.re2.yahoo.com> References: <465399.96454.qm@web51906.mail.re2.yahoo.com> Message-ID: On 26/06/07, TheMan wrote: > There is a considerable risk that governments, even in > the most democratic countries in the world, will abuse > that kind of omnipresent, automated surveillance > system when they have it installed, and become > dictators, using their surveillance system to detect > any opposition in time to snuff it, and to frighten > people so that very little opposition even occurs. > Power corrupts; total power corrupts totally. Surveillance technology has been available for decades that would allow governments to spy on anyone at any time. They could just pass a law mandating that every home is to be bugged, and that anyone caught interfering with the bugs goes straight to prison. Even if it were not a technically flawless solution, it would have to be better than no bugs at all. Yet even extremely repressive regimes which have probably fantasised about such a system have refrained from implementing it. Why? -- Stathis Papaioannou
From avantguardian2020 at yahoo.com Tue Jun 26 10:44:58 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Tue, 26 Jun 2007 03:44:58 -0700 (PDT) Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <20070626065349.GX17691@leitl.org> Message-ID: <349549.75534.qm@web60525.mail.yahoo.com> The forces of ignorance threaten the world again, I see.
In his inane FAQ at http://www.randommutation.com/darwinianevolution.htm#faq Perry Marshall (the guy who brought you the hi-tech random mutation applet) writes: ------- begin quote Q: Is your theory about Random Mutation (that it does not contribute to evolutionary progress at all) Falsifiable? A: Yes. Here's how to falsify it: Show me just one paper, book or experiment anywhere in the history of biology that empirically demonstrates and proves that random mutation of DNA produces novel adaptive features (eyes, wings, legs, functional organs); and that the mutations that produced those features were in fact random. (And not Mobile Genetic Elements or some other systematic process.) Again I repeat: In 150 years of research on this subject, there is not a single peer-reviewed paper, book or experiment that demonstrates this to be true. Not one. No one who considers themselves a skeptic or a scientifically literate person can believe this to be true and still be consistent with skepticism and scientific literacy. --------- end quote How about this, Perry? http://www.pnas.org/cgi/content/abstract/85/23/9114 ---------Begin quote Evidence that a Point Mutation in Dihydrofolate Reductase-Thymidylate Synthase Confers Resistance to Pyrimethamine in Falciparum Malaria David S. Peterson, David Walliker, and Thomas E. Wellems Analysis of a genetic cross of Plasmodium falciparum and of independent parasite isolates from Southeast Asia, Africa, and South America indicates that resistance to pyrimethamine, an antifolate used in the treatment of malaria, results from point mutations in the gene encoding dihydrofolate reductase-thymidylate synthase (EC 1.5.1.3 and EC 2.1.1.45, respectively). Parasites having a mutation from Thr-108/Ser-108 to Asn-108 in DHFR-TS are resistant to the drug. The Asn-108 mutation occurs in a region analogous to the C α-helix bordering the active site cavity of bacterial, avian, and mammalian enzymes. Additional point mutations (Asn-51 to Ile-51 and Cys-59 to Arg-59) are associated with increased pyrimethamine resistance and also occur at sites expected to border the active site cavity. Analogies with known inhibitor/enzyme structures from other organisms suggest that the point mutations occur where pyrimethamine contacts the enzyme and may act by inhibiting binding of the drug. --------end quote So a "random mutation" confers drug resistance upon the malaria parasite. That is a novel adaptive trait, isn't it? On a lighter note, indulging Mr. Marshall's game of evolution of text by mutation, with "meaning" analogizing fitness or whatever foolishness he has in mind, consider this challenge: One can evolve from APE to MAN with 9 mutational steps, treating all mutants not in the dictionary as non-viable abominations that die in the womb. APE > ??? > ??? > ??? > ??? > ??? > ??? > ??? > ??? > MAN Interestingly enough, to evolve from MAN to GOD requires only 3 further mutations. MAN > ??? > ??? > GOD So who would have thought there was a 12-step program to go from ape to god? ;-)
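The word-ladder challenge is mechanical to check: treat a single-letter change as a mutation and dictionary membership as viability, then breadth-first search for the shortest ladder. A minimal sketch -- the tiny inline word list is a stand-in assumption; against a full dictionary the ladders may well come out shorter than nine steps:

from collections import deque
import string

# Tiny stand-in dictionary (an assumption for the demo; a real word
# list would be loaded from a file).
WORDS = {"APE", "APT", "OPT", "OAT", "MAT", "MAN", "MAD", "GAD", "GOD"}

def ladder(start, goal):
    # Breadth-first search over single-letter mutations; any mutant
    # not in WORDS is non-viable and never enqueued.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for letter in string.ascii_uppercase:
                mutant = word[:i] + letter + word[i + 1:]
                if mutant in WORDS and mutant not in seen:
                    seen.add(mutant)
                    queue.append(path + [mutant])
    return None

print(" > ".join(ladder("APE", "MAN")))
print(" > ".join(ladder("MAN", "GOD")))

With this toy list it prints APE > APT > OPT > OAT > MAT > MAN and MAN > MAD > GAD > GOD; how many steps "the" answer takes depends entirely on which dictionary counts.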
Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb
From stathisp at gmail.com Tue Jun 26 11:50:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 26 Jun 2007 21:50:35 +1000 Subject: [ExI] Can and should happiness be completely liberated from its task of motivating? In-Reply-To: <319746.50359.qm@web51906.mail.re2.yahoo.com> References: <319746.50359.qm@web51906.mail.re2.yahoo.com> Message-ID: On 26/06/07, TheMan wrote: > Or can we, in the future, as posthumans, become > "robots" in the sense that we will, unlike now, never > be driven to the slightest extent by any emotions, > feelings or anything like that, but by pure > intelligence, and still be at least as good at staying > alive and developing as we are today (maybe infinitely > better?), and at the same time be always enormously > happier than today - and always exactly _equally_ > happy no matter what happens to us and whatever we do > and think? > > I suppose our happiness level will in the best case > scenario go on increasing for ever, as our continuous > development will constantly push the limits of what is > "maximum possible happiness" for us, by changing our > very design again and again. But can > [happiness,wellbeing,pleasure,euphoria,bliss], also in > that kind of scenario, successfully be completely > liberated from its so far essential task of motivating > us to act as wisely as possible? Or will a preserved > connection between happiness and motivation always > make us more fit for survival and further development > than a disconnection would? Since there is no necessary connection between intelligence and emotion (at the very least, no connection between intelligence and a particular quality or quantity of emotion) I see no reason why such a mind could not spin off subprocesses to take care of the goals it considers important while the happiness centres are maximally stimulated. This is in direct analogy with the idea that humans can create an AI to serve them, while they sit around enjoying themselves. The counterargument is that this slave AI, or the housekeeping and research branch of an integrated AI, would break off on its own, and perhaps kill the useless freeloaders. -- Stathis Papaioannou
From stathisp at gmail.com Tue Jun 26 12:32:16 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 26 Jun 2007 22:32:16 +1000 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <507541.12433.qm@web37405.mail.mud.yahoo.com> References: <459132.93630.qm@web37402.mail.mud.yahoo.com> <507541.12433.qm@web37405.mail.mud.yahoo.com> Message-ID: On 26/06/07, A B wrote: > Stathis, > > I hope you realize that I'm just poking fun at you. :) > "A.I." isn't really **that** terrible; and its final > 15 minutes or so are somewhat redeemable. I shouldn't > have been quite so harsh on it, even if it was all in > fun. I will assume that it's going on the list, unless > I hear otherwise from you. It is a bit sentimental, but off the top of my head I can't think of another movie that focuses on the machines-are-people-too idea to the same extent, surely one of the more important issues in transhumanist thought.
-- Stathis Papaioannou

From msd001 at gmail.com Tue Jun 26 12:48:43 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 08:48:43 -0400 Subject: [ExI] SPOILERS for The Prestige Re: Favorite ~H+ Movies In-Reply-To: References: <492604.42413.qm@web37402.mail.mud.yahoo.com> <00e401c7b74c$ac226040$7f0a4e0c@MyComputer> <2715134D-681F-4E8D-BFD4-9E83C8B1F364@randallsquared.com> <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> Message-ID: <62c14240706260548q48d3cfa7ie14ff79410b7d455@mail.gmail.com>

On 6/26/07, Randall Randall wrote:

> But then, all copies of him end up dead. > > While I differ with Clark (and you, I assume) about > the identity of the copies, I think the point of the > film was hubris, in the original Greek sense. The > fellow with Tesla's copier had this completely > wonderful, amazing device, and chose to use it merely > to humiliate his rival, thus showing the audience > what a bad person he is, which is why people seem to > mostly agree that he "got what was comin' to him" at > the end when he was shot. That movie was a tragedy, > not uplifting in any sense, that I could see, and > with its sole redeeming feature a techno-gadget that > didn't really make up for the rest.

It was an amazing device, but not wonderful. It would have been a different movie, though, if Tesla's copier had the same copy-of-a-copy defect as in Multiplicity, where each clone got progressively more glitchy and stupid. :)

I don't really know what my position is on the identity of the copies or their "rights". I do get the hubris and vanity point though, to the effect that in the end the guy has spent his whole life with nothing to show for it (well, other than the warehouse full of drowned clones).

> The twins loved different women, which is why each > woman thought "he" was cheating.

True. I meant the fact that the one who did not love the wife was forced to be married by virtue of the shared life they were living. I thought the scene where the kid cried over the bird in the cage ("he killed that bird's brother") was an interesting bit of foreshadowing.

From msd001 at gmail.com Tue Jun 26 12:54:11 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 08:54:11 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: References: <459132.93630.qm@web37402.mail.mud.yahoo.com> <507541.12433.qm@web37405.mail.mud.yahoo.com> Message-ID: <62c14240706260554u343ce9ep785b3a77577c865e@mail.gmail.com>

On 6/26/07, Stathis Papaioannou wrote: > It is a bit sentimental, but off the top of my head I can't think of > another movie that focuses on the machines-are-people-too idea to the > same extent, surely one of the more important issues in transhumanist > thought.

I think the machine as a person was treated fairly well by the character of Data in Star Trek:TNG.

From austriaaugust at yahoo.com Tue Jun 26 14:02:44 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 26 Jun 2007 07:02:44 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: Message-ID: <781733.98154.qm@web37408.mail.mud.yahoo.com>

Yeah, it does have some expository value. I was in quite the quirky mood yesterday morning. Way too much caffeine? Quite possibly. In my own defense, I honestly did believe that you were being somewhat tongue-in-cheek, and I was just playing around. All the same though, I shouldn't have been so dramatically harsh on the movie. I hope you weren't offended.
Sincerely, Jeffrey Herrlich

--- Stathis Papaioannou wrote: > On 26/06/07, A B wrote: > > Stathis, > > > > I hope you realize that I'm just poking fun at > you. :) > > "A.I." isn't really **that** terrible; and its > final > > 15 minutes or so are somewhat redeemable. I > shouldn't > > have been quite so harsh on it, even if it was all > in > > fun. I will assume that it's going on the list, > unless > > I hear otherwise from you. > > It is a bit sentimental, but off the top of my head > I can't think of > another movie that focuses on the > machines-are-people-too idea to the > same extent, surely one of the more important issues > in transhumanist > thought. > > > > -- > Stathis Papaioannou

From spike66 at comcast.net Tue Jun 26 14:23:34 2007 From: spike66 at comcast.net (spike) Date: Tue, 26 Jun 2007 07:23:34 -0700 Subject: [ExI] are spam levels dropping? In-Reply-To: <000401c7b7bb$c3a05320$1d971f97@archimede> Message-ID: <200706261433.l5QEXkG4025168@andromeda.ziaspace.com>

That's it, thanks!

So are they saying that nearly 90% of internet traffic is spam?

spike

> -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of scerir > Sent: Monday, June 25, 2007 11:32 PM > To: ExI chat list > Subject: Re: [ExI] are spam levels dropping? > > > Did someone have a spam measuring site? > > spike > > http://www.spam-o-meter.com/ ???

From austriaaugust at yahoo.com Tue Jun 26 14:16:35 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 26 Jun 2007 07:16:35 -0700 (PDT) Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <46807E97.5030408@comcast.net> Message-ID: <689666.15732.qm@web37409.mail.mud.yahoo.com>

Hi Brent,

I really think that the Canonizer is a great idea of yours, and it really does have the potential to make things a lot more efficient. I'll make an entry in the next day or two. If you don't mind, I'd also like to send a simple compilation back to Extropy. It'll be good to have the list in at least two places, even if the Canonizer will contain much more support material and weightings.

Best,

Jeffrey Herrlich

PS. Just in general folks, we should probably start using Spoiler Warnings for all our recommendations if we are going to post significant details about the plots. It might be better just to refrain from posting details. But whatever, that's just my 2 cents.

--- Brent Allsop wrote: > > A B, > > You must not have seen the post about the Canonizer > (POV wiki) topic > we've created where people are now submitting and > supporting their > favorite movies? > > There are now 7 movies submitted by people. I hope > we can get some more > submissions, Canonized POV information about why > people like them, and > lots more "support" to get a more meaningful > quantitative sample of what > movies Transhumanists like and why.
> > Here is the page for those that missed it: > > https://test.canonizer.com/topic.asp?topic_num=20 > > Upward, > > Brent Allsop

From spike66 at comcast.net Tue Jun 26 14:40:19 2007 From: spike66 at comcast.net (spike) Date: Tue, 26 Jun 2007 07:40:19 -0700 Subject: [ExI] cruise out of germany In-Reply-To: <000401c7b7bb$c3a05320$1d971f97@archimede> Message-ID: <200706261450.l5QEoV33004486@andromeda.ziaspace.com>

Check this:

http://www.cnn.com/2007/SHOWBIZ/Movies/06/25/cruise.germany.reut/index.html

I recall vaguely seeing a very unflattering story about $cientology on 60 Minutes back in the 70s. Has anyone here any contacts at CBS? Perhaps they would want to do a story about Keith Henson. spike

From austriaaugust at yahoo.com Tue Jun 26 14:30:48 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 26 Jun 2007 07:30:48 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: Message-ID: <102492.35814.qm@web37413.mail.mud.yahoo.com>

"Why?"

Cost. The cost-to-benefit ratio is too high. If you have enough big guns and big bodyguards, then it doesn't really matter if most of your impoverished population doesn't like you.

Sincerely, Jeffrey Herrlich

--- Stathis Papaioannou wrote: > On 26/06/07, TheMan wrote: > > > There is a considerable risk that governments, > even in > > the most democratic countries in the world, will > abuse > > that kind of omnipresent, automated surveillance > > system when they have it installed, and become > > dictators, using their surveillance system to > detect > > any opposition in time to snuff it, and to > frighten > > people so that very little opposition even occurs. > > Power corrupts; total power corrupts totally. > > Surveillance technology has been available for > decades that would > allow governments to spy on anyone at any time. They > could just pass a > law mandating that every home is to be bugged, and > that anyone caught > interfering with the bugs goes straight to prison. > Even if it were not > a technically flawless solution, it would have to be > better than no > bugs at all. Yet even extremely repressive regimes > which have probably > fantasised about such a system have refrained from > implementing it. > Why? > > > > -- > Stathis Papaioannou

From gts_2000 at yahoo.com Tue Jun 26 14:34:01 2007 From: gts_2000 at yahoo.com (gts) Date: Tue, 26 Jun 2007 10:34:01 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

On Tue, 26 Jun 2007 02:41:52 -0400, Thomas Oliver wrote:

> I think Gordon challenged your chair with his "will as essence," but > failed the audition when he abdicated intellect. I don't see animal > drives as the essence of human nature.

When did I abdicate the intellect or mention animal drives??
-gts

From torsteinhaldorsen at gmail.com Tue Jun 26 15:42:51 2007 From: torsteinhaldorsen at gmail.com (Torstein Haldorsen) Date: Tue, 26 Jun 2007 17:42:51 +0200 Subject: [ExI] are spam levels dropping? In-Reply-To: <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> References: <000401c7b7bb$c3a05320$1d971f97@archimede> <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> Message-ID:

90 percent of e-mail traffic makes more sense... ;)

-Torstein

On 6/26/07, spike wrote: > > That's it, thanks! > > So are they saying that nearly 90% of internet traffic is spam? > > spike

From jef at jefallbright.net Tue Jun 26 15:49:15 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 08:49:15 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

On Tue, 26 Jun 2007 02:41:52 -0400, Thomas Oliver wrote: > I think Gordon challenged your chair with his "will as essence," but > failed the audition when he abdicated intellect. I don't see animal > drives as the essence of human nature.

Funny that you say so. To the contrary, it seems to me that Gordon consistently asserts the primacy of intellect, while I try to show that it's better modeled as a (particularly powerful) function of the organism.

Further, it seems quite clear (to me) that Gordon didn't challenge me, so much as use my statement(s) as a springboard to display his erudition.

- Jef

From gts_2000 at yahoo.com Tue Jun 26 16:03:28 2007 From: gts_2000 at yahoo.com (gts) Date: Tue, 26 Jun 2007 12:03:28 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

On Tue, 26 Jun 2007 11:49:15 -0400, Jef Allbright wrote:

> Funny that you say so. To the contrary, it seems to me that Gordon > consistently asserts the primacy of intellect...

No. I think I understand now what Thomas was thinking. I don't "abdicate the intellect". However, following Nietzsche and Schopenhauer, I do assert the primacy of the will over the intellect.

1) We are not our thoughts. We have thoughts.
2) We think that which we will to think, i.e., the intellect serves the will.

> Further, it seems quite clear (to me) that Gordon didn't challenge me

True, I didn't. It's nice to see for a change that our views are at least roughly compatible.

-gts

From gts_2000 at yahoo.com Tue Jun 26 15:46:35 2007 From: gts_2000 at yahoo.com (gts) Date: Tue, 26 Jun 2007 11:46:35 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

Stathis wrote to me in another thread:

> The only bulletproof philosophical theory of personal identity (which > perhaps you have hinted at in an earlier post) is the minimal theory > that you are who you are *for that moment*, with all the rest being > tacked on and contingent on the particular way our psychology has > evolved.

Yes, that is close to my meaning.

I start with a very basic and minimalist observation: that one's will at any given moment is unique in the universe. From there I take it that where there is a unique will, there is a unique agent to execute that will. Agency serves the will.

The identity of an agent is preserved even when the agent is obedient to the will of another, or to the corporate will of the community or of the "hive". To will to do the will of another is only to do one's own will.
Will also defines identity in a social or legal context: love and friendship arise from compatible wills, while crimes and torts occur when one agent forces its will on another agent innocent of having done the same. Wherever we find a unique will, there also we find a unique legal and social person.

-gts

From adolfoaz at gmail.com Tue Jun 26 16:35:13 2007 From: adolfoaz at gmail.com (Adolfo Javier De Unanue) Date: Tue, 26 Jun 2007 11:35:13 -0500 Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <349549.75534.qm@web60525.mail.yahoo.com> References: <349549.75534.qm@web60525.mail.yahoo.com> Message-ID: <46814041.80601@gmail.com>

Two comments: It is so sad that in this time Creationism is taking strength in many "advanced" countries, and it is our duty to fight back against this ignorance, even in this (randommutation.com) poor attack on science... So, well done with your example!

I will publish your response on my blog if you allow me... Maybe you could post it as a comment to RichardDawkins.net (http://richarddawkins.net/tourJournal) :-)

Adolfo

The Avantguardian escribió: > The forces of ignorance threaten the world again, I > see. > > In his inane FAQ at > http://www.randommutation.com/darwinianevolution.htm#faq > Perry Marshall (the guy who brought you the hi-tech > random mutation applet) writes: > [...] > So who would have thought there was a 12-step program > to go from ape to god? ;-) > > Stuart LaForge

From jef at jefallbright.net Tue Jun 26 16:38:35 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 09:38:35 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

On 6/26/07, gts wrote: > On Tue, 26 Jun 2007 11:49:15 -0400, Jef Allbright > wrote: > > > Funny that you say so. To the contrary, it seems to me that Gordon > > consistently asserts the primacy of intellect... > > No. I think I understand now what Thomas was thinking. I don't "abdicate > the intellect". However, following Nietzsche and Schopenhauer, I do assert > the primacy of the will over the intellect. > > 1) We are not our thoughts. We have thoughts. > 2) We think that which we will to think, i.e., the intellect serves the > will. > > > Further, it seems quite clear (to me) that Gordon didn't challenge me > > True, I didn't. It's nice to see for a change that our views are at least > roughly compatible.

Ironically funny, though, that you would take pleasure in what is only a meta-agreement about our disagreement.

Your statements 1 and 2 above are to me like intellectual fingernails screeching on a chalkboard. I say that with good humor and full respect for you as a fellow person. ;-)

- Jef

From jef at jefallbright.net Tue Jun 26 16:22:00 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 09:22:00 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

On 6/25/07, Thomas Oliver wrote:

> I guess this highlights our divergent views on compulsion. I discriminate > between voluntary response and obligation driven compulsion. > I encourage you to respond to "seeking minds," but not to feel personally > obligated to provide me with the "higher level" view.
Thomas, when I said I "felt compelled" to address that issue, it was in a sense similar to saying I "felt compelled" to help tourists lost in a train station in a foreign land. I use that analogy because I often felt and acted so while living in Japan, to help tourists who I noticed appeared disoriented. I felt compelled in the sense of expressing my values, making the world a little better, rather than in terms of any (clearly non-existent) contract or other form of obligation.

As for the possibility of pedagogical condescension and resultant offense implied in your statement, I had no such intention whatsoever, and as I indicated, my contributions to the public discourse are with our few hundred lurkers in mind.

As a somewhat related aside: On a list where rationality is an overt high value, I find it somewhat unnecessary and distracting when social niceties are appended to a post (as Jeffrey seems to be doing lately). While I don't subscribe to the opposite extreme of "Crocker's Rules", I tend to take intelligence, mutual respect and good humor for granted, not needing ongoing reinforcement and reassurance. In this regard, with certain individuals, I am clearly lacking.

- Jef

From austriaaugust at yahoo.com Tue Jun 26 16:46:26 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 26 Jun 2007 09:46:26 -0700 (PDT) Subject: [ExI] Minds, Personalities, and Love In-Reply-To: <62c14240706251755t58bd3746ge7b950f47da31731@mail.gmail.com> Message-ID: <842821.53002.qm@web37410.mail.mud.yahoo.com>

Mike wrote (among other things):

..."(much like those who are > currently 'in love' > with an other via avatar without knowledge of the > human details behind > the persona)"...

That's really interesting. Is that a real phenomenon? I assume it is. I think it's noble that sometimes two people can love each other based only on their minds or personalities. It seems more meaningful than two people who profess "love" based exclusively on the physical appearance of the other person. In the future, when we can all change our appearances at will, it will be the quality of the personality that serves as the primary attractor.

This sort of reminds me of the rare occasions when I have a vivid dream of a "perfect" young lady, whom I actually develop *real* feelings for. When I wake up and she disappears, I actually feel a deep sense of loss. It may sound crazy, but it's best described as a degree of sadness over the death of a loved one. It was only my brain that created her in her entirety, but the feeling is real nonetheless. It makes me wonder if we do actually create *different* people within our own brain. I believe we do; but then I've "always" believed that the continuity of self is an illusion. This is a real account by the way, I'm not making this up; and it has happened more than once. I wonder if anyone else has ever experienced this. Some people might have been embarrassed to admit this sort of thing, but I'm not; in fact, the whole thing sometimes feels somewhat tragic.

Sincerely, Jeffrey Herrlich

--- Mike Dougherty wrote: > On 6/25/07, Randall Randall > wrote: > > > The recent movie "The Prestige" had some very > strong Transhuman > > > content besides being the best film of any sort > made in the last > > > few years. > > > > If by "Transhuman content" you mean > "anti-transhuman content", > > of the "trying to better yourself will surely end > in ruin" > > variety.
> I think it's more the "killing an exact clone of > yourself" - are you > still you, or is your "you" somehow diminished per > the identity > thread(s). If, instead of a physical copy as in Tesla's > machine, the clone > were your software running on another substrate, > would you believe you > had the right to terminate the copy if it were > unclear who is the > 'original'? > > There's probably some sap in there too for the twins > loving the same > woman and her not knowing (much like those who are > currently 'in love' > with an other via avatar without knowledge of the > human details behind > the persona)

From austriaaugust at yahoo.com Tue Jun 26 17:12:56 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 26 Jun 2007 10:12:56 -0700 (PDT) Subject: [ExI] agency-based personal identity In-Reply-To: Message-ID: <222204.69995.qm@web37401.mail.mud.yahoo.com>

Jef wrote:

> "As a somewhat related aside: > On a list where rationality is an overt high value, > I find it somewhat > unnecessary and distracting when social niceties are > appended to a > post (as Jeffrey seems to be doing lately). While I > don't subscribe > to the opposite extreme of "Crocker's Rules", I tend > to take > intelligence, mutual respect and good humor for > granted, not needing > ongoing reinforcement and reassurance."... > - Jef

Guilty as charged, I suppose. But sometimes it really is difficult to gauge how my comments will affect people. And I'm not always entirely "myself", so sometimes I feel the need to apologize or clarify for something I've written. It's probably a character flaw within myself, but I usually can't help it (at least not yet - bring on the nanobots).

Sincerely, Jeffrey Herrlich

--- Jef Allbright wrote: > On 6/25/07, Thomas Oliver > wrote: > > > I guess this highlights our divergent views on > compulsion. I discriminate > > between voluntary response and obligation driven > compulsion. > > I encourage you to respond to "seeking minds," but > not to feel personally > > obligated to provide me with the "higher level" > view. > > Thomas, when I said I "felt compelled" to address > that issue, it was > in a sense similar to saying I "felt compelled" to > help tourists lost > in a train station in a foreign land. I use that > analogy because I > often felt and acted so while living in Japan, to > help tourists who I > noticed appeared disoriented. I felt compelled in > the sense of > expressing my values, making the world a little > better, rather than in > terms of any (clearly non-existent) contract or > other form of > obligation. > > As for the possibility of pedagogical condescension > and resultant > offense implied in your statement, I had no such > intention whatsoever, > and as I indicated, my contributions to the public > discourse are with > our few hundred lurkers in mind. > > As a somewhat related aside: > On a list where rationality is an overt high value, > I find it somewhat > unnecessary and distracting when social niceties are > appended to a > post (as Jeffrey seems to be doing lately).
While I > don't subscribe > to the opposite extreme of "Crocker's Rules", I tend > to take > intelligence, mutual respect and good humor for > granted, not needing > ongoing reinforcement and reassurance. In this > regard, with certain > individuals, I am clearly lacking. > > - Jef

From thespike at satx.rr.com Tue Jun 26 17:46:39 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jun 2007 12:46:39 -0500 Subject: [ExI] more blatant self-promotion Message-ID: <7.0.1.0.2.20070626124028.024107c0@satx.rr.com>

Fictionwise.com have just displayed a link to an e-book version of my Dying Earth science fiction novel THE BLACK GRAIL:

http://www.fictionwise.com/eBooks/eBook47482.htm?cached

I think it's an entertaining read, although bracingly bleak in some respects, and it's not without its posthuman aspects. (It's a heavily revised and extended version of my first novel.) Here's a plug:

"Among novels set in the far future of Earth there are some that are placed near the very end, in the realm of the dying sun. These 'dying sun' novels are neither science fiction nor fantasy but a hybrid form that combines the strength of the two: science fantasy... Three exemplary novels are The Dying Earth (1950) by Jack Vance, The Book of the New Sun (1983) by Gene Wolfe, and The Black Grail (1986) by Damien Broderick. Each brings a different perspective: Vance creates the form, Wolfe pushes it to its fantasy edge, and Broderick drives it to its science fiction limit."--Michael Andre-Driussi, New York Review of Science Fiction

Damien Broderick

From thespike at satx.rr.com Tue Jun 26 18:06:11 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jun 2007 13:06:11 -0500 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> References: <000401c7b7bb$c3a05320$1d971f97@archimede> <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> Message-ID: <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com>

These posts, ironically, are all being hurled into my spam bit bucket because of the word "sp*m" in the subject line. Let's see how these slight variations fare.

Damien Broderick

From eugen at leitl.org Tue Jun 26 18:20:36 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 26 Jun 2007 20:20:36 +0200 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> References: <000401c7b7bb$c3a05320$1d971f97@archimede> <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> Message-ID: <20070626182036.GQ7079@leitl.org>

On Tue, Jun 26, 2007 at 01:06:11PM -0500, Damien Broderick wrote: > These posts, ironically, are all being hurled into my spam bit bucket > because of the word "sp*m" in the subject line. Let's see how these > slight variations fare.

Sp*m filters based on content (especially simple keyword matches) are screwed, and have been that way for a considerable time. The only solace is a realtime map of originating addresses, and advanced pattern recognition (the latter is unreliable, and will always be unreliable, at least as long as computers won't pass the Turing test with flying colors).
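To make the failure mode concrete, here is a deliberately naive sketch in Python of the sort of simple keyword match Eugen means (the blacklist contents are invented for illustration, not taken from any real filter). Substring matching on the subject line both bins legitimate discussion about spam and is defeated by a one-character substitution:

    BLACKLIST = {"spam", "viagra", "lottery"}

    def is_spam(subject):
        # Flag the message if any blacklisted keyword appears in the subject.
        s = subject.lower()
        return any(term in s for term in BLACKLIST)

    print(is_spam("are spam levels dropping?"))   # True: a false positive
    print(is_spam("are sp&m levels dropping?"))   # False: trivially evaded

Which is exactly the behaviour reported above: the plain word gets a post hurled into the junk bucket, while "sp&m" walks straight through.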
From thespike at satx.rr.com Tue Jun 26 18:51:09 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jun 2007 13:51:09 -0500 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <20070626182036.GQ7079@leitl.org> References: <000401c7b7bb$c3a05320$1d971f97@archimede> <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> <20070626182036.GQ7079@leitl.org> Message-ID: <7.0.1.0.2.20070626134919.02279010@satx.rr.com> At 08:20 PM 6/26/2007 +0200, Gene wrote: > > These posts, ironically, are all being hurled into my spam bit bucket > > because of the word "sp*m" in the subject line. Let's see how these > > slight variations fare. > >Sp*m filters based on content (especially simple keyword matches) are >screwed, and have been that way for a considerable time. Yeah, and that being so I suggest people wishing to discuss spam use a disguised version of the word in the subject line--Eudora let this one ("sp&m") through okay. Damien Broderick From gts_2000 at yahoo.com Tue Jun 26 18:15:02 2007 From: gts_2000 at yahoo.com (gts) Date: Tue, 26 Jun 2007 14:15:02 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: <842821.53002.qm@web37410.mail.mud.yahoo.com> References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: I think people know instinctively that the will is the essence of the person. Consider the typical first date. After exchanging pleasantries and perhaps a brief background check, the would-be lovers launch immediately into a query of the wills: What do you like to do for fun? What do you choose to do for a living? What did you choose to study in school? Where do you like to go on vacation? What kind of movies do you like? What kind of books do you like? What's your favorite subject? Who's your favorite author? What's your favorite book? What kind of music do you like? What's your favorite band? What's your favorite song? (sometime around here the conversation gets heavier) What do you want in a relationship? What are your goals? (etcetera, so on and so forth) The answers to all these questions answer the more general question: "What is your will in life?" Your date wants to know, because that's who you are. -gts From spike66 at comcast.net Tue Jun 26 19:05:15 2007 From: spike66 at comcast.net (spike) Date: Tue, 26 Jun 2007 12:05:15 -0700 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> Message-ID: <200706261915.l5QJFQei017169@andromeda.ziaspace.com> > These posts, ironically, are all being hurled into my spam bit bucket > because of the word "sp*m" in the subject line. Let's see how these > slight variations fare. > > Damien Broderick Odd. Have you ever seen a real sp*m with the word sp*m in the subject line? It would be nice if every sp*m identified itself. Here's a game. Imagine if every sp*mmer were to catch an honesty virus. The symptom is the sufferer labels every one of their posts truthfully. We might get posts with subject lines: "Please help me rip you off by pretending to rip off some desperately poor African nation" and "Demonstrate how big a fool you are to fall for this medication that promises to enlarge your privates" and "This is a fake eBay message intended to trick you into giving me your credit card number (you silly ass)". Others? Imagine a sp*m in which the originator caught the virus in the middle of composing the work, for instance. 
Actually, if the sp*mmers were to label their posts that way, we might actually open them to see what the gag is.

spike

From pharos at gmail.com Tue Jun 26 19:16:28 2007 From: pharos at gmail.com (BillK) Date: Tue, 26 Jun 2007 20:16:28 +0100 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <7.0.1.0.2.20070626134919.02279010@satx.rr.com> References: <000401c7b7bb$c3a05320$1d971f97@archimede> <200706261433.l5QEXkG4025168@andromeda.ziaspace.com> <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> <20070626182036.GQ7079@leitl.org> <7.0.1.0.2.20070626134919.02279010@satx.rr.com> Message-ID:

On 6/26/07, Damien Broderick wrote: > Yeah, and that being so I suggest people wishing to discuss spam use > a disguised version of the word in the subject line--Eudora let this > one ("sp&m") through okay. >

Er, pardon? What is the reasoning followed by the designers of these spam filters?

Do they really think that a spammer who sends out millions of spam emails is going to helpfully label all his messages with 'spam' in the subject line?

In fact, have they ever, *ever* seen a real spam email with 'spam' in the subject line?

BillK

From jef at jefallbright.net Tue Jun 26 18:12:44 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 11:12:44 -0700 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <229074.53683.qm@web51905.mail.re2.yahoo.com> References: <970633.74919.qm@web51909.mail.re2.yahoo.com> <229074.53683.qm@web51905.mail.re2.yahoo.com> Message-ID:

On 6/25/07, TheMan wrote: > My use of the word nightmare referred to the threat > from the super-hi-tech weapons that may soon be > developed and with which one or more of the thousands > of ordinary, angry and crazy human beings that exist > today may choose to terminate all life on this planet.

Sorry if I appear to be unnecessarily polemical on this point, but I think it is important to distinguish between such an outcome, nightmarish indeed, and such a threat, good in that it drives us to develop higher level solutions. Regardless of its specifics and form, the (Red Queen's) race goes on. We could use more people and organizations thinking in terms of an increasingly intelligent global immune system.

> Of course, technology is great in many ways, but I've > got the impression that most extropians tend to focus > too much on the boons and underestimate the perils. > For example, only a tiny part of Kurzweil's > "Singularity is near" is about the perils of the > coming technologies, the rest is about the great stuff > these technologies can bring us.

While much of Kurzweil's activity is promotional, he thinks seriously about risks. For example:

Based on my experience over more than a decade on the extropy list, my impression is that we tend to **over-estimate** the magnitude of both the rewards and the risks, while underestimating their subtleties.

> If you can be motivated to better or equally good > actions by feeling only excitement and no fear, that's > great.

For me, emotions just are; they are neither good nor bad in themselves. I watch them coming and going, occasionally rocking my boat, but they have no intrinsic value whatsoever. Functionally they serve an important role in the motivational feedback loops of evolved biological organisms. An artificial intelligence could operate without "emotions", but it would still need motivational feedback loops and it would still describe states as being relatively pleasant or unpleasant.
Confusion arises in naïve discussions on this topic when people conflate "negative emotions" with their negative, in the sense of dysfunctional, side-effects. I have no hope of effectively conveying this understanding over this limited channel, so please excuse me from defending this here.

> I'm just not sure that one will be sufficiently > aware of the risks and take sufficient action to > diminish them if one doesn't acknowledge the nightmare > aspect of the global arms race that seems to be going > to get out of control soon.

My emphasis is entirely on increasing awareness -- note the repeated metaphor of improving our map -- but alarm and over-reaction can be even more harmful.

> The race is a fact of nature, I agree, but it would > proceed even if restricted, just at a slower speed.

Restrictions ultimately only work for the other guy. This is easily observable in game theory. Very fundamentally, we are defined by our values and can't choose to lose.

Two brothers wanted to race a course,
To see which had the slowest horse.
Since neither wanted to spur his mare,
What must they do to make it fair?

> Just as you point out, the acceleration of change may > soon make it impossible for any government, or any > other groups or individuals for that matter, to > restrict the use of too dangerous technologies (in an > ordinary, democratic manner, that is) before it's too > late. So if the speed of development could be lowered, > it would be safer, and mankind would still reach > singularity sooner or later.

My point is that we can't achieve an inclusive agreement to slow down; someone will defect. We can, however, realistically strive for increasing cooperation in this race.

> An Orwellian world > despot, with the power to prevent everyone else from > experimenting with new technologies, would, > statistically, be more careful with what experiments > he allows, than would the least careful of countless > free and eager engineers, cults and terrorists in the > world. The kind of society David Brin suggests might > have a similar dampening effect on the perils of tech > development. But in a free society that follows the > proactionary principle without a ubiquitous > surveillance system for watching out for dangerous use > of new technologies, it seems to me that less careful > (and less morally sensible) engineers will get to > perform experiments than in the former two cases. > > How easy will it be for the good people in the world > to always come up with sufficient defenses against > nanoweapons, and other supertech, in time before a > nanoweapon, or other supertech, terminates us all? > Wouldn't it be easier for the mankind-loving people to > secure an advantage over the mankind-threatening > people if the technological development would be > slowed down? An Orwellian system might slow it down > and thus provide some time, so it might be the best > alternative for mankind, even if the leaders of it > would not protect mankind for mankind's sake but > merely for personal profit.

Orwellian scenarios are based on somewhat obsolete early 20th-century thinking about power and corruption, lacking our more modern (and improving) knowledge of game-theory, systems theory, technological network effects, and much more. Accelerating technological change both threatens us and liberates us, with the odds of our survival biased just slightly in our favor by our capacity for intelligent choice.
The way to increase our odds is by increasing the intelligence of our choices -- promoting an increasing context of increasingly coherent values over increasing scope of consequences -- not by futilely attempting to slow down our horse in the race. [Did you solve the riddle?]

Snipped a lot more, but must get to work. My boss is a real slave-driver, and he watches literally everything I do.

- Jef

From jef at jefallbright.net Tue Jun 26 19:23:22 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 12:23:22 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID:

On 6/26/07, gts wrote: > I think people know instinctively that the will is the essence of the > person. > > Consider the typical first date. After exchanging pleasantries and perhaps > a brief background check, the would-be lovers launch immediately into a > query of the wills: > > What do you like to do for fun? > What do you choose to do for a living? > What did you choose to study in school? > Where do you like to go on vacation? > What kind of movies do you like? > What kind of books do you like? > What's your favorite subject? > Who's your favorite author? > What's your favorite book? > What kind of music do you like? > What's your favorite band? > What's your favorite song? > The answers to all these questions answer the more general question: > > "What is your will in life?" > > Your date wants to know, because that's who you are. >

Okay, this is interesting, and possibly closer to my own view than I thought. Are you saying that knowledge of these values informs another person's model of your will, such that they essentially know you the person?

- Jef

From thespike at satx.rr.com Tue Jun 26 19:30:25 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jun 2007 14:30:25 -0500 Subject: [ExI] are sp&m levels dropping? In-Reply-To: <200706261915.l5QJFQei017169@andromeda.ziaspace.com> References: <7.0.1.0.2.20070626130444.021a7b20@satx.rr.com> <200706261915.l5QJFQei017169@andromeda.ziaspace.com> Message-ID: <7.0.1.0.2.20070626142617.021d82f8@satx.rr.com>

At 12:05 PM 6/26/2007 -0700, spike wrote:

>Odd. Have you ever seen a real sp*m with the word sp*m in the subject line?

My assumption is that after Eudora's spam filter finds the evil that lurks in the heart of emails it adds that notation to the subject line, then another sorting subroutine uses it to fling the accursed text into either Junk or Trash. But I might be wrong anyway, because I just got a message from another list that flagrantly used the spam word in the subject line and got clean away with it. Gasp!

Damien Broderick

From thespike at satx.rr.com Tue Jun 26 19:40:56 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 26 Jun 2007 14:40:56 -0500 Subject: [ExI] and speaking again of weird emailer programs In-Reply-To: References: <970633.74919.qm@web51909.mail.re2.yahoo.com> <229074.53683.qm@web51905.mail.re2.yahoo.com> Message-ID: <7.0.1.0.2.20070626143730.023159e8@satx.rr.com>

I wonder if these fragments, from Jef and TheHuman, come through other people's systems as illegibly as they do on mine?

>^H\?H?f the word nightmare referred to the threat >???HH?@per-hi-tech weapons that may soon be > >?o much on the boons and underestimate the perils.
>???@xample, only a tiny part of Kurzweil's > >Y?[?H?[?be motivated to better or equally good > > actions by feeling only excitement and no fear, that's > > > aspect of the global arms race that seems to be going >??]?]??????????????^H[\\?is is entirely on >increasing awareness -- note the repeated >metaphor of improving our map -- but alarm and over-reaction can be >even more harmful. > > >H?X?H\?H?X??f nature, I agree, but it would >???YY]?[?Y??\??X?ted, just at a slower speed. > > > > Just as you point out, the acceleration of change may >????XZ?H][\???X?H???[?H???\??Y[?or any and so on. It's not as if Eudora is some quirky start-up. Damien Broderick From benboc at lineone.net Tue Jun 26 19:24:15 2007 From: benboc at lineone.net (ben) Date: Tue, 26 Jun 2007 20:24:15 +0100 Subject: [ExI] are spam levels dropping? In-Reply-To: References: Message-ID: <468167DF.4090008@lineone.net> From: "spike" > Reading thru my inbox, it suddenly occurred to me that I am getting > far less spam than usual. Me too. I thought it was just random variation. Now i think it might be just random variation. Unless anybody else has noticed the same thing? In which case it still might be just random variation. Damn, i hate statistics. ben z From eugen at leitl.org Tue Jun 26 19:47:10 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 26 Jun 2007 21:47:10 +0200 Subject: [ExI] and speaking again of weird emailer programs In-Reply-To: <7.0.1.0.2.20070626143730.023159e8@satx.rr.com> References: <970633.74919.qm@web51909.mail.re2.yahoo.com> <229074.53683.qm@web51905.mail.re2.yahoo.com> <7.0.1.0.2.20070626143730.023159e8@satx.rr.com> Message-ID: <20070626194710.GT7079@leitl.org> On Tue, Jun 26, 2007 at 02:40:56PM -0500, Damien Broderick wrote: > > > Just as you point out, the acceleration of change may > >????XZ?H][\???X?H???[?H???\??Y[?or any > > and so on. > > It's not as if Eudora is some quirky start-up. utf-8 screwage. Eudora can't handle it. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Tue Jun 26 20:04:29 2007 From: pharos at gmail.com (BillK) Date: Tue, 26 Jun 2007 21:04:29 +0100 Subject: [ExI] and speaking again of weird emailer programs In-Reply-To: <7.0.1.0.2.20070626143730.023159e8@satx.rr.com> References: <970633.74919.qm@web51909.mail.re2.yahoo.com> <229074.53683.qm@web51905.mail.re2.yahoo.com> <7.0.1.0.2.20070626143730.023159e8@satx.rr.com> Message-ID: On 6/26/07, Damien Broderick wrote: > I wonder if these fragments, from Jef and > TheHuman, come through other people's systems as illegibly they do on as mine? > > > >^H\?H?f the word nightmare referred to the threat > >???HH?@per-hi-tech weapons that may soon be > > > > and so on. > > It's not as if Eudora is some quirky start-up. > I use gmail, so no problem for me. Their spam filters work fine too. But perhaps you could try: UTF8 to ISO plugin V1.60 for Eudora (11/2006) Why is this plugin needed? The e-mail standard says that all e-mail programs modified 1999 or later must (read "must"!) understand UTF-8 encoded e-mails. The people at Qualcomm didn't manage to make Eudora UTF-8 compliant, though. So e-mails you receive that are in UTF-8 may contain funny characters. Change to a different client? No, Eudora is fine. It is among the best e-mail clients I have ever seen. Except for this bug, that is... 
--------------------- BillK From thomas at thomasoliver.net Tue Jun 26 20:18:09 2007 From: thomas at thomasoliver.net (Thomas) Date: Tue, 26 Jun 2007 13:18:09 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> Jef Albright wrote: > [...] I felt compelled in the sense of > expressing my values, making the world a little better, rather than in > terms of any (clearly non-existent) contract or other form of > obligation. Good. > As for the possibility of pedagogical condescension and resultant > offense implied in your statement, I had no such intention whatsoever, > and as I indicated, my contributions to the public discourse are with > our few hundred lurkers in mind. Well, see what you get for using me as your example? : ) I admit you have a great touch with hypersensitives, such as I, once you know with what you deal. I can't seem to help liking you even when you irritate me. > > [...] intelligence, mutual respect and good humor for granted, not > needing > ongoing reinforcement and reassurance. In this regard, with certain > individuals, I am clearly lacking. Ironically, this prompts me to thank you for your extra effort on my behalf. I enjoy practicing routine appreciation almost to the point of distraction. It certainly pays off in my personal life. Enough! May I appeal once again to your tour guide compulsion? What makes a vague abstract entity more extensible and coherent than a concrete expression of will as the source of personal identity? -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jun 26 20:30:40 2007 From: pharos at gmail.com (BillK) Date: Tue, 26 Jun 2007 21:30:40 +0100 Subject: [ExI] are spam levels dropping? In-Reply-To: <468167DF.4090008@lineone.net> References: <468167DF.4090008@lineone.net> Message-ID: On 6/26/07, ben wrote: > Me too. I thought it was just random variation. > Now i think it might be just random variation. > > Unless anybody else has noticed the same thing? > In which case it still might be just random variation. > > Damn, i hate statistics. > There has been a slight overall drop in spam recently. Quote: Worldwide, 72.7% of all e-mail was tagged as spam by MessageLabs during May 2007. That figure is below the six-month average of 75.3% and far lower than the highest-ever figure of 94.5%, recorded in July 2004. --------------- But spammers are also changing their tactics. (So what's new?). MessageLabs say that they are now detecting what they call 'spam spikes', where thousands of spam emails are targeted at specific domains. "The purpose of a spam spike is to defeat appliance-based anti-spam systems that rely heavily on signatures, rather like desktop antivirus software," MessageLabs said in a report it just published. So, provided you haven't been targeted, you should be seeing some reduction in spam. 
See: http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9023080 BillK From jef at jefallbright.net Tue Jun 26 20:41:16 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 13:41:16 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> References: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> Message-ID: On 6/26/07, Thomas wrote: > What makes a vague > abstract entity more extensible and coherent than a concrete expression of > will as the source of personal identity? ------------------- Bullwinkle: "Hey Rocky! Watch me while I pull a rabbit out of my hat!" Rocky: "Again?" ------------------- - Jef From gts_2000 at yahoo.com Tue Jun 26 19:25:45 2007 From: gts_2000 at yahoo.com (gts) Date: Tue, 26 Jun 2007 15:25:45 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: On Tue, 26 Jun 2007 15:23:22 -0400, Jef Allbright wrote: >> Your date wants to know [your will], because that's who you are. > > > Okay, this is interesting, and possibly closer to my own view than I > thought.. Are you saying that knowledge of these values informs > another person's model of your will, such that they essentially know > you the person? Yes, I suppose so. -gts From jef at jefallbright.net Tue Jun 26 21:21:14 2007 From: jef at jefallbright.net (Jef Allbright) Date: Tue, 26 Jun 2007 14:21:14 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: On 6/26/07, gts wrote: > On Tue, 26 Jun 2007 15:23:22 -0400, Jef Allbright > wrote: > > >> Your date wants to know [your will], because that's who you are. > > > > > > Okay, this is interesting, and possibly closer to my own view than I > > thought.. Are you saying that knowledge of these values informs > > another person's model of your will, such that they essentially know > > you the person? > > Yes, I suppose so. And what is the basis of the values that inform the model of the will in the mind of the observer? Would you agree with me that all of one's values appear to be physically determined, by one's genetics, the environment, and the who complex chain of physical interaction leading up to any particular moment when one's values are queried, thus informing the observer's model of their will, such that they essentially know the person? As a specific example, would you and I agree that in the case of a person who valued dancing (performing, rather than as being entertained by it) this value would be related to physical characteristics such as good health, strength, adequate muscular control, both legs approximately the same length (let's not bring in prosthetics for this discussion) and so on? Sorry for this being so drawn out, but you know how it goes between us. I'm going in this direction only far enough to test whether you and I both agree that the person's state is entirely determined in physical terms, nothing essentially mystical in the background. I think this is a safe bet, but want to confirm agreement on this before proceeding to ask a question assessing how our understanding might differ. I'm just being thorough here, not tricky. 
- Jef From msd001 at gmail.com Wed Jun 27 01:10:27 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 21:10:27 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> On 6/26/07, Jef Allbright wrote: > Sorry for this being so drawn out, but you know how it goes between > us. I'm going in this direction only far enough to test whether you > and I both agree that the person's state is entirely determined in > physical terms, nothing essentially mystical in the background. I > think this is a safe bet, but want to confirm agreement on this before > proceeding to ask a question assessing how our understanding might > differ. I wonder how you define "mystical" (From Wikipedia) The Lorenz Attractor: "From a technical standpoint, the system is nonlinear, three-dimensional and deterministic" It models "chaotic flow" Suppose two people are represented as the fixed points, and the third point (the one which traces the graph) is their mutual understanding/awareness of each other over time. That either (person) is ever able to appreciate the graph at all is [imo] approaching as much description of "mystical" as any other use of the word. Of course the most concrete idea here is that of the Lorenz Attractor. My concept of it is probably different than yours (the reader) as much as either of us has a different appreciation than the author(s) of that Wikipedia page. If after those differences are resolved, the point I attempted to make may be nearly lost due to so much context dependency on my own POV. I will agree with what I thought your direction was - that objectively isolating the starting conditions and exposure to physical processes can lead to a deterministic prediction of state. Is that objective isolation computationally feasible (or possible?) in as many dimensions as human interaction occurs? (if one were to constrain the dimensions to be computable, would the model be an accurate enough approximation of reality?) "hmm..." From msd001 at gmail.com Wed Jun 27 01:18:24 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 21:18:24 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> Message-ID: <62c14240706261818i4b84a4c0mc0091ba768417ec1@mail.gmail.com> On 6/26/07, Jef Allbright wrote: > On 6/26/07, Thomas wrote: > > > What makes a vague > > abstract entity more extensible and coherent than a concrete expression of > > will as the source of personal identity? What makes an abstract [anything] more extensible ... than a concrete [anything] ? I think the answer is: the definition(s) of 'abstract,' 'concrete' and 'extensible' Sorry to be so blunt as I jump into a conversation threaded with such "social niceties" as a meta-discourse on "social niceties" :) From msd001 at gmail.com Wed Jun 27 01:26:07 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 21:26:07 -0400 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? 
From msd001 at gmail.com Wed Jun 27 01:18:24 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 21:18:24 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> Message-ID: <62c14240706261818i4b84a4c0mc0091ba768417ec1@mail.gmail.com> On 6/26/07, Jef Allbright wrote: > On 6/26/07, Thomas wrote: > > > What makes a vague > > abstract entity more extensible and coherent than a concrete expression of > > will as the source of personal identity? What makes an abstract [anything] more extensible ... than a concrete [anything] ? I think the answer is: the definition(s) of 'abstract,' 'concrete' and 'extensible' Sorry to be so blunt as I jump into a conversation threaded with such "social niceties" as a meta-discourse on "social niceties" :) From msd001 at gmail.com Wed Jun 27 01:26:07 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 26 Jun 2007 21:26:07 -0400 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: References: <482359.40787.qm@web51905.mail.re2.yahoo.com> <251687.30751.qm@web51901.mail.re2.yahoo.com> Message-ID: <62c14240706261826s283da22fm2f10d43af1258746@mail.gmail.com> On 6/25/07, Jef Allbright wrote: > Personally, I think "government" as we know it will collapse under the > weight of its own inconsistencies, and that radical libertarianism > will never flourish. I favor a system possibly describable as anarchy > within an increasingly cooperative framework. Hey, if you want to run for president on a "cooperative anarchy" platform, I'll "vote" for you. ...but only so I can take the title for myself after you've eliminated our mutual opponents. Kind of like the Highlander... From spike66 at comcast.net Wed Jun 27 02:49:28 2007 From: spike66 at comcast.net (spike) Date: Tue, 26 Jun 2007 19:49:28 -0700 Subject: [ExI] are spam levels dropping? In-Reply-To: Message-ID: <200706270249.l5R2nVaK001097@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of BillK ... > > MessageLabs say that they are now detecting what they call 'spam > spikes'... Hmmm, I do not favor that term. spike From brent.allsop at comcast.net Wed Jun 27 03:20:36 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Tue, 26 Jun 2007 21:20:36 -0600 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <689666.15732.qm@web37409.mail.mud.yahoo.com> References: <689666.15732.qm@web37409.mail.mud.yahoo.com> Message-ID: <4681D784.6090801@comcast.net> A B, Yea, sure. I think of that as a purpose of the canonizer. Not to replace discussion group postings, but merely a place to focus and summarize discussions once they've occurred, so the next time someone new brings it up again, they can quickly get up to speed and everyone can make progress instead of rehashing the same old stuff again and again. Thanks for the comments. Brent Allsop What is the gospel according to you? Everyone wants to know! canonizer.com A B wrote: > Hi Brent, > > I really think that the Canonizer is a great idea of > yours, and it really does have the potential to make > things a lot more efficient. I'll make an entry in the > next day or two. If you don't mind, I'd also like to > send a simple compilation back to Extropy. It'll be > good to have the list in at least two places, even if > the Canonizer will contain much more support material > and weightings. > > Best, > > Jeffrey Herrlich > > PS. Just in general folks, we should probably start > using Spoiler Warnings for all our recommendations if > we are going to post significant details about the > plots. It might be better just to refrain from posting > details. But whatever, that's just my 2 cents. > > > --- Brent Allsop wrote: > > >> A B, >> >> You must not have seen the post about the Canonizer >> (POV wiki) topic >> we've created where people are now submitting and >> supporting their >> favorite movies? >> >> There are now 7 movies submitted by people. I hope >> we can get some more >> submissions, Canonized POV information about why >> people like them, and >> lots more "support" to get a more meaningful >> quantitative sample of what >> movies Transhumanists like and why. >> >> Here is the page for those that missed it: >> >> https://test.canonizer.com/topic.asp?topic_num=20 >> >> Upward, >> >> Brent Allsop >> >> > > > > ___________________________________________________________________________________ > You snooze, you lose. Get messages ASAP with AutoCheck > in the all-new Yahoo! Mail Beta.
> http://advision.webevents.yahoo.com/mailbeta/newmail_html.html > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 27 05:30:18 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 27 Jun 2007 15:30:18 +1000 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <102492.35814.qm@web37413.mail.mud.yahoo.com> References: <102492.35814.qm@web37413.mail.mud.yahoo.com> Message-ID: On 27/06/07, A B wrote: > "Why?" > > Cost. Cost:Benefit is too high. If you have enough big > guns and big bodyguards, then it doesn't really matter > if most of your impoverished population doesn't like > you. Surely bugging every house wouldn't be either more difficult or more expensive than, say, a space program. I think the reason no-one has tried it is evidence that there are some things which even dictators fear would be too unpopular. -- Stathis Papaioannou From artillo at comcast.net Wed Jun 27 06:43:38 2007 From: artillo at comcast.net (Brian J. Shores) Date: Wed, 27 Jun 2007 02:43:38 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <710b78fc0706222214xbe4ea77kee70fa48685ae1dd@mail.gmail.com> Message-ID: <002a01c7b886$7f213680$640fa8c0@BJSMain> Wasn't Butterfly Effect sort of like Memento in that respect? I may have mentioned to go see S1m0ne at one point, it's one of my favorite films. I always said that if they ever make a sequel to this movie that they should treat it gently and not water it down but go for the throat of the issues of intelligence and AI and rights and privacy and media etc.... -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Emlyn Sent: Saturday, June 23, 2007 1:15 AM To: ExI chat list Subject: Re: [ExI] Favorite ~H+ Movies Memento, with Guy Pearce. Very hard to explain why I think this is transhumanist. Maybe the exploration of the nature of mind? I think in some way it feels to me as though our normal mental architecture is broken compared to what we need for real understanding of the universe, analogous to the way his is for coping with normal life, and our struggle to overcome feels similar, doomed and heroic and sickly fascinating. Emlyn On 23/06/07, A B wrote: Boy, things have been tense around here lately. We should be entitled to a little fun once in a while, right? I thought it would be fun to make a list of our favorite semi-transhumanist movies. This written medium can sometimes be somewhat dry, and difficult to express and share positive emotions with each other. It may sound cheesy, but perhaps by sharing our favorite movies, we could more easily recognize some of the more fundamental feelings and aspirations between us. [Maybe we could also suggest favorite music pieces, but I'll let that begin on someone else's initiative.] For my contribution, I recommend: * Original Director's Cut of "Bladerunner". You must see the original Director's Cut or you haven't seen the movie... sorry :-) Sure, it's a dark-future themed movie, and it is slightly cheesy in a few spots, but it does have some truly moving and profound moments, in my opinion. I fully recommend it, overall.
Sincerely, Jeffrey Herrlich ____________________________________________________________________________________ Looking for a deal? Find great prices on flights and hotels with Yahoo! FareChase. http://farechase.yahoo.com/ _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From artillo at comcast.net Wed Jun 27 06:47:44 2007 From: artillo at comcast.net (Brian J. Shores) Date: Wed, 27 Jun 2007 02:47:44 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <467CBEF4.3010504@pobox.com> Message-ID: <002f01c7b887$11627770$640fa8c0@BJSMain> ZARDOZ! :snips: My girlfriend Erin would add "Ghost in the Shell" and she'd be dead right, come to think of it. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence :snips: Thumbs up!! Way up! :D Ghost in the Shell ...excellent series and movies. There are some very moving plots as well, also very deep future political intrigue, especially involving military, corporations, and rebellions. From artillo at comcast.net Wed Jun 27 07:10:03 2007 From: artillo at comcast.net (Brian J. Shores) Date: Wed, 27 Jun 2007 03:10:03 -0400 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <24f36f410706231626i4513ad81gbd13751d143f507@mail.gmail.com> Message-ID: <003601c7b88a$2f505560$640fa8c0@BJSMain> AWESOME movie! -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Elaa Mohamad Sent: Saturday, June 23, 2007 7:27 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Favorite ~H+ Movies I just thought of another one that is pretty good. Equilibrium - http://www.imdb.com/title/tt0238380/ _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From artillo at comcast.net Wed Jun 27 07:13:15 2007 From: artillo at comcast.net (Brian J. Shores) Date: Wed, 27 Jun 2007 03:13:15 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <62c14240706232209t7560c99dufd739b3f987b9b64@mail.gmail.com> Message-ID: <003701c7b88a$a16dbe30$640fa8c0@BJSMain> YESS Thank you for mentioning that, I agree. Some parts were downright cheesy though.
-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty Sent: Sunday, June 24, 2007 1:09 AM To: ExI chat list Subject: Re: [ExI] Favorite ~H+ Movies. The Island http://en.wikipedia.org/wiki/The_Island_(2005_film) Perhaps not the best movie ever, but worth having seen. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From artillo at comcast.net Wed Jun 27 07:07:53 2007 From: artillo at comcast.net (Brian J. Shores) Date: Wed, 27 Jun 2007 03:07:53 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <000e01c7cd56$7a71fa40$6501a8c0@brainiac> Message-ID: <003501c7b889$e19f6ae0$640fa8c0@BJSMain> Pardon me if I'm late in posting this and it has already been mentioned, but I'm answering emails just now getting to read them LOL Has anybody mentioned the movie "The Island" http://www.imdb.com/title/tt0399201/ ? I thought this was an excellent if flashy vision of a not so perfect advanced society, kinda scary that it could be just that feasible! How about these titles involving the use of advanced VR: Lawnmower Man http://www.imdb.com/title/tt0104692/ Existenz http://www.imdb.com/title/tt0120907/ The Cell http://www.imdb.com/title/tt0209958/ Ooo how about these classics: Dreamscape http://www.imdb.com/title/tt0087175/ Looker http://www.imdb.com/title/tt0082677/ Renegade robots: Runaway http://www.imdb.com/title/tt0088024/ Chopping Mall (don't laugh, it could happen in an overly automated society!) http://www.imdb.com/title/tt0090837/ (ok yea that's tacky :) ) Deadly Friend (ok but seriously, chips implanted into the brain) http://www.imdb.com/title/tt0090917/ The Iron Giant (love it) http://www.imdb.com/title/tt0129167/ Craziness: The Fifth Element http://www.imdb.com/title/tt0119116/ Dark City http://www.imdb.com/title/tt0118929/ The Adventures of Buckaroo Banzai Across the 8th Dimension (stop giggling muahahaahha) http://www.imdb.com/title/tt0086856/ -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Olga Bourlin Sent: Monday, July 23, 2007 2:23 PM To: ExI chat list Subject: Re: [ExI] Favorite ~H+ Movies. From: "John K Clark" To: "ExI chat list" Sent: Saturday, June 23, 2007 10:29 AM > The only transhuman movie that I think of that was dead accurate in > every respect was made nearly 40 years ago, it was called "The Forbin > Project" ... To top it off the AI now forces Forbin to help it design > a successor machine even more powerful than it is. Ah, yes ... I remember that well! The tagline for "The Forbin Project" could have been the same one they used for "Seconds" (a movie I saw as a teenager in 1966, which gave a slight boost to my perspective forevermore, as well as a new word to my vocabulary, i.e., "reborns"): "What Are Seconds?... The Answer May Be Too Terrifying For Words!" (Olga's note: Oh, yeah?
Well, those people obviously never heard of "Second Life.") more: "What if someone offered you the chance to begin again, with a new life that was organized to be exactly what you wanted it to be? That's what the organization offers some wealthy people..." (Olga's note: Ha! Ha! Those stinkin' greedy wealthy people who live dangerously and just can't seem to heed the moral lessons imbued in fairy tales such as The Tale of the Fisherman ...): http://www.imdb.com/title/tt0060955/ http://en.wikipedia.org/wiki/The_Tale_of_the_Fisherman_and_the_Fish ;) Olga _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Wed Jun 27 08:17:21 2007 From: pharos at gmail.com (BillK) Date: Wed, 27 Jun 2007 09:17:21 +0100 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: References: <102492.35814.qm@web37413.mail.mud.yahoo.com> Message-ID: On 6/27/07, Stathis Papaioannou wrote: > Surely bugging every house wouldn't be either more difficult or more > expensive than, say, a space program. I think the reason no-one has > tried it is evidence that there are some things which even dictators > fear would be too unpopular. > The East German Stasi secret police almost managed that. They had files on almost everyone in the country and had infiltrated almost every dissident group. One report complained that there were so many informers in dissident groups that they were giving the appearance that there were many more dissidents than actually existed. It was said that the Stasi were present at every dinner party held in the country. I think that the reason every house was not bugged was that at that time the technology was inadequate. When you think of people listening to people, there are physical limits as to how much eavesdropping you can do. Today, it can all be done by computers scanning for keywords and highlighting suspicious records for human attention. When more intelligent computing arrives which can understand human conversation, this will become even more effective. It is likely that either already, or very soon, all email will be scanned by the intelligence services. Analysing all the web traffic must be very high on their to-do list. Just wait, you ain't seen nothing yet! BillK From eugen at leitl.org Wed Jun 27 08:50:04 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 27 Jun 2007 10:50:04 +0200 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: References: <102492.35814.qm@web37413.mail.mud.yahoo.com> Message-ID: <20070627085004.GG7079@leitl.org> On Wed, Jun 27, 2007 at 09:17:21AM +0100, BillK wrote: > On 6/27/07, Stathis Papaioannou wrote: > > Surely bugging every house wouldn't be either more difficult or more > > expensive than, say, a space program. I think the reason no-one has Do you have a smartphone? Do you have a computer on broadband? Do you pay tolls, taxes for domestic security systems? Then you're paying for your own surveillance system. > > tried it is evidence that there are some things which even dictators > > fear would be too unpopular.
> > > > > The East German Stasi secret police almost managed that. The STASI approach doesn't scale, because it's based on people. Modern surveillance is driven by Moore's law. > They had files on almost everyone in the country and had infiltrated > almost every dissident group. One report complained that there were so > many informers in dissident groups that they were giving the > appearance that there were many more dissidents than actually existed. > It was said that the Stasi were present at every dinner party held in > the country. A classical case of quis custodiet. Not an issue with hardware, which is completely loyal to whoever controls it. > I think that the reason every house was not bugged was that at that > time the technology was inadequate. When you think of people listening > to people, there are physical limits as to how much eavesdropping you > can do. > > Today, it can all be done by computers scanning for keywords and > highlighting suspicious records for human attention. When more > intelligent computing arrives which can understand human conversation, > this will become even more effective. > > It is likely that either already, or very soon, all email will be > scanned by the intelligence services. Analysing all the web traffic > must be very high on their to-do list. Instead of speculating, look up the current capabilities in SIGINT. Off-the-shelf hardware like Narus can easily do that, and we know (as in: we don't have to guess) it's being used for exactly that. > Just wait, you ain't seen nothing yet! Yeah. As a first step, do not yield your biometrics to any entity, commercial or government. Switch off your mobile phone most of the time (remove the batteries if you really want to make sure). Do not let any online entity (Google, ahem) gather information about you. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From avantguardian2020 at yahoo.com Wed Jun 27 08:37:43 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 27 Jun 2007 01:37:43 -0700 (PDT) Subject: [ExI] http://www.randommutation.com/ In-Reply-To: <46814041.80601@gmail.com> Message-ID: <250175.50786.qm@web60523.mail.yahoo.com> --- Adolfo Javier De Unanue wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Two comments: > It is so sad that in this time Creationism is taking > strenght in many > "advanced" countries. And it is our duty fight back > this ignorance. Even > in this (randommutation.com) poor attack to > science... I agree. Passive ignorance brought about by laziness, neglect, or lack of curiousity is sad enough. However the active promulgation of ignorance and misinformation made from a pretense of authority is inexcusable. > So, well done with your example! I will publish your > response in my blog > if you allow me... Be my guest. :) Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Fussy? Opinionated? Impossible to please? Perfect. Join Yahoo!'s user panel and lay it on us. http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 From mmbutler at gmail.com Wed Jun 27 09:21:59 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Wed, 27 Jun 2007 02:21:59 -0700 Subject: [ExI] Favorite ~H+ Movies. 
In-Reply-To: <003501c7b889$e19f6ae0$640fa8c0@BJSMain> References: <000e01c7cd56$7a71fa40$6501a8c0@brainiac> <003501c7b889$e19f6ae0$640fa8c0@BJSMain> Message-ID: <7d79ed890706270221n71abf804x4ea7246c9b49d901@mail.gmail.com> 1) _Brainstorm_. I still need to visit Kitty Hawk... Some would find the end mystical. I will defend it as at least as >Hish as 2001, if not more so, and the entire work as amusingly research-lab-vs-marketing plausible, as compared to some other tech romps such as "Real Genius"... Meh. WARNING: the plot summary reachable from the below link contains TOTAL SPOILAGE, err SPOILATION, err SPOLIATION. Just take my word for it and watch the movie. Dang, I gotta see it again. http://www.imdb.com/title/tt0085271/ 2) And then there's _Contact_... Hmm? Well, at least the Luddites don't win... -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m "Piss off, you son of a bitch. Everything above where that plane hit is going to collapse, and it's going to take the whole building with it. I'm getting my people the fuck out of here." -- Rick Rescorla (R.I.P.), cell phone call, 9/11/2001 From eugen at leitl.org Wed Jun 27 11:42:14 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 27 Jun 2007 13:42:14 +0200 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: References: <102492.35814.qm@web37413.mail.mud.yahoo.com> Message-ID: <20070627114214.GL7079@leitl.org> On Wed, Jun 27, 2007 at 03:30:18PM +1000, Stathis Papaioannou wrote: > Surely bugging every house wouldn't be either more difficult or more > expensive than, say, a space program. I think the reason no-one has > tried it is evidence that there are some things which even dictators > fear would be too unpopular. Of course they would have to know in order for it to be unpopular http://news.com.com/FBI+taps+cell+phone+mic+as+eavesdropping+tool/2100-1029_3-6140191.html Firmware update over air is becoming standard, large local nonvolatile memories and utilities which record all audio to local flash are commercially available. Cell-side tracking is completely transparent. OCR of license plates is almost completely transparent (I usually flip the bird when I drive under a toll bridge). SIGINT is completely transparent. It is in principle quite doable to detect an executable transmission (software update, etc.), and alter it on the wire to embed a government trojan, which is then subsequently executed. Most people have no idea what one can do with the hardware they thought they own just because they bought it. No idea... From pharos at gmail.com Wed Jun 27 12:00:45 2007 From: pharos at gmail.com (BillK) Date: Wed, 27 Jun 2007 13:00:45 +0100 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <20070627114214.GL7079@leitl.org> References: <102492.35814.qm@web37413.mail.mud.yahoo.com> <20070627114214.GL7079@leitl.org> Message-ID: On 6/27/07, Eugen Leitl wrote: > Most people have no idea what one can do with the hardware they thought > they own just because they bought it. No idea... Yea. I often muse that one of the advantages of running on a five-year old processor is that I would immediately notice if it started running secret stuff in the background. Apart from the visual indicator of cpu usage, it would just 'feel' slower than normal. On these new duo and quadro processors, your pc could be running a whole virtual world in the background without you noticing.......
BillK From msd001 at gmail.com Wed Jun 27 12:28:57 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 27 Jun 2007 08:28:57 -0400 Subject: [ExI] Favorite ~H+ Movies. In-Reply-To: <7d79ed890706270221n71abf804x4ea7246c9b49d901@mail.gmail.com> References: <000e01c7cd56$7a71fa40$6501a8c0@brainiac> <003501c7b889$e19f6ae0$640fa8c0@BJSMain> <7d79ed890706270221n71abf804x4ea7246c9b49d901@mail.gmail.com> Message-ID: <62c14240706270528nd354facta63fde708f2a7e3c@mail.gmail.com> On 6/27/07, Michael M. Butler wrote: > 1) _Brainstorm_. Absolutely yes. Very ahead of it's time for 1983 From stathisp at gmail.com Wed Jun 27 12:31:46 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 27 Jun 2007 22:31:46 +1000 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <20070627114214.GL7079@leitl.org> References: <102492.35814.qm@web37413.mail.mud.yahoo.com> <20070627114214.GL7079@leitl.org> Message-ID: On 27/06/07, Eugen Leitl wrote: > On Wed, Jun 27, 2007 at 03:30:18PM +1000, Stathis Papaioannou wrote: > > > Surely bugging every house wouldn't be either more difficult or more > > expensive than, say, a space program. I think the reason no-one has > > tried it is evidence that there are some things which even dictators > > fear would be too unpopular. > > Of course they would have to know in order it to be unpopular Yes, if they could do it covertly it would be a different matter. It would be ironic if the citizens of East Germany put up with the Stasi but drew the line at Big Brother in their homes, only to let Big Brother in once the Stasi were gone. I realise this may turn out to be more of a dirge than an argument for privacy... -- Stathis Papaioannou From emohamad at gmail.com Wed Jun 27 12:43:00 2007 From: emohamad at gmail.com (Elaa Mohamad) Date: Wed, 27 Jun 2007 14:43:00 +0200 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 34 In-Reply-To: References: Message-ID: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> Eugen Leitl wrote: > > Just wait, you ain't seen nothing yet! > > Yeah. As a first step, do not yield your biometrics to any entity, > commercial or government. Switch off your mobile phone most of the > time (remove the batteries if you really want to make sure). > Do not let any online entity (Google, ahem) gather information about you. What good would that do? If someone/some thing really wants to learn all the info there is about me, they'll find a way. Google might not have your personal info, but some database somewhere in the system surely has. In such a high tech world as it is today, I don't think there is a truly secure way to escape from "being followed" or "being bugged" - if indeed someone wanted to follow you or listen to your phone conversations :) Eli From pharos at gmail.com Wed Jun 27 12:53:05 2007 From: pharos at gmail.com (BillK) Date: Wed, 27 Jun 2007 13:53:05 +0100 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 34 In-Reply-To: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> References: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> Message-ID: On 6/27/07, Elaa Mohamad wrote: > What good would that do? If someone/some thing really wants to learn > all the info there is about me, they'll find a way. Google might not > have your personal info, but some database somewhere in the system > surely has. 
In such a high tech world as it is today, I don't think > there is a truly secure way to escape from "being followed" or "being > bugged" - if indeed someone wanted to follow you or listen to your > phone conversations :) > Eli, when you reply to Digest messages, it would be helpful if you would change the Subject Header to the appropriate thread that you are replying to. Most email clients, like Gmail, try to group together messages on the same subject and a meaningless header like 'Re: [ExI] extropy-chat Digest, Vol 45, Issue 34' is useless. Some readers only have time to read threads on subjects that interest them and a meaningless header means that they would just delete your message. Best wishes, BillK From eugen at leitl.org Wed Jun 27 13:38:46 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 27 Jun 2007 15:38:46 +0200 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 34 In-Reply-To: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> References: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> Message-ID: <20070627133846.GO7079@leitl.org> On Wed, Jun 27, 2007 at 02:43:00PM +0200, Elaa Mohamad wrote: > What good would that do? If someone/some thing really wants to learn If you're consistent about it, you'll retain your privacy. (Notice that your life may well be literally hanging on it at some point in your future.) Other personal information is not that critical, but still important. > all the info there is about me, they'll find a way. Google might not There are simple ways to hide secrets, some of them even duress/tamper-proof. But in the case of your retina and your fingerprints, DNA included, it takes specific readers (manipulating someone to yield a clean fingerprint without them noticing is not easy), DNA is similar. I would strongly recommend that you do not give that information to anybody, and at least make sure that information is securely destroyed after use, if you have to use it for anything critical (medical or legal use). > have your personal info, but some database somewhere in the system Really? Some database somewhere "in the system" has my biometrics? I don't think so; and I'd like to keep it that way. > surely has. In such a high tech world as it is today, I don't think > there is a truly secure way to escape from "being followed" or "being Of course there is. It's a bit of work, but it can be done. Even provably securely, by using a one-time pad. > bugged" - if indeed someone wanted to follow you or listen to your > phone conversations :) The point is that most smartphone users do not realize they're toting a remotely-controlled wireless bug with positioning info approaching GPS fixes, camouflaged as a consumer device. Most people also do not realize what consumer cameras with face recognition and toll bridges with license plate OCR and RFIDs mean, mid-term. Nobody would have believed George Orwell if he included technology like this. Any totalitarian system using surveillance and enforcement as above is indefinitely metastable. Once you get there, there's no easy way to get out of it again.
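[A note for readers on the one-time pad Eugen invokes: it is the textbook case of provable, information-theoretic secrecy, and it is tiny in code. The Python sketch below is illustrative only; os.urandom stands in for the truly random key source the proof actually requires, and the guarantee also requires that the key be at least as long as the message, used exactly once, and then destroyed.]

import os

def otp_encrypt(plaintext):
    # XOR the message with a fresh random key of equal length.
    # The security proof assumes the key is truly random, used once,
    # and kept secret; os.urandom is only a stand-in for such a source.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext, key):
    # XOR with the same key recovers the message: (p ^ k) ^ k == p.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"meet at the usual place"
ciphertext, key = otp_encrypt(message)
assert otp_decrypt(ciphertext, key) == message

[The catch, and presumably why Eugen calls it "a bit of work", is key management: both parties need matching pads as long as all the traffic they will ever exchange, distributed out of band and never reused.]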
From pharos at gmail.com Wed Jun 27 14:18:06 2007 From: pharos at gmail.com (BillK) Date: Wed, 27 Jun 2007 15:18:06 +0100 Subject: [ExI] extropy-chat Digest, Vol 45, Issue 34 In-Reply-To: <20070627133846.GO7079@leitl.org> References: <24f36f410706270543s109eccdbk43a707a94bbb19a6@mail.gmail.com> <20070627133846.GO7079@leitl.org> Message-ID: On 6/27/07, Eugen Leitl wrote: > There are simple ways to hide secrets, some of them even duress/tamper-proof. > But in the case of your retina and your fingerprints, DNA included, it takes > specific readers (manipulating someone to yield a clean fingerprint without > them noticing is not easy), DNA is similiar. > > I would strongly recommend that you do not give that information to > anybody, and at least make sure that information is securely destroyed after use, > if you have to use it for anything critical (medical or legal use). In the UK, if you are arrested by the police, your DNA and fingerprints will be legally taken and stored in the database. Note: Even if you are later released without charge, or charged and found not guilty, your data will still be retained in the database. So in the UK, you must never annoy the police sufficient to cause arrest. > > Really? Some database somewhere "in the system" has my biometrics? > I don't think so; and I'd like to keep it that way. > > The point is that most smartphone users do not realize they're > toting a remotely-controlled wireless bug with positioning info > approaching GPS fixes, camouflaging as a consumer device. > > Most people also do not realize what consumer cameras with > face recognition and toll bridges with license plate OCR > and RFIDs mean, mid-term. Nobody would have believed George Orwell > if he included technology like this. > Most of the thousands of surveillance cameras in Britain are not being watched by humans. And if they are, the humans are probably more interested in reading the newspaper open over the keyboard. The cameras do not provide any security to the populace. (The traffic cameras are an additional tax raising facility). The camera recordings are usually investigated *after* an event has occurred. Only on rare occasions are they used live to guide police, like when having a purge on pickpockets in Oxford Street, or late night city centre violence at the weekend. > Any totalitarian system using surveillance and enforcement as > above is indefinitely metastable. Once you get there, there's > no easy way to get out of it again. > Hey, the populace want to be watched and tracked! If you don't, you must be a criminal or a terrorist - we'd better keep a close eye on you. BillK From spike66 at comcast.net Wed Jun 27 14:20:24 2007 From: spike66 at comcast.net (spike) Date: Wed, 27 Jun 2007 07:20:24 -0700 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <002a01c7b886$7f213680$640fa8c0@BJSMain> Message-ID: <200706271420.l5REKQB0018318@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of Brian J. Shores ... ? >I may have mentioned to go see S1m0ne at one point, its one of my favorite films... Keep in mind that S1m0ne isn't so much a transhumanist film as it is a fantasy/comedy. Consider the non-transhumanist fantasy, Field of Dreams. At one point, movie makers must have decided to write a film for adult men. Not teenage men, which would have car chases and helicopter crashes, but men over 40. Lets have sports, a man doing something great, following his dream, a wife that was not only understanding but was also gorgeous and supportive. 
The result was Field of Dreams. Excellent movie, not transhumanist. A crew of script writers perhaps decided to make a filmmaker's fantasy movie. They may have discussed the biggest problems of filmmakers: the actors and especially the actresses were difficult, prima donnas all, wouldn't just act the script, wouldn't do nudity, etc. So what if their parts could be automated? Computers do as they are told. Then the remaining human actresses would drop that attitude forthwith, bwaaahahahahaaaaa. Then they could throw in something about how maddeningly stupid is the general movie audience, haaaahahahahaaaa. Then a gag about the Antarctic ozone hole, heeheheeee. Include a Hollywood tradition of making fun of conservatives, harharrr. The result was S1m0ne. I thought it was a hoot. spike From jef at jefallbright.net Wed Jun 27 14:31:49 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 07:31:49 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> References: <842821.53002.qm@web37410.mail.mud.yahoo.com> <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> Message-ID: On 6/26/07, Mike Dougherty wrote: > > I wonder how you define "mystical" Sorry, not interested in defining it here and now, was just communicating my intention to Gordon so he might go along with it rather than raise any of countless possible points of ambiguity or contention. My wording here was intentionally less crisp than usual, in order to frame this as informal agreement-seeking in preparation for (possibly ) highlighting a difference in our thinking. You see, we have trust issues that I would like to overcome. > > (From Wikipedia) The Lorenz Attractor: "From a technical standpoint, > the system is nonlinear, three-dimensional and deterministic" It > models "chaotic flow" > > Suppose two people are represented as the fixed points, and the third > point (the one which traces the graph) is their mutual > understanding/awareness of each other over time. That either (person) > is ever able to appreciate the graph at all is [imo] approaching as > much description of "mystical" as any other use of the word. > > Of course the most concrete idea here is that of the Lorenz Attractor. > My concept of it is probably different than yours (the reader) as > much as either of us has a different appreciation than the author(s) > of that Wikipedia page. If after those differences are resolved, the > point I attempted to make may be nearly lost due to so much context > dependency on my own POV. > > I will agree with what I thought your direction was - that objectively > isolating the starting conditions and exposure to physical processes > can lead to a deterministic prediction of state. No, determinism does not imply predictability of state within a system of this complexity, and that wasn't my objective. Thanks for playing! - Jef From spike66 at comcast.net Wed Jun 27 14:50:12 2007 From: spike66 at comcast.net (spike) Date: Wed, 27 Jun 2007 07:50:12 -0700 Subject: [ExI] purging pickpockets In-Reply-To: Message-ID: <200706271455.l5REtRsG009260@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of BillK ... > like when having a purge on pickpockets in Oxford Street... 
Bill we are having a similar purge over here: http://www.foxnews.com/story/0,2933,286847,00.html spike From gts_2000 at yahoo.com Wed Jun 27 14:35:48 2007 From: gts_2000 at yahoo.com (gts) Date: Wed, 27 Jun 2007 10:35:48 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: On Tue, 26 Jun 2007 17:21:14 -0400, Jef Allbright wrote: > I'm going in this direction only far enough to test whether you > and I both agree that the person's state is entirely determined in > physical terms, nothing essentially mystical in the background. Yes, we agree. I'm not referring to anything mystical. -gts From jef at jefallbright.net Wed Jun 27 16:35:46 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 09:35:46 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: On 6/27/07, gts wrote: > > I'm going in this direction only far enough to test whether you > > and I both agree that the person's state is entirely determined in > > physical terms, nothing essentially mystical in the background. > > Yes, we agree. I'm not referring to anything mystical. Then can we agree that these values and supporting sub-values expressing the features of their physical configuration (love of dancing, strength, coordination, equilinear legs, ...) informing any observer's model of their will, and therefore the essence of the person, ALSO determine the behavior (within context, of course) of the agent (person, dog, robot...) WHETHER OR NOT they are consciously aware of these values? If you agree with the above, then isn't it clear that "Will" is but one observable aspect of any agent, particularly key in the setting of contemporary social interaction, but in terms of effectively modeling an agent and thus predicting their behavior, isn't it more fundamentally and comprehensively true that we model not observed "Will", but observed values? Before you respond, I would ask you to consider the page describing agency at , in particular, the statement "In this it is subtly distinct from the concept of free will, the philosophical doctrine that our choices are not the product of causal chains, but are significantly free or undetermined. Human agency entails the uncontroversial, weaker claim that humans do in fact make decisions and enact them on the world." [For Thomas, the above statement indicates why thinking in terms of agency is more coherent and extensible than thinking in terms of will. Will is more of a special case.] I appreciate the metaphysics of Schopenhauer, more coherent than that of Kant who preceded him, but lacking the intellectual contributions of Darwin, who published his Origins of Species some 40 years later. Schopenhauer's thinking contained yet the teleological assumption of an essential Will (or Self) driving all meaningful action. Teleological thinking continues to dominate our thinking; it pervades our language, our culture, and our concept of self. But the closer we look, the more we can't find it. An agent acts on its environment so as to bring its perceived environment closer to its internal model of a preferred world (its values, not necessarily consciously apprehended.) The agent affects its world and the world affects the agent and change occurs as a function of reality. Consciousness (and Will) goes along for the ride, not because it's essential, but because it's value-added. 
A parent can know the will of their child better than the child himself, because the parent has a more complete and accurate model of the child's values that determine both the will and the deeper behavior of the child. I await your thoughtful comments. - Jef From mbb386 at main.nc.us Wed Jun 27 16:14:42 2007 From: mbb386 at main.nc.us (MB) Date: Wed, 27 Jun 2007 12:14:42 -0400 (EDT) Subject: [ExI] purging pickpockets In-Reply-To: <200706271455.l5REtRsG009260@andromeda.ziaspace.com> References: <200706271455.l5REtRsG009260@andromeda.ziaspace.com> Message-ID: <37782.72.236.103.163.1182960882.squirrel@main.nc.us> > > Bill we are having a similar purge over here: > > http://www.foxnews.com/story/0,2933,286847,00.html > > spike > > Whooo! Good for the ex-marine! :) That *should* be a lesson learned. Wonder if it will be. Some folks don't learn easily. Regards, MB From austriaaugust at yahoo.com Wed Jun 27 18:20:22 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 27 Jun 2007 11:20:22 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: Message-ID: <496411.80526.qm@web37415.mail.mud.yahoo.com> Yeah, the cost will continue to get lower; but before today, the cost would still have been pretty significant. Placing and monitoring a "bug" in millions of homes is gonna cost some cheddar. And dictators often don't rule the richest of countries. Another thing that reduces the "benefit" side is that any conspirators could easily leave the house/equipment in order to plot. Another factor is that foreign countries like the U.S. exert external influence (I'm not judging merit here, just stating a fact). Definitely the important point though is that surveillance tech is only going to get cheaper and easier, and the Cost:Benefit will continue to look more attractive to certain rulers. IMO, the "best" solution to this whole conundrum is to get as many rational (and otherwise) people as possible, off this damn rock. Although gov's have strong influence here too, of course. What's the feasibility of creating a new self-supporting nation outside of this planet? I'm being serious. Will this ever be realistic or even possible? And also build a Friendly AI or a "Friendly" B.C.I., ASAP. Sincerely, Jeffrey Herrlich --- Stathis Papaioannou wrote: > On 27/06/07, A B wrote: > > "Why?" > > > > Cost. Cost:Benefit is too high. If you have enough > big > > guns and big bodyguards, then it doesn't really > matter > > if most of your impoverished population doesn't > like > > you. > > Surely bugging every house wouldn't be either more > difficult or more > expensive than, say, a space program. I think the > reason no-one has > tried it is evidence that there are some things > which even dictators > fear would be too unpopular. > > > > -- > Stathis Papaioannou > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Food fight? Enjoy some healthy debate in the Yahoo! Answers Food & Drink Q&A. 
http://answers.yahoo.com/dir/?link=list&sid=396545367 From gts_2000 at yahoo.com Wed Jun 27 17:46:26 2007 From: gts_2000 at yahoo.com (gts) Date: Wed, 27 Jun 2007 13:46:26 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: <62c14240706261818i4b84a4c0mc0091ba768417ec1@mail.gmail.com> References: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> <62c14240706261818i4b84a4c0mc0091ba768417ec1@mail.gmail.com> Message-ID: Jef Allbright wrote: > Schopenhauer's thinking contained yet the teleological assumption of > an essential Will (or Self) driving all meaningful action. Schopenhauer wrote about the will in four different senses or manifestations. The phenomenal will is the sense relevant to this discussion. Apparently you don't know or understand what he meant by it. The phenomenal will is merely the ordinary type of will that we refer to in the sentence, "It is his will to do x." Nothing mystical or teleological about it. In the 'will as essence' thread, Damien posted some interesting empirical results which corroborate the idea that the will drives the intellect. It seems that volitional processes (processes of the will) start in the brain a fraction of a second before the conscious intellect becomes aware of them, suggesting as per Schop. that the will is primary. The will drives both the mind and the body: people think that which they will to think, and do that which they will to do. You might disagree with something I've written, and in that circumstance I am quite correct in replying that "you will think whatever you want to think". People, including you, believe only what they want to believe. Most of what we will is willed unconsciously. We will to breathe, for example, but most of our breaths are taken without conscious awareness. > I await your thoughtful comments. Maybe more when I get a chance. In meantime I think we've fallen off topic for this thread. (I'm moving this message to the 'agent based identity' thread). -gts From jef at jefallbright.net Wed Jun 27 18:47:21 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 11:47:21 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: <3C2FE62B-73E5-44CB-813E-012ED9B9B1F7@thomasoliver.net> <62c14240706261818i4b84a4c0mc0091ba768417ec1@mail.gmail.com> Message-ID: On 6/27/07, gts wrote: > Schopenhauer wrote about the will in four different senses or > manifestations. The phenomenal will is the sense relevant to this > discussion. Apparently you don't know or understand what he meant by it. > The will drives both the mind and the body: people think that which they > will to think, and do that which they will to do. Oh well, thanks. Perhaps I have learned something from this exchange. - Jef From jef at jefallbright.net Wed Jun 27 18:43:25 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 11:43:25 -0700 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <20070627085004.GG7079@leitl.org> References: <102492.35814.qm@web37413.mail.mud.yahoo.com> <20070627085004.GG7079@leitl.org> Message-ID: On 6/27/07, Eugen Leitl wrote: > Yeah. As a first step, do not yield your biometrics to any entity, > commercial or government. Switch off your mobile phone most of the > time (remove the batteries if you really want to make sure). > Do not let any online entity (Google, ahem) gather information about you. 
Eugen, are you suggesting that this is a transition strategy until there are better solutions, or possibly until the Singularity and all bets are off anyway? How do you account for the relatively higher quality information you inadvertently provide by the gaps? Since I don't at this time bet on a Singularity solving any of my problems, and I think the issue of transparency versus privacy is going to continue tipping toward transparency, I think the "solution" to people having information about you is to intentionally publish as much public information about yourself as practical. The risk is not too much information, but of asymmetric exploitation of information. - Jef From eugen at leitl.org Wed Jun 27 19:30:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 27 Jun 2007 21:30:19 +0200 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: References: <102492.35814.qm@web37413.mail.mud.yahoo.com> <20070627085004.GG7079@leitl.org> Message-ID: <20070627193019.GD7079@leitl.org> On Wed, Jun 27, 2007 at 11:43:25AM -0700, Jef Allbright wrote: > Eugen, are you suggesting that this is a transition strategy until > there are better solutions, or possibly until the Singularity and all I don't really include the Singularity in my future models. Both because it's incalculable by definition, and because I don't expect it (very much like the Spanish Inquisition) to land during my 40-50 years of natural biological lifespan left (assuming the supplements and the beer won't get me first). I fart in Singularity's general direction. > bets are off anyway? How do you account for the relatively higher > quality information you inadvertently provide by the gaps? You cannot remain off the radar forever (a bit ironic, coming from me) but you can really limit your world-visible information footprint. > Since I don't at this time bet on a Singularity solving any of my > problems, and I think the issue of transparency versus privacy is > going to continue tipping toward transparency, I think the "solution" The problem with transparency is that it only applies to some people. It's very much a two-class society. > to people having information about you is to intentionally publish as > much public information about yourself as practical. The risk is not > too much information, but of asymmetric exploitation of information. You can assume the latter as a given. (It was here that Brin received his well-deserved concerto of hisses and boos on cypherpunks@, at which point he retreated to lick his wounds and sulk in the corner). If there's no information to exploit, you're not giving anyone any rope to hang you with. From thomas at thomasoliver.net Wed Jun 27 19:36:34 2007 From: thomas at thomasoliver.net (Thomas) Date: Wed, 27 Jun 2007 12:36:34 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID: <3AC703DA-F965-47C3-B030-BE5EE179F855@thomasoliver.net> Mike Dougherty wrote: > >> On 6/26/07, Thomas wrote: >> >> > What makes a vague >> > abstract entity more extensible and coherent than a concrete >> expression of >> > will as the source of personal identity? >> > > What makes an abstract [anything] more extensible ... than a concrete > [anything] ? > > I think the answer is: the definition(s) of 'abstract,' 'concrete' and > 'extensible' > > Sorry to be so blunt as I jump into a conversation threaded with such > "social niceties" as a meta-discourse on "social niceties" :) Quite alright. I had that coming.
I think I was feeling seasick on waves of floating abstraction. But in the context of personal identity I think we need to limit the primary entity to a single observer locus and require agency at conceptual capacity (self aware rationality -- able to discriminate viewpoints and integrate them). We speak here of the "more coherent" abstract source of personal identity. I take this to mean consistent with (not just anything, but with) the reality of personal identity. I can't say that "will as essence" satisfies the constraints I've proposed, especially if it admits arbitrary irrationality. I think of will as agency. So, thanks, Mike for provoking me into doing my own thinking. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From fortean1 at mindspring.com Mon Jun 25 04:15:30 2007 From: fortean1 at mindspring.com (Terry Colvin) Date: Sun, 24 Jun 2007 21:15:30 -0700 (GMT-07:00) Subject: [ExI] FWD (SK) "At the end of the day, you've given 110 per cent" Message-ID: <6140334.1182744931508.JavaMail.root@mswamui-thinleaf.atl.sa.earthlink.net> -----Forwarded Message----- > >*Eager to preserve the English language against a >rising tide of nonsense, we asked readers to >compose a piece of prose crammed with as many >infuriating phrases as possible. Christopher >Howse is amazed and amused by the torrent of replies...* > >< http://tinyurl.com/2g89a5 > > >The Telegraph, Longon >June 14, 2006 >Christopher Howse > >Hundreds of readers took a few minutes off from >shouting at the television to send an entry to >our Infuriating Phrases Competition. The idea was >to come up with a paragraph or two, no longer >than 150 words, packed with as many infuriating words and phrases as possible. > >Judging by the avalanche of phrases shovelled by >the spadeful into your inventively annoying >prose, many readers must be constantly on the >boil at hearing our language mutilated on the >radio, television, in shops and cafes, by >politicians and pundits, and, perhaps worst of >all, by business management executives. > >Infuriating as the language was, the entries were >very funny. "When it comes to abuse of English, >I've been there, done that, got the T-shirt. Do >you know what I mean?" Jackie Rowe's entry >started, worryingly. "Proactive, self-starting >facilitator required to empower cohorts of >students and enable them to access the >curriculum," said part of an advertisement for a teacher sent by Brian Smith. > >"Hi, there," began Janet Thomas's entry, >annoyingly, "How are you guys doing? Good, I >hope. I totally see where you are coming from. At >this moment in time it's not clear what is >happening with our language. I'm often like, hello? We are in the UK here?" > >"Our profitability is on a downward slope," wrote >Peter Seaton, in the authentic voice of >unthinking management, "and we must examine all >avenues to flush out unnecessary costs. Please go >away, sharpen your pencil and have a rethink." > >Congratulations to the ten shown here and they >each receive a signed copy of She Literally >Exploded: The Daily Telegraph Infuriating >Phrasebook by Christopher Howse and Richard >Preston (published by Constable at ?5.99 and >available at all good bookshops, or from >Telegraph Books, plus 99p p&p, on 0870 428 4112 or at books.telegraph.co.uk). > >*Barry Moyse* > >The Trust are committed to sharing best practice >and passionate about facilitating appropriate >skills through workshops and learning events >around these issues across the piece. 
Monitoring >using a web-based toolkit will empower users to >drill down to assess local needs interactively. >Stakeholders will be fully engaged in a >consultation exercise breaking down barriers, >pushing the envelope towards a seamless, one-stop >shop service. Safety and value for money will be >paramount so we are investing a funding stream to >put in place a supportive multidisciplinary team >to head up this exciting upcoming project, >provide local ownership and robust clinical >governance. Doing nothing is not an option: >subject to independent review lessons will be >learnt, accountability made transparent to >commissioners, providers, and service-users to >ensure that this tragedy will never happen again. > >*Mrs J. M. Johnson* > >To be honest with you, I'm pressurised 24/7. I'm >literally in pieces. I surfed the net and sourced >a top-dollar lifestyle guru, and he's working >with my partner and I, prioritising issues so >that we can team up and address them - know what I mean? > >There's things that have to go on the back >burner, so that we can jet away to the sun and >chill to the max. A few drinks, a few laughs and >I'll be firing on all cylinders, like I say. >She'll shop until she drops - right? - but if >that's what the little lady wants, that's what >she'll get. We'll soak up the sun, go with the >flow, and come back bronzed and fit. > >Hopefully, by Christmas, we'll be sorted, and >ready to party, party, party big-time - and spend >some quality time with the kids, with the turkey and all the trimmings. > >*Andrew Macintosh and Mary Burdis* > >"At the end of the day," continued Simon, across >a table of Eat's Now!, his favourite nutritional >sustenance solutions establishment, "running >things up the flagpole is essential to ensuring >we are all singing from the same hymn sheet, so >that the challenges of the present economic >climate are met with emotional intelligence." He >looked up to check Michelle was still listening. >"Are you taking all this on board?" > >"Confirmed." > >The nutritional conveyance facilitator arrived. > >"Chargrilled chicken, flash-fried vegetable >compote and sun-dried tomatoes. Twice." > >"Re-hydration, Sir?" > >"Evian." Simon turned back to Michelle. "I'd like >to run this by you." He pulled out a crumpled >piece of paper. "Non-Plus-Ultra >Surplus-to-Requirements Collection Solutions >requires executive disposal facilitator to >supervise own ring-fenced area of operations, >apply in first instance blah blah blah. Thought-share?" > >"Cutting edge, actually. Literally." > >Simon smiled: "I always like to give a 110 per cent." > >End of story. > >*Nick Godfrey* > >I hear what you're saying but, with all due >respect, it's not exactly rocket science. >Basically, at the end of the day, the fact of the >matter is you have got to be able to tick all the >boxes. It's not the end of the world, but, to be >perfectly honest with you, when push comes to >shove, you don't want to be literally stuck >between a rock and a hard place. Going forward we >need to be singing from the same songsheet but >you can't see the wood from the trees. Naturally >hindsight is 20/20 vision and you have to take >the rough with the smooth before proceeding >onwards and upwards. The bottom line is you wear >your heart on your sleeve and, when all is said >and done, this is all part and parcel of the >ongoing bigger picture. C'est la vie (if you know what I mean). 
> >*Mr Les Bolton* > >Hi, Basically, I was gobsmacked to have the >opportunity to take on board your suggestion that >I pen some lines? On the ground, you know, there >are basically tons of dudes using English >wrongly, but, basically, my single criteria is to >expose language thats not fit for purpose? I >guess, you know, thats what u r trying to do with >this competition, yeah? Wicked. Basically, theres >literally tons of words not used properly? But, >you know, at this particular moment in time I >want to look forward, not back, so we can move >forward together? My particular skool was gr8, >with teachers on the ground doing a brill job. >Thats how come my English is so good? Kid's today >basically ain't got a chance in hell? Untill we >get the teachers we deserve the problem is >basically a no-hoper. Cool. Basically, thats it, basically. ATB, Mr Les Bolton. > >*Veterator* > >The report into the crash said if there hadn't >been an error on behalf of the lorry driver, less >people would have been affected.When asked to >explain, the driver said "No problem. I myself >personally think there's no worries at this >moment in time. The amount of people involved was >not a lot. Whatever. Have a nice day." > >His wife said "Oh my God ! Fantastic ! I'll >always be there for him and, hey, I love him to >bits and stuff like that. The view I had was >amazing, but that's the way the cookie crumbles - the rest is history. > >"The really really important thing is that we all >sing from the same hymn sheet to deliver road >safety to the people of this country.We are all >guilty - see where I'm coming from?" > >*Irene and Andy Mitchell* > >Retirement has required a rigorous and robust >reassessment of our core competencies, visions >and values. Leveraged away from our >work-stations, a raft of financial and strategic >options underpins and overarches the reinvention >of our lifestyle mission statement. > >This has not been a seamless transition and does >not locate us in a win-win situation per se. >Restricted income generation has forced a >realignment of our cost base, necessitating >in-depth fire-fighting in order to deliver best value. > >A re-evaluation of our methodologies has led to a >sea-change. Tasked with delivering sustainable >growth in our external horticultural environment, >a work-in-progress encompasses benchmarking the >broccoli, risk-assessing the radishes and >applying change management principles to the >diverse peripherals on the compost heap. > >Our draft self-assessment analysis contains >transparent aims and objectives to be brigaded on >the terrace, applying joined-up thinking to >transparently piloting the rioja, and developing >synergies to enhance our contentment parameters. > >*B.D.Farrant * > >I don't do competitions. But, at the end of the >day, little ol' moi just couldn't resist this >challenge. Actually, there's a lot of weather >about 2d and moseying down to the shops has >soooooo lost its appeal. As a result, there's a >window here to think outside the box. Yes, you've >got it, it's a blue sky thinking moment. I mean, >it's not rocket science, you know. True enough, >but let's face facts here, this could so be my >conduit to a whole new ball game. Awesome, or >what? That's if the judges don't, like, move the goal posts. > >My better half said "Give it large, kiddo. Give >it some wellie. You know you want to!" Well, game >on, I thought, like you do. This one's in the bag. > >*Steve Chrismatkin* > >*personnel chairs cuts meeting* > >hi! 
many thanks for giving up your precious lunch >break entitlement period. we just need to share a >few positive thoughts in a negatively challenging >situation where, due to financial restructuring, >the scenario exists in which, owing to >underperformance in target areas of our core >business, a meltdown can be envisaged: even key >personnel may have to be let go. Our own >cost/benefit analysis of the ongoing target >shortfall is that this predicament needs to be >addressed proactively rather than focussed on >bottom line rigidity which denies the social >capital invested by our outreach commitment >option facilitated by all the other departments. >Solution: we have to push envelopes & vice versa. >HRD exists to effectively appraise [sic] assess & >recommend structural initiatives that empower >staff & operatives to maximise self fulfilment to >achieve targets along a steep learning curve >before the tipping point is reached in which a >raft of measures are [sic] overturned by >corporate malfunction. An online update on a >daily basis will follow. Muchas gracias, Steve Chrismatkin > >*R.G. Banks* > >Let's stop obsessing and get down to the nitty >gritty of fleshing out the gender issues. John. >I'm wanting to hear inclusiveness and ethnicity >here. A raft of blue sky thinking to challenge >accepted orthodoxies. The bottom line is about >empowerment and at the end of the day getting up >to speed working 24/7 towards a coalition of >understanding through best practice. This can >only be fully achieved if the glass ceiling, in >inverted commas, is transformed into a level >playing field where the goal posts cannot be >moved without leaving a substantial carbon >footprint which inevitably would consign us all >to the expediency of existing between a rock and >a hard place. We must pick up the ball and run >because we can no longer wait for the smoking gun >of the next denial of service attack to consign >us all to the wheely bin of history. Terry W. Colvin Sierra Vista, Arizona From jef at jefallbright.net Wed Jun 27 19:59:24 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 12:59:24 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: <3AC703DA-F965-47C3-B030-BE5EE179F855@thomasoliver.net> References: <3AC703DA-F965-47C3-B030-BE5EE179F855@thomasoliver.net> Message-ID: On 6/27/07, Thomas wrote: > I think of will as agency. Consider a newborn infant. It functions on its own behalf, at least enough to find and suck on a nipple, but it yet lacks a model of itself as a self. Is there agency? It certainly acts on its environment relative to its values so there is agency, not yet in its own mind, but assigned in the minds of other observers. Is there a will? Whose will would be exercised, when there is not yet a model of self? - Jef From gts_2000 at yahoo.com Wed Jun 27 20:30:25 2007 From: gts_2000 at yahoo.com (gts) Date: Wed, 27 Jun 2007 16:30:25 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: <3AC703DA-F965-47C3-B030-BE5EE179F855@thomasoliver.net> Message-ID: On Wed, 27 Jun 2007 15:59:24 -0400, Jef Allbright wrote: > On 6/27/07, Thomas wrote: > >> I think of will as agency. > > Consider a newborn infant. It functions on its own behalf, at least > enough to find and suck on a nipple, but it yet lacks a model of > itself as a self. > > Is there agency? We don't know if it lacks a model of itself as self. But even assuming it does not, there is both will and agency. 
The newborn wills to suck a nipple, and does so, acting as the agent of its own will. Is it developed enough to be conscious of itself and its actions in an abstract way? Does it think, "Here I go again, sucking on a nipple according to my will, just like I did yesterday"? Probably not, but that is beside the point. -gts From thomas at thomasoliver.net Wed Jun 27 20:39:16 2007 From: thomas at thomasoliver.net (Thomas) Date: Wed, 27 Jun 2007 13:39:16 -0700 Subject: [ExI] Minds, Personalities and Love In-Reply-To: References: Message-ID: Jef Allbright wrote: > [...] all of one's values appear to be > physically determined, by one's genetics, the environment, and the > [who?] > complex chain of physical interaction leading up to any particular > moment when one's values are queried, [...] Only the default values. > [...] > > I'm just being thorough here, not tricky. Would not thoroughness entail discussion of the role of self determined values? -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From gts_2000 at yahoo.com Wed Jun 27 21:17:42 2007 From: gts_2000 at yahoo.com (gts) Date: Wed, 27 Jun 2007 17:17:42 -0400 Subject: [ExI] agency-based personal identity In-Reply-To: References: <3AC703DA-F965-47C3-B030-BE5EE179F855@thomasoliver.net> Message-ID: On Wed, 27 Jun 2007 15:59:24 -0400, Jef Allbright wrote: > [the newborn] certainly acts on its environment relative to its values > so there > is agency, not yet in its own mind, but assigned in the minds of other > observers. If the newborn is not the agent of its own will and thus a unique identity unto itself, as I suggest is the case, then who is it exactly who is sucking on the nipple if the mother is unconscious and if there are no other observers? There is still a person there in the absence of observers, yes? I say the newborn's identity arises not from the thoughts of others, or even from its own thoughts. Rather, it arises from its will to live, as evidenced by its will to nurse. -gts From austriaaugust at yahoo.com Wed Jun 27 22:02:51 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 27 Jun 2007 15:02:51 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <496411.80526.qm@web37415.mail.mud.yahoo.com> Message-ID: <495104.47710.qm@web37402.mail.mud.yahoo.com> Although, short of moving into space, I would "prefer" a Brin type of surveillance over the short term (compared to Orwellian) - if such a system were possible - of which I'm skeptical. I would rather be watched 24/7 by my peers for five years than be dead, permanently. Of course ideally, no surveillance would be necessary or desirable. Sadly, this isn't an ideal Universe. Is the "Brin version" not the dominant system being advocated? Sincerely, Jeffrey Herrlich --- A B wrote: > Yeah, the cost will continue to get lower; but > before > today, the cost would still have been pretty > significant. Placing and monitoring a "bug" in > millions of homes is gonna cost some cheddar. And > dictators often don't rule the richest of countries. > Another thing that reduces the "benefit" side is > that > any conspirators could easily leave the > house/equipment in order to plot. Another factor is > that foreign countries like the U.S. exert external > influence (I'm not judging merit here, just stating > a > fact). 
> Definitely the important point though is that surveillance tech is only going to get cheaper and easier, and the Cost:Benefit will continue to look more attractive to certain rulers.
>
> IMO, the "best" solution to this whole conundrum is to get as many rational (and otherwise) people as possible, off this damn rock. Although gov's have strong influence here too, of course. What's the feasibility of creating a new self-supporting nation outside of this planet? I'm being serious. Will this ever be realistic or even possible?
>
> And also build a Friendly AI or a "Friendly" B.C.I., ASAP.
>
> Sincerely,
>
> Jeffrey Herrlich

From msd001 at gmail.com Wed Jun 27 22:37:53 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 27 Jun 2007 18:37:53 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> Message-ID: <62c14240706271537i65272b44va77982bfd6cf297@mail.gmail.com>

On 6/27/07, Jef Allbright wrote:
> No, determinism does not imply predictability of state within a system of this complexity, and that wasn't my objective.
>
> Thanks for playing!

That's a bit more compressed than I am able to interpret clearly - is that a brusque brush off because my comment was not on-topic enough to warrant more than your response, or are you choosing to simply focus your attention elsewhere? ... 'cuz if you were interested in a conversation with only one other participant, why send emails through the list distribution?

From jef at jefallbright.net Wed Jun 27 22:46:28 2007 From: jef at jefallbright.net (Jef Allbright) Date: Wed, 27 Jun 2007 15:46:28 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: <62c14240706271537i65272b44va77982bfd6cf297@mail.gmail.com> References: <842821.53002.qm@web37410.mail.mud.yahoo.com> <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> <62c14240706271537i65272b44va77982bfd6cf297@mail.gmail.com> Message-ID:

On 6/27/07, Mike Dougherty wrote:
> That's a bit more compressed than I am able to interpret clearly - is that a brusque brush off because my comment was not on-topic enough to warrant more than your response, or are you choosing to simply focus your attention elsewhere?

Both, but I didn't mean to be discourteous. - Jef

From msd001 at gmail.com Wed Jun 27 22:55:21 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 27 Jun 2007 18:55:21 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: <842821.53002.qm@web37410.mail.mud.yahoo.com> <62c14240706261810m300a69bahb90e678f506223ff@mail.gmail.com> <62c14240706271537i65272b44va77982bfd6cf297@mail.gmail.com> Message-ID: <62c14240706271555j2de621faq11800ae51401ca29@mail.gmail.com>

On 6/27/07, Jef Allbright wrote:
> Both, but I didn't mean to be discourteous.

I agree with your earlier statements about the middle ground between overly "flowery" and "Crocker's Rules" use of language. 'Just wanted to ask for my own edification.

From femmechakra at yahoo.ca Thu Jun 28 03:17:35 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Wed, 27 Jun 2007 23:17:35 -0400 (EDT) Subject: [ExI] Is this really the case? Message-ID: <197385.36024.qm@web30401.mail.mud.yahoo.com>

Eliezer posted a video on Overcoming Bias regarding "Are Your Enemies Innately Evil?".
(I'm sorry, I'm not sure how to post a link.)

Is this really what it's like in the United States regarding religion? I am rather curious as I read a lot but I haven't had much opportunity to travel. I understand the Extropy List consists of many Atheists and wonder if this would be a reason why many of you have such a distaste for religion? If so, then my apology.

Where I grew up religion was not the least bit like this. When I watched that video it made my skin crawl and my blood boil. I remember thinking "that's not religion, that's boot camp for a Christian army". And what is this? "There is only two kind of people in the world, people that love Jesus and people that don't"? That's not teaching children morals and values; that's teaching children that anybody that's not like me is bad/evil or wrong. As it's a clip, I'm not sure what direction it's heading in but one thing is sure: if this clip represents the truth of the mentality of the religious in the United States, I wouldn't give it any awards.

I feel rather naive that I took so much time defending religion unaware that this type of evangelism was so widespread within the United States.

If anybody has a moment and watches the clip, I would love some of your insight, offlist if preferable.

Thanks
Anna

From joseph at josephbloch.com Thu Jun 28 03:59:08 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Wed, 27 Jun 2007 23:59:08 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <197385.36024.qm@web30401.mail.mud.yahoo.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> Message-ID: <05fd01c7b938$ae748f10$6400a8c0@hypotenuse.com>

Are you referring to the clip from the film "Jesus Camp"?

I confess I don't see any other video links on the page you seem to be addressing:

http://www.overcomingbias.com/2007/06/are-your-enemie.html

Just want to make sure we're talking about the same thing before I comment.

Joseph
http://www.josephbloch.com

From femmechakra at yahoo.ca Thu Jun 28 04:28:44 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Thu, 28 Jun 2007 00:28:44 -0400 (EDT) Subject: [ExI] Is this really the case? In-Reply-To: <05fd01c7b938$ae748f10$6400a8c0@hypotenuse.com> Message-ID: <988982.38099.qm@web30402.mail.mud.yahoo.com>

Yes, that's the video.

Thanks.
Anna

--- Joseph Bloch wrote:
> Are you referring to the clip from the film "Jesus Camp"?

From spike66 at comcast.net Thu Jun 28 05:21:05 2007 From: spike66 at comcast.net (spike) Date: Wed, 27 Jun 2007 22:21:05 -0700 Subject: [ExI] Is this really the case? In-Reply-To: <988982.38099.qm@web30402.mail.mud.yahoo.com> Message-ID: <200706280536.l5S5a8i4012935@andromeda.ziaspace.com>

> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Anna Taylor
> Subject: Re: [ExI] Is this really the case?
>
> Thanks.
> Anna

Ja, Anna, you are right, religion is not highly regarded around these parts. It's always a delicate subject whenever it comes up, so I try to gently discourage it. Generally the religious meme holders end up offended and go elsewhere. The rest of us stay and our infidel memes reproduce and propagate, which is why we flaming unbelievers evolved here. {8-]

spike

From thomas at thomasoliver.net Thu Jun 28 06:51:47 2007 From: thomas at thomasoliver.net (Thomas) Date: Wed, 27 Jun 2007 23:51:47 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: Message-ID:

Jef Allbright wrote:
> [...] "[human agency] is subtly distinct from the concept of free will, the philosophical doctrine that our choices are not the product of causal chains, but are significantly free or undetermined. Human agency entails the uncontroversial, weaker claim that humans do in fact make decisions and enact them on the world."
>
> [For Thomas, the above statement indicates why thinking in terms of agency is more coherent and extensible than thinking in terms of will. Will is more of a special case.]

Yes, I agree with the rider that will, defined as "the faculty of conscious and especially of deliberate action," cannot be excluded from qualifying personhood. Free-from-determinism-will makes no sense.

> [...] Consciousness (and Will) goes along for the ride, not because it's essential, but because it's value-added.

Not essential for life or observable identity, but personality? Without capacity for consciousness you lose your personhood. Someone else might see you as a person, but they have a sad misunderstanding of the facts. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sti at pooq.com Thu Jun 28 07:42:59 2007 From: sti at pooq.com (sti at pooq.com) Date: Thu, 28 Jun 2007 03:42:59 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <988982.38099.qm@web30402.mail.mud.yahoo.com> References: <988982.38099.qm@web30402.mail.mud.yahoo.com> Message-ID: <46836683.6080802@pooq.com>

Anna Taylor wrote:
> Yes, that's the video.
>
> Thanks.
> Anna
>
> --- Joseph Bloch wrote:
>> Are you referring to the clip from the film "Jesus Camp"?
Well, the scenes from "Jesus Camp" are an extreme case, even for the United States (that camp has been shut down, BTW), but there is a very strong streak of religious intolerance that runs through what is known as the American 'Bible Belt'. And yes, many rabidly anti-religious atheists in the US are so as a direct reaction to the religious intolerance that they've had to face there.

From sentience at pobox.com Thu Jun 28 08:48:03 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Thu, 28 Jun 2007 01:48:03 -0700 Subject: [ExI] Is this really the case? In-Reply-To: <197385.36024.qm@web30401.mail.mud.yahoo.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> Message-ID: <468375C3.9010802@pobox.com>

Anna Taylor wrote:
> Is this really what it's like in the United States regarding religion?

"I don't know that atheists should be considered as citizens, nor should they be considered patriots. This is one nation under God." -- George W. Bush

-- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence

From thomas at thomasoliver.net Thu Jun 28 08:25:25 2007 From: thomas at thomasoliver.net (Thomas) Date: Thu, 28 Jun 2007 01:25:25 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

Gordon wrote:
> [...] Most of what we will is willed unconsciously. [...]

It sounds to me like this phenomenal will you speak of means about the same thing as when Jef cites predetermined [default] values as the most observable and necessary attributes of personhood. I prefer Jef's way of expressing it because my meaning for will includes consciousness. While we can function temporarily without self-awareness activity in the superfrontal gyrus (http://www.newscientist.com/article/dn9019-watching-the-brain-switch-off-selfawareness.html), I agree that "Self-awareness [is] regarded as a key element of being human."

So I guess I agree that will is essential to personhood, but not the way you mean it. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL:

From thomas at thomasoliver.net Thu Jun 28 08:59:30 2007 From: thomas at thomasoliver.net (Thomas) Date: Thu, 28 Jun 2007 01:59:30 -0700 Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID: <75E6EF7C-408A-4C75-A400-1A36ED559D05@thomasoliver.net>

Jef Allbright wrote:
> On 6/27/07, Thomas wrote:
>> I think of will as agency.
>
> Consider a newborn infant. It functions on its own behalf, at least enough to find and suck on a nipple, but it yet lacks a model of itself as a self.
>
> Is there agency? It certainly acts on its environment relative to its values so there is agency, not yet in its own mind, but assigned in the minds of other observers.
>
> Is there a will? Whose will would be exercised, when there is not yet a model of self?
>
> what_is_self.html>
>
> - Jef

Yes, I agree. That illustrates the need to discriminate the two terms. The sufficient expression of will requires capacity for self-awareness. The newborn's latent capacity makes me consider her a pre-person not responsible for, say, moral agency -- instantly loved and valued as much or more than a fully responsible person -- with great delight in every evidence of agency. And now I shall sleep like a baby. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From stathisp at gmail.com Thu Jun 28 12:06:45 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 28 Jun 2007 22:06:45 +1000 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <496411.80526.qm@web37415.mail.mud.yahoo.com> References: <496411.80526.qm@web37415.mail.mud.yahoo.com> Message-ID:

On 28/06/07, A B wrote:
> Yeah, the cost will continue to get lower; but before today, the cost would still have been pretty significant. Placing and monitoring a "bug" in millions of homes is gonna cost some cheddar. And dictators often don't rule the richest of countries.

Not really, it's hard to think of a cheaper electronic device than a short range radio transmitter, especially if it didn't have to be small enough to hide. Any car driving around at any time could be an unmarked government vehicle with a list of frequencies for different addresses.

> Another thing that reduces the "benefit" side is that any conspirators could easily leave the house/equipment in order to plot.

Sure, but people who are really serious about their plot can always go to a public place and speak in code or something. The idea is to make it harder for plots to hatch; the fear factor alone of having bugs everywhere and knowing that there are bugs everywhere would have to have an effect.

> Another factor is that foreign countries like the U.S. exert external influence (I'm not judging merit here, just stating a fact).

Yes, but why has it dissuaded countries from bugging everyone but not from eg. killing a large proportion of their population, such as in Cambodia or Rwanda? It seems to me that they feel they can justify killing all the bad people, but balk at openly telling all of the population that none of them are to be trusted and they will be monitored at all times.

> IMO, the "best" solution to this whole conundrum is to get as many rational (and otherwise) people as possible, off this damn rock. Although gov's have strong influence here too, of course. What's the feasibility of creating a new self-supporting nation outside of this planet? I'm being serious. Will this ever be realistic or even possible?

I'm all for trying, but it will more than likely be Governments or - even worse - large corporations doing the space colonizing. And even if it were just a group of idealistic and like-minded individuals, that's how all communities are in the beginning, and then they go bad. What's the solution to stop a community going bad, ever?

-- Stathis Papaioannou

From neville_06 at yahoo.com Thu Jun 28 11:21:01 2007 From: neville_06 at yahoo.com (neville late) Date: Thu, 28 Jun 2007 04:21:01 -0700 (PDT) Subject: [ExI] get this In-Reply-To: <468375C3.9010802@pobox.com> Message-ID: <319788.72416.qm@web57505.mail.re1.yahoo.com>

"the FBI asks scuba divers to be aware of terrorist threats". Makes you feel secure all over, doesn't it? -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jef at jefallbright.net Thu Jun 28 13:07:54 2007 From: jef at jefallbright.net (Jef Allbright) Date: Thu, 28 Jun 2007 06:07:54 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: Message-ID:

On 6/27/07, Thomas wrote:
> Not essential for life or observable identity, but personality? Without capacity for consciousness you lose your personhood.
> Someone else might see you as a person, but they have a sad misunderstanding of the facts.

A deep but well-recognized aspect of thinking about consciousness and personhood is that an observer can never know, but only assume, the self-awareness of another person.

A deeper and seldom recognized aspect is that the above statement applies equally when the observer and the observed are the same person. Therein lies the source of much confusion and endless debate.

- Jef

From jonkc at att.net Thu Jun 28 15:05:22 2007 From: jonkc at att.net (John K Clark) Date: Thu, 28 Jun 2007 11:05:22 -0400 Subject: [ExI] Minds, Personalities, and Love. References: <842821.53002.qm@web37410.mail.mud.yahoo.com> Message-ID: <00b501c7b995$cddac240$bf084e0c@MyComputer>

"gts"
> I think people know instinctively that the will is the essence of the person.

I don't know that instinctively; at least as important as will is ability.

> What do you choose to do for a living?

What did you end up doing for a living? Nobody grows up wanting to be the bathroom attendant at the zoo, but despite wanting as a kid to be a jet pilot or a cowboy or a fireman they end up at the zoo.

> What are your goals?

I'd rather know what you have achieved.

John K Clark

From neptune at MIT.EDU Thu Jun 28 15:25:57 2007 From: neptune at MIT.EDU (Bo Morgan) Date: Thu, 28 Jun 2007 11:25:57 -0400 (EDT) Subject: [ExI] agency-based personal identity In-Reply-To: References: Message-ID:

Here is a quote that describes the task involved in this "introspection" study that claims to have localized the homunculus in the brain:

-- The experiment was conducted in both the visual and auditory domains and consisted of three conditions: (1) an easy categorization condition (slow), in which subjects categorized visual and auditory objects presented at a slow rate (one stimulus/3 s); (2) an introspective condition (introspection), having identical stimuli and motor responses except that subjects were required to self-introspect about their own emotional responses (aroused versus neutral) toward these stimuli, as used in previous experiments (Bradley and Lang, 1994; Rotshtein et al., 2001); and (3) a difficult categorization task (rapid) similar to the slow condition but at triple the stimulation rate. Thus, slow and introspection conditions were identical in terms of sensory stimuli and motor output but differed in the cognitive task. On the other hand, slow and rapid conditions were similar in the cognitive task but differed in the sensorimotor processing and attentional loads. --

It is not clear to me how (2) is a good example of homunculus activity. Perhaps this is because I don't understand what they mean by homunculus--why this is a good theory of mind. It doesn't sound like a useful breakdown. It sounds like they've found a "speaking about emotions" area of the brain--if anything.

Bo

On Thu, 28 Jun 2007, Thomas wrote:
) Gordon wrote:
) > [...] Most of what we will is willed unconsciously. [...]
) It sounds to me like this phenomenal will you speak of means about the same thing as when Jef cites predetermined [default] values as the most observable and necessary attributes of personhood. I prefer Jef's way of expressing it because my meaning for will includes consciousness. While we can function temporarily without self-awareness activity in the superfrontal gyrus (http://www.newscientist.com/article/dn9019-watching-the-brain-switch-off-selfawareness.html) I agree that "Self-awareness [is] regarded as a key element of being human."
) So I guess I agree that will is essential to personhood, but not the way you mean it. -- Thomas
) Thomas at ThomasOliver.net

From austriaaugust at yahoo.com Thu Jun 28 15:07:20 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 28 Jun 2007 08:07:20 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: Message-ID: <139230.55759.qm@web37407.mail.mud.yahoo.com>

Stathis wrote:
> "Not really, it's hard to think of a cheaper electronic device than a short range radio transmitter, especially if it didn't have to be small enough to hide. Any car driving around at any time could be an unmarked government vehicle with a list of frequencies for different addresses."

It's not just the cost of the device. It's the cost of monitoring, repairing, and enforcing violations. And then there's the problem of countering hacking issues. It seems that an entire infrastructure would have to be built around it. It *might* not be cost-prohibitive today, but it probably would have been not too long ago. Especially for non-superpowers.

> "Sure, but people who are really serious about their plot can always go to a public place and speak in code or something. The idea is to make it harder for plots to hatch; the fear factor alone of having bugs everywhere and knowing that there are bugs everywhere would have to have an effect."

It might have a small effect. But I imagine it still wouldn't bump up the "Benefit" side sufficiently.

> "Yes, but why has it dissuaded countries from bugging everyone but not from eg. killing a large proportion of their population, such as in Cambodia or Rwanda? It seems to me that they feel they can justify killing all the bad people, but balk at openly telling all of the population that none of them are to be trusted and they will be monitored at all times."

Yep, that's a good question. We can be pretty sure that they haven't refrained from bugging out of the goodness of their hearts. If it's not a Cost:Benefit issue, then what else could it be? If you rule with a brutal iron fist, then it doesn't matter if you piss your people off.

> "I'm all for trying, but it will more than likely be Governments or - even worse - large corporations doing the space colonizing."...

Yeah, it's definitely a sticky issue. I'd prefer a large corporation; not all of them are "eeeevil" as many say.

> "And even if it were just a group of idealistic and like-minded individuals, that's how all communities are in the beginning, and then they go bad."...

It would probably help to keep the various communities small and "separated". Let people choose which community and rules they would prefer to live under. "Space-Arks" (and many of them) like Lifeboat Foundation recommends would be a good temporary solution...hopefully. Keep them *talking* however, I'd prefer not to drag monkey-wars into space also.

> "What's the solution to stop a community going bad, ever?"

Friendly AI/universal life enhancement.

Sincerely,

Jeffrey Herrlich

From femmechakra at yahoo.ca Thu Jun 28 16:25:22 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Thu, 28 Jun 2007 12:25:22 -0400 (EDT) Subject: [ExI] Is this really the case? In-Reply-To: <200706280536.l5S5a8i4012935@andromeda.ziaspace.com> Message-ID: <474655.70190.qm@web30402.mail.mud.yahoo.com>

--- spike wrote:
> Ja, Anna, you are right, religion is not highly regarded around these parts. It's always a delicate subject whenever it comes up, so I try to gently discourage it. Generally the religious meme holders end up offended and go elsewhere. The rest of us stay and our infidel memes reproduce and propagate, which is why we flaming unbelievers evolved here.

Spike, if there are any meme holders on this list, I would think that they would be just as upset as I was when I saw the video. There is a huge difference between a religious meme holder and a religious fanatic. I understand the question of whether God exists or not is a question that will remain an issue between Science and Religion but that's not what this video is about.
It's about what they are teaching these children; shouldn't everybody be worried about how children are being manipulated and hypnotized? As theology is still taught in many Universities I can't understand why people can't have a rational discussion about it, but I respect your decision to close the subject. If anybody would like to discuss it offline, it would be appreciated.

Anna

From austriaaugust at yahoo.com Thu Jun 28 18:15:31 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 28 Jun 2007 11:15:31 -0700 (PDT) Subject: [ExI] Is this really the case? In-Reply-To: <474655.70190.qm@web30402.mail.mud.yahoo.com> Message-ID: <136989.32823.qm@web37410.mail.mud.yahoo.com>

Hi Anna,

I don't believe that the subject is "closed" per se (at least that's the assumption I've been operating under). But if you would like to discuss it here, you should probably prepare yourself for some fairly strong anti-religion sentiments. And you might want to frame the topic in some way that's relevant to H+, or politics, or memes, etc. A "pure" religion discussion will probably rise above the moderators' radar.

The primary reason that religion is shunned here is that in general it has caused and continues to cause a great deal of human suffering and dysfunction. And it tends to discourage people from seeking truth and thinking for themselves. That's not to say that it has no value at all; it has provided some comfort to some people. But we tend to look at big-picture net effects, here.

Best,

Jeffrey Herrlich

--- Anna Taylor wrote:
> Spike, if there are any meme holders on this list, I would think that they would be just as upset as I was when I saw the video.

From eugen at leitl.org Thu Jun 28 19:47:18 2007 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 28 Jun 2007 21:47:18 +0200 Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <139230.55759.qm@web37407.mail.mud.yahoo.com> References: <139230.55759.qm@web37407.mail.mud.yahoo.com> Message-ID: <20070628194718.GG7079@leitl.org>

On Thu, Jun 28, 2007 at 08:07:20AM -0700, A B wrote:
> Stathis wrote:
>> "Not really, it's hard to think of a cheaper electronic device than a short range radio transmitter, especially if it didn't have to be small enough to hide. Any car driving around at any time could be an unmarked government vehicle with a list of frequencies for different addresses."

Any bug you pay for is effectively free to its users. If you play it smart, the vassal will even be grateful. But that bug is sooo shiny and pretty! How can I not covet the bug?

> It's not just the cost of the device. It's the cost of monitoring, repairing, and enforcing violations. And

You don't monitor. You record everything you can, and record the rest based on data mined clusters. Storing data is cheap, and you can always recrunch with more OPs and newer algorithms later on. The one thing you don't do is to delete, ever. That's a cardinal sin. (Poor NSA choking when trying to drink from the firehose? Give me a fucking break).

Enforcing violations? Of course the next step is to add execution capability to the platforms currently only used for intelligence.

I'll be the judge, I'll be the jury, Said the cunning old Fury, I'll try the whole cause, And condemn you to death.

> then there's the problem of countering hacking issues.

You sure you can hack firmware upgrade over air? That's talking breaking cryptographic systems, Sir.

> It seems that an entire infrastructure would have to be built around it. It *might* not be cost-prohibitive today, but it probably would have been not too long ago. Especially for non-superpowers.

Past and current capabilities are irrelevant. Near future capabilities are important.

>> "Sure, but people who are really serious about their plot can always go to a public place and speak in code or something.

To agree on a code takes communication. Even mere traffic analysis will pick it up and issue arrest warrants even before you get into your bunny slippers.

>> The idea is to make it harder for plots to hatch; the fear factor alone of having bugs everywhere and knowing that there are bugs everywhere would have to have an effect."

Classical 1984. And 2007, of course. People *are* afraid.

> It might have a small effect. But I imagine it still wouldn't bump up the "Benefit" side sufficiently.
>
>> "Yes, but why has it dissuaded countries from bugging everyone but not from eg. killing a large proportion of their population, such as in Cambodia or Rwanda? It seems to me that they feel

Different mechanisms, but in principle a fully automated state can do away with its citizen-units.

>> they can justify killing all the bad people, but balk at openly

What does "bad" mean?

>> telling all of the population that none of them are to be trusted and they will be monitored at all times."

This is precisely what is being made binding law, today.

> Yep, that's a good question. We can be pretty sure that they haven't refrained from bugging out of the goodness of their hearts.
> If it's not a Cost:Benefit issue, then what else could it be? If you rule with a brutal iron fist, then it doesn't matter if you piss your people off.

You have to start slow and gentle, first.

>> "I'm all for trying, but it will more than likely be Governments or - even worse - large corporations doing the space colonizing."...
>
> Yeah, it's definitely a sticky issue. I'd prefer a large corporation, not all of them are "eeeevil" as many say.

It really doesn't matter whether your superpersonal organization unit is of corporate or governmental origin. In the long run, they're all the same.

>> "And even if it were just a group of idealistic and like-minded individuals, that's how all communities are in the beginning, and then they go bad."...
>
> It would probably help to keep the various communities small and "separated". Let people choose which community and rules they would prefer to live under. "Space-Arks" (and many of them) like Lifeboat Foundation recommends would be a good temporary solution...hopefully. Keep them *talking* however, I'd prefer not to drag monkey-wars into space also.

Space and monkeys don't mix.

>> "What's the solution to stop a community going bad, ever?"
>
> Friendly AI/universal life enhancement.

I have no idea what universal life enhancement is, but I presume it's based on the same bad thinking as friendly AI.

-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From msd001 at gmail.com Thu Jun 28 20:46:22 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 28 Jun 2007 16:46:22 -0400 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: Message-ID: <62c14240706281346l495cfec9x59dccb156e72f700@mail.gmail.com>

On 6/28/07, Jef Allbright wrote:
> A deeper and seldom recognized aspect is that the above statement applies equally when the observer and the observed are the same person.

I had a question for this thread: If a clone is made such that the subjective awareness of the cloning process makes it impossible for either clone to know who is the copy, how do they claim their identity? I appreciate that there are suddenly two people and that the clones immediately diverge in experience due to (if nothing else) a difference in physical location. Each remembers the same history up to the point of cloning. If identity is the sum of experiences, are they a special case of twins up to the point where the divergence begins?

From amara at amara.com Thu Jun 28 22:17:55 2007 From: amara at amara.com (Amara Graps) Date: Fri, 29 Jun 2007 00:17:55 +0200 Subject: [ExI] Dawn launch pics (spacecraft almost mounted) Message-ID:

I made a mistake, the previous pictures were not yet of Dawn being attached, and these pictures don't show it either, but it's close. You can see Dawn being unpacked, and then being hoisted way up to the top of the tower to mate it with the Delta II launch rocket. It's almost ready to go.
http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173

Amara

-- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson

From thespike at satx.rr.com Thu Jun 28 22:56:56 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 28 Jun 2007 17:56:56 -0500 Subject: [ExI] Psi quantum observation experiment Message-ID: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com>

A paper by Dean Radin, PhD, accepted for publication later this year (but no, not in Nature or Science):

Testing nonlocal observation as a source of intuitive knowledge

This study explored the hypothesis that in some cases intuitive knowledge arises from perceptions that are not mediated through the ordinary senses. The possibility of detecting such "nonlocal observation" was investigated in a pilot test based on the effects of observation on a quantum system. Participants were asked to imagine that they could intuitively perceive a low intensity laser beam in a distant Michelson interferometer. If such observation were possible, it would theoretically perturb the photons' quantum wave-functions and change the pattern of light produced by the interferometer. The optical apparatus was located inside a light-tight, double steel-walled shielded chamber. Participants sat quietly outside the chamber with eyes closed. The light patterns were recorded by a cooled CCD camera once per second, and average illumination levels of these images were compared in counterbalanced "mental blocking" vs. non-blocking conditions. Interference would produce a lower overall level of illumination, which was predicted to occur during the blocking condition. Based on a series of planned experimental sessions, the outcome was in accordance with the prediction (z = -2.82, p = 0.002). This result was primarily due to nine sessions involving experienced meditators (combined z = -4.28, p = 9.4 × 10^-6); the other nine sessions with non-meditators were not significant (combined z = 0.29, p = 0.61). The same experimental protocol run immediately after 15 of these test sessions, but with no one present, revealed no hardware or protocol artifacts that might have accounted for these results (combined control z = 1.50, p = 0.93). Conventional explanations for these results were considered and judged to be implausible. This pilot study suggests the presence of a nonlocal perturbation effect which is consistent with traditional concepts of intuition as a direct means of gaining knowledge about the world, and with the predicted effects of observation on a quantum system.
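A quick consistency check on the abstract's statistics: if each reported p is read as the one-tailed normal probability p = Phi(z) in the predicted (negative, lower-illumination) direction -- an assumption, since the abstract does not state the test explicitly -- all four z/p pairs reproduce. A minimal sketch in Python:

```python
# Sanity check (assumes one-tailed normal p-values, p = Phi(z),
# with the predicted effect in the negative direction).
from scipy.stats import norm

reported = [
    ("all test sessions", -2.82),  # reported p = 0.002
    ("meditators",        -4.28),  # reported p = 9.4 x 10^-6
    ("non-meditators",     0.29),  # reported p = 0.61
    ("control runs",       1.50),  # reported p = 0.93
]

for label, z in reported:
    p = norm.cdf(z)  # P(Z <= z): one-tailed p in the predicted direction
    print(f"{label}: z = {z:+.2f}, p = {p:.2g}")
```

Running this gives p values of about 0.0024, 9.4e-06, 0.61, and 0.93, consistent with the figures quoted in the abstract.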
From austriaaugust at yahoo.com Thu Jun 28 22:52:39 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 28 Jun 2007 15:52:39 -0700 (PDT) Subject: [ExI] What surveillance solution is best - Orwellian, David Brin's, or ...? In-Reply-To: <20070628194718.GG7079@leitl.org> Message-ID: <432618.39121.qm@web37406.mail.mud.yahoo.com>

Eugen Leitl wrote:
> "Enforcing violations? Of course the next step is to add execution capability to the platforms currently only used for intelligence."

Eh?? I'm not encouraging it. I was still referring to the dictatorship example.

> "You sure you can hack firmware upgrade over air? That's talking breaking cryptographic systems, Sir."

No. I don't know anything about it. I just assumed that someone could hardwire an input loop or something to that effect.

> "Past and current capabilities are irrelevant. Near future capabilities are important."

But my discussion with Stathis began on his question of why no dictator of the past has bugged his entire country. My intention was to show that there is no fundamental barrier to this happening in the non-distant future.

> "It really doesn't matter whether your superpersonal organization unit is of corporate or governmental origin. In the long run, they're all the same."

But eventually it would be preferable to buy your property outright and do what you want with it.

> "Classical 1984. And 2007, of course. People *are* afraid."

Not too afraid though, or they'd be raising hell about it. People have more important things to think about, like American Idol.

> "Space and monkeys don't mix."

But monkeys and molecular manufacturing mix even less well. Except as a slushy.

> "I have no idea what universal life enhancement is, but I presume it's based on the same bad thinking as friendly AI."

I meant it as what would follow from a Friendly AI, if one is possible. Is it that you believe Friendly AI isn't possible? It may turn out to be impossible, but do we know that for certain at this point?

Sincerely,

Jeffrey Herrlich

From moulton at moulton.com Fri Jun 29 00:01:32 2007 From: moulton at moulton.com (Fred C. Moulton) Date: Thu, 28 Jun 2007 17:01:32 -0700 Subject: [ExI] Is this really the case?
In-Reply-To: <468375C3.9010802@pobox.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <468375C3.9010802@pobox.com> Message-ID: <1183075293.3119.993.camel@localhost.localdomain> On Thu, 2007-06-28 at 01:48 -0700, Eliezer S. Yudkowsky wrote: Just to avoid confusion the following quote is from Bush the Elder when he was Vice President in 1987 and running for President. Bush the Lesser has said his own stupid things so we want to make sure the stupidity of the father is not attributed to the son. > "I don't know that atheists should be considered as citizens, nor > should they be considered patriots. This is one nation under God." > -- George W. Bush > From moulton at moulton.com Fri Jun 29 01:06:13 2007 From: moulton at moulton.com (Fred C. Moulton) Date: Thu, 28 Jun 2007 18:06:13 -0700 Subject: [ExI] Is this really the case? In-Reply-To: <197385.36024.qm@web30401.mail.mud.yahoo.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> Message-ID: <1183079174.3119.1009.camel@localhost.localdomain> On Wed, 2007-06-27 at 23:17 -0400, Anna Taylor wrote: > Eliezer posted a video on Overcoming Bias regarding > "Are Your Enemies Innately Evil?". (I'm sorry, I'm > sure how to post a link:) > > Is this really what it's like in the United States > regarding religion? Not in total. But it does exist. There is a lot more of that mentality in small towns. Places which are culturally and intellectually more sophisticated tend to have less of it. See for example the story of the Smalkowski family: http://abcnews.go.com/2020/story?id=3164811 The ABC version of the story is a bit garbled, I remember reading a better written version either from the FFRF ffrf.org or from American Atheists http://www.atheists.org/ but I can not find the link at the moment. For more info on the movie see: http://en.wikipedia.org/wiki/Jesus_Camp It appears that the camp is closed however the operator is looking for a new location. And I bet they will not allow in a film crew this time. Fred From spike66 at comcast.net Fri Jun 29 01:02:30 2007 From: spike66 at comcast.net (spike) Date: Thu, 28 Jun 2007 18:02:30 -0700 Subject: [ExI] Is this really the case? In-Reply-To: <474655.70190.qm@web30402.mail.mud.yahoo.com> Message-ID: <200706290119.l5T1JxW1008974@andromeda.ziaspace.com> > Spike, if there are any meme holders on this list, I > would think that they would be just as upset as I was > when I saw the video. Ja Anna, the discussion about the video is cool, no problem there. What has caused problems in the past (long before you joined us) is the desire to actually discuss the merits of religion or specific religious memes. There wasn't a rule against it exactly, rather the religious ones were treated harshly. We don't like to see people treated harshly, even if we disagree with that person and consider their notions silly. So I try to gently discourage people from posting about their particular religion for instance. That guy whose name I can't recall said it best "Cast not thy pearls before the swine," Matthew 7 verse 6. When I read that, I wondered what if they were really lousy pearls? Or your very favorite swine? But I digress. {8^D spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Anna Taylor > Sent: Thursday, June 28, 2007 9:25 AM > To: ExI chat list > Subject: Re: [ExI] Is this really the case? > > > --- spike wrote: > > >Ja, Anna, you are right, religion is not highly > >regarded around these parts. 
> >It's always a delicate subject whenever it comes up, > >so I try to gently discourage it. Generally the > >religious meme holders end up offended and go > >elsewhere. The rest of us stay and our infidel > >memes reproduce and propagate, which is why we > >flaming unbelievers evolved here. > > Spike, if there are any meme holders on this list, I > would think that they would be just as upset as I was > when I saw the video. There is a huge difference > between a religious meme holder and religious fanatic. > I understand the question of whether God exist or not > is a question that will remain an issue between > Science and Religion but that's not what this video is > about. It's about what they are teaching this > children, shouldn't everybody be worried about how > children are being manipulated and hypnotized? As > theology is still taught in many Universities I can't > understand why people can't have rational discussion > about it but I respect your decision to close the > subject. If anybody would like to discuss it offline, > it would be appreciated. > > Anna > > > > > > Ask a question on any topic and get answers from real people. Go to > Yahoo! Answers and share what you know at http://ca.answers.yahoo.com > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From joseph at josephbloch.com Fri Jun 29 01:44:27 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Thu, 28 Jun 2007 21:44:27 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <1183079174.3119.1009.camel@localhost.localdomain> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <1183079174.3119.1009.camel@localhost.localdomain> Message-ID: <065501c7b9ef$083064a0$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Fred C. Moulton > Sent: Thursday, June 28, 2007 9:06 PM > To: ExI chat list > Subject: Re: [ExI] Is this really the case? > > > For more info on the movie see: > http://en.wikipedia.org/wiki/Jesus_Camp > It appears that the camp is closed however the operator is looking for a > new location. And I bet they will not allow in a film crew this time. It was my understanding that the specific camp had closed, but that the operator (one Becky Fischer) had simply set up shop elsewhere. Apparently, they're having a conference in Missouri even as I write this: http://www.kidsinministry.com/MyHeartYourThrone.html I have no doubt that the content of that conference will differ little from the content of the camp experience ("designed especially for kids ages 6-12"). It just won't last three weeks. Joseph http://www.josephbloch.com From joseph at josephbloch.com Fri Jun 29 02:03:19 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Thu, 28 Jun 2007 22:03:19 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <197385.36024.qm@web30401.mail.mud.yahoo.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> Message-ID: <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Anna Taylor > Sent: Wednesday, June 27, 2007 11:18 PM > To: extropy-chat at lists.extropy.org > Subject: [ExI] Is this really the case? > > Eliezer posted a video on Overcoming Bias regarding > "Are Your Enemies Innately Evil?". 
(I'm sorry, I'm not > sure how to post a link:) > > Is this really what it's like in the United States > regarding religion? I am rather curious as I read a > lot but I haven't had much opportunity to travel. I > understand the Extropy List consists of many Atheists > and wonder if this would be a reason why many of you > have such a distaste for religion? If so, then my > apology. > > Where I grew up religion was not the least bit like > this. When I watched that video it made my skin crawl > and my blood boil. I remember thinking "that's not > religion, that's boot camp for a Christian army". > And what is this? "There is only two kind of people > in the world, people that love Jesus and people that > don't"? That's not teaching children morals and values, > that's teaching children that anybody who's not like > me is bad/evil or wrong. As it's a clip, I'm not sure > what direction it's heading in but one thing is sure, > if this clip represents the truth of the mentality > of the religious in the United States, I wouldn't give > it any awards. > > I feel rather naive that I took so much time defending > religion unaware that this type of evangelism was so > widespread within the United States. > > If anybody has a moment and watches the clip, I would > love some of your insight, offlist if preferable.

I actually had the opportunity to watch the entire documentary, twice. It is indeed indicative of the practices of a segment of the religious community in the United States. It is, of course, an extreme example, but not one that is completely outside the mainstream. There is a significant segment of the Pentecostal/Evangelical Christian movement that sees nothing wrong whatsoever in ensuring that their children believe precisely what they believe, by any means necessary. Google the term "Dominionism" and you will learn much. And, in fact, there are those who compound that belief with the idea that they should have as many children as possible, in order to produce as many kids for Christ as possible: http://www.quiverfull.com

They also feel no qualms about directly manipulating the levers of power within the United States to subtly (or, perhaps, not-so-subtly) bring their own unique vision of American life to dominance within the culture. One well-documented effort is the one to install their sympathizers into the United States Air Force Academy, and thereby use all manner of coercion to bring the Air Force Cadets around to their way of thinking: http://www.democraticunderground.com/articles/06/04/08_airforce.html

And also the terrific and terrifying (especially to a USAF veteran such as myself) book: http://www.amazon.com/God-Our-Side-Evangelical-Americas/dp/0312361432/

This is not to say that the Dominionists and their ilk are representative of a majority of the religious practitioners in the United States. They are, however, very well organized, highly motivated, and enjoy a level of influence which is disproportionate to their numbers.

This is, of course, something that should be alarming to Transhumanists and Extropians on several different levels. Not only do such people, as a rule, actively oppose many biotechnological advances that we would endorse as part of our goal of overcoming human limitations through technology, but the entire idea of the type of ideological indoctrination, the hostility to free inquiry, that so alarmed you in that clip is and should be anathema to us.
Bottom line; it is indeed the case that some American religious folks are like this, but they are only a minority. A vocal, influential minority bent on gaining more influence, but a minority nonetheless. One that we should be aware of as a potential threat to our own objectives. Joseph http://www.josephbloch.com From msd001 at gmail.com Fri Jun 29 03:26:19 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 28 Jun 2007 23:26:19 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> Message-ID: <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> On 6/28/07, Joseph Bloch wrote: > This is, of course, something that should be alarming to Transhumanists and > Extropians on several different levels. Not only do such people, as a rule, > actively oppose many biotechnological advances that we would endorse as part > of our goal of overcoming human limitations through technology, but the > entire idea of the type of ideological indoctrination, the hostility to free > inquiry, that so alarmed you in that clip is and should be anathema to us. How do you feel about the "pledge of allegiance" in schools? I'm not talking about how it end ("...one nation under god") but how it begins ("I pledge allegiance to the flag...") National pride is great and all, but it still stinks of indoctrination to me. My wife thought I was crazy and anti-American for being so unpatriotic until I explained that of course nobody would argue against a "flag" because really a flag is not offensive or wrong in any way - but the flag is a symbol for an obviously biased government policy (to strongly euphemize the point) How do we teach children to blindly "pledge allegiance" to this government via a positive symbol despite the truth? Or am I the only one who sees it this way? From joseph at josephbloch.com Fri Jun 29 04:38:30 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Fri, 29 Jun 2007 00:38:30 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com><065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> Message-ID: <066401c7ba07$586e9cd0$6400a8c0@hypotenuse.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Mike Dougherty > Sent: Thursday, June 28, 2007 11:26 PM > To: ExI chat list > Subject: Re: [ExI] Is this really the case? > > On 6/28/07, Joseph Bloch wrote: > > This is, of course, something that should be alarming to Transhumanists and > > Extropians on several different levels. Not only do such people, as a rule, > > actively oppose many biotechnological advances that we would endorse as part > > of our goal of overcoming human limitations through technology, but the > > entire idea of the type of ideological indoctrination, the hostility to free > > inquiry, that so alarmed you in that clip is and should be anathema to us. > > How do you feel about the "pledge of allegiance" in schools? I'm not > talking about how it end ("...one nation under god") but how it begins > ("I pledge allegiance to the flag...") National pride is great and > all, but it still stinks of indoctrination to me. 
> > My wife thought I was crazy and anti-American for being so unpatriotic > until I explained that of course nobody would argue against a "flag" > because really a flag is not offensive or wrong in any way - but the > flag is a symbol for an obviously biased government policy (to > strongly euphemize the point) How do we teach children to blindly > "pledge allegiance" to this government via a positive symbol despite > the truth? > > Or am I the only one who sees it this way? A lot of people disagree with me on this particular issue, but I'm in favor of the Pledge of Allegiance (although I personally favor the 1953 version, for obvious reasons): "I pledge allegiance to the flag of the United States of America, and to the Republic for which it stands, one nation, indivisible, with liberty and justice for all." I would say that the analogy between the Pledge and the sort of thing we see in "Jesus Camp" breaks down on two levels. First, there's nothing in the Pledge that precludes questioning or free inquiry; ever since I was a young kid I understood the "liberty and justice for all" phrase to point to a goal, rather than an extant reality. As in, I was pledging to seek that end, not giving a rote recitation that America was perfect in that respect. Second, on a much more practical level, a 30-second recitation once per day is nothing compared to the 24-hour 3-week intense indoctrination (not to mention their life leading up to and after the camp itself) these kids were going through. (And don't forget that kids are able to be excused from recitation of the Pledge with a parents note; for exmple; Jehovah's Witnesses). I would be much more concerned that my kids are being taught that the Earth is 6,000 years old than they might not be taught about the CIA's role in some Central American coup. Joseph http://www.josephbloch.com From jonkc at att.net Fri Jun 29 05:44:26 2007 From: jonkc at att.net (John K Clark) Date: Fri, 29 Jun 2007 01:44:26 -0400 Subject: [ExI] Psi quantum observation experiment References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> Message-ID: <02a201c7ba11$174e3440$10074e0c@MyComputer> "Damien Broderick" > A paper by Dean Radin, PhD, accepted for publication > later this year (but no, not in Nature or Science): > Testing nonlocal observation as a source of intuitive knowledge Damien I really don't understand why an intelligent man such as yourself is wasting his time with crap like this. If you're really interested in non local stuff I suggest you look at something that WAS published in Nature, the April 19 2007 issue. About 40 years ago Bell proposed an experiment and a few decades later it was actually performed; it proved that hidden-variables cannot explain observations. Events must be non local, that is, distant events can influence each other instantaneously OR, things only exist when you're looking at them or the universe splits or something even stranger happens. About 5 years ago Leggett proposed an experiment that went beyond Bell, in the April 19 article in Nature Aspelmeyer performed Leggett's experiment. It showed that even if you assume non locality experimental results can not be explained. If you have a taste for the bizarre there is no need to go to UFO Abduction Quarterly, Nature has plenty of it! 
John K Clark

From thespike at satx.rr.com Fri Jun 29 06:05:42 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 29 Jun 2007 01:05:42 -0500
Subject: [ExI] Psi quantum observation experiment
In-Reply-To: <02a201c7ba11$174e3440$10074e0c@MyComputer>
References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer>
Message-ID: <7.0.1.0.2.20070629010300.02206c30@satx.rr.com>

At 01:44 AM 6/29/2007 -0400, JKC wrote:

>About 5 years ago Leggett proposed an experiment >that went beyond Bell, in the April 19 article in Nature Aspelmeyer >performed Leggett's experiment. It showed that even if you assume >non locality experimental results can not be explained.

It's abstracted at http://www.nature.com/nature/journal/v446/n7138/abs/nature05677.html and undermines classic realism as well as the explanatory utility of nonlocality. "Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories. Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned."

So, John... in the context of Radin's experiment, your point is?

Damien Broderick
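As background to the Bell/Leggett distinction being argued here: the standard CHSH form of Bell's inequality caps any local hidden-variable account at |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) (spin-1/2 angle convention) reaches 2*sqrt(2). The sketch below checks only that familiar CHSH violation, not the Leggett inequality the Nature paper actually tests:

import numpy as np

def E(a, b):
    # Quantum prediction for the singlet-state correlation at analyzer angles a, b.
    return -np.cos(a - b)

# CHSH combination of four correlation measurements; every local
# hidden-variable model satisfies |S| <= 2 for any choice of angles.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83 > 2: quantum correlations violate the bound

Leggett's inequality plays the same game against a class of non-local realistic models, which is what makes the Aspelmeyer result a step beyond this.

From jonkc at att.net Fri Jun 29 06:20:26 2007
From: jonkc at att.net (John K Clark)
Date: Fri, 29 Jun 2007 02:20:26 -0400
Subject: [ExI] Psi quantum observation experiment
References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com><02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com>
Message-ID: <02b901c7ba15$99c36ef0$10074e0c@MyComputer>

"Damien Broderick"

> So, John... in the context of Radin's experiment, your point is?

Radin's experiment is trivial, Aspelmeyer's experiment is profound.

John K Clark

From thespike at satx.rr.com Fri Jun 29 06:47:05 2007
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 29 Jun 2007 01:47:05 -0500
Subject: [ExI] Psi quantum observation experiment
In-Reply-To: <02b901c7ba15$99c36ef0$10074e0c@MyComputer>
References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com> <02b901c7ba15$99c36ef0$10074e0c@MyComputer>
Message-ID: <7.0.1.0.2.20070629013916.022a8188@satx.rr.com>

At 02:20 AM 6/29/2007 -0400, John K Clark wrote:

> > > So, John... in the context of Radin's experiment, your point is? > >Radin's experiment is trivial, Aspelmeyer's experiment is profound.

Aspelmeyer's experiment is profound in implication, certainly--but why is Radin's experimental result "trivial"? It might be bogus BULLSHIT, completely invented or very badly conducted, but if not and the results are as summarized it can hardly be *trivial*, seems to me. Or do you mean trivial *by definition*, since we know in advance that it can be nothing better than BULLSHIT? (Although if Anton Zeilinger had arranged it, and the same results appeared in Nature, somehow that would make it all better.)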
Damien Broderick From fauxever at sprynet.com Fri Jun 29 07:17:55 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Fri, 29 Jun 2007 00:17:55 -0700 Subject: [ExI] A Step Towards Synthetic Forms of Life Message-ID: <001201c7ba1d$9f1415a0$6501a8c0@brainiac> Scientists Transplant Genome of Bacteria http://www.nytimes.com/2007/06/29/science/29cells.html Olga From thomas at thomasoliver.net Fri Jun 29 07:37:12 2007 From: thomas at thomasoliver.net (Thomas) Date: Fri, 29 Jun 2007 00:37:12 -0700 Subject: [ExI] Minds, Personalities, and Love In-Reply-To: References: Message-ID: <5ACE20B4-D08F-4DC2-941E-FE977866F814@thomasoliver.net> Jef Allbright wrote: > On 6/27/07, Thomas wrote: > > >> Not essential for life or observable identity, but personality? >> Without >> capacity for consciousness you lose your personhood. Someone >> else might >> see you as a person, but they have a sad misunderstanding of the >> facts. >> > > A deep but well-recognized aspect of thinking about consciousness and > personhood is that an observer can never know, but only assume, the > self-awareness of another person. > > A deeper and seldom recognized aspect is that the above statement > applies equally when the observer and the observed are the same > person. > > Therein lies the source of much confusion and endless debate. > > - Jef And if anything can fire up the agencies of consciousness, confusion can! I feel grateful that I CAN assume personhood. But, I believe you: I'll need a lot bigger hippocampus if I really want to know myself. My individualism doesn't require a homunculus -- just a balanced, integrated and effective survival outcome for this particular organism and maybe the corresponding simulus. -- Thomas Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Fri Jun 29 07:24:11 2007 From: jonkc at att.net (John K Clark) Date: Fri, 29 Jun 2007 03:24:11 -0400 Subject: [ExI] Psi quantum observation experiment References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com><02a201c7ba11$174e3440$10074e0c@MyComputer><7.0.1.0.2.20070629010300.02206c30@satx.rr.com><02b901c7ba15$99c36ef0$10074e0c@MyComputer> <7.0.1.0.2.20070629013916.022a8188@satx.rr.com> Message-ID: <02fe01c7ba1e$99936f30$10074e0c@MyComputer> "Damien Broderick" > why is Radin's experimental result "trivial"? Because I'm not at all convinced that the ASCII sequence this Radin person is so proud to have produced is anything I should interested in. One truck driver I've never heard of is approved by another truck driver I've never heard of and the results are printed on Truck Drivers Digest, a journal that I've also never heard of. And I'm supposed to be interested in all that? Sorry, as hard as I try I just can't manage to get excited by this stuff. John K Clark From stathisp at gmail.com Fri Jun 29 09:53:53 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 29 Jun 2007 19:53:53 +1000 Subject: [ExI] Is this really the case? In-Reply-To: <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> Message-ID: On 29/06/07, Mike Dougherty wrote: > How do you feel about the "pledge of allegiance" in schools? 
I'm not > talking about how it end ("...one nation under god") but how it begins > ("I pledge allegiance to the flag...") National pride is great and > all, but it still stinks of indoctrination to me.

Is the pledge of allegiance something all American schoolchildren have to recite every day or is it an optional, class by class thing? If the latter, how widespread is it?

-- Stathis Papaioannou

From scerir at libero.it Fri Jun 29 09:48:17 2007
From: scerir at libero.it (scerir)
Date: Fri, 29 Jun 2007 11:48:17 +0200
Subject: [ExI] Psi quantum observation experiment
References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com>
Message-ID: <000501c7ba32$9f38cf20$8cba1f97@archimede>

"Participants were asked to imagine that they could intuitively perceive a low intensity laser beam in a distant Michelson interferometer. If such observation were possible, it would theoretically perturb the photons' quantum wave-functions and change the pattern of light produced by the interferometer."
________

I did not read the paper (still to be published). But why did he use a Michelson interferometer? A much better, and *cleaner* experiment would be to (try to) perturb the photon's quantum wave-functions using a Mach-Zehnder interferometer [1]. Since - in this case - the (eventual) perturbation of one of the (two) 'amplitudes' would cause the 'wrong' detector to register the photons, and not the usual one [2]. An even better idea would be to use a Franson (two-photon) interferometer [3]. Since, in this case, the behaviour of the two 'wings' of the interferometer (they can be space-like separated) is exactly the same (photons go to the same detectors in both 'wings', if the set-up is symmetrical). Now if you put the 'psychics' on one 'wing' of the interferometer and not on the other one .....

[1] http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer
[2] In a Mach-Zehnder interferometer the photons always go to one detector, and not to the other one, because of the composition of the (two) amplitudes at the end of their geometrical paths.
[3] See figure 2 here http://latsis2004.epfl.ch/webdav/site/latsis2004/shared/import/migration/Scarani_U.pdf or this paper, by Robert Franson http://www.jhuapl.edu/techdigest/td1604/Franson.pdf
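To make footnote [2] concrete, here is a minimal numerical sketch, assuming an ideal lossless 50/50 beam splitter in the usual convention where reflection contributes a factor of i. With both arms untouched the two amplitudes recombine so that every photon exits one port; perturbing one arm's amplitude (modeled crudely here as a pure phase shift) diverts counts to the 'wrong' detector:

import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter unitary

def detector_probs(phase):
    # Photon enters input port 1; one interferometer arm applies `phase`.
    arm = np.diag([np.exp(1j * phase), 1.0])
    out = BS @ arm @ BS @ np.array([1.0, 0.0])
    return np.abs(out) ** 2  # probabilities at the two output detectors

print(detector_probs(0.0))        # [0., 1.]: every photon at the "usual" detector
print(detector_probs(np.pi / 2))  # [0.5, 0.5]: a perturbed arm leaks counts to the other port

That is why a Mach-Zehnder would give a cleaner signature than comparing fringe illumination in a Michelson: the tell-tale is a detector that should never click at all.

From mbb386 at main.nc.us Fri Jun 29 10:47:11 2007
From: mbb386 at main.nc.us (MB)
Date: Fri, 29 Jun 2007 06:47:11 -0400 (EDT)
Subject: [ExI] Is this really the case?
In-Reply-To: <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com>
References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com>
Message-ID: <38576.72.236.102.84.1183114031.squirrel@main.nc.us>

> How do you feel about the "pledge of allegiance" in schools? [...]

I'd have preferred that we pledge allegiance to the constitution, it has more meaning for me than the flag. And it wouldn't have hurt one bit to have some class somewhere in my schooling that actually discussed the constitution as the wonderful document it is. Unfortunately that did not happen. YMMV.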
Regards, MB From msd001 at gmail.com Fri Jun 29 11:50:28 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 29 Jun 2007 07:50:28 -0400 Subject: [ExI] Psi quantum observation experiment In-Reply-To: <02fe01c7ba1e$99936f30$10074e0c@MyComputer> References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com> <02b901c7ba15$99c36ef0$10074e0c@MyComputer> <7.0.1.0.2.20070629013916.022a8188@satx.rr.com> <02fe01c7ba1e$99936f30$10074e0c@MyComputer> Message-ID: <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> On 6/29/07, John K Clark wrote: > Because I'm not at all convinced that the ASCII sequence this Radin person > is so proud to have produced is anything I should interested in. One truck > driver I've never heard of is approved by another truck driver I've never > heard of and the results are printed on Truck Drivers Digest, a journal that > I've also never heard of. And I'm supposed to be interested in all that? > > Sorry, as hard as I try I just can't manage to get excited by this stuff. On the contrary, you seem compelled to comment on this topic every time it comes up. To use your analogy; it appears that although you are not a truck driver, do not know any truck drivers and have little interest in truck driver related issues, but you are adamantly opposed to truck drivers' discussion where you might access it. I doubt that you care, but this attitude makes you look closeminded and cranky: "Hey you damn psi-experiment kids, get off my concrete science lawn!" From pharos at gmail.com Fri Jun 29 11:59:50 2007 From: pharos at gmail.com (BillK) Date: Fri, 29 Jun 2007 12:59:50 +0100 Subject: [ExI] Favorite ~H+ Movies In-Reply-To: <492604.42413.qm@web37402.mail.mud.yahoo.com> References: <8d71341e0706250808h13af9d1eg854f7611eb38477e@mail.gmail.com> <492604.42413.qm@web37402.mail.mud.yahoo.com> Message-ID: On 6/25/07, A B wrote: > I think it's worth at least a single viewing. Saying > it's the best AI movie in a decade, isn't saying it's > a great movie. :-) > If you are researching movies, the place to go is The Internet Movie Database. They also provide lists of the Top 50 movies by genre, based on users' votes. These could act as a useful reminder of popular films. The Top 50 Sci-fi is here: Films appear under several genres, though. Mystery and Thriller also contain many futuristic films. BillK From jcowan5 at sympatico.ca Fri Jun 29 18:00:30 2007 From: jcowan5 at sympatico.ca (Josh Cowan) Date: Fri, 29 Jun 2007 14:00:30 -0400 Subject: [ExI] Is this really the case? In-Reply-To: <38576.72.236.102.84.1183114031.squirrel@main.nc.us> References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> <38576.72.236.102.84.1183114031.squirrel@main.nc.us> Message-ID: <4214c31864002db1f71c1352f9409f89@sympatico.ca> Only because this thread is live am I sending this comedic vid. http://www.thedailyreel.com/videos/american-cookbook In answer to Anne's original query. IMHO, the answer is yes for much of the U.S. Have a God Fearing Day! :-) Josh From natasha at natasha.cc Fri Jun 29 15:12:56 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 29 Jun 2007 10:12:56 -0500 Subject: [ExI] MEDIA: TransVision 2007 - Chicago July 23-26, 2007 Message-ID: <200706291513.l5TFCxa2009073@ms-smtp-03.texas.rr.com> Greetings! I hope to see you at TransVision 2007 in Chicago! 
______________________________________________________________________

*** MEDIA ADVISORY *** MEDIA ADVISORY *** MEDIA ADVISORY ***

WILLIAM SHATNER, ED BEGLEY JR., RAY KURZWEIL AND AUBREY DE GREY TO KEYNOTE NINTH ANNUAL INTERNATIONAL TRANSVISION 2007 CONFERENCE IN CHICAGO

Inventor, Author and Futurist Ray Kurzweil To Be Awarded The H.G. Wells Award For Technological Contributions To Humanity At Gala Awards Dinner

WHO: The World Transhumanist Association (WTA) is hosting the ninth annual TransVision conference, a leading international gathering of science, technology and policy leaders, and will hold this year's event at Chicago's Fairmont Hotel from July 24 - 26, 2007. It is only the second time that the TransVision conference will be held in the United States. This year's theme is "Transhumanity Saving Humanity: Inner Space to Outer Space" with keynotes by internationally renowned visionaries William Shatner, Emmy award-winning actor, environmentalist; Ed Begley Jr., actor and environmentalist; Ray Kurzweil, inventor, author and futurist; and Aubrey de Grey, acclaimed longevity scientist.

Confirmed presenters include: Peter Diamandis, XPrize Foundation; Max More, strategic philosopher and author of the philosophy of transhumanism; James Gardner, author; Marvin Minsky, the father of artificial intelligence; Jerome C. Glenn, Millennium Project; Martine Rothblatt, founder, Terasem Movement; Natasha Vita-More, artist and author of the Transhuman Statement; Nick Bostrom, Director of the Oxford Future of Humanity Institute; James Hughes, Executive Director, Institute for Ethics and Emerging Technologies; Sky Marsen, Ph.D., Semiotics Expert, Linguistics and Cognitive Science, University of London; Ron Bailey, Science Correspondent, Reason Magazine; Ralph Merkle, Alcor Foundation; Michael Weiner, President, Biophan Technologies; Michael Ekstract, Editor, Verdant Magazine; Barbara Marx Hubbard, Foundation for Conscious Evolution; Giorgio Gaviraghi, Mars Society Italia; and Dr. Andrew Rosenson, Heartscan Chicago.

WHAT: Speakers will address how emerging technology will give our societies the ability to solve the grand challenges facing humanity. Global health, the environment and space development will be addressed. The conference will consist of three days of intensive briefings by some of the most influential futurists, innovators, policy leaders and celebrities from the U.S. and around the globe.

- Day 1: Inner Space: Transforming Ourselves -- Aging, Life Extension, Nanotech, Nanomedicine, Bionics, Biotech, Strategies for Engineered Negligible Senescence, Cryonics
- Day 2: Meta Space: Transforming Humanity -- Environment, Global Warming, Sustainable Housing, Alternative Energy, Artificial Intelligence, Robotics, Virtual Reality
- Day 3: Outer Space: Beyond the Planet -- Future Humans, Colonizing Outer Space, Space Tourism, Future Civilizations

In addition, inventor, author and futurist Ray Kurzweil will be awarded the prestigious H.G. Wells Award, named after Herbert George Wells, a 19th and 20th Century English futurist and writer. The WTA will honor Ray Kurzweil for outstanding technical contributions made to humanity, at the Gala awards dinner reception on Thursday, July 26 at the Fairmont Hotel, Chicago.
As one of the leading inventors of our time, Ray was the principal developer of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition. Ray Kurzweil also has written five books, four of which have been national best sellers. The Age of Spiritual Machines has been translated into 9 languages and was the #1 best selling book on Amazon in science. Ray's latest book, The Singularity is Near, was a New York Times best seller, and has been the #1 book on Amazon in both science and philosophy.

WHEN:
- Monday, July 23, 2007: Opening night reception at Chicago's Fairmont Hotel
- Tuesday, July 24 - Thursday, July 26, 2007: Conference sessions
- Thursday, July 26, 2007: Gala awards dinner reception, Chicago's Fairmont Hotel

WHERE: Fairmont Hotel, 200 North Columbus Drive, Chicago, Illinois; Tel: (312) 565-8000; Fax: (312) 856-1032

HOW: To register for TransVision 2007 ($575 until June 30; $175 for students; $125 for Gala dinner), visit www.transvision2007.com

MEDIA INTERVIEWS: For interviews with conference speakers and the TransVision 2007 Conference Chair, contact Emanuela Cariolagian, 323.644.2111 or press at transvision2007.com

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mmbutler at gmail.com Fri Jun 29 20:10:00 2007
From: mmbutler at gmail.com (Michael M. Butler)
Date: Fri, 29 Jun 2007 13:10:00 -0700
Subject: Re: [ExI] Is this really the case?
In-Reply-To: <4214c31864002db1f71c1352f9409f89@sympatico.ca>
References: <197385.36024.qm@web30401.mail.mud.yahoo.com> <065601c7b9f1$aaf83a80$6400a8c0@hypotenuse.com> <62c14240706282026y563647a1n1461367b97e2bb02@mail.gmail.com> <38576.72.236.102.84.1183114031.squirrel@main.nc.us> <4214c31864002db1f71c1352f9409f89@sympatico.ca>
Message-ID: <7d79ed890706291310q2c21b003s85cd49ec0e33c578@mail.gmail.com>

On 6/29/07, Josh Cowan wrote:
> In answer to Anne's original query. IMHO, the answer is yes for much of > the U.S.

The answer is _also *no*_ for much of the US. It depends on which "much" you mean, as well as what you mean by "this" and "the case". A single metric doesn't map the terrain well. The US never had as much involvement of church and state as, say, France did, so they never turned out the religio-bastards the way the French did. The US also postdates the exodus of the French protestants and the German Catholics to Switzerland, with its odd but predictable demographic consequence regarding those cantons--stiff French-Swiss ones and laid-back German-Swiss ones.

Plenty people in the US are more religious than I am. Plenty people in the US, when polled, say things like "I'd never trust an atheist in public office". Both of those facts are a far cry from showing Dominionism (or even creationism) ascendant.

Let's stop talking about it, shall we? :)

-- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m 'Piss off, you son of a bitch. Everything above where that plane hit is going to collapse, and it's going to take the whole building with it. I'm getting my people the fuck out of here."
-- Rick Rescorla (R.I.P.), cell phone call, 9/11/2001 From thespike at satx.rr.com Fri Jun 29 21:51:13 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 29 Jun 2007 16:51:13 -0500 Subject: [ExI] Psi quantum observation experiment In-Reply-To: <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com > References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com> <02b901c7ba15$99c36ef0$10074e0c@MyComputer> <7.0.1.0.2.20070629013916.022a8188@satx.rr.com> <02fe01c7ba1e$99936f30$10074e0c@MyComputer> <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> Message-ID: <7.0.1.0.2.20070629163612.025299a8@satx.rr.com> At 07:50 AM 6/29/2007 -0400, Mike Dougherty wrote: >On 6/29/07, John K Clark wrote: > > One truck > > driver I've never heard of is approved by another truck driver I've never > > heard of and the results are printed on Truck Drivers Digest... > > Sorry, as hard as I try I just can't manage to get excited by this stuff. > >On the contrary, you seem compelled to comment on this topic every >time it comes up. To use your analogy; it appears that although you >are not a truck driver, do not know any truck drivers and have little >interest in truck driver related issues, but you are adamantly opposed >to truck drivers' discussion where you might access it. Spot-on, Mike. But-- I can understand and even sympathize with John's vexation. It's the way I'd feel if Eliezer suddenly started posting about the wondrous insights of the Bible Code. (Especially since I quoted JKC copiously throughout THE SPIKE, under the nym "Pelagius", as the champion of an extropic view of molecular nanotech.) On the other hand, if Eliezer *did* do that, I'd probably swallow hard and at least consider his arguments, read his sources, and try to dispute with him if I remain skeptical. John's position is pretty much the mirror image of the denunciation I received on amazon.com by some, uh, nitwit: "Unfortunately, the book is freighted with a lot of unpleasant baggage that stems from Broderick's obvious position as a skeptical materialist without a speck of the spiritual in him. He spends much of his time on a kind of People magazine-esque look at the odd personalities involved in psi research instead of the data. He's snide, smug and dismissive about aspects of the psi subculture that he's clearly decided a priori are the realm of New Age loonies: survival of consciousness, a holistic universe, and so on. And he's aghast at anything that smacks of the spiritual (even if it has nothing to do with religion). He spends much of the latter half of the book trying to rationalize his way out of the idea that according to quantum theory, a conscious observer is indeed necessary to create reality, a fact that is supported by the latest experiments by Anton Zeilinger at the University of Vienna. "When confronted by any whiff of holism, zero point energy or anything else speculative, no matter how compelling, Broderick resorts to phrases like "horrifying" and "makes the hair stand up on the back of my neck." He comes off sounding like a dogmatic rationalist skeptic who was forced backward into belief about psi by the mountain of evidence, but cannot bend his mind around any other possibilities, even if there is good evidence to suggest they might be true.... Broderick is a thoroughgoing materialist and won't even consider the possibility. 
Skepticism and rigorous thinking are vital in these areas, because there is a lot of quantum nonsense and wishful thinking out there. But cynicism is more appropriate to Randi and his unsavory crowd of scoffers. The author drips with barely concealed contempt for some of the ideas presented in his book, and every time he showed such an attitude I wanted to fling the book across the room." Guess that guy must be a Truckdriver. :) By the way, back in the day when EvMick was posting on extropes, dissing truck drivers would have been potentially hazardous to one's health. Damien Broderick From mbb386 at main.nc.us Fri Jun 29 22:33:27 2007 From: mbb386 at main.nc.us (MB) Date: Fri, 29 Jun 2007 18:33:27 -0400 (EDT) Subject: [ExI] EvMick (was Psi quantum observation experiment) In-Reply-To: <7.0.1.0.2.20070629163612.025299a8@satx.rr.com> References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com> <02b901c7ba15$99c36ef0$10074e0c@MyComputer> <7.0.1.0.2.20070629013916.022a8188@satx.rr.com> <02fe01c7ba1e$99936f30$10074e0c@MyComputer> <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> <7.0.1.0.2.20070629163612.025299a8@satx.rr.com> Message-ID: <38670.72.236.102.111.1183156407.squirrel@main.nc.us> > > Guess that guy must be a Truckdriver. :) > > By the way, back in the day when EvMick was posting on extropes, > dissing truck drivers would have been potentially hazardous to one's health. > Hm. What ever happened to him, where'd he go? Regards, MB From pharos at gmail.com Fri Jun 29 23:11:24 2007 From: pharos at gmail.com (BillK) Date: Sat, 30 Jun 2007 00:11:24 +0100 Subject: [ExI] EvMick (was Psi quantum observation experiment) In-Reply-To: <38670.72.236.102.111.1183156407.squirrel@main.nc.us> References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com> <02a201c7ba11$174e3440$10074e0c@MyComputer> <7.0.1.0.2.20070629010300.02206c30@satx.rr.com> <02b901c7ba15$99c36ef0$10074e0c@MyComputer> <7.0.1.0.2.20070629013916.022a8188@satx.rr.com> <02fe01c7ba1e$99936f30$10074e0c@MyComputer> <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> <7.0.1.0.2.20070629163612.025299a8@satx.rr.com> <38670.72.236.102.111.1183156407.squirrel@main.nc.us> Message-ID: On 6/29/07, MB wrote: > > > > > Guess that guy must be a Truckdriver. :) > > > > By the way, back in the day when EvMick was posting on extropes, > > dissing truck drivers would have been potentially hazardous to one's health. > > > > Hm. What ever happened to him, where'd he go? > EvMick is posting a lot on his blog (with photos) at Maybe that is taking up his web time nowadays. BillK From spike66 at comcast.net Sat Jun 30 00:35:59 2007 From: spike66 at comcast.net (spike) Date: Fri, 29 Jun 2007 17:35:59 -0700 Subject: [ExI] EvMick (was Psi quantum observation experiment) In-Reply-To: Message-ID: <200706300035.l5U0ZsFx028624@andromeda.ziaspace.com> > EvMick is posting a lot on his blog (with photos) at > Thanks Bill. I looked all over his blog trying to find a contact, no luck. Looks like he is doing well these days. What's it been 6 years? Is anyone here registered to leave comments on EvMick's blog? Another missing in action is Lee Daniel Crocker, altho I hear he is still in the neighborhood somewhere. 
spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of BillK > Sent: Friday, June 29, 2007 4:11 PM > To: ExI chat list > Subject: Re: [ExI] EvMick (was Psi quantum observation experiment) > > On 6/29/07, MB wrote: > > > > > > > > Guess that guy must be a Truckdriver. :) > > > > > > By the way, back in the day when EvMick was posting on extropes, > > > dissing truck drivers would have been potentially hazardous to one's > health. > > > > > > > Hm. What ever happened to him, where'd he go? > > > > > EvMick is posting a lot on his blog (with photos) at > > > Maybe that is taking up his web time nowadays. > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at comcast.net Sat Jun 30 00:25:21 2007 From: spike66 at comcast.net (spike) Date: Fri, 29 Jun 2007 17:25:21 -0700 Subject: [ExI] Psi quantum observation experiment In-Reply-To: <7.0.1.0.2.20070629163612.025299a8@satx.rr.com> Message-ID: <200706300039.l5U0dp55020542@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Damien Broderick ... > > By the way, back in the day when EvMick was posting on extropes, > dissing truck drivers would have been potentially hazardous to one's > health. > > Damien Broderick Where the heck is EvMick these days? That guy always made me laugh. Anyone here buddies with him? Please do send our greetings and a warm invitation to pop in. spike From pharos at gmail.com Sat Jun 30 09:45:02 2007 From: pharos at gmail.com (BillK) Date: Sat, 30 Jun 2007 10:45:02 +0100 Subject: [ExI] EvMick (was Psi quantum observation experiment) In-Reply-To: <200706300035.l5U0ZsFx028624@andromeda.ziaspace.com> References: <200706300035.l5U0ZsFx028624@andromeda.ziaspace.com> Message-ID: On 6/30/07, spike wrote: > Thanks Bill. I looked all over his blog trying to find a contact, no luck. > Looks like he is doing well these days. What's it been 6 years? Is anyone > here registered to leave comments on EvMick's blog? > Looks like anyone can post a comment on his blog. Just type in any username and comment and 'submit'. Three years ago he posted to Exi using evmick at earthlink.net His older address may still be valid. EvMick at aol.com BillK From scerir at libero.it Sat Jun 30 09:20:44 2007 From: scerir at libero.it (scerir) Date: Sat, 30 Jun 2007 11:20:44 +0200 Subject: [ExI] Psi quantum observation experiment References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com><02a201c7ba11$174e3440$10074e0c@MyComputer><7.0.1.0.2.20070629010300.02206c30@satx.rr.com><02b901c7ba15$99c36ef0$10074e0c@MyComputer><7.0.1.0.2.20070629013916.022a8188@satx.rr.com><02fe01c7ba1e$99936f30$10074e0c@MyComputer> <62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> Message-ID: <002501c7baf7$f0638190$d8901f97@archimede> > "Hey you damn psi-experiment kids, > get off my concrete science lawn!" :-) "For example, if you could go back in time to the 19th century and inform the physicists of the time about the implications of Bell's Theorem, would they regard those implications as "supernatural" and therefore impossible?" 
-Ulrich Mohrhoff (in the page below)

http://thisquantumworld.com/wordpress/index.php?tag=psi&paged=2

From jonkc at att.net Sat Jun 30 16:47:53 2007
From: jonkc at att.net (John K Clark)
Date: Sat, 30 Jun 2007 12:47:53 -0400
Subject: [ExI] Psi quantum observation experiment
References: <7.0.1.0.2.20070628175620.022251d8@satx.rr.com><02a201c7ba11$174e3440$10074e0c@MyComputer><7.0.1.0.2.20070629010300.02206c30@satx.rr.com><02b901c7ba15$99c36ef0$10074e0c@MyComputer><7.0.1.0.2.20070629013916.022a8188@satx.rr.com><02fe01c7ba1e$99936f30$10074e0c@MyComputer><62c14240706290450q741024fbt9b8e134cce88623d@mail.gmail.com> <002501c7baf7$f0638190$d8901f97@archimede>
Message-ID: <008701c7bb36$6ead0790$1c044e0c@MyComputer>

"scerir"

> if you could go back in time to the 19th > century and inform the physicists of the time about > the implications of Bell's Theorem, would they regard > those implications as "supernatural" and therefore impossible?"

And if you went back to the 19th century you would find respected scientists of the day saying that the evidence for the existence of Psi (called spiritualism back then) was virtually nonexistent while the man in the street and those with no scientific training were saying the evidence was overwhelming. So what has changed after more than a century? Not one God damn thing! A more unproductive area of "research" you could not find.

John K Clark

From desertpaths2003 at yahoo.com Sat Jun 30 21:46:27 2007
From: desertpaths2003 at yahoo.com (John Grigg)
Date: Sat, 30 Jun 2007 14:46:27 -0700 (PDT)
Subject: [ExI] Skeptic magazine article "The Case for Incrementalism in the Future of Science"
In-Reply-To: <62c14240706261826s283da22fm2f10d43af1258746@mail.gmail.com>
Message-ID: <201761.67364.qm@web35608.mail.mud.yahoo.com>

The latest issue of the Skeptic has an article entitled "The Case for Incrementalism in the Future of Science" by Mordechai Ben-Ari. I considered it a second-rate work which tried without sufficient explanation to show how science would continue to steadily plug away rather than take off in an exponential way dear to the hearts of Transhumanists.
I felt this was one of the worst written and thought out articles I have ever read in the Skeptic and was amazed it was ever accepted and published by Shermer and his editors. Has anyone else here read this? What did you think? I know Michael Shermer has taken some potshots at cryonics and so perhaps now he feels it's time to aim at Transhumanism in general. John Grigg --------------------------------- Be a better Globetrotter. Get better travel answers from someone who knows. Yahoo! Answers - Check it out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fauxever at sprynet.com Sat Jun 30 22:25:20 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 30 Jun 2007 15:25:20 -0700 Subject: [ExI] Skeptic magazine article "The Case for Incrementalism in theFuture of Science" References: <201761.67364.qm@web35608.mail.mud.yahoo.com> Message-ID: <001401c7bb65$8bcb0e60$6501a8c0@brainiac> From: John Grigg Sent: Saturday, June 30, 2007 2:46 PM > The latest issue of the Skeptic has an article entitled "The Case for Incrementalism in the Future of Science" by Mordechai Ben-Ari. I considered it a second-rate work which tried without sufficient explanation to show how science would continue to steadily plug away rather than take off in an exponential way dear to the hearts of Transhumanists. I felt this was one of the worst written and thought out articles I have ever read in the Skeptic and was amazed it was ever accepted and published by Shermer and his editors. Has anyone else here read this? What did you think? Hmmm ... I haven't read it (now I have to remember if I'm even still subscribed to Skeptic ... have hardly been able to keep up with these in the past - always putting them aside for "when I have more time."). I will look for it. > I know Michael Shermer has taken some potshots at cryonics and so perhaps now he feels it's time to aim at Transhumanism in general. Shermer - yeah - he's a little full of himself. People on this list may do well to write some letters to the editor to Skeptic. Or a rebuttal article? Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Jun 30 22:44:20 2007 From: pharos at gmail.com (BillK) Date: Sat, 30 Jun 2007 23:44:20 +0100 Subject: [ExI] Skeptic magazine article "The Case for Incrementalism in the Future of Science" In-Reply-To: <201761.67364.qm@web35608.mail.mud.yahoo.com> References: <62c14240706261826s283da22fm2f10d43af1258746@mail.gmail.com> <201761.67364.qm@web35608.mail.mud.yahoo.com> Message-ID: On 6/30/07, John Grigg wrote: > The latest issue of the Skeptic has an article entitled "The Case for > Incrementalism in the Future of Science" by Mordechai Ben-Ari. I considered > it a second-rate work which tried without sufficient explanation to show how > science would continue to steadily plug away rather than take off in an > exponential way dear to the hearts of Transhumanists. I felt this was one > of the worst written and thought out articles I have ever read in the > Skeptic and was amazed it was ever accepted and published by Shermer and his > editors. Has anyone else here read this? What did you think? > I haven't read the article but it reminds me of the controversial 1996 book 'The End of Science' by John Horgan. Quote: In a series of interviews with luminaries of modern science, Scientific American senior editor John Horgan conducted a guided tour of the scientific world and where it might be headed in The End of Science. 
The book, which generated great controversy and became a bestseller, now appears in paperback with a new afterword by the author. Through a series of essays in which he visits with such figures as Roger Penrose, Stephen Jay Gould, Stephen Hawking, Freeman Dyson, and others, Horgan captures the distinct personalities of his subjects while investigating whether science may indeed be reaching its end. End quote. He revisited his thesis in a 2006 lecture here: where he tries to answer the most usual objections. And in his blog he made further comments: Many of his critics reacted violently to his suggestion that science might be running out of frontiers, but he isn't a fool. He makes an arguable case for caution about the future of science. He summed it up in a final statement: So let me be more blunt in my advice to would-be scientists: "By all means become a scientist. But don't think you're going to top Newton or Darwin or Einstein or Watson/Crick by discovering something as monumental as gravity or natural selection or quantum mechanics or relativity or the double helix, because your chances are slim to none. The era of those sorts of big discoveries is over. Also, don't go into particle physics! Especially don't waste your time on string theory, or loop-space theory, or multi-universe theories, or any of the other pseudo-scientific crap in physics and cosmology that we science journalists love so much. And don't follow Steve Wolfram and other chaoplexologists chasing after a unified theory of matter-life-consciousness-everything-under-the-sun. That's as futile as trying to prove the existence of God. Pick a real-world problem that you have some chance of resolving, preferably in a way that improves peoples' lives. Do something useful with your talent! We need your help." BillK