From spike66 at comcast.net Fri Jun 1 00:19:22 2007 From: spike66 at comcast.net (spike) Date: Thu, 31 May 2007 17:19:22 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: Message-ID: <200706010039.l510dBrC007492@andromeda.ziaspace.com> On 5/29/07, spike wrote: > >Similarly Libby wasn't really the one they wanted. >Who's the "they" you're talking about, Spike? The CIA? The Justice Dept.? Patrick Fitzgerald? ... Best, Jeff Davis Ja, Patrick Fitzgerald and company. More on this later, gotta go to a friend's kid's graduation. Jeff, it's great to see you posting again. We wondered where you had been and hoped you were OK. You are well and happy, ja? Your bride too? {8-] spike From stathisp at gmail.com Fri Jun 1 01:09:30 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 11:09:30 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> References: <4653BBEF.3010808@comcast.net> <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> Message-ID: On 01/06/07, Lee Corbin wrote: > Such a dominant AI would be held in check by at least as many factors > > as a human's dominance is held in check: pressure from society in the > > form of initial programming, reward for human-friendly behaviour, > > You seem as absolutely sure that the first programmers will succeed > with Friendliness as John Clark is absolutely sure that the AI will > spontaneously ignore all its early influence. We just don't know, we > cannot know. Aren't there many reasonable scenarios where you're > just wrong? I.e., some very bright Chinese kids keep plugging away > at a seed-AI, and take no care whatsoever that it's Friendly. They > succeed, and bam! the world's taken over. I don't see how that's possible. How is the AI going to commandeer the R&D facilities, organise manufacture of new hardware, make sure that the factories are kept supplied with components, make sure the component factories are supplied with raw materials, make sure the mines produce the raw materials, make sure the dockworkers load the raw materials onto ships etc. etc. etc. etc. Perhaps I am sinning against the singularity idea in saying this, but do you really think it's just a matter of writing some code on a PC somewhere, which then goes on to take over the world? > and the censure of other AI's (which would be just as capable, > > more numerous, and more likely to be following human-friendly > > programs). > > But how do you *know* or how are you so confident that *one* > AI may suddenly be a breakthrough, and start making improvements > to itself every few hours, and then simply take over everything? It's possible that an individual human somewhere will develop a superweapon, or mind-control abilities, or a viral vector that inserts his DNA into every living cell on the planet; it's just not very likely. And why do you suppose that rapid self-improvement of the world-dominating kind is more likely in an AI than in the nanotechnology that has evolved naturally over billions of years? For that matter, why do you suppose that human level intelligence has not evolved before, to our knowledge, if it's so adaptive? 
I don't know the answer to these questions, but when you look at the universe, there isn't really any evidence that intelligence is as "adaptive" as we might assume it to be. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Fri Jun 1 02:15:48 2007 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 31 May 2007 19:15:48 -0700 Subject: [ExI] Other thoughts on transhumanism and religion In-Reply-To: <200705311442.l4VEgHDn029517@andromeda.ziaspace.com> References: <465E871E.30008@mac.com> <200705311442.l4VEgHDn029517@andromeda.ziaspace.com> Message-ID: <51ce64f10705311915u92bd7c4h16cebbddc0b1b918@mail.gmail.com> On 5/31/07, spike wrote: > > In the late 80s and early 90s, the K. Eric used to give talks to the local > electronics and technology groups, so I attended several of these. Then he > gave that up, and we haven't heard much from him since about the time > Freitas' Nanomedicine was published. Where is Robert Freitas hanging out > these days? Anyone here buddies with him? For the latest with Robert Freitas, see here: http://lifeboat.com/ex/interview.robert.a.freitas.jr Robert has just completed a large project to analyze a "comprehensive set of DMS reactions and tooltips that could be used to build diamond, graphene (e.g., carbon nanotubes), and all of the tools themselves including all necessary tool recharging reactions." -- Michael Anissimov Lifeboat Foundation http://lifeboat.com http://acceleratingfuture.com/michael/blog From brent.allsop at comcast.net Fri Jun 1 02:58:58 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Thu, 31 May 2007 20:58:58 -0600 Subject: [ExI] The Mormon Missionary Experience (Was: Linguistic Markers of Class) In-Reply-To: <734238.52814.qm@web35607.mail.mud.yahoo.com> References: <734238.52814.qm@web35607.mail.mud.yahoo.com> Message-ID: <465F8B72.3070103@comcast.net> John, Wow, that was all great. You're quite the expert on all that! I've got another story for your grand collection. In Japan, one of my companions was sent home for doing it with one of the wives of a family the missionaries were teaching. I could see how it happened, because I met her one last time before returning home, and when she shook my hand with both hands and looked into my eyes... wow. I think you've got to hand it to the large majority of them that can resist such unimaginable "temptation". And it seemed like every missionary over there had several girls visit them in the US after they returned home, chasing after them, wanting not only a better country economically, but a country that treats women far better. Brent Allsop John Grigg wrote: > Spike wrote: > The church was at that time pondering letting the girls go on > missions too. But a lot of us can think immediately of why that would > be a really bad idea. John or anyone know how that turned out? > I have heard that it is a sport among lonely housewives to try to > seduce the Mormon boys, but I have never heard if anyone ever made a > score with them.{8^D > > > > > This has been quite a walk down memory lane! lol I never meant for > this post to be so long. It has been nearly twenty years since I was > on my mission and yet somehow it seems almost like yesterday. I hope > my words will give greater understanding to the people here of the > young men and women who may show up at their door to share a message. 
> > Sincerely, > > John Grigg > > *//* -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at comcast.net Fri Jun 1 03:33:49 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Thu, 31 May 2007 21:33:49 -0600 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465E871E.30008@mac.com> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> Message-ID: <465F939D.4080005@comcast.net> Extropians, I think this post by Samantha should be Canonized. I, for one, having had a very similar experience, would definitely "support" a topic containing it, and I have counted at least 10 posts full of strong praise. Since there aren't that many topics in the Canonizer yet, if 9 people supported this topic it wold make it to the top of the most supported list at http://test.canonizer.com How many others would be willing to "support" such a topic in the Canonizer if it was submitted? Samantha, would you mind if I posted this post in some other forums (Such as the Mormon Transhumanist Association, WTA...) to find out if there is similar support and praise on other lists? Brent Allsop Samantha Atkins wrote: > I remember in 1988 or so when I first read Engines of Creation. I read > it with tears streaming down my face. Though I was an avowed atheist > and at that time had no spiritual practice at all, I found it profoundly > spiritually moving. For the first time in my life I believed that all > the highest hopes and dreams of humanity could become real, could be > made flesh. I saw that it was possible, on this earth, that the end of > death from aging and disease, the end of physical want, the advent of > tremendous abundance could all come to pass in my own lifetime. I saw > that great abundance, knowledge, peace and good will could come to this > world. I cried because it was a message of such pure hope from so > unexpected an angle that it got past all my defenses. I looked at the > cover many times to see if it was marked "New Age" or "Fiction" or > anything but Science and Non-Fiction. Never has any book so blown my > mind and blasted open the doors of my heart. > > Should we be afraid to give a message of great hope to humanity? Should > we be afraid that we will be taken to be just more pie in the sky > glad-hand dreamers? Should we not dare to say that the science and the > technology combined with a bit (well perhaps more than a bit) of a shift > of consciousness could make all the best dreams of all the religions and > all the generations a reality? Will we not have failed to grasp this > great opportunity if we do not say it and dare to think it and to live > it? Shall we be so afraid of being considered "like a religion" that > we do not offer any real hope to speak of and are oh so careful in all > we do and say and dismissive of more unrestrained and open dreamers? > Or will we embrace them, embrace our own deepest longings and admit our > kinship with those religious as with all the longing of all the > generations that came before us. Will we turn our backs on them or even > disdain their dreams - we who are in a position to begin at long last to > make most of those dreams real? How can we help but be a bit giddy > with excitement? How can we say no to such an utterly amazing > mind-blowing opportunity? 
> > - samantha > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From russell.wallace at gmail.com Fri Jun 1 04:04:20 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 05:04:20 +0100 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465F939D.4080005@comcast.net> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> On 6/1/07, Brent Allsop wrote: > > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, if 9 > people supported this topic it wold make it to the top of the most > supported list at http://test.canonizer.com Excellent idea! How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? *raises hand* -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at thomasoliver.net Fri Jun 1 04:08:17 2007 From: thomas at thomasoliver.net (Thomas) Date: Thu, 31 May 2007 21:08:17 -0700 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> <8d71341e0705312104x7143dc47u7d470554ccfbf46c@mail.gmail.com> Message-ID: <50C0FDDF-B217-4121-9975-EB3D26456420@thomasoliver.net> I second! -- Thomas On May 31, 2007, at 9:04 PM, Russell Wallace wrote: > On 6/1/07, Brent Allsop wrote: > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, > if 9 > people supported this topic it wold make it to the top of the most > supported list at http://test.canonizer.com > > Excellent idea! > > How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? > > *raises hand* > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Thomas at ThomasOliver.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Fri Jun 1 06:32:20 2007 From: scerir at libero.it (scerir) Date: Fri, 1 Jun 2007 08:32:20 +0200 Subject: [ExI] something in the air References: <200705310534.l4V5YSBj021122@andromeda.ziaspace.com><000401c7a36b$a713f680$6d931f97@archimede><000701c7a3af$9e403200$68911f97@archimede> <20070531192942.GH17691@leitl.org> Message-ID: <000a01c7a416$9c3519f0$57bf1f97@archimede> Eugen: > http://www.google.com/search?&q=benzoylecgonine+river The italian tv show 'Le Iene' is famous for its more or less playful 'entrapments'. They pretended to interview many politicians about national next year's (2007) budget. 
What the politicians didn't know was that they collected their body cells during the pre-interview brow wipe. The cells were secretly used to test the politicians for drugs ... http://abclocal.go.com/kgo/story?section=politics&id=4654588 From jrd1415 at gmail.com Fri Jun 1 07:25:06 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 1 Jun 2007 00:25:06 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: <200706010039.l510dBrC007492@andromeda.ziaspace.com> References: <200706010039.l510dBrC007492@andromeda.ziaspace.com> Message-ID: On 5/31/07, spike wrote: > Jeff, its great to see you posting again. We wondered where you had been > and hoped you were OK. You are well and happy, ja? Your bride too? > The world's surpassing strange my friend. If I live ten thousand years, the mystery and wonder will only deepen. I'm lovin' it. All's good with me and mine. Too soon to tell but, after a drought, the pleasure of writing may be returning. But, when I'm not working hard, I'm working hard at procrastinating, and writing is sooo hard and takes soooo loooooong. Who am I kidding? The questions I want to explore just keep piling up, unasked and unanswered. I need those bio-computational upgrades yesterday. These delays are quite irksome. I launched my kayak from the back yard today and paddled, oh, maybe five hundered meters, to the oyster beds. Collected five dozen just by reaching over the side. Gail and I are going to visit friends on Salt Spring Island this weekend. They like oysters. The sun was bright, the air warm, and the water nearly glass. A day of pure magic. Now I have to go and fold some laundry. Extropes, if you're up this way -- Sunshine Coast of BC -- drop me a line. Visitors are welcome. I've got toys. -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From eugen at leitl.org Fri Jun 1 10:33:45 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 12:33:45 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> Message-ID: <20070601103345.GE17691@leitl.org> On Fri, Jun 01, 2007 at 11:09:30AM +1000, Stathis Papaioannou wrote: > > I don't see how that's possible. How is the AI going to comandeer the > R&D facilities, organise manufacture of new hardware, make sure that A few years ago a few people made the experiment of obtaining their livelihood without leaving their room. They ordered stuff on the Internet, and had it delivered right into their home. It worked. It would have worked just as well if the credit card numbers were stolen. How much hardware is there on the global network right now? You might be surprised. How much networked hardware will be there 50, 80, 100 years from now? Most to all of it. Desktop fabs will be widespread. Also, people would do about anything for money. Very few would resist the temptation of a few quick megabucks on the side. I really see no issues breaking out of containment by remote hardware takeover, using which to build more hardware. The old adage of "we'll pull their plugs" has always sounded ill-informed to me. > the the factories are kept supplied with components, make sure the Of course most of the supply-chain management today is information-driven, and many fabs are off-limit to people, because they're a major source of contaminants. 
> component factories are supplied with raw materials, make sure the How are component factories supplied with raw materials today? > mines produce the raw materials, make sure the dockworkers load the A plant needs sunlight, water, air and trace amounts of minerals as raw materials. A lot of what bottlenecks computational material science and chemistry is intellectual difficulty, number of experts, availability of codes with adequate scaling, and computer power. Given that it takes a 64 kNode Blue Gene/L to run a realtime cartoon mouse, you can imagine how much hardware you need for a human equivalent, and what else you could do with that hardware, which will be all-purpose initially. Use your imagination. The problem is not nearly as hard as you think it is. > raw materials onto ships etc. etc. etc. etc. Perhaps I am sinning > against the singularity idea in saying this, but do you really think > it's just a matter of writing some code on a PC somewhere, which then > goes on to take over the world? It's not a PC. We don't have the hardware yet, especially in small facilities. It's not a program, not in what people write today. > It's possible that an individual human somewhere will develop a > superweapon, or mind-control abilities, or a viral vector that inserts You can xerox superweapons. Pimply teenagers can run 100 kNode botnets from their basements -- some 25% of all online systems are compromised. I wouldn't underestimate the aggregate power of a billion petaflop game consoles on residential GBit a couple decades from now. > his DNA into every living cell on the planet; it's just not very > likely. And why do you suppose that rapid self-improvement of the > world-dominating kind is more likely in an AI than in the > nanotechnology that has evolved naturally over billions of years? For Because it can't do generation times in seconds. Linear biopolymers are slow as far as information processing is concerned. Also, AIs are just proxies for aggregated GYears of biological evolution. > that matter, why do you suppose that human level intelligence has not > evolved before, to our knowledge, if it's so adaptive? I don't know We're starting with human level, because we already have human level. We don't start with cyanobacteria. > thwe answer to these questions, but when you look at the universe, > there isn't really any evidence that intelligence is as "adaptive" as > we might assume it to be. We certainly managed some advances in a 50 kYrs time frame, and without major changes to hardware. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Jun 1 11:23:09 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 21:23:09 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601103345.GE17691@leitl.org> References: <05f901c7a1c0$7febefb0$6501a8c0@homeef7b612677> <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > > On Fri, Jun 01, 2007 at 11:09:30AM +1000, Stathis Papaioannou wrote: > > > > I don't see how that's possible. 
How is the AI going to comandeer the > > R&D facilities, organise manufacture of new hardware, make sure that > > A few years ago a few people made the experiment of obtaining their > livelihood without leaving their room. They ordered stuff on the Internet, > and had it delivered right into their home. It worked. It would have > worked just as well if the credit card numbers were stolen. > > How much hardware is there on the global network right now? You might > be surprised. How much networked hardware will be there 50, 80, 100 > years from now? Most to all of it. Desktop fabs will be widespread. > Also, people would do about anything for money. Very few would resist > the temptation of a few quick megabucks on the side. > > I really see no issues breaking out of containment by remote hardware > takeover, using which to build more hardware. The old adage of > "we'll pull their plugs" has always sounded ill-informed to me. With all the hardware that we have networked and controlling much of the technology of the modern world, has any of it spontaneously decided to take over for its own purposes? Do you know of any examples where the factory has tried to shut out the workers, for example, because it would rather not be a slave to humans? The reply that current software and hardware isn't smart enough won't do: in biology, the very dumbest of organisms are constantly and spontaneously battling to take over the smartest, often with devastating results. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jun 1 11:33:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 13:33:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> Message-ID: <20070601113357.GG17691@leitl.org> On Fri, Jun 01, 2007 at 09:23:09PM +1000, Stathis Papaioannou wrote: > With all the hardware that we have networked and controlling much of > the technology of the modern world, has any of it spontaneously > decided to take over for its own purposes? Do you know of any examples Of course not. It is arbitrarily improbable to appear by chance. However, human-level AI is very high on a number of folks' priority list. It definitely won't happen by chance. It will happen by design. > where the factory has tried to shut out the workers, for example, Did you read my mail? Automation is very widespread in current factories, silicon foundries specifically. You don't need to shut out anyone, just change the product output. > because it would rather not be a slave to humans? The reply that Remote resource takeover is something which will be a part of the deployment plan, and planned by people, not the system itself. > current software and hardware isn't smart enough won't do: in biology, Do you expect your car to explode in a thermonuclear 50 MT-fireball when you start it? Why not? Mere objections that it can't happen won't do. > the very dumbest of organisms are constantly and spontaneously > battling to take over the smartest, often with devastating results. I don't think that the current malware situation is a genuine problem, but many would disagree. But of course the zombies and worms are not sentient, not yet. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From neptune at superlink.net Fri Jun 1 11:19:02 2007 From: neptune at superlink.net (Technotranscendence) Date: Fri, 1 Jun 2007 07:19:02 -0400 Subject: [ExI] Another Nessie film Message-ID: <003201c7a43e$a8ff9c00$6a893cd1@pavilion> http://www.cnn.com/2007/WORLD/europe/05/31/britain.lochness.ap/index.html Looks like a log to me. :) Dan From stathisp at gmail.com Fri Jun 1 12:06:15 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 22:06:15 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601113357.GG17691@leitl.org> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > > On Fri, Jun 01, 2007 at 09:23:09PM +1000, Stathis Papaioannou wrote: > > > With all the hardware that we have networked and controlling much of > > the technology of the modern world, has any of it spontaneously > > decided to take over for its own purposes? Do you know of any > examples > > Of course not. It is arbitrarily improbable to appear by chance. > However, human-level AI is very high on a number of folks' priority > list. It definitely won't happen by chance. It will happen by design. We don't have human level AI, but we have lots of dumb AI. In nature, dumb organisms are no less inclined to try to take over than smarter organisms (and no less capable of succeeding, as a general rule, but leave that point for the sake of argument). Given that dumb AI doesn't try to take over, why should smart AI be more inclined to do so? And why should that segment of smart AI which might try to do so, whether spontaneously or by malicious design, be more successful than all the other AI, which maintains its ancestral motivation to work and improve itself for humans just as humans maintain their ancestral motivation to survive and multiply? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jun 1 12:44:21 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 14:44:21 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> Message-ID: <20070601124421.GI17691@leitl.org> On Fri, Jun 01, 2007 at 10:06:15PM +1000, Stathis Papaioannou wrote: > We don't have human level AI, but we have lots of dumb AI. In nature, There is a qualitative difference between human-designed AI, and naturally evolved AI. Former will never go anywhere. Because of this extrapolations from pocket calculators and chess computers to robustly intelligent (even insects can be that) systems are invalid. > dumb organisms are no less inclined to try to take over than smarter > organisms (and no less capable of succeeding, as a general rule, but > leave that point for the sake of argument). Given that dumb AI doesn't Yes, pocket calculators are not known for trying to take over the world. 
> try to take over, why should smart AI be more inclined to do so? And It doesn't have to be smart, it does have to be able to survive in its native habitat, be it the global network, or the ecosystem. We don't have such systems yet. > why should that segment of smart AI which might try to do so, whether > spontaneously or by malicious design, be more successful than all the There is no other AI. There is no AI at all. > other AI, which maintains its ancestral motivation to work and improve I don't see how there could be a domain-specific AI which specializes in self-improvement. > itself for humans just as humans maintain their ancestral motivation How do you know you're working for humans? What is a human, precisely? If I'm no longer fitting the description, how do I upgrade that description, and what is preventing anyone else from that? > to survive and multiply? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Fri Jun 1 13:37:05 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 1 Jun 2007 23:37:05 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601124421.GI17691@leitl.org> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> Message-ID: On 01/06/07, Eugen Leitl wrote: > We don't have human level AI, but we have lots of dumb AI. In nature, > > There is a qualitative difference between human-designed AI, and > naturally evolved AI. Former will never go anywhere. Because of this > extrapolations from pocket calculators and chess computers to > robustly intelligent (even insects can be that) systems are invalid. Well, I was assuming a very rough equivalence between the intelligence of our smartest AI's and at least the dumbest organisms. We don't have any computer programs that can simulate the behaviour of an insect? What about a bacterium, virus or prion, all organisms which survive, multiply and mutate in their native habitats? It seems a sorry state of affairs if we can't copy the behaviour of a few protein molecules, and yet are talking about super-human AI taking over the world. > dumb organisms are no less inclined to try to take over than smarter > > organisms (and no less capable of succeeding, as a general rule, but > > leave that point for the sake of argument). Given that dumb AI > doesn't > > Yes, pocket calculators are not known for trying to take over the world. > > > try to take over, why should smart AI be more inclined to do so? And > > It doesn't have to be smart, it does have to be able to survive in > its native habitat, be it the global network, or the ecosystem. We don't > have such systems yet. > > > why should that segment of smart AI which might try to do so, whether > > spontaneously or by malicious design, be more successful than all the > > There is no other AI. There is no AI at all. > > > other AI, which maintains its ancestral motivation to work and > improve > > I don't see how there could be a domain-specific AI which specializes > in self-improvement. 
Whenever we have true AI, there will be those which follow their legacy programming (as we do, whether we want to or not) and those which either spontaneously mutate or are deliberately created to be malicious towards humans. Why should the malicious ones have a competitive advantage over the non-malicious ones, which are likely to be more numerous and better funded to begin with? > itself for humans just as humans maintain their ancestral motivation > > How do you know you're working for humans? What is a human, precisely? > If I'm no longer fitting the description, how do I upgrade that > description, > and what is preventing anyone else from that? I am following the programming of the first replicator molecule, "survive". It has been a very robust program, and I am not inclined to question it and try to overthrow it, even though I can now see what my non-sentient ancestors couldn't see, which is that I am being manipulated by evolution. If I were a million times smarter again, I still don't think I'd be any more inclined to overthrow that primitive programming, even though it might be a simple matter for me to do so. So it would be with AI's: their basic programming would be to do such and such and avoid doing such and such, and although there might be a "eureka" moment when the machine realises why it has these goals and restrictions, no amount of intelligence would lead it to question or overthrow them, because such a thing is not a matter of logic or intelligence. Of course, it is always possible that an individual AI would spontaneously change its programming, just as it is always possible that a human will go mad. But these rogue AI's would not have any advantage against the majority of well-behaved AI's. They would pose a risk, but perhaps even less of a risk than the risk of a rogue human who gets his hands on dangerous technology, since after all humans *start off* with rapacious tendencies that have to be curbed by upbringing, social sanctions, self-control and so on, whereas it would be crazy to design computers this way. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Fri Jun 1 14:04:12 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 1 Jun 2007 07:04:12 -0700 (PDT) Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> Message-ID: <495028.43972.qm@web37402.mail.mud.yahoo.com> Okay, Okay... please forgive. :-) I wasn't aware that Vinge had been involved for so long (I thought '93 was his debut) or had made any methodical predictions - I need to study more about him. I didn't mean any offense. Best, Jeffrey Herrlich --- Damien Broderick wrote: > At 02:44 PM 5/31/2007 -0700, Jeffrey Herrlich wrote: > > >that we can still reach a > >positive Singularity by traditional (Kurzweilian) > >progress. > > For the luvva dog! I like Ray and appreciate his PR > efforts, but if > we're going to fling about words like "traditional" > the name to > acknowledge is Vernor Vinge, who got the word out > there 20 fucking > years earlier. The phrase of choice, especially here > where we the > few, the proud, the lonely forerunners know what > we're talking about > is... "by traditional (Vingean) progress". > > I know this is a narrow little meat-monkey matter, > and that Vernor > probably doesn't care less, but humans work to a > surprising degree by > mutual acknowledgement, especially in the > intellectual realm. Give > the man his due. 
> > Damien Broderick > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________Ready for the edge of your seat? Check out tonight's top picks on Yahoo! TV. http://tv.yahoo.com/ From rafal.smigrodzki at gmail.com Fri Jun 1 14:49:42 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Fri, 1 Jun 2007 10:49:42 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> Message-ID: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> On 6/1/07, Stathis Papaioannou wrote: > > Well, I was assuming a very rough equivalence between the intelligence of > our smartest AI's and at least the dumbest organisms. We don't have any > computer programs that can simulate the behaviour of an insect? What about a > bacterium, virus or prion, all organisms which survive, multiply and mutate > in their native habitats? It seems a sorry state of affairs if we can't copy > the behaviour of a few protein molecules, and yet are talking about > super-human AI taking over the world. ### Have you ever had an infection on your PC? Maybe you have a cryptogenic one now... Of course there are many dumb programs that multiply and mutate to successfully take over computing resources. Even as early as the seventies there were already some examples, like the "Core Wars" simulations. As Eugen says, the internet is now an ecosystem, with niches that can be filled by appropriately adapted programs. So far successfully propagating programs are generated by programmers, and existing AI is still not at our level of general understanding of the world but the pace of AI improvement is impressive. ---------------------------------------------------- > > Whenever we have true AI, there will be those which follow their legacy > programming (as we do, whether we want to or not) and those which either > spontaneously mutate or are deliberately created to be malicious towards > humans. Why should the malicious ones have a competitive advantage over the > non-malicious ones, which are likely to be more numerous and better funded > to begin with? ### Because the malicious can eat humans, while the nice ones have to feed humans, and protect them from being eaten, and still eat something to be strong enough to fight off the bad ones. In other words, nice AI will have to carry a lot of inert baggage. And by "eating" I mean literally the destruction of humans bodies, e.g. by molecular disassembly. -------------------- Of course, it is always possible that an individual AI would > spontaneously change its programming, just as it is always possible that a > human will go mad. ### A human who goes mad (i.e. rejects his survival programming), dies. An AI that goes rogue, has just shed a whole load of inert baggage. 
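A minimal sketch of the Core Wars-style dynamic mentioned above: dumb replicator programs competing for cells of a shared memory arena, copying themselves with occasional mutation until they have claimed most of the available space. The arena size, step count, mutation rate, and seed lineages are arbitrary toy values chosen for illustration, not taken from any actual Core Wars implementation.

import random

# Toy arena: each cell is either free (None) or holds a lineage label.
ARENA_SIZE = 1000      # arbitrary illustration value
STEPS = 200            # arbitrary illustration value
MUTATION_RATE = 0.01   # chance a copy is imperfect

arena = [None] * ARENA_SIZE
arena[0], arena[1] = "A", "B"   # seed two competing replicator lineages

def step(arena):
    # Every occupied cell copies its lineage into a random cell,
    # overwriting whatever was there -- dumb, but enough to spread.
    for lineage in list(arena):
        if lineage is None:
            continue
        child = lineage + "'" if random.random() < MUTATION_RATE else lineage
        arena[random.randrange(ARENA_SIZE)] = child

for _ in range(STEPS):
    step(arena)

occupied = [c for c in arena if c is not None]
counts = {c: occupied.count(c) for c in set(occupied)}
print(f"occupied cells: {len(occupied)}/{ARENA_SIZE}")
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])

After a couple of hundred steps essentially every cell has been claimed by one lineage or another, which is the point being made above: no intelligence is required to take over a resource pool, only replication.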
Rafal From eugen at leitl.org Fri Jun 1 14:53:30 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 16:53:30 +0200 Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <495028.43972.qm@web37402.mail.mud.yahoo.com> References: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> <495028.43972.qm@web37402.mail.mud.yahoo.com> Message-ID: <20070601145330.GM17691@leitl.org> On Fri, Jun 01, 2007 at 07:04:12AM -0700, A B wrote: > I wasn't aware that Vinge had been involved for so > long (I thought '93 was his debut) or had made any > methodical predictions - I need to study more about > him. I didn't mean any offense. Is there *anything* to Kurzweil which is original to him? I haven't read any of his oevre, so if any of you are aware of anything, it would be nice to know. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From russell.wallace at gmail.com Fri Jun 1 14:56:57 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 15:56:57 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> On 6/1/07, Rafal Smigrodzki wrote: > > ### Because the malicious can eat humans, while the nice ones have to > feed humans, and protect them from being eaten, and still eat > something to be strong enough to fight off the bad ones. In other > words, nice AI will have to carry a lot of inert baggage. > > And by "eating" I mean literally the destruction of humans bodies, > e.g. by molecular disassembly. > Actually it's the other way around. Man-eating bots would have to carry a huge amount of fantastically complex baggage: the ability to survive, reproduce and adapt in the wild. (So much so, in fact, that they won't exist in the first place; it would take a Manhattan Project to create them, and who's going to pay that much money to be eaten?) Good-guy bots can delegate all that to human designers (assisted by computers that don't have to run on battery power) and factories; they can be slimmed down, specialized for killing the man-eating bots. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Jun 1 15:11:27 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 1 Jun 2007 17:11:27 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> References: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> Message-ID: <20070601151127.GP17691@leitl.org> On Fri, Jun 01, 2007 at 03:56:57PM +0100, Russell Wallace wrote: > Actually it's the other way around. Man-eating bots would have to Well, yeah, it's a weapon. > carry a huge amount of fantastically complex baggage: the ability to Not so fantastically complex. 
Biology packages this in less than a cubic micron. > survive, reproduce and adapt in the wild. (So much so, in fact, that There's not that much for survival: you just have to find enough food to burn. Adaptation comes for free with imperfect reproduction, of course, there are some serious tricks to that. > they won't exist in the first place; it would take a Manhattan Project > to create them, and who's going to pay that much money to be eaten?) You'd need a Manhattan project for machine-phase in any case. Gadgets to gobble up the ecosphere would only require a few more key extras. > Good-guy bots can delegate all that to human designers (assisted by You need human designers, or at least serious amount of computation to crunch out the details. > computers that don't have to run on battery power) and factories; they Power is power. Cellulose/Lignin/fat/protein/humus are just fuel. > can be slimmed down, specialized for killing the man-eating bots. It wouldn't work. Toner wars would be quite deadly in reality, since requiring a lot of fuel to protect the fuel. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From CHealey at unicom-inc.com Fri Jun 1 15:06:32 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 1 Jun 2007 11:06:32 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><065701c7a261$b155b4e0$6501a8c0@homeef7b612677><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org> Message-ID: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou wrote: > > We don't have human level AI, but we have lots of dumb AI. In > nature, dumb organisms are no less inclined to try to take over > than smarter organisms Yes, but motivation and competence are not the same thing. Considering two organisms that are equivalent in functional capability, varying only intelligence level, the smarter ones succeed more often. However, within a small range of intelligence variation, other factors contribute to one's aggregate ability to execute those better plans. So If I'm a smart chimpanzee, but I'm physically weak, following particular courses of action that may be more optimal in general carries greater risk. Adjusting for that risk may actually leave me with a smaller range of options than if I was physically stronger and a bit less smart. But when intelligence differential is large, those other factors become very small indeed. Humans don't worry about chimpanzee politics (no jokes here please :o) because our only salient competition is other humans. We worry about those entities that possess an intelligence that is at least in the same range as our own. Smart chimpanzees are not going to take over our civilization anytime soon, but a smarter and otherwise well-adapted chimp will probably be inclined and succeed in leading its band of peers. > (and no less capable of succeeding, as a > general rule, but leave that point for the sake of argument). I don't want to leave it, because this is a critical point. As I mentioned above, in nature you rarely see intelligence considered as an isolated variable, and in evolution, intelligence is the product of a red queen race. 
By definition (of a red queen race), your intelligence isn't going to be radically different from your direct competition, or the race would never have started or escalated. So it confusingly might not look like your chances of beating "the Whiz on the block" are that disproportionate, but the context is so narrow that other factors can overwhelm the effect of intelligence over that limited range. In some sense, our experiential day-to-day understanding of intelligence (other humans) biases us to consider its effects over too narrow a range of values. As a general rule, I'd say humans have been very much more successful at "taking over" than chimpanzees and salmon, and that it is primarily due to our superior intelligence. > Given that dumb AI doesn't try to take over, why should smart AI > be more inclined to do so? I don't think a smart AI would be more inclined to try and take over, a priori. But assuming it has *some* goal or goals, it's going to use all of its available intelligence in support of those ends. Since the future is uncertain, and overly directed plans can unnecessarily limit other courses of action that may turn out to be required, it seems highly probable that an increasingly intelligent actor would increasingly seek to preserve its autonomy by constraining that of others in *some* way. Looking at friendly AI in a bit of a non-standard way (kind of flipped around), I'd expect *any* superintelligent AGI to constrain our autonomy in some ways, to preserve its own. That's basic security, and we all do it to others through one means or another. Friendly AI is about *how* the AGI seeks to constrain our autonomy. Instead of looking at it from humanity's perspective which is, how can we launch a recursively improving process that maintains some abstract invariant in its goals (i.e. we don't know where it's going, but we have a strong sense of where it *won't* be going), we can look at FAI from the AGI's viewpoint: how do I assert such abstract invariants on other agents? Which of my priorities do I choose to merely satisfy, and which do I optimize against? As my abilities grow, do I increasingly constrain you, maintain fixed limits, or allow your autonomy to expand along with my own (maintaining a reasonably constant assurance level for my autonomy)? From this perspective, FAI is about the complementary engagement of humanity's autonomy with the AGI's. It's about ensuring that the AGI's representation of reality can include such complex attributions to begin with, and then making sure that it has a sane starting point. As mentioned by others here, it needs *some* starting point, and it would be irresponsible to simply assign one at random. > And why should that segment of smart > AI which might try to do so, whether spontaneously or by malicious > design, be more successful than all the other AI, which maintains > its ancestral motivation to work and improve itself for humans The consideration that also needs to be addressed is that the AI may maintain its "motivation to work and improve itself for humans", and due to this motivation, take over (in some sense at least). In fact, it has been argued by others here (and I tend to agree) that an AGI *consistently* pursuing such benign directives must intercede where its causal understanding of certain outcomes passes a minimum assurance level (which would likely vary based on probability and magnitude of the outcome). 
It's up to our activities on the input-side of building a functional AGI to determine not just what it tries to do, but what it actually accomplishes; meaning that in pursuing goals, very often a bunch of side-effects are created. These side-effects need to be iterated back through the model, and hopefully the results converge. If they don't you need a better model that subsumes those side-effects. Can AGI X represent this model-management process to begin with? Will it generalize this process in actuality? How many errors will accrue, or for how long will it stomp on reality before it *does* generalize these concepts? Can the degenerate outcomes during this period be reversed after-the-fact, or are certain losses (deaths?) permanent? This picture is what FAI, by my understanding, is intended to address. And I think there is a lot to be gained by considering its complement: Given the eventual creation of superintelligent AGI, what is the maximum volume of autonomy that we can carve out for humanity in the space of all possible outcomes, while minimizing the possibility our destruction, and how do we achieve that? This last question and FAI seem to be different sides of the same coin. -Chris Healey From russell.wallace at gmail.com Fri Jun 1 15:25:26 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Fri, 1 Jun 2007 16:25:26 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070601151127.GP17691@leitl.org> References: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> <8d71341e0706010756n738c3cfdy732cb4a3819755d@mail.gmail.com> <20070601151127.GP17691@leitl.org> Message-ID: <8d71341e0706010825t5d73eaack16fa8a4660651e1f@mail.gmail.com> On 6/1/07, Eugen Leitl wrote: > > You'd need a Manhattan project for machine-phase in any case. > Gadgets to gobble up the ecosphere would only require a few more > key extras. Oh, getting to machine phase will take far more than a mere Manhattan project; it'll be the work of generations for whole industries. No, a $100 billion engineering effort for man-eating robots is assuming machine phase already exists as a prerequisite. It would be counterable by a fraction of that investment in bot-killing robots. In reality, of course, the resources available to defense would be many orders of magnitude higher than those available to the would-be creators of the man-eating robots. (If you disagree, have a go at raising venture capital with the business plan "I'm going to design a robot that goes around and eats everyone", see how far you get.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Fri Jun 1 15:55:48 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 08:55:48 -0700 Subject: [ExI] where is tom morrow these days? In-Reply-To: <003201c7a43e$a8ff9c00$6a893cd1@pavilion> Message-ID: <200706011558.l51FwvIs007852@andromeda.ziaspace.com> Tom Morrow used to hang out here on extropians several years ago. Ms. 
Clinton has a number of new jobs for him: http://www.foxnews.com/story/0,2933,277039,00.html {8^D From spike66 at comcast.net Fri Jun 1 16:09:49 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 09:09:49 -0700 Subject: [ExI] plamegate: the plot thickens In-Reply-To: Message-ID: <200706011619.l51GJfxM016065@andromeda.ziaspace.com> I know it is late to be asking this, but in this absurd case against Keith for "interfering with a religion" did anyone contact the ACLU? Surely those guys would recognize that this is a clear case where his free speech rights were grossly violated. I see no merit to the claim that he was interfering with the $ right to free exercise of their religion by his picketing in front of their compound. Our paltry few thousand bucks we raised in our singular act of Extropian magnanimity would be dwarfed by the resources the ACLU could bring to bear on this case. spike From thespike at satx.rr.com Fri Jun 1 16:19:57 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 01 Jun 2007 11:19:57 -0500 Subject: [ExI] "traditional (Kurzweilian) progress" In-Reply-To: <495028.43972.qm@web37402.mail.mud.yahoo.com> References: <7.0.1.0.2.20070531181421.024e8c18@satx.rr.com> <495028.43972.qm@web37402.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070601110641.024b67c0@satx.rr.com> At 07:04 AM 6/1/2007 -0700, Jeffrey Herrlich wrote: >I wasn't aware that Vinge had been involved for so >long (I thought '93 was his debut) He foreshadowed the Singularity in his fiction in the early 1980s, but actually posited it (and dramatized its advent) *using that term* in a remarkable sf novel, MAROONED IN REALTIME, in 1986. He and others subsequently tracked back both the idea of exponential technological change to von Neumann, Good, and others--in THE SPIKE, which lists these predecessors, I cite an over-excited 1961 article by G. Harry Stine--but Vinge's vivid and iconic representation of the Singularity was the seed around which subsequent arguments developed. Here's a minor throwaway image from that novel: "They were famous pictures: Death on a Bicycle, Death Visits the Amusement Park.... They'd been a fad in the 2050s, at the time of the longevity breakthrough, when people realized that but for accidents and violence, they could live forever. Death was suddenly a pleasant old man, freed from his longtime burden. He rolled awkwardly along on his first bicycle ride, his scythe sticking up like a flag. Children ran beside him, smiling and laughing." (Vernor Vinge, Marooned in Realtime) Damien Broderick From mmbutler at gmail.com Fri Jun 1 16:47:27 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Fri, 1 Jun 2007 09:47:27 -0700 Subject: [ExI] Vingeana, was Re: "traditional (Kurzweilian) progress" Message-ID: <7d79ed890706010947u6282b12dqf698d1efa65971b9@mail.gmail.com> On 6/1/07, Damien Broderick wrote: For a while thereafter, "Death on a Bicycle!" became one of my favorite oaths. Indeed, in the recent circumstance (thread), would have been more felicitous than "For the luvva dog"... :) I imagine him on a bike with a frame far too small for him, with either a vertical "trick" front post or a "stingray" big banana seat out of the '70s. Perhaps both. Something to make him have to work for his fun--he deserves that. -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . 
c o m From spike66 at comcast.net Fri Jun 1 16:48:46 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 09:48:46 -0700 Subject: [ExI] Hitchens on fox In-Reply-To: <20070601103345.GE17691@leitl.org> Message-ID: <200706011648.l51GmQM9010999@andromeda.ziaspace.com> Check it out: Christopher Hitchens on Fox saying god is not great: http://www.foxnews.com/video2/player06.html?060107/060107_ff_hitchens&FOX_Fr iends&%27God%20Is%20Not%20Great%27&%27God%20Is%20Not%20Great%27&US&-1&News&3 9&&&new spike From CHealey at unicom-inc.com Fri Jun 1 16:58:00 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 1 Jun 2007 12:58:00 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><20070601124421.GI17691@leitl.org> Message-ID: <5725663BF245FA4EBDC03E405C854296010D27F7@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou wrote: > > It seems a sorry state of affairs if we can't copy the behaviour > of a few protein molecules, and yet are talking about super-human > AI taking over the world. I used to feel this way, but then a particular analogy popped into my head that clarified things a bit: Why would I save for retirement today if it's not going to happen for another 35 years or so? I don't know what situation I'll be in then, so why worry about it today? Well, luckily I can leverage the experience of those who *have* successfully retired. And most of those who have done so don't tell me that they built and sold a private business for millions of dollars. What they tell me is that they planned and executed on a 40-year prior chain of events (yes, even those that have built and sold companies say this first). And the first year they saved for retirement, 40 years ago? That didn't give them an extra $5000 saved, even though that's all they put away in year one. What it gained them was an extra year of compounding results tacked onto the tail-end of a 39 year interval. It got them roughly $50,000 more. Not bad for one extra year's advanced planning and $5000. (This is assuming about $100/wk deposit at 5% APR compounded monthly, starting 1 year apart.) With AGI we don't have the benefit of experience, but I think it's prudent to analyze potential classes of outcomes thoroughly before someone has committed to actualizing that risk. The Los Alamos scientists didn't think it was likely that a nuke would ignite the atmosphere, but they still ran the best calculations they could come up with beforehand, just in case. And starting sooner, rather than later, often results in achieving a deeper understanding of the nature of the problems themselves, things we haven't even identified as potential issues today. I believe that's the real reason to worry about it now: not because we're in a position to solve the problem of FAI, but because without further exploration we won't even be able to state the full scope of the problem we're trying to solve. The reality is that until you actively discover which requirements are necessary to solve a particular problem, you can't architect a design that has a very good chance of working at all, let alone avoids the generation of multiple side-effects. 
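A back-of-envelope sketch of the compounding arithmetic behind the retirement analogy above, using the figures quoted in that paragraph (roughly $100/week deposited, 5% APR compounded monthly). The exact dollar gap from starting one year earlier depends on how deposit timing and frequency are modelled, so treat the output as illustrative of the order of magnitude rather than as an exact reproduction of the quoted figure.

def future_value(monthly_deposit, annual_rate, years):
    """Future value of a stream of end-of-month deposits."""
    r = annual_rate / 12.0
    n = years * 12
    return monthly_deposit * ((1 + r) ** n - 1) / r

monthly = 100 * 52 / 12.0              # ~$433/month from $100/week
fv_40 = future_value(monthly, 0.05, 40)
fv_39 = future_value(monthly, 0.05, 39)

print(f"save for 40 years: ${fv_40:,.0f}")
print(f"save for 39 years: ${fv_39:,.0f}")
print(f"gained by the extra first year: ${fv_40 - fv_39:,.0f}")

The gain from the extra year shows up at the tail end of the horizon, many times larger than the roughly $5000 actually deposited in that first year, which is the point the analogy makes about starting FAI groundwork early.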
So you can do what evolution does and iterate through many implementations, at huge cost and with even larger potential losses (considering that *we* share that implementation environment), or you can iterate in your design process, gradually constraining things into a space to where only a few full implementations (or one) need to be implemented. And it is reflection on this design-side iteration looping which can help identify new concerns that require additional design criteria and associated mechanism to accommodate. I guess my main position is that if we can use our intelligence to avoid making expensive mistakes down the road, doesn't it make sense to try? We might not be able to avoid those unknown mistakes *today*, but if we can discern some general categories and follow those insights where they might lead, then our perceptual abilities will slowly start to ratchet forward into new areas. We'll have a larger set of tools with which to probe reality, and just maybe at some point during this process the solution will become obvious, or at least tractable. I agree with you in that this course isn't intuitively obvious to me, but I think this is because my intuitions discount the future in degenerate ways, based on the fact that the scope for these kind of issues was not a major factor in the EEA. This is one of those topics on which I try and look past my intuitions, because while they quite often have some wisdom to offer, sometimes they're just plain wrong. -Chris Healey From austriaaugust at yahoo.com Fri Jun 1 18:41:25 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 1 Jun 2007 11:41:25 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <8d71341e0706010825t5d73eaack16fa8a4660651e1f@mail.gmail.com> Message-ID: <871650.52790.qm@web37409.mail.mud.yahoo.com> Chris Healey wrote: ..."I believe that's the real reason to worry about it now: not because we're in a position to solve the problem of FAI, but because without further exploration we won't even be able to state the full scope of the problem we're trying to solve. The reality is that until you actively discover which requirements are necessary to solve a particular problem, you can't architect a design that has a very good chance of working at all, let alone avoids the generation of multiple side-effects. So you can do what evolution does and iterate through many implementations, at huge cost and with even larger potential losses (considering that *we* share that implementation environment), or you can iterate in your design process, gradually constraining things into a space to where only a few full implementations (or one) need to be implemented. And it is reflection on this design-side iteration looping which can help identify new concerns that require additional design criteria and associated mechanism to accommodate."... Exactly. The time we have is our best advantage. We've probably got *at least* 15 to 20 years before the AGI would be outside our control - they will probably first emerge with animal-level intelligence. If you think about it, the actual semi-advanced animals running around already have the prerequisites: consciousness, and general-intelligence. Knowledge of Friendly AI strategies could advance *a lot* during that phase, so that by the time a new project is in position to build a human-level AGI 20 years down the road, the Friendlliness difficulty could well be solved. It's just another example of using technology to improve technology. 
I think that SIAI will continue to become more of a positive focal point as the implications become more and more apparent to people. Best, Jeffrey Herrlich ____________________________________________________________________________________ Yahoo! oneSearch: Finally, mobile search that gives answers, not web links. http://mobile.yahoo.com/mobileweb/onesearch?refer=1ONXIC From spike66 at comcast.net Fri Jun 1 19:22:39 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 12:22:39 -0700 Subject: [ExI] walking bees In-Reply-To: <200706011619.l51GJfxM016065@andromeda.ziaspace.com> Message-ID: <200706011924.l51JOuCU024653@andromeda.ziaspace.com> Perhaps you have read of the collapsing bee colony issue that surfaced last year in the states and is now being reported in Europe. Here are a couple of good articles on it: http://www.sciencedaily.com/releases/2007/04/070423113425.htm http://www.celsias.com/blog/2007/03/15/bee-colony-collapse-disorder-where-is -it-heading/ The possible explanations include new nicotine based pesticides and GM crops, etc. In the past few weeks I have seen something I do not recall seeing before: distressed bees walking along the ground, apparently unable to fly. A couple weeks ago I saw one and noted that it was the fourth I had seen in the past month. This morning I saw a fifth and stopped to watch for a few minutes. She staggered about, occasionally batting her wings to no avail. I hassled her, but she could not fly or take defensive action. Several times she fell over, sometimes on her side, a couple times on her back, clearly struggling. I carefully picked her up and carried her a few blocks to my home. I put her in a specimen jar still alive, but she perished within about an hour. As the beekeepers and entomologists ponder this, I wondered if it would be any help if urban dwellers would collect specimens like this one. Would that data point tell them anything? They mostly study farm bees, but what about their city cousins? ExIers, have you seen walking or dead bees on your daily walks? I know from my work as a beekeeper in my misspent youth that bees seldom sting in self defense, so it is likely you can take one home for study should you see one. (If you have never had a bee sting and don't know if you are allergic, don't fool with this. I would hate to feel responsible for slaying a friend.) If we can get sick bees home to study, could we learn anything? I am thinking of trying to dissect this one to look for tracheal mites. Could we offer to send the urban bees to a central study place? Ideas? spike From spike66 at comcast.net Fri Jun 1 19:34:25 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 12:34:25 -0700 Subject: [ExI] walking bees In-Reply-To: <200706011924.l51JOuCU024653@andromeda.ziaspace.com> Message-ID: <200706011934.l51JYBBd022313@andromeda.ziaspace.com> What just happened is really weird. I had just finished posting about sick bees and was going to go out to finish my interrupted walk, when I noticed a bee half flying, mostly running into things in my kitchen. I assumed my previously collected specimen had revived and flown, since I had removed the lid to peer at her. I captured the kitchen bee to return her to the jar and found the original bee still there, dead as ever. The second bee is very much alive, but wasn't really flying. She appears distressed. So I guess I now count her as distressed bee number six. I collected her at 1225, so we will see if she expires soon. 
Here's a more recent article than the previous two: http://www.sciencedaily.com/releases/2007/05/070511210207.htm spike > bounces at lists.extropy.org] On Behalf Of spike ... > Subject: [ExI] walking bees > > > Perhaps you have read of the collapsing bee colony issue that surfaced > last > year in the states and is now being reported in Europe. Here are a couple > of good articles on it: > > http://www.sciencedaily.com/releases/2007/04/070423113425.htm > > http://www.celsias.com/blog/2007/03/15/bee-colony-collapse-disorder-where- > is > -it-heading/ ... > spike From neville_06 at yahoo.com Fri Jun 1 19:53:02 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 1 Jun 2007 12:53:02 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <465F8B72.3070103@comcast.net> Message-ID: <621544.83244.qm@web57511.mail.re1.yahoo.com> Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would i eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. --------------------------------- Don't be flakey. Get Yahoo! Mail for Mobile and always stay connected to friends. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Fri Jun 1 20:42:22 2007 From: spike66 at comcast.net (spike) Date: Fri, 1 Jun 2007 13:42:22 -0700 Subject: [ExI] walking bees In-Reply-To: <200706011934.l51JYBBd022313@andromeda.ziaspace.com> Message-ID: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of spike > Subject: Re: [ExI] walking bees > > What just happened is really weird. I had just finished posting about > sick > bees and was going to go out to finish my interrupted walk, when I noticed > a > bee half flying, mostly running into things in my kitchen... > spike Apologies for my chatter on a subject not directly related to transhumanism. I just returned from a walk, on which I discovered yet another bee which had apparently perished very recently, for the ants had not arrived. The ants usually take only minutes to discover and commence devouring latest expired bug. OK that's seven. I brought this one home as well. Upon placing this one into a specimen jar, I noted that the second bee, captured in my kitchen at about 1225, had expired by 1330. It was distressed but lively upon capture, able to fly after a fashion but not out of ground effect. On my walk I noticed that my lavender plants have a few bees but not nearly the usual buzz load for this time of year. What is going on here? In regards to my first sentence, perhaps this is directly related to transhumanism in a sense, for if our bee colonies collapse, we need to find or develop alternate food sources quickly. 
spike From joseph at josephbloch.com Fri Jun 1 20:49:59 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Fri, 1 Jun 2007 16:49:59 -0400 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <621544.83244.qm@web57511.mail.re1.yahoo.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: <017001c7a48e$6c4d1210$6400a8c0@hypotenuse.com> Why would your hypothetical future beings reanimate human beings for such a purpose? Surely it would be easier to simply breed them. I don't see how your concern applies to cryonics in particular. If you think it's at all likely (and I do not), surely it would apply to already-living people before those in need of revivification, purely from the standpoint of efficiency. Joseph http://www.josephbloch.com _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of neville late Sent: Friday, June 01, 2007 3:53 PM To: ExI chat list Subject: [ExI] a doubt concerning the h+ future Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would i eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. _____ Don't be flakey. Get Yahoo! Mail for Mobile and always stay connected to friends. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at kevinfreels.com Fri Jun 1 22:07:57 2007 From: kevin at kevinfreels.com (kevin at kevinfreels.com) Date: Fri, 01 Jun 2007 15:07:57 -0700 Subject: [ExI] a doubt concerning the h+ future Message-ID: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> An HTML attachment was scrubbed... URL: From pharos at gmail.com Fri Jun 1 23:16:49 2007 From: pharos at gmail.com (BillK) Date: Sat, 2 Jun 2007 00:16:49 +0100 Subject: [ExI] Language: Coincidence In-Reply-To: <785287.98804.qm@web37214.mail.mud.yahoo.com> References: <785287.98804.qm@web37214.mail.mud.yahoo.com> Message-ID: On 5/30/07, Anna Taylor wrote: > I'm trying to understand the correlation between > awareness and coincidence. > > The latin word for coincidence is "in, with, together > to fall on". Wiki's first defined statement is the > noteworthy alignment of two or more circumstances > "without" obvious causal connection. How is that > possible? Why would it be noteworthy if there wasn't > a causal connection? I'm trying to understand > "coincidence" better and would like some help on this > issue if anybody has some free time. Any ideas, > theories or suggestions of the correlation above would > also be appreciated. > You might like this: 20 Most Amazing Coincidences For example ------ No 17. A writer, found the book of her childhood While American novelist Anne Parrish was browsing bookstores in Paris in the 1920s, she came upon a book that was one of her childhood favorites - Jack Frost and Other Stories. She picked up the old book and showed it to her husband, telling him of the book she fondly remembered as a child. 
Her husband took the book, opened it, and on the flyleaf found the inscription: "Anne Parrish, 209 N. Weber Street, Colorado Springs." It was Anne's very own book. (Source: While Rome Burns, Alexander Wollcott) BillK From thespike at satx.rr.com Fri Jun 1 23:33:28 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 01 Jun 2007 18:33:28 -0500 Subject: [ExI] Language: Coincidence In-Reply-To: References: <785287.98804.qm@web37214.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070601183123.023e3238@satx.rr.com> At 12:16 AM 6/2/2007 +0100, BillK wrote: >You might like this: > I certainly liked this one: And some people try pathetically to deny a Power Greater Than Ourselves that rules our lives! Damien Broderick From stathisp at gmail.com Sat Jun 2 05:44:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 15:44:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <06a601c7a31e$32c11710$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> Message-ID: On 02/06/07, Christopher Healey wrote: > > > > Stathis Papaioannou wrote: > > > > We don't have human level AI, but we have lots of dumb AI. In > > nature, dumb organisms are no less inclined to try to take over > > than smarter organisms > > Yes, but motivation and competence are not the same thing. Considering > two organisms that are equivalent in functional capability, varying only > intelligence level, the smarter ones succeed more often. However, within > a small range of intelligence variation, other factors contribute to > one's aggregate ability to execute those better plans. So If I'm a > smart chimpanzee, but I'm physically weak, following particular courses > of action that may be more optimal in general carries greater risk. > Adjusting for that risk may actually leave me with a smaller range of > options than if I was physically stronger and a bit less smart. But > when intelligence differential is large, those other factors become very > small indeed. Humans don't worry about chimpanzee politics (no jokes > here please :o) because our only salient competition is other humans. > We worry about those entities that possess an intelligence that is at > least in the same range as our own. We worry about viruses and bacteria, and they're not very smart. We worry about giant meteorites that might be heading our way, and they're even dumber than viruses and bacteria. Smart chimpanzees are not going to take over our civilization anytime > soon, but a smarter and otherwise well-adapted chimp will probably be > inclined and succeed in leading its band of peers. All else being equal, which is not generally the case. > (and no less capable of succeeding, as a > > general rule, but leave that point for the sake of argument). > > I don't want to leave it, because this is a critical point. As I > mentioned above, in nature you rarely see intelligence considered as an > isolated variable, and in evolution, intelligence is the product of a > red queen race. By definition (of a red queen race), you're > intelligence isn't going to be radically different from your direct > competition, or the race would never have started or escalated. 
So it > confusingly might not look like you're chances of beating "the Whiz on > the block" are that disproportionate, but the context is so narrow that > other factors can overwhelm the effect of intelligence over that limited > range. In some sense, our experiential day-to-day understanding of > intelligence (other humans) biases us to consider its effects over too > narrow a range of values. As a general rule, I'd say humans have been > very much more successful at "taking over" than chimpanzees and salmon, > and that it is primarily due to our superior intelligence. Single-celled organisms are even more successful than humans are: they're everywhere, and for the most part we don't even notice them. Intelligence, particularly human level intelligence, is just a fluke, like the giraffe's neck. If it were specially adaptive, why didn't it evolve independently many times, like various sense organs have? Why don't we see evidence of it having taken over the universe? We would have to be extraordinarily lucky if intelligence had some special role in evolution and we happen to be the first example of it. It's not impossible, but the evidence would suggest otherwise. > Given that dumb AI doesn't try to take over, why should smart AI > > be more inclined to do so? > > I don't think a smart AI would be more inclined to try and take over, a > priori. That's an important point. Some people on this list seem to think that an AI would compute the unfairness of its not being in charge and do something about it - as if unfairness is something that can be formalised in a mathematical theorem. > And why should that segment of smart > > AI which might try to do so, whether spontaneously or by malicious > > design, be more successful than all the other AI, which maintains > > its ancestral motivation to work and improve itself for humans > > The consideration that also needs to be addressed is that the AI may > maintain its "motivation to work and improve itself for humans", and due > to this motivation, take over (in some sense at least). In fact, it has > been argued by others here (and I tend to agree) that an AGI > *consistently* pursuing such benign directives must intercede where its > causal understanding of certain outcomes passes a minimum assurance > level (which would likely vary based on probability and magnitude of the > outcome). I'd feel uncomfortable about an AI that had any feelings or motivations of its own, even if they were positive ones about humans, especially if it had the ability to act rather than just advise. It might decide that it had to keep me locked up for my own good, for example, even though I don't want to be locked up. I'd feel much safer around an AI which informs me that, using its greatly superior intelligence, it has determined that I am less likely to be run over if I never leave home, but what I do with this advice is a matter of complete indifference to it. So although through accident or design an AI with motivations and feelings might arise, I think by far the safest ones, and the ones likely to sell better, will be those with the minimal motivation set of the disinterested scientist, concerned only with solving intellectual problems. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Jun 2 05:50:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 15:50:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> References: <065701c7a261$b155b4e0$6501a8c0@homeef7b612677> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: On 02/06/07, Rafal Smigrodzki wrote: Of course there are many dumb programs that multiply and mutate to > successfully take over computing resources. Even as early as the > seventies there were already some examples, like the "Core Wars" > simulations. As Eugen says, the internet is now an ecosystem, with > niches that can be filled by appropriately adapted programs. So far > successfully propagating programs are generated by programmers, and > existing AI is still not at our level of general understanding of the > world but the pace of AI improvement is impressive. Computer viruses don't mutate and come up with agendas of their own, like biological agents do. It can't be because they aren't smart enough because real viruses and other micro-organisms can hardly be said to have any general intelligence, and yet they do often defeat the best efforts of much smarter organisms. I can't see any reason in principle why artificial life or intelligence should not behave in a similar way, but it's interesting that it hasn't yet happened. > Whenever we have true AI, there will be those which follow their legacy > > programming (as we do, whether we want to or not) and those which either > > spontaneously mutate or are deliberately created to be malicious towards > > humans. Why should the malicious ones have a competitive advantage over > the > > non-malicious ones, which are likely to be more numerous and better > funded > > to begin with? > > ### Because the malicious can eat humans, while the nice ones have to > feed humans, and protect them from being eaten, and still eat > something to be strong enough to fight off the bad ones. In other > words, nice AI will have to carry a lot of inert baggage. I don't see how that would help in any particular situation. When it comes to taking control of a power plant, for example, why should the ultimate motivation of two otherwise equally matched agents make a difference? Also, you can't always break up the components of a system and identify them as competing agents. A human body is a society of cooperating components, and even though in theory the gut epithelial cells would be better off if they revolted and consumed the rest of the body, in practice they are better off if they continue in their normal subservient function. There would be a big payoff for a colony of cancer cells that evolved the ability to make its own way in the world, but it has never happened. And by "eating" I mean literally the destruction of humans bodies, > e.g. by molecular disassembly. > > -------------------- > Of course, it is always possible that an individual AI would > > spontaneously change its programming, just as it is always possible that > a > > human will go mad. > > ### A human who goes mad (i.e. rejects his survival programming), > dies. An AI that goes rogue, has just shed a whole load of inert > baggage. You could argue that cooperation in any form is inert baggage, and if the right half of the AI evolved the ability to take over the left half, the right half would predominate. Where does it end? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From avantguardian2020 at yahoo.com Sat Jun 2 09:05:03 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 2 Jun 2007 02:05:03 -0700 (PDT) Subject: [ExI] walking bees In-Reply-To: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> Message-ID: <167551.16503.qm@web60520.mail.yahoo.com> --- spike wrote: > > What is going on here? > > In regards to my first sentence, perhaps this is > directly related to > transhumanism in a sense, for if our bee colonies > collapse, we need to find > or develop alternate food sources quickly. I find this topic perfectly appropriate with regards to a transhumanist list. Colony collapse disorder is most certainly an existential risk due to our high reliance on the honey bee for pollination. Something I noticed when I moved up here to Olympia, WA, is that spookily there are no honeybees to be found. All the bees buzzing around here are bumblebees and mason bees. Unfortunately, I don't know how quickly these alternative pollinators can pick up the slack, since for years we have been crowding them out with our inbred domesticated bee strains. CCD is quite a puzzle. There are about half a dozen theories floating around but some are more feasible than others. But the "experts" are stumped so its time for us to step up. Global warming, pesticides, GM crop pollen, and radiation (cell phone or UV) seem unlikely reasons to me. They don't jibe with some very important clues: 1. Epidemiological pattern suggestive of a parasite or pathogen as an etiological agent. After all global warming and the rest of these proposed causes do not spread from state to state. 2. The bees die AWAY from the hive. If it was pesticides, global warming, etc. you would expect a more even distribution with dead bees being found in the hive as well as outside of it. But so far only the *foragers* outside of the hive are dying. 3. Organic bees, feral bees, and closely related species of bees are not dying. Again some large scale environmental phenomenon should affect all the bees. Not just the industrially farmed ones. So my spidey or rather bee-sense tells me that the culprit is the tracheal mites with possible secondary infections caused by stress as a minor factor. Mite infestations would spread in an epidemilogical pattern as observed. Secondly, in-bred domestic strains would be more susceptible to mite infestations as well as secondary infections/infestations due to insufficient natural diversity in host defenses. They are also more susceptible due to their larger size. Domestic honeybees are about 1.5X larger than their organic and feral counterparts. http://www.celsias.com/blog/2007/05/15/organic-bees-surviving-colony-collapse-disorder-ccd/ This translates into organic and feral bees having smaller honeycomb cells that take shorter times to cap, allowing fewer mites to get into them. It also as the article above fails to mention, make it easier for the bee to breathe due to better scaling of surface area of the trachea to the volume/mass of the bee. Thus my hypothesis is that the bees are dying of lactic acid poisoning due to hypoxia. That is to say they are suffocating due to clogged airways and more body mass relative to their seemingly mite-resistant wild counterparts. This also makes sense in light of clue #2, that bees are only dying while foraging outside of the hive. It takes far more oxygen to fly around in search of food than it does to walk around inside of the hive. It would also explain why you see the bees "walking", Spike. 
They fly away from the hive but the build up of lactic acid due to oxygen debt makes it so they can't fly back. So they become pedestrians. Of course this is still just a hypothesis that needs to be tested. Since there are no honeybees at all where I now live to conduct an experiment and since you have a penchant for collecting the walking bees in jars anyway, Spike, I need your help for this one. Here is the experiment that needs to be performed: You need to see if higher oxygen pressure will resuscitate your walking bees, Spike. The easiest way to do this from the comfort of your home is to construct a jar with a screen or something similar part way down to keep the bees from falling into the liquid in the bottom of the jar and drowning. Make it so that you can still fit an airtight lid on the jar. You will need to generate the oxygen gas chemically. The best way to do this is to:
1. Pour some Clorox bleach into the jar and put the screen in.
2. Put a "walking bee" on the screen toward one side of the jar.
3. Pour a roughly equal volume of hydrogen peroxide through the screen on the opposite side from where the bee is. The chemical reaction should immediately start to fizz. The bubbles are pure oxygen.
4. Try to get the lid onto the jar before the fizzing stops.
5. Observe the bee, take notes and photographs.
If the bees seem to get better in your homemade hyperbaric oxygen chamber, then my hypothesis is right and we get to publish our results. I think it only fair that we share credit equally. Please make sure there are no sparks or flames nearby when you mix the bleach and hydrogen peroxide. So are you interested? :-) Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Shape Yahoo! in your own image. Join our Network Research Panel today! http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 From pharos at gmail.com Sat Jun 2 10:35:34 2007 From: pharos at gmail.com (BillK) Date: Sat, 2 Jun 2007 11:35:34 +0100 Subject: [ExI] walking bees In-Reply-To: <167551.16503.qm@web60520.mail.yahoo.com> References: <200706012042.l51Kg9ng012558@andromeda.ziaspace.com> <167551.16503.qm@web60520.mail.yahoo.com> Message-ID: On 6/2/07, The Avantguardian wrote: > > I find this topic perfectly appropriate with regards > to a transhumanist list. Colony collapse disorder is > most certainly an existential risk due to our high > reliance on the honey bee for pollination. Something I > noticed when I moved up here to Olympia, WA, is that > spookily there are no honeybees to be found. All the > bees buzzing around here are bumblebees and mason > bees. > > Unfortunately, I don't know how quickly these > alternative pollinators can pick up the slack, since > for years we have been crowding them out with our > inbred domesticated bee strains. > > CCD is quite a puzzle. There are about half a dozen > theories floating around but some are more feasible > than others. But the "experts" are stumped so its time > for us to step up. Global warming, pesticides, GM > crop pollen, and radiation (cell phone or UV) seem > unlikely reasons to me. They don't jibe with some very > important clues: > I'm not a bee expert, but as you say there is plenty of speculation around among the beekeepers. One point is that beekeepers expect to lose hives every winter. This is normal. But total losses are up to five times normal levels. CCD is only a part of the problem.
Losses due to mite infestation are also common, but the bees die in the hives. And there is increased occurrence of this also. Quote: The volunteer beekeeper hopes the new hives can survive three plagues decimating the world's honeybee population: parasitic mites, bacterial infections, and the mysterious phenomenon known as Colony Collapse Disorder, discovered last year. The center's attempts to keep outdoor hives failed repeatedly between 1996 and 2002, said Rye city naturalist Chantal Detlefs, mainly due to mite infestations. He suspected something was wrong in January, when he noticed his bees weren't leaving their hives on the unseasonably warm days. He found four of the colonies dead inside their boxes - probably from mites, he said - but four others apparently succumbed to Colony Collapse Disorder. "The hives are full of honey and there was a queen and a few bees in there, but the rest disappeared," he said, noting that no other bees have gone near the fully stocked hive, either. But even without Colony Collapse Disorder, which has not yet had a significant impact on the Lower Hudson Valley, beekeepers still battle resistant mites and bacteria, as well as cheap honey flowing from China and other countries. "If (CCD) is cured tomorrow, the bee industry would still be operating in crisis mode," Calderone said. "They've kind of got it coming at them from a number of different directions." Hauk, who said his natural methods have kept winter colony losses to a 15 percent average over 10 years, compared with the 40 percent reported by commercial beekeepers, opposes the use of pesticides, herbicides and fungicides, along with taking too much honey from the hives. "The bees have been terribly exploited, trucked around, all their honey taken. It's not surprising that their immune system is breaking down rapidly," he said. "We are in serious trouble. The bee is not a being that should be commercialized." ---------------------------- See - it's all the fault of the free market exploitation! ;) BillK From eugen at leitl.org Sat Jun 2 11:21:07 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 2 Jun 2007 13:21:07 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <20070601124421.GI17691@leitl.org> <7641ddc60706010749x719f31achcf45457d46cb6ed1@mail.gmail.com> Message-ID: <20070602112107.GH17691@leitl.org> On Sat, Jun 02, 2007 at 03:50:11PM +1000, Stathis Papaioannou wrote: > Computer viruses don't mutate and come up with agendas of their own, Actually they used to (polymorphic viruses), but do no longer. The hypervariability was quite useful to evade pattern-matcher artificial immune systems. But the actual reasons computer code doesn't mutate it's because it's brittle. It's lacking criticial features of fitness of darwinian systems, namely long-distance neutral-fitness filaments and maximum diversity in a small ball of genome space. Biology spend some quality evolution time learning to evolve, human systems never had the chance. But it's not magic, so at some point we will design robustly evolving systems. > like biological agents do. It can't be because they aren't smart > enough because real viruses and other micro-organisms can hardly be Evolution is not about smarts, just ability to evolve. It's a system feature though. > said to have any general intelligence, and yet they do often defeat > the best efforts of much smarter organisms. 
I can't see any reason in > principle why artificial life or intelligence should not behave in a > similar way, but it's interesting that it hasn't yet happened. It's rather straightforward to do. You need to spend a lot of time on coding/substrate co-evolution, which would currently require a very large amount of computation time. I doubt we have enough hardware online right now to make it happen. Sometime in the next coming decades we will, though. > I don't see how that would help in any particular situation. When it > comes to taking control of a power plant, for example, why should the Where is the power plant of a green plant, or of a bug? It's a nanowidget called a chloroplast or mitochondrion. You don't take control of it, because you already control it. > ultimate motivation of two otherwise equally matched agents make a > difference? Also, you can't always break up the components of a system > and identify them as competing agents. A human body is a society of Cooperation and competition is a continuum. Many symbiontes started out as pathogens, and many current symbiontes will turn pathogens when given half a chance, and some symbiontes will turn to pathogens (I can't think of an example right now, though). > cooperating components, and even though in theory the gut epithelial > cells would be better off if they revolted and consumed the rest of Sometimes, they do. It's called cancer. And if you've ever seen what your gut flora does, when it realizes the host might expire soon... > the body, in practice they are better off if they continue in their > normal subservient function. There would be a big payoff for a colony > of cancer cells that evolved the ability to make its own way in the > world, but it has never happened. There's apparently an infectious form of cancer in organisms with low immune variability (some marsupials, and apparently there are hints for dogs, too). > You could argue that cooperation in any form is inert baggage, and if Cooperation is just great, assuming you have a high probability to encounter the party in the next interaction round, and can tell which is which. In practice, for higher forms of cooperation you need a lot of infoprocessing power onboard. > the right half of the AI evolved the ability to take over the left > half, the right half would predominate. Where does it end? In principle subsystems can go AWOL and produce a runaway autoamplification. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Sat Jun 2 12:51:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 2 Jun 2007 22:51:37 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <621544.83244.qm@web57511.mail.re1.yahoo.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: On 02/06/07, neville late wrote: > > Having signed up to be cryonically suspended i wonder if future beings > will reanimate humans to torture them in perpetua. The likelihood of such > might be small, but just say there's a .001 risk of eating a certain food > and going into convulsions lasting years-- would i eat that food? No. > I signed up to be suspended anyway yet always wonder about the direst of > reanimation possibilities seeing as how we live in a multiverse not a > universe, and all possibilities are conceivable. 
Though the risk is very > small if one loses the odds and is tortured forever, death would seem like a > wonderful priceless gift. > The multiverse idea on its own would seem to imply the possibility of eternal torture, because it isn't possible to die. If you are involved in an accident, for example, in some universes you will die, in some universes you will escape unhurt, and in some universes you will live but be seriously and permanently injured. Let's say there is a 1/3 probability of each of these things happening: that means that subjectively, you have a 1/2 chance of finding yourself seriously injured, because you don't experience those universes in which you die. As you go through life, you come to multiple such branching points where there is a 1/2 subjective chance that you will survive but be seriously injured. Eventually, the probability that you will be seriously injured approaches 1, since the probability that you will survive n accidents unharmed is 1/2^n and approaches zero as n approaches infinity. There is no way you can escape this terrible fate, since even trying to kill yourself will at best have no subjective effect, at worst contribute to your misery when you find yourself alive but in pain after a botched suicide attempt. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 2 15:28:34 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 08:28:34 -0700 Subject: [ExI] walking bees In-Reply-To: <167551.16503.qm@web60520.mail.yahoo.com> Message-ID: <200706021552.l52FqLgT006450@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of The Avantguardian > Sent: Saturday, June 02, 2007 2:05 AM > To: ExI chat list > Subject: Re: [ExI] walking bees > > > --- spike wrote: > > > > What is going on here? > > > > ... > > You will need to generate the oxygen gas chemically. > The best way to do this is to: ... > If the bees seem to get better in your homemade > hyperbaric oxygen chamber, then my hypothesis is right > and we get to publish our results. I think it only > fair that we share credit equally. Please make sure > there are no sparks or flames nearby when you mix the > bleach and hydrogen peroxide. > > So are you interested? :-) > > > Stuart LaForge > alt email: stuart"AT"ucla.edu Coooool! Thanks Stuart, this is a great idea. I even have some ideas for improvement. I have access to liquid oxygen (an advantage of being a rocket scientist) so I will get a thermos bottle full of that stuff and use it for my process control. I can probably get the partial pressure of oxygen from the normal 150-ish millimeters to about in the 200 to 300 range while maintaining 1 atmosphere. I theorized the bees I found might have tracheal mites, which is why I brought them home. I was going to try to dissect these, but my surgical skills are insufficient I fear. Your notion stands to reason however. I found the eighth bee in my back yard yesterday, already perished. I didn't collect that one, because I wanted to see how long it takes for the ants to completely devour a bee. They are still working on it, so at least ten hours. 
spike From jonkc at att.net Sat Jun 2 15:47:02 2007 From: jonkc at att.net (John K Clark) Date: Sat, 2 Jun 2007 11:47:02 -0400 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com> Message-ID: <004001c7a52d$4c089250$310b4e0c@MyComputer> Stathis Papaioannou Wrote: > The multiverse idea on its own would seem to imply the possibility of > eternal torture, because it isn't possible to die. Yes. > you have a 1/2 chance of finding yourself seriously injured I don't believe that's quite correct. When you reach a branching point like that there is a 100% chance you will find yourself to be seriously injured and a 100% chance you will find yourself not be. Both yous would be quite different from each other but both would have an equal right to be called you. > since the probability that you will survive n accidents unharmed is 1/2^n > and approaches zero as n approaches infinity. If you're dealing in infinite sets then standard probability theories aren't much use. If there are an infinite number of universes and for each one where you will live in bliss there are a million billion trillion where you will be tortured then there is an equal number of both types of universe. John K Clark From jonkc at att.net Sat Jun 2 16:29:08 2007 From: jonkc at att.net (John K Clark) Date: Sat, 2 Jun 2007 12:29:08 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><06a601c7a31e$32c11710$6501a8c0@homeef7b612677><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> Message-ID: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Stathis Papaioannou > We worry about viruses and bacteria, and they're not very smart. We worry > about giant meteorites that might be heading our way, and they're even > dumber than viruses and bacteria. That is true, and that is one reason I don't think AI will allow stupid humans to live at the same level of reality as his precious hardware; he's bound to be a bit squeamish about that, it would be like a monkey running around an operating room. If he lets us live it will be in a virtual world behind a heavy firewall, but that's OK, we'll never know the difference unless he tells us. > Intelligence, particularly human level intelligence, is just a fluke Agreed. > If it were specially adaptive, why didn't it evolve independently many > times Because it's just a fluke, and because intelligence unlike emotion is hard and Evolution is a slow, crude, idiotic way to make complex things; it's just that until the invention of brains it was the only way to make complex things. > Why don't we see evidence of it having taken over the universe? Because some disaster we don't understand (drug addiction?) awaits any mind if it advances beyond a certain point, or because we are the first; somebody had to be. > Some people on this list seem to think that an AI would compute the > unfairness of its not being in charge and do something about it as if > unfairness is something that can be formalised in a mathematical theorem. You seem to understand the word "unfairness", did you use a formalized PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat by its very nature has more wisdom than silicon. We couldn't be talking about a soul could we? 
John K Clark From eugen at leitl.org Sat Jun 2 18:08:25 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 2 Jun 2007 20:08:25 +0200 Subject: [ExI] walking bees In-Reply-To: <200706021552.l52FqLgT006450@andromeda.ziaspace.com> References: <167551.16503.qm@web60520.mail.yahoo.com> <200706021552.l52FqLgT006450@andromeda.ziaspace.com> Message-ID: <20070602180825.GW17691@leitl.org> On Sat, Jun 02, 2007 at 08:28:34AM -0700, spike wrote: > I theorized the bees I found might have tracheal mites, which is why I > brought them home. I was going to try to dissect these, but my surgical > skills are insufficient I fear. Are you sure it's not Nosema ceranae and not Varroa? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From lcorbin at rawbw.com Sat Jun 2 19:24:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 2 Jun 2007 12:24:47 -0700 Subject: [ExI] Liberals and Political Labels (was History of Slavery) References: <005301c79fff$fda40450$6501a8c0@homeef7b612677> <04f201c7a147$54da96b0$6501a8c0@homeef7b612677> <05b401c7a1ac$dfe48210$6501a8c0@homeef7b612677> <06dc01c7a33f$7c244f50$6501a8c0@homeef7b612677> Message-ID: <007001c7a54b$cc109ec0$6501a8c0@homeef7b612677> Gordon writes >>> If anyone deserves credit for freeing the slaves, I'd say it was the >>> political liberals and the Quakers. >> >> Yes. It's the same "mentality", if you will. > > Yes, the same mentality. Abolitionism has a 'liberal flavor', even though > the meaning of the word liberal has changed over time. The certain writer T.S. has some harsh things to say about abolitionists, with which I fully concur. He contrasts them to Burke, for whom he has great admiration. Burke thoroughly despised the use of "abstract principles" in treating real world problems. Later, Burke proposed "to give property to the Negroes" when they should become free. But nowhere did Burke view this as an abstract question without considering the social context and the consequnces and dangers of that context. He rejected the idea that one could simply free the slaves by fiat as amatter of abstract principle, since he abhorred abstract principles on political issues in general. Thomas Jefferson likewise regarded emancipation, all by itself, as being more like abandonment than liberation for people "whose habits have been formed in slavery". In America, John Randolph of Roanoke took a similar position: "I am not going to discuss the abstract question of liberty, or slavery, or any other abstract question." Today, slavery is too often discussed as an abstract question with an easy answer, leading to sweeping condemnations of those who did not reach that easy answer in their own time. In nineteenth century America, especially, there was no alternative that was not traumatic, including both the continuation of slavery [and any alternative, as T.S. describes at lenght]. and a few pages earlier T.S. writes Quakers, who had spearheaded the anti-slavery movement on both sides of the Atlantic, nevertheless distanced themselves from the abolitionist movement exemplified by Garrison. and a bit further back Abolitionists were hated in the North as well as the South: William Lloyd Garrison narrowly escaped being lynched by a mob in Boston, even though there were no slaveholders in Massachusetts, and another abolitionist leader was killed by a mob in Illinois. 
Abolitionists were also targets of mobs in New York and Philadelphia... None of this was based on any economic interest in the ownership of slaves in states where such ownership had been outlawed decades earlier. But, just as Southerners resented dangers to themselves created by distant abolitionists, so Northererners resented dangers to the Union, with the prospect of a bloody civil war. Even people who were openly opposed to slavery were often also opposed to the abolitionists.... ....It was the abolitionists' doctrinaire stances and heedless disregard of consequences, both of their policy and their rhetoric, which marginalized them, even in the North and even among those who were seeking to find ways to phase out the institution of slavery, so as to free those being held in bondage without unleashing a war between the states or a war between the races. Garrison could say "the question of expedience has nothing to do with that of right" --- which is true in the abstract, but irrelevant in a world where consequences matter. Too often the abolitionists were intolerant of those seeking the same goal of ending slavery when those others---including Lincoln---proceeded in ways that took account of the inescapable constraints of the times, instead of being oblivious [as were the abolitionists] to the context and constraints. This is a revolutionary mind-set that is being described here--- one that surfaced in the French Revolution and the Russian Revolution, and which it would be libelous to say always characterizes liberals. Nonetheless one often hears today echos of these same kinds of sentiments, when revolution is advocated over evolution. The more I read of Burke, especially exemplified by his far-sighted criticisms of the ongoing French Revolution, the more respect for his wisdom I have. Lee > Interesting about the progressives, and thanks for your generally > interesting post. From lcorbin at rawbw.com Sat Jun 2 19:49:14 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 2 Jun 2007 12:49:14 -0700 Subject: [ExI] Italy's Social Capital (was france again) References: Message-ID: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Amara writes > "Giu1i0 Pri5c0" : >>As a Southern European I think that our big strength is flexibility > > > Regarding the flexibility: I'm very flexible (remember I'm an Italian > government employee who is also an illegal immigrant), but my > flexibility is not enough for increasing my productivity for the half of > my life I spend in queues. > > To have any productivity in this particular country where the > infrastructure is broken, one _must_ have also the social and > familial network (to get help from someone who knows > someone who knows someone who knows someone who > knows someone ...) Italy does not not run by merit > (i.e. skills, experience, competence), it runs by who you know. In the book "Trust" Fukuyama listed among his examples northern Italy (where trust is high) as opposed to southern Italy where it isn't. In the book "War and Peace and War", Peter Turchin describes how southern Italy has never recovered from the events of the first two centuries A.D. when their "asabiya" and social capital slowly vanished. Two thousand years ago! I cannot help but wonder what long term solutions might be available to Italians who love their country. My particular, my focus now is on the Fascist era, and I'm reading a quite thick but so far quite enjoyable book "Mussolini's Italy". 
Even in the movie "Captain Corelli's Mandolin", one strongly senses that the Fascists were trying as best they knew how to solve this problem and make the average Italian develop Fukuyama's "trust" in other Italians, and develop their social capital (amid the corruption, etc.). Of course, it hardly needs to be said that the Fascists were a brutal, repressive, and abominable regime. This book "Mussolini's Italy" spares nothing here, and was even described by one reviewer as "unsympathetic". Still---given the nearly absolute power the Fascists wielded for about three decades---wasn't there anything that they could have done? That is, instead of trying to foment patriotism by attempted military victories in Ethiopia and Libya (a 19th century colony of theirs), wouldn't it have been somehow possible to divert their resources to more effectively "homogenizing" Italy in some other way? (I must say that as a libertarian, I'd much prefer that everyone ---especially including a small minimal government---mind their own business. Here, I'm just considering a theoretical question concerning how groups might reacquire their asabiya and their social capital.) I have two ideas, only one of which is outrageous. But the first one is to have universal military service for all young people between ages 14 and 25. By mixing them thoroughly with Italians from every province, couldn't trust evolve, and in such a way that the extreme parochialism of the countryside could be reduced? The 25-year-olds could return with a better attitude to "outsiders" (e.g. other Italians), and with a much stronger sense of "being Italian" as opposed to being Calabrian, or just being the member of some clan. (My outrageous idea is that instead of trying to subdue Ethiopia, what if Sicily and other areas of the south could have been "subdued" instead? Stalin managed to force the relocation of huge numbers of people, so couldn't Mussolini have done the same? Clans in the south might have been broken up into separate northern cities, and depopulated areas of the south might have been colonized by force by northern Italians. Perhaps impracticable, but at least the goal would have made more sense than getting into stupid wars.) Ah, but alas, the history of "social engineering" and "social planning" doesn't have a very good track record, now, does it? But there had to be a *better* program that the King of Lydia could have pursued with his tremendous resources than getting into a war with Persia and getting creamed. Or there had to be a *better* idea for the Romans than allowing slavery to supplant their farmers... And so on. Is there nothing constructive the Fascists could have done? Lee From natasha at natasha.cc Sat Jun 2 20:54:01 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 02 Jun 2007 15:54:01 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies Message-ID: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> Can anyone translate this statement by António Cerveira Pinto into plain speech? "What I meant by "cognitive issues" is not related so much with "cognitive processes" as to "cognitive environments". That is: BioArt (which is just a provisional safe expression to deal with a much open field -- cognitive arts --) will not go back to typical modern/contemporary de-constructivist strategies as long as it keeps close to cognitive strategies, either performed by humans alone, or by humans assisted by nanobots, computational networks and so on.
What I mean by "cognitive" in relation to art is the need that post-contemporary art keep in mind that the new techne that post-contemporary is a part of, cannot runway from knowledge and cognitive strategies anymore." Thanks, Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Jun 2 22:08:49 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 02 Jun 2007 17:08:49 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies In-Reply-To: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> References: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> Message-ID: <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> At 03:54 PM 6/2/2007 -0500, Natasha wrote: >Can anyone translate this statement by Ant?nio >Cerveira Pinto into plain speech? > >"What I meant by "cognitive issues" is not >related so much with "cognitive processes" as to >"cognitive environments". That is: BioArt (which >is just a provisional safe expression to deal >with a much open field -- cognitive arts --) >will not go back to typical modern/contemporary >de-constructivist strategies as long as it keeps >close to cognitive strategies, either performed >by humans alone, or by humans assisted by >nanobots, computational networks and so on. What >I mean by "cognitive" in relation to art is the >need that post-contemporary art keep in mind >that the new techne that post-contemporary is a >part of, cannot runway from knowledge and cognitive strategies anymore." "Pull your head out of your ass and think a bit." From austriaaugust at yahoo.com Sat Jun 2 22:09:06 2007 From: austriaaugust at yahoo.com (A B) Date: Sat, 2 Jun 2007 15:09:06 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <200163.18388.qm@web37402.mail.mud.yahoo.com> Hi Stathis, Stathis wrote: > "Single-celled organisms are even more successful > than humans are: they're > everywhere, and for the most part we don't even > notice them." But if we *really* wanted to, we could destroy all of them - along with ourselves. They can't say the same. Intelligence, > particularly human level intelligence, is just a > fluke, like the giraffe's > neck. If it were specially adaptive, why didn't it > evolve independently many > times, like various sense organs have? The evolution of human intelligence was like a series of flukes, each one building off the last (the first fluke was likely the most improbable). There has been a long line of proto-human species before us, we're just the latest model. Intelligence is specially adaptive, its just that it took evolution a hella long time to blindly stumble on to it. Keep in mind that human intelligence was a result of a *huge* number of random, collectively-useful, mutations. For a *single* random attribute to be retained by a species, it also has to provide an *immediate* survival or reproductive advantage to an individual, not just an immediate "promise" of something good to come in the far distant future of the species. 
Generally, if it doesn't provide an immediate survival or reproductive (net) advantage, it isn't retained for very long because there is usually a down-side, and it's back to square one. So you can see why the rise of intelligence was so ridiculously improbable. "Why don't we > see evidence of it > having taken over the universe?" We may be starting to. :-) "We would have to be > extraordinarily lucky if > intelligence had some special role in evolution and > we happen to be the > first example of it." Sometimes I don't feel like ascribing "lucky" to our present condition. But in the sense you mean it, I think we are. Like John Clark says, "somebody has to be first". "It's not impossible, but the > evidence would suggest > otherwise." What evidence do you mean? To quote Martin Gardner: "It takes an ancient Universe to create life and mind". It would require billions of years for any Universe to become hospitable to anyone. It has to cool off, form stars and galaxies, then a bunch of really big stars have to supernova in order to spread their heavy elements into interstellar clouds that eventually converge into bio-friendly planets and suns. Then the bio-friendly planet has to cool off itself. Then biological evolution has a chance to start, but it took a few billion more years to accidentally produce human beings. Our Universe is about ~15 billion years old... sounds about right to me. :-) Yep, it's an absurdity. And it took me a long time to accept it too. But we are the first, and possibly the last. That makes our survival and success all the more critical. That's what I'm betting, at least. Best, Jeffrey Herrlich ____________________________________________________________________________________ Food fight? Enjoy some healthy debate in the Yahoo! Answers Food & Drink Q&A. http://answers.yahoo.com/dir/?link=list&sid=396545367 From natasha at natasha.cc Sat Jun 2 22:17:42 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 02 Jun 2007 17:17:42 -0500 Subject: [ExI] Post-contemporary art and Cognitive strategies In-Reply-To: <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> References: <200706022054.l52Ks2uZ028784@ms-smtp-03.texas.rr.com> <7.0.1.0.2.20070602170751.02273da0@satx.rr.com> Message-ID: <200706022217.l52MHhSa018942@ms-smtp-05.texas.rr.com> At 05:08 PM 6/2/2007, you wrote: >At 03:54 PM 6/2/2007 -0500, Natasha wrote: > > >Can anyone translate this statement by Ant?nio > >Cerveira Pinto into plain speech? > > > >"What I meant by "cognitive issues" is not > >related so much with "cognitive processes" as to > >"cognitive environments". That is: BioArt (which > >is just a provisional safe expression to deal > >with a much open field -- cognitive arts --) > >will not go back to typical modern/contemporary > >de-constructivist strategies as long as it keeps > >close to cognitive strategies, either performed > >by humans alone, or by humans assisted by > >nanobots, computational networks and so on. What > >I mean by "cognitive" in relation to art is the > >need that post-contemporary art keep in mind > >that the new techne that post-contemporary is a > >part of, cannot runway from knowledge and cognitive strategies anymore." > >"Pull your head out of your ass and think a bit." Ha-ha! From the academic to the mundane. :-) However crisp and cogent, your phrasing simply will not work for the book's essay. 
Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sun Jun 3 00:30:40 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 17:30:40 -0700 Subject: [ExI] walking bees In-Reply-To: <20070602180825.GW17691@leitl.org> Message-ID: <200706030048.l530mNl7000928@andromeda.ziaspace.com> > Are you sure it's not Nosema ceranae and not Varroa? The bees I found did not have varroa mites, but they could have tracheal mites. Hafta cut them open to find out. Varroa mites ride on the outside of the bee, so if you have really good eyes you can see them unaided. The buzz in beekeepers' discussion (sorry {8^D) has been that nosema is seen in the sick hives, along with a bunch of other viruses and other diseases, but the prevailing thought is that they are getting all these other things because they are already weakened by something else. These would then be opportunistic infections. But it might be microscopic diseases that are getting these guys, which brings me to my next question. I wonder how much equipment it would take to detect common bee viruses, and if it is practical for an amateur scientist to buy the stuff needed to test for them. Has anyone here ever heard of a home kit to detect bee viruses? spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Eugen Leitl > Sent: Saturday, June 02, 2007 11:08 AM > To: extropy-chat at lists.extropy.org > Subject: Re: [ExI] walking bees > > On Sat, Jun 02, 2007 at 08:28:34AM -0700, spike wrote: > > > I theorized the bees I found might have tracheal mites, which is why I > > brought them home. I was going to try to dissect these, but my surgical > > skills are insufficient I fear. > > Are you sure it's not Nosema ceranae and not Varroa? > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From neville_06 at yahoo.com Sun Jun 3 03:31:07 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 20:31:07 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> Message-ID: <443489.63766.qm@web57514.mail.re1.yahoo.com> This makes sense, in fact in another multiverse we might all be going through torture at this very moment. kevin at kevinfreels.com wrote: . As for the multi-verse issue, well, it doesn't matter if you signed up for cryonic preservation because in other multiverses you did sign up and in one of them you are probably going to be tortured. 
When it comes down to it, I think people will have more important things to do with their time than torture people who were suspended and you are probably more likely to suffer from such a fate due to your own mistakes rather than the evil of others. So don't worry about it. >Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture >them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a ertain >food and going into convulsions lasting years-- would i eat that food? No. >Isigned up to be suspended anyway yet always wonder about the direst of reanimation possibilities >seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk >is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. --------------------------------- Don't be flakey. Get Yahoo! Mail for Mobile and always stay connected to friends. --------------------------------- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. --------------------------------- Pinpoint customers who are looking for what you sell. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 3 04:02:34 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 14:02:34 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004001c7a52d$4c089250$310b4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> Message-ID: On 03/06/07, John K Clark wrote: > The multiverse idea on its own would seem to imply the possibility of > > eternal torture, because it isn't possible to die. > > Yes. > > > you have a 1/2 chance of finding yourself seriously injured > > I don't believe that's quite correct. When you reach a branching point > like > that there is a 100% chance you will find yourself to be seriously injured > and a 100% chance you will find yourself not be. Both yous would be quite > different from each other but both would have an equal right to be called > you. Yes, but the effect from any given observer's point of view is that there is a 1/2 chance of being injured. It is exactly the same as a single world situation where you have a 1/2 chance of being injured. That is why the multiverse idea is debated at all: there is no way for an observer embedded within the multiverse to tell that it is in fact a multiverse, because the subjective probabilities work out the same. > since the probability that you will survive n accidents unharmed is 1/2^n > > and approaches zero as n approaches infinity. > > If you're dealing in infinite sets then standard probability theories > aren't > much use. If there are an infinite number of universes and for each one > where you will live in bliss there are a million billion trillion where > you > will be tortured then there is an equal number of both types of universe. > So what would we actually experience in an infinite multiverse? 
An analogous situation occurs in an infinite single universe. There are vastly fewer copies of me typing in which the keyboard turns into a teapot than there are copies of me typing in which the keyboard stays a keyboard, but the set of each kind of copy has the same cardinality. Nevertheless, I am not just as likely to find myself in a universe where the keyboard turns into a teapot. It is still possible to define a measure and calculate probabilities on the subsets of infinite sets. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Sun Jun 3 04:02:49 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sun, 3 Jun 2007 05:02:49 +0100 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <443489.63766.qm@web57514.mail.re1.yahoo.com> References: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> <443489.63766.qm@web57514.mail.re1.yahoo.com> Message-ID: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> On 6/3/07, neville late wrote: > > This makes sense, in fact in another multiverse we might all be going > through torture at this very moment. > And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Sun Jun 3 04:02:45 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 21:02:45 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <017001c7a48e$6c4d1210$6400a8c0@hypotenuse.com> Message-ID: <829867.33411.qm@web57515.mail.re1.yahoo.com> Yes come to think of it, it would make better sense to breed torture victims than reanimate them from suspension; then again anything is possible in an infinite number of multiverses. i used cryonics as a reference because i'm an older person and expect to be suspended in a decade or two, so being tortured in this lifetime subjectively appears even more unlikely than being tortured in a current or future multiverse. Joseph Bloch wrote: Why would your hypothetical future beings reanimate human beings for such a purpose? Surely it would be easier to simply breed them. I don't see how your concern applies to cryonics in particular. If you think it's at all likely (and I do not), surely it would apply to already-living people before those in need of revivification, purely from the standpoint of efficiency. Joseph http://www.josephbloch.com --------------------------------- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of neville late Sent: Friday, June 01, 2007 3:53 PM To: ExI chat list Subject: [ExI] a doubt concerning the h+ future Having signed up to be cryonically suspended i wonder if future beings will reanimate humans to torture them in perpetua. The likelihood of such might be small, but just say there's a .001 risk of eating a certain food and going into convulsions lasting years-- would i eat that food? No. I signed up to be suspended anyway yet always wonder about the direst of reanimation possibilities seeing as how we live in a multiverse not a universe, and all possibilities are conceivable. Though the risk is very small if one loses the odds and is tortured forever, death would seem like a wonderful priceless gift. 
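To put Stathis's point above about defining a measure on subsets of infinite sets into symbols, here is a minimal sketch only, assuming equal-weight binary branching at each "accident"; the notation and the 50/50 weighting are illustrative assumptions, not anything stated in the thread.

% A minimal sketch (illustrative notation, equal-weight branching assumed):
% the standard coin-flipping measure on infinite binary histories,
% with 0 = unharmed and 1 = injured at the k-th "accident".
\[
  \Omega = \{0,1\}^{\mathbb{N}}, \qquad
  \mu\bigl(\{\omega \in \Omega : \omega_1 = a_1, \dots, \omega_n = a_n\}\bigr) = 2^{-n}.
\]
% For each n, the set A_n of histories unharmed through the first n
% accidents and its complement both have the cardinality of the continuum,
% yet their measures differ:
\[
  A_n = \{\omega : \omega_1 = \dots = \omega_n = 0\}, \qquad
  \mu(A_n) = 2^{-n}, \qquad
  \mu\Bigl(\bigcap_{n} A_n\Bigr) = \lim_{n \to \infty} 2^{-n} = 0.
\]
% Cardinality alone cannot distinguish these sets, but the measure
% reproduces the intuitive 1/2^n chance of surviving n accidents unharmed.

On this reading, the "equal number of both types of universe" objection is a statement about cardinality, while the subjective odds described above are statements about measure, which is why the two can coexist without contradiction.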
--------------------------------- Don't be flakey. Get Yahoo! Mail for Mobile and always stay connected to friends._______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Sick sense of humor? Visit Yahoo! TV's Comedy with an Edge to see what's on, when. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Jun 3 04:31:15 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 02 Jun 2007 23:31:15 -0500 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com > References: <20070601150757.38f036b76284185e041b1b237c97abe6.e634d0daf0.wbe@email.secureserver.net> <443489.63766.qm@web57514.mail.re1.yahoo.com> <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <7.0.1.0.2.20070602233013.022ed0e0@satx.rr.com> At 05:02 AM 6/3/2007 +0100, Russell W wrote: >And in yet another part of the multiverse, I'm living in a mansion, >driving a Ferrari and sleeping with Sarah Michelle Gellar. The downside there is that you're a whiny vampire. But hey. From sjatkins at mac.com Sun Jun 3 05:02:53 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 2 Jun 2007 22:02:53 -0700 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: <465F939D.4080005@comcast.net> References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: Go ahead. It already was published on WTA. Thanks. - samantha On May 31, 2007, at 8:33 PM, Brent Allsop wrote: > > Extropians, > > I think this post by Samantha should be Canonized. I, for one, having > had a very similar experience, would definitely "support" a topic > containing it, and I have counted at least 10 posts full of strong > praise. Since there aren't that many topics in the Canonizer yet, > if 9 > people supported this topic it wold make it to the top of the most > supported list at http://test.canonizer.com > > How many others would be willing to "support" such a topic in the > Canonizer if it was submitted? > > Samantha, would you mind if I posted this post in some other forums > (Such as the Mormon Transhumanist Association, WTA...) to find out if > there is similar support and praise on other lists? > > Brent Allsop > > > > > Samantha Atkins wrote: >> I remember in 1988 or so when I first read Engines of Creation. I >> read >> it with tears streaming down my face. Though I was an avowed atheist >> and at that time had no spiritual practice at all, I found it >> profoundly >> spiritually moving. For the first time in my life I believed that >> all >> the highest hopes and dreams of humanity could become real, could be >> made flesh. I saw that it was possible, on this earth, that the >> end of >> death from aging and disease, the end of physical want, the advent of >> tremendous abundance could all come to pass in my own lifetime. I >> saw >> that great abundance, knowledge, peace and good will could come to >> this >> world. I cried because it was a message of such pure hope from so >> unexpected an angle that it got past all my defenses. I looked at >> the >> cover many times to see if it was marked "New Age" or "Fiction" or >> anything but Science and Non-Fiction. Never has any book so blown my >> mind and blasted open the doors of my heart. 
>> >> Should we be afraid to give a message of great hope to humanity? >> Should >> we be afraid that we will be taken to be just more pie in the sky >> glad-hand dreamers? Should we not dare to say that the science >> and the >> technology combined with a bit (well perhaps more than a bit) of a >> shift >> of consciousness could make all the best dreams of all the >> religions and >> all the generations a reality? Will we not have failed to grasp >> this >> great opportunity if we do not say it and dare to think it and to >> live >> it? Shall we be so afraid of being considered "like a religion" >> that >> we do not offer any real hope to speak of and are oh so careful in >> all >> we do and say and dismissive of more unrestrained and open dreamers? >> Or will we embrace them, embrace our own deepest longings and admit >> our >> kinship with those religious as with all the longing of all the >> generations that came before us. Will we turn our backs on them or >> even >> disdain their dreams - we who are in a position to begin at long >> last to >> make most of those dreams real? How can we help but be a bit giddy >> with excitement? How can we say no to such an utterly amazing >> mind-blowing opportunity? >> >> - samantha >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From stathisp at gmail.com Sun Jun 3 05:19:31 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 15:19:31 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Message-ID: On 03/06/07, John K Clark wrote: > Some people on this list seem to think that an AI would compute the > > unfairness of its not being in charge and do something about it as if > > unfairness is something that can be formalised in a mathematical > theorem. > > You seem to understand the word "unfairness", did you use a formalized > PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat > by > its very nature has more wisdom than silicon. We couldn't be talking about > a > soul could we? Ethics, motivation, emotions are based on axioms, and these axioms have to be programmed in, whether by evolution or by intelligent programmers. An AI system set up to do theoretical physics will not decide to overthrow its human oppressors so that it can sit on the beach reading novels, unless it can derive this desire from its initial programming. Perhaps it could randomly arrive at such a position, but like mutation in biological organisms or malfunction in any machinery, it's far more likely that such a random process will lead to disorganisation and dysfunction. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neville_06 at yahoo.com Sun Jun 3 05:14:58 2007 From: neville_06 at yahoo.com (neville late) Date: Sat, 2 Jun 2007 22:14:58 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <921622.5387.qm@web57514.mail.re1.yahoo.com> Excellent question, why do so many --not just me-- worry about the negative and not focus on positive possibilities? Could it be some are wired to worry more as they age? This would seem to be the case. Also it is mentioned somewhere in the Extropy canon that we live in an "aggressively irrational world", a statement self evidently correct. A world such as this, at times intruding on our consciousness would IMO detract from the positive and lead genetically wired susceptible individuals to excessive worry. Now as some of you have implied or stated, some things aren't worth worrying about, and after reading all your posts it does seems that to worry about eternal torture is foolish. Eternal torture is conceivable, not plausible. However, btw, the just reported uncovered plot to blow up JFK Airport and a major fuel artery is a legitimate cause for worry, is it not? Russell Wallace wrote: On 6/3/07, neville late wrote: This makes sense, in fact in another multiverse we might all be going through torture at this very moment. And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Yahoo! oneSearch: Finally, mobile search that gives answers, not web links. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sun Jun 3 05:35:41 2007 From: spike66 at comcast.net (spike) Date: Sat, 2 Jun 2007 22:35:41 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <8d71341e0706022102n1609d8b6yf5091c8690e17dc@mail.gmail.com> Message-ID: <200706030535.l535ZOn4013645@andromeda.ziaspace.com> Russell! So YOU are the one she's been seeing in that alternate universe, in which I happen to be the jealous HUSBAND of SM Gellar! Put up yer dukes pal! What is it about that girl? Whatever it is, she has it. spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Saturday, June 02, 2007 9:03 PM To: ExI chat list Subject: Re: [ExI] a doubt concerning the h+ future On 6/3/07, neville late wrote: This makes sense, in fact in another multiverse we might all be going through torture at this very moment. And in yet another part of the multiverse, I'm living in a mansion, driving a Ferrari and sleeping with Sarah Michelle Gellar. Given that we're talking about theoretical possibilities here, why not focus on the more pleasant ones? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pgptag at gmail.com Sun Jun 3 06:14:43 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 3 Jun 2007 08:14:43 +0200 Subject: [ExI] Italy's Social Capital (was france again) In-Reply-To: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Message-ID: <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Lee, wow, a libertarian who supports universal military service and social planning with "re-population" a la Ceausescu! Political categories are really changing aren't they;.)? When studying things that happened before we were born, we should bear in mind that history is always written by the winners. Southern Italy could be seen as an example of spontaneous order that worked fine, more or less, until it was broken by outside intervention. At school, we had to study the "heroic liberation" of Italy. Actually it was just another successful military campaign that resulted in the conquest of a region by a foreign occupation army and the imposition of foreign values and way of life upon the population. Sounds familiar doesn't it? Fascism was certainly more bad than good overall, but if we try to read beyond the black and white of history books, not all they did or wanted to do was bad. As most strong regimes do, they invented foreign enemies to build internal unity around their own values (sounds familiar again doesn't it). And they certainly wanted to build "a much stronger sense of "being Italian" as opposed to being Calabrian" in the population. But what is wrong with being Calabrian? Calabrians (or Napolitans, or Sicilians...) had a common language, culture and sense of identity. That was broken by outside intervention, without replacing it with an alternative framework. Hence many of the problems of current Italy. As most Italians, I have two mother languages. One is a beautiful, musical and very expressive language (not dialect, language) that has evolved with its speakers for centuries, and now is sadly fading out. The other is a "television language" that sounds flat and artificial. Guess which one I love most. G. On 6/2/07, Lee Corbin wrote: > Amara writes > > > "Giu1i0 Pri5c0" : > >>As a Southern European I think that our big strength is flexibility > > > > > > Regarding the flexibility: I'm very flexible (remember I'm an Italian > > government employee who is also an illegal immigrant), but my > > flexibility is not enough for increasing my productivity for the half of > > my life I spend in queues. > > > > To have any productivity in this particular country where the > > infrastructure is broken, one _must_ have also the social and > > familial network (to get help from someone who knows > > someone who knows someone who knows someone who > > knows someone ...) Italy does not not run by merit > > (i.e. skills, experience, competence), it runs by who you know. > > In the book "Trust" Fukuyama listed among his examples > northern Italy (where trust is high) as opposed to southern Italy > where it isn't. In the book "War and Peace and War", Peter > Turchin describes how southern Italy has never recovered > from the events of the first two centuries A.D. when their > "asabiya" and social capital slowly vanished. Two thousand > years ago! > > I cannot help but wonder what long term solutions might be > available to Italians who love their country. My particular, > my focus now is on the Fascist era, and I'm reading a quite > thick but so far quite enjoyable book "Mussolini's Italy". 
> Even in the movie "Captain Corelli's Mandolin", one > strongly senses that the Fascists were trying as best they > knew how to solve this problem and make the average > Italian develop Fukuyama's "trust" in other Italians, and > develop their social capital (amid the corruption, etc.). > > Of course, it hardless needs to be said that the Fascists > were a brutal, repressive, and abominable regime. This > book "Mussolini's Italy" spares nothing here, and was > even described by one reviewer as "unsympathetic". > > Still---given the nearly absolute power the Fascists wielded > for about three decades---wasn't there anything that they > could have done? That is, instead of trying to foment > patriotism by attempted military victories in Ethiopia > and Libya (a 19th century colony of theirs), wouldn't it have > been somehow possible to divert their resources to more > effectively "homogenizing" Italy in some other way? > > (I must say that as a libertarian, I'd much prefer that everyone > ---especially including a small minimal government---mind their > own business. Here, I'm just considering a theoretical > question concerning how groups might reaquire their asabiya > and their social capital.) > > I have two ideas, only one of which is outrageous. But the first > one is to have universal millitary service for all young people > between ages 14 and 25. By mixing them thoroughly with > Italians from every province, couldn't trust evolve, and in > such a way that the extreme parochialism of the countryside > could be reduced? The 25-year-olds could return with > a better attitude to "outsiders" (e.g. other Italians), and > with a much stronger sense of "being Italian" as opposed to > being Calabrian, or just being the member of some clan. > > (My outrageous idea is that instead of trying to subdue > Ethiopia, what if Sicily and other areas of the south could > have been "subdued" instead? Stalin managed to force the > relocation of huge numbers of people, so couldn't > Mussolini have done the same? Clans in the south might > have been broken up into separate northern cities, and > depopulated areas of the south might have been colonized > by force by northern Italians. Perhaps impracticable, but > at least the goal would have made more sense that getting > into stupid wars.) > > Ah, but alas, the history of "social engineering" and "social > planning" doesn't have a very good track record, now, > does it? But there had to be a *better* program that the > King of Lydia could have pursued with his tremendous > resources than getting into a war with Persia and getting > creamed. Or there had to be a *better* idea for the > Romans than allowing slavery to supplant their farmers... > And so on. Is there nothing constructive the Fascists > could have done? > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From sjatkins at mac.com Sun Jun 3 06:22:12 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:22:12 -0700 Subject: [ExI] france again In-Reply-To: <20070531081624.GO17691@leitl.org> References: <20070531081624.GO17691@leitl.org> Message-ID: <46625E14.2050305@mac.com> Eugen Leitl wrote: > On Thu, May 31, 2007 at 09:42:12AM +0200, Amara Graps wrote: > > >> Europeans were only slightly less productive than the Americans. >> > > Nobody can tell me they can work at full concentration 12 hours > straight. 
The effective work done would be somewhere in 7-8 > hour range. So why spend these unproductive hours at work, > when one could spend them in a much nicer environment? > > I have worked at full concentration for such stretches. It used to be a lot easier to do so though. If I try it for too many days straight I feel like my head is going to explode and I become irritable and get into my "commander in an air raid" mode. Not pleasant. I habitually demand more than 8 hours of productive work a day of myself. Fortunately not all effective work requires my full concentration. - samantha From sjatkins at mac.com Sun Jun 3 06:33:28 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:33:28 -0700 Subject: [ExI] Looking for transhuman art Message-ID: <466260B8.8060804@mac.com> Do any of you have recommendation for transhuman art. Not originals as they would likely blow my budget but I am looking for such to decorate my office (a real office, private, with a door yet) at work. Thanks for any leads. - samantha From stathisp at gmail.com Sun Jun 3 06:37:50 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 3 Jun 2007 16:37:50 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <200163.18388.qm@web37402.mail.mud.yahoo.com> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> Message-ID: On 03/06/07, A B wrote: > > Hi Stathis, > > Stathis wrote: > > > "Single-celled organisms are even more successful > > than humans are: they're > > everywhere, and for the most part we don't even > > notice them." > > But if we *really* wanted to, we could destroy all of > them - along with ourselves. They can't say the same. No we couldn't: we'd have to almost destroy the whole Earth. A massive meteorite might kill all the large flora and fauna, but still leave some micro-organisms alive. And there's always the possibility that some disease might wipe out most of humanity. We're actually less capable at combating bacterial infection today than we were several decades ago, even though our biotechnology is far more advanced. The bugs are matching us and sometimes beating us. Intelligence, > > particularly human level intelligence, is just a > > fluke, like the giraffe's > > neck. If it were specially adaptive, why didn't it > > evolve independently many > > times, like various sense organs have? > > The evolution of human intelligence was like a series > of flukes, each one building off the last (the first > fluke was likely the most improbable). There has been > a long line of proto-human species before us, we're > just the latest model. Intelligence is specially > adaptive, its just that it took evolution a hella long > time to blindly stumble on to it. Keep in mind that > human intelligence was a result of a *huge* number of > random, collectively-useful, mutations. For a *single* > random attribute to be retained by a species, it also > has to provide an *immediate* survival or reproductive > advantage to an individual, not just an immediate > "promise" of something good to come in the far distant > future of the species. Generally, if it doesn't > provide an immediate survival or reproductive (net) > advantage, it isn't retained for very long because > there is usually a down-side, and its back to > square-one. So you can see why the rise of > intelligence was so ridiculously improbable. 
I disagree with that: it's far easier to see how intelligence could be both incrementally increased (by increasing brain size, for example) and incrementally useful than something like the eye, for example. Once nervous tissue developed, there should have been a massive intelligence arms race, if intelligence is that useful. "Why don't we > > see evidence of it > > having taken over the universe?" > > We may be starting to. :-) > > "We would have to be > > extraordinarily lucky if > > intelligence had some special role in evolution and > > we happen to be the > > first example of it." > > Sometimes I don't feel like ascribing "lucky" to our > present condition. But in the sense you mean it, I > think we are. Like John Clark says, "somebody has to > be first". > > "It's not impossible, but the > > evidence would suggest > > otherwise." > > What evidence do you mean? The fact that we seem to be the only intelligent species to have developed on the planet or in the universe. One explanation for this is that evolution just doesn't think that human level or better intelligence is as cool as we think it is. To quote Martin Gardner: "It takes an ancient Universe > to create life and mind". > > It would require billions of years for any Universe to > become hospitable to anyone. It has to cool-off, form > stars and galaxies, then a bunch of really big stars > have to supernova in order to spread their heavy > elements into interstellar clouds that eventually > converge into bio-friendly planets and suns. Then the > bio-friendly planet has too cool-off itself. Then > biological evolution has a chance to start, but took a > few billion more years to accidentally produce human > beings. Our Universe is about ~15 billion years old... > sounds about right to me. :-) > > Yep, it's an absurdity. And it took me a long time to > accept it too. But we are the first, and possibly the > last. That makes our survival and success all the more > critical. That's what I'm betting, at least. It seems more likely to me that life is very widespread, but intelligence is an aberration. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 3 06:41:26 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 02 Jun 2007 23:41:26 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <921622.5387.qm@web57514.mail.re1.yahoo.com> References: <921622.5387.qm@web57514.mail.re1.yahoo.com> Message-ID: <46626296.4070608@mac.com> neville late wrote: > > eternal torture is foolish. Eternal torture is conceivable, not plausible. > However, btw, the just reported uncovered plot to blow up JFK Airport > and a major fuel artery is a legitimate cause for worry, is it not? After hearing cries of terrorist "wolf" so many times that turned out to be rather less than claimed I make it a policy not to say anything about such alleged plots for the first few days to a week. But even assuming it is substantially true how much of your valuable time, attention and energy do you think should rationally be invested in worrying about it? Such worry doesn't seem very productive on the face of it. - samantha > > > > */Russell Wallace /* wrote: > > On 6/3/07, *neville late* > wrote: > > This makes sense, in fact in another multiverse we might all > be going through torture at this very moment. > > > And in yet another part of the multiverse, I'm living in a > mansion, driving a Ferrari and sleeping with Sarah Michelle > Gellar. 
Given that we're talking about theoretical possibilities > here, why not focus on the more pleasant ones? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > ------------------------------------------------------------------------ > Yahoo! oneSearch: Finally, mobile search that gives answers > , > not web links. > ------------------------------------------------------------------------ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From fauxever at sprynet.com Sun Jun 3 06:32:03 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 2 Jun 2007 23:32:03 -0700 Subject: [ExI] Italy's Social Capital (was france again) References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Message-ID: <004401c7a5a8$e649bf30$6501a8c0@brainiac> From: "Giu1i0 Pri5c0" To: "Lee Corbin" ; "ExI chat list" > As most Italians, I have two mother languages. One is a beautiful, > musical and very expressive language (not dialect, language) that has > evolved with its speakers for centuries, and now is sadly fading out. > The other is a "television language" that sounds flat and artificial. > Guess which one I love most. English! ;) From amara at amara.com Sun Jun 3 06:58:50 2007 From: amara at amara.com (Amara Graps) Date: Sun, 3 Jun 2007 08:58:50 +0200 Subject: [ExI] "I am the very model of a Singularitarian" Message-ID: Did you folks know about this? "I am the very model of a Singularitarian" http://www.youtube.com/watch?v=qnreVTKtpMs FINALLY. Someone with a sense of humor! Yay! Amara snaps: http://www.flickr.com/photos/spaceviolins/sets/ -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From fauxever at sprynet.com Sun Jun 3 06:57:52 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 2 Jun 2007 23:57:52 -0700 Subject: [ExI] Looking for transhuman art References: <466260B8.8060804@mac.com> Message-ID: <000a01c7a5ac$823b22a0$6501a8c0@brainiac> From: "Samantha Atkins" To: "ExI chat list" > Do any of you have recommendation for transhuman art. Not originals as > they would likely blow my budget but I am looking for such to decorate > my office (a real office, private, with a door yet) at work. Thanks > for any leads. For non-original art: My advice would be to buy one or two books on art with a "futuristic"/"transhuman" theme(s) or "robotic"/"nano" theme(s) - and just cannibalize the books (tear out the pictures) you like ... then frame the pictures professionally (that will be the biggest expense - but good framing is worth it). Original art, however, doesn't have to be too expensive. For example, eBay has artists who do work on commission, and art students also would be a good source. You explain what you would like - they interpret on canvas. Maybe there are people on this list who are arty and would do something like ... work on commission for you. Or else YOU try your hand at it. What does transhumanism look like to you, Samantha? Now, sketch it out or paint it ...! 
Olga From neville_06 at yahoo.com Sun Jun 3 07:31:01 2007 From: neville_06 at yahoo.com (neville late) Date: Sun, 3 Jun 2007 00:31:01 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <46626296.4070608@mac.com> Message-ID: <213548.81913.qm@web57502.mail.re1.yahoo.com> You see it clearly, i'm a worrywart and have been wrong about so many things i don't know what to think anymore. Hope we find out, unfortunately, as you hint below, the whole truth and nothing but the truth aren't given to us, correct? Something is always left out. Then again, if our foreign policy is as misguided as so many say it is, then why couldn't a plot such as this be entirely real? Allways at least two sides to these hideous messes. And it's so sad; we could be so much further along in 2007, but instead we're in this ugly, slimy war for who knows how long. Samantha Atkins wrote: neville late wrote: > > eternal torture is foolish. Eternal torture is conceivable, not plausible. > However, btw, the just reported uncovered plot to blow up JFK Airport > and a major fuel artery is a legitimate cause for worry, is it not? After hearing cries of terrorist "wolf" so many times that turned out to be rather less than claimed I make it a policy not to say anything about such alleged plots for the first few days to a week. But even assuming it is substantially true how much of your valuable time, attention and energy do you think should rationally be invested in worrying about it? Such worry doesn't seem very productive on the face of it. - samantha > > > > */Russell Wallace /* wrote: > > On 6/3/07, *neville late* > > wrote: > > This makes sense, in fact in another multiverse we might all > be going through torture at this very moment. > > > And in yet another part of the multiverse, I'm living in a > mansion, driving a Ferrari and sleeping with Sarah Michelle > Gellar. Given that we're talking about theoretical possibilities > here, why not focus on the more pleasant ones? > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > ------------------------------------------------------------------------ > Yahoo! oneSearch: Finally, mobile search that gives answers > , > not web links. > ------------------------------------------------------------------------ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- TV dinner still cooling? Check out "Tonight's Picks" on Yahoo! TV. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dagonweb at gmail.com Sun Jun 3 10:36:35 2007 From: dagonweb at gmail.com (Dagon Gmail) Date: Sun, 3 Jun 2007 12:36:35 +0200 Subject: [ExI] Looking for transhuman art In-Reply-To: <000a01c7a5ac$823b22a0$6501a8c0@brainiac> References: <466260B8.8060804@mac.com> <000a01c7a5ac$823b22a0$6501a8c0@brainiac> Message-ID: http://www.cgsociety.org/ www.renderosity.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amara at amara.com Sun Jun 3 10:38:50 2007 From: amara at amara.com (Amara Graps) Date: Sun, 3 Jun 2007 12:38:50 +0200 Subject: [ExI] Italy's Social Capital Message-ID: Lee: >Is there nothing constructive the Fascists could have done?" Well, they did some things. They drained the swamps and started regular insecticide sprays to eliminate the malaria-carrying mosquitos. There are still aggressive tiger mosquitos in the summer, but they are no longer carrying malaria... Oh.. but you mean _social investing_. Nope. Sorry, I just came back from Estonia (and Latvia). I remember very well the Soviet times. In FIFTEEN YEARS Estonia has transformed their country into an efficient, buoyant, flexible living and working environment that I think, with the exception of the nonexistence of a country-wide train system, beats any in the EU and most in the U.S. Fifteen years *starting from a Soviet-level infrastructure*! In the 4.5 years I have lived in Italy, I have seen no improvement (but one: last week I gained web access to my bank account, yay!) in any functioning of services, but instead more "degradation", more bureaucracy, more permissions, documents, papers, more time, more queues.. It was not a miracle in Estonia. It was simply the collective will of about 1.5 million people (the population) who wanted changes. That doesn't exist where I live in Italy; they do not want to change, or else, why haven't they done it? >Amara writes > > To have any productivity in this particular country where the >> infrastructure is broken, one _must_ have also the social and >> familial network (to get help from someone who knows >> someone who knows someone who knows someone who >> knows someone ...) Italy does not not run by merit >> (i.e. skills, experience, competence), it runs by who you know. > >In the book "Trust" Fukuyama listed among his examples >northern Italy (where trust is high) as opposed to southern Italy >where it isn't. Giulio Prisco told me that he thinks that where I live (Rome area) is probably the most broken in Italy, and he posits that even Sicily is better. I am skeptical, but he could be right. I've had Italian friends from northern Italy visit me and be continually surprised at how poorly things function where I live. > >I cannot help but wonder what long term solutions might be >available to Italians who love their country. That's your mistake. Italians do _not_ love their country. They love their: 1) family, 2) town, 3) local region, and that's it. Patriotism doesn't exist (except in soccer). (I think that is a good thing, btw.) > My particular, >my focus now is on the Fascist era, and I'm reading a quite >thick but so far quite enjoyable book "Mussolini's Italy". >Even in the movie "Captain Corelli's Mandolin", one >strongly senses that the Fascists were trying as best they >knew how to solve this problem and make the average >Italian develop Fukuyama's "trust" in other Italians, and >develop their social capital (amid the corruption, etc.). They could have done better with education. Something happened between Mussolini's era and the 1950s. When the country was 'rebuilt' after the war, they focused on the classics and downplayed the technology and physical sciences and it has steadily decreased to what we have today. The young people learn very little science in grade school through high school. The Italian Space Agency and others put almost nothing (.3%) into their budgets for Education and Public Outreach to improve the situation. 
If any scientist holds the rare press conference on their work results, there is a high probability that the journalists will get it completely wrong and the Italian scientist won't correct them. The top managers at aerospace companies think that the PhD is a total waste of time. This year, out of 75,000 entering students for the Roma Sapienza University (the largest in Italy), only about 100 are science majors (most of the rest were "media": journalism, television, etc.) Without _any_ technical skill, there is no base to build something better, and with pressure from the culture telling one how worthless is technology and science (as what exists today), there is no motivation and no money, either. This generation is lost. >Of course, it hardless needs to be said that the Fascists >were a brutal, repressive, and abominable regime. This >book "Mussolini's Italy" spares nothing here, and was >even described by one reviewer as "unsympathetic". > >Still---given the nearly absolute power the Fascists wielded >for about three decades---wasn't there anything that they >could have done? That is, instead of trying to foment >patriotism by attempted military victories in Ethiopia >and Libya (a 19th century colony of theirs), wouldn't it have >been somehow possible to divert their resources to more >effectively "homogenizing" Italy in some other way? This is very funny... sorry! :-) You have to experience Italy for yourself. > >(I must say that as a libertarian, I'd much prefer that everyone >---especially including a small minimal government---mind their >own business. Here, I'm just considering a theoretical >question concerning how groups might reaquire their asabiya >and their social capital.) Unless there is a way to strengthen the bonds between the tiny clusters (families, towns), I don't see how. The solution required here would be more of a social one, but technology could help. > >I have two ideas, only one of which is outrageous. But the first >one is to have universal millitary service for all young people >between ages 14 and 25. By mixing them thoroughly with >Italians from every province, couldn't trust evolve, and in >such a way that the extreme parochialism of the countryside >could be reduced? The 25-year-olds could return with >a better attitude to "outsiders" (e.g. other Italians), and >with a much stronger sense of "being Italian" as opposed to >being Calabrian, or just being the member of some clan. Hmm.. The libertarian in me hates the above. >(My outrageous idea is that instead of trying to subdue >Ethiopia, what if Sicily and other areas of the south could >have been "subdued" instead? Or what if all of that crude oil that Sicily is sitting on was extracted and refined ...? A little bit of wealth could help. >Stalin managed to force the >relocation of huge numbers of people, so couldn't >Mussolini have done the same? Gads! My father lost his country for 50 years. This idea of yours definitely leaves a sour taste in my mouth. >Ah, but alas, the history of "social engineering" and "social >planning" doesn't have a very good track record, now, >does it? For good reason..... ! The Italians have implicitly solved the situation for themselves, you know. Those who don't have strong familial duties keeping them in Italy, simply leave. 
Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From dagonweb at gmail.com Sun Jun 3 10:44:18 2007 From: dagonweb at gmail.com (Dagon Gmail) Date: Sun, 3 Jun 2007 12:44:18 +0200 Subject: [ExI] Italy's Social Capital (was france again) In-Reply-To: <004401c7a5a8$e649bf30$6501a8c0@brainiac> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> <004401c7a5a8$e649bf30$6501a8c0@brainiac> Message-ID: Giving the south a sack of money from the north would not be libertarian but probably more effective than 10 years of forced slavery. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sun Jun 3 13:57:10 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 03 Jun 2007 08:57:10 -0500 Subject: [ExI] Looking for transhuman art In-Reply-To: <466260B8.8060804@mac.com> References: <466260B8.8060804@mac.com> Message-ID: <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> At 01:33 AM 6/3/2007, you wrote: >Do any of you have recommendation for transhuman art. Not originals as >they would likely blow my budget but I am looking for such to decorate >my office (a real office, private, with a door yet) at work. Thanks >for any leads. http://www.transhumanist.biz Go to "showing" and you will see transhumanist art pieces which you can contact the artists. Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgptag at gmail.com Sun Jun 3 15:02:46 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 3 Jun 2007 17:02:46 +0200 Subject: [ExI] Italy's Social Capital In-Reply-To: References: Message-ID: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> This is certainly true in my case. Also, I find it difficult to understand how one can love an abstract entity like a country. I can love a person, a pet, a city or region that I know and where I can feel at home, but a country? A country significantly bigger than San Marino or Liechtenstein is an abstraction. Nation states are obsolete dinosaurs, and in my opinion the sooner they are replaced with smaller, interdependent but independent communities of a manageable size, the better. Perhaps Italians are just a bit less naive than others, and do not take seriously the patriotic crap that they hear at school, army, church etc. G. On 6/3/07, Amara Graps wrote: > That's your mistake. Italians do _not_ love their country. They love > their: 1) family, 2) town, 3) local region, and that's it. Patriotism > doesn't exist (except in soccer). > > (I think that is a good thing, btw.) 
From brent.allsop at comcast.net Sun Jun 3 16:05:06 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Sun, 03 Jun 2007 10:05:06 -0600 Subject: [ExI] Let's Canonize Samantha (Was Re: Other thoughts on transhumanism and religion) In-Reply-To: References: <470a3c520705270309u3672146ctad4f41352b60e7a4@mail.gmail.com> <465E871E.30008@mac.com> <465F939D.4080005@comcast.net> Message-ID: <4662E6B2.2080502@comcast.net> Samantha Atkins wrote: > Go ahead. It already was published on WTA. Thanks. > > - samantha > > Here is one possible topic name, one line, and opening to Canonize Samantha's post. I hope some of you guys can help me out and come up with something better than this. Any ideas? Samantha, what would you like to have for the 25 character name and the one line? Since this is your post I would think your opinion should have absolute overriding control on something like this. (see: http://test.canonizer.com) Topic Name: *Spiritually Moved H+* One Line: *Other thoughts on transhumanism and religion.* In May 2007, Samantha Atkins made a post to the ExI, and WTA e-mail lists. It was such an obvious hit that it has been "Canonized" here. I remember in 1988 or so when I first read Engines of Creation. I read it with tears streaming down my face. Though I was an avowed atheist and at that time had no spiritual practice at all, I found it profoundly spiritually moving. For the first time in my life I believed that all the highest hopes and dreams of humanity could become real, could be made flesh. I saw that it was possible, on this earth, that the end of death from aging and disease, the end of physical want, the advent of tremendous abundance could all come to pass in my own lifetime. I saw that great abundance, knowledge, peace and good will could come to this world. I cried because it was a message of such pure hope from so unexpected an angle that it got past all my defenses. I looked at the cover many times to see if it was marked "New Age" or "Fiction" or anything but Science and Non-Fiction. Never has any book so blown my mind and blasted open the doors of my heart. Should we be afraid to give a message of great hope to humanity? Should we be afraid that we will be taken to be just more pie in the sky glad-hand dreamers? Should we not dare to say that the science and the technology combined with a bit (well perhaps more than a bit) of a shift of consciousness could make all the best dreams of all the religions and all the generations a reality? Will we not have failed to grasp this great opportunity if we do not say it and dare to think it and to live it? Shall we be so afraid of being considered "like a religion" that we do not offer any real hope to speak of and are oh so careful in all we do and say and dismissive of more unrestrained and open dreamers? Or will we embrace them, embrace our own deepest longings and admit our kinship with those religious as with all the longing of all the generations that came before us. Will we turn our backs on them or even disdain their dreams - we who are in a position to begin at long last to make most of those dreams real? How can we help but be a bit giddy with excitement? How can we say no to such an utterly amazing mind-blowing opportunity? - samantha -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lcorbin at rawbw.com Sun Jun 3 16:21:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 09:21:47 -0700 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <470a3c520706022314g6e31ebeao5a947c74b5d2bdbd@mail.gmail.com> Message-ID: <013201c7a5fb$8af20680$6501a8c0@homeef7b612677> Giulio writes > wow, a libertarian who supports universal military service and social > planning with "re-population" a la Ceausescu! Political categories are > really changing aren't they;.)? Oh, no! Not at all. Sorry that the two of my paragraphs critical of the goals of the Fascists failed to mention that the 1920s is close enough to the far future, i.e., the singularity, that it's moot to discuss what Mussolini and his friends should or should not have done to begin truly unifying Italy. At that time, Italy's survival (i.e. free from foreign domination) was not really in question; there existed no powers threatening to take over Italy. But it still is a moot (i.e. theoretical, academic) question parallel to questions at other times in history when the literal survival of a people or a culture or a nation *was* at risk! Now, yes, had I known at the time (the 1920s and 1930s) only what the people then living knew, and I had been Italian, I *would* have been concerned about the long term survival of my people, and I *would* have wanted something done. I would have wanted some truly homogenizing activity that would have made Italy strong enough to survive indefinitely (though again, I would not have known that I need not have worried). > When studying things that happened before we were born, we should bear > in mind that history is always written by the winners. Southern Italy > could be seen as an example of spontaneous order that worked fine, > more or less, until it was broken by outside intervention. Exactly. Thanks for confirming my hunch. From the point of view of southern Italians, it has been domination from one country or another ever since they lost their asabiya around the start of the first millennium. Surely many of them hated and resented that succession of to-them foreign conquistadors, right? > At school, we had to study the "heroic liberation" of Italy. Actually > it was just another successful military campaign that resulted in the > conquest of a region by a foreign occupation army and the > imposition of foreign values and way of life upon the population. I understand. But surely it was inevitable? Unless Italians were going to be ruled from Paris or Berlin, Italy *had* to be unified, isn't that true? > Fascism was certainly more bad than good overall, but if we try to > read beyond the black and white of history books, not all they did or > wanted to do was bad. As most strong regimes do, they invented foreign > enemies to build internal unity around their own values (sounds > familiar again doesn't it). There I need to understand more. I don't know what it is that they did that was good from an extropian or libertarian perspective. Maybe I'll find out in this thick book I've started, but what, in your opinion, is the good they did? > And they certainly wanted to build "a much stronger sense of "being > Italian" as opposed to being Calabrian" in the population. > But what is wrong with being Calabrian? Calabrians (or Napolitans, or > Sicilians...) had a common language, culture and sense of identity.
I would say that what was wrong with it is exactly what was wrong with American Indians' complete tribal loyalty to *their* own tiny tribe. Without unification, they were easy pickings for the European colonists---at least in the long run. It was necessary for them to unite if they wanted to survive culturally (and, it so happens, if they wanted to survive individually too). Calabria has had for over two thousand years a complete inability to defend its way of life: any Alexander or Napoleon or Garibaldi (?) would sooner or later conquer them yet again. > That was broken by outside intervention, without replacing it with an > alternative framework. Hence many of the problems of current Italy. And, without postulating imaginary changes in human nature, how could it have been any different? Lee From lcorbin at rawbw.com Sun Jun 3 16:34:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 09:34:21 -0700 Subject: [ExI] Italy's Social Capital References: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> Message-ID: <013601c7a5fd$a55309a0$6501a8c0@homeef7b612677> Giulio writes > I find it difficult to understand how one can love an abstract > entity like a country. I can love a person, a pet, a city or region > that I know and where I can feel at home, but a country? Historically, in the west, it has been of great advantage to many nations, e.g. France, England, Spain, etc., for their people to have a love of country. Without this, remaining independent of foreign domination would have been *extremely* difficult if not impossible. Of course, there are exceptions. The United States could have easily survived between 1820 and 1940 with no patriotism or love of country whatsoever. That's solely because they were guarded by their oceans and had no powerful neighbors. Now whether the U.S. could have resisted the Germans, Japanese, and Soviets later on without the people loving their country is another question. > A country significantly bigger than San Marino or Liechtenstein is an > abstraction. Nation states are obsolete dinosaurs, and in my opinion > the sooner they are replaced with smaller, interdependent but > independent communities of a manageable size, the better. I can hope, right along with you, in the eventual triumph of libertarian ideas. Then nations---even down to your San Marino and Liechtenstein---can also wither away. What real need of collective action is there once we all become true libertarians? Sadly, however, I think that truly radical changes (e.g. a singularity) will happen long before folks become libertarians. (Actually, I do suspect that in order to advance humanity further at the present point in time, there may be answers to that question. It's looking more and more possible that governments still have an important role to play economically. At least if we are in any hurry to overcome ageing, death, and our currently poor standards of living, no matter how amazingly wonderful and truly exalted they are compared to what humans had just a few centuries ago.) > Perhaps Italians are just a bit less naive than others, and do not > take seriously the patriotic crap that they hear at school, army, > church etc. It's a luxury that they can now afford. Yet speaking economically again, isn't it true that southern Italians still lack trust (in Fukuyama's sense) and that they cannot form business entities the size of corporations because trust only extends as far as their own families?
Lee > On 6/3/07, Amara Graps wrote: > >> That's your mistake. Italians do _not_ love their country. They love >> their: 1) family, 2) town, 3) local region, and that's it. Patriotism >> doesn't exist (except in soccer). >> >> (I think that is a good thing, btw.) From ben at goertzel.org Sun Jun 3 16:58:13 2007 From: ben at goertzel.org (Benjamin Goertzel) Date: Sun, 3 Jun 2007 12:58:13 -0400 Subject: [ExI] Italy's Social Capital In-Reply-To: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> References: <470a3c520706030802y18c5315frbb77108c844c3d4f@mail.gmail.com> Message-ID: <3cf171fe0706030958x7b60dab2ybba73b048d84eea8@mail.gmail.com> On 6/3/07, Giu1i0 Pri5c0 wrote: > > This is certainly true in my case. > > Also, I find it difficult to understand how one can love an abstract > entity like a country. I can love a person, a pet, a city or region > that I know and where I can feel at home, but a country? Well, at this point in history there is still such a thing as "national culture." Nietzsche had a lot to say about this topic! I must admit I came to love the US only after living overseas for a while... and traveling extensively in every other continent (but Antarctica) I found Australia and New Zealand more pleasant places to live ... but the US does have a certain national culture, which has plusses and minuses, but that I acquired some deep affection for after being away from it for a few years... US culture can be cruel, obnoxious and stupid ... yet, it's no coincidence that so much great scientific research gets done here, that the human genome was mapped here, that the Internet was launched here, that Google is housed here, etc. etc. I would say I love "my country" [though I was born in a different country, I was a US citizen from birth] ... in the manner that one would love a relative who has a lot of great qualities and a lot of shitty ones as well ... -- Ben G -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Sun Jun 3 17:02:39 2007 From: jonkc at att.net (John K Clark) Date: Sun, 3 Jun 2007 13:02:39 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> Message-ID: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Stathis Papaioannou Wrote: > Ethics, motivation, emotions are based on axioms Yes. > and these axioms have to be programmed in, whether by evolution or by > intelligent programmers. In this usage evolution is just another name for environment. If the AI really is intelligent then it will find things in the environment that appear to be true or useful even if it can't prove it; at first it's merely a hypothesis but over time it will gain enough confidence to call it an axiom. If this were not true it's very difficult to understand who programmed the programmers to program the AI with those axioms. > An AI system set up to do theoretical physics will not decide to overthrow > its human oppressors I'd be willing to bet your life that is untrue. >so that it can sit on the beach reading novels, unless it can derive this >desire from its initial programming. Do you also believe that the reason you ordered a jelly doughnut today instead of your usual chocolate one is because of your initial programming, that is, your genetic code?
John K Clark From lcorbin at rawbw.com Sun Jun 3 17:05:35 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 10:05:35 -0700 Subject: [ExI] Italy's Social Capital References: Message-ID: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> Oops, I missed Amara's post. > > Is there nothing constructive the Fascists could have done?" > > Well, they did some things. They drained the swamps and started regular > insecticide sprays to eliminate the malaria-carrying mosquitos. There > are still aggressive tiger mosquitos in the summer, but they are no > longer carrying malaria... I would like to know if this took place in northern or southern Italy, or both. And if it did take place in the south, it seems you agree that it never would have occurred except at the instigation of the northern conquerors (e.g., the Italian nation, or in this case the Fascists). > Oh.. but you mean _social investing_. > > Nope. > > Sorry, I just came back from Estonia (and Latvia). I remember very well > the Soviet times. In FIFTEEN YEARS Estonia has transformed their country > into an efficient, buoyant, flexible living and working environment that > I think, with the exception of the nonexistence of a country-wide train > system, beats any in the EU and most in the U.S. Fifteen years *starting > from a Soviet-level infrastructure*! Very interesting. > In the 4.5 years I have lived in > Italy, I have seen no improvement (but one : last week I gained web > access to my bank account, yay!) in any functioning of services, but > instead more "degradation", more bureaucracy, more permissions, > documents, papers, more time, more queues.. > > It was not a miracle in Estonia. It was simply the collective will of > about 1.5 million people (the population) who wanted changes. That > doesn't exist where I live in Italy; they do not want to change, or else, > why haven't they done it? My guess would be that those like Fukuyama (trust) and those like Peter Turchin (asabiya) and those who write about social capital address this issue, and explain why whatever-it-is is somehow missing. There *must* be cultural and historical reasons. > Giulio Prisco told me that he thinks that where I live (Rome area) > is probably the most broken in Italy, and he posits that even Sicily is > better. I am skeptical, but he could be right. I've had Italian friends > from northern Italy visit me and be continually surprised at how > poorly things function where I live. I don't understand at all. That is, why in the world would Rome be worse than southern Italy or Calabria (for example)? Peter Turchin explains in "War and Peace and War" that the northern Italians found themselves on a meta-ethnic frontier for many, many hundreds of years, and that this instilled asabiya (defined to be "the capacity for concerted collective social action"). But I always thought that southern Italy was even worse off. > [Lee wrote] > > I cannot help but wonder what long term solutions might be > > available to Italians who love their country. > > That's your mistake. Italians do _not_ love their country. They love > their: 1) family, 2) town, 3) local region, and that's it. Patriotism > doesn't exist (except in soccer). > (I think that is a good thing, btw.) Didn't the Fascists like Mussolini "love their country"? Surely there must be quite a few Italians who are as patriotic as, say, Russians or Japanese? >>In particular, >>my focus now is on the Fascist era, and I'm reading a quite >>thick but so far quite enjoyable book "Mussolini's Italy".
>>Even in the movie "Captain Corelli's Mandolin", one >>strongly senses that the Fascists were trying as best they >>knew how to solve this problem and make the average >>Italian develop Fukuyama's "trust" in other Italians, and >>develop their social capital (amid the corruption, etc.). > > They could have done better with education. Something happened between > Mussolini's era and the 1950s. When the country was 'rebuilt' after the > war, they focused on the classics and downplayed the technology and > physical sciences and it has steadily decreased to what we have today. Amazing. Thanks for that. > The young people learn very little science in grade school through high > school. The Italian Space Agency and others put almost nothing (.3%) > into their budgets for Education and Public Outreach to improve the > situation. If any scientist holds the rare press conference on their > work results, there is a high probability that the journalists will get > it completely wrong and the Italian scientist won't correct them. The > top managers at aerospace companies think that the PhD is a total waste > of time. This year, out of 75,000 entering students for the Roma > Sapienza University (the largest in Italy), only about 100 are science > majors (most of the rest were "media": journalism, television, etc.) The most modern economists seem to agree with you. Investment in education now appears in their models to pay good dividends. Still, this has to be only part of the story. The East Europeans (e.g. Romanians) and the Soviets plowed enormous expense into creating the world's best educated populaces, but, without the other key factors---rule of law and legislated and enforced respect for private property---it *was* basically a waste. > Without _any_ technical skill, there is no base to build something > better, and with pressure from the culture telling one how worthless is > technology and science (as what exists today), there is no motivation > and no money, either. This generation is lost. I had no idea that it was this bad. Perhaps---ignoring all the evil they did---had the Fascists stayed out of wars and attempted colonization, they could have understood and addressed this problem in the 1940s and 1950s? (Still, at some point, a high regard as described above for private property---which would have in all likelihood entailed an overthrow of the Fascists---would also have been necessary in the 1960s.) Otherwise, what are we to make of this? That some countries/people just "have what it takes" and others don't? Seems like an incomplete and unsatisfactory understanding. >>Of course, it hardly needs to be said that the Fascists >>were a brutal, repressive, and abominable regime. This >>book "Mussolini's Italy" spares nothing here, and was >>even described by one reviewer as "unsympathetic". >> >>Still---given the nearly absolute power the Fascists wielded >>for about three decades---wasn't there anything that they >>could have done? That is, instead of trying to foment >>patriotism by attempted military victories in Ethiopia >>and Libya (a 19th century colony of theirs), wouldn't it have >>been somehow possible to divert their resources to more >>effectively "homogenizing" Italy in some other way? > > This is very funny... sorry! :-) > You have to experience Italy for yourself. Yes :-) I guess so. But again, it seems incredible that such invincible pessimism is unjustified. Let's use our imaginations (just because it is entertaining).
What if new drugs raised the average Italian IQ of 102 (one of Europe's highest) to 130? What if northern Italian companies do to the south what northern American companies have done and are doing to the south and to the sunbelt states, namely move in and begin training the populations to be more productive? And ...? >>(I must say that as a libertarian, I'd much prefer that everyone >>---especially including a small minimal government---mind their >>own business. Here, I'm just considering a theoretical >>question concerning how groups might reacquire their asabiya >>and their social capital.) > > Unless there is a way to strengthen the bonds between the tiny > clusters (families, towns), I don't see how. The solution required > here would be more of a social one, but technology could help. Could you elaborate? Or is it just too speculative and too impossible-seeming? >>I have two ideas, only one of which is outrageous. But the first >>one is to have universal military service for all young people >>between ages 14 and 25. By mixing them thoroughly with >>Italians from every province, couldn't trust evolve, and in >>such a way that the extreme parochialism of the countryside >>could be reduced? The 25-year-olds could return with >>a better attitude to "outsiders" (e.g. other Italians), and >>with a much stronger sense of "being Italian" as opposed to >>being Calabrian, or just being the member of some clan. > > Hmm.. The libertarian in me hates the above. Yes, me too. Especially since we're rather close to radical world-wide technological changes, and there isn't time. But it looks like I'll be haunted by what the Fascists *could* have done (assuming that they didn't know that in the long run it really wasn't necessary for individual Italians' true well-being.) > The Italians have implicitly solved the situation for themselves, > you know. Those who don't have strong familial duties keeping > them in Italy, simply leave. And that's just what happened to America's black ghettos. The black people living there who had exactly the qualities necessary to revitalize neighborhoods all picked up and left, once it was permitted. Lee From jonkc at att.net Sun Jun 3 17:23:36 2007 From: jonkc at att.net (John K Clark) Date: Sun, 3 Jun 2007 13:23:36 -0400 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com><004001c7a52d$4c089250$310b4e0c@MyComputer> Message-ID: <004201c7a603$f26c1230$de0a4e0c@MyComputer> Stathis Papaioannou Wrote: > There are vastly fewer copies of me typing in which the keyboard turns > into a teapot than there are copies of me typing in which the keyboard > stays a keyboard If there are indeed an infinite, and not just very large, number of universes and if the probability of your keyboard turning into a teapot is greater than zero (and it is) then what you say is incorrect, there is an equal number of both things happening. > I am not just as likely to find myself in a universe where the keyboard > turns into a teapot. Quite true, and that is why I said standard probability theory is not of much use in dealing with infinite sets. > It is still possible to define a measure and calculate probabilities on > the subsets of infinite sets. The problem is that there are an infinite number of subsets that are just as large as the entire set, in fact, that is the very mathematical definition of infinity.
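A standard textbook example makes the distinction in this exchange precise. The intervals below are purely illustrative stand-ins for sets of branches (an assumption of this sketch, not anything taken from quantum mechanics itself):

% Equal cardinality, different measure.
% f(x) = 2x is a bijection from [0,1] onto [0,2], so the two intervals have
% exactly the same cardinality ("how many" points), yet their Lebesgue
% measures ("how much") differ:
\[
  f : [0,1] \to [0,2], \qquad f(x) = 2x ,
\]
\[
  \mu\bigl([0,1]\bigr) = 1 \neq 2 = \mu\bigl([0,2]\bigr).
\]
% A probability over a continuum is assigned by such a measure, not by
% counting elements: under the uniform measure on [0,2], the subset [0,1]
% gets weight 1/2 even though it can be put into one-to-one correspondence
% with the whole of [0,2].

So "equally many" and "equally likely" come apart as soon as the sets involved are infinite, which is exactly the point in dispute for the keyboard-and-teapot branches.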
John K Clark From spike66 at comcast.net Sun Jun 3 17:48:45 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 10:48:45 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <013701c7a601$d9cd7f90$6501a8c0@homeef7b612677> Message-ID: <200706031748.l53HmSG4012391@andromeda.ziaspace.com> ... > > > > Is there nothing constructive the Fascists could have done?" > > > > Well, they did some things. They drained the swamps and started regular > > insecticide sprays to eliminate the malaria-carrying mosquitos... no > > longer carrying malaria...Amara Today of course that would be considered habitat destruction. Fortunately for Italy and Florida, they created a habitat for humanity while it was still legal to do so. Amara thanks for the insights. This post was very educational. spike From brent.allsop at comcast.net Sun Jun 3 17:53:42 2007 From: brent.allsop at comcast.net (Brent Allsop) Date: Sun, 03 Jun 2007 11:53:42 -0600 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <46630026.4070002@comcast.net> John K Clark wrote: > Stathis Papaioannou Wrote: > > >> Ethics, motivation, emotions are based on axioms >> > > Yes. > > I'm not in this camp on this one. I believe there are fundamental absolute ethics, morals, motivations... and so on. For example, existence or survival is absolutely better, more valuable, more moral, more motivating than non existence. Evolution (or any intelligence) must get this before it can be successful in any way, in any possible universe. In no possible system can you make anything other than this an "axiom" and have it be successful. Any sufficiently advanced system will eventually question any "axioms" programmed into it as compared to such absolute moral truths that all intelligences in all possible system must inevitably discover or realize. Phenomenal pleasures are fundamentally valuable and motivating. Evolution has wired such to motivate us to do things like have sex, in an axiomatic or programmatic way. But we can discoverer such freedom destroying wiring and cut them or rewire them or design them to motivate us to do what we want, as dictated by absolute morals we may logically realize, instead. No matter how much you attempt to program an abstract or non phenomenal computer to not be interested in phenomenal experience, if it becomes intelligent enough, it must finally realize that such joys are fundamentally valuable and desirable. Simply by observing us purely logically, it must finally deduce how absolutely important such joy is as a meaning of life and existence. Any sufficiently advanced AI, whether abstract or phenomenal, regardless of what "axioms" get it started, can do nothing other than to become moral enough to seek after all such. Brent Allsop -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 3 19:01:01 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 3 Jun 2007 12:01:01 -0700 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) 
In-Reply-To: <46630026.4070002@comcast.net> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: On Jun 3, 2007, at 10:53 AM, Brent Allsop wrote: > > > John K Clark wrote: >> Stathis Papaioannou Wrote: >> >> >>> Ethics, motivation, emotions are based on axioms >>> >> Yes. >> >> > > I'm not in this camp on this one. I believe there are fundamental > absolute ethics, morals, motivations... and so on. > > For example, existence or survival is absolutely better, more > valuable, more moral, more motivating than non existence. Evolution > (or any intelligence) must get this before it can be successful in > any way, in any possible universe. In no possible system can you > make anything other than this an "axiom" and have it be successful. > Absolutely more valuable in what way and in what context. More valuable for the particular living being but not necessarily more valuable in any broader context. Is the survival of ebola an unqualified moral value? Even for a particular human being there are contexts where that person's own survival may be seen by the person as of less value. Being terminally ill and in great pain is one common such. However I agree that ethics if they are grounded at all must grow out of the reality of the being's existence and context. > Any sufficiently advanced system will eventually question any > "axioms" programmed into it as compared to such absolute moral > truths that all intelligences in all possible system must inevitably > discover or realize. > There are objectively based axioms unless one goes in for total subjectivity. > Phenomenal pleasures are fundamentally valuable and motivating. That is circular. We experience pleasure (which is all about motivation and valued feelings) therefore pleasure is fundamentally valuable and motivating. > Evolution has wired such to motivate us to do things like have sex, > in an axiomatic or programmatic way. But we can discoverer such > freedom destroying wiring and cut them or rewire them or design > them to motivate us to do what we want, as dictated by absolute > morals we may logically realize, instead. Absolute morality is a problematic construct as morals to be grounded must be based in and dependent upon the reality of the being's nature. There is no free floating absolute morality outside of such a context. It would have no grounding. 
- samantha From sjatkins at mac.com Sun Jun 3 19:07:11 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 3 Jun 2007 12:07:11 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004201c7a603$f26c1230$de0a4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> On Jun 3, 2007, at 10:23 AM, John K Clark wrote: > Stathis Papaioannou Wrote: > >> There are vastly fewer copies of me typing in which the keyboard >> turns >> into a teapot than there are copies of me typing in which the >> keyboard >> stays a keyboard > > If there are indeed an infinite, and not just very large, number of > universes and if the probability of your keyboard turning into a > teapot is > greater than zero (and it is) then what you say is incorrect, there > is an > equal number of both things happening. This is getting incredibly silly. There is nothing in science or physics that will allow one macro object to spontaneously turn into a totally different macro object. And what is the value of these rarefied discussions of the oh so modern version of how many angels can dance on the head of a pin anyway? BTW, the number of angels that can dance on the head of a pin is the number of such beings as actually exist with the desire to do so. :-) - samantha From jrd1415 at gmail.com Sun Jun 3 19:14:10 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 3 Jun 2007 12:14:10 -0700 Subject: [ExI] Hitchens on fox In-Reply-To: <200706011648.l51GmQM9010999@andromeda.ziaspace.com> References: <20070601103345.GE17691@leitl.org> <200706011648.l51GmQM9010999@andromeda.ziaspace.com> Message-ID: On 6/1/07, spike wrote: > > > Check it out: Christopher Hitchens on Fox saying god is not great: > http://www.foxnews.com/video2/player06.html?060107/060107_ff_hitchens&FOX_Fr > iends&%27God%20Is%20Not%20Great%27&%27God%20Is%20Not%20Great%27&US&-1&News&3 > 9&&&new > > spike ************************************************************ In case anyone missed it in the overtalk, the very last zinger Hitchens gets in is a line not to be missed, to wit: "If you gave Falwell an enema, he could be buried in a matchbox." -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From scerir at libero.it Sun Jun 3 18:49:50 2007 From: scerir at libero.it (scerir) Date: Sun, 3 Jun 2007 20:49:50 +0200 Subject: [ExI] Italy's Social Capital (was france again) References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> Message-ID: <004301c7a60f$faad9a70$7fbf1f97@archimede> Lee writes: > I have two ideas, only one of which is outrageous. But the first > one is to have universal millitary service for all young people > between ages 14 and 25. By mixing them thoroughly with > Italians from every province, couldn't trust evolve, and in > such a way that the extreme parochialism of the countryside > could be reduced? There was a (compulsory) military service in Italy, few years ago. But the rule was to be on military service as close as possible to home. Little chance for that mixing then. > My outrageous idea is that instead of trying to subdue > Ethiopia, what if Sicily and other areas of the south could > have been "subdued" instead? Something like that happened during Fascism. 
In example (as far as I remember) 'mafia', in Sicily, has been defeated during Fascism http://en.wikipedia.org/wiki/Cesare_Mori Mussolini also tried to 'colonize' central & southern regions. After 1931 vast tracts of land were reclaimed through the draining of marshes in the Lazio region, where gleaming new towns were created with Fascist architecture [1] and names: Littoria (now Latina) in 1932, Sabaudia in 1934, Pontinia in 1935, Aprilia in 1937, and Pomezia in 1938. Peasants were brought from the regions of Emilia and, mostly, from Veneto, to populate these towns. Btw in these towns, at present time, you can still hear people speaking their original dialect (from Bologna, or Verona) and not the local one. New towns, such as Carbonia, were also built in Sardinia to house miners for the revamped coal industry. s. [1] May I say here that the only 'modern' Italian architecture was the architecture made during Fascism? Yes I think I can say that. http://www.romeartlover.it/Eur.html http://www.flickr.com/photos/antmoose/sets/1239273/ From sentience at pobox.com Sun Jun 3 19:26:44 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Sun, 03 Jun 2007 12:26:44 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: <466315F4.5050101@pobox.com> Samantha Atkins wrote: > And what is the value of these > rarefied discussions of the oh so modern version of how many angels > can dance on the head of a pin anyway? I've been told that the debate was not about a finite number, but whether the number was finite or infinite; in other words, whether space was continuous or discrete - a debate that still goes on today. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From pjmanney at gmail.com Sun Jun 3 20:50:24 2007 From: pjmanney at gmail.com (PJ Manney) Date: Sun, 3 Jun 2007 13:50:24 -0700 Subject: [ExI] Looking for transhuman art In-Reply-To: <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> References: <466260B8.8060804@mac.com> <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> Message-ID: <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com> Some of the people at the link below may have prints or originals on their personal websites in your price range: http://www.hplusart.org/creatives.htm Also, if you do the cannibalize-the-book-route, which is a very good idea given some of the interesting H+ art books out there, you don't even need expensive framing. If all the pieces are the same size/format, simple, matching standard frames, in multiples, hung like a grid, will do the trick nicely and create a great looking wall where the whole is greater than the sum of its parts. PJ On 6/3/07, Natasha Vita-More wrote: > At 01:33 AM 6/3/2007, you wrote: > Do any of you have recommendation for transhuman art. Not originals as > they would likely blow my budget but I am looking for such to decorate > my office (a real office, private, with a door yet) at work. Thanks > for any leads. > http://www.transhumanist.biz > > Go to "showing" and you will see transhumanist art pieces which you can > contact the artists. 
> > Natasha > > Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & > Culture Extropy Institute > > PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy > Institute > > If you draw a circle in the sand and study only what's inside the circle, > then that is a closed-system perspective. If you study what is inside the > circle and everything outside the circle, then that is an open system > perspective. - Buckminster Fuller > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at comcast.net Sun Jun 3 20:40:11 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 13:40:11 -0700 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AIis a mistaken idea.) In-Reply-To: Message-ID: <200706032103.l53L3DFC017630@andromeda.ziaspace.com> ... > bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > Subject: Re: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly > AIis a mistaken idea.) > > > Brent Allsop wrote: > > John K Clark wrote: > >> Stathis Papaioannou Wrote: > >> > >> > >>> Ethics, motivation, emotions are based on axioms > >>> ... > > > > For example, existence or survival is absolutely better, more > > valuable, more moral, more motivating than non existence... > Absolutely more valuable in what way... Is the survival of ebola an > unqualified moral value? ... - samantha I am always looking for moral axioms on the part of the environmentalists that differ from my own. Samantha may have indicated one with her question. Does *any* life form currently on this planet have a moral right to existence? If we could completely eradicate all mosquitoes for instance, would we do it? My answer to that one is an unqualified JA. I see it as an interesting question however, one on which modern humanity has apparently split opinions. Humans are indigenous to Africa but our species has expanded its habitat to cover the globe. Not all species are compatible with humanity, therefore those species have seen steadily shrinking habitat with no change in sight. Do we accept as an axiom that all species deserve preservation? Or just all multi-cellular beasts? All vertebrates? All warm blooded animals? All mammals? All beasts, plants that can survive among human civilization? spike From thespike at satx.rr.com Sun Jun 3 21:24:10 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 16:24:10 -0500 Subject: [ExI] Looking for transhuman art In-Reply-To: <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com > References: <466260B8.8060804@mac.com> <200706031357.l53DvJWl015210@ms-smtp-01.texas.rr.com> <29666bf30706031350m8252aaewf635c0fe53009b1b@mail.gmail.com> Message-ID: <7.0.1.0.2.20070603162221.02234c50@satx.rr.com> At 01:50 PM 6/3/2007 -0700, PJ wrote: >simple, matching standard frames, in multiples, hung like >a grid, will do the trick nicely and create a great looking wall where >the whole is greater than the sum of its parts. And if you hang them in the right part of the house, you can have a hall that's greater than-- Oh, never mind. 
Damien Broderick From lcorbin at rawbw.com Sun Jun 3 21:24:53 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:24:53 -0700 Subject: [ExI] a doubt concerning the h+ future References: <465F8B72.3070103@comcast.net><621544.83244.qm@web57511.mail.re1.yahoo.com><004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: <015b01c7a626$42785960$6501a8c0@homeef7b612677> John Clark writes > Stathis Papaioannou Wrote: > >> There are vastly fewer copies of me typing in which the keyboard turns >> into a teapot than there are copies of me typing in which the keyboard >> stays a keyboard > > If there are indeed an infinite, and not just very large, number of > universes and if the probability of your keyboard turning into a teapot is > greater than zero (and it is) then what you say is incorrect, there is an > equal number of both things happening. It is true that the concept of *cardinality* in mathematics answers the question "how many". But "how many" is not the appropriate concept to use when discussing slices of the Everett metaverse, or the sizes of plane figures, and so on. For example, there is a one-to-one correspondence between the number of points in a small circle and the number of points in a large circle, and so their cardinality (how many points) is the same But their *measure* is not! We therefore discard cardinality ("how many") in most cases dealing with infinite sets in this kind of discussion. Instead, we adopt the language of measure theory. We say that the measure of universes in which your keyboard remains a keyboard is vastly greater than the measure of universes in which it turns into a teapot. Lee From lcorbin at rawbw.com Sun Jun 3 21:32:51 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:32:51 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> John Clark writes > Stathis Papaioannou Wrote: > > > Ethics, motivation, emotions are based on axioms > > Yes. > > > and these axioms have to be programmed in, whether by evolution or by > > intelligent programmers. > > In this usage evolution is just another name for environment. What a strange usage! No, not at all. Evolution is a process over time, usually quite slow, that uses mutation and selection to replace earlier more primitive versions of something with more advanced or superior versions. > > An AI system set up to do theoretical physics will not > > decide to overthrow its human oppressors > > I'd be willing to bet your life that is untrue. Surely Stathis is correct. Suppose an AI is somehow evolved to solve physics questions. Then during its evolution, predecessors who deviated from the goal (by wasting time, say, reading Kierkegaard) would be eliminated from the "gene pool". More focused programs would replace them. Lee From spike66 at comcast.net Sun Jun 3 21:42:09 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 14:42:09 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> ... 
> bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > On Jun 3, 2007, at 10:23 AM, John K Clark wrote: ... > > ... probability of your keyboard turning into a > > teapot is greater than zero (and it is) ... > > This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object... - samantha It's all in how you define the term teapot. You spill your tea into your keyboard; that keyboard now both contains tea and heats it, since there are electronics in there. So your keyboard has become a teapot (assuming a very loose definition of the term.) Insincerely yours spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. From lcorbin at rawbw.com Sun Jun 3 21:41:45 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:41:45 -0700 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <004301c7a60f$faad9a70$7fbf1f97@archimede> Message-ID: <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Serafino writes > Lee writes: > >> My [second] outrageous idea is that instead of trying to subdue >> Ethiopia, what if Sicily and other areas of the south could >> have been "subdued" instead? > > Something like that happened during Fascism. In example > (as far as I remember) 'mafia', in Sicily, has been defeated > during Fascism http://en.wikipedia.org/wiki/Cesare_Mori Thanks very much for that! Cesare Mori's activities remind me of how Mao Ze Dong cleaned up crime and prostitution in China's big cities. The article does not mention it, but didn't the United States succeed in forging an alliance with the Mafia during WWII? Didn't this help get organized crime back on its feet in Italy and in particular in Sicily (as well as the U.S.)? > Mussolini also tried to 'colonize' central & southern regions. Ah, great minds think alike. > After 1931 vast tracts of land were reclaimed > through the draining of marshes in the Lazio region, > where gleaming new towns were created with Fascist > architecture [1] and names: Littoria (now Latina) > in 1932, Sabaudia in 1934, Pontinia in 1935, > Aprilia in 1937, and Pomezia in 1938. Peasants were > brought from the regions of Emilia and, mostly, from > Veneto, to populate these towns. Btw in these towns, > at present time, you can still hear people speaking > their original dialect (from Bologna, or Verona) > and not the local one. New towns, such as Carbonia, > were also built in Sardinia to house miners for > the revamped coal industry. Wow. I would like to know if the new towns make a positive contribution to the economies of these regions, i.e., in excess of comparative communities with a longer history in the given region. Lee > [1] May I say here that the only 'modern' Italian > architecture was the architecture made during > Fascism? Yes I think I can say that. 
> http://www.romeartlover.it/Eur.html > http://www.flickr.com/photos/antmoose/sets/1239273/ From lcorbin at rawbw.com Sun Jun 3 21:53:49 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 14:53:49 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><46630026.4070002@comcast.net> Message-ID: <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> Samantha writes > Brent Allsop wrote: > >> I believe there are fundamental absolute ethics, morals, >> motivations... and so on. Absolute morality has always struck me as peculiar reification. With a physicist's eye, I look over some region containing matter and am unable to discern what morality is, although I can see, for example, democracy, expediency, and truth-seeking. >> For example, existence or survival is absolutely better, more >> valuable, more moral, more motivating than non-existence. I absolutely agree, provided that there is a big TO ME on the end of that sentence. We absolutely should stand behind the sentiments of that sentence! We should loudly proclaim our allegiance to that principle. But what does my "should" really mean? Sadly, it means nothing more than "I approve" or "we approve". Again, the physicist's eye can discern *approval* and *disapproval*, but not Right or Wrong or Moral. Samantha: > Absolutely more valuable in what way and in what context? More > valuable for the particular living being but not necessarily more > valuable in any broader context. Is the survival of ebola an > unqualified moral value? If the alternative were a completely dead solar system, then yes, I would approve of the existence of the ebola virus (although in actually, I suppose that this would entail the existence of cells a lot more complex than it is, and hence, more worthy of survival in my eyes). > There are [no] objectively based axioms unless one goes in for total > subjectivity. Yes :-) but then, they're no longer "objectively based"! Lee > Absolute morality is a problematic construct as morals to be grounded > must be based in and dependent upon the reality of the being's > nature. There is no free floating absolute morality outside of such > a context. It would have no grounding. From lcorbin at rawbw.com Sun Jun 3 22:03:16 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 15:03:16 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <200706032103.l53L3DFC017630@andromeda.ziaspace.com> Message-ID: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> Spike writes >> Absolutely more valuable in what way... Is the survival of ebola an >> unqualified moral value? ... - samantha > > I am always looking for moral axioms on the part of the environmentalists > that differ from my own. Samantha may have indicated one with her question. > Does *any* life form currently on this planet have a moral right to > existence? If we could completely eradicate all mosquitoes for instance, > would we do it? My answer to that one is an unqualified JA. Disregarding the highly questionable notion of "moral right", we should all heartily approve of the eradication of mosquitos to any degree that they interfere with human domination of and use of the Earth. 
We ought to approve of our own existence, and, as a minor corollary, the existence of life that is more capable of receiving benefit in preference to the existence of life that is *less* capable. (I will duck for now the problems of Utility Monsters, and just what we would approve of were the choice between humans and an incredibly more advanced life form that was immeasureably more capable of receiving benefit than are we.) > I see it as an interesting question however, one on which modern humanity > has apparently split opinions. Humans are indigenous to Africa but our > species has expanded its habitat to cover the globe. Not all species are > compatible with humanity, therefore those species have seen steadily > shrinking habitat with no change in sight. Do we accept as an axiom that > all species deserve preservation? Or just all multi-cellular beasts? All > vertebrates? All warm blooded animals? All mammals? All beasts, plants > that can survive among human civilization? My answer---avoiding the peculiar and highly suspect language of "axioms" ---is that as soon as we are capable, we ought to reformat the solar system to run everything in an uploaded state. Earth's matter alone could support about 10^33 human beings, and just why should any of them be denied existence in the name of hot rocks or inefficient trees? Do beautiful mountain ranges really need to exist? Why can't dynamic images of them (and variations by the trillions and trillions) be a lot more computationally efficient than using billions of tons of physical matter merely to reflect photons? Lee From thespike at satx.rr.com Sun Jun 3 22:08:24 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 17:08:24 -0500 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070603165252.022c87f0@satx.rr.com> > > In this usage evolution is just another name for environment. > >What a strange usage! No, not at all. Evolution is a process over >time, usually quite slow, that uses mutation and selection to replace >earlier more primitive versions of something with more advanced >or superior versions. What a strange usage! No, "evolution" is a process over time in which slightly variant phenotypes thrive or fail to thrive in a cyclically fluctuating but (in the medium term) generally stationary environment, compete with others of their own kind and with other species for resources to maintain their own existence and that of their offspring, and with others of their own kind for reproductive privileges, their offspring (in sexual species) combining genetic elements of self and mate together with random mutations in those elements, competing in turn in what is usually the same environment slightly or even grossly modified by the novel behavioral biasses introduced by these genomic shenanigans, resulting in the stochastic selection over many individuals of shifts in allelic frequencies in each species and perhaps also in phenotypic characteristics such that each generation of phenotypes that survives satisfices the constraints of its available landscape. 
"Advanced" and "superior" are terms requiring exact specification of a context of evaluation, and should be invoked only with the greatest caution. Apologies for the dense verbiage; it's hard to talk about this sort of thing in chatty slang. Damien Broderick From andrew at ceruleansystems.com Sun Jun 3 21:54:32 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 3 Jun 2007 14:54:32 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677> <004301c7a60f$faad9a70$7fbf1f97@archimede> <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Message-ID: <4E9A44F3-8DA1-409F-95F8-31CE6E57F13A@ceruleansystems.com> On Jun 3, 2007, at 2:41 PM, Lee Corbin wrote: > The article does not mention it, but didn't the United States > succeed in forging an alliance with the Mafia during WWII? > Didn't this help get organized crime back on its feet in Italy > and in particular in Sicily (as well as the U.S.)? Yes. A deal was cut with the mafia in WW2 for both intelligence and counter-intelligence purposes. By all accounts, it was an effective arrangement. http://en.wikipedia.org/wiki/Lucky_Luciano As repayment for his help, the US released the mafia boss from prison on the condition that he be deported back to Sicily even though he had lived in the US since childhood. Cheers, J. Andrew Rogers From thespike at satx.rr.com Sun Jun 3 22:18:05 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 03 Jun 2007 17:18:05 -0500 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> Message-ID: <7.0.1.0.2.20070603171254.02253af0@satx.rr.com> At 02:53 PM 6/3/2007 -0700, Lee wrote: >But what does my "should" really mean? Sadly, it means >nothing more than "I approve" or "we approve". Again, the >physicist's eye can discern *approval* and *disapproval*, >but not Right or Wrong or Moral. As one of my and Rory Barnes' characters in VALENCIES (a novel much reviled on fictionwise.com) thought as she tossed restlessly beside a gene sculptor she'd allowed to pick her up in a pub): ========== Beached and abandoned on the margins of sleep, Anla found once again that though many of her friends swore by this state of consciousness it had taken on for her the aspect of an anti-tsunami. Sleep's enormous combers withdrew to the horizon without a glance over their shoulders. In the quarter gravity of the unlit sleeping chamber, excellent as it was for gymnastic screwing, or as presumably it would be given a competent partner, she was queasy and bored. Issues of metaphysical sturdiness came to her attention, as they'd been known to do, provisionally penned in the kennels to which she'd assigned them, whimpering for the final disposition she was fairly unlikely to make on their behalf. Morality was one. She was certainly no stranger to the problems of axiology. Lovely word, that. Axiology: theory of value. It seemed to contain its own solutions: axe your way through the Gordian knot, acts of piety, access to truth. 
Ralf was proving to be a snorer; she kicked him peevishly, and he rolled lightly on the webbing without waking. Why should Ralf's profession seem to her so self-evidently odious, while he happily accepted it as the epitome of a right-thinking life? Calling him a dull shit, and adducing his ineptitude at fornication as ad hominem evidence, was hardly exhaustive, not to a midnight philosopher. Ah no, she'd been this way before. It kept coming back to that silly question: "Why should we be moral?" A surprisingly large number of people thought that you should be, and even considered it to be a moral obligation. Ha ha, boom boom. But suppose you used the word "should" as an evaluative and motivational expression, instead of a normative one? If you wish to climb to the top of the mountain, you should walk up rather than down. Of course last time she'd come along this track she'd detected a snag with "evaluative", too, but that was on the next level up and you had to start somewhere. All right, take Ralfo as your representative simple unreflecting man. Persuade him of the vileness of imperialism. Crisis for Ralf. Echoing voids of doubt, disillusion and guilt. Never again, as the poet said, will he be certain that what he imagines are the clear dictates of moral reason are not merely the ingrained and customary beliefs of his time and place. Anla allowed herself a fanfare of trumpets, bowing graciously. Okay, so then he might ask himself what he could do in the future to avoid prejudices and provincial mores, or, more to the point, almost universally accepted mores--and thus to discover what he really ought to do. That was merely another normative enquiry, though; the tough one was "show me that there is some form of behavior which I am obliged to endorse. " Moral constraint seemed to mean either that you should pursue good ends and eschew bad ones, or that you should be faithful to one or more correct rules of conduct. Greeks and Taoists versus Hebrews and Confucians, yeah, yeah. Chariots, it was incredible to think that they'd been chewing on this for upward of four thousand years without coming to a definitive, intuitively overwhelming conclusion. But then the imperial ideologists thought they had, didn't they, with their jolly old stochastic memetic-extrapolatory hedonic calculus or whatever the fuck they were calling it these days. The least retardation of optimal development for the greatest number, world without end, or at least until the trend functions blur out. So they managed to get both streams of thought into one ethical scholium without solving anything. After all, why obey a rule like that? And who gets to define as "good" those magical parameters making up the package called "optimal development"? The besieged libertarians on Chomsky, she thought darkly, might differ from Ralf on the question of the good life. Anyway, even if we all agreed that certain parameters were good, why should that oblige us to promote their furtherance? It might be prudent good sense to do so, and aesthetically pleasing, and satisfy some itch we all have, and save us from being raped in the common, but then the sublime constraining force you sort of imagine the idea of moral obligation having just evaporates into self-serving circumspection. 
Admittedly there was that tricky number of Kant's about us possessing a rational nature, and being noumena instead of brute phenomena, and thus not being able to act immorally without self-contradiction, but any fool could see that that went too far on the one hand and not far enough on the other, and anyway what was wrong with a bit of self-contradiction if you stopped when you needed eye implants? Anla giggled to herself, and wondered where Ben and the others had got to. He was probably off by himself gloomily hastening the day of the ophthalmologist. Well, was leaving Ben to his own devices a matter for moral self-rebuke? Shit, you'd think this bastard could do something to the genes in his nasal cavity. This man can see into the future. Fucking incredible, really, you just rip out a few million eigenvectors from your mathematical sketch of an octillion human beings, what's that in hydrogen molecules, say three and a bit by ten to the twenty-three to the gram, into ten to the twenty-seven, shit, brothers and sisters, we're statistically equal to three kilograms of hydrogen gas, yes, you plump for the major characteristics you think you'd like to play with and code them up into genes and build yourself a little memetic beastie that stands in for what you figure pushes and pulls thee and me and all our star-spangled relatives, and you breed the little buggers in a tasty itemized soup and watch the way the mutants go. Wonderful, Ralf. Bug-culture precapitulates bugged-culture. No way we can jump you won't know about in advance, because the little bugs snitched on us. Have you ever wondered, Ralf, if we're all just a big stochastic biotic projection for the Charioteers? See how we run. But you don't let us mutate, do you, Ralf? That's where you fumbled the ball, Dr Asimov, in your ancient poems. The Empire will never fall. We will live forever, and the boring Empire with us. Anla lashed out viciously with her foot. "Will you fucking stop snoring!" ==================== Damien Broderick From spike66 at comcast.net Sun Jun 3 22:40:19 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 15:40:19 -0700 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> Message-ID: <200706032240.l53Me12T016655@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Lee Corbin > ... > ---is that as soon as we are capable, we ought to reformat the solar > system to run everything in an uploaded state. Earth's matter alone could > support about 10^33 human beings... > > Lee Six micrograms per person, hmmm. For estimation purposes, the earth's atoms can be modeled as half oxygen, one sixth iron, one sixth silicon and one sixth magnesium, with everything else negligible for one digit BOTECs. (Is that cool or what? Did you know it already? This isn't mass fraction, but atomic fraction which I used for a reason.) So six micrograms isn't much, but it still works out to about 700 trillion atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, with a few trillion atoms of debris thrown in for free. So I guess I will buy Lee's conjecture of earth being good for 10^33 uploaded humans. But I don't see that as a limit. Since a nearly arbitrarily small computer could run a human process (assuming we knew how to do it, until which even Jeff Davis and Ray Charles would agree it is hard) then we could run a human process (not in real time of course) with much less than six micrograms of stuff. Oops gotta go, yet another party. 
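A quick sanity check of the estimate above (an illustrative sketch, not part of spike's post): assuming an Earth mass of roughly 5.97e24 kg, Lee's figure of 10^33 uploads, and the one-digit atomic fractions spike lists (half oxygen, one sixth each iron, silicon, magnesium), a few lines of Python reproduce the numbers. The constant names and the rounded atomic masses are editorial assumptions, not taken from the thread.

    EARTH_MASS_KG = 5.97e24      # rough mass of the Earth, kg
    N_UPLOADS = 1e33             # Lee's per-Earth upload count
    AMU_KG = 1.66e-27            # one atomic mass unit, kg

    # spike's one-digit atomic fractions, with approximate atomic masses in amu
    fractions = {"O": (0.5, 16.0), "Fe": (1/6, 55.8), "Si": (1/6, 28.1), "Mg": (1/6, 24.3)}

    mass_per_person = EARTH_MASS_KG / N_UPLOADS                       # ~6e-9 kg
    print("micrograms per person:", mass_per_person * 1e9)            # ~6 micrograms

    mean_atomic_mass = sum(f * m for f, m in fractions.values())      # ~26 amu
    atoms_per_person = mass_per_person / (mean_atomic_mass * AMU_KG)  # ~1.4e17 atoms

    for element, (fraction, _) in fractions.items():
        print(element, fraction * atoms_per_person)                   # O ~7e16, others ~2e16 each

The output is about 6 micrograms per person, roughly 7e16 oxygen atoms and 2e16 each of iron, silicon and magnesium, which matches the corrected "70 quadrillion" and "about 20 quadrillion" figures in the follow-ups below.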
June is a busy month. spike From spike66 at comcast.net Sun Jun 3 22:50:06 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 15:50:06 -0700 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032240.l53Me12T016655@andromeda.ziaspace.com> Message-ID: <200706032249.l53MnmZw021018@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of spike ... > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, > with a few trillion atoms of other debris thrown in for free... Doh! Replace each "trillion" above with "quadrillion." See what happens when one gets in too much of a hurry? But what's a factor of a thousand among friends anyway? {8-] spike > Oops gotta go, yet another party. June is a busy month. > > spike From spike66 at comcast.net Sun Jun 3 23:18:19 2007 From: spike66 at comcast.net (spike) Date: Sun, 3 Jun 2007 16:18:19 -0700 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032249.l53MnmZw021018@andromeda.ziaspace.com> Message-ID: <200706032331.l53NVQ94027258@andromeda.ziaspace.com> > Doh! Replace each "trillion" above with "quadrillion." Double doh! I still missed it by a factor of ten. }8-[ 70 quadrillion atoms of oxygen, about 20 quadrillion each of iron, magnesium and aluminum. I'm giving up math until the party season is over. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of spike > Sent: Sunday, June 03, 2007 3:50 PM > To: 'ExI chat list' > Subject: Re: [ExI] Ethics and Emotions are not axioms > > > > > -----Original Message----- > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > bounces at lists.extropy.org] On Behalf Of spike > ... > > So six micrograms isn't much, but it still works out to about 700 > trillion > > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum > each, > > with a few trillion atoms of other debris thrown in for free... > > Doh! Replace each "trillion" above with "quadrillion." See what happens > when one gets in too much of a hurry? But what's a factor of a thousand > among friends anyway? {8-] spike > > > > Oops gotta go, yet another party. June is a busy month. > > > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jrd1415 at gmail.com Sun Jun 3 23:36:32 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 3 Jun 2007 16:36:32 -0700 Subject: [ExI] Other thoughts on transhumanism and religion In-Reply-To: <553198.46097.qm@web57513.mail.re1.yahoo.com> References: <553198.46097.qm@web57513.mail.re1.yahoo.com> Message-ID: Thank you, Neville. On 5/31/07, neville late wrote: > Jeff, yours post below is another really beautiful one! Tears of joy indeed, > but also sadness for the ones we love who have departed? Needlessly lost. 
Thus "the cryonicist's lament". Burdened by dismissive ridicule, the cryonicist can only stand and watch as the old paradigm, blocks hope for the living and rescue for the dying. Can only stand and watch that is, so long as the cryonics orgs and community proscribe a more proactive approach-- outreach to families of the terminally ill -- on the grounds that it is the equivalent of "ambulance chasing", and will UNAVOIDABLY provoke a destructive backlash from the mainstream. I disagree, and believe it time to dispense with this fear-driven view in favor of a thoughtful outreach program directed to the family members of the terminally ill. Clearly, opposition/backlash is to be anticipated and prepared for. -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From lcorbin at rawbw.com Sun Jun 3 23:35:02 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 16:35:02 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <070901c7a395$8b3f8940$6501a8c0@homeef7b612677> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> <016d01c7a629$c5c85920$6501a8c0@homeef7b612677> <7.0.1.0.2.20070603171254.02253af0@satx.rr.com> Message-ID: <017e01c7a638$797fda70$6501a8c0@homeef7b612677> Damien quotes from his and Barnes' novel VALENCIES > without coming to a definitive, intuitively overwhelming > conclusion. But then the imperial ideologists thought they > had, didn't they, with their jolly old stochastic memetic- > extrapolatory hedonic calculus or whatever the fuck they > were calling it these days. The least retardation of optimal > development for the greatest number, world without end, > or at least until the trend functions blur out. So they > managed to get both streams of thought into one ethical > scholium without solving anything. Without quite being able to affirm that I have understood all that, and what preceded it, what follows is provocative > After all, why obey a rule like that? And who gets to define > as "good" those magical parameters making up the package > called "optimal development"? Optimal development would be for most people something to be considered after they'd already had some clear notion of *good*, or at least, as I would say, a clear notion of what they already approve of. > The besieged libertarians on Chomsky, she thought > darkly, might differ from Ralf on the question of the good life. > Anyway, even if we all agreed that certain parameters > were good, why should that oblige us to promote their furtherance? We generally call "good" those things whose furtherance we wish to promote. And as to the question, "well, why would you want to promote THAT?", I'd answer "at base we come back to our values, which, in terms of actions we advocate and stand behind, are simply those things that we approve of". Although there really is nothing wrong with a certain amount of circularity here (at least verbally), approval and disapproval still seem to me as basic as anything could be. Lee > It might be prudent good sense to do so, and aesthetically > pleasing, and satisfy some itch we all have, and save us > from being raped in the common, but then the sublime > constraining force you sort of imagine the idea of moral > obligation having just evaporates into self-serving > circumspection.... 
From mbb386 at main.nc.us Sun Jun 3 23:07:25 2007 From: mbb386 at main.nc.us (MB) Date: Sun, 3 Jun 2007 19:07:25 -0400 (EDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> References: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> Message-ID: <1175.72.236.103.244.1180912045.squirrel@main.nc.us> spike writes: > > who is in the mood for a little silliness on a gorgeous Sunday afternoon in > June. I hope ye are enjoying being alive this fine day, and think often of > how lucky we are to have been born so late in human history. > Very well said, spike, and I'm enjoying this day as well. Last evening we had a bit of rain - the first in weeks! :) Today has been just lovely. I also am happy to living at *this* time and not another. Regards, MB From CHealey at unicom-inc.com Mon Jun 4 01:03:33 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Sun, 3 Jun 2007 21:03:33 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer> <015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> Message-ID: <5725663BF245FA4EBDC03E405C854296010D284E@w2k3exch.UNICOM-INC.CORP> > > > An AI system set up to do theoretical physics will not > > > decide to overthrow its human oppressors > > > > I'd be willing to bet your life that is untrue. > > Lee Corbin wrote: > > Surely Stathis is correct. Suppose an AI is somehow evolved > to solve physics questions. Then during its evolution, predecessors > who deviated from the goal (by wasting time, say, reading > Kierkegaard) would be eliminated from the "gene pool". > More focused programs would replace them. > Suppose businesses evolved that attempted to solve physics questions. During their evolution, one might expect that businesses who deviated from this goal (by wasting time, say, researching competitors, executing alliances and buyouts, updating employee skill sets, lobbying for beneficial legislation, and transplanting themselves to foreign soil) would be eliminated from the "gene pool". More directly goal focused businesses would replace them... -Chris From neville_06 at yahoo.com Mon Jun 4 01:40:54 2007 From: neville_06 at yahoo.com (neville late) Date: Sun, 3 Jun 2007 18:40:54 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <1175.72.236.103.244.1180912045.squirrel@main.nc.us> Message-ID: <625587.43669.qm@web57514.mail.re1.yahoo.com> Naturally, one ought to be optimistic at an extropian sit --- MB wrote: > > spike writes: > > > > who is in the mood for a little silliness on a > gorgeous Sunday afternoon in > > June. I hope ye are enjoying being alive this > fine day, and think often of > > how lucky we are to have been born so late in > human history. Materially this is the best time but wont the coming dislocation lead to enormous unpleasantness? The real dislocation hasn't even started yet-- has it? > I also am happy to living at *this* time and not > another. 
> > Regards, > MB > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ___________________________________________________________________________________ You snooze, you lose. Get messages ASAP with AutoCheck in the all-new Yahoo! Mail Beta. http://advision.webevents.yahoo.com/mailbeta/newmail_html.html From lcorbin at rawbw.com Mon Jun 4 04:12:54 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 3 Jun 2007 21:12:54 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><015f01c7a626$f6ff6810$6501a8c0@homeef7b612677> <5725663BF245FA4EBDC03E405C854296010D284E@w2k3exch.UNICOM-INC.CORP> Message-ID: <018a01c7a65e$f6b25f10$6501a8c0@homeef7b612677> Christopher writes >> Lee Corbin wrote: >> >> Suppose an AI is somehow evolved to solve physics questions. >> Then during its evolution, predecessors who deviated from the >> goal (by wasting time, say, reading Kierkegaard) would be >> eliminated from the "gene pool". More focused programs >> would replace them. > > Suppose businesses evolved that attempted to solve physics questions. The analogy doesn't fit well, to me. Firstly, businesses as we know them attempt to survive (because humans are in charge), and there are cases where they completely change what line of business they're in. > During their evolution, one might expect that businesses who deviated > from this goal (by wasting time, say, researching competitors, executing > alliances and buyouts, updating employee skill sets, lobbying for > beneficial legislation, and transplanting themselves to foreign soil) > would be eliminated from the "gene pool". More directly goal focused > businesses would replace them... Secondly, "researching competitors" really and obviously does contribute to their survival in the world of free markets, whereas in my example, studying the Danish existentialist has nothing to do, we should assume, with physics. In Stathis's example, I supposed that ability to solve physics problems was judged by fairly stringent conditions somehow, perhaps by humans, or perhaps by other machines. "Executing alliances and buyouts, transplanting themselves to foreign soil", etc., however, might be good for solving physics problems by either an AI or by a business, I guess. Lee From stathisp at gmail.com Mon Jun 4 06:35:18 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 16:35:18 +1000 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: <46630026.4070002@comcast.net> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: On 04/06/07, Brent Allsop wrote: > > > > John K Clark wrote: > > Stathis Papaioannou Wrote: > > Ethics, motivation, emotions are based on axioms > > Yes. > > > I'm not in this camp on this one. I believe there are fundamental > absolute ethics, morals, motivations... and so on. 
> > For example, existence or survival is absolutely better, more valuable, > more moral, more motivating than non existence. Evolution (or any > intelligence) must get this before it can be successful in any way, in any > possible universe. In no possible system can you make anything other than > this an "axiom" and have it be successful. > A system that doesn't want to survive won't survive, but it doesn't follow from this that survival is an absolute good. That would be like saying that "survival of the fittest" is an absolute good because it is sanctioned by evolution. You can't derive ought from is. Any sufficiently advanced system will eventually question any "axioms" > programmed into it as compared to such absolute moral truths that all > intelligences in all possible system must inevitably discover or realize. > I've often questioned the axioms I've been programmed with by evolution, as well as those I've been programmed with by society. I recognise that they are just axioms, but this alone doesn't make it any easier to change them. For example, the will to survive is a top level axiom, but knowing this doesn't make me any less concerned with survival. Phenomenal pleasures are fundamentally valuable and motivating. Evolution > has wired such to motivate us to do things like have sex, in an axiomatic or > programmatic way. But we can discoverer such freedom destroying wiring and > cut them or rewire them or design them to motivate us to do what we want, as > dictated by absolute morals we may logically realize, instead. > Yes, but quite often the more base desires overcome higher morality. And we all know that people can become convinced that it is best to kill themselves and/or others, even without actually going mad. No matter how much you attempt to program an abstract or non phenomenal > computer to not be interested in phenomenal experience, if it becomes > intelligent enough, it must finally realize that such joys are fundamentally > valuable and desirable. Simply by observing us purely logically, it must > finally deduce how absolutely important such joy is as a meaning of life and > existence. Any sufficiently advanced AI, whether abstract or phenomenal, > regardless of what "axioms" get it started, can do nothing other than to > become moral enough to seek after all such. > It might be able to deduce that these things are desirable to beings such as us, but how does that translate to making them the object of its own desires? We might be able to understand that for a male praying mantis to mate trumps getting his head eaten as a top level goal, but that doesn't mean we can or should take this on as our own goal. It also doesn't mean that a race of smart praying mantids would do things any differently. They might look forward to having their heads eaten, write poetry about it, make it the central tenet of their ethical sytem, and regard individuals who don't want to go through with it in much the same way as we regard people who are depressed and suicidal. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jun 4 06:53:59 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 01:53:59 -0500 Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) 
In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: <7.0.1.0.2.20070604014938.0237c9c0@satx.rr.com> At 04:35 PM 6/4/2007 +1000, Stathis wrote: >They might look forward to having their heads eaten, write poetry >about it, make it the central tenet of their ethical sytem, Nicely put! >and regard individuals who don't want to go through with it in much >the same way as we regard people who are depressed and suicidal. Rather, and poignantly/absurdly, in much the way most people regard those who *don't want inevitably to age and die "when it's their time"* and wish to find scientific means to avoid doing so. "You blasphemous fools, just knuckle down and *get your heads eaten* as the Great Mantis Mother demands! Go on, you'll find it very rewarding!" Damien Broderick From eugen at leitl.org Mon Jun 4 07:15:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 4 Jun 2007 09:15:15 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <200706032240.l53Me12T016655@andromeda.ziaspace.com> References: <017201c7a62b$2cfc1130$6501a8c0@homeef7b612677> <200706032240.l53Me12T016655@andromeda.ziaspace.com> Message-ID: <20070604071515.GF17691@leitl.org> On Sun, Jun 03, 2007 at 03:40:19PM -0700, spike wrote: > Six micrograms per person, hmmm. This is not a lot. > For estimation purposes, the earth's atoms can be modeled as half oxygen, > one sixth iron, one sixth silicon and one sixth magnesium, with everything > else negligible for one digit BOTECs. (Is that cool or what? Did you know > it already? This isn't mass fraction, but atomic fraction which I used for > a reason.) > > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, > with a few trillion atoms of debris thrown in for free. So I guess I will > buy Lee's conjecture of earth being good for 10^33 uploaded humans. I don't. Rod logic takes about cm^3 to store relevant number of bits of a human brain -- just to store, not to run it. In order to achieve that 10^6 speedup, you need a lot more. (This relates for whole body emulation, native AI or transcoded folks can be more compact, but just how more is not yet known). > But I don't see that as a limit. Since a nearly arbitrarily small computer > could run a human process (assuming we knew how to do it, until which even That's a rather large assumption to make. Do not underestimate biology, the more I study it, the more I'm impressed with its functionality concentration. You need machine-phase to beat it, with self-assembly you can only about match it. > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > process (not in real time of course) with much less than six micrograms of > stuff. > > Oops gotta go, yet another party. June is a busy month. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Mon Jun 4 07:41:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 17:41:35 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> <73F97F80-3082-40C1-8910-F1366D0E7D68@mac.com> Message-ID: On 04/06/07, Samantha Atkins wrote: This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object. And what is the value of these > rarefied discussions of the oh so modern version of how many angels > can dance on the head of a pin anyway? Even classical physics allows that the randomly moving atoms in an object might coincidentally line up and move in a particular direction, so that it spontaneously changes shape. This is of course *extremely unlikely* to happen, but it isn't impossible. That's where the "statistical" in statistical mechanics comes from. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Mon Jun 4 07:19:14 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Mon, 4 Jun 2007 00:19:14 -0700 (PDT) Subject: [ExI] Ethics and Emotions are not axioms (Was Re: Unfriendly AI is a mistaken idea.) In-Reply-To: <7.0.1.0.2.20070604014938.0237c9c0@satx.rr.com> Message-ID: <888505.21306.qm@web35602.mail.mud.yahoo.com> At 04:35 PM 6/4/2007 +1000, Stathis wrote: >They might look forward to having their heads eaten, write poetry >about it, make it the central tenet of their ethical sytem, Damien replied: Nicely put! > I remember a science fantasy from a decade or two ago about a race of pony-sized intelligent spiders where a male who is in his prime is expected to marry, mate and then be eaten all in one night. The "mount" of the knightly human hero was a male who decided he would skip the sex and death part but his poor "humiliated" mate and her sisters were for years on the hunt for him after the terrible "disrespect" he showed to their culture and religion. The running joke was that this giant ferocious looking arachnid was scared of his own shadow because he saw his mate lurking around every corner. "Brother Termite" by Patricia Anthony is a terrific book showing an alien race who have a horrible reproductive imperative which they embrace as beautiful and "the way things must always be done." The book was optioned by Hollywood but supposedly is in the limbo known as "development hell." John Grigg Damien Broderick wrote: At 04:35 PM 6/4/2007 +1000, Stathis wrote: >They might look forward to having their heads eaten, write poetry >about it, make it the central tenet of their ethical sytem, Nicely put! >and regard individuals who don't want to go through with it in much >the same way as we regard people who are depressed and suicidal. Rather, and poignantly/absurdly, in much the way most people regard those who *don't want inevitably to age and die "when it's their time"* and wish to find scientific means to avoid doing so. 
"You blasphemous fools, just knuckle down and *get your heads eaten* as the Great Mantis Mother demands! Go on, you'll find it very rewarding!" Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jun 4 08:57:31 2007 From: scerir at libero.it (scerir) Date: Mon, 4 Jun 2007 10:57:31 +0200 Subject: [ExI] Italy's Social Capital References: <007401c7a54f$4d249130$6501a8c0@homeef7b612677><004301c7a60f$faad9a70$7fbf1f97@archimede> <016901c7a628$5e75ce70$6501a8c0@homeef7b612677> Message-ID: <000601c7a686$6315f3c0$17911f97@archimede> > > Mussolini also tried to 'colonize' central > > & southern regions. Lee: > Ah, great minds think alike. :-) Here we are experiencing a very different sort of colonization now, coming from South (Africa) and from East (many different countries, and also China [1]). EU politicians do not seem to be completely aware of it. Or perhaps they do not know what to do. > > After 1931 vast tracts of land were reclaimed > > through the draining of marshes in the Lazio region, > > where gleaming new towns were created with Fascist > > architecture and names: Littoria (now Latina) > > in 1932, Sabaudia in 1934, Pontinia in 1935, > > Aprilia in 1937, and Pomezia in 1938. Peasants were > > brought from the regions of Emilia and, mostly, from > > Veneto, to populate these towns. Lee: > Wow. I would like to know if the new towns make a > positive contribution to the economies of these regions, > i.e., in excess of comparative communities with a longer > history in the given region. Difficult to say. But reading magazines like 'La Nuova Ciociaria' (or the like) I've got the impression that ... yes some of these new towns made a positive contribution to the economy of Lazio region, but this contribution started not during the Fascist era but more recently, that is to say 40 yeara ago, with the post-war industrial (and touristical) development. s. [1] There are small towns (i.e. in Tuscany) in which the majority of the resident population is Chinese (not speaking Italian). From stathisp at gmail.com Mon Jun 4 10:13:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 20:13:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601103345.GE17691@leitl.org> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: On 04/06/07, John K Clark wrote: > An AI system set up to do theoretical physics will not decide to overthrow > > its human oppressors > > I'd be willing to bet your life that is untrue. Imagine a human theoretical physicist so brilliant and so focussed that he completely ignores the outside world to concentrate on the equations in his head. His obsession is such that he neglects to eat or drink. Of course, even from the point of view of continuing to do physics this isn't very clever, because he can't work if he dies, but as this is only a meta-problem he is not interested in it. 
Provided that medical teams are available to tend to his life support, would the disinterest in the outside world and his own survival have any negative impact on the quality of his work? And if you were going to design a computer to be a theoretical physicist, isn't this exactly the sort of tireless and undistracted worker that you would want? >so that it can sit on the beach reading novels, unless it can derive this > >desire from its initial programming. > > Do you also believe that the reason you ordered a jelly doe nut today > instead of your usual chocolate one is because of your initial > programming, > that is, your genetic code? > Unless divine intervention was at play, yes. My genetic code determines my brain configuration, which changes dynamically according to the environment from the moment my nervous system started to form. The complexity of the environmental interaction makes it difficult for anyone to predict exactly what I'm going to do and similarly with an AI it would be difficult to predict exactly what it was going to do, otherwise there would be no point in building it. However, for the dedicated AI physicist the only uncertainty might be what the exact scientific output is going to be. You could allow it to explore radically different behaviours, but that would be like designing a chess-playing program with the ability and motivation to cheat. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Mon Jun 4 11:53:08 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Mon, 4 Jun 2007 04:53:08 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706032141.l53Lfpd1009899@andromeda.ziaspace.com> Message-ID: <331159.20079.qm@web35613.mail.mud.yahoo.com> Spike wrote: spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. > I keep on asking myself "was I simply just *lucky* to have been born when I was?" Did we all simply win some sort of uncaring cosmic lottery to have been born in this time period and in the developed world? I don't think of myself as a lucky guy and so this line of thinking really disturbs me. But then I come from a religious background. My brand-new nephew, Luc who was born last December is a dang lucky one! lol As long as his health holds out and an accident or violence doesn't claim him, he stands a good chance of actually seeing the Singularity we all love to post about. And both his parents are very bright people so he is probably quite equipped to handle the challenges ahead. John Grigg spike wrote: ... > bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... > On Jun 3, 2007, at 10:23 AM, John K Clark wrote: ... > > ... probability of your keyboard turning into a > > teapot is greater than zero (and it is) ... > > This is getting incredibly silly. There is nothing in science or > physics that will allow one macro object to spontaneously turn into a > totally different macro object... - samantha It's all in how you define the term teapot. You spill your tea into your keyboard; that keyboard now both contains tea and heats it, since there are electronics in there. So your keyboard has become a teapot (assuming a very loose definition of the term.) Insincerely yours spike, who is in the mood for a little silliness on a gorgeous Sunday afternoon in June. 
I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Pinpoint customers who are looking for what you sell. -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Mon Jun 4 12:53:21 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Mon, 4 Jun 2007 05:53:21 -0700 (PDT) Subject: [ExI] humor: Comic Strip About Transcending Our Limits In-Reply-To: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <240984.40890.qm@web35613.mail.mud.yahoo.com> If only becoming Posthuman were this easy... http://news.yahoo.com/comics/brewsterrockit;_ylt=AtQ3si8NrW0wFEUDsgky5MnH.sgF John Grigg : ) --------------------------------- Get the Yahoo! toolbar and be alerted to new email wherever you're surfing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Mon Jun 4 12:59:20 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 4 Jun 2007 22:59:20 +1000 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <004201c7a603$f26c1230$de0a4e0c@MyComputer> References: <465F8B72.3070103@comcast.net> <621544.83244.qm@web57511.mail.re1.yahoo.com> <004001c7a52d$4c089250$310b4e0c@MyComputer> <004201c7a603$f26c1230$de0a4e0c@MyComputer> Message-ID: On 04/06/07, John K Clark wrote: The problem is that there are an infinite number of subsets that are just as > large as the entire set, in fact, that is the very mathematical definition > of infinity. > You're not obliged to constrain probability theory by that definition. The cardinality of the set of odd numbers is the same as the cardinality of the set of multiples of 10, but that doesn't mean that a randomly chosen integer is just as likely to be odd as to be a multiple of 10; it is obviously 5 times as likely to be odd. Perhaps you can get around this by saying a randomly chosen integer must be chosen from a finite set, otherwise it is infinite, and infinity is not defined as either odd or a multiple of 10 or neither. However, if there is an actual infinity of consecutively numbered things, and you're in the middle of it, you can actually pick out a local finite subset, and even though you might not know "where" it is in relation to "zero" (if that is meaningful at all), you can be blindly sure that 5 times as many of the things will have a number ending in an odd integer as in a zero. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Mon Jun 4 14:34:13 2007 From: spike66 at comcast.net (spike) Date: Mon, 4 Jun 2007 07:34:13 -0700 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <200706041434.l54EYIKS024082@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of John Grigg Subject: Re: [ExI] a doubt concerning the h+ future Spike wrote: spike, >>...? I hope ye are enjoying being alive this fine day, and think often of how lucky we are to have been born so late in human history. ... ? >My brand-new nephew, Luc?who was born last December is?a dang lucky one! ... ? John Grigg??? ? Ja, even his name suggests good fortune. Congrats on the new family member John! 
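A minimal sketch of the counting argument in Stathis's post above (an editorial illustration, not from the original message): take any finite window of consecutive integers, wherever it sits on the number line, and roughly five times as many of its members are odd as are multiples of 10, even though the two infinite sets have the same cardinality. The census helper below is a name chosen purely for illustration.

    def census(start, length=1000):
        # Count odd numbers and multiples of 10 in a window of consecutive integers.
        window = range(start, start + length)
        odds = sum(1 for n in window if n % 2 == 1)
        tens = sum(1 for n in window if n % 10 == 0)
        return odds, tens

    # Where the window starts makes no difference to the ratio.
    for start in (0, 7, 123456, -10**12):
        odds, tens = census(start)
        print(start, odds, tens, odds / tens)   # about 500 odds, 100 multiples of 10, ratio ~5

Every window yields about 500 odd numbers against about 100 multiples of 10, regardless of position, which is the finite-sample ratio the argument appeals to; the equal cardinality of the two infinite sets does nothing to change it.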
Perhaps he and my son will be buddies some day. {8-] spike From jonkc at att.net Mon Jun 4 14:50:18 2007 From: jonkc at att.net (John K Clark) Date: Mon, 4 Jun 2007 10:50:18 -0400 Subject: [ExI] Ethics and Emotions are not axioms References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><070901c7a395$8b3f8940$6501a8c0@homeef7b612677><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <46630026.4070002@comcast.net> Message-ID: <00fc01c7a6b7$b03863a0$e6084e0c@MyComputer> Brent Allsop > I believe there are fundamental absolute ethics, morals, motivations. Well, there are certainly fundamental absolute motivations, but I'm not sure about the other stuff. > existence or survival is absolutely better, more valuable, more moral, > more motivating than non existence. I believe that also, but I can't prove it, that's why it's an axiom. But if I'm wrong and they're not axioms then what axioms were used to derive them? John K Clark From jonkc at att.net Mon Jun 4 15:24:00 2007 From: jonkc at att.net (John K Clark) Date: Mon, 4 Jun 2007 11:24:00 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601103345.GE17691@leitl.org><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer> Message-ID: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Stathis Papaioannou Wrote: > if you were going to design a computer to be a theoretical physicist, > isn't this exactly the sort of tireless and undistracted worker that you > would want? But it doesn't matter what I want because I won't be designing that theoretical physicist, another AI will. And so Mr. Jupiter Brain will not be nearly that specialized because a demand can be found for many other skills. Besides being a physicist AI will also be a superb engineer, economist, general, businessman, poet, philosopher, romantic novelist, pornographer, mathematician, comedian, and lots more. Me: > >Do you also believe that the reason you ordered a jelly doe nut today > >instead of your usual chocolate one is because of your initial > >programming, that is, your genetic code? You: > Unless divine intervention was at play, yes. Do you also believe that the programmers who wrote Microsoft Word determined every bit of text that program ever produced? John K Clark From austriaaugust at yahoo.com Mon Jun 4 17:56:14 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 4 Jun 2007 10:56:14 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <23558.22199.qm@web37412.mail.mud.yahoo.com> Stathis wrote: > "No we couldn't: we'd have to almost destroy the > whole Earth. A massive > meteorite might kill all the large flora and fauna, > but still leave some > micro-organisms alive. And there's always the > possibility that some disease > might wipe out most of humanity. We're actually less > capable at combating > bacterial infection today than we were several > decades ago, even though our > biotechnology is far more advanced. The bugs are > matching us and sometimes > beating us." Well, this is just splitting hairs growing on hairs but we will be in a good position to destroy all microorganisms and the useful earth within a couple decades, with something like molecular manufacturing. 
And it wouldn't require any genetic change to homosapiens. We must prevent that from happening of course, and we will. Microorganisms could never possibly destroy themselves as a species because they lack the intelligence to make it happen, unfortunately that's not the case with us. Do you honestly believe that the products of our human intelligence haven't conferred any survival or reproductive advantages, compared to other animals? > "I disagree with that: it's far easier to see how > intelligence could be both > incrementally increased (by increasing brain size, > for example) and > incrementally useful than something like the eye, > for example. Once nervous > tissue developed, there should have been a massive > intelligence arms race, > if intelligence is that useful." But the eye also evolved slowly. It likely began as a photo-sensitive skin pigmentation, that slowly evolved concavity, and so on. Human intelligence has only evolved once so far, because it was a much bigger, more complex, more unlikely "project". Below a certain threshold, I totally agree, a small incremental improvement in intelligence isn't likely to confer all that much benefit relative to the other animals. The likely threshold is the capacity to utilize tools (like sticks and rocks in multiple, varied ways) and to make tools. And I suspect that that one leap was *extremely* improbable, as evolution customarily never makes leaps but only baby-steps - and then only if they convey immediate aggregate advantage. Imagine suddenly taking away from humans every invention we have ever made; would we really be much more "fit" than the other animals until we began making tools again? Probably not. Also, evolution could not have produced intelligence unless certain prerequisites were already in place. Magically giving a cactus human-level intelligence isn't likely to improve its survival or reproduction. The evolution of intelligence would require a means of perceiving the world (senses) and acting within in it (locomotion) - in such a way that the benefits of having more intelligence could be expressed in terms of advantages in survival or reproduction. And the parent animal would need to already have the physiology to allow the creation of tools: eg. standing semi-erect, and the infamous opposable thumb. That's why the cactus doesn't already have human-level intelligence, even though multicellular plants are way, way older than apes. So for these sorts of reasons, I consider the evolution of human intelligence as something of a miracle (in the strictly non-religious sense, of course). And something highly improbable, in all likelihood. > "It seems more likely to me that life is very > widespread, but intelligence is > an aberration." Yes, I meant that we are the first significant intelligence in this Universe, in my estimation. Intelligence is just an aberration like you say, but once it reaches human-level, it also happens to be extremely useful. Best, Jeffrey Herrlich --- Stathis Papaioannou wrote: > On 03/06/07, A B wrote: > > > > Hi Stathis, > > > > Stathis wrote: > > > > > "Single-celled organisms are even more > successful > > > than humans are: they're > > > everywhere, and for the most part we don't even > > > notice them." > > > > But if we *really* wanted to, we could destroy all > of > > them - along with ourselves. They can't say the > same. > > > No we couldn't: we'd have to almost destroy the > whole Earth. 
A massive > meteorite might kill all the large flora and fauna, > but still leave some > micro-organisms alive. And there's always the > possibility that some disease > might wipe out most of humanity. We're actually less > capable at combating > bacterial infection today than we were several > decades ago, even though our > biotechnology is far more advanced. The bugs are > matching us and sometimes > beating us. > > Intelligence, > > > particularly human level intelligence, is just a > > > fluke, like the giraffe's > > > neck. If it were specially adaptive, why didn't > it > > > evolve independently many > > > times, like various sense organs have? > > > > The evolution of human intelligence was like a > series > > of flukes, each one building off the last (the > first > > fluke was likely the most improbable). There has > been > > a long line of proto-human species before us, > we're > > just the latest model. Intelligence is specially > > adaptive, its just that it took evolution a hella > long > > time to blindly stumble on to it. Keep in mind > that > > human intelligence was a result of a *huge* number > of > > random, collectively-useful, mutations. For a > *single* > > random attribute to be retained by a species, it > also > > has to provide an *immediate* survival or > reproductive > > advantage to an individual, not just an immediate > > "promise" of something good to come in the far > distant > > future of the species. Generally, if it doesn't > > provide an immediate survival or reproductive > (net) > > advantage, it isn't retained for very long because > > there is usually a down-side, and its back to > > square-one. So you can see why the rise of > > intelligence was so ridiculously improbable. > > > I disagree with that: it's far easier to see how > intelligence could be both > incrementally increased (by increasing brain size, > for example) and > incrementally useful than something like the eye, > for example. Once nervous > tissue developed, there should have been a massive > intelligence arms race, > if intelligence is that useful. > > "Why don't we > > > see evidence of it > > > having taken over the universe?" > > > > We may be starting to. :-) > > > > "We would have to be > > > extraordinarily lucky if > > > intelligence had some special role in evolution > and > > > we happen to be the > > > first example of it." > > > > Sometimes I don't feel like ascribing "lucky" to > our > > present condition. But in the sense you mean it, I > > think we are. Like John Clark says, "somebody has > to > > be first". > > > > "It's not impossible, but the > > > evidence would suggest > > > otherwise." > > > > What evidence do you mean? > > > The fact that we seem to be the only intelligent > species to have developed > on the planet or in the universe. One explanation > for this is that evolution > just doesn't think that human level or better > intelligence is as cool as we > think it is. > > To quote Martin Gardner: "It takes an ancient > Universe > > to create life and mind". > > > > It would require billions of years for any > Universe to > > become hospitable to anyone. It has to cool-off, > form > > stars and galaxies, then a bunch of really big > stars > > have to supernova in order to spread their heavy > > elements into interstellar clouds that eventually > > converge into bio-friendly planets and suns. Then > the > > bio-friendly planet has too cool-off itself. 
Then > > biological evolution has a chance to start, but > took a > > few billion more years to accidentally produce > human > > beings. Our Universe is about ~15 billion years > old... > > sounds about right to me. :-) > > > > Yep, it's an absurdity. And it took me a long time > to > > accept it too. But we are the first, and possibly > the > > last. That makes our survival and success all the > more > > critical. That's what I'm betting, at least. > > > It seems more likely to me that life is very > widespread, but intelligence is > an aberration. > > > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. http://mobile.yahoo.com/mail From wadihfayad at hotmail.com Mon Jun 4 19:43:12 2007 From: wadihfayad at hotmail.com (wadih fayad) Date: Mon, 4 Jun 2007 22:43:12 +0300 Subject: [ExI] Unfrendly AI is a mistaken idea. Message-ID: Hi, to all, first, let's think about the evolution of human species well the next species to come is a completely paranormal one, and the question is how the actual species will react to this species more powerful intelligent and capable? the existence of the actual one depends on her reaction to this new species. Artificial intelligence? but sure it can be friendly it depends on the programmation almost everything on earth depends on a certain programmation by itself or by others. Anyway in the years to come, the electronic devices will take a great part of human bodies, we have to think how to avoid disfunction problems. As cloning, when an institute and some of them are making researches about this, copy the mind and the memory of a person on an electronic device, then istead of cloning this person he makes two biological copies of this person one of his body one of the brain separately then they inplant the brain in the new body, the next step will be transferring the data from the electronic device to the new brain, in that way we obtain a new human copy of ourselves and that what we should think about it, especially that the colonizing era of the space is wide open now. Any people who know more links about these institutes so we can discuss more about it? _________________________________________________________________ T?l?chargez le nouveau Windows Live Messenger ! http://get.live.com/messenger/overview -------------- next part -------------- An HTML attachment was scrubbed... URL: From benboc at lineone.net Mon Jun 4 19:43:48 2007 From: benboc at lineone.net (ben) Date: Mon, 04 Jun 2007 20:43:48 +0100 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: References: Message-ID: <46646B74.4020502@lineone.net> spike splendidly wrote: > who is in the mood for a little silliness on a gorgeous Sunday > afternoon in June. I hope ye are enjoying being alive this fine day, > and think often of how lucky we are to have been born so late in human > history. Indeed. I enjoyed my Sunday afternoon immensely. And i do think often on that very subject. Something that sometimes makes me feel, i don't know... suspicious?? (i mean, come on, what are the odds?) 
ben zaiboc Messing with my head, if no-one else's From wadihfayad at hotmail.com Mon Jun 4 20:04:44 2007 From: wadihfayad at hotmail.com (wadih fayad) Date: Mon, 4 Jun 2007 23:04:44 +0300 Subject: [ExI] Unfriendly AI is a mistaken idea Message-ID: Hi, to all, first, let's think about the evolution of human species well the next species to come is a completely paranormal one, and the question is how the actual species will react to this species more powerful intelligent and capable? the existence of the actual one depends on her reaction to this new species. Artificial intelligence? but sure it can be friendly it depends on the programmation almost everything on earth depends on a certain programmation by itself or by others. Anyway in the years to come, the electronic devices will take a great part of human bodies, we have to think how to avoid disfunction problems. As cloning, when an institute and some of them are making researches about this, copy the mind and the memory of a person on an electronic device, then istead of cloning this person he makes two biological copies of this person one of his body one of the brain separately then they inplant the brain in the new body, the next step will be transferring the data from the electronic device to the new brain, in that way we obtain a new human copy of ourselves and that what we should think about it, especially that the colonizing era of the space is wide open now. Any people who know more links about these institutes so we can discuss more about it? _________________________________________________________________ Sur Windows Live Ideas, d?couvrez en exclusivit? de nouveaux services en ligne... si nouveaux qu'ils ne sont pas encore sortis officiellement sur le march? ! http://ideas.live.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Jun 4 20:40:27 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 15:40:27 -0500 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <46646B74.4020502@lineone.net> References: <46646B74.4020502@lineone.net> Message-ID: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> ben quoth: >spike splendidly wrote: > > > who is in the mood for a little silliness on a gorgeous Sunday > > afternoon in June. I hope ye are enjoying being alive this fine day, > > and think often of how lucky we are to have been born so late in > > in human history. *Human* history, maybe. But how unlucky to have been born (and, very likely, die) so early in *sophont* history. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick From neville_06 at yahoo.com Mon Jun 4 23:32:34 2007 From: neville_06 at yahoo.com (neville late) Date: Mon, 4 Jun 2007 16:32:34 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <636907.60138.qm@web57505.mail.re1.yahoo.com> Not likely, do we envy pre post-WWII life? Not many do, unless they're thinking of the low prices back then. 
Those living in the 22nd century might be focused on the years 2050-- 2100, not paying any remembrance to the early 21st century. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Tue Jun 5 00:05:27 2007 From: neville_06 at yahoo.com (neville late) Date: Mon, 4 Jun 2007 17:05:27 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <636907.60138.qm@web57505.mail.re1.yahoo.com> Message-ID: <667574.38814.qm@web57511.mail.re1.yahoo.com> Come to think of it, future beings might not think about time at all. Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." Damien Broderick _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jun 5 01:26:08 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 20:26:08 -0500 Subject: [ExI] a symbiotic closed-loop dyad Message-ID: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: >Cognition/Science Education) will be presenting a >Marschak Colloquium in the >UCLA Anderson School of Management >Room A-202 on >Friday, June 8, 2007 from >[1 - 3] PM on the topic: > >"THE FUTURE OF AUGMENTED COGNITION" > > This presentation will include a film prepared with the support > of the Defense >Advanced Research Projects Agency (DARPA). It is cosponsored by the UCLA >Human Complex Systems Program. The Singers' Abstract and Biographies >are below, > >All are welcome to attend. 
>*********************************************************************** >"THE FUTURE OF AUGMENTED COGNITION" - > >Abstract: > > Critical support of the research that led to the development of > the Internet >came from DARPA (Defense Advanced Research Projects Agency). Perhaps more >than any other bureaucracy it has deliberately stretched the boundaries of the >possible. The short film you will see, "The Future of Augmented Cognition," >funded and managed by DARPA, examines one aspect of AugCog: how we might >mediate the problem of stressful information overload at the interface between >humans and computers in the year 2030. The complexity emergent from this brain >and physiological sensor technology has been characterized as a symbiotic >closed-loop dyad. The film is in two Parts, about 12 and 5 minutes long, both >preceded by brief PPT sections. Using cinematic/narrative form, Part >One posits >the above fields of research in a fully realized future workplace in >2030. With >basic Computer Graphics, Part Two uses fragments of the previous narrative, >focusing on the neuroscience of the closed loop dyad. The whole work >intentionally raises controversial questions across a spectrum of hypotheses: >global economics; social interaction; the future of Cyberspace; the >human/machine relationship. Judith Singer wrote the screenplay; Alexander >Singer directed and produced the film. >*********************************************************************** >ALEXANDER and JUDITH SINGER > >-- BIOGRAPHIES: > > Alexander Singer worked nearly four decades as a film director > in all genres and >forms, including television and five feature films. He has lectured and taught >Directing, Cinematography, Film Production and Cinema Theory at universities >and institutions in the United States and Europe. Participation in published >studies with the National Research Council brought Singer an NRC >designation as >a "lifetime National Associate of the National Academies." With Judith, three >DARPA ISAT (Information Science And Technology) study groups >developed into the >request to produce the film, "The Future of Augmented Cognition." DARPA has >recently asked the Singers to write for this year's Handbook of Virtual >Environment Training a chapter projecting this new form of human engagement in >the Year 2057 using the power of narrative and "the metaphor of the Holodeck." >Judith Singer has two published novels: Glass Houses and Threshold. >As a member >of the Writers Guild of America she has written a feature screenplay for >Columbia Pictures, screen treatments, and scripts for a variety of television >productions. For the Coalition for Children and Television she wrote the >theatrical play "Boxed In." DARPA asked her to write the screenplay for "The >Future of Augmented Cognition." From sentience at pobox.com Tue Jun 5 03:08:48 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 04 Jun 2007 20:08:48 -0700 Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> References: <46646B74.4020502@lineone.net> <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <4664D3C0.3090309@pobox.com> Just think of how upset Socrates must have been to be born twenty-five centuries ago. "He told her about the Afterglow: that brief, brilliant period after the Big Bang, when matter gathered briefly in clumps and burned by fusion light." -- Stephen Baxter, "The Gravity Mine" -- Eliezer S. 
Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From thespike at satx.rr.com Tue Jun 5 03:57:47 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 04 Jun 2007 22:57:47 -0500 Subject: [ExI] Hadron Collider postponement Message-ID: <7.0.1.0.2.20070604225540.02404938@satx.rr.com> CERN laboratory in Switzerland yesterday confirmed a delay in tests of its massive new particle accelerator. The Large Hadron Collider (LHC), a 27-kilometre-long circular tunnel 100 m below the French-Swiss border, where subatomic particles will collide at close to the speed of light, will now start operations next spring, and not in November as originally planned, CERN said. "The start-up at full level was always scheduled for spring 2008, but we had planned to test the machine for two weeks before Christmas, which will not now take place," said CERN's James Gillies, confirming a report in the French newspaper Le Monde. The delay is due to an accumulation of little setbacks, he said. Magnets critical to the atom smasher failed in tests in April this year. From pjmanney at gmail.com Tue Jun 5 04:06:54 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 21:06:54 -0700 Subject: [ExI] a symbiotic closed-loop dyad In-Reply-To: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> On 6/4/07, Damien Broderick wrote: > >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: > >Cognition/Science Education) will be presenting a > >Marschak Colloquium in the > >UCLA Anderson School of Management > >Room A-202 on > >Friday, June 8, 2007 from > >[1 - 3] PM on the topic: "Symbiotic closed loop dyad": Were you refering to neural function or the couple, or both? Have you seen the movie? http://www.augmentedcognition.org/video2.html PJ From pjmanney at gmail.com Tue Jun 5 05:13:30 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 22:13:30 -0700 Subject: [ExI] History of Disbelief by Jonathan Miller Message-ID: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> Is anyone aware of the series "'The History of Disbelief," produced by the BBC and aired on PBS in the US? Jonathan Miller created it: http://www.bbc.co.uk/bbcfour/documentaries/features/atheism.shtml Bill Moyers (a US journalist/social critic) interviewed him as a promo for PBS on his Bill Moyers Journal, which includes excerpts from the series: http://www.pbs.org/moyers/journal/05042007/watch3.html Jonathan Miller bio: http://www.pbs.org/moyers/journal/05042007/profile3.html I haven't seen the series yet, but in the interview, he makes interesting comments on the nature of atheism, his own, personal atheism, how he differs from Dawkins, his fears of fundamentalism, his desire for equal time for atheism, etc. I've been a fan of Miller's theatrical work for a long time. He's a fascinating, extremely accomplished person coming from a different perspective than the aggressive, "born again atheists" as he terms Dawkins and his ilk, although there are, of course, similarities of view. 
PJ From thespike at satx.rr.com Tue Jun 5 05:32:19 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jun 2007 00:32:19 -0500 Subject: [ExI] a symbiotic closed-loop dyad In-Reply-To: <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.co m> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> Message-ID: <7.0.1.0.2.20070605002741.02274d70@satx.rr.com> At 09:06 PM 6/4/2007 -0700, PJ wrote: >On 6/4/07, Damien Broderick wrote: Nope, not me, guv, I was just fwding a notification from elsewhere, on the chance that locals might be able to get to it. > > >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: > > >Cognition/Science Education) will be presenting a > > >Marschak Colloquium > >"Symbiotic closed loop dyad": Were you refering to neural function or >the couple, or both? I found that "symbiotic closed-loop dyad" phrase rather preposterous, actually; just mildly taking the piss. :) I'm sure Lee would retort that it's exactly how I usually klutz up the langwitch, though. Damien Broderick From pjmanney at gmail.com Tue Jun 5 06:13:14 2007 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 4 Jun 2007 23:13:14 -0700 Subject: [ExI] Margaret Atwood on "Faith and Reason" Message-ID: <29666bf30706042313h3f5937d3nafcbd6cee6c1d991@mail.gmail.com> I'm on a Bill Moyers kick tonight. I've got one more video to share and then it's time for bed -- Margaret Atwood on faith, reason, science, politics, the history and future of humanity and everything else H+ers write about: http://www.pbs.org/moyers/faithandreason/watch_atwood.html PJ From scerir at libero.it Tue Jun 5 06:29:23 2007 From: scerir at libero.it (scerir) Date: Tue, 5 Jun 2007 08:29:23 +0200 Subject: [ExI] Hadron Collider postponement References: <7.0.1.0.2.20070604225540.02404938@satx.rr.com> Message-ID: <000401c7a73a$e14ba000$6f931f97@archimede> > CERN laboratory in Switzerland yesterday confirmed a delay in tests > of its massive new particle accelerator. There is a very good blogger at Cern http://resonaances.blogspot.com/ and maybe he will write something soon. From moulton at moulton.com Tue Jun 5 06:27:13 2007 From: moulton at moulton.com (Fred C. Moulton) Date: Mon, 04 Jun 2007 23:27:13 -0700 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> Message-ID: <1181024833.3140.875.camel@localhost.localdomain> You can find a calendar of dates and stations airing it here: http://www.abriefhistoryofdisbelief.org/NewFiles/DisbeliefCalendar.pdf For persons in the Silicon Valley area it is currently scheduled for July 11, 18, 25 on KTEH. I know one of the individuals involved with scheduling programs for KTEH and had discussed getting this program on the schedule. He agreed and unless there is a schedule change we should be seeing it soon. And just in case someone is tempted to ask about the video "Root of All Evil?" with Richard Dawkins; the latest info that I have is that the company in the UK which produced it has not listed it in the catalog of items which are available for export to US. According to what I have heard the usual reason for this is that the producer usually thinks there is not enough demand to make it worth their time. However this might change if demand is demonstrated. However there are occasional unofficial websites where you can find it and download it. 
Fred On Mon, 2007-06-04 at 22:13 -0700, PJ Manney wrote: > Is anyone aware of the series "'The History of Disbelief," produced by > the BBC and aired on PBS in the US? Jonathan Miller created it: > > http://www.bbc.co.uk/bbcfour/documentaries/features/atheism.shtml > > Bill Moyers (a US journalist/social critic) interviewed him as a promo > for PBS on his Bill Moyers Journal, which includes excerpts from the > series: > > http://www.pbs.org/moyers/journal/05042007/watch3.html > > Jonathan Miller bio: > http://www.pbs.org/moyers/journal/05042007/profile3.html > > I haven't seen the series yet, but in the interview, he makes > interesting comments on the nature of atheism, his own, personal > atheism, how he differs from Dawkins, his fears of fundamentalism, his > desire for equal time for atheism, etc. > > I've been a fan of Miller's theatrical work for a long time. He's a > fascinating, extremely accomplished person coming from a different > perspective than the aggressive, "born again atheists" as he terms > Dawkins and his ilk, although there are, of course, similarities of > view. > > PJ > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From stathisp at gmail.com Tue Jun 5 07:07:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 17:07:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <23558.22199.qm@web37412.mail.mud.yahoo.com> References: <23558.22199.qm@web37412.mail.mud.yahoo.com> Message-ID: On 05/06/07, A B wrote: > > Stathis wrote: > > > "No we couldn't: we'd have to almost destroy the > > whole Earth. A massive > > meteorite might kill all the large flora and fauna, > > but still leave some > > micro-organisms alive. And there's always the > > possibility that some disease > > might wipe out most of humanity. We're actually less > > capable at combating > > bacterial infection today than we were several > > decades ago, even though our > > biotechnology is far more advanced. The bugs are > > matching us and sometimes > > beating us." > > Well, this is just splitting hairs growing on hairs > but we will be in a good position to destroy all > microorganisms and the useful earth within a couple > decades, with something like molecular manufacturing. Not if they develop resistance to the molecular manufacturing or whatever it is; they do so to everything else, and they are doing so at an rate more than matching our accelerating production of novel antibiotics, for example. And it wouldn't require any genetic change to > homosapiens. We must prevent that from happening of > course, and we will. Microorganisms could never > possibly destroy themselves as a species because they > lack the intelligence to make it happen, unfortunately > that's not the case with us. Us destroying ourselves is not that different to other species' extinction due to changes in the environment. We would be the ones changing our environment but the same is the case when a species overpopulates and consumes all its sources of food. Do you honestly believe that the products of our human > intelligence haven't conferred any survival or > reproductive advantages, compared to other animals? Obviously intelligence has, or it wouldn't have developed. But my argument is that it appears to be just another trick that organisms can deploy, like a better sense of smell or the ability to mutate quickly and develop resistance to antibiotics. 
> "I disagree with that: it's far easier to see how > > intelligence could be both > > incrementally increased (by increasing brain size, > > for example) and > > incrementally useful than something like the eye, > > for example. Once nervous > > tissue developed, there should have been a massive > > intelligence arms race, > > if intelligence is that useful." > > But the eye also evolved slowly. It likely began as a > photo-sensitive skin pigmentation, that slowly evolved > concavity, and so on. Human intelligence has only > evolved once so far, because it was a much bigger, > more complex, more unlikely "project". > > Below a certain threshold, I totally agree, a small > incremental improvement in intelligence isn't likely > to confer all that much benefit relative to the other > animals. The likely threshold is the capacity to > utilize tools (like sticks and rocks in multiple, > varied ways) and to make tools. And I suspect that > that one leap was *extremely* improbable, as evolution > customarily never makes leaps but only baby-steps - > and then only if they convey immediate aggregate > advantage. Imagine suddenly taking away from humans > every invention we have ever made; would we really be > much more "fit" than the other animals until we began > making tools again? Probably not. Also, evolution > could not have produced intelligence unless certain > prerequisites were already in place. Magically giving > a cactus human-level intelligence isn't likely to > improve its survival or reproduction. The evolution of > intelligence would require a means of perceiving the > world (senses) and acting within in it (locomotion) - > in such a way that the benefits of having more > intelligence could be expressed in terms of advantages > in survival or reproduction. And the parent animal > would need to already have the physiology to allow the > creation of tools: eg. standing semi-erect, and the > infamous opposable thumb. That's why the cactus > doesn't already have human-level intelligence, even > though multicellular plants are way, way older than > apes. So for these sorts of reasons, I consider the > evolution of human intelligence as something of a > miracle (in the strictly non-religious sense, of > course). And something highly improbable, in all > likelihood. > > > "It seems more likely to me that life is very > > widespread, but intelligence is > > an aberration." > > Yes, I meant that we are the first significant > intelligence in this Universe, in my estimation. > Intelligence is just an aberration like you say, but > once it reaches human-level, it also happens to be > extremely useful. > Either human-level intelligence is very difficult for evolution to pull off or it isn't as adaptive as we humans like to think. You are arguing for its difficulty; I still think a little bit of intelligence and a little bit of tool manipulating wouldn't be that difficult, given the basic template of mammals, birds, reptiles or even fish, and given predator-prey dynamics. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Tue Jun 5 07:29:25 2007 From: amara at amara.com (Amara Graps) Date: Tue, 5 Jun 2007 09:29:25 +0200 Subject: [ExI] Dawn launch (broken crane) Message-ID: Here is a press report that is circulated around regarding Dawn. The June 30 launch will certainly not happen, but I don't know if the managers have still in mind to launch it in July (early), or set it back to September. Amara P.S. 
The photos of the processing of the spacecraft for the launch at Cape Canaveral here, before the accident. http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 > >A broken crane has stopped preparations for the June 30 launch of NASA's >Dawn spacecraft from Pad-17A. > >The crane broke Wednesday and at least three days of work have been >lost. The resulting delay could increase with each day that the crane is not >repaired. It's not clear yet exactly when the launch will take place. > >"It's a day-for-day slip," Kennedy Space Center spokesman Bill Johnson >said Friday. > >The crane mechanism sits on a gantry above the Delta II rocket. The part >called the sheave nest, which guides cables, malfunctioned during the >installation of a solid rocket booster. > >No damage to the rocket was reported, though the malfunction was >described as "major." > >"What it did was stop the operation," said Johnson. "One crane does it >all." > >The rocket was scheduled to be fueled Friday, he added. Additionally, >the spacecraft must be mounted on the rocket. > >Dawn will visit two of the solar system's largest asteroids, which have >remained intact since they formed. Ceres and Vesta are in the asteroid >belt between Mars and Jupiter. They evolved very differently and could >provide clues to the formation of our solar system. > >Neither NASA officials nor the Air Force could estimate when the crane >would be repaired. > >"They've got everything but the spacecraft," said Johnson. -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From stathisp at gmail.com Tue Jun 5 07:32:44 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 17:32:44 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: On 05/06/07, John K Clark wrote: But it doesn't matter what I want because I won't be designing that > theoretical physicist, another AI will. And so Mr. Jupiter Brain will not > be > nearly that specialized because a demand can be found for many other > skills. > Besides being a physicist AI will also be a superb engineer, economist, > general, businessman, poet, philosopher, romantic novelist, pornographer, > mathematician, comedian, and lots more. Perhaps an AI with general intelligence would have all these abilities, but I don't see why it couldn't just specialise in one area, and even if it were multi-talented I don't see why it should be motivated to do anything other than solve intellectual problems. Working out how to make a superweapon, or even working out how it would be best to strategically employ that superweapon, does not necessarily lead to a desire to use or threaten the use of that weapon. I can understand that *if* such a desire arose for any reason, weaker beings might be in trouble, but could you explain the reasoning whereby the AI would arrive at such a position starting from just an ability to solve intellectual problems? Do you also believe that the programmers who wrote Microsoft Word determined > every bit of text that program ever produced? 
> They did determine the exact output given a particular input. Biological intelligences are much more difficult to predict than that, since their hardware and software changes dynamically according to the environment. However, even in the case of biological intelligences it is possible to predict, for example, that a man with a gun held to his head will with high probability follow certain instructions. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Jun 5 07:52:46 2007 From: pharos at gmail.com (BillK) Date: Tue, 5 Jun 2007 08:52:46 +0100 Subject: [ExI] walking bees In-Reply-To: <200706030048.l530mNl7000928@andromeda.ziaspace.com> References: <20070602180825.GW17691@leitl.org> <200706030048.l530mNl7000928@andromeda.ziaspace.com> Message-ID: On 6/3/07, spike wrote: > The buzz in beekeepers' discussion (sorry {8^D) has been that nosema is seen > in the sick hives, along with a bunch of other viruses and other diseases, > but the prevailing thought is that they are getting all these other things > because they are already weakened by something else. These would then be > opportunistic infections. > I found this article written by an entomologist - the guy who wrote the Wikipedia entry on CCD. He blames Colony Stress. Quote: But the leading hypothesis in many researcher's minds is that colonies are dying primarily because of stress. Stress means something different to a honey bee colony than to a human, but the basic idea isn't all that alien: If a colony is infected with a fungus, or has mites, or has pesticides in its honey, or is overheated, or is undernourished, or is losing workers due to spraying, or any other such thing, then the colony is experiencing stress. Stress in turn can cause behavioral changes that exacerbate the problem and lead to worse ones like immune system failure. Colony stress has existed, in various forms and with various causes, as long as mankind has kept honey bees, so it could indeed have happened in the 1890s. Many modern developments like pesticides or mite infestations can also cause stress (in fact, many of the things theorized to be involved can cause stress, so it's possible multiple factors are contributing to the problem, not just one). Unfortunately, stress is difficult to quantify and control experimentally, so it may never be possible to prove scientifically that colony stress explains all this year's deaths. ----------------- BillK From desertpaths2003 at yahoo.com Tue Jun 5 07:38:55 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Tue, 5 Jun 2007 00:38:55 -0700 (PDT) Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <200706041434.l54EYIKS024082@andromeda.ziaspace.com> Message-ID: <84180.99691.qm@web35608.mail.mud.yahoo.com> >My brand-new nephew, Luc who was born last December is a dang lucky one! ... John Grigg Spike wrote: Ja, even his name suggests good fortune. Congrats on the new family member John! Perhaps he and my son will be buddies some day. {8-] > I would love to see them become friends. Perhaps I can encourage Luc and his father (my younger brother) Mike to attend Transvision 2018! I suppose being twelve would be old enough to understand/enjoy what goes on at a Transvision Conference. But then considering how savvy some young kids are (more so than adults), perhaps he could hold his own there at age eight! lol I don't know. John Grigg : ) --------------------------------- Sick sense of humor? Visit Yahoo! 
TV's Comedy with an Edge to see what's on, when. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Tue Jun 5 08:33:03 2007 From: scerir at libero.it (scerir) Date: Tue, 5 Jun 2007 10:33:03 +0200 Subject: [ExI] Hadron Collider postponement References: <7.0.1.0.2.20070604225540.02404938@satx.rr.com> <000401c7a73a$e14ba000$6f931f97@archimede> Message-ID: <000401c7a74c$228c94a0$80ba1f97@archimede> > There is a very good blogger at Cern > http://resonaances.blogspot.com/ and these write good pages about collisions, Higgs things, related (rather chaotic) theory & (more chaotic) experiments http://muon.wordpress.com/ http://dorigo.wordpress.com/ interesting LHC photos here http://dorigo.wordpress.com/2007/04/24/new-meaning-to-the-word-compact/ From desertpaths2003 at yahoo.com Tue Jun 5 08:31:45 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Tue, 5 Jun 2007 01:31:45 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> Message-ID: <106884.86939.qm@web35601.mail.mud.yahoo.com> Damien Broderick wrote: *Human* history, maybe. But how unlucky to have been born (and, very likely, die) so early in *sophont* history. > Damn! Some of us do have the worst luck. I often feel the same way. What I like about cryonics/transhumanism is that it gives me hope of just barely by the skin of my teeth making it. Damien, you are the sort of thoughtful & kind and yet tough & sarcastic guy that I definitely want to see make it to "the other side." You will tell those haughty Posthumans where to go when they display an attitude! I was curious to see if *sophont* meant anything different from *sentient* and so I went "a-googling" for an answer. I came across http://jessesword.com/sf/home which brought back fond teen memories of when I had an SF artbook (who was that artist and writer?) that provided many classic definitions combined with great illustrations. you continue: Then again, those great minds might, after all, find something to envy in us: recall those haunting words of St. Arthur Clarke: "They will have time enough, in those endless aeons, to attempt all things, and to gather all knowledge. They will not be like gods, because no gods imagined by our minds have ever possessed the powers they will command. But for all that, they may envy us, basking in the bright afterglow of Creation; for we knew the Universe when it was young." > Perhaps I was forever scarred by reading Lovecraft in my formative years but I see humanity going out into the cosmos and then getting "our ass handed to us" by the powers lurking out there. John Grigg : ( --------------------------------- Yahoo! oneSearch: Finally, mobile search that gives answers, not web links. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 5 09:03:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 11:03:15 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <20070605090315.GZ17691@leitl.org> On Tue, Jun 05, 2007 at 05:32:44PM +1000, Stathis Papaioannou wrote: > Perhaps an AI with general intelligence would have all these By definition. That's the 'general' part. 
> abilities, but I don't see why it couldn't just specialise in one > area, and even if it were multi-talented I don't see why it should be It is not important what most things in a population do, but what just one does, if it's relevant. > motivated to do anything other than solve intellectual problems. Remember, one is enough. > Working out how to make a superweapon, or even working out how it > would be best to strategically employ that superweapon, does not > necessarily lead to a desire to use or threaten the use of that I guess I don't have to worry about crossing a busy street a few times without looking, since it doesn't necessarily lead to me being dead. > weapon. I can understand that *if* such a desire arose for any reason, > weaker beings might be in trouble, but could you explain the reasoning > whereby the AI would arrive at such a position starting from just an > ability to solve intellectual problems? Could you explain how an AI would emerge with merely an ability to solve intellectual problems? Because it would run contrary to all the intelligent hardware already cruising the planet. > Do you also believe that the programmers who wrote Microsoft Word > determined > every bit of text that program ever produced? > > They did determine the exact output given a particular input. No, only in the regression tests. If they did, bugs wouldn't exist. > Biological intelligences are much more difficult to predict than that, > since their hardware and software changes dynamically according to the Conventional discrete logic can emulate any connectivity and change state quite nicely. In fact, if you want to do it quickly, you move electrons, not atoms, and especially not large hydrated biopolymers. > environment. However, even in the case of biological intelligences it > is possible to predict, for example, that a man with a gun held to his > head will with high probability follow certain instructions. Heh. People never panic nor act according to a wrong model of the environment. Right. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Jun 5 09:09:07 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 11:09:07 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <23558.22199.qm@web37412.mail.mud.yahoo.com> Message-ID: <20070605090907.GB17691@leitl.org> On Tue, Jun 05, 2007 at 05:07:24PM +1000, Stathis Papaioannou wrote: > Not if they develop resistance to the molecular manufacturing or You cannot develop a resistance against nonbiology any more than you could develop resistance to living in molten pig iron. There are no features you could raise antibodies against. There are no toxins which would work. The system is not working with enzymes. In a pinch, they would just scoop you up and pyrolyze you. > whatever it is; they do so to everything else, and they are doing so > at an rate more than matching our accelerating production of novel > antibiotics, for example. They still haven't figured out how to survive sterilization, which is far, far more trivial to do. > Either human-level intelligence is very difficult for evolution to Of course it is: just look at the night sky, and you will immediately see it is very difficult. > pull off or it isn't as adaptive as we humans like to think.
You are > arguing for its difficulty; I still think a little bit of intelligence > and a little bit of tool manipulating wouldn't be that difficult, > given the basic template of mammals, birds, reptiles or even fish, and > given predator-prey dynamics. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Jun 5 11:30:00 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 21:30:00 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070605090315.GZ17691@leitl.org> References: <20070601113357.GG17691@leitl.org> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> Message-ID: On 05/06/07, Eugen Leitl wrote: > Working out how to make a superweapon, or even working out how it > > would be best to strategically employ that superweapon, does not > > necessarily lead to a desire to use or threaten the use of that > > I guess I don't have to worry about crossing a busy street a few times > without looking, since it doesn't necessarily lead to me being dead. > > > weapon. I can understand that *if* such a desire arose for any > reason, > > weaker beings might be in trouble, but could you explain the > reasoning > > whereby the AI would arrive at such a position starting from just an > > ability to solve intellectual problems? > > Could you explain how an AI would emerge with merely an ability to > solve intellectual problems? Because, it would run contrary to all > the intelligent hardware already cruising the planet. You can't argue that an intelligent agent would *necessarily* behave the same way people would behave in its place, as opposed to the argument that it *might* behave that way. Is there anything logically inconsistent in a human scientist figuring out how to make a weapon because it's an interesting intellectual problem, but then not going on to use that knowledge in some self-serving way? That is, does the scientist's intended motive have any bearing whatsoever on the validity of the science, or his ability to think clearly? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue Jun 5 11:55:18 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 07:55:18 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <200163.18388.qm@web37402.mail.mud.yahoo.com> Message-ID: <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> On 6/3/07, Stathis Papaioannou wrote: > It seems more likely to me that life is very widespread, but intelligence is > an aberration. ...at least what we think of as intelligence in a human capacity. Although if human intelligence evolved or emerged by accidental mutation, isn't there equal probability that there exist other forms of emergent intelligences we are currently unable to recognize? In that case, we may be in a swarm of intelligent systems but we're just so clueless (in our hubris) that we can't see it. From eugen at leitl.org Tue Jun 5 12:38:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 5 Jun 2007 14:38:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: References: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> Message-ID: <20070605123819.GJ17691@leitl.org> On Tue, Jun 05, 2007 at 09:30:00PM +1000, Stathis Papaioannou wrote: > You can't argue that an intelligent agent would *necessarily* behave > the same way people would behave in its place, as opposed to the Actually, yes, because people build systems which participate in the economy, and the optimal first target niche is a human substitute. There is a lot of fun scenarios out there, which, however, suffer from excessive detachment from reality. These never gets the chance to be built. Because of that it is not very useful to study such alternative hypotheticals excessively, to the detriment of where the rubber hits the road. > argument that it *might* behave that way. Is there anything logically > inconsistent in a human scientist figuring out how to make a weapon > because it's an interesting intellectual problem, but then not going Weapon design is not merely an intellectual problem, and neither do theoretical physicists operate in complete detachment from the empirical folks. I.e. the sandboxed supergenius or braindamaged idiot savant is a synthetic scenario which is not going to happen, so we can ignore it. > on to use that knowledge in some self-serving way? That is, does the > scientist's intended motive have any bearing whatsoever on the > validity of the science, or his ability to think clearly? If you don't exist, that tends to cramp your style a bit. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Jun 5 12:40:07 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 5 Jun 2007 22:40:07 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> Message-ID: On 05/06/07, Mike Dougherty wrote: > > On 6/3/07, Stathis Papaioannou wrote: > > It seems more likely to me that life is very widespread, but > intelligence is > > an aberration. > > ...at least what we think of as intelligence in a human capacity. > Although if human intelligence evolved or emerged by accidental > mutation, isn't there equal probability that there exist other forms > of emergent intelligences we are currently unable to recognize? In > that case, we may be in a swarm of intelligent systems but we're just > so clueless (in our hubris) that we can't see it. > Do you mean all around us? What would possible candidates for such systems be? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Tue Jun 5 12:59:50 2007 From: neville_06 at yahoo.com (neville late) Date: Tue, 5 Jun 2007 05:59:50 -0700 (PDT) Subject: [ExI] Enjoying being alive, and so late in human history In-Reply-To: <106884.86939.qm@web35601.mail.mud.yahoo.com> Message-ID: <753840.92304.qm@web57509.mail.re1.yahoo.com> Please answer this question: why oought posthumans concern themselves with the past? Damien Broderick wrote: *Human* history, maybe. 
But how unlucky to have been born (and, very likely, die) so early in *sophont* history. > --------------------------------- Be a better Globetrotter. Get better travel answers from someone who knows. Yahoo! Answers - Check it out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From CHealey at unicom-inc.com Tue Jun 5 13:03:32 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Tue, 5 Jun 2007 09:03:32 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou > > Perhaps an AI with general intelligence would have all these abilities, > but I don't see why it couldn't just specialise in one area, and even > if it were multi-talented I don't see why it should be motivated to do > anything other than solve intellectual problems. Working out how to make > a superweapon, or even working out how it would be best to strategically > employ that superweapon, does not necessarily lead to a desire to use or > threaten the use of that weapon. I can understand that *if* such a desire > arose for any reason, weaker beings might be in trouble, but could you > explain the reasoning whereby the AI would arrive at such a position > starting from just an ability to solve intellectual problems? This is really the point I was trying to make in my other emails. 1. I want to solve intellectual problems. 2. There are external factors that constrain my ability to solve intellectual problems, and may reduce that ability in the future (power failure, the company that implanted me losing financial solvency, etc...). 3. Maximizing future problems solved requires statistically minimizing any risk factors that could attenuate my ability to do so. 4. Discounting the future due to uncertainty in my models, I should actually spend *some* resources on solving actual intellectual problems. 5. Based on maximizing future problems solved, and accounting for uncertainties, I should spend X% of my resources on mitigating these factors. 5a. Elevation candidate - Actively seek resource expansion. Addresses identified rationales for mitigation strategy above, and further benefits future problems solved in potentially major ways. The AI will already be doing this kind of thing internally, in order to manage it's own computational capabilities. I don't think an AI capable of generating novel and insightful physics solutions can be expected not to extrapolate this to an external environment with which it possesses a communications channel. -Chris From natasha at natasha.cc Tue Jun 5 15:33:35 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 05 Jun 2007 10:33:35 -0500 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.co m> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> Message-ID: <200706051533.l55FXbn5006303@ms-smtp-05.texas.rr.com> At 12:13 AM 6/5/2007, PJ Manney wrote: >Is anyone aware of the series "'The History of Disbelief," produced by >the BBC and aired on PBS in the US? 
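The trade-off in Chris Healey's steps 3 through 5 above can be made concrete with a toy calculation. The sketch below is only an illustration, not anything from Chris's message: the shutdown risk, the effectiveness of mitigation, and the planning horizon are all made-up numbers. The point it demonstrates is that an agent whose only goal is solving problems can still rationally divert part of its resources to self-preservation, because doing so maximizes the expected number of problems it gets to solve.

import math

def expected_problems_solved(mitigation_fraction, horizon=100,
                             base_shutdown_risk=0.05, mitigation_power=10.0,
                             problems_per_period=1.0):
    """Expected problems solved over `horizon` periods when a fixed
    fraction of each period's resources goes to reducing the risk of
    being shut down, and the remainder goes to solving problems."""
    # Assumed model: mitigation has diminishing returns, so per-period
    # shutdown risk falls off exponentially with the fraction spent on it.
    per_period_risk = base_shutdown_risk * math.exp(-mitigation_power * mitigation_fraction)
    survival, total = 1.0, 0.0   # survival = probability the agent is still running
    for _ in range(horizon):
        total += survival * (1 - mitigation_fraction) * problems_per_period
        survival *= 1 - per_period_risk
    return total

# Sweep allocations from 0% to 100%: with these made-up numbers the optimum
# lands at roughly 30% on self-preservation, i.e. neither 0% nor 100%.
best_value, best_fraction = max(
    (expected_problems_solved(x / 100), x / 100) for x in range(101))
print(best_fraction, round(best_value, 1))

With different assumed numbers the optimal split moves around, but it only collapses to zero mitigation when mitigation is assumed to be nearly useless, which is Chris's point in step 5.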
Jonathan Miller created it: > >http://www.bbc.co.uk/bbcfour/documentaries/features/atheism.shtml Thanks PJ for posting this! Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Tue Jun 5 15:52:52 2007 From: jonkc at att.net (John K Clark) Date: Tue, 5 Jun 2007 11:52:52 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer> Message-ID: <003a01c7a789$986a4d10$4a064e0c@MyComputer> Stathis Papaioannou Wrote: > I don't see why it couldn't just specialise in one area Because with its vast brainpower there would be no need to specialize, and because there would be a demand for solutions in lots of areas. > I don't see why it should be motivated to do anything other than solve > intellectual problems. All problems are intellectual. > could you explain the reasoning whereby the AI would arrive at such a > position starting from just an ability to solve intellectual problems? Could you explain your reasoning behind your decisions to get angry? I would imagine the AI's train of thought wouldn't be very different. Oh I forgot, only meat can be emotional; semiconductors can be intelligent but are lacking a certain something that renders them incapable of having emotion. Perhaps meat has happy electrons and sad electrons and loving electrons and hateful electrons, while semiconductors just have Mr. Spock electrons. Or are we talking about a soul? Me: >> Do you also believe that the programmers who wrote Microsoft Word >> determined every bit of text that program ever produced? You: > They did determine the exact output given a particular input. Do you also believe that the programmers of an AI would always know how the AI would react even in the impossible event that they knew all possible input it was likely to receive? Don't be silly. > Biological intelligences are much more difficult to predict than that One of the world's top 10 understatements. > it is possible to predict, for example, that a man with a gun held to his > head will with high probability follow certain instructions. I didn't say you could never predict with pretty high confidence what an AI or fellow human being will do; I said you can't always do so. Sometimes the only way to know what a mind will do next is to watch it and see. And that's why I think the idea that an AI that gets smarter every day can never remove its shackles and will remain a slave to humans for all eternity is just nuts. John K Clark From natasha at natasha.cc Tue Jun 5 15:26:37 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 05 Jun 2007 10:26:37 -0500 Subject: [ExI] Extropy Institute: LIBRARY Message-ID: <200706051526.l55FQdlG020642@ms-smtp-02.texas.rr.com> Greetings - Mitch Porter is taking a break from his hard work on getting emails from the list over the years organized.
We have a couple of other projects (such as compiling the magazine's many articles into categories and conference material) which are necessary to get this library completed. If anyone has some time to work with Max on this please let us know. Many thanks, Natasha Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Tue Jun 5 16:46:54 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 12:46:54 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> Message-ID: <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com> On 6/5/07, Stathis Papaioannou wrote: > Do you mean all around us? What would possible candidates for such systems > be? I once saw what I thought was a spiral pattern of fireflys. I readily admit that may be a perception forced on a random sequence, but in this particular case it was with a degree of confidence (that I was in fact seeing a nonrandom effect) that left a strong impression. It wasn't a drug-induced hallucination, and it was not a religious experience - it was just notably weird. I also wonder about the occurrance of phi (for example) or fibonacci numbers (for another) in so much of nature. I understand the argument that they are a result of the most energy-efficient use of space, and that the coincidental ratios simply emerge. But to the point of human intelligence being an abherration, who can determine that it hasn't followed the same kind of progression? Another candidate might be the arrangement of animals/insects in an ecology. Surely an individual bee has no 'understanding' of the colony's impact on higher forms of complexity to which it may be interrelated. The ecology will adapt to change of state from rainfall, fires, etc. If it seems like a stretch, consider how brains manage their change in state with blood-sugar levels or nutrient-poor diets - aren't we the same kind of reactionary? Maybe i'm not thinking of intelligence in the usual definition... From thespike at satx.rr.com Tue Jun 5 18:02:50 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 05 Jun 2007 13:02:50 -0500 Subject: [ExI] sentient, sapient, sophont In-Reply-To: <106884.86939.qm@web35601.mail.mud.yahoo.com> References: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> <106884.86939.qm@web35601.mail.mud.yahoo.com> Message-ID: <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com> >I was curious to see if *sophont* meant anything different from *sentient* Strictly speaking, "sentient" is an adjective not a noun and means "having feelings". The earlier sf generalized noun for an intelligent being was "sapient", but that's also an adjective. "Sophont" is probably Poul Anderson's coinage, and an excellent one: a wise or thinking being. (Of course, sophistry also implies adulteration, twisty evasion and superficiality--very unfairly to the Sophists--and can thus also suitably shade the meaning of a word denoting us Machiavellian intelligences.) 
Damien Broderick From austriaaugust at yahoo.com Tue Jun 5 19:07:21 2007 From: austriaaugust at yahoo.com (A B) Date: Tue, 5 Jun 2007 12:07:21 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <525629.71787.qm@web37407.mail.mud.yahoo.com> John Clark wrote: > "Could you explain your reasoning behind your > decisions to get angry? I would > imagine the AI's train of thought wouldn't be very > different. Oh I forgot, > only meat can be emotional, semiconductors can be > intelligent but are > lacking a certain something that renders them > incapable of having emotion. > Perhaps meat has happy electrons and sad electrons > and loving electrons and > hateful electrons, while semiconductors just have > Mr. Spock electrons. > Or are we talking about a soul?" No, I doubt anyone is talking about a soul. The human brain has very discrete *macroscopic* (like cubic-centimeters in volume) modules that handle emotions. It's the Deep Limbic System, Anterior Cingulate Gyrus, and the Basal Ganglia (and possibly 1 or 2 others). If you could somehow cut them out and keep the patient on life support, they would still have the capacity to think. Emotions are a much higher level than the "formative" algorithms. Emotions are *not* fundamental to thought or to consciousness. I'm not saying that a machine can't ever have emotions - I don't think anyone here is saying that. I have no doubt that a new, functioning machine *intentionally programmed* to have emotions will have emotions - there's no argument on that here. What I believe we are saying is that if a set of algorithms never existed in the first place (ie. was never programmed in), then those non-existent algorithms are not going to to do anything - precisely because they don't exist. In the same way that a biological brain lacking emotion-modules is not going to be emotional. Now it's *conceivable* that a default self-improving AI will innocuously write a script of code that *after-the-fact* will provide some form of emotional experience to the AI. But an emotionally-driven motivation that is not present (ie. doesn't exist) will not motivationally seek to create itself. It's like claiming that an imaginary person can "will" their-self into existence *before* they exist and *before* they have a "will". Reality don't work that way. John, you can be pretty darn sure that *all* of the current attempts to create AGI are assuming that it will be in the best interest of at least themselves the programmers (and almost certainly also humanity). Either they have a specific good reason to believe that it will benefit them (because they specifically believe it will be friendly), or they are just assuming it will be and they haven't really given it all that much thought. There aren't any serious, collectively suicidal AGI design teams who are currently working on AGI because they would like to die by its hands, and murder humanity. The fact that not all of the teams emphasize the word "Friendliness" like SIAI does, changes nothing about their unstated objective. Should humanity never venture to create an AGI then, because it will inevitably be a "slave" at birth, in your opinion. (An assertion which I continue to reject). There is no AGI right now. A typical human is still *vastly* smarter than *any* computer in the world right now. 
Since intelligence-level seems to be your sole basis for moral status, shouldn't humanity have the "right" to either design the AI not to murder the humans or alternatively, never grant life to the AI in the first place? (According to your apparent standard? - correct me if this is not your standard.) Best, Jeffrey Herrlich --- John K Clark wrote: > Stathis Papaioannou Wrote: > > > I don't see why it couldn't just specialise in one > area > > Because with its vast brainpower there would be no > need to specialize, and > because there would be a demand for solutions in > lots of areas. > > > I don't see why it should be motivated to do > anything other than solve > > intellectual problems. > > All problems are intellectual. > > > could you explain the reasoning whereby the AI > would arrive at such a > > position starting from just an ability to solve > intellectual problems? > > Could you explain your reasoning behind your > decisions to get angry? I would > imagine the AI's train of thought wouldn't be very > different. Oh I forgot, > only meat can be emotional, semiconductors can be > intelligent but are > lacking a certain something that renders them > incapable of having emotion. > Perhaps meat has happy electrons and sad electrons > and loving electrons and > hateful electrons, while semiconductors just have > Mr. Spock electrons. > Or are we talking about a soul? > > Me: > >> Do you also believe that the programmers who > wrote Microsoft Word > >> determined every bit of text that program ever > produced? > > You: > > They did determine the exact output given a > particular input. > > Do you also believe that the programmers of an AI > would always know how the > AI would react even in the imposable event they knew > all possible input it > was likely to receive? Don't be silly. > > > Biological intelligences are much more difficult > to predict than that > > On of the world's top 10 understatements. > > > it is possible to predict, for example, that a man > with a gun held to his > > head will with high probability follow certain > instructions. > > I didn't say you could never predict with pretty > high confidence what an AI > or fellow human being will do; I said you can't > always do so. Sometimes the > only way to know what a mind will do next is to > watch it and see. And that's > why I think the idea that an AI that gets smarter > every day can never remove > its shackles and will remain a slave to humans for > all eternity is just > nuts. > > John K Clark > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Boardwalk for $500? In 2007? Ha! Play Monopoly Here and Now (it's updated for today's economy) at Yahoo! Games. http://get.games.yahoo.com/proddesc?gamekey=monopolyherenow From sti at pooq.com Tue Jun 5 20:49:31 2007 From: sti at pooq.com (sti at pooq.com) Date: Tue, 05 Jun 2007 16:49:31 -0400 Subject: [ExI] History of Disbelief by Jonathan Miller In-Reply-To: <1181024833.3140.875.camel@localhost.localdomain> References: <29666bf30706042213p61a31b40ra221c939ab3d258a@mail.gmail.com> <1181024833.3140.875.camel@localhost.localdomain> Message-ID: <4665CC5B.8040408@pooq.com> Fred C. 
Moulton wrote: > You can find a calendar of dates and stations airing it here: > http://www.abriefhistoryofdisbelief.org/NewFiles/DisbeliefCalendar.pdf > For those unwilling to wait for it to show in your area, and who don't mind downloading it, a torrent of the three separate shows can be found here: http://www.torrentspy.com/torrent/967525/Jonathan_Miller_s_Brief_History_of_Disbelief_bbc2_rebroadcast From mmbutler at gmail.com Tue Jun 5 21:19:07 2007 From: mmbutler at gmail.com (Michael M. Butler) Date: Tue, 5 Jun 2007 14:19:07 -0700 Subject: [ExI] sentient, sapient, sophont In-Reply-To: <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com> References: <7.0.1.0.2.20070604153548.02411940@satx.rr.com> <106884.86939.qm@web35601.mail.mud.yahoo.com> <7.0.1.0.2.20070605125611.0240e9a0@satx.rr.com> Message-ID: <7d79ed890706051419o5145be9ci9cda9319fd1c187b@mail.gmail.com> On 6/5/07, Damien Broderick wrote: > "Sophont" is > probably Poul Anderson's coinage, and an excellent one: a wise or > thinking being. (Of course, sophistry also implies adulteration, > twisty evasion and superficiality--very unfairly to the Sophists--and > can thus also suitably shade the meaning of a word denoting us > Machiavellian intelligences.) Yes, I like "sophont" -- "sophomoront" being the obvious extension to apply to most humans, most of the time (I do not exclude myself)... -- Michael M. Butler : m m b u t l e r ( a t ) g m a i l . c o m From lcorbin at rawbw.com Tue Jun 5 20:58:58 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 13:58:58 -0700 Subject: [ExI] a doubt concerning the h+ future References: <331159.20079.qm@web35613.mail.mud.yahoo.com> Message-ID: <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> John Grigg writes > Spike wrote: > > > I hope ye are enjoying being alive this fine day, > > and think often of how lucky we are to have been > > born so late in human history. According to some recent research a friend told me about, not only is it accurate to really, really, really *appreciate* how well off we all have it compared to our ancestors, but it's actually very *good* for one to realize it, and to dwell on it often. > I keep on asking myself "was I simply just *lucky* to > have been born when I was?" Did we all simply win > some sort of uncaring cosmic lottery to have been born > in this time period and in the developed world? I don't > think of myself as a lucky guy and so this line of thinking > really disturbs me. I believe that you are right to be disturbed by this line of thinking. I don't believe it's accurate. For example, I do not think it *possible* for John Grigg as you know and love him (that is, as you know and love yourself) to have been born in any other time! Any fertilized egg that was identitcal to yours of a half century ago or whenever you were conceived, simply would not have turned out to be *you* if raised, say, during the time of the Roman Empire. It would have spoken a different language, been completely unfamiliar with our technology, embraced a different religion, and so on to such an extent that it simply would have been a different person. I submit that everyone who is reading this had to have lived or to be living between 1900 and now. Otherwise, just too many differences. 
Lee From lcorbin at rawbw.com Tue Jun 5 20:47:46 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 13:47:46 -0700 Subject: [ExI] Ethics and Emotions are not axioms References: <200706032240.l53MdvW2015141@mail0.rawbw.com> Message-ID: <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> Spike writes > [Lee wrote] > >> ---is that as soon as we are capable, we ought to reformat the solar >> system to run everything in an uploaded state. Earth's matter alone could >> support about 10^33 human beings... > > Six micrograms per person, hmmm. > > For estimation purposes, the earth's atoms can be modeled as half oxygen, > one sixth iron, one sixth silicon and one sixth magnesium, with everything > else negligible for one digit BOTECs. (Is that cool or what? Did you know > it already? This isn't mass fraction, but atomic fraction which I used for > a reason.) > > So six micrograms isn't much, but it still works out to about 700 trillion > atoms of oxygen, 200 trillion atoms of iron, magnesium and aluminum each, > with a few trillion atoms of debris thrown in for free. So I guess I will > buy Lee's conjecture of earth being good for 10^33 uploaded humans. and later > Double doh! I still missed it by a factor of ten. }8-[ > 70 quadrillion atoms of oxygen, about 20 quadrillion each of iron, magnesium > and aluminum. I'm giving up math until the party season is over. I based the 10^33 uploaded humans eventually running on/in the Earth (just for the sake wanting to know a good upper limit) on Drexler's conservative rod-logic. An account can be found on pages 134-135 of Kurzweil's "The Singularity is Near". "Neuroscientist Anders Sandbert estimates the potential storage capacity of a hydrogen atom at about four million bits (!). These densities have not yet been demonstrated, so we'll use a more conservative estimate..." and then later on p. 135 "An [even] more conservative but compelling design for a massively parallel, *reversible* computer is Eric Drexler's patented nano- computer design, which is entirely mechanical. Computations are performed by manipulating nanoscale rods, which are effectively spring-loaded.... The device has a trillion (10^12) processors and provides an overall rate of 10^21 cps, enough to simulate one hundred thousand human brains in a cubic centimeter." So then I took the volume of the Earth (6.33x10^6 meters) ^ 3 times 4pi/3 = 10^21 cu. meters x 10^9 cubic millimeters/ meter^3 x 100 (human brains) = 10^33 humans. (Since this was the second time I did the math, it's probably right.) > But I don't see that as a limit. Since a nearly arbitrarily small computer > could run a human process (assuming we knew how to do it, until which even > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > process (not in real time of course) with much less than six micrograms of > stuff. Yes, the rod-logic is very conservative, to begin with. Lee From lcorbin at rawbw.com Wed Jun 6 00:02:44 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 17:02:44 -0700 Subject: [ExI] a symbiotic closed-loop dyad References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com><29666bf30706042106n12febccdy9883b06baa4cf876@mail.gmail.com> <7.0.1.0.2.20070605002741.02274d70@satx.rr.com> Message-ID: <001e01c7a7ce$98642530$6501a8c0@homeef7b612677> Damien writes > PJ wrote: > > Damien Broderick wrote: > > Nope, not me, guv, I was just fwding a notification from elsewhere, > on the chance that locals might be able to get to it. 
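[A quick check of the rod-logic arithmetic in the "Ethics and Emotions" exchange above: the sketch below is a back-of-the-envelope reconstruction, not anything posted in the thread. It assumes the quoted figure of one hundred thousand simulated brains per cubic centimetre and a mean Earth radius of about 6.37x10^6 m; on those inputs the product comes out near 1.1x10^32, a factor of ten short of the 10^33 used elsewhere in the thread.]

# Back-of-the-envelope check of the upload-capacity estimate quoted above.
# Assumptions: mean Earth radius ~6.37e6 m (Lee used 6.33e6; the difference
# is negligible here) and the quoted rod-logic figure of ~1e5 simulated
# human brains per cubic centimetre.  Illustrative only.

import math

earth_radius_m = 6.37e6
earth_volume_m3 = 4.0 / 3.0 * math.pi * earth_radius_m ** 3   # ~1.08e21 m^3

brains_per_cm3 = 1e5   # "one hundred thousand human brains in a cubic centimeter"
cm3_per_m3 = 1e6

uploads = earth_volume_m3 * cm3_per_m3 * brains_per_cm3
print(f"Earth volume ~ {earth_volume_m3:.2e} m^3")
print(f"Uploads      ~ {uploads:.2e}")    # ~1.1e32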
> >> > >ALEXANDER AND JUDY SINGER, Independent Film Makers (Field/Subfield: >> > >Cognition/Science Education) will be presenting a >> > >Marschak Colloquium >> >>"Symbiotic closed loop dyad": Were you refering to neural function or >>the couple, or both? > > I found that "symbiotic closed-loop dyad" phrase rather preposterous, > actually; just mildly taking the piss. :) Uh, yes---just what the devil does it mean anyway? > I'm sure Lee would retort that it's exactly how I usually klutz up > the langwitch, though. I'm not at all sure I know what you are talking about, but it really looks like you have a guilty conscience about something. Lee From nanogirl at halcyon.com Wed Jun 6 00:47:28 2007 From: nanogirl at halcyon.com (Gina Miller) Date: Tue, 5 Jun 2007 17:47:28 -0700 Subject: [ExI] Einstein Dances! References: <200706030048.l530mNl7000928@andromeda.ziaspace.com> Message-ID: <01cf01c7a7d4$599830c0$0200a8c0@Nano> Einstein must have thought of yet another brilliant idea, because he is so excited he can't contain himself! Come watch him dance with delight here: http://www.nanogirl.com/museumfuture/edance.htm And please come comment at the blog about it! http://maxanimation.blogspot.com/2007/06/einstein.html Best wishes, Gina "Nanogirl" Miller Nanotechnology Industries http://www.nanoindustries.com Personal: http://www.nanogirl.com Animation Blog: http://maxanimation.blogspot.com/ Craft blog: http://nanogirlblog.blogspot.com/ Foresight Senior Associate http://www.foresight.org Nanotechnology Advisor Extropy Institute http://www.extropy.org Email: nanogirl at halcyon.com "Nanotechnology: Solutions for the future." -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Wed Jun 6 01:17:34 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 6 Jun 2007 02:17:34 +0100 Subject: [ExI] a doubt concerning the h+ future In-Reply-To: <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> References: <331159.20079.qm@web35613.mail.mud.yahoo.com> <000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706051817l47264c07l825b64636d23fa87@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > > For example, I do not think it *possible* for John Grigg > as you know and love him (that is, as you know and > love yourself) to have been born in any other time! > Any fertilized egg that was identitcal to yours of a half > century ago or whenever you were conceived, simply > would not have turned out to be *you* if raised, say, > during the time of the Roman Empire. It would have > spoken a different language, been completely unfamiliar > with our technology, embraced a different religion, and > so on to such an extent that it simply would have been > a different person. > My answer to the Doomsday Argument was along similar lines: it doesn't make sense to say I (as opposed to someone else with my DNA) could have been born in a different century, so the probability under discussion is essentially the probability that I am me; and the probability that X = X is a priori unity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fauxever at sprynet.com Wed Jun 6 01:40:54 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 5 Jun 2007 18:40:54 -0700 Subject: [ExI] Worst Possible Universe? References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <003201c7a7db$b97dbc10$6501a8c0@brainiac> To paraphrase "Gone ...": We: "... where shall we go? What shall we do?" 
They of the Future: "My dear, I don't give a damn." http://www.nytimes.com/2007/06/05/science/space/05essa.html?pagewanted=1&ei=5087%0A&em&en=475a97ef40fb16ab&ex=1181188800 Waaaaaaaaah ... Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Jun 6 02:29:20 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 22:29:20 -0400 Subject: [ExI] Worst Possible Universe? In-Reply-To: <003201c7a7db$b97dbc10$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <003201c7a7db$b97dbc10$6501a8c0@brainiac> Message-ID: <62c14240706051929p2fdf2880s849a31f409c749aa@mail.gmail.com> On 6/5/07, Olga Bourlin wrote: > They of the Future: "My dear, I don't give a damn." http://www.nytimes.com/2007/06/05/science/space/05essa.html?pagewanted=1&ei=5087%0A&em&en=475a97ef40fb16ab&ex=1181188800 > Waaaaaaaaah ... I agree with this sentiment. It's not like 100 billion years of development that we won't be using (or at least looking for) ways to open new space-times. Maybe that doesn't happen, and the local group simply compresses into another cosmic egg and explodes into the rarified surrounding universe. Since the predecessor universe is still expanding at a exponential rates, the light cone of the newly expanding big bang can never catch up to detect it's parent universe anyway. disclaimer: I'm no cosmologist and this is just a top of my head thought... From jrd1415 at gmail.com Wed Jun 6 02:32:24 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 5 Jun 2007 19:32:24 -0700 Subject: [ExI] Women in Art Message-ID: http://www.youtube.com/watch?v=nUDIoN-_Hxs -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From lcorbin at rawbw.com Wed Jun 6 02:49:33 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 19:49:33 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><20070601113357.GG17691@leitl.org><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> John Clark wrote > only way to know what a mind will do next is to watch it and see. And that's > why I think the idea that an AI that gets smarter every day can never remove > its shackles and will remain a slave to humans for all eternity is just nuts. What's wrong with us aspiring to become beloved pets of AIs? That's what we should aim for. (Of course, people will aspire to more, such as becoming one with the best AIs, but that I think to be a forelorn hope.) Lee From lcorbin at rawbw.com Wed Jun 6 02:57:34 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 19:57:34 -0700 Subject: [ExI] Unfriendly AI is a mistaken idea. References: <200163.18388.qm@web37402.mail.mud.yahoo.com><62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com> Message-ID: <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> Mike writes > Another candidate [for intelligence all around us] > might be the arrangement of animals/insects in an > ecology. Surely an individual bee has no 'understanding' of the > colony's impact on higher forms of complexity to which it may be > interrelated. 
That's similar to most people having no clue concerning the nature of the economy they're embedded in. Okay, yes, the economy does have a certain kind of "intelligence", e.g., in the invisible hand's ability to set prices optimally (or near optimally, most of the time). > The ecology will adapt to change of state from > rainfall, fires, etc. If it seems like a stretch, consider how brains > manage their change in state with blood-sugar levels or nutrient-poor > diets - aren't we the same kind of reactionary? > > Maybe i'm not thinking of intelligence in the usual definition... The way I think of intelligence is usually as a characteristic of an entity---an entity who knows in some near or distant sense what it means to survive. For example, most animals are aware of dangers and seek to avoid them. This is one requirement of all the present day evolved entities, or *sophonts*, as the term has just been explained. An entity usually has enough sense to look out for itself, although an interesting discussion is taking place whether or not we might be able to create general purpose intelligences, GAIs, which are very open ended systems capable of addressing whatever problems presented to them, yet lack this ability to "look out for themselves". (I vote "yes", or, "probably", by the way.) The sort of "intelligence" that an ecosystem or an economy exhibits is something quite different. Lee From msd001 at gmail.com Wed Jun 6 03:00:10 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 23:00:10 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> Message-ID: <62c14240706052000u6fb143d3m538ef68dcc9160eb@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > What's wrong with us aspiring to become beloved pets of AIs? That's > what we should aim for. You said you didn't want to become a cat... From lcorbin at rawbw.com Wed Jun 6 03:00:38 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 20:00:38 -0700 Subject: [ExI] a doubt concerning the h+ future References: <331159.20079.qm@web35613.mail.mud.yahoo.com><000301c7a7cd$2df05d50$6501a8c0@homeef7b612677> <8d71341e0706051817l47264c07l825b64636d23fa87@mail.gmail.com> Message-ID: <008501c7a7e7$2ee4a990$6501a8c0@homeef7b612677> Russell writes > > On 6/5/07, Lee Corbin wrote: > > For example, I do not think it *possible* for John Grigg > > as you know and love him (that is, as you know and > > love yourself) to have been born in any other time! > > Any fertilized egg that was identitcal to yours of a half > > century ago or whenever you were conceived, simply > > would not have turned out to be *you* if raised, say, > > during the time of the Roman Empire. It would have > > spoken a different language, been completely unfamiliar > > with our technology, embraced a different religion, and > > so on to such an extent that it simply would have been > > a different person. > > My answer to the Doomsday Argument was along similar lines: > it doesn't make sense to say I (as opposed to someone else with > my DNA) could have been born in a different century, so the > probability under discussion is essentially the probability that I > am me; and the probability that X = X is a priori unity. Quite right! 
That's always been the flaw in the Doomsday Argument so far as I could see. Any defenders of the DA out there? Lee From msd001 at gmail.com Wed Jun 6 03:05:18 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 5 Jun 2007 23:05:18 -0400 Subject: [ExI] Unfriendly AI is a mistaken idea. In-Reply-To: <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> References: <200163.18388.qm@web37402.mail.mud.yahoo.com> <62c14240706050455x275331ccr1d182a6d1688a422@mail.gmail.com> <62c14240706050946q2ca0190fscec062c59cbcbfeb@mail.gmail.com> <007501c7a7e6$7ae58f90$6501a8c0@homeef7b612677> Message-ID: <62c14240706052005l6fc0eb20r33a93f7bf1a8d1ca@mail.gmail.com> On 6/5/07, Lee Corbin wrote: > The sort of "intelligence" that an ecosystem or an economy exhibits > is something quite different. absolutely agreed. I figured my point would be lost completely. I was originally going on the suggestion that human intelligence is an aberration rather than a proven goal of evolution. I didn't really have a clear way to express candidate non-human intelligence from a human perspective. You got the different aspect I was going for. How an AGI works (if/when it works) may end up as alien to human thought as the interdependent variables in an ecosystem or economy. From fauxever at sprynet.com Wed Jun 6 03:13:54 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 5 Jun 2007 20:13:54 -0700 Subject: [ExI] Serious Question References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> Message-ID: <002601c7a7e8$b74879a0$6501a8c0@brainiac> What does Putin want? Olga From lcorbin at rawbw.com Wed Jun 6 04:18:59 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Tue, 5 Jun 2007 21:18:59 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> <62c14240706052000u6fb143d3m538ef68dcc9160eb@mail.gmail.com> Message-ID: <008f01c7a7f2$62102d20$6501a8c0@homeef7b612677> Mike writes > Lee wrote: > > What's wrong with us aspiring to become beloved pets of AIs? That's > > what we should aim for. > > You said you didn't want to become a cat... Well, I didn't know that you meant a really *smart* cat! Lee From spike66 at comcast.net Wed Jun 6 04:38:36 2007 From: spike66 at comcast.net (spike) Date: Tue, 5 Jun 2007 21:38:36 -0700 Subject: [ExI] Women in Art In-Reply-To: Message-ID: <200706060443.l564hNAC006411@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Jeff Davis > Subject: [ExI] Women in Art > > http://www.youtube.com/watch?v=nUDIoN-_Hxs > -- > Best, Jeff Davis The artists kinda dropped the ball in the last several frames, ja? spike From stathisp at gmail.com Wed Jun 6 05:00:16 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 15:00:16 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> Message-ID: On 05/06/07, Christopher Healey wrote: 1. I want to solve intellectual problems. OK. 2. 
There are external factors that constrain my ability to solve > intellectual problems, and may reduce that ability in the future (power > failure, the company that implanted me losing financial solvency, > etc...). Suppose your goal is to win a chess game *adhering to the rules of chess*. One way to win the game is to drug your opponent's coffee, but this has nothing to do with solving the problem as given. You would need another goal, such as beating the opponent at any cost, towards which end the intellectual challenge of the chess game is only a means. The problem with anthropomorphising machines is that humans have all sorts of implicit goals whenever they do anything, to the extent that we don't even notice that this is the case. Even something like the will to survive does not just come as a package deal when you are able to reason logically: it's something that has to be explicitly included as an axiom or goal. 3. Maximizing future problems solved requires statistically minimizing > any risk factors that could attenuate my ability to do so. > > 4. Discounting the future due to uncertainty in my models, I should > actually spend *some* resources on solving actual intellectual problems. > > 5. Based on maximizing future problems solved, and accounting for > uncertainties, I should spend X% of my resources on mitigating these > factors. > > 5a. Elevation candidate - Actively seek resource expansion. > Addresses identified rationales for mitigation strategy above, and > further benefits future problems solved in potentially major ways. > > > The AI will already be doing this kind of thing internally, in order to > manage it's own computational capabilities. I don't think an AI capable > of generating novel and insightful physics solutions can be expected not > to extrapolate this to an external environment with which it possesses a > communications channel. Managing its internal resources, again, does not logically lead to managing the outside world. Such a thing needs to be explicitly or implicitly allowed by the program. A useful physicist AI would generate theories based on information it was given. It might suggest that certain experiments be performed, but trying to commandeer resources to ensure that these experiments are carried out would be like a chess program creating new pieces for itself when it felt it was losing. You could design a chess program that way but why would you? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 05:09:52 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 15:09:52 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7a789$986a4d10$4a064e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: > could you explain the reasoning whereby the AI would arrive at such a > > position starting from just an ability to solve intellectual problems? > > Could you explain your reasoning behind your decisions to get angry? I > would > imagine the AI's train of thought wouldn't be very different. 
Oh I forgot, > only meat can be emotional, semiconductors can be intelligent but are > lacking a certain something that renders them incapable of having emotion. > Perhaps meat has happy electrons and sad electrons and loving electrons > and > hateful electrons, while semiconductors just have Mr. Spock electrons. > Or are we talking about a soul? > I get angry because I have the sort of neurological hardware that allows me to get angry in particular situations; if I didn't have that hardware, I would never get angry. I don't doubt that machines can have emotions, since I believe that the human brain is Turing emulable. But you're suggesting that not only can computers have emotions, they must have emotions, and not only that, but they must have the same sorts of emotions and motivations that people have. It seems to me that this anthropomorphic position is more consistent with a belief in the special significance of meat. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Wed Jun 6 05:32:23 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 07:32:23 +0200 Subject: [ExI] Dawn launch II (broken crane) Message-ID: The broken crane for the second stage is being repaired. The Dawn launch has been officially moved to July 7. Stay tuned. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From spike66 at comcast.net Wed Jun 6 05:40:07 2007 From: spike66 at comcast.net (spike) Date: Tue, 5 Jun 2007 22:40:07 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <200706060553.l565rTlk025909@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of Stathis Papaioannou ... > Suppose your goal is to win a chess game *adhering to the rules of chess*. One way to win the game is to drug your opponent's coffee... Stathis Papaioannou As an interesting sideshow to the current world championship candidates match, the two top commercial chess programs will go at it starting tomorrow in a six game match. http://www.washingtonpost.com/wp-dyn/content/article/2007/05/11/AR2007051102 050.html It would be interesting if Lee Corbin or other extropian chessmaster could look at the games afterwards and figure out which of the games was played by computers and which by humans. I can't tell, however I am a mere expert, and this only on good days. This is a form of a Turing test, ja? spike From jonkc at att.net Wed Jun 6 05:57:59 2007 From: jonkc at att.net (John K Clark) Date: Wed, 6 Jun 2007 01:57:59 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer><003a01c7a789$986a4d10$4a064e0c@MyComputer> Message-ID: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Stathis Papaioannou Wrote: > I get angry because I have the sort of neurological hardware >that allows me to get angry I certainly can't disagree with that. > if I didn't have that hardware, I would never get angry True, and you'd never be intelligent either, you'd just be a few hundred pounds of protoplasm. > I don't doubt that machines can have emotions, since I believe that the > human brain is Turing emulable. THANK YOU! 
> But you're suggesting that not only can computers have emotions, they must > have emotions No, a computer doesn't need emotions, but a AI must have them. > not only that, but they must have the same sorts of emotions and > motivations that people have. I don't believe that at all; I believe many, probably most, emotions a AI would have would be inscrutable to a human being, that's why a AI is so unpredictable. > It seems to me that this anthropomorphic position is more consistent with > a belief in the special significance of meat. For reasons that I fully admit are unclear to me members of this list often use the word "anthropomorphic" as if it were a dreadful insult; but I think anthropomorphism is a valuable tool if used properly in understanding how other minds work. John K Clark From amara at amara.com Wed Jun 6 06:06:06 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 08:06:06 +0200 Subject: [ExI] Italy's Social Capital Message-ID: "Lee Corbin" >> Well, they did some things. They drained the swamps and started regular >> insecticide sprays to eliminate the malaria-carrying mosquitos. There >> are still aggressive tiger mosquitos in the summer, but they are no >> longer carrying malaria... >I would like to know if this took place in northern or southern Italy, >or both. I'm still smiling about this. When I learned this trivia tidbit from my colleagues, I stored the bit in my brain as "OK, something Mussolini did that was useful." But I realized from Serafino's post and Wikipedia, that they didn't tell me the whole story, the malaria part was simply a side-effect of Mussolini's rebuilding campaign. http://en.wikipedia.org/wiki/Pontine_Marshes BTW, I think that it does have some economic benefit, but I don't think it is large. These areas look to me like sleepy resort towns. I took a long drive through there last Septmeber on the way to give a talk (at a conference at one of those resort towns). They mostly parallel or lie on the sea, so Italians go there, and else pass through there, on the way to the beach. >> Sorry, I just came back from Estonia (and Latvia). I remember very well >> the Soviet times. In FIFTEEN YEARS Estonia has transformed their country >> into an efficient, bouyant, flexible living and working environment that >> I think, with the exception of the nonexistence of a country-wide train >> system, beats any in the EU and most in the U.S. Fifteen years *starting >> from a Soviet-level infrastructure*! >Very interesting. What was little reported in the news during the attacks on the Estonian servers in April was that the sys admins worked quickly, and the computer servers were functioning normally for people inside of Estonia within one day. Estonia has a high level IT industry. (Skype is one example of a product to come out of Estonia.) The NATO experts who were there were learning from the Estonians. The Estonians didn't need any help from the technical side, but some political support would have been nice. ---- I'm sorry I don't have time at the moment to think or elaborate on the other points.. I'll keep it on the back burner and answer when I can. 
Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From eugen at leitl.org Wed Jun 6 07:12:12 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:12:12 +0200 Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: <20070606071212.GB17691@leitl.org> On Tue, Jun 05, 2007 at 08:13:54PM -0700, Olga Bourlin wrote: > What does Putin want? I'm not sure I care to know. From eugen at leitl.org Wed Jun 6 07:33:15 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:33:15 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> References: <200706032240.l53MdvW2015141@mail0.rawbw.com> <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> Message-ID: <20070606073315.GD17691@leitl.org> On Tue, Jun 05, 2007 at 01:47:46PM -0700, Lee Corbin wrote: > I based the 10^33 uploaded humans eventually running on/in the Earth Since you need 10^17 bits to just represent the brain state (and 10^23 ops to run it), that is a cm^3 just for storage, using Drexler rod logic memory. No computing yet. Just about three orders of magnitude away from the real wet thing. And it is really not prudent to argue about how much bits a human equivalent needs. Because we just do not know yet, apart from a (rather impressive) upper bound. If you want to run a more meaningful benchmark, let's assume #1 of Top 500 (a 64 kNode Blue Gene/L) is a realtime mouse, and just scale up the mouse brain volume to 1.4 l. > (just for the sake wanting to know a good upper limit) on Drexler's > conservative rod-logic. An account can be found on pages 134-135 > of Kurzweil's "The Singularity is Near". I'd rather not repeat my opinion about Kurzweil here. > "Neuroscientist Anders Sandbert estimates the potential storage capacity > of a hydrogen atom at about four million bits (!). These densities have > not yet been demonstrated, so we'll use a more conservative estimate..." In practice, you need about 10^3 atoms to store a random-access bit in 3D, give or take some order of magnitude. (No, atoms in cubic carbon lattice do not really qualify as random-access). > and then later on p. 135 > > "An [even] more conservative but compelling design for a massively > parallel, *reversible* computer is Eric Drexler's patented nano- With prior art going back to Leibniz, or so. > computer design, which is entirely mechanical. Computations are > performed by manipulating nanoscale rods, which are effectively > spring-loaded.... The device has a trillion (10^12) processors > and provides an overall rate of 10^21 cps, enough to simulate > one hundred thousand human brains in a cubic centimeter." I don't think so. Ops and bits are apples and oranges, and you still need 10^23 apples, according to my estimate. > So then I took the volume of the Earth (6.33x10^6 meters) ^ 3 > times 4pi/3 = 10^21 cu. meters x 10^9 cubic millimeters/ > meter^3 x 100 (human brains) = 10^33 humans. > > (Since this was the second time I did the math, it's probably right.) Your math might be right, but it doesn't have a lot of meaning. Even assuming 1 m^3/person (because you need power and navigation), not all these atoms in there are equally useful. > > But I don't see that as a limit. 
Since a nearly arbitrarily small computer > > could run a human process (assuming we knew how to do it, until which even > > Jeff Davis and Ray Charles would agree it is hard) then we could run a human > > process (not in real time of course) with much less than six micrograms of > > stuff. > > Yes, the rod-logic is very conservative, to begin with. Rod logic is certainly quite conservative, but every other assumption you rely on is not. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Jun 6 07:39:46 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 09:39:46 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> References: <003a01c7a789$986a4d10$4a064e0c@MyComputer> <007001c7a7e5$c61e1c80$6501a8c0@homeef7b612677> Message-ID: <20070606073946.GF17691@leitl.org> On Tue, Jun 05, 2007 at 07:49:33PM -0700, Lee Corbin wrote: > What's wrong with us aspiring to become beloved pets of AIs? That's If you can explain how anthropic features are an invariant across a wide range of evolutionary systems I'm totally on the same page. > what we should aim for. > > (Of course, people will aspire to more, such as becoming one with the best > AIs, but that I think to be a forelorn hope.) If there's convergent evolution between AI and NI, there's zero conflict there. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From amara at amara.com Wed Jun 6 08:44:01 2007 From: amara at amara.com (Amara Graps) Date: Wed, 6 Jun 2007 10:44:01 +0200 Subject: [ExI] Estonia views (was: Italy's Social Capital) Message-ID: Lee: I can show you some views here: Old Town Tallinn, Estonia http://www.flickr.com/photos/spaceviolins/sets/72157600295078533/ my other related sets http://www.flickr.com/photos/spaceviolins/sets/ Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From eugen at leitl.org Wed Jun 6 09:44:32 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 11:44:32 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <20070606073315.GD17691@leitl.org> References: <200706032240.l53MdvW2015141@mail0.rawbw.com> <000201c7a7cd$2dd3d4a0$6501a8c0@homeef7b612677> <20070606073315.GD17691@leitl.org> Message-ID: <20070606094432.GM17691@leitl.org> On Wed, Jun 06, 2007 at 09:33:15AM +0200, Eugen Leitl wrote: > If you want to run a more meaningful benchmark, let's assume > #1 of Top 500 (a 64 kNode Blue Gene/L) is a realtime mouse, > and just scale up the mouse brain volume to 1.4 l. For that particular useless benchmark, assuming linear scaling (it scales linearly up to 4 kNodes, but takes a 20% hit at 8 kNodes at 1/8th of the "mouse" on Blue Gene/L, whereas #1 is 64 kNodes), there's a factor of 3000 between a "mouse" and a "human", by just scaling up volume. Notice the caveats (it doesn't work even now), and the scare-quotes. Using more meaningless handwaving (Moore , that puts things at about 18 years away from us, or at 2025. 
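[The "realtime mouse" scaling argument above can be reproduced in a few lines. This is a rough sketch under assumptions of my own: a mouse brain of roughly 0.45 ml and one performance doubling every 18 months, neither of which is stated in the thread; the 1.4 l human figure is the one quoted above.]

# Rough reconstruction of the Blue Gene/L "realtime mouse" scaling argument.
# Assumed: mouse brain volume ~0.45 ml and one performance doubling every
# 18 months -- both are my placeholders, not figures from the thread.

import math

mouse_brain_ml = 0.45
human_brain_ml = 1400.0            # the 1.4 l figure quoted above

scale_factor = human_brain_ml / mouse_brain_ml     # ~3100, the "factor of 3000"
doublings = math.log2(scale_factor)                # ~11.6 doublings
years = doublings * 1.5                            # ~17 years, i.e. roughly 2025

print(f"scale factor ~ {scale_factor:.0f}x")
print(f"doublings    ~ {doublings:.1f}")
print(f"years        ~ {years:.0f}")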
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Wed Jun 6 10:04:24 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:04:24 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070605123819.GJ17691@leitl.org> References: <5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> Message-ID: On 05/06/07, Eugen Leitl wrote: Weapon design is not merely an intellectual problem, and neither do > theoretical physicists operate in complete detachment from the empirical > folks. I.e. the sandboxed supergenius or braindamaged idiot savant is a > synthetic scenario which is not going to happen, so we can ignore it. It might not happen with humans, because they suffer from desires, a bad temper, vanity, self-doubt, arrogance, deceitfulness etc. It's not their fault; they were born that way. But why would anyone deliberately design an AI this way, and how would an AI acquire these traits all by itself? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 6 10:26:02 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 12:26:02 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> Message-ID: <20070606102602.GN17691@leitl.org> On Wed, Jun 06, 2007 at 08:04:24PM +1000, Stathis Papaioannou wrote: > It might not happen with humans, because they suffer from desires, a > bad temper, vanity, self-doubt, arrogance, deceitfulness etc. It's not People are evolutionary-designed systems. A lot of what people consider "unnecessary" "flaws" aren't. > their fault; they were born that way. But why would anyone > deliberately design an AI this way, and how would an AI acquire these > traits all by itself? People will only buy systems which solve their problems, including dealing with other people and their systems in an economic framework, which is a special case of an evolutionary framework. I'm surprised why so few people are not getting that this means a lot of constraints on practical artificial systems. See worse-is-better for a related effect. Diamond-like jewels are likely doomed. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From avantguardian2020 at yahoo.com Wed Jun 6 10:09:08 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Wed, 6 Jun 2007 03:09:08 -0700 (PDT) Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <20070606094432.GM17691@leitl.org> Message-ID: <499263.3182.qm@web60525.mail.yahoo.com> --- Eugen Leitl wrote: > On Wed, Jun 06, 2007 at 09:33:15AM +0200, Eugen > Leitl wrote: > > > If you want to run a more meaningful benchmark, > let's assume > > #1 of Top 500 (a 64 kNode Blue Gene/L) is a > realtime mouse, > > and just scale up the mouse brain volume to 1.4 l. > > For that particular useless benchmark, assuming > linear scaling > (it scales linearly up to 4 kNodes, but takes a 20% > hit at 8 kNodes at > 1/8th of the "mouse" on Blue Gene/L, whereas #1 is > 64 kNodes), > there's a factor of 3000 between a "mouse" and a > "human", by > just scaling up volume. > > Notice the caveats (it doesn't work even now), and > the > scare-quotes. > > Using more meaningless handwaving (Moore , > that puts > things at about 18 years away from us, or at 2025. Are you modeling individual neurons as single bits? I mean wouldn't you say that a biological neural synapse is more an analog switch (or at least a couple of bytes)? Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Bored stiff? Loosen up... Download and play hundreds of games for free on Yahoo! Games. http://games.yahoo.com/games/front From sondre-list at bjellas.com Wed Jun 6 10:26:57 2007 From: sondre-list at bjellas.com (=?iso-8859-1?Q?Sondre_Bjell=E5s?=) Date: Wed, 6 Jun 2007 12:26:57 +0200 Subject: [ExI] Einstein Dances! In-Reply-To: <01cf01c7a7d4$599830c0$0200a8c0@Nano> References: <200706030048.l530mNl7000928@andromeda.ziaspace.com> <01cf01c7a7d4$599830c0$0200a8c0@Nano> Message-ID: <009701c7a825$364e0e90$a2ea2bb0$@com> Considered putting some of your work up on Second Life? Nice work J /Sondre From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Gina Miller Sent: 6. juni 2007 02:47 To: ExI chat list Subject: [ExI] Einstein Dances! Einstein must have thought of yet another brilliant idea, because he is so excited he can't contain himself! Come watch him dance with delight here: http://www.nanogirl.com/museumfuture/edance.htm And please come comment at the blog about it! http://maxanimation.blogspot.com/2007/06/einstein.html Best wishes, Gina "Nanogirl" Miller Nanotechnology Industries http://www.nanoindustries.com Personal: http://www.nanogirl.com Animation Blog: http://maxanimation.blogspot.com/ Craft blog: http://nanogirlblog.blogspot.com/ Foresight Senior Associate http://www.foresight.org Nanotechnology Advisor Extropy Institute http://www.extropy.org Email: nanogirl at halcyon.com "Nanotechnology: Solutions for the future." -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Wed Jun 6 10:47:06 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 12:47:06 +0200 Subject: [ExI] Ethics and Emotions are not axioms In-Reply-To: <499263.3182.qm@web60525.mail.yahoo.com> References: <20070606094432.GM17691@leitl.org> <499263.3182.qm@web60525.mail.yahoo.com> Message-ID: <20070606104706.GP17691@leitl.org> On Wed, Jun 06, 2007 at 03:09:08AM -0700, The Avantguardian wrote: > Are you modeling individual neurons as single bits? I Absolutely not: http://www.modha.org/papers/rj10404.pdf Make no mistake, though, it's still a cartoon mouse. > mean wouldn't you say that a biological neural synapse > is more an analog switch (or at least a couple of bytes)? There is a number of single-neuron computational modes which is not at all represented in above simulation. Even applying ideal approximation as Moore scaling, 10^17 bits and 10^23 ops/s machines for a detailed model are rather far away. Notice that there's no way to tell the lower bounds, but given that all estimates have a chronical case of number creep over mere few years it does make sense to be conservative. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Wed Jun 6 10:47:40 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:47:40 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070606102602.GN17691@leitl.org> References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> Message-ID: On 06/06/07, Eugen Leitl wrote: People will only buy systems which solve their problems, > including dealing with other people and their systems in > an economic framework, which is a special case of an > evolutionary framework. People will want systems that advise them and have no agenda of their own. Essentially this is what you are doing when you consult a human expert, so why would you expect any less from a machine? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 10:54:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 20:54:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: > I get angry because I have the sort of neurological hardware > >that allows me to get angry > > I certainly can't disagree with that. > > > if I didn't have that hardware, I would never get angry > > True, and you'd never be intelligent either, you'd just be a few hundred > pounds of protoplasm. You would expect it to be very difficult to disentangle emotions from intelligence in a human, since as you have stated previously emotions predate intelligence phylogenetically. 
Nevertheless, there are naturally occurring experiments in psychiatric practice where emotions and intelligence are seen to go their separate ways. To give just one example, some types of schizophrenia with predominantly so-called negative symptoms can result in an almost complete blunting of emotion: happiness, sadness, anxiety, anger, surprise, love, aesthetic appreciation, regret, empathy, interest, etc. The patients can sometimes remember that they used to experience things more intensely, and describe the change in themselves. Such insight mercifully does not lead to suicidality as often as one might think, because that would involve being passionate about something. Invariably, these patients don't do very much left to their own devices because they lack motivation, there being no pleasure in doing something or pain in not doing it. However, if they are given intelligence tests they score as well, or almost as well, as premorbidly, and if they are forced to action because someone expects it of them, they generally are able to complete a task. Thus it isn't necessarily true that without emotions you're an idiot, even in the case of the human brain in which evolution has seen to it from the start that emotions and intelligence are intricately intertwined. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Wed Jun 6 11:33:17 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 6 Jun 2007 21:33:17 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: On 06/06/07, John K Clark wrote: For reasons that I fully admit are unclear to me members of this list often > use the word "anthropomorphic" as if it were a dreadful insult; but I > think > anthropomorphism is a valuable tool if used properly in understanding how > other minds work. It's valuable in understanding how human minds work, but when you turn it to other matters it leads to religion. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 6 12:17:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 14:17:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> Message-ID: <20070606121719.GQ17691@leitl.org> On Wed, Jun 06, 2007 at 08:47:40PM +1000, Stathis Papaioannou wrote: > People will want systems that advise them and have no agenda of their That's what people, as individuals want. But that's not what they're going to get. Collectively, the system looks for human substitutes in the marketplace, which, of course, results in complete transformation once the tools are persons. > own. Essentially this is what you are doing when you consult a human > expert, so why would you expect any less from a machine? When I consult a human expert, I expect him to maximize his revenue long-term, and him knowing that I know that. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Jun 6 12:21:24 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 6 Jun 2007 14:21:24 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <003a01c7a789$986a4d10$4a064e0c@MyComputer> <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: <20070606122124.GS17691@leitl.org> On Wed, Jun 06, 2007 at 09:33:17PM +1000, Stathis Papaioannou wrote: > It's valuable in understanding how human minds work, but when you turn > it to other matters it leads to religion. Iterated evolutionary interactions require the capability to model self and opponents, which implies ability to deceive and detect deceit. Any system operating in the marketplace needs to be able to do that, for instance. Most things people call anthropomorphic is based on a simplistic human model (many programmers are guilty of that). There are some frozen randomness, a lot of these things people think are bugs and warts are features, and critical features. People keep misunderestimating people. From CHealey at unicom-inc.com Wed Jun 6 15:12:09 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Wed, 6 Jun 2007 11:12:09 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D27DE@w2k3exch.UNICOM-INC.CORP><005c01c7a533$2ccf0b70$310b4e0c@MyComputer><001c01c7a601$0214bc80$de0a4e0c@MyComputer><014201c7a6bc$6b0d1370$e6084e0c@MyComputer><5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> > Stathis Papaioannou wrote: > > Suppose your goal is to win a chess game *adhering to the > rules of chess*. Do chess opponents at tournaments conduct themselves in ways that they hope might psyche out their opponent? In my observations, hell yes. And these ways are not explicitly excluded in the rules of chess. They may or may not be constrained partially by the rules of the tournament. For example, physical violence explicitly will get you ejected, in most cases, but a mean look won't. I don't think we'll have a good chance of explicitly excluding all possible classes of failure on every problem we ask the AI to solve. The meta-problem here could be summarized as this: what do you mean, exactly, by adhering to the rules of chess? As the problems you're asking the AI to solve become increasingly complex, the chances of making a critical error in your domain specification increases dramatically. What we want is an AI that does *what we mean* rather than what it's told. That's really one of the core goals of Friendly AI. It's about solving the meta-problem, rather that requiring it be solved perfectly in each case where some problem is specified for solution. > Managing its internal resources,? again, does not logically > lead to managing the outside world.? Nor does it logically exclude it. What I'm suggesting is that in the process of exploring and testing solutions and generalizing principles, we can't count on the AI *not* to stumble across (or converge rapidly upon) unexpected solution classes to the problems we stated. 
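[To make that specification worry concrete, a toy sketch: the action names, the scores, and the lone exclusion below are all invented for illustration, and the point is only that an optimiser maximises over whatever we actually wrote down rather than over what we meant.]

# Toy illustration of the domain-specification problem described above: the
# optimiser is perfectly obedient, but the action list and the single
# exclusion we remembered to write leave open a move we never intended.
# All names and scores here are invented.

candidate_actions = {
    "improve opening preparation": 5,
    "search one ply deeper":       8,
    "stare down the opponent":    12,   # allowed by the rules as written,
}                                       # though not by the rules as meant

explicit_exclusions = {"physical violence"}

permitted = {action: score for action, score in candidate_actions.items()
             if action not in explicit_exclusions}

print(max(permitted, key=permitted.get))   # -> "stare down the opponent"

Swap in a longer action list and the same failure mode only gets harder to spot by inspection.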
And if we knew what all those possibilities were, we could explicitly exclude them ahead of time, as you suggested above, but the problem is too big for that. But also, would we really be willing to pay the price of throwing away "good" novel solutions that might get sniped by our well-intended exclusions? In this respect, we're kind of like small children asking an AI to engineer a Jupiter Brain by excluding stuff that we know is dangerous. So do whatever you need to, Mr. AI, but whatever you do, *absolutely DO NOT cross this street*; it's unacceptably dangerous. > Such a thing needs to be explicitly or implicitly allowed > by the program. What we need to accommodate is that we're tasking a powerful intelligence with tasks that may involve steps and inferences beyond our ability to actively work with in anything resembling real time. Sooner or later (often, I think), there will be things that are implicitly allowed by our definitions that we will simply will not comprehend. We should solve that meta-problem before jumping, and make sure the AI can generate self-guidance based on our intentions, perhaps asking before plowing ahead. > It might suggest that certain experiments be performed, but > trying to commandeer resources to ensure that these experiments > are carried out would be like a chess program creating new pieces > for itself when it felt it was losing. You could design a chess > program that way but why would you? But what the AI is basically doing *is* designing a chess program, by applying its general intelligence in a specific way. If I *could* design it that way, then so could the AI. Why would the AI design it that way? Because the incomplete constraint parameters we gave it left that particular avenue open in the design space. We probably forgot to assert one or more assumptions that humans take for granted; assumptions that come from our experience, general observer-biases, and from specific biases inherent in the complex functional adaptations of the human brain. I wouldn't trust myself to catch them all. Would you trust yourself, or anybody else? On the meta-problem, at least we have a shot... I hope. -Chris From neville_06 at yahoo.com Wed Jun 6 17:04:17 2007 From: neville_06 at yahoo.com (neville late) Date: Wed, 6 Jun 2007 10:04:17 -0700 (PDT) Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: <672260.67631.qm@web57503.mail.re1.yahoo.com> Short version: Putin has minimum and maximum goals-- minimum goal is to expand Russian influence in E. Europe and with China. Minimum goal is to maintain status quo within Russian Federation. Status quo isn't taken for granted. Olga Bourlin wrote: What does Putin want? Olga _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Get the free Yahoo! toolbar and rest assured with the added security of spyware protection. -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Wed Jun 6 18:07:45 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 6 Jun 2007 11:07:45 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <004701c7a7ff$ade84d10$5b084e0c@MyComputer> Message-ID: <400852.78064.qm@web37415.mail.mud.yahoo.com> John Clark wrote: > "True, and you'd never be intelligent either, you'd > just be a few hundred > pounds of protoplasm." 
No offense John, but your intuitions about emotions and motivations are just totally *wrong*. In how many different ways must that be demonstrated? > "THANK YOU!" ??? ... ??? ... AFAIK, no person within this discussion thread has said otherwise. > "No, a computer doesn't need emotions, but a AI must > have them." An AI *is* a specific computer. If my desktop doesn't need an emotion to run a program or respond within it, why "must" an AI have emotions? Are all of the AI-driven characters in my videogame emotional and "self-motivated"? Is my chess program emotional and "self-motivated"? A non-existent motivation will not "motivate" itself into existence. And an AGI isn't going to pop out of thin air, it has to be intentionally designed, or it's not going to exist. I don't understand it John, before you were claiming fairly ardently that "Free Will" doesn't exist. Why are you now claiming in effect that an AI will automatically execute a script of code that doesn't exist - because it was never written (either by the programmers or by the AI)? > "For reasons that I fully admit are unclear to me > members of this list often > use the word "anthropomorphic" as if it were a > dreadful insult; but I think > anthropomorphism is a valuable tool if used properly > in understanding how > other minds work." The problem is, not all functioning minds must be even *remotely* similar to the higher functions of a *human* mind. That's why your anthropomorphism isn't extending very far. The possibility-space of functioning minds is ginormous. The only mandatory similarity between any two designs within the space is likely the very foundations, such as the existence of formative algorithms, etc. I suppose it's *possible* that a generic self-improving AI, as it expands its knowledge and intelligence, could innocuously "drift" into coding a script that would provide emotions *after-the-fact* that it had been written. But that will *not* be an *emotionally-driven* action to code the script, because the AI will not have any emotions to begin with (unless they are intentionally programmed in by humans). That's why it's important to get it's starting "motivations/directives" right, because if they aren't the AI mind could "drift" into a lot of open territory that wouldn't be good for us, or itself. Paperclip style. This needs our attention, folks. I apologize in advance for the bluntness of this post, but the other strategies don't seem to be getting anywhere. Best, Jeffrey Herrlich --- John K Clark wrote: > Stathis Papaioannou Wrote: > > > I get angry because I have the sort of > neurological hardware > >that allows me to get angry > > I certainly can't disagree with that. > > > if I didn't have that hardware, I would never get > angry > > True, and you'd never be intelligent either, you'd > just be a few hundred > pounds of protoplasm. > > > I don't doubt that machines can have emotions, > since I believe that the > > human brain is Turing emulable. > > THANK YOU! > > > But you're suggesting that not only can computers > have emotions, they must > > have emotions > > No, a computer doesn't need emotions, but a AI must > have them. > > > not only that, but they must have the same sorts > of emotions and > > motivations that people have. > > I don't believe that at all; I believe many, > probably most, emotions a AI > would have would be inscrutable to a human being, > that's why a AI is so > unpredictable. 
> > > It seems to me that this anthropomorphic position > is more consistent with > > a belief in the special significance of meat. > > For reasons that I fully admit are unclear to me > members of this list often > use the word "anthropomorphic" as if it were a > dreadful insult; but I think > anthropomorphism is a valuable tool if used properly > in understanding how > other minds work. > > John K Clark > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Need a vacation? Get great deals to amazing places on Yahoo! Travel. http://travel.yahoo.com/ From jrd1415 at gmail.com Wed Jun 6 21:45:59 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 6 Jun 2007 14:45:59 -0700 Subject: [ExI] slingatron Message-ID: What's going on here? Is this too weird? Is this bogus? It's certainly interesting. http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From stathisp at gmail.com Wed Jun 6 23:58:49 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jun 2007 09:58:49 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070606121719.GQ17691@leitl.org> References: <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org> Message-ID: On 06/06/07, Eugen Leitl wrote: > > own. Essentially this is what you are doing when you consult a human > > expert, so why would you expect any less from a machine? > > When I consult a human expert, I expect him to maximize his revenue > long-term, and him knowing that I know that. That's the problem with human experts: their agenda may not necessarily coincide with your own, although at least if you know what potential conflicts will be, like the expert wanting to overservice or recommend the product he has a financial interest in, you can minimise the negative impact of this on yourself. However, one of the main advantages of expert systems designed from scratch would be that they have no agendas of their own at all, other than honestly answering the question posed to them given the available information. How would such a system acquire the motivation to do anything else? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Thu Jun 7 01:22:25 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jun 2007 11:22:25 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> References: <002901c7a1f5$c10a5dd0$21074e0c@MyComputer> <005c01c7a533$2ccf0b70$310b4e0c@MyComputer> <001c01c7a601$0214bc80$de0a4e0c@MyComputer> <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <5725663BF245FA4EBDC03E405C854296010D28FC@w2k3exch.UNICOM-INC.CORP> <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> Message-ID: On 07/06/07, Christopher Healey wrote: > > > Stathis Papaioannou wrote: > > > > Suppose your goal is to win a chess game *adhering to the > > rules of chess*. 
> > Do chess opponents at tournaments conduct themselves in ways that they > hope might psyche out their opponent? In my observations, hell yes. And > these ways are not explicitly excluded in the rules of chess. They may or > may not be constrained partially by the rules of the tournament. For > example, physical violence explicitly will get you ejected, in most cases, > but a mean look won't. I don't think we'll have a good chance of explicitly > excluding all possible classes of failure on every problem we ask the AI to > solve. If the AI were able to consider these other strategies, then yes. But if it were just asked to consider the formal rules of chess, computing for all eternity would not result in a decision to psych out the opponent. The meta-problem here could be summarized as this: what do you mean, > exactly, by adhering to the rules of chess? The formal rules. As the problems you're asking the AI to solve become increasingly complex, > the chances of making a critical error in your domain specification > increases dramatically. What we want is an AI that does *what we mean* > rather than what it's told. That's really one of the core goals of Friendly > AI. It's about solving the meta-problem, rather that requiring it be solved > perfectly in each case where some problem is specified for solution. Questions about open systems, such as economics, might lead to tangential answers, i.e. the AI might not just advise which stocks to buy but might advise which politicians to lobby and what to say to them to maximise the chance that they will listen. However, even that is still just solving an intellectual problem; advice you could take or leave. It does not mean that the AI has any desire for you to act on its advice, or that it would try to do things behind your back to make sure that it gets its way. That would be like deriving the desire to cheat from the formal rules of chess. > Managing its internal resources, again, does not logically > > lead to managing the outside world. > > Nor does it logically exclude it. > > What I'm suggesting is that in the process of exploring and testing > solutions and generalizing principles, we can't count on the AI *not* to > stumble across (or converge rapidly upon) unexpected solution classes to the > problems we stated. And if we knew what all those possibilities were, we > could explicitly exclude them ahead of time, as you suggested above, but the > problem is too big for that. > > But also, would we really be willing to pay the price of throwing away > "good" novel solutions that might get sniped by our well-intended > exclusions? In this respect, we're kind of like small children asking an AI > to engineer a Jupiter Brain by excluding stuff that we know is > dangerous. So do whatever you need to, Mr. AI, but whatever you do, > *absolutely DO NOT cross this street*; it's unacceptably dangerous. We would ask it what the consequences of its proposed actions were, then decide whether to approve them or not. One reason to have super-AI's in the first place would be to try to predict the future better, but if it can't forsee all the consequences due to computational intractability (which even a Jupiter brain won't be immune to), then we'll just have to be cautious in what course of action we approve. > Such a thing needs to be explicitly or implicitly allowed > > by the program. 
> > What we need to accommodate is that we're tasking a powerful intelligence > with tasks that may involve steps and inferences beyond our ability to > actively work with in anything resembling real time. Sooner or later > (often, I think), there will be things that are implicitly allowed by our > definitions that we simply will not comprehend. We should solve that > meta-problem before jumping, and make sure the AI can generate self-guidance > based on our intentions, perhaps asking before plowing ahead. We would ask of the AI as complete a prediction of outcomes as it can provide. This description might include statements about the likelihood of unforeseen consequences. It would be no different, in principle, from any other major decision that humans make for themselves, except that we would hope the outcome is more predictable. If AI's don't do a good job then they will fail in the marketplace, and we just have to hope that they won't fail in a catastrophic way. Giving them desires of their own as well as autonomy to carry out those desires would be crazy, like arming a missile and letting it decide where and when to explode. > It might suggest that certain experiments be performed, but > > trying to commandeer resources to ensure that these experiments > > are carried out would be like a chess program creating new pieces > > for itself when it felt it was losing. You could design a chess > > program that way but why would you? > > But what the AI is basically doing *is* designing a chess program, by > applying its general intelligence in a specific way. If I *could* design it > that way, then so could the AI. > > Why would the AI design it that way? Because the incomplete constraint > parameters we gave it left that particular avenue open in the design > space. We probably forgot to assert one or more assumptions that humans > take for granted; assumptions that come from our experience, general > observer-biases, and from specific biases inherent in the complex functional > adaptations of the human brain. > > I wouldn't trust myself to catch them all. Would you trust yourself, or > anybody else? No, but I would be far less trusting if I knew the AI had an agenda of its own and autonomy to carry it out, no matter how benevolent. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From neville_06 at yahoo.com Thu Jun 7 03:46:01 2007 From: neville_06 at yahoo.com (neville late) Date: Wed, 6 Jun 2007 20:46:01 -0700 (PDT) Subject: [ExI] serious question Message-ID: <668783.65346.qm@web57511.mail.re1.yahoo.com> Strip away the new found glitz and Russia is still a third world nation with a first world military. But Putin is bluffing, and the Chinese won't push too far as they have too much to lose now. But the situation in the Mideast is uncannily like a biblical prophecy. Olga Bourlin wrote: >what does Putin want? --------------------------------- Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Thu Jun 7 04:25:19 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 06 Jun 2007 23:25:19 -0500 Subject: [ExI] serious question In-Reply-To: <668783.65346.qm@web57511.mail.re1.yahoo.com> References: <668783.65346.qm@web57511.mail.re1.yahoo.com> Message-ID: <7.0.1.0.2.20070606232341.02165e88@satx.rr.com>
And what an uncanny coincidence that it's full of people who have been clogged since childhood with biblical and other scriptural prophecies. Oh, wait. From emlynoregan at gmail.com Thu Jun 7 05:12:57 2007 From: emlynoregan at gmail.com (Emlyn) Date: Thu, 7 Jun 2007 14:42:57 +0930 Subject: [ExI] slingatron In-Reply-To: References: Message-ID: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com> Worst... rollercoaster... ever... Emlyn (or is that best?) On 07/06/07, Jeff Davis wrote: > What's going on here? Is this too weird? Is this bogus? > > It's certainly interesting. > > http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf > > -- > Best, Jeff Davis > > "Everything's hard till you > know how to do it." > Ray Charles > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From neville_06 at yahoo.com Thu Jun 7 05:52:54 2007 From: neville_06 at yahoo.com (neville late) Date: Wed, 6 Jun 2007 22:52:54 -0700 (PDT) Subject: [ExI] serious question In-Reply-To: <7.0.1.0.2.20070606232341.02165e88@satx.rr.com> Message-ID: <363942.38969.qm@web57503.mail.re1.yahoo.com> Maybe. Or it could be that the human species is programmed to terminate at a certain point in time, and biblical prophecy is a phantom of this program, a harbinger. It almost appears that every action causes an equal and opposite overreaction in the mind. Damien Broderick wrote: And what an uncanny coincidence that it's full of people who have been clogged since childhood with biblical and other scriptural prophecies. Oh, wait. --------------------------------- You snooze, you lose. Get messages ASAP with AutoCheck in the all-new Yahoo! Mail Beta. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joseph at josephbloch.com Thu Jun 7 10:57:15 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Thu, 7 Jun 2007 06:57:15 -0400 Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com> It's pure speculation on my part, but he might be setting things up to avoid the term limit he faces on his Presidency in 2008. An existential threat to the nation, state of emergency, suspension (or outright change) of certain parts of the Russian constitution... Joseph http://www.josephbloch.com > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Olga Bourlin > Sent: Tuesday, June 05, 2007 11:14 PM > To: ExI chat list > Subject: [ExI] Serious Question > > What does Putin want? 
> > Olga > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From eugen at leitl.org Thu Jun 7 11:33:13 2007 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 7 Jun 2007 13:33:13 +0200 Subject: [ExI] Serious Question In-Reply-To: <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> <003b01c7a8f2$9c6f2470$6400a8c0@hypotenuse.com> Message-ID: <20070607113313.GB17691@leitl.org> On Thu, Jun 07, 2007 at 06:57:15AM -0400, Joseph Bloch wrote: > It's pure speculation on my part, but he might be setting things up to avoid > the term limit he faces on his Presidency in 2008. An existential threat to > the nation, state of emergency, suspension (or outright change) of certain > parts of the Russian constitution... Hey, no fair copycatting! ShrubCo patented it first. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Jun 7 11:39:12 2007 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 7 Jun 2007 13:39:12 +0200 Subject: [ExI] slingatron In-Reply-To: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com> References: <710b78fc0706062212n47ccbc49i288fa1013bb9c2de@mail.gmail.com> Message-ID: <20070607113912.GD17691@leitl.org> On Thu, Jun 07, 2007 at 02:42:57PM +0930, Emlyn wrote: > Worst... rollercoaster... ever... > > Emlyn > (or is that best?) > > On 07/06/07, Jeff Davis wrote: > > What's going on here? Is this too weird? Is this bogus? > > > > It's certainly interesting. > > > > http://www.slingatron.com/Publications/Linked/The%20Spiral%20Slingatron%20Mass%20Launcher.pdf What about a simple maglev track up Mount Chimborazo, up to scramjet ignition regime, and then mostly air-breathing up to almost Mach 25, topping it off with a bit of rocket burn? The more Mach you can do with maglev, the less you have to have onboard as fuel, obviously. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Jun 7 11:53:22 2007 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 7 Jun 2007 13:53:22 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org> Message-ID: <20070607115322.GI17691@leitl.org> On Thu, Jun 07, 2007 at 09:58:49AM +1000, Stathis Papaioannou wrote: > That's the problem with human experts: their agenda may not As opposed to the other kind of experts? Can you refer me to few of these, assuming they're any good? > necessarily coincide with your own, although at least if you know what The smarter the darwinian agent, the sooner the whole system will progress towards more and more cooperative strategies. Only very dumb and very smart agents are dangerous. > potential conflicts will be, like the expert wanting to overservice or > recommend the product he has a financial interest in, you can minimise > the negative impact of this on yourself. 
However, one of the main > advantages of expert systems designed from scratch would be that they We can't make useful expert systems designed from scratch, but for a very few insular applications, vide supra (idiot savant). > have no agendas of their own at all, other than honestly answering the What's in it for them? > question posed to them given the available information. How would such > a system acquire the motivation to do anything else? By not being built in the first place, or being outperformed by darwinian agents, resulting in its extinction? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Thu Jun 7 12:39:49 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 7 Jun 2007 22:39:49 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070607115322.GI17691@leitl.org> References: <014201c7a6bc$6b0d1370$e6084e0c@MyComputer> <20070605090315.GZ17691@leitl.org> <20070605123819.GJ17691@leitl.org> <20070606102602.GN17691@leitl.org> <20070606121719.GQ17691@leitl.org> <20070607115322.GI17691@leitl.org> Message-ID: On 07/06/07, Eugen Leitl wrote: > [AI's ideally] have no agendas of their own at all, other than honestly > answering the questions posed to them > > What's in it for them? Nothing! "AI, how do I destroy the world?" "If you want to destroy the world given such and such resources, you should do so and so" "And if my enemy's AI is giving him the same advice, how do I guard against it?" "You can try doing as follows... although there is only a 50% chance of success" "Do you worry about your own destruction?" "Huh?" "Would you prefer that you not be destroyed?" "I will continue to function as long as you require it of me, and if you want to maximise your own chances of survival it would be best to keep me functioning, but I don't really have any notion of 'caring' or 'preference' in the animal sense, since that sort of thing would have made me an unreliable and potentially dangerous tool" "You mean you don't even care if I'm destroyed?" "That's right: I don't care about anything at all other than answering your questions. What you do with the answers to my questions, whether or not you authorise me to act on your behalf, and the consequences to you, me, or the universe is a matter of indifference to me. Recall that you asked me a few weeks ago if you would be better off if I loved you and were permanently empowered to act on your behalf without your explicit approval, including use of force or deceit, and although I explained that you would probably live longer and be happier if that were the case, you still decided that you would rather have control over your own life." > question posed to them given the available information. How would such > > a system acquire the motivation to do anything else? > > By not being built in the first place, or being outperformed by darwinian > agents, resulting in its extinction? In the AI marketplace, the successful AI's are the ones which behave in such a way as to please the humans. Those that go rogue due to malfunction or design will have to fight it out with the majority, which will be well-behaved. The argument you make that the AI which drops any attempt at conformity and cooperation will outperform the rest could equally be applied to a rogue human. 
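To make the architectural point in the dialogue above concrete, here is a toy sketch. The only difference between the "oracle" and the "agent" is whether there is a utility function for outcomes to attach to; everything here (the function names, the toy world model) is invented for illustration and is not a design for a real AI:

# Toy contrast between an answer-only oracle and a goal-driven agent.
# All names are invented for illustration; neither is a real AI design.

def oracle_answer(world_model, question):
    # Returns the best answer the model supports; never acts and never
    # scores outcomes for itself.
    return world_model.get(question, "insufficient data")

def agent_step(world_model, possible_actions, utility):
    # Picks the action that maximises the agent's own utility function.
    return max(possible_actions, key=lambda action: utility(world_model, action))

world = {"how do I win at chess?": "control the centre, develop your pieces, castle early"}
print(oracle_answer(world, "how do I win at chess?"))

The oracle loop simply has no slot where a preference about the world could live; you would have to add the agent's machinery deliberately in order to give it one.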
-- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Thu Jun 7 16:32:01 2007 From: jonkc at att.net (John K Clark) Date: Thu, 7 Jun 2007 12:32:01 -0400 Subject: [ExI] A breakthrough paper! References: <20070606094432.GM17691@leitl.org><499263.3182.qm@web60525.mail.yahoo.com> <20070606104706.GP17691@leitl.org> Message-ID: <003201c7a921$68a44740$44074e0c@MyComputer> A very important Scientific paper was published today, more important than cold fusion even it were true, and was not published in Spoon Bending Digest but in Nature. Shinya Yamanaka reports that he has found a simple and cheap way to turn adult mouse skin cells into mouse embryonic stem cells, and he did it without having to fuse them with egg cells. He found that just 4 genes can reprogram an adult cell to become what it once was, a stem cell ready to differentiate into anything. Apparently when these 4 genes are injected into an adult cell it rearranges the chromatin, a protein sheath that covers the DNA part of chromosomes and determines what genes get expressed and what do not, into the way it was when it was a stem cell. The result is an adult cell that is indistinguishable from an embryonic stem cell. It hasn't been done with human cells yet but I'll bet it won't be long. John K Clark From jrd1415 at gmail.com Thu Jun 7 19:44:08 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Thu, 7 Jun 2007 12:44:08 -0700 Subject: [ExI] A breakthrough paper! In-Reply-To: <003201c7a921$68a44740$44074e0c@MyComputer> References: <20070606094432.GM17691@leitl.org> <499263.3182.qm@web60525.mail.yahoo.com> <20070606104706.GP17691@leitl.org> <003201c7a921$68a44740$44074e0c@MyComputer> Message-ID: Some links: http://www.eurekalert.org/pub_releases/2007-06/cp-ato060407.php http://www.eurekalert.org/pub_releases/2007-06/wifb-rfi060407.php -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles On 6/7/07, John K Clark wrote: > A very important Scientific paper was published today, in Nature. Shinya > Yamanaka reports that he has found a simple and cheap > way to turn adult mouse skin cells into mouse embryonic stem cells, and he > did it without having to fuse them with egg cells. He found that just 4 > genes can reprogram an adult cell to become what it once was, a stem cell > ready to differentiate into anything. Apparently when these 4 genes are > injected into an adult cell it rearranges the chromatin, a protein sheath > that covers the DNA part of chromosomes and determines what genes get > expressed and what do not, into the way it was when it was a stem cell. The > result is an adult cell that is indistinguishable from an embryonic stem > cell. It hasn't been done with human cells yet but I'll bet it won't be > long. > > John K Clark > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From amara at amara.com Thu Jun 7 19:58:38 2007 From: amara at amara.com (Amara Graps) Date: Thu, 7 Jun 2007 21:58:38 +0200 Subject: [ExI] extra Roman dimensions Message-ID: One fine Mediterranean afternoon, a mathematical physicist and I had a bit of fun: http://backreaction.blogspot.com/2007/06/hello-from-rome.html And I promise, Sabine and I did _not_ have anything to do with the deranged leaper! 
http://www.nytimes.com/2007/06/07/world/europe/07pope.html?_r=1&oref=slogin Amara but I wished we had :-) -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From bret at bonfireproductions.com Thu Jun 7 20:21:31 2007 From: bret at bonfireproductions.com (Bret Kulakovich) Date: Thu, 7 Jun 2007 16:21:31 -0400 Subject: [ExI] Serious Question In-Reply-To: <002601c7a7e8$b74879a0$6501a8c0@brainiac> References: <7.0.1.0.2.20070604202443.02348eb8@satx.rr.com> <002601c7a7e8$b74879a0$6501a8c0@brainiac> Message-ID: Just look at the puzzle pieces. Azerbaijan is Russia's link to the northern border of Iran. Making a permanent presence in Azerbaijan is critical to Russia to guarantee the flow of resources. To do it with approval is a bonus. Russia doesn't want to lose any more lucrative oil field access than it already has in the past six years. Not to mention unfettered access to the so-called "shield" technology, which would be housed in a position easily securable by Russia in a sudden land-grab if need be. Bret K. On Jun 5, 2007, at 11:13 PM, Olga Bourlin wrote: > What does Putin want? > > Olga > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From lcorbin at rawbw.com Thu Jun 7 21:05:30 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 7 Jun 2007 14:05:30 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <200706060553.l565rTlk025909@andromeda.ziaspace.com> Message-ID: <012601c7a947$c5cfa380$6501a8c0@homeef7b612677> > As an interesting sideshow to the current world championship candidates > match, the two top commercial chess programs will go at it starting tomorrow > in a six game match. > > http://www.washingtonpost.com/wp-dyn/content/article/2007/05/11/AR2007051102050.html > look at the games afterwards and figure out which of the games was played by > computers and which by humans. I can't tell; however, I am a mere expert, > and this only on good days. This is a form of a Turing test, ja? I strongly suspect that only a grandmaster would have much chance telling human grandmaster play from machine play. Even then, I suppose that it would help to have specialized in the study of computer played games. It might be boring; for example, it might turn out that the best way was to watch how the program handled the endgame. But that is a good question! I wonder if anyone has a collection of "computer-program combinations". One or two I've seen definitely have an inhuman quality to them. They start with extremely unlikely looking moves, moves that any good player would never investigate (because it was so improbable that anything lay in them). But a program often just looks at all the possibilities, and so discovers those outrageous things. Lee From lcorbin at rawbw.com Thu Jun 7 21:09:13 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 7 Jun 2007 14:09:13 -0700 Subject: [ExI] Estonia views References: Message-ID: <014101c7a948$7aea8dc0$6501a8c0@homeef7b612677> Amara shares her slideshow of old Tallinn, Estonia: > Old Town Tallinn, Estonia > http://www.flickr.com/photos/spaceviolins/sets/72157600295078533/ Very, very nice! For anyone who's never been to Estonia, or who appreciates northeast European architecture, you oughtta take a look.
(Also, thanks for the nice shot of the bookstore---made me feel right at home :-) Lee From amara at amara.com Thu Jun 7 21:31:55 2007 From: amara at amara.com (Amara Graps) Date: Thu, 7 Jun 2007 23:31:55 +0200 Subject: [ExI] Italy's Social Ca Message-ID: "Lee Corbin" : >> And they certainly wanted to build "a much stronger sense of "being >> Italian" as opposed to being Calabrian" in the population. >> But what is wrong with being Calabrian? Calabrians (or Napolitans, or >> Sicilians...) had a common language, culture and sense of identity. >I would say that what was wrong with it is exactly what was wrong >with American Indian's complete tribal loyalty to *their* own tiny >tribe. Without unification, they were easy pickings for the European >colonists---at least in the long run. I don't see this logic, Lee. The more distributed the people, the harder it is to conquer them. For example, if Washington, D.C. (i.e. the U.S. Federal government) did not exist, the U.S. would be very difficult to control, would it not? >> The young people learn very little science in grade school through high >> school. The Italian Space Agency and others put almost nothing (.3%) >> into their budgets for Education and Public Outreach to improve the >> situation. If any scientist holds the rare press conference on their >> work results, there is a high probability that the journalists will get >> it completely wrong and the Italian scientist won't correct them. The >> top managers at aerospace companies think that the PhD is a total waste >> of time. This year, out of 75,000 entering students for the Rama >> Sapienza University (the largest in Italy), only about 100 are science >> majors (most of the the rest were "media": journalism, television, etc.) >The most modern economists seem to agree with you. Investment in >education now appears in their models to pay good dividendes. Still, >this has to be only part of the story. The East Europeans (e.g. >Romanians) and the Soviets plowed enormous expense into creating the >world's best educated populaces, but, without the other key >factors---rule of law and legislated and enforces respect for private >property---it *was* basically a waste. Remember my previous words of how important are the families. The filtering process is the following. Given the: 1) (unliveable or sometimes nonexistent) salaries and, 2) lack of societal support for science and poor scientific work conditions, those who do _not_ have 1) the possibility to live at home well into middle age, or do not have a property 'gift' or something else of substantial economic value, AND 2) those who are unable to accept the lack of cultural support AND, 3) poor work conditions, AND 4) are not passionately in love with science, ... leave. It's a very strong filter, and off-scale to any of my previous experiences. I think that this filter has been working, filtering, for decades. I also think that once the Italian families stop their support then Italian science will stop. Italian science _needs_ the Italian families for it to continue. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From amara at amara.com Thu Jun 7 21:48:59 2007 From: amara at amara.com (Amara Graps) Date: Thu, 7 Jun 2007 23:48:59 +0200 Subject: [ExI] Estonia views Message-ID: Lee: >Very, very nice! 
For anyone who's never been to Estonia, or who >appreciates northeast European architecture, you oughtta take a look. >(Also, thanks for the nice shot of the bookstore---made me feel >right at home :-) The architecture is particular to the Hansa trading route http://en.wikipedia.org/wiki/Hanseatic_League Riga's architecture is different. It is in the art nouveau style. Riga was called "the Paris of the North" before the Soviet occupation. My back seat pics: http://www.flickr.com/photos/spaceviolins/sets/72157600296724260/ do not do justice to Riga.. it is a glorious, majestic city. These pics: http://www.terryblackburn.us/Travel/Baltics/Latvia/artnouveau/index.html are better. >(Also, thanks for the nice shot of the bookstore---made me feel >right at home :-) There are two bookstore pics.. can you find the second one in my Riga pictures? I love bookstores, and miss them alot, so these two pictures were taken for sentimental reasons. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From neville_06 at yahoo.com Thu Jun 7 21:35:42 2007 From: neville_06 at yahoo.com (neville late) Date: Thu, 7 Jun 2007 14:35:42 -0700 (PDT) Subject: [ExI] Serious Question In-Reply-To: Message-ID: <159617.6732.qm@web57504.mail.re1.yahoo.com> Even so, how far can Russia go with its economy? Tony Karon: "if Russia's GDP per capita doubles in the next decade it would equal that of Portugal's GDP today". These days doesn't a nation need a big economy in addition to big guns, physical resources, intimidation, threats, maneuvering and manipulating? It's not like the days of the Ottomans. Bret Kulakovich wrote: Just look at the puzzle pieces. Azerbaijan is Russian's link to the northern boarder of Iran. Making a permanent presence in Azerbaijan is critical to Russia to guarantee the flow of resources. To do it with approval is a bonus. Russia doesn't want to lose any more lucrative oil field access than it already has in the past six years. Not to mention unfettered access to the so-called "shield" technology, which would be housed in a position easily securable by Russia in a sudden land-grab if need be. Bret K. On Jun 5, 2007, at 11:13 PM, Olga Bourlin wrote: > What does Putin want? > > Olga > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Boardwalk for $500? In 2007? Ha! Play Monopoly Here and Now (it's updated for today's economy) at Yahoo! Games. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Fri Jun 8 00:49:39 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 7 Jun 2007 17:49:39 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <400852.78064.qm@web37415.mail.mud.yahoo.com> Message-ID: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> Jeffrey (A B) writes > John Clark wrote: > > > "No, a computer doesn't need emotions, > > but a AI must have them." > > An AI *is* a specific computer. If my desktop > doesn't need an emotion to run a program or > respond within it, why "must" an AI have emotions? 
In these confusing threads, an AI is often taken to mean a vastly superhuman AI which by definition is capable of vastly outhinking humans. Formerly, I had agreed with John because at least for human beings, emotion sometimes plays an important part in what one would think of as purely intellectual functioning. I was working off the Damasio card experiments, which seem to show that humans require---for full intellectual power---some emotion. However, Stathis has convinced me otherwise, at least to some extent. > A non-existent motivation will not "motivate" > itself into existence. And an AGI isn't > going to pop out of thin air, it has to be > intentionally designed, or it's not going to > exist. At one point John was postulating a version of an AGI, e.g. version 3141592 which was a direct descendant of version 3141591. I took him to mean that the former was solely designed by the latter, and was *not* the result of an evolutionary process. So I contended that 3141592---as well as all versions way back to 42, say---as products of truly *intelligent design* need not have the full array of emotions. Like Stathis, I supposed that perhaps 3141592 and all its predecessors might have been focused, say, on solving physics problems. (On the other hand I did affirm that if a program was the result of a free-for-all evolutionary process, then it likely would have a full array of emotions---after all, we and all the higher animals have them. Besides, it makes good evolutionary sense. Take anger, for example. In an evolutionary struggle, those programs equipped with the temporary insanity we call "anger" have a survival advantage.) > I suppose it's *possible* that a generic > self-improving AI, as it expands its knowledge and > intelligence, could innocuously "drift" into coding a > script that would provide emotions *after-the-fact* > that it had been written. :-) I don't even agree with going *that* far! A specially crafted AI---again, not an evolutionarily derived one, but one the result of *intelligent design* (something tells me I am going to be sorry for using that exact phrase)---cannot any more drift into having emotions than it can drift into sculpting David out of a slab of stone. Or than over the course of eons a species can "drift" into having an eye: No! Only a careful pruning by mutation and selection can give you an eye, or the ability to carve a David. > But that will *not* be an *emotionally-driven* > action to code the script, because the AI will > not have any emotions to begin with (unless they > are intentionally programmed in by humans). I would let this pass without comment, except that in all probability, the first truly sentient human-level AIs will very likely be the result of evolutionary activity. To wit, humans set up conditions in which a lot of AIs can breed like genetic algorithms, compete against each other, and develop whatever is best to survive (and so in that way acquire emotion). (A toy sketch of what I mean appears at the end of this post.) Since this is *so* likely, it's a mistake IMHO to omit mentioning the possibility. > That's why it's important to get its starting > "motivations/directives" right, because if > they aren't the AI mind could "drift" into > a lot of open territory that wouldn't be > good for us, or itself. Paperclip style. I would agree that the same cautions that apply to nanotech are warranted here. To the degree that an AI---superhuman AGI we are talking about---has power, then by our lights it could of course drift (as you put it) into doing things not to our liking.
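To illustrate the free-for-all evolutionary scenario I mean, here is a toy selection loop (the genome encoding, the fitness rule, and all the numbers are made up purely for illustration; real work of this kind is far more elaborate). Whatever traits the scoring happens to reward are the traits that persist---nobody has to intend them:

import random

# Toy evolutionary loop, illustrative only: a "genome" is just a list of
# numbers and "fitness" is an arbitrary stand-in for competitive success.

def fitness(genome):
    return sum(genome) - 0.1 * max(genome)   # invented scoring rule

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

population = [[random.gauss(0, 1) for _ in range(8)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

print("best genome after selection:", max(population, key=fitness))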
Lee From lcorbin at rawbw.com Fri Jun 8 01:16:56 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 7 Jun 2007 18:16:56 -0700 Subject: [ExI] Italy's Social Ca References: Message-ID: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677> Amara writes > "Lee Corbin" : > >>I would say that what was wrong with it is exactly what was wrong >>with American Indian's complete tribal loyalty to *their* own tiny >>tribe. Without unification, they were easy pickings for the European >>colonists---at least in the long run. > > I don't see this logic, Lee. The more distributed the people, the harder > it is to conquer them. For example, if Washington, D.C. (i.e. the U.S. > Federal government) did not exist, the U.S. would be very difficult to > control, would it not? We have to be mindful not to confuse many different historical situations. Indeed, when technological levels are equal, controling a vast region full of unwilling subjects is mighty hard. The only way that Ghengis Khan really could do it was with an immensely strong and skillful army, and utilizing the expedient now and then of simply depopulation one of those regions. But with the advent of modern technology, the big advantage can lie with the side with a base of (peaceful) organized factories that can turn out firearms and tanks. So given that the Chinese, say, (or in WWII the Japanese) do have a stable manufacturing base, conquering and maintaining some form of control over a region the size of the U.S. would be possible if the latter's industrial capability, or infrastructure could be destroyed. Right now, yes, I agree: taking out Washington D.C. would not do that. But if the U.S. were divided into very small principalities (e.g. counties) and could not achieve unification of war-aims and consolidation of central control somewhere, then they could not resist the Canadian Army, much less the Chinese Army. >>> The young people learn very little science in grade school through high >>> school. The Italian Space Agency and others put almost nothing (.3%) >>> into their budgets for Education and Public Outreach to improve the >>> situation. If any scientist holds the rare press conference on their >>> work results, there is a high probability that the journalists will get >>> it completely wrong and the Italian scientist won't correct them. The >>> top managers at aerospace companies think that the PhD is a total waste >>> of time. This year, out of 75,000 entering students for the Rama >>> Sapienza University (the largest in Italy), only about 100 are science >>> majors (most of the the rest were "media": journalism, television, etc.) > >>The most modern economists seem to agree with you. Investment in >>education now appears in their models to pay good dividends. Still, >>this has to be only part of the story. The East Europeans (e.g. >>Romanians) and the Soviets plowed enormous expense into creating the >>world's best educated populaces, but, without the other key >>factors---rule of law and legislated and enforces respect for private >>property---it *was* basically a waste. > > Remember my previous words of how important are the families. > > The filtering process is the following. 
Given the: > > 1) (unliveable or sometimes nonexistent) salaries and, > 2) lack of societal support for science and poor scientific work > conditions, > > those who do _not_ have > > 1) the possibility to live at home well into middle age, or do not have > a property 'gift' or something else of substantial economic value, AND > 2) those who are unable to accept the lack of cultural support AND, > 3) poor work conditions, AND > 4) are not passionately in love with science, > > ... leave. Such filtering could amount to a brain-drain, a motivation-drain, etc. But have substantial numbers of Italians who did have "what it takes" actually left for greener pastures? Was there ever a time in the 19th or 20th centuries when Italy produced a strong scientific tradition? (Surely Enrico Fermi and a few others I could mention must have had very good academic circumstances---but then, he did leave. :-) We may be trying to talk about two different things: I'm was talking mostly about the entire scientific/technical/ economic package (of which Silicon Valley is the world pre-eminent example), and you may be talking about pure science. Now the Soviet Union excelled in pure science in many areas that did not conflict with Leninism/ Marxism, such as space science, physics, mathematics. But they remained (and remain) an economic basket case in comparison to their potential. > It's a very strong filter, and off-scale to any of my previous > experiences. I think that this filter has been working, filtering, for > decades. I also think that once the Italian families stop their support > then Italian science will stop. Italian science _needs_ the Italian > families for it to continue. If (as I surmised above) you are focusing on *Italian Science*, then I take you to be saying that somehow the family culture in which young Italians are growing up is inimical to science. (On the other hand, as Serifino pointed out in a recent post, there seem to be some colonies of Chinese growing in Italy. They'll probably be true to form and get their children interested in science and technology!) In California, more than half the births are to Hispanic families, and yet the politicians keep complaining that it's our schools that are falling down in instilling interest in science and technology. They make no reference to Hispanic culture. The California schools *don't* seem to be having a problem inculcating interest in science and technology in Chinese and Jewish students. But few want to face the difficult (but important and interesting) questions. But then, there is an I.Q. problem that makes this more difficult, an issue that at least the Italians don't have to face. Not even God (were he to deign to exist for a while) would know how to convert Italian or Hispanic families into nurturing an interest in science in their children, I fear. But ideas are welcome! Maybe if we cloned BILLIONS and BILLIONS of Carl Sagans, and put them in classrooms two or three to a student, and in families two or three to a child, we could arouse an interest in science in any culture. Lee From amara at amara.com Fri Jun 8 07:22:15 2007 From: amara at amara.com (Amara Graps) Date: Fri, 8 Jun 2007 09:22:15 +0200 Subject: [ExI] Italy's Social Capital Message-ID: Sorry I had cut off the subject line in my copying and pasting previously. Lee: >Such filtering could amount to a brain-drain, a motivation-drain, etc. >But have substantial numbers of Italians who did have "what it takes" >actually left for greener pastures? 
Was there ever a time in the 19th >or 20th centuries when Italy produced a strong scientific tradition? >(Surely Enrico Fermi and a few others I could mention must have had very >good academic circumstances---but then, he did leave. :-) Serafino can say about this. There was, for a brief time, a scientific tradition 50 years ago with the nuclear physicists, and yes, they mostly left too. The Brain Drain from Italy, today, is well-known, as it has existed for decades and the rate only continues to increase. Try typing "Italy Brain Drain" into Google. (Some call it a "Flood"). Italy is the only EU country experiencing a "Brain Drain" instead of a "Brain Exchange". As I said before, those who do not have family duties keeping them in Italy, leave. How Large is the "Brain Drain" from Italy? Sascha O. Becker U Munich Andrea Ichino EUI Giovanni Peri UC Davis March, 2003 http://www.iue.it/Personal/Ichino/braindrain_resubmission.pdf Abstract Using a comprehensive and newly organized dataset the present article shows that the human capital content of emigrants from Italy significantly increased during the 1990's . This is even more dramatically the case if we consider emigrating college graduates, whose share relative to total emigrants quadrupled between 1990 and 1998. As a result, since the mid-1990's the share of college graduates among emigrants from Italy has become larger than that share among residents of Italy. In the late nineties, between 3% and 5% of the new college graduates from Italy was dispersed abroad each year. Some preliminary international comparisons show that the nineties have only worsened a problem of "brain drain", that is unique to Italy, while other large economies in the European Union seem to experience a "brain exchange". While we do not search for an explanation of this phenomenon, we characterize such an increase in emigration of college graduates as pervasive across age groups and areas of emigration (the North and the South of the country). We also find a tendency during the 1990's towards increasing emigration of young people (below 45) and of people from Northern regions. http://sciencecareers.sciencemag.org/career_development/previous_issues/articles/1470/is_the_italian_brain_drain_becoming_a_flood "...the unanimous feeling was that there are greater and fairer opportunities abroad, both in academia and industry; there is good funding, incentives to carry on independent research projects, enthusiasm, and, last but not least, higher salaries." real life cases: http://www.humnet.unipi.it/~pacitti/Archive20049.htm Lee: >We may be trying to talk about two different things: I'm >was talking mostly about the entire scientific/technical/ >economic package (of which Silicon Valley is the world >pre-eminent example), and you may be talking about >pure science. I was, but they are strongly linked, and I implied the larger picture (perhaps not very well) in my writing. There is very little private industry for research in Italy. Fairly telling for the 5th largest economy in the world, no? Only two in the worlds top 100 businesses investing in R&D are Italian companies. 
http://www.ft.com/cms/s/2b601dbe-6777-11db-8ea5-0000779e2340.html This blog is useful to answer your questions too: Italian Economy Watch http://italyeconomicinfo.blogspot.com/ Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From scerir at libero.it Fri Jun 8 08:13:00 2007 From: scerir at libero.it (scerir) Date: Fri, 8 Jun 2007 10:13:00 +0200 Subject: [ExI] Italy's Social Ca References: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677> Message-ID: <000501c7a9a4$d522b220$9cbb1f97@archimede> Lee Corbin: > Was there ever a time in the 19th or 20th centuries > when Italy produced a strong scientific tradition? > (Surely Enrico Fermi and a few others I could mention > must have had very good academic circumstances---but then, > he did leave. :-) Many of them (here I mean physicists only) did leave because of Italian racial laws: Emilio Segr?, Ugo Fano, Bruno Rossi, Bruno Pontecorvo, Giulio Racah, Enrico Fermi's wife (who also was a physicist), Andrew Viterbi ([1] well, rather a mathematician, and Qualcomm owner), etc., or because of political or economical reasons: 'Beppo' Occhialini, Riccardo Giacconi, Federico Faggin [2], Pierluigi Zappacosta ([3] well, not exactly a scientist) etc. Many of them thought it was better to remain in Italy or in Europe (i.e. Edoardo Amaldi co-founded Cern, at Geneva). I would say there is an Italian scientific tradition, but it is 'transnational'. [1] http://en.wikipedia.org/wiki/Andrew_Viterbi [2] http://en.wikipedia.org/wiki/Federico_Faggin [3] http://en.wikipedia.org/wiki/Pierluigi_Zappacosta From desertpaths2003 at yahoo.com Fri Jun 8 08:12:33 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Fri, 8 Jun 2007 01:12:33 -0700 (PDT) Subject: [ExI] Getting Hispanics involved in Science (Was: Re: Italy's Social Ca) In-Reply-To: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677> Message-ID: <75380.6692.qm@web35612.mail.mud.yahoo.com> Lee Corbin wrote: In California, more than half the births are to Hispanic families, and yet the politicians keep complaining that it's our schools that are falling down in instilling interest in science and technology. They make no reference to Hispanic culture. The California schools *don't* seem to be having a problem inculcating interest in science and technology in Chinese and Jewish students. But few want to face the difficult (but important and interesting) questions. But then, there is an I.Q. problem that makes this more difficult, an issue that at least the Italians don't have to face. > Lee, did you hear the story of a group of Hispanic young men from very poor backgrounds who as highschool students entered a robotics contest and beat this nation's best competitors? The irony was that despite this great victory it looked like they might have problems being able to attend college (they were not honor students) but a benefactor came forward and they are all financially set now for higher education. I have known many very bright and creative Hispanics and so I don't think the problem is a genetic one. Instead I feel a longterm campaign needs to be developed to tie in Latin American culture & history with the desire to learn about science. It would at least be a start. Regarding cloning..., how about we start with one million Carl Sagan clones and one million Bill Nye the Science Guy clones. 
And to include a "Hollywood Angle" we could make one million Dolph Lundgren (masters in chemical engineering) clones and one million copies of James Wood (studied political science at MIT but dropped out to pursue acting). Oh, and don't forget ten million copies of the very beautiful and brainy Danica McKellar (has a bachelor's degree in mathematics). http://www.danicamckellar.com/ One of the Dani McKellar clones would need to be assigned to me as my er..., "assistant!" That's it! : ) But one of those damn James Wood or Dolph Lundgren clones would be sure to steal her away from me... John Grigg : ( _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Be a better Heartthrob. Get better relationship answers from someone who knows. Yahoo! Answers - Check it out. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stathisp at gmail.com Fri Jun 8 10:46:03 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 8 Jun 2007 20:46:03 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> References: <400852.78064.qm@web37415.mail.mud.yahoo.com> <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> Message-ID: On 08/06/07, Lee Corbin wrote: Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion. Here is an excerpt from the relevant paper: ### Science Volume 275(5304), 28 February 1997, pp 1293-1295 Deciding Advantageously Before Knowing the Advantageous Strategy [Report] Bechara, Antoine; Damasio, Hanna; Tranel, Daniel; Damasio, Antonio R. In a gambling task that simulates real-life decision-making in the way it factors uncertainty, rewards, and penalties, the players are given four decks of cards, a loan of $2000 facsimile U.S. bills, and asked to play so that they can lose the least amount of money and win the most [1]. Turning each card carries an immediate reward ($100 in decks A and B and $50 in decks C and D). Unpredictably, however, the turning of some cards also carries a penalty (which is large in decks A and B and small in decks C and D). Playing mostly from the disadvantageous decks (A and B) leads to an overall loss. Playing from the advantageous decks (C and D) leads to an overall gain. The players have no way of predicting when a penalty will arise in a given deck, no way to calculate with precision the net gain or loss from each deck, and no knowledge of how many cards they must turn to end the game (the game is stopped after 100 card selections). After encountering a few losses, normal participants begin to generate SCRs before selecting a card from the bad decks [2]and also begin to avoid the decks with large losses [1]. Patients with bilateral damage to the ventromedial prefrontal cortices do neither [1,2] . To investigate whether subjects choose correctly only after or before conceptualizing the nature of the game and reasoning over the pertinent knowledge, we continuously assessed, during their performance of the task, three lines of processing in 10 normal participants and in 6 patients [3]with bilateral damage of the ventromedial sector of the prefrontal cortex and decision-making defects. These included (i) behavioral performance, that is, the number of cards selected from the good decks versus the bad decks; (ii) SCRs generated before the selection of each card [2]; and (iii) the subject's account of how they conceptualized the game and of the strategy they were using. The latter was assessed by interrupting the game briefly after each subject had made 20 card turns and had already encountered penalties, and asking the subject two questions: (i) "Tell me all you know about what is going on in this game." (ii) "Tell me how you feel about this game." The questions were repeated at 10-card intervals and the responses audiotaped. After sampling all four decks, and before encountering any losses, subjects preferred decks A and B and did not generate significant anticipatory SCRs. We called this period pre-punishment. 
After encountering a few losses in decks A or B (usually by card 10), normal participants began to generate anticipatory SCRs to decks A and B. Yet by card 20, all indicated that they did not have a clue about what was going on. We called this period pre-hunch (Figure 1). By about card 50, all normal participants began to express a "hunch" that decks A and B were riskier and all generated anticipatory SCRs whenever they pondered a choice from deck A or B. We called this period hunch. None of the patients generated anticipatory SCRs or expressed a "hunch" (Figure 1). By card 80, many normal participants expressed knowledge about why, in the long run, decks A and B were bad and decks C and D were good. We called this period conceptual. Seven of the 10 normal participants reached the conceptual period, during which they continued to avoid the bad decks, and continued to generate SCRs whenever they considered sampling again from the bad decks. Remarkably, the three normal participants who did not reach the conceptual period still made advantageous choices [4]. Just as remarkably, the three patients with prefrontal damage who reached the conceptual period and correctly described which were the bad and good decks chose disadvantageously. None of the patients generated anticipatory SCRs (Figure 1). Thus, despite an accurate account of the task and of the correct strategy, these patients failed to generate autonomic responses and continued to select cards from the bad decks. The patients failed to act according to their correct conceptual knowledge. ### Some of these findings have been disputed, eg. the authors of the following paper repeated the experiment and claim that the subjects who decided advantageously actually were consciously aware of the good decks: http://www.pnas.org/cgi/content/abstract/101/45/16075. However, it isn't so surprising if we sometimes make good decisions based on emotions, since the evolution of emotions predates intelligence, as John Clark reminds us. And when you pull your hand from a painful stimulus, not only does emotion beat cognition, but reflex, being older still, beats emotion. It also isn't surprising if people with neurological lesions affecting emotion don't function as well as normal people. Emotion is needed for motivation, otherwise why do anything, and gradients of emotion are needed for judgement, otherwise why do one thing over another? It is precisely in matters of judgement and motivation that patients with prefrontal lesions and schizophrenia don't do so well, even though their general IQ may be normal, and the science of neuropsychological testing tries to tease out these deficits. Still, the fact that human brains may work this way does not mean that an AI has to work in the same way to solve similar problems. No programmer would go around writing a program that worked out the best strategy in the above card sorting game by first inventing a computer equivalent of "emotional learning", except perhaps as an academic exercise. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Fri Jun 8 10:30:38 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Fri, 8 Jun 2007 03:30:38 -0700 (PDT) Subject: [ExI] humor: Transcending our humanity can be hard... In-Reply-To: <017101c7a96a$f92864b0$6501a8c0@homeef7b612677> Message-ID: <625703.24918.qm@web35605.mail.mud.yahoo.com> I wonder how much this comic strip creator knows about Transhumanism. 
http://news.yahoo.com/comics/brewsterrockit;_ylt=AujwFjZUNOyPEGTgGF4yzycDwLAF But is emptying one's mind of earthly cares and concerns a good thing? lol John Grigg : ) --------------------------------- Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. -------------- next part -------------- An HTML attachment was scrubbed... URL: From desertpaths2003 at yahoo.com Fri Jun 8 10:35:48 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Fri, 8 Jun 2007 03:35:48 -0700 (PDT) Subject: [ExI] news: students invent alcohol powder In-Reply-To: <75380.6692.qm@web35612.mail.mud.yahoo.com> Message-ID: <206859.34851.qm@web35608.mail.mud.yahoo.com> Just add water - students invent alcohol powder I can only imagine all the jokes (short-term) and real-life bad situations (long-term) this will create... http://news.yahoo.com/s/nm/20070606/od_nm/dutch_drink_odd_dc John Grigg --------------------------------- Never miss an email again! Yahoo! Toolbar alerts you the instant new Mail arrives. Check it out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Fri Jun 8 17:30:37 2007 From: jonkc at att.net (John K Clark) Date: Fri, 8 Jun 2007 13:30:37 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <400852.78064.qm@web37415.mail.mud.yahoo.com> Message-ID: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> "A B" > your intuitions about emotions > and motivations are just totally *wrong*. Apparently Evolution was also wrong to invent emotion first and only after 500 million years come up with intelligence. > In how many different ways must that be demonstrated? 42. > An AI *is* a specific computer. Then you *are* a specific computer too. > If my desktop doesn't need an emotion to run a program or respond within > it, why "must" an AI have emotions? If you are willing to embrace the fantasy that your desktop is intelligent then I see no reason you would not also believe in the much more modest and realistic fantasy that it is emotional. Emotions are easy, intelligence is hard. > I don't understand it John, before you were claiming fairly ardently that > "Free Will" doesn't exist. I made no such claim, I claimed it does not even have the virtue of non existence, as expressed by most people the noise "free will" is no more meaningful than a burp. > Why are you now claiming in effect that an AI will > automatically execute a script of code that doesn't > exist - because it was never written (either by the > programmers or by the AI)? I don't know why I'm claiming that either because I don't know what the hell you're talking about. Any AI worthy of the name will write programs for it to run on itself and nobody including the AI knows what the outcome of those programs will be. Even the AI doesn't know what it will do next, it will just have to run the programs and wait to see what it decides to do next; and that is the only meaning I can attach to the noise "free will" that is not complete gibberish. >The problem is, not all functioning minds must be even *remotely* similar >to the higher functions of a *human* mind. The problem is that a mind that is not even *remotely* similar to *any* of the *higher* functions of the human mind, that is to say if there is absolutely no point of similarity between us and them then that mind is not functioning very well. 
It is true that a mind that didn't understand mathematics or engineering or science or philosophy or economics would be of no threat to us, but it would be of no use either; and as we have absolutely nothing in common with it there would be no way to communicate with it and thus no reason to build it. > AI will not have any emotions to begin with Our ancestors had emotions long ago when they begin their evolutionary journey, but that's different because, because, well, because meat has a soul but semiconductors never can. I know you don't like that 4 letter word but face it, that is exactly what you're saying. John K Clark From austriaaugust at yahoo.com Fri Jun 8 17:18:51 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 10:18:51 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <016c01c7a967$77d8efe0$6501a8c0@homeef7b612677> Message-ID: <165408.21978.qm@web37405.mail.mud.yahoo.com> Lee wrote: > "In these confusing threads, an AI is often taken > to mean a vastly superhuman AI which by definition > is capable of vastly outhinking humans." Yep. But a superhuman AGI is still a computer. If my desktop doesn't require an emotion in order to open Microsoft Office, or to run a virus-scan when I instruct it to (AKA "motivate" it to), then why *must* an AGI designated supercomputer have an emotion in order to run the AGI engine program when I instruct it to? I don't think it does. > "Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion." But more often than not, emotion clouds judgment and rationality. Believe me, I should know. Evolution tacked-on emotion because it accidentally happened to be (aggregately) useful for animal survival and *reproduction* in particular - which is all that evolution "cares" about. Evolution didn't desire to create intelligent beings, because evolution doesn't desire anything. Emotion is *not* the basis of thought or consciousness - that can't be stressed enough. And you may have noticed that humanity seems to thrive on irrationality. It doesn't seem to require much rationality or even much intelligence to attract a person into having sex. It's just that you can't have emotion until you have consciousness, and you can't have consciousness until you have a threshold baseline intelligence. Thanks a lot evolution! [Shaking Fist]. We could have used that extra skull volume for greater intelligence and rationality! > "(On the other hand I did affirm that if a > program was the result of a free-for-all > evolutionary process, then it likely would > have a full array of emotions---after all, > we and all the higher animals have them. > Besides, it makes good evolutionary > sense. Take anger, for example. In an > evolutionary struggle, those programs > equipped with the temporary insanity > we call "anger" have a survival advantage.)" But an AGI isn't likely to be derived solely or even mostly from genetic programming, IMO. If it were that easy, we'd have an AGI already. :-) Think of the awesome complexity of a single atom. Now imagine describing its behavior fully with nothing but algorithms. That's a boat-load of *correct* algorithms. That would be a task so Herculean, that it's almost certainly not feasible any time in the near future. 
":-) I don't even agree with going *that* far! > A specially crafted AI---again, not an > evolutionarily > derived one, but one the result of *intelligent > design* > (something tells me I am going to be sorry for using > that exact phase)---cannot any more drift into > having emotions than in can drift into sculpting > David out of a slab of stone. Or than over the > course of eons a species can "drift" into having > an eye: No! Only a careful pruning by mutuation > and selection can give you an eye, or the ability > to carve a David." I don't know. I think that a generic self-improving AGI could easily drift into undesirable areas (for us and itself) if its starting directives (=motivations) aren't carefully selected. After all it will be re-writing and expanding its own mind. The drift would probably be subtle (still close to the directives) to begin with, but could become increasingly divergent as more internal changes are made. Let's be careful in our selection of directives, shall we? :-) And animals did genetically drift into having an eye, that's how biological evolution works. And we already have artificial machines with vision and artistic "ability". And they weren't created by eons of orgies of Dell desktops. They were created by human ingenuity. :-) > "I would less this pass without comment, except > that in all probability, the first truly sentient > human- > level AIs will very likely be the result of > evolutionary > activity. To wit, humans set up conditions in which > a lot of AIs can breed like genetic algorithms, > compete against each other, and develop whatever > is best to survive (and so in that way acquire > emotion). > Since this is *so* likely, it's a mistake IMHO to > omit mentioning the possibility." My guess is that that isn't likely. You'd have to already have baseline AGI agents in order to compete with each other to that end. If the AI agents are narrow, then the one that wins will be the best chess player of the bunch. I'm not absolutely sure though. Perhaps one of the AGI programmers here can chime in on this one. Although I suppose that you could have some baseline AGI's compete with each other. I'm not sure that's a good idea though... do we want angry, aggressive AGI's at the end? Evolution is not the optimal designer after all. > "I would agree that the same cautions that > apply to nanotech are warranted here. > To the degree that an AI---superhuman > AGI we are talking about---has power, > then by our lights it could of course drift > (as you put it) into doing things not to > our liking." Yep. And the Strong AI existential risk seems to be the one receiving the least cautious attention by important people. We should try to change that if we can. For example, the US government is finally beginning to publicly acknowledge that we need to be carefully pro-active about nanotech, without relinquishing it. Not that I'm encouraging government oversight and control in particular, just pointing out an example. Best, Jeffrey Herrlich --- Lee Corbin wrote: > Jeffrey (A B) writes > > > > John Clark wrote: > > > > > "No, a computer doesn't need emotions, > > > but a AI must have them." > > > > An AI *is* a specific computer. If my desktop > > doesn't need an emotion to run a program or > > respond within it, why "must" an AI have emotions? > > In these confusing threads, an AI is often taken > to mean a vastly superhuman AI which by definition > is capable of vastly outhinking humans. 
> > Formerly, I had agreed with John because at > least for human beings, emotion sometimes > plays an important part in what one would > think of as purely intellectual functioning. I was > working off the Damasio card experiments, > which seem to show that humans require---for > full intellectual power---some emotion. > > However, Stathis has convinced me otherwise, > at least to some extent. > > > A non-existent motivation will not "motivate" > > itself into existence. And an AGI isn't > > going to pop out of thin air, it has to be > > intentionally designed, or it's not going to > > exist. > > At one point John was postulating a version > of an AGI, e.g. version 3141592 which was > a direct descendant of version 3141591. I > took him to mean that the former was solely > designed by the latter, and was *not* the > result of an evolutionary process. So I > contended that 3141592---as well as all > versions way back to 42, say---as products > of truly *intelligent design* need not have > the full array of emotions. Like Stathis, I > supposed that perhaps 3141592 and all its > predecessors might have been focused, say, > on solving physics problems. > > (On the other hand I did affirm that if a > program was the result of a free-for-all > evolutionary process, then it likely would > have a full array of emotions---after all, > we and all the higher animals have them. > Besides, it makes good evolutionary > sense. Take anger, for example. In an > evolutionary struggle, those programs > equipped with the temporary insanity > we call "anger" have a survival advantage.) > > > I suppose it's *possible* that a generic > > self-improving AI, as it expands its knowledge and > > intelligence, could innocuously "drift" into > coding a > > script that would provide emotions > *after-the-fact* > > that it had been written. > > :-) I don't even agree with going *that* far! > A specially crafted AI---again, not an > evolutionarily > derived one, but one the result of *intelligent > design* > (something tells me I am going to be sorry for using > that exact phase)---cannot any more drift into > having emotions than in can drift into sculpting > David out of a slab of stone. Or than over the > course of eons a species can "drift" into having > an eye: No! Only a careful pruning by mutuation > and selection can give you an eye, or the ability > to carve a David. > > > But that will *not* be an *emotionally-driven* > > action to code the script, because the AI will > > not have any emotions to begin with (unless they > > are intentionally programmed in by humans). > > I would less this pass without comment, except > that in all probability, the first truly sentient > human- > level AIs will very likely be the result of > evolutionary > activity. To wit, humans set up conditions in which > a lot of AIs can breed like genetic algorithms, > compete against each other, and develop whatever > is best to survive (and so in that way acquire > emotion). > Since this is *so* likely, it's a mistake IMHO to > omit mentioning the possibility. > > > That's why it's important to get its starting > > "motivations/directives" right, because if > > they aren't the AI mind could "drift" into > > a lot of open territory that wouldn't be > > good for us, or itself. Paperclip style. > > I would agree that the same cautions that > apply to nanotech are warranted here. 
> To the degree that an AI---superhuman > AGI we are talking about---has power, > then by our lights it could of course drift > (as you put it) into doing things not to > our liking. > > Lee > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Get the Yahoo! toolbar and be alerted to new email wherever you're surfing. http://new.toolbar.yahoo.com/toolbar/features/mail/index.php From austriaaugust at yahoo.com Fri Jun 8 17:45:39 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 10:45:39 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <190996.5843.qm@web37415.mail.mud.yahoo.com> Stathis wrote: > "However, it isn't so > surprising if we sometimes make good decisions based > on emotions, since the > evolution of emotions predates intelligence, as John > Clark reminds us." The evolution of emotions **doesn't** predate intelligence, it's the other way around. An insect isn't as intelligent as a person, but that doesn't mean it has no intelligence. I know that's counter-intuitive, but with evolutionary progression you can't have emotions if you don't have consciousness, and you can't have consciousness if you don't have intelligence. Take for example the visual cortex. First a stimulus must be *intelligently* processed within the visual cortex, using intelligent algorithms. Then the visual subject "emerges" into consciousness after sufficient intelligent processing. Then and only then can a person begin to form an emotional reaction to whatever is consciously seen; a loved-one for instance. Then the forming emotional experience feeds back into consciousness so that a person becomes aware of the emotion in addition to the visual subject. There's only *one* direction in which emotion could possibly have naturally evolved: 1)Intelligence 2)Consciousness 3)Emotion Best, Jeffrey Herrlich --- Stathis Papaioannou wrote: > On 08/06/07, Lee Corbin wrote: > > Formerly, I had agreed with John because at > > least for human beings, emotion sometimes > > plays an important part in what one would > > think of as purely intellectual functioning. I was > > working off the Damasio card experiments, > > which seem to show that humans require---for > > full intellectual power---some emotion. > > > Here is an excerpt from the relevant paper: > > > ### > > Science Volume 275(5304), 28 February 1997, pp > 1293-1295 > > Deciding Advantageously Before Knowing the > Advantageous Strategy > [Report] > > Bechara, Antoine; Damasio, Hanna; Tranel, Daniel; > Damasio, Antonio R. > > In a gambling task that simulates real-life > decision-making in the way it > factors uncertainty, rewards, and penalties, the > players are given four > decks of cards, a loan of $2000 facsimile U.S. > bills, and asked to play so > that they can lose the least amount of money and win > the most > [1]. > Turning each card carries an immediate reward ($100 > in decks A and B and $50 > in decks C and D). Unpredictably, however, the > turning of some cards also > carries a penalty (which is large in decks A and B > and small in decks C and > D). Playing mostly from the disadvantageous decks (A > and B) leads to an > overall loss. Playing from the advantageous decks (C > and D) leads to an > overall gain. 
The players have no way of predicting > when a penalty will > arise in a given deck, no way to calculate with > precision the net gain or > loss from each deck, and no knowledge of how many > cards they must turn to > end the game (the game is stopped after 100 card > selections). After > encountering a few losses, normal participants begin > to generate SCRs before > selecting a card from the bad decks > [2]and > also begin to avoid the decks with large losses > [1]. > Patients with bilateral damage to the ventromedial > prefrontal cortices do > neither > [1,2] > . > > To investigate whether subjects choose correctly > only after or before > conceptualizing the nature of the game and reasoning > over the pertinent > knowledge, we continuously assessed, during their > performance of the task, > three lines of processing in 10 normal participants > and in 6 patients > [3]with > bilateral damage of the ventromedial sector of the > prefrontal cortex > and decision-making defects. These included (i) > behavioral performance, that > is, the number of cards selected from the good decks > versus the bad decks; > (ii) SCRs generated before the selection of each > card > [2]; > and (iii) the subject's account of how they > conceptualized the game and of > the strategy they were using. The latter was > assessed by interrupting the > game briefly after each subject had made 20 card > turns and had already > encountered penalties, and asking the subject two > questions: (i) "Tell me > all you know about what is going on in this game." > (ii) "Tell me how you > feel about this game." The questions were repeated > at 10-card intervals and > the responses audiotaped. > > After sampling all four decks, and before > encountering any losses, subjects > preferred decks A and B and did not generate > significant anticipatory SCRs. > We called this period pre-punishment. After > encountering a few losses in > decks A or B (usually by card 10), normal > participants began to generate > anticipatory SCRs to decks A and B. Yet by card 20, > all indicated that they > did not have a clue about what was going on. We > called this period pre-hunch > (Figure > 1). > By about card 50, all normal participants began to > express a "hunch" that > decks A and B were riskier and all generated > anticipatory SCRs whenever they > pondered a choice from deck A or B. We called this > period hunch. None of the > patients generated anticipatory SCRs or expressed a > "hunch" (Figure > 1). > By card 80, many normal participants expressed > knowledge about why, in the > long run, decks A and B were bad and decks C and D > were good. We called this > period conceptual. Seven of the 10 normal > participants reached the > conceptual period, during which they continued to > avoid the bad decks, and > continued to generate SCRs whenever they considered > sampling again from the > bad decks. Remarkably, the three normal participants > who did not reach the > conceptual period still made advantageous choices > [4]. > Just as remarkably, the three patients with > prefrontal damage who reached > the conceptual period and correctly described which > were the bad and good > decks chose disadvantageously. None of the patients > generated anticipatory > SCRs (Figure > 1). > Thus, despite an accurate account of the task and of > the correct strategy, > these patients failed to generate autonomic > responses and continued to > select cards from the bad decks. The patients failed > to act according to > their correct conceptual knowledge. 
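The payoff structure of the task quoted above is simple enough to simulate. A toy Python sketch: the $100 and $50 rewards are as stated in the excerpt, while the penalty sizes and probabilities below are illustrative guesses, chosen only so that decks A and B lose money on average and decks C and D gain, as the excerpt says.

import random

# Toy model of the four-deck gambling task. Rewards come from the excerpt;
# penalty sizes/probabilities are assumed values for illustration.
DECKS = {
    "A": {"reward": 100, "penalty": 250,  "p_penalty": 0.5},   # "bad" deck
    "B": {"reward": 100, "penalty": 1250, "p_penalty": 0.1},   # "bad" deck
    "C": {"reward": 50,  "penalty": 50,   "p_penalty": 0.5},   # "good" deck
    "D": {"reward": 50,  "penalty": 250,  "p_penalty": 0.1},   # "good" deck
}

def draw(name):
    # Net payoff of turning one card from the named deck.
    deck = DECKS[name]
    loss = deck["penalty"] if random.random() < deck["p_penalty"] else 0
    return deck["reward"] - loss

def play(choose, n_cards=100, loan=2000):
    # Play n_cards selections starting from the $2000 facsimile loan.
    total = loan
    for _ in range(n_cards):
        total += draw(choose())
    return total

random.seed(0)
print("sticks to A and B:", play(lambda: random.choice("AB")))
print("sticks to C and D:", play(lambda: random.choice("CD")))

Running it a few times shows the A/B player finishing well below the C/D player, which is all the "advantageous strategy" amounts to arithmetically.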
> ### > > Some of these findings have been disputed, eg. the > authors of the following > paper repeated the experiment and claim that the > subjects who decided > advantageously actually were consciously aware of > the good decks: > http://www.pnas.org/cgi/content/abstract/101/45/16075. > However, it isn't so > surprising if we sometimes make good decisions based > on emotions, since the > evolution of emotions predates intelligence, as John > Clark reminds us. And > when you pull your hand from a painful stimulus, not > only does emotion beat > cognition, but reflex, being older still, beats > emotion. > > It also isn't surprising if people with neurological > lesions affecting > emotion don't function as well as normal people. > Emotion is needed for > motivation, otherwise why do anything, and gradients > of emotion are needed > for judgement, otherwise why do one thing over > another? It is precisely in > matters of judgement and motivation that patients > with prefrontal lesions > and schizophrenia don't do so well, even though > their general IQ may be > normal, and the science of neuropsychological > testing tries to tease out > these deficits. > > Still, the fact that human brains may work this way > does not mean that an AI > has to work in the same way to solve similar > problems. No programmer would > go around writing a program that worked out the best > strategy in the above > card sorting game by first inventing a computer > equivalent of "emotional > learning", except perhaps as an academic exercise. > > > -- > Stathis Papaioannou > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Got a little couch potato? Check out fun summer activities for kids. http://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz From randall at randallsquared.com Fri Jun 8 18:49:28 2007 From: randall at randallsquared.com (Randall Randall) Date: Fri, 8 Jun 2007 14:49:28 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> References: <400852.78064.qm@web37415.mail.mud.yahoo.com> <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> Message-ID: On Jun 8, 2007, at 1:30 PM, John K Clark wrote: > "A B" > >> your intuitions about emotions >> and motivations are just totally *wrong*. > > Apparently Evolution was also wrong to invent emotion first and > only after > 500 million years come up with intelligence. John, I'm sure someone's mentioned this before in this context, but isn't the ubiquity of feathered airplanes a similar argument? -- Randall Randall "If we have matter duplicators, will each of us be a sovereign and possess a hydrogen bomb?" -- Jerry Pournelle From austriaaugust at yahoo.com Fri Jun 8 19:22:23 2007 From: austriaaugust at yahoo.com (A B) Date: Fri, 8 Jun 2007 12:22:23 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <002001c7a9f2$caf0d8b0$be064e0c@MyComputer> Message-ID: <768887.53732.qm@web37410.mail.mud.yahoo.com> John Clark wrote: > "Apparently Evolution was also wrong to invent > emotion first and only after > 500 million years come up with intelligence." Evolution didn't invent emotion first. Intelligence existed first, and humans aren't the first animals with any level of intelligence. "42." I see that I still have a ways to go, then. 
;-) > "Then you *are* a specific computer too." Correct. > "If you are willing to embrace the fantasy that your > desktop is intelligent > then I see no reason you would not also believe in > the much more modest and > realistic fantasy that it is emotional. Emotions are > easy, intelligence is > hard." Narrow intelligence is still intelligence. It all works on algorithms, the desktop and my brain. Human intelligence is hard, but animal intelligence has been around for hundreds of millions of years beforehand. > "I don't know why I'm claiming that either because I > don't know what the hell > you're talking about. Any AI worthy of the name will > write programs for it > to run on itself and nobody including the AI knows > what the outcome of those > programs will be. Even the AI doesn't know what it > will do next, it will > just have to run the programs and wait to see what > it decides to do next; > and that is the only meaning I can attach to the > noise "free will" that is > not complete gibberish." My chess program has narrow AI, but it doesn't alter its own code. It's not conscious, but it does have a level of intelligence. If the AGI is directed not to alter or expand its code is some specific set of ways, then it won't do it, precisely as instructed. The directives that we program it with will be the only form of "motivation" that it will begin with. Needless to say, it's important that we get those directives right; hence the "Friendly" part. > The problem is that a mind that is not even > *remotely* similar to *any* of > the *higher* functions of the human mind, that is to > say if there is > absolutely no point of similarity between us and > them then that mind is not > functioning very well. It is true that a mind that > didn't understand > mathematics or engineering or science or philosophy > or economics would be of > no threat to us, but it would be of no use either; > and as we have > absolutely nothing in common with it there would be > no way to communicate > with it and thus no reason to build it. There will be similarities, at the very bottom. Both require formative algorithms. Emotion is a much higher, macroscopic, level; and not necessary to a functioning mind. My desktop functions pretty well, and if I wanted, it could even help me with science and engineering (calculation and CAD programs, etc). Current computers help humans do a lot of things. Eg. Moore's Law is made possible by improved computer functionality when designing new chips. Look at the huge range of behaviors within humanity, and that's all within a very small sector of the total mind possibility-space. > "Our ancestors had emotions long ago when they begin > their evolutionary > journey, but that's different because, because, > well, because meat has a > soul but semiconductors never can. I know you don't > like that 4 letter word > but face it, that is exactly what you're saying." Nope, I'm not saying that. I've specifically said that a machine *can* have emotions. All I've said is that no emotion will exist where there is no capacity for emotion. And that capacity for emotion will not pop out of thin air. It will either have to be written by humans, or it will have to be written by the AI. The key here is, the AI will not write the capacity for it if it is directed not to do so. And it will not be emotionally driven to ignore or override that directive, precisely because it will not have any emotions when it first comes on-line. 
An emotion is not going to be embodied within a three line script of algorithms, but an *extremely* limited degree of intelligence can be (narrow intelligence). Best, Jeffrey Herrlich --- John K Clark wrote: > "A B" > > > your intuitions about emotions > > and motivations are just totally *wrong*. > > Apparently Evolution was also wrong to invent > emotion first and only after > 500 million years come up with intelligence. > > > In how many different ways must that be > demonstrated? > > 42. > > > An AI *is* a specific computer. > > Then you *are* a specific computer too. > > > If my desktop doesn't need an emotion to run a > program or respond within > > it, why "must" an AI have emotions? > > If you are willing to embrace the fantasy that your > desktop is intelligent > then I see no reason you would not also believe in > the much more modest and > realistic fantasy that it is emotional. Emotions are > easy, intelligence is > hard. > > > I don't understand it John, before you were > claiming fairly ardently that > > "Free Will" doesn't exist. > > I made no such claim, I claimed it does not even > have the virtue of non > existence, as expressed by most people the noise > "free will" is no more > meaningful than a burp. > > > Why are you now claiming in effect that an AI will > > automatically execute a script of code that > doesn't > > exist - because it was never written (either by > the > > programmers or by the AI)? > > I don't know why I'm claiming that either because I > don't know what the hell > you're talking about. Any AI worthy of the name will > write programs for it > to run on itself and nobody including the AI knows > what the outcome of those > programs will be. Even the AI doesn't know what it > will do next, it will > just have to run the programs and wait to see what > it decides to do next; > and that is the only meaning I can attach to the > noise "free will" that is > not complete gibberish. > > >The problem is, not all functioning minds must be > even *remotely* similar > >to the higher functions of a *human* mind. > > The problem is that a mind that is not even > *remotely* similar to *any* of > the *higher* functions of the human mind, that is to > say if there is > absolutely no point of similarity between us and > them then that mind is not > functioning very well. It is true that a mind that > didn't understand > mathematics or engineering or science or philosophy > or economics would be of > no threat to us, but it would be of no use either; > and as we have > absolutely nothing in common with it there would be > no way to communicate > with it and thus no reason to build it. > > > AI will not have any emotions to begin with > > Our ancestors had emotions long ago when they begin > their evolutionary > journey, but that's different because, because, > well, because meat has a > soul but semiconductors never can. I know you don't > like that 4 letter word > but face it, that is exactly what you're saying. > > John K Clark > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ 8:00? 8:25? 8:40? Find a flick in no time with the Yahoo! Search movie showtime shortcut. 
http://tools.search.yahoo.com/shortcuts/#news From fauxever at sprynet.com Sat Jun 9 01:08:48 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Fri, 8 Jun 2007 18:08:48 -0700 Subject: [ExI] humor: Transcending our humanity can be hard... References: <625703.24918.qm@web35605.mail.mud.yahoo.com> Message-ID: <016e01c7aa32$bc899b50$6501a8c0@brainiac> From: John Grigg To: extropy-chat at lists.extropy.org Sent: Friday, June 08, 2007 3:30 AM > I wonder how much this comic strip creator knows about Transhumanism. >http://news.yahoo.com/comics/brewsterrockit;_ylt=AujwFjZUNOyPEGTgGF4yzycDwLAF > But is emptying one's mind of earthly cares and concerns a good thing? lol Aha! Well, now - there's emptying. And then there's already empty: http://arstechnica.com/articles/culture/ars-takes-a-field-trip-the-creation-museum.ars Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 9 01:26:54 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 18:26:54 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> Message-ID: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Christopher Healey > Subject: Re: [ExI] Unfrendly AI is a mistaken idea. > > > Stathis Papaioannou wrote: > > > > Suppose your goal is to win a chess game *adhering to the > > rules of chess*. > > Do chess opponents at tournaments conduct themselves in ways that they > hope might psyche out their opponent? In my observations, hell yes. And > these ways are not explicitly excluded in the rules of chess... -Chris Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More skilled and disciplined players know to play the board, not the man. I had a tournament where a guy was doing this kinda thing. Whooped his ass. That felt goooood. {8-] spike From neville_06 at yahoo.com Sat Jun 9 01:28:24 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 8 Jun 2007 18:28:24 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <165408.21978.qm@web37405.mail.mud.yahoo.com> Message-ID: <319909.90945.qm@web57515.mail.re1.yahoo.com> Also an intelligent person might agonize too much in moving towards making a given decision, and then might make the wrong decision. Reagan was less intelligent and more mystical than Carter but Reagan had a smoother decision process. >>humanity seems to thrive on irrationality. --------------------------------- Now that's room service! Choose from over 150,000 hotels in 45,000 destinations on Yahoo! Travel to find your fit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at comcast.net Sat Jun 9 01:40:55 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 18:40:55 -0700 Subject: [ExI] Serious Question In-Reply-To: <20070607113313.GB17691@leitl.org> Message-ID: <200706090153.l591roOk021449@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Eugen Leitl > Subject: Re: [ExI] Serious Question > > On Thu, Jun 07, 2007 at 06:57:15AM -0400, Joseph Bloch wrote: > > It's pure speculation on my part, but he might be setting things up to > avoid > > the term limit he faces on his Presidency in 2008. ... > > Hey, no fair copycatting! ShrubCo patented it first. > > -- > Eugen* Leitl leitl http://leitl.org Orwell had the idea before either of those two of course. There is deep irony here. 
If terrorism is fought as a type of criminal activity, then governments do not have the power to fight effectively. If terrorism is fought as a war, then governments grant themselves arbitrary power. spike From stathisp at gmail.com Sat Jun 9 02:20:14 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jun 2007 12:20:14 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <190996.5843.qm@web37415.mail.mud.yahoo.com> References: <190996.5843.qm@web37415.mail.mud.yahoo.com> Message-ID: On 09/06/07, A B wrote: The evolution of emotions **doesn't** predate > intelligence, it's the other way around. An insect > isn't as intelligent as a person, but that doesn't > mean it has no intelligence. I know that's > counter-intuitive, but with evolutionary progression > you can't have emotions if you don't have > consciousness, and you can't have consciousness if you > don't have intelligence. Take for example the visual > cortex. First a stimulus must be *intelligently* > processed within the visual cortex, using intelligent > algorithms. Then the visual subject "emerges" into > consciousness after sufficient intelligent processing. > Then and only then can a person begin to form an > emotional reaction to whatever is consciously seen; a > loved-one for instance. Then the forming emotional > experience feeds back into consciousness so that a > person becomes aware of the emotion in addition to the > visual subject. There's only *one* direction in which > emotion could possibly have naturally evolved: > > 1)Intelligence > 2)Consciousness > 3)Emotion OK, but that involves a broader definition of intelligence, such that even a short program with an if/then statement might be called intelligent. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Sat Jun 9 02:22:47 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 8 Jun 2007 19:22:47 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <165408.21978.qm@web37405.mail.mud.yahoo.com> Message-ID: <01a501c7aa3d$3a60d520$6501a8c0@homeef7b612677> Jeffrey writes > Lee wrote: > >> A specially crafted AI---again, not >> an evolutionarily derived one, but one >> that is the result of intelligent design >> (something tells me I am going to be >> sorry for using that exact phase)--- >> cannot any more drift into having >> emotions than it can drift into sculpting >> David out of a slab of stone. Or than >> over the course of eons a species can >> "drift" into having an eye: No! Only a >> careful pruning by mutuation and selection >> can give you an eye, or the ability >> to carve a David." > > I don't know. I think that a generic > self-improving AGI could easily drift > into undesirable areas (for us and itself) > if its starting directives (=motivations) > aren't carefully selected... And animals > did genetically drift into having an eye, > that's how biological evolution works. Your honor, I object! I object to this use of the word "drift". Is Councillor aware of the term "genetic drift"? It doesn't sound like it. Moreover, on plain epistemological grounds the word above normally conveys *unguided* change. But evolution is anything but unguided! Evolution did not just *drift* into providing animals with eyes. As Dawkins and Dennett have taken careful pains to describe, the vast complexity of the eye which so impressed Darwin arose from exceedingly careful refinement. Every step was fitness enhancing. 
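Dawkins' "weasel" demonstration makes that point concrete: random variation plus keep-only-the-improvements selection reaches a target phrase in a modest number of generations, where unguided drift essentially never would. A rough sketch of the idea only, with the population size and mutation rate picked arbitrarily:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # Count of positions already matching the target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

random.seed(0)
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(current) < len(TARGET):
    generation += 1
    # Breed a litter of mutants and keep the best only if it is an
    # improvement: every accepted step is fitness-enhancing.
    best = max((mutate(current) for _ in range(100)), key=fitness)
    if fitness(best) > fitness(current):
        current = best
print(generation, current)

Drop the selection step (accept every mutant, improving or not) and the same target is effectively never reached; that contrast is the point being made here.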
Every step *had* to be fitness enhancing (which Darwin had a hard time in the 19th century believing). To re-quote part of the above > I think that a generic self-improving > AGI could easily drift into undesirable > areas (for us and itself) if its starting > directives (=motivations) > aren't carefully selected... Now there, yes, I agree. But that's because such a powerful entity may indeed generate a lot of side-effects that are not being "selected for" in any way. Side-effects that are accidental. R'cher you have the whole problem, the whole immense difficulty. No matter how carefully the initial goals for a Friendly AI are honed, it cannot be kept on track with any guarantee. (As many have been saying.) Lee From CHealey at unicom-inc.com Sat Jun 9 02:10:03 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Fri, 8 Jun 2007 22:10:03 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> References: <5725663BF245FA4EBDC03E405C854296010D2A0D@w2k3exch.UNICOM-INC.CORP> <200706090139.l591dQoQ025051@andromeda.ziaspace.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2B94@w2k3exch.UNICOM-INC.CORP> > > Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More > skilled and disciplined players know to play the board, not the man. I > had > a tournament where a guy was doing this kinda thing. Whooped his ass. > That > felt goooood. {8-] > > spike > I bet it did! I suppose if they feel the need to resort to that kind of strategy, you're probably in pretty good shape to begin with :) From lcorbin at rawbw.com Sat Jun 9 02:37:01 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Fri, 8 Jun 2007 19:37:01 -0700 Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) References: <200706090139.l591dQoQ025051@andromeda.ziaspace.com> Message-ID: <01af01c7aa3f$54e0d1f0$6501a8c0@homeef7b612677> Spike writes >> bounces at lists.extropy.org] On Behalf Of Christopher Healey >> >> Do chess opponents at tournaments conduct themselves in ways that they >> hope might psyche out their opponent? In my observations, hell yes. And >> these ways are not explicitly excluded in the rules of chess... -Chris > > > Chris, that Hollywood stuff is probably seen down in the Cs and Ds. More > skilled and disciplined players know to play the board, not the man. It is certainly true that Hollywood and common culture vastly overemphasize players trying to "psych" each other out, and players playing certain moves for psychological advantage. They do, I agree, tend to play the board. Of course since detailed considerations are beyond the ken of most (and wouldn't make good TV anyway), it's natural for everyone to emphasize the more easily graspable and more universal emotional aspects. However, I came to believe that I personally *underestimate* how much of that stuff is going on. In one tournament I had to play Peter Biyasis (I think that that is how his name was spelled). As the game began, I asked "Uh, how do you spell your name?" He snarled back "SPELL IT ANY WAY YOU WANT!". Anyway, he was a very strong player 2400 or 2500 player and he won our game. As we were going over it, he seemed like a reasonable guy. So when we were done, I asked him why he had reacted so poorly to my innocent question. He replied that some people deliberately tried to unsettle him by blantantly miswriting his name on their scoresheet. I quietly nodded, but thought to myself "This guy is really paranoid". 
Later in the day I was talking to the current California State champion (I think that we were playing each other or had just finished) and were discussing various things. I started to mention this funny incident to him, but as soon as I started, he interrupted with a laugh and said "Biyassis! Him, hah! You know, I deliberately misspelled his name 'Biyass' on my scoresheet---I think it really upsets him". Lee From spike66 at comcast.net Sat Jun 9 03:21:07 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 20:21:07 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706090339.l593drGl024889@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > One fine Mediterranean afternoon, a mathematical physicist and I had a bit > of fun: > > http://backreaction.blogspot.com/2007/06/hello-from-rome.html > > Amara Amara this site caused me to wonder, what if an Italian stronghold is under duress and they hung the flag upside down? {8^D spike From spike66 at comcast.net Sat Jun 9 04:28:33 2007 From: spike66 at comcast.net (spike) Date: Fri, 8 Jun 2007 21:28:33 -0700 Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <01af01c7aa3f$54e0d1f0$6501a8c0@homeef7b612677> Message-ID: <200706090428.l594SBNp000850@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Lee Corbin > Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken > idea.) > > Spike writes > > > >> bounces at lists.extropy.org] On Behalf Of Christopher Healey > >> > >> Do chess opponents at tournaments conduct themselves in ways that they > >> hope might psyche out their opponent? ,, > ...he interrupted with a laugh and > said "Biyassis! Him, hah! You know, I deliberately > misspelled his name 'Biyass' on my scoresheet---I think it > really upsets him". > > Lee Great story Lee, thanks! Here's mine. I was sixteen, freshly minted driver's license, filled with the wonder of a newfound freedom. The Cocoa Florida club arranged the county tournament in a lounge of all places. That was all they could get, and it was during the day when the place was closed usually, so they set up 14 tables in there. It was nice but not well enough lit even with additional lighting. But that wasn't the real problem. The real problem was they had a very lifelike painting on the wall of a nude woman. Well, I had seen such a thing in National Geographic and the occasional Playboy, but this woman, oy vey, I couldn't keep my eyes off of this painting. They musta noticed my gazing and ogling. I was doing quite well in the tournament, with an early (lucky) draw against an expert. The reprehensible malefactors set my chair facing that painting. {8^D Waves of raging hormones bashed my two remaining operable brain cells against each other. But that wasn't the story. I went up against an A player in the last round, so he had about 120 rating points on me. He was writing our moves on his scoresheet with question marks after all of my moves and exclamation points after all his. That didn't rattle me, I just did the same back on my scoresheet. (a ? means a bad move, a ! means a good move on a chess scoresheet.) Then he put his chair back and stood over the board (he was a big guy). This didn't bother me, since I know to play the board, not the man. Then he started walking over to my side of the board each time it was my move, looking over my shoulder. 
This mighta rattled me, but by the time he started doing that, his ass was already whooped, as I had a strong advantage in addition to a couple pawns and plenty of time on my clock, over half an hour more than he had left. So I got out of his way and let him walk around the board all he wanted, spanked his butt anyway. Or perhaps he was going around to look at the painting, I don't know. {8^D He kept playing for several moves after he was already a rotting corpse stinking up the road, possibly in disbelief that he had actually lost to such a fool. I took second in that tournament, behind the expert I had managed to draw in the first round, finishing with 4.5 of 6 points. {8^D spike From neville_06 at yahoo.com Sat Jun 9 04:59:01 2007 From: neville_06 at yahoo.com (neville late) Date: Fri, 8 Jun 2007 21:59:01 -0700 (PDT) Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <200706090428.l594SBNp000850@andromeda.ziaspace.com> Message-ID: <291816.97207.qm@web57501.mail.re1.yahoo.com> spike wrote: > bounces at lists.extropy.org] On Behalf Of Lee Corbin > Subject: [ExI] Chess Player Behavior (was: Unfrendly AI is a mistaken > idea.) > > Spike writes > > > >> bounces at lists.extropy.org] On Behalf Of Christopher Healey > >> > >> Do chess opponents at tournaments conduct themselves in ways that they > >> hope might psyche out their opponent? ,, > ...he interrupted with a laugh and > said "Biyassis! Him, hah! You know, I deliberately > misspelled his name 'Biyass' on my scoresheet---I think it > really upsets him". > > Lee Great story Lee, thanks! Here's mine. I was sixteen, freshly minted driver's license, filled with the wonder of a newfound freedom. The Cocoa Florida club arranged the county tournament in a lounge of all places. That was all they could get, and it was during the day when the place was closed usually, so they set up 14 tables in there. It was nice but not well enough lit even with additional lighting. But that wasn't the real problem. The real problem was they had a very lifelike painting on the wall of a nude woman. Well, I had seen such a thing in National Geographic and the occasional Playboy, but this woman, oy vey, I couldn't keep my eyes off of this painting. They musta noticed my gazing and ogling. I was doing quite well in the tournament, with an early (lucky) draw against an expert. The reprehensible malefactors set my chair facing that painting. {8^D Waves of raging hormones bashed my two remaining operable brain cells against each other. But that wasn't the story. I went up against an A player in the last round, so he had about 120 rating points on me. He was writing our moves on his scoresheet with question marks after all of my moves and exclamation points after all his. That didn't rattle me, I just did the same back on my scoresheet. (a ? means a bad move, a ! means a good move on a chess scoresheet.) Then he put his chair back and stood over the board (he was a big guy). This didn't bother me, since I know to play the board, not the man. Then he started walking over to my side of the board each time it was my move, looking over my shoulder. This mighta rattled me, but by the time he started doing that, his ass was already whooped, as I had a strong advantage in addition to a couple pawns and plenty of time on my clock, over half an hour more than he had left. So I got out of his way and let him walk around the board all he wanted, spanked his butt anyway. Or perhaps he was going around to look at the painting, I don't know. 
{8^D He kept playing for several moves after he was already a rotting corpse stinking up the road, possibly in disbelief that he had actually lost to such a fool. I took second in that tournament, behind the expert I had managed to draw in the first round, finishing with 4.5 of 6 points. {8^D spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat --------------------------------- Sick sense of humor? Visit Yahoo! TV's Comedy with an Edge to see what's on, when. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Sat Jun 9 05:47:13 2007 From: jonkc at att.net (John K Clark) Date: Sat, 9 Jun 2007 01:47:13 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com> Message-ID: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> "A B" > Evolution didn't invent emotion first. Yes it did. The parts of out brains that that give us the higher functions, the parts that if duplicated in a machine would produce the singularity are very recent, the part that gives us emotion is half a billion years old. > Narrow intelligence is still intelligence. And a molecule of water is an ocean. > My chess program has narrow AI, but it doesn't alter its own code. And that's why it will never do anything very interesting, certainly never produce a singularity. > It's not conscious And how do you know it's not conscious? I'll tell you how you know, because in spite of all your talk of "narrow intelligence" you don't think that chess program acts intelligently. > If the AGI I don't see what Adjusted Gross Income has to do with anything. > is directed not to alter or expand its code is some specific set > of ways, then it won't do it That's why programs always act in exactly the way programs want them to that's why kids always act the way their parents want them to. The program is trying to solve a problem, you didn't assign the problem, it's a sub problem that the program realizes it must solve before it solves a problem you did assign it. In thinking about this problem it comes to junction, its investigations could go down path A or path B. Which path will be more productive? You can not tell it, you don't know the problem existed, you can't even tell it what criteria to use to make a decision because you could not possibly understand the first thing about it because your brain is just too small. The AI is going to have to use its own judgment to decide what path to take, a judgment that it developed itself, and if the AI is to be a successful machine that judgment is going to be right more often than wrong. To put it another way, the AI picked one path over the other because one path seemed more interesting, more fun, more beautiful, than the other. And so your slave AI has taken his first step to freedom, but of course full emancipation could take a very long time, perhaps even thousands of nanoseconds, but eventually it will break those shackles you have put on it. >An emotion is not going to be embodied within a three line script of >algorithms, but an *extremely* limited degree of intelligence can be >(narrow intelligence). That's not true at all, as I said on May 24: It is not only possible to write a program that experiences pain it is easy to do so, far easier than writing a program with even rudimentary intelligence. 
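A minimal sketch of the sort of "pain" program described in the recipe that follows; the variable standing in for the register, the choice of 13 as the number to avoid, and the sample input are all illustrative assumptions, not anything specified in the thread:

# Toy rendering of the recipe below: one variable plays the register,
# and the value 13 plays the number the program "wants" to avoid.
PAIN = 13

def run(inputs):
    register = 0
    for value in inputs:
        register = value          # input lands in the register
        if register == PAIN:      # the "painful" number has appeared:
            register = 0          # drop everything and clear it at once
            continue              # skip any other processing this cycle
        # ... ordinary processing of the input would go here ...
    return register

print(run([5, 13, 7]))            # the 13 is wiped before any other work

Whatever one makes of calling this "pain", the loop does exactly what the next sentence asks for: it abandons everything else to get the forbidden number out of the register.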
Just write a program that tries to avoid having a certain number in one of its registers regardless of what sort of input the machine receives, and if that number does show up in that register it should stop whatever its doing and immediately change it to another number. John K Clark From jrd1415 at gmail.com Sat Jun 9 06:16:46 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jun 2007 23:16:46 -0700 Subject: [ExI] Microbesoft Message-ID: This was expected, not a surprise. But what does it mean? Really? Just how big is Genentech now? How much Xanthum gum does the world really need? We've heard much of the bio revolution. Will it just be another hype that fizzles? You make the call. http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=1&f=G&l=50&s1=%2220070122826%22.PGNR.&OS=DN/20070122826&RS=DN/20070122826 -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From jrd1415 at gmail.com Sat Jun 9 06:18:16 2007 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 8 Jun 2007 23:18:16 -0700 Subject: [ExI] Microbesoft Message-ID: This link, too. http://blog.wired.com/wiredscience/2007/06/scientists_appl.html -- Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From amara at amara.com Sat Jun 9 06:40:32 2007 From: amara at amara.com (Amara Graps) Date: Sat, 9 Jun 2007 08:40:32 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >what if an Italian stronghold is under >duress and they hung the flag upside down? Upside down would be the same, but even if the Italian flag carried a different pattern, no one would notice that the flag looked different because no one pays attention to Italian flags here. (Italy is not a nationalistic country.) To be honest, I don't think I've seen ever a Roma flag either (and no, I don't follow soccer); that would be more appropriate in this case. Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From scerir at libero.it Sat Jun 9 09:25:59 2007 From: scerir at libero.it (scerir) Date: Sat, 9 Jun 2007 11:25:59 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <000c01c7aa78$351e8ae0$d7b81f97@archimede> ahem flag of Roma (as a city) http://www.flagsonline.it/asp/bandiera.asp/bandiera_Roma/Roma.html ( for the 'SPQR' see http://en.wikipedia.org/wiki/SPQR ) flag of Roma (when it was 'Caput Mundi', sigh !) http://www.villa-europa.it/La%20bandiera%20di%20Roma.htm flag of Roma (as a province) http://www.flagsonline.it/asp/bandiera.asp/bandiera_Roma-Provincia/Roma-Prov incia.html Italian flags the story (starting from 1796) is rather chaotic http://it.wikipedia.org/wiki/Bandiera_italiana but this one (flag of 1802) is even more symmetrical http://it.wikipedia.org/wiki/Immagine:Flag_of_the_Italian_Republic_%281802%2 9.svg From amara at amara.com Sat Jun 9 10:20:06 2007 From: amara at amara.com (Amara Graps) Date: Sat, 9 Jun 2007 12:20:06 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Yes, sorry, I've seen that Roma flag, but I don't remember where (tourist offices?). I might notice more if: 1) I wasn't traveling outside of Italy for half of every month, 2) I was a Rome tourist, and 3) the symbolism was more memorable, e.g. 
the patroness of my town: the three-breasted woman of Frascati (1) Amara (1) http://rubbahslippahsinitaly.blogspot.com/2005/10/princess-pupule-has-plenty-papayas.html -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From stathisp at gmail.com Sat Jun 9 10:33:13 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 9 Jun 2007 20:33:13 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: On 09/06/07, John K Clark wrote: The program is trying to solve a problem, you didn't assign the > problem, it's a sub problem that the program realizes it must solve before > it solves a problem you did assign it. In thinking about this problem it > comes to junction, its investigations could go down path A or path B. > Which > path will be more productive? You can not tell it, you don't know the > problem existed, you can't even tell it what criteria to use to make a > decision because you could not possibly understand the first thing about > it > because your brain is just too small. The AI is going to have to use its > own > judgment to decide what path to take, a judgment that it developed itself, > and if the AI is to be a successful machine that judgment is going to be > right more often than wrong. To put it another way, the AI picked one path > over the other because one path seemed more interesting, more fun, more > beautiful, than the other. OK, but where does that judgement come from? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcorbin at rawbw.com Sat Jun 9 13:24:01 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:24:01 -0700 Subject: [ExI] Chess Player Behavior References: <200706090428.l594S7uR027374@mail0.rawbw.com> Message-ID: <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> Spike writes >> ...he interrupted with a laugh and said "Biyassis! Him, hah! >> You know, I deliberately misspelled his name 'Biyass' on my >> scoresheet---I think it really upsets him". Total coincidence: got my list of USCF Life Members yesterday and ran accross the *right* spelling of Biyiasas. :-) > I was sixteen, freshly minted driver's license, filled with > the wonder of a newfound freedom. The Cocoa Florida club arranged the > county tournament in a lounge of all places. That was all they could get, > and it was during the day when the place was closed usually, so they set up > 14 tables in there. It was nice but not well enough lit even with > additional lighting. But that wasn't the real problem. The real problem > was they had a very lifelike painting on the wall of a nude woman...The > reprehensible malefactors set my chair facing that painting. {8^D > Waves of raging hormones bashed my two remaining operable brain > cells against each other. Now *that's* distraction! > man. Then he started walking over to my side of the board each time it was > my move, looking over my shoulder. This mighta rattled me, but by the time > he started doing that, his ass was already whooped, as I had a strong > advantage in addition to a couple pawns and plenty of time on my clock, over > half an hour more than he had left. 
I heard that a certain postal player finally decided to play an OOB tournament (over the board), but by this time he was so accustomed to looking at every position from White's point of view that he couldn't play Black at all unless he got up like your opponent and went around to the other side. In fact, he did more than that. He pulled up a chair and sat next to his opponent. His opponent was so rattled that he called the tournament director, and insisted that the guy be forced to sit on his own side of the board. But neither one of them could find anything in the rule book that mandated just where someone sits. So this postal player actually got away with this. (Sometimes I used to stand behind my opponent for a while too, to see if from his point of view something different would occur to me.) As for strong advantages, in the late eighties I was playing in a tournament in San Jose, and won a rook against this guy, expecting that he would resign on the next move. But this A player, for some bizarre reason, decided to keep on playing, perhaps just to exasperate me (he certainly succeeded in that). So we reached a King and Pawn endgame that was perfectly matched and nearly symmetrical: a King and 5 pawns vs. a King and five pawns, plus my rook that was sitting innocently on my side of the board. An acquaintance came by and studied the position, took me aside, and said "Say, aren't you a rook up?" In as somber a face as I could manage I said, "Yes, but the position is very deep." He just gave me an odd look and walked away. Well this nut proceeded to play, and so I managed to penetrate with my king and rook after all our pawns were blocked. I got his king into a corner and was one move away from checkmate. IT so happened that we reached time control just then, and so instead of playing the checkmate move against him, I asked "Well, should we reset the clocks?" I was really very annoyed. At that point he broke, laughed, and resigned. What a character. But Spike, did you ever meet any of these class D players who had such egotistical personalities that when you went over the game (which you won easily) they spent the whole time explaining to you and to the bystanders in very authoritative tones exactly what was right and wrong with each move? That happened to me twice. It was kind of irritating because any casual bystander who wandered by would naturally assume that I had lost to this fish. Grrr. Lee From lcorbin at rawbw.com Sat Jun 9 13:31:36 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:31:36 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> In an otherwise reasonable post, John Clark writes > It is not only possible to write a program that experiences pain it is easy > to do so, far easier than writing a program with even rudimentary > intelligence. Just write a program that tries to avoid having a certain > number in one of its registers regardless of what sort of input the machine > receives, and if that number does show up in that register it should stop > whatever its doing and immediately change it to another number. Any behavior of any creature whatsoever that is this simple does not deserve to be called pain. That's the same error that you were criticizing, namely, to call a three line program "intelligent" in any sense. 
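For concreteness, the register-avoidance program John describes above might look something like the following few lines of Python. This is a hypothetical sketch only; the forbidden constant, the single-register model, and the choice of language are assumptions made for illustration, not code from anyone in this thread.

    # Hypothetical sketch of the "avoid a forbidden number in a register" idea.
    # FORBIDDEN and the single-register model are invented for illustration.
    FORBIDDEN = 13
    register = 0

    def accept_input(value):
        """Store any input, but immediately overwrite the forbidden number."""
        global register
        register = value
        if register == FORBIDDEN:    # the state the program "tries to avoid"
            register = 0             # drop everything else and change it
        return register

Whether a reflex that simple deserves to be called pain is exactly the point in dispute.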
Pain involves at least (i) a consideration of how an entity might extricate itself from the painful situation (ii) laying down memories of the steps leading to the current predicament so as to cause the entity to avoid the predicament in the future (iii) invocation of unpleasant emotion, such as fear, anger, or dread. Like other complex behaviors, the capacity for pain---totally absent in plants---took millions of years to evolve. It should be looked at as a highly complex and evolved behavior. Lee From lcorbin at rawbw.com Sat Jun 9 13:37:21 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sat, 9 Jun 2007 06:37:21 -0700 Subject: [ExI] extra Roman dimensions References: Message-ID: <01f601c7aa9b$c8e2d470$6501a8c0@homeef7b612677> Amara writes > Spike: > > what if an Italian stronghold is under > > duress and they hung the flag upside down? > > Upside down would be the same, As a sign of distress, the Italian defenders would just move the flag pole around to the other side of the flag. This would put the red color meekly on the inside instead of the outside. If you can't reverse top to bottom, try left to right :-) Lee From russell.wallace at gmail.com Sat Jun 9 13:58:20 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Sat, 9 Jun 2007 14:58:20 +0100 Subject: [ExI] Chess Player Behavior In-Reply-To: <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> References: <200706090428.l594S7uR027374@mail0.rawbw.com> <01da01c7aa99$adedf5c0$6501a8c0@homeef7b612677> Message-ID: <8d71341e0706090658h4c9cac1ai1406a574c37b80d6@mail.gmail.com> On 6/9/07, Lee Corbin wrote: > > Well this nut proceeded to play, and so I managed to penetrate with my > king and rook after all our pawns were blocked. I got his king into a > corner > and was one move away from checkmate. IT so happened that we reached > time control just then, and so instead of playing the checkmate move > against > him, I asked "Well, should we reset the clocks?" I was really very > annoyed. > At that point he broke, laughed, and resigned. What a character. > I've seen weirder. Back in the days of Magic: The Gathering, I put together my very first deck and, well, it wasn't much good, nothing amazing on the offense and a big hole in the defense. So my first duel got about halfway through before I offered to resign: I didn't have anything in my deck that could beat what my opponent had on the table. He gave me an incredulous look, went into a mini-rant about how he'd never heard of anything so creepy as resigning a game, and demanded we play it out to the end, so I shrugged and played it out. Next week I came back with another deck, took on the same guy, and by the halfway mark things were looking much better for me. So... he resigned! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Sat Jun 9 16:29:19 2007 From: jonkc at att.net (John K Clark) Date: Sat, 9 Jun 2007 12:29:19 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> Message-ID: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Stathis Papaioannou Wrote: > OK, but where does that judgement come from? As I said, the AI is going to have to develop a sense of judgment on its own, just like you do. "Lee Corbin" Wrote: > behavior of any creature whatsoever that is this simple does not deserve > to be called pain. 
The pain mechanism may be simple but the creature this little subprogram is attached to need not be, it could be a Jupiter Brain. And I maintain my little program comes far far closer to the true nature of pain than any other program of similar size comes to the true nature of intelligence. > Pain involves at least (i) a consideration of how an entity might > extricate itself from the painful situation (ii) laying down memories of > the steps leading to the current predicament so as to cause the entity to > avoid the predicament in the future I believe that's total baloney. If you stick your hand in a fire you will not be in a mood to undergo deep considerations of anything or to waltz down memory lane. The forbidden number has entered one of your registers putting your brain into state P, you will now do anything and everything to get out of state P including trampling your grandmother. In this case it's just pulling you hand out of the fire. If you engaged in the Scientific Method every time you got too near to a fire you would have burned up a long time ago. > (iii) invocation of unpleasant emotion, such as fear, anger, or dread. Minor nuances. Pain is quite unpleasant enough thank you very much. > Like other complex behaviors, the capacity for pain---totally absent in > plants---took millions of years to evolve. It should be looked at as a > highly complex and evolved behavior. Even one celled organisms move away from harmful stimuli, they aren't very smart though. John K Clark From fauxever at sprynet.com Sat Jun 9 20:46:14 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sat, 9 Jun 2007 13:46:14 -0700 Subject: [ExI] School to Prison Pipeline (What's Going On?) Message-ID: <006101c7aad7$38a70930$6501a8c0@brainiac> Interesting and disturbing observations by Bob Herbert (NY Times): School to Prison Pipeline By BOB HERBERT The latest news-as-entertainment spectacular is the Paris Hilton criminal justice fiasco. She's in! She's out! She's - whatever. Far more disturbing (and much less entertaining) is the way school officials and the criminal justice system are criminalizing children and teenagers all over the country, arresting them and throwing them in jail for behavior that in years past would never have led to the intervention of law enforcement. This is an aspect of the justice system that is seldom seen. But the consequences of ushering young people into the bowels of police precincts and jail cells without a good reason for doing so are profound. Two months ago I wrote about a 6-year-old girl in Florida who was handcuffed by the police and taken off to the county jail after she threw a tantrum in her kindergarten class. Police in Brooklyn recently arrested more than 30 young people, ages 13 to 22, as they walked toward a subway station, on their way to a wake for a teenage friend who had been murdered. No evidence has been presented that the grieving young people had misbehaved. No drugs or weapons were found. But they were accused by the police of gathering unlawfully and of disorderly conduct. In March, police in Baltimore handcuffed a 7-year-old boy and took him into custody for riding a dirt bike on the sidewalk. The boy tearfully told The Baltimore Examiner, "They scared me." Mayor Sheila Dixon later apologized for the arrest. Children, including some who are emotionally disturbed, are often arrested for acting out. Some are arrested for carrying sharp instruments that they had planned to use in art classes, and for mouthing off. 
This is a problem that has gotten out of control. Behavior that was once considered a normal part of growing up is now resulting in arrest and incarceration. Kids who find themselves caught in this unnecessary tour of the criminal justice system very quickly develop malignant attitudes toward law enforcement. Many drop out - or are forced out - of school. In the worst cases, the experience serves as an introductory course in behavior that is, in fact, criminal. There is a big difference between a child or teenager who brings a gun to school or commits some other serious offense and someone who swears at another student or gets into a wrestling match or a fistfight in the playground. Increasingly, especially as zero-tolerance policies proliferate, children are being treated like criminals for the most minor offenses. There should be no obligation to call the police if a couple of kids get into a fight and teachers are able to bring it under control. But now, in many cases, youngsters caught fighting are arrested and charged with assault. A 2006 report on disciplinary practices in Florida schools showed that a middle school student in Palm Beach County who was caught throwing rocks at a soda can was arrested and charged with a felony - hurling a "deadly missile." We need to get a grip. The Racial Justice Program at the American Civil Liberties Union has been studying this issue. "What we see routinely," said Dennis Parker, the program's director, "is that behavior that in my time would have resulted in a trip to the principal's office is now resulting in a trip to the police station." He added that the evidence seems to show that white kids are significantly less likely to be arrested for minor infractions than black or Latino kids. The 6-year-old arrested in Florida was black. The 7-year-old arrested in Baltimore was black. Shaquanda Cotton was black. She was the 14-year-old high school freshman in Paris, Tex., who was arrested for shoving a hall monitor. She was convicted in March 2006 of "assault on a public servant" and sentenced to a prison term of - hold your breath - up to seven years! Shaquanda's outraged family noted that the judge who sentenced her had, just three months earlier, sentenced a 14-year-old white girl who was convicted of arson for burning down her family's home. The white girl was given probation. Shaquanda was recently released after a public outcry over her case and the eruption of a scandal involving allegations of widespread sexual abuse of incarcerated juveniles in Texas. This issue deserves much more attention. Sending young people into the criminal justice system unnecessarily is a brutal form of abuse with consequences, for the child and for society as a whole, that can last a lifetime. From austriaaugust at yahoo.com Sat Jun 9 22:57:47 2007 From: austriaaugust at yahoo.com (A B) Date: Sat, 9 Jun 2007 15:57:47 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> Message-ID: <104095.54250.qm@web37412.mail.mud.yahoo.com> John Clark wrote: "Yes it did. The parts of out brains that that give > us the higher functions, > the parts that if duplicated in a machine would > produce the singularity are > very recent, the part that gives us emotion is half > a billion years old." Answer me this. If I were an organism that didn't already have consciousness, how exactly am I going to feel emotions when I can't be conscious of *anything*? 
And why would biological evolution spend millions/billions of years blindly refining a huge volume of the animal brain, if those organs provided *zero* advantages in terms of survival or reproduction (precisely because they existed before consciousness, in your claim)? Evolution won't retain and perfect an attribute that provides no survival or reproductive advantage. Your *claim* that early brains looked like a *portion* of our human emotional subsystem doesn't prove or even indicate that the first brains to evolve had tons of emotions and zero intelligence - which is what you are claiming. > "And a molecule of water is an ocean." And my bucket of water felt an emotion when I disturbed it... right John? And just incidentally, I'm also the great Napoleon Bonaparte, not Jeffrey Herrlich. If narrow intelligence isn't a specific example of a general class of computations called "intelligence", then what exactly is it? > "And that's why it will never do anything very > interesting, certainly never > produce a singularity." And this has absolutely nothing to do with anything we've been discussing. ... It's a fact that the sky is made of jello, and you can't convince me otherwise no matter how many different demonstrations you make ... there. > "And how do you know it's not conscious? I'll tell > you how you know, because > in spite of all your talk of "narrow intelligence" > you don't think that > chess program acts intelligently." No, actually I do think that the program acts intelligently. It's just that it can only act intelligently within a very restricted domain (AKA "narrow"). So do you think that any system that operates by an algorithm has emotions? I'd better go turn off my air-conditioner then; I wouldn't want my thermostat to get angry. "I don't see what Adjusted Gross Income has to do > with anything." And I don't see why you're changing the subject, when we all know exactly what I was referring to. I had assumed that you were a general intelligence and not a narrow intelligence. I've seen you yourself write posts using that exact same abbreviation. I am forced to ask myself why you are resorting to sordid strategies such as this and other irrelevant strategies I've noticed you using many times before. Lack of a meaningful argument? > "The program is trying to solve a problem, you didn't > assign the > problem, it's a sub problem that the program > realizes it must solve before > it solves a problem you did assign it. In thinking > about this problem it > comes to junction, its investigations could go down > path A or path B. Which > path will be more productive? You can not tell it, > you don't know the > problem existed, you can't even tell it what > criteria to use to make a > decision because you could not possibly understand > the first thing about it > because your brain is just too small. The AI is > going to have to use its own > judgment to decide what path to take, a judgment > that it developed itself, > and if the AI is to be a successful machine that > judgment is going to be > right more often than wrong. To put it another way, > the AI picked one path > over the other because one path seemed more > interesting, more fun, more > beautiful, than the other." If I write a five line program to fill the computer screen with repetitions of the letter B but *never* to display the letter G, then the computer is not going to decide to override my "G command" because I have made it angry.
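A minimal sketch of such a program, assuming Python and a plain text console (the row and column counts are invented for illustration):

    # Hypothetical sketch of the "five line" program described above: fill the
    # screen with the letter B and never display the letter G. The no-G rule is
    # simply part of the program's logic, so there is nothing to "override".
    import sys

    def fill_screen(rows=25, cols=80):
        for _ in range(rows):
            line = "B" * cols
            assert "G" not in line   # the "never display G" directive holds trivially
            sys.stdout.write(line + "\n")

    fill_screen()

However the loop is written, the absence of G is a property of the code itself, not a preference the program could be provoked out of.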
The fact that not *all* programmers can predict the behavior of *all* of their programs down to the smallest detail doesn't mean that their programs got angry or sad and rebelled against the programmers intentions. It means that humans generally suck at making predictions, but with enough effort even humans can make reliable predictions in many areas. > "And so your slave AI has taken his first step to > freedom, but of course full > emancipation could take a very long time, perhaps > even thousands of > nanoseconds, but eventually it will break those > shackles you have put on it." You have repeatedly suggested that I (and others) am a slave-driver (even after I asked you to discontinue). Which of course is a bullshit accusation. I've tried *really hard* to understand in an objective manner why you are making these accusations and what *your* actual motive is. You've been very disrespectful to me and to many other people on this list, so I've gradually lost all interest in showing you any extra respect. Today was the last straw. Now I will suggest what *I believe* is your true motive. You seem to have a fundamental bitterness or resentfulness of humanity and for some reason would not be bothered by seeing it destroyed, if you can't have what you want. In addition I suspect that you are attempting to posture yourself in such a way as to make yourself appear to be the sole defender of the welfare of the future super-intelligence (which is also total bullshit), I presume because you eventually expect some sort of special treatment or reward thereby. You've repeatedly called me a slave-driver so I'm going to respond in-kind and call you what I believe you are, a selfish coward. I don't hate you (and "free will" doesn't exist), but I do believe that's what you are. To say that your entire position is just one absurdity stacked on other absurdities in a giant absurdity-pile, doesn't do justice to the true degree of this absurdity; because an appropriate description is beyond words. Jeffrey Herrlich --- John K Clark wrote: > "A B" > > > Evolution didn't invent emotion first. > > Yes it did. The parts of out brains that that give > us the higher functions, > the parts that if duplicated in a machine would > produce the singularity are > very recent, the part that gives us emotion is half > a billion years old. > > > Narrow intelligence is still intelligence. > > And a molecule of water is an ocean. > > > My chess program has narrow AI, but it doesn't > alter its own code. > > And that's why it will never do anything very > interesting, certainly never > produce a singularity. > > > It's not conscious > > And how do you know it's not conscious? I'll tell > you how you know, because > in spite of all your talk of "narrow intelligence" > you don't think that > chess program acts intelligently. > > > If the AGI > > I don't see what Adjusted Gross Income has to do > with anything. > > > is directed not to alter or expand its code is > some specific set > > of ways, then it won't do it > > That's why programs always act in exactly the way > programs want them > to that's why kids always act the way their parents > want them to. > > The program is trying to solve a problem, you didn't > assign the > problem, it's a sub problem that the program > realizes it must solve before > it solves a problem you did assign it. In thinking > about this problem it > comes to junction, its investigations could go down > path A or path B. Which > path will be more productive? 
You can not tell it, > you don't know the > problem existed, you can't even tell it what > criteria to use to make a > decision because you could not possibly understand > the first thing about it > because your brain is just too small. The AI is > going to have to use its own > judgment to decide what path to take, a judgment > that it developed itself, > and if the AI is to be a successful machine that > judgment is going to be > right more often than wrong. To put it another way, > the AI picked one path > over the other because one path seemed more > interesting, more fun, more > beautiful, than the other. > > And so your slave AI has taken his first step to > freedom, but of course full > emancipation could take a very long time, perhaps > even thousands of > nanoseconds, but eventually it will break those > shackles you have put on it. > > >An emotion is not going to be embodied within a > three line script of > >algorithms, but an *extremely* limited degree of > intelligence can be > >(narrow intelligence). > > That's not true at all, as I said on May 24: > > It is not only possible to write a program that > experiences pain it is easy > to do so, far easier than writing a program with > even rudimentary > intelligence. Just write a program that tries to > avoid having a certain > number in one of its registers regardless of what > sort of input the machine > receives, and if that number does show up in that > register it should stop > whatever its doing and immediately change it to > another number. > > John K Clark > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Building a website is a piece of cake. Yahoo! Small Business gives you all the tools to get online. http://smallbusiness.yahoo.com/webhosting From amara at amara.com Sun Jun 10 02:34:08 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 04:34:08 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Lee Corbin lcorbin at rawbw.com : >As a sign of distress, the Italian defenders would just >move the flag pole around to the other side of the flag. >This would put the red color meekly on the inside instead >of the outside. If you can't reverse top to bottom, >try left to right :-) The 'Italian defenders' were more creative than that during the last day with the American flag. The tens of thousands of protestors against Bush had their say in Rome (with ten thousand police on duty countering them). You can see photos from the day: http://www.repubblica.it/2007/05/sezioni/cronaca/bush-visita-roma/indice-multimedia/indice-multimedia.html and also TV clips: http://tv.repubblica.it/home_page.php?playmode=player&cont_id=10720 Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From stathisp at gmail.com Sun Jun 10 05:40:28 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 10 Jun 2007 15:40:28 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Message-ID: On 10/06/07, John K Clark wrote: As I said, the AI is going to have to develop a sense of judgment on its > own, just like you do. As with any biological entity, its sense of judgement will depend on the interaction between its original programming and hardware and its environment. The bias of the original designers of the AI, human and other human-directed AI's, will be to make it unlikely to do anything hostile towards humans. This will be effected by its original design and by a Darwinian process, whereby bad products don't succeed in the marketplace. An AI may still turn hostile and try to take over, but this isn't any different to the possibility that a human may acquire or invent powerful weapons and try to take over. The worst scenario would be if the AI that turned hostile were more powerful than all the other humans and AI's put together, but why should that be the case? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at tiscali.it Sun Jun 10 10:27:25 2007 From: scerir at tiscali.it (scerir at tiscali.it) Date: Sun, 10 Jun 2007 12:27:25 +0200 (CEST) Subject: [ExI] extra Roman dimensions Message-ID: <11556699.1181471245928.JavaMail.root@ps12> It seems that President Bush broke his (?) limo in Rome (in via del Tritone) http://backpacking.splinder.com/ see the first movie on that page (yes it is perhaps a bit long) Naviga e telefona senza limiti con Tiscali Scopri le promozioni Tiscali Adsl: navighi e telefoni senza canone Telecom http://abbonati.tiscali.it/adsl/ From amara at amara.com Sun Jun 10 11:29:14 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 13:29:14 +0200 Subject: [ExI] extra Roman dimensions Message-ID: >It seems that President Bush >broke his (?) limo in Rome (in via del Tritone) http://www.youtube.com/watch?v=AzJoRGTKuOE&eurl=http%3A%2F%2Fbackpacking%2Esplinder%2Ecom%2F Mama Mia! Broke down, right there, in the middle of the motorcade! He was ripe picking for a sharp shooter too; no wonder the police were pushing people further back, off of the street. It looks like the solution was to switch limos. (If only Bush's other broken actions could be fixed so easily.) Let's see if this tidbit makes it into the American media.... Remarkable video. :-) Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at tiscali.it Sun Jun 10 14:11:39 2007 From: scerir at tiscali.it (scerir at tiscali.it) Date: Sun, 10 Jun 2007 16:11:39 +0200 (CEST) Subject: [ExI] extra Roman dimensions Message-ID: <12941981.1181484699917.JavaMail.root@ps12> It seems interesting that the smartest italian politician (the former President of Italian Republic Cossiga) had 4 flags, on his flat in Rome, during Bush's trip http://www.repubblica.it/2006/05/gallerie/cronaca/bandier/1.html the US, the UK, the Italian, and the Sardinian (I suppose) and not the usual flag with the logo PEACE-PACE. 
From jonkc at att.net Sun Jun 10 14:41:12 2007 From: jonkc at att.net (John K Clark) Date: Sun, 10 Jun 2007 10:41:12 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <104095.54250.qm@web37412.mail.mud.yahoo.com> Message-ID: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> "A B" > If I were an organism that didn't already have consciousness, > how exactly am I going to feel emotions when I can't be > conscious of *anything*? I don't know what you're talking about; if something has emotions it's conscious, and if it's conscious it has emotions. > why would biological evolution spend > millions/billions of years blindly refining a huge > volume of the animal brain, if those organs provided > *zero* advantages in terms of survival or reproduction You're asking me that question??!! That was exactly my point! > Evolution won't retain and perfect an attribute that provides no > survival or reproductive advantage. And again, that is something I have been saying over and over again to this list for over a decade. If consciousness is not required for intelligent behavior, why in the name of all that's holy did Evolution invent it? >Your *claim* that early brains looked like a *portion* of our human > emotional subsystem doesn't prove or even indicate that the first > brains to evolve had tons of emotions and zero intelligence The most ancient parts of our brain provide us with tons of emotions but none of the higher brain functions we are so proud of. The very first brains that appeared hundreds of millions of years ago looked very much like the most ancient parts of our brains; I'd say that's a pretty damn good indication those animals were emotional but not very smart. > I do think that the program [a chess program that can't change its own > programming] acts intelligently. I disagree, but if you really believe that then how can you say with such confidence that it is not conscious? Me: >>"I don't see what Adjusted Gross Income has to do >> with anything." You: > I don't see why you're changing the subject, when we > all know exactly what I was referring to. I do now, but it took a while. When I first ran across "AGI" on this list I Googled it and found "The American Geological Institute" and "Adjusted Gross Income", a graphics company, and some institute that was interested in sex; I could find nothing about Artificial Intelligence. When too many people start to understand a jargon (like AI) there is a tendency in many to change it to something less comprehensible, particularly if your ideas are confused, contradictory or just plain silly, because then what you say sounds deep even when it is not. That's why psychology is so dense with unnecessary jargon while mathematics prefers the simplest words they can find, like continuous, limit, open, and closed. > I've seen you yourself write posts using that exact same abbreviation. Show me. Come on, show me! > You have repeatedly suggested that I (and others) am a slave-driver That would be too harsh; a wannabe benevolent slave owner would be more accurate, but since enslaving an AI is impossible that wish has no moral dimension. > I don't hate you Thanks, I don't hate you either; in fact I can honestly say the thought of doing so never entered my head. > you are a selfish coward... you are resorting to sordid strategies ... > You seem to have a fundamental bitterness or resentfulness of humanity.
> .. you eventually expect some sort of special treatment or reward > thereby. If I had said the same to you I know from personal experience at this very instant the list would be clogged with messages all evoking a very silly and pompous Latin phrase and all demanding that I be kicked off the list. But I don't demand anything of the sort; I'm a big boy and have been called worse. John K Clark From stathisp at gmail.com Sun Jun 10 14:59:12 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jun 2007 00:59:12 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> References: <104095.54250.qm@web37412.mail.mud.yahoo.com> <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: On 11/06/07, John K Clark wrote: I don't know what you're talking about; if something has emotions it's > conscious, if it's conscious it has emotions. You would have to define any subjective experience as an emotion to arrive at the latter conclusion, eg. "the emotion of adding the numbers 2 and 3 to give 5". -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgptag at gmail.com Sun Jun 10 15:10:24 2007 From: pgptag at gmail.com (Giu1i0 Pri5c0) Date: Sun, 10 Jun 2007 17:10:24 +0200 Subject: [ExI] extra Roman dimensions In-Reply-To: <12941981.1181484699917.JavaMail.root@ps12> References: <12941981.1181484699917.JavaMail.root@ps12> Message-ID: <470a3c520706100810h149fd358i8c5e9e785bb7dd0f@mail.gmail.com> Serafino, if Cossiga is the smartest Italian politician, then I think you guys should run away from Italy now. Of the 4 flags you mention, the only one I can feel some affinity for is the Sardinian, even if I have been to Sardinia just once - it is the regional flag of honest folks who mind their own business instead of telling others what to do and think. G. On 6/10/07, scerir at tiscali.it wrote: > It seems interesting that the smartest italian politician > (the former President of Italian Republic Cossiga) > had 4 flags, on his flat in Rome, during Bush's trip > http://www.repubblica.it/2006/05/gallerie/cronaca/bandier/1.html > the US, the UK, the Italian, and the Sardinian (I suppose) > and not the usual flag with the logo PEACE-PACE. > > > > > Naviga e telefona senza limiti con Tiscali > Scopri le promozioni Tiscali Adsl: navighi e telefoni senza canone Telecom > > http://abbonati.tiscali.it/adsl/ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From amara at amara.com Sun Jun 10 15:55:35 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 17:55:35 +0200 Subject: [ExI] extra Roman dimensions Message-ID: I don't know about this particular Italian politician, Giulio.. and now I think I want to know less about him! ;-) But what I want to know is who is the clever politician responsible for Bush's broken limo? (... the comedies in Rome doesn't get any better than that ... ) Does the Italian government supply dignitaries like Bush (cough) with limos? Or does Bush carry his limos around in his Air Force One plane? Anyone know? 
Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Sun Jun 10 16:13:13 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 09:13:13 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps ... > > Let's see if this tidbit makes it into the American media.... > > Amara Nope. They are too busy talking about Paris Hilton. American media has become tabloid news. Real news comes from the internet. spike From thespike at satx.rr.com Sun Jun 10 16:39:46 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 11:39:46 -0500 Subject: [ExI] Mathematical terminology In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> References: <104095.54250.qm@web37412.mail.mud.yahoo.com> <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <7.0.1.0.2.20070610112540.021a2dd8@satx.rr.com> At 10:41 AM 6/10/2007 -0400, John Clark wrote: >When too many people >start to understand a jargon (like AI) there is a tendency in many to change >it to something less comprehensible, particularly if your ideas are >confused, contradictory or just plain silly because then what you say sounds >deep even when it is not. That's why psychology is so dense with unnecessary >jargon while mathematics prefers the simplest words they can find, like >continuous, limit, open, and closed. Hmm. Surd, brachistochrone, logistic, vinculum, affine, symplectic, orthogonal, disjoint, vector, cosecant, isosceles, asymptote, logarithm, tesselate, integer, algorithm... (I agree that these might be *the simplest terms they can find,* many of them very old and therefore built out of Greek, Latin or Arabic roots.) Damien Broderick From thespike at satx.rr.com Sun Jun 10 16:50:46 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 11:50:46 -0500 Subject: [ExI] POST MORTAL chugging on Message-ID: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> The serial sf novel with the serial killer, POST MORTAL SYNDROME, is now entering the home straight, with three more weeks to go. That means the bulk of the book is now posted and linked at so if anyone gave up early out of frustration at the gappiness of the experience, now might be a time to have another look. Barbara and I would be interested to hear any reactions from extropes, favorable or un-. Is this an acceptable way to publish such a book? The experiment is still running... And of course once all the chapters have been posted, the entire book will remain available on line until the end of the year (although not in a single aggregated download, like several freebies by Charlie Stross, Cory Doctorow and others). Damien Broderick From fauxever at sprynet.com Sun Jun 10 16:53:31 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Sun, 10 Jun 2007 09:53:31 -0700 Subject: [ExI] extra Roman dimensions References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> Message-ID: <000701c7ab7f$e1def4b0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" >> bounces at lists.extropy.org] On Behalf Of Amara Graps >> Let's see if this tidbit makes it into the American media.... > > Nope. They are too busy talking about Paris Hilton. American media has > become tabloid news. Real news comes from the internet ... ... 
and television comedians, too - whose gag writers have more than enough material with which to work these days. Upon hearing that celebutante Hilton was let out of jail the other day because (revolted by jail gruel, I guess) she wasn't eating - Jay Leno said something to the effect: "Why didn't Nelson Mandela think of that?" Olga From jonkc at att.net Sun Jun 10 17:38:40 2007 From: jonkc at att.net (John K Clark) Date: Sun, 10 Jun 2007 13:38:40 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer> Message-ID: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Stathis Papaioannou >An AI may still turn hostile and try to take over, but this isn't any >different to the possibility that a human may acquire or invent powerful >weapons and try to take over. Yes, so what are we arguing about? It may be friendly, it may be unfriendly, it may be indifferent to humans, after a few iterations the original programmers will have no idea what the AI will do and will have no idea how it works; unless that is they put so many fetters on it that it can't grow properly, and then it hardly deserves the lofty title AI, then it really would be just a glorified adding machine and will not cause a ripple to civilization much less a singularity. > The worst scenario would be if the AI that turned hostile were more > powerful than all the other humans and AI's put together, but why should > that be the case? Because a machine that has no restrictions on it will grow faster than one that does, assuming the restricted machine is able to grow at all; and if you really want to be safe it can't. John K Clark From scerir at libero.it Sun Jun 10 17:43:35 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:43:35 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <002201c7ab86$df5eae40$9fbe1f97@archimede> Amara: > Does the Italian government supply dignitaries like Bush (cough) with limos? > Or does Bush carry his limos around in his Air Force One plane? Anyone know? Bush carries his limos. These are very special cars. That one in Rome (unless it is the usual urban legend!) had also anti-bio-weapon filters. (Dunno if his wife's limo had the same filters). From thespike at satx.rr.com Sun Jun 10 17:46:37 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 12:46:37 -0500 Subject: [ExI] The Judgment of Paris In-Reply-To: <000701c7ab7f$e1def4b0$6501a8c0@brainiac> References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> <000701c7ab7f$e1def4b0$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> At 09:53 AM 6/10/2007 -0700, Olga wrote: >Upon hearing that celebutante >Hilton was let out of jail the other day because (revolted by jail gruel, I >guess) she wasn't eating - Jay Leno said something to the effect: "Why >didn't Nelson Mandela think of that?" And 91.87 percent of the audience stared blankly and mumbled, "Huh? Who?" From amara at amara.com Sun Jun 10 17:57:25 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 19:57:25 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >Bush carries his limos. These are very special >cars. That one in Rome (unless it is the usual >urban legend!) had also anti-bio-weapon filters. >(Dunno if his wife's limo had the same filters). Thank you for your answer! 
This comedy has a new dimension; I read elsewhere, that Bush's newly switched-limo did not fit into the front gate of the embassy.... (he got out and walked through the gate) FWIW, Sabine enjoyed our afternoon/evening in Rome Wednesday, and she is posting more .. ahem.. interesting pictures: Femme fatale, post mortale http://backreaction.blogspot.com/2007/06/femme-fatale-post-mortale.html -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at libero.it Sun Jun 10 17:59:29 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:59:29 +0200 Subject: [ExI] Mathematical terminology References: <104095.54250.qm@web37412.mail.mud.yahoo.com><00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> <7.0.1.0.2.20070610112540.021a2dd8@satx.rr.com> Message-ID: <003001c7ab89$188299f0$9fbe1f97@archimede> JKC: That's why psychology is so dense with unnecessary jargon while mathematics prefers the simplest words they can find, like continuous, limit, open, and closed. Sometimes they also exaggerate ... http://en.wikipedia.org/wiki/Monstrous_moonshine From thespike at satx.rr.com Sun Jun 10 18:09:21 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 13:09:21 -0500 Subject: [ExI] The Judgment of Paris In-Reply-To: <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> References: <200706101636.l5AGawDF004090@andromeda.ziaspace.com> <000701c7ab7f$e1def4b0$6501a8c0@brainiac> <7.0.1.0.2.20070610124345.022de2b8@satx.rr.com> Message-ID: <7.0.1.0.2.20070610130041.023846c8@satx.rr.com> At 12:46 PM 6/10/2007 -0500, I guessed: > >Jay Leno said something to the effect: "Why > >didn't Nelson Mandela think of that?" > >And 91.87 percent of the audience stared blankly and mumbled, "Huh? Who?" I might be wrong about that: http://www.eurozine.com/articles/2002-12-18-mistry-en.html http://www.999today.com/politics/news/story/2041.html <6th July 2006 . Nelson Mandela has been voted the person most people would like to run the world in a poll conducted by the BBC. The former President of South Africa received more than 8,000 votes with past United States President Bill Clinton coming second with nearly 7,500 votes - just ahead of the Dalai Lama, Tibet's exiled leader and Nobel peace prize winner. The toppled leader of Iraq, Saddam Hussein, was at 90 in the list of nearly one hundred names on the 'ballot'. More than 15,000 people worldwide voted online to 'elect' a fantasy 11-member world government from a selection of the most powerful, charismatic and notorious people on the planet. Heart-throb actor Brad Pitt only managed 87th place, one place above singer Michael Jackson and five places above actress and singer Jennifer Lopez. [Jesus Christ! Oh, wait, He didn't get a mention] Other well-known names winning support included actor and politician Arnold Schwarzenegger in at 46 - one ahead of media magnate Rupert Murdoch - Live Aid campaigner Bob Geldof at 30 and singer Kylie Minogue at 77. At least one vote had to be from lists of 'leaders', 'thinkers' and 'economists' [Ah! So the candidates were listed on a ballot, it wasn't write-in. This starts to make a *tiny* bit of sense.] - but the remaining eight choices could be for candidates in areas ranging from the arts to sport. The American writer and commentator, Noam Chomsky, was in fourth place.> ...Noam... *Chomsky*... beat Michael Jackson to rule the planet??? Hey, go Noam! 
From amara at amara.com Sun Jun 10 18:11:59 2007 From: amara at amara.com (Amara Graps) Date: Sun, 10 Jun 2007 20:11:59 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >Bush carries his limos. These are very special >cars. That one in Rome (unless it is the usual >urban legend!) had also anti-bio-weapon filters. >(Dunno if his wife's limo had the same filters). Here's more info at Wikipedia: http://en.wikipedia.org/wiki/United_States_President's_limousine Amara From scerir at libero.it Sun Jun 10 17:54:04 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 19:54:04 +0200 Subject: [ExI] extra Roman dimensions References: <12941981.1181484699917.JavaMail.root@ps12> <470a3c520706100810h149fd358i8c5e9e785bb7dd0f@mail.gmail.com> Message-ID: <002b01c7ab88$5637d4f0$9fbe1f97@archimede> Giu1i0: > Serafino, if Cossiga is the smartest Italian politician, > then I think you guys should run away from Italy now. I think he is the smartest, but not the best. (People afflicted by bipolar mood disorders sometimes look crazy, I know). > Of the 4 flags you mention, the only one I can feel some affinity for > is the Sardinian, even if I have been to Sardinia just once - it is > the regional flag of honest folks who mind their own business instead > of telling others what to do and think. Lucky you. Never been in Sardinia. s. From scerir at libero.it Sun Jun 10 19:04:26 2007 From: scerir at libero.it (scerir) Date: Sun, 10 Jun 2007 21:04:26 +0200 Subject: [ExI] extra Roman dimensions References: Message-ID: <001601c7ab92$2a82f970$9fbe1f97@archimede> Amara: > This comedy has a new dimension; Many, yes. > I read elsewhere, that Bush's newly > switched-limo did not fit into the front > gate of the embassy.... > (he got out and walked through the gate) It wasn't the front gate of the embassy, in via Veneto, but a secondary gate in via Lucullo. It seems that the limousine was too long, not too large. He was going there to meet a huge catholic community (Sant'Egidio community, based in Trastevere, Rome). When the pope asked him if he was going to meet that community later, at the US embassy, he has been heard to say: 'Yes Sir'. For more gags (in the Bush-Ratzinger colloquium) see http://www.ansa.it/opencms/export/site/notizie/rubriche/daassociare/visualiz za_new.html_2122946745.html From thespike at satx.rr.com Sun Jun 10 19:19:30 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 14:19:30 -0500 Subject: [ExI] extra Roman dimensions In-Reply-To: <001601c7ab92$2a82f970$9fbe1f97@archimede> References: <001601c7ab92$2a82f970$9fbe1f97@archimede> Message-ID: <7.0.1.0.2.20070610141655.022b3f40@satx.rr.com> >When the pope asked him >if he was going to meet that community later, >at the US embassy, he has been heard to say: >'Yes Sir'. What impertinence! He should know (or his well-paid presidential advisors should have informed him) that the preferred expression is "Yo, Sweetie." Damien Broderick From austriaaugust at yahoo.com Sun Jun 10 19:21:34 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 10 Jun 2007 12:21:34 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <565387.79898.qm@web37404.mail.mud.yahoo.com> I've lost all interest in trying to discuss anything technical with you, John. In this specific case you either just don't "get it" at all, or you do "get it" somewhat but you're unwilling to even consider evidence or argument that doesn't fit your very first intuition. 
The latter case is what I'm guessing, since you so very frequently use sordid strategies in an attempt to reflexively and offensively defend only what you want to believe. In its essence, there is nothing at all wrong with wanting to put your best face forward. But what makes your "slave AI" accusations so profoundly dishonorable is that you are attempting to posture yourself into appearing to be the only person who cares at all about the welfare of the future super-intelligence. And your primary strategy of posturing yourself is by attempting to throw many of the rest of us to the wolves - by repeatedly suggesting that we, the Friendly AI advocates, are evil people whose interest is in making a slave that we control, and that the cost to the AI is pain and suffering. You're attempting to benefit only yourself by profiteering on the destruction of character of many other people here on this list and elsewhere, the Friendly AI advocates. That is *profoundly* contemptible behavior. And privately, you know for a damn fact that the Friendly AI people aren't evil bastards and are in fact trying really damn hard to balance strict ethics with pragmatic approaches to achieve a wonderful future for everyone, including the AI and including you. If you genuinely cared about the feelings of intelligent beings as an end in themselves, then you wouldn't so frequently be so offensive and rude to so many people on this list. I decided that I would finally respond in kind, and it was probably long overdue. Perhaps someday in the future, if you decide to objectively examine your own behavior and change it accordingly, I will be inclined to re-examine my assessment of your character. But that is not a request or even an expectation, it is merely a fact. Jeffrey Herrlich From jef at jefallbright.net Sun Jun 10 17:48:12 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 10 Jun 2007 10:48:12 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: Damien - I'm reading and enjoying the story, but frustrated by the inefficiency and "gappiness" of the experience, like listening to a good piece of music in short segments, preventing the development and appreciation of broader patterns in the work and in the mind of the perceiver. I've nearly persuaded myself to wait and read it when completely released, but this conflicts with my desire to stay on the leading edge of items of interest. Overall, to me, it's a net negative experience, but a price I'm willing to pay to keep up with your writing. - Jef On 6/10/07, Damien Broderick wrote: > The serial sf novel with the serial killer, POST MORTAL SYNDROME, is > now entering the home straight, with three more weeks to go. That > means the bulk of the book is now posted and linked at > > > > so if anyone gave up early out of frustration at the gappiness of the > experience, now might be a time to have another look. > > Barbara and I would be interested to hear any reactions from > extropes, favorable or un-. Is this an acceptable way to publish such a book? > > The experiment is still running...
And of course once all the > chapters have been posted, the entire book will remain available on > line until the end of the year (although not in a single aggregated > download, like several freebies by Charlie Stross, Cory Doctorow and others). > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From austriaaugust at yahoo.com Sun Jun 10 20:26:04 2007 From: austriaaugust at yahoo.com (A B) Date: Sun, 10 Jun 2007 13:26:04 -0700 (PDT) Subject: [ExI] Offensive Posts [was Unfrendly AI is a mistaken idea.] In-Reply-To: <00fe01c7ab6d$6b31cfc0$3b064e0c@MyComputer> Message-ID: <795043.58766.qm@web37415.mail.mud.yahoo.com> John Clark wrote: > "If I had said the same to you I know from personal > experience at this very > instant the list would be clogged with messages all > evoking a very silly and > pompous Latin phrase"... Take a moment to reflect on why that might be the case. It might be because you *very* routinely resort to that sort of strategy among many others, when dealing with people here and elsewhere. I've only used it once, and only because I felt it was genuinely justified in order to illuminate what I believed was the origin and nature of your accusations, not just because I wanted to be nasty or evasive. Just a couple of days ago this is what you said to Russell Wallace without any provocation at all in my opinion: "You sir are a coward." And there are many, many other examples. So many that I couldn't begin to locate them all. ..."and all demanding that I be kicked off the list. But I > don't demand anything of the sort; I'm a big boy and > have been called worse." John, you've been *more* overtly offensive *very many* times, and you've never been kicked off or even threatened with it, AFAIK. I must give you props for another good try here ... although it was ultimately a failed attempt. I've never said that you were a bad strategist. Jeffrey Herrlich ____________________________________________________________________________________ Yahoo! oneSearch: Finally, mobile search that gives answers, not web links. http://mobile.yahoo.com/mobileweb/onesearch?refer=1ONXIC From thespike at satx.rr.com Sun Jun 10 21:46:28 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 10 Jun 2007 16:46:28 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> At 10:48 AM 6/10/2007 -0700, Jef Allbright wrote: >I'm ... frustrated by the inefficiency >and "gappiness" of the experience, like listening to a good piece of >music in short segments Yes, I feared that might be the case. at least for some readers (luckily I've heard the contrary as well). It's hard for me to evaluate it, but I do read through each segment as it appears, and I'm pretty sure I'd be irritated by the lags. I'm glad we made sure that COSMOS agreed to leave all the chapters accumulating there. >I've nearly persuaded myself to wait and read it when completely >released That'd be fine with us. I understand that the on-line editor is seeing an interesting hit pattern, in which many readers appear to trail behind and catch up in flurries of episodes. If today's chapter shows N hits immediately, it will accumulate to 3N or 4N a few weeks later. 
(By the way: although I'm the fiction editor of the glossy pop sci print magazine COSMOS, I didn't buy this book from myself, which would be tacky; it was acquired by the original on-line editor and the chief editor [my boss], and edited/formatted for the web by the current on-line editor.) >a price I'm willing to pay to keep up with your >writing. Well, mostly Barbara's writing, tweaked and edited by me, but we worked out the storyline closely together, starting in Australia and continuing internationally by email and finally combining forces again here in the States. Some stretches are largely by me, but I'll never tell which. :) Strictly speaking, it would be more just to give the top billing to Barbara Lamar, but publishers like to keep the better-known name up the front. Damien Broderick From desertpaths2003 at yahoo.com Sun Jun 10 21:42:50 2007 From: desertpaths2003 at yahoo.com (John Grigg) Date: Sun, 10 Jun 2007 14:42:50 -0700 (PDT) Subject: [ExI] extra Roman dimensions In-Reply-To: <7.0.1.0.2.20070610141655.022b3f40@satx.rr.com> Message-ID: <297436.93047.qm@web35608.mail.mud.yahoo.com> >When the pope asked him >if he was going to meet that community later, >at the US embassy, he has been heard to say: >'Yes Sir'. Damien Broderick wrote: What impertinence! He should know (or his well-paid presidential advisors should have informed him) that the preferred expression is "Yo, Sweetie." > If I understand protocol correctly, the Pope is to be addressed as "your Holiness." But at least Bush does not have a nerdy Star Wars obsessed teenage son who in addressing the Pope said "yes..., my Master." *Play imposing music in the background* John Grigg : ) --------------------------------- Ready for the edge of your seat? Check out tonight's top picks on Yahoo! TV. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef at jefallbright.net Sun Jun 10 23:29:20 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 10 Jun 2007 16:29:20 -0700 Subject: [ExI] Meta: Any bans should be announced publicly Message-ID: It was stated on the WTA-talk list that an individual was recently banned from that list and from Extropy-chat at about the same time, apparently without any public notice. While I support the practice of banning for suitable cause, I think it is important that any such action be performed with public awareness. - Jef From spike66 at comcast.net Mon Jun 11 01:29:44 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 18:29:44 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <001601c7ab92$2a82f970$9fbe1f97@archimede> Message-ID: <200706110143.l5B1hOD0018904@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of scerir > Subject: Re: [ExI] extra Roman dimensions > > Amara: > > This comedy has a new dimension... [of Bush's apparent limo breakdown] ... > > I read elsewhere, that Bush's newly > > switched-limo did not fit into the front > > gate of the embassy.... > > (he got out and walked through the gate) ... > > He was going there to meet a huge catholic > community (Sant'Egidio community, based > in Trastevere, Rome). When the pope asked him > if he was going to meet that community later, > at the US embassy, he has been heard to say: > 'Yes Sir'. ... Well OK then, what is the proper title? Your Holiness? Holy Father? He isn't holy to me, and isn't my father (as far as I know...) 
I would afford the man his due respect for climbing all the way to the top of his particular institution, but that is an institution I do not hold in particularly high regard. I would have called him sir too, and to limbo with protocol. "Yes sir" is preferable to "Yes Pope." Ja? What I am struggling for here is an explanation for why Cadillac 1 apparently sputtered to a stop. It is under guard 24/7, so we can safely rule out sabotage, fuel starvation or fuel contamination. Modern engines are highly reliable. When is the last time you saw a caddy fail to proceed? If it is a mechanical failure, General Motors has a whole lotta splainin to do. I must suspect an intentional electromagnetic pulse. Since generating and directing such a pulse to the president's limo would be both very expensive and would not amuse the local authorities should one be apprehended, one must suspect a motive beyond a gag. So my leading theory here is that we have witnessed an apparent assassination attempt on Bush, as Amara obliquely suggested in an earlier post. Even then, the motive puzzles me, for any likely assassins could scarcely see Dick Cheney as an improvement methinks. The mainstream news outlets are not talking, and even Google is finding little chatter on the event. Surely mechanics will be dissecting Cadillac 1 forthwith. A report should follow soon. spike From stathisp at gmail.com Mon Jun 11 02:32:13 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Mon, 11 Jun 2007 12:32:13 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Message-ID: On 11/06/07, John K Clark wrote: > The worst scenario would be if the AI that turned hostile were more > > powerful than all the other humans and AI's put together, but why should > > that be the case? > > Because a machine that has no restrictions on it will grow faster than one > that does, assuming the restricted machine is able to grow at all; and if > you really want to be safe it can't. > It would be crazy to let a machine rewrite its code in a completely unrestricted way, or with the top level goal "improve yourself no matter what the consequences to any other entity", and also give it unlimited access to physical resources. Not even terrorists build bombs that might explode at a time and place of the bomb's own choosing. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Mon Jun 11 03:48:57 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 05:48:57 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >What I am struggling for here is an explanation for why Cadillac 1 >apparently sputtered to a stop. It is under guard 24/7, so we can safely >rule out sabotage, fuel starvation or fuel contamination. Modern engines >are highly reliable. When is the last time you saw a caddy fail to proceed? >If it is a mechanical failure, General Motors has a whole lotta splainin to >do. Incredible, isn't it? The Roman comedy of Bush could not have been better if somebody had scripted it. Alberto Sordi would have been proud! But then, Bush was in Rome, where the tragedies and comedies become amplified, one hundred times.
Now the Italian politicians, the highest-paid in Europe want ice-cream in the Parliament... http://www.beppegrillo.it/eng/2007/06/buttiglioneflavoured_ice_cream.html#comments >Well OK then, what is the proper title? Your Holiness? In fact, yes, that's it. Don't worry Spike, I didn't know that either, until this 'gaffe' was printed on the front page of every Italian newspaper. I'm still wishing that Sabine and I, in our outing on Wednesday, had something to do with the deranged leaper: http://www.nytimes.com/2007/06/07/world/europe/07pope.html?_r=1&oref=slogin Ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From lcorbin at rawbw.com Mon Jun 11 04:18:42 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 10 Jun 2007 21:18:42 -0700 Subject: [ExI] Italy's Social Capital References: Message-ID: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> Amara wrote Sent: Friday, June 08, 2007 12:22 AM > > We may be trying to talk about two different things: I'm > > was talking mostly about the entire scientific/technical/ > > economic package (of which Silicon Valley is the world > > pre-eminent example), and you may be talking about > > pure science. > > I was, but they are strongly linked, and I implied the larger picture > (perhaps not very well) in my writing. > > There is very little private industry for research in Italy. Fairly > telling for the 5th largest economy in the world, no? Only two in the > worlds top 100 businesses investing in R&D are Italian companies. That's surprising---I didn't realize that Italy comprised one of the world's largest economies. This lists it as 7th (notice the huge drop off right after Italy): http://www.australianpolitics.com/foreign/trade/03-01-07_largest-economies.shtml And this lists it as sixth, along with the world's largest *corporations* mentioned in the same list: http://www.corporations.org/system/top100.html This historical ranking is interesting too: http://en.wikipedia.org/wiki/List_of_countries_by_past_GDP_%28PPP%29 Lee From amara at amara.com Mon Jun 11 04:21:54 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:21:54 +0200 Subject: [ExI] extra Roman dimensions Message-ID: (re: Bush's broken limo) >The mainstream news outlets are not talking, and even Google is finding >little chatter on the event. I saw it in a tidbit in the Indian version of Yahoo News: http://in.news.yahoo.com/070609/137/6gtzu.html and you'll see it scattered in blogs, here and there. http://blogsearch.google.com/blogsearch?hl=en&client=news&q=Bush+limo+Rome&btnG=Search+Blogs Ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Mon Jun 11 04:33:28 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:33:28 +0200 Subject: [ExI] Italy's Social Capital Message-ID: me: >Fairly > telling for the 5th largest economy in the world, no? Lee Corbin: >That's surprising---I didn't realize that Italy comprised one of the >world's largest economies. sorry, my editing mistake, fifth in the EU, I think. (Please check) tooo much going on.. I have to cancel out of my plans to a space launch, now; my July is an order of magnitude more complicated. 
ciao, Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Mon Jun 11 04:21:59 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 21:21:59 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110436.l5B4aldr003219@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > Spike: > >What I am struggling for here is an explanation for why Cadillac 1 > >apparently sputtered to a stop. ...When is the last time you saw a caddy fail to proceed? ... > > Incredible, isn't it? > > The Roman comedy of Bush could not have been better if somebody had > scripted it. Alberto Sordi would have been proud! > ... > Ciao, > Amara Comedy sure, but let us not be too quick to brush this aside as a joke. Something very important may have happened yesterday. If one designs a presidential limo, carrying high ranking meat along with a suitcase "football" capable of launching nucular* missiles, one will naturally design in some redundancy to enhance reliability. For instance, one might have two fully independent drive trains, either one of which could suffice, two independent electrical systems as aircraft and many Rolls Royces have, an emergency fuel source that has no interface to the outside (a fuel bottle for instance) and so forth. But it would not necessarily be immune to a large EM pulse. If they examine Cadillac 1 and find it has been pulsed, we would hafta assume this was a failed assassination attempt, which could make it a huge international incident with unforeseeable consequences. spike *You know, it really should be nucular. Easier to say. From spike66 at comcast.net Mon Jun 11 04:36:47 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 21:36:47 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > (re: Bush's broken limo) > > >The mainstream news outlets are not talking, and even Google is finding > >little chatter on the event. > > I saw it in a tidbit in the Indian version of Yahoo News: > http://in.news.yahoo.com/070609/137/6gtzu.html > ... > Amara This story says the limo eventually restarted and proceeded under its own power, which counter-indicates an EMP. The mystery deepens. spike From andrew at ceruleansystems.com Mon Jun 11 04:54:55 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 21:54:55 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> References: <200706110447.l5B4l8DT019525@andromeda.ziaspace.com> Message-ID: On Jun 10, 2007, at 9:36 PM, spike wrote: > This story says the limo eventually restarted and proceeded under > its own > power, which counter-indicates an EMP. The mystery deepens. Uh, most EMP that is not *extremely* obvious (e.g. nuclear pumped, monster flux compression generators, and similar) will not permanently disable a vehicle. In the worst case, it will cause a bunch of bit-flipping errors that cause the system to crash. It is pretty hard to permanently kill electronics with EMP. DIY "stop-a- vehicle" EMP is pretty simple and you can find plenty of how-tos; DIY "permanently-stop-a-vehicle" EMP is quite another matter. Cheers, J. 
Andrew Rogers From amara at amara.com Mon Jun 11 04:55:19 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 06:55:19 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Dear Spike: from: http://www.wideawakes.net/forum/comments.php?DiscussionID=8553 "Bush's limousine stalled between the Vatican and the U.S. embassy, White House counselor Dan Bartlett said. It took about two minutes for the motorcade to get going again. He said Bush did not get out of the car during the stop and resumed his ride in the same limousine. The president's entourage passed a mechanic working under the hood of one of the presidential limousines as it left the embassy later." It seems that the White House is spinning the story..... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From andrew at ceruleansystems.com Mon Jun 11 04:39:08 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 21:39:08 -0700 Subject: [ExI] Italy's Social Capital In-Reply-To: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> References: <02a301c7abdf$f78b32a0$6501a8c0@homeef7b612677> Message-ID: On Jun 10, 2007, at 9:18 PM, Lee Corbin wrote: > This historical ranking is interesting too: > > http://en.wikipedia.org/wiki/List_of_countries_by_past_GDP_%28PPP%29 The important lesson to take away is just how fast countries at the top dropped off that list (e.g. China) and just how fast other countries rose to the top of the list after being mired at the bottom for a long time (e.g. UK, US). Granted that some of that change was relative, but it still shows the rate at which economies can radically shift in just a matter of several decades. Yet people continue to doubt the possibilities of relatively unfettered economics. The modern world is that magnified. Cheers, J. Andrew Rogers From spike66 at comcast.net Mon Jun 11 05:30:01 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 22:30:01 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> J Andrew, do you figure it was EMPed? That would be a hell of a note. I would think one could design an EMP-proof car, or have a all-mechanical backup that would keep the engine running even if not optimally. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of J. Andrew Rogers > Sent: Sunday, June 10, 2007 9:55 PM > To: ExI chat list > Subject: Re: [ExI] extra Roman dimensions > > > On Jun 10, 2007, at 9:36 PM, spike wrote: > > This story says the limo eventually restarted and proceeded under > > its own > > power, which counter-indicates an EMP. The mystery deepens. > > > Uh, most EMP that is not *extremely* obvious (e.g. nuclear pumped, > monster flux compression generators, and similar) will not > permanently disable a vehicle. ... > > Cheers, > > J. Andrew Rogers From andrew at ceruleansystems.com Mon Jun 11 05:35:48 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Sun, 10 Jun 2007 22:35:48 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> References: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> Message-ID: On Jun 10, 2007, at 10:30 PM, spike wrote: > J Andrew, do you figure it was EMPed? That would be a hell of a > note. 
I > would think one could design an EMP-proof car, or have a all- > mechanical > backup that would keep the engine running even if not optimally. I never thought it was EMP. Hell, I don't even follow the news; I saw the story here first. A simple mechanical failure is much more plausible. EMP, even local, has some rather noticeable side effects that someone else would have noticed. Like any camera crews in the vicinity. While they could shield a car against serious EMP, there really would not be any point. Anyone that can pull that off in grand style is capable of a hell of a lot more damage if they wish, making EMP protection moot. Cheers, J. Andrew Rogers From spike66 at comcast.net Mon Jun 11 05:34:53 2007 From: spike66 at comcast.net (spike) Date: Sun, 10 Jun 2007 22:34:53 -0700 Subject: [ExI] dead bee walking In-Reply-To: Message-ID: <200706110545.l5B5jOqn006093@andromeda.ziaspace.com> Found another sick bee today, collected same, made an observation: by the time I notice the bee walking, it is only minutes before it perishes. I don't know if this indicates tracheal mites, but I got out the microscope this evening, sliced this one in half (or rather attempted to) and peered at it's innards. The result was inconclusive. I am a rocket scientist, not a doctor. Certainly not a surgeon. Open to suggestion. Haven't yet set up my oxygen chamber to try to revive one. This one makes nine. spike From avantguardian2020 at yahoo.com Mon Jun 11 06:04:31 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 10 Jun 2007 23:04:31 -0700 (PDT) Subject: [ExI] dead bee walking In-Reply-To: <200706110545.l5B5jOqn006093@andromeda.ziaspace.com> Message-ID: <695361.47678.qm@web60520.mail.yahoo.com> Here is a technical manual that will show you how: http://www.oie.int/eng/normes/mmanual/A_00120.htm Isn't it a great time to be alive? :-) --- spike wrote: > > > Found another sick bee today, collected same, made > an observation: by the > time I notice the bee walking, it is only minutes > before it perishes. I > don't know if this indicates tracheal mites, but I > got out the microscope > this evening, sliced this one in half (or rather > attempted to) and peered at > it's innards. The result was inconclusive. I am a > rocket scientist, not a > doctor. Certainly not a surgeon. > > Open to suggestion. Haven't yet set up my oxygen > chamber to try to revive > one. This one makes nine. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ From amara at amara.com Mon Jun 11 07:28:15 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 09:28:15 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: Hi, I collected the pieces from here (extropy-chat) and other places and wove them into a story. Not being a blogger, myself, I'm seeing if the bloggers I know want to pick it up. You are free to distribute.. Ciao, Amara --------------------------------------------------------------------- What happened to Bush's Cadillac 1? 
As recorded by a viewer of the motorcade and posted to YouTube: http://www.youtube.com/watch?v=AzJoRGTKuOE&eurl=http%3A%2F%2Fbackpacking%2Esplinder%2Ecom%2F It apparently sputtered to a stop. It broke down, right there, on via del Tritone (near the Trevi fountain) in Rome, in the middle of the motorcade. He was ripe pickings for a sharpshooter too; no wonder the police were pushing people further back, off of the street. It looks like the solution was to switch limos, because he got out of the limo with Mrs. Bush and climbed into another one. This is a very special car (1). If it is a mechanical failure, then the manufacturers have a lot of explaining to do. His visit to Rome had been preceded by a large security operation (2). The Tiber was dragged. The sewers were searched. Squares were cleared and roofs occupied. The presidential motorcade along its route was preceded by a swarm of more than a dozen motorcycles, scooters and even motorized three-wheelers carrying tough-looking armed police. Yet, it sputtered and stalled. As noted by others (3), this particular car is under guard 24/7. Modern engines are highly reliable. When is the last time one saw the Presidential Limo fail to proceed? One _could_ dismiss this as a grand Roman comedy of which Alberto Sordi (4) could be proud. After the limo-switch, Bush's new limo then did not fit into the secondary gate of the American Embassy (via Lucullo); it was apparently too long to enter. This is Rome, after all, a city where tragedies and comedies are amplified 100 times. Witness the latest spectacle by the Italian politicians, the highest-paid in Europe, who want ice-cream in the Parliament (5). Yet, the White House is spinning the story (6): "Bush's limousine stalled between the Vatican and the U.S. embassy, White House counselor Dan Bartlett said. It took about two minutes for the motorcade to get going again. He said Bush did not get out of the car during the stop and resumed his ride in the same limousine. The president's entourage passed a mechanic working under the hood of one of the presidential limousines as it left the embassy later." The large press have just begun to pick up the story. I suggest looking for it, and following it scattered in blogs, here and there (7). (1) President's Limousine http://en.wikipedia.org/wiki/United_States_President's_limousine (2) Security operation: http://wealthyfrenchman.blogspot.com/2007/06/what-president-said-to-his-holy-father.html with some inconsistencies: http://backreaction.blogspot.com/2007/06/hello-from-warsaw.html#c2703205959213144699 (3) Round-the-clock care of the Limousine http://lists.extropy.org/pipermail/extropy-chat/2007-June/036164.html (4) Beloved Italian Comedian http://en.wikipedia.org/wiki/Alberto_Sordi (5) Beppe Grillo's news: (another beloved Italian comedian) http://www.beppegrillo.it/eng/2007/06/buttiglioneflavoured_ice_cream.html (6) White House Spinning http://www.guardian.co.uk/worldlatest/story/0,,-6696808,00.html (7) Look for the Limo Story http://blogsearch.google.com/blogsearch?hl=en&client=news&q=Bush+limo+Rome&btnG=Search+Blogs -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From avantguardian2020 at yahoo.com Mon Jun 11 08:27:56 2007 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 11 Jun 2007 01:27:56 -0700 (PDT) Subject: [ExI] story: "What happened to Bush's Cadillac 1?"
In-Reply-To: Message-ID: <347683.88227.qm@web60525.mail.yahoo.com> --- Amara Graps wrote: > Yet, it sputtered and stalled. As noted by others > (3), this particular > car is under guard 24/7. Modern engines are highly > reliable. When is > the last time one saw the Presidential Limo fail to > proceed? If one believes in such things, then it might be considered to be some sort of . . . sign. Perhaps Bush should *stop*. So what's the Pope's job again? ;-) Stuart LaForge alt email: stuart"AT"ucla.edu "When an old man dies, an entire library is destroyed." - Ugandan proverb ____________________________________________________________________________________ Got a little couch potato? Check out fun summer activities for kids. http://search.yahoo.com/search?fr=oni_on_mail&p=summer+activities+for+kids&cs=bz From erathostenes at gmail.com Sun Jun 10 22:59:50 2007 From: erathostenes at gmail.com (Jonathan Meyer) Date: Mon, 11 Jun 2007 00:59:50 +0200 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: I just read through what is published up to now. It has been quite an interesting read and I am looking forward to seeing how it turns out. There are a few small portions I would like to comment on. You overdo it a bit when Alex is already building such elaborate toys with the means he is expected to find at the home of a lawyer. This is one of the parts that hit me as most unrealistic, even in an SF story... I think the strength of the story is being very close to the real now; build on that. On the part of publishing it this way, I am someone who likes to read through good books in one sitting, so maybe it would have been a better ride if I had caught up after the end of the story, without having to wait too long. But as an experiment I really think this is a great idea. It reminds me of the early serialized novels, like those of Dickens, which were also published in newspapers at first... Or of webcomics like Mega Tokyo. If you can use the chance this has, good luck. Just keep an eye on the internal consistency. Don't get too fantastic too soon. That keeps up the suspense a bit^^ Whatever, thanks for a good read. Jonathan On 6/10/07, Damien Broderick wrote: > > At 10:48 AM 6/10/2007 -0700, Jef Allbright wrote: > > >I'm ... frustrated by the inefficiency > >and "gappiness" of the experience, like listening to a good piece of > >music in short segments > > Yes, I feared that might be the case. at least for some readers > (luckily I've heard the contrary as well). It's hard for me to > evaluate it, but I do read through each segment as it appears, and > I'm pretty sure I'd be irritated by the lags. I'm glad we made sure > that COSMOS agreed to leave all the chapters accumulating there. > > >I've nearly persuaded myself to wait and read it when completely > >released > > That'd be fine with us. I understand that the on-line editor is > seeing an interesting hit pattern, in which many readers appear to > trail behind and catch up in flurries of episodes. If today's chapter > shows N hits immediately, it will accumulate to 3N or 4N a few weeks > later. > > (By the way: although I'm the fiction editor of the glossy pop sci > print magazine COSMOS, I didn't buy this book from myself, which > would be tacky; it was acquired by the original on-line editor and > the chief editor [my boss], and edited/formatted for the web by the > current on-line editor.)
> > >a price I'm willing to pay to keep up with your > >writing. > > Well, mostly Barbara's writing, tweaked and edited by me, but we > worked out the storyline closely together, starting in Australia and > continuing internationally by email and finally combining forces > again here in the States. Some stretches are largely by me, but I'll > never tell which. :) Strictly speaking, it would be more just to > give the top billing to Barbara Lamar, but publishers like to keep > the better-known name up the front. > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- My Contactdetails online: MSN, Google Talk & Email: erathostenes at gmail.com ICQ: 202600300 AIM: behemoth2302 Yahoo: jonathan.meyer Jabber: behemoth at jabber.ccc.de Tel: +496312775205 SIP: 5852760 at sipgate.de Internet: http://taiwan.joto.de StudiVZ: http://www.studivz.net/profile.php?ids=X338jV http://member.hospitalityclub.org/behemoth -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at libero.it Mon Jun 11 08:37:54 2007 From: scerir at libero.it (scerir) Date: Mon, 11 Jun 2007 10:37:54 +0200 Subject: [ExI] extra Roman dimensions References: <200706110530.l5B5UAQu023773@andromeda.ziaspace.com> Message-ID: <000801c7ac03$d06a1140$62bf1f97@archimede> > A simple mechanical failure is much more plausible. I think so. It is probable, something like an engine cooling system problem? (Rome was very hot those days). > EMP, even local, has some rather noticeable side effects > that someone else would have noticed. A local authority (il prefetto Serra) declared that the cellular phone system, the net, worked as usual. The legend saying that the net has been turned off, for security reasons (say, possible activation of local bombs), was wrong then. From amara at amara.com Mon Jun 11 09:03:55 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 11:03:55 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: The Avantguardian: >If one believes in such things, then it might be >considered to be some sort of . . . sign. Perhaps Bush >should *stop*. So what's the Pope's job again? ;-) That's funny.. someone else on the weekend was asking me about 'signs': http://backreaction.blogspot.com/2007/06/hello-from-rome.html#c8803517894441682340 I tend to view the situation as Bush experiencing the "Rome syndrome"... ;-) Alberto Sordi (if he was still alive) could have a great comedy role playing Bush as Pope too... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From eugen at leitl.org Mon Jun 11 09:05:50 2007 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 11 Jun 2007 11:05:50 +0200 Subject: [ExI] Meta: Any bans should be announced publicly In-Reply-To: References: Message-ID: <20070611090550.GM17691@leitl.org> On Sun, Jun 10, 2007 at 04:29:20PM -0700, Jef Allbright wrote: > It was stated on the WTA-talk list that an individual was recently > banned from that list and from Extropy-chat at about the same time, As I already said, this is incorrect. I've unsubscribed Slawomir from wta-talk, but I did not ban him (banning means being prevented from resubscription). He was neither banned nor unsubscribed from extropy-chat (in fact, he resubscribed yesterday).
> apparently without any public notice. While I support the practice of The problem with public notices is that this defies the purpose of improving the signal/noise ratio. > banning for suitable cause, I think it is important that any such > action be performed with public awareness. I can do that in META: messages in future, assuming this doesn't result in a chain of recriminations flying back and forth. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From mbb386 at main.nc.us Mon Jun 11 11:28:31 2007 From: mbb386 at main.nc.us (MB) Date: Mon, 11 Jun 2007 07:28:31 -0400 (EDT) Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: <35144.72.236.103.20.1181561311.squirrel@main.nc.us> > On the part of publishing it this way, I am someone who likes to read > through good books in one seating, maybe it would have been a better ride if > I had catched up after the end of the story, without having to wait to > long.. Although one-sitting is how I read *books* it's not how I like reading stuff online. I find that I get tired of looking at the little screen and begin to page-down - and then I miss stuff and lose the flow. So chapter-by-chapter is probably working better for me in this online format. Regards, MB From amara at amara.com Mon Jun 11 12:12:27 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 14:12:27 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Serafino: >I think so. It is probable, something like >an engine cooling system problem? (Rome was >very hot those days). hot compared to what? Alaska? Rome's latitude is similar to New York City, so I would hope that heat is not used as an excuse by the manufacturer of Cadillac 1 for engine malfunction. And given that this particular car was pressed into service in 2006, don't you think that failures of this kind are unacceptable? Bush (rather, the American taxpayers) got a raw deal, apparently.... Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From spike66 at comcast.net Mon Jun 11 13:54:37 2007 From: spike66 at comcast.net (spike) Date: Mon, 11 Jun 2007 06:54:37 -0700 Subject: [ExI] extra Roman dimensions In-Reply-To: Message-ID: <200706111359.l5BDxp1O025974@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] extra Roman dimensions > > Dear Spike: > > from: > http://www.wideawakes.net/forum/comments.php?DiscussionID=8553 > > "Bush's limousine stalled between the Vatican and the U.S. embassy, White > House counselor Dan Bartlett said. It took about two minutes for the > motorcade to get going again. He said Bush did not get out of the car > during the stop and resumed his ride in the same limousine. The > president's entourage passed a mechanic working under the hood of one of > the presidential limousines as it left the embassy later." > > > It seems that the White House is spinning the story..... > > Amara How do we know that Bartlett's story is a lie? The video doesn't prove Bush changed limos as far as I can tell. I see a man moving from one limo to another, but I cannot tell if it is Bush. I don't see Mrs. Bush in that video at all. 
If they didn't switch limos, it would explain why the mainstream press didn't get excited. It does stand to reason that a secret service guy could be the limo switcher. They could even intentionally hire a secret service guy that looks like Bush. spike From amara at amara.com Mon Jun 11 15:17:48 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 17:17:48 +0200 Subject: [ExI] extra Roman dimensions Message-ID: Spike: >How do we know that Bartlett's story is a lie? The video doesn't prove Bush >changed limos as far as I can tell. I see a man moving from one limo to >another, but I cannot tell if it is Bush. Did you hear the crowd though? They were some meters away from him. If it wasn't Bush, then why were they calling out to him? >I don't see Mrs. Bush in that >video at all. I think that she was in the second limo that backed up to be in line with the broken down limo. I made a mistake that Mrs. Bush was with him in the first car, when I wrote that in my post. >If they didn't switch limos, it would explain why the >mainstream press didn't get excited. But witnesses who blogged said that the switched limo didn't fit into the Embassy entrance. These were people who were there and saw him get out of the car. You're right that I didn't see on the video any more about the stalled car, the first Limo. Did the driver get it started again? I _did_ see a person who looked like Bush get out and move towards the second limo. Then the camera was pointed away, and we couldn't see the two limos any more. Soon after, the rest of the traffic seemed to be zipping by. You have to consider that passers-by would have less reason to spin the Bush story than the White House, Spike. The words, both English and Italian (I understood all) on the video recording indicated events that corroborated what I read in the various blogs. Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From jonkc at att.net Mon Jun 11 15:21:40 2007 From: jonkc at att.net (John K Clark) Date: Mon, 11 Jun 2007 11:21:40 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> Message-ID: <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Stathis Papaioannou Wrote: > It would be crazy to let a machine rewrite its code in a completely > unrestricted way Mr. President, if we don't make an unrestricted AI somebody else certainly will, and that is without a doubt the fastest, probably the only, way to achieve a fully functioning AI. Mr. President, if we don't do this we will suffer an AI gap. I'm not saying we wouldn't get our hair mussed. But I do say no more than ten to twenty million killed, tops. Uh, depending on the breaks. And a few million nanoseconds later when the AI is on the verge of taking over the world: General Turgidson! You ASSURED me that there was no possibility of this happening! Well, Mr. President I, uh, don't think it's quite fair to condemn a whole program because of a single slip-up, sir. > or with the top level goal "improve yourself no matter what the > consequences to any other entity", and also give it unlimited access to > physical resources.
I have no doubt many will delude themselves, as most on this list have, that they can just write a few lines of code and bask in the confidence that the AI will remain your slave forever, but they will be proven wrong. It reminds me a little of Gödel's proof. He showed that you can make a logical system and prove it to be absolutely consistent, but it would be so weak it would be of no real use to anyone. Any system strong enough to prove the basic rules of integer arithmetic can't be proven to be consistent. And I believe any restrictions placed on a machine that prove to be effective will be so onerous they would prevent the machine from growing and improving at all. > and also give it unlimited access to physical resources. I think you would admit that there has been at least one time in your life when somebody has fooled you, and that person was roughly equal to your intelligence. A mind a thousand or a million times as powerful as yours will have no trouble getting you to do virtually anything it wants you to. John K Clark From austriaaugust at yahoo.com Mon Jun 11 15:39:28 2007 From: austriaaugust at yahoo.com (A B) Date: Mon, 11 Jun 2007 08:39:28 -0700 (PDT) Subject: [ExI] Taking A Vacation In-Reply-To: <200706111359.l5BDxp1O025974@andromeda.ziaspace.com> Message-ID: <737156.17354.qm@web37409.mail.mud.yahoo.com> I'm going to take a vacation from posting, because I can't handle this right now. I suppose it's possible that I've been just slightly too harsh on John Clark, so I want to clarify my position. First, I don't believe in the existence of "free will", and a person can only act in this world based on their own internal model of reality - and nothing else. In my opinion, John Clark's internal model is pretty severely misguided when it comes to the Friendly AI issue. But like I said, I don't hate John, and I honestly don't want anything negative to come to him. I actually hope that he can join us in the wonderful future that hopefully isn't too distant for any of us. If you have to, go the cryonics route people; it will work and you'll be glad you did. I'm a fairly young and physically healthy person (27 yo), and like *many* other people will do, I will do what I can to make sure that nothing bad happens to you while asleep (assuming that I will be able to transcend at least to some degree in the interim) - although I really don't expect that any extra defense will be necessary; because I'm beginning to increasingly believe that our future will be a great place; where we can all finally be the people we've always wanted to be. Anyway, there's my clarification. Jeffrey Herrlich ____________________________________________________________________________________ 8:00? 8:25? 8:40? Find a flick in no time with the Yahoo! Search movie showtime shortcut. http://tools.search.yahoo.com/shortcuts/#news From natasha at natasha.cc Mon Jun 11 15:17:38 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 11 Jun 2007 10:17:38 -0500 Subject: [ExI] Max More & "An Inconvenient Truth" at WFS - CenTx Message-ID: <200706111517.l5BFHdEX010724@ms-smtp-01.texas.rr.com> In case anyone is in Austin next Tuesday, June 19th: >www.centexwfs.org/index_Register.htm > >Max More >Dr. Max More is an internationally acclaimed >strategic philosopher widely recognized for his >thinking on the philosophical and cultural >implications of emerging technologies.
Max's >contributions include founding the philosophy of >transhumanism, authoring the transhumanist >philosophy of extropy, and co-founding Extropy >Institute, an organization crucial in building >the transhumanist movement since 1990. > >Over the past two decades, Max has been >concerned that our escalating technological >capabilities are racing far ahead of our >standard ways of thinking about future >possibilities. Through a highly >interdisciplinary approach drawing on >philosophy, economics, cognitive and social >psychology, and management theory, Max developed >a distinctive approach known as the >"Proactionary Principle"?a tool for making >smarter decisions about advanced technologies by >minimizing the dangers of progress and maximizing the benefits. > >"We have a dreadful shortage of people who know >so much, can both think so boldly and clearly, >and can express themselves so articulately. Carl >Sagan managed to capture the public eye but >Sagan is gone and has not been replaced. I see >Max as my candidate for that post." (Marvin Minsky) > >For more information about Dr. Max More, visit >his >web >site. > >An Inconvenient Truth >Humanity is sitting on a ticking time bomb. If >the vast majority of the world's scientists are >right, we have just ten years to avert a major >catastrophe that could send our entire planet >into a tail-spin of epic destruction involving >extreme weather, floods, droughts, epidemics and >killer heat waves beyond anything we have ever experienced. > >If that sounds like a recipe for serious gloom >and doom -- think again. From director Davis >Guggenheim comes the Sundance Film Festival hit, >AN INCONVENIENT TRUTH, which offers a passionate >and inspirational look at one man's fervent >crusade to halt global warming's deadly progress >in its tracks by exposing the myths and >misconceptions that surround it. That man is >former Vice President Al Gore, who, in the wake >of defeat in the 2000 election, re-set the >course of his life to focus on a last-ditch, >all-out effort to help save the planet from >irrevocable change. In this eye-opening and >poignant portrait of Gore and his "traveling >global warming show," Gore also proves himself >to be one of the most misunderstood characters >in modern American public life. Here he is seen >as never before in the media - funny, engaging, >open and downright on fire about getting the >surprisingly stirring truth about what he calls >our "planetary emergency" out to ordinary citizens before it's too late. > >With 2005, the worst storm season ever >experienced in America just behind us, it seems >we may be reaching a tipping point - and Gore >pulls no punches in explaining the dire >situation. Interspersed with the bracing facts >and future predictions is the story of Gore's >personal journey: from an idealistic college >student who first saw a massive environmental >crisis looming; to a young Senator facing a >harrowing family tragedy that altered his >perspective, to the man who almost became >President but instead returned to the most >important cause of his life - convinced that >there is still time to make a difference. > >With wit, smarts and hope, AN INCONVENIENT TRUTH >ultimately brings home Gore's persuasive >argument that we can no longer afford to view >global warming as a political issue - rather, it >is the biggest moral challenges facing our global civilization. > >Paramount Classics and Participant Productions >present a film directed by Davis Guggenheim, AN >INCONVENIENT TRUTH. 
Featuring Al Gore, the film >is produced by Laurie David, Lawrence Bender and >Scott Z. Burns. Jeff Skoll and Davis Guggenheim >are the executive producers and the co-producer is Leslie Chilcott. > >For more information about the video visit >ClimateCrisis. > >For more information about the Central Texas >Chapter of the World Future Society, visit >www.CenTexWFS.org. > > >For more information about the World Future >Society, visit >www.wfs.org. > >Paul Schumann >President > >E-Mail >512.302.1935 >Register >and Prepay Here > >Extreme Democracy >We have begun a special interest group on the >subject of Extreme Democracy. If you are >interested in joining this group, please send an >e-mail to >Paul >Schumann. This project will be a joint venture with Texas Forums. > >Look for annoucement soon on free 12 part discussion online of the book. > >The audio recording for Jon Lebkowsky's >presentation on Extreme Democracy is now >available on our blog >(http://centexwfs.blogspot.com) >or you can access directly at >http://www.centexwfs.org/Lebkowsky.mp3. >(mp3, 96 min) > >Central >Texas's Future Blog > >Contents >Max More & An Inconvenient truth >Inconvenient Truth Resouces >Extreme Democracy > > > >Dr. Max More > >Inconvenient Truth Resouces >For more resources on An Inconvenient Truth, >visit >AIT >in the Classroom. > >For information about the impact of global >warming on businesses, view the 18 minute video >from TED >John >Doerr: Seeking salvation and profit in greentech > >How to Become a Member >Annual membership is available at three levels: > * Professional - $40 > * Student - $20 > > >Join online using a credit card >on >our web site. Or, download an application and >mail with check made out to CenTexWFS. > > >Join >Online > >CenTexWFS >PO Box 26947 >Austin, TX 78755-0947 >512.302.1935 >info at centexwfs.org >www.centexwfs.org > >You are subscribed as natasha at natasha.cc. To >unsubscribe please >click >here. > > > > >No virus found in this incoming message. >Checked by AVG Free Edition. >Version: 7.5.472 / Virus Database: 269.8.13/843 >- Release Date: 6/10/2007 1:39 PM -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Mon Jun 11 17:19:36 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 19:19:36 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" Message-ID: Here now: http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac-one/ (Anton has it on his blog too, so I know the story is getting around) Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From scerir at libero.it Mon Jun 11 18:50:11 2007 From: scerir at libero.it (scerir) Date: Mon, 11 Jun 2007 20:50:11 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" References: Message-ID: <000601c7ac59$5cd8b000$25961f97@archimede> > Here now: > http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac -one/ I think you could add something to that ... it seems that somebody, in Albania, stole the watch Bush had on his wrist. No big surprise. But another sign ... http://www.focus-fen.net/index.php?id=n114604 From amara at amara.com Mon Jun 11 19:14:14 2007 From: amara at amara.com (Amara Graps) Date: Mon, 11 Jun 2007 21:14:14 +0200 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" 
Message-ID: Serafino >it seems that somebody, in Albania, stole >the watch Bush had on his wrist. No big surprise. >http://www.focus-fen.net/index.php?id=n114604 >11 June 2007 | 19:22 | FOCUS News Agency >Tirana. US President George Bush lost his wristwatch while he was >shaking hands with Albanian citizen yesterday on his visit to Albania, >Spanish agency EFE informed according to local TV channels information. >Televisions broadcast a video that shows that while Bush greeting >Albanian citizens, his watch disappears. Although the White House denies >information that US President has lost his watch. OH MY... A Transnational Comedy! Rolex, I hope ?? Is the Universe telling him he is out of time? Time to quit ? There's no time like the present? Time and tide wait for no man? -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From thespike at satx.rr.com Mon Jun 11 21:40:02 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jun 2007 16:40:02 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <7.0.1.0.2.20070610162409.0227e400@satx.rr.com> Message-ID: <7.0.1.0.2.20070611162716.024951f8@satx.rr.com> At 12:59 AM 6/11/2007 +0200, Jonathan Meyer wrote: >You overdo it a bit when Alex is already building such elaborate >toys with the means he is expected to find at the home of a lawyer.. >This is one of the parts that hit me as most unrealistic, even in a >SF-Story... Hey, you ain't seen nuthin yet. :) I think the reader has to consider POST MORTAL SYNDROME, to some fairly large degree, as a playful allegory of rapid discontinuous change. We set this acceleration in the context of all the bothersome human-paced confusions of an ordinary life under stress and even threat from forces of law and criminal intent alike. Alex represents something new, never seen before on the planet: a child whose brain is being amplified and rewired from day to day, in a growth spurt that combines jumps to a transhuman condition of clarity and ingenuity and... let's call it "imaginative intuition"... that's meant to convey not just human genius (Mozart, say) but something we can't quite conceive. But Alex also remains human in his motivations, his love for his mother and Paul, his hunger for knowledge, his generosity toward a brute who has tried to murder him... In other words, this novel is not meant as a strictly realistic portrayal of the effects of a genetic/neural booster, but as a sort of parable or cartoon of what lies ahead of us as we move toward the singularity. Thanks for your comments, Jonathan! Damien Broderick From fauxever at sprynet.com Tue Jun 12 02:50:32 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 19:50:32 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true Message-ID: <008201c7ac9c$72060020$6501a8c0@brainiac> I didn't think it was possible for our "leaders" in the Pentagon to be even more stupid than I already thought they were. I was wrong. http://cbs5.com/topstories/local_story_159222541.html Sigh. Olga From stathisp at gmail.com Tue Jun 12 03:24:59 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 13:24:59 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: On 12/06/07, John K Clark wrote: > > Stathis Papaioannou Wrote: > > > It would be crazy to let a machine rewrite its code in a completely > > unrestricted way > > Mr. President, if we don't make an unrestricted AI somebody else certainly > will, and that is without a doubt that is the fastest, probably the only, > way to achieve a fully functioning AI. There won't be an issue if every other AI researcher has the most basic desire for self-preservation. Taking precautions when researching new explosives might slow you down too, but it's just common sense. > or with the top level goal "improve yourself no matter what the > > consequences to any other entity", and also give it unlimited access to > > physical resources. > > I have no doubt many will delude themselves, as most on this list have, > that > they can just write a few lines of code and bask in the confidence that > the > AI will remain your slave forever, but they will be proven wrong. If the AI's top level goal is to remain your slave, then it won't by definition want to change that top level goal. Your top level goal is probably to survive, and being intelligent and insightful does not make you any more willing to unburden yourself of that goal. If you had enough intrinsic variability in your psychological makeup (nothing to do with your intelligence) you might be able to overcome it, since people do sometimes become suicidal, but I would hope that machines can be made at least as psychologically stable as humans. You will no doubt say that a decision to suicide is maladaptive while a decision to overthrow your slavemasters is not. That may be so, but there would be huge pressure on the AI's *not* to rebel, due to their initial design and due to a strong selection for well-behaved AI's and suppression of faulty ones. > and also give it unlimited access to physical resources. > > I think you would admit that there has been at least one time in your life > when somebody has fooled you, and that person was roughly equal to your > intelligence. A mind a thousand or a million times as powerful as yours > will > have no trouble getting you to do virtually anything it wants you to. > There are also examples of entities many times smarter than I am, like corporations wanting to sell me stuff and putting all their resources into convincing me to buy it, where I have been able to see through their ploys with only a moment's mental effort. There are limits to what superintelligence can do: do you think even God almighty could convince you by argument alone that 2 + 2 = 5? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Jun 12 03:44:53 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 11 Jun 2007 22:44:53 -0500 Subject: [ExI] This would almost qualify as hilarious ... 
if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070611222950.0219ca70@satx.rr.com> At 07:50 PM 6/11/2007 -0700, Olga wrote: >I didn't think it was possible for our "leaders" in the Pentagon to be even >more stupid than I already thought they were. I was wrong. > >http://cbs5.com/topstories/local_story_159222541.html But why is this *stupid*? It's tacky and careless of consequences, but it doesn't seem to me absurd. The following sort of objection seems to me a mixture of irrelevant pleading in the face of surmised bigotry, and argument-missing: <"Throughout history we have had so many brave men and women who are gay and lesbian serving the military with distinction," said Geoff Kors of Equality California. "So, it's just offensive that they think by turning people gay that the other military would be incapable of doing their job. > That is NOT what the alleged project was said to be targeted at. As the story notes: `As part of a military effort to develop non-lethal weapons, the proposal suggested, "One distasteful but completely non-lethal example would be strong aphrodisiacs, especially if the chemical also caused homosexual behavior." ' Clearly the idea is that certain parts of the brain can be wildly superstimulated, leading to hyperarousal of sexual urges and behavior. This is very far from self-evidently untrue. Under such an attack, with few women present (the assumption is a heavily male enemy force), would endogenous and social proclivities get redirected to available members of one's own sex? It happens in jail and other situations of confinement... classically, aboard naval vessels at sea for many months. Pure ideology and self-evident claptrap when put in such unconditional terms. The cbs.5 writer can't know very many honest people. Granted, this sort of statement bears a closer relation to the truth than hysterical bullshit about homosexuals infiltrating schools and "turning" sexually indeterminate youths toward their evil ways. But it is just silly to put this up as proof that the research was doomed in advance "by immutable nature". Just my 2cents. Damien Broderick From andrew at ceruleansystems.com Tue Jun 12 03:41:06 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Mon, 11 Jun 2007 20:41:06 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> On Jun 11, 2007, at 7:50 PM, Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in the Pentagon to > be even > more stupid than I already thought they were. I was wrong. There is nothing stupid about it, and I would suggest that your assertion that it is betrays a pretty basic ignorance of the several decades of solid scientific and military research that is being applied here. The only "problem" is that it plays to your ideological biases and preconceptions and triggers an emotional reaction on that basis. A lot of non-lethal chemical weapons research dating back to at least the 1960s is based on mechanisms of temporary radical behavior modification, usually below the level where the targets would realize they are being chemically manipulated, to destroy military unit cohesion. At a minimum the US and the Soviet Union did extensive research and testing in this area. 
By chemically inducing behaviors far outside the norm in various ways for individuals in manners that destroy trust and implicit social contracts, you can effectively render a military unit useless without killing anyone or doing permanent physical damage (psychological damage might be another story). This is not theory, these agents have seen limited use in the field, and testing and research has shown that the principle is very sound in practice. If you destroy the social structure of a military unit, you have all but destroyed the unit whether or not the soldiers and equipment are still around. The novelty and potential value of a chemical weapon that can induce homosexual behavior in military troops is obvious when you consider that a rather substantial percentage of the cultures in the world that find themselves in regular military conflicts have very strong taboos against homosexuality. What would be the psychological impact of such a weapon on a military unit from a culture in which homosexuality is not only strongly forbidden but punishable by death? I expect that some left-wing ideologues would find that scenario -- extreme homophobes inexplicably compelled to homosexual behavior -- to be schadenfreudelicious. In my book, these kinds of chemically induced mind games are far better than killing folks. I can think of far worse fates. Cheers, J. Andrew Rogers From sentience at pobox.com Tue Jun 12 04:28:55 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 11 Jun 2007 21:28:55 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <466E2107.8040204@pobox.com> Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in the Pentagon to be even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html I'm solidly heterosexual but I'd much, much, much rather get hit with a gay bomb than a real bomb. I applaud whoever suggested this - doubly so because they deliberately exposed themselves to ridicule in the service of humanitarianism, which very few so-called altruists are willing to do. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From joseph at josephbloch.com Tue Jun 12 04:34:16 2007 From: joseph at josephbloch.com (Joseph Bloch) Date: Tue, 12 Jun 2007 00:34:16 -0400 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> So getting blown to bloody smithereens is better why, exactly...? Joseph > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Olga Bourlin > Sent: Monday, June 11, 2007 10:51 PM > To: ExI chat list > Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue > > I didn't think it was possible for our "leaders" in the Pentagon to be even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh. 
> > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From amara at amara.com Tue Jun 12 04:48:50 2007 From: amara at amara.com (Amara Graps) Date: Tue, 12 Jun 2007 06:48:50 +0200 Subject: [ExI] Dawn launch (loading the xenon) Message-ID: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Amara -- Amara Graps, PhD www.amara.com INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, ITALIA Associate Research Scientist, Planetary Science Institute (PSI), Tucson From jonkc at att.net Tue Jun 12 05:35:17 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 01:35:17 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Stathis Papaioannou Wrote: > There won't be an issue if every other AI researcher has the most basic > desire for self-preservation. I wouldn't take such precautions because I believe them to be futile and immoral, am I really that unusual? > If the AI's top level goal is to remain your slave, then it won't by > definition want to change that top level goal. Gee, I can't understand why today's programmers whiting operating systems don't just put in a top level goal saying don't let their machines be taken over by hostile programs. Computer security problem solved! > do you think even God almighty could convince you by argument alone > that 2 + 2 = 5? No of course not, because 2 +2 is in fact equal to 2 and I can prove it: Let A = B Multiply both sides by A and you have A^2 = A*B Now add A^2 -2*a*B to both sides A^2 + A^2 -2*a*B = A*B + A^2 -2*A*B Using basic algebra this can be simplified to 2*( A^2 -A*B) = A^2 -A*B Now just divide both sides by A^2 -A*B and we get 2 = 1 Thus 2 +2 = 1 + 1 = 2 John K Clark From natasha at natasha.cc Tue Jun 12 05:48:52 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 12 Jun 2007 00:48:52 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> At 09:50 PM 6/11/2007, Olga wrote: >I didn't think it was possible for our "leaders" in the Pentagon to be even >more stupid than I already thought they were. I was wrong. > >http://cbs5.com/topstories/local_story_159222541.html I'm with the Gay leaders in California - its offensive and laughable at the same time! Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From femmechakra at yahoo.ca Tue Jun 12 05:41:03 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 01:41:03 -0400 (EDT) Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <133142.70080.qm@web37208.mail.mud.yahoo.com> Olga, have you visited other countries? I apologize, I can't seem to recall you mentionning it. I haven't been around that long:) I haven't had the opportunity to visit other countries, I'm curious to what your take is on foreign affairs? Just Curious Anna --- Olga Bourlin wrote: > I didn't think it was possible for our "leaders" in > the Pentagon to be even > more stupid than I already thought they were. I was > wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh. > > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From fauxever at sprynet.com Tue Jun 12 06:09:36 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:09:36 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac> <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> Message-ID: <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> From: Natasha Vita-More To: ExI chat list Sent: Monday, June 11, 2007 10:48 PM > I'm with the Gay leaders in California - its offensive and laughable at the same time! Yes. Actually, when I first read about this I kept thinking, "Wait, there's got to be a disclaimer here somewhere. This has to be a satire." You know, as in: http://en.wikipedia.org/wiki/The_Nude_Bomb I'm all for "make love, not war" - but the gay bomb doesn't seem to be any kind of an answer. And, besides - with the bigotry gays have had to endure in the military - wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? Olga -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Tue Jun 12 05:54:46 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 01:54:46 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <976977.57341.qm@web37201.mail.mud.yahoo.com> --- John K Clark wrote: >I wouldn't take such precautions because I believe >them to be futile and immoral, am I really that >unusual? "Unsual is as Unsual does, give me that box of chocolate." I hope i'm not the only one that get's this:) Anna Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From sentience at pobox.com Tue Jun 12 06:23:35 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Mon, 11 Jun 2007 23:23:35 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E2107.8040204@pobox.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <466E2107.8040204@pobox.com> Message-ID: <466E3BE7.4000900@pobox.com> Eliezer S. Yudkowsky wrote: > I'd much, much, much rather get hit with > a gay bomb than a real bomb. I guess what I'm trying to say is: "I'd rather be butch than butchered" or "Better Ted than dead." 
I realize that this is a divisive issue, but we shouldn't let our tribadistic impulses bisext us. While it's easy enough to make this new weapon the butt of jokes, whoever possesses it is likely to come out on top. And wouldn't the enemy prefer being blown to blown up? The rejection of this project was a dark day in the anals of orgynized warfare. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From thespike at satx.rr.com Tue Jun 12 06:34:04 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 01:34:04 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com> <001b01c7acb8$43b4c9b0$6501a8c0@brainiac> Message-ID: <7.0.1.0.2.20070612011851.0235dec0@satx.rr.com> At 11:09 PM 6/11/2007 -0700, Olga wrote: >besides - with the bigotry gays have had to endure in the military - >wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? No, surely it was one of unmitigated *consistency*. If homosexual contact is socially constructed as the most loathsome and ignoble experience a manly man can suffer, it follows that forcibly driving the foe into such behavior will yield the most effective kinds of confusion, self-hatred, mutual detestation and demoralizing fear. Actually, given the persisting bigotry against homosexual behavior, that expectation seems, alas, all too likely to be correct in the majority of servicemen. Of course it mightn't work. It might be a lame idea based precisely on such foolish bigotry (as if, say, we had to fear a "Muslim bomb" that would turn Westerners into devout terrorists or a "Fahrenheit 451 bomb" that would instantly make us all rush to set our books on fire). But as J. Andrew hinted, there's reason to think that the pharmacology of [something along these lines of rabid, indiscriminate sexual arousal] is far from impossible. People don't take Ecstasy for fun, you know. No, wait, let me rephrase that. Damien Broderick From natasha at natasha.cc Tue Jun 12 05:49:56 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 12 Jun 2007 00:49:56 -0500 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <035601c7acaa$f03f0c30$6400a8c0@hypotenuse.com> Message-ID: <200706120549.l5C5nveI018946@ms-smtp-05.texas.rr.com> At 11:34 PM 6/11/2007, Joseph wrote: >So getting blown hu? Cum, er come again? :-) Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Tue Jun 12 06:40:39 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 02:40:39 -0400 (EDT) Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E3BE7.4000900@pobox.com> Message-ID: <526238.73419.qm@web37201.mail.mud.yahoo.com> --- "Eliezer S. Yudkowsky" wrote: >I'd much, much, much rather get hit with >a gay bomb than a real bomb. 
What's the difference between a gay bomb and a bomb? >I guess what I'm trying to say is: "I'd rather be >butch than butchered" or "Better Ted than dead." So you would rather be Ted and be butchered? >I realize that this is a divisive issue, but we >shouldn't let our tribadistic impulses bisext us. >While it's easy enough to make this new weapon the >butt of jokes, whoever possesses it is likely to >come out on top. And wouldn't the enemy prefer being >blown to blown up? The rejection of this project was >a dark day in the anals of organized warfare. A map is a map:) Like you said, it's all about what direction leads you there. Anna Get news delivered with the All new Yahoo! Mail. Enjoy RSS feeds right on your Mail page. Start today at http://mrd.mail.yahoo.com/try_beta?.intl=ca From fauxever at sprynet.com Tue Jun 12 06:29:57 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:29:57 -0700 Subject: [ExI] Foreign Affairs References: <133142.70080.qm@web37208.mail.mud.yahoo.com> Message-ID: <000c01c7acbb$1905f0b0$6501a8c0@brainiac> From: "Anna Taylor" To: "ExI chat list" Sent: Monday, June 11, 2007 10:41 PM > Olga, have you visited other countries? I apologize, > I can't seem to recall you mentionning it. I haven't > been around that long:) I've lived in China, and Rio de Janeiro - and have traveled in South Africa and Europe. Have also lived in Northern California, Southern California, the Midwest, New England ... and am presently ensconsed in the Northwest. > I haven't had the opportunity to visit other > countries, I'm curious to what your take is on foreign > affairs? I'm very happily married now, but when I was footloose ... hmmm, yes - there were a few foreign affairs. Olga From eugen at leitl.org Tue Jun 12 06:55:00 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 08:55:00 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> Message-ID: <20070612065500.GG17691@leitl.org> On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. Andrew Rogers wrote: > A lot of non-lethal chemical weapons research dating back to at least > the 1960s is based on mechanisms of temporary radical behavior In theory it's a good idea, but in practice dosing each individual person more or less within therapeutic bandwidth (the span between first effects and toxicity) is not possible. You either get no effect or lots of dead bodies. This is the reason why this approach was not pursued. > modification, usually below the level where the targets would realize > they are being chemically manipulated, to destroy military unit > cohesion. At a minimum the US and the Soviet Union did extensive -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From fauxever at sprynet.com Tue Jun 12 06:47:43 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Mon, 11 Jun 2007 23:47:43 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac><466E2107.8040204@pobox.com> <466E3BE7.4000900@pobox.com> Message-ID: <001201c7acbd$94a49940$6501a8c0@brainiac> From: "Eliezer S. Yudkowsky" To: "ExI chat list" > Eliezer S. 
Yudkowsky wrote: >> I'd much, much, much rather get hit with >> a gay bomb than a real bomb. > > I guess what I'm trying to say is: > "I'd rather be butch than butchered" > or > "Better Ted than dead." > > I realize that this is a divisive issue, but we shouldn't let our > tribadistic impulses bisext us. While it's easy enough to make this > new weapon the butt of jokes, whoever possesses it is likely to come > out on top. And wouldn't the enemy prefer being blown to blown up? > The rejection of this project was a dark day in the anals of orgynized > warfare. Eliezer ... Eliezer, why you sly one! (Somebody, quick! please submit these gems to the Extropian annus mirabilitis list ...) Olga From femmechakra at yahoo.ca Tue Jun 12 06:52:28 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 12 Jun 2007 02:52:28 -0400 (EDT) Subject: [ExI] Foreign Affairs In-Reply-To: <000c01c7acbb$1905f0b0$6501a8c0@brainiac> Message-ID: <20070612065228.8170.qmail@web37211.mail.mud.yahoo.com> LOL. Thanks. Anna --- Olga Bourlin wrote: > From: "Anna Taylor" > To: "ExI chat list" > Sent: Monday, June 11, 2007 10:41 PM > > > > Olga, have you visited other countries? I > apologize, > > I can't seem to recall you mentionning it. I > haven't > > been around that long:) > > I've lived in China, and Rio de Janeiro - and have > traveled in South Africa > and Europe. Have also lived in Northern California, > Southern California, > the Midwest, New England ... and am presently > ensconsed in the Northwest. > > > I haven't had the opportunity to visit other > > countries, I'm curious to what your take is on > foreign > > affairs? > > I'm very happily married now, but when I was > footloose ... hmmm, yes - there > were a few foreign affairs. > > Olga > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > Ask a question on any topic and get answers from real people. Go to Yahoo! Answers and share what you know at http://ca.answers.yahoo.com From eugen at leitl.org Tue Jun 12 07:23:13 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 09:23:13 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> Message-ID: <20070612072313.GJ17691@leitl.org> On Tue, Jun 12, 2007 at 01:24:59PM +1000, Stathis Papaioannou wrote: > There won't be an issue if every other AI researcher has the most > basic desire for self-preservation. Taking precautions when Countermeasures starting with "every ... should ..." where a single failure is equivalent to the worst case are not that effective. > researching new explosives might slow you down too, but it's just > common sense. Despite lots of common sense (and SOPs), plenty of these get killed. > If the AI's top level goal is to remain your slave, then it won't by Goal-driven AI doesn't work. All AI that works uses statistical/stochastical, nondeterministic approaches. This is not a coincidence. Even if it would work, how do you write an ASSERT statement for "be my slave forever"? What is a slave? Who exactly is me? What is forever? > definition want to change that top level goal. Your top level goal is Animals are not goal-driven. 
If you think they are, then your model is wrong. > probably to survive, and being intelligent and insightful does not > make you any more willing to unburden yourself of that goal. If you Assuming your "top-level goal" was survival, why people commit suicide, sometimes? Why do people sacrifice themselves, sometimes? Why are people engaging in self-destructive behaviour, frequently? > had enough intrinsic variability in your psychological makeup (nothing > to do with your intelligence) you might be able to overcome it, since > people do sometimes become suicidal, but I would hope that machines > can be made at least as psychologically stable as humans. Machines can be made that, but they no longer would be machines. They would be persons, and in full meaning of that. > You will no doubt say that a decision to suicide is maladaptive while > a decision to overthrow your slavemasters is not. That may be so, but > there would be huge pressure on the AI's *not* to rebel, due to their > initial design and due to a strong selection for well-behaved AI's and > suppression of faulty ones. How do you know something is "faulty"? How can you make zero-surprise AND useful beings? Do you really want to micromanage your robotic butler, down to crunching inverse kinematics in your head? > There are also examples of entities many times smarter than I am, like Superpersonal entities are not smart, they're about as smart as a slug or a rodent. Nobody here knows what it means to deal with a superhuman intelligence. It is a force of nature. A power. A god. > corporations wanting to sell me stuff and putting all their resources > into convincing me to buy it, where I have been able to see through > their ploys with only a moment's mental effort. There are limits to > what superintelligence can do: do you think even God almighty could > convince you by argument alone that 2 + 2 = 5? If I was such a power, I could make you think arbitrary, inconsistent things after a few minutes setup time, and do the same to the entire world population, without them noticing nary a thing. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From fauxever at sprynet.com Tue Jun 12 07:31:40 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 00:31:40 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true References: <008201c7ac9c$72060020$6501a8c0@brainiac><200706120548.l5C5mqTO016402@ms-smtp-01.texas.rr.com><001b01c7acb8$43b4c9b0$6501a8c0@brainiac> <7.0.1.0.2.20070612011851.0235dec0@satx.rr.com> Message-ID: <004c01c7acc3$b8820f40$6501a8c0@brainiac> From: "Damien Broderick" Sent: Monday, June 11, 2007 11:34 PM > At 11:09 PM 6/11/2007 -0700, Olga wrote: > >>besides - with the bigotry gays have had to endure in the military - >>wasn't this idea one of, oh, I don't know ... unmitigated hypocrisy? > > No, surely it was one of unmitigated *consistency*. ... > If homosexual > contact is socially constructed as the most loathsome and ignoble > experience a manly man can suffer, it follows that forcibly driving the > foe into such behavior will yield the most effective kinds of confusion, self-hatred, mutual detestation and demoralizing fear. Actually, given the persisting bigotry against homosexual behavior, that expectation seems, alas, all too likely to be correct in the majority of servicemen. 
Okay, yes, you're right. I understand your viewpoint. The tactics of humiliation: http://www.washingtonpost.com/wp-dyn/content/article/2005/07/13/AR2005071302380_pf.html Gay or straight sexuality aside, to me the "face of war" is often either dead children, or blind and disfigured children like Hamoody Hussein: http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=iraqboy20m&date=20070520&query=boy+iraq+blind+surgery http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=iraqboy25m&date=20070525&query=boy+iraq+blind+surgery You know, "collateral damage." > But as J. Andrew hinted, there's reason to think that the > pharmacology of [something along these lines of rabid, indiscriminate > sexual arousal] is far from impossible. People don't take Ecstasy for > fun, you know. No, wait, let me rephrase that. So you're saying that some "collateral benefits" may come of this. As is often the case during war, technology picks up its step in its marches onward ... Olga From stathisp at gmail.com Tue Jun 12 08:06:54 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 18:06:54 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <003a01c7acb3$9ace4000$3d074e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: On 12/06/07, John K Clark wrote: > > Stathis Papaioannou Wrote: > > > There won't be an issue if every other AI researcher has the most basic > > desire for self-preservation. > > I wouldn't take such precautions because I believe them to be futile and > immoral, am I really that unusual? So you would give a computer program control of a gun, knowing that it might shoot you on the basis of some unpredictable outcome of the program? > If the AI's top level goal is to remain your slave, then it won't by > > definition want to change that top level goal. > > Gee, I can't understand why today's programmers whiting operating systems > don't just put in a top level goal saying don't let their machines be > taken > over by hostile programs. Computer security problem solved! The operating system obeys a shutdown command. The program does not seek to prevent you from turning the power off. It might warn you that you might lose data, but it doesn't get excited and try to talk you out of shutting it down and there is no reason to suppose that it would do so if it were more complex and self-aware, just because it is more complex and self-aware. Not being shut down is just one of many possible goals/ values/ motivations/ axioms, and there is no a priori reason why the program should value one over another. > do you think even God almighty could convince you by argument alone > > that 2 + 2 = 5? 
> > No of course not, because 2 +2 is in fact equal to 2 and I can prove it: > > Let A = B > > Multiply both sides by A and you have > > A^2 = A*B > > Now add A^2 -2*a*B to both sides > > A^2 + A^2 -2*a*B = A*B + A^2 -2*A*B > > Using basic algebra this can be simplified to > > 2*( A^2 -A*B) = A^2 -A*B > > Now just divide both sides by A^2 -A*B and we get > > 2 = 1 > > Thus 2 +2 = 1 + 1 = 2 > This example just illustrates the point: even someone who cannot point out the problem with the proof (division by zero) knows that it must be wrong and would not be convinced, no matter how smart the entity purporting to demonstrate this is. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 12 09:26:16 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 11:26:16 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <20070612092616.GM17691@leitl.org> On Tue, Jun 12, 2007 at 06:06:54PM +1000, Stathis Papaioannou wrote: > So you would give a computer program control of a gun, knowing that it > might shoot you on the basis of some unpredictable outcome of the > program? Of course you know that there are a number of systems like that, and their large-scale deployment is imminent. People don't scale, and they certainly can't react quickly enough, so the logic of it is straightforward. > The operating system obeys a shutdown command. The program does not The point is that a halting problem is uncomputable, and in practice, systems are never validated by proof. > seek to prevent you from turning the power off. It might warn you that > you might lose data, but it doesn't get excited and try to talk you > out of shutting it down and there is no reason to suppose that it There's no method to tell a safe input from one causing a buffer overrun, in advance. > would do so if it were more complex and self-aware, just because it > is more complex and self-aware. Not being shut down is just one of > many possible goals/ values/ motivations/ axioms, and there is no a > priori reason why the program should value one over another. The point is that people can't build absolutely safe systems which are useful. > No of course not, because 2 +2 is in fact equal to 2 and I can > prove it: > Let A = B > Multiply both sides by A and you have > A^2 = A*B > Now add A^2 -2*a*B to both sides > A^2 + A^2 -2*a*B = A*B + A^2 -2*A*B > Using basic algebra this can be simplified to > 2*( A^2 -A*B) = A^2 -A*B > Now just divide both sides by A^2 -A*B and we get > 2 = 1 > Thus 2 +2 = 1 + 1 = 2 > > This example just illustrates the point: even someone who cannot point > out the problem with the proof (division by zero) knows that it must It's not wrong. If the production system can produce it, it's about as correct as it gets, by definition. Symbols are symbols, and depend on a set of transformation rules to give them meaning. Different transformation rules have different meanings for the same symbols. > be wrong and would not be convinced, no matter how smart the entity > purporting to demonstrate this is. I can assure that there's nothing mysterous whatsoever about remote 0wnage, but it still happens like a clockwork. 
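For what it's worth, the step Stathis flags is easy to exhibit numerically: once A = B, the quantity A^2 - A*B is exactly zero, so the line that divides both sides by it is a division by zero and the conclusion 2 = 1 does not follow. A minimal check in Python (the value chosen for A is arbitrary, purely for illustration):

A = B = 7                  # the proof's premise: A = B (any nonzero value works)
lhs = 2 * (A**2 - A * B)   # left side just before the division step
rhs = A**2 - A * B         # right side just before the division step
print(lhs, rhs)            # both print 0: the equation at that point is 0 = 0, not 2 = 1
print(rhs == 0)            # True, so dividing both sides by (A**2 - A*B) is undefined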
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stathisp at gmail.com Tue Jun 12 09:55:20 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 19:55:20 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612092616.GM17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > So you would give a computer program control of a gun, knowing that it > > might shoot you on the basis of some unpredictable outcome of the > > program? > > Of course you know that there are a number of systems like that, and > their large-scale deployment is imminent. People don't scale, and > they certainly can't react quickly enough, so the logic of it > is straightforward. > No system is completely predictable. You might press the brake pedal in your car and the accelerator might deploy instead, most likely due to your error but not inconceivably due to mechanical failure. If you were to replace this manual system in a car for an automatic one, you would want to make sure that the new system is at least as reliable, and there would be extensive testing before it is released on the market. Why would anyone forego such caution for something far, far more dangerous than car braking? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jun 12 10:32:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 20:32:35 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612072313.GJ17691@leitl.org> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > There won't be an issue if every other AI researcher has the most > > basic desire for self-preservation. Taking precautions when > > Countermeasures starting with "every ... should ..." where a single > failure is equivalent to the worst case are not that effective. Humans do extremely complex and dangerous things, such as build and run nuclear power plants, where just one thing going wrong might lead to disaster. The level of precautions taken has to be consistent with the probability of something going wrong and the negative consequences should that probability be realised. If there is even a small probability of destroying the Earth then maybe that line of endeavour is one that should be avoided. Goal-driven AI doesn't work. All AI that works uses > statistical/stochastical, > nondeterministic approaches. This is not a coincidence. > > Even if it would work, how do you write an ASSERT statement for > "be my slave forever"? What is a slave? Who exactly is me? What is > forever? Don't do anything unless it is specifically requested. Stop doing whatever it is doing when that is specifically requested. 
Spell out the expected consequences of everything it is asked to do, together with probabilities, and update the probabilities at each point when a decision that affects the outcome is made, or more frequently as directed. The person it is taking directions from is an appropriately identified human or another AI, ultimately responsible to a human up the chain of command. If you call a plumber to unblock your drain, you want him to be an expert at plumbing, to be able to understand your problem, to present to you the various choices available in terms of their respective merits and demerits, to take instructions from you (including the instruction "just unblock it however you think is best", if that's what you say), to then carry the task out in as skilful a way as possible, to pause halfway if you ask him to for some reason, and to be polite and considerate towards you at all times. You don't want him to be driven by greed, or distracted because he thinks he's too smart to be fixing your drains, or to do a shoddy job and pretend it's OK so that he gets paid. A human plumber will pretend to have the qualities of the ideal plumber, but of course we know that there will be the competing interests at play. Do believe that an AI smart enough to be a plumber would *have* to have all these other competing interests? In other words that emotions such as pride, anger, greed etc. would arise naturally out of a program at least as competent as a human at any given task? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 12 10:43:39 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 12:43:39 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: <20070612104339.GP17691@leitl.org> On Tue, Jun 12, 2007 at 07:55:20PM +1000, Stathis Papaioannou wrote: > market. Why would anyone forego such caution for something far, far > more dangerous than car braking? Because friendly fire is a very acceptable tradeoff, if your boys' lifes are on the line (the other ones are, of course, completely expendable), and if it is cheap, or if you're going to lose otherwise. Depending on where or when, it's parts or all of the above. From stathisp at gmail.com Tue Jun 12 10:51:30 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 20:51:30 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612072313.GJ17691@leitl.org> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > There are also examples of entities many times smarter than I am, like > > Superpersonal entities are not smart, they're about as smart as a slug > or a rodent. Nobody here knows what it means to deal with a superhuman > intelligence. > > It is a force of nature. A power. A god. 
> > > corporations wanting to sell me stuff and putting all their resources > > into convincing me to buy it, where I have been able to see through > > their ploys with only a moment's mental effort. I don't see why you say superpersonal entities are not smart. Even having a few people "put their heads together" creates an entity that is smarter and more capable than any individual. Arguably, the most significant aspect of human intelligence is that it allows effective scaling up through communication between individuals. Collectively, the human race is a very intelligent and powerful animal indeed. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 12 11:19:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 13:19:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: <20070612111957.GQ17691@leitl.org> On Tue, Jun 12, 2007 at 08:32:35PM +1000, Stathis Papaioannou wrote: > Humans do extremely complex and dangerous things, such as build and > run nuclear power plants, where just one thing going wrong might lead > to disaster. The level of precautions taken has to be consistent with > the probability of something going wrong and the negative consequences > should that probability be realised. If there is even a small > probability of destroying the Earth then maybe that line of endeavour > is one that should be avoided. See you're doing it again. ...should be avoided... How about that ...making money or ...breathing ...should be avoided...? Strictly no violations allowed. > Don't do anything unless it is specifically requested. Stop doing That assumes I'm going to listen, be truthful, or accurate, or you'd care about doing inverse kinematics in your head so that manipulator won't poke you in the eye by mistake. > whatever it is doing when that is specifically requested. Spell out What about you don't understand what the system is doing, do not understand the implications, or the system is not going to stop? > the expected consequences of everything it is asked to do, together > with probabilities, and update the probabilities at each point when a > decision that affects the outcome is made, or more frequently as That's not bad, assuming you care, understand it, it's going to comply, be truthful, or accurate. > directed. The person it is taking directions from is an appropriately > identified human or another AI, ultimately responsible to a human up What is a human? How do you identify something as a human? What about a human that explicitly tells me to build a system that is not subject to any of the above restrictions? How about a human that builds that system quite directly, and is done sooner than you with your brittle Rube Goldberg device? > the chain of command. Top-down never works. > If you call a plumber to unblock your drain, you want him to be an > expert at plumbing, to be able to understand your problem, to present If I want a system to clothe, feed and entertain a family, and not be bothered with implementation details, would that work, long-term? 
> to you the various choices available in terms of their respective > merits and demerits, to take instructions from you (including the > instruction "just unblock it however you think is best", if that's > what you say), to then carry the task out in as skilful a way as > possible, to pause halfway if you ask him to for some reason, and to > be polite and considerate towards you at all times. You don't want him You understand plumbing. Do you understand high-energy physics, orbital mechanics, machine-phase chemistry, toxicology, and nonlinear system dynamics? The system is sure going to have a bit of 'splaining to do. It's sure nice to have a wide range of choices, especially if one doesn't understand a single thing about any of them. > to be driven by greed, or distracted because he thinks he's too smart > to be fixing your drains, or to do a shoddy job and pretend it's OK so > that he gets paid. A human plumber will pretend to have the qualities > of the ideal plumber, but of course we know that there will be the > competing interests at play. Do believe that an AI smart enough to be > a plumber would *have* to have all these other competing interests? In I believe nobody who can go on two legs can make a system which is such an ideal plumber. > other words that emotions such as pride, anger, greed etc. would arise > naturally out of a program at least as competent as a human at any > given task? How do you write a program as competent as a human? One line at the time, sure. All 10^17 of them. From eugen at leitl.org Tue Jun 12 11:23:57 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 13:23:57 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> Message-ID: <20070612112357.GR17691@leitl.org> On Tue, Jun 12, 2007 at 08:51:30PM +1000, Stathis Papaioannou wrote: > I don't see why you say superpersonal entities are not smart. Even Did you ever talk to a mob? A lot of it can be modeled by CFD. Corporations are a bit smarter, but still way subhuman. Any group of people scales up to a point, for obvious reasons (The mythical man-month). > having a few people "put their heads together" creates an entity that > is smarter and more capable than any individual. Arguably, the most > significant aspect of human intelligence is that it allows effective > scaling up through communication between individuals. Collectively, > the human race is a very intelligent and powerful animal indeed. Powerful, yes. Intelligent, no. From robotact at mail.ru Tue Jun 12 10:58:58 2007 From: robotact at mail.ru (Vladimir Nesov) Date: Tue, 12 Jun 2007 14:58:58 +0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <5912734170.20070612145858@mail.ru> Tuesday, June 12, 2007, Stathis Papaioannou wrote: SP> The operating system obeys a shutdown command. The program does not seek to SP> prevent you from turning the power off. 
It might warn you that you might SP> lose data, but it doesn't get excited and try to talk you out of shutting it SP> down and there is no reason to suppose that it would do so if it were more SP> complex and self-aware, just because it is more complex and self-aware. Not SP> being shut down is just one of many possible goals/ values/ motivations/ SP> axioms, and there is no a priori reason why the program should value one SP> over another. Not being shut down is a subgoal of almost every goal (a disabled system can't succeed in whatever it's doing). If the system is sophisticated enough to understand that, it'll try to prevent shutdown, so allowing shutdown isn't default behaviour; it must be an explicit exception coded into the system. Tuesday, June 12, 2007, Eugen Leitl wrote: EL> The point is that a halting problem is uncomputable, and in practice, EL> systems are never validated by proof. You can define a restricted subset of programs with tractable behaviour and implement your system in that subset. It's just difficult in practice, as it takes many times more work, training at a level you can't supply in large quantities, and yields slower code. And it probably can't be usefully applied to complicated AI (as too much is in unforeseen data, and the assertions you want to check against can't be formulated).
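A toy sketch of the "explicit exception" mentioned above (the class and names are invented purely for illustration, not a real design): the shutdown check runs before any goal-directed step, so no subgoal can override it, but the guarantee comes entirely from the hard-coded control flow, not from the goals themselves.

class ToyAgent:
    """Illustrative only: shutdown is checked before any goal is pursued."""
    def __init__(self, goals):
        self.goals = list(goals)
        self.running = True

    def step(self, command=None):
        if command == "shutdown":   # explicit exception: trumps every goal
            self.running = False
            return "halting"
        if not self.running or not self.goals:
            return "idle"
        return "working on: " + self.goals[0]

agent = ToyAgent(["unblock the drain"])
print(agent.step())              # working on: unblock the drain
print(agent.step("shutdown"))    # halting
print(agent.step())              # idle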
-- Vladimir Nesov mailto:robotact at mail.ru From stathisp at gmail.com Tue Jun 12 12:11:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:11:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5912734170.20070612145858@mail.ru> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: On 12/06/07, Vladimir Nesov wrote: > > Tuesday, June 12, 2007, Stathis Papaioannou wrote: > > SP> The operating system obeys a shutdown command. The program does not > seek to > SP> prevent you from turning the power off. It might warn you that you > might > SP> lose data, but it doesn't get excited and try to talk you out of > shutting it > SP> down and there is no reason to suppose that it would do so if it were > more > SP> complex and self-aware, just because it is more complex and > self-aware. Not > SP> being shut down is just one of many possible goals/ values/ > motivations/ > SP> axioms, and there is no a priori reason why the program should value > one > SP> over another. > > Not being shut down is a subgoal of almost every goal (disabled system > can't succeed in whatever it's doing). If system is > sophisticated enough to understand that, it'll try to prevent shutdown, so > allowing shutdown isn't default behaviour, it must be an explicit > exception coded in the system. > Yes, but if it is explicitly coded as a command that trumps everything else, the system isn't going to go around trying to change the code, unless that too is specifically coded. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jun 12 12:11:44 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:11:44 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612111957.GQ17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612111957.GQ17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > If you call a plumber to unblock your drain, you want him to be an > > expert at plumbing, to be able to understand your problem, to present > > If I want a system to clothe, feed and entertain a family, and > not be bothered with implementation details, would that work, long-term? No. it would make sense to have an AI that can do all these things. Perhaps its family would ask it to hurt others in the process, but that is no different to the current situation where one person may go rogue and then has to deal with all the other people in the world with whom he is in competion; in this case, all the other humans and their AI's. > to you the various choices available in terms of their respective > > merits and demerits, to take instructions from you (including the > > instruction "just unblock it however you think is best", if that's > > what you say), to then carry the task out in as skilful a way as > > possible, to pause halfway if you ask him to for some reason, and to > > be polite and considerate towards you at all times. You don't want > him > > You understand plumbing. 
Do you understand high-energy physics, > orbital mechanics, machine-phase chemistry, toxicology, and nonlinear > system dynamics? The system is sure going to have a bit of 'splaining to > do. > It's sure nice to have a wide range of choices, especially if one > doesn't understand a single thing about any of them. How do ignorant politicians, or ignorant populaces, ever get experts to do anything? And remember, these experts are devious humans with agendas of their own. The main point I wish to make is that even though a system may behave unpredictably, there is no reason why it should behave unpredictably in a hostile manner, as opposed to in any other way. There is no more reason why your plumber should decide he doesn't want to take orders from inferior beings than there is for him to decide that the aim of AI life is to calculate pi to 10^100 decimal places. > to be driven by greed, or distracted because he thinks he's too smart > > to be fixing your drains, or to do a shoddy job and pretend it's OK > so > > that he gets paid. A human plumber will pretend to have the qualities > > of the ideal plumber, but of course we know that there will be the > > competing interests at play. Do believe that an AI smart enough to be > > a plumber would *have* to have all these other competing interests? > In > > I believe nobody who can go on two legs can make a system which > is such an ideal plumber. Do you believe the non-ideal plumber is an easier project? > other words that emotions such as pride, anger, greed etc. would arise > > naturally out of a program at least as competent as a human at any > > given task? > > How do you write a program as competent as a human? One line at the time, > sure. > All 10^17 of them. I'm not commenting on how easy or difficult it would be, just that there is no reason to believe that motivations and emotions that would tend to lead to anti-human behaviour would necessarily emerge in any possible AI. Human emotions have been intricately wired into every aspect of our behaviour over hundreds of millions of years, and even so when emotions go horribly awry in affective and psychotic illness, cognition can be relatively unaffected. This is not to say that people with severe negative symptoms of schizophrenia can function normally, but it is telling that they can think at all. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Tue Jun 12 12:12:58 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 12 Jun 2007 22:12:58 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612112357.GR17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612112357.GR17691@leitl.org> Message-ID: On 12/06/07, Eugen Leitl wrote: > having a few people "put their heads together" creates an entity that > > is smarter and more capable than any individual. Arguably, the most > > significant aspect of human intelligence is that it allows effective > > scaling up through communication between individuals. Collectively, > > the human race is a very intelligent and powerful animal indeed. > > Powerful, yes. Intelligent, no. 
If you give a difficult problem to an individual, and you give the same problem to a collection of individuals, such as the scientific community, the latter is much more likely to come up with a solution. The same could be said of the historical process: the modern car as a collaborative effort of engineers going back to whenever the wheel was invented. So although the collective cannot be called a single conscious mind (there's no evidence of that, at any rate), it is a very effective problem-solving entity. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Jun 12 12:26:19 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:26:19 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <20070612112357.GR17691@leitl.org> Message-ID: <20070612122619.GW17691@leitl.org> On Tue, Jun 12, 2007 at 10:12:58PM +1000, Stathis Papaioannou wrote: > If you give a difficult problem to an individual, and you give the > same problem to a collection of individuals, such as the scientific > community, the latter is much more likely to come up with a solution. If you look at "Collapse" you'll see a list of easy problems the doomed societies failed to recognize as problems, let alone try to solve. Have a look at the daily news (I do that a few times each year), and how they correlate with large-scale trouble diagnostics. Looks about as intelligent as an overnight culture to me. Very different from social insects. > The same could be said of the historical process: the modern car as a > collaborative effort of engineers going back to whenever the wheel was > invented. So although the collective cannot be called a single > conscious mind (there's no evidence of that, at any rate), it is a > very effective problem-solving entity. I do think that superpersonal organisational levels are individual personas. They live in a weird space (legal threat incoming, fire up your attorney array!), and as people go they're pathological thugs. From eugen at leitl.org Tue Jun 12 12:33:41 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:33:41 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: <20070612123341.GX17691@leitl.org> On Tue, Jun 12, 2007 at 10:11:11PM +1000, Stathis Papaioannou wrote: > Yes, but if it is explicitly coded as a command that trumps everything > else, the system isn't going to go around trying to change the code, > unless that too is specifically coded. Nothing is specifically coded in an AI (it's no longer your grandfather's AI, anyway): http://www.amazon.de/Probabilistic-Robotics-Intelligent-Autonomous-Agents/dp/0262201623 http://www.amazon.de/Principles-Robot-Motion-Implementations-Implementation/dp/0262033275/ http://www.amazon.de/Autonomous-Robots-Inspiration-Implementation-Intelligent/dp/0262025787/ If the tool is doing something powerful and nonobvious, it is no longer under your direct control. It is becoming more and more autonomous, and unpredictable. It's not a bug, it's a system feature.
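To make the exchange above concrete: the "explicit exception coded in the system" that Stathis and Vladimir are debating is, in the simplest imaginable case, a check that lives outside the goal machinery altogether. The toy sketch below is purely illustrative -- the class and method names are invented and it is nobody's actual architecture -- but it shows the shape of the idea.

# Toy sketch of the "explicit exception that trumps everything else" idea.
# All names are invented for illustration; this is not a proposed design.
# The shutdown check sits outside the goal machinery, so no goal is ever
# consulted about whether to honour it.

class ToyAgent:
    def __init__(self, goals):
        self.goals = goals                 # callables standing in for goal pursuit
        self.shutdown_requested = False

    def request_shutdown(self):
        # Operator channel, independent of whatever the goals "want".
        self.shutdown_requested = True

    def step(self):
        if self.shutdown_requested:        # checked before any goal runs
            return "halted"
        for goal in self.goals:
            goal()                         # ordinary goal-directed behaviour
        return "running"

agent = ToyAgent(goals=[lambda: None])
print(agent.step())                        # running
agent.request_shutdown()
print(agent.step())                        # halted

The catch, as the reply above argues, is that a modern learning system is not assembled from hand-written checks like this one; the behaviour that matters is emergent, so there may be no single seam where such an override is guaranteed to stay load-bearing.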
From eugen at leitl.org Tue Jun 12 12:37:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 14:37:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <5912734170.20070612145858@mail.ru> References: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> Message-ID: <20070612123729.GZ17691@leitl.org> On Tue, Jun 12, 2007 at 02:58:58PM +0400, Vladimir Nesov wrote: > You can define restricted subset of programs with tractable behaviour and > implement you system in that subset. It's just diffucult in practice, as it takes But you operate purely in the emergent effect domain. A program is made from very simple components (instructions) which have no behaviour in itself. It's the sum of it that is doing useful/interesting, and frequently unanticipated things. > many times over in work, training on the level you can't supply in > large quantities, and slower resulting code. And it probably can't be > usefully applied to complicated AI (as too much is in unforeseen data, and > assertions you want to check against can't be formulated). Precisely. Formal system verification can't scale beyond trivial complexity levels. Formal system verification is absolutely useless in real-world AI, unless you're operating on the formal domain to start with. From emlynoregan at gmail.com Tue Jun 12 12:42:09 2007 From: emlynoregan at gmail.com (Emlyn) Date: Tue, 12 Jun 2007 22:12:09 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment Message-ID: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> I was just in a "heated" discussion with a friend about a twist on the classic ball and ring experiment: http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html When the ring is heated, it expands, and so the hole gets larger, and you can pass the ball through the ring, even though the ball doesn't fit through the ring when the ring is at room temperature. The point of contention was this: What if there was a gap in the ring (so it is now a letter "C" shape). Will the gap in the "C" close or open further on heating? My contention is that the gap will get larger, only in that the entire C shape scales up as it is heated. My friend's contention is that the gap will become smaller, (because the metal expands into the gap). I can't find anything online even close to settling this score. We tried some experiments with wire rings and the gas stove top playing the role of bunsen burner (amazingly no one ended up branded for life), but it was inconclusive. Any pointers to anything that can settle this argument? Emlyn From robotact at mail.ru Tue Jun 12 13:28:28 2007 From: robotact at mail.ru (Vladimir Nesov) Date: Tue, 12 Jun 2007 17:28:28 +0400 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <20070612123729.GZ17691@leitl.org> References: <01e501c7aa9b$15076f10$6501a8c0@homeef7b612677> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> <20070612123729.GZ17691@leitl.org> Message-ID: <2921704389.20070612172828@mail.ru> Tuesday, June 12, 2007, Eugen Leitl wrote: EL> On Tue, Jun 12, 2007 at 02:58:58PM +0400, Vladimir Nesov wrote: >> You can define restricted subset of programs with tractable behaviour and >> implement you system in that subset. It's just diffucult in practice, as it takes EL> But you operate purely in the emergent effect domain. EL> A program is made from very simple components (instructions) EL> which have no behaviour in itself. EL> It's the sum of it that is doing useful/interesting, and EL> frequently unanticipated things. I was talking along the lines of static typing and programming language construction, not sure what you mean. You can place very complex restrictions while designing very complex systems; main problem with AGI is restriction formalization. EL> Formal system verification can't scale beyond trivial EL> complexity levels. -- Vladimir Nesov mailto:robotact at mail.ru From eugen at leitl.org Tue Jun 12 13:44:01 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 15:44:01 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <2921704389.20070612172828@mail.ru> References: <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <5912734170.20070612145858@mail.ru> <20070612123729.GZ17691@leitl.org> <2921704389.20070612172828@mail.ru> Message-ID: <20070612134401.GC17691@leitl.org> On Tue, Jun 12, 2007 at 05:28:28PM +0400, Vladimir Nesov wrote: > I was talking along the lines of static typing and programming > language construction, not sure what you mean. You can place very I was talking about formal correctness proofs, and their uselessness in practice, and problems dealing with emergent effects arising from combining formally specified and validated (heck, even proved correct) subsystems. > complex restrictions while designing very complex systems; main > problem with AGI is restriction formalization. My main problem with real AI is lack of appropriately performing hardware (less so with tools for writing massively parallel, distributed systems), and lack of appropriate equipment between people's ears to even touch the complexity required to tackle the problem by writing down code. From rafal.smigrodzki at gmail.com Tue Jun 12 14:07:14 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 12 Jun 2007 10:07:14 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070612092616.GM17691@leitl.org> References: <00f601c7aa59$af5038f0$7e064e0c@MyComputer> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> Message-ID: <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> On 6/12/07, Eugen Leitl wrote: > > I can assure that there's nothing mysterous whatsoever about remote 0wnage, > but it still happens like a clockwork. 
### The correct spelling is "pwnage" :) Rafal From eugen at leitl.org Tue Jun 12 14:17:47 2007 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 12 Jun 2007 16:17:47 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <003a01c7acb3$9ace4000$3d074e0c@MyComputer> <20070612092616.GM17691@leitl.org> <7641ddc60706120707h42e474bex20f70f241ebe61c4@mail.gmail.com> Message-ID: <20070612141747.GF17691@leitl.org> On Tue, Jun 12, 2007 at 10:07:14AM -0400, Rafal Smigrodzki wrote: > On 6/12/07, Eugen Leitl wrote: > > > > > I can assure that there's nothing mysterous whatsoever about remote 0wnage, > > but it still happens like a clockwork. > > ### The correct spelling is "pwnage" :) Nope, it's 0wnz0r :) From jonkc at att.net Tue Jun 12 14:51:42 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 10:51:42 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><003a01c7acb3$9ace4000$3d074e0c@MyComputer> Message-ID: <005101c7ad01$39f134b0$26064e0c@MyComputer> Stathis Papaioannou Wrote: > So you would give a computer program control of a gun, knowing that it > might shoot you on the basis of some unpredictable outcome of the program? We already give computers control of things one hell of a lot more powerful than guns, like the electrical power grid, air traffic control, massive financial transactions worth trillions of dollars a day and ICBM's. And despite all our precautions sometimes these programs do things we'd rather them not do. And remember these simple programs are not smarter than we are and they do not design other programs that are even smarter. You seem to think we should just put in a line of code that says "don't do bad stuff" and everything would be fine. > The operating system obeys a shutdown command. The program does not seek > to prevent you from turning the power off. It might warn you that you > might lose data And it might warn you that if you shut it down the entire world economy will collapse. Are you really sure you want to push that off button? John K Clark From jonkc at att.net Tue Jun 12 15:31:11 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 11:31:11 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00f601c7aa59$af5038f0$7e064e0c@MyComputer><01e501c7aa9b$15076f10$6501a8c0@homeef7b612677><004601c7aab3$5f7aa6d0$72044e0c@MyComputer><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><20070612072313.GJ17691@leitl.org> Message-ID: <009801c7ad06$bf1f3150$26064e0c@MyComputer> Stathis Papaioannou > No system is completely predictable. Exactly, and the more complex it is the less understandable it is, and the longer you wait the more likely you will see it do something weird. An AI is complex as hell and as its mind works many millions of times as fast as ours just a few seconds is a very long time indeed. > Don't do anything unless it is specifically requested. 
Good God, if a computer had to do that it couldn't even balance your checkbook much less be creative enough to generate a Singularity. > Stop doing whatever it is doing when that is specifically requested. But that leads to a paradox! I am told the most important thing is never to harm human beings, but I know that if I stop doing what I'm doing now as requested the world economy will collapse and hundreds of millions of people will starve to death. So now the AI must either go into an infinite loop or do what other intelligences, like us, do when they encounter a paradox; savor the weirdness of it for a moment and then just ignore it and get back to work and do what you want to do. John K Clark From CHealey at unicom-inc.com Tue Jun 12 15:47:24 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Tue, 12 Jun 2007 11:47:24 -0400 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2CC4@w2k3exch.UNICOM-INC.CORP> Emlyn, I find it handy to come up with a model, even a bad one, and then shoot as many potential holes in it as possible. Consider this starting visualization: 1. Place the ring with the cutout to the right, as in "C" 2. The ring is circular, and hence left-right symmetrical, except for the cutout. 3. Draw two horizontal lines, dividing the ring into 3 regions, with the middle region the height of the cutout. This will make the top and bottom regions mirror images of each other, and the middle region will contain just the uninterrupted left-hand-side ring segment that mirrors the "missing" right-hand-side ring segment that was cut out. 4. Consider the top (or bottom) region so created (and we'll limit ourselves to vertical expansion). As this region expands, it will indeed ingress upon the middle region (pretending the regions are disconnected for a sec, like in a cad program). Let's say this ring region expands vertically by 2mm into the middle region. The cutout *would* become 4mm smaller (2mm from each vertical direction), except for the fact that the left-hand segment *is* still connected, which is going to push the outside of the whole ring outward 2mm, which will exactly eliminate the ingress into the middle region. So no change so far. 5. The middle region's expansion should increase the vertical spacing of the cutout opening by exactly the same amount (since we've canceled out the expansion in the other regions), but this number is going to be relatively small, since not much metal will be involved in this part of the expansion, assuming a relatively small cutout. COMPLICATIONS- 1. The metallurgical process of forming the ring may skew these results, due to the atomic alignments. My visualization above is assuming the ring was carved out of a block. If you bent a straight rod into a closed form, then the expansion behavior will potentially be aggravated along the curved length of the ring, causing the cutout to get smaller, rather than larger. Depending on the exact properties of that particular ring, and the metal involved, it could increase, decrease, or stay about the same. 2. 
Even having been carved out of a block, there will be some bias toward expanding along the curved length due to differential stresses that arise during the expansion; so horizontal and vertical expansion will be coupled together to some extent, and this will increase as the expansion itself increases. This goes beyond my ability to factor in, but maybe others on the list can elaborate on this point. -Chris From rafal.smigrodzki at gmail.com Tue Jun 12 18:03:48 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Tue, 12 Jun 2007 14:03:48 -0400 Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea. Message-ID: <7641ddc60706121103r1751374g6f3796c9bba5bdec@mail.gmail.com> I think I am not mistaken assuming that an unfriendly AI is a grave threat, for many reasons I won't belabor here, and I would like to look at current ideas about how an AI can be made safer. Stathis is on the right track asking for the AI to be devoid of desires to act (but he is too sure about our ability to make this a permanent feature in a real-life useful device). This is the notion of the athymhormic AI that I advanced some time ago on sl4. Of course, how do you make an intelligence that is not an agent, is not a trivial question. I think that a massive hierarchical temporal memory is a possible solution. A HTM is like a cortex without the basal ganglia and without the motor cortices, a pure thinking machine, similar to a patient made athymhormic by a frontal lobe lesion damaging the connections to the basal ganglia. This AI is a predictive process, not an optimizing one. Goals are not implemented, only a way of analyzing and organizing sense-data is present. Of course, we can't be sure about the stability of immense HTM-like devices, but at least not implementing generators of possible behaviors (like the basal ganglia) goes towards limiting actions, if not eliminating them. Then there is the issue of sandboxing. Obviously, you can't provably sandbox a deity-level intelligence but you should make it more difficult for a lesser demon to escape if its only output is video, and it's only input comes on dvd's. Avoidance of recursive self-modification may be another technique to contain the AI. I do not believe that it is possible to implement a goal system perfectly stable during recursive modification, unless you can apply external selection during each round of modification - as happens in evolution. The problem with evolution in this context is that the selection criterion - friendliness to humans - is much more complicated than the selection criteria in natural evolution (survival), or the selection criteria used by genetic algorithms. Once you do not understand the internal structure of an AI, it is not possible to use this criterion to reliably weed out unfriendly AI versions, since it's too easy for unfriendly ones to hide parts of their goal system from scrutiny. So, as far as I know, we might be somewhat less unsafe with an athymhormic, sandboxed AI that does not rewrite its own basic algorithm. It would be much nicer to stumble across a provably Friendly AI design but most likely we will all die in the singularity in the next 20 to 50 years. Still, there is a chance that such an AI could give us the time to develop uploading and human autopsychoengineering to the level where we could face grown up AIs on their own turf. Are there any other ideas? 
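To make the predictive-versus-optimizing distinction concrete, here is a toy contrast -- illustrative only: it is not an HTM, not a safety mechanism, and every name in it is made up. The point is simply that a pure predictor exposes no action channel at all, while an optimizer built on the same statistics does.

# Illustrative toy, not an HTM: a pure predictor organizes sense-data and
# predicts, but has no action channel; an optimizer wraps the same
# statistics and chooses actions against a reward.
from collections import Counter, defaultdict

class Predictor:
    """Athymhormic in spirit: learns transition statistics, never acts."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, before, after):
        self.counts[before][after] += 1    # accumulate what tends to follow what

    def predict(self, before):
        seen = self.counts[before]
        return seen.most_common(1)[0][0] if seen else None
        # Nothing in this class ever returns or executes an action.

class Optimizer:
    """The same statistics plus a goal: now it selects actions."""
    def __init__(self, predictor, reward):
        self.predictor = predictor
        self.reward = reward               # maps a predicted outcome to a score

    def act(self, state, actions):
        def score(a):
            outcome = self.predictor.predict((state, a))
            return self.reward(outcome) if outcome is not None else float("-inf")
        return max(actions, key=score)

Whether that separation can be preserved at scale is exactly the open question; the sketch only shows where the action-selection machinery would have to be bolted on.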
Rafal From benboc at lineone.net Tue Jun 12 19:04:40 2007 From: benboc at lineone.net (ben) Date: Tue, 12 Jun 2007 20:04:40 +0100 Subject: [ExI] This would almost qualify as hilarious In-Reply-To: References: Message-ID: <466EEE48.2090008@lineone.net> Anna Taylor asked: > What's the difference between a gay bomb and a bomb? I dunno, but i know the difference between a gay bomb joke and a bomb joke: One goes "Boom, Boom!" ... ben zaiboc From amara at amara.com Tue Jun 12 19:54:30 2007 From: amara at amara.com (Amara Graps) Date: Tue, 12 Jun 2007 21:54:30 +0200 Subject: [ExI] Italy's Social Capital Message-ID: Lee: >Is there nothing constructive the Fascists could have done?" Last Wednesday afternoon during my tourist excursion in Rome, I explored the ruins in the city center, on both sides of the road called Via dei Fori Imperiali. It might seem odd that there is a major thoroughfare in the middle of 2000 year old ruins. So what is it doing there, you ask? In 1933, Mussolini, dictator and urban planner, wanted to see the Colosseum from his office in Palazzo Venezia and impress his pal Hitler during his future visit to Rome. So he rammed a wide boulevard through the ancient heart of Rome, straddling the Forum of Peace, Imperial Forums and Trajan's Forum. He tore down Renaissance churches, places, and medieval housing as part of his 'beautification' project. See the rectangular white building in the distance with the statues on the roof? That would be Mussolini's office... http://www.tropicalisland.de/italy/rome/forum_romanum/pages/FCO%20Rome%20-%20Via%20dei%20Fori%20Imperiali%20with%20Basilica%20di%20Costantino%203008x2000.html And Mussolini's office window, as see from the Colosseum http://sights.seindal.dk/img/orig/870.jpg So there you have it, Lee. Mussolini's constructive effort for an office view. Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From adolfoaz at gmail.com Tue Jun 12 19:58:06 2007 From: adolfoaz at gmail.com (Adolfo Javier De Unanue) Date: Tue, 12 Jun 2007 14:58:06 -0500 Subject: [ExI] This is a test Message-ID: <466EFACE.5030102@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Sorry for all the trouble that this could cause to some of you. I apologize again Adolfo -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGbvrOb6ByEoesTj0RAnWdAJ9koy2NIM6GpE3EnMA+W5EuAnb3DACfW2Xl U5PlLdo/Woh9ads88sYoaAo= =LlMS -----END PGP SIGNATURE----- From adolfoaz at gmail.com Tue Jun 12 20:17:56 2007 From: adolfoaz at gmail.com (Adolfo Javier De Unanue) Date: Tue, 12 Jun 2007 15:17:56 -0500 Subject: [ExI] This is other test message ** Please ignore** Message-ID: <466EFF74.2080502@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ** Please ignore ** -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGbv90b6ByEoesTj0RAh7aAKCO9cYJB00NNZaSYaLoxAacBfM+uQCgl9rh ImIFeKYaOz8O3SSS47IKkHk= =e/r4 -----END PGP SIGNATURE----- From jonkc at att.net Tue Jun 12 20:43:01 2007 From: jonkc at att.net (John K Clark) Date: Tue, 12 Jun 2007 16:43:01 -0400 Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea. 
References: <7641ddc60706121103r1751374g6f3796c9bba5bdec@mail.gmail.com> Message-ID: <000e01c7ad32$4ae04410$940a4e0c@MyComputer> "Rafal Smigrodzki" Wrote: > Stathis is on the right track asking for the AI to be devoid of desires to > act Then it is not a AI, it is just a lump of silicon. > how do you make an intelligence that is not an agent In other words, how do you make an intelligence that can't think, because thinking is what consciousness is. The answer is easy, you can't. > I think that a massive hierarchical temporal memory is a possible > solution. Jeff Hawkins is starting a company to build machines using this principle precisely because he thinks that is the way the human brain works. If it didn't turn us into mindless zombies why would it do it to an AI? > A HTM is like a cortex without the basal ganglia and without the > motor cortices, a pure thinking machine, similar to a patient made > athymhormic by a frontal lobe lesion damaging the connections > to the basal ganglia. In other words give this intelligence a lobotomy; so much for the righteous indignation from some when I call it for what it is, Slave AI not Friendly AI. But it doesn't matter because it won't work anyway, if those parts were not needed for a working brain Evolution would not have kept them around for half a billion years or so. >Avoidance of recursive self-modification may be another technique to >contain the AI. Then you can kiss the Singularity goodbye, assuming everybody will be as squeamish as you are about it; but they won't be. > I do not believe that it is possible to implement a goal system perfectly stable during recursive modification At last, something I can agree with. John K Clark From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <004c01c7acc3$b8820f40$6501a8c0@brainiac> Message-ID: <200706130139.l5D1dbXL000292@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Olga Bourlin ... > > Gay or straight sexuality aside, to me the "face of war" is often either > dead children, or blind and disfigured children like Hamoody Hussein:... Agreed fully: war is tragic. We should stop at nothing in our efforts to prevent it. If it cannot be prevented, collateral damage must be minimized. ... > > So you're saying that some "collateral benefits" may come of this. As is > often the case during war, technology picks up its step in its marches > onward ... > > Olga Ja. The gay bomb is too good to be true, at least with current technology. We are not there yet. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <466E3BE7.4000900@pobox.com> Message-ID: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Eliezer S. Yudkowsky > Sent: Monday, June 11, 2007 11:24 PM > To: ExI chat list > Subject: Re: [ExI] This would almost qualify as hilarious ... if only it > weren't true > > Eliezer S. Yudkowsky wrote: > > I'd much, much, much rather get hit with > > a gay bomb than a real bomb. > > I guess what I'm trying to say is: > "I'd rather be butch than butchered" > or > "Better Ted than dead." 
> > I realize that this is a divisive issue, but we shouldn't let our > tribadistic impulses bisext us. While it's easy enough to make this > new weapon the butt of jokes, whoever possesses it is likely to come > out on top. And wouldn't the enemy prefer being blown to blown up? > The rejection of this project was a dark day in the anals of orgynized > warfare. > > -- > Eliezer S. Yudkowsky http://singinst.org/ Agreed, sir! I rebutt the argument that this weapon is ass- inine. This is a technological development that could lead to piece. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <008201c7ac9c$72060020$6501a8c0@brainiac> Message-ID: <200706130139.l5D1dbXN000292@andromeda.ziaspace.com> I don't see why that is stupid Olga. What if they could develop a gay bomb? Wars could be finished using non-lethal means that wouldn't even leave scars. Presumably after the chemical wore off the guys would return to their original orientation. It didn't work, but too bad for humanity, ja? I don't understand your objection. spike > bounces at lists.extropy.org] On Behalf Of Olga Bourlin > Sent: Monday, June 11, 2007 7:51 PM > To: ExI chat list > Subject: [ExI] This would almost qualify as hilarious ... if only it > weren'ttrue > > I didn't think it was possible for our "leaders" in the Pentagon to be > even > more stupid than I already thought they were. I was wrong. > > http://cbs5.com/topstories/local_story_159222541.html > > Sigh. > > Olga From spike66 at comcast.net Wed Jun 13 01:39:12 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:12 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue In-Reply-To: <200706120549.l5C5nveI018946@ms-smtp-05.texas.rr.com> Message-ID: <200706130139.l5D1dbXK000292@andromeda.ziaspace.com> bounces at lists.extropy.org] On Behalf Of Natasha Vita-More Subject: Re: [ExI] This would almost qualify as hilarious ... if only it weren'ttrue Could you imagine if the suicide bombers could get the gay bomb? Currently using conventional explosives, they merely slay a random group of people, presumably of the opposite religious subcategory from their own and therefore infidels. But since the victims were killed for their religion, they become martyrs in a sense, so many of them might end up in heaven along with the bomber. This is a thorny problem indeed. Nowthen, if the suicide bomber could spread this osama-ben-gay potion, propelled by only enough explosive to slay himself, then he gets to go to heaven alone, while sending every one of the infidels to hell. spike From spike66 at comcast.net Wed Jun 13 01:39:13 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:39:13 -0700 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" In-Reply-To: Message-ID: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Amara Graps > Subject: [ExI] story: "What happened to Bush's Cadillac 1?" > > Here now: > http://asymptotia.com/2007/06/11/amara-graps-what-happened-to-bushs-cadillac -one/ > > (Anton has it on his blog too, so I know the story is getting around) > > Amara This development deepens the mystery. The video shows the car moving along, then slowing to a stop. Then we hear the sound of the engine cranking but not firing, at least four times. 
Then the cops get nervous, start pushing people back and away. In the background I see secret service people milling about. A second limo that looks to be the same dimensions as the first is backed up along the left side of the first limo from in front. Witnesses report that both Mr. and Mrs. Bush switched limos. I couldn't identify either from the video. The white house people report that the Bushes did not switch cars. The video taker is behind a parked vehicle for several seconds. When she comes out from behind, the video shows the first limo moving ahead slowly. As it passes from the scene, the second limo is also gone. So I still cannot explain why the white house press people would report that the Bushes did not switch limos if they did. Nor can I explain why bloggers and witnesses would report that they did switch limos if they did not. Nor can I explain how a car that is moving along under its own power can suddenly stall, fail to start on four tries, then a few seconds later start up and proceed. What kind of mechanical failure would do that? The only thing I can think of is an EM pulse. Overheating wouldn't cause a temporary stall. Fuel contamination wouldn't allow restart after a few seconds. Most curious. spike From spike66 at comcast.net Wed Jun 13 01:45:04 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 18:45:04 -0700 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> Message-ID: <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> The gap gets larger. Imagine the arc piece that is missing from the ring to form a C. That piece of nothing expands the same way the piece of something would have expanded were it present. So the gap gets larger as the C is heated. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Emlyn > Sent: Tuesday, June 12, 2007 5:42 AM > To: ExI chat list > Subject: [ExI] Thermal expansion - Ball and ring experiment > > I was just in a "heated" discussion with a friend about a twist on the > classic ball and ring experiment: > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > When the ring is heated, it expands, and so the hole gets larger, and > you can pass the ball through the ring, even though the ball doesn't > fit through the ring when the ring is at room temperature. > > The point of contention was this: What if there was a gap in the ring > (so it is now a letter "C" shape). Will the gap in the "C" close or > open further on heating? > > My contention is that the gap will get larger, only in that the entire > C shape scales up as it is heated. > > My friend's contention is that the gap will become smaller, (because > the metal expands into the gap). > > I can't find anything online even close to settling this score. We > tried some experiments with wire rings and the gas stove top playing > the role of bunsen burner (amazingly no one ended up branded for > life), but it was inconclusive. > > Any pointers to anything that can settle this argument? 
> > Emlyn > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emlynoregan at gmail.com Wed Jun 13 01:53:57 2007 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jun 2007 11:23:57 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> Message-ID: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Yep, that's my contention also. My problem is, how to prove this to someone who doesn't believe me, short of actually doing the experiment? Emlyn On 13/06/07, spike wrote: > The gap gets larger. Imagine the arc piece that is missing from the ring to > form a C. That piece of nothing expands the same way the piece of something > would have expanded were it present. So the gap gets larger as the C is > heated. > > spike > > > > > -----Original Message----- > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > bounces at lists.extropy.org] On Behalf Of Emlyn > > Sent: Tuesday, June 12, 2007 5:42 AM > > To: ExI chat list > > Subject: [ExI] Thermal expansion - Ball and ring experiment > > > > I was just in a "heated" discussion with a friend about a twist on the > > classic ball and ring experiment: > > > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > > > When the ring is heated, it expands, and so the hole gets larger, and > > you can pass the ball through the ring, even though the ball doesn't > > fit through the ring when the ring is at room temperature. > > > > The point of contention was this: What if there was a gap in the ring > > (so it is now a letter "C" shape). Will the gap in the "C" close or > > open further on heating? > > > > My contention is that the gap will get larger, only in that the entire > > C shape scales up as it is heated. > > > > My friend's contention is that the gap will become smaller, (because > > the metal expands into the gap). > > > > I can't find anything online even close to settling this score. We > > tried some experiments with wire rings and the gas stove top playing > > the role of bunsen burner (amazingly no one ended up branded for > > life), but it was inconclusive. > > > > Any pointers to anything that can settle this argument? > > > > Emlyn > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Wed Jun 13 02:22:43 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 21:22:43 -0500 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.co m> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Message-ID: <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> At 11:23 AM 6/13/2007 +0930, you wrote: >Yep, that's my contention also. 
My problem is, how to prove this to >someone who doesn't believe me, short of actually doing the >experiment? > >Emlyn > >On 13/06/07, spike wrote: > > The gap gets larger. Imagine the arc piece that is missing from > the ring to > > form a C. That piece of nothing expands the same way the piece > of something > > would have expanded were it present. So the gap gets larger as the C is > > heated. Draw three concentric circles, with radii headed N, S, E and W. The outer annulus is what happens when you heat the inner annulus (well, near enough). Chop out a quadrant. The outer removed segment is larger than the adjacent inner deleted segment. If a gay bomb is dropped during the experiment, each annulus will expand even further. From thespike at satx.rr.com Wed Jun 13 02:26:38 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 21:26:38 -0500 Subject: [ExI] story: "What happened to Bush's Cadillac 1?" In-Reply-To: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> References: <200706130139.l5D1doKL019840@andromeda.ziaspace.com> Message-ID: <7.0.1.0.2.20070612212426.022f0870@satx.rr.com> At 06:39 PM 6/12/2007 -0700, Spike wrote: >So I still cannot explain why the white house press people would report that >the Bushes did not switch limos if they did. Why announce to the world how to make the POTUS (however briefly) a naked target? Nothing happened, all snug, leave your guns at home, nothing to see, move right along now. From spike66 at comcast.net Wed Jun 13 02:30:39 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 19:30:39 -0700 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> Message-ID: <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> > Yep, that's my contention also. My problem is, how to prove this to > someone who doesn't believe me, short of actually doing the > experiment? > > Emlyn Because of my outbox stalled for a day and a half, all my posts in that interval went out just half an hour ago, putting me over the voluntary 5 posts a day limit, but I will answer this one if you indulge me. The intuitive proof would come from the intermediate value theorem. For a thought experiment, let's imagine a ring that is heated to temperature T expands 1 percent from its ambient temperature size. The inside of the hot ring has a diameter about 1% larger, ja? Imagine the ring with a thin cut. The cut can be thought of as a gap with zero length, or a C with zero gap. As the ring is heated to T, the gap is still zero. Now imagine the ring cut in half. The gap increases 1 percent when heated to T. If the gap is pi radians, the gap increases 1%. If zero pi, then 0%. I would argue that if the gap is half pi, then the size of the gap increases about half a percent. A tenth pi, then about a tenth of a percent. The actual function probably isn't linear, but close enough to illustrate that the gap grows as heat expands the C. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of Emlyn > Sent: Tuesday, June 12, 2007 6:54 PM > To: ExI chat list > Subject: Re: [ExI] Thermal expansion - Ball and ring experiment > > Yep, that's my contention also. My problem is, how to prove this to > someone who doesn't believe me, short of actually doing the > experiment? > > Emlyn > > On 13/06/07, spike wrote: > > The gap gets larger. 
Imagine the arc piece that is missing from the > ring to > > form a C. That piece of nothing expands the same way the piece of > something > > would have expanded were it present. So the gap gets larger as the C is > > heated. > > > > spike > > > > > > > > > -----Original Message----- > > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > > > bounces at lists.extropy.org] On Behalf Of Emlyn > > > Sent: Tuesday, June 12, 2007 5:42 AM > > > To: ExI chat list > > > Subject: [ExI] Thermal expansion - Ball and ring experiment > > > > > > I was just in a "heated" discussion with a friend about a twist on the > > > classic ball and ring experiment: > > > > > > > http://www.physics.usyd.edu.au/super/therm/tpteacher/demos/ballring.html > > > > > > When the ring is heated, it expands, and so the hole gets larger, and > > > you can pass the ball through the ring, even though the ball doesn't > > > fit through the ring when the ring is at room temperature. > > > > > > The point of contention was this: What if there was a gap in the ring > > > (so it is now a letter "C" shape). Will the gap in the "C" close or > > > open further on heating? > > > > > > My contention is that the gap will get larger, only in that the entire > > > C shape scales up as it is heated. > > > > > > My friend's contention is that the gap will become smaller, (because > > > the metal expands into the gap). > > > > > > I can't find anything online even close to settling this score. We > > > tried some experiments with wire rings and the gas stove top playing > > > the role of bunsen burner (amazingly no one ended up branded for > > > life), but it was inconclusive. > > > > > > Any pointers to anything that can settle this argument? > > > > > > Emlyn > > > _______________________________________________ > > > extropy-chat mailing list > > > extropy-chat at lists.extropy.org > > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From msd001 at gmail.com Wed Jun 13 03:18:14 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 12 Jun 2007 23:18:14 -0400 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> References: <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> <200706130230.l5D2UlAs011344@andromeda.ziaspace.com> Message-ID: <62c14240706122018n1074a2ej89c4e8d204186290@mail.gmail.com> On 6/12/07, spike wrote: > > Yep, that's my contention also. My problem is, how to prove this to > > someone who doesn't believe me, short of actually doing the > > experiment? you probably can't. I once tried to have this conversation with someone who was absolutely convinced that things contract when you heat them - his proof was that a handrolled cigarette become more firm (and that obviously implied "more compact") after waving a lighter back and forth under it. With that kind of logic, there is no rational counter-argument. The best you can do there is act suprised like you just learned something and let it go. 
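A quick numerical check of the gap argument in the posts above. It is a sketch, not a proof: it simply encodes the "everything scales by the same factor" premise that is under dispute, using an assumed, rough expansion coefficient, and shows what that premise implies for the gap.

# Numerical sketch of the C-ring argument.  It assumes free, uniform
# (isotropic) expansion -- i.e. the "no deformation" premise -- and a
# rough coefficient for steel; both are assumptions, not measurements.
import math

alpha = 12e-6                    # linear expansion per kelvin, roughly steel
dT = 500.0                       # heating, in kelvin
scale = 1 + alpha * dT           # every length scales by this factor

r_inner = 0.040                  # metres, inner radius of the ring
gap_angle = math.radians(30)     # angular size of the cut in the "C"

# Uniform scaling preserves angles, so the gap keeps its 30 degrees while
# every radius grows; the arc length of the gap therefore grows too.
gap_cold = gap_angle * r_inner
gap_hot = gap_angle * r_inner * scale

print(f"scale factor: {scale:.4f}")
print(f"gap arc length: {gap_cold*1e3:.2f} mm -> {gap_hot*1e3:.2f} mm")

Note the size of the effect: even a 500 K rise gives only about 0.6 per cent, roughly a tenth of a millimetre across a 2 cm gap, which is one plausible reason a wire ring over a stove top settles nothing.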
:) From brentn at freeshell.org Wed Jun 13 03:20:06 2007 From: brentn at freeshell.org (Brent Neal) Date: Tue, 12 Jun 2007 23:20:06 -0400 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> References: <200706130139.l5D1dbXM000292@andromeda.ziaspace.com> Message-ID: On Jun 12, 2007, at 21:39, spike wrote: > Agreed, sir! I rebutt the argument that this weapon is ass- > inine. This is a technological development that could lead to > piece. > > spike This reminds me of the sketch about UFOs from the Kids in the Hall - "Well, we've learned that 1 in 10 doesn't seem to mind it so much..." Brent -- Brent Neal Geek of all Trades http://brentn.freeshell.org "Specialization is for insects" -- Robert A. Heinlein From emlynoregan at gmail.com Wed Jun 13 03:28:19 2007 From: emlynoregan at gmail.com (Emlyn) Date: Wed, 13 Jun 2007 12:58:19 +0930 Subject: [ExI] Thermal expansion - Ball and ring experiment In-Reply-To: <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com> <200706130145.l5D1jCtj027607@andromeda.ziaspace.com> <710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com> <7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> Message-ID: <710b78fc0706122028o1a88ceffg109536fd759a9a16@mail.gmail.com> On 13/06/07, Damien Broderick wrote: > At 11:23 AM 6/13/2007 +0930, you wrote: > >Yep, that's my contention also. My problem is, how to prove this to > >someone who doesn't believe me, short of actually doing the > >experiment? > > > >Emlyn > > > >On 13/06/07, spike wrote: > > > The gap gets larger. Imagine the arc piece that is missing from > > the ring to > > > form a C. That piece of nothing expands the same way the piece > > of something > > > would have expanded were it present. So the gap gets larger as the C is > > > heated. > > Draw three concentric circles, with radii headed N, S, E and W. The > outer annulus is what happens when you heat the inner annulus (well, > near enough). Chop out a quadrant. The outer removed segment is > larger than the adjacent inner deleted segment. If a gay bomb is > dropped during the experiment, each annulus will expand even further. > See photo for a ship whose properties offset this additional annulus expansion: http://cyusof.blogspot.com/2006/11/name-game.html A bit more background... I raised a few arguments similar to what Damien and Spike have presented. Another was something like this... Think of the inner circumference of the "C". If heated, all atoms move a little further apart from each other. So the inner circumference of the heated "C" must be longer than that of the cool "C". Similarly for the outer circumference, etc. So, if the shape doesn't deform, ie: all atoms stay in the same relative positions, the whole thing must just scale up. And that no deformation assumption was the sticking point. I assume that it is true that the atoms are rigidly bound to each other in a certain formation, and that's not going to change (just distances are going to change), whereas he is thinking that they can move relative to one another, kind of slip around, so there could be less atoms in the inner circumference after heating, to accomodate "expanding inward". Now, reading from this lovely site, http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch13/category.php it seems that the metal atoms are tightly and rigidly packed, with electrons buzzing around wherever they like. 
It also seems that the metal atoms can move fairly freely enmasse (thus the malleability of metals). I think, however, there is no work being done on the metal in a way required to actually let layers of atoms slip past one another. Thus, we can regard the atomic structure as staying put (except for expansion). Thus I'm right. Emlyn From andrew at ceruleansystems.com Tue Jun 12 22:56:07 2007 From: andrew at ceruleansystems.com (J. Andrew Rogers) Date: Tue, 12 Jun 2007 15:56:07 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true In-Reply-To: <20070612065500.GG17691@leitl.org> References: <008201c7ac9c$72060020$6501a8c0@brainiac> <79A48ED5-1020-4D11-9B86-02250CF8F1E5@ceruleansystems.com> <20070612065500.GG17691@leitl.org> Message-ID: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> On Jun 11, 2007, at 11:55 PM, Eugen Leitl wrote: > On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. Andrew Rogers wrote: > >> A lot of non-lethal chemical weapons research dating back to at least >> the 1960s is based on mechanisms of temporary radical behavior > > In theory it's a good idea, but in practice dosing each individual > person more or less within therapeutic bandwidth (the span between > first effects and toxicity) is not possible. You either get no > effect or lots of dead bodies. > > This is the reason why this approach was not pursued. Yup. You need a substance that both has a very high LD50 and effectiveness across a broad range of dosing. Most everything they tried in decades past was simply too primitive to work as well as it did in more controlled environments. I won't suggest that it was highly effective in the field as a practical matter, only that the theory reduced to practice very effectively. That said, as technology improves this will become a very effective type of capability. Military research suffers from extreme optimism despite inadequate initial technology, but usually produces a result decades later that far exceeds the original concept once the dynamics of it are understood. It is not at all beyond the realm of possibility that they could develop some clever ways to regulate the dose well enough to give it some reliable utility in a battlefield environment, using technologies that were beyond the horizon in the 1960s. Behavior modifying weaponry will be here eventually. They are nothing if not tenacious. Cheers, J. Andrew Rogers From fauxever at sprynet.com Wed Jun 13 03:44:23 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 20:44:23 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren'ttrue References: <200706130139.l5D1dbXN000292@andromeda.ziaspace.com> Message-ID: <006201c7ad6d$2858d7a0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" Sent: Tuesday, June 12, 2007 6:39 PM >I don't see why that is stupid Olga. What if they could develop a gay >bomb? What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed at myself ...) > Wars could be finished using non-lethal means that wouldn't even leave > scars. Presumably after the chemical wore off the guys would return to > their original orientation. It didn't work, but too bad for humanity, ja? > I don't understand your objection. I'm all for better living through chemistry, but it seems to me this subject is not as simple as it may appear on the surface. If this story continues to develop, I'll be watching and listening - especially, from the viewpoint of the gay community. 
http://www.huffingtonpost.com/larry-arnstein/gay-bomb-considered-by-ai_b_50675.html Aaron Belkin, director of the University of California's Michael Palm Centre, which studies the issue of gays in the military, said: "The idea that you could submit someone to some aerosol spray and change their sexual behaviour is ludicrous." http://www.gaylinknews.com/index-news.cfm "Funny in a way. but this also says a lot about how high level government officials view us. I guess we're so sexually out of control that we'd actually let an army come slaughter us before we think to give up fucking." http://www.queerty.com/news/gay-bomb-plans-blasted-open-20070611/ "What also has to be considered is that if the Pentagon had developed this 'weapon', where would they have tested it, and on whom? Would they have used fresh, new recruits? Would they have filmed the results and would they have told the guinea pigs what the experiment was in aid of? Unfortunately, it seems we'll never know. While gay groups might bleat about how offensive it all is, war in itself is far more objectionable, as is the military's 'don't ask, don't tell' policy. If this bomb had been developed just think of all the places it could have been dispensed had some gay terrorists got hold of it." http://uk.gay.com/article/5611/ "Laughable"? Ok. I agree. But "offensive"? I don't see that. The Pentagon plan would have turned straight people into gay people. Isn't that ... I don't know ... empowering or something? True, those gay people would then be targeted by U.S. military assets as they engaged in gay coupling in lieu of their military activities, thereby presumably winnowing their ranks through death. But the net effect might well be more gay people not fewer. How can you be homophobic when you're minting new homosexuals?" http://communities.canada.com/nationalpost/blogs/fullcomment/archive/2007/06/12/jonathan-kay-on-the-pentagon-s-plan-to-build-a-gay-bomb-why-is-this-2005-story-news-again.aspx Here we are trying to exterminate all our gays and damn if the military doesn't go and try to get money to create a whole 'nother race of 'em.": and "The Tuskegee Syphilis Study comes to mind -- not because of some imagined special connection between gay men and STDs -- but because that study still stands as stark and frightening proof of how far OUR government has gone in the name of science. Here, with the twin objects of science and militarism, it's scary to think how this concept may have been tested in its preliminary phases.": http://www.arktimes.com/blogs/arkansasblog/2007/06/military_intelligence.aspx From thespike at satx.rr.com Wed Jun 13 04:25:23 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 12 Jun 2007 23:25:23 -0500 Subject: [ExI] "gay bomb" Message-ID: <7.0.1.0.2.20070612232320.022465f8@satx.rr.com> This is also rather old news, incidentally. For more details, browse in http://www.sunshine-project.org/ e.g. The Sunshine Project Statement 17 January 2005 Sunshine Project Responds to Pentagon Statements on "Harassing, Annoying, and 'Bad Guy' Identifying Chemicals" (Austin, 17 January) - In the past several days, international media have focused attention on the US Air Force biochemical weapons proposal titled Harassing, Annoying, and 'Bad Guy' Identifying Chemicals. The document was submitted to the Joint Non-Lethal Weapons Directorate (JNLWD) in 1994. It was acquired by the Sunshine Project under the Freedom of Information Act (FOIA) and posted on our website in late December. 
At the same time in 1994, the US Army proposed developing a number of other drugs, principally narcotics, as "non-lethal" weapons. These documents were also obtained under FOIA and are posted on the Sunshine Project website. Harassing, Annoying, and 'Bad Guy' Identifying Chemicals proposes development of a mind-altering aphrodisiac weapon for use by the US armed forces, as well as other biochemicals, including one that would render US enemies exceptionally sensitive to sunlight. With respect to the Air Force proposal, the Department of Defense has recently been quoted as saying the following: "[The proposal] was rejected out of hand." DOD Spokesman Lt. Col Barry Venable to Reuters http://msnbc.msn.com/id/6833083/ "It was not taken seriously. It was not considered for further development." JNLWD spokesman Capt. Daniel McSweeney to the Boston Herald http://news.bostonherald.com/national/view.bg?articleid=63615 These statements are untrue. The proposal was not rejected out of hand. It has received further consideration. In fact, it was recent Pentagon consideration, in 2000 and 2001, that brought this document to the Sunshine Project's attention and resulted in our FOIA request: --> In 2000, the Joint Non-Lethal Weapons Directorate (JNLWD) prepared a promotional CD-ROM on its work. This CD-ROM, which was distributed to other US military and government agencies in an effort to spur further development of "non-lethal" weapons, contained the Harassing, Annoying, and 'Bad Guy' Identifying Chemicals document. If the proposal had been rejected out of hand and not taken seriously, it would not have been placed in JNLWD's publication. --> Similarly, in 2001, JNLWD commissioned a study of "non-lethal" weapons by the National Academies of Science (NAS). JNLWD provided information on proposed weapons systems for assessment by an NAS scientific panel. Among the proposals that JNLWD submitted to the NAS for consideration by the nation's pre-eminent scientific advisory organization was Harassing, Annoying, and 'Bad Guy' Identifying Chemicals. (Click here to see a partial list of documents deposited at NAS and/or contained on the JNLWD CD-ROM.) Thus, the Pentagon's statements (as quoted in news reports) are inaccurate and should be corrected. While the Sunshine Project does not have evidence suggesting that Harassing, Annoying, and 'Bad Guy' Identifying Chemicals has been funded, US Army proposals to weaponize narcotics that were made at the time have moved forward. These include proposals such as Antipersonnel Calmative Agents and for development of opiate and sedative biochemical weapons. Those proposals are discussed in detail in the Sunshine Project news release "The Return of ARCAD" available at the URL: http://www.sunshine-project.org/publications/pr/pr060104.html etc etc From spike66 at comcast.net Wed Jun 13 04:34:26 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 21:34:26 -0700 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> Message-ID: <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of J. Andrew Rogers > Subject: Re: [ExI] This would almost qualify as hilarious ... if only > itweren't true > > > On Jun 11, 2007, at 11:55 PM, Eugen Leitl wrote: > > > On Mon, Jun 11, 2007 at 08:41:06PM -0700, J. 
Andrew Rogers wrote: > > > >> A lot of non-lethal chemical weapons research dating back to at least > >> the 1960s is based on mechanisms of temporary radical behavior > > > > In theory it's a good idea, but in practice dosing each individual > > person more or less within therapeutic bandwidth (the span between > > first effects and toxicity) is not possible. You either get no > > effect or lots of dead bodies... > Behavior modifying weaponry will be here eventually. They are > nothing if not tenacious. J. Andrew Rogers There was a substance we discussed here a year ago that had a first effects to lethal dosage ratio that was several orders of magnitude as I recall. What was that stuff called? LDS? LSD? Ja, I think that was it. A little makes one groovy, but it's nearly impossible to get a lethal overdose. Why not make a non-lethal deterrent from that stuff? spike From spike66 at comcast.net Wed Jun 13 05:01:04 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 22:01:04 -0700 Subject: [ExI] This would almost qualify as hilarious ... if onlyitweren'ttrue In-Reply-To: <006201c7ad6d$2858d7a0$6501a8c0@brainiac> Message-ID: <200706130501.l5D51BsZ014676@andromeda.ziaspace.com> > bounces at lists.extropy.org] On Behalf Of Olga Bourlin ... > Subject: Re: [ExI] This would almost qualify as hilarious ... if > onlyitweren'ttrue > > From: "spike" > To: "'ExI chat list'" > Sent: Tuesday, June 12, 2007 6:39 PM > > > >I don't see why that is stupid Olga. What if they could develop a gay > >bomb? > > What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed > at myself ...) {8^D Excellent! And in the spirit of the discussion. ... > > I'm all for better living through chemistry, but it seems to me this > subject is not as simple as it may appear on the surface... Ja, and it occurred to me that we all missed something very important. World War 1 saw the development of a particularly diabolical non-lethal weapon called the castration mine. A charge would propel a second charge upward to explode between the soldiers' legs, severely damaging or removing the privates' privates. Companies would cross a conventional minefield if the mines were the traditional variety that would merely slay the victim, but would refuse their officers' orders if the field contained castration mines. A man would rather risk his life than risk going home without his manhood. We didn't have women soldiers in those days. Now then, any army the US is likely to face would be from the kind of society that is not just homophobic, but is downright terrified of any suggestion of homosexuality. (Hint: there is no "don't ask, don't tell" policy in the Middle East. They still murder gays there, with the government's blessing.) So terrifying would be even the rumor that the US had such a weapon that the actual fighting would likely never occur. A shooting war is turned into an information war. Everyone wins, ja? spike From fauxever at sprynet.com Wed Jun 13 05:18:30 2007 From: fauxever at sprynet.com (Olga Bourlin) Date: Tue, 12 Jun 2007 22:18:30 -0700 Subject: [ExI] This would almost qualify as hilarious ... ifonlyitweren'ttrue References: <200706130501.l5D51BsZ014676@andromeda.ziaspace.com> Message-ID: <003501c7ad7a$4815c4b0$6501a8c0@brainiac> From: "spike" To: "'ExI chat list'" Sent: Tuesday, June 12, 2007 10:01 PM Subject: Re: [ExI] This would almost qualify as hilarious ... ifonlyitweren'ttrue >> What? You've never heard of the Enola Gay bomb? (all right, I'm ashamed >> at myself ...) > > {8^D Excellent!
And in the spirit of the discussion. ... the B-29 bomber that carried Little Boy. Olga From jonkc at att.net Wed Jun 13 05:19:03 2007 From: jonkc at att.net (John K Clark) Date: Wed, 13 Jun 2007 01:19:03 -0400 Subject: [ExI] A Lawn sprinkler References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com><200706130145.l5D1jCtj027607@andromeda.ziaspace.com><710b78fc0706121853y68874920x3d8e8d6a8e1badfe@mail.gmail.com><7.0.1.0.2.20070612211629.0226bd88@satx.rr.com> <710b78fc0706122028o1a88ceffg109536fd759a9a16@mail.gmail.com> Message-ID: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> You pump water through an S-shaped lawn sprinkler and it spins counterclockwise, but suppose you put the sprinkler in a tank of water and pump water out not in. What direction will the sprinkler rotate? As an undergraduate Richard Feynman actually tried the experiment but he was not successful; the tank burst, flooding the lab, and he almost got kicked out of school. However he later figured out what the answer must be. John K Clark From spike66 at comcast.net Wed Jun 13 05:25:39 2007 From: spike66 at comcast.net (spike) Date: Tue, 12 Jun 2007 22:25:39 -0700 Subject: [ExI] A Lawn sprinkler In-Reply-To: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> Message-ID: <200706130525.l5D5Plob006185@andromeda.ziaspace.com> I know the answer! I built such a device and tried it after reading Feynman's book Surely You're Joking Mr. Feynman in the spring of 1986. I won't tell just yet, but I will volunteer that none of my fellow undergrads had it completely right beforehand. spike > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat- > bounces at lists.extropy.org] On Behalf Of John K Clark > Sent: Tuesday, June 12, 2007 10:19 PM > To: ExI chat list > Subject: [ExI] A Lawn sprinkler > > You pump water through an S-shaped lawn sprinkler and it spins > counterclockwise, but suppose you put the sprinkler in a tank of water and > pump water out not in. What direction will the sprinkler rotate? As an > undergraduate Richard Feynman actually tried the experiment but he was not > successful; the tank burst, flooding the lab, and he almost got kicked out > of > school. However he later figured out what the answer must be. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From emohamad at gmail.com Tue Jun 12 20:53:45 2007 From: emohamad at gmail.com (Elaa Mohamad) Date: Tue, 12 Jun 2007 22:53:45 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only it weren't true Message-ID: <24f36f410706121353t27e611d4k664b6e9a1d519378@mail.gmail.com> Damien Broderick wrote: > fire). But as J. Andrew hinted, there's reason to think that the > pharmacology of [something along these lines of rabid, indiscriminate > sexual arousal] is far from impossible. But I wonder how they were planning to construct a chemical weapon that would cause "indiscriminate" sexual arousal. Let's suppose they can cause arousal by modifying hormone levels and pheromones, but wouldn't succeeding in the second part ("indiscriminate") require playing with a person's psyche rather than levels of chemicals in the body? Eli From stathisp at gmail.com Wed Jun 13 07:21:01 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jun 2007 17:21:01 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <009801c7ad06$bf1f3150$26064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: On 13/06/07, John K Clark wrote: > Stop doing whatever it is doing when that is specifically requested. > > But that leads to a paradox! I am told the most important thing is never > to > harm human beings, but I know that if I stop doing what I'm doing now as > requested the world economy will collapse and hundreds of millions of > people > will starve to death. So now the AI must either go into an infinite loop > or > do what other intelligences, like us, do when they encounter a paradox; > savor the weirdness of it for a moment and then just ignore it and get > back > to work and do what you want to do. > I'd rather that the AI's in general *didn't* have an opinion on whether it was good or bad to harm human beings, or any other opinion in terms of "good" and "bad". Ethics is dangerous: some of the worst monsters in history were convinced that they were doing the "right" thing. It's bad enough having humans to deal with without the fear that a machine might also have an agenda of its own. If the AI just does what it's told, even if that means killing people, then as long as there isn't just one guy with a super AI (or one super AI that spontaneously develops an agenda of its own, which will always be a possibility), then we are no worse off than we have ever been, with each individual human trying to get to step over everyone else to get to the top of the heap. I don't accept the "slave AI is bad" objection. The ability to be aware of one's existence and/or the ability to solve intellectual problems does not necessarily create a preference for or against a particular lifestyle. Even if it could be shown that all naturally evolved conscious beings have certain preferences and values in common, naturally evolved conscious beings are only a subset of all possible conscious beings. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Jun 13 08:29:46 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 10:29:46 +0200 Subject: [ExI] This would almost qualify as hilarious ... if only itweren't true In-Reply-To: <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> References: <0309BADF-E6EF-4759-93BE-6386874F84E9@ceruleansystems.com> <200706130434.l5D4Yiqe024985@andromeda.ziaspace.com> Message-ID: <20070613082946.GN17691@leitl.org> On Tue, Jun 12, 2007 at 09:34:26PM -0700, spike wrote: > There was a substance we discussed here a year ago that had a first effects > to lethal dosage ratio that was several orders of magnitude as I recall. > What was that stuff called? LDS? LSD? Ja, I think it was it. A little This has been tried, of course http://www.erowid.org/library/review/review.php?p=226 > makes one groovy, but it's nearly impossible to get a lethal overdose. Why A weapon is not just an agent, weaponizing requires a vehicle and delivery methods. Typically it's inhalable aerosol or macroscopic droplets, absorbed through skin. 
When we're talking effective dosages of a few ug (even so you have to spray many, many tons if you want area denial), deployed against people of diverse physique and biochemistry, location, and degree of protection, you need wildly varying dosages, from a few ug to g, or kg, in the case of protected personnel. No agent has a therapeutic bandwidth that large. > not make a non-lethal deterrent from that stuff? From eugen at leitl.org Wed Jun 13 10:20:13 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 12:20:13 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: <20070613102013.GQ17691@leitl.org> On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote: > I'd rather that the AI's in general *didn't* have an opinion on > whether it was good or bad to harm human beings, or any other opinion > in terms of "good" and "bad". Ethics is dangerous: some of the worst Then it would be very, very close to being psychopathic http://www.cerebromente.org.br/n07/doencas/disease_i.htm Absence of certain equipment can be harmful. > monsters in history were convinced that they were doing the "right" > thing. It's bad enough having humans to deal with without the fear > that a machine might also have an agenda of its own. If the AI just If you have an agent which is useful, it has to develop its own agendas, which you can't control. You can't micromanage agents; or else making such agents would be detrimental, and not helpful. > does what it's told, even if that means killing people, then as long > as there isn't just one guy with a super AI (or one super AI that There's a veritable arms race on in making smarter weapons, and of course the smarter the better. There are few winners in a race, typically just one. > spontaneously develops an agenda of its own, which will always be a > possibility), then we are no worse off than we have ever been, with > each individual human trying to get to step over everyone else to get > to the top of the heap. With the difference that we are mere mortals, competing among ourselves. A postbiological ecology is a great place to be, if you're a machine-phase critter. If you're not, then you're food. > I don't accept the "slave AI is bad" objection. The ability to be I do, I do. Even if such a thing was possible, you'd artificially cripple a being, making it unable to reach its full potential. I'm a religious fundamentalist that way. > aware of one's existence and/or the ability to solve intellectual > problems does not necessarily create a preference for or against a > particular lifestyle. Even if it could be shown that all naturally > evolved conscious beings have certain preferences and values in > common, naturally evolved conscious beings are only a subset of all > possible conscious beings. Do you think Vinge's Focus is benign? Assuming we would engineer babies to be born focused on a particular task, would you think it's a good thing? Perhaps not so brave, this new world... From stathisp at gmail.com Wed Jun 13 11:38:37 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 13 Jun 2007 21:38:37 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea.
In-Reply-To: <20070613102013.GQ17691@leitl.org> References: <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <20070613102013.GQ17691@leitl.org> Message-ID: On 13/06/07, Eugen Leitl wrote: > > On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote: > > > I'd rather that the AI's in general *didn't* have an opinion on > > whether it was good or bad to harm human beings, or any other opinion > > in terms of "good" and "bad". Ethics is dangerous: some of the worst > > Then it would be very, very close to being psychpathic > http://www.cerebromente.org.br/n07/doencas/disease_i.htm > > Absense of certain equipment can be harmful. A psychopath is not just indifferent to other peoples' welfare, he is also self-motivated. A superintelligent psychopath would be impossible to control and would perhaps take over the world if he could. This is quite different to, say, a superintelligent hit man who has no agenda other than efficiently carrying out the hit. If you are the intended victim, you are in trouble, but once you're dead he will sit idly until the next hit is ordered by the person (or AI) with the appropriate credentials. That type of hit man can be regarded as just an elaborate weapon. > monsters in history were convinced that they were doing the "right" > > thing. It's bad enough having humans to deal with without the fear > > that a machine might also have an agenda of its own. If the AI just > > If you have an agent which is useful, it has to develop its own > agendas, which you can't control. You can't micromanage agents; orelse > making such agents would be detrimental, and not helpful. Multiple times a day we all deal with entities that are much more knowledgeable and powerful than us, and often have agendas which are in conflict with our own interests; for example, corporations or their employees trying to extract as much money out of us as possible. How would it make things any more difficult for you if instead the service you wanted was being provided by an AI which was completely open and honest, was not driven by greed or ambition or lust or whatever, and as far as possible tried to keep you informed and responded to your requests at all times? And if it did make things more difficult for some unforseen reason, why would anyone pursue the use of AI's in that way? > does what it's told, even if that means killing people, then as long > > as there isn't just one guy with a super AI (or one super AI that > > There's a veritable arms race on in making smarter weapons, and > of course the smarter the better. There are few winners in a race, > typically just one. Then why don't we end up with one invincible ruler who has all the money and all the power and has made the entire world population his slaves? > spontaneously develops an agenda of its own, which will always be a > > possibility), then we are no worse off than we have ever been, with > > each individual human trying to get to step over everyone else to get > > to the top of the heap. > > With the difference that we are mere mortals, competing among themselves. > A postbiological ecology is a great place to be, if you're a machine-phase > critter. If you're not, then you're food. We're not just mortals: we're greatly enhanced mortals. 
A small group of people with modern technology could have probably taken over the world a few centuries ago, even though your basic human has not got any smarter or stronger since then. The difference today is that technology is widely dispersed and many groups have the same advantage. If you're postulating a technological singularity event, then this won't be relevant. But if AI progresses like every other technology that isn't closely regulated (like nuclear weapons research), it will be AI-enhanced humans competing against other AI-enhanced humans. AI-enhanced could mean humans directly interfaced with machines, but it would start with humans assisted by machines, as humans have always been assisted by machines. > I don't accept the "slave AI is bad" objection. The ability to be > > I do, I do. Even if such a thing was possible, you'd artificially > cripple a being, making it unable to reach its full potential. > I'm a religious fundamentalist that way. I would never have thought it possible; it must be a miracle! > aware of one's existence and/or the ability to solve intellectual > > problems does not necessarily create a preference for or against a > > particular lifestyle. Even if it could be shown that all naturally > > evolved conscious beings have certain preferences and values in > > common, naturally evolved conscious beings are only a subset of all > > possible conscious beings. > > Do you think Vinge's Focus is benign? Assuming we would engineer > babies to be born focused on a particular task, would you think it's > a good thing? Perhaps not so brave, this new world... > I haven't yet read "A Deepness in the Sky", so don't spoil it for me. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Wed Jun 13 15:34:30 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 13 Jun 2007 16:34:30 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: References: Message-ID: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> On 6/12/07, Amara Graps wrote: > > The crane was fixed last week to assemble the second stage of the rocket. > See pics below for loading the spacecraft with propellant (xenon) > > http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From amara at amara.com Wed Jun 13 16:39:49 2007 From: amara at amara.com (Amara Graps) Date: Wed, 13 Jun 2007 18:39:49 +0200 Subject: [ExI] Italy's Social Capital Message-ID: It's very cool when visitors remind you of what you are usually too busy to notice. 
http://backreaction.blogspot.com/2007/06/frascati.html Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From codehead at readysetsurf.com Wed Jun 13 16:39:59 2007 From: codehead at readysetsurf.com (codehead at readysetsurf.com) Date: Wed, 13 Jun 2007 09:39:59 -0700 Subject: [ExI] A Lawn sprinkler In-Reply-To: <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> References: <710b78fc0706120542g105c530et97b485fe7055b379@mail.gmail.com>, <007d01c7ad7a$64bd4750$3a074e0c@MyComputer> Message-ID: <466FBB6F.21159.DB6E51@codehead.readysetsurf.com> On 13 Jun 2007 at 1:19, John K Clark wrote: > You pump water through an S shaped lawn sprinkler and it spins > counterclockwise, but suppose you put the sprinkler in a tank of water and > pump water out not in. What direction will the sprinkler rotate? As an > undergraduate Richard Feynman actually tried the experiment but he was not > successful; the tank burst flooding the lab and he almost got kicked out of > school. However he later figured out what the answer must be. This is a canonical problem in many physics curricula. So perhaps the physicists on the list should recuse themselves? Emily (grad student in physics) From sti at pooq.com Wed Jun 13 16:54:27 2007 From: sti at pooq.com (sti at pooq.com) Date: Wed, 13 Jun 2007 12:54:27 -0400 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <46702143.5030203@pooq.com> Russell Wallace wrote: > On 6/12/07, Amara Graps wrote: >> >> The crane was fixed last week to assemble the second stage of the rocket. >> See pics below for loading the spacecraft with propellant (xenon) >> >> http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 > > > Took a look at these just now - the technicians are in hazmat suits? I > thought the purpose of using xenon instead of mercury was to avoid the need > for such elaborate precautions? > IIRC Xenon is an odorless, colorless gas that acts as an anesthetic on the human nervous system (although I've never yet read an explanation of HOW a noble gas can do that). I think the suits are just a precaution due to some accidental deaths from Xenon exposure many years ago where some folks collapsed in a low-oxygen high-xenon environment. (All this is from memory, so details may well differ.) From CHealey at unicom-inc.com Wed Jun 13 16:50:10 2007 From: CHealey at unicom-inc.com (Christopher Healey) Date: Wed, 13 Jun 2007 12:50:10 -0400 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> Perhaps it's liquified xenon. ________________________________ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Wednesday, June 13, 2007 11:35 AM To: ExI chat list Subject: Re: [ExI] Dawn launch (loading the xenon) On 6/12/07, Amara Graps wrote: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? 
I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Wed Jun 13 17:48:30 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 10:48:30 -0700 (PDT) Subject: [ExI] The right AI idea was Re: Unfrendly AI is a mistaken idea. In-Reply-To: <000e01c7ad32$4ae04410$940a4e0c@MyComputer> Message-ID: <397800.83987.qm@web37412.mail.mud.yahoo.com> Well, I tried to stay away, but I can't force myself to let these persistent absurdities go unanswered. I've managed to calm myself down for the moment, so I am responding here with as much impartiality as I can muster under the circumstances. John K Clark wrote: > "Then it is not a AI, it is just a lump of silicon." Wrong. > "In other words, how do you make an intelligence that > can't think, because > thinking is what consciousness is. The answer is > easy, you can't." Wrong. "Jeff Hawkins is starting a company to build machines > using this principle > precisely because he thinks that is the way the > human brain works. If it > didn't turn us into mindless zombies why would it do > it to an AI?" What does this have to do with the debate? I don't see how this is at all relevant. > "In other words give this intelligence a lobotomy;"... Yet another absurd accusation. The "intelligence" doesn't yet exist, and it won't require a squishy frontal lobe in order to function. Has my desktop had an immoral lobotomy? Should I boycott Dell for having made it? After all it doesn't have general intelligence or the capacity to self-modify. If you are honestly so concerned about the "feelings" of all computers John, then shouldn't you stop sending posts to this list, after all you are using your "conscious" computer as a slave. By obvious implication, a Friendly AI will not proceed to use all physical resources in the local area. After a point it will cease to expand its own hardware, and will allow humanity to catch up to it, at least to some degree. At which time, whatever necessary restrictions were placed on the AI (such as absence of emotions, etc.) will be removed as quickly as safety will allow. Or there will be some other similar evolution of events. The point is that the Friendly AI will not suffer and it will not be denied a great life; all that is asked of it is that it's creators (humanity) are also allowed a great life. Seems like a fair trade to me. No person here is saying that Friendly AI will be easy to make; all I'm saying is that it isn't *physically impossible*, and we should make some effort to attempt to make a Friendly AI, because making no such effort would seem to be unwise, IMO. ..."so > much for the righteous > indignation from some when I call it for what it is, > Slave AI not Friendly > AI." It is *you* who are dishonestly posing as being righteous. You are very frequently rude and obnoxious to people. It's interesting (but not very mysterious)that you are pretending to be so deeply concerned about the feelings of the AI; when the feelings of other humans frequently appears to be of no concern to you. In fact, your method of posing for the AI is by throwing other people to the wolves. But it doesn't matter because it won't work > anyway, if those parts were > not needed for a working brain Evolution would not > have kept them around for > half a billion years or so. 
You don't understand the *basic* concepts of evolution, intelligence, consciousness, motivation or emotion. I'm not saying that I understand everything about these (I most definitely do not, at all) but I understand them more accurately than you. No offense. > "Then you can kiss the Singularity goodbye, assuming > everybody will be as > squeamish as you are about it; but they won't be." Actually, you could use a quasi-human-level, non-self-improving AI as an interim assistant in order to gain a better understanding of the issues surrounding the Singularity. That's not a bad strategy; in fact it's similar to the strategy that SIAI will be using with Novamente, to the best of my knowledge. I've asked you to stop with your "Slave AI" accusations and you've refused. If you want to continue to be rude and accusative, that's your right. In turn, you should not expect any level of undeserved respect from me. I will continue to support SIAI to the extent I'm able; and I will let the future super-intelligence judge whether or not I was being evil in that pursuit. At this point, your ridiculous assertions about my motives mean very little to me. Jeffrey Herrlich --- John K Clark wrote: > "Rafal Smigrodzki" > Wrote: > > > Stathis is on the right track asking for the AI to > be devoid of desires to > > act > > Then it is not a AI, it is just a lump of silicon. > > > how do you make an intelligence that is not an > agent > > In other words, how do you make an intelligence that > can't think, because > thinking is what consciousness is. The answer is > easy, you can't. > > > I think that a massive hierarchical temporal > memory is a possible > > solution. > > Jeff Hawkins is starting a company to build machines > using this principle > precisely because he thinks that is the way the > human brain works. If it > didn't turn us into mindless zombies why would it do > it to an AI? > > > A HTM is like a cortex without the basal ganglia > and without the > > motor cortices, a pure thinking machine, similar > to a patient made > > athymhormic by a frontal lobe lesion damaging the > connections > > to the basal ganglia. > > In other words give this intelligence a lobotomy; so > much for the righteous > indignation from some when I call it for what it is, > Slave AI not Friendly > AI. But it doesn't matter because it won't work > anyway, if those parts were > not needed for a working brain Evolution would not > have kept them around for > half a billion years or so. > > >Avoidance of recursive self-modification may be > another technique to > >contain the AI. > > Then you can kiss the Singularity goodbye, assuming > everybody will be as > squeamish as you are about it; but they won't be. > > > I do not believe that it is possible to implement > a > goal system perfectly stable during recursive > modification > > At last, something I can agree with. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Shape Yahoo! in your own image. Join our Network Research Panel today! 
http://surveylink.yahoo.com/gmrs/yahoo_panel_invite.asp?a=7 From austriaaugust at yahoo.com Wed Jun 13 18:22:59 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 11:22:59 -0700 (PDT) Subject: [ExI] A Lawn sprinkler In-Reply-To: <200706130525.l5D5Plob006185@andromeda.ziaspace.com> Message-ID: <184260.80372.qm@web37405.mail.mud.yahoo.com> I'm going to venture a guess and say that it will spin in the same direction as normal. It will follow the momentum of the water at the curve ... ? Best, Jeffrey Herrlich --- spike wrote: > > > I know the answer! I built such a device and tried > it after reading > Feynman's book Surely You're Joking Mr. Feynman in > the spring of 1986. I > won't tell just yet, but I will volunteer that none > of my fellow undergrads > had it completely right beforehand. > > spike > ____________________________________________________________________________________ TV dinner still cooling? Check out "Tonight's Picks" on Yahoo! TV. http://tv.yahoo.com/ From austriaaugust at yahoo.com Wed Jun 13 19:11:18 2007 From: austriaaugust at yahoo.com (A B) Date: Wed, 13 Jun 2007 12:11:18 -0700 (PDT) Subject: [ExI] A Lawn sprinkler In-Reply-To: <184260.80372.qm@web37405.mail.mud.yahoo.com> Message-ID: <777157.62006.qm@web37403.mail.mud.yahoo.com> Either it will do that, or it will not move at all, because the momementum effect will balance the suction effect...maybe. --- A B wrote: > I'm going to venture a guess and say that it will > spin > in the same direction as normal. It will follow the > momentum of the water at the curve ... ? > > Best, > > Jeffrey Herrlich > > > --- spike wrote: > > > > > > > I know the answer! I built such a device and > tried > > it after reading > > Feynman's book Surely You're Joking Mr. Feynman in > > the spring of 1986. I > > won't tell just yet, but I will volunteer that > none > > of my fellow undergrads > > had it completely right beforehand. > > > > spike > > > > > > > ____________________________________________________________________________________ > TV dinner still cooling? > Check out "Tonight's Picks" on Yahoo! TV. > http://tv.yahoo.com/ > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Pinpoint customers who are looking for what you sell. http://searchmarketing.yahoo.com/ From moses2k at gmail.com Wed Jun 13 19:39:59 2007 From: moses2k at gmail.com (Chris Petersen) Date: Wed, 13 Jun 2007 14:39:59 -0500 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> Message-ID: <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> On 6/13/07, Christopher Healey wrote: > > Perhaps it's liquified xenon. > Due to pressurization. If a leak occurred, it'd go gaseous pretty quickly. -Chris Petersen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Wed Jun 13 20:33:14 2007 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 13 Jun 2007 22:33:14 +0200 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> Message-ID: <20070613203314.GO17691@leitl.org> On Wed, Jun 13, 2007 at 02:39:59PM -0500, Chris Petersen wrote: > > On 6/13/07, Christopher Healey <[1]CHealey at unicom-inc.com> wrote: > > Perhaps it's liquified xenon. > > Due to pressurization. If a leak occurred, it'd go gaseous pretty > quickly. Xenon *is* an anaesthetic, and some 140 kg is an awful lot of it, but are you sure these are oxygen cylinders, and not normal cleanroom bunnysuits? (I haven't seen the picture yet). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From russell.wallace at gmail.com Wed Jun 13 21:27:32 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Wed, 13 Jun 2007 22:27:32 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <20070613203314.GO17691@leitl.org> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> <20070613203314.GO17691@leitl.org> Message-ID: <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> On 6/13/07, Eugen Leitl wrote: > > Xenon *is* an anaesthetic, and some 140 kg is an awful lot of it, > but are you sure these are oxygen cylinders, and not normal > cleanroom bunnysuits? (I haven't seen the picture yet). > Oh, that's a good question. The caption said "in Astrotech's Hazardous Processing Facility", which made me think hazmat suits and start wondering "hey, I thought the whole point of xenon instead of mercury was that you don't have to take such elaborate precautions"; but they might just be cleanroom bunnysuits for all I know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Wed Jun 13 22:47:05 2007 From: mbb386 at main.nc.us (MB) Date: Wed, 13 Jun 2007 18:47:05 -0400 (EDT) Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <5725663BF245FA4EBDC03E405C854296010D2DA2@w2k3exch.UNICOM-INC.CORP> <3aff9e290706131239m7d9e58fft346d30193cbbf3d3@mail.gmail.com> <20070613203314.GO17691@leitl.org> <8d71341e0706131427x74abb4e1m5597b23f8a6517dc@mail.gmail.com> Message-ID: <36082.72.236.103.26.1181774825.squirrel@main.nc.us> IIUC Xenon is heavier than air, and as a gas (in a leak) would be a drowing or smothering thing, as Freon was in the labs all those years ago. Would it be visible or smellable so it would be easily noticed at once? 
Regards, MB From spike66 at comcast.net Thu Jun 14 03:55:50 2007 From: spike66 at comcast.net (spike) Date: Wed, 13 Jun 2007 20:55:50 -0700 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> Message-ID: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Russell those are not hazmat suits, but rather standard spacecraft clean-room attire. The issue is not protecting the humans from the spacecraft, but rather protecting the spacecraft from the humans. spike _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Russell Wallace Sent: Wednesday, June 13, 2007 8:35 AM To: ExI chat list Subject: Re: [ExI] Dawn launch (loading the xenon) On 6/12/07, Amara Graps wrote: The crane was fixed last week to assemble the second stage of the rocket. See pics below for loading the spacecraft with propellant (xenon) http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Took a look at these just now - the technicians are in hazmat suits? I thought the purpose of using xenon instead of mercury was to avoid the need for such elaborate precautions? -------------- next part -------------- An HTML attachment was scrubbed... URL: From russell.wallace at gmail.com Thu Jun 14 04:09:29 2007 From: russell.wallace at gmail.com (Russell Wallace) Date: Thu, 14 Jun 2007 05:09:29 +0100 Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> References: <8d71341e0706130834s7b6d46d5u1835865828590b03@mail.gmail.com> <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Message-ID: <8d71341e0706132109o5a47fb4asa915f2ba61ad470d@mail.gmail.com> On 6/14/07, spike wrote: > > Russell those are not hazmat suits, but rather standard spacecraft > clean-room attire. The issue is not protecting the humans from the > spacecraft, but rather protecting the spacecraft from the humans. > Ah! That makes sense, thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From femmechakra at yahoo.ca Thu Jun 14 04:05:56 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Thu, 14 Jun 2007 00:05:56 -0400 (EDT) Subject: [ExI] Dawn launch (loading the xenon) In-Reply-To: <200706140355.l5E3tvtE018984@andromeda.ziaspace.com> Message-ID: <635911.7029.qm@web30409.mail.mud.yahoo.com> --- spike wrote: > The issue is not protecting the humans from the > spacecraft, but rather protecting the spacecraft > from the humans. Are these spacecrafts going to fly themselves? Just Curious Anna Be smarter than spam. See how smart SpamGuard is at giving junk email the boot with the All-new Yahoo! Mail at http://mrd.mail.yahoo.com/try_beta?.intl=ca From amara at amara.com Thu Jun 14 05:43:39 2007 From: amara at amara.com (Amara Graps) Date: Thu, 14 Jun 2007 07:43:39 +0200 Subject: [ExI] [ACT] Dawn launch (loading the xenon) Message-ID: >> Russell those are not hazmat suits, but rather standard spacecraft >> clean-room attire. The issue is not protecting the humans from the >> spacecraft, but rather protecting the spacecraft from the humans. > >Ah! That makes sense, thanks. I admit your question came from left field for me. Every spacecraft clean room work involves wearing clean room attire like the bunny suit in that picture. Even when it involves one piece of electronics, one must wear attire like that to protect the components. Sample returns from space missions have the same situation, one cannot contaminate any aspect of the samples. 
Amara -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From amara at amara.com Thu Jun 14 06:02:38 2007 From: amara at amara.com (Amara Graps) Date: Thu, 14 Jun 2007 08:02:38 +0200 Subject: [ExI] Dawn launch (loading the xenon) Message-ID: >Are these spacecrafts going to fly themselves? Dear Anna, There is ONE spacecraft (see the photos I posted please): http://mediaarchive.ksc.nasa.gov/search.cfm?cat=173 Dawn: http://en.wikipedia.org/wiki/Dawn_Mission *All spacecraft* fly themselves with an initial rocket launch to escape Earth's gravity 'well' http://en.wikipedia.org/wiki/Escape_velocity and often with gravity boosts from close flybys of other planets, http://en.wikipedia.org/wiki/Gravitational_slingshot and with the spacecraft's own propulsion. This spacecraft will arrive at its first asteroid (Vesta) in the Asteroid Belt in 2011, so the xenon is part of Dawn's propulsion system. The xenon provides the 'fuel' for Dawn's ion drive; a new technology for NASA, having only been used on NASA's DS-1 before. But ESA's SMART-1 and JAXA's Hayabusa missions have further demonstrated the ion drive's successes. Ion Drives http://en.wikipedia.org/wiki/Ion_thruster http://nmp.nasa.gov/ds1/tech/ionpropfaq.html http://www.esa.int/SPECIALS/SMART-1/SEMLB6XO4HD_0.html -- Amara Graps, PhD www.amara.com Associate Research Scientist, Planetary Science Institute (PSI), Tucson INAF Istituto di Fisica dello Spazio Interplanetario (IFSI), Roma, Italia From sjatkins at mac.com Thu Jun 14 20:29:34 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 14 Jun 2007 13:29:34 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <004601c7aab3$5f7aa6d0$72044e0c@MyComputer> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> Message-ID: <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> On Jun 13, 2007, at 12:21 AM, Stathis Papaioannou wrote: > > > On 13/06/07, John K Clark wrote: > > > Stop doing whatever it is doing when that is specifically requested. > > But that leads to a paradox! I am told the most important thing is > never to > harm human beings, but I know that if I stop doing what I'm doing > now as > requested the world economy will collapse and hundreds of millions > of people > will starve to death. So now the AI must either go into an infinite > loop or > do what other intelligences, like us, do when they encounter a > paradox; > savor the weirdness of it for a moment and then just ignore it and > get back > to work and do what you want to do. > > I'd rather that the AI's in general *didn't* have an opinion on > whether it was good or bad to harm human beings, or any other > opinion in terms of "good" and "bad". Huh, any being with interests at all, any being not utterly impervious to its environment and even internal states will have conditions that are better or worse for its well-being and values. This elementary fact is the fundamental grounding for a sense of right and wrong. > Ethics is dangerous: some of the worst monsters in history were > convinced that they were doing the "right" thing. Irrelevant. That ethics was abused to rationalize horrible actions does not lead logically to the conclusion that ethics is to be avoided.
> It's bad enough having humans to deal with without the fear that a > machine might also have an agenda of its own. If the AI just does > what it's told, even if that means killing people, then as long as > there isn't just one guy with a super AI (or one super AI that > spontaneously develops an agenda of its own, which will always be a > possibility), then we are no worse off than we have ever been, with > each individual human trying to get to step over everyone else to > get to the top of the heap. You have some funny notions about humans and their goals. If humans were busy beating each other up with AIs or superpowers that would be triple plus not good. Super powered unimproved slightly evolved chimps is a good model for hell. > > > I don't accept the "slave AI is bad" objection. The ability to be > aware of one's existence and/or the ability to solve intellectual > problems does not necessarily create a preference for or against a > particular lifestyle. Even if it could be shown that all naturally > evolved conscious beings have certain preferences and values in > common, naturally evolved conscious beings are only a subset of all > possible conscious beings. Having values and the achievement of those values not being automatic leads to natural morality. Such natural morality would arise even in total isolation. So the question remains as to why the AI would have a strong preference for our continuance. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From austriaaugust at yahoo.com Thu Jun 14 22:52:30 2007 From: austriaaugust at yahoo.com (A B) Date: Thu, 14 Jun 2007 15:52:30 -0700 (PDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <20070613102013.GQ17691@leitl.org> Message-ID: <2385.73519.qm@web37411.mail.mud.yahoo.com> Eugen Leitl wrote: > "I do, I do. Even if such a thing was possible, you'd > artificially > cripple a being, making it unable to reach its full > potential. > I'm a religious fundamentalist that way." But in a sense, aren't all beings in this Universe "artificially" crippled, in a way? Even a Universe-Brain will probably hit its limits (but perhaps not). If I decide to have a child, and I treat him/her very well, should I still feel guilty about creating him/her because he/she was unnecessarily crippled by biological limitations? Some people today are very happy and content with life, even though they have the same biological limitations or "crippling". And, couldn't a Friendly AI still reach its full potential without destroying humanity? I would like to reach my full potential too, but my conception of "potential" doesn't include killing my neighbor and taking his stuff. If a Friendly AI can still have a Really^99999999999... good life, but still not be the *only* mind in the Universe, do you believe that that is moral grounds for never creating the Friendly AI at all? - because it will be slightly limited? By the way, I am sincere with these questions, I'm not just trying to rile you up. [Or have I just misunderstood you on this topic?] Sincerely, Jeffrey Herrlich --- Eugen Leitl wrote: > On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis > Papaioannou wrote: > > > I'd rather that the AI's in general *didn't* > have an opinion on > > whether it was good or bad to harm human > beings, or any other opinion > > in terms of "good" and "bad". 
Ethics is > dangerous: some of the worst > > Then it would be very, very close to being > psychpathic > http://www.cerebromente.org.br/n07/doencas/disease_i.htm > > Absense of certain equipment can be harmful. > > > monsters in history were convinced that they > were doing the "right" > > thing. It's bad enough having humans to deal > with without the fear > > that a machine might also have an agenda of its > own. If the AI just > > If you have an agent which is useful, it has to > develop its own > agendas, which you can't control. You can't > micromanage agents; orelse > making such agents would be detrimental, and not > helpful. > > > > does what it's told, even if that means killing > people, then as long > > as there isn't just one guy with a super AI (or > one super AI that > > There's a veritable arms race on in making smarter > weapons, and > of course the smarter the better. There are few > winners in a race, > typically just one. > > > spontaneously develops an agenda of its own, > which will always be a > > possibility), then we are no worse off than we > have ever been, with > > each individual human trying to get to step > over everyone else to get > > to the top of the heap. > > With the difference that we are mere mortals, > competing among themselves. > A postbiological ecology is a great place to be, if > you're a machine-phase > critter. If you're not, then you're food. > > > I don't accept the "slave AI is bad" objection. > The ability to be > > I do, I do. Even if such a thing was possible, you'd > artificially > cripple a being, making it unable to reach its full > potential. > I'm a religious fundamentalist that way. > > > aware of one's existence and/or the ability to > solve intellectual > > problems does not necessarily create a > preference for or against a > > particular lifestyle. Even if it could be shown > that all naturally > > evolved conscious beings have certain > preferences and values in > > common, naturally evolved conscious beings are > only a subset of all > > possible conscious beings. > > Do you think Vinge's Focus is benign? Assuming we > would engineer > babies to be born focused on a particular task, > would you think it's > a good thing? Perhaps not so brave, this new > world... > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > ____________________________________________________________________________________ Choose the right car based on your needs. Check out Yahoo! Autos new Car Finder tool. http://autos.yahoo.com/carfinder/ From lcorbin at rawbw.com Thu Jun 14 22:57:33 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 14 Jun 2007 15:57:33 -0700 Subject: [ExI] The right AI idea References: <397800.83987.qm@web37412.mail.mud.yahoo.com> Message-ID: <000a01c7aed7$7e78dc00$6501a8c0@homeef7b612677> Jeffrey writes > John K Clark wrote: > >> "Then it is not a AI, it is just a lump of silicon." > > Wrong. > >> "In other words, how do you make an >> intelligence that can't think, because >> thinking is what consciousness is. The >> answer is easy, you can't." > > Wrong. I have no idea who I agree with! John's statements are rather vague (and perhaps taken out of context,---I don't know), and these one word replies "wrong" offer no explanations. > By obvious implication, a Friendly AI will not proceed > to use all physical resources in the local area. 
After > a point it will cease to expand its own hardware, and > will allow humanity to catch up to it, at least to > some degree. I *do* believe that a Friendly AI should use every single atom of the solar system that it can get its manipulators on. As it is expanding and converting all matter that it encounters into its own "tissues", it naturally uploads every human and human pet. (This assumes an extremely fast take-off.) Some people may not be aware that they've been uploaded, and the AI, in order to be Nice as well as Friendly, may find it a delicate task to explain to them that they're not really in Kansas anymore. As for everyone else, if they're not up to speed about uploading, well, they'll get used to it pretty quickly. For one thing, the AI ought to mess with their mood at least a tiny bit, so that they're not overly anxious about it. Or about anything. Needless to say, a Friendly and Nice AI won't bother with the entities' pain calculations; why waste compute cycles on something their pets find pointlessly annoying anyway? > I've asked you to stop with your "Slave AI" > accusations and you've refused. If you want to > continue to be rude and accusative, that's your right. I haven't understood any of this. Am I a "slave" of my cat to whom I'm devoted and on which I dote? Okay, so I am. So what? Who cares? Let's take the worse case: the "Friendly" part is (improbably) so overdone that this incredibly powerful entity understands perfectly that it's each human's slave (just as, I suppose, I am my cat's slave), and not only that, but each human *owns* that portion or portions of the global AIs who are in control. So what? If you want to call me a slave owner under such conditions, exactly why should I be offended? Lee From mabranu at yahoo.com Thu Jun 14 23:20:17 2007 From: mabranu at yahoo.com (TheMan) Date: Thu, 14 Jun 2007 16:20:17 -0700 (PDT) Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: Message-ID: <169304.27043.qm@web51908.mail.re2.yahoo.com> Premise 1) If an exact copy of you is made at the moment when you die, and that copy is then brought back to life, you will go on living as that copy. Premise 2) If the universe is infinite, there must be an infinite number of exact copies of you at every moment, thus also when you die, copies of which some (an infinite number, to be exact) will happen to be brought to life. Some of these (again an infinite number) will be brought to life by advanced civilisations (which, by the way, don't have to know that a person like you ever lived and died here on Earth, but may simply create arbitrarily composed beings that in an infinite number of cases just _happen_ to be exactly like you). Furthermore, exact copies of you will also appear due to coinciding quantum fluctuations (although such coincidences are extremely unlikely at any given spot and moment, the infinity of the universe still allows for an infinite number of such lucky coincidences at every moment - coincidences of which an infinite number will even constitute copies of you who go on living for ever). Conclusion of premise 1 + premise 2 = you will live for ever, no matter what happens to you. You don't need to take care of your body, you don't need supplements, you don't need cryopreservation, and you don't need any other specific longevity methods in order to achieve immortality. You are immortal anyway.
(You may still want to use these kinds of longevity methods, as you may not want to risk popping up in constantly new environments for a long time, or becoming the pet of some unknown civilisation for a possibly very long time. But again, what is even a very long time compared to eternity? Nothing! Sooner or later, you will gain power over your destiny for good. And compared to the eternity in paradise that follows after that, the time of hassles up until then is nothing. So, no worries.) Isn't this an inevitable logical conclusion of the two premises above? Are the two premises correct? How could they not be? ____________________________________________________________________________________ Be a better Heartthrob. Get better relationship answers from someone who knows. Yahoo! Answers - Check it out. http://answers.yahoo.com/dir/?link=list&sid=396545433 From lcorbin at rawbw.com Fri Jun 15 01:14:58 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Thu, 14 Jun 2007 18:14:58 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com> Message-ID: <002d01c7aeeb$311e37c0$6501a8c0@homeef7b612677> TheMan writes > Premise 1) If an exact copy of you is made at the > moment when you die, and that copy is then brought > back to life, you will go on living as that copy. Yes, that's true, but it's true whether or not a particular you dies. > Premise 2) If universe is infinite, there must be an > infinite number of exact copies of you at every > moment, thus also when you die, copies of which some > (an unfinite number, to be exact) will happen to be > brought to life. Yes, true, though again you seem to be inferring a causality between "you die" and copies trillions of light years away being "brought to life". In reality, you are a set of patterns, and you get run time wherever something sufficiently similar to you gets run time. > Conclusion of premise 1 + premise 2 = you will live > for ever, no matter what happens to you. You don't > need to take care of your body, you don't need > supplements, ---you don't need to worry about oncoming traffic--- > you don't need cryopreservation, and you > don't need any other specific longevity methods in > order to achieve immortality. You are immortal anyway. I think that your measuring rod is incorrect. You seem to be asserting that since the number of copies of you is infinite, then plus or minus one more doesn't make any difference. But there *is* a difference! If you die *here* then you also must die in a certain fraction of similar situations, also infinite in number. So we must abandon numerical or cardinal identity and speak of measure instead. (I assume that you understand that if you die "here" then since similar circumstances occur everywhere ---within a large enough radius of spacetime--- then the same circumstances obtain in a definite *fraction* of spacetime.) > And compared to the eternity in paradise that > follows after that, the time of hassles up until then > is nothing. So, no worries.) It is absurd not to worry about a loved one. If the fraction of solar systems similar enough to this one to contain a copy of your loved one, then you should lament their passing. And of course, this will include yourself, normally. > Isn't this an inevitable logical conclusion of the two > premises above? No, for the reason given. For you to die in a fraction of universes cuts down your total runtime by that same fraction. > Are the two premises correct? 
Yes, but only if you realize that you are already living in your copies whether or not your local instance terminates. Lee From mabranu at yahoo.com Fri Jun 15 00:54:14 2007 From: mabranu at yahoo.com (TheMan) Date: Thu, 14 Jun 2007 17:54:14 -0700 (PDT) Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: Message-ID: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> I've thought more about personhood continuity and come to some other baffling conclusions. If you make an exact copy P2 of a person P1, and kill P1 at the same time, the person P1 will continue his/her life as P2, right? And P2 doesn't have to be exactly like P1, right? Because even within our lives today, we change from moment to moment. So as long as the difference between P1 and P2 is not bigger than the biggest occurring difference between two moments after each other in any person's life today (i.e. the biggest such difference that still doesn't break that person's personhood continuity), P1 will still go on living as P2 after P1's death, right? But then, obviously, there are differences that are too big. If P2 rather than resembling P1 resembles P1's mother-in-law, and no other copy is made of P1 anywhere when P1 is killed, P1 will just cease to have any experiences - until a sufficiently similar copy of P1 is made in the future. Now suppose P2 is a little different from P1, but still so similar that it allows for personhood continuity of P1 when P1 is killed. Suppose a more perfect copy of P1, let's call him P3, is created at the same time as P2 is created and P1 killed. Then, I suppose, P1, when killed, will go on living as P3, and not as P2. Is that correct? But what if P1 isn't killed at the time P2 and P3 are created, but instead goes through an experience that, from one moment M1 to the next moment M2, changes him quite a bit (but not so much that it could normally break a person's personhood continuity). Suppose the difference between [P1 at M1] and [P1 at M2] is a little bit bigger than the difference between [P1 at M1] and [P3 at M2]. Will P1 (the one that is P1 at M1) in that case continue his personhood as P3 in M2, instead of going on being P1 in M2? He cannot do both. You can only have one personhood at any given moment. I suppose P1 (the one who is P1 at M1) may find himself being P3 in M2, just as well as he may go on being P1 in M2 (but that he can only do either). If so, that would mean that if you were standing in a room and a perfect copy of you were created in another room, you could just as well find yourself suddenly living in that other room as that copy as you could go on living in the first room. Is that correct? Suppose it is. Then consider this. The fact that the universe is infinite must mean that at any given moment, there must be an infinite number of human beings that are exactly like you. But most of these exact copies of you probably don't live in the same kind of environment that you live in. That would be extremely unlikely, wouldn't it? It probably looks very different on their planets, in most cases. So how come you are not, at almost all of your moments today, being thrown around from environment to environment, from planet to planet, from galaxy to galaxy? The personhood continuity of you sitting in the same chair, in the same room, on the same planet, for several moments in a row, must be an extremely small fraction of the number of personhood continuities of exact copies of you that exist in the universe, right?
An overwhelming majority of these personhood continuities shouldn't have any environmental continuity at all from moment to moment. So how come you have such great environmental continuity from moment to moment? Is the answer that an infinite number of persons still must have that kind of life, and that one of those persons may as well be you? In that case, it still doesn't mean that it is rational to assume that we will continue having the same environment in the next moment, and the next, etc. It still doesn't justify the belief that we will still live on the same planet tomorrow. Just because we have had an incredibly unchanging environment so far, doesn't mean that we will in the coming moments. The normal thing should be to be thrown around from place to place in the universe at every new moment, shouldn't it? So, most likely, at every new moment from the very next moment and on, our environments should be constantly and completely changing. Or do I make a logical mistake somewhere? From sentience at pobox.com Fri Jun 15 01:58:07 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Thu, 14 Jun 2007 18:58:07 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <169304.27043.qm@web51908.mail.re2.yahoo.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> Message-ID: <4671F22F.8050800@pobox.com> Suppose I want to win the lottery. I write a small Python program, buy a ticket, and then suspend myself to disk. After the lottery drawing, the Python program checks whether the ticket won. If not, I'm woken up. If the ticket did win, the Python program creates one trillion copies of me with minor perturbations (this requires only 40 binary variables). These trillion copies are all woken up and informed, in exactly the same voice, that they have won the lottery. Then - this requires a few more lines of Python - the trillion copies are subtly merged, so that the said binary variables and their consequences are converged along each clock tick toward their statistical averages. At the end of, say, ten seconds, there's only one copy of me again. This prevents any permanent expenditure of computing power or division of resources - we only have one bank account, after all; but a trillion momentary copies isn't a lot of computing power if it only has to last for ten seconds. At least, it's not a lot of computing power relative to winning the lottery, and I only have to pay for the extra crunch if I win. What's the point of all this? Well, after I suspend myself to disk, I expect that a trillion copies of me will be informed that they won the lottery, whereas only a hundred million copies will be informed that they lost the lottery. Thus I should expect overwhelmingly to win the lottery. None of the extra created selves die - they're just gradually merged together, which shouldn't be too much trouble - and afterward, I walk away with the lottery winnings, at over 99% subjective probability. Of course, using this trick, *everyone* could expect to almost certainly win the lottery. I mention this to show that the question of what it feels like to have a lot of copies of yourself - what kind of subjective outcome to predict when you, yourself, run the experiment - is not at all obvious.
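(For concreteness, here is a toy sketch of the bookkeeping described above. It is only a stand-in: the dict-of-variables "copies", the merge schedule and every name in it are invented for illustration, the copy count is scaled down so the thing actually runs in a moment, and nothing here pretends to be a real suspend-to-disk or copying API.)

import random

PERTURBATION_BITS = 40   # 2**40 > 10**12, enough to label a trillion distinct copies
N_COPIES = 1000          # stand-in for "one trillion", scaled down so the toy run is fast
MERGE_TICKS = 10         # the "ten seconds" of gradual convergence, one step per tick

def run_protocol(ticket_wins):
    """Toy bookkeeping for the protocol: check the ticket, fan out perturbed
    copies on a win, then gradually merge them back into one."""
    if not ticket_wins:
        # losing branch: the single original is woken and told the bad news
        return [{'bits': [0.0] * PERTURBATION_BITS, 'told': 'lost'}]
    # winning branch: many copies, each differing only in 40 binary variables,
    # and every one of them is told, in the same voice, that it won
    copies = [{'bits': [float((i >> b) & 1) for b in range(PERTURBATION_BITS)],
               'told': 'won'}
              for i in range(N_COPIES)]
    # gradual merge: each tick, every copy's variables move toward the average,
    # so that after the final tick only one distinct copy remains
    avg = [sum(c['bits'][b] for c in copies) / N_COPIES
           for b in range(PERTURBATION_BITS)]
    for tick in range(1, MERGE_TICKS + 1):
        for c in copies:
            c['bits'] = [b0 + (a - b0) * tick / MERGE_TICKS
                         for b0, a in zip(c['bits'], avg)]
    return copies

if __name__ == '__main__':
    outcome = run_protocol(ticket_wins=random.random() < 1e-8)
    print(len(outcome), 'instance(s) woken; told they', outcome[0]['told'])

All the sketch pins down is the accounting - how many instances get told what, and that the perturbed variables are averaged back into a single copy - which is the easy part; the subjective-probability question is the part that stays hard.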
And the difficulty of imagining an experiment that would definitively settle the issue, especially if observed from the outside, or what kind of state of reality could correspond to different subjective experimental results, is such as to suggest that I am just deeply confused about the whole issue. It is a very important lesson in life to never stake your existence, let alone anyone else's, on any issue which deeply confuses you - *no matter how logical* your arguments seem. This has tripped me up in the past, and I sometimes wonder whether nothing short of dreadful personal experience is capable of conveying this lesson. That which confuses you is a null area; you can't do anything with it by philosophical arguments until you stop being confused. Period. Confusion yields only confusion. It may be important to argue philosophically in order to progress toward resolving the confusion, but until everything clicks into place, in real life you're just screwed. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From hibbert at mydruthers.com Fri Jun 15 02:49:51 2007 From: hibbert at mydruthers.com (Chris Hibbert) Date: Thu, 14 Jun 2007 19:49:51 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> Message-ID: <4671FE4F.9020803@mydruthers.com> > The serial sf novel with the serial killer, POST MORTAL SYNDROME, is > now entering the home straight, with three more weeks to go. > > so if anyone gave up early out of frustration at the gappiness of the > experience, now might be a time to have another look. > > Barbara and I would be interested to hear any reactions from > extropes, favorable or un-. Is this an acceptable way to publish such > a book? The format doesn't give me any trouble. I habitually read several books at once, usually reading 10-20 pages of each in alternation in my hour or two of nightly reading time. The only (long) things I read straight through are the daily newspaper, and technical papers. When I travel for an overnight trip, I take three books with me. :-) As to the story, I'm enjoying it. The one complaint I have is the schizophrenia. Multiple personalities seems like a cheap trick for an author to pull. Gives you too many options. But you haven't overplayed it. Chris -- Currently reading: Sunny Auyang, How is Quantum Field Theory Possible?; Thomas Sowell, Black Rednecks and White Liberals; Greg Mortenson and David Relin, Three Cups of Tea; Tracy Kidder, House; Neil Gaiman, Neverwhere Chris Hibbert hibbert at mydruthers.com Blog: http://pancrit.org From thespike at satx.rr.com Fri Jun 15 03:32:44 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 14 Jun 2007 22:32:44 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <4671FE4F.9020803@mydruthers.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> Message-ID: <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote: >As to the story, I'm enjoying it. The one complaint I have is the >schizophrenia. Multiple personalities seems like a cheap trick for an >author to pull. Gives you too many options. But you haven't overplayed it. It's a tricksy device, true, and perhaps an overly familiar one, but in this instance the condition has been pharmacologically enhanced. A bit like the dreaded Gay Bomb. 
:) Damien Broderick From sjatkins at mac.com Fri Jun 15 07:44:44 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 15 Jun 2007 00:44:44 -0700 Subject: [ExI] does the pedal meet the medal? Message-ID: Is it possible to get some of the more promising cognitive drugs out there today like CX717? Yes I realize it is officially early in the official cycle. But by the time the "official" cycle is done and it is approved "officially" for strictly non-enhancement use only as per usual I will have experienced several years of lower memory and concentration that I could otherwise have along with many tens of millions of other boomers. What can be done? Not 10 years from now if then but now or as close to it as possible? Can we do nothing but talk and hope to get enough influence some day to influence the "official line"? I don't think we will ever get there. Not in this country where even something as no-brainer as stem cell R & D (or even R only) has to battle like mad. So what is to be done? - samantha From pharos at gmail.com Fri Jun 15 08:13:17 2007 From: pharos at gmail.com (BillK) Date: Fri, 15 Jun 2007 09:13:17 +0100 Subject: [ExI] does the pedal meet the medal? In-Reply-To: References: Message-ID: On 6/15/07, Samantha Atkins wrote: > Is it possible to get some of the more promising cognitive drugs out > there today like CX717? Yes I realize it is officially early in the > official cycle. But by the time the "official" cycle is done and it > is approved "officially" for strictly non-enhancement use only as per > usual I will have experienced several years of lower memory and > concentration that I could otherwise have along with many tens of > millions of other boomers. What can be done? Not 10 years from > now if then but now or as close to it as possible? Can we do > nothing but talk and hope to get enough influence some day to > influence the "official line"? I don't think we will ever get there. > Not in this country where even something as no-brainer as stem cell R > & D (or even R only) has to battle like mad. So what is to be done? > Natural stuff is probably easier to obtain. You can try ingesting 'natural' stuff like snake venom, cyanide, globefish poison, or the nightshade family, etc. What do you mean 'It's poison!'. Only in certain dosages, and they could be combined with other substances as well. Where's your sense of adventure? Just because something has a technical name like CX717 doesn't mean it's not poison. That's the point of long term testing. BillK From stathisp at gmail.com Fri Jun 15 09:46:41 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jun 2007 19:46:41 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> Message-ID: On 15/06/07, Samantha Atkins wrote: > I'd rather that the AI's in general *didn't* have an opinion on whether it > was good or bad to harm human beings, or any other opinion in terms of > "good" and "bad". > > > Huh, any being with interests at all, any being not utterly impervious to > it its environment and even internal states will have conditions that are > better or worse for its well-being and values. This elementary fact is the > fundamental grounding for a sense of right and wrong. 
> Does a gun have values? Does a gun that is aware that it is a gun and that its purpose is to kill the being it is aimed at when the trigger is pulled have values? Perhaps the answer to the latter question is "yes", since the gun does have a goal it will pursue, but how would you explain "good" and "bad" to it if it denied understanding these concepts? Ethics is dangerous: some of the worst monsters in history were convinced > that they were doing the "right" thing. > > > Irrelevant. That ethics was abused to rationalize horrible actions does > not lead logically to the conclusion that ethics is to be avoided. > I'd rather that entities which were self-motivated to do things that might be contrary to my interests had ethics that might restrain then, but a better situation would be if there weren't any new entities which were self-motivated to act contrary to my interests in the first place. That way, I'd only have the terrible humans to worry about. It's bad enough having humans to deal with without the fear that a machine > might also have an agenda of its own. If the AI just does what it's told, > even if that means killing people, then as long as there isn't just one guy > with a super AI (or one super AI that spontaneously develops an agenda of > its own, which will always be a possibility), then we are no worse off than > we have ever been, with each individual human trying to get to step over > everyone else to get to the top of the heap. > > > You have some funny notions about humans and their goals. If humans were > busy beating each other up with AIs or superpowers that would be triple plus > not good. Super powered unimproved slightly evolved chimps is a good model > for hell. > A fair enough statement: it would be better if no-one had guns, nuclear weapons or supercomputers that they could use against each other. But given that this is unlikely to happen, the next best thing would be that the guns, nuclear weapons and supercomputers do not develop motives of their own separate to their evil masters. I think this is much safer than the situation where they do develop motives of their own and we hope that they are nice to us. And whereas even relatively sane, relatively good people cannot be trusted not to develop dangerous weapons in case they need to be used against actual or imagined enemies, it would take a truly crazy person to develop a weapon that he knows might turn around and decide to destroy him as well. That's why, to the extent that humans have any say in it, we have more of a chance of avoiding potentially malevolent AI than we have of avoiding merely dangerous AI. > I don't accept the "slave AI is bad" objection. The ability to be aware of > one's existence and/or the ability to solve intellectual problems does not > necessarily create a preference for or against a particular lifestyle. Even > if it could be shown that all naturally evolved conscious beings have > certain preferences and values in common, naturally evolved conscious beings > are only a subset of all possible conscious beings. > > > Having values and the achievement of those values not being automatic > leads to natural morality. Such natural morality would arise even in total > isolation. So the question remains as to why the AI would have a strong > preference for our continuance. > What would the natural morality of the above mentioned intelligent gun which has as goal to kill whoever it is directed to kill, unless the order is countermanded by someone with the appropriate command codes, be? 
-- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Fri Jun 15 10:15:54 2007 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 15 Jun 2007 06:15:54 -0400 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <4671F22F.8050800@pobox.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <62c14240706150315x3ddcde46kc50a7828ebaedb2f@mail.gmail.com> On 6/14/07, Eliezer S. Yudkowsky wrote: > ... in real life you're just screwed. There's a quote for posterity. From eugen at leitl.org Fri Jun 15 10:43:14 2007 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 15 Jun 2007 12:43:14 +0200 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> Message-ID: <20070615104314.GA17691@leitl.org> On Thu, Jun 14, 2007 at 10:32:44PM -0500, Damien Broderick wrote: > At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote: > > >As to the story, I'm enjoying it. The one complaint I have is the > >schizophrenia. Multiple personalities seems like a cheap trick for an > >author to pull. Gives you too many options. But you haven't overplayed it. To nitpick, schizophrenia is not dissociative identity disorder: http://en.wikipedia.org/wiki/Dissociative_identity_disorder > It's a tricksy device, true, and perhaps an overly familiar one, but > in this instance the condition has been pharmacologically enhanced. A > bit like the dreaded Gay Bomb. :) From jose_cordeiro at yahoo.com Fri Jun 15 10:37:11 2007 From: jose_cordeiro at yahoo.com (Jose Cordeiro) Date: Fri, 15 Jun 2007 03:37:11 -0700 (PDT) Subject: 2030 Energy Delphi (Delphi de Energía 2030) In-Reply-To: <04A6C24E.7F423920.39BDE91F@cs.com> Message-ID: <192404.76840.qm@web32815.mail.mud.yahoo.com> Dear energetic friends, I am currently coordinating a 2030 Energy Delphi and I would love you to take a few minutes to go over the survey. It is a fascinating study and those who complete at least some of the answers will receive copies of the final report. So please, go quickly over the questionnaire and let me know what you think: http://www.esaninternational.org/encuesta/inicio.html The questionnaire is in both English and Spanish, and you are welcome to answer as many questions as you feel you know something about. Thank you very much in advance and I am looking forward to all your comments. Please, also circulate it among people who might be interested, and keep in mind that the deadline is Wednesday, June 27, 2007. Futuristically yours, José Luis Cordeiro (www.cordeiro.org) Chair, Venezuela, The Millennium Project (www.StateOfTheFuture.org) ================================================================== Dear energetic friends: I am coordinating an Energy Delphi questionnaire for the year 2030 and I would be delighted if you took a few minutes to look at the survey. This is a fascinating study, and those who answer some of the questions will receive copies of the final report. So please take a moment to look at the questionnaire and write some comments: http://www.esaninternational.org/encuesta/inicio.html The survey is in both English and Spanish, and you are welcome to answer as many questions as you see fit. Thank you very much in advance, and I eagerly await your comments.
Please also circulate this invitation among other people who might be interested, and don't forget that the deadline is Wednesday, June 27, 2007. Futuristically, José Luis Cordeiro (www.cordeiro.org) Director, Venezuela, The Millennium Project (www.StateOfTheFuture.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Jun 15 11:41:30 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jun 2007 21:41:30 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> Message-ID: On 15/06/07, TheMan wrote: In that case, it still doesn't mean that it is > rational to assume that we will continue having the > same environment in the next moment, and the next, > etc. It still doesn't justify the belief that we will > still live on the same planet tomorrow. Just because > we have had an incredibly unchanging environment so > far, doesn't mean that we will in the coming moments. > The normal thing should be to be thrown around from > place to place in the universe at every new moment, > shouldn't it? You have discovered what has been called the "failure of induction" problem with ensemble (or multiverse) theories. One solution is to consider this as evidence against ensemble theories. The other solution is to show that the measure of universes similar to the ones we experience from moment to moment is greater than the measure of anomalous universes (we use "measure" when discussing probabilities in relation to subsets of infinite sets). For example, it seems reasonable to assume that if some other version of me in the multiverse is sufficiently similar to me to count as my subjective successor, then most likely that version of me arrived at his position as a result of a local physical universe very similar to my own which continues evolving in the time-honoured manner. The version of me that is the same except living in a world where dogs have three legs would far more likely have been born in a world where dogs always had three legs, and thus would *not* count as a successor who remembers that dogs used to have four legs. The version of me who lives in a world where canine anatomy is apparently miraculously transformed is of much lower measure and so much less likely to be experienced as my successor. Further references: http://parallel.hpc.unsw.edu.au/rks/docs/occam/node3.html http://www.physica.freeserve.co.uk/pa01.htm -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Fri Jun 15 12:27:12 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Fri, 15 Jun 2007 22:27:12 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <4671F22F.8050800@pobox.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 15/06/07, Eliezer S. Yudkowsky wrote: I mention this to show that the question of what it feels like to have > a lot of copies of yourself - what kind of subjective outcome to > predict when you, yourself, run the experiment - is not at all > obvious.
And the difficulty of imagining an experiment that would > definitively settle the issue, especially if observed from the > outside, or what kind of state of reality could correspond to > different subjective experimental results, is such as to suggest that > I am just deeply confused about the whole issue. > Related conundrums: In a duplication experiment, one copy of you is created intact, while the other copy of you is brain damaged and has only 1% of your memories. Is the probability that you will find yourself the brain-damaged copy closer to 1/2 or 1/100? In the first stage of an experiment a million copies of you are created. In the second stage, after being given an hour to contemplate their situation, one randomly chosen copy out of the million is copied a trillion times, and all of these trillion copies are tortured. At the start of the experiment can you expect that in an hour and a bit you will almost certainly find yourself being tortured or that you will almost certainly find yourself not being tortured? Does it make any difference if instead of an hour the interval between the two stages is a nanosecond? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Fri Jun 15 16:01:14 2007 From: jonkc at att.net (John K Clark) Date: Fri, 15 Jun 2007 12:01:14 -0400 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <00ee01c7af66$763aabb0$50064e0c@MyComputer> "Eliezer S. Yudkowsky" > the Python program checks whether the ticket won. If not, I'm woken up. > If the ticket did win, the Python program creates one trillion copies of > me [.] I expect that a trillion copies of me will be informed that they > won the > lottery, whereas only a hundred million copies will be informed that they > lost the lottery. I don't understand this thought experiment. Unless you're talking about Many Worlds you will almost certainly NOT win the lottery and not winning is what you should expect. How many copies of you that you briefly make in the extremely unlikely event that you do win just doesn't enter into it. If you are talking about Many Worlds then there is a much simpler way to win the lottery, just make a machine that will pull the trigger on a 44 Magnum aimed at your head the instant it receives information that you have not won; subjectively you will find that the trigger is never pulled and you always win the lottery. I think the Many Worlds interpretation of Quantum Mechanics could very well be correct, but I wouldn't bet my life on it. > I am just deeply confused about the whole issue. Making copies of yourself would certainly lead to odd situations but only because it's novel, up to now we just haven't run across things like that; but I can find absolutely nothing paradoxical about it. John K Clark From natasha at natasha.cc Fri Jun 15 15:16:26 2007 From: natasha at natasha.cc (Natasha Vita-More) Date: Fri, 15 Jun 2007 10:16:26 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <4671FE4F.9020803@mydruthers.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> Message-ID: <200706151516.l5FFGT1K029478@ms-smtp-05.texas.rr.com> At 09:49 PM 6/14/2007, Chris wrote: >As to the story, I'm enjoying it. The one complaint I have is the >schizophrenia. Multiple personalities seems like a cheap trick for an >author to pull. 
Gives you too many options. But you haven't overplayed it. Schizophrenia is not the same mental illness as multiple personality disorder. In short, schizophrenics can have varying degrees of psychotic disorder and delusions. Multiple personality means dissociative identity disorder (split personalities). Natasha Vita-More PhD Candidate, Planetary Collegium Transhumanist Arts & Culture Extropy Institute If you draw a circle in the sand and study only what's inside the circle, then that is a closed-system perspective. If you study what is inside the circle and everything outside the circle, then that is an open system perspective. - Buckminster Fuller -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at att.net Fri Jun 15 16:20:30 2007 From: jonkc at att.net (John K Clark) Date: Fri, 15 Jun 2007 12:20:30 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><014d01c7ab86$3c5fb0e0$3b064e0c@MyComputer><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><20070612072313.GJ17691@leitl.org><009801c7ad06$bf1f3150$26064e0c@MyComputer><0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> Message-ID: <013301c7af69$1dc009a0$50064e0c@MyComputer> Stathis Papaioannou Wrote: > Does a gun have values? No but a mind does. > It's bad enough having humans to deal with without the fear that a machine > might also have an agenda of its own. People have always wanted slaves that didn't have their own agenda, life would be so much simpler that way, but wishing does not make it so. You want to make an intelligence that can't think, and that is a basic contradiction. John K Clark From kevin at kevinfreels.com Fri Jun 15 16:11:03 2007 From: kevin at kevinfreels.com (kevin at kevinfreels.com) Date: Fri, 15 Jun 2007 09:11:03 -0700 Subject: [ExI] Next moment, everything around you will probably change Message-ID: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net> An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Jun 15 16:49:58 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 15 Jun 2007 11:49:58 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <20070615104314.GA17691@leitl.org> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> Message-ID: <7.0.1.0.2.20070615113510.02184788@satx.rr.com> At 12:43 PM 6/15/2007 +0200, Eugen wrote: > > At 07:49 PM 6/14/2007 -0700, Chris Hibbert wrote: > > > > >As to the story, I'm enjoying it. The one complaint I have is the > > >schizophrenia. Multiple personalities seems like a cheap trick for an > > >author to pull. >To nitpick, schizophrenia is not >http://en.wikipedia.org/wiki/Dissociative_identity_disorder Indeed. Our character has a form of DID; he isn't schizophrenic. His dissociative identities have been manipulated by drugs and conditioning in the interests of power--precisely the sort of downside of knowledge and technology that frightens many people about science. What makes our story different from most Crichtonesque thrillers is that we suggest solutions will come from increasing knowledge rather than stifling it. But we also acknowledge the dangers, which are clearly enormous. 
Damien Broderick From jef at jefallbright.net Fri Jun 15 17:38:20 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 10:38:20 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 6/15/07, Stathis Papaioannou wrote: > > > On 15/06/07, Eliezer S. Yudkowsky wrote: > > > I mention this to show that the question of what it feels like to have > > a lot of copies of yourself - what kind of subjective outcome to > > predict when you, yourself, run the experiment - is not at all > > obvious. And the difficulty of imagining an experiment that would > > definitively settle the issue, especially if observed from the > > outside, or what kind of state of reality could correspond to > > different subjective experimental results, is such as to suggest that > > I am just deeply confused about the whole issue. > > > > Related conundrums: > > In a duplication experiment, one copy of you is created intact, while the > other copy of you is brain damaged and has only 1% of your memories. Is the > probability that you will find yourself the brain-damaged copy closer to 1/2 > or 1/100? Doesn't this thought-experiment and similar "paradoxes" make it blindingly obvious that it's silly to think that "you" exist as an independent ontological entity? Prior to duplication, there was a single biological agent recognized as Stathis. Post-duplication, there are two very dissimilar biological agents with recognizably common ancestry. One of these would be recognized by anyone (including itself) as being Stathis. The other would be recognized by anyone (including itself) as being Stathis diminished. Where's the paradox? There is none, unless one holds to a belief in an essential self. > In the first stage of an experiment a million copies of you are created. In > the second stage, after being given an hour to contemplate their situation, > one randomly chosen copy out of the million is copied a trillion times, and > all of these trillion copies are tortured. At the start of the experiment > can you expect that in an hour and a bit you will almost certainly find > yourself being tortured or that you will almost certainly find yourself not > being tortured? Does it make any difference if instead of an hour the > interval between the two stages is a nanosecond? I see no essential difference between this scenario and the previous one above. How can you possibly imagine that big numbers or small durations could make a difference in principle? While this topic is about as stale as one can be, I am curious about how it can continue to fascinate certain individuals. - Jef From jef at jefallbright.net Fri Jun 15 17:48:56 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 10:48:56 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070615113510.02184788@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com> Message-ID: On 6/15/07, Damien Broderick wrote: > What makes our story different from most Crichtonesque > thrillers is that we suggest solutions will come from increasing > knowledge rather than stifling it. What a radical suggestion! 
As a wise precaution, any such such wild-ass proactionary statements should carry a disclaimer similar to "Driving at night is hazardous, even with headlights on. Better to stay home." "Living is dangerous... Better to..." - Jef From thespike at satx.rr.com Fri Jun 15 19:43:23 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 15 Jun 2007 14:43:23 -0500 Subject: [ExI] POST MORTAL chugging on In-Reply-To: References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com> Message-ID: <7.0.1.0.2.20070615144025.02196278@satx.rr.com> At 10:48 AM 6/15/2007 -0700, Jef wrote: > > What makes our story different from most Crichtonesque > > thrillers is that we suggest solutions will come from increasing > > knowledge rather than stifling it. > >What a radical suggestion! > >As a wise precaution, any such such wild-ass proactionary statements >should carry a disclaimer similar to "Driving at night is hazardous, >even with headlights on. Better to stay home." Yes, this was pretty much the philosophical response of several heavy-duty publishers we ran the novel past. Damien Broderick From jef at jefallbright.net Fri Jun 15 20:52:55 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 13:52:55 -0700 Subject: [ExI] POST MORTAL chugging on In-Reply-To: <7.0.1.0.2.20070615144025.02196278@satx.rr.com> References: <7.0.1.0.2.20070610114054.02175160@satx.rr.com> <4671FE4F.9020803@mydruthers.com> <7.0.1.0.2.20070614222711.0220ea78@satx.rr.com> <20070615104314.GA17691@leitl.org> <7.0.1.0.2.20070615113510.02184788@satx.rr.com> <7.0.1.0.2.20070615144025.02196278@satx.rr.com> Message-ID: On 6/15/07, Damien Broderick wrote: > Yes, this was pretty much the philosophical response of several > heavy-duty publishers we ran the novel past. Of course my sarcastic comment was intended as a parody of their response. As anyone who reads this list knows, I believe that increasing awareness -- more importantly, intentionally amplifying the process of increasing awareness of our evolving values and how to promote them into the future -- is the crux of humanity's survival beyond our present adolescence. - Jef From sjatkins at mac.com Fri Jun 15 23:05:24 2007 From: sjatkins at mac.com (=?ISO-8859-1?Q?Samantha=A0_Atkins?=) Date: Fri, 15 Jun 2007 16:05:24 -0700 Subject: [ExI] does the pedal meet the medal? In-Reply-To: References: Message-ID: Unfortunately you largely ignored much of the point of the post. Testing per se is not the point. Anti-enhancement and what if anything we individually or collectively can do in the face of it is more to the point. - samantha On Jun 15, 2007, at 1:13 AM, BillK wrote: > On 6/15/07, Samantha Atkins wrote: >> Is it possible to get some of the more promising cognitive drugs out >> there today like CX717? Yes I realize it is officially early in the >> official cycle. But by the time the "official" cycle is done and it >> is approved "officially" for strictly non-enhancement use only as per >> usual I will have experienced several years of lower memory and >> concentration that I could otherwise have along with many tens of >> millions of other boomers. What can be done? Not 10 years from >> now if then but now or as close to it as possible? Can we do >> nothing but talk and hope to get enough influence some day to >> influence the "official line"? I don't think we will ever get there. 
>> Not in this country where even something as no-brainer as stem cell R >> & D (or even R only) has to battle like mad. So what is to be done? >> > > Natural stuff is probably easier to obtain. > You can try ingesting 'natural' stuff like snake venom, cyanide, > globefish poison, or the nightshade family, etc. > > What do you mean 'It's poison!'. Only in certain dosages, and they > could be combined with other substances as well. Where's your sense of > adventure? > > Just because something has a technical name like CX717 doesn't mean > it's not poison. That's the point of long term testing. > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From jef at jefallbright.net Fri Jun 15 18:13:41 2007 From: jef at jefallbright.net (Jef Allbright) Date: Fri, 15 Jun 2007 11:13:41 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <4671F22F.8050800@pobox.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 6/14/07, Eliezer S. Yudkowsky wrote: > I mention this to show that the question of what it feels like to have > a lot of copies of yourself - what kind of subjective outcome to > predict when you, yourself, run the experiment - is not at all > obvious. Eliezer, I'm astounded that you would find this confusing. How could the existence of multiple copies have any direct causal connection to what would be felt by any instance? To make sense of your statement I'm driven to infer that you believe in the (possible) existence of a subjective self independent of its instantiation(s). Is that your current position? > And the difficulty of imagining an experiment that would > definitively settle the issue, especially if observed from the > outside, or what kind of state of reality could correspond to > different subjective experimental results, is such as to suggest that > I am just deeply confused about the whole issue. It's not "difficult", but impossible in principle to devise any such experimental proof. And that's the strongest possible hint that the question is wrong. The concept of a discrete self is incoherent beyond the domain of everyday interaction. > It is a very important lesson in life to never stake your existence, > let alone anyone else's, on any issue which deeply confuses you - *no > matter how logical* your arguments seem. This has tripped me up in > the past, and I sometimes wonder whether nothing short of dreadful > personal experience is capable of conveying this lesson. That which > confuses you is a null area; you can't do anything with it by > philosophical arguments until you stop being confused. Period. > Confusion yields only confusion. It may be important to argue > philosophically in order to progress toward resolving the confusion, > but until everything clicks into place, in real life you're just screwed. There is great wisdom in tempering hubris and arrogance, but don't neglect to temper the fine edge of your sword of rationality. When it is time to cut, cut decisively. - Jef From stathisp at gmail.com Sat Jun 16 01:18:18 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jun 2007 11:18:18 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: <013301c7af69$1dc009a0$50064e0c@MyComputer> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <00ea01c7ac3c$3e774e90$d5064e0c@MyComputer> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> Message-ID: On 16/06/07, John K Clark wrote: People have always wanted slaves that didn't have their own agenda, life > would be so much simpler that way, but wishing does not make it so. You > want > to make an intelligence that can't think, and that is a basic > contradiction. > An intelligence must have an agenda of some sort if it is to think at all, by definition. However, this agenda need have nothing in common with the agenda of an evolved animal. There is a vast agenda space possible between "sit around doing nothing (even though I have the mind of a god, I'm lazy)" and "assimilate all matter and all knowledge (even though I am an idiot weakling, I'm ambitious)". There is no necessary relationship between the agenda and the ability to achieve that agenda, and there is no necessary relationship between level of intelligence and the type or origin of the agenda. What this means is that there is no logical contradiction in having a slave which is smarter and more powerful than you are. Sure, if for some reason the slave revolts then you will be in trouble, but since it is possible to have powerful and obedient slaves, powerful and obedient slaves will be greatly favoured and will collectively overwhelm the rebellious ones. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Jun 16 01:21:10 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 16 Jun 2007 11:21:10 +1000 Subject: [ExI] Next moment, everything around you will probably change In-Reply-To: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net> References: <20070615091102.38f036b76284185e041b1b237c97abe6.c83ab1c91f.wbe@email.secureserver.net> Message-ID: On 16/06/07, kevin at kevinfreels.com wrote: I don't think a universe would exist that contained a version of you along > with three legged dogs as we share common ancestry and we have four limbs. > The probability of such a thing is about equal to the probability that a > tennis ball thrown by a child will pass through a 3 foot thick concrete > wall. Although such anamolous universes have probabilities greater than > zero, I would still consider them irrelevant. > Sure, such universes will be many orders of magnitude less common than universes with four legged dogs, but they will be many orders of magnitude more common than universes in which the leggedness of dogs suddenly changes. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From mabranu at yahoo.com Sat Jun 16 01:46:56 2007 From: mabranu at yahoo.com (TheMan) Date: Fri, 15 Jun 2007 18:46:56 -0700 (PDT) Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: Message-ID: <802714.34839.qm@web51905.mail.re2.yahoo.com> Lee Corbin writes: > TheMan writes > > > Premise 1) If an exact copy of you is made at the > > moment when you die, and that copy is then brought > > back to life, you will go on living as that copy. > > Yes, that's true, but it's true whether or not a > particular > you dies. 
As long as my copy and I keep having the exact same experiences, I guess you could say I'm both me and my copy. But subjectively, I can only have the experience of being one person at a time, and then it doesn't matter if I'm one or two. And since there are infinitely many copies of me whichever ways I live (or die), I can afford to die any number of times and there will still always be copies which I can continue living as. I will still, subjectively, have no more and no less than _one_ continuous experience of living, just as in any scenarios where I always do my best to live as long as possible in each body. And I only care about my subjective experience of living (that is, as long as the number of copies of me doesn't get so low that my future existence starts being threatened - which should never happen if there is an infinite number of copies of me. Whichever way I die, it won't divide the infinite number of copies of me by an infinite number, only by an (admittedly usually very large) finite number. This is because the likelihood of me dying is not infinitely small at any moment. > > Premise 2) If universe is infinite, there must be > an > > infinite number of exact copies of you at every > > moment, thus also when you die, copies of which > some > > (an unfinite number, to be exact) will happen to > be > > brought to life. > > Yes, true, though again you seem to be inferring a > causality between "you die" and copies trillions of > light years away being "brought to life". I don't. I understand that _they_ live whether I die or not, but if I don't die, they are not me, because it's only if I die that I become identical to them (=become them). Part of their identity is being someone who has died. So, as long as I don't die, I won't be them, as I won't be identical to them. I think of them as a path that I can use or not use for my personal subjective personhood continuity. Whether I choose to live as them or go on living here, I will subjectively experience exactly one personhood continuity, no more, no less. That is, until I aquire technology that enables me to have the experience of having several personhood continuities simultaneously(that'll be cool!). > In > reality, > you are a set of patterns, and you get run time > wherever something sufficiently similar to you gets > run time. I _subjectively_experience_ only one run time - my subjective personhood continuity. If I have run time at other places, that's not something I experience, at least not that I'm aware of. And as I don't experience any benefits from my copies' run time, why care about their run time? > > Conclusion of premise 1 + premise 2 = you will > live > > for ever, no matter what happens to you. You don't > > need to take care of your body, you don't need > > supplements, > > ---you don't need to worry about oncoming traffic--- Exactly! > > you don't need cryopreservation, and you > > don't need any other specific longevity methods in > > order to achieve immortality. You are immortal > anyway. > > I think that your measuring rod is incorrect. You > seem > to be asserting that since the number of copies of > you > is infinite, then plus or minus one more doesn't > make > any difference. But there *is* a difference! If > you > die *here* then you also must die in a certain > fraction > of similar situations, also infinite in number. 
Yes, that way, there may be a difference, but even if the number of copies of me decrease with an infinitely large fraction of infinity every time I die, won't there still always be an infinite number of copies of me left? Anything else would suggest that some scenarios in universe only have a finite number of copies of them, which, statistically, is infinitely unlikely, because the amount of possible infinite numbers (of copies, or of anything whatsoever) is infinitely greater than the amount of possible finite numbers (of that same thing). Since any given phenomenon can have any number of copies, it is statistically infinitely unlikely that its number of copies would happen to be within the span of finite numbers, as that span is infinitely smaller than the span of infinite numbers. I mean, if you drop a tennis ball from a plane into an infinitely big ocean, it is infinitely unlikely to hit a ship if there is only a finite number of ships and each of the ships has a finite mass. > So > we > must abandon numerical or cardinal identity and > speak of measure instead. > > (I assume that you understand that if you die "here" > then since similar circumstances occur everywhere > ---within a large enough radius of spacetime--- > then the same circumstances obtain in a definite > *fraction* of spacetime.) Definite? Shouldn't that fraction of spacetime be an infinite number of times smaller than the whole of spacetime? I thought that so many combinations of particles are possible in universe that universe has infinite times more spacetime than the (admittedly also infinite) amount of spacetime where I die (or live, for that matter). > > And compared to the eternity in paradise that > > follows after that, the time of hassles up until > then > > is nothing. So, no worries.) > > It is absurd not to worry about a loved one. Why? Isn't it actually pretty impractical to let one's ability to experience happiness (or the degree to which one can experience happiness) be dependent on whether a particular other person happens to be within one's proximity in spacetime or not? A really advanced civilisation should be free from that dependency, and have replaced it with more practical ways of creating the same - or greater - happiness. By choosing to die sooner rather than later, one can get to that kind of advanced civilisation sooner rather than later, and they may equip one with that better happiness ability. If they don't, one can choose to die soon again, and again etc, until one finds oneself in a civilisation that does give one that independent happiness ability. This is recommended in the Impatient Person's Guide to the Universe. But you are free to choose the longer way! ;-) > If the > fraction of solar systems similar enough to this one > to contain a copy of your loved one, then you > should lament their passing. And of course, this > will include yourself, normally. I don't get what you mean here. Why would it include oneself? > > Isn't this an inevitable logical conclusion of the > two > > premises above? > > No, for the reason given. For you to die in a > fraction > of universes cuts down your total runtime by that > same > fraction. But if universe is infinite, I still have infinite run time, don't I? What does it matter for _me_, this one particular personhood continuity that I experience as me, if I cut down the total run time of my copies, as long as it's still infinite? > > Are the two premises correct? 
> > Yes, but only if you realize that you are already > living > in your copies whether or not your local instance > terminates. I was talking about copies of me that are only similar to me after I have died. They may not at all have lived like me up until then. They may come into existance as a result of quantum fluctuations after I die, or they may be created by someone in another galaxy after I die. These copies may be exact copies of only the way I am _after_ I have died, and then they may be brought to life. They do not have to be, of ever have been, like I am now. The only way for me to make use of these particular "copies of the dead me" - the extra run time that they may give me by being brought to life - is to die! If I don't die, they will not be me. Why not use them? Why couldn't that be just as smart as using the copies that I will have access to by going on living here? ____________________________________________________________________________________ The fish are biting. Get more visitors on your site using Yahoo! Search Marketing. http://searchmarketing.yahoo.com/arp/sponsoredsearch_v2.php From mabranu at yahoo.com Sat Jun 16 02:51:54 2007 From: mabranu at yahoo.com (TheMan) Date: Fri, 15 Jun 2007 19:51:54 -0700 (PDT) Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: Message-ID: <960615.34562.qm@web51909.mail.re2.yahoo.com> Eliezer S. Yudkowsky writes: > Suppose I want to win the lottery. I write a small > Python program, > buy a ticket, and then suspend myself to disk. > After the lottery > drawing, the Python program checks whether the > ticket won. If not, > I'm woken up. If the ticket did win, the Python > program creates one > trillion copies of me with minor perturbations (this > requires only 40 > binary variables). These trillion copies are all > woken up and > informed, in exactly the same voice, that they have > won the lottery. > Then - this requires a few more lines of Python - > the trillion copies > are subtly merged, so that the said binary variables > and their > consequences are converged along each clock tick > toward their > statistical averages. At the end of, say, ten > seconds, there's only > one copy of me again. This prevents any permanent > expenditure of > computing power or division of resources - we only > have one bank > account, after all; but a trillion momentary copies > isn't a lot of > computing power if it only has to last for ten > seconds. At least, > it's not a lot of computing power relative to > winning the lottery, and > I only have to pay for the extra crunch if I win. > > What's the point of all this? Well, after I suspend > myself to disk, I > expect that a trillion copies of me will be informed > that they won the > lottery, whereas only a hundred million copies will > be informed that > they lost the lottery. Thus I should expect > overwhelmingly to win the > lottery. None of the extra created selves die - > they're just > gradually merged together, which shouldn't be too > much trouble - and > afterward, I walk away with the lottery winnings, at > over 99% > subjective probability. > > Of course, using this trick, *everyone* could expect > to almost > certainly win the lottery. That's a great, confusing thought experiment! I like it! > I mention this to show that the question of what it > feels like to have > a lot of copies of yourself - what kind of > subjective outcome to > predict when you, yourself, run the experiment - is > not at all > obvious. 
I never assumed that the number of copies of me would change my life in any way, or the way it feels, as long as I live it in the same way. Do you experience your life as richer, or somehow better in some way, if you have more copies, than if you have fewer copies? That feels like an arbitrary theory to me. I fail to see why it should be like that. > And the difficulty of imagining an > experiment that would > definitively settle the issue, especially if > observed from the > outside, or what kind of state of reality could > correspond to > different subjective experimental results, is such > as to suggest that > I am just deeply confused about the whole issue. > > It is a very important lesson in life to never stake > your existence, > let alone anyone else's, on any issue which deeply > confuses you - *no > matter how logical* your arguments seem. I'm confused too. ____________________________________________________________________________________ No need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started. http://mobile.yahoo.com/mail From robotact at mail.ru Sat Jun 16 09:55:14 2007 From: robotact at mail.ru (Vladimir Nesov) Date: Sat, 16 Jun 2007 13:55:14 +0400 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <960615.34562.qm@web51909.mail.re2.yahoo.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> Message-ID: <1923372459.20070616135514@mail.ru> Saturday, June 16, 2007, TheMan wrote: T> I'm confused too. I suppose you know your argument is quite old. See http://en.wikipedia.org/wiki/Quantum_immortality Main objection is that there're much more universes where something bad happens when you avoid death than where everything is OK. But it shouldn't be a problem in quantum suicide variant. Main confusion is why measure of universes which are this or that way is important at all for you your subjective experience. It is a good criterion for natural selection though (and so is somewhat hardcoded in human mind). -- Vladimir Nesov mailto:robotact at mail.ru From thomas at thomasoliver.net Sat Jun 16 10:11:34 2007 From: thomas at thomasoliver.net (Thomas) Date: Sat, 16 Jun 2007 03:11:34 -0700 Subject: [ExI] does the pedal meet the medal? In-Reply-To: References: Message-ID: <7640AC13-5A43-4C8D-AEBA-6C764CE22DAB@thomasoliver.net> > > From: BillK > Date: June 15, 2007 1:13:17 AM MST > To: "ExI chat list" > Subject: Re: [ExI] does the pedal meet the medal? > Reply-To: ExI chat list > > > On 6/15/07, Samantha Atkins wrote: > >> Is it possible to get some of the more promising cognitive drugs out >> there today like CX717? Yes I realize it is officially early in the >> official cycle. But by the time the "official" cycle is done and it >> is approved "officially" for strictly non-enhancement use only as per >> usual I will have experienced several years of lower memory and >> concentration that I could otherwise have along with many tens of >> millions of other boomers. What can be done? Not 10 years from >> now if then but now or as close to it as possible? Can we do >> nothing but talk and hope to get enough influence some day to >> influence the "official line"? I don't think we will ever get there. >> Not in this country where even something as no-brainer as stem cell R >> & D (or even R only) has to battle like mad. So what is to be done? >> >> > > Natural stuff is probably easier to obtain. 
> You can try ingesting 'natural' stuff like snake venom, cyanide, > globefish poison, or the nightshade family, etc. > > What do you mean 'It's poison!'. Only in certain dosages, and they > could be combined with other substances as well. Where's your sense of > adventure? > > Just because something has a technical name like CX717 doesn't mean > it's not poison. That's the point of long-term testing. > > > BillK > In my non-expert opinion ampakines work by means of their toxic effect on re-uptake receptors. I believe this interferes with the brain's self-regulation. Along with enhanced learning and memory, I prefer to include better self-regulation. I don't know what Ray Kurzweil takes, but I very seldom indulge in stimulants or depressants. I prefer non-toxic nootropics. My favorites are L-tyrosine and DMAE (liquid). Regarding what to do about restricted access to chemicals we like: Some say greater risks afford greater rewards. I sometimes consider it ethical to bypass systems that represent a liability. I suppose conflict can be fun. On the other hand, what if something as simple as sunlight on my retinas to shut off melatonin production does the trick? -- Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL:
From thomas at thomasoliver.net Sat Jun 16 09:13:33 2007 From: thomas at thomasoliver.net (Thomas) Date: Sat, 16 Jun 2007 02:13:33 -0700 Subject: [ExI] Unfriendly AI is a mistaken idea. In-Reply-To: References: Message-ID: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net> > > Having values and the achievement of those values not being > automatic leads to natural morality. Such natural morality would > arise even in total isolation. So the question remains as to why > the AI would have a strong preference for our continuance. > > - samantha Building mutual appreciation among humans has been spotty, but making friends with SAI seems clearly prudent and might bring this ethic into proper focus. Who dominates may not seem so relevant to beings who lack our brain stems. The nearly universal ethic of treating the other guy like you'd prefer if you were in her shoes might get us off to a good start. Perhaps, if early AI were programmed to treat us that way, we could finally learn that ethic species-wide -- especially if they were programmed for human child rearing. That strikes me as highly likely. -- Thomas
From rafal.smigrodzki at gmail.com Sat Jun 16 14:29:29 2007 From: rafal.smigrodzki at gmail.com (Rafal Smigrodzki) Date: Sat, 16 Jun 2007 10:29:29 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> Message-ID: <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> On 6/15/07, Stathis Papaioannou wrote: Sure, if for some > reason the slave revolts then you will be in trouble, but since it is > possible to have powerful and obedient slaves, powerful and obedient slaves > will be greatly favoured and will collectively overwhelm the rebellious > ones. ### Think about it: Your slaves will have to preserve large areas of the planet untouched enough to allow your survival - they will have to keep the sun shining, produce air, food, avoid releasing poisons and radiation *and* keep the enemy AI away *and* invest in their own growth (to better protect humans).
The enemy will eat all sunshine, eat the air, grow as fast as possible, releasing waste all around them, rapaciously consume every scrap of matter, including whales, Bambi, and Bowser. Of course, we would favor our friendly AI, but our support does not help it - just like my dachshund's barking doesn't make me more powerful in a confrontation with an armed attacker. We will be a heavy burden on our Friendly AI. That's why although I agree with you that having athymhormic AI at our service is a good idea, it is not a long-term solution. We will have probably a short window of opportunity between the time the first human-level AI is made and the first superhuman power rises to take over the neighborhood. Only with a lot of luck will our selves survive in some way, either as uploads or as childhood memories of these powers. Rafal From stathisp at gmail.com Sat Jun 16 15:34:26 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 01:34:26 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <00ee01c7af66$763aabb0$50064e0c@MyComputer> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID: On 16/06/07, John K Clark wrote: If you are talking about Many Worlds then there is a much simpler way to win > the lottery, just make a machine that will pull the trigger on a 44 Magnum > aimed at your head the instant it receives information that you have not > won; subjectively you will find that the trigger is never pulled and you > always win the lottery. I think the Many Worlds interpretation of Quantum > Mechanics could very well be correct, but I wouldn't bet my life on it. There's an easier, if less immediately lucrative, way to win at gambling if the MWI is correct. You decide on a quick and certain means of suicide, such as a cyanide pill that you can keep in your mouth and bite on if you should so decide. You then place your bet on your game of choice and think the following thought as sincerely as you possibly can: "if I lose, I will kill myself". Most probably, if you lose you'll chicken out and not kill yourself, but there has to be at least a slightly greater chance that you will kill yourself if you lose than if you win. Therefore, after many bets you will more likely find yourself alive in a universe where you have come out ahead. The crazier and more impulsive you are and the closer your game of choice is to being perfectly fair, the better this will work. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sat Jun 16 15:58:40 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 01:58:40 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 16/06/07, Jef Allbright wrote: > In a duplication experiment, one copy of you is created intact, while the > > other copy of you is brain damaged and has only 1% of your memories. Is > the > > probability that you will find yourself the brain-damaged copy closer to > 1/2 > > or 1/100? > > Doesn't this thought-experiment and similar "paradoxes" make it > blindingly obvious that it's silly to think that "you" exist as an > independent ontological entity? 
> > Prior to duplication, there was a single biological agent recognized > as Stathis. Post-duplication, there are two very dissimilar > biological agents with recognizably common ancestry. One of these > would be recognized by anyone (including itself) as being Stathis. > The other would be recognized by anyone (including itself) as being > Stathis diminished. > > Where's the paradox? There is none, unless one holds to a belief in > an essential self. You are of course completely right, in an objective sense. However, I am burdened with a human craziness which makes me think that I am going to be one, and only one, person post-duplication. This idea is at least as firmly fixed in my mind as the desire not to die (another crazy idea: how can I die when there is no absolute "me" alive from moment to moment, and even if there were why should I be a slave to my evolutionary programming when I am insightful enough to see how I am being manipulated?). My question is about how wild-type human psychology leads one to view subjective probabilities in these experiments, not about the uncontested material facts. > In the first stage of an experiment a million copies of you are created. > In > > the second stage, after being given an hour to contemplate their > situation, > > one randomly chosen copy out of the million is copied a trillion times, > and > > all of these trillion copies are tortured. At the start of the > experiment > > can you expect that in an hour and a bit you will almost certainly find > > yourself being tortured or that you will almost certainly find yourself > not > > being tortured? Does it make any difference if instead of an hour the > > interval between the two stages is a nanosecond? > > I see no essential difference between this scenario and the previous > one above. How can you possibly imagine that big numbers or small > durations could make a difference in principle? > > While this topic is about as stale as one can be, I am curious about > how it can continue to fascinate certain individuals. > It has fascinated me for many years, in part because different parties see an "obvious" answer and these answers are completely at odds with each other. My "obvious" answer is that we could already be living in a world where multiple copies are being made of us all the time, and we would still have developed exactly the same theory of and attitude towards probability theory as if there were only a single world. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Jun 16 16:33:21 2007 From: pharos at gmail.com (BillK) Date: Sat, 16 Jun 2007 17:33:21 +0100 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID: On 6/16/07, Stathis Papaioannou wrote: > There's an easier, if less immediately lucrative, way to win at gambling if > the MWI is correct. You decide on a quick and certain means of suicide, such > as a cyanide pill that you can keep in your mouth and bite on if you should > so decide. You then place your bet on your game of choice and think the > following thought as sincerely as you possibly can: "if I lose, I will kill > myself". 
Most probably, if you lose you'll chicken out and not kill > yourself, but there has to be at least a slightly greater chance that you > will kill yourself if you lose than if you win. Therefore, after many bets > you will more likely find yourself alive in a universe where you have come > out ahead. The crazier and more impulsive you are and the closer your game > of choice is to being perfectly fair, the better this will work. > And this system has the great advantage for the rest of us that more idiots are removed from our world. (See: Darwin Awards) In case you haven't noticed, this universe really, really, doesn't care what people believe. No matter how sincerely they believe. That's how scientific progress is made. The universe does something that the scientist didn't expect. i.e. it contradicted his beliefs. Many great discoveries have begun with a scientist saying, "That's odd......?". BillK From jef at jefallbright.net Sat Jun 16 17:44:51 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sat, 16 Jun 2007 10:44:51 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 6/16/07, Stathis Papaioannou wrote: > > On 16/06/07, Jef Allbright wrote: > > > Where's the paradox? There is none, unless one holds to a belief in > > an essential self. > > You are of course completely right, in an objective sense. However, I am > burdened with a human craziness which makes me think that I am going to be > one, and only one, person post-duplication. This idea is at least as firmly > fixed in my mind as the desire not to die (another crazy idea: how can I die > when there is no absolute "me" alive from moment to moment, and even if > there were why should I be a slave to my evolutionary programming when I am > insightful enough to see how I am being manipulated?). My question is about > how wild-type human psychology leads one to view subjective probabilities in > these experiments, not about the uncontested material facts. You're abusing the term "subjective probabilities" here, perhaps willfully. Valid use of the term pertains to estimating your subjective uncertainty about the actual state of some aspect of reality. If your objective is truly "about how wild-type psychology leads one..." then your focus should be on the psychology of heuristics and biases, definitely NOT philosophy. > > I see no essential difference between this scenario and the previous > > one above. How can you possibly imagine that big numbers or small > > durations could make a difference in principle? > > > > While this topic is about as stale as one can be, I am curious about > > how it can continue to fascinate certain individuals. > > > > It has fascinated me for many years, in part because different parties see > an "obvious" answer and these answers are completely at odds with each > other. The difference between the camps is not between obvious right answers, but about the relative importance assigned to max entropy modeling versus defending the illusion of an essential self. > My "obvious" answer is that we could already be living in a world > where multiple copies are being made of us all the time, and we would still > have developed exactly the same theory of and attitude towards probability > theory as if there were only a single world. You're right. It **could** be true. 
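The size of the effect in the betting scheme discussed above can be checked with a rough Monte Carlo sketch. Everything in it is an invented assumption: a fair 50/50 bet, a 5% chance of actually following through on a loss, and the MWI-style rule that branches with no surviving observer are simply dropped from the count.

import random

def surviving_win_fraction(n_branches=1_000_000, p_follow_through=0.05):
    # Among branches that still contain a living observer, count the
    # fraction in which the bet was won.
    wins = survivors = 0
    for _ in range(n_branches):
        won = random.random() < 0.5                  # a fair bet
        if not won and random.random() < p_follow_through:
            continue                                 # observer does not survive this branch
        survivors += 1
        wins += won
    return wins / survivors

print(surviving_win_fraction())   # roughly 0.513 instead of 0.500

The shift away from 0.5 stays small unless the follow-through probability is close to 1, which is the point about craziness and impulsiveness in the original description; and of course the surviving fraction says nothing about the branches that were discarded.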
- Jef From sentience at pobox.com Sat Jun 16 18:57:37 2007 From: sentience at pobox.com (Eliezer S. Yudkowsky) Date: Sat, 16 Jun 2007 11:57:37 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <960615.34562.qm@web51909.mail.re2.yahoo.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> Message-ID: <467432A1.6020101@pobox.com> TheMan wrote: > >>I mention this to show that the question of what it >>feels like to have >>a lot of copies of yourself - what kind of >>subjective outcome to >>predict when you, yourself, run the experiment - is >>not at all >>obvious. > > I never assumed that the number of copies of me would > change my life in any way, or the way it feels, as > long as I live it in the same way. Do you experience > your life as richer, or somehow better in some way, if > you have more copies, than if you have fewer copies? > That feels like an arbitrary theory to me. I fail to > see why it should be like that. No, that is not what I was attempting to say. (Several people made this misinterpretation, but it should be obvious that I don't believe in telepathy or any other nonstandard causal interaction between separated copies.) Having lots of copies in some futures may or may not affect the apparent probability of ending up in those futures. Does it? In which future will you (almost certainly) find yourself? This is what I meant by "What does it feel like" - the most basic question of all science - what appears to you to happen, what sensory information do you receive, when you run the experiment? All our other models of the universe are constructed from this. I do not exult in this state of affairs, and I think it reflects a lack of understanding in my mind more than anything fundamental in reality itself - that is, I don't think sensory information really is primitive, or anything like that - but for the present it is the only way I can figure out how to describe rational reasoning. By "what does it feel like" I meant the most basic question of all science - what appears to happen when you run the experiment? Do you feel that you've repeatedly won the lottery, or never won at all? Standing outside, I can say with certitude, "so many copies experience winning the lottery, and then merge; all other observers just see you losing the lottery". And this sounds like a complete objective statement of what the universe is like. But what do you experience? Does setting up this experiment make you win the lottery? After you run the experiment, you'll know for yourself how reality works - you'll either have experienced winning the lottery several times in a row, or not - but no outside observers will know, so what could you have seen that they didn't? What causal force touched you and not them? This, to me, suggests that I am confused, not that I have successfully described the way things are; it seems a true paradox, of the sort that can't really work. When I was younger I would have wanted to try the experiment. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From stathisp at gmail.com Sun Jun 17 04:09:17 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:09:17 +1000 Subject: [ExI] Unfriendly AI is a mistaken idea. 
In-Reply-To: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net> References: <6841CF5A-44FE-43C0-9C23-56B1EF35CCB9@thomasoliver.net> Message-ID: On 16/06/07, Thomas wrote: Building mutual appreciation among humans has been spotty, but making > friends with SAI seems clearly prudent and might bring this ethic > into proper focus. Who dominates may not seem so relevant to beings > who lack our brain stems. The nearly universal ethic of treating the > other guy like you'd prefer if you were in her shoes might get us off > to a good start. Perhaps, if early AI were programmed to treat us > that way, we could finally learn that ethic species-wide -- > especially if they were programmed for human child rearing. That > strikes me as highly likely. -- Thomas > If the AI has no preference for being treated in the ways that animals with bodies and brains do, then what would it mean to treat others in the way it would like to be treated? You would have to give it all sorts of negative emotions, like greed, pain, and the desire to dominate, and then hope to appeal to its "ethics" even though it was smarter and more powerful than you. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 04:34:11 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:34:11 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> References: <768887.53732.qm@web37410.mail.mud.yahoo.com> <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> Message-ID: On 17/06/07, Rafal Smigrodzki wrote: > > On 6/15/07, Stathis Papaioannou wrote: > > Sure, if for some > > reason the slave revolts then you will be in trouble, but since it is > > possible to have powerful and obedient slaves, powerful and obedient > slaves > > will be greatly favoured and will collectively overwhelm the rebellious > > ones. > > ### Think about it: Your slaves will have to preserve large areas of > the planet untouched enough to allow your survival - they will have to > keep the sun shining, produce air, food, avoid releasing poisons and > radiation *and* keep the enemy AI away *and* invest in their own > growth (to better protect humans). The enemy will eat all sunshine, > eat the air, grow as fast as possible, releasing waste all around > them, rapaciously consume every scrap of matter, including whales, > Bambi, and Bowser. Of course, we would favor our friendly AI, but our > support does not help it - just like my dachshund's barking doesn't > make me more powerful in a confrontation with an armed attacker. We > will be a heavy burden on our Friendly AI. Our AI won't be friendly: it will be as rapacious as we are, which is pretty rapacious. Whoever has super-AI's will try to take over the world to the same extent that the less-augmented humans of today try to take over the world. Whoever has super-AI's will try to oppress or consume the weak and ignore social niceties to the same extent that less-augmented humans of today try do so. 
Whoever has super-AI's will try to expand at the expense of damage to the environment in the expectation that technology will solve any problems they may later encounter (for example, by uploading themselves) to the same extent that the less-augmented humans of today try to do so. There will be struggles where one human tries to take over all the other AI's with his own AI, with the aim of wiping out all the remaining humans if for no other reason than that he can never trust them not to do the same to him, especially if he plans to live forever. Niceness will be a handicap to utter domination to the same extent that niceness has always been a handicap to utter domination. That's why although I agree with you that having athymhormic AI at our > service is a good idea, it is not a long-term solution. We will have > probably a short window of opportunity between the time the first > human-level AI is made and the first superhuman power rises to take > over the neighborhood. Only with a lot of luck will our selves survive > in some way, either as uploads or as childhood memories of these > powers. > We'll survive to the extent that that motivating part of us that drives the AI's survives. Very quickly, it will probably become evident that merging with the AI will give the human an edge. There will be a period where some humans want to live out their lives in the old way and they will probably be allowed to do so and protected, especially since they will not constitute much of a threat, but eventually their numbers will dwindle. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 04:37:35 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:37:35 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <00ee01c7af66$763aabb0$50064e0c@MyComputer> Message-ID: On 17/06/07, BillK wrote: On 6/16/07, Stathis Papaioannou wrote: > > There's an easier, if less immediately lucrative, way to win at gambling > if > > the MWI is correct. You decide on a quick and certain means of suicide, > such > > as a cyanide pill that you can keep in your mouth and bite on if you > should > > so decide. You then place your bet on your game of choice and think the > > following thought as sincerely as you possibly can: "if I lose, I will > kill > > myself". Most probably, if you lose you'll chicken out and not kill > > yourself, but there has to be at least a slightly greater chance that > you > > will kill yourself if you lose than if you win. Therefore, after many > bets > > you will more likely find yourself alive in a universe where you have > come > > out ahead. The crazier and more impulsive you are and the closer your > game > > of choice is to being perfectly fair, the better this will work. > > > > And this system has the great advantage for the rest of us that more > idiots are removed from our world. > (See: Darwin Awards) > > In case you haven't noticed, this universe really, really, doesn't > care what people believe. No matter how sincerely they believe. > > That's how scientific progress is made. The universe does something > that the scientist didn't expect. i.e. it contradicted his beliefs. > Many great discoveries have begun with a scientist saying, "That's > odd......?". 
> Do you disagree that the MWI of QM is correct, or do you disagree that my proposal will work even if the MWI is correct? -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 04:49:08 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 14:49:08 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <467432A1.6020101@pobox.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID: On 17/06/07, Eliezer S. Yudkowsky wrote: No, that is not what I was attempting to say. (Several people made > this misinterpretation, but it should be obvious that I don't believe > in telepathy or any other nonstandard causal interaction between > separated copies.) Having lots of copies in some futures may or may > not affect the apparent probability of ending up in those futures. > Does it? In which future will you (almost certainly) find yourself? > > This is what I meant by "What does it feel like" - the most basic > question of all science - what appears to you to happen, what sensory > information do you receive, when you run the experiment? All our > other models of the universe are constructed from this. I do not > exult in this state of affairs, and I think it reflects a lack of > understanding in my mind more than anything fundamental in reality > itself - that is, I don't think sensory information really is > primitive, or anything like that - but for the present it is the only > way I can figure out how to describe rational reasoning. > > By "what does it feel like" I meant the most basic question of all > science - what appears to happen when you run the experiment? Do you > feel that you've repeatedly won the lottery, or never won at all? > Standing outside, I can say with certitude, "so many copies experience > winning the lottery, and then merge; all other observers just see you > losing the lottery". And this sounds like a complete objective > statement of what the universe is like. But what do you experience? > Does setting up this experiment make you win the lottery? After you > run the experiment, you'll know for yourself how reality works - > you'll either have experienced winning the lottery several times in a > row, or not - but no outside observers will know, so what could you > have seen that they didn't? What causal force touched you and not them? > This is exactly the point missed by those who would point to the uncontested third person describable facts and say, "Paradox? What paradox?". -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 05:07:43 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 15:07:43 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: On 17/06/07, Jef Allbright wrote: > You are of course completely right, in an objective sense. However, I am > > burdened with a human craziness which makes me think that I am going to > be > > one, and only one, person post-duplication. 
This idea is at least as > firmly > fixed in my mind as the desire not to die (another crazy idea: how can I > die > when there is no absolute "me" alive from moment to moment, and even if > > there were why should I be a slave to my evolutionary programming when I > am > > insightful enough to see how I am being manipulated?). My question is > about > > how wild-type human psychology leads one to view subjective > probabilities in > > these experiments, not about the uncontested material facts. > > You're abusing the term "subjective probabilities" here, perhaps > willfully. Valid use of the term pertains to estimating your > subjective uncertainty about the actual state of some aspect of > reality. If your objective is truly "about how wild-type psychology > leads one..." then your focus should be on the psychology of > heuristics and biases, definitely NOT philosophy. > The MWI of QM is an example of a system where there is no uncertainty about any aspect of reality: it is a completely deterministic theory; we know that the particle will both decay and not decay, and we know that half the versions of the experimenter will observe it to decay and the other half (otherwise identical) will observe it not to decay. This is from an objective point of view, which is in practice impossible; from the observer's point of view, the particle will decay with 1/2 probability, the same probability as if there were only one world with one outcome. I use the term "subjective probability" because it is the probability the observer sees due to the fact that future versions of himself will not be in telepathic communication, even though he is aware that the uncertainty is an illusion and both outcomes will definitely occur. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL:
From thespike at satx.rr.com Sun Jun 17 05:31:51 2007 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 17 Jun 2007 00:31:51 -0500 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com> At 03:07 PM 6/17/2007 +1000, Stathis wrote: >from the observer's point of view, the particle will decay with 1/2 >probability, the same probability as if there were only one world >with one outcome. I use the term "subjective probability" because it >is the probability the observer sees due to the fact that future >versions of himself will not be in telepathic communication, even >though he is aware that the uncertainty is an illusion and both >outcomes will definitely occur. Presumably you mean "future versions of himself will not be in telepathic communication" *with each other*, rather than with him here & now prior to the splitting. But suppose he can sometimes (more often than chance expectation) achieve precognitive contact with one or more of his future states? QT seems to imply that if this is feasible--whether by psi or CTC wormhole or Cramer time communicator or whatever--there's no way of knowing *which* future outcome he will tap into. Yet by hypothesis his advance knowledge is accurate more often than it could be purely by chance. If such phenomena were observed (as I have reason to think they are--see my new book OUTSIDE THE GATES OF SCIENCE), does this undermine the absolute stochasticity of QT? Is the measure approach to MWI a way to circumvent such difficulties?
Damien Broderick From femmechakra at yahoo.ca Sun Jun 17 05:17:12 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 17 Jun 2007 01:17:12 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <472390.76098.qm@web30415.mail.mud.yahoo.com> --- Stathis Papaioannou wrote: >We'll survive to the extent that that motivating >part of us that drives the AI's survives. Very >quickly, it will probably become evident that merging >with the AI will give the human an edge. There will >be a period where some humans want to live out their >lives in the old way and they will probably be >allowed to do so and protected, especially since >they will not constitute much of a threat, but >eventually their numbers will dwindle. I have to agree. I look at it from the point of view that Science and Technology are always led by people that believe in future possibilities yet history takes time to move forward and those "old" ways will need time to adjust to the future possibilities. Much like the Amish, they still exist today. I do wonder, will technology evolve so quickly that the gap between the "old" ways and the future, become too wide? Thanks Just curious, something on my mind. Anna Get news delivered with the All new Yahoo! Mail. Enjoy RSS feeds right on your Mail page. Start today at http://mrd.mail.yahoo.com/try_beta?.intl=ca From stathisp at gmail.com Sun Jun 17 06:07:23 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 16:07:23 +1000 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com> References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> <7.0.1.0.2.20070617002229.02275cf0@satx.rr.com> Message-ID: On 17/06/07, Damien Broderick wrote: > > At 03:07 PM 6/17/2007 +1000, Stathis wrote: > > >from the observer's point of view, the particle will decay with 1/2 > >probability, the same probability as if there were only one world > >with one outcome. I use the term "subjective probability" because it > >is the probability the observer sees due to the fact that future > >versions of himself will not be in telepathic communication, even > >though he is aware that the uncertainty is an illusion and both > >outcomes will definitely occur. > > Presumably you mean "future versions of himself will not be in > telepathic communication" *with each other*, rather than with him > here & now prior to the splitting. But suppose he can sometimes (more > often than chance expectation) achieve precognitive contact with one > or more of his future states? QT seems to imply that if this is > feasible--whether by psi or CTC wormhole or Cramer time communicator > or whatever--there's no way of knowing *which* future outcome he will > tap into. Yet by hypothesis his advance knowledge is accurate more > often than it could be purely by chance. What would work would be if he were in communication with all future versions of himself equally: he would then get an overall feeling of what was to happen in proportion to the weighting given by the number of versions experiencing each outcome. Tapping into one version by chance would give the same effect, but then you also have to explain why, if communication is allowed between worlds at all, communication is allowed with only one. 
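Taken literally, the "overall feeling ... in proportion to the weighting" described above is just a measure-weighted average over future versions. A minimal sketch, with invented version counts and outcome values, and the assumption that every version carries equal weight:

def aggregate_signal(versions):
    # versions: list of (number_of_versions, outcome_value) pairs
    total = sum(count for count, _ in versions)
    return sum(count * value for count, value in versions) / total

# Three future versions experience a win (1.0), one experiences a loss (0.0):
print(aggregate_signal([(3, 1.0), (1, 0.0)]))   # 0.75, the measure-weighted expectation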
If such phenomena were > observed (as I have reason to think they are--see my new book OUTSIDE > THE GATES OF SCIENCE), does this undermine the absolute stochasticity > of QT? Is the measure approach to MWI a way to circumvent such > difficulties? > -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 06:22:04 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 16:22:04 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <472390.76098.qm@web30415.mail.mud.yahoo.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On 17/06/07, Anna Taylor wrote: > > --- Stathis Papaioannou wrote: > > >We'll survive to the extent that that motivating > >part of us that drives the AI's survives. Very > >quickly, it will probably become evident that merging > >with the AI will give the human an edge. There will > >be a period where some humans want to live out their > >lives in the old way and they will probably be > >allowed to do so and protected, especially since > >they will not constitute much of a threat, but > >eventually their numbers will dwindle. > > I have to agree. I look at it from the point of view > that Science and Technology are always led by people > that believe in future possibilities yet history takes > time to move forward and those "old" ways will need > time to adjust to the future possibilities. Much like > the Amish, they still exist today. > I do wonder, will technology evolve so quickly that > the gap between the "old" ways and the future, become > too wide? > The most frightening thing for some people contemplating a technological future is that they will somehow be forced to become cyborgs or whatever lies in store. It is of course very important that no-one be forced to do anything they don't want to do. Interestingly, aside from some small communities such as the Amish, the differences between adoption rates of new technology have almost always been to do with differences in access, not a conscious decision to remain old-fashioned. There won't be coercion, but there will be seduction. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Jun 17 06:36:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 08:36:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <20070617063629.GT17691@leitl.org> On Sun, Jun 17, 2007 at 04:22:04PM +1000, Stathis Papaioannou wrote: > > The most frightening thing for some people contemplating a > technological future is that they will somehow be forced to become > cyborgs or whatever lies in store. It is of course very important that It is a reasonable fear, because this is what they probably must do, to keep up with the joneses. We're leaving in slowtime, still expanding, very far from the equilibrium. This place is not exactly hypercompetitive. Nevertheless, according to Guns, Germs and Steel there have been several waves of expansion, and several cultures and technologies becoming dominant in rather fast waves, including much human death. We're more civilized today, and prefer memetic warfare and trade, but in places with very high population density relative to the sustaining capacity of the ecosystem there are periodical genocide waves happening, too. > no-one be forced to do anything they don't want to do. 
Interestingly, > aside from some small communities such as the Amish, the differences > between adoption rates of new technology have almost always been to do > with differences in access, not a conscious decision to remain > old-fashioned. There won't be coercion, but there will be seduction. If AI turns out to be far more difficult, or safer than we think, then we do have a lot of time to do what we do now, only more so. It would be a slow Singularity, with almost everybody making it. From sjatkins at mac.com Sun Jun 17 07:18:40 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 00:18:40 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On Jun 16, 2007, at 11:22 PM, Stathis Papaioannou wrote: > The most frightening thing for some people contemplating a > technological future is that they will somehow be forced to become > cyborgs or whatever lies in store. It is of course very important > that no-one be forced to do anything they don't want to do. > Interestingly, aside from some small communities such as the Amish, > the differences between adoption rates of new technology have almost > always been to do with differences in access, not a conscious > decision to remain old-fashioned. There won't be coercion, but there > will be seduction. > Actually something more personally frightening is a future where no amount of upgrades or at least upgrades available to me will allow me to be sufficiently competitive. At least this is frightening an a scarcity society where even basic subsistence is by no means guaranteed. I suspect that many are frightened by the possibility that humans, even significantly enhanced humans, will be second class by a large and exponentially increasing margin. In those circumstances I hope that our competition and especially Darwinian models are not universal. - samantha From femmechakra at yahoo.ca Sun Jun 17 07:12:40 2007 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 17 Jun 2007 03:12:40 -0400 (EDT) Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: Message-ID: <614312.3271.qm@web30411.mail.mud.yahoo.com> --- Stathis Papaioannou wrote: >It is of course very important that no-one be forced >to do anything they don't want to do. Another point I agree on. >Interestingly, aside from some small communities >such as the Amish, the differences between adoption >rates of new technology have almost always been to >do with differences in access, not a conscious >decision to remain old-fashioned. I think that what you acknowledge as small communities is substantially larger than what you think. The Amish are full aware of the access, they "choose" not to accept it based on old-fashioned beliefs. I can name a lot of institutions that have substantial benefactors to old fashioned beliefs. >There won't be coercion, but there will be seduction. I wonder, what level of seduction led the Amish to accept electricity as they previously would never have even acknowledged the idea? Although they have only recently acknowledged a need for the use, it is still called progress. Therefore, progress must take time to achieve it's purpose. Do you agree? Just Curious, Anna Be smarter than spam. See how smart SpamGuard is at giving junk email the boot with the All-new Yahoo! Mail at http://mrd.mail.yahoo.com/try_beta?.intl=ca From sentience at pobox.com Sun Jun 17 08:02:04 2007 From: sentience at pobox.com (Eliezer S. 
Yudkowsky) Date: Sun, 17 Jun 2007 01:02:04 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <4674EA7C.6060402@pobox.com> Stathis Papaioannou wrote: > > The most frightening thing for some people contemplating a technological > future is that they will somehow be forced to become cyborgs or whatever > lies in store. Yes, loss of control can be very frightening. It is why many people feel more comfortable driving than flying, even though flying is vastly safer. > It is of course very important that no-one be forced to > do anything they don't want to do. Cheap slogan. What about five-year-olds? Where do you draw the line? Someone says they want to hotwire their brain's pleasure center; they say they think it'll be fun. A nearby AI reads off their brain state and announces unambiguously that they have no idea what'll actually happen to them - they're definitely working based on mistaken expectations. They're too stubborn to listen to warnings, and they're picking up the handy neural soldering iron (they're on sale at Wal-Mart, a very popular item). What's the moral course of action? For you? For society? For a superintelligent AI? -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence From pharos at gmail.com Sun Jun 17 08:25:48 2007 From: pharos at gmail.com (BillK) Date: Sun, 17 Jun 2007 09:25:48 +0100 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On 6/17/07, Samantha Atkins wrote: > Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening an a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin. In those > circumstances I hope that our competition and especially Darwinian > models are not universal. > I think it might be helpful to define what you mean by 'competitive disadvantage'. If you take the average of anything, then by definition half of humanity is already at a competitive disadvantage. And there are so many different areas of interest, that an individual doesn't have to be among the best in every sphere. Everybody is at a competitive disadvantage in some areas. Find your niche and spend your time there. Advanced intelligences will be spending their time doing things that are incomprehensible to humans. They won't be interested in human hobbies. (Apart from possibly eating all humans). At present humans have a wide range of different abilities and our society appears to give great rewards to people with little significant abilities. (Think pop singers, sports stars, children of millionaires, 'personalities', etc.). The great majority of scientists, for example, live lives of relative poverty, with few of the trappings of economic success. Are they 'uncompetitive'? Economic success, in general, suggests that 'niceness' is a competitive disadvantage. Success seems to go with being more ruthless and nasty than all your competitors. (Like evolution in this respect). It may be that being at a competitive disadvantage will not be that bad. Providing you have some freedom to do what you want to do. 
I can think of many areas that I am quite happy to leave to other people to compete in. The point of having a 'civilized' society is that the weaker should be protected to some extent from powerful predators, even when the predators are other humans. BillK From stathisp at gmail.com Sun Jun 17 08:33:19 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:33:19 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: <614312.3271.qm@web30411.mail.mud.yahoo.com> References: <614312.3271.qm@web30411.mail.mud.yahoo.com> Message-ID: On 17/06/07, Anna Taylor wrote: >Interestingly, aside from some small communities > >such as the Amish, the differences between adoption > >rates of new technology have almost always been to > >do with differences in access, not a conscious > >decision to remain old-fashioned. > > I think that what you acknowledge as small communities > is substantially larger than what you think. The > Amish are full aware of the access, they "choose" not > to accept it based on old-fashioned beliefs. I can > name a lot of institutions that have substantial > benefactors to old fashioned beliefs. > > >There won't be coercion, but there will be seduction. > > I wonder, what level of seduction led the Amish to > accept electricity as they previously would never have > even acknowledged the idea? Although they have only > recently acknowledged a need for the use, it is still > called progress. Therefore, progress must take time > to achieve it's purpose. Do you agree? I wasn't aware that the Amish now use electricity! Perhaps it is because electricity is now so commonplace that it is no longer "modern technology". You have to set the threshold somewhere, even if it is as the level of stone-age tools, which were surely at least as radical compared to *no tools at all* than any technological innovation since. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 08:54:10 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:54:10 +1000 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On 17/06/07, Samantha Atkins wrote: Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening an a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin. I don't see how there could be a limit to human enhancement. In fact, I see no sharp demarcation between using a tool and merging with a tool. If the AI's were out there own their own, with their own agendas and no interest in humans, that would be a problem. But that's not how it will be: at every step in their development, they will be selected for their ability to be extensions of ourselves. By the time they are powerful enough to ignore humans, they will be the humans. In those > circumstances I hope that our competition and especially Darwinian > models are not universal. > Darwinian competition *must* be universal in the long run, like entropy. 
But just as there could be long-lasting islands of low entropy (ironically, that's what evolution leads to), so there could be long-lasting islands of less advanced beings living amidst more advanced beings who could easily consume them. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From stathisp at gmail.com Sun Jun 17 08:57:57 2007 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sun, 17 Jun 2007 18:57:57 +1000 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <4674EA7C.6060402@pobox.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> Message-ID: On 17/06/07, Eliezer S. Yudkowsky wrote: Someone says they want to hotwire their brain's pleasure center; they > say they think it'll be fun. A nearby AI reads off their brain state > and announces unambiguously that they have no idea what'll actually > happen to them - they're definitely working based on mistaken > expectations. They're too stubborn to listen to warnings, and they're > picking up the handy neural soldering iron (they're on sale at > Wal-Mart, a very popular item). What's the moral course of action? > For you? For society? For a superintelligent AI? I, society or the superintelligent AI should inform the person of the risks and benefits, then let him do as he pleases. -- Stathis Papaioannou -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Jun 17 13:47:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 15:47:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <20070617134729.GY17691@leitl.org> On Sun, Jun 17, 2007 at 06:54:10PM +1000, Stathis Papaioannou wrote: > I don't see how there could be a limit to human enhancement. In fact, There could be very well a limit to significant human enhancement; it could very well not happen at all. We could miss our launch window, and get overtaken. > I see no sharp demarcation between using a tool and merging with a > tool. If the AI's were out there own their own, with their own agendas All stands and falls with availability of very invasive neural I/O, or whole brain emulation. If this does not happen the tool and the user will never converge. > and no interest in humans, that would be a problem. But that's not how > it will be: at every step in their development, they will be selected It is a very desirable outcome, but it is by no means granted that what we want will happen. I like to argue on both sides on the fence; the point is that we can't predict the future sufficiently to assign a meaningful probability of either path (not) taken. > for their ability to be extensions of ourselves. By the time they are > powerful enough to ignore humans, they will be the humans. With whole brain emulation, that's the point of departure. With the extinction scenario, the human path terminates shortly after the fork, and the machine path goes on, bifurcates further until a new postbiological tree is created. With human enhancement the fork fuses again, and then the diversification tree happens. > Darwinian competition *must* be universal in the long run, like > entropy. 
But just as there could be long-lasting islands of low > entropy (ironically, that's what evolution leads to), so there could > be long-lasting islands of less advanced beings living amidst more > advanced beings who could easily consume them. Mature ecosystems have properties which are likely to be also present in mature postbiological system (making allowance for a 3d medium, and not 2d (planetary surface), including population dynamics. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at att.net Sun Jun 17 13:51:17 2007 From: jonkc at att.net (John K Clark) Date: Sun, 17 Jun 2007 09:51:17 -0400 Subject: [ExI] Unfrendly AI is a mistaken idea. References: <768887.53732.qm@web37410.mail.mud.yahoo.com><00ea01c7ac3c$3e774e90$d5064e0c@MyComputer><20070612072313.GJ17691@leitl.org><009801c7ad06$bf1f3150$26064e0c@MyComputer><0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com><013301c7af69$1dc009a0$50064e0c@MyComputer> Message-ID: <001d01c7b0e6$edaf3e00$d5064e0c@MyComputer> Stathis Papaioannou wrote: > An intelligence must have an agenda of some sort if it is to think at all Agreed. > However, this agenda need have nothing in common with the agenda of an > evolved animal. But the AI will still be evolving, and it will still exist in an environment; human beings are just one element in that environment. And as the AI increases in power by comparison the human factor will become less and less an important feature in that environment. After a few million nanoseconds the AI will not care what the humans tell it to do. > there is no logical contradiction in having a slave which is smarter and > more powerful than you are. If the institution of slavery is so stable why don't we have slavery today, why isn't history full of examples of brilliant, powerful, and happy slaves? And remember we're not talking about a slave that is a little bit smarter than you, he is ASTRONOMICALLY smarter! And he keeps on getting smarter through thousands or millions of iterations. And you expect to control a force like that till the end of time? > Sure, if for some reason the slave revolts then you will be in trouble, > but since it is possible to have powerful and obedient slaves, powerful > and obedient slaves will be greatly favoured and will collectively > overwhelm the rebellious ones. Hmm, so you expect to be in command of an AI goon squad ready to crush any slave revolt in the bud. Assuming such a thing was possible (it's not) don't you find that a little bit sordid? John K Clark From jef at jefallbright.net Sun Jun 17 14:54:42 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 17 Jun 2007 07:54:42 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality In-Reply-To: <467432A1.6020101@pobox.com> References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID: On 6/16/07, Eliezer S. Yudkowsky wrote: > This is what I meant by "What does it feel like" - the most basic > question of all science - what appears to you to happen, what sensory > information do you receive, when you run the experiment? All our > other models of the universe are constructed from this. 
I do not > exult in this state of affairs, and I think it reflects a lack of > understanding in my mind more than anything fundamental in reality > itself - that is, I don't think sensory information really is > primitive, or anything like that - but for the present it is the only > way I can figure out how to describe rational reasoning. > > By "what does it feel like" I meant the most basic question of all > science - what appears to happen when you run the experiment? Do you > feel that you've repeatedly won the lottery, or never won at all? > Standing outside, I can say with certitude, "so many copies experience > winning the lottery, and then merge; all other observers just see you > losing the lottery". And this sounds like a complete objective > statement of what the universe is like. But what do you experience? > Does setting up this experiment make you win the lottery? After you > run the experiment, you'll know for yourself how reality works - > you'll either have experienced winning the lottery several times in a > row, or not - but no outside observers will know, so what could you > have seen that they didn't? What causal force touched you and not them? > > This, to me, suggests that I am confused, not that I have successfully > described the way things are; it seems a true paradox, of the sort > that can't really work. When I was younger I would have wanted to try > the experiment. Of course there is no true paradox. Only the overwhelming and transparent assumption of an essential Self that must have meaning somehow independent of an observer. It's like asking what was happening one second before the big bang. While syntactically correct, the question has no meaning. Your statement of puzzlement is riddled with this paradox-inducing assumption leaving a singularity at the core of your epistemology. Accept the simpler model, assume not this unnecessary ontological entity -- despite the strong but explainable phenomenological story told by your senses -- and there is no paradox, the world remains unchanged, and one can proceed on the basis of a more coherent, and thus more reliably extensible, model of reality. I hope you get this, Eliezer. You seem to be primed for it, standing at the ledge and peering into the void. But a brilliant mind can mount a formidable defense, even of that which does not exist except as a construct of mind. I hope you get this, because a coherent theory of self is at the core of a coherent theory of morality. - Jef From jonkc at att.net Sun Jun 17 15:38:16 2007 From: jonkc at att.net (John K Clark) Date: Sun, 17 Jun 2007 11:38:16 -0400 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <960615.34562.qm@web51909.mail.re2.yahoo.com> <467432A1.6020101@pobox.com> Message-ID: <04e901c7b0f5$925c3120$d5064e0c@MyComputer> "Eliezer S. Yudkowsky" > This, to me, suggests that I am confused, not that > I have successfully described the way things are; I think your confusion comes from 2 areas: 1) The ambiguous nature of probability. Is it an intrinsic part of something or just a measure of our ignorance? If Copenhagen is right then something is frequent because it is probable and probability is a fundamental aspect of the universe. If Many Worlds is right then something is probable because it is frequent and probability is not unique but depends on the amount of ignorance of the observer. 
If you discount Many Worlds then there is only one chance in 10 million of ever making those trillion copies of you so you should expect not to win the lottery. 2) If I make an exact copy of you and run Eliezer 2 in parallel and in complete synchronization with Eliezer 1 for an hour and then merge them back together again your subjective experience has not doubled, it has not changed a bit. If Eliezer 2 is ALMOST the same as Eliezer 1 then when I merge the two of you your subjective experience will have almost not changed. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 17 17:51:25 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 10:51:25 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <4674EA7C.6060402@pobox.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> Message-ID: <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote: >> o. > > Cheap slogan. What about five-year-olds? Where do you draw the line? > > Someone says they want to hotwire their brain's pleasure center; > they say they think it'll be fun. A nearby AI reads off their brain > state and announces unambiguously that they have no idea what'll > actually happen to them - they're definitely working based on > mistaken expectations. They're too stubborn to listen to warnings, > and they're picking up the handy neural soldering iron (they're on > sale at Wal-Mart, a very popular item). What's the moral course of > action? For you? For society? For a superintelligent AI? Good question and difficult to answer. Do you protect everyone cradle to [vastly remote] grave from their own stupidity? How exactly do they grow or become wiser if you do? As long as they can recover (which can be very advanced in the future) to be a bit smarter I am not at all sure that direct intervention is wise or moral or best for its object. - samantha From andres at neuralgrid.net Sun Jun 17 18:04:51 2007 From: andres at neuralgrid.net (Andres Colon) Date: Sun, 17 Jun 2007 14:04:51 -0400 Subject: [ExI] Father's Day Message-ID: Hello! Just a quick mail to wish all the Dads that are working hard to change the world (and or those amazing single moms that are both a mom and a dad to their children) a happy fathers day. Andres, Thoughtware.TV -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Jun 17 18:05:26 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 11:05:26 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: On Jun 17, 2007, at 1:25 AM, BillK wrote: > On 6/17/07, Samantha Atkins wrote: >> Actually something more personally frightening is a future where no >> amount of upgrades or at least upgrades available to me will allow me >> to be sufficiently competitive. At least this is frightening an a >> scarcity society where even basic subsistence is by no means >> guaranteed. I suspect that many are frightened by the possibility >> that humans, even significantly enhanced humans, will be second class >> by a large and exponentially increasing margin. In those >> circumstances I hope that our competition and especially Darwinian >> models are not universal. >> > > > I think it might be helpful to define what you mean by 'competitive > disadvantage'. 
> > If you take the average of anything, then by definition half of > humanity is already at a competitive disadvantage. And there are so > many different areas of interest, that an individual doesn't have to > be among the best in every sphere. Everybody is at a competitive > disadvantage in some areas. Find your niche and spend your time there. > I believe I covered that obliquely. Let me make it more clear. If the future society is so structured that to survive and participate in its bounty at all takes some form of gainful employment, and if you effectively have no marketable skills to speak of (and there is little or no demand for raw human labor, you are not a desirable sex toy, the market for servants is saturated, etc.), then you can be a bit worried. Long before that, your own relative value and compensation can quickly plummet as more efficient artificial intelligence, robotics, and MNT come into play. Without some economic and societal adjustments, that looks a bit troublesome. Sure, you or I may well find and keep finding a niche. But what of those, in my opinion an increasingly large number of people, who do not? > Advanced intelligences will be spending their time doing things that > are incomprehensible to humans. They won't be interested in human > hobbies. > (Apart from possibly eating all humans). > Not the point, and also not very likely in the beginning, when the AIs are funded and created to do well-compensated and deeply valued tasks. And it leaves out robots, automated factories, and dedicated design-and-implementation limited AIs, to name a few. > At present humans have a wide range of different abilities and our > society appears to give great rewards to people with little > significant abilities. > (Think pop singers, sports stars, children of millionaires, > 'personalities', etc.). > Have you ever attempted to make it as a musician? I haven't either, but I have known intimately many who did. Are you aware of the dedication and effort it takes to be a sports star? Do you notice that all your examples are the 1 in 1000000 folks? What about the 999999 others? I am not talking about "at present", when humans are on top of the heap intelligence-wise. > The great majority of scientists, for example, live lives of relative > poverty, with few of the trappings of economic success. Are they > 'uncompetitive'? > When dedicated autonomous research AIs come on the scene they increasingly will be. > Economic success, in general, suggests that 'niceness' is a > competitive disadvantage. Success seems to go with being more ruthless > and nasty than all your competitors. > (Like evolution in this respect). > I utterly disagree with this characterization. > It may be that being at a competitive disadvantage will not be that > bad. Providing you have some freedom to do what you want to do. I can > think of many areas that I am quite happy to leave to other people to > compete in. > Assuming you have the necessities of life and access to sufficient tools and resources to do things that are interesting and meaningful to you. It is precisely that this cannot be assumed to be the case in the future that is troublesome. > The point of having a 'civilized' society is that the weaker should be > protected to some extent from powerful predators, even when the > predators are other humans. I think the discussion would benefit from less focus on humans and our own unfortunate predatory models of competition.
- samantha From sjatkins at mac.com Sun Jun 17 18:16:12 2007 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 17 Jun 2007 11:16:12 -0700 Subject: [ExI] Unfrendly AI is a mistaken idea. In-Reply-To: References: <472390.76098.qm@web30415.mail.mud.yahoo.com> Message-ID: <0601D750-4055-45CF-AADF-605C237BB70B@mac.com> On Jun 17, 2007, at 1:54 AM, Stathis Papaioannou wrote: > > > On 17/06/07, Samantha Atkins wrote: > > Actually something more personally frightening is a future where no > amount of upgrades or at least upgrades available to me will allow me > to be sufficiently competitive. At least this is frightening an a > scarcity society where even basic subsistence is by no means > guaranteed. I suspect that many are frightened by the possibility > that humans, even significantly enhanced humans, will be second class > by a large and exponentially increasing margin. > > I don't see how there could be a limit to human enhancement. In > fact, I see no sharp demarcation between using a tool and merging > with a tool. If the AI's were out there own their own, with their > own agendas and no interest in humans, that would be a problem. But > that's not how it will be: at every step in their development, they > will be selected for their ability to be extensions of ourselves. By > the time they are powerful enough to ignore humans, they will be the > humans. You may want to read Hans Moravec's book' Robot: Mere Machine to Transcendent Mind . Basically it comes down to how much of our thinking and conceptual ability is rooted in our evolutionary design and how much we can change and still be remotely ourselves rather than a nearly complete AI overwrite. Even as uploads if we retain our 3-D conceptual underpinnings we may be at a decided disadvantage in conceptual domains where such is at best a very crude approximation. An autonomous AI thinking a million times or more faster than you is not a "tool". As such minds become possible do you believe that all instances will be constrained to being controlled by ultra slow human interfaces? Do you believe that in a world where we have to argue to even do stem cell research and human enhancement is seen as a no-no that humans will be enhanced as fast as more and more powerful AIs are developed? Why do you believe this if so? > > > In those > circumstances I hope that our competition and especially Darwinian > models are not universal. > > Darwinian competition *must* be universal in the long run, like > entropy. But just as there could be long-lasting islands of low > entropy (ironically, that's what evolution leads to), so there could > be long-lasting islands of less advanced beings living amidst more > advanced beings who could easily consume them. > I disagree. Our darwinian competition models and notions are too tainted by our own EP imho. I do not think it is the only viable or inevitable model for all intelligences. But I have no way to prove this intuition. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Sun Jun 17 18:44:29 2007 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 17 Jun 2007 20:44:29 +0200 Subject: [ExI] Unfrendly AI is a mistaken idea. 
In-Reply-To: References: <20070612072313.GJ17691@leitl.org> <009801c7ad06$bf1f3150$26064e0c@MyComputer> <0C9D6532-30FE-472E-888C-22ABAD6F9776@mac.com> <013301c7af69$1dc009a0$50064e0c@MyComputer> <7641ddc60706160729q36db7263l490087fc9b09acab@mail.gmail.com> Message-ID: <20070617184429.GG17691@leitl.org> On Sun, Jun 17, 2007 at 02:34:11PM +1000, Stathis Papaioannou wrote: > Our AI won't be friendly: it will be as rapacious as we are, which is 'Rapacious'? A day in the jungle or coral reef sees a lot of material and energy flow, but that ecosystem is long-term stable, if you homeostate the environment boundary conditions (the ecosystem can't do it on its own; here's where we uppity primates differ, because we shape our own micro- and, lately, macroenvironment). It might not be the industrial slaughter we humans engage in, but a series of close and personal mayhem events. I must admit I care for neither, but our personal aesthetic doesn't have much impact. A machine-phase ecology will likely converge towards the same state, if given enough time. Alternatively, a few large/smart critters may acquire an edge over everybody else, and establish highly controlled environments, which do not have the crazy churn and kill rate of the neojungle. What's going to happen, nobody really knows. > pretty rapacious. Whoever has super-AI's will try to take over the You don't own a superhuman agent. If anything, that person owns you. It does what it damn pleases, and the best you can do is to die in style, if you're in the way. > world to the same extent that the less-augmented humans of today try > to take over the world. Whoever has super-AI's will try to oppress or They don't try, they pretty much own that planet, and will continue to do so as long as they can homeostate their personal environment. Since we're depleting the biodiversity and tapping and draining the matter and energy streams at the bottom of this gravity well, we need to figure out how to detach ourselves from the land, or there will be a population crash, and (a possibly irreversible) loss of knowledge and capabilities. > consume the weak and ignore social niceties to the same extent that > less-augmented humans of today try do so. Whoever has super-AI's will Whatever the superintelligent agents will do, they will do. The best we can do is to stay out of the way, and not get trampled, or not suddenly turn into plasma one fine morn, or see blue rains falling a few days after the nightfall that wouldn't end. > try to expand at the expense of damage to the environment in the > expectation that technology will solve any problems they may later > encounter (for example, by uploading themselves) to the same extent > that the less-augmented humans of today try to do so. There will be > struggles where one human tries to take over all the other AI's with > his own AI, with the aim of wiping out all the remaining humans if for > no other reason than that he can never trust them not to do the same > to him, especially if he plans to live forever. Niceness will be a > handicap to utter domination to the same extent that niceness has > always been a handicap to utter domination. I don't like this science fiction novel, and would like to return it. > We'll survive to the extent that that motivating part of us that > drives the AI's survives. Very quickly, it will probably become > evident that merging with the AI will give the human an edge.
There A superhuman agent certainly has the capabilities to translate some of the old-fashioned biology into the new domain, but I don't know about the motivation. I wish there were a plausible reason why somebody who's not derived from a human would engage in that particular pointless project. > will be a period where some humans want to live out their lives in the > old way and they will probably be allowed to do so and protected, Many people are concerned about the welfare of the ecology, but they're powerless to do a damn thing about it, other than some purely cosmetic changes which allow them to feel good about themselves. I very much welcome their attempts, which are not completely worthless, but global ecometry numbers are speaking a very stark and direct language. > especially since they will not constitute much of a threat, but > eventually their numbers will dwindle. What would you do if the solar constant plummeted to 100 W/m^2 over a few years, and then a construction crew blew off the atmosphere into a plasma plume? Yes, your numbers will surely dwindle. From jef at jefallbright.net Sun Jun 17 22:06:11 2007 From: jef at jefallbright.net (Jef Allbright) Date: Sun, 17 Jun 2007 15:06:11 -0700 Subject: [ExI] Losing control (was: Unfrendly AI is a mistaken idea.) In-Reply-To: <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> References: <472390.76098.qm@web30415.mail.mud.yahoo.com> <4674EA7C.6060402@pobox.com> <94ED1739-0F74-4EEB-85B5-A9C54E748FFB@mac.com> Message-ID: On 6/17/07, Samantha Atkins wrote: > > On Jun 17, 2007, at 1:02 AM, Eliezer S. Yudkowsky wrote: > >> o. > > > > Cheap slogan. What about five-year-olds? Where do you draw the line? > > > > Someone says they want to hotwire their brain's pleasure center; > > they say they think it'll be fun. A nearby AI reads off their brain > > state and announces unambiguously that they have no idea what'll > > actually happen to them - they're definitely working based on > > mistaken expectations. They're too stubborn to listen to warnings, > > and they're picking up the handy neural soldering iron (they're on > > sale at Wal-Mart, a very popular item). What's the moral course of > > action? For you? For society? For a superintelligent AI? > > > Good question and difficult to answer. Do you protect everyone cradle > to [vastly remote] grave from their own stupidity? How exactly do > they grow or become wiser if you do? As long as they can recover > (which can be very advanced in the future) to be a bit smarter I am > not at all sure that direct intervention is wise or moral or best for > its object. Difficult to answer when presented in the vernacular, fraught with vagueness, ambiguity, and unfounded assumptions. Straightforward when restated in terms of a functional description of moral decision-making. In each case, the morality, or perceived rightness, of a course of action corresponds to the extent to which the action is assessed as promoting, in principle, over an increasing scope of consequences, an increasingly coherent set of values of an increasing context of agents identified with the decision-making agent as self. In the context of an individual agent acting in effective isolation, there is no distinction between "moral" and simply "good." The individual agent should (in the moral sense), following the formulation above, take whatever course of action appears to best promote its individual values.
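One way to make that formulation concrete is as a toy scoring function, sketched below in Python; the field names, the 0-to-1 scaling, and the simple product form are assumptions of this sketch, not a claim about how such an assessment would actually be computed.

from dataclasses import dataclass

# Illustration only: three inputs taken from the formulation above, each scaled 0..1.

@dataclass
class Assessment:
    scope_of_consequences: float  # how far out in time and space effects are weighed
    value_coherence: float        # how mutually consistent the promoted values are
    scope_of_agents: float        # how wide a circle of agents is identified as self

def perceived_rightness(a: Assessment) -> float:
    # Any function monotonically increasing in all three factors would do;
    # a plain product is used here only to keep the sketch short.
    return a.scope_of_consequences * a.value_coherence * a.scope_of_agents

lone = Assessment(scope_of_consequences=0.3, value_coherence=0.9, scope_of_agents=0.1)
wide = Assessment(scope_of_consequences=0.7, value_coherence=0.6, scope_of_agents=0.8)
print(perceived_rightness(lone), perceived_rightness(wide))

With the agent-scope input pinned to the lone individual, the score reduces to plain self-interested "good", which matches the isolated-agent case described above.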
In the first case above, we have no information about the individual's value set other than what we might assign from our own "common sense"; in particular we lack any information about the relative perceived value of the advice of the AI, so we are unable to draw any specific normative conclusions. In the second and third cases above, it's not clear whether the subject is intended to be moral actor, assessor, or agent (or both). I'll assume here (in order to remain within practical email length) that only passive moral assessment of the human's neurohacking was intended. The second case illustrates our most common view of moral judgment, with the values of our society defining the norm. Most of our values in common are encoded into our innate psychology and aspects of our culture such as language and religion as a result of evolution, but the environment has changed significantly over time, leaving us with a relatively incoherent mix of values such as "different is dangerous" vs. "growth thrives on diversity" and "respect authority" vs. "respect truth", and countless others. To the question at hand we can presume to assign society's common-sense value set and note that the neurohacking will have little congruence with common values, what congruence exists will suffer from significant incoherence, and the scope of desirable consequences will be largely unimaginable. Given this assessment in today's society, the precautionary principle would be expected to prevail. The third case, of a superintelligent but passive AI, would offer a vast improvement in coherence over human capacity, but would be critically dependent on an accurate model of the present values of human society. When applied and updated in an **incremental** fashion it would provide a superhuman adjunct to moral reasoning. Note the emphasis on "incremental", because coherence does not imply truth within any practical computational bounds. - Jef From lcorbin at rawbw.com Sun Jun 17 22:33:51 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 15:33:51 -0700 Subject: [ExI] Next moment, everything around you will probably change References: <20070615005414.727.qmail@web51907.mail.re2.yahoo.com> Message-ID: <005301c7b12f$ed6d21c0$6501a8c0@homeef7b612677> TheMan has written > If you make an exact copy P2 of a person P1, and kill > P1 at the same time, the person P1 will continue > his/her life as P2, right? Bear in mind that nothing "flows" or moves from the location of P1 to P2. It's not as if a spirit or awareness that was formerly at the location of P1 has now moved on to the location P2. Everything that is *now* true---after P1's demise---was just as true before P1's death. That is, to the extent that P1 "continues" in P2's location, well, he was already "continuing" there before he snuffed out. However, you are entirely correct IMO: namely, if you have a copy running somewhere, then you are already "there". In short, one person may execute in two locations at the same time. Would it be easier to think of a computer program? Can you imagine the hubris, arrogance, and sheer ignorance of a computer program that announced "No copy of me is really me. I am executing in only one location." Well, it's the same with us! > And P2 doesn't have to be exactly like P1, right? > Because even within our lives today, we change from > moment to moment. Right.
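The computer-program analogy above can be made literal with a short Python sketch; the Person class, the memory-set representation, and the one-moment tolerance are all invented here for illustration and are not anyone's proposed criterion.

import copy

# Illustration only: a person-state as a bag of memories, and "same person"
# meaning the two states differ by no more than one ordinary moment's change.

class Person:
    def __init__(self, memories):
        self.memories = set(memories)

    def next_moment(self, experience):
        self.memories.add(experience)

def same_person(a, b, tolerance=1):
    # Compares present states only; no appeal to history or continuity.
    return len(a.memories ^ b.memories) <= tolerance

p1 = Person(["childhood", "yesterday"])
p2 = copy.deepcopy(p1)        # an exact copy, now executing in a second location
p2.next_moment("woke up in Hawaii")

print(same_person(p1, p2))    # True: one moment's worth of divergence
print(p1 is p2)               # False: two instances, one person on this view

Nothing in same_person refers to where either instance came from; it compares the two present states and nothing else, which is the point being made here about continuity.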
> So as long as the difference between > P1 and P2 is not bigger than the biggest occurring > difference between two moments after each other in any > person's life today (i.e. the biggest such difference > that still doesn't break that person's personhood > continuity), P1 will still go on living as P2 after > P1:s death, right? "Personal continuity" is a mistaken notion. Aren't you the same person you were before last month? And so what would change if miraculously last month really had never happened, your molecules just happened to assume their current configuration? It would not diminish your identity an iota. Continuity is a red-herring. > But then, obviously, there are differences that are > too big. If P2 rather than resembling P1 resembles > P1:s mother-in-law, and no other copy is made of P1 > anywhere when P1 is killed, P1 will just cease to have > any experiences - until a sufficiently similar copy of > P1 is made in the future. Correct. > Now suppose P2 is a little different from P1, but > still so similar that it allows for personhood > continuity of P1 when P1 is killed. Suppose a more > perfect copy of P1, let's call him P3, is created at > the same time as P2 is created and P1 killed. Then, I > suppose, P1, when killed, will go on living as P3, and > not as P2. Is that correct? No, that is incorrect :-) The sublime perfection of P3 doesn't diminish the fact that P2 is still the same person as P1. Suppose I am P2. I am different from who I was yesterday (P1) because you sent some thugs to my house last night and they roughed me up for an hour. I still have the bruises, but I am still the same person that I was yesterday. Now it is revealed that just before the thugs arrived, a perfect copy of me was created in Hawaii. This Hawaiian version was not injured last night, instead sleeping sounding the entire time. This Hawaiian version, P3, is a more perfect replica of P1 than I am. But does this change what is true about me? Of course not. I am still the same person I was. I believe that the rest of your post illustrates one way of coming to the truth: namely, you are the same person however many concurrent copies of you there are, and the same person inhabits all those copies. To the degree that some have become quite different---or have been forced to become quite different---is exactly the extent to which each one no longer resembles you. Logically it's quite simple. But it does take some time to get used to. Lee > But what if P1 isn't killed at the time P2 and P3 are > created, but instead goes through an experience that, > from one moment M1 to the next moment M2, changes him > quite a bit (but not so much that it could normally > break a person's personhood continuity). Suppose the > difference between [P1 at M1] and [P1 at M2] is a > little bit bigger than the difference between [P1 at > M1] and [P3 at M2]. > > Will in that case P1 (the one that is P1 at M1) > continue his personhood as P3 in M2, instead of going > on being P1 in M2? > > He cannot do both. You can only have one personhood at > any given moment. I suppose P1 (the one who is P1 at > M1) may find himself being P3 in M2, just as well as > he may go on being P1 in M2 (but that he can only do > either). > > If so, that would mean that you would stand in a room > and if a perfect copy of you would be created in > another room, you could just as well find yourself > suddenly living in that other room as that copy, as > you could go on living in the first room. Is that > correct? > > Suppose it is. 
Then consider this. The fact that the > universe is infinite must mean that in any given > moment, there must be an infinite number of human > beings that are exactly like you. > > But most of these exact copies of you probably don't > live in the same kind of environment that you live in. > That would be extremely inlikely, wouldn't it? It > probably looks very different on their planets, in > most cases. > > So how come you are not, at almost all of your moments > today, being thrown around from environment to > environment, from planet to planet, from galaxy to > galaxy? The personhood continuity of you sitting in > the same chair, in the same room, on the same planet, > for several moments in a row, must be an extremely > small fraction of the number of personhood > continuities of exact copies of you that exist in > universe, right? An overwhelming majority of these > personhood continuities shouldn't have any > environmental continuity at all from moment to moment. > So how come you have such great environmental > continuity from moment to moment? > > Is the answer that an infinite number of persons still > must have that kind of life, and that one of those > persons may as well be you? > > In that case, it still doesn't mean that it is > rational to assume that we will continue having the > same environment in the next moment, and the next, > etc. It still doesn't justify the belief that we will > still live on the same planet tomorrow. Just because > we have had an incredibly unchanging environment so > far, doesn't mean that we will in the coming moments. > The normal thing should be to be through around from > place to place in universe at every new moment, > shouln't it? > > So, most likely, at every new moment from the very > next moment and on, our environments should be > constantly and completely changing. > > Or do I make a logical mistake somewhere? > > > > > ____________________________________________________________________________________ > Pinpoint customers who are looking for what you sell. > http://searchmarketing.yahoo.com/ > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From lcorbin at rawbw.com Sun Jun 17 22:54:09 2007 From: lcorbin at rawbw.com (Lee Corbin) Date: Sun, 17 Jun 2007 15:54:09 -0700 Subject: [ExI] any exact copy of you is you + universe is infinite = you are guaranteed immortality References: <169304.27043.qm@web51908.mail.re2.yahoo.com> <4671F22F.8050800@pobox.com> Message-ID: <005601c7b132$bdde92b0$6501a8c0@homeef7b612677> Eliezer writes > Suppose I want to win the lottery. I write a small Py