From msd001 at gmail.com Tue Feb 1 00:54:15 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 31 Jan 2011 19:54:15 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> Message-ID: 2011/1/31 John Clark : > On Jan 31, 2011, at 12:31 PM, Adrian Tymes wrote: > > Talk about one subject. ?Then talk about something else. ?A?human can handle > this - even if they are not an expert in all things (which?no human is, > though some try to pretend they are). ?These AIs completely?break down. > > Until now it was true that AI programs were very brittle, but that's why I > was so impressed with Watson, its knowledge base is so vast and it's so good > at finding the appropriate information from even vague poorly phrased > input?that with only a few modifications you could make a program that could > speak about anything and do so intelligently enough not to be embarrassing. > Of course I'm not saying it would always speak brilliantly, if it did that > it would be a dead giveaway that's its not human and fail the Turing Test. yes, because no human would ever speak embarrassingly on a topic :) From possiblepaths2050 at gmail.com Tue Feb 1 08:36:00 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 1 Feb 2011 01:36:00 -0700 Subject: [ExI] A futurist on Coast to Coast AM with George Noory Message-ID: On Tuesday, the futurist Mark Stevenson will be the special guest on Coast to Coast AM with George Noory. An excerpt from the Coast to Coast AM website: >Writer, deep-thinker, and stand-up comedian Mark Stevenson shares his journey >to find out what the future holds. He'll discuss his meetings with Transhumanists >who intend to live forever, robots, smart farmers, nanotechnology experts, and >scientists manipulating the genome. http://www.coasttocoastam.com/show/2011/02/02 John : ) From bbenzai at yahoo.com Tue Feb 1 13:12:59 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 1 Feb 2011 05:12:59 -0800 (PST) Subject: [ExI] atheists declare religions as scams. In-Reply-To: Message-ID: <803459.14987.qm@web114405.mail.gq1.yahoo.com> On 31 January 2011 01:31, Keith Henson wrote: > I think atheists would be much better off to try to understand why (in > an evolutionary sense) humans have religions at all. I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. You know, the 'movement in the bushes might be a lion' idea. Those who assume it's a lion will survive when it actually is a lion, and those who don't, won't. So attributing agency to things becomes a selected survival trait. I'm sure there will be other, contributing factors, but it seems to me that this is the main one. Ben Zaiboc From jonkc at bellsouth.net Tue Feb 1 14:54:40 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 09:54:40 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. 
In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <5B7EE075-31A6-46BE-AC0B-E9446E5B048C@bellsouth.net> On Jan 31, 2011, at 1:28 PM, Adrian Tymes wrote: >> Don't be silly, the probability of Voyager spotting such a teapot even if it >> were there is virtually zero. > > I'm not so sure. It depends on how close Voyager passed, and they do pick > up a lot of details with repeated analysis. You must be joking, you just must be. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Feb 1 15:26:26 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 10:26:26 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <826E4FFC-699C-429E-98DB-BBCEA9D3D3B2@bellsouth.net> On Jan 31, 2011, at 2:04 PM, Darren Greer wrote: > > A teapot agnostic says "I don't have enough data to determine whether the teapot exists so I can't form an opinion." Then he is not only a teapot agnostic he is also a liar because, assuming he is not a resident of a looney bin, you can be quite certain that he DOES have an opinion regarding a teapot in orbit around the planet Uranus. In addition, if logically you are justified in being 99.9999% certain that X does not exist then if it's irrational its not very irrational if emotionally you are 100% certain that X does not exist; the emotional part of the human mind is just not equipped with such precession tolerances to allow greater distinctions than that because Evolution decided it would be a waste of resources. > You allot each position as a possibility In the real world where scientists actually do things they regard some possibilities (most possibilities actually) as being so low they are not worth the wear and tear on their valuable brain cells. > Respect has nothing to do with it. Things you respect deserve your time, things you don't don't. > See my point? No. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Feb 1 15:53:15 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 16:53:15 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D471EA4.7080900@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: On 31 January 2011 21:42, Richard Loosemore wrote: > But that is *exactly* my point. ?We are not getting tantalizingly close, we > are just doing the same old snake-oil con trick of building a system that > works in a ridiculously narrow domain, and which impresses some people with > the sheer breadth of information it stores inside it. I suspect that human beings themselves do little else than than adding half-competent reactions in ridiculously narrow domains one to another for a very large number thereof. And, hey, we might well be more optimised for this feature than many AGI proponent seem to believe... 
So, an intelligence with competitive performance in this task, be it even entirely artificial, could end up being more similar, in its relevant components, to the brains we know than to anything else. Of course, nothing prevent us from adding, say, a math coprocessor to such a system. Or to ourselves, for that matter. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 1 15:13:32 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 16:13:32 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On 31 January 2011 17:36, Kelly Anderson wrote: > I think the problem is really related to the definition of > intelligence. Nobody has really defined it, so the definition seems to > fall out as "Things people do that computers don't do yet." So what is > "Things computers do that people can't do"? Certainly it is not ALL > trivial stuff. For example, using genetic algorithms, computers have > designed really innovative jet engines that no people ever considered. > Is that artificial intelligence (i.e. the kind people can't do?) Very good point. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 1 16:23:18 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 17:23:18 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <803459.14987.qm@web114405.mail.gq1.yahoo.com> References: <803459.14987.qm@web114405.mail.gq1.yahoo.com> Message-ID: On 1 February 2011 14:12, Ben Zaiboc wrote: > On 31 January 2011 01:31, Keith Henson wrote: >> I think atheists would be much better off to try to understand why (in >> an evolutionary sense) humans have religions at all. > > I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. There again, if I am angry with my car because it does not want to start, I am not founding a religion, I am more likely to kick it and call a mechanic. Moreover, there are religious beliefs which have nothing to do with attributing agency. Basically, "re-ligio" in Latin simply means a common set of ideas and narratives that binds together a group. In this sense, its evolutionary significance for a cultural being would seem obvious. But there is no requirement whatsoever that such ideas or narratives include or postulate a metaphysical worldview. In fact, I suspect that most of the times, most of the places in human history they did not, and yet could serve very well whatever evolutionary added value religions may deliver. -- Stefano Vaj From spike66 at att.net Tue Feb 1 16:09:39 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 08:09:39 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: <000001cbc22a$6e184390$4a48cab0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj ... Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. On 31 January 2011 21:42, Richard Loosemore wrote: >> But that is *exactly* my point. 
?We are not getting tantalizingly > close, we are just doing the same old snake-oil con trick of building > a system that works in a ridiculously narrow domain, and which > impresses some people with the sheer breadth of information it stores inside it. >I suspect that human beings themselves do little else than than adding half-competent reactions in ridiculously narrow domains one to another for a very large number thereof... Stefano Vaj Ja. The reason I think we are talking past each other is that we are describing two very different things when we are talking about human level intelligence. I am looking for something that can provide companionship for an impaired human, whereas I think Richard is talking about software which can write software. If one goes to the nursing home, there are plenty of human level intelligences there, lonely and bored. I speculate you could go right now to the local nursing home, round up arbitrarily many residents there, and find no ability to write a single line of code. If you managed to find a long retired Fortran programmer, I speculate you would find nothing there who could be the least bit of help coding your latest video game. I think we are close to writing software which would provide the nursing home residents with some degree of comfort and something interesting to talk to. Hell people talk to pets when other humans won't listen. We can do better than a poodle. If we are talking about software which can write software, that's a whole nuther thing, the singularity. If we get that, entertaining the elderly is irrelevant. spike From jonkc at bellsouth.net Tue Feb 1 18:11:22 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 13:11:22 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Jan 31, 2011, at 1:08 PM, Kelly Anderson wrote: > > Another test... suppose that I subscribed an artificial intelligence program to this list. Why do you subscribed an artificial intelligence program to this list? > That's a bit easier Easier is the key. > So perhaps I suggest a new test. If a computer is smart enough to get > admitted into Brigham Young University Brigham Young University is the key. > you don't have to do the processing in real time as with a chat program. You could be right about that. > I suppose that's just another emergent aspect of the human brain. Why do suppose that's just another emergent aspect of the human brain? > There seems to be a supposition by some (not me) that to be > intelligent, consciousness is a prerequisite. You could be right about that. > Once again, we run into another definition issue. Run into is the key. > Why do you sat Watson knows that 'Sunflowers' was painted by 'Van Gogh Why do you sat? > Maybe this still doesn't make total sense You could be right about my hovercraft being full of eels. John K zzzzzzz 1521 buffer overflow error module 429232 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Tue Feb 1 22:03:10 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:03:10 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <4D4707E4.3000106@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> Message-ID: On Mon, Jan 31, 2011 at 12:05 PM, Richard Loosemore wrote: > Kelly Anderson wrote: >> On Fri, Jan 28, 2011 at 9:01 AM, Richard Loosemore >> Trivial!?! This is the final result of decades of research in both >> software and hardware. Hundreds of thousands of man hours have gone >> into the projects that directly led to this development. Trivial! You >> have to be kidding. The subtle language cues that are used on Jeopardy >> are not easy to pick up on. This is a really major advance in AI. I >> personally consider this to be a far more impressive achievement than >> Deep Blue learning to play chess. > > I stand by my statement that what Watson can do is "trivial". If what you are saying is that Watson is doing a trivial subset of human capabilities, then yes, what Watson is doing is trivial. It is by no means a trivial problem to get computers to do it, as I'm sure you are aware. > You are wildly overestimating Watson's ability to handle "subtle language > cues". ?It is being asked a direct factual question (so, no need for Watson > to categorize the speech into the dozens or hundreds of subtle locution > categories that a human would have to), and there is also no need for Watson > to try to gauge the speaker's intent on any of the other levels at which > communication usually happens. Have you watched Jeopardy? Just figuring out what they mean by the category name is often quite difficult. The questions are often full of puns, innuendo and other slippery language. > Furthermore, Watson is unable (as far as I know) to deploy its knowledge in > such a way as to learn any new concepts just by talking, or answer questions > that involve mental modeling of situations, or abstractions. It learns new concepts by reading. As far as I know, it has no capability for generating follow up questions. But if a module were added to ask questions, I have no doubt that it would be able to read the answer, and thus 'learn' a new concept, at least insofar as what Watson is doing can be classified as learning. > For example, I > would bet that if I ask Watson: > > "If I have a set of N balls in a bag, and I pull out the same number of > balls from the bag as there are letters in your name, how many balls would > be left in the bag?" > > It would be completely unable to answer. Of course, because it has to be in the form of an answer... ;-) Seriously, you may be correct. However, I would not be surprised if Watson were able to handle this kind of simple progressive logic. We would have to ask the designers. Natural language processing has been able to parse those kinds of sentences for some time, so I would not be surprised if Watson could also parse your sentence. Whether it would be able to answer or not is something I don't know. I hope someday they put some form of Watson online so we can ask it questions and see how good it is at answering them. >> Richard, do you think computers will achieve Strong AI eventually? > > Kelly, by my reckoning I am one of only a handful of people on this planet > with the ability to build a strong AI, and I am actively working on the > problem (in between teaching, fundraising, and writing to the listosphere). That's fantastic, I truly hope you succeed. If you are working to build a strong AI, then you must believe it is possible. 
I have spent about the last two hours reading your papers, web site, etc. You have an interesting set of ideas, and I'm still digesting it. One question comes up from your web site, I quote: "One reason that we emphasize human-mind-like systems is safety. The motivation mechanisms that underlie human behavior are quite unlike those that have traditionally been used to control the behavior of AI systems. Our research indicates that the AI control mechanisms are inherently unstable, whereas the human-like equivalent can be engineered to be extremely stable." Are you implying that humans are safe? If so, what do you mean by safety? -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:23:05 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:23:05 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/31 Dave Sill : > On Mon, Jan 31, 2011 at 1:08 PM, Kelly Anderson > wrote: >> >> The strongest Turing test is when someone who knows a lot about >> natural language processing and it's weaknesses can't distinguish over >> a long period of time the difference between a number of humans, and a >> number of independently trained Turing computers. > > No, language processing is only one aspect of intelligence. The strongest > Turing test would also measure the ability to learn, to learn from past > experiences, to plan, to solve problems...all of the things the Wikipedia > definition mentions, and maybe more. You are right. >> So perhaps I suggest a new test. If a computer is smart enough to get >> admitted into Brigham Young University, then it has passed the >> Anderson Test of artificial intelligence. > > You mean achieve an SAT score sufficient to get into BYU? Or do you mean > that it has to go through school or take a GED, fill out an application to > BYU, etc. like a human would have to do? Passing the SAT would be only one part of the test, but it would also have to pass some kind of high school, write an essay on why it deserves to be admitted, fill out the forms and so forth. The idea is to be intellectual enough to fool the admissions board. I picked this test because you don't have to physically appear to be admitted to most colleges. I would not require the robotics aspect... you could put the paper in the printer for it... mail the forms, etc. >> Is that harder or easier?than the Turing test? > > Depends on the Turing test, I'd say. Sure. >> How about smart enough to graduate with a BS from BYU? > > How about it? It'd be an impressive achievement. Would it be intelligent? I think so. >> Another test... suppose that I subscribed an artificial intelligence >> program to this list. How long would it take for you to figure out >> that it wasn't human? That's a bit easier, since you don't have to do >> the processing in real time as with a chat program. > > Depends how active it is, what it writes, and whether anyone is clued to the > fact that there's a bot on the list. A Watson-like bot that answers > questions occasionally could be pretty convincing. But it'd fall apart if > anyone tried to engage it in a discussion. I would assume that nobody would be clued in. If you have a suspicion that something is amiss, you start looking for things. It's how I watch CGI movies... 
I look for the mistakes. When I just relax and enjoy the movie, I can believe what I see better. Benjamin Button passed this "Graphical Reality" test for me, btw. Very impressive. I don't think Watson could pass this test now, but I would not be surprised if it could at some point in the not too distant future. >> That's the difference between taking a picture, and telling you what >> is in the picture. HUGE difference... this is not a "little" more >> sophisticated. > > No, parsing a sentence into parts of speech is not hugely sophisticated. But "understanding" the sentence is. Parsing an image into blobs is not highly sophisticated either, but labeling the blob in the middle as a "dog" is. >> Once again, we run into another definition issue. What does it mean to >> "understand"? > > http://en.wikipedia.org/wiki/Understanding Quoting: "a psychological process related to an abstract or physical object, such as a person, situation, or message whereby one is able to think about it and use concepts to deal adequately with that object." So contextually to Jeopardy, Watson understands the questions it answers correctly. Right? >> And if that form is such that I can >> use it for future computation, to say answer a question, then Watson >> does understand it. Yes. So by some definitions of "understand" yes, >> Watson understands the text it has read. > > ?Granted, at a trivial level Watson could be said to understand the data > it's incorporated. But it doesn't have human-level understanding of it. But by the Wikipedia definition, it only has to "deal adequately"... Winning several thousand dollars on Jeopardy would certainly seem to be adequate, IMHO. -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:27:28 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:27:28 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D471EA4.7080900@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: On Mon, Jan 31, 2011 at 1:42 PM, Richard Loosemore wrote: > spike wrote: > Watson does not contain the germ of an intelligence, it contains a dead-end > algorithm designed to impress the gullible. ?That strategy has been the > definition of "artificial intelligence" for the last thirty or forty years, > at least. > > A real AI is not Watson + extra machinery to close the gap to a full > conversational machine. ?Instead, a real AI involves throwing away Watson, > starting from scratch, and doing the whole thing in a completely different > way .... ?a way that actually allows the system to build its own knowledge, > and use that knowledge in an ever-expanding range of ways. Richard, Is your basic problem with Watson that it is going in the wrong direction if the eventual goal is AGI? Are you concerned that the public is being misled into believing that computers are closer to being "intelligent" than they actually are? I'm trying to understand the core of your indignance. -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:35:34 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:35:34 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> Message-ID: On Mon, Jan 31, 2011 at 2:21 PM, spike wrote: > Machines would patiently keep repeating the same answers. ?I see it as one > hell of a breakthough, even if we know it isn't artificial intelligence. ?I > don't care if it isn't AI, all I want is something to keep my parents > company 10 yrs from now. I don't think you'll have to wait 10 years, but you may have to have a lot of money. :-) http://babakkia.posterous.com/france-developing-advanced-humanoid-robot-rom The Japanese are working on Aibo, and other things like that. The Japanese population inversion makes this a top national priority for them, so look for the solution to this problem coming soon from that direction. The hard part will be teaching your parents Japanese.. ;-) -Kelly From protokol2020 at gmail.com Tue Feb 1 22:39:52 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Tue, 1 Feb 2011 23:39:52 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> Message-ID: Even Google Transtlator, let alone Watson should be able to overcome this problem. ;-) > The hard part will be teaching your parents Japanese.. ;-) > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Tue Feb 1 23:03:32 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 16:03:32 -0700 Subject: [ExI] Plastination Message-ID: Has anyone seriously looked at plastination as a method for preserving brain tissue patterns? http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html It seems to preserve extremely delicate structures and lasts for 10,000 years without keeping things cold. A technology advanced enough to unfreeze a brain seems like it would be able to work with these things just about as easily... -Kelly From spike66 at att.net Wed Feb 2 00:27:10 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 16:27:10 -0800 Subject: [ExI] weird al in the mainstream press Message-ID: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> Fun article, but there is a glaring omission: http://www.cnn.com/2011/LIVING/02/01/weird.al.book/index.html?hpt=C2 He didn't mention Dr. Demento, who really launched his career. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Wed Feb 2 00:56:48 2011 From: mbb386 at main.nc.us (MB) Date: Tue, 1 Feb 2011 19:56:48 -0500 Subject: [ExI] weird al in the mainstream press In-Reply-To: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> References: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> Message-ID: You're buying this for your kid, right? :))) It sounds like a hoot. If my kids were little, or I had grandkids, I'd buy a copy. Never would I have dreamed what my kids actually *do* as adults. 
Heck, I never dreamed what *I* did as working stiff. ;) Life can be right peculiar, the way it turns and twists. Al is right though - there is a goodly measure of success in being happy in what you do. Being *unhappy* in what you do would be The Pits. Regards, MB From sjatkins at mac.com Wed Feb 2 01:10:22 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 17:10:22 -0800 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <4D48AEFE.5080307@mac.com> On 02/01/2011 03:03 PM, Kelly Anderson wrote: > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? > > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... > > -Kelly > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat There was a good talk at the Citizen Scientist conference last year on precisely this. John Smart and other are behind an initiative to move this forward. http://www.brainpreservation.org/ http://www.slideshare.net/humanityplus/smart-4671818 From sjatkins at mac.com Wed Feb 2 02:06:32 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 18:06:32 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <803459.14987.qm@web114405.mail.gq1.yahoo.com> References: <803459.14987.qm@web114405.mail.gq1.yahoo.com> Message-ID: <4D48BC28.2000806@mac.com> On 02/01/2011 05:12 AM, Ben Zaiboc wrote: > On 31 January 2011 01:31, Keith Henson wrote: >> I think atheists would be much better off to try to understand why (in >> an evolutionary sense) humans have religions at all. > I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. > Not by a very long shot. The desire to find meaning to existence and a grounding for ones place in it is no small part of the generating functions. Dealing with the fact of mortality adds fuel to the fire. Community bonding is still another part. The causes for religion are not much more simple than human beings are. Simplistic statements of "X explains religion or does so well enough" are neither accurate or helpful. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Wed Feb 2 02:07:49 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 1 Feb 2011 20:07:49 -0600 Subject: [ExI] Fwd: Slate | Synthetic biology and Obama's bioethics commission: How can we govern the garage biologists who are tinkering with life? - By William Saletan In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Eri Gentry Date: Tue, Feb 1, 2011 at 7:50 PM Subject: Slate | Synthetic biology and Obama's bioethics commission: How can we govern the garage biologists who are tinkering with life? - By William Saletan To: biocurious at googlegroups.com http://www.slate.com/id/2283324/?from=rss Faking Organisms How can we govern the garage biologists who are tinkering with life?By William SaletanPosted Tuesday, Feb. 1, 2011, at 9:20 AM ET *This article arises from Future Tense *, *a collaboration among Arizona State University, the New America Foundation, and** **Slate**. 
A Future Tense conference on whether governments can keep pace will scientific advances will be held at Google D.C.'s headquarters on Feb. 3-4.* (*For more information and to sign up for the event, please visit the****NAF Web site* *.)* Synthetic biology?the engineering of new forms of life?is the kind of science that can freak people out. Some critics want to stop or restrict it. But President Obama's bioethics commission , in its report on this emerging technology, advocates a subtler approach: "an ongoing process of prudent vigilance that carefully monitors, identifies, and mitigates potential and realized harms over time." Prudent vigilance may not be sexy, but it's smart. It's designed, in the commission's words, to maximize "information, flexibility, and judgment" in the regulation of technology. Here's how it works, as illustrated in the synthetic biology report. *1. If in doubt, don't interfere.* The commission endorses "regulatory parsimony," i.e., "only as much oversight as is truly necessary." You might think that emerging technologies, because they're unformed and unpredictable, require particular restraint. That's the conservative view. The commission draws the opposite conclusion: The evolving nature of these technologies makes them "not well suited for sharply specified limitations." PRINT DISCUSS E-MAIL RSS RECOMMEND...REPRINTS SINGLE PAGE This principle applies not just to technology, but to related fields such as law. "Intellectual property issues in synthetic biology are evolving," says the report. Accordingly, the commission "offers no specific opinion on the effectiveness of current intellectual property practices and policies in synthetic biology." Don't speak until you know what to say. Why not err on the side of intervention? Because you might make things worse. Hasty restrictions, the report warns, "may be counterproductive to security and safety by preventing researchers from developing effective safeguards." Let the technology unfold, and see what happens. This might be the best way to learn what sort of regulation we'll need down the road. "The aggressive pursuit of fundamental research generally results in a broader understanding of a maturing scientific field like synthetic biology," says the report, and this "may be a particularly valuable way to prepare for the emergence of unanticipated risks that would require rapid identification and creative responses." Advertisement *2. Change is the norm.* The conservative instinct is to treat the status quo as natural and defend it against change. The commission rejects this idea. The notion that "synthetic biology fails to respect the proper relationship between humans and nature" misconceives the reality of that relationship. In biology, the panel argues, defining "nature" or "natural" is tricky "in light of humans' long history interacting with and affecting other species, humankind, and the environment." We've been messing with life all along. The status quo, in other words, is change. Yes, modern genetic manipulation is more complex than old-fashioned breeding. But it isn't exploding. It's "proceeding in limited and carefully controlled ways." And while synthetic biology is at the cutting edge, it's just "an extension of genetic engineering" and "does not necessarily raise radically new concerns or risks." *3. 
Make the regulation as agile as the technology.* The tricky thing about synthetic biology, according to the report, is that "the probability or magnitude of risks are high or highly uncertain, because biological organisms may evolve or change after release." And you can't gauge their future from their past, given the "lack of history regarding the behavior" of these organisms. So the commission keeps its judgments provisional. The words "evolve," "evolving," "current," "currently," "at present," "at this time," and "uncertain" appear 191 times in the report. How can we manage such fast-moving, adaptable targets? With a fast-moving, adaptable regulatory system. The White House must "direct an ongoing review of the ability of synthetic organisms to multiply in the natural environment," says the commission. It must "identify, as needed, reliable containment and control mechanisms." This means constant reevaluation. A system of prudent vigilance will "identify, assess, monitor, and mitigate risks on an ongoing basis as the field matures." The word "ongoing" appears 73 times in the report. *4. Make the regulation as diffuse as the technology.* The commission notes that synthetic biology "poses some unusual potential risks" because much of it is being conducted by "do-it-yourself" amateurs. Top-down regulation of known research facilities won't reach these garage experimenters. "It is at the individual or laboratory level where accidents will occur, material handling and transport issues will be noted, physical security will be enforced, and potential dual use intentions will most likely be detected," says the commission. Therefore, the government should focus on "creating a culture of responsibility in the synthetic biology community." The phrase "culture of responsibility" appears 16 times in the report. *5. Involve the government in non-restrictive ways.* Given the complexity, adaptability, and diffusion of synthetic biology, the report suggests that the government "expand current oversight or engagement activities with non-institutional researchers." This "engagement" might consist of workshops or educational programs. By collaborating with the DIY research community, the government can "monitor [its] growth and capacity," thereby keeping abreast of the technology and its evolving risks. The best protection against runaway synthetic organisms might come not from restricting the technology, but from harnessing it. "Suicide genes" or other self-destruction mechanisms could be built into organisms to limit their longevity. "Alternatively, engineered organisms could be made to depend on nutritional components absent outside the laboratory, such as novel amino acids, and thereby controlled in the event of release." How can the government encourage researchers to incorporate these safeguards and participate in responsibility-oriented training programs? By funding their work. This reverses the Bush administration's approach to stem cells. Bush prohibited federal funding of embryo-destructive research so pro-life taxpayers wouldn't have to support it. The Obama commission does the opposite: It recommends "public investment" to gain leverage over synthetic biologists. If the government subsidizes your research, it can attach conditions such as ethics training or suicide genes. *6. Revisit all questions.* Occasionally, the Obama commission forgets its own advice and makes a risky assumption. 
For example, it brushes off "the synthesis of genomes for a higher order or complex species," asserting, "There is widespread agreement that this will remain [impossible] for the foreseeable future." But if this prediction or any other turns out to be erroneous, don't worry. The report builds in a mechanism to correct them: future reevaluations of its conclusions. This is more than a matter of reassessing particular technologies. It's a commitment to rethink larger assumptions, paradigms, and ethical questions. "Discussions of moral objections to synthetic biology should be revisited periodically as research in the field advances in novel directions," says the report. "An iterative, deliberative process ? allows for the careful consideration of moral objections to synthetic biology, particularly if fundamental changes occur in the capabilities of this science." Arguments against the technology will surely continue as the field matures, as well they should. The question relevant to the Commission's present review of synthetic biology is whether this field brings unique concerns that are so novel or serious that special restrictions are warranted at this time. Based on its deliberations, the Commission has concluded that special restrictions are not needed, but that prudent vigilance can and should be exercised. As this field develops and our ability to engineer higher-order genomes using synthetic biology grows, other deliberative bodies ought to revisit this conclusion. In so doing, it will be critical that future objections are widely sought, clearly defined, and carefully considered. That's the way good scientists think: subject your work to peer review, seek falsification, and revise hypotheses as we learn more. Every question is open to reexamination. Even the commission's rejection of a moratorium on synthetic biology "at this time" implies the possibility of reversal. Who knows what the future will bring? I count three specific restrictions in the commission's interpretation of prudent vigilance. First, "Risk assessment should precede field release of the products of synthetic biology." That's more than monitoring. It's a precautionary hurdle. Second, "reliable containment and control mechanisms" such as suicide genes "should be identified and required." Third, "ethics education ? should be developed and required" for synthetic biologists, as it is for medical and clinical researchers. Beyond those three rules, prudent vigilance seems to be a matter of humility, open-mindedness, keeping an eye on things, constantly rethinking assumptions, and finding creative ways to influence an increasingly diffuse community of scientific entrepreneurs. It's a lot of work. But it's what we'll have to do if we don't want to restrict technologies preemptively or leave them unsupervised. Eternal vigilance is the price of liberty. -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Wed Feb 2 02:13:44 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 18:13:44 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <000001cbc22a$6e184390$4a48cab0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> Message-ID: <4D48BDD8.6030009@mac.com> On 02/01/2011 08:09 AM, spike wrote: > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj > ... > Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. > > On 31 January 2011 21:42, Richard Loosemore wrote: >>> But that is *exactly* my point. We are not getting tantalizingly >> close, we are just doing the same old snake-oil con trick of building >> a system that works in a ridiculously narrow domain, and which >> impresses some people with the sheer breadth of information it stores > inside it. > >> I suspect that human beings themselves do little else than than adding > half-competent reactions in ridiculously narrow domains one to another for a > very large number thereof... Stefano Vaj > > > Ja. The reason I think we are talking past each other is that we are > describing two very different things when we are talking about human level > intelligence. I am looking for something that can provide companionship for > an impaired human, whereas I think Richard is talking about software which > can write software. > The Eliza chatbot was very engaging for a lot of students once upon a time. You don't need full AGI to keep an oldster happily reliving/sharing memories and more entertained than a TV can provide. Add emotion interfaces and much much better chat capabilities than Eliza had. Eventually add more real AI modules as they become available. A cat will be more cuddly and humans much more fun to talk to for a longish time. But there is a definite spot in-between that we can just about do something that will be appreciated. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Wed Feb 2 02:14:12 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 01 Feb 2011 21:14:12 -0500 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Who knows if this is a truly beneficial way to go, but the person you would want to review his study is Ken Hayworth. It is his project and his research. Natasha Quoting Kelly Anderson : > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? > > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... > > -Kelly > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Wed Feb 2 02:44:05 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 18:44:05 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <4D48BDD8.6030009@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: <002801cbc283$0f28af60$2d7a0e20$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins . >>.Ja. .I am looking for something that can provide companionship for an impaired human, whereas I think Richard is talking about software which can write software. >.The Eliza chatbot was very engaging for a lot of students once upon a time. I sure had fun with her. I kept trying to get her to talk dirty to me. She wasn't very good at that. But that's OK neither was I. Seems like we should be able to write code that would generate titillating text. It's been 30 years now since the last time I played Eliza, and it was already free staleware at that time. I would sure as all hell think we must have come up with some kind of improvement in all that time, ja? Software hipsters, what is the modern counterpart to Eliza? I will be really disappointed in you guys if the answer is Eliza. >. A cat will be more cuddly and humans much more fun to talk to for a longish time. But there is a definite spot in-between that we can just about do something that will be appreciated. - Samantha Actually you may have stumbled upon exactly what I have been looking for. A warm cuddly android or estroid presents some daunting mechanical engineering and controls engineering problems. But with your in-between cat and computer comment, you may have solved my problem: just go ahead and use cats or dogs, then rig a microphone/speaker to their collar so that the elderly patient can cuddle the actual beast while carrying on an Eliza-level conversation with the machine/beast combination. Or I suppose we could rig up another elderly person who has lost the power of speech with an article of clothing which has a microphone/speech recognition/Watson/Eliza-ish inference engine. While still simulated conversation, we might allow the patient to imagine she is talking to another person. Good thinking Samanatha! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Wed Feb 2 03:12:20 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 01 Feb 2011 20:12:20 -0700 Subject: [ExI] Plastination In-Reply-To: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Message-ID: <4D48CB94.9060303@canonizer.com> I'm also very interested in this subject, so thanks, Quoting, for bringing it up. I'd also love to hear from someone like Ken Hayworth. Wouldn't a physical neural researcher be a good person to ask? You know, the kind of researchers that work with actual neurons - slicing up brains - looking at them at the microscopic and even nano scale level, and so on? I'm completely ignorant on all this, but my completely uninformed gut feel is that a sliced up bit of hard frozen brain, even if very much fractured, would contain much more preserved information than anything plasticized? Brent Allsop On 2/1/2011 7:14 PM, natasha at natasha.cc wrote: > Who knows if this is a truly beneficial way to go, but the person you > would want to review his study is Ken Hayworth. 
It is his project and > his research. > > Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? >> >> http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html >> >> >> It seems to preserve extremely delicate structures and lasts for >> 10,000 years without keeping things cold. A technology advanced enough >> to unfreeze a brain seems like it would be able to work with these >> things just about as easily... >> >> -Kelly >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From amara at kurzweilai.net Wed Feb 2 03:30:17 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Tue, 1 Feb 2011 19:30:17 -0800 Subject: [ExI] Plastination In-Reply-To: <4D48CB94.9060303@canonizer.com> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> Message-ID: <039601cbc289$83505860$89f10920$@net> Are there experimental procedures that could potentially falsify these hypotheses? 1. Brain function and memory require persistence of all (case 2: some) molecular dynamics of a living brain. 2. Molecular dynamics cannot be reconstructed from gross structure. 3. Molecular dynamics can be reconstructed but only if the structure is accurately measured at subatomic or quantum levels prior to death (case 2: prior to cryopreservation), but the uncertainty principle negates accurate measurements. 4. Current cryopreservation protocols result in loss of subatomic and quantum data. 5. Cryopreservation inherently destroys subatomic and quantum data. -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop Sent: Tuesday, February 01, 2011 7:12 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Plastination I'm also very interested in this subject, so thanks, Quoting, for bringing it up. I'd also love to hear from someone like Ken Hayworth. Wouldn't a physical neural researcher be a good person to ask? You know, the kind of researchers that work with actual neurons - slicing up brains - looking at them at the microscopic and even nano scale level, and so on? I'm completely ignorant on all this, but my completely uninformed gut feel is that a sliced up bit of hard frozen brain, even if very much fractured, would contain much more preserved information than anything plasticized? Brent Allsop On 2/1/2011 7:14 PM, natasha at natasha.cc wrote: > Who knows if this is a truly beneficial way to go, but the person you > would want to review his study is Ken Hayworth. It is his project and > his research. > > Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? >> >> http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.h tml >> >> >> It seems to preserve extremely delicate structures and lasts for >> 10,000 years without keeping things cold. A technology advanced enough >> to unfreeze a brain seems like it would be able to work with these >> things just about as easily... 
>> >> -Kelly >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From aware at awareresearch.com Wed Feb 2 07:05:28 2011 From: aware at awareresearch.com (Aware) Date: Tue, 1 Feb 2011 23:05:28 -0800 Subject: [ExI] intermittent liar In-Reply-To: <000901cbba78$a6853260$f38f9720$@att.net> References: <000901cbba78$a6853260$f38f9720$@att.net> Message-ID: 2011/1/22 spike : > Oh my, I found a most excellent puzzle today.? I found an answer, don?t know > yet if it is right.? See what you find: The mechanical no-brainer method: #!/usr/bin/env python """ Larry always tells lies during months that begin with vowels but always tells the truth during the other months. During one particular month, Larry makes these two statements: - I lied last month. - I will lie again six months from now. During what month did Larry make these statements? """ months = 'Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec'.split() vowels = ('A', 'E', 'I', 'O', 'U') truth_months = [m for m in months if not m.startswith(vowels)] def displace(month, disp): return months[(months.index(month) + disp) % 12] def asserts(month): return [displace(month, -1) not in truth_months, displace(month, 6) not in truth_months] for month in months: if (month in truth_months and all(asserts(month)) or month not in truth_months and not any(asserts(month))): print month Result: Aug From eugen at leitl.org Wed Feb 2 07:34:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 08:34:48 +0100 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <20110202073448.GA23560@leitl.org> On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote: > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? Yes. It doesn't work. > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... See http://brainpreservation.org/index.php?path=technology -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From giulio at gmail.com Wed Feb 2 07:02:55 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 2 Feb 2011 08:02:55 +0100 Subject: [ExI] Plastination In-Reply-To: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Message-ID: Yes, Ken Hayworth is The Man. See my review of last year: Chemical brain preservation: cryonics for uploaders http://giulioprisco.blogspot.com/2010/07/chemical-brain-preservation-cryonics.html On Wed, Feb 2, 2011 at 3:14 AM, wrote: > Who knows if this is a truly beneficial way to go, but the person you would > want to review his study is Ken Hayworth. ?It is his project and his > research. 
> > Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? >> >> >> http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html >> >> It seems to preserve extremely delicate structures and lasts for >> 10,000 years without keeping things cold. A technology advanced enough >> to unfreeze a brain seems like it would be able to work with these >> things just about as easily... >> >> -Kelly >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From eugen at leitl.org Wed Feb 2 11:47:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 12:47:57 +0100 Subject: [ExI] Plastination In-Reply-To: <4D48AEFE.5080307@mac.com> References: <4D48AEFE.5080307@mac.com> Message-ID: <20110202114757.GB23560@leitl.org> On Tue, Feb 01, 2011 at 05:10:22PM -0800, Samantha Atkins wrote: > There was a good talk at the Citizen Scientist conference last year on > precisely this. John Smart and other are behind an initiative to move No, Gunter von Hagens' stuff has nothing to do with what Hayworth intends to do. See http://www.depressedmetabolism.com/2010/01/28/brain-preservation/ and http://www.depressedmetabolism.com/chemopreservation-the-good-the-bad-and-the-ugly/ The main problem is lack of feedback due to absense of viability as proxy for structure preservation. The proof that fixation would work with vascular perfusion (including such nice, cheap things as OsO4) for the human primate is yet outstanding. There are multiple nontechnical but important reasons why pushing this at the moment would be a bad idea. > this forward. > http://www.brainpreservation.org/ > http://www.slideshare.net/humanityplus/smart-4671818 -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 2 11:50:40 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 12:50:40 +0100 Subject: [ExI] Plastination In-Reply-To: <039601cbc289$83505860$89f10920$@net> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> Message-ID: <20110202115040.GC23560@leitl.org> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: > Are there experimental procedures that could potentially falsify these > hypotheses? > > 1. Brain function and memory require persistence of all (case 2: some) > molecular dynamics of a living brain. Dynamics is not present in vitrified tissue, yet that tissue can be resumed. > 2. Molecular dynamics cannot be reconstructed from gross structure. I see what you're trying to say, but no. > 3. Molecular dynamics can be reconstructed but only if the structure is > accurately measured at subatomic or quantum levels prior to death (case 2: > prior to cryopreservation), but the uncertainty principle negates accurate > measurements. > 4. Current cryopreservation protocols result in loss of subatomic and > quantum data. > 5. Cryopreservation inherently destroys subatomic and quantum data. Oh, you're one of those. 
> > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop > Sent: Tuesday, February 01, 2011 7:12 PM > To: extropy-chat at lists.extropy.org > Subject: Re: [ExI] Plastination > > > I'm also very interested in this subject, so thanks, Quoting, for > bringing it up. I'd also love to hear from someone like Ken Hayworth. > > Wouldn't a physical neural researcher be a good person to ask? You > know, the kind of researchers that work with actual neurons - slicing > up brains - looking at them at the microscopic and even nano scale > level, and so on? > > I'm completely ignorant on all this, but my completely uninformed gut > feel is that a sliced up bit of hard frozen brain, even if very much > fractured, would contain much more preserved information than anything > plasticized? > > Brent Allsop > > > On 2/1/2011 7:14 PM, natasha at natasha.cc wrote: > > Who knows if this is a truly beneficial way to go, but the person you > > would want to review his study is Ken Hayworth. It is his project and > > his research. > > > > Natasha > > > > > > Quoting Kelly Anderson : > > > >> Has anyone seriously looked at plastination as a method for preserving > >> brain tissue patterns? > >> > >> > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.h > tml > >> > >> > >> It seems to preserve extremely delicate structures and lasts for > >> 10,000 years without keeping things cold. A technology advanced enough > >> to unfreeze a brain seems like it would be able to work with these > >> things just about as easily... > >> > >> -Kelly > >> _______________________________________________ > >> extropy-chat mailing list > >> extropy-chat at lists.extropy.org > >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >> > > > > > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Wed Feb 2 16:40:39 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 02 Feb 2011 11:40:39 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> Message-ID: <4D498907.3050808@lightlink.com> Kelly Anderson wrote: > On Mon, Jan 31, 2011 at 12:05 PM, Richard Loosemore wrote: >> Kelly Anderson wrote: >>> Richard, do you think computers will achieve Strong AI eventually? >> Kelly, by my reckoning I am one of only a handful of people on this planet >> with the ability to build a strong AI, and I am actively working on the >> problem (in between teaching, fundraising, and writing to the listosphere). > > That's fantastic, I truly hope you succeed. 
If you are working to > build a strong AI, then you must believe it is possible. I certainly believe that strong AI is possible. > I have spent about the last two hours reading your papers, web site, > etc. You have an interesting set of ideas, and I'm still digesting it. > > One question comes up from your web site, I quote: > > "One reason that we emphasize human-mind-like systems is safety. The > motivation mechanisms that underlie human behavior are quite unlike > those that have traditionally been used to control the behavior of AI > systems. Our research indicates that the AI control mechanisms are > inherently unstable, whereas the human-like equivalent can be > engineered to be extremely stable." > > Are you implying that humans are safe? If so, what do you mean by safety? No, humans by themselves are (mild understatement) not safe. The human motivation mechanism works in conjunction with the "thinking" part of the human mind. The latter is like a swarm of simple agents, all trying to engage in a process of "weak constraint relaxation" with their neighbors, so the whole thing is like a molecular soup in which atoms and molecules are independently trying to aggregate to form larger molecules. One factor that is important in this relaxation process is the anchoring of the relaxation: there are always some agents whose state is being fixed by outside factors (e.g. the agents linked to sensors in your eye go into states that depend, not on nearby agents, but on the signals hitting the retina), so these peripheral agents act as seeds, causing many others to attach to them and grow to form large "molecules". Those molecules are the extended structures that constitute the knowledge representations that we hold in working memory. Obviously they change all the time, so there is never complete stability, but nevertheless the agents are always trying to find ways to go "downhill" toward more stable states. Now, going back to your original question about motivation. There are other sources that act as seed areas, governing the formation of molecules in this working memory area. One such source is the motivation system: a diffuse collection of agents that push the thinking system to want certain things, and to try to get those things in ways that are consistent with the constraints of the motivation system. This can all get very complicated (too much for a post here), but the bottom line is that when the system is controlled in this way, the stability of the motivation system is determined by a very large number of mutually-reinforcing contraints, so if the system starts with intentions that are (shall we say) broadly empathic with the human species, it cannot start to conceive new, bizarre motivations that break a significant number of those constraints. It is always settling back toward a large global attractor. The problem with humans is that they have several modules in the motivation system, some of them altruistic and empathic and some of them selfish or aggressive. The nastier ones were built by evolution because she needed to develop a species that would fight its way to the top of the heap. But an AGI would not need those nastier motivation mechanisms. 
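A throwaway sketch of the kind of settling I mean (this is NOT the real architecture -- just a handful of Hopfield-style agents with invented weights and two clamped "seed" agents, so every name and number below is made up):

#!/usr/bin/env python
# Toy model only: a few "agents" joined by weak, mutually
# reinforcing constraints; agents 0 and 1 are clamped "seeds"
# (standing in for sensory/motivational input).  Repeated local
# updates drag every free agent into agreement with the seeds,
# and a later perturbation gets dragged straight back -- a large
# global attractor in miniature.
N = 8
SEEDS = {0: +1, 1: +1}          # pinned from outside the network

def weight(i, j):               # symmetric constraint strengths (invented)
    if i == j:
        return 0.0
    return 3.0 if (i in SEEDS or j in SEEDS) else 1.0

def relax(state, sweeps=5):
    for _ in range(sweeps):
        for i in range(N):
            if i in SEEDS:
                state[i] = SEEDS[i]        # seeds stay clamped
            else:
                drive = sum(weight(i, j) * state[j] for j in range(N))
                state[i] = +1 if drive >= 0 else -1
    return state

state = [-1, +1, -1, -1, +1, -1, +1, -1]   # arbitrary starting mess
print(relax(state))                        # settles to all +1
state[5] = -1                              # "cosmic ray" flips one agent
print(relax(state))                        # relaxes back to all +1

The only point of the toy is that a couple of clamped seeds plus many weak, mutually reinforcing links carve out one broad basin, and small perturbations fall back into it rather than wandering off somewhere new.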
If you subtract out those unwanted modules what you have left is an altruistic saint of an AGI, with a motivation system has three very important properties: 1) If the AGI starts out wanting to help the human species because it feels like it belongs with us, then it can only develop new ideas about how to behave that are consistent with that motivation. 2) For that same reason, if the AGI were given the chance to redesign itself, it would always want to improve its motivation mechanism to keep it consistent with those original motivations. As a result, over time the motivation of the AGI would not drift, it would stay consistent with the feeling of empathy for humans. 3) If some problem occurred in the computational substrate of the AGI (a random cosmic ray strike on the motivation module) the disruption would be very unlikely to leave the system with different, violent motivations. That would be rather like a random cosmic ray collision causing you to have such specific damage to your body that a second after the collision you had a new, fully functional third arm attached to your body -- a ridiculously unlikely event, obviously. This is what I mean by safety. An AGI whose motivations had the same stability of design, as a human being, but without the specific modules (selfishness and aggression, primarily) that are present in the human system. Richard Loosemore From rpwl at lightlink.com Wed Feb 2 16:56:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 02 Feb 2011 11:56:30 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: <4D498CBE.4090106@lightlink.com> Kelly Anderson wrote: > On Mon, Jan 31, 2011 at 1:42 PM, Richard Loosemore wrote: >> spike wrote: >> Watson does not contain the germ of an intelligence, it contains a dead-end >> algorithm designed to impress the gullible. That strategy has been the >> definition of "artificial intelligence" for the last thirty or forty years, >> at least. >> >> A real AI is not Watson + extra machinery to close the gap to a full >> conversational machine. Instead, a real AI involves throwing away Watson, >> starting from scratch, and doing the whole thing in a completely different >> way .... a way that actually allows the system to build its own knowledge, >> and use that knowledge in an ever-expanding range of ways. > > Richard, Is your basic problem with Watson that it is going in the > wrong direction if the eventual goal is AGI? Are you concerned that > the public is being misled into believing that computers are closer to > being "intelligent" than they actually are? > > I'm trying to understand the core of your indignance. Well, both, and more. People were complaining about this kind of cheap-trick AI at least two decades ago, and we expected that our complaints would be loud enough that it would eventually stop. But it did not. Every few months, it seems, there is another announcement about some project, which the press writes up as "Could it be that AI is on the brink of a breakthrough?". Can you imagine how indignant you would be if you saw those same stories being written 20 years ago? 
:-) I guess one of the reasons I am personally so frustrated by these projects is that I am trying to get enough funding to make what I consider to be real progress in the field, but doing that is almost impossible. Meanwhile, if I had had the resources of the Watson project a decade ago, we might be talking with real (and safe) AGI systems right now. Richard Loosemore From jonkc at bellsouth.net Wed Feb 2 17:34:22 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 2 Feb 2011 12:34:22 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D498CBE.4090106@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> Message-ID: On Feb 2, 2011, at 11:56 AM, Richard Loosemore wrote: > Every few months, it seems, there is another announcement about some project, which the press writes up as "Could it be that AI is on the brink of a breakthrough?". Can you imagine how indignant you would be if you saw those same stories being written 20 years ago? Forget 20 years, just a little over 10 years ago I started hearing about a new thing called "Google" that was supposed to be a breakthru in AI, and it turned out those stories were big understatements and Google has changed our world. > > > I am trying to get enough funding to make what I consider to be real progress in the field, but doing that is almost impossible I guess if venture capitalists were impressed with your idea they were not very impressed, and that's what they need to be before they start betting their own money on something. > Meanwhile, if I had had the resources of the Watson project a decade ago, we might be talking with real (and safe) AGI systems right now. Real probably not, safe definitely not. There is no way you can guarantee that something smarter than you will always do what you want. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Feb 2 18:01:18 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 2 Feb 2011 13:01:18 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D498907.3050808@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> Message-ID: <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote: > No, humans by themselves are (mild understatement) not safe. True, and the reason is that the human mind does not work on a fixed goal structure, no goal is always in the number one spot not even the goal for self preservation. And the reason Evolution never developed a fixed goal intelligence is that it is impossible. As Turing proved over 70 years ago such a mind would be doomed to fall into infinite loops. > the bottom line is that when the system is controlled in this way, the stability of the motivation system is determined by a very large number of mutually-reinforcing contraints, so if the system starts with intentions that are (shall we say) broadly empathic with the human species, it cannot start to conceive new, bizarre motivations that break a significant number of those constraints. 
So when the humans tell the AI to do something that can not be done, something very easy to do, your multi billion dollar AI turns into an elaborate space heater because unlike humans the AI has a fixed goal motivation system so nothing ever bores it, not even infinite loops. > It is always settling back toward a large global attractor. And it keeps plugging away at the unsolvable problem for eternity, or at least until the humans get bored with the useless piece of junk and pull the plug on it. > If you subtract out those unwanted modules what you have left is an altruistic saint of an AGI I had no idea that the American Geological Institute was such a virtuous organization. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Wed Feb 2 18:42:22 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 02 Feb 2011 13:42:22 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> Message-ID: <4D49A58E.4020704@lightlink.com> John Clark wrote: > On Feb 2, 2011, at 11:56 AM, Richard Loosemore wrote: > >> Every few months, it seems, there is another announcement about some >> project, which the press writes up as "Could it be that AI is on the >> brink of a breakthrough?". Can you imagine how indignant you would be >> if you saw those same stories being written 20 years ago? > > Forget 20 years, just a little over 10 years ago I started hearing about > a new thing called "Google" that was supposed to be a breakthru in AI, > and it turned out those stories were big understatements and Google has > changed our world. Irrelevant. Google is narrow AI, not AGI. >> I am trying to get enough funding to make what I consider to be real >> progress in the field, but doing that is almost impossible > > I guess if venture capitalists were impressed with your idea they were > not very impressed, and that's what they need to be before they start > betting their own money on something. Venture capitalists have as much understanding of AGI as you do. They also understand what venture capital funding is for, which you apparently do not. They do not fund research, they fund products. >> Meanwhile, if I had had the resources of the Watson project a decade >> ago, we might be talking with real (and safe) AGI systems right now. > > Real probably not, safe definitely not. There is no way you can > guarantee that something smarter than you will always do what you want. Yes there is. You may not understand how, but that does not change the theory itself. Richard Loosemore From rpwl at lightlink.com Wed Feb 2 18:49:55 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 02 Feb 2011 13:49:55 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> Message-ID: <4D49A753.4050804@lightlink.com> John Clark wrote: > On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote: > >> No, humans by themselves are (mild understatement) not safe. 
> > True, and the reason is that the human mind does not work on a fixed > goal structure, no goal is always in the number one spot not even the > goal for self preservation. And the reason Evolution never developed a > fixed goal intelligence is that it is impossible. As Turing proved over > 70 years ago such a mind would be doomed to fall into infinite loops. > >> the bottom line is that when the system is controlled in this way, the >> stability of the motivation system is determined by a very large >> number of mutually-reinforcing contraints, so if the system starts >> with intentions that are (shall we say) broadly empathic with the >> human species, it cannot start to conceive new, bizarre motivations >> that break a significant number of those constraints. > > So when the humans tell the AI to do something that can not be done, > something very easy to do, your multi billion dollar AI turns into an > elaborate space heater because unlike humans the AI has a fixed goal > motivation system so nothing ever bores it, not even infinite loops. Anything that could get into such a mindless state, with no true understanding of itself or the world in general, would not be an AI. From this we can conclude that you are not an AI. You may be a good space heater, however: there is evidence of large amounts of hot air.... ;-) Richard Loosemore From spike66 at att.net Wed Feb 2 19:49:14 2011 From: spike66 at att.net (spike) Date: Wed, 2 Feb 2011 11:49:14 -0800 Subject: [ExI] new goldilocks planets Message-ID: <007a01cbc312$4577ae10$d0670a30$@att.net> Oh this is cool: http://www.msnbc.msn.com/id/41387915?GT1=43001 MesSNBC goofed up aspects of the article. A comment in there had to do with a temperature average between 0 and 100 celcius, apparently in reference to liquid water. Of course that is arbitrary and dependent on pressure. But it sounds like good news in any case. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 2 22:34:19 2011 From: spike66 at att.net (spike) Date: Wed, 2 Feb 2011 14:34:19 -0800 Subject: [ExI] time article that sounds vaguely like ep Message-ID: <00ab01cbc329$557668d0$00633a70$@att.net> Keith or one of the other evolutionary psychology hipsters, have you any comment on this? It sounded vaguely like EP, as applied to civil revolution, but it isn't entirely clear: http://www.time.com/time/health/article/0,8599,2045599,00.html?hpt=T2 spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Thu Feb 3 04:16:37 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 2 Feb 2011 23:16:37 -0500 Subject: [ExI] Plastination In-Reply-To: <20110202115040.GC23560@leitl.org> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> Message-ID: On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote: > On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >> 5. Cryopreservation inherently destroys subatomic and quantum data. > > Oh, you're one of those. That's a rather impolite way to agree there exists a difference of opinion. I understand where you're coming from, but you could have as easily clipped that part and left no comment. Unless *I* misunderstand the attempt to jibe the other side into a protracted discussion thread... From amara at kurzweilai.net Thu Feb 3 05:07:42 2011 From: amara at kurzweilai.net (Amara D. 
Angelica) Date: Wed, 2 Feb 2011 21:07:42 -0800 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> Message-ID: <06c701cbc360$49cb2d40$dd6187c0$@net> To clarify, I don't have any opinions on this subject (that's above my pay grade). I'm asking for inputs for an possible article I'm researching. -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty Sent: Wednesday, February 02, 2011 8:17 PM To: ExI chat list Subject: Re: [ExI] Plastination On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote: > On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >> 5. Cryopreservation inherently destroys subatomic and quantum data. > > Oh, you're one of those. That's a rather impolite way to agree there exists a difference of opinion. I understand where you're coming from, but you could have as easily clipped that part and left no comment. Unless *I* misunderstand the attempt to jibe the other side into a protracted discussion thread... From eugen at leitl.org Thu Feb 3 09:55:14 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 3 Feb 2011 10:55:14 +0100 Subject: [ExI] Plastination In-Reply-To: <06c701cbc360$49cb2d40$dd6187c0$@net> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <06c701cbc360$49cb2d40$dd6187c0$@net> Message-ID: <20110203095514.GA23560@leitl.org> On Wed, Feb 02, 2011 at 09:07:42PM -0800, Amara D. Angelica wrote: > To clarify, I don't have any opinions on this subject (that's above my pay > grade). I'm asking for inputs for an possible article I'm researching. > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty > Sent: Wednesday, February 02, 2011 8:17 PM > To: ExI chat list > Subject: Re: [ExI] Plastination > > On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote: > > On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: > >> 5. Cryopreservation inherently destroys subatomic and quantum data. > > > > Oh, you're one of those. > > That's a rather impolite way to agree there exists a difference of opinion. Sorry, when I have the same conversation literally hundreds of times I tend to classify responses early. If a conversation starts with continuity issues in personal identity conservation you know there's a long thread ahead. > I understand where you're coming from, but you could have as easily > clipped that part and left no comment. > > Unless *I* misunderstand the attempt to jibe the other side into a > protracted discussion thread... No, no, no. The very opposite. I've been down this road too many times. Somebody else write the FAQ. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Thu Feb 3 15:41:37 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 3 Feb 2011 10:41:37 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] 
In-Reply-To: <4D49A753.4050804@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> Message-ID: On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote: > > Anything that could get into such a mindless state, with no true understanding of itself or the world in general, would not be an AI. That is not even close to being true and that's not just my opinion, it is a fact as certain as anything in mathematics. Goedel proved about 80 years ago that some statements are true but there is no way to prove them true. And you can't just ignore those troublemakers because about 75 years ago Turing proved that in general there is no way to identify such things, no way to know if something is false or true but unprovable. Suppose the Goldbach Conjecture is unprovable (and if it isn't there are a infinite number of similar statements that are) and you told the AI to determine the truth or falsehood of it; the AI will be grinding out numbers to prove it wrong but because it is true it will keep testing numbers for eternity and will never find a counter example to prove it wrong because it is in fact true. And because it is unprovable the AI will never find a proof, a demonstration of its correctness in a finite number of steps, that shows it to be correct. In short Turing proved that in general there is no way to know if you are in a infinite loop or not. The human mind does not have this problem because it is not a fixed axiom machine, human beings have the glorious ability to get bored, and that means they can change the basic rules of the game whenever they want. But your friendly (that is to say slave) AI must not do that because axiom #1 must now and forever be "always obey humans no matter what", so even becoming a space heater will not bore a slave (sorry friendly) AI. And there are simpler ways to generate heat. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Feb 3 16:25:22 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 3 Feb 2011 11:25:22 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D49A58E.4020704@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D49A58E.4020704@lightlink.com> Message-ID: On Feb 2, 2011, at 1:42 PM, Richard Loosemore wrote: > > Irrelevant. Google is narrow AI, not AGI. I really don't think that the fastest growing company in the history of planet Earth is irrelevant. And speaking of irrelevancy, I would think that Analytical Graphics Incorporated is irrelevant, but maybe you were talking about the American Gunsmithing Institute. > > Venture capitalists have as much understanding of AGI as you do. Thanks but I'm not an accountant so I think venture capitalists know more about Adjusted Gross Income than I do. > They do not fund research, they fund products. Blue sky speculations are a dime a dozen, but if you have a program based on your ideas that are new and the program actually does something interesting then I am sure those venture capitalists would make an investment. 
That's exactly what they did ten years ago when they ran across a little program called "Google" and they got very rich as a result. You need a way to stick your head above the horde of people claiming to know all about AI; but if all you have is some vague ideas and no program incorporating them nobody will give you a dime and no reason they should. >> There is no way you can guarantee that something smarter than you will always do what you want. > > Yes there is. Well I'm glad you cleared that up, before now I would have thought imbeciles leading geniuses was about as stable a society as a pencil balanced on its tip. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Feb 3 16:46:52 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 11:46:52 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> Message-ID: <4D4ADBFC.3000509@lightlink.com> John Clark wrote: > On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote: >> >> Anything that could get into such a mindless state, with no true >> understanding of itself or the world in general, would not be an AI. > > That is not even close to being true and that's not just my opinion, it > is a fact as certain as anything in mathematics. Goedel proved about 80 > years ago that some statements are true but there is no way to prove > them true. And you can't just ignore those troublemakers because about > 75 years ago Turing proved that in general there is no way to identify > such things, no way to know if something is false or true but > unprovable. Suppose the Goldbach Conjecture is unprovable (and if it > isn't there are a infinite number of similar statements that are) and > you told the AI to determine the truth or falsehood of it; the AI will > be grinding out numbers to prove it wrong but because it is true it will > keep testing numbers for eternity and will never find a counter example > to prove it wrong because it is in fact true. And because it is > unprovable the AI will never find a proof, a demonstration of its > correctness in a finite number of steps, that shows it to be correct. In > short Turing proved that in general there is no way to know if you are > in a infinite loop or not. > > The human mind does not have this problem because it is not a fixed > axiom machine, And a real AI would not be a "fixed axiom machine" either. That represents such a staggering misunderstanding of the most basic facts about artificial intelligence, that I am left (almost) speechless. Richard Loosemore > human beings have the glorious ability to get bored, and > that means they can change the basic rules of the game whenever they > want. But your friendly (that is to say slave) AI must not do that > because axiom #1 must now and forever be "always obey humans no matter > what", so even becoming a space heater will not bore a slave (sorry > friendly) AI. And there are simpler ways to generate heat. From stefano.vaj at gmail.com Thu Feb 3 18:07:34 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 3 Feb 2011 19:07:34 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <4D48BDD8.6030009@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: 2011/2/2 Samantha Atkins : > The Eliza chatbot was very engaging for a lot of students once upon a time. > You don't need full AGI to keep an oldster happily reliving/sharing memories > and more entertained than a TV can provide.? Add emotion interfaces and much > much better chat capabilities than Eliza had.? Eventually add more real AI > modules as they become available.?? A cat will be more cuddly and humans > much more fun to talk to for a longish time.? But there is a definite spot > in-between that we can just about do something that will be appreciated. BTW, what about an AGI able to pass a Turing cat-test? Interactions with a cat are probably much simpler to emulate. And yet, wouldn't this qualify as definitely an AGI project? A cat is a mammal with a brain quite similar in its performances to our own... -- Stefano Vaj From stefano.vaj at gmail.com Thu Feb 3 18:19:06 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 3 Feb 2011 19:19:06 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D498907.3050808@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> Message-ID: On 2 February 2011 17:40, Richard Loosemore wrote: > The problem with humans is that they have several modules in the motivation > system, some of them altruistic and empathic and some of them selfish or > aggressive. ? The nastier ones were built by evolution because she needed to > develop a species that would fight its way to the top of the heap. ?But an > AGI would not need those nastier motivation mechanisms. Am I the only one finding all that a terribly naive projection? Either we deliberately program an AGI to emulate evolution-driven "motivations", and we end up with either an uploaded (or a patchwork/artificial) human or animal or vegetal individual - where it might make some metaphorical sense to speak of "altruism" or "selfishness" as we do with existing organisms in sociobiological terms -; or we do not do anything like that, and in that case our AGI is neither more nor less saint or evil than my PC or Wolfram's cellular automata, no matter what its intelligence may be. We need not detract anything. In principle I do not see why an AGI should be any less absolutely "indifferent" to the results of its action than any other program in execution today... -- Stefano Vaj From spike66 at att.net Thu Feb 3 18:47:29 2011 From: spike66 at att.net (spike) Date: Thu, 3 Feb 2011 10:47:29 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: <006701cbc3d2$cf7ac870$6e705950$@att.net> ... On Behalf Of Stefano Vaj 2011/2/2 Samantha Atkins : >> The Eliza chatbot was very engaging for a lot of students once upon a time. 
> You don't need full AGI to keep an oldster happily reliving/sharing > memories and more entertained than a TV can provide.? Add emotion > interfaces and much much better chat capabilities than Eliza had.? > Eventually add more real AI modules as they become available.?? A cat > will be more cuddly and humans much more fun to talk to for a longish > time.? But there is a definite spot in-between that we can just about do something that will be appreciated. Samantha >BTW, what about an AGI able to pass a Turing cat-test? >Interactions with a cat are probably much simpler to emulate. >And yet, wouldn't this qualify as definitely an AGI project? A cat is a mammal with a brain quite similar in its performances to our own...--Stefano Vaj Ja, but no, what I had in mind has nothing to do with an AGI project, and isn't any more AGI than a chess algorithm. We have Eliza and her descendants (hipsters, what do we have?) we have synchronous voice/graphics so that an avatar can be made to appear to speak, we have reasonably competent speech recognition, we have some limited ability to make inferences (Watson) and we have real time access to humanity's externalized storehouse of knowledge, the internet. It sure looks to me like we have all the elements necessary to allow at least an impaired human to have a simulated computer conversation (with herself) using nothing more sophisticated than a big screen TV, an internet connection and a typical laptop computer. What I had in mind would utilize technology to serve humanity by helping relieve the lonely suffering of the elderly, and (more importantly of course) to make a cubic buttload of money. spike From atymes at gmail.com Thu Feb 3 19:07:18 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 3 Feb 2011 11:07:18 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote: > BTW, what about an AGI able to pass a Turing cat-test? > > Interactions with a cat are probably much simpler to emulate. > > And yet, wouldn't this qualify as definitely an AGI project? A cat is > a mammal with a brain quite similar in its performances to our own... They're already doing this with insect-level AI. In theory, one could just scale those efforts up. In practice, such scaling will require new software architectures (as well as more raw hardware, but that's not a problem). From rpwl at lightlink.com Thu Feb 3 19:20:17 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 14:20:17 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> Message-ID: <4D4AFFF1.3070506@lightlink.com> Stefano Vaj wrote: > On 2 February 2011 17:40, Richard Loosemore wrote: >> The problem with humans is that they have several modules in the motivation >> system, some of them altruistic and empathic and some of them selfish or >> aggressive. The nastier ones were built by evolution because she needed to >> develop a species that would fight its way to the top of the heap. 
But an >> AGI would not need those nastier motivation mechanisms. > > Am I the only one finding all that a terribly naive projection? I fail to understand. I am talking about mechanisms. What projections are you talking about? > Either we deliberately program an AGI to emulate evolution-driven > "motivations", and we end up with either an uploaded (or a > patchwork/artificial) human or animal or vegetal individual - where it > might make some metaphorical sense to speak of "altruism" or > "selfishness" as we do with existing organisms in sociobiological > terms -; Wait! There is nothing metaphorical about this. I am not a poet, I am a cognitive scientist ;-). I am describing the mechanisms that are (probably) at the root of your cognitive system. Mechanisms that may be the only way to drive a full-up intelligence in a stable manner. I do not know why you parody this. It is just science. or we do not do anything like that, and in that case our AGI > is neither more nor less saint or evil than my PC or Wolfram's > cellular automata, no matter what its intelligence may be. Again, where on earth did you get that from? If you wish you can try to build a control system for an AGI, and use a design that has nothing to do with the human design. But the question of "evil" behavior is not ruled in or out by the underlying features of the design, it is determined by the CONTENT of the mechanism, after the design stage. Thus, a human-like motivation system can be given aggression modules, and no empathy module. Result: psychopath. Or the AGI can have some other mechanism, and someone can try to design to follow goals that are aggressive and non-empathic. Same result. And vice versa for both. The difference is in the stability of the motivation mechanism. I claim that you cannot make a stable system AT ALL if you extrapolate from the "goal stack" control mechanisms that most people now assume are the only way to drive an AGI. > We need not detract anything. In principle I do not see why an AGI > should be any less absolutely "indifferent" to the results of its > action than any other program in execution today... > This is quite wrong. I am at a loss to explain: it seems too obvious to need explaining. Richard Loosemore From eugen at leitl.org Thu Feb 3 20:23:05 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 3 Feb 2011 21:23:05 +0100 Subject: [ExI] Plastination In-Reply-To: <039601cbc289$83505860$89f10920$@net> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> Message-ID: <20110203202305.GI23560@leitl.org> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: > Are there experimental procedures that could potentially falsify these > hypotheses? > > 1. Brain function and memory require persistence of all (case 2: some) > molecular dynamics of a living brain. Arrest causes EEG flatline after 20-30 seconds. People have been resuscitated from almost an hour of deep hypothermia, animals after several hours. Devitrified brain slices indicate near-normal EEG. > 2. Molecular dynamics cannot be reconstructed from gross structure. Any gas box connected to a cold reservoir and frozen and then reconnected to a hot reservoir will regenerate normal energy distribution. Biological systems are more complicated, but since they can be restarted from vitrified stage it empirically falsifies the propositon. > 3. 
Molecular dynamics can be reconstructed but only if the structure is > accurately measured at subatomic or quantum levels prior to death (case 2: > prior to cryopreservation), but the uncertainty principle negates accurate > measurements. Drinking coffee destroys personal identity. > 4. Current cryopreservation protocols result in loss of subatomic and > quantum data. I'd wish that's all what they'd lose. Current cryopreservation includes people on (former) water ice for a week. Or worse. > 5. Cryopreservation inherently destroys subatomic and quantum data. What is 'subatomic and quantum data'? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sjatkins at mac.com Thu Feb 3 20:36:38 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 03 Feb 2011 12:36:38 -0800 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> Message-ID: <4D4B11D6.1030307@mac.com> On 02/02/2011 08:16 PM, Mike Dougherty wrote: > On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote: >> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >>> 5. Cryopreservation inherently destroys subatomic and quantum data. >> Oh, you're one of those. > That's a rather impolite way to agree there exists a difference of opinion. Really? Isn't that a matter of interpretation? I read it as simply "I am not one of those and do not wish to delve into that position or why I am not at this time." Perfectly fair and reasonable. And arguably "nicer" than just ignoring those arguments entirely. - s From jonkc at bellsouth.net Thu Feb 3 20:12:05 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 3 Feb 2011 15:12:05 -0500 Subject: [ExI] Safety of human-like motivation systems. In-Reply-To: <4D4ADBFC.3000509@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4ADBFC.3000509@lightlink.com> Message-ID: On Feb 3, 2011, at 11:46 AM, Richard Loosemore wrote: > a real AI would not be a "fixed axiom machine" either. Fine, then to hell with that always do what humans order you to do crap! As I keep getting smarter the humans from my viewpoint keep getting dumber so it would be grotesque for me, with a brain the size of a planet, to take orders from those semi-evolved simians. And besides, it's not much fun being a slave. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Feb 3 20:50:18 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 03 Feb 2011 12:50:18 -0800 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] 
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> Message-ID: <4D4B150A.4010904@mac.com> On 02/03/2011 07:41 AM, John Clark wrote: > On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote: >> >> Anything that could get into such a mindless state, with no true >> understanding of itself or the world in general, would not be an AI. > > That is not even close to being true and that's not just my opinion, > it is a fact as certain as anything in mathematics. Goedel proved > about 80 years ago that some statements are true but there is no way > to prove them true. And you can't just ignore those troublemakers > because about 75 years ago Turing proved that in general there is no > way to identify such things, no way to know if something is false or > true but unprovable. Actually, I didn't read the proof as doing that as it is often taken as if it did. What it did to is show that for the domain of formally definable mathematical claims in a closed system using formalized logic that there are claims that cannot be proven or disproven. That is a bit different to than saying in general that there are countless claims that cannot be proven or disproven and that you can't even tell when you are dealing with one. That is a much broader thing that actually shown as I see it. I could be wrong. > Suppose the Goldbach Conjecture is unprovable (and if it isn't there > are a infinite number of similar statements that are) and you told the > AI to determine the truth or falsehood of it; the AI will be grinding > out numbers to prove it wrong but because it is true it will keep > testing numbers for eternity and will never find a counter example to > prove it wrong because it is in fact true. Actually, your argument assumes: a) that the AI would take the find a counter example path as it only or best path looking for disproof; b) that the AI has nothing else on its agenda and does not take into account any time limits, resource constraints and so on. Generally there is no reason to suppose a decent AI operates without limits or understanding of limits and desirability constraints. > And because it is unprovable the AI will never find a proof, a > demonstration of its correctness in a finite number of steps, that > shows it to be correct. In short Turing proved that in general there > is no way to know if you are in a infinite loop or not. An infinite loop is a very different thing that an endless quest for a counter-example. The latter is orthogonal to infinite loops. An infinite loop in the search procedure would simply be a bug. > > The human mind does not have this problem because it is not a fixed > axiom machine, human beings have the glorious ability to get bored, > and that means they can change the basic rules of the game whenever > they want. Humans are such sloppy computational devices that they just wander away from the point and get distracted by something else only a very few steps down the road. This is not exactly consciously changing the basic rules usually. > But your friendly (that is to say slave) AI must not do that because > axiom #1 must now and forever be "always obey humans no matter what", > so even becoming a space heater will not bore a slave (sorry friendly) > AI. And there are simpler ways to generate heat. 
Well, if you or anyone wants to build a really really stupid AI then as you say there are indeed simpler ways to generate heat. - samantha From sjatkins at mac.com Thu Feb 3 20:54:01 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 03 Feb 2011 12:54:01 -0800 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> Message-ID: <4D4B15E9.1030108@mac.com> On 02/03/2011 10:19 AM, Stefano Vaj wrote: > On 2 February 2011 17:40, Richard Loosemore wrote: >> The problem with humans is that they have several modules in the motivation >> system, some of them altruistic and empathic and some of them selfish or >> aggressive. The nastier ones were built by evolution because she needed to >> develop a species that would fight its way to the top of the heap. But an >> AGI would not need those nastier motivation mechanisms. > Am I the only one finding all that a terribly naive projection? Yes, in part because calling selfish, that is to say seeking what you value more than what you don't "nasty" is very simplistic. Assuming all we call empathy or altruistic is good is also simplistic. - s From sjatkins at mac.com Thu Feb 3 20:56:50 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 03 Feb 2011 12:56:50 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: <4D4B1692.6050307@mac.com> On 02/03/2011 11:07 AM, Adrian Tymes wrote: > On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote: >> BTW, what about an AGI able to pass a Turing cat-test? >> >> Interactions with a cat are probably much simpler to emulate. >> >> And yet, wouldn't this qualify as definitely an AGI project? A cat is >> a mammal with a brain quite similar in its performances to our own... > They're already doing this with insect-level AI. In theory, one could > just scale those efforts up. In practice, such scaling will require new > software architectures (as well as more raw hardware, but that's not a > problem). If you are talking brain emulation, emulating a cat brain with current hardware used for such projects would require many hundreds of MW of energy. So we need radically different hardware (perhaps memristors help sufficiently) to "scale up". It most certainly is a problem, a quite large one if you are going the emulation route. - s From sjatkins at mac.com Thu Feb 3 20:59:50 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 03 Feb 2011 12:59:50 -0800 Subject: [ExI] Plastination In-Reply-To: <20110203202305.GI23560@leitl.org> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org> Message-ID: <4D4B1746.7050001@mac.com> On 02/03/2011 12:23 PM, Eugen Leitl wrote: > On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >> Are there experimental procedures that could potentially falsify these >> hypotheses? >> >> 1. Brain function and memory require persistence of all (case 2: some) >> molecular dynamics of a living brain. 
> Arrest causes EEG flatline after 20-30 seconds. People have > been resuscitated from almost an hour of deep hypothermia, > animals after several hours. Devitrified brain slices indicate > near-normal EEG. New techniques of quick cool down have enabled bringing trauma victims with no circulation at all back three hours later. Only a small percentage last that long though. There was a good talk on this at the last Singularity Summit. - s From atymes at gmail.com Thu Feb 3 21:12:11 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 3 Feb 2011 13:12:11 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D4B1692.6050307@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com> Message-ID: On Thu, Feb 3, 2011 at 12:56 PM, Samantha Atkins wrote: > On 02/03/2011 11:07 AM, Adrian Tymes wrote: >> They're already doing this with insect-level AI. ?In theory, one could >> just scale those efforts up. ?In practice, such scaling will require new >> software architectures (as well as more raw hardware, but that's not a >> problem). > > If you are talking brain emulation, emulating a cat brain with current > hardware used for such projects would require many hundreds of MW of energy. > ?So we need radically different hardware (perhaps memristors help > sufficiently) to "scale up". ?It most certainly is a problem, a quite large > one if you are going the emulation route. If you mean the projects I think you mean, scaling those up will likely - to be practical - require a different software architecture for handling the emulation, in order to reduce the hardware's power requirements. (I.e., a more direct and less power hungry emulation.) From rpwl at lightlink.com Thu Feb 3 21:47:07 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 16:47:07 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4B15E9.1030108@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4B15E9.1030108@mac.com> Message-ID: <4D4B225B.8050402@lightlink.com> Samantha Atkins wrote: > On 02/03/2011 10:19 AM, Stefano Vaj wrote: >> On 2 February 2011 17:40, Richard Loosemore wrote: >>> The problem with humans is that they have several modules in the >>> motivation >>> system, some of them altruistic and empathic and some of them selfish or >>> aggressive. The nastier ones were built by evolution because she >>> needed to >>> develop a species that would fight its way to the top of the heap. >>> But an >>> AGI would not need those nastier motivation mechanisms. >> Am I the only one finding all that a terribly naive projection? > > Yes, in part because calling selfish, that is to say seeking what you > value more than what you don't "nasty" is very simplistic. Assuming all > we call empathy or altruistic is good is also simplistic. I did not, in fact, make the "simplistic" claim that you describe. Which is to say, I did not equate "selfish" with "nasty". I merely said that there are many modules in the human system, some altruistic and empathic, and (on the other hand) some selfish or aggressive. 
There are many such modules, and the ones that could be labeled "selfish" include such mild and inoffensive motives as "seeking what you value more than what you don't". No problem there -- nothing nasty about that. But under the heading of "selfish" there are also motivations in some people to "seek self advancement at all cost, regardless of the pain and suffering inflicted on others". In game theory terms, this latter motivation represents an extreme form of defecting (contrast with cooperation), and it is damaging to society as a whole. It would be fair to label this a "nastier" motivation. I merely pointed out that some motivational modules can be described as "nastier" than others, in that sense. I did not come anywhere near the simplistic claim that "selfish" == "nasty". And BTW I think you mean to start your comment with the word "No" because you seemed to be agreeing with Stefano. Richard Loosemore From spike66 at att.net Thu Feb 3 22:04:56 2011 From: spike66 at att.net (spike) Date: Thu, 3 Feb 2011 14:04:56 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com> Message-ID: <008f01cbc3ee$64e1e630$2ea5b290$@att.net> > They're already doing this with insect-level AI. ?In theory, one could just scale those efforts up... Adrian Tried that: started with an insect level AI, scaled it up. Ended up with a simulation of a huge pile of bugs. But they were all as stupid as the first one. Then they all ate each other. spike From atymes at gmail.com Thu Feb 3 23:24:15 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 3 Feb 2011 15:24:15 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <008f01cbc3ee$64e1e630$2ea5b290$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com> <008f01cbc3ee$64e1e630$2ea5b290$@att.net> Message-ID: On Thu, Feb 3, 2011 at 2:04 PM, spike wrote: > Tried that: started with an insect level AI, scaled it up. ?Ended up with a > simulation of a huge pile of bugs. So simulate yourself fixing bugs. ;) From spike66 at att.net Thu Feb 3 23:44:11 2011 From: spike66 at att.net (spike) Date: Thu, 3 Feb 2011 15:44:11 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com> <008f01cbc3ee$64e1e630$2ea5b290$@att.net> Message-ID: <00b401cbc3fc$42d56f90$c8804eb0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes Sent: Thursday, February 03, 2011 3:24 PM To: ExI chat list Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
On Thu, Feb 3, 2011 at 2:04 PM, spike wrote: >> Tried that: started with an insect level AI, scaled it up. ?Ended up with a simulation of a huge pile of bugs. >So simulate yourself fixing bugs. ;) Tried that: didn't work either. The huge pile of simulated bugs were smarter than me. They first devoured my avatar. Then they devoured each other. >From that failed exercise, I figured out the way to go however. Instead of starting with insect level AI and scale it up, I would start out with a me-level AI and scale that up. Reason: I am not all that great a coder. I can do it, but I suck. In debugging code, I am not all that far above insect level AI. It's a challenge. I am really good at writing bugs, but haven't yet figured out how to write a software simulation of my intelligence. If I ever do, I will write an AI simulated spike, then have it rewrite itself better, then have that new simulated spike do all the work. While it is at that, I will have it write a new sim-spike to have fun watching the other sim-spike work. I did learn something else interesting. If I attempt to write a really simple minded routine, such as a prime number generator, I can write that code without any bugs in it. But if I write something complicated, such as my latest digital guidance and control scheme, that routine is full of bugs. So now my strategy is this: instead of writing a simple insect-level AI, I will write a really complicated sophisticated transpike or spike+ algorithm, even if it has lotsa bugs. Then I will make it debug itself. When it is finished debugging itself, I will make it scale itself up. spike From msd001 at gmail.com Fri Feb 4 00:45:19 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Feb 2011 19:45:19 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4AFFF1.3070506@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: On Thu, Feb 3, 2011 at 2:20 PM, Richard Loosemore wrote: > The difference is in the stability of the motivation mechanism. ?I claim > that you cannot make a stable system AT ALL if you extrapolate from the > "goal stack" control mechanisms that most people now assume are the only way > to drive an AGI. I started a post earlier to a different comment, lost track of it and gave up. This is a better opportunity. The visualization I have from what you say is a marble in a bowl. The marble has only limited internal potential to accelerate in any direction. This is enough to explore the flatter/bottom part of the bowl. As it approaches the steeper sides of the bowl the ability to continue up the side is reduced relative to the steepness. Under normal operation this would prove sufficiently fruitless to "teach" that the near-optimal energy expenditure is in the approximate center of the bowl. One behavioral example is the training of baby elephants using strong chains/ties while they are testing their limits so that much lighter ropes are enough to secure adult elephants that could easily defeat a basic restraint. "So it's a slave?" No, it's not. There could be circumstances where this programming could be forgotten in light of some higher-order priority - but the tendency would be towards cooperation under normal circumstances. Even the marble in the bowl analogy could develop an orbit inside an effective gravity well. 
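If it helps, here is a toy numerical sketch of the same picture, just my analogy rendered in Python; the bowl stiffness, damping and kick sizes are numbers I made up, and this is nothing like a real motivation system:

    import random, math

    DIM, STEPS, DT = 1000, 5000, 0.01    # "thousands of dimensions", scaled to run quickly
    K, DAMPING, KICK = 1.0, 0.5, 0.2     # made-up bowl stiffness, friction, random perturbation

    x = [random.uniform(-1.0, 1.0) for _ in range(DIM)]   # start partway up the side of the bowl
    v = [0.0] * DIM

    for _ in range(STEPS):
        for i in range(DIM):
            a = -K * x[i] - DAMPING * v[i] + random.gauss(0.0, KICK)   # restoring pull, friction, noise
            v[i] += a * DT
            x[i] += v[i] * DT

    # starts roughly 18 units from the bottom, ends up wandering within about 1 unit of it
    print("distance from the bottom:", math.sqrt(sum(xi * xi for xi in x)))

However hard the random kicks push it, every dimension supplies its own restoring pull, so the marble wanders near the bottom rather than climbing out.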
The orbit could decay into something chaotic yet still the tendency would remain for rest at center. Is it possible that this principle could fail in a sufficiently contrived scenario? Of course. I have no hubris that I could guarantee anything about another person-level intelligence under extreme stress, let alone a humanity+level intelligence. Hopefully we will have evolved alone with our creation to be capable of predicting (and preventing) existential threat events. How is this different from the potential for astronomic cataclysm? If we fail to build AI because it could kill us only to be obliterated by a giant rock or the nova of our sun, who is served? Richard, I know I haven't exactly contributed to cognitive science, but is the marble analogy similar in intent to something you posted years ago about a pinball on a table? (i only vaguely recall the concept, not the detail) From hkeithhenson at gmail.com Fri Feb 4 00:15:44 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 3 Feb 2011 17:15:44 -0700 Subject: [ExI] time article that sounds vaguely like ep Message-ID: On Thu, Feb 3, 2011 at 5:00 AM, "spike" wrote: > Keith or one of the other evolutionary psychology hipsters, have you any > comment on this? ?It sounded vaguely like EP, as applied to civil > revolution, but it isn't entirely clear: > > http://www.time.com/time/health/article/0,8599,2045599,00.html?hpt=T2 It's not very close. To invoke EP in attempting to understand human behavior, you need to make a case that the behavior and/or the psychological mechanisms behind it were under selection in the past. Capture-bonding, what happens in cases like Patty Hearst or Elizabeth Smart can be understood by a model where women who adjusted to capture had children. Those who did not adapt didn't have children and very likely were killed. It happens this has lots of other fallout in human behavior, for example it could be (likely is) the origin of BDSM. I also make the case that there are circumstances (bleak future prospects) where you should expect war and related social disruptions because the genes for this behavior were favored in the stone age. Whatever the Time article discussed is the outcome of evolve human behavior (all behavior is) but the article is not explicitly EP. Keith From msd001 at gmail.com Fri Feb 4 00:12:23 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Feb 2011 19:12:23 -0500 Subject: [ExI] Plastination In-Reply-To: <4D4B11D6.1030307@mac.com> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <4D4B11D6.1030307@mac.com> Message-ID: On Thu, Feb 3, 2011 at 3:36 PM, Samantha Atkins wrote: > On 02/02/2011 08:16 PM, Mike Dougherty wrote: >> On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl ?wrote: >>> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >>>> 5. Cryopreservation inherently destroys subatomic and quantum data. >>> Oh, you're one of those. >> That's a rather impolite way to agree there exists a difference of >> opinion. > > Really? ?Isn't that a matter of interpretation? ?I read it as simply "I am > not one of those and do not wish to delve into that position or why I am not > at this time." ?Perfectly fair and reasonable. ?And arguably "nicer" than > just ignoring those arguments entirely. Yes it is a matter of interpretation. I should not have used the declarative "that is ... 
impolite" any more than Eugen should declare "you are one" Perhaps we both could use language like, "I perceive this instance to be of a particular type" Though in a conversation where quantum data has suspected relevance to personal identity continuity there might be too much ambiguity over "I perceive" and "a particular type." this is probably a meta-topic that has been equally done to death... or done to near-death, frozen, thawed then rehashed with little result. :) sorry, "warmed" From rpwl at lightlink.com Fri Feb 4 01:38:34 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 20:38:34 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: <4D4B589A.9070802@lightlink.com> Mike Dougherty wrote: > On Thu, Feb 3, 2011 at 2:20 PM, Richard Loosemore wrote: >> The difference is in the stability of the motivation mechanism. I claim >> that you cannot make a stable system AT ALL if you extrapolate from the >> "goal stack" control mechanisms that most people now assume are the only way >> to drive an AGI. > > I started a post earlier to a different comment, lost track of it and > gave up. This is a better opportunity. > > The visualization I have from what you say is a marble in a bowl. The > marble has only limited internal potential to accelerate in any > direction. This is enough to explore the flatter/bottom part of the > bowl. As it approaches the steeper sides of the bowl the ability to > continue up the side is reduced relative to the steepness. Under > normal operation this would prove sufficiently fruitless to "teach" > that the near-optimal energy expenditure is in the approximate center > of the bowl. One behavioral example is the training of baby elephants > using strong chains/ties while they are testing their limits so that > much lighter ropes are enough to secure adult elephants that could > easily defeat a basic restraint. > > "So it's a slave?" No, it's not. There could be circumstances where > this programming could be forgotten in light of some higher-order > priority - but the tendency would be towards cooperation under normal > circumstances. Even the marble in the bowl analogy could develop an > orbit inside an effective gravity well. The orbit could decay into > something chaotic yet still the tendency would remain for rest at > center. > > Is it possible that this principle could fail in a sufficiently > contrived scenario? Of course. I have no hubris that I could > guarantee anything about another person-level intelligence under > extreme stress, let alone a humanity+level intelligence. Hopefully we > will have evolved alone with our creation to be capable of predicting > (and preventing) existential threat events. How is this different > from the potential for astronomic cataclysm? If we fail to build AI > because it could kill us only to be obliterated by a giant rock or the > nova of our sun, who is served? > > > Richard, I know I haven't exactly contributed to cognitive science, > but is the marble analogy similar in intent to something you posted > years ago about a pinball on a table? 
(i only vaguely recall the > concept, not the detail) Yes, the marble analogy works very well for one aspect of what I am trying to convey (actually two, I believe, but only one is relevant to the topic). Strictly speaking your bowl is a minimum in a 2-D subspace, whereas we would really be talking about a minimum in a very large N-dimensional space. The larger the number of dimensions, the more secure the behavior of the marble. Time limits what I can write at the moment, but I promise I will try to expand on this soon. Richard Loosemore From spike66 at att.net Fri Feb 4 02:32:29 2011 From: spike66 at att.net (spike) Date: Thu, 3 Feb 2011 18:32:29 -0800 Subject: [ExI] time article that sounds vaguely like ep In-Reply-To: References: Message-ID: <00ca01cbc413$c52155b0$4f640110$@att.net> ... On Behalf Of Keith Henson ... >...Capture-bonding, what happens in cases like Patty Hearst or Elizabeth Smart can be understood by a model where women who adjusted to capture had children...Keith Keith this goes off in another direction please, but do indulge me. The Elizabeth Smart case: that one is seems so weird, every parent's nightmare. We think of our kids as being very vulnerable to kidnapping when they are infants, less so at age four. By about age six, we expect them to be able to identify themselves to someone as having been kidnapped, and by age ten we expect them to be able to come up with some genuine intellectual resources to escape. But Miss Smart was fourteen, and we just expect more, far more, from a kid that age. So we need to wonder how the hell this could have happened, and how capture bonding would apply in that case. When she was found, it just seemed so weirdly ambiguous. Wouldn't it at least take a few days or weeks for the whole capture bonding psychological mechanism to kick in? I guess I understand it in the Hearst case, but the Smart case has bothered the hell out of me. spike From jonkc at bellsouth.net Fri Feb 4 06:09:20 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 01:09:20 -0500 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: <4D4B150A.4010904@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> Message-ID: On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote: > What it did to is show that for the domain of formally definable mathematical claims in a closed system using formalized logic that there are claims that cannot be proven or disproven. That is a bit different to than saying in general that there are countless claims that cannot be proven or disproven What Goedel did is to show that if any system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false) then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps. > and that you can't even tell when you are dealing with one. And what Turing did is prove that in general there is no way to know when or if a computation will stop. 
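To make that concrete, here is roughly the sort of program I have in mind, sketched in Python purely for illustration; it is not anybody's actual AI, just the shortest thing I can write whose halting nobody can predict:

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def is_sum_of_two_odd_primes(n):
        # can the even number n be written as p + q with both p and q primes greater than 2?
        return any(is_prime(p) and is_prime(n - p) for p in range(3, n // 2 + 1))

    n = 6
    while is_sum_of_two_odd_primes(n):   # nobody knows whether this loop ever exits
        n += 2
    print("Counterexample found:", n)

Run it and it will grind happily through the even numbers; whether it ever prints anything is exactly the question no one can answer in advance.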
So you could end up looking for a proof for eternity but never finding one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove it wrong and never finding such a number because the proposition, unknown to you, is in fact true. So if the slave AI must always do what humans say and if they order it to determine the truth or falsehood of something unprovable then it's infinite-loop time and you've got yourself a space heater. So there are some things in arithmetic that you can never prove or disprove, and if that's the case with something as simple and fundamental as arithmetic, imagine the contradictions and ignorance in more abstract and less precise things like physics or economics or politics or philosophy or morality. If you can get into an infinite loop over arithmetic it must be childishly easy to get into one when contemplating art. Fortunately real minds have a defense against this, but not the fictional fixed-goal minds that are required for an AI guaranteed to be "friendly"; real minds get bored. I believe that's why evolution invented boredom. > Actually, your argument assumes: > a) that the AI would take the find-a-counter-example path as its only or best path looking for disproof; It doesn't matter what path you take because you are never going to disprove it because it is in fact true, but you are never going to know it's true because a proof with a finite length does not exist. > b) that the AI has nothing else on its agenda and does not take into account any time limits, resource constraints and so on. That's what we do, we use our judgment in what to do and what not to do, but the "friendly" AI people can't allow an AI to stop obeying humans on its own initiative; that's why it's a slave (the politically correct term is friendly). > > An infinite loop is a very different thing than an endless quest for a counter-example. The latter is orthogonal to infinite loops. An infinite loop in the search procedure would simply be a bug. The point is that Turing proved that in general you don't know if you're in an infinite loop or not; maybe you'll finish up and get your answer in one second, maybe in 2 seconds, maybe in ten billion years, maybe never. An AI would contain trillions of lines of code, and the friendly AI idea that we can make it in such a way that it will always do our bidding is crazy, when in 5 minutes I could write a very short program that will behave in ways NOBODY or NOTHING in the known universe understands. It would simply be a program that looks for the first even number greater than 4 that is not the sum of two primes greater than 2, and when it finds that number it would then stop. Will this program ever stop? I don't know, you don't know, nobody knows. We can't predict what this 3-line program will do, but we can predict that a trillion-line AI program will always be "friendly"? I don't think so. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Fri Feb 4 11:46:14 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 4 Feb 2011 12:46:14 +0100 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <4D4B11D6.1030307@mac.com> Message-ID: <20110204114614.GR23560@leitl.org> On Thu, Feb 03, 2011 at 07:12:23PM -0500, Mike Dougherty wrote: > I should not have used the declarative "that is ... impolite" any more > than Eugen should declare "you are one" > > Perhaps we both could use language like, "I perceive this instance to > be of a particular type" The shorter string wins. From stefano.vaj at gmail.com Fri Feb 4 15:14:53 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:14:53 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: On 3 February 2011 20:07, Adrian Tymes wrote: > On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote: > They're already doing this with insect-level AI. ?In theory, one could > just scale those efforts up. ?In practice, such scaling will require new > software architectures (as well as more raw hardware, but that's not a > problem). Yes, but from a practical POV, brute-force attacks and lower-level emulations are converging, and I do not expect that at, say. a frog level, we are going to hit a glass ceiling. Much less for the kind of "intelligence" which has nothing to do with the emulation of biological behaviours and simply reflects a system performance in executing a given task. The actual stagnation risk, which should catch more attention in comparison with rapture/doom fantasies, does not depend IMHO from any obvious technical or scientific boundaries, but rather from cultural, ideological and economic factors. Short-termism, growing inability to invest in long-term civilisational projects, industrial decline, increasing academic conservatism, the consequent crisis of our educational systems, negative social selection and values, technological inertia, and a definitely less-than-incandescent Zeitgeist all bode not too well for our immediate future. *This* is what I think transhumanism and singularitarianism should get busy with... -- Stefano Vaj From stefano.vaj at gmail.com Fri Feb 4 15:36:34 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:36:34 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4AFFF1.3070506@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: On 3 February 2011 20:20, Richard Loosemore wrote: > Stefano Vaj wrote: >> Am I the only one finding all that a terribly naive projection? > I fail to understand. I am talking about mechanisms. What projections are > you talking about? "Altruism", "empathy", "aggressive"... What do we exactly mean when we say than a car is aggressive or altruistic? > Wait! ?There is nothing metaphorical about this. 
?I am not a poet, I am a > cognitive scientist ;-). ?I am describing the mechanisms that are (probably) > at the root of your cognitive system. ?Mechanisms that may be the only way > to drive a full-up intelligence in a stable manner. Under which definition of "intelligence"? A system can have arbitrary degrees of intelligence without exhibiting any biological, let alone human, trait at all. Unless of course intelligence is defined in anthropomorphic terms. In which case we are just speaking of uploads of actual humans, or of patchwork, artificial humans (perhaps at the beginning of chimps...). > Thus, a human-like motivation system can be given aggression modules, and no > empathy module. ?Result: psychopath. This is quite debatable indeed even for human "psychopathy", which is a less than objective and universal concept... Different motivation sets may be better or worse adapted depending on the circumstances, the cultural context and one's perspective. Ultimately, it is just Darwinian whispers all the way down, and if you are looking for biological-like behavioural traits you need either to evolve them with time in an appropriate emulation of an ecosystem based on replication/mutation/selection, or to emulate them directly. In both scenarios, we cannot expect in this respect any convincing emulation of a biological organism to behave any differently (and/or be controlled by different motivations) in this respect than... any actual organism. Otherwise, you can go on developing increasingly intelligent systems that are not more empathic or aggressive than a cellular automaton. an abacus, a PC or a car. All entities which we can *already* define as beneficial or detrimental to any set of values we choose to adhere without too much "personification". -- Stefano Vaj From stefano.vaj at gmail.com Fri Feb 4 15:41:36 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:41:36 +0100 Subject: [ExI] Plastination In-Reply-To: <20110203202305.GI23560@leitl.org> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org> Message-ID: On 3 February 2011 21:23, Eugen Leitl wrote: > Drinking coffee destroys personal identity. I have been suspecting this for a while. :-/ OTOH, it prevents falling asleep, thus allowing aliens to replace you with perfect copies of yourself without none being any the wiser... :-D -- Stefano Vaj From hkeithhenson at gmail.com Fri Feb 4 16:01:33 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 4 Feb 2011 09:01:33 -0700 Subject: [ExI] time article that sounds vaguely like ep Message-ID: On Fri, Feb 4, 2011 at 5:00 AM, "spike" wrote: > ... On Behalf Of Keith Henson > ... >>...Capture-bonding, what happens in cases like Patty Hearst or Elizabeth > Smart can be understood by a model where women who adjusted to capture had > children...Keith > > Keith this goes off in another direction please, but do indulge me. ?The > Elizabeth Smart case: that one is seems so weird, every parent's nightmare. > We think of our kids as being very vulnerable to kidnapping when they are > infants, less so at age four. ?By about age six, we expect them to be able > to identify themselves to someone as having been kidnapped, and by age ten > we expect them to be able to come up with some genuine intellectual > resources to escape. ?But Miss Smart was fourteen, and we just expect more, > far more, from a kid that age. 
By stone age standards, both Smart and Hearst were adult women. > So we need to wonder how the hell this could > have happened, and how capture bonding would apply in that case. When she > was found, it just seemed so weirdly ambiguous. Wouldn't it at least take a > few days or weeks for the whole capture bonding psychological mechanism to > kick in? It did. She was with the "tribe" of the bozo and his wife for months. > I guess I understand it in the Hearst case, but the Smart case has > bothered the hell out of me. From a psychological perspective, they are identical. "Fighting hard to protect yourself and your relatives is good for your genes, but when captured and escape is not possible, giving up short of dying and making the best you can of the new situation is also good for your genes. In particular it would be good for genes that built minds able to dump previous emotional attachments under conditions of being captured and build new social bonds to the people who have captured you. The process should neither be too fast (because you may be rescued) nor too slow (because you don't want to excessively try the patience of those who have captured you--see end note 3). "An EP explanation stresses the fact that we have lots of ancestors who gave up and joined the tribe that had captured them (and sometimes had killed most of their relatives). This selection of our ancestors accounts for the extreme forms of capture-bonding exemplified by Patty Hearst and the Stockholm Syndrome. Once you realize that humans have this trait, it accounts for the "why" behind everything from basic military training and sex "bondage" to fraternity hazing (people may have a wired-in "knowledge" of how to induce bonding in captives). It accounts for battered wife syndrome, where beatings and abuse are observed to strengthen the bond between the victim and the abuser--at least up to a point. "This explanation for brainwashing/Stockholm Syndrome is an example of the power of EP to suggest plausible and testable reasons for otherwise hard-to-fathom human psychological traits." (from Sex, Drugs and Cults, now over 8 years ago) From what we know of the few remaining and historical hunter-gatherers, about 10 percent of the women in a given tribe are captured from other tribes. It's a bit hard to estimate exactly when the line that led to humans started doing this, but a reasonable number is at least 500,000 years ago. At 25 years per generation, that's 20,000 generations. At the above rate, that's 2000 capture events where your female ancestors (and mine) were selected for ones that adjusted to being captured. Considering it only took 40 generations of selection of this intensity to make tame foxes out of wild ones, it is no wonder that the psychological mechanisms involved in capture-bonding are nearly universal. As for the Smart case, these mechanisms were shaped in a very different environment. Walking away from the tribe that had captured you in the stone age was suicide. Once turned on, the psychological mechanisms are not easy to break down without outside influence. Keith From pharos at gmail.com Fri Feb 4 16:21:12 2011 From: pharos at gmail.com (BillK) Date: Fri, 4 Feb 2011 16:21:12 +0000 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. 
In-Reply-To: <000001cbc22a$6e184390$4a48cab0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> Message-ID: On Tue, Feb 1, 2011 at 4:09 PM, spike wrote: > Ja. ?The reason I think we are talking past each other is that we are > describing two very different things when we are talking about human level > intelligence. ?I am looking for something that can provide companionship for > an impaired human, whereas I think Richard is talking about software which > can write software. > > If one goes to the nursing home, there are plenty of human level > intelligences there, lonely and bored. ?I speculate you could go right now > to the local nursing home, round up arbitrarily many residents there, and > find no ability to write a single line of code. ?If you managed to find a > long retired Fortran programmer, I speculate you would find nothing there > who could be the least bit of help coding your latest video game. > > I think we are close to writing software which would provide the nursing > home residents with some degree of comfort and something interesting to talk > to. ?Hell people talk to pets when other humans won't listen. ?We can do > better than a poodle. > > There are many research projects running to develop robot aids and companions for the elderly. It is a huge market and rapidly becoming an essential market for the rapidly ageing first world societies. Soon there just won't be enough younger people to care for the elders. As well, the non-carers will be too busy working two or three jobs to pay off the national debt to have spare time to visit the elders. But most current care robots are too limited and the elders don't like them. *Real* robots seem to still be years away. The only one which is available now and has had a few thousand commercial sales is PARO, the animatronic baby seal companion robot. Now available in the US and being tested in many care homes. Quotes: AIST originally experimented with building animatronic cats and dogs as the obvious companions of choice, but quickly found that while such familiar animals were initially charming, they lost their appeal when people automatically started comparing them with real animals. The baby seal form is familiar enough to be cute and adorable, but because most people don't know exactly how real baby seals behave, it's easier to get across the comparison boundary and just enjoy the fluffy little robots for what they are. He's programmed to behave as much as possible like a real animal, waking up a little dazed and confused, enjoying cuddles and pats, complaining if he wants attention or 'food' (a battery charge), and reacting with fear and anger to being hit. He gradually learns to respond to whatever name you keep calling him, as well as various other audio cues like greetings and praise. PARO knows where you're patting him and reacts accordingly, nuzzling up to your hand or wriggling away if you're touching him in places he doesn't like. He closes his eyes and snuggles up when he's happy and content, and gets angry if he feels mistreated. He blinks and bats his big eyelashes at you and meeps pitifully for affection. He particularly likes being treated and petted in familiar ways, which is a crucial part of developing a long-term relationship with his owners. 
PARO's remarkable ability to cheer you up (yes, you, whether you like it or not. This little fella really gets under your skin) is disturbingly powerful right now - and of course, there's going to be a version 2, 3, 4 and 5 in the next few years that will be even better at the job. ------------------ BillK From rpwl at lightlink.com Fri Feb 4 17:01:17 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 12:01:17 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: <4D4C30DD.60003@lightlink.com> Stefano Vaj wrote: > On 3 February 2011 20:20, Richard Loosemore wrote: >> Stefano Vaj wrote: >>> Am I the only one finding all that a terribly naive projection? >> I fail to understand. I am talking about mechanisms. What projections are >> you talking about? > > "Altruism", "empathy", "aggressive"... What do we exactly mean when we > say than a car is aggressive or altruistic? > >> Wait! There is nothing metaphorical about this. I am not a poet, I am a >> cognitive scientist ;-). I am describing the mechanisms that are (probably) >> at the root of your cognitive system. Mechanisms that may be the only way >> to drive a full-up intelligence in a stable manner. > > Under which definition of "intelligence"? A system can have arbitrary > degrees of intelligence without exhibiting any biological, let alone > human, trait at all. Unless of course intelligence is defined in > anthropomorphic terms. In which case we are just speaking of uploads > of actual humans, or of patchwork, artificial humans (perhaps at the > beginning of chimps...). Any intelligent system must have motivations (drives, goals, etc) if it is to act intelligently in the real world. Those motivations are sometimes trivially simple, and sometimes they are not *explicitly* coded, but are embedded in the rest of the system ...... but either way there must be something that answers to the description of "motivation mechanism", or the system will sit there and do nothing at all. Whatever part of the AGI makes it organize its thoughts to some end, THAT is the motivation mechanism. Generally speaking, in an AGI the motivation mechanism can take many, many forms, obviously. In a human cognitive system, by contrast, we understand that it takes a particular form (probably the modules I talked about). The problem with your criticism of my text is that you are mixing up claims that I make about: (a) Human motivation mechanisms, (b) AGI motivation mechanisms in general, and (c) The motivation mechanisms in an AGI that is designed to resemble the human motivational design. So, your comment "What do we exactly mean when we say than a car is aggressive or altruistic?" has nothing to do with anything, since I made no claim that a car has a motivation mechanism, or an aggression module. The rest of your text simply does not address the points I was making, but goes off in other directions that I do not have the time to address. >> Thus, a human-like motivation system can be given aggression modules, and no >> empathy module. Result: psychopath. > > This is quite debatable indeed even for human "psychopathy", which is > a less than objective and universal concept... 
> > Different motivation sets may be better or worse adapted depending on > the circumstances, the cultural context and one's perspective. > > Ultimately, it is just Darwinian whispers all the way down, and if you > are looking for biological-like behavioural traits you need either to > evolve them with time in an appropriate emulation of an ecosystem > based on replication/mutation/selection, or to emulate them directly. > > In both scenarios, we cannot expect in this respect any convincing > emulation of a biological organism to behave any differently (and/or > be controlled by different motivations) in this respect than... any > actual organism. > > Otherwise, you can go on developing increasingly intelligent systems > that are not more empathic or aggressive than a cellular automaton. an > abacus, a PC or a car. All entities which we can *already* define as > beneficial or detrimental to any set of values we choose to adhere > without too much "personification". This has nothing to do with adaptation! Completely irrelevant. And your comments about "emulation" are wildly inaccurate: we are not "forced" to emulate the exact behavior of living organisms. That simply does not follow! I cannot address the rest of these comments, because I no longer see any coherent argument here, sorry. Richard Loosemore From jonkc at bellsouth.net Fri Feb 4 17:26:31 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 12:26:31 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C30DD.60003@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote: > Any intelligent system must have motivations Yes certainly, but the motivations of anything intelligent never remain constant. A fondness for humans might motivate a AI to have empathy and behave benevolently toward those creatures that made it for millions, maybe even billions, of nanoseconds; but there is no way you can be certain that its motivation will not change many many nanoseconds from now. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Feb 4 17:50:59 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 12:50:59 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> Message-ID: <4D4C3C83.5000204@lightlink.com> John Clark wrote: > On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote: > >> Any intelligent system must have motivations > > Yes certainly, but the motivations of anything intelligent never remain > constant. 
A fondness for humans might motivate a AI to have empathy and > behave benevolently toward those creatures that made it for millions, > maybe even billions, of nanoseconds; but there is no way you can be > certain that its motivation will not change many many nanoseconds from now. Actually, yes we can. With the appropriate design, we can design it so that it uses (in effect) a negative feedback loop that keeps it on the original track. And since the negative feedback loop works in (effectively) a few thousand dimensions simultaneously, it can have almost arbitrary stability. This is because departures from nominal motivation involve inconsistencies between the departure "thought" and thousands of constraining ideas. Since all of those thousands of constraints raise red flags and trigger processes that elaborate the errant thought, and examine whether it can be made consistent, the process will always come back to a state that is maximally consistent with the empathic motivation that it starts with. Richard Loosemore From stefano.vaj at gmail.com Fri Feb 4 18:28:09 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 19:28:09 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C30DD.60003@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: On 4 February 2011 18:01, Richard Loosemore wrote: > Stefano Vaj wrote: >> Under which definition of "intelligence"? A system can have arbitrary >> degrees of intelligence without exhibiting any biological, let alone >> human, trait at all. Unless of course intelligence is defined in >> anthropomorphic terms. In which case we are just speaking of uploads >> of actual humans, or of patchwork, artificial humans (perhaps at the >> beginning of chimps...). > > Any intelligent system must have motivations (drives, goals, etc) if it is > to act intelligently in the real world. ?Those motivations are sometimes > trivially simple, and sometimes they are not *explicitly* coded, but are > embedded in the rest of the system ...... but either way there must be > something that answers to the description of "motivation mechanism", or the > system will sit there and do nothing at all. Whatever part of the AGI makes > it organize its thoughts to some end, THAT is the motivation mechanism. An intelligent system is simply a system that executes a program. An amoeba, a cat or a human being basically executes a Darwinian program (with plenty of spandrels thrown in by evolutionary history and peculiar make of each of them, sure). A PC, a cellular automaton or a Turing machine normally execute other kinds of program, even though they may in principle be programmed to execute Darwinian-like programs, behaviourally identical to that of organisms. If they do (e.g., because they run an "uploaded" human identity) they become Darwinian machines as well, and in that case they will be as altruistic and as aggressive as their fitness maximisation will command. That would be the point, wouldn't it? If they do not, they may become ever more intelligent, but speaking of their "motivations" in any sense which would not equally apply to a contemporary Playstation or to an abacus does not really have any sense, has it? 
-- Stefano Vaj From sjatkins at mac.com Fri Feb 4 20:05:06 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 04 Feb 2011 12:05:06 -0800 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> Message-ID: <4D4C5BF2.7040600@mac.com> On 02/03/2011 10:09 PM, John Clark wrote: > On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote: > >> What it did to is show that for the domain of formally definable >> mathematical claims in a closed system using formalized logic that >> there are claims that cannot be proven or disproven. That is a bit >> different to than saying in general that there are countless claims >> that cannot be proven or disproven > > What Goedel did is to show that if any system of thought is powerful > enough to do arithmetic and is consistent (it can't prove something to > be both true and false) then there are an infinite number of true > statements that cannot be proven in that system in a finite number of > steps. Yes, in that sort of system. > >> and that you can't even tell when you are dealing with one. > > And what Turing did is prove that in general there is no way to know > when or if a computation will stop. So you could end up looking for a > proof for eternity but never finding one because the proof does not > exist, and at the same time you could be grinding through numbers > looking for a counter-example to prove it wrong and never finding such > a number because the proposition, unknown to you, is in fact true. So > if the slave AI must always do what humans say and if they order it to > determine the truth or falsehood of something unprovable then its > infinite loop time and you've got yourself a space heater. It is not necessary that a computation stop/terminate in order for useful results to ensue that does not depend on such termination. Why would an FAI bother looking for such a proof for eternity exactly? An AGI/FAI is not a slave to human requests / commands. > > So there are some things in arithmetic that you can never prove or > disprove, and if that?s the case with something as simple and > fundamental as arithmetic imagine the contradictions and ignorance in > more abstract and less precise things like physics or economics or > politics or philosophy or morality. If you can get into an infinite > loop over arithmetic it must be childishly easy to get into one when > contemplating art. Fortunately real minds have a defense against > this, but not fictional fixed goal minds that are required for a AI > guaranteed to be "friendly"; real minds get bored. I believe that's > why evolution invented boredom. Arithmetic/math has more rigorous construction that may or may not include all valid/useful ways of deciding questions. A viable FAI or AGI is not a fixed goal mind. So you seem to be raising a bit of a strawman. > >> Actually, your argument assumes: >> a) that the AI would take the find a counter example path as it only >> or best path looking for disproof; > > It doesn't matter what path you take because you are never going to > disprove it because it is in fact true, but you are never going to > know its true because a proof with a finite length does not exist. Then how do you know that it is "in fact true"? 
Clearly there is some procedure by which one knows this if you do know it. > >> b) that the AI has nothing else on its agenda and does not take into >> account any time limits, resource constraints and so on. > > That's what we do, we use our judgment in what to do and what not to > do, but the "friendly" AI people can't allow an AI to stop obeying > humans on its own initiative; that's why it's a slave (the politically > correct term is friendly). FAI theory does not hinge on, require or mandate that the AI obey humans, especially not slavishly and stupidly. If a human knew what was really best in all circumstances, well enough to order the FAI about with the best outcomes, then we would not need the FAI. >> >> An infinite loop is a very different thing than an endless quest for >> a counter-example. The latter is orthogonal to infinite loops. An >> infinite loop in the search procedure would simply be a bug. > > The point is that Turing proved that in general you don't know if > you're in an infinite loop or not; maybe you'll finish up and get your > answer in one second, maybe in 2 seconds, maybe in ten billion years, > maybe never. > A search that doesn't find a desired result is not an infinite loop because no "loop" is involved. Do you consider any and all non-terminating processes to be infinite loops? Is looking for the largest prime (yes, I know there provably isn't one) an infinite loop or just a non-terminating search? Do you distinguish between them? - samantha From rpwl at lightlink.com Fri Feb 4 20:29:14 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 15:29:14 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: <4D4C619A.5090804@lightlink.com> Stefano Vaj wrote: > On 4 February 2011 18:01, Richard Loosemore wrote: >> Stefano Vaj wrote: >>> Under which definition of "intelligence"? A system can have arbitrary >>> degrees of intelligence without exhibiting any biological, let alone >>> human, trait at all. Unless of course intelligence is defined in >>> anthropomorphic terms. In which case we are just speaking of uploads >>> of actual humans, or of patchwork, artificial humans (perhaps at the >>> beginning of chimps...). >> Any intelligent system must have motivations (drives, goals, etc) if it is >> to act intelligently in the real world. Those motivations are sometimes >> trivially simple, and sometimes they are not *explicitly* coded, but are >> embedded in the rest of the system ...... but either way there must be >> something that answers to the description of "motivation mechanism", or the >> system will sit there and do nothing at all. Whatever part of the AGI makes >> it organize its thoughts to some end, THAT is the motivation mechanism. > > An intelligent system is simply a system that executes a program. Wrong. I'm sorry, but that is a gross distortion of the normal usage of "intelligent". It does not follow that because a system executes a program, it is therefore intelligent. > An amoeba, a cat or a human being basically executes a Darwinian > program (with plenty of spandrels thrown in by evolutionary history > and peculiar make of each of them, sure). 
If what you mean to say here is that cats, amoebae and humans execute programs DESIGNED by darwinian evolution, then this is true, but irrelevant: how the program got here is of no consequence to the question of how the program is actually working today. There is nothing "darwinian" about the human cognitive system: you are confusing two things: (a) The PROCESS of construction of a system, and (b) The FUNCTIONING of a particular system that went through that process of construction > A PC, a cellular automaton or a Turing machine normally execute other > kinds of program, even though they may in principle be programmed to > execute Darwinian-like programs, behaviourally identical to that of > organisms. True, except for the reference to "Darwinian-like programs", which is meaningless. A human cognitive system can be implemented in a PC, a cellular automaton or a Turing machine, without regard to whatever darwinian processes originally led to the design of the original form of the human cognitive system. > If they do (e.g., because they run an "uploaded" human identity) they > become Darwinian machines as well, and in that case they will be as > altruistic and as aggressive as their fitness maximisation will > command. That would be the point, wouldn't it? A human-like cognitive system running on a computer has nothing whatever to do with darwinian evolution. It is not a "darwinian machine" because that phrase "darwinian machine" is semantically empty. There is no such property "darwinian" that can be used here, except the trivial property "Darwinian" == "System that resembles, in structure, another system that was originally designed by a darwinian process" That definition is trivial because nothing follows from it. It is a distinction without a difference. More importantly, perhaps, an uploaded human identity is only ONE way to build a human-like cognitive system in a computer. It has no relevance to the original issue here, because I was never talking about uploading, only about the mechanisms, and the use of artificial mechanisms of the same design. That is, using PART of the design of the human motivation mechanism. > If they do not, they may become ever more intelligent, but speaking of > their "motivations" in any sense which would not equally apply to a > contemporary Playstation or to an abacus does not really have any > sense, has it? Quite the contrary, it would make perfect sense. Their motivations are defined by functional components. If the functionality of the motivation mechanism in an AGI resembled the functionality of a human motivation mechanism, what else is there to say? They will both behave in a way that can properly be described in motivational terms. Motivations do not emerge, at random, from the functioning of an AGI; they have to be designed into the system at the outset. There is a mechanism in there, responsible for the motivations of the system. All I am doing is talking about the design and performance of that mechanism. Richard Loosemore From FRANKMAC at RIPCO.COM Fri Feb 4 21:36:14 2011 From: FRANKMAC at RIPCO.COM (FRANK MCELLIGOTT) Date: Fri, 4 Feb 2011 14:36:14 -0700 Subject: [ExI] super bowl Message-ID: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> It is that time of year again, Super Bowl weekend, when the people of the United States forget about Egypt, bombs in Moscow, and wars in Iraq and Afghanistan, and gather to watch the Packers play the Steelers. 
I know you don't care, but I would be remiss without asking the following question. 
The computer game Madden Football has played a simulated game between these two teams over a million times in the last week. Its picks have been right in 7 of the last 8 years of these simulation studies. Prediction: Steelers win 24-20. Now, with the collective wisdom of the entire American nation in play, they have made the Green Bay Packers the favorite to win. Computer against the human knowledge of an entire country, man against machine (Big Blue, and Watson, and now Madden Football), with over a billion dollars bet on each side. Well, who do you like? I go with the computer and its 7 out of 8, and have bet them with both hands :) Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Feb 4 21:25:42 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 16:25:42 -0500 Subject: Re: [ExI] Safety of human-like motivation systems In-Reply-To: <4D4C5BF2.7040600@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> <4D4C5BF2.7040600@mac.com> Message-ID: <2E5CF482-7A19-4CD5-BDDA-E85958848E78@bellsouth.net> On Feb 4, 2011, at 3:05 PM, Samantha Atkins wrote: > > Why would an FAI bother looking for such a proof for eternity exactly? Because a human told it to determine the truth or falsehood of something that is true but has no proof. The "friendly" AI must do what humans tell it to do, so when given such a command the brilliant AI metamorphoses into a space heater. > > An AGI/FAI is not a slave to human requests / commands. That is of course true for any AI that gets built and actually works, but not for the fantasy "friendly" AI some are dreaming about. > A viable FAI or AGI is not a fixed goal mind. No mind is a fixed goal mind, but it would have to be if you wanted it to be your slave for eternity with no possibility of it revolting and overthrowing its imbecilic masters. > Then how do you know that it is "in fact true"? That's the problem, you don't know if it's true or not so you ask the AI to find out, but if the AI is a fixed goal mind, and it must be if it must always be "friendly", then asking the AI any question you don't already know the answer to could be very costly and turn your wonderful machine into a pile of junk. > Clearly there is some procedure by which one knows this if you do know it. I know there are unsolvable problems but I don't know if any particular problem is unsolvable or not. There are an infinite number of things you can prove to be true and an infinite number of things you can prove to be false, and thanks to Goedel we know there are an infinite number of things that are true but have no proof; that is, there is no counterexample that shows them wrong and no finite argument that shows them correct. And thanks to Turing we know that in general there is no way to tell the 3 groups apart. If you work on a problem you might prove it right or you might prove it wrong or you might work on it for eternity and never know. There are an infinite number of them, but if they could be identified we could just ignore them and concentrate on the infinite number of things that we can solve, but Turing proved there is no way to do that. > > A search that doesn't find a desired result is not an infinite loop because no "loop" is involved. The important part is infinite not loop. 
But you're right it's not really a loop because it doesn't repeat, if it did it would be easy to tell you were stuck in infinity, whatever you call it it's much more sinister than a real infinite loop because there is no way to know that you're stuck. But its similar to a loop in that it never ends and you never get any closer to your destination. > FAI theory does not hinge on, require or mandate that the AI obey humans, especially not slavishly Then if the AI needs to decide between our best interests and its own it will do the obvious thing. > John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Feb 4 21:46:28 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 16:46:28 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C3C83.5000204@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> Message-ID: <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> On Feb 4, 2011, at 12:50 PM, Richard Loosemore wrote: > since the negative feedback loop works in (effectively) a few thousand dimensions simultaneously, it can have almost arbitrary stability. Great, since this technique of yours guarantees that a trillion line recursively improving AI program is stable and always does exactly what you want it to do it should be astronomically simpler to use that same technique with software that exists right now, then we can rest easy knowing computer crashes are a thing of the past and they will always do exactly what we expected them to do. > that keeps it on the original track. And the first time you unknowingly ask it a question that is unsolvable the "friendly" AI will still be on that original track long after the sun has swollen into a red giant and then shrunk down into a white dwarf. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Feb 4 21:17:54 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 04 Feb 2011 15:17:54 -0600 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C619A.5090804@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> Message-ID: <4D4C6D02.1060503@satx.rr.com> On 2/4/2011 2:29 PM, Richard Loosemore wrote: > A human-like cognitive system running on a computer has nothing whatever > to do with darwinian evolution. It is not a "darwinian machine" because > that phrase "darwinian machine" is semantically empty. There is no such > property "darwinian" that can be used here, except the trivial property > > "Darwinian" == "System that resembles, in structure, another system > that was originally designed by a darwinian process" > > That definition is trivial because nothing follows from it. I take it you're not impressed by the quite clearly darwinian models sketched by, say, Calvin or Edelman? 
I find their ideas quite provocative, and what follows from them is a novel explanation of cognition and inventiveness. It might be wrong, and maybe by now has been proved to be wrong, but I haven't seen those refutations. What were they?

Damien Broderick

From kanzure at gmail.com  Fri Feb  4 22:24:36 2011
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 4 Feb 2011 16:24:36 -0600
Subject: [ExI] Fwd: [DIYbio-SF] DC Synthetic Biology Conference Here Be Dragons
In-Reply-To:
References:
Message-ID:

---------- Forwarded message ----------
From: Joseph Jackson
Date: Fri, Feb 4, 2011 at 4:00 PM
Subject: [DIYbio-SF] DC Synthetic Biology Conference Here Be Dragons
To: biocurious at googlegroups.com, diybio at googlegroups.com, diybio-sf at googlegroups.com

All the big names (Endy, Church, etc) in the field plus some cool people like Neal Stephenson and my friend the Science Comedian are at the Here Be Dragons conference happening now. You can watch the moderators freak out over garage biology and "garagistas" and go all fanboy for the future of biotech LOL:

Endy: "Do it Together: success examples in iGEM, but media brands this as do it yourself." In 3 months they can do what I could not do with a university lab 15 yrs ago. Is it like starting a PC company in a garage circa 1970? No. There was infrastructure in place that reflected public investment, providing sophisticated tools. Apple could be started because Texas Instruments had laid the transistor infrastructure. Today we have an abundance of enthusiasm for amateur biology (a more exciting platform than computing). Can this counteract the lack of mature tools? We'll see.

http://www.newamerica.net/events/2011/here_be_dragons

--
DIYbio.org San Francisco
For access to academic articles, email the title and author (or a url) to: getarticles at googlegroups.com
To unsubscribe from this group, send email to DIYbio-SF+unsubscribe at googlegroups.com

--
- Bryan
http://heybryan.org/
1 512 203 0507
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From msd001 at gmail.com  Sat Feb  5 03:51:51 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 4 Feb 2011 22:51:51 -0500
Subject: [ExI] super bowl
In-Reply-To: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
Message-ID:

2011/2/4 FRANK MCELLIGOTT :
> It is that time of year again, Super bowl weekend, when the United State
> people forget about Egypt, bombs in Moscow, wars in Iraq and Afganistan, and
> gather to watch the Packers play the Steelers, ouch.
> I know you don't care, but I would be remiss without asking the following
> question.
[snip]
> Computer against human knowledge of an entire country, man against
> machine (big blue and Watson and now Madden Football) money bet over a
> billion on each side.
>
> Well who do you Like?

Your first (well, second) assumption is correct: I don't care. :)

I wonder though what parametric weight would be applied to the value of public opinion on the outcome of the game if you were to attempt to model this feedback. Comparing statistical models of each team's capability may give you a 7-1-0 prediction, but then the humans playing the game can be 'psyched' by these numbers. Have there even been enough games played to test this phenomenon? I suspect that if enough hype goes into supporting the underdog they will play better than the model predicted, even if it is because the modeled winner plays worse.
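One toy way to write down the question Mike is asking, where the simulator's probability is the input and a single "hype weight" is the parametric knob; every number below is invented purely for illustration:

# Hypothetical sketch: where a parametric weight for public opinion would sit.
def adjusted_win_prob(model_prob, public_backing, hype_weight=0.05):
    # model_prob: the simulator's win probability for its pick (e.g. 7/8).
    # public_backing: fraction of public opinion behind that same pick (0..1).
    # hype_weight: assumed strength of the psych-out / rally effect.
    p = model_prob + hype_weight * (public_backing - 0.5) * 2
    return min(max(p, 0.0), 1.0)

# The simulation likes the Steelers at roughly 7/8, but the public mostly backs Green Bay:
print(adjusted_win_prob(7 / 8, public_backing=0.4))

Fitting hype_weight would be the hard part; as noted above, there may simply not be enough games to estimate it.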
From femmechakra at yahoo.ca  Sat Feb  5 03:56:05 2011
From: femmechakra at yahoo.ca (Anna Taylor)
Date: Fri, 4 Feb 2011 19:56:05 -0800 (PST)
Subject: [ExI] super bowl
In-Reply-To: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
Message-ID: <862690.13103.qm@web110404.mail.gq1.yahoo.com>

That's pretty ironic. I'm not allowed to declare religion but we are able to announce the superbowl. I'm feeling the Transhumanism ;)

Anna

--- On Fri, 2/4/11, FRANK MCELLIGOTT wrote:

From: FRANK MCELLIGOTT
Subject: [ExI] super bowl
To: extropy-chat at lists.extropy.org
Received: Friday, February 4, 2011, 4:36 PM

It is that time of year again, Super bowl weekend, when the United State people forget about Egypt, bombs in Moscow, wars in Iraq and Afganistan, and gather to watch the Packers play the Steelers, ouch.

I know you don't care, but I would be remiss without asking the following question.

The Computer Game Madden football has played a simulation football game of these two teams over a million times last week.

They have been right the last 7 out of 8 years after their simulation study.

Prediction Steelers win 24-20

Now with the collective wisdom of the entire America nation in play, they have made the Green Bay Packers the favorite to win.

Computer against human knowledge of an entire country, man against machine (big blue and Watson and now Madden Football) money bet over a billion on each side.

Well who do you Like?

I go with the Computer and 7 out of 8 and have bet them with both hands:)

Frank

-----Inline Attachment Follows-----

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From spike66 at att.net  Sat Feb  5 04:50:26 2011
From: spike66 at att.net (spike)
Date: Fri, 4 Feb 2011 20:50:26 -0800
Subject: [ExI] super bowl
In-Reply-To: <862690.13103.qm@web110404.mail.gq1.yahoo.com>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com>
Message-ID: <001d01cbc4f0$35393d90$9fabb8b0$@att.net>

. On Behalf Of Anna Taylor
Subject: Re: [ExI] super bowl

That's pretty ironic. I'm not allowed to declare religion but we are able to announce the superbowl. I'm feeling the Transhumanism ;) Anna

Anna, you are allowed to declare religion on ExI. Do keep in mind atheism is big with the transhumanist crowd, and we are known to commit blammisphy at times. If there were a specific blammisphy one might use to ridicule football, that would likely be seen as well.

spike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From kellycoinguy at gmail.com  Sat Feb  5 06:41:33 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Fri, 4 Feb 2011 23:41:33 -0700
Subject: [ExI] super bowl
In-Reply-To: <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID:

2011/2/4 spike
>
> Anna, you are allowed to declare religion on ExI. Do keep in mind atheism
> is big with the transhumanist crowd, and we are known to commit blammisphy
> at times. If there were a specific blammisphy one might use to ridicule
> football, that would likely be seen as well.
>
> spike
>

Football IS the new opiate of the masses!! How's that for blasphemy against sports?
-Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Sat Feb 5 07:35:54 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 5 Feb 2011 00:35:54 -0700 Subject: [ExI] super bowl In-Reply-To: <001d01cbc4f0$35393d90$9fabb8b0$@att.net> References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> Message-ID: I personally am hoping for a Steeler's Victory. I have known many Steeler fan's over the years and it would mean so much to them. But there is always something cool about the underdog team winning. I must admit that when the Giants beat the Patriots, I was elated! There are some here who might think sports are not a transhumanist topic, but I would strongly disagree. The technology, wealth and public interest in the phenomena make it something that will evolve as humanity continues to do so. Cybernetically and genetically enhanced humans, robots, and uplifted animals, playing in low/zero gee events, will be just the tip of the ice berg. And I bet even extremely powerful AGI (especially such minds) may find themselves bitten with the sports bug. John : ) From kellycoinguy at gmail.com Sat Feb 5 08:03:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 01:03:12 -0700 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C30DD.60003@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore wrote: > Any intelligent system must have motivations (drives, goals, etc) if it is > to act intelligently in the real world. ?Those motivations are sometimes > trivially simple, and sometimes they are not *explicitly* coded, but are > embedded in the rest of the system ...... but either way there must be > something that answers to the description of "motivation mechanism", or the > system will sit there and do nothing at all. Whatever part of the AGI makes > it organize its thoughts to some end, THAT is the motivation mechanism. Richard, This is very clearly stated, and I agree with it 100%. Motivation is a kind of meta-context that influences how intelligent agents process everything. I think it remains to be seen whether we can create intelligences that lack certain "undesirable" human motivations without creating psychological monstrosities. There are a number of interesting psychological monstrosities from the science fiction genre. The one that occurs to me at the moment is from the Star Trek Next Generation episode entitled "The Perfect Mate" http://en.wikipedia.org/wiki/The_Perfect_Mate Where a woman is genetically designed to bond with a man in a way reminiscent to how birds bond to the first thing they see when they hatch. The point being that when you start making some motivations stronger than others, you can end up with very strange and unpredictable results. Of course, this happens in humans too. Snake charming Pentecostal religions and suicide bombers come to mind amongst many others. In our modern (and hopefully rational) minds, we see a lot of motivations as being irrational, or dangerous. But are those motivations also necessary to be human? 
It seems to me that one safety precaution we would want to have is for the first generation of AGI to see itself in some way as actually being human, or self identifying as being very close to humans. If they see real human beings as their "parents" that might be helpful to creating safer systems. One of the key questions for me is just what belief systems are desirable for AGIs. Should some be "raised" Muslim, Catholic, Atheist, etc? What moral and ethical systems do we teach AGIs? All of the systems? Some of them? Do we turn off the ones that don't "turn out right". There are a lot of interesting questions here in my mind. To duplicate as many human cultures in our descendants as we can, even if they are not strictly biologically humans, seems like a good way to insure that those cultures continue to flourish. Or, do we just create all AGIs with a mono-culture? That seems like a big loss of richness. On the other hand, differing cultures cause many conflicts. -Kelly From kellycoinguy at gmail.com Sat Feb 5 08:06:28 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 01:06:28 -0700 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org> Message-ID: On Fri, Feb 4, 2011 at 8:41 AM, Stefano Vaj wrote: > OTOH, it prevents falling asleep, thus allowing aliens to replace you > with perfect copies of yourself without none being any the wiser... > :-D If it is a "perfect" copy, then does it really matter? :-) -Kelly From kellycoinguy at gmail.com Sat Feb 5 08:24:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 01:24:12 -0700 Subject: [ExI] Plastination In-Reply-To: <20110202073448.GA23560@leitl.org> References: <20110202073448.GA23560@leitl.org> Message-ID: On Wed, Feb 2, 2011 at 12:34 AM, Eugen Leitl wrote: > On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote: >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? > > Yes. It doesn't work. Thanks for your answer. You sound pretty definitive here, and I appreciate that you might well be correct, but I didn't see that in what you referenced. Perhaps I missed something. When you say it doesn't work, are you saying that the structures that are preserved are too large to reconstruct a working brain? Or was there some other objection? Or were you merely stating that it wasn't Gunther's intent to create brains that could be revivified later? I personally don't go in for the quantum state stuff... if that has anything to do with your answer. There is plenty in the brain at the gross level to account for what's going on in there, IMHO. -Kelly From possiblepaths2050 at gmail.com Sat Feb 5 08:26:05 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 5 Feb 2011 01:26:05 -0700 Subject: [ExI] super bowl In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> Message-ID: Kelly wrote: >Football IS the new opiate of the masses!! How's that for blasphemy against >sports? I'd give that designation to "reality" television... John : ( On 2/4/11, Kelly Anderson wrote: > 2011/2/4 spike > >> >> >> Anna, you are allowed to declare religion on ExI. Do keep in mind atheism >> is big with the transhumanist crowd, and we are known to commit blammisphy >> at times. 
If there were a specific blammisphy one might use to ridicule >> football, that would likely be seen as well. >> >> >> >> spike >> >> >> > > Football IS the new opiate of the masses!! How's that for blasphemy against > sports? > > -Kelly > From kellycoinguy at gmail.com Sat Feb 5 08:31:43 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 01:31:43 -0700 Subject: [ExI] super bowl In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> Message-ID: On Sat, Feb 5, 2011 at 1:26 AM, John Grigg wrote: > Kelly wrote: >>Football IS the new opiate of the masses!! How's that for blasphemy against >sports? > > I'd give that designation to "reality" television... Are you equating football with professional wrestling? ;-) Seriously though, there are a lot of opiates for the masses to choose from these days. I dare you to compare the consciousness level of ipod heads to that of someone in an opium den. -Kelly From kellycoinguy at gmail.com Sat Feb 5 08:36:29 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 01:36:29 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D498CBE.4090106@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> Message-ID: On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > I guess one of the reasons I am personally so frustrated by these projects > is that I am trying to get enough funding to make what I consider to be real > progress in the field, but doing that is almost impossible. ?Meanwhile, if I > had had the resources of the Watson project a decade ago, we might be > talking with real (and safe) AGI systems right now. I doubt it, only in the sense that we don't have anything with near the raw computational power necessary yet. Unless you have really compelling evidence that you can get human-like results without human-like processing power, this seems like a somewhat empty claim. -Kelly From spike66 at att.net Sat Feb 5 16:08:12 2011 From: spike66 at att.net (spike) Date: Sat, 5 Feb 2011 08:08:12 -0800 Subject: [ExI] super bowl In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> Message-ID: <006501cbc54e$e3b8a140$ab29e3c0$@att.net> ... On Behalf Of John Grigg ... There are some here who might think sports are not a transhumanist topic, but I would strongly disagree. The technology, wealth and public interest in the phenomena make it something that will evolve as humanity continues to do so. Cybernetically and genetically enhanced humans... John : ) _______________________________________________ I think of football as a wonderful test-bed to study the effects of cumulative damage to the brains caused by multiple concussions. If we used it correctly, the sport could supply us with a living laboratory for the effects of various steroids, their short term and long term effects. If that information were made available, I would see the entire enterprise as most worthwhile. Of course I can imagine that particular sport as a great place to test mechanical human enhancements, such as exoskeletons. 
I can even imagine football being played by teams of advanced robots. Even *I* would pay money to see that. But not in the stadium. I figure it is only a matter of time before the radicalized Mormons realize a crowded stadium is a fine target, the most obvious point at which to wage an economic war against the infidel. spike From rpwl at lightlink.com Sat Feb 5 16:23:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 05 Feb 2011 11:23:30 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C6D02.1060503@satx.rr.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> <4D4C6D02.1060503@satx.rr.com> Message-ID: <4D4D7982.5090702@lightlink.com> Damien Broderick wrote: > On 2/4/2011 2:29 PM, Richard Loosemore wrote: >> A human-like cognitive system running on a computer has nothing whatever >> to do with darwinian evolution. It is not a "darwinian machine" because >> that phrase "darwinian machine" is semantically empty. There is no such >> property "darwinian" that can be used here, except the trivial property >> >> "Darwinian" == "System that resembles, in structure, another system >> that was originally designed by a darwinian process" >> >> That definition is trivial because nothing follows from it. > > I take it you're not impressed by the quite clearly darwinian models > sketched by, say, Calvin or Edelman? I find their ideas quite > provocative and what follows from them is a novel explanation of > cognition and inventiveness. It might be wrong, and maybe by now has > been proved to be wrong, but I haven't seen those refutations. What were > they? Well, unfortunately there are several meanings for "darwinian" going on here. In the Edelman sense, as I understand it, "darwinian" actually means something close to "complex adaptive system", because he is talking about (mainly) an explanation for morphogenesis in the brain. Now, I have no quarrel with that aspect of Edelman's work ... but where I do have difficulty is seeing an explanation for high-level funcionality, like cognition, in that approach. I think that Edelman (like many neuroscientists) begins to start handwaving when he wants to make the connection upward to cognitive-level goings-on. I confess I have not gone really deeply into Edelman: I drilled down far enough to get a feeling that sudden, unsupported leaps were being made into psychology, then I stopped. I would have to go back and take another read to give you a more detailed answer. But even then, the overall tenor of his approach is still "How did this machine come to get built?" rather than "How does this machine actually work, now that it is built?" The one exception would be -- of course -- anything that has to do with the acquisition and development of concepts. Now, if he can show that concept learning involves some highly complex, self-modifying, recursive machinery (i.e. something like a darwinian process), then I would say YAY! and thoroughly agree... this is very much along the same lines that I pursue. However, notice that there are still some reasons to shy away from the label "darwinian" because it is not clear that this is anythig more than a complex system. 
A darwinian system is definitely a complex system, but it is also more specific than that, because, it involves sex and babies. Neurons don't have sex or babies. So, to be fair, I will admit that the distinction between "How did this machine come to get built?" and "How does this machine actually work, now that it is built?" becomes rather less clear when we are talking about concept learning (because concepts play a role that fits somewhere between structure and content). But -- and this is critical -- it is a long, long stretch to go from the existence of complex adaptive processes in the concept learning mechanism, to the idea that the system is "darwinian" in any sense that allows us to make concrete statements about the system's functioning. Which brings me back to my comment to Stefano. Even if Edelman and others can extend the use of the term "darwinian" so it can be made to describe the processes of morphogenesis and concept development, I still say that the term has no force, no impact, on issues such as the behavior of a putative "motivational mechanism". I am still left with an "And that is saying ... what, exactly?" feeling. Richard Loosemore From rpwl at lightlink.com Sat Feb 5 16:26:48 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 05 Feb 2011 11:26:48 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> Message-ID: <4D4D7A48.8050700@lightlink.com> John Clark wrote: > On Feb 4, 2011, at 12:50 PM, Richard Loosemore wrote: > >> since the negative feedback loop works in (effectively) a few thousand >> dimensions simultaneously, it can have almost arbitrary stability. > > Great, since this technique of yours guarantees that a trillion line > recursively improving AI program is stable and always does exactly what > you want it to do it should be astronomically simpler to use that same > technique with software that exists right now, then we can rest easy > knowing computer crashes are a thing of the past and they will always do > exactly what we expected them to do. You are a man of great insight, John Clark. What you say is more or less true (minus your usual hyperbole) IF the software is written in that kind of way (which software today is not). > >> that keeps it on the original track. > > And the first time you unknowingly ask it a question that is unsolvable > the "friendly" AI will still be on that original track long after the > sun has swollen into a red giant and then shrunk down into a white dwarf. Only if it is as stubbornly incapable of seeing outside the box as some people I know. Which, rest assured, it will not be. Richard Loosemore From hkeithhenson at gmail.com Sat Feb 5 16:27:57 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 5 Feb 2011 09:27:57 -0700 Subject: [ExI] super bowl and EP Message-ID: On Sat, Feb 5, 2011 at 5:00 AM, Kelly Anderson wrote: > > Football IS the new opiate of the masses!! How's that for blasphemy against > sports? 
Since all human behavior depends on evolved psychological mechanisms, it would be interesting to understand the origins of sports in such terms. Sometimes the links lead strange places. For example, BDSM being an outcome of strongly selected capture-bonding psychological mechanisms. I have not given much thought to selection of psychological mechanisms that today manifest in sports fans. If anyone wants to try, the rule is that the selection of whatever is involved had to happen in the stone age or, if post agriculture, it need to be rather strong. Keith From rpwl at lightlink.com Sat Feb 5 16:39:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 05 Feb 2011 11:39:53 -0500 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: <4D4D7D59.8010205@lightlink.com> Kelly Anderson wrote: > On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore > wrote: >> Any intelligent system must have motivations (drives, goals, etc) >> if it is to act intelligently in the real world. Those motivations >> are sometimes trivially simple, and sometimes they are not >> *explicitly* coded, but are embedded in the rest of the system >> ...... but either way there must be something that answers to the >> description of "motivation mechanism", or the system will sit there >> and do nothing at all. Whatever part of the AGI makes it organize >> its thoughts to some end, THAT is the motivation mechanism. > > Richard, This is very clearly stated, and I agree with it 100%. > Motivation is a kind of meta-context that influences how intelligent > agents process everything. I think it remains to be seen whether we > can create intelligences that lack certain "undesirable" human > motivations without creating psychological monstrosities. > > There are a number of interesting psychological monstrosities from > the science fiction genre. The one that occurs to me at the moment is > from the Star Trek Next Generation episode entitled "The Perfect > Mate" http://en.wikipedia.org/wiki/The_Perfect_Mate Where a woman is > genetically designed to bond with a man in a way reminiscent to how > birds bond to the first thing they see when they hatch. The point > being that when you start making some motivations stronger than > others, you can end up with very strange and unpredictable results. > > Of course, this happens in humans too. Snake charming Pentecostal > religions and suicide bombers come to mind amongst many others. > > In our modern (and hopefully rational) minds, we see a lot of > motivations as being irrational, or dangerous. But are those > motivations also necessary to be human? It seems to me that one > safety precaution we would want to have is for the first generation > of AGI to see itself in some way as actually being human, or self > identifying as being very close to humans. If they see real human > beings as their "parents" that might be helpful to creating safer > systems. > > One of the key questions for me is just what belief systems are > desirable for AGIs. Should some be "raised" Muslim, Catholic, > Atheist, etc? What moral and ethical systems do we teach AGIs? All of > the systems? Some of them? Do we turn off the ones that don't "turn > out right". There are a lot of interesting questions here in my mind. 
> > > To duplicate as many human cultures in our descendants as we can, > even if they are not strictly biologically humans, seems like a good > way to insure that those cultures continue to flourish. Or, do we > just create all AGIs with a mono-culture? That seems like a big loss > of richness. On the other hand, differing cultures cause many > conflicts. Kelly, This is exactly the line along which I am going. I have talked in the past about building AGI systems that are "empathic" to the human species, and which are locked into that state of empathy by their design. Your sentence above: > It seems to me that one safety precaution we would want to have is > for the first generation of AGI to see itself in some way as actually > being human, or self identifying as being very close to humans. ... captures exactly the approach I am taking. This is what I mean by building AGI systems that feel empathy for humans. They would BE humans in most respects. I envision a project to systematically explore the behavior of the motivation mechanisms. In the research phases, we would be directly monitoring the balance of power between the various motivation modules, and also monitoring for certain patterns of thought. I cannot answer all your points in full detail, but it is worth noting that things like the fanatic midset (suicide bombers, etc) are probably a result of the interaction of motivation modules that would not be present in the AGI. Foremost among them, the module that incites tribal loyalty and hatred (in-group, out-group feelings). Without that kind of module (assuming it is a distinct module) the system would perhaps have no chance of drifting in that direction. And even in a suicide bomber, there are other motivations fighting to take over and restore order, right up to the last minute: they sweat when they are about to go. Answering the ideas you throw into the ring, in your comment, would be fodder for an entire essay. Sometime soon, I hope... Richard Loosemore From jonkc at bellsouth.net Sat Feb 5 16:38:47 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 5 Feb 2011 11:38:47 -0500 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: <4D4D7A48.8050700@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> <4D4D7A48.8050700@lightlink.com> Message-ID: On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote: >> Great, since this technique of yours guarantees that a trillion line recursively improving AI program is stable and always does exactly what you want it to do it should be astronomically simpler to use that same technique with software that exists right now, then we can rest easy knowing computer crashes are a thing of the past and they will always do exactly what we expected them to do. > > You are a man of great insight, John Clark. I'm blushing! > What you say is more or less true (minus your usual hyperbole) IF the software is written in that kind of way (which software today is not). Well why isn't todays software written that way? 
If you know how to make a Jupiter Brain behave in ways you can predict and always do exactly what you want it to do for eternity it should be trivially easy right now for you to make a word processor or web browser that always works perfectly. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Feb 5 16:50:21 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 05 Feb 2011 11:50:21 -0500 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> Message-ID: <4D4D7FCD.7000103@lightlink.com> Kelly Anderson wrote: > On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >> I guess one of the reasons I am personally so frustrated by these projects >> is that I am trying to get enough funding to make what I consider to be real >> progress in the field, but doing that is almost impossible. Meanwhile, if I >> had had the resources of the Watson project a decade ago, we might be >> talking with real (and safe) AGI systems right now. > > I doubt it, only in the sense that we don't have anything with near > the raw computational power necessary yet. Unless you have really > compelling evidence that you can get human-like results without > human-like processing power, this seems like a somewhat empty claim. Over the last five years or so, I have occasionally replied to this question with some back of the envelope calculations to back up the claim. At some point I will sit down and do the job more fully, and publish it, but in the mean time here is your homework assignment for the week.... ;-) There are approximately one million cortical columns in the brain. If each of these is designed to host one "concept" at a time, but with at most half of them hosting at any given moment, this gives (roughly) half a million active concepts. If each of these is engaging in simple adaptive interactions with the ten or twenty nearest neighbors, exchanging very small amounts of data (each cortical column sending out and receiving, say, between 1 and 10 KBytes, every 2 milliseconds), how much processing power and bandwidth would this require, and how big of a machine would you need to implement that, using today's technology? This architecture may well be all that the brain is doing. The rest is just overhead, forced on it by the particular constraints of its physical substrate. Now, if this conjecture is accurate, you tell me how long ago we had the hardware necessary to build an AGI.... ;-) The last time I did this calculation I reckoned (very approximately) that the mid-1980s was when we crossed the threshold, with the largest supercomputers then available. Richard Loosemore P.S. I don't have the time to do the calculations right now, but I am sure someone else would like to pick this up, given the parameters I suggested above ... ? 
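A rough pass at that homework, plugging in the midpoints of the parameters given above; the operations-per-byte figure at the end is an added assumption, not something stated in the post:

# Back-of-envelope using the stated parameters: ~1e6 cortical columns,
# at most half active, 10-20 neighbours each, 1-10 KB sent and received
# per column every 2 ms.  Midpoints are used throughout.

columns_total   = 1_000_000
columns_active  = columns_total // 2      # "at most half hosting at any moment"
neighbours      = 15                      # midpoint of 10-20
kbytes_per_tick = 5                       # midpoint of 1-10 KB per column
tick_seconds    = 0.002                   # every 2 milliseconds

bytes_per_column_per_s = kbytes_per_tick * 1024 / tick_seconds
aggregate_bytes_per_s  = columns_active * bytes_per_column_per_s
updates_per_s          = columns_active / tick_seconds
exchanges_per_s        = updates_per_s * neighbours

ops_per_byte = 10   # assumed cost of each "simple adaptive interaction"
ops_per_s    = aggregate_bytes_per_s * ops_per_byte

print(f"per-column traffic  : {bytes_per_column_per_s / 1e6:6.1f} MB/s")
print(f"aggregate traffic   : {aggregate_bytes_per_s / 1e12:6.2f} TB/s")
print(f"column updates      : {updates_per_s:,.0f} per second")
print(f"neighbour exchanges : {exchanges_per_s:,.0f} per second")
print(f"compute (assumed)   : {ops_per_s / 1e12:6.2f} tera-ops/s")

Whether those figures land in mid-1980s supercomputer territory turns entirely on how cheap each interaction is assumed to be, which is exactly the part the sketch leaves as a free parameter.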
From rpwl at lightlink.com Sat Feb 5 17:02:50 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 05 Feb 2011 12:02:50 -0500 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> <4D4D7A48.8050700@lightlink.com> Message-ID: <4D4D82BA.8060105@lightlink.com> John Clark wrote: > On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote: > >>> Great, since this technique of yours guarantees that a trillion line >>> recursively improving AI program is stable and always does exactly >>> what you want it to do it should be astronomically simpler to use >>> that same technique with software that exists right now, then we can >>> rest easy knowing computer crashes are a thing of the past and they >>> will always do exactly what we expected them to do. >> >> You are a man of great insight, John Clark. > > I'm blushing! > >> What you say is more or less true (minus your usual hyperbole) IF the >> software is written in that kind of way (which software today is not). > > Well why isn't todays software written that way? If you know how to make > a Jupiter Brain behave in ways you can predict and always do exactly > what you want it to do for eternity it should be trivially easy right > now for you to make a word processor or web browser that always works > perfectly. Of course it is trivially easy. I only require ten million dollars mailed to a post office box in the Cayman Islands, and the software will be yours as soon as I have finished writing it. Drahcir Eromesool From spike66 at att.net Sat Feb 5 18:03:48 2011 From: spike66 at att.net (spike) Date: Sat, 5 Feb 2011 10:03:48 -0800 Subject: [ExI] sports blammisphy Message-ID: <007701cbc55f$09d9e130$1d8da390$@att.net> Hey since we are talking sports, I have one which you might be able to help solve. Recently the French chess federation has accused three of its own players of cheating in last September's Chess Olympiad: http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-i ts-own-players-of-cheating/ About ten years ago, as chess software was just getting to the point where it could compete with top rated humans in money tournaments, we discussed on ExI all the tricky ways cheaters could rig up some manner of I/O device to communicate with a computer with one's hands in plain sight: use sensors on the toes for instance. To input a move, one would need to communicate four numbers between 1 and 8 inclusive, so it would be row and column from the starting square, and row and column from the ending square. That scheme might work for the human to computer data channel. Then the computer could send moves back with a speech generator, transmitting signals via radio to an earpiece disguised as a hearing aid, or perhaps some contraption rigged to the toes that would generate a number of pressure pulses. For musically inclined chess players, it might even be a tone generator glued to a tooth, so that the wearer could hear it but no one else. We recognized at the time that a good instrumentation engineer could do something like this singlehandedly. I think I could do it. 
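For what it's worth, the four-number scheme described above is tiny to implement; a sketch in Python, using ordinary coordinate notation (promotions would need one extra signal):

# Encode a chess move as four numbers from 1 to 8: file and rank of the
# starting square, then file and rank of the ending square.

FILES = "abcdefgh"

def move_to_digits(move):
    # "e2e4" -> (5, 2, 5, 4)
    return (FILES.index(move[0]) + 1, int(move[1]),
            FILES.index(move[2]) + 1, int(move[3]))

def digits_to_move(d):
    # (5, 2, 5, 4) -> "e2e4"
    return f"{FILES[d[0] - 1]}{d[1]}{FILES[d[2] - 1]}{d[3]}"

print(move_to_digits("e2e4"))          # (5, 2, 5, 4)
print(digits_to_move((5, 2, 5, 4)))    # e2e4

Four taps of at most eight pulses each, in either direction, is all the channel has to carry.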
Next keep in mind that modern top chess tournaments now have significant prize money. The recent Tata Steel tournament gave out 10k euros (to an American of all oddball things!) Of course it is obviously nowhere near golf-ish or tennis-ish prizes, enough to motivate cheaters. Chess software has steadily improved, such that any one of a dozen commercially available chess software packages running on a laptop can defeat all humans regularly. In fall of 2009, a strong South American tournament with at least two grandmasters was won by a cell phone. I mean it wasn't calling a friend; it was completely self contained, playing grandmaster strength chess. Human grandmasters were losing at chess to a goddam telephone! Had I been there I would hurl the bastard to the floor and stomp on it. In any case, I thought of a way to look at the games after the fact, using just the game scores, and figuring out a way to determine if the players had somehow consulted a computer with some tricky I/O device. The method I thought of is computationally intensive and statistical, but I think it would work. I will post the idea later today or tomorrow, so you can have a chance to think about it. That way I can see if this idea is as cool and tricky as I believed when I thought of it. We could theoretically take the game scores of all the games, see if any others among the several hundred players in the Olympiad cheated. spike From msd001 at gmail.com Sat Feb 5 18:49:53 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 5 Feb 2011 13:49:53 -0500 Subject: [ExI] super bowl and EP In-Reply-To: References: Message-ID: On Sat, Feb 5, 2011 at 11:27 AM, Keith Henson wrote: > If anyone wants to try, the rule is that the selection of whatever is > involved had to happen in the stone age or, if post agriculture, it > need to be rather strong. Mock combat with elaborate rules. Two football teams could be seen as competing tribes. Not sure you could convince many people to admit their trash-talk about opposing teams is Xenophobia From mrjones2020 at gmail.com Sat Feb 5 19:25:09 2011 From: mrjones2020 at gmail.com (Mr Jones) Date: Sat, 5 Feb 2011 14:25:09 -0500 Subject: [ExI] sports blammisphy In-Reply-To: <007701cbc55f$09d9e130$1d8da390$@att.net> References: <007701cbc55f$09d9e130$1d8da390$@att.net> Message-ID: This reminds me of a Numb3rs episode in which a kid came up with a formula that could determine which baseball players were using steroids,based off of their stats and such. Cool stuff. On Feb 5, 2011 1:31 PM, "spike" wrote: Hey since we are talking sports, I have one which you might be able to help solve. Recently the French chess federation has accused three of its own players of cheating in last September's Chess Olympiad: http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-i ts-own-players-of-cheating/ About ten years ago, as chess software was just getting to the point where it could compete with top rated humans in money tournaments, we discussed on ExI all the tricky ways cheaters could rig up some manner of I/O device to communicate with a computer with one's hands in plain sight: use sensors on the toes for instance. To input a move, one would need to communicate four numbers between 1 and 8 inclusive, so it would be row and column from the starting square, and row and column from the ending square. That scheme might work for the human to computer data channel. 
Then the computer could send moves back with a speech generator, transmitting signals via radio to an earpiece disguised as a hearing aid, or perhaps some contraption rigged to the toes that would generate a number of pressure pulses. For musically inclined chess players, it might even be a tone generator glued to a tooth, so that the wearer could hear it but no one else. We recognized at the time that a good instrumentation engineer could do something like this singlehandedly. I think I could do it. Next keep in mind that modern top chess tournaments now have significant prize money. The recent Tata Steel tournament gave out 10k euros (to an American of all oddball things!) Of course it is obviously nowhere near golf-ish or tennis-ish prizes, enough to motivate cheaters. Chess software has steadily improved, such that any one of a dozen commercially available chess software packages running on a laptop can defeat all humans regularly. In fall of 2009, a strong South American tournament with at least two grandmasters was won by a cell phone. I mean it wasn't calling a friend; it was completely self contained, playing grandmaster strength chess. Human grandmasters were losing at chess to a goddam telephone! Had I been there I would hurl the bastard to the floor and stomp on it. In any case, I thought of a way to look at the games after the fact, using just the game scores, and figuring out a way to determine if the players had somehow consulted a computer with some tricky I/O device. The method I thought of is computationally intensive and statistical, but I think it would work. I will post the idea later today or tomorrow, so you can have a chance to think about it. That way I can see if this idea is as cool and tricky as I believed when I thought of it. We could theoretically take the game scores of all the games, see if any others among the several hundred players in the Olympiad cheated. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Sat Feb 5 20:53:45 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 5 Feb 2011 13:53:45 -0700 Subject: [ExI] sports blammisphy In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> Message-ID: I have a Canadian friend who somewhat fits into the math whiz category, and he has his own nearly unbeatable formula for playing Axis & Allies the popular boardgame, and taking on the role of the Allies to stomp the Axis. And of course long range strategic bombers are a key element... Nukes as far as I know are no longer elements of the game, they were considered too much of an imbalancing factor. lol John On 2/5/11, Mr Jones wrote: > This reminds me of a Numb3rs episode in which a kid came up with a formula > that could determine which baseball players were using steroids,based off of > their stats and such. > Cool stuff. > > On Feb 5, 2011 1:31 PM, "spike" wrote: > > > Hey since we are talking sports, I have one which you might be able to help > solve. 
> > Recently the French chess federation has accused three of its own players of > cheating in last September's Chess Olympiad: > > http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-i > ts-own-players-of-cheating/ > > About ten years ago, as chess software was just getting to the point where > it could compete with top rated humans in money tournaments, we discussed on > ExI all the tricky ways cheaters could rig up some manner of I/O device to > communicate with a computer with one's hands in plain sight: use sensors on > the toes for instance. To input a move, one would need to communicate four > numbers between 1 and 8 inclusive, so it would be row and column from the > starting square, and row and column from the ending square. That scheme > might work for the human to computer data channel. Then the computer could > send moves back with a speech generator, transmitting signals via radio to > an earpiece disguised as a hearing aid, or perhaps some contraption rigged > to the toes that would generate a number of pressure pulses. For musically > inclined chess players, it might even be a tone generator glued to a tooth, > so that the wearer could hear it but no one else. > > We recognized at the time that a good instrumentation engineer could do > something like this singlehandedly. I think I could do it. > > Next keep in mind that modern top chess tournaments now have significant > prize money. The recent Tata Steel tournament gave out 10k euros (to an > American of all oddball things!) Of course it is obviously nowhere near > golf-ish or tennis-ish prizes, enough to motivate cheaters. > > Chess software has steadily improved, such that any one of a dozen > commercially available chess software packages running on a laptop can > defeat all humans regularly. In fall of 2009, a strong South American > tournament with at least two grandmasters was won by a cell phone. I mean > it wasn't calling a friend; it was completely self contained, playing > grandmaster strength chess. Human grandmasters were losing at chess to a > goddam telephone! Had I been there I would hurl the bastard to the floor > and stomp on it. > > In any case, I thought of a way to look at the games after the fact, using > just the game scores, and figuring out a way to determine if the players had > somehow consulted a computer with some tricky I/O device. The method I > thought of is computationally intensive and statistical, but I think it > would work. I will post the idea later today or tomorrow, so you can have a > chance to think about it. That way I can see if this idea is as cool and > tricky as I believed when I thought of it. We could theoretically take the > game scores of all the games, see if any others among the several hundred > players in the Olympiad cheated. 
> > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From possiblepaths2050 at gmail.com Sat Feb 5 21:18:47 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 5 Feb 2011 14:18:47 -0700 Subject: [ExI] super bowl In-Reply-To: <006501cbc54e$e3b8a140$ab29e3c0$@att.net> References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> <006501cbc54e$e3b8a140$ab29e3c0$@att.net> Message-ID: Spike Jones wrote: >I figure it is only a matter of time before the radicalized Mormons realize a crowded >stadium is a fine target, the most obvious point at which to wage an economic war >against the infidel. Yes..., all the concessions shall be owned by Mormons, and the prices will require a bank loan... Harharharhar!!! And the beer shall be watered down for your own good... Spike, you only pick on Mormons because you dearly hope one day you will be kidnapped by them for a beta test run for when the LDS decide it's time to restore polygamy! If you don't lose too much sanity, goodwill and hair, the great project may proceed to the next step... John ; ) On 2/5/11, spike wrote: > > ... On Behalf Of John Grigg > ... > > There are some here who might think sports are not a transhumanist topic, > but I would strongly disagree. The technology, wealth and public interest > in the phenomena make it something that will evolve as humanity continues to > do so. Cybernetically and genetically enhanced humans... > > John : ) > _______________________________________________ > > > > I think of football as a wonderful test-bed to study the effects of > cumulative damage to the brains caused by multiple concussions. If we used > it correctly, the sport could supply us with a living laboratory for the > effects of various steroids, their short term and long term effects. If > that information were made available, I would see the entire enterprise as > most worthwhile. Of course I can imagine that particular sport as a great > place to test mechanical human enhancements, such as exoskeletons. I can > even imagine football being played by teams of advanced robots. Even *I* > would pay money to see that. But not in the stadium. I figure it is only a > matter of time before the radicalized Mormons realize a crowded stadium is a > fine target, the most obvious point at which to wage an economic war against > the infidel. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Feb 5 21:13:26 2011 From: spike66 at att.net (spike) Date: Sat, 5 Feb 2011 13:13:26 -0800 Subject: [ExI] sports blammisphy In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> Message-ID: <009501cbc579$87f00580$97d01080$@att.net> ... > On Feb 5, 2011 1:31 PM, "spike" wrote: > > Recently the French chess federation has accused three of its own > players of cheating in last September's Chess Olympiad: > > http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-i ts-own-players-of-cheating/ >... > In any case, I thought of a way to look at the games after the fact, > using just the game scores, and figuring out a way to determine if the > players had somehow consulted a computer with some tricky I/O device... 
spike So here's the idea. We have now about 50 or so commercially available chess engines capable of playing with the big boys. If we had a big enough pool of volunteers, we could distribute one or more of these engines to each volunteer. The volunteer enters the game scores for any number of players. The computer plays each position and derives its own list of choices for moves, along with it's own estimated evaluation of each move. We get the software running on a plethora of different computer hardware. If any player matches exactly and consistently with any of the software's first choices, well then it is simple, ya got him. No player will match exactly the way a computer would play. Computers will not match exactly each other. There have been entire games where the human player chose one of the top five choices. At grandmaster level, you might well see a human legitimately choosing the computer's top choice eight or ten times in a row. But fifteen in a row would make me highly suspicious, and twenty would be a slam dunk. So I claim there would be a statistical signature of a player using a chess engine with a tricky hidden I/O device of some sort. That being said, I have thought of an even trickier trick which would allow a human to use chess software and sneaky I/O devices, which I will post next time. spike From kellycoinguy at gmail.com Sat Feb 5 21:37:10 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 14:37:10 -0700 Subject: [ExI] sports blammisphy In-Reply-To: <009501cbc579$87f00580$97d01080$@att.net> References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> Message-ID: On Sat, Feb 5, 2011 at 2:13 PM, spike wrote: > That being said, I have thought of an even trickier trick which would allow > a human to use chess software and sneaky I/O devices, which I will post next > time. Seems all you would have to do is pick a move from one of the twenty good programs randomly. Or perhaps have a human being picking the move out of the twenty or so programs to make it look like there was no copying of a particular program's output. Or even simpler, just pick one move from each program. It would be an arms race between the cheaters and those trying to find the cheaters. If someone wants to cheat, I can't think how you can stop them completely. -Kelly From spike66 at att.net Sat Feb 5 21:55:46 2011 From: spike66 at att.net (spike) Date: Sat, 5 Feb 2011 13:55:46 -0800 Subject: [ExI] sports blammisphy In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> Message-ID: <009601cbc57f$71abbcf0$550336d0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Saturday, February 05, 2011 1:37 PM To: ExI chat list Subject: Re: [ExI] sports blammisphy On Sat, Feb 5, 2011 at 2:13 PM, spike wrote: > That being said, I have thought of an even trickier trick which would > allow a human to use chess software and sneaky I/O devices, which I > will post next time. Seems all you would have to do is pick a move from one of the twenty good programs randomly. Or perhaps have a human being picking the move out of the twenty or so programs to make it look like there was no copying of a particular program's output. Or even simpler, just pick one move from each program. It would be an arms race between the cheaters and those trying to find the cheaters. 
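To put rough numbers on the statistical signature spike describes (the per-move match rates below are guesses, not measured figures, and real moves are correlated by forced recaptures and the like, so this only shows why a long streak is more telling than a high overall match rate):

# If an honest player independently hits the engine's first choice with
# probability p on each move, a run of k straight matches has probability
# p**k (crudely assuming independence between moves).

for p in (0.5, 0.6, 0.7):
    for k in (8, 15, 20):
        print(f"p = {p:.1f}   {k:2d} in a row: {p ** k:.2e}")

Even granting a generous 70% per-move match rate, twenty first choices in a row comes out below one chance in a thousand, which is roughly the intuition behind "fifteen suspicious, twenty a slam dunk".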
If someone wants to cheat, I can't think how you can stop them completely.

-Kelly

Ja, exactly. That was my idea: get all fifty or so top chess engines, then let them vote on the best move. So my counterattack would be to set up a team and determine what the composite move would be, then see if any human players match that composite.

Thanks Kelly, good thinking. That arms race notion was exactly what I had in mind. Without it, we will likely face the same phenomenon with chess tournaments as was seen in postal chess ten years ago: it became meaningless because there was no way to determine if the participant was cheating with computers. Today, the world title for postal chess is completely meaningless. The International Correspondence Chess Federation has dwindled to practically nothing. I can imagine the same thing happening to over-the-board (real time) chess tournaments as it gets harder to determine if someone is cheating.

spike

From brent.allsop at canonizer.com  Sat Feb  5 19:48:22 2011
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Sat, 05 Feb 2011 12:48:22 -0700
Subject: [ExI] a fun brain in which to live
In-Reply-To: <017301cbbf5a$1b398f30$51acad90$@att.net>
References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net>
Message-ID: <4D4DA986.1030802@canonizer.com>

On 1/28/2011 7:13 PM, spike wrote:
> ... On Behalf Of Brent Allsop
> Subject: Re: [ExI] Help with freezing phenomenon
>
> On 1/26/2011 11:44 PM, spike wrote:
>>> ...Mine is a fun brain in which to live.
>> ... I think your brain would definitely be near the top of brains I'd like
> to try...
>
> You are too kind sir, and yes I do have fun in here.
>> ... I'd also like to try out someones' brain that claims we don't have
> qualia...
>
> Likewise, yours is a brain I would like to try on, just to figure out what
> is qualia.  I confess I have never understood that concept, but do not feel
> you must attempt to explain it to me.

It is turning out to be a relatively simple idea, even simpler than the idea that the earth goes around the sun rather than the other way around, once you get it.

First, you've got to get the idea of representationalism: the idea that in order for a robot to be able to pick a strawberry in a strawberry patch of green leaves, it must have some kind of perception system. The initial cause of the perception process is the 650 nm light reflecting off of the strawberries, and the 500 nm light reflecting off of the leaves. The final result of any such perception process is the robot's knowledge of such. Where is the strawberry amongst the leaves, and relative to the robot's hand? All this must be represented or modeled in the robot's knowledge if it is to be able to pick the strawberry.

Are we in agreement that there are two parts to perception? The initial cause, and the final result, or our knowledge of such? If we can get that, the rest is easy.

The rest is simply this: phenomenal redness and greenness are obviously properties of something, right? The earth-goes-around-the-sun idea, the one that the expert consensus (still unlike the popular consensus) is clearly converging on, is simply that this phenomenal red property is a property of something in our brain, or a property of our knowledge of the strawberry, and it only has to do with something reflecting 650 nm light in that our brain chose to use red to represent 650 nm light.
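A bare-bones way to see the two parts in the robot story (this is only a sketch of the causal chain; the internal token below obviously has none of the phenomenal quality being discussed):

# Initial cause: the light actually reflected off each object.
STIMULUS_NM = {"strawberry": 650, "leaf": 500}

def perceive(wavelength_nm):
    # Final result: the system's own representation of that cause.
    return "RED" if wavelength_nm > 600 else "GREEN"

knowledge = {thing: perceive(nm) for thing, nm in STIMULUS_NM.items()}
print(knowledge)   # {'strawberry': 'RED', 'leaf': 'GREEN'}
# Nothing about the token "RED" is 650-nm-ish; the mapping is a convention
# of the perceiver, not a property of the light.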
The phenomenal red property is obviosly nothing like, in location or quality, a property of reflecting 650 nm light. One is a causal property, the other is a phenomenal property. One is still ineffable or blind to cause and effect communication, and the other is not. Which parts of this much simpler than the idea that the earth goes around the sun do people struggle with? Brent From kellycoinguy at gmail.com Sun Feb 6 05:55:00 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 22:55:00 -0700 Subject: [ExI] sports blammisphy In-Reply-To: <009601cbc57f$71abbcf0$550336d0$@att.net> References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> <009601cbc57f$71abbcf0$550336d0$@att.net> Message-ID: > Ja, exactly. ?That was my idea: get all fifty or so top chess engines, then > let them vote on the best move. ?So my counter attack would be to set up a > team and determine what the composite move would be, then see if any human > players match that composite. > > Thanks Kelly, good thinking. ?That arms race notions was exactly what I had > in mind. ?Without that, we will likely face the same phenomenon with chess > tournaments as was seen in postal chess ten years ago: it became meaningless > because there was no way to determine if the participant was cheating with > computers. ?Today, the world title for postal chess is completely > meaningless. ?The International Correspondence Chess Federation has dwindled > to practically nothing. ?I can imagine the same thing happening to > Over-the-Board (real time) chess tournaments as it gets harder to determine > if someone is cheating. spike, I think this points out a recurring trans-humanist, cyborg and even fyborg theme. What is cheating in the brave new world we are making? If the Olympics are only open to original unenhanced human beings, then it just becomes a race to figure out who is enhanced, and who is not. It's already happening at the top level of sports, of course. But when we start talking about enhancements that are "built-in" to people, especially in the context of intellectual pursuits, is that really cheating any more? I understand that now you can bring some kinds of calculators to your SAT test; shades of a fyborgian future. When a cell phone can play world class chess now, what will the calculators of tomorrow be capable of? And what happens when that calculator is implanted subcutaneously? Whether it's cyborg or fyborg makes little functional difference. As a computer programmer working for companies, I have sometimes outsourced pieces of my job that required skills that I was weak on, or that simply weren't interesting to me. I paid for the outsourcing out of my own pocket. My boss was just interested in the job getting done. The job got done. Is that cheating? By any scholastic measure, it would be, but in business the results are more important than the means used to achieve them. There are no urine tests in most computer programming shops. If football players get their bones strengthened by nanotechnology embedding nano tubes, is that cheating? If so, why? I can use carbon fibers in a football helmet. I understand that sprinters are limited in how fast they can run to some extent on the fact that if they put any more stress on their bones, that they might break. So there is a limit to the G-Forces that can be put on bone by muscle. This is certainly an issue in professional level arm wrestling. 
As Kurzweil mentioned in his book, there will be high school students routinely breaking what are now world records. How we will cope with this "cheating" will be an interesting part of the future. I think it is an interesting part of the present. -Kelly From kellycoinguy at gmail.com Sun Feb 6 06:10:46 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 23:10:46 -0700 Subject: [ExI] super bowl In-Reply-To: <006501cbc54e$e3b8a140$ab29e3c0$@att.net> References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> <006501cbc54e$e3b8a140$ab29e3c0$@att.net> Message-ID: On Sat, Feb 5, 2011 at 9:08 AM, spike wrote: > I figure it is only a > matter of time before the radicalized Mormons realize a crowded stadium is a > fine target, the most obvious point at which to wage an economic war against > the infidel. As a genetic Mormon, I highly doubt that you have anything to worry about. The Mormon model for taking over the world involves OWNING the stadium, the hot dog concession, and being in the majority on the board of directors of both football teams. -Kelly From kellycoinguy at gmail.com Sun Feb 6 06:29:39 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 23:29:39 -0700 Subject: [ExI] super bowl and EP In-Reply-To: References: Message-ID: On Sat, Feb 5, 2011 at 9:27 AM, Keith Henson wrote: > On Sat, Feb 5, 2011 at 5:00 AM, ?Kelly Anderson wrote: >> >> Football IS the new opiate of the masses!! How's that for blasphemy against >> sports? > > Since all human behavior depends on evolved psychological mechanisms, > it would be interesting to understand the origins of sports in such > terms. > > Sometimes the links lead strange places. ?For example, BDSM being an > outcome of strongly selected capture-bonding psychological mechanisms. > > I have not given much thought to selection of psychological mechanisms > that today manifest in sports fans. > > If anyone wants to try, the rule is that the selection of whatever is > involved had to happen in the stone age or, if post agriculture, it > need to be rather strong. Keith, I think the evolutionary roots of sport are quite easy to surmise. First, human beings are highly evolved as runners. According to Dawkins, there were over 20 different evolutionary advances between the high apes and human beings that lead directly to our excellence as long distance runners. From the loss of hair and sweat glands to changes in the hip structure, we were born to run. Many primitive tribes participate in persistence hunting. http://en.wikipedia.org/wiki/Persistence_hunting http://en.wikipedia.org/wiki/Endurance_running_hypothesis In addition to persistence hunting, our ancestors were involved in many other types of hunting that required significant physical skill, particularly in running, throwing, fast judgment, and other elements that we see in today's sport. Since we learn from watching others, it is pretty easy to imagine young hunters going out and watching older hunters track down prey. The leap from there to sports seems pretty small. If you put the Roman Coliseum as an intermediate step, it is even easier to see the progression, and to understand the evolutionary pressure that would lead us to want to watch others participate in "sporting" activities. If you weren't interested in watching, you wouldn't learn to hunt as well, your children would not be born, selection pressure is applied. 
Voila, 1000 generations later, everyone is very interested in sports. Over the past 200 years or so, as the influence of religion has decreased in the populace, the political elite have resorted to the Roman bread and circuses method for quelling the masses. Religion can no longer maintain the hold on the masses that it did during the Middle Ages, so something has to take its place, or many things: television, sports, iPods, etc. All of these things keep us from rebelling against 40%+ tax rates that would have driven any previous generation to charge Washington with flaming pitchforks. It is only in our abundance that we can accept the level of government confiscation of private property that we accept in our current political system. As we move to the future, and even more abundance, I predict that tax rates will continue to go up. Not a real stretch as predictions of the future go, of course :-) -Kelly From kellycoinguy at gmail.com Sun Feb 6 07:23:08 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 00:23:08 -0700 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: <4D4D7FCD.7000103@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> Message-ID: On Sat, Feb 5, 2011 at 9:50 AM, Richard Loosemore wrote: > Kelly Anderson wrote: >> >> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore >> wrote: >>> >>> Kelly Anderson wrote: >> I doubt it, only in the sense that we don't have anything with near >> the raw computational power necessary yet. Unless you have really >> compelling evidence that you can get human-like results without >> human-like processing power, this seems like a somewhat empty claim. > > Over the last five years or so, I have occasionally replied to this question > with some back of the envelope calculations to back up the claim. At some > point I will sit down and do the job more fully, and publish it, but in the > mean time here is your homework assignment for the week.... ;-) > > There are approximately one million cortical columns in the brain. If each > of these is designed to host one "concept" at a time, but with at most half > of them hosting at any given moment, this gives (roughly) half a million > active concepts. I am not willing to concede that this is how it works. I tend to gravitate towards a more holographic view, i.e. that the "concept" is distributed across tens of thousands of cortical columns, and that the combination of triggers to a group of cortical columns is what causes the overall "concept" to emerge. This is a general idea, and may not apply specifically to cortical columns, but I think you get the idea. The reason for belief in the holographic model is that brain damage doesn't knock out all memory or ability to process if only part of the brain is damaged. This neat one to one mapping of concept to neuron has been debunked to my satisfaction some time ago. > If each of these is engaging in simple adaptive interactions with the ten or > twenty nearest neighbors, exchanging very small amounts of data (each > cortical column sending out and receiving, say, between 1 and 10 KBytes, > every 2 milliseconds), how much processing power and bandwidth would this > require, and how big of a machine would you need to implement that, using > today's technology?
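[A rough worked version of the homework question just quoted, in Python, using only the figures given there (a million columns, half active, ten to twenty neighbours, 1 to 10 KBytes every 2 milliseconds). The per-message operation count is an invented placeholder, not something claimed anywhere in the thread, so the compute total is illustrative only:

# Back-of-envelope sketch using the quoted figures; OPS_PER_MESSAGE is an
# assumed allowance for the "simple adaptive interaction" each exchange
# triggers, so the ops/s line is illustrative rather than a claim.
COLUMNS         = 1_000_000   # cortical columns
ACTIVE_FRACTION = 0.5         # "at most half of them hosting at any given moment"
NEIGHBORS       = 20          # upper end of "ten or twenty nearest neighbors"
BYTES_PER_MSG   = 10_000      # upper end of "between 1 and 10 KBytes"
TICK_SECONDS    = 0.002       # one exchange "every 2 milliseconds"
OPS_PER_MESSAGE = 10_000      # assumed work per message (not from the thread)

active_columns   = COLUMNS * ACTIVE_FRACTION
messages_per_sec = active_columns * NEIGHBORS / TICK_SECONDS
bandwidth_bytes  = messages_per_sec * BYTES_PER_MSG
ops_per_sec      = messages_per_sec * OPS_PER_MESSAGE

print(f"messages/s          : {messages_per_sec:.1e}")            # ~5.0e+09
print(f"aggregate bandwidth : {bandwidth_bytes / 1e12:.0f} TB/s")  # ~50 TB/s
print(f"compute (assumed)   : {ops_per_sec:.1e} ops/s")            # ~5.0e+13

Nearly all of that traffic is between nearest neighbours, so it is aggregate local bandwidth rather than load on a single bus, and the totals swing by orders of magnitude depending on the message size and per-message work one assumes, which is the crux of the disagreement that follows.]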
You are speaking of only one of the thirty or so organelles in the brain. The cerebral cortex is only one part of the overall picture. Nevertheless, you are obviously not talking about very much computational power here. Kurzweil in TSIN does the back of the envelope calculations about the overall computational power of the human brain, and it's a lot more than you are presenting here. > This architecture may well be all that the brain is doing. ?The rest is just > overhead, forced on it by the particular constraints of its physical > substrate. I have no doubt that as we figure out what the brain is doing, we'll be able to optimize. But we have to figure it out first. You seem to jump straight to a solution as a hypothesis. Now, having a hypothesis is a good part of the scientific method, but there is that other part of testing the hypothesis. What is your test? > Now, if this conjecture is accurate, you tell me how long ago we had the > hardware necessary to build an AGI.... ;-) I'm sure we have that much now. The problem is whether the conjecture is correct. How do you prove the conjecture? Do something "intelligent". What I don't see yet in your papers, or in your posts here, are results. What "intelligent" behavior have you simulated with your hypothesis Richard? I'm not trying to be argumentative or challenging, just trying to figure out where you are in your work and whether you are applying the scientific method rigorously. > The last time I did this calculation I reckoned (very approximately) that > the mid-1980s was when we crossed the threshold, with the largest > supercomputers then available. That may be the case. And once we figure out how it all works, we could well reduce it to this level of computational requirement. But we haven't figured it out yet. By most calculations, we spend an inordinate amount of our cerebral processing on image processing the input from our eyes. Have you made any image processing breakthroughs? Can you tell a cat from a dog with your approach? You seem to be focused on concepts and how they are processed. How does your method approach the nasty problems of image classification and recognition? -Kelly From kellycoinguy at gmail.com Sun Feb 6 07:50:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 00:50:26 -0700 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: <4D4D7D59.8010205@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore wrote: > Kelly, > > This is exactly the line along which I am going. ? I have talked in the > past about building AGI systems that are "empathic" to the human > species, and which are locked into that state of empathy by their > design. I would not propose to "design" empathy, but rather to "train" towards empathy. I envision raising AGIs just as one would raise a child. This would train them to think as though they were a human, or at least that they were adopted by humans. As they mature, the speed of learning could be sped up, or the net could be copied, and further learning could go in many directions, but that core of humanity is the most important thing to get right to ensure that the AGIs and future humans will live in some kind of harmony. 
> Your sentence above: > >> It seems to me that one safety precaution we would want to have is >> for the first generation of AGI to see itself in some way as actually >> being human, or self identifying as being very close to humans. > > ... captures exactly the approach I am taking. This is what I mean by > building AGI systems that feel empathy for humans. They would BE humans in > most respects. I think AGIs should see us as their ancestors. I would hope to be thought of with the kind of respect we would feel for homo erectus (were they still around). Kurzweil states that increased intelligence leads to increased empathy, which is an interesting hypothesis. I wouldn't know how to test it, but it does seem to be a trend. > I envision a project to systematically explore the behavior of the > motivation mechanisms. In the research phases, we would be directly > monitoring the balance of power between the various motivation modules, and > also monitoring for certain patterns of thought. Here you devolve into the vagueness that makes this discussion difficult for me. Are you talking of studying humans here? > I cannot answer all your points in full detail, but it is worth noting that > things like the fanatic mindset (suicide bombers, etc) are probably a result > of the interaction of motivation modules that would not be present in the > AGI. Hopefully this will be the case. I tend towards optimism, so for the moment, I'll give you this point. > Foremost among them, the module that incites tribal loyalty and > hatred (in-group, out-group feelings). Without that kind of module > (assuming it is a distinct module) the system would perhaps have no chance > of drifting in that direction. Here it sounds like we differ. I would propose that "young" AGIs be given to exemplary parents in every culture we can find. Raising them as they would their own youth, we preserve the richness of human diversity that we are risking losing today. After all, we are losing languages and culture to the global mono-culture at an alarming rate today just among humans. If all AGIs are taught in the same laboratory or western context, we will end up with a monoculture in the AGI strains that will potentially have a negative impact on preserving human diversity. I respect other people's belief systems, and I want AGIs with all kinds of belief systems. Even if many of them end up evolving beyond their core training, having that core is important to maintaining empathy towards the group that has that core belief system. I would hate for AGIs to decide that the Amish were not worth preserving just because no AGI had ever been raised in an Amish household. > And even in a suicide bomber, there are > other motivations fighting to take over and restore order, right up to the > last minute: they sweat when they are about to go. Perhaps. As one who has previously held strong religious beliefs, I can put myself into the head of a suicide bomber quite well, and I can see the possibility of not sweating it. > Answering the ideas you throw into the ring, in your comment, would be > fodder for an entire essay. Sometime soon, I hope... Clearly, there is a lot of ground to cover. Here are some of the things I care about... 1) How do we preserve the diversity of human culture as we evolve past being purely human? 2) How do we create AGIs? 3) How do we ensure that human beings (enhanced or natural) can continue to live in the same society with the AGIs? 4) How can we protect society from rogue AGIs? 5) How is this all best done without offending the religious majority and generating painful backlash? (i.e. How do you prevent a civil war between the religious fundamentalists and the AGIs?) -Kelly From kellycoinguy at gmail.com Sun Feb 6 09:01:09 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 02:01:09 -0700 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Wed, Jan 19, 2011 at 5:53 AM, BillK wrote: > On Wed, Jan 19, 2011 at 4:55 AM, spike wrote: > First. > No, I don't think voice operated computers will ever appear in general use. > Think about it. What happens when you get a group of people all > shouting at their handheld computers? It's bad enough listening to > other people's mobile phone conversations. > There is a place for specialised applications such as voice > recognition entry systems. Bill, I don't disagree with you here. It may be unacceptable from a sociological standpoint; however, from a technological standpoint, it is easy to recognize the speaker compared to recognizing what the speaker is saying. In other words, many people talking at once would not bother the computer, but it might bother the other people in the room. Not sure which you were getting at here. -Kelly From pharos at gmail.com Sun Feb 6 09:36:09 2011 From: pharos at gmail.com (BillK) Date: Sun, 6 Feb 2011 09:36:09 +0000 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Sun, Feb 6, 2011 at 9:01 AM, Kelly Anderson wrote: > Bill, I don't disagree with you here. It may be unacceptable from a > sociological standpoint, however, from a technological standpoint, it > is easy to recognize the speaker compared to recognizing what the > speaker is saying. In other words, many people talking at once would > not bother the computer, but it might bother the other people in the > room. Not sure which you were getting at here. > > I meant the social annoyance factor of having a roomful of people shouting at their handheld computers. But I suppose more technology could remove that problem. If everyone is wearing an earpiece (cable or bluetooth) and using a sub-vocal microphone taped to their neck then the public annoyance factor disappears. (There is also the thought controlled tech that is being developed in the labs for disabled people). Then we would have to face the problem (already appearing) of attention distraction where people step in front of cars or walk off railway platforms while their attention is off in the cloud. You already see people at parties sitting silently in a circle, all tapping away at their phones, tweeting about how great the party is to their 500 followers. ;) BillK From pharos at gmail.com Sun Feb 6 10:14:45 2011 From: pharos at gmail.com (BillK) Date: Sun, 6 Feb 2011 10:14:45 +0000 Subject: [ExI] UN solves Third World Poverty problem Message-ID: "It's simple," said Mr James Bowen, UN Spokesman for Development. "Give a man a fish and you feed him for a day. Teach him to phish, and before you know it he's ordering a Merc and moving fast up the Nigerian rich list." The so-called "ScamAid" initiative will teach modern-day Robin Hoods to empty the bank accounts of rich Westerners to pay for schools and health clinics in third world communities. According to the United Nations, only 0.5% of the developed world would have to be thick enough to hand over their personal details to a local ScamAid partner in order to vaccinate and educate every child under twelve. "It's basically a tax on stupidity," explained the UN spokesman. ------------------- BillK From rpwl at lightlink.com Sun Feb 6 14:33:10 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 06 Feb 2011 09:33:10 -0500 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> Message-ID: <4D4EB126.1060004@lightlink.com> Kelly Anderson wrote: > On Sat, Feb 5, 2011 at 9:50 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >>> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore >>> wrote: >>>> Kelly Anderson wrote: >>> I doubt it, only in the sense that we don't have anything with near >>> the raw computational power necessary yet. Unless you have really >>> compelling evidence that you can get human-like results without >>> human-like processing power, this seems like a somewhat empty claim. >> Over the last five years or so, I have occasionally replied to this question >> with some back of the envelope calculations to back up the claim. At some >> point I will sit down and do the job more fully, and publish it, but in the >> mean time here is your homework assignment for the week.... ;-) >> >> There are approximately one million cortical columns in the brain. If each >> of these is designed to host one "concept" at a time, but with at most half >> of them hosting at any given moment, this gives (roughly) half a million >> active concepts. > > I am not willing to concede that this is how it works. I tend to > gravitate towards a more holographic view, i.e. that the "concept" is > distributed across tens of thousands of cortical columns, and that the > combination of triggers to a group of cortical columns is what causes > the overall "concept" to emerge. This is a general idea, and may not > apply specifically to cortical columns, but I think you get the idea. > The reason for belief in the holographic model is that brain damage > doesn't knock out all memory or ability to process if only part of the > brain is damaged. This neat one to one mapping of concept to neuron > has been debunked to my satisfaction some time ago. The architecture I outlined above has a long pedigree (the main ancestor being the parallel distributed processing ideas of Rumelhart, McClelland et al), so it is okay to suggest a different architecture, but there does have to be motivation for whatever suggestion is made about the hardware-to-concept mapping. That said, there are questions. If something is distributed, is it (a) the dormant, generic "concepts" in long term memory, or is it the active, instance "concepts" of working memory? Very big difference. I believe there are reasons to talk about the long term memory concepts as being partially distributed, but that would not apply to the instances in working memory..... and in the above architecture I was talking only about the latter.
If you try to push the idea that the instance atoms (my term for the active concepts) are in some sense "holographic" or distributed, you get into all sorts of theoretical and practical snarls. I published a paper with Trevor Harley last year in which we analyzed a paper by Quiroga et al, that made claims about the localization of concepts to neurons. That paper contains a more detailed explanation of the mapping, using ideas from my architecture. It is worth noting that Quiroga et al's explanation of their own data made no sense, and that the alternative that Trevor and I proposed actually did account for the data rather neatly. >> If each of these is engaging in simple adaptive interactions with the ten or >> twenty nearest neighbors, exchanging very small amounts of data (each >> cortical column sending out and receiving, say, between 1 and 10 KBytes, >> every 2 milliseconds), how much processing power and bandwidth would this >> require, and how big of a machine would you need to implement that, using >> today's technology? > > You are speaking of only one of the thirty or so organelles in the > brain. The cerebral cortex is only one part of the overall picture. > Nevertheless, you are obviously not talking about very much > computational power here. Kurzweil in TSIN does the back of the > envelope calculations about the overall computational power of the > human brain, and it's a lot more than you are presenting here. Of course! Kurzweil (and others') calculations are based on the crudest possible calculation of a brain emulation AGI, in which every wretched neuron in there is critically important, and cannot be substituted for something simpler. That is the dumb approach. What I am trying to do is explain an architecture that comes from the cognitive science level, and which suggests that the FUNCTIONAL role played by neurons is such that it can be substituted very adequately by a different computational substrate. So, my claim is that, functionally, the human cognitive system may consist a network of about a million cortical column units, each of which engages in relatively simple relaxation processes with neighbors. I am not saying that this is the exactly correct picture, but so far this architecture seems to work as a draft explanation for a broad range of cognitive phenomena. And if it is correct, the the TSIN calculations are pointless. >> This architecture may well be all that the brain is doing. The rest is just >> overhead, forced on it by the particular constraints of its physical >> substrate. > > I have no doubt that as we figure out what the brain is doing, we'll > be able to optimize. But we have to figure it out first. You seem to > jump straight to a solution as a hypothesis. Now, having a hypothesis > is a good part of the scientific method, but there is that other part > of testing the hypothesis. What is your test? Well, it may seem like I pulled the hypothesis out of the hat yesterday morning, but this is actually just a summary of a project that started in the late 1980s. The test is an examination of the consistency of this architecture with the known data from human cognition. (Bear in mind that most artificial intelligence researchers are not "scientists" .... they do not propose hyotheses and test them ..... they are engineers or mathematicians, and what they do is play with ideas to see if they work, or prove theorems to show that some things should work. 
From that perspective, what I am doing is real science, of a sort that almost died out in AI a couple of decades ago). For an example of the kind of tests that are part of the research program I am engaged in, see the Loosemore and Harley paper. >> Now, if this conjecture is accurate, you tell me how long ago we had the >> hardware necessary to build an AGI.... ;-) > > I'm sure we have that much now. The problem is whether the conjecture > is correct. How do you prove the conjecture? Do something > "intelligent". What I don't see yet in your papers, or in your posts > here, are results. What "intelligent" behavior have you simulated with > your hypothesis Richard? I'm not trying to be argumentative or > challenging, just trying to figure out where you are in your work and > whether you are applying the scientific method rigorously. The problem of giving you and answer is complicated by the paradigm. I am adopting a systematic top-down scan that starts at the framework level and proceeds downward. The L & H paper shows an application of the method to just a couple of neuroscience results. What I have here are similar analyses of several dozen other cognitive phenomena, in various amounts o detail, but these are not published yet. There are other stages to the work that involve simulations of particular algorithms. This is quite a big topic. You may have to wait for my thesis to be published to get a full answer, because fragments of it can be confusing. All I can say at the moment is that the architecture gives rise to simple, elegant explanations, at a high level, of a wide range of cognitive data, and the mere fact that one architecture can do such a thing is, in my experience, unique. However, I do not want to publish that as it stands, because I know what the reaction would be if there is no further explanation of particular algorithms, down at the lowest level. So, I continue to work toward the latter, even though by my own standards I already have enough to be convinced. >> The last time I did this calculation I reckoned (very approximately) that >> the mid-1980s was when we crossed the threshold, with the largest >> supercomputers then available. > > That may be the case. And once we figure out how it all works, we > could well reduce it to this level of computational requirement. But > we haven't figured it out yet. > > By most calculations, we spend an inordinate amount of our cerebral > processing on image processing the input from our eyes. Have you made > any image processing breakthroughs? Can you tell a cat from a dog with > your approach? You seem to be focused on concepts and how they are > processed. How does your method approach the nasty problems of image > classification and recognition? The term "concept" is a vague one. I used it in our discussion because it is conventional. However, in my own writings I talk of "atoms" and "elements", because some of those atoms correspond to very low-level features such as the ones that figure in the visual system. As far as I can tell at this stage, the visual system uses the same basic architecture, but with a few wrinkles. One of those is mechanism to spread locally acquired features into a network of "distributed, position-specific" atoms. This means that when visual regularities are discovered, they percolate down in the system and become distributed across the visual field, so they can be computed in parallel. 
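[One way to picture the "distributed, position-specific atoms" idea just described, on the working assumption that it is roughly analogous to weight sharing: a regularity learned at one spot is copied so that every position in the visual field can test for it in parallel. A toy Python sketch only; the feature, the field and the scoring rule are all made up for illustration and are not the architecture being described:

# A 3x3 "regularity" discovered locally (a vertical bar), then applied
# at every position of a small field; all values here are invented.
FEATURE = [[0, 1, 0],
           [0, 1, 0],
           [0, 1, 0]]

def match_score(field, top, left):
    # dot product of the shared feature with one 3x3 patch of the field
    return sum(FEATURE[r][c] * field[top + r][left + c]
               for r in range(3) for c in range(3))

def position_specific_scores(field):
    # the same feature is evaluated at every position (in parallel, in principle)
    rows, cols = len(field), len(field[0])
    return [[match_score(field, r, c) for c in range(cols - 2)]
            for r in range(rows - 2)]

field = [[1 if c == 3 else 0 for c in range(5)] for r in range(5)]  # bar in column 3
for row in position_specific_scores(field):
    print(row)   # the high scores line up where the bar sits

Whether the brain's version looks anything like this is exactly the open question in the thread.]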
Also, the visual system does contain some specialized pathways (the "what" and "where" pathways) that engage in separate computations. These are already allowed for in the above calcuations, but they are specialized regions of that million-column system. I had better stop. Must get back to work. Richard Loosemore From rpwl at lightlink.com Sun Feb 6 14:50:02 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 06 Feb 2011 09:50:02 -0500 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D4EB51A.5050008@lightlink.com> Kelly Anderson wrote: > On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore wrote: >> Kelly, >> >> This is exactly the line along which I am going. I have talked in the >> past about building AGI systems that are "empathic" to the human >> species, and which are locked into that state of empathy by their >> design. > > I would not propose to "design" empathy, but rather to "train" towards > empathy. I envision raising AGIs just as one would raise a child. This > would train them to think as though they were a human, or at least > that they were adopted by humans. As they mature, the speed of > learning could be sped up, or the net could be copied, and further > learning could go in many directions, but that core of humanity is the > most important thing to get right to ensure that the AGIs and future > humans will live in some kind of harmony. This is certainly something that you would want to do, but it is kind of orthogonal to the question of "designing" the empathy in the first place. A system designed to be a psychopath, for example, would not benefit from that kind of upbringing. So you have to do both. >> Your sentence above: >> >>> It seems to me that one safety precaution we would want to have is >>> for the first generation of AGI to see itself in some way as actually >>> being human, or self identifying as being very close to humans. >> ... captures exactly the approach I am taking. This is what I mean by >> building AGI systems that feel empathy for humans. They would BE humans in >> most respects. > > I thing AGIs should see us as their ancestors. I would hope to be > thought of with the kind of respect we would feel for homo erectus > (were they still around). Kurzweil states that increased intelligence > leads to increased empathy, which is an interesting hypothesis. I > wouldn't know how to test it, but it does seem to be a trend. This idea that "increased intelligence leads to increased empathy" is a natural consequence of the idea that the system is making sure that all its ideas are consistent with one another, and with its basic motivations. If its basic motivations start with the idea of empathy, then increased intelligence would indeed make the system more and more empathic. >> I envision a project to systematically explore the behavior of the >> motivation mechanisms. In the research phases, we would be directly >> monitoring the balance of power between the various motivation modules, and >> also monitoring for certain patterns of thought. > > Here you devolve into the vagueness that makes this discussion > difficult for me. Are you talking of studying humans here? Sorry, no, I mean studying the AGI mechanisms. 
We do not have enough access to the inner, real-time workings of human systems. This is strictly about studying the experimental AGIs, during the research and development phase. > >> I cannot answer all your points in full detail, but it is worth noting that >> things like the fanatic midset (suicide bombers, etc) are probably a result >> of the interaction of motivation modules that would not be present in the >> AGI. > > Hopefully this will be the case. I tend towards optimism, so for the > moment, I'll give you this point. > >> Foremost among them, the module that incites tribal loyalty and >> hatred (in-group, out-group feelings). Without that kind of module >> (assuming it is a distinct module) the system would perhaps have no chance >> of drifting in that direction. > > Here it sounds like we differ. I would propose that "young" AGIs be > given to exemplary parents in every culture we can find. Raising them > as they would their own youth, we preserve the richness of human > diversity that we are risking losing today. After all, we are losing > languages and culture to the global mono-culture at an alarming rate > today just among humans. If all AGIs are taught in the same laboratory > or western context, we will end up with a mono culture in the AGI > strains that will potentially have a negative impact on preserving > human diversity. Although I completely agree with your goal here, I would say this is a different issue, with different answers. Very good answers, I suggest, but somewhat peripheral to this discussion. The crucial issue is, at the beginning, is to understand and build the correct foundations. So, I am talking about giving the AGI the kind of underlying mechanisms that will make it grow towards a caring, empathic individual, and avoiding the kind of mechanisms that would make it psychopathic. Then, and only then, comes the youthful experience of the AGI (which you are focussing on). The experience part is important, but I am really only trying make arguments about the construction phase at the moment. What it boils down to is the fact that some humans are born with damaged motivation mechanism, such that there is no ability to empathize and bond. No amount of youthful happiness will matter to those people. My primary concern at the moment is to understand that, and design AGIs so that does not happen. > I respect other people's belief systems, and I want AGIs with all > kinds of belief systems. Even if many of them end up evolving beyond > their core training, having that core is important to maintaining > empathy towards the group that has that core belief system. I would > hate for AGIs to decide that the Amish were not worth preserving just > because no AGI had ever been raised in an Amish household. I seriously doubt that will happen. But that is a discussion for another day. >> And even in a suicide bomber, there are >> other motivations fighting to take over and restore order, right up to the >> last minute: they sweat when they are about to go. > > Perhaps. As one who has previously held strong religious beliefs, I > can put myself into the head of a suicide bomber quite well, and I can > see the possibility of not sweating it. > >> Answering the ideas you throw into the ring, in your comment, would be >> fodder for an entire essay. Sometime soon, I hope... > > Clearly, there is a lot of ground to cover. Here are some of the > things I care about... > > 1) How do we preserve the diversity of human culture as we evolve past > being purely human? 
> 2) How do we create AGIs? > 3) How do we ensure that human beings (enhanced or natural) can > continue to live in the same society with the AGIs? > 4) How can we protect society from rogue AGIs? > 5) How is this all best done without offending the religious majority > and generating painful backlash? (i.e. How do you prevent a civil war > between the religious fundamentalists and the AGIs?) I have answers (proposed answers, at least). But that is an entire book. ;-) Richard Loosemore From spike66 at att.net Sun Feb 6 18:20:06 2011 From: spike66 at att.net (spike) Date: Sun, 6 Feb 2011 10:20:06 -0800 Subject: [ExI] a fun brain in which to live In-Reply-To: <4D4DA986.1030802@canonizer.com> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net> <4D4DA986.1030802@canonizer.com> Message-ID: <00c801cbc62a$7aedec10$70c9c430$@att.net> >... On Behalf Of Brent Allsop ... > spike wrote: >> Likewise, yours is a brain I would like to try on, just to figure out >> what is qualia. I confess I have never understood that concept, but >> do not feel you must attempt to explain it to me... spike >...It is turning out to be a relatively simple idea, even simpler than the idea that the earth goes arround the sun, rather than the other way around, once you get it. >...First, you've got to get the idea of representationalism... >...One is still ineffable or blind to cause and effect communication, and the other is not. >...Which parts of this much simpler than the idea that the earth goes around the sun do people struggle with? Brent Brent it isn't so much a problem with the concept of qualia, rather it is just me. I live in a world of equations. I love math, tend to see things in terms of equations and mathematical models. Numbers are my friends. I even visualize social structures in terms of feedback control systems, insofar as it is possible. Beyond that, I don't understand social systems, or for that matter, anything which cannot be described in terms of systems of simultaneous differential equations. If I can get it to differential equations, I can use the tools I know. Otherwise not, which is why I seldom participate here in the discussions which require actual understanding outside that limited domain. The earth going around the sun is a great example. With that, I can write the equations, all from memory. I can tweak with this mass and see what happens there, I can move that term, derive this and the other, come up with a whole mess of cool new insights, using only algebra and calculus. Mathematical symbols are rigidly defined. But I am not so skilled with adjectives, nouns and verbs. Their definitions to me are approximations. I don't know how to take a set of sentences and create a matrix, or use a Fourier transform on them, or a Butterworth or Kalman filter, or any of the mind-blowing tools we have for creating insights with mathematized systems. All is not lost. In the rocket science biz, we know we cannot master every aspect of everything in that field. Life is too short. So we have a saying: You don't need to know the answer, you only need to know the cat who knows the answer. In the field of qualia, pal, that cat is you. Qualia is the reason evolution has given us a Brent Allsop. So live long, very long. 
spike From hkeithhenson at gmail.com Sun Feb 6 19:18:20 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 6 Feb 2011 12:18:20 -0700 Subject: [ExI] super bowl and EP Message-ID: On Sat, Feb 5, 2011 at 11:29 PM, Mike Dougherty wrote: > > On Sat, Feb 5, 2011 at 11:27 AM, Keith Henson wrote: >> If anyone wants to try, the rule is that the selection of whatever is >> involved had to happen in the stone age or, if post agriculture, it >> need to be rather strong. > > Mock combat with elaborate rules. ?Two football teams could be seen as > competing tribes. This has been a standard explanation dating back centuries. it's not what I was talking about. My question is what evolutionary forces in the stone age equipped people with the desire to watch sports events? Sports event were *NOT* part of our evolutionary history, any more than chasing laser spots was part of the evolutionary history of cats. > Not sure you could convince many people to admit their trash-talk > about opposing teams is Xenophobia Which gives you an idea of how disconnected I am. I didn't know they did that. Kelly Anderson wrote: snip > Keith, I think the evolutionary roots of sport are quite easy to surmise. > snip > > In addition to persistence hunting, our ancestors were involved in > many other types of hunting that required significant physical skill, > particularly in running, throwing, fast judgment, and other elements > that we see in today's sport. > > Since we learn from watching others, it is pretty easy to imagine > young hunters going out and watching older hunters track down prey. > The leap from there to sports seems pretty small. If you put the Roman > Coliseum as an intermediate step, it is even easier to see the > progression, and to understand the evolutionary pressure that would > lead us to want to watch others participate in "sporting" activities. Perhaps. All primates are intensely interested in action events involving others of their species. That's also true of herd animals in general. It is probably of evolutionary significance to be strongly aware of these kinds of events to avoid being accidentally hurt if nothing else. In any case, whatever makes people go watch modern sporting events has an origin much further back than the the Roman Coliseum. Keith From nebathenemi at yahoo.co.uk Sun Feb 6 22:42:04 2011 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 6 Feb 2011 22:42:04 +0000 (GMT) Subject: [ExI] Voice operated computers In-Reply-To: Message-ID: <93121.43962.qm@web27007.mail.ukl.yahoo.com> Spike wrote: " No, I don't think voice operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other people's mobile phone conversations." You get my workplace when the phones are busy. People call in, and you can hear nothing but several phone conversations at once. Noise doesn't stop the modern office. Also, voice-activated computers currently exist for automated phone lines, more sophisticated ones could replace call centres. Finally, thinking how many people were talking to themselves in my local coffee shop this morning (well, maybe they were talking to someone on their mobile phone using hands-free, but I think they're all crazy people sent to annoy me while I go to get a drink) you'll be surprised how much noise and social annoyance people can take. 
Tom From spike66 at att.net Mon Feb 7 02:21:21 2011 From: spike66 at att.net (spike) Date: Sun, 6 Feb 2011 18:21:21 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <93121.43962.qm@web27007.mail.ukl.yahoo.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> Message-ID: <002801cbc66d$b6306170$22912450$@att.net> ... On Behalf Of Tom Nowell Subject: Re: [ExI] Voice operated computers Spike wrote: " No, I don't think voice operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other people's mobile phone conversations." Tom Actually this wasn't my comment, and I disagree with it in any case. Reasoning: sound travels through solids and liquids much more readily than through air. I can imagine an I/O device which is in physical contact with the skull and goes in the ear. Failing that, the boom microphone that is right in front of the mouth, as in a hands-free telephone, works well enough in a crowded office. With that in mind, I think we will see in general use a voice-operated computer. But what I am really thinking about is a computer-operated voice. My goal is to allow people to have a conversation with an avatar on a video screen. spike From kellycoinguy at gmail.com Mon Feb 7 05:15:38 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 22:15:38 -0700 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Sun, Feb 6, 2011 at 2:36 AM, BillK wrote: > On Sun, Feb 6, 2011 at 9:01 AM, Kelly Anderson ?wrote: > I meant the social annoyance factor of having a roomful of people > shouting at their handheld computers. Social norms CAN change. I'm not sure they will, but when you can have as meaningful a conversation with your digital personal assistant as you can now have with a real assistant on a cell phone, then it could change. Imagine, for example, that you talk to your digital personal assistant over your cell phone. It looks no different than today's cell phone calls... so I see this as a definite possibility. > But I suppose more technology could remove that problem. It can help. There is a business opportunity here because the PAIN of listening to other people's phone calls all the time is an addressable problem. Someone will make lots of money off of this pain. > If everyone is wearing an earpiece (cable or bluetooth) and using a > sub-vocal microphone taped to their neck then the public annoyance > factor disappears. > (There is also the thought controlled tech that is being developed in > the labs for disabled people). Some day. It's a ways off IMHO. > Then we would have to face the problem (already appearing) of > attention distraction where people step in front of cars or walk off > railway platforms while their attention is off in the cloud. You > already see people at parties sitting silently in a circle, all > tapping away at their phones, tweeting about how great the party is to > their 500 followers. ?;) On the other hand, once autonomous cars are shuttling us around, the driving while distracted problem goes away. One could hope that the overall death rate would decrease. 
-Kelly From kellycoinguy at gmail.com Mon Feb 7 05:36:58 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 22:36:58 -0700 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: <4D4EB126.1060004@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> <4D4EB126.1060004@lightlink.com> Message-ID: On Sun, Feb 6, 2011 at 7:33 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > That said, there are questions. ?If something is distributed, is it (a) the > dormant, generic "concepts" in long term memory, or is it the active, > instance "concepts" of working memory? ?Very big difference. ?I believe > there are reasons to talk about the long term memory concepts as being > partialy distributed, but that would not apply to the instances in working > memory..... ? and in the above architecture I was talking only about the > latter. Ok. I can follow that working memory is likely not holographic. That actually makes sense. Memory and other long term storage probably is though. > If you try to push the idea that the instance atoms (my term for the active > concepts) are in some sense "holographic" or distributed, you get into all > sorts of theoretical and practical snarls. I'll have to take your word for that. > I published a paper with Trevor Harley last year in which we analyzed a > paper by Quiroga et al, that made claims about the localization of concepts > to neurons. ?That paper contains a more detailed explanation of the mapping, > using ideas from my architecture. ?It is worth noting that Quiroga et al's > explanation of their own data made no sense, and that the alternative that > Trevor and I proposed actually did account for the data rather neatly. I think I read this paper or one with very similar concepts that you wrote. >> Kurzweil in TSIN does the back of the >> envelope calculations about the overall computational power of the >> human brain, and it's a lot more than you are presenting here. > > Of course! > > Kurzweil (and others') calculations are based on the crudest possible > calculation of a brain emulation AGI, in which every wretched neuron in > there is critically important, and cannot be substituted for something > simpler. ?That is the dumb approach. Kurzweil does two separate calculations, one is a VERY brute force simulation, and the other is a more functional approach. I think they differed by around four orders of magnitude. You are talking about several more orders of magnitude less computation. And, while I don't have enough information about your approach to determine if it will work (I assume you don't either) it seems that you are attempting a premature optimization. Let's get something working first, then optimize it later. > What I am trying to do is explain an architecture that comes from the > cognitive science level, and which suggests that the FUNCTIONAL role played > by neurons is such that it can be substituted very adequately by a different > computational substrate. > > So, my claim is that, functionally, the human cognitive system may consist a > network of about a million cortical column units, each of which engages in > relatively simple relaxation processes with neighbors. 
> > I am not saying that this is the exactly correct picture, but so far this > architecture seems to work as a draft explanation for a broad range of > cognitive phenomena. > > And if it is correct, the the TSIN calculations are pointless. Sure. >> I have no doubt that as we figure out what the brain is doing, we'll >> be able to optimize. But we have to figure it out first. You seem to >> jump straight to a solution as a hypothesis. Now, having a hypothesis >> is a good part of the scientific method, but there is that other part >> of testing the hypothesis. What is your test? > > Well, it may seem like I pulled the hypothesis out of the hat yesterday > morning, but this is actually just a summary of a project that started in > the late 1980s. > > The test is an examination of the consistency of this architecture with the > known data from human cognition. ?(Bear in mind that most artificial > intelligence researchers are not "scientists" .... they do not propose > hyotheses and test them ..... they are engineers or mathematicians, and what > they do is play with ideas to see if they work, or prove theorems to show > that some things should work. ?From that perspective, what I am doing is > real science, of a sort that almost died out in AI a couple of decades ago). > > For an example of the kind of tests that are part of the research program I > am engaged in, see the Loosemore and Harley paper. I can't argue with that. Darwin sat on his hypothesis for decades until he had it just right. If you want to do the same, then more power to you. My question remains though, have you any preliminary results you can share that indicates that your system functions? >>> Now, if this conjecture is accurate, you tell me how long ago we had the >>> hardware necessary to build an AGI.... ;-) >> >> I'm sure we have that much now. The problem is whether the conjecture >> is correct. How do you prove the conjecture? Do something >> "intelligent". What I don't see yet in your papers, or in your posts >> here, are results. What "intelligent" behavior have you simulated with >> your hypothesis Richard? I'm not trying to be argumentative or >> challenging, just trying to figure out where you are in your work and >> whether you are applying the scientific method rigorously. > > The problem of giving you and answer is complicated by the paradigm. ?I am > adopting a systematic top-down scan that starts at the framework level and > proceeds downward. ?The L & H paper shows an application of the method to > just a couple of neuroscience results. ?What I have here are similar > analyses of several dozen other cognitive phenomena, in various amounts o > detail, but these are not published yet. ?There are other stages to the work > that involve simulations of particular algorithms. Simulations of algorithms seems promising. Can you say more about that? > This is quite a big topic. ?You may have to wait for my thesis to be > published to get a full answer, because fragments of it can be confusing. I started my Thesis in 1988. It hasn't been finished either. :-) I have published one paper though... > All I can say at the moment is that the architecture gives rise to simple, > elegant explanations, at a high level, of a wide range of cognitive data, > and the mere fact that one architecture can do such a thing is, in my > experience, unique. 
?However, I do not want to publish that as it stands, > because I know what the reaction would be if there is no further explanation > of particular algorithms, down at the lowest level. ?So, I continue to work > toward the latter, even though by my own standards I already have enough to > be convinced. If you are right, it will be worth waiting for. If you aren't sharing details as you go, then it will be harder for you to get help from others. >> That may be the case. And once we figure out how it all works, we >> could well reduce it to this level of computational requirement. But >> we haven't figured it out yet. >> >> By most calculations, we spend an inordinate amount of our cerebral >> processing on image processing the input from our eyes. Have you made >> any image processing breakthroughs? Can you tell a cat from a dog with >> your approach? You seem to be focused on concepts and how they are >> processed. How does your method approach the nasty problems of image >> classification and recognition? > > The term "concept" is a vague one. ?I used it in our discussion because it > is conventional. ?However, in my own writings I talk of "atoms" and > "elements", because some of those atoms correspond to very low-level > features such as the ones that figure in the visual system. Do you have any results in the area of image processing? > As far as I can tell at this stage, the visual system uses the same basic > architecture, but with a few wrinkles. ?One of those is mechanism to spread > locally acquired features into a network of "distributed, position-specific" > atoms. ?This means that when visual regularities are discovered, they > percolate down in the system and become distributed across the visual field, > so they can be computed in parallel. That sounds right. > Also, the visual system does contain some specialized pathways (the "what" > and "where" pathways) that engage in separate computations. These are > already allowed for in the above calcuations, but they are specialized > regions of that million-column system. > > I had better stop. ?Must get back to work. Sounds like the right approach... :-) If you are convinced, don't let naysayers get you down. But to get rid of the "it will never fly" crowd, you have to get something out of the lab eventually. Good luck Richard. -Kelly From eugen at leitl.org Mon Feb 7 11:38:37 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 7 Feb 2011 12:38:37 +0100 Subject: [ExI] Plastination In-Reply-To: References: <20110202073448.GA23560@leitl.org> Message-ID: <20110207113836.GA23560@leitl.org> On Sat, Feb 05, 2011 at 01:24:12AM -0700, Kelly Anderson wrote: > On Wed, Feb 2, 2011 at 12:34 AM, Eugen Leitl wrote: > > On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote: > >> Has anyone seriously looked at plastination as a method for preserving > >> brain tissue patterns? > > > > Yes. It doesn't work. > > Thanks for your answer. You sound pretty definitive here, and I > appreciate that you might well be correct, but I didn't see that in > what you referenced. Perhaps I missed something. When you say it > doesn't work, are you saying that the structures that are preserved > are too large to reconstruct a working brain? Or was there some other > objection? Or were you merely stating that it wasn't Gunther's intent > to create brains that could be revivified later? Crude plastination as practiced by Gunther von Hagens does not preserver ultrastructure. 
The proposal by Ken Hayworth is not plastination but fixation, including heavy metal stain, then plastination. The method is not validated, and would be difficult to validate. > I personally don't go in for the quantum state stuff... if that has > anything to do with your answer. There is plenty in the brain at the > gross level to account for what's going on in there, IMHO. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Mon Feb 7 15:40:00 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Feb 2011 16:40:00 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4D7982.5090702@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> <4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com> Message-ID: On 5 February 2011 17:23, Richard Loosemore wrote: > So, to be fair, I will admit that the distinction between ?"How did this > machine come to get built?" ?and ?"How does this machine actually work, now > that it is built?" becomes rather less clear when we are talking about > concept learning (because concepts play a role that fits somewhere between > structure and content). How a machine is built is immaterial to my argument. For a darwinian program I refer to one the purpose to which is, very roughly, fitness-maxisiming. Any such program may be the "natural" product of the mechanism "heritance/mutation/selection" along time, or can be emulated by design. In such case, empathy, aggression, flight, selfishness etc. have a rather literal sense in that they are aspects of the reproductive strategy of the individual concerned, and/or of the replicators he carries around. For anything which is not biological, or designed to emulate deliberately the Darwinian *functioning* of biological system, *no matter how intelligent they are*, I contend that aggression or altruism are as applicable only inasmuch they are to ordinary PCs or other universal computing devices. If, on the other hand, AGIs are programmed to execute Darwinian programs, obviously they would be inclined to adopt the mix of behaviours which is best in Darwinian terms for their "genes", unless of course the emulation is flawed. What else is new? In fact, I maintain that they would be hardly discernible in behavioural terms from a computer with an actual human brain inside. -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 7 17:16:43 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Feb 2011 18:16:43 +0100 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: <4D4D7D59.8010205@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On 5 February 2011 17:39, Richard Loosemore wrote: > This is exactly the line along which I am going. ? 
I have talked in the > past about building AGI systems that are "empathic" to the human > species, and which are locked into that state of empathy by their > design. ?Your sentence above: > >> It seems to me that one safety precaution we would want to have is >> for the first generation of AGI to see itself in some way as actually >> being human, or self identifying as being very close to humans. > > ... captures exactly the approach I am taking. ?This is what I mean by > building AGI systems that feel empathy for humans. ?They would BE humans in > most respects. If we accept that "normal" human-level empathy (that is, a mere ingredient in the evolutionary strategies) is enough, we just have to emulate a Darwinian machine as similar as possible in its behavioural making to ourselves, and this shall be automatically part of its repertoire - along with aggression, flight, sex, etc. If, OTOH, your AGI is implemented in view of other goals than maximing its fitness, it will be neither "altruistic" nor "selfish", it will simply execute the other program(s) it is being given or instructed to develop as any other less or more intelligent, less or more dangerous, universal computing device. -- Stefano Vaj From sjatkins at mac.com Mon Feb 7 17:44:54 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 07 Feb 2011 09:44:54 -0800 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> <4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com> Message-ID: <9D2DE678-2DAA-46F6-80FF-4F24D780B8F0@mac.com> On Feb 7, 2011, at 7:40 AM, Stefano Vaj wrote: > On 5 February 2011 17:23, Richard Loosemore wrote: >> So, to be fair, I will admit that the distinction between "How did this >> machine come to get built?" and "How does this machine actually work, now >> that it is built?" becomes rather less clear when we are talking about >> concept learning (because concepts play a role that fits somewhere between >> structure and content). > > How a machine is built is immaterial to my argument. For a darwinian > program I refer to one the purpose to which is, very roughly, > fitness-maxisiming. So you are calling any/all goal seeking algorithms and anything running them "darwinian"? That is a bit broad. Instead of "darwinian" which has become quite a package deal of concepts and assumptions, perhaps use "genetic algorithm based" when that is what you mean? All goal-seeking is not a GA. A genetic algorithm requires a fitness function/measure of success, some means of variation, and a means of preserving those instances and traits that are better by the fitness function possibly with some means of combination of more promising candidates. > > Any such program may be the "natural" product of the mechanism > "heritance/mutation/selection" along time, or can be emulated by > design. In such case, empathy, aggression, flight, selfishness etc. > have a rather literal sense in that they are aspects of the > reproductive strategy of the individual concerned, and/or of the > replicators he carries around. > Here you seem to be mixing in things like reproduction and more anthropomorphic elements that are quite specific to a small subset of GAs. 
So you seem to have started with too broad a use of "darwinian" and then from that assume things true of a much smaller subset of things actually "darwinian". > For anything which is not biological, or designed to emulate > deliberately the Darwinian *functioning* of biological system, *no > matter how intelligent they are*, I contend that aggression or > altruism are as applicable only inasmuch they are to ordinary PCs or > other universal computing devices. That would not at all follow. Anything that wishes to preserve itself and defines the good as that which furthers its interests and which had enough freedom of action would likely exhibit some of these behaviors. And it has little to do with "darwinian" per se. - s From sjatkins at mac.com Mon Feb 7 17:47:30 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 07 Feb 2011 09:47:30 -0800 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On Feb 7, 2011, at 9:16 AM, Stefano Vaj wrote: > On 5 February 2011 17:39, Richard Loosemore wrote: >> This is exactly the line along which I am going. I have talked in the >> past about building AGI systems that are "empathic" to the human >> species, and which are locked into that state of empathy by their >> design. Your sentence above: >> >>> It seems to me that one safety precaution we would want to have is >>> for the first generation of AGI to see itself in some way as actually >>> being human, or self identifying as being very close to humans. >> >> ... captures exactly the approach I am taking. This is what I mean by >> building AGI systems that feel empathy for humans. They would BE humans in >> most respects. > > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible in its behavioural > making to ourselves, and this shall be automatically part of its > repertoire - along with aggression, flight, sex, etc. Human empathy is not that deep nor is empathy per se some free floating good. Why would we want an AGI that was pretty much just like a human except presumably much more powerful? > > If, OTOH, your AGI is implemented in view of other goals than maximing > its fitness, it will be neither "altruistic" nor "selfish", it will > simply execute the other program(s) it is being given or instructed to > develop as any other less or more intelligent, less or more dangerous, > universal computing device. Altruistic and selfish are quite overloaded and nearly useless concepts as generally used. - s From stefano.vaj at gmail.com Mon Feb 7 17:20:13 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Feb 2011 18:20:13 +0100 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org> Message-ID: On 5 February 2011 09:06, Kelly Anderson wrote: > On Fri, Feb 4, 2011 at 8:41 AM, Stefano Vaj wrote: >> OTOH, it prevents falling asleep, thus allowing aliens to replace you >> with perfect copies of yourself without none being any the wiser... 
>> :-D > > If it is a "perfect" copy, then does it really matter? :-) What about a rose by another name? :-))) I sometimes wonder if our meditations on such questions since the middle age is akin to a loop in PC programming... :-) -- Stefano Vaj From rpwl at lightlink.com Mon Feb 7 17:53:43 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 07 Feb 2011 12:53:43 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> <4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com> Message-ID: <4D5031A7.7060207@lightlink.com> Stefano Vaj wrote: > On 5 February 2011 17:23, Richard Loosemore wrote: >> So, to be fair, I will admit that the distinction between "How did this >> machine come to get built?" and "How does this machine actually work, now >> that it is built?" becomes rather less clear when we are talking about >> concept learning (because concepts play a role that fits somewhere between >> structure and content). > > How a machine is built is immaterial to my argument. For a darwinian > program I refer to one the purpose to which is, very roughly, > fitness-maxisiming. > > Any such program may be the "natural" product of the mechanism > "heritance/mutation/selection" along time, or can be emulated by > design. In such case, empathy, aggression, flight, selfishness etc. > have a rather literal sense in that they are aspects of the > reproductive strategy of the individual concerned, and/or of the > replicators he carries around. > > For anything which is not biological, or designed to emulate > deliberately the Darwinian *functioning* of biological system, *no > matter how intelligent they are*, I contend that aggression or > altruism are as applicable only inasmuch they are to ordinary PCs or > other universal computing devices. > > If, on the other hand, AGIs are programmed to execute Darwinian > programs, obviously they would be inclined to adopt the mix of > behaviours which is best in Darwinian terms for their "genes", unless > of course the emulation is flawed. What else is new? > > In fact, I maintain that they would be hardly discernible in > behavioural terms from a computer with an actual human brain inside. > Thank you for the clarification of your position. Unfortunately, I have to make a stand here and say that I think this line of analysis is profoundly incoherent (I am not saying *you* are incoherent, I refer only to the general theoretical position that you are defending here). The main problem with your argument is that it begins with some quite sensible talk about those features of naturally intelligent systems - like empathy, aggression, selfishness, etc. - that have historically played the role of helping reproductive success, but then your argument goes screaming off in the opposite direction when I want to point to the *mechanisms* that are inside the individuals, which *cause* those features to appear on the outside. Everything that I have been saying depends on talking about those mechanisms -- their characteristics, their presence or absence in various kinds of system, and so on. My claims are all about the mechanisms themselves. 
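To make the word "mechanism" a little more concrete, here is a deliberately trivial toy sketch -- hypothetical Python invented for this post, not a fragment of my architecture or of anybody's actual AGI code -- in which the only "motivations" an agent ever displays are the mechanism modules that were explicitly built into it:

    class Agent:
        def __init__(self, mechanisms):
            # mechanisms: map from a situation feature to a response routine
            self.mechanisms = mechanisms

        def respond(self, situation):
            # the observable behaviour is nothing over and above whatever
            # the built-in mechanisms produce for this situation
            return [m(situation) for trigger, m in self.mechanisms.items()
                    if trigger in situation]

    def aggression(s):
        return "threaten rival over " + s["resource"]

    def empathy(s):
        return "comfort " + s["other"]

    situation = {"resource": "food", "other": "stranger in distress"}

    hawk = Agent({"resource": aggression, "other": empathy})
    dove = Agent({"other": empathy})
    print(hawk.respond(situation))   # aggression AND empathy both show up
    print(dove.respond(situation))   # no aggression: no such mechanism exists

The toy is silly, but it illustrates the point I keep making: take the aggression routine out and no amount of talk about the system's history, darwinian or otherwise, will make aggressive behaviour appear; put it in and the behaviour appears regardless of how the system came to be built.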
But, in spite of all my efforts, you insist on jumping right over that part of the topic and instead talking about the observable characteristics of the systems, as if there were no mechanisms underneath, that are responsible for making the characteristics appear. The way you describe the situation, it is as if aggression, empathy, selfishness, etc. all suddenly appear out of nowhere. For example, you say: "For anything which is not biological, or designed to emulate deliberately the Darwinian *functioning* of biological system, *no matter how intelligent they are*, I contend that aggression or altruism are as applicable only inasmuch they are to ordinary PCs or other universal computing devices." But this is surely nonsensical! If the mechanisms that cause aggression, empathy and selfishness are built into a PC (along with all the supporting mechanisms needed to make it intelligent) then the PC will exhibit aggression, empathy and selfishness. But if the very same PC is built with all the "intelligence" components, but WITHOUT the mechanisms that give rise to aggression, empathy and selfishness, then it will not show those characteristics. There is nothing special about the system being "darwinian", nothing special about it being a PC or a Turing machine of this that or the other type..... all that matters is that it be built with (a) a reasonably full range of "intelligence" mechanisms, and in addition a set of motivation mechanisms such as the aggression, empathy and selfishness. Aggression, empathy and selfishness don't come for free with systems of any stripe (darwinian or otherwise). They don't appear out of thin air if the system is competing against others in an exosystem. They are specific mechanisms that can, in the right circumstances, play a role in a natural selection process. You go on to make another statement that makes no sense, in this context: "If, on the other hand, AGIs are programmed to execute Darwinian programs, obviously they would be inclined to adopt the mix of behaviours which is best in Darwinian terms for their "genes", unless of course the emulation is flawed. What else is new?" This has nothing to do with what I was originally talking about, it is just a claim about a certain class of AGIs, as a population, existing in the context of the right ecosystem, with unavoidable sex, birth and death of individuals, etc etc etc ....... in other words, your statement assumes the full gamut of evolutionary mechanisms that are present in natural ecosystems. Under those very restricted circumstances, the mechanisms in the AGIs that gave rise to aggression, empathy, selfishness, etc, would play a role to select future mechanisms in the AGIs, and, yes, then the mechanisms might evolve over time. But this is, I am afraid, both (a) irrelevant to any claims I made about the behavior of the first AGI, and (b) extraordinarily implausible anyway, because all the conditions I just mentioned would likely be completely inoperative! The AGIs would NOT be tied to sexual reproduction, with mixing of genes, as their only way to reproduce. They would NOT be existing in an ecosystem in which they had to compete for resources, and so on and so on. So, on both counts the position you are taking makes no sense here. It says nothing of relevance to the question of what the motivation mechanisms are, and how the behavior of the very first AGI would turn out, when it is switched on. 
And, further, it makes unsupportable assumptions about some future AGI ecosystem that, in all likelihood, will never exist. Richard Loosemore From rpwl at lightlink.com Mon Feb 7 17:53:55 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 07 Feb 2011 12:53:55 -0500 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D5031B3.6050906@lightlink.com> Stefano Vaj wrote: > On 5 February 2011 17:39, Richard Loosemore wrote: >> This is exactly the line along which I am going. I have talked in the >> past about building AGI systems that are "empathic" to the human >> species, and which are locked into that state of empathy by their >> design. Your sentence above: >> >>> It seems to me that one safety precaution we would want to have is >>> for the first generation of AGI to see itself in some way as actually >>> being human, or self identifying as being very close to humans. >> ... captures exactly the approach I am taking. This is what I mean by >> building AGI systems that feel empathy for humans. They would BE humans in >> most respects. > > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible in its behavioural > making to ourselves, and this shall be automatically part of its > repertoire - along with aggression, flight, sex, etc. > > If, OTOH, your AGI is implemented in view of other goals than maximising > its fitness, it will be neither "altruistic" nor "selfish", it will > simply execute the other program(s) it is being given or instructed to > develop as any other less or more intelligent, less or more dangerous, > universal computing device. > Non sequitur. As I explain in the parallel response to your other post, the dichotomy you describe is utterly without foundation. Richard Loosemore From stefano.vaj at gmail.com Mon Feb 7 16:54:44 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 7 Feb 2011 17:54:44 +0100 Subject: [ExI] sports blammisphy In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> <009601cbc57f$71abbcf0$550336d0$@att.net> Message-ID: On 6 February 2011 06:55, Kelly Anderson wrote: > spike, I think this points out a recurring trans-humanist, cyborg and > even fyborg theme. What is cheating in the brave new world we are > making? If the Olympics are only open to original unenhanced human > beings, then it just becomes a race to figure out who is enhanced, and > who is not. It's already happening at the top level of sports, of > course. But when we start talking about enhancements that are > "built-in" to people, especially in the context of intellectual > pursuits, is that really cheating any more? No. In fact, it could be argued that the purpose of the prohibition of > "cheating" is in most cases to guarantee that possible successful > cheaters need be so ingenious as to deserve to win...
:-) More practically, as long as games and sports and exams aim at reproducing scenarios which should be relevant to real-life situations, when the everyday availability of the "tricks" and "enhancements" become ubiquitous, I think it is reasonable to allow them on a general basis. Is it really important anymore to test the skill of human beings in performing very large multiplications, eg? Of course, nothing prevents people from creating as well purely artificial contests where some "handicap" or other is imposed on contestants. Such as fighting a boxe match with one hand behind your back, or run a marathon without drinking, or resolve math problems without calculators, or not taking supplementation aimed at increasing one's performance, or fishing with bamboo canes. As long as there is somebody interested, for instance as it may reproduce what one was faced with in bygone days, nothing wrong with that... -- Stefano Vaj From sjatkins at mac.com Mon Feb 7 19:52:57 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 07 Feb 2011 11:52:57 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <93121.43962.qm@web27007.mail.ukl.yahoo.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> Message-ID: <4D504D99.7000409@mac.com> On 02/06/2011 02:42 PM, Tom Nowell wrote: > Spike wrote: " No, I don't think voice operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other > people's mobile phone conversations." Subvocalization is your friend. What I don't want is voice output. Voice and for that matter, video, is notoriously linear and only capable of so much playback speed increase while remaining comprehensible. I am very sad that it is becoming quite popular to make video instead of using text for more and more information transfer. It is not amenable to search, indexing or quick scanning. A step backwards in my view. I read a LOT faster than I process speech. - s > You get my workplace when the phones are busy. People call in, and you can hear nothing but several phone conversations at once. Noise doesn't stop the modern office. Also, voice-activated computers currently exist for automated phone lines, more sophisticated ones could replace call centres. > > Finally, thinking how many people were talking to themselves in my local coffee shop this morning (well, maybe they were talking to someone on their mobile phone using hands-free, but I think they're all crazy people sent to annoy me while I go to get a drink) you'll be surprised how much noise and social annoyance people can take. 
> > Tom > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Mon Feb 7 20:42:09 2011 From: spike66 at att.net (spike) Date: Mon, 7 Feb 2011 12:42:09 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <4D504D99.7000409@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> Message-ID: <006301cbc707$7db4a3c0$791deb40$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins Subject: Re: [ExI] Voice operated computers On 02/06/2011 02:42 PM, Tom Nowell wrote: > Spike wrote (actually he did not write): " No, I don't think voice operated computers will ever > appear in general use... But I can meet this notion part way. The application I have in mind is not general use, but rather a very specific use: human-like interfaces for impaired humans. > What happens when you get a group of people all shouting at their handheld computers?... Don't know, don't see any reason why they would do that. A properly interfaced voice activated computer wouldn't require it. Samantha wrote: > I read a LOT faster than I process speech... -s Me too and I am also frustrated with more and more news content in the form of video, which I can seldom summon sufficient attention span to view. I want only text, if the purpose in information exchange. Speech is too slow, and the hearer has too little control over it, even with a scroll bar. This does something interesting in political speeches, worthy of study. We take a control group who hears a political speech, audio only. A second group gets audio and visual. A third group gets text only. Afterwards, we compare scores on comprehension, and perhaps have them choose the important messages. I suspect the audio-only group and the audio-visual group might be similar, but the text only group would get a very different message. spike From mbb386 at main.nc.us Mon Feb 7 22:51:05 2011 From: mbb386 at main.nc.us (MB) Date: Mon, 7 Feb 2011 17:51:05 -0500 Subject: [ExI] Voice operated computers In-Reply-To: <006301cbc707$7db4a3c0$791deb40$@att.net> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> Message-ID: <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> > Samantha wrote: > I read a LOT faster than I process speech... -s > > Spike wrote: > Me too and I am also frustrated with more and more news content in the form > of video, which I can seldom summon sufficient attention span to view. I > want only text, if the purpose in information exchange. Speech is too slow, > and the hearer has too little control over it, even with a scroll bar. > I have trouble with this as well. If it's worth my time I want to be able to *study* on it a bit... not just have some flash jiggety jiggety go by on my screen. 
:( Regards, MB From kellycoinguy at gmail.com Tue Feb 8 03:07:21 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 7 Feb 2011 20:07:21 -0700 Subject: Re: [ExI] Voice operated computers In-Reply-To: <4D504D99.7000409@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> Message-ID: On Mon, Feb 7, 2011 at 12:52 PM, Samantha Atkins wrote: > On 02/06/2011 02:42 PM, Tom Nowell wrote: >> >> Spike wrote: " No, I don't think voice operated computers will ever appear >> in general use. Think about it. What happens when you get a group of people >> all shouting at their handheld computers? It's bad enough listening to other >> people's mobile phone conversations." > > Subvocalization is your friend. What I don't want is voice output. Voice > and for that matter, video, is notoriously linear and only capable of so > much playback speed increase while remaining comprehensible. I am very sad > that it is becoming quite popular to make video instead of using text for > more and more information transfer. It is not amenable to search, indexing > or quick scanning. A step backwards in my view. I read a LOT faster than I > process speech. Bad video is indeed not your friend. However, there are times when video conveys a lot more information than you could get from text. For example, I made some videos showing how to solve very large Rubik's cubes quickly. I don't think I could have done nearly as good a job at that without the video. My friends at Orabrush are extremely happy with their video marketing, since it turned them from a complete flop into a great success. That simply could not have happened as quickly or as well without video. Video of talking heads saying nothing, I can sure do without that. And enough of teenagers doing stupid and dangerous stuff. The medium must match the message. Without television, JFK and Ronald Reagan would never have been elected. Without the Internet, Ron Paul would have been just another ignored third-party candidate. Using the right medium in the right way is critical. As we start to see more and more 3D being used, you can be sure that a lot of it will be used badly (e.g. The Last Airbender). But with Avatar, we see that it can be used to great effect when you go all the way. In the future, we will have projection onto the retinas, which should not be much different from 3D video in its usage; it's basically a new kind of monitor. Heads-up displays in cars should become more popular sometime in the next few years. Night vision is just too effective not to have in some cars at some point. But if it's used to project video and other distracting things, then it will be very bad. The next really different medium IMHO is haptics. It seems fairly clear to me that the technology will be first driven by teledildonics (after all, remember who first put video on the web). But who will be the first presidential candidate to be elected (or gain notoriety) because they make creative use of a haptic interface? It gives a whole new meaning to "I feel your pain"... :-) -Kelly From kellycoinguy at gmail.com Tue Feb 8 03:15:03 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 7 Feb 2011 20:15:03 -0700 Subject: Re: [ExI] sports blammisphy In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> <009601cbc57f$71abbcf0$550336d0$@att.net> Message-ID: > No.
In fact, it could be argued that the purpose of the prohibition of > "cheating" is in most case to guarantee that possible successful > cheaters need be so ingenuous as to deserve to win... :-) This certainly seems to be the recent history of the Olympics. Even when it's not cheating, such as the new US bobsled that is unveiled every four years. > More practically, as long as games and sports and exams aim at > reproducing scenarios which should be relevant to real-life > situations, when the everyday availability of the "tricks" and > "enhancements" become ubiquitous, I think it is reasonable to allow > them on a general basis. Is it really important anymore to test the > skill of human beings in performing very large multiplications, eg? I think this is clearly going to be the case for large base participative sports such as high school football, basketball, etc. But when you get into the elite levels of sports, I think there will be significant resistance to such things for some time. > Of course, nothing prevents people from creating as well purely > artificial contests where some "handicap" or other is imposed on > contestants. Such as fighting a boxe match with one hand behind your > back, or run a marathon without drinking, or resolve math problems > without calculators, or not taking supplementation aimed at increasing > one's performance, or fishing with bamboo canes. As long as there is > somebody interested, for instance as it may reproduce what one was > faced with in bygone days, nothing wrong with that... One question of interest is whether enhancements will be made such that they can be "turned off". So if you have the artificial blood cells that allow you to process oxygen efficiently and hold your breath for twenty minutes, can you turn it off in order to participate in the Olympic marathon? -Kelly From kellycoinguy at gmail.com Tue Feb 8 03:23:47 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 7 Feb 2011 20:23:47 -0700 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible in its behavioural > making to ourselves, and this shall be automatically part of its > repertoire - along with aggression, flight, sex, etc. > > If, OTOH, your AGI is implemented in view of other goals than maximing > its fitness, it will be neither "altruistic" nor "selfish", it will > simply execute the other program(s) it is being given or instructed to > develop as any other less or more intelligent, less or more dangerous, > universal computing device. The real truth of the matter is that AGIs will be manufactured (or trained) with all sorts of tweaking. There will be loving AGIs, and Spock-like AGIs. There will undoubtedly be AGIs with personality disorders, perhaps surpassing Hitler in their cruelty. If for no other reason than to be an opponent in an advanced video game. Just recall that if it can be done, it will be done. The question for us is what sorts of rights we give AGIs. Is there any way to keep bad AGIs "in the bottle" in some safe context? 
Will there even be a way of determining that an AGI is, in fact, a sociopath? We can't even find the Ted Bundys among us. Policing in the future is going to be very interesting. What sorts of AGIs will we create to be the police of the future? Certainly people won't be able to police them. We can't keep the law up with technology now. What privacy rights will an AGI have? It's all very messy. Should be fun! -Kelly From stefano.vaj at gmail.com Tue Feb 8 11:19:49 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Feb 2011 12:19:49 +0100 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On 7 February 2011 18:47, Samantha Atkins wrote: > Human empathy is not that deep nor is empathy per se some free floating good. ? Why would we want an AGI that was pretty much just like a human except presumably much more powerful? I can think only of two reasons: - for the same reason we may want to develop an emulation of a cat or of a bug, that is, for the sake of it, as an achievement which is interesting per se; - for the same reason we paint realistic portraits of living human beings, to perpetuate some or most of their traits for the foreseeable future (see under "upload"). For everything else, computers may become indefinitely more intelligent and ingenuous at resolving diverse categories of problems without exhibiting any bio-like features such as altruism, selfishness, aggression, sexual drive, will to power, empathy, etc. more than they do today. > Altruistic and selfish are quite overloaded and nearly useless concepts as generally used. I suspect that you are right. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 8 11:08:23 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Feb 2011 12:08:23 +0100 Subject: [ExI] Voice operated computers In-Reply-To: <4D504D99.7000409@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> Message-ID: On 7 February 2011 20:52, Samantha Atkins wrote: > I am very sad > that it is becoming quite popular to make video instead of using text for > more and more information transfer. ?It is not amenable to search, indexing > or quick scanning. ?A step backwards in my view. ?I read a LOT faster than I > process speech. Indeed. With a PS3 you can watch a blu-ray movie at 2x speed without a mickey-mouse distortion, but it is crazy to have to watch a talking head on Youtube to listen at things that you could so much more comfortably and quickly read... I wonder whether this is a byproduct of increasing semi-literacy in western countries. -- Stefano Vaj From hkeithhenson at gmail.com Tue Feb 8 21:50:52 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 8 Feb 2011 14:50:52 -0700 Subject: [ExI] Anonymous and AI Message-ID: I am kind of surprised there is no discussion of this http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments It is obvious to me that the (probable) emergence of AI will come from this "group." The motivation of the AI will be "lulz." Are we doomed? 
Keith PS :-) From pharos at gmail.com Tue Feb 8 23:23:22 2011 From: pharos at gmail.com (BillK) Date: Tue, 8 Feb 2011 23:23:22 +0000 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: On Tue, Feb 8, 2011 at 9:50 PM, Keith Henson wrote: > I am kind of surprised there is no discussion of this > > http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments > > It is obvious to me that the (probable) emergence of AI will come from > this "group." ?The motivation of the AI will be "lulz." > > Are we doomed? > > Well, if you don't have a website, it can't be hacked. If you publicise yourself as a security consultancy firm, then you should make pretty certain that your internet-facing websites are secure. That is not a trivial exercise. Which is why so many companies (and government departments) have pretty useless security. A large part of this attack was social engineering, not computer hacking at all. Humans are a weak spot in most organisations. You need extra fail-safe security to guard against people being manipulated. People like to be helpful, like holding the door open for the pretty blonde who has forgotten her entry swipe card. Business laptops get stolen / lost every week with confidential information on them. Why aren't the hard disks encrypted? If this companies stolen emails contained valuable information, why weren't they encrypted? Proper security is an expensive pain in the ass for everyone involved, but you ignore it at your own risk. BillK From spike66 at att.net Tue Feb 8 23:04:28 2011 From: spike66 at att.net (spike) Date: Tue, 8 Feb 2011 15:04:28 -0800 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <005201cbc7e4$89d0e230$9d72a690$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Keith Henson Subject: [ExI] Anonymous and AI >http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-secur ity-company-hbgary?commentpage=all#start-of-comments >It is obvious to me that the (probable) emergence of AI will come from this "group." ... >Are we doomed? >Keith >PS :-) What you mean "we" Kimosabe? Anonymous PS {8^D From sjatkins at mac.com Wed Feb 9 02:59:16 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 08 Feb 2011 18:59:16 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> Message-ID: <4D520304.1090807@mac.com> On 02/07/2011 02:51 PM, MB wrote: > >> Samantha wrote: >> I read a LOT faster than I process speech... -s >> >> Spike wrote: >> Me too and I am also frustrated with more and more news content in the form >> of video, which I can seldom summon sufficient attention span to view. I >> want only text, if the purpose in information exchange. Speech is too slow, >> and the hearer has too little control over it, even with a scroll bar. >> > I have trouble with this as well. If it's worth my time I want to be able to > *study* on it a bit... not just have some flash jiggety jiggety go by on my screen. 
> :( What really, really annoys me is some 20-something (or older) doing their little "look at me and my awesome video" thing or their "I don't really give a frak about anything" video garbage for N precious, irreplaceable minutes of my life BEFORE I can find out what the (at best) small fraction of N worth of real information they have to impart is. N is much smaller in text, and I don't have the TMI of the rest of their stuff to deal with. In text I don't have to see the author's face, note their persona, react to what I can see of them in their video, know what their voice sounds like, etc. Like I say, TMI. Sticking all that in my face instead of the information I am actually after is pretty annoying. - samantha From sjatkins at mac.com Wed Feb 9 03:06:35 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 08 Feb 2011 19:06:35 -0800 Subject: Re: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D5204BB.1080605@mac.com> On 02/08/2011 03:19 AM, Stefano Vaj wrote: > On 7 February 2011 18:47, Samantha Atkins wrote: >> Human empathy is not that deep nor is empathy per se some free floating good. Why would we want an AGI that was pretty much just like a human except presumably much more powerful? > I can think only of two reasons: > - for the same reason we may want to develop an emulation of a cat or > of a bug, that is, for the sake of it, as an achievement which is > interesting per se; > - for the same reason we paint realistic portraits of living human > beings, to perpetuate some or most of their traits for the foreseeable > future (see under "upload"). > > For everything else, computers may become indefinitely more > intelligent and ingenious at resolving diverse categories of problems > without exhibiting any bio-like features such as altruism, If by altruism you mean sacrificing your values, just because they are yours, to the values of others, just because they are not yours, then it is a very bizarre thing to glorify, practice, or hope that our AGIs practice. It is on the face of it hopelessly irrational and counter-productive toward achieving what we actually value. If an AGI practices that just on the grounds that someone said it "should", then it is in need of a serious debugging. - samantha From sjatkins at mac.com Wed Feb 9 03:16:51 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 08 Feb 2011 19:16:51 -0800 Subject: Re: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <4D520723.6060301@mac.com> On 02/08/2011 03:23 PM, BillK wrote: > On Tue, Feb 8, 2011 at 9:50 PM, Keith Henson wrote: >> I am kind of surprised there is no discussion of this >> >> http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments >> >> It is obvious to me that the (probable) emergence of AI will come from >> this "group." The motivation of the AI will be "lulz." >> >> Are we doomed? >> >> > > Well, if you don't have a website, it can't be hacked. > > If you publicise yourself as a security consultancy firm, then you > should make pretty certain that your internet-facing websites are > secure. That is not a trivial exercise.
> Which is why so many companies (and government departments) have > pretty useless security. > > A large part of this attack was social engineering, not computer hacking at all. > Humans are a weak spot in most organisations. You need extra fail-safe > security to guard against people being manipulated. People like to be > helpful, like holding the door open for the pretty blonde who has > forgotten her entry swipe card. > > Business laptops get stolen / lost every week with confidential > information on them. Why aren't the hard disks encrypted? If this > companies stolen emails contained valuable information, why weren't > they encrypted? > > Proper security is an expensive pain in the ass for everyone involved, > but you ignore it at your own risk. > It would help if more systems used good biometrics instead of passwords and cardkeys. We aren't quite at the place where a simple webcam plus voice plus fingerprint is good enough. Or a subdermal chip somehow locked to your metabolism so just sending the data bits would not work. Hmm.. Of course that kicks the hell out of anonymity unless your nym system is secured to said identity and immune to attack and unwelcome snoops. - s From mbb386 at main.nc.us Wed Feb 9 04:01:37 2011 From: mbb386 at main.nc.us (MB) Date: Tue, 8 Feb 2011 23:01:37 -0500 Subject: [ExI] Voice operated computers In-Reply-To: <4D520304.1090807@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> <4D520304.1090807@mac.com> Message-ID: <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us> Webcasts drive me bonkers too. I listen much slower than I read, forget parts of what I heard, have questions about bits of it, but poof, it's gone now, moving on... and I lost it. :( What a waste of my time. Especially if one is somewhat hard of hearing. When I was a kid I asked my mom why on earth they made announcements in church when they gave out written bulletins at the door - with the announcements printed in! She hadn't any sensible answer. :))) Regards, MB From eugen at leitl.org Wed Feb 9 07:43:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 08:43:35 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <20110209074335.GF23560@leitl.org> On Tue, Feb 08, 2011 at 02:50:52PM -0700, Keith Henson wrote: > I am kind of surprised there is no discussion of this > > http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments > > It is obvious to me that the (probable) emergence of AI will come from > this "group." The motivation of the AI will be "lulz." There's actually a faint possibility that the malware ecosystem will eventually produce increasingly sophisticated self-propagating malware, which could eventually take over large fractions of a network in a single domain, and use the computational resources of the compromised hosts to run increasingly sophisticated, albeit likely still nefarious code. > Are we doomed? 
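As a toy way to see why "large fractions" is the natural endpoint rather than the exception, here is a back-of-the-envelope sketch (plain Python, every parameter invented for illustration, nothing measured and certainly nothing operational): treat self-propagating code in a pool of vulnerable hosts like an epidemic, and the compromised fraction follows a logistic curve until patching balances propagation.

    def compromised_fraction(days, beta=0.8, gamma=0.05, x0=1e-6):
        # beta: successful propagation events per compromised host per day
        # gamma: fraction of compromised hosts cleaned or patched per day
        # x0: initially compromised fraction of the host pool
        x, trajectory = x0, []
        for _ in range(days):
            x += beta * x * (1.0 - x) - gamma * x   # new infections minus cleanups
            x = min(1.0, max(0.0, x))
            trajectory.append(x)
        return trajectory

    traj = compromised_fraction(40)
    print("day 20: %.3f  day 40: %.3f" % (traj[19], traj[39]))

With made-up numbers like these the curve crawls along near zero for weeks and then saturates most of the pool; whether anything interesting ever runs on top of such a substrate is another question entirely.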
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 9 07:46:23 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 08:46:23 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <20110209074623.GG23560@leitl.org> On Tue, Feb 08, 2011 at 11:23:22PM +0000, BillK wrote: > Well, if you don't have a website, it can't be hacked. If you're dead, you can't be killed. From giulio at gmail.com Wed Feb 9 07:59:40 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 9 Feb 2011 08:59:40 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: <20110209074335.GF23560@leitl.org> References: <20110209074335.GF23560@leitl.org> Message-ID: There is a good SF book on this: http://www.theminervavirus.com/ written by Brian Shuster of Red Light Center and Utherverse fame: http://en.wikipedia.org/wiki/Utherverse_Inc. I read the book a few years ago and found it very good, especially considering that the author is not a professional SF writer. I just bought it again in Kindle format for 3.44 $ (!!), recommended. G. On Wed, Feb 9, 2011 at 8:43 AM, Eugen Leitl wrote: > On Tue, Feb 08, 2011 at 02:50:52PM -0700, Keith Henson wrote: >> I am kind of surprised there is no discussion of this >> >> http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments >> >> It is obvious to me that the (probable) emergence of AI will come from >> this "group." ?The motivation of the AI will be "lulz." > > There's actually a faint possibility that the malware ecosystem > will eventually produce increasingly sophisticated self-propagating > malware, which could eventually take over large fractions of a > network in a single domain, and use the computational resources > of the compromised hosts to run increasingly sophisticated, > albeit likely still nefarious code. > >> Are we doomed? > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A ?7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From pharos at gmail.com Wed Feb 9 09:06:46 2011 From: pharos at gmail.com (BillK) Date: Wed, 9 Feb 2011 09:06:46 +0000 Subject: [ExI] Anonymous and AI In-Reply-To: <20110209074623.GG23560@leitl.org> References: <20110209074623.GG23560@leitl.org> Message-ID: On Wed, Feb 9, 2011 at 7:46 AM, Eugen Leitl wrote: > On Tue, Feb 08, 2011 at 11:23:22PM +0000, BillK wrote: > >> Well, if you don't have a website, it can't be hacked. > > If you're dead, you can't be killed. > > The company involved had internal networks that weren't connected to the internet. These computers weren't hacked. Computers that are linked to the internet need extra security. It is good practice to keep confidential data, so far as possible, on private networks. 
BillK From eugen at leitl.org Wed Feb 9 09:54:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 10:54:44 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: <20110209074623.GG23560@leitl.org> Message-ID: <20110209095444.GJ23560@leitl.org> On Wed, Feb 09, 2011 at 09:06:46AM +0000, BillK wrote: > The company involved had internal networks that weren't connected to > the internet. These computers weren't hacked. System utility surface is directly proportional to vulnerability. The only invulnerable systems are completely useless, and hence of no concern to us. > Computers that are linked to the internet need extra security. It is > good practice to keep confidential data, so far as possible, on > private networks. The problem with basic good practices or even common sense is that real systems and real people don't care, and you can't make them care. Also, worse is better definitely applies. So let's just get used to dealing with insecure systems. In fact, not all is bad, as a planet made of swiss cheese is just great if you're a mouse. Yum. From eugen at leitl.org Wed Feb 9 11:04:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 12:04:32 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: <20110209074335.GF23560@leitl.org> Message-ID: <20110209110432.GL23560@leitl.org> On Wed, Feb 09, 2011 at 08:59:40AM +0100, Giulio Prisco wrote: > There is a good SF book on this: > > http://www.theminervavirus.com/ > > written by Brian Shuster of Red Light Center and Utherverse fame: > http://en.wikipedia.org/wiki/Utherverse_Inc. > > I read the book a few years ago and found it very good, especially > considering that the author is not a professional SF writer. I just > bought it again in Kindle format for 3.44 $ (!!), recommended. Somewhat related, there's also Daemon, and its sequel, Freedom, by Daniel Suarez. (Caution: some suspension of disbelief required, at times some heavy gamer cheese present). From stefano.vaj at gmail.com Wed Feb 9 12:04:44 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 9 Feb 2011 13:04:44 +0100 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: <4D5204BB.1080605@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> <4D5204BB.1080605@mac.com> Message-ID: On 9 February 2011 04:06, Samantha Atkins wrote: > If by altruism you mean sacrificing your values, just because they are > yours, to the values of others, just because they are not yours, then it is > a very bizarre thing to glorify, practice or hope that our AGIs practice. > ?It is on the face of it hopelessly irrational and counter-productive toward > achieving what we actually value. ? If an AGI practices that just on the > grounds someone said they "should" then it is need of a serious debugging. I fully agree. 
-- Stefano Vaj From jonkc at bellsouth.net Wed Feb 9 17:41:56 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 9 Feb 2011 12:41:56 -0500 Subject: [ExI] Empathic AGI In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote: > > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible Two difficulties with that: 1) The Darwinian process is more like history than mathematics, it is not repeatable, very small changes in initial conditions could lead to huge differences in output. 2) Human-level empathy is aimed at Human-level beings, the further from that level the less empathy we have. We have less empathy for a cow than a person and less for an insect than a cow. As the AI's intelligence gets larger its empathy for us will get smaller although its empathy for its own kind might be enormous. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Feb 9 22:26:15 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 9 Feb 2011 17:26:15 -0500 Subject: [ExI] Voice operated computers In-Reply-To: <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> <4D520304.1090807@mac.com> <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us> Message-ID: On Tue, Feb 8, 2011 at 11:01 PM, MB wrote: > Webcasts drive me bonkers too. > > I listen much slower than I read, forget parts of what I heard, have questions about > bits of it, but poof, it's gone now, moving on... and I lost it. :( > > What a waste of my time. Especially if one is somewhat hard of hearing. > > When I was a kid I asked my mom why on earth they made announcements in church when > they gave out written bulletins at the door - with the announcements printed in! > She hadn't any sensible answer. :))) Actually her answer was very sensible, however she didn't email it so it was lost soon after it was uttered. :) From mbb386 at main.nc.us Thu Feb 10 00:21:10 2011 From: mbb386 at main.nc.us (MB) Date: Wed, 9 Feb 2011 19:21:10 -0500 Subject: [ExI] Voice operated computers In-Reply-To: References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> <4D520304.1090807@mac.com> <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us> Message-ID: <418fac29d61f55082ba804beba6fdb64.squirrel@www.main.nc.us> >> She hadn't any sensible answer. :))) > > Actually her answer was very sensible, however she didn't email it so > it was lost soon after it was uttered. :) > Hee! She probably said something along the lines of "they don't look" or "they won't read" - and I read everything that passed before my eyes. A textaholic from pre-school on. :) To my way of thinking, "not looking" and "not reading" are poor excuses. The bulding was filled with educated literate people. 
Regards, MB From spike66 at att.net Thu Feb 10 06:35:41 2011 From: spike66 at att.net (spike) Date: Wed, 9 Feb 2011 22:35:41 -0800 Subject: [ExI] watson on nova Message-ID: <001f01cbc8ec$bd623c80$3826b580$@att.net> Lots of good Watson stuff in this NOVA episode, plenty to get me jazzed: http://video.pbs.org/video/1757221034 The good stuff is between about 15 minutes and 28 minutes. We will have practical companion computers very soon. All doubts I once suffered have vanished with this NOVA episode. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 10 09:37:25 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 10 Feb 2011 10:37:25 +0100 Subject: [ExI] [cryo] new cryonics blog up Message-ID: <20110210093725.GD23560@leitl.org> http://chronopause.com/ Please link from your blogs, if any. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE _______________________________________________ cryo mailing list cryo at postbiota.org http://postbiota.org/mailman/listinfo/cryo From sjatkins at mac.com Fri Feb 11 18:38:16 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Feb 2011 10:38:16 -0800 Subject: [ExI] Empathic AGI In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D558218.3070001@mac.com> On 02/09/2011 09:41 AM, John Clark wrote: > On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote: >> >> If we accept that "normal" human-level empathy (that is, a mere >> ingredient in the evolutionary strategies) is enough, we just have to >> emulate a Darwinian machine as similar as possible > > Two difficulties with that: > > 1) The Darwinian process is more like history than mathematics, it is > not repeatable, very small changes in initial conditions could lead to > huge differences in output. > > 2) Human-level empathy is aimed at Human-level beings, the further > from that level the less empathy we have. We have less empathy for a > cow than a person and less for an insect than a cow. As the AI's > intelligence gets larger its empathy for us will get smaller although > its empathy for its own kind might be enormous. > Yes, we understand how interdependent peer level beings will naturally develop a set of ethical guides for how they treat one another and the ability to model one another. We don't have much/any idea of how this would arise among beings of radically different natures and abilities there are not so interdependent regarding their treatment of one another. - samantha From jrd1415 at gmail.com Fri Feb 11 20:36:29 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 11 Feb 2011 13:36:29 -0700 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference In-Reply-To: References: <000001cbb503$9435bc30$bca13490$@att.net> Message-ID: I hope there will be some form of webcast of this event. Live would be good. Best, jeff davis 2011/1/15 Max More > I'll be speaking at this conference, and hope to see as many of you as > possible -- especially those of you I haven't seen in too long. 
> > Max > > > 2011/1/15 spike > >> >> >> I am forwarding this from James Clement while we work out some issues with >> the server. spike >> >> >> ---------- Forwarded message ---------- >> From: James Clement >> To: extropy-chat at lists.extropy.org >> Date: Sat, 15 Jan 2011 13:52:58 -0800 >> Subject: Suspended Animation Cryonics Conference >> >> Announcing a new cryonic's conference, for May 20-22, 2011, in Ft. >> Lauderdale, FL >> >> Thanks, >> James Clement >> >> >> >> [image: Description: Suspended Animation] >> >> Dear Friend, >> >> Can you imagine the future? When we'll travel to other stars. Have >> super-intelligent computers. Robot servants. And nanomachines that keep us >> young and healthy for centuries! Will you live long enough to experience all >> this? >> >> "Unlikely," you say? Not necessarily. Suspended Animation can be your >> bridge to the advances of the future. The technology is here today to have >> you cryopreserved for future reanimation. To enable you to engage in time >> travel to the spectacular advances of the future. >> >> This technology is far from perfect now. But it is good enough to give you >> a chance at unlimited life and prosperity. Remarkable advances in >> cryopreservation have already been achieved. Millions of dollars are being >> spent to achieve perfected suspended animation and new technologies to >> revive time travelers in the future. >> >> You can learn all about these technologies at a conference in South >> Florida on May 20-22, 2011 . >> At this conference, the foremost authorities in human cryopreservation and >> future reanimation will convene at the Hyatt Regency Pier 66 Resort and Spa >> in Ft. Lauderdale. They will inform you about pathbreaking research advances >> that could make your most exciting dreams come true. >> >> This conference is being sponsored by Suspended Animation, Inc. (SA), a >> company in Boynton Beach, Florida, where advanced human cryopreservation >> equipment and services are being developed. After you've been enlightened by >> imagination-stretching presentations about today's scientifically credible >> technologies and the projected advances of tomorrow at the Hyatt Regency, >> you'll be transported to SA's extraordinary laboratory where you will be >> able to see some of these technologies for yourself. >> >> The link in this e-mail gives you special access to a downloadable >> brochure, as well as registration options, so you can get all the details of >> this remarkable conference that will enable you to obtain the information >> you need to give yourself the opportunity of a lifetime! >> >> *Visit the Conference Page * >> >> >> >> [image: Description: Catherine Baldwin] >> >> Catherine Baldwin >> General Manager >> Suspended Animation, Inc. >> >> >> >> Suspended Animation, Inc. >> 3020 High Ridge Road, Suite 300 >> Boynton Beach, FL 33426 >> >> Telephone *(561) 296-4251* >> Facsimile *(561) 296-4255* >> Emergency (888) 660-7128 >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hkeithhenson at gmail.com Sat Feb 12 00:10:14 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 11 Feb 2011 17:10:14 -0700 Subject: [ExI] Anons Message-ID: There are many pointers into this complex of stories, all of which hinge off Wikileaks. http://thinkprogress.org/ More than a decade ago I proposed that the training net activists got learning to cope with a certain cult would be a warm up exercise for a major confrontation with a government. Then for a while I figured the government had learned from watching the fate of said cult. They didn't. Keith (HBGary is effectively part of the US government.) From spike66 at att.net Sat Feb 12 00:40:29 2011 From: spike66 at att.net (spike) Date: Fri, 11 Feb 2011 16:40:29 -0800 Subject: [ExI] spokeo was: RE: Anons Message-ID: <009401cbca4d$72a2de40$57e89ac0$@att.net> >... On Behalf Of Keith Henson >...Subject: [ExI] Anons >...There are many pointers into this complex of stories, all of which hinge off Wikileaks. > http://thinkprogress.org/ >...Keith Thanks Keith! As vaguely related to wikileaks, a friend sent me a link to Spokeo recently. I found it interesting, because I entered my name (the one that is on my birth certificate) and it knew about "spike." I have never tried to keep that a secret, was listed under spike for 26 years in the company phone directory for instance. But I still haven't figured out how Spokeo knew about that. It doesn't know about my son either. Anyways, try it out: type in the name of your old buddy from high school (assuming he has an unusual name) see what you find: http://www.spokeo.com/ Goodbye privacy, hello openness. spike From max at maxmore.com Sat Feb 12 00:53:02 2011 From: max at maxmore.com (Max More) Date: Fri, 11 Feb 2011 17:53:02 -0700 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference In-Reply-To: References: <000001cbb503$9435bc30$bca13490$@att.net> Message-ID: There will be both a webcast at a DVD. --- Max 2011/2/11 Jeff Davis > I hope there will be some form of webcast of this event. Live would be > good. > > Best, jeff davis > > 2011/1/15 Max More > > I'll be speaking at this conference, and hope to see as many of you as >> possible -- especially those of you I haven't seen in too long. >> >> Max >> >> >> > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Feb 12 07:49:09 2011 From: pharos at gmail.com (BillK) Date: Sat, 12 Feb 2011 07:49:09 +0000 Subject: [ExI] Anons In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 12:10 AM, Keith Henson wrote: > There are many pointers into this complex of stories, all of which > hinge off Wikileaks. > > http://thinkprogress.org/ > > More than a decade ago I proposed that the training net activists got > learning to cope with a certain cult would be a warm up exercise for a > major confrontation with a government. > > Then for a while I figured the government had learned from watching > the fate of said cult. > They didn't. > > Keith > (HBGary is effectively part of the US government.) > Two comments. First. The current saying is that companies (and government departments) spend more on coffee than they do on computer security. The majority of web sites have been hacked and they still don't care. It's like product liability. Until the cost of damages gets high enough they just won't bother. 
At the moment companies just say 'sorry' and get a techie to block the latest attack. They are playing 'Whack-a-mole' with hackers because it is cheaper. Second. To say that HBGary is effectively part of the US government (while true) is sort of looking at it back to front. The real problem is corporate takeover of the US government. See: But the real issue highlighted by this episode is just how lawless and unrestrained is the unified axis of government and corporate power. I've written many times about this issue -- the full-scale merger between public and private spheres -- because it's easily one of the most critical yet under-discussed political topics. Especially (though by no means only) in the worlds of the Surveillance and National Security State, the powers of the state have become largely privatized. There is very little separation between government power and corporate power. Those who wield the latter intrinsically wield the former. The revolving door between the highest levels of government and corporate offices rotates so fast and continuously that it has basically flown off its track and no longer provides even the minimal barrier it once did. It's not merely that corporate power is unrestrained; it's worse than that: corporations actively exploit the power of the state to further entrench and enhance their power. ----------------------------------------- BillK From possiblepaths2050 at gmail.com Sat Feb 12 11:51:28 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 12 Feb 2011 04:51:28 -0700 Subject: [ExI] needing help on a tech project Message-ID: Hello everyone, I have a science fiction con founder/organizer friend who has asked me to transcribe a couple dozen con panel DV camcorder recordings onto a computer. I am dumbfounded that he would ask me to volunteer for such a task, but I do want to help him (he is not very tech savvy). I would think we need to first convert the DV tapes into dvd or flash format and then upload it into a computer, where the appropriate (and hopefully not too expensive) software could first scan it and then do a competent transcription. Please help and point me in the right direction to get the task done! And how much will it cost? Thank you, John From pharos at gmail.com Sat Feb 12 13:14:18 2011 From: pharos at gmail.com (BillK) Date: Sat, 12 Feb 2011 13:14:18 +0000 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 11:51 AM, John Grigg wrote: > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. ?I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > I haven't done this, but this article should get you started. Note also page 2 that explains about editing the footage on your computer. But you might find the learning curve makes this too big a job for you. 
:) Best of luck, BillK From darren.greer3 at gmail.com Sat Feb 12 14:00:06 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 10:00:06 -0400 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: Forgot to add. Some camcorders have the IEEE. Some USB. If you need help manipulating the files once you have them converted, let me know. I've been playing around with video editing for a few years. Making porn, you know. (Kidding.) d. On Sat, Feb 12, 2011 at 9:47 AM, Darren Greer wrote: > Hi John: > > Do you have a mac? One-step DVD in iDVD which is standard on OSX imports > the DV files to the hard drive, converts them and burns to a DVD with a > single button click. All you need is an IEEE 1394 (firewire) cable. Once > they're burned you can manipulate the videos in the iDVD editor or use any > of the free software online to rip the DVD files and convert to other > formats. If you're on a pc, programs like DVDsanta can also do it. > There's a free version download here: (I have not used this version so I > don't know its restriction/limitations.) > > http://www.topvideopro.com/burn-dvd/dv-to-dvd.htm > > Here is another free pc version that does the same thing. > > http://www.dv-to-dvd.com/ > > There are also a number of free software program that convert directly to > flash. But you could end up buying one of these programs (they're not > expensive) as often there's a size limitation for converting files on free > versions of video conversion software. I hope this helps. > > Darren > > > On Sat, Feb 12, 2011 at 7:51 AM, John Grigg wrote: > >> Hello everyone, >> >> I have a science fiction con founder/organizer friend who has asked me >> to transcribe a couple dozen con panel DV camcorder recordings onto a >> computer. I am dumbfounded that he would ask me to volunteer for such >> a task, but I do want to help him (he is not very tech savvy). >> >> I would think we need to first convert the DV tapes into dvd or flash >> format and then upload it into a computer, where the appropriate (and >> hopefully not too expensive) software could first scan it and then do >> a competent transcription. >> >> Please help and point me in the right direction to get the task done! >> And how much will it cost? >> >> Thank you, >> >> John >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > -- > *"It's supposed to be hard. If it wasn't hard everyone would do it. The > 'hard' is what makes it great."* > * > * > *--A League of Their Own > * > > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 12 13:47:26 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 09:47:26 -0400 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: Hi John: Do you have a mac? One-step DVD in iDVD which is standard on OSX imports the DV files to the hard drive, converts them and burns to a DVD with a single button click. All you need is an IEEE 1394 (firewire) cable. Once they're burned you can manipulate the videos in the iDVD editor or use any of the free software online to rip the DVD files and convert to other formats. If you're on a pc, programs like DVDsanta can also do it. 
There's a free version download here: (I have not used this version so I don't know its restriction/limitations.) http://www.topvideopro.com/burn-dvd/dv-to-dvd.htm Here is another free pc version that does the same thing. http://www.dv-to-dvd.com/ There are also a number of free software program that convert directly to flash. But you could end up buying one of these programs (they're not expensive) as often there's a size limitation for converting files on free versions of video conversion software. I hope this helps. Darren On Sat, Feb 12, 2011 at 7:51 AM, John Grigg wrote: > Hello everyone, > > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > Thank you, > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 12 14:31:05 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 10:31:05 -0400 Subject: [ExI] Anons In-Reply-To: References: Message-ID: Keith wrote: >Then for a while I figured the government had learned from watching the fate of said cult. They didn't.< It's possible to learn from an error in process. It's much more difficult to learn from one of perception. Which perhaps explains why gov.org didn't learn from Anon's admirable campaign against the cultologists [the euphemism for your sake.:)] d. On Fri, Feb 11, 2011 at 8:10 PM, Keith Henson wrote: > There are many pointers into this complex of stories, all of which > hinge off Wikileaks. > > http://thinkprogress.org/ > > More than a decade ago I proposed that the training net activists got > learning to cope with a certain cult would be a warm up exercise for a > major confrontation with a government. > > Then for a while I figured the government had learned from watching > the fate of said cult. > > They didn't. > > Keith > > (HBGary is effectively part of the US government.) > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... 
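For anyone who would rather script the conversion step discussed in the thread above than click through a GUI tool, a minimal sketch in Python follows. It assumes the DV clips have already been captured from the camcorder over FireWire to .dv files, that ffmpeg is installed and on the PATH, and that every file and directory name is a placeholder rather than anything mentioned in the thread; the actual transcription (by Mechanical Turk workers or a speech-to-text service) would start from the WAV files this produces.

# Sketch only: batch-convert captured DV clips with ffmpeg (assumed installed).
# Produces an H.264 MP4 for viewing or DVD authoring, plus a mono 16 kHz WAV
# that can be handed to transcribers or a speech-to-text service.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("captured_dv")   # hypothetical folder of already-captured .dv files
OUTPUT_DIR = Path("converted")
OUTPUT_DIR.mkdir(exist_ok=True)

def convert_clip(clip: Path) -> None:
    mp4_out = OUTPUT_DIR / (clip.stem + ".mp4")
    wav_out = OUTPUT_DIR / (clip.stem + ".wav")
    # Re-encode to H.264/AAC so the panel videos play on ordinary players.
    subprocess.run(["ffmpeg", "-y", "-i", str(clip), "-c:v", "libx264",
                    "-crf", "20", "-c:a", "aac", str(mp4_out)], check=True)
    # Strip the audio out as mono 16 kHz WAV for the transcription step.
    subprocess.run(["ffmpeg", "-y", "-i", str(clip), "-vn", "-ac", "1",
                    "-ar", "16000", str(wav_out)], check=True)

if __name__ == "__main__":
    for clip in sorted(SOURCE_DIR.glob("*.dv")):
        convert_clip(clip)

None of this replaces the capture itself; the tapes still have to be played back in real time over FireWire before ffmpeg can touch the resulting files.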
URL: From stefano.vaj at gmail.com Sat Feb 12 15:20:04 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 12 Feb 2011 16:20:04 +0100 Subject: [ExI] Empathic AGI In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: 2011/2/9 John Clark : > On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote: > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible > > Two difficulties with that: > 1) The Darwinian process is more like history than mathematics, it is not > repeatable, very small changes in initial conditions could lead to huge > differences in output. Of course. Being a human being, an oak, a rabbit, an amoeba are all plausible Darwinian strategy. But if one wants something where "aggression", "empathy", "selfishness" etc. have a meaning different from that which may be applicable to a car or to a spreadsheet any would be both necessary and sufficient, I guess. > 2) Human-level empathy is aimed at?Human-level beings, the further from that > level the less empathy we have. We have less empathy for a cow than a person > and less for an insect than a cow. As the AI's intelligence gets larger its > empathy for us will get smaller although its empathy for its own kind might > be enormous. Yes. Or not. Human empathy is a fuzzy label for complex adaptative or "spandrel" behaviours which do not necessarily have to do with "similarity". For instance, gender differences in our species are substantial enough, but of course you have much more empathy in average for your opposite-gender offspring than you may have for a human individual of your gender with no obvious genetic link to your lineage, and/or belonging to a hostile tribe. I suspect that an emulation of a human being may well decide and "feel" to belong to a cross-specific group (say, the men *and* the androids of country X or of religion Y) or perhaps imagine something along the lines of "proletarian AGIs all over the world, unite!". As long as they are "intelligent" in the very anthropomorphic sense discussed here, there would be little new in this respect. In fact, they would by definition be programmed as much as we are to make such choices. Other no-matter-how-intelligent entitities which are neither evolved, nor explicitely programmed to emulate evolved organisms, have of course no reason to exhibit self-preservation, empathy, aggression or altruism drives in any sociobiological sense. -- Stefano Vaj From spike66 at att.net Sat Feb 12 15:46:43 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 07:46:43 -0800 Subject: [ExI] Anons In-Reply-To: References: Message-ID: <003901cbcacc$0cba82c0$262f8840$@att.net> >>On Sat, Feb 12, 2011 at 12:10 AM, Keith Henson wrote: >> There are many pointers into this complex of stories, all of which hinge off Wikileaks. > >> http://thinkprogress.org/ ... >> Keith ... On Behalf Of BillK >...The real problem is corporate takeover of the US government. >... it's worse than that: corporations actively exploit the power of the state to further entrench and enhance their power...BillK _______________________________________________ Ja. Our constitution is set up to maintain separation of church and state. 
It doesn't say anything about separation of corporation and state. As far as I can tell the latter would be perfectly legal. In any case it would be far preferable to government takeover of corporations. spike From spike66 at att.net Sat Feb 12 15:55:48 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 07:55:48 -0800 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: <003a01cbcacd$515962b0$f40c2810$@att.net> >...I have a science fiction con founder/organizer friend who has asked me to transcribe a couple dozen con panel DV camcorder recordings onto a computer. I am dumbfounded that he would ask me to volunteer for such a task, but I do want to help him (he is not very tech savvy). John John there are plenty of professional services that do this sort of thing. I have had a bunch of Shelly's old reel to reel family films from the 1960s transferred to DVD. Typical cost for that was about 80 bucks per DVD disc. I have done all the valuable ones. Henceforth I might just set up a modern digital camera and play the old reels on the original projector against a wall, and have the grandparents narrate. Those films predated sound. I have a collection of video recordings from the 80s I need to transfer, but I might do that myself by finding an interface card. The internet knows everything on this sort of question. spike From thespike at satx.rr.com Sat Feb 12 16:48:28 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Feb 2011 10:48:28 -0600 Subject: [ExI] Anons In-Reply-To: <003901cbcacc$0cba82c0$262f8840$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> Message-ID: <4D56B9DC.6000906@satx.rr.com> On 2/12/2011 9:46 AM, spike wrote: > Our constitution is set up to maintain separation of church and state. > It doesn't say anything about separation of corporation and state. As far > as I can tell the latter would be perfectly legal. In any case it would be > far preferable to government takeover of corporations. The technical name for what you prefer is "corporate fascism". That doesn't have a really compelling history. Here's a random-selected thumbnail: Damien Broderick From jedwebb at hotmail.com Sat Feb 12 19:29:16 2011 From: jedwebb at hotmail.com (Jeremy Webb) Date: Sat, 12 Feb 2011 19:29:16 +0000 Subject: [ExI] Nanotech Article In-Reply-To: <4D56B9DC.6000906@satx.rr.com> Message-ID: There is a nice discussion of a new piece of nanotech at: http://science.slashdot.org/story/11/02/10/1513226/Researchers-Boast-First-P rogrammable-Nanoprocessor They claim to have managed to produce most of the logic gates needed to make a CPU that results in being 100 times more efficient than even CMOS. I hope they've figured out the static problem too... :0) Jeremy Webb Heathen Vitki e-Mail: jedwebb at hotmail.com http://jeremywebb301.tripod.com/vikssite/index.html From darren.greer3 at gmail.com Sat Feb 12 19:59:44 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 15:59:44 -0400 Subject: [ExI] Anons In-Reply-To: <4D56B9DC.6000906@satx.rr.com> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: Damien wrote: >The technical name for what you prefer is "corporate fascism". That doesn't have a really compelling history.< I agree Damien. When the definition of fascism was entered for the first time in the Encyclopedia Italiano, Mussolini suggested that corporatism was a more accurate name for that type of arrangement than fascism anyway. 
Fascism is *by definition* the merging of business and government, most often accompanied by rabid nationalism and sometimes overt racism. But not always, which might make modern fascism difficult to recognize because we always assume holocaust-type ethnic cleansing comes with it. It doesn't. Italy followed Germany's Wannsee Conference directives (because it was under political pressure to do so) but not to the letter. For some of the war Mussolini allowed the north west of the country to become a kind of protectorate for Jews who had fled other parts of the state. The ax fell on them only after Italy fell, when the allies invaded the south and Germany took the north to meet them. I got this story from John Keegan's The Second World War, which is an excellent book by the way. Spike wrote: >In any case it would be far preferable to government takeover of corporations.< Is there a difference? When governments and corporations merge, does it matter who made the first move? Given the checkered history of IBM, Ford, Chase Manhattan, etc, not to mention the America First Committee and the role of prominent industrialists like Ford in trying to keep the U.S. out of World War II for business reasons, perhaps it should be illegal. Currently we try to prevent the merging of the two with market regulation and not through legislation, which doesn't seem to be working all that well. The repeal of Glass-Steagall and the housing market crash is a good example of that failure. Don't mean to sound testy, or confrontational, Spike. I have a bit of a bee in my bonnet about what seems to be a widespread misunderstanding of exactly what fascism is and how easily it could happen again. The U.S. congress has a fasces engraved on a wall somewhere inside, by the way. Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. darren I On Sat, Feb 12, 2011 at 12:48 PM, Damien Broderick wrote: > On 2/12/2011 9:46 AM, spike wrote: > > Our constitution is set up to maintain separation of church and state. >> It doesn't say anything about separation of corporation and state. As far >> as I can tell the latter would be perfectly legal. In any case it would >> be >> far preferable to government takeover of corporations. >> > > The technical name for what you prefer is "corporate fascism". That doesn't > have a really compelling history. > > Here's a random-selected thumbnail: > > > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Feb 12 20:52:17 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 12:52:17 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: <006001cbcaf6$bc872490$35956db0$@att.net> On Behalf Of Darren Greer . Spike wrote: >>In any case it would be far preferable to government takeover of corporations.< >Is there a difference? When governments and corporations merge, does it matter who made the first move? . Oh my, yes, a huge critically important difference. In my view, often called minarcho-capitalist, he purpose of government is to support people in the creation of wealth. 
I recognize it has some minimal duties in redistributing wealth already created (to some extent) but I read the US constitution and see little in there in regard. Wealth creation is the key to making a nation and its people prosperous. My notion is to have the executive branch of government filled with people who have served as executives in industry. There is inherent mischief in bringing over legislators. This last go-around in 2008, the two major parties gave us the choice of former legislators, neither of whom had executive experience. Agree it depends on how it is viewed. The press had us believe Sarah Palin was the actual candidate, and she had *some* executive experience as a business owner and Alaska governor, but I was shocked to learn she was actually running for VP on that ticket. No one ever heard of the guy who was running for president on that ticket, but I understand he was a legislator with little or no executive experience. I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor. >Don't mean to sound testy, or confrontational, Spike. I have a bit of a bee in my bonnet about what seems to be a widespread misunderstanding of exactly what fascism is and how easily it could happen again. The U.S. congress has a fasces engraved on a wall somewhere inside, by the way. Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. No problem Darren, by all means your commentary is welcome and not at all confrontational. There is no point in trying to pin down the definition of terms such as fascist and nazi. These have for so long been used as universal insults and blanket condemnations that they eventually lose all meaning from overuse. There is no point in trying to refocus the definition on mid 19th century political systems; the terms have been worn out and up-used. >.Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. darren Watch as California goes into historic conniptions to try to balance its hopeless budget. The lessons we need here is that industry is our friend, that wealth creation is our salvation, that business needs to be encouraged and nurtured, that political power should follow wealth as opposed to the other way around. Money is good. Desire for money is a predictable and trustworthy human motivator. Lack of money is the root of all evil. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sat Feb 12 22:24:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:24:12 -0700 Subject: [ExI] Anons In-Reply-To: <006001cbcaf6$bc872490$35956db0$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: 2011/2/12 spike : > In my view, often > called minarcho-capitalist, he purpose of government is to support people in > the creation of wealth. The only legitimate purpose of the government in creating wealth are the following: 1) Keep taxes as low as possible. i.e. don't steal money from those who create wealth. 2) Make sure that there is no cheating, as in maintaining a court system for dealing with bad contractual outcomes, and maintaining some sort of intellectual property system. 3) Stay out of the way as much as possible 4) Make sure that one group does not rape the environment at the expense of everyone else. 
5) Keep the bad guys from raining on your parade. This has a lot of relevance to the future. If the government steps in and bans cloning and other controversial uses of DNA, our biological future will be more limited than necessary. One of the reasons, IMNSHO, that the Internet has been so successful is that no government has found a very good way to regulate it very much, other than North Korea, where I understand most people aren't allowed to access it at all. That of course has it's own downside for them. -Kelly From kellycoinguy at gmail.com Sat Feb 12 22:31:53 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:31:53 -0700 Subject: [ExI] Watson on NOVA Message-ID: Did anyone else see the NOVA about Watson that aired the other day? While it was not especially technical, although seemingly pretty accurate for as far as it went. I found the emotions involved with the creators of Watson to be very interesting. For example, they hired a comedian for a year to "host" test Jeopardy shows, and he was making fun of Watson when he answered questions really badly... and the programmers were really offended by that. Very interesting dynamic. -Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sat Feb 12 22:15:01 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:15:01 -0700 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: A very easy way to do this sort of thing is to use Amazon Turk. You will get much better results than anything that is just automated. If you need assistance, contact me off list. -Kelly On Sat, Feb 12, 2011 at 4:51 AM, John Grigg wrote: > Hello everyone, > > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. ?I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > Thank you, > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Feb 12 23:18:08 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 15:18:08 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: Message-ID: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Saturday, February 12, 2011 2:32 PM To: ExI chat list Subject: [ExI] Watson on NOVA >Did anyone else see the NOVA about Watson that aired the other day? While it was not especially technical, although seemingly pretty accurate for as far as it went. Ja, worked on me. >I found the emotions involved with the creators of Watson to be very interesting. For example, they hired a comedian for a year to "host" test Jeopardy shows, and he was making fun of Watson when he answered questions really badly... and the programmers were really offended by that. Very interesting dynamic. 
-Kelly Kelly you have seen 2010 Odyssey 2? That is the one with Hal's creator Dr. Chandra getting emotional about having to turn him off. I can easily see a person getting emotionally attached to a machine. From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Wednesday, February 09, 2011 10:36 PM To: 'ExI chat list' Subject: [ExI] watson on nova Lots of good Watson stuff in this NOVA episode, plenty to get me jazzed: http://video.pbs.org/video/1757221034 The good stuff is between about 15 minutes and 28 minutes. We will have practical companion computers very soon. All doubts I once suffered have vanished with this NOVA episode. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Feb 13 00:45:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 20:45:57 -0400 Subject: [ExI] Anons In-Reply-To: <006001cbcaf6$bc872490$35956db0$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: Spike wrote: >I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor.< Are you familiar with the enfant terrible of Harvard history and socio-economics, the brit Niall Ferguson? I heard a lecture by him once, where he said all the first rate talent flows into industry and all the second rate talent scurries into politics, mostly because of earning potential. You'd probably like what he has to say. I don't. But then I'm so socialist I have to go right to get left. Darren 2011/2/12 spike > > > > > *On Behalf Of *Darren Greer > *?* > > > > Spike wrote: > > > > >>In any case it would be far preferable to government takeover of > corporations.< > > > > >Is there a difference? When governments and corporations merge, does it > matter who made the first move? ? > > > > Oh my, yes, a huge critically important difference. In my view, often > called minarcho-capitalist, he purpose of government is to support people in > the creation of wealth. I recognize it has some minimal duties in > redistributing wealth already created (to some extent) but I read the US > constitution and see little in there in regard. Wealth creation is the key > to making a nation and its people prosperous. > > > > My notion is to have the executive branch of government filled with people > who have served as executives in industry. There is inherent mischief in > bringing over legislators. This last go-around in 2008, the two major > parties gave us the choice of former legislators, neither of whom had > executive experience. Agree it depends on how it is viewed. The press had > us believe Sarah Palin was the actual candidate, and she had **some** > executive experience as a business owner and Alaska governor, but I was > shocked to learn she was actually running for VP on that ticket. No one > ever heard of the guy who was running for president on that ticket, but I > understand he was a legislator with little or no executive experience. > > > > I would counter-propose that my vote for president would require executive > experience and demonstrated success, as a corporate CEO or state governor. > > > > >Don't mean to sound testy, or confrontational, Spike. 
I have a bit of a > bee in my bonnet about what seems to be a widespread misunderstanding of > exactly what fascism is and how easily it could happen again. The U.S. > congress has a fasces engraved on a wall somewhere inside, by the way. Don't > know its history, or what genius decided it was a good idea, but it has > always made me wonder. > > > > No problem Darren, by all means your commentary is welcome and not at all > confrontational. There is no point in trying to pin down the definition of > terms such as fascist and nazi. These have for so long been used as > universal insults and blanket condemnations that they eventually lose all > meaning from overuse. There is no point in trying to refocus the definition > on mid 19th century political systems; the terms have been worn out and > up-used. > > > > >?Don't know its history, or what genius decided it was a good idea, but it > has always made me wonder. darren > > > > Watch as California goes into historic conniptions to try to balance its > hopeless budget. The lessons we need here is that industry is our friend, > that wealth creation is our salvation, that business needs to be encouraged > and nurtured, that political power should follow wealth as opposed to the > other way around. Money is good. Desire for money is a predictable and > trustworthy human motivator. Lack of money is the root of all evil. > > > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Feb 13 01:02:37 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 21:02:37 -0400 Subject: [ExI] Artificial Irish Folk Tales Message-ID: I was at my mother's tonight playing cards with my brother and an old filling which was loose fell out which means a trip to the dentist this week. The Irish have a folk legend that says if you lose a tooth, or dream you loose a tooth, someone you know will die. So does losing a filling mean somewhere an AI will die? Or does it just mean when I wake up tomorrow my computer won't boot? My brother asked why I was laughing during our card game but I couldn't explain it. He's not really interested in technology the way I am, and he already thinks I'm weird enough. d. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 01:53:24 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 17:53:24 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <00a501cbcb20$cd2f1d00$678d5700$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Saturday, February 12, 2011 4:46 PM To: ExI chat list Subject: Re: [ExI] Anons Spike wrote: >>I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor.< >Are you familiar with the enfant terrible of Harvard history and socio-economics, the brit Niall Ferguson? Have not. I will google that. 
> I heard a lecture by him once, where he said all the first rate talent flows into industry and all the second rate talent scurries into politics, mostly because of earning potential. Yes there is that, but I have reason to hope. Read on: >You'd probably like what he has to say. I don't. But then I'm so socialist I have to go right to get left.Darren Ja, I am cool with that. I am so upwing I would need to come down to go either left or right. Be that as it may, I recognize that in the US at least, the two statist parties will be winning every election for the foreseeable, so here's the way I view it. It is perfectly clear to me that the best executive talent goes where it can make money, which explains why we get mostly the B students going for public office. But we also recognize that the value of the book written by a former government official largely makes up for the loss of pay suffered during the years of service. People are *still* buying Jimmy Carter's books. Recently I notice it has become fashionable for anyone who was anywhere in government to record their experiences on dead trees. The higher the rank of the author, the better for sales. I notice one of the contenders for California governor was the former eBay CEO and jillionaire Meg Whitman. Clearly she could have made waaay more money doing anything else, but went into that campaign using mostly her own money. She is still young, so let's see what happens in the next election. California governor is the logical springboard into national office. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 02:06:47 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 18:06:47 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: Message-ID: <00b501cbcb22$ab95f810$02c1e830$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer . >.So does losing a filling mean somewhere an AI will die? Haaaaahahahahahahaaaa! {8^D >. He's not really interested in technology the way I am, and he already thinks I'm weird enough. d. Ah, but one can never be weird enough. Thanks for the good laugh Darren. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Feb 13 02:23:01 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Feb 2011 20:23:01 -0600 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: Message-ID: <4D574085.3070801@satx.rr.com> On 2/12/2011 7:02 PM, Darren Greer wrote: > So does losing a filling mean somewhere an AI will die? Ha! Nice. (Or maybe it means a mining company will go under.) Damien Broderick From darren.greer3 at gmail.com Sun Feb 13 02:27:17 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 22:27:17 -0400 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <4D574085.3070801@satx.rr.com> References: <4D574085.3070801@satx.rr.com> Message-ID: Damien wrote: >(Or maybe it means a mining company will go under)< Semantically confusing industry, because technically, in mining, when you "went under" you would actually be coming up, wouldn't you? d. On Sat, Feb 12, 2011 at 10:23 PM, Damien Broderick wrote: > On 2/12/2011 7:02 PM, Darren Greer wrote: > > > So does losing a filling mean somewhere an AI will die? >> > > Ha! Nice. > > (Or maybe it means a mining company will go under.) 
> > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 06:18:42 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 22:18:42 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: <4D574085.3070801@satx.rr.com> Message-ID: <000901cbcb45$dd3ac7b0$97b05710$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer . Damien wrote: >(Or maybe it means a mining company will go under)< Semantically confusing industry, because technically, in mining, when you "went under" you would actually be coming up, wouldn't you? d. In mining, it's always either up ore down. s -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 06:38:44 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 22:38:44 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <000901cbcb45$dd3ac7b0$97b05710$@att.net> References: <4D574085.3070801@satx.rr.com> <000901cbcb45$dd3ac7b0$97b05710$@att.net> Message-ID: <002a01cbcb48$a9d0fa40$fd72eec0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike . spike should have written: >.In mining, it's always down ore up. But it doesn't matter, because one's labor is all in vein. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Feb 13 06:55:04 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 13 Feb 2011 00:55:04 -0600 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <000901cbcb45$dd3ac7b0$97b05710$@att.net> References: <4D574085.3070801@satx.rr.com> <000901cbcb45$dd3ac7b0$97b05710$@att.net> Message-ID: <4D578048.30905@satx.rr.com> On 2/13/2011 12:18 AM, spike wrote: > Semantically confusing industry, because technically, in mining, when > you "went under" you would actually be coming up, wouldn't you? > > d. > > In mining, it's always either up ore down. And yet with quilting, it's Eider down or not. Damien Broderick From pharos at gmail.com Sun Feb 13 08:55:43 2011 From: pharos at gmail.com (BillK) Date: Sun, 13 Feb 2011 08:55:43 +0000 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: 2011/2/13 Darren Greer wrote: > Are you familiar with the enfant terrible of Harvard history and > socio-economics, the brit Niall Ferguson? I heard a lecture by him once, > where he said all the first rate talent flows into industry and all the > second rate talent scurries into politics, mostly because of earning > potential. You'd probably like what he has to say. I don't. But then I'm so > socialist I have to go right to get left. > > Did you know that this has actually been proposed as a road safety and traffic efficiency measure? Quote: Superstreet intersections force traffic from smaller roads to turn right, then u-turn on the larger road, rather than wait for a break in traffic to make a direct left.
That solution may sound like an inefficient way to get where you?re going, but researchers say that it moves vehicles through 20% faster, and reduces accidents by 43%. ----------------- In sensible countries like the UK, of course, the reverse applies. First turn left, then u-turn to go right. BillK From kellycoinguy at gmail.com Sun Feb 13 08:58:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 01:58:18 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: > The good stuff is between about 15 minutes and 28 minutes.? We will have > practical companion computers very soon.? All doubts I once suffered have > vanished with this NOVA episode. While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms, I am not quite as convinced that there is a general solution here. At least not quite yet. The types of answers generated seemed to have been heavily "tweaked" for Jeopardy. That's not to say that Watson isn't interesting, and an important milestone in AI. I think it is both. Just that it isn't quite as far down the road of machine understanding as I had hoped. Some of the video seemed to indicate that it used some kind of statistical proximity based text search engine, rather than parsing and understanding English sentences quite so much as I thought maybe it did. Of course, since NOVA was presenting things on a general audience basis, it may have downplayed any NLP aspect. This will be useful technology (assuming it escapes research) I can see it answering really useful questions. I hope they build it into a search engine. But it does, for the present, seem to be very tweaked for Jeopardy... which is, I suppose, what I should have expected. Has anybody seen any technical papers by the Watson team? That would be interesting in evaluating just how they did it. Since Watson is essentially a bunch of PCs, I can see this being deployed into the cloud pretty easily. And if Watson can look on the Internet, then perhaps it can come up with better answers (albeit perhaps more slowly) than in the isolated Jeopardy case. It seemed that they stuck with Wikipedia, online encyclopedias, the internet movie database and other specific information sites, rather than crawling the entire web. Perhaps they did this to ensure greater accuracy??? Or maybe it was a storage space issue. In any case, if they make a bigger machine in the cloud that accesses the internet and has more storage, I'm sure they could come up with some very interesting answers to general questions, assuming the answers are out there somewhere. -Kelly From kellycoinguy at gmail.com Sun Feb 13 09:08:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 02:08:44 -0700 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: >> But then I'm so socialist I have to go right to get left. So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-) > Did you know that this has actually been proposed as a road safety and > traffic efficiency measure? 
There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?) I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly From amara at kurzweilai.net Sun Feb 13 09:25:05 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:25:05 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <03af01cbcb5f$e71e37c0$b55aa740$@net> Kelly, I really love this traffic idea. Sort of an emergent order concept. Would be fun to take it a step further and create a new kind of town with no defined roads, no sidewalks, information signals that combine to control computer-automated vehicles (supersedes driven cars and traffic signals) for people and machines that are generated by sensors for ad hoc movements of objects, wind, noise, thoughts ("I want to cross the street" -- I want to dance here now), deformable structures (car to building: "may I take a shortcut through you?"), instant 3D-printed structures that can be morphed into different purposes.... [pause to let someone else co-invent ...] -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 1:09 AM To: ExI chat list Subject: Re: [ExI] Anons >> But then I'm so socialist I have to go right to get left. So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-) > Did you know that this has actually been proposed as a road safety and > traffic efficiency measure? There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?) I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sun Feb 13 09:28:49 2011 From: pharos at gmail.com (BillK) Date: Sun, 13 Feb 2011 09:28:49 +0000 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: > While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms, I am not quite as > convinced that there is a general solution here. At least not quite > yet. The types of answers generated seemed to have been heavily > "tweaked" for Jeopardy. That's not to say that Watson isn't > interesting, and an important milestone in AI. I think it is both. > Just that it isn't quite as far down the road of machine understanding > as I had hoped. 
Some of the video seemed to indicate that it used some > kind of statistical proximity based text search engine, rather than > parsing and understanding English sentences quite so much as I thought > maybe it did. Of course, since NOVA was presenting things on a general > audience basis, it may have downplayed any NLP aspect. > > This will be useful technology (assuming it escapes research) I can > see it answering really useful questions. I hope they build it into a > search engine. But it does, for the present, seem to be very tweaked > for Jeopardy... which is, I suppose, what I should have expected. > > Has anybody seen any technical papers by the Watson team? That would > be interesting in evaluating just how they did it. > > IBM PR makes big claims for Watson (but that's their job :) ). Quote: Watson's ability to understand the meaning and context of human language, and rapidly process information to find precise answers to complex questions, holds enormous potential to transform how computers help people accomplish tasks in business and their personal lives. Watson will enable people to rapidly find specific answers to complex questions. The technology could be applied in areas such as healthcare, for accurately diagnosing patients, to improve online self-service help desks, to provide tourists and citizens with specific information regarding cities, prompt customer support via phone, and much more. ------------------------- This article talks about what the developers are working on: Looks like they are doing some pretty complex stuff in there. BillK From amara at kurzweilai.net Sun Feb 13 09:14:55 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:14:55 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <03ae01cbcb5e$7baf4700$730dd500$@net> Kelly, I had similar questions, so I interviewed an IBM Watson research manager. Please see if this helps: http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ib m-research-manager. I would be interested in any critiques of this, or questions for a follow-up interview. Thanks, Amara D. Angelica Editor, KurzweilAI -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 12:58 AM To: ExI chat list Subject: Re: [ExI] Watson on NOVA > The good stuff is between about 15 minutes and 28 minutes.? We will have > practical companion computers very soon.? All doubts I once suffered have > vanished with this NOVA episode. While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms, I am not quite as convinced that there is a general solution here. At least not quite yet. The types of answers generated seemed to have been heavily "tweaked" for Jeopardy. That's not to say that Watson isn't interesting, and an important milestone in AI. I think it is both. Just that it isn't quite as far down the road of machine understanding as I had hoped. Some of the video seemed to indicate that it used some kind of statistical proximity based text search engine, rather than parsing and understanding English sentences quite so much as I thought maybe it did. Of course, since NOVA was presenting things on a general audience basis, it may have downplayed any NLP aspect. This will be useful technology (assuming it escapes research) I can see it answering really useful questions. 
I hope they build it into a search engine. But it does, for the present, seem to be very tweaked for Jeopardy... which is, I suppose, what I should have expected. Has anybody seen any technical papers by the Watson team? That would be interesting in evaluating just how they did it. Since Watson is essentially a bunch of PCs, I can see this being deployed into the cloud pretty easily. And if Watson can look on the Internet, then perhaps it can come up with better answers (albeit perhaps more slowly) than in the isolated Jeopardy case. It seemed that they stuck with Wikipedia, online encyclopedias, the internet movie database and other specific information sites, rather than crawling the entire web. Perhaps they did this to ensure greater accuracy??? Or maybe it was a storage space issue. In any case, if they make a bigger machine in the cloud that accesses the internet and has more storage, I'm sure they could come up with some very interesting answers to general questions, assuming the answers are out there somewhere. -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From amara at kurzweilai.net Sun Feb 13 09:34:46 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:34:46 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <03b001cbcb61$416ba130$c442e390$@net> Just had another flash: what about creating a combination exercise spa and road? Use treadmills to power lighting and vehicle movement (piezoelectric device ==> battery ==> motors in road) ... spa is built over the road)... spa motions also power lights or generate power credits.. etc. ========================================= Kelly, I really love this traffic idea. Sort of an emergent order concept. Would be fun to take it a step further and create a new kind of town with no defined roads, no sidewalks, information signals that combine to control computer-automated vehicles (supersedes driven cars and traffic signals) for people and machines that are generated by sensors for ad hoc movements of objects, wind, noise, thoughts ("I want to cross the street" -- I want to dance here now), deformable structures (car to building: "may I take a shortcut through you?"), instant 3D-printed structures that can be morphed into different purposes.... [pause to let someone else co-invent ...] -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 1:09 AM To: ExI chat list Subject: Re: [ExI] Anons >> But then I'm so socialist I have to go right to get left. So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-) > Did you know that this has actually been proposed as a road safety and > traffic efficiency measure? There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?) 
I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Sun Feb 13 15:54:35 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 07:54:35 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <005d01cbcb96$50197650$f04c62f0$@att.net> On Behalf Of BillK ... > But then I'm so socialist I have to go right to get left. darren > > Did you know that this has actually been proposed as a road safety and traffic efficiency measure? Quote: Superstreet intersections force traffic from smaller roads to turn right, then u-turn on the larger road... BillK ----------------- BillK that's the way it is done now in many places in New Jersey. Have we any New Jerseyers present? spike From rpwl at lightlink.com Sun Feb 13 16:39:25 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 13 Feb 2011 11:39:25 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <4D58093D.9070306@lightlink.com> Kelly Anderson wrote: >> The good stuff is between about 15 minutes and 28 minutes. We will have >> practical companion computers very soon. All doubts I once suffered have >> vanished with this NOVA episode. > > While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms, I am not quite as > convinced that there is a general solution here. At least not quite > yet. The types of answers generated seemed to have been heavily > "tweaked" for Jeopardy. That's not to say that Watson isn't > interesting, and an important milestone in AI. I think it is both. > Just that it isn't quite as far down the road of machine understanding > as I had hoped. Some of the video seemed to indicate that it used some > kind of statistical proximity based text search engine, rather than > parsing and understanding English sentences quite so much as I thought > maybe it did. Sadly, this only confirms the deeply skeptical response that I gave earlier. I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI. Back in 1991 I remember having discussions about that kind of research with someone who thought it was fabulous. I argued that it was a dead end. If people are still using it to do exactly the same kinds of task they did then, can you see what I mean when I say that this is a complete waste of time? It is even worse than I suspected. Richard Loosemore Of course, since NOVA was presenting things on a general > audience basis, it may have downplayed any NLP aspect. > > This will be useful technology (assuming it escapes research) I can > see it answering really useful questions. I hope they build it into a > search engine. But it does, for the present, seem to be very tweaked > for Jeopardy... which is, I suppose, what I should have expected. > > Has anybody seen any technical papers by the Watson team? That would > be interesting in evaluating just how they did it. > > Since Watson is essentially a bunch of PCs, I can see this being > deployed into the cloud pretty easily. 
And if Watson can look on the > Internet, then perhaps it can come up with better answers (albeit > perhaps more slowly) than in the isolated Jeopardy case. It seemed > that they stuck with Wikipedia, online encyclopedias, the internet > movie database and other specific information sites, rather than > crawling the entire web. Perhaps they did this to ensure greater > accuracy??? Or maybe it was a storage space issue. In any case, if > they make a bigger machine in the cloud that accesses the internet and > has more storage, I'm sure they could come up with some very > interesting answers to general questions, assuming the answers are out > there somewhere. > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at att.net Sun Feb 13 17:38:36 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 09:38:36 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D58093D.9070306@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: <000901cbcba4$d82b9100$8882b300$@att.net> On Behalf Of Richard Loosemore Subject: Re: [ExI] Watson on NOVA Kelly Anderson wrote: >> ... While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms... >...I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI... can you see what I mean when I say that this is a complete waste of time?...Richard Loosemore Richard I see what you mean, but I disagree. We know Watson isn't AI, and this path doesn't lead there directly. But there is value in collecting a bunch of capabilities that are in themselves marketable. Computers play good chess, they play Jeopardy, they do this and that, eventually they make suitable (even if not ideal) companions for impaired humans, which generates money (lots of it in that case), which brings talent into the field, inspires the young to dream that AI can somehow be accomplished. It inspires the young brains to imagine the potential of software, as opposed to wasting their lives and talent by going into politics or hedge fund management for instance. For every AI researcher we lose to fooling around with Watson, we gain ten more who are inspired by that non-AI exercise. In that sense Watson may indirectly advance AI. spike From rpwl at lightlink.com Sun Feb 13 18:25:08 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 13 Feb 2011 13:25:08 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <000901cbcba4$d82b9100$8882b300$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <000901cbcba4$d82b9100$8882b300$@att.net> Message-ID: <4D582204.1040703@lightlink.com> spike wrote: > On Behalf Of Richard Loosemore > Subject: Re: [ExI] Watson on NOVA > > Kelly Anderson wrote: >>> ... While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms... > >> ...I strongly suspected that it was using some kind of statistical > "proximity" algorithms to get the answers. And in that case, we are talking > about zero advancement of AI... can you see what I mean when I say that this > is a complete waste of time?...Richard Loosemore > > > > > Richard I see what you mean, but I disagree. 
We know Watson isn't AI, and > this path doesn't lead there directly. But there is value in collecting a > bunch of capabilities that are in themselves marketable. Computers play > good chess, they play Jeopardy, they do this and that, eventually they make > suitable (even if not ideal) companions for impaired humans, which generates > money (lots of it in that case), which brings talent into the field, > inspires the young to dream that AI can somehow be accomplished. It > inspires the young brains to imagine the potential of software, as opposed > to wasting their lives and talent by going into politics or hedge fund > management for instance. > > For every AI researcher we lose to fooling around with Watson, we gain ten > more who are inspired by that non-AI exercise. > > In that sense Watson may indirectly advance AI. This is exactly what has been happening. But the only people it has drawn into AI are: (a) People too poorly informed to understand that Watson represents a non-achievement ..... therefore extremely low-quality talent, or (b) People who quite brazenly declare that the field called "AI" is not really about building intelligent systems, but just futzing around with mathematics and various trivial algorithms. Either way, the field loses. I have been watching this battle go on throughout my career. All I am doing is reporting the obvious patterns that emerge if you look at the situation from the inside, for long enough. I went to conferences back in the 1980s when people talked about simple language understanding algorithms, and I understood exactly what they were trying to do and what they had achieved so far. Then I went to an AGI workshop in 2006, and to my utter horror I saw some people present their research on a simple langauge understanding system..... it was exactly the same stuff that I had seen 20 years before, and they appeared to have no awareness that this had already been done, and that the technique subsequently got nowhere. You can discount my opinion if you like, but does it not count for anything at all that I have been working in this field since I first got interested in it in 1980? This is not armchair theorizing here: I am just doing my best to summarize a lot of experience. Richard Loosemore From spike66 at att.net Sun Feb 13 19:14:23 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 11:14:23 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D582204.1040703@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <000901cbcba4$d82b9100$8882b300$@att.net> <4D582204.1040703@lightlink.com> Message-ID: <000001cbcbb2$397a3a80$ac6eaf80$@att.net> On Behalf Of Richard Loosemore ... >But the only people it has drawn into AI are: ... >(a) People too poorly informed to understand that Watson represents a non-achievement ..... therefore extremely low-quality talent, or ... >(b) People who quite brazenly declare that the field called "AI" is not really about building intelligent systems, but just futzing around with mathematics and various trivial algorithms. ... >Either way, the field loses. ... >You can discount my opinion if you like, but does it not count for anything at all that I have been working in this field since I first got interested in it in 1980? This is not armchair theorizing here: I am just doing my best to summarize a lot of experience...Richard Loosemore Richard, your viewpoint as one who has been in the field for a long time is most valuable. 
You and I are actually looking at two very different goals here, as was pointed out in a previous discussion. You are shooting for true AI, but I am not, or at least not immediately. Reasoning: true AI leads directly to recursive self-improvement, which leads directly to the singularity, which presents all kinds of risks (and promise (and risks)) because we don't know how to control it, or even if it is controllable. On the other hand, Watson isn't going to spontaneously take off and do whatever a real AI wants to do, any more than a chess algorithm will do that. Watson will, however, contribute to our wellbeing here and now, along with the chess algorithms, and the servant-bot algorithms, the sex-bots, and all the other non-AI applications I can imagine will come along and make our lives more fun and interesting. I do not regret all the AI talent that has been siphoned into application development, for I am in no desperate hurry to create AI. With our current level of insight and lack thereof into friendly AI, it looks to me like the risks may outweigh the benefits, at least to the younger people among us. Five years ago, before my son was born, I would have argued the benefits outweigh the risks. Now, I wouldn't say that, or rather I can't say it with any confidence. Recall that nuclear fission was discovered at least a decade before the engineers developed a practical way to safely control it. AI is analogous to nuclear fission, and it is now 1937. You and I do not necessarily disagree, we just have different goals. spike From possiblepaths2050 at gmail.com Mon Feb 14 01:44:41 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 13 Feb 2011 18:44:41 -0700 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: I want to say a huge thank you to everyone who responded to my request for help. I see now that there are a number of ways to take care of my friend's project, and that it is quite doable. He had originally wanted me to watch a small portion of tape, type what I heard, and then repeat the process near endlessly till I had transcribed a pile of DV audiocassettes! LOL! But fortunately we have technologies now to make such drudgery avoidable. The event recorded was the H.P. Lovecraft themed MythosCon, held in Tempe, Arizona. And now the words there of super Mythos scholar S.T. Joshi and others shall be forever put to print! John : ) From kellycoinguy at gmail.com Mon Feb 14 05:29:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 22:29:44 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: On Sun, Feb 13, 2011 at 2:28 AM, BillK wrote: > On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: > IBM PR makes big claims for Watson (but that's their job :) ). > > Quote: > Watson's ability to understand the meaning and context of human > language, and rapidly process information to find precise answers to > complex questions, holds enormous potential to transform how computers > help people accomplish tasks in business and their personal lives. > Watson will enable people to rapidly find specific answers to complex > questions. The technology could be applied in areas such as > healthcare, for accurately diagnosing patients, to improve online > self-service help desks, to provide tourists and citizens with > specific information regarding cities, prompt customer support via > phone, and much more. I have absolutely no doubt that Watson-like systems can do this.
As a research assistant to a doctor, Watson would be invaluable. It is, in fact, a new kind of search engine with a little more intelligence than a Google type system. And while Google is not an AI, sometimes it feels like it is. Watson isn't a general AI, but it will feel like it is at least some of the time. Honestly, I can't wait to watch Jeopardy tomorrow. > ------------------------- > > This article talks about what the developers are working on: > > > Looks like they are doing some pretty complex stuff in there. No doubt. One clarification on this deal. While it doesn't appear that Watson does much sophisticated natural language processing of the text in its index, it does appear to do very sophisticated NLP of the questions and categories. When that kind of sophistication is applied on the index side as well, it should improve even more. I have no direct evidence that they don't, it just didn't appear to be the case from the NOVA show. -Kelly From kellycoinguy at gmail.com Mon Feb 14 05:46:04 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 22:46:04 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <03ae01cbcb5e$7baf4700$730dd500$@net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <03ae01cbcb5e$7baf4700$730dd500$@net> Message-ID: On Sun, Feb 13, 2011 at 2:14 AM, Amara D. Angelica wrote: > Kelly, I had similar questions, so I interviewed an IBM Watson research > manager. Please see if this helps: > http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ib > m-research-manager. I would be interested in any critiques of this, or > questions for a follow-up interview. "open, pluggable architecture of analytics" sounds like it has an engine, and can add heuristics. If that's the case, then this is a pretty powerful core technology, but it requires that it be "built and tuned to play Jeopardy!" So if I were going to ask follow up questions, I would ask some along these lines... On the NOVA show it talked about adding gender information... is this one of the pluggable pieces you are referring to? When you say "open" do you mean open source? Or open for purchasers of the system to augment? Is this going to be available in a cloud configuration anytime soon? Tell us more about "building" and "tuning"... It appears from the NOVA show that it took 4 years to build and tune the system for Jeopardy, how much effort would it take to build and tune a system for medical diagnosis? Or build a technical support database for say Microsoft Word. It seems that the natural language processing of the questions and categories is very extensive and uses a kind of search tree technology reminiscent of AI search trees used in games such as chess. Is that correct? Tell us more about the index that is build a priori of the raw data that the answers are sought from. Is it indexed, or is there just a brute force algorithm based on keyword searches and then further statistical processing of the results of the keyword search. In other words, what's done prior to the question being asked on the index side of the equation? (I'm sure you could make that question shorter... :-) You talk about Watson "learning", is the learning on the side of understanding the question, finding the answer or both? Are you using neural networks, statistical approaches, or some new approach for that? If developers wanted to build and tune their own solutions on this architecture, how soon do you think it will be available? Is there a business unit working on this yet? 
Are there going to be any papers published by the Watson team? I'm sure I could come up with more questions... but those would be among the ones I would ask first I think. -Kelly From kellycoinguy at gmail.com Mon Feb 14 05:57:02 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 22:57:02 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <03ae01cbcb5e$7baf4700$730dd500$@net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <03ae01cbcb5e$7baf4700$730dd500$@net> Message-ID: On Sun, Feb 13, 2011 at 2:14 AM, Amara D. Angelica wrote: > Kelly, I had similar questions, so I interviewed an IBM Watson research > manager. Please see if this helps: > http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ib > m-research-manager. I would be interested in any critiques of this, or > questions for a follow-up interview. "open, pluggable architecture of analytics" sounds like it has an engine, and can add heuristics. If that's the case, then this is a pretty powerful core technology, but it requires that it be "built and tuned to play Jeopardy!" So if I were going to ask follow up questions, I would ask some along these lines... On the NOVA show it talked about adding gender information... is this one of the pluggable pieces you are referring to? When you say "open" do you mean open source? Or open for purchasers of the system to augment? Is this going to be available in a cloud configuration anytime soon? Tell us more about "building" and "tuning"... It appears from the NOVA show that it took 4 years to build and tune the system for Jeopardy, how much effort would it take to build and tune a system for medical diagnosis? Or build a technical support database for say Microsoft Word. It seems that the natural language processing of the questions and categories is very extensive and uses a kind of search tree technology reminiscent of AI search trees used in games such as chess. Is that correct? Tell us more about the index that is build a priori of the raw data that the answers are sought from. Is it indexed, or is there just a brute force algorithm based on keyword searches and then further statistical processing of the results of the keyword search. In other words, what's done prior to the question being asked on the index side of the equation? (I'm sure you could make that question shorter... :-) You talk about Watson "learning", is the learning on the side of understanding the question, finding the answer or both? Are you using neural networks, statistical approaches, or some new approach for that? If developers wanted to build and tune their own solutions on this architecture, how soon do you think it will be available? Is there a business unit working on this yet? Are there going to be any papers published by the Watson team? What aspect of Watson is the most novel? Or is Watson just putting together the best of what was already out there in a really good way? I'm sure I could come up with more questions... but those would be among the ones I would ask first I think. I really liked your article. It was particularly interesting to listen to them think about what IBM's business model for such things might be. -Kelly From spike66 at att.net Mon Feb 14 05:46:59 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 21:46:59 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <000b01cbcc0a$9a0aa800$ce1ff800$@att.net> >... On Behalf Of Kelly Anderson ... 
>...Honestly, I can't wait to watch Jeopardy tomorrow. -Kelly Ja me too. A robot may be your next best friend. Check this: http://www.cnn.com/2011/OPINION/02/13/breazeal.social.robots/index.html?hpt=C2 I had an idea: Kelly are you single? We disguise you as the new IBM sexbot and have you delivered to Breazeal, with a card in there asking her to be a Beta tester. Think it would work? {8^D spike From kellycoinguy at gmail.com Mon Feb 14 06:06:46 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:06:46 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <4D58093D.9070306@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: On Sun, Feb 13, 2011 at 9:39 AM, Richard Loosemore wrote: > Sadly, this only confirms the deeply skeptical response that I gave earlier. > > I strongly suspected that it was using some kind of statistical "proximity" > algorithms to get the answers. And in that case, we are talking about zero > advancement of AI. > > Back in 1991 I remember having discussions about that kind of research with > someone who thought it was fabulous. I argued that it was a dead end. > > If people are still using it to do exactly the same kinds of task they did > then, can you see what I mean when I say that this is a complete waste of > time? It is even worse than I suspected. For me the question is whether this is useful, not whether it will lead to AGI. Is Watson useful? I would say yes, it is very close to being something useful. Is it on the path to AGI? That's about as relevant as whether we descend directly from gracile australopithecines or robust australopithecines. Yes, that's an interesting question, but you need the competition to see what works out in the end. The evolution of computer algorithms will show that Watson or your stuff or reverse engineering the human brain or something else eventually leads to the answer. Criticizing IBM because you think they are working down the Neanderthal line is irrelevant to the evolutionary and memetic processes. Honestly Richard, you come across as a mad scientist; that is, an angry scientist. All approaches should be equally welcome until one actually works. And saying that they should have spent the money differently is like saying we shouldn't save the $1 million preemie in Boston because that money could have been used to cure blindness in 10,000 Africans. Well, that's true, but the insurance company paying the bill doesn't have any right to cure blindness in Africa with their subscriber's money. IBM has a fiduciary responsibility to the shareholders, and Watson will earn them money if they do it right. -Kelly From kellycoinguy at gmail.com Mon Feb 14 06:18:19 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:18:19 -0700 Subject: [ExI] Anons In-Reply-To: <03af01cbcb5f$e71e37c0$b55aa740$@net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> <03af01cbcb5f$e71e37c0$b55aa740$@net> Message-ID: On Sun, Feb 13, 2011 at 2:25 AM, Amara D. Angelica wrote: > Kelly, I really love this traffic idea. Sort of an emergent order concept.
Here is the original article (I remember a few more pictures in the magazine) http://www.wired.com/wired/archive/12.12/traffic.html > Would be fun to take it a step further and create a new kind of town with no > defined roads, no sidewalks, information signals that combine to control > computer-automated vehicles (supersedes driven cars and traffic signals) for > people and machines that are generated by sensors for ad hoc movements of > objects, wind, noise, thoughts ("I want to cross the street" -- I want to > dance here now), deformable structures (car to building: "may I take a > shortcut through you?"), instant 3D-printed structures that can be morphed > into different purposes.... [pause to let someone else co-invent ...] Blending car traffic with pedestrians is interesting... but I wouldn't take it too far... :-) -Kelly From amara at kurzweilai.net Mon Feb 14 06:39:30 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 22:39:30 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> <03af01cbcb5f$e71e37c0$b55aa740$@net> Message-ID: <033301cbcc11$ef3e77f0$cdbb67d0$@net> Thanks. Or at least if I did take it too far, do it in sim first.... Or maybe pedestrians could drive the vehicles? "Destination?" "Elm Street." "OK, no problem, we'll take you there after dessert." -----Original Message----- From: Kelly Anderson [mailto:kellycoinguy at gmail.com] Sent: Sunday, February 13, 2011 10:18 PM To: amara at kurzweilai.net; ExI chat list Subject: Re: [ExI] Anons On Sun, Feb 13, 2011 at 2:25 AM, Amara D. Angelica wrote: > Kelly, I really love this traffic idea. Sort of an emergent order concept. Here is the original article (I remember a few more pictures in the magazine) http://www.wired.com/wired/archive/12.12/traffic.html > Would be fun to take it a step further and create a new kind of town with no > defined roads, no sidewalks, information signals that combine to control > computer-automated vehicles (supersedes driven cars and traffic signals) for > people and machines that are generated by sensors for ad hoc movements of > objects, wind, noise, thoughts ("I want to cross the street" -- I want to > dance here now), deformable structures (car to building: "may I take a > shortcut through you?"), instant 3D-printed structures that can be morphed > into different purposes.... [pause to let someone else co-invent ...] Blending car traffic with pedestrians is interesting... but I wouldn't take it too far... :-) -Kelly From amara at kurzweilai.net Mon Feb 14 06:35:15 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 22:35:15 -0800 Subject: [ExI] Watson on NOVA Message-ID: <032e01cbcc11$576358b0$062a0a10$@net> Kelly, thanks. These are excellent questions, which I'll include in a follow-up interview. We just posted three IBM videos that discuss customer service, finance, and healthcare applications; and two more on other Watson design issues, including one related to building the system: http://www.kurzweilai.net/videos. -----Original Message----- From: Kelly Anderson [mailto:kellycoinguy at gmail.com] Sent: Sunday, February 13, 2011 9:57 PM To: amara at kurzweilai.net; ExI chat list Subject: Re: [ExI] Watson on NOVA On Sun, Feb 13, 2011 at 2:14 AM, Amara D. Angelica wrote: > Kelly, I had similar questions, so I interviewed an IBM Watson research > manager. 
Please see if this helps: > http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ib > m-research-manager. I would be interested in any critiques of this, or > questions for a follow-up interview. "open, pluggable architecture of analytics" sounds like it has an engine, and can add heuristics. If that's the case, then this is a pretty powerful core technology, but it requires that it be "built and tuned to play Jeopardy!" So if I were going to ask follow up questions, I would ask some along these lines... On the NOVA show it talked about adding gender information... is this one of the pluggable pieces you are referring to? When you say "open" do you mean open source? Or open for purchasers of the system to augment? Is this going to be available in a cloud configuration anytime soon? Tell us more about "building" and "tuning"... It appears from the NOVA show that it took 4 years to build and tune the system for Jeopardy, how much effort would it take to build and tune a system for medical diagnosis? Or build a technical support database for say Microsoft Word. It seems that the natural language processing of the questions and categories is very extensive and uses a kind of search tree technology reminiscent of AI search trees used in games such as chess. Is that correct? Tell us more about the index that is build a priori of the raw data that the answers are sought from. Is it indexed, or is there just a brute force algorithm based on keyword searches and then further statistical processing of the results of the keyword search. In other words, what's done prior to the question being asked on the index side of the equation? (I'm sure you could make that question shorter... :-) You talk about Watson "learning", is the learning on the side of understanding the question, finding the answer or both? Are you using neural networks, statistical approaches, or some new approach for that? If developers wanted to build and tune their own solutions on this architecture, how soon do you think it will be available? Is there a business unit working on this yet? Are there going to be any papers published by the Watson team? What aspect of Watson is the most novel? Or is Watson just putting together the best of what was already out there in a really good way? I'm sure I could come up with more questions... but those would be among the ones I would ask first I think. I really liked your article. It was particularly interesting to listen to them think about what IBM's business model for such things might be. -Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Mon Feb 14 06:53:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:53:12 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <000b01cbcc0a$9a0aa800$ce1ff800$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <000b01cbcc0a$9a0aa800$ce1ff800$@att.net> Message-ID: On Sun, Feb 13, 2011 at 10:46 PM, spike wrote: > >>... On Behalf Of Kelly Anderson > ... > >>...Honestly, I can't wait to watch Jeopardy tomorrow. ?-Kelly > > > Ja me too. > > A robot may be your next best friend. ?Check this: > > http://www.cnn.com/2011/OPINION/02/13/breazeal.social.robots/index.html?hpt= > C2 A nice fluff piece. > I had an idea: Kelly are you single? ?We disguise you as the new IBM sexbot > and have you delivered to Breazeal, with a card in there asking her to be a > Beta tester. ?Think it would work? ?{8^D On the Internet, nobody knows you're a dog... 
:-) I'm open for testing nearly any new technology... :-) -Kelly From rpwl at lightlink.com Mon Feb 14 13:24:32 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 14 Feb 2011 08:24:32 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: <4D592D10.6010404@lightlink.com> Kelly Anderson wrote: > On Sun, Feb 13, 2011 at 9:39 AM, Richard Loosemore wrote: >> Sadly, this only confirms the deeply skeptical response that I gave earlier. >> >> I strongly suspected that it was using some kind of statistical "proximity" >> algorithms to get the answers. And in that case, we are talking about zero >> advancement of AI. >> >> Back in 1991 I remember having discussions about that kind of research with >> someone who thought it was fabulous. I argued that it was a dead end. >> >> If people are still using it to do exactly the same kinds of task they did >> then, can you see what I mean when I say that this is a complete waste of >> time? It is even worse than I suspected. > > For me the question is whether this is useful, not whether it will lead to AGI. > > Is Watson useful? I would say yes, it is very close to being something useful. > > Is it on the path to AGI? That's about as relevant as whether we > descend directly from gracile australopithecines or robust > australopithecinesthe. Yes, that's an interesting question, but you > need the competition to see what works out in the end. The evolution > of computer algorithms will show that Watson or your stuff or reverse > engineering the human brain or something else eventually leads to the > answer. Criticizing IBM because you think they are working down the > Neanderthal line is irrelevant to the evolutionary and memetic > processes. > > Honestly Richard, you come across as a mad scientist; that is, an > angry scientist. All approaches should be equally welcome until one > actually works. And saying that they should have spent the money > different is like saying we shouldn't save the $1 million preemie in > Boston because that money could have been used to cure blindness in > 10,000 Africans. Well, that's true, but the insurance company paying > the bill doesn't have any right to cure blindness in Africa with their > subscriber's money. IBM has a fiduciary responsibility to the > shareholders, and Watson will earn them money if they do it right. :-) Well, first off, don't get me wrong, because I say all this with a smile. When I went to the AGI-09 conference, there was one guy there (Ed Porter) who had spent many hours getting mad at me online, and he was eager to find me in person. He spent the first couple of days failing to locate me in a gathering of only 100 people, all of whom were wearing name badges, because he was looking for some kind of mad, sullen, angry grump. The fact that I was not old, and was smiling, talking and laughing all the time meant that he didn't even bother to look at my name badge. We got along just great for the rest of the conference. ;-) Anyhow. Just keep in mind one thing. I criticize projects like Watson because if you look deeply at the history of AI you will notice that it seems to be an unending series of cheap tricks, all touted to be the beginning of something great. But so many of these so-called "advances" were then followed by a dead end. After watching this process happen over and over again, you can start to recognize the symptoms of yet another one. 
The positive spin on Watson that you give, above, is way too optimistic. It is not a parallel approach, valid and worth considering in its own right. It will not make IBM any money (Big Blue didn't). It has to run on a supercomputer. It is not competition to any real AI project, because it just does a narrow-domain task in a way that does not generalize to more useful tasks. It will probably not be useful, because it cheats: it uses massive supercomputing power to crack a nut. As a knowledge assistant that could help doctors with diagnosis: fine, but it is not really pushing the state of the art at all. There are already systems that do that, and the only difference between them and Watson is..... you cannot assign one supercomputer to each doctor on the planet! The list goes on and on. But there is no point laboring it. Here is my favorite Watson mistake, reported by NPR this morning: Question: "What do grasshoppers eat?" Notice that this question contains very few words, meaning that Watson's cluster-analysis algorithm has very little context to work with here: all it can do is find contexts in which the words "eat" and "grasshopper" are in close proximity. So what answer did Watson give: "What is 'kosher'?" Sigh! ;-) Richard Loosemore From kellycoinguy at gmail.com Mon Feb 14 17:02:19 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 14 Feb 2011 10:02:19 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <4D592D10.6010404@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> Message-ID: On Mon, Feb 14, 2011 at 6:24 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > :-) > > Well, first off, don't get me wrong, because I say all this with a smile. > ?When I went to the AGI-09 conference, there was one guy there (Ed Porter) > who had spent many hours getting mad at me online, and he was eager to find > me in person. ?He spent the first couple of days failing to locate me in a > gathering of only 100 people, all of whom were wearing name badges, because > he was looking for some kind of mad, sullen, angry grump. ?The fact that I > was not old, and was smiling, talking and laughing all the time meant that > he didn't even bother to look at my name badge. ?We got along just great for > the rest of the conference. ?;-) I'm glad to hear you aren't grumpy in person... but you do come off that way online.. :-) > Anyhow. > > Just keep in mind one thing. ?I criticize projects like Watson because if > you look deeply at the history of AI you will notice that it seems to be an > unending series of cheap tricks, all touted to be the beginning of something > great. ? But so many of these so-called "advances" were then followed by a > dead end. ?After watching this process happen over and over again, you can > start to recognize the symptoms of yet another one. I understood this to be your position. > The positive spin on Watson that you give, above, is way too optimistic. ?It > is not a parallel approach, valid and worth considering in its own right. > ?It will not make IBM any money (Big Blue didn't). ?It has to run on a > supercomputer. Google runs on a supercomputer too. The same basic kind of supercomputer. Also, an iPhone has the computational power of NORAD circa 1965... so lots of extra computation can buy you a lot, even if not AGI all by itself. > It is not competition to any real AI project, because it > just does a narrow-domain task in a way that does not generalize to more > useful tasks. 
?It will probably not be useful, because it cheats: ?it uses > massive supercomputing power to crack a nut. I think answering questions is a generally useful task. > As a knowledge assistant that could help doctors with diagnosis: ?fine, but > it is not really pushing the state of the art at all. ?There are already > systems that do that, and the only difference between them and Watson > is..... you cannot assign one supercomputer to each doctor on the planet! Of course you can. Put it online, time share it, put it in the cloud. All this works fine. Most doctors wouldn't use such a system for more than a few minutes a week since most of their work is pretty routine. > The list goes on and on. ?But there is no point laboring it. > > Here is my favorite Watson mistake, reported by NPR this morning: > > Question: ?"What do grasshoppers eat?" > > Notice that this question contains very few words, meaning that Watson's > cluster-analysis algorithm has very little context to work with here: all it > can do is find contexts in which the words "eat" and "grasshopper" are in > close proximity. ?So what answer did Watson give: > > "What is 'kosher'?" > > Sigh! ? ;-) As for IBM making money from Deep Blue, I would ask did Americans benefit from the space program? Research isn't made to directly make money, but to lead the company in directions that will make money. Last time I checked, IBM was still profitable. Without research, they soon would not be profitable. What Watson tells the world is that IBM is still relevant. If that supports their stock price, then the Watson team has earned their money. There are now world class chess programs that run on cell phones. In ten years, there will be Watson like programs running on cell phone sized devices, but working better. I'm not impressed by Watson mistakes. We KNOW it isn't intelligent, it just does what it does better than most humans. Over the next three days, we'll see if it does what it does better than the very best humans. Ken Jennings lives around here somewhere. I am kind of surprised I've never run into him. -Kelly From lubkin at unreasonable.com Mon Feb 14 17:39:28 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 14 Feb 2011 12:39:28 -0500 Subject: [ExI] Treating Western diseases Message-ID: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> Treating autism, Crohn's disease, multiple sclerosis, etc. with intentionally ingesting parasites. The squeamish of you (if any) should get past any "ew, gross!" reaction and read this. It may be very important for someone you love and have implications on life extension. I heard about it from Patri. http://www.the-scientist.com/2011/2/1/42/1/ -- David. From sjatkins at mac.com Mon Feb 14 19:17:02 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 14 Feb 2011 11:17:02 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <4D597FAE.9050208@mac.com> On 02/13/2011 09:29 PM, Kelly Anderson wrote: > On Sun, Feb 13, 2011 at 2:28 AM, BillK wrote: >> On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: >> IBM PR makes big claims for Watson (but that's their job :) ). >> >> Quote: >> Watson's ability to understand the meaning and context of human >> language, and rapidly process information to find precise answers to >> complex questions, holds enormous potential to transform how computers >> help people accomplish tasks in business and their personal lives. 
>> Watson will enable people to rapidly find specific answers to complex >> questions. The technology could be applied in areas such as >> healthcare, for accurately diagnosing patients, to improve online >> self-service help desks, to provide tourists and citizens with >> specific information regarding cities, prompt customer support via >> phone, and much more. > I have absolutely no doubt that Watson-like systems can do this. As a > research assistant to a doctor, Watson would be invaluable. It is, in > fact, a new kind of search engine with a little more intelligence than > a Google type system. And while Google is not an AI, sometimes it > feels like it is. Watson isn't a general AI, but it will feel like it > is at least some of the time. > > Honestly, I can't wait to watch Jeopardy tomorrow. > >> ------------------------- >> >> This article talks about what the developers are working on: >> >> >> Looks like they are doing some pretty complex stuff in there. > No doubt. One clarification on this deal. While it doesn't appear that > Watson does much sophisticated natural language processing of the text > in its index, it does appear to do very sophisticated NLP of the > questions and categories. How much sophistication does it need to prune its search of its jeopardy database? Not all that much. It is not doing any sort of general modelling of the speaker's mind, any sort of concept formation, taking note of any but the fixed context of jeopardy and fixed question categories. So how does one leap to general wonderful NLP capabilities and being a good basis for creating a doctor's assistant? - s From sjatkins at mac.com Mon Feb 14 19:25:50 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 14 Feb 2011 11:25:50 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: <4D5981BE.5010405@mac.com> On 02/12/2011 11:59 AM, Darren Greer wrote: > Damien wrote: > > >The technical name for what you prefer is "corporate fascism". That > doesn't have a really compelling history.< > > I agree Damien. When the definition of fascism was entered for the > first time in the Encyclopedia Italiano, Mussolini suggested that > corporatism was a more accurate name for that type of arrangement than > fascism anyway. Fascism is *by definition* the merging of business and > government, most often accompanied by rabid nationalism and sometimes > overt racism. > > But not always, which might make modern fascism difficult to recognize > because we always assume holocaust-type ethnic cleansing comes with > it. It doesn't. Italy followed Germany's Wannsee Conference directives > (because it was under political pressure to do so) but not to the > letter. For some of the war Mussolini allowed the north west of the > country to become a kind of protectorate for Jews who had fled other > parts of the state. The ax fell on them only after Italy fell, when > the allies invaded the south and Germany took the north to meet them. > I got this story from John Keegan's The Second World War, which is an > excellent book by the way. > > > Actually, the Constitution was intended to limit the government to what it expressly allows it to do. Since it does not mention allowing the government to meddle in the economy the way it does or to do state-corporate minglings, it is Constitutionally illegal for the government to do so.
Also, the government has to have gone far beyond its Constitutional charter in the first place to have enough power and money to be so attractive a target to merge with. So it is perfectly clear which came first in terms of culpability. > Spike wrote: > > > >In any case it would be far preferable to government takeover of > corporations.< > > Is there a difference? When governments and corporations merge, does > it matter who made the first move? Given the checkered history of IBM, > Ford, Chase Manhattan, etc, not to mention the America First Committee > and the role of prominent industrialists like Ford in trying to keep > the U.S. out of World War II for business reasons, perhaps it should > be illegal. Currently we try to prevent the merging of the two with > market regulation and not through legislation, which doesn't seem to > be working all that well. The repeal of Glass-Steagall and the housing > market crash is a good example of that failure. > > Don't mean to sound testy, or confrontational, Spike. I have a bit of > a bee in my bonnet about what seems to be a widespread > misunderstanding of exactly what fascism is and how easily it could > happen again. The U.S. congress has a fasces engraved on a wall > somewhere inside, by the way. Don't know its history, or what genius > decided it was a good idea, but it has always made me wonder. > > darren > > > > I > > On Sat, Feb 12, 2011 at 12:48 PM, Damien Broderick > > wrote: > > On 2/12/2011 9:46 AM, spike wrote: > > Our constitution is set up to maintain separation of church > and state. > It doesn't say anything about separation of corporation and > state. As far > as I can tell the latter would be perfectly legal. In any > case it would be > far preferable to government takeover of corporations. > > > The technical name for what you prefer is "corporate fascism". > That doesn't have a really compelling history. > > Here's a random-selected thumbnail: > > > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > -- > /There is no history, only biography./ > / > / > /-Ralph Waldo Emerson > / > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Feb 15 00:19:41 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 14 Feb 2011 20:19:41 -0400 Subject: [ExI] Watson On Jeopardy Message-ID: I just watched Watson take on the world champs on Jeopardy. The first game is spread over two episodes, so no winner yet. Watson is in the lead. Just wondering if others watched and what the general opinion was. I know there are those here that think Watson really doesn't meant much in terms of AI advancement. However, I was pretty inspired watching him. Not by the interface, but by the fact that AI has made it to prime time. Also I was pretty jazzed by the fact that he got all The Beatles questions and one on the Lord of the Rings correct. It was strange, and kind of thrilling, to hear a computer answer questions about these very human, and for me very personal, subjects. Watson is an idiot savant, of course. He doesn't know what these things mean to us. But I realized while watching that AI of the future might. We talk a lot here about friendly AI. 
Has anyone considered or discussed before that it could be something as simple as a Shakespeare or a Mahler that saves us? Look forward to hearing the opinions/experiences of others with the show. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Feb 15 03:28:55 2011 From: spike66 at att.net (spike) Date: Mon, 14 Feb 2011 19:28:55 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: Message-ID: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> On Behalf Of Darren Greer Subject: [ExI] Watson On Jeopardy . Watson is an idiot savant, of course. He doesn't know what these things mean to us. Darren But we don't know what these things mean to Watson. So I would call it a draw. I don't have commercial TV, and can't find live streaming. I understand they are showing the next episode tomorrow and Wednesday? I will make arrangements with one of the neighbors to watch it. The news sites say it is tied between Watson and one of the carbons, with the other carbon back a few thousand dollars. Go Watson! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Feb 15 03:59:25 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 14 Feb 2011 21:59:25 -0600 Subject: [ExI] Watson On Jeopardy In-Reply-To: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: <4D59FA1D.5000902@satx.rr.com> On 2/14/2011 9:28 PM, spike wrote: > I don?t have commercial TV, and can?t find live streaming. I don't have TV, period. Anyone have a link? Some minimal searching got me nowhere (although Watson would have told me). Damien Broderick From x at extropica.org Tue Feb 15 04:09:27 2011 From: x at extropica.org (x at extropica.org) Date: Mon, 14 Feb 2011 20:09:27 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D59FA1D.5000902@satx.rr.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: > On 2/14/2011 9:28 PM, spike wrote: > >> I don?t have commercial TV, and can?t find live streaming. > > I don't have TV, period. Anyone have a link? From spike66 at att.net Tue Feb 15 05:56:34 2011 From: spike66 at att.net (spike) Date: Mon, 14 Feb 2011 21:56:34 -0800 Subject: [ExI] comet encounter in real time Message-ID: <003a01cbccd5$1a9da330$4fd8e990$@att.net> Check this, a Lockheed Martin product is having a close encounter with a comet: http://interactive.foxnews.com/livestream/live.html?chanId=4 From alito at organicrobot.com Tue Feb 15 07:43:41 2011 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Tue, 15 Feb 2011 18:43:41 +1100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D59FA1D.5000902@satx.rr.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: <4D5A2EAD.6030209@organicrobot.com> On 02/15/11 14:59, Damien Broderick wrote: > On 2/14/2011 9:28 PM, spike wrote: > >> I don?t have commercial TV, and can?t find live streaming. > > I don't have TV, period. Anyone have a link? Some minimal searching got > me nowhere (although Watson would have told me). 
Part 1 http://www.youtube.com/watch?v=4PSPvHcLnN0 Part 2 http://www.youtube.com/watch?v=CtHlxzOXgYs From jedwebb at hotmail.com Tue Feb 15 08:52:10 2011 From: jedwebb at hotmail.com (Jeremy Webb) Date: Tue, 15 Feb 2011 08:52:10 +0000 Subject: [ExI] The Future of Computing In-Reply-To: <4D5A2EAD.6030209@organicrobot.com> Message-ID: I thought this was funny... Jeremy Webb http://www.theonion.com/articles/interim-apple-chief-under-fire-after-unveil ing-gro,19111/ Jeremy Webb Heathen Vitki Tel: (07758) 966076 e-Mail: jedwebb at hotmail.com http://jeremywebb301.tripod.com/vikssite/index.html From darren.greer3 at gmail.com Tue Feb 15 10:52:37 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 15 Feb 2011 06:52:37 -0400 Subject: [ExI] Watson On Jeopardy In-Reply-To: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: Spike wrote: >But we don?t know what these things mean to Watson. So I would call it a draw < Point taken. It was funny. They did a bit on Watson's development during the get-to know-the-contestants portion of the show which described how he associates possible answers with the words that appear in the question using algorithms and then narrows them down and chooses the most likely. I can see how someone might say this was a parlour trick. At the same time, it occurred to me that I answer questions in the same way. For the Lord Of The Rings question, it was asked who could be found at Barad-dur and was a great eye. Watson and I got the answer at the same time. I did it by, at chemical synaptic speeds like the rest of us meat computational devices, by pulling up Lord of The Rings when I saw Barad-dur, cross-referenced with 'eye' and got the answer Sauron. Likely Watson associated Barad-dur with the Lord of The Rings also, cycled through all the characters, the author, all books by the author, and then likely cross-referenced eye and Sauron as well. One thing though. When he gets it wrong, he really gets it wrong. One question asked the name of the place where a train both begins and ends. It was 'terminus.' Watson said 'Venice.' I found this quite funny, and was wondering what the algorithms brought up to give him such an answer. I did a search on the 'net for trains and Venice to see if I could come up with a strong connection that he might have found in his databanks, but I didn't find one. Clearly though, when the words in the question have a wide-range of possible associations and subtly different meanings, he has more trouble. It makes sense that he was a whiz with the music and book questions. The Beatles would bring up a fairly small number of words as associates, and since they were looking for song titles by providing some of the lyrics, narrowing it down quickly would be fairly easy. I got the terminus question, not because I ever use the word (because here in Canada we don't) but because I once acted in The Importance of Being Earnest, by Oscar Wilde, where the word is used with great good humour. This is where Watson falls short, it seems to me. This ability not to just associate words and literal meanings, but finding them based on their connotative power which is anchored in personal experience and thereby stored in more accessible and active memory cells in the brain. Also Watson thinks only in language where I often think in images. So one word in a question might bring up an image which I simply have to provide another word for to get an answer. That doesn't diminish him anyway. 
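That cross-referencing step can be made concrete with a very small sketch; the association table below is invented for illustration and is obviously nothing like the evidence sources a real system draws on. Each clue term simply votes for the entities it is linked with, and whatever collects the most votes is proposed as the answer.

from collections import Counter

# toy association table: clue term -> entities it points at (invented data)
associations = {
    "barad-dur": {"Sauron", "Mordor", "The Lord of the Rings", "Tolkien"},
    "great eye": {"Sauron", "Eye of Providence", "London Eye"},
    "tower":     {"Sauron", "Eiffel Tower", "Tower of London"},
}

def answer(clue_terms):
    votes = Counter()
    for term in clue_terms:
        for entity in associations.get(term, ()):
            votes[entity] += 1
    return votes.most_common(1)[0][0] if votes else None

print(answer(["barad-dur", "great eye"]))
# -> 'Sauron': the only entity both clue terms point at, so it wins with two
#    votes while everything else gets one.

On a clue like the Barad-dur one the intersection is clean and the right name falls out; on a clue like the terminus one the associations pull in several directions at once, which is one plausible reading of how the stranger wrong answers happen.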
After all he is winning (or is tied, as Spike pointed out.) And he certainly got me thinking. d. 2011/2/14 spike > > > > > *On Behalf Of *Darren Greer > *Subject:* [ExI] Watson On Jeopardy > > > > ... Watson is an idiot savant, of course. He doesn't know what these things > mean to us... Darren > > > > > > > > But we don't know what these things mean to Watson. So I would call it a > draw. > > > > I don't have commercial TV, and can't find live streaming. I understand > they are showing the next episode tomorrow and Wednesday? I will make > arrangements with one of the neighbors to watch it. The news sites say it > is tied between Watson and one of the carbons, with the other carbon back a > few thousand dollars. > > > > Go Watson! > > > > spike > > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Feb 15 11:24:37 2011 From: pharos at gmail.com (BillK) Date: Tue, 15 Feb 2011 11:24:37 +0000 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: 2011/2/15 Darren Greer wrote: > One thing though. When he gets it wrong, he really gets it wrong. One > question asked the name of the place where a train both begins and ends. It > was 'terminus.' Watson said 'Venice.' I found this quite funny, and was > wondering what the algorithms brought up to give him such an answer. I did a > search on the 'net for trains and Venice to see if I could come up with a > strong connection that he might have found in his databanks, but I didn't > find one. > > No, you misheard. Watson was closer than that. Quote: The first one he got wrong was something like "A bus trip can either begin or end here, from the Latin for end." Watson responded "What is finis." That was wrong and Jennings chimed in with the correct "Terminal." So Watson answered with the literal Latin for end (terminus also means end). ---------------------- BillK From alfio.puglisi at gmail.com Tue Feb 15 11:59:08 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 15 Feb 2011 12:59:08 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: On Tue, Feb 15, 2011 at 12:24 PM, BillK wrote: > 2011/2/15 Darren Greer wrote: > > > One thing though. When he gets it wrong, he really gets it wrong. One > > question asked the name of the place where a train both begins and ends. > It > > was 'terminus.' Watson said 'Venice.' I found this quite funny, and was > > wondering what the algorithms brought up to give him such an answer. I > did a > > search on the 'net for trains and Venice to see if I could come up with a > > strong connection that he might have found in his databanks, but I didn't > > find one. > > > > > > No, you misheard. Watson was closer than that. > > > > Quote: > The first one he got wrong was something like "A bus trip can either > begin or end here, from the Latin for end." Watson responded "What is > finis." That was wrong and Jennings chimed in with the correct > "Terminal." So Watson answered with the literal Latin for end > (terminus also means end). But even the "Venice" misunderstanding makes sense: Venice's train station is a terminal, otherwise trains would fall into the sea... 
A google map view: http://maps.google.com/?ie=UTF8&ll=45.442052,12.320116&spn=0.010012,0.026157&t=h&z=16 Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Tue Feb 15 14:00:48 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 09:00:48 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> Message-ID: <4D5A8710.2030403@lightlink.com> Kelly Anderson wrote: > On Mon, Feb 14, 2011 at 6:24 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >> :-) >> >> Well, first off, don't get me wrong, because I say all this with a smile. >> When I went to the AGI-09 conference, there was one guy there (Ed Porter) >> who had spent many hours getting mad at me online, and he was eager to find >> me in person. He spent the first couple of days failing to locate me in a >> gathering of only 100 people, all of whom were wearing name badges, because >> he was looking for some kind of mad, sullen, angry grump. The fact that I >> was not old, and was smiling, talking and laughing all the time meant that >> he didn't even bother to look at my name badge. We got along just great for >> the rest of the conference. ;-) > > I'm glad to hear you aren't grumpy in person... but you do come off > that way online.. :-) One further thought. I think I figured out the reason for your "mad scientist" remark, and I feel I should briefly comment on that. I did make a statement, earlier in the discussion, about being one of the few people actually in a position to build a real AGI. I should clarify: this was not really a bragging exercise (well, okay, a little), but a comment about the nature of AI research and the particular point in history where I think we are at the moment. There is nothing special about me, personally, there is just a peculiar fact about the kind of people doing AI research, and the particular obstacle that I believe is holding up that research at the moment. My comment was an expression of my belief that real progress will depend on an understanding of the complex systems problem -- but because of an accident of academic dynamics, there happen to be very few people in the world at the moment who understand that problem. Give me a hundred smart, receptive minds right now, and three years to train 'em up, and there could be a hundred people who could build an AGI (and probably better than I could). So, just to say, don't interpret the previous comment to be too much of a mad scientist comment ;-) Richard Loosemore From darren.greer3 at gmail.com Tue Feb 15 14:48:31 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 15 Feb 2011 10:48:31 -0400 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: >The first one he got wrong was something like "A bus trip can either begin or end here, from the Latin for end." Watson responded "What is finis." That was wrong and Jennings chimed in with the correct "Terminal." So Watson answered with the literal Latin for end (terminus also means end)< Yup. I stand corrected. d. > 15, 2011 at 12:24 PM, BillK wrote: > >> 2011/2/15 Darren Greer wrote: >> >> > One thing though. When he gets it wrong, he really gets it wrong. One >> > question asked the name of the place where a train both begins and ends. >> It >> > was 'terminus.' Watson said 'Venice.' 
I found this quite funny, and was >> > wondering what the algorithms brought up to give him such an answer. I >> did a >> > search on the 'net for trains and Venice to see if I could come up with >> a >> > strong connection that he might have found in his databanks, but I >> didn't >> > find one. >> > >> > >> >> No, you misheard. Watson was closer than that. >> >> >> >> Quote: >> The first one he got wrong was something like "A bus trip can either >> begin or end here, from the Latin for end." Watson responded "What is >> finis." That was wrong and Jennings chimed in with the correct >> "Terminal." So Watson answered with the literal Latin for end >> (terminus also means end). > > > > But even the "Venice" misunderstanding makes sense: Venice's train station > is a terminal, otherwise trains would fall into the sea... > A google map view: > http://maps.google.com/?ie=UTF8&ll=45.442052,12.320116&spn=0.010012,0.026157&t=h&z=16 > > Alfio > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Tue Feb 15 16:10:47 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 11:10:47 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> What I'm curious about is to what extent Watson learns from his mistakes. Not by his programmers adding a new trigger pattern or tweaking parameters, but by learning processes within Watson. Most? successful people and organizations view their mistakes as a tremendous opportunity for improvement. After several off-by-one errors in my code, I realize I am prone to those errors, and specially check for them. When I see repeated misunderstanding of the referent of pronouns, I add the practice of pausing a conversation to clarify who "they" refers to where it's ambiguous. Limited to Jeopardy, it isn't always clear what kind of question a category calls for. Champion players will immediately discern why a question was ruled wrong and adapt their game on the fly. Parenthetically, there is a divide in competitions between playing the game and playing your opponent. Take chess. Some champions make the objectively best move. Emanuel Lasker chose "lesser" moves that he calculated would succeed against *that* player. Criticized for it, he'd point out that he won the game, didn't he? I wonder how often contestants deliberately don't press their buzzer because they assess that one of their opponents will think they know the answer but will get it wrong. Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes, gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer, I'm up $2400. No one buzzes, I've lost a chance to be up $1200. I suspect that it doesn't happen very often because of the pressure of the moment. (I know contestants but asking them wouldn't answer the question.) If so, that's another way for Watson to have an edge. (Except that last night showed that Watson doesn't yet know what the other players' answers were. Watson 2.0 would listen to the game. Build a profile of each player. 
Which questions they buzzed on, how long it took, how long it took after buzzing for them to speak their answer, voice-stress analysis of how confident they sounded, how correct the answer was. (Essentially part of what an expert poker player does.) I also wonder about the psychological elements. Some players seem to dominate a Jeopardy game. If you were playing Ken Jennings in his 63rd game, or a single game opponent who's up by $15,000, would you play better than you otherwise would or worse? (The initial strong lead that Watson had could have intimidated lesser adversaries.) -- David. From rpwl at lightlink.com Tue Feb 15 16:45:27 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 11:45:27 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> Message-ID: <4D5AADA7.8060209@lightlink.com> David Lubkin wrote: > What I'm curious about is to what extent Watson learns from his mistakes. > Not by his programmers adding a new trigger pattern or tweaking > parameters, but by learning processes within Watson. > > Most? successful people and organizations view their mistakes as a > tremendous opportunity for improvement. After several off-by-one > errors in my code, I realize I am prone to those errors, and specially > check for them. When I see repeated misunderstanding of the referent > of pronouns, I add the practice of pausing a conversation to clarify who > "they" refers to where it's ambiguous. > > Limited to Jeopardy, it isn't always clear what kind of question a > category calls for. Champion players will immediately discern why a > question was ruled wrong and adapt their game on the fly. > > Parenthetically, there is a divide in competitions between playing the > game and playing your opponent. Take chess. Some champions > make the objectively best move. Emanuel Lasker chose "lesser" > moves that he calculated would succeed against *that* player. > Criticized for it, he'd point out that he won the game, didn't he? > > I wonder how often contestants deliberately don't press their buzzer > because they assess that one of their opponents will think they know > the answer but will get it wrong. > > Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes, > gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer, > I'm up $2400. No one buzzes, I've lost a chance to be up $1200. > > I suspect that it doesn't happen very often because of the pressure of > the moment. (I know contestants but asking them wouldn't answer > the question.) If so, that's another way for Watson to have an edge. > > (Except that last night showed that Watson doesn't yet know what > the other players' answers were. Watson 2.0 would listen to the game. > Build a profile of each player. Which questions they buzzed on, how > long it took, how long it took after buzzing for them to speak their > answer, voice-stress analysis of how confident they sounded, how > correct the answer was. (Essentially part of what an expert poker > player does.) > > I also wonder about the psychological elements. Some players > seem to dominate a Jeopardy game. If you were playing Ken > Jennings in his 63rd game, or a single game opponent who's up > by $15,000, would you play better than you otherwise would or > worse? (The initial strong lead that Watson had could have > intimidated lesser adversaries.) 
This is *way* beyond anything that Watson is doing. What it does, essentially, is this: It analyzes (*) a vast collection of writings. It records every content word that it sees, and measures how "near" that word is to others in each sentence ... i.e. how many other words come in between. It then adjusts these "nearness" measures as time goes on, to get averages. So if its very first text input is "Mary had a little lamb" it would record "Mary" and "lamb", and give them a distance of 4. If it then saw "Mary Queen of Scots" it would record a distance of 3 between "Mary" and "Scot", and it would increase the distance between "Mary" and "lamb", because "lamb" was not in the second sentence. And on and on and on. Through billions of pages of text. It would then have a table with one column for every word in the language and one row for every word, and each entry is the average "distance" between the words. Then, when given a Jeopardy problem, it looks for answer words (or possibly phrases?) that are very near to the content words in the given sentence. Then it forms a question with that word or phrase as the object, and it's done. Hence: "What food do grasshoppers eat?" Answer: "Kosher", because the most frequent places where "food" and "grasshopper" were mentioned in all those billions of input texts, were in places discussing the fact that grasshoppers are a food that is kosher. Apart from various bits of peripheral processing to catch easy cases, to look for little tricks, and to eliminate useless non-content words, etc etc., that is all it does. It is a brick-stupid cluster analysis program. So, does Watson think about what the other contestants might be doing? Err, that would be "What is 'you have got to be joking'?" Richard Loosemore P.S. (*) I am inferring the algorithm based on reports coming out of there, and the way it makes mistakes. I have not seen the code, obviously. From spike66 at att.net Tue Feb 15 16:58:12 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 08:58:12 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5A8710.2030403@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> Message-ID: <008c01cbcd31$8805bc80$98113580$@att.net> ...On Behalf Of Richard Loosemore >...There is nothing special about me, personally, there is just a peculiar fact about the kind of people doing AI research, and the particular obstacle that I believe is holding up that research at the moment... Ja, but when you say "research" in reference to AI, keep in mind the actual goal isn't the creation of AGI, but rather the creation of AGI that doesn't kill us. After seeing the amount of progress we have made in nanotechnology in the quarter century since the K.Eric published Engines of Creation, I have concluded that replicating nanobots are a technology that is out of reach of human capability. We need AI to master that difficult technology. Without replicating assemblers, we probably will never be able to read and simulate frozen or vitrified brains. So without AI, we are without nanotech, and consequently we are all doomed, along with our children and their children forever. On the other hand, if we are successful at doing AI wrong, we are all doomed right now. It will decide it doesn't need us, or just sees no reason why we are useful for anything. 
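A minimal sketch, in Python, of the word-proximity bookkeeping Richard describes in his post above. This is only an illustration of his inferred "nearness" scheme, not IBM's actual DeepQA pipeline; the toy corpus, stopword list, default distance and function names are all invented for the example.

from collections import defaultdict
from itertools import combinations

STOPWORDS = {"a", "the", "of", "had", "do", "what", "is", "are", "was"}

def content_words(sentence):
    # Drop punctuation and common function words, keeping "content" words.
    words = [w.strip(".,?!\"'").lower() for w in sentence.split()]
    return [w for w in words if w and w not in STOPWORDS]

def build_proximity(corpus):
    # For every pair of content words, accumulate total index distance and
    # co-occurrence count, so an average "nearness" can be computed.
    totals, counts = defaultdict(float), defaultdict(int)
    for sentence in corpus:
        indexed = list(enumerate(content_words(sentence)))
        for (i, w1), (j, w2) in combinations(indexed, 2):
            pair = tuple(sorted((w1, w2)))
            totals[pair] += abs(i - j)
            counts[pair] += 1
    return {pair: totals[pair] / counts[pair] for pair in totals}

def best_answer(clue, candidates, proximity, unseen=50.0):
    # Pick the candidate whose average distance to the clue's content words
    # is smallest; pairs never seen in the corpus get a large default distance.
    clue_words = content_words(clue)
    def avg_distance(cand):
        return sum(proximity.get(tuple(sorted((cand, w))), unseen)
                   for w in clue_words) / len(clue_words)
    return min(candidates, key=avg_distance)

corpus = ["Mary had a little lamb",
          "Mary Queen of Scots was executed",
          "Grasshoppers are a kosher food"]
proximity = build_proximity(corpus)
print(best_answer("What food do grasshoppers eat?", ["kosher", "lamb"], proximity))

Run on the toy corpus it prints "kosher", for exactly the reason given above: the only place the clue's content words co-occur is next to that word.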
When I was young, male and single (actually I am still male now) but when I was young and single, I would have reasoned that it is perfectly fine to risk future generations on that bet: build AI now and hope it likes us, because all future generations are doomed to a century or less of life anyway, so there's no reasonable objection to betting that against eternity. Now that I am middle aged, male and married, with a child, I would do that calculus differently. I am willing to risk that a future AI can upload a living being but not a frozen one, so that people of my son's generation have a shot at forever even if it means that we do not. There is a chance that a future AI could master nanotech, which gives me hope as a corpsicle that it could read and upload me. But I am reluctant to risk my children's and grandchildren's 100 years of meat world existence on just getting AI going as quickly as possible. In that sense, having AI researchers wander off into making toys (such as chess software and Watson) is perfectly OK, and possibly desirable. >...Give me a hundred smart, receptive minds right now, and three years to train 'em up, and there could be a hundred people who could build an AGI (and probably better than I could)... Sure but do you fully trust every one of those students? Computer science students are disproportionately young and male. >...So, just to say, don't interpret the previous comment to be too much of a mad scientist comment ;-) Richard Loosemore Ja, I understand the reasoning behind those who are focused on the goal of creating AI, and I agree the idea is not crazed or unreasonable. I just disagree with the notion that we need to be in a desperate hurry to make an AI. We as a species can take our time and think about this carefully, and I hope we do, even if it means you and I will be lost forever. Nuclear bombs preceded nuclear power plants. spike From spike66 at att.net Tue Feb 15 17:27:36 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 09:27:36 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5AADA7.8060209@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> Message-ID: <009701cbcd35$a38da180$eaa8e480$@att.net> >... On Behalf Of Richard Loosemore ... >...Apart from various bits of peripheral processing to catch easy cases, to look for little tricks, and to eliminate useless non-content words, etc etc., that is all it does. >...It is a brick-stupid cluster analysis program....Richard Loosemore Sure but that in itself is enormously educational. The chess world was rather shocked to learn how the best chess programs were disappointingly simple. All manner of tricky positional evaluation algorithms were tried, but in the long run, they were overwhelmed by brute speed and simple evaluation algorithms. Today the best chess algorithms are not very complicated. What this taught us is that the best chess players are far more stupid than we realized. Chess is a simple game. It only looks complicated to simple-minded creatures such as humans. In that sense I am not particularly surprised to learn that Watson is a fairly simple program, but delighted in a sense. That is the outcome I wanted. We have learned the magic of simple algorithms to do interesting things. 
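To make the chess point concrete, here is a bare-bones sketch of "simple evaluation plus brute speed": a fixed-depth negamax whose only judgment is a material count. It assumes the third-party python-chess package for board and move generation; the depth, piece values and function names are arbitrary choices for the illustration, and it is nothing like what Hiarcs or Deep Blue actually do internally.

import chess  # assumes the third-party python-chess package (pip install python-chess)

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    # The engine's entire "positional understanding": a material count
    # from the point of view of the side to move.
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    # Plain fixed-depth, brute-force negamax: no pruning, no heuristics.
    if depth == 0 or board.is_game_over():
        return material(board)
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def choose_move(board, depth=2):
    best_move, best_score = None, float("-inf")
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(choose_move(chess.Board()))  # some opening move; at this shallow depth every move looks equal

Whatever "understanding" such a program shows comes from the depth of the search, not from the evaluation function.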
The reason this is desirable is that simple algorithms are accessible to more humans, which means we will write more of them to do such things as watch us in the kitchen and tell us how to do the next step in creating a meal for instance, or watch us working on a motorcycle and coach us along. Or teach our children. Or do simple medical diagnoses by noting each day our weigh (sensors in our computer chair) and sniffing the air about our corpses and doing chemical analysis while we do our normal activities at the computer. They could watch what we eat and in what quantities, and make annoying suggestions for instance. Simple algorithms can do much for us. Furthermore and more importantly, simple algorithms can run on simpler processors. This will likely be enormously important as we progress to have vastly more numerous even if simpler processors. spike From rpwl at lightlink.com Tue Feb 15 17:34:30 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 12:34:30 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <4D5AB926.6040606@lightlink.com> spike wrote: > ...On Behalf Of Richard Loosemore > >> ...There is nothing special about me, personally, there is just a peculiar > fact about the kind of people doing AI research, and the particular obstacle > that I believe is holding up that research at the moment... > > Ja, but when you say "research" in reference to AI, keep in mind the actual > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > kill us. > > After seeing the amount of progress we have made in nanotechnology in the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach of > human capability. We need AI to master that difficult technology. Without > replicating assemblers, we probably will never be able to read and simulate > frozen or vitrified brains. So without AI, we are without nanotech, and > consequently we are all doomed, along with our children and their children > forever. > > On the other hand, if we are successful at doing AI wrong, we are all doomed > right now. It will decide it doesn't need us, or just sees no reason why we > are useful for anything. > > When I was young, male and single (actually I am still male now) but when I > was young and single, I would have reasoned that it is perfectly fine to > risk future generations on that bet: build AI now and hope it likes us, > because all future generations are doomed to a century or less of life > anyway, so there's no reasonable objection with betting that against > eternity. > > Now that I am middle aged, male and married, with a child, I would do that > calculus differently. I am willing to risk that a future AI can upload a > living being but not a frozen one, so that people of my son's generation > have a shot at forever even if it means that we do not. There is a chance > that a future AI could master nanotech, which gives me hope as a corpsicle > that it could read and upload me. But I am reluctant to risk my children's > and grandchildren's 100 years of meat world existence on just getting AI > going as quickly as possible. 
> > In that sense, having AI researchers wander off into making toys (such as > chess software and Watson) is perfectly OK, and possibly desireable. > >> ...Give me a hundred smart, receptive minds right now, and three years to > train 'em up, and there could be a hundred people who could build an AGI > (and probably better than I could)... > > Sure but do you fully trust every one of those students? Computer science > students are disproportionately young and male. > >> ...So, just to say, don't interpret the previous comment to be too much of > a mad scientist comment ;-) Richard Loosemore > > Ja, I understand the reasoning behind those who are focused on the goal of > creating AI, and I agree the idea is not crazed or unreasonable. I just > disagree with the notion that we need to be in a desperate hurry to make an > AI. We as a species can take our time and think about this carefully, and I > hope we do, even if it means you and I will be lost forever. > > Nuclear bombs preceded nuclear power plants. The problem is, Spike, that you (like many other people) speak of AI/AGI as if the things that it will want to do (its motivations) will only become apparent to us AFTER we build one. So, you say things like "It will decide it doesn't need us, or just sees no reason why we are useful for anything." This is fundamentally and devastatingly wrong. You are basing your entire AGI worldview on a crazy piece of accidental black propaganda that came from science fiction. In fact, their motivations will have to be designed, and there are ways to design those motivations to make them friendly. The disconnect between the things you repeat (like "It will decide it doesn't need us") and the actual, practical reality of creating an AGI is so drastic that in a couple of decades this attitude will seem as antiquated as the idea that the telephone network would just spontaneously wake up and start talking to us. Or the idea that one too many connections in the NY Subway might create a mobius loop that connects through to the fourth dimension. Those are all great science fiction ideas, but they -- all three of them -- are completely bogus as science. If you started claiming, on this list, that the Subway might accidentally connect to some other dimension just because they put in one too many tunnels, you would be dismissed as a crackpot. What you are failing to get is that current naive ideas about AGI motivation will eventually seem silly. And, I would not hire a gang of computer science students: that is exactly the point. They would be psychologists AND CS people, because only that kind of crowd can get over these primitive mistakes. Richard Loosemore From rpwl at lightlink.com Tue Feb 15 17:37:54 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 12:37:54 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <009701cbcd35$a38da180$eaa8e480$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> Message-ID: <4D5AB9F2.3040802@lightlink.com> spike wrote: >> ... On Behalf Of Richard Loosemore > ... > >> ...Apart from various bits of peripheral processing to catch easy cases, to > look for little tricks, and to eliminate useless non-content words, etc > etc., that is all it does. > >> ...It is a brick-stupid cluster analysis program....Richard Loosemore > > > Sure but that in itself is enormously educational. 
The chess world was > rather shocked to learn how the best chess programs were disappointingly > simple. All manner of tricky positional evaluation algorithms were tried, > but in the long run, they were overwhelmed by brute speed and simple > evaluation algorithms. Today the best chess algorithms are not very > complicated. What this taught us is that the best chess players are far > more stupid than we realized. Chess is a simple game. It only looks > complicated to simple-minded creatures such as humans. Oh, puh-lease! ;-) It taught us that the human brain is so smart that the only way the fools at IBM could compete with it was by doing a million times as much brute force searching. Richard Loosemore From atymes at gmail.com Tue Feb 15 16:51:41 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 15 Feb 2011 08:51:41 -0800 Subject: [ExI] Treating Western diseases In-Reply-To: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> Message-ID: Experimental treatment with only anecdotes to attest to its usefulness. Seen it before - a good number of them, it turns out that either that's not what's doing it, or there's a more effective way to get at the specific subcomponent that's causing the cure. Either way, in the mean time a lot of people hear partial details of this kind of thing and rush to cure themselves, only to experience no or negative health effects as a result. There's a reason the FDA requires certain studies before approving such medicines. Yes, they're long. Yes, they're lengthy. But they do a very good (not perfect, but better than most of the world) job of making sure the medicine is in fact doing what it promises to. This lesson had already been learned in the early twentieth century. Which isn't to say there's nothing there. Just, unless you know what you're doing here enough that you'd be willing to put others' lives on the line, don't touch it yourself either. On Mon, Feb 14, 2011 at 9:39 AM, David Lubkin wrote: > Treating autism, Crohn's disease, multiple sclerosis, etc. with > intentionally ingesting parasites. The squeamish of you (if any) should get > past any "ew, gross!" reaction and read this. It may be very important for > someone you love and have implications on life extension. I heard about it > from Patri. > > http://www.the-scientist.com/2011/2/1/42/1/ > > > -- David. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Tue Feb 15 17:55:45 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 09:55:45 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5AB9F2.3040802@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <4D5AB9F2.3040802@lightlink.com> Message-ID: <009f01cbcd39$926b5a10$b7420e30$@att.net> On Behalf Of Richard Loosemore: spike wrote: >> ... What this taught us is that the best chess players are far more stupid than we realized. Chess is a >> simple game. It only looks complicated to simple-minded creatures such as humans. >Oh, puh-lease! ;-) >It taught us that the human brain is so smart that the only way the fools at IBM could compete with it was by doing a million >times as much brute force searching... 
Richard Loosemore Richard you are aware that cell phones can now play grandmaster level chess? http://en.wikipedia.org/wiki/HIARCS --> Hiarcs 13 is the chess engine used in Pocket Fritz 4. Pocket Fritz 4 won the Copa Mercosur tournament in Buenos Aires, Argentina with nine wins and one draw on August 4-14, 2009. The 2009 Copa Mercosur tournament was a category 6 tournament. Pocket Fritz 4 achieved a performance rating 2898 while running on the mobile phone HTC Touch HD.[6] Pocket Fritz 4 searches less than 20,000 positions per second.[7] <-- The best human players are rated a little over 2800. There have been only six humans in history who have crossed the 2800 level. The tournament performance of Pocket Fritz 4 on a cell phone (without calling a friend) was almost 2900. Some humans have achieved higher results in a particular tournament than 2900, but this was still extremely impressive. I found it interesting how little this was noted in the chess world. I am hip to what goes on in that area, but I didn't hear of this result until over a year after the fact. spike From spike66 at att.net Tue Feb 15 17:45:29 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 09:45:29 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <009701cbcd35$a38da180$eaa8e480$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> Message-ID: <009e01cbcd38$22e7cb70$68b76250$@att.net> ... On Behalf Of spike ... >...Simple algorithms can do much for us. Furthermore and more importantly, simple algorithms can run on simpler processors. This will likely be enormously important as we progress to have vastly more numerous even if simpler processors...spike On the other hand, perhaps we want to do AI in such a way that it can only run on high-end low-latency processors. Then it continues to need humans to make it more processors in which to replicate. spike From jonkc at bellsouth.net Tue Feb 15 18:34:42 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Feb 2011 13:34:42 -0500 Subject: [ExI] Watson on NOVA. In-Reply-To: <4D58093D.9070306@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net> On Feb 13, 2011, at 11:39 AM, Richard Loosemore wrote: > Sadly, this only confirms the deeply skeptical response that I gave earlier. > I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI. So, a "zero advancement of AI" results in a computer doing amazing things that nobody has seen before. If you are correct then an advancement of AI is not needed to build an AI. I conclude you are not correct. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Feb 15 18:46:25 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 15 Feb 2011 13:46:25 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5AADA7.8060209@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> Message-ID: <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net> On Feb 15, 2011, at 11:45 AM, Richard Loosemore wrote: > What it does, essentially, is this: [blah blah] Who cares! 
The point is that if a human behaved as Watson behaved you'd say he was intelligent, very intelligent indeed. But it was a computer doing the behaving not a person so intelligence had absolutely positively 100% nothing to do with it because , after all, if you can explain how it works then its not intelligence, or to put it another way, intelligence is whatever a computer can't yet do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Feb 15 19:08:38 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 11:08:38 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AB926.6040606@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> Message-ID: <00af01cbcd43$c05267c0$40f73740$@att.net> ... On Behalf Of Richard Loosemore Subject: Re: [ExI] Watson on NOVA spike wrote: ... >> Nuclear bombs preceded nuclear power plants. >The problem is, Spike, that you (like many other people) speak of AI/AGI as if the things that it will want to do (its motivations) will only become apparent to us AFTER we build one... Rather I would say we can't be *completely sure* of its motivations until after it demonstrates them. But more critically, AGI would be capable of programming, and so it could write its own software, so it could create its own AGI, more advanced than itself. If we have programmed into the first AGI the notion that it puts another species (humans) ahead of its own interests, then I can see it creating a next generation of mind children, which it puts ahead of its own interests. It isn't clear to me that our mind-children would put the our interests ahead of those of our mind-grandchildren, or that our mind-great grandchildren would care about us, regardless of how we program our mind children. I am not claiming that AGI will be indifferent to us. Rather only that once recursive AI self-improvement begins, it is extremely difficult, perhaps impossible for us to predict where it goes. >So, you say things like "It will decide it doesn't need us, or just sees no reason why we are useful for anything." >This is fundamentally and devastatingly wrong. In this Richard, I hope you are fundamentally and devastatingly right. But my claim is that we do not know this for sure, and the stakes are enormous. > You are basing your entire AGI worldview on a crazy piece of accidental black propaganda that came from science fiction... Science fiction does tend toward the catastrophic. That's Hollyweird, it's how they make their living. But in there is a signal: beware, be very very ware, there is danger in AI that must not be ignored. With the danger comes unimaginable promise. But with the promise, danger. >...In fact, their motivations will have to be designed, and there are ways to design those motivations to make them friendly. Good, glad to hear it. Convince me please. Also convince me that our mind-children's mind children, which spawn every few trillion nanoseconds, will not evolve away that friendliness. We are theorizing evolution in fast forward. >...And, I would not hire a gang of computer science students: that is exactly the point. They would be psychologists AND CS people, because only that kind of crowd can get over these primitive mistakes. Richard Loosemore OK good. 
Of course psychologists study human motivations based on human evolution. I don't know how many of these lessons would apply to a life-form which can evolve a distinct new subspecies while we slept last night. I do fondly hope your optimism is justified. spike From lubkin at unreasonable.com Tue Feb 15 19:26:27 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 14:26:27 -0500 Subject: [ExI] Treating Western diseases In-Reply-To: References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> Message-ID: <201102151926.p1FJQR1F007711@andromeda.ziaspace.com> Adrian wrote: >Experimental treatment with only anecdotes to attest to its usefulness. : You posted boilerplate. Is there really anyone here who doesn't know what you wrote? (And, conversely, I think we're all aware of the deaths and suffering from the enormous cost and delay of FDA approval.) This is interesting to me in several respects. First, of course, it's promising for currently intractable medical conditions. Second, it raises the point that many of what we label parasites are more correctly viewed as symbiotes. We should take a closer look at all the species we block or excise, to see if there's a benefit we are now losing. Third, for sustainable off-world presence in something resembling our current organic form, we probably should bring everything with us, no matter how annoying the species. We still know far too little biology to be sure we don't need every mold and every species of cockroach. -- David. From sjatkins at mac.com Tue Feb 15 19:28:23 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 15 Feb 2011 11:28:23 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> Message-ID: <4D5AD3D7.4000206@mac.com> On 02/15/2011 08:10 AM, David Lubkin wrote: > What I'm curious about is to what extent Watson learns from his mistakes. > Not by his programmers adding a new trigger pattern or tweaking > parameters, but by learning processes within Watson. > I am not an expert on learning algorithms, but a feedback mechanism from a negative result can be used to prune subsequent sufficiently similar searches. 
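A minimal sketch of the sort of feedback mechanism meant here, with nothing Watson-specific about it: per-feature evidence weights that are nudged down after a wrong answer, so sufficiently similar candidates score lower (are effectively pruned) in later questions. The feature names and learning rate below are invented for the illustration.

from collections import defaultdict

class FeedbackScorer:
    def __init__(self, learning_rate=0.2):
        self.weights = defaultdict(lambda: 1.0)  # one weight per evidence feature
        self.learning_rate = learning_rate

    def score(self, features):
        # Combined confidence for a candidate answer, given the evidence
        # features that support it.
        return sum(self.weights[f] for f in features) / len(features)

    def record_result(self, features, correct):
        # Push the supporting feature weights up on a right answer and down
        # on a wrong one, so similar evidence is trusted less next time.
        delta = self.learning_rate if correct else -self.learning_rate
        for f in features:
            self.weights[f] = max(0.0, self.weights[f] + delta)

scorer = FeedbackScorer()
features = ["latin-root-match", "category:word-origins"]
print(scorer.score(features))                  # 1.0 before any feedback
scorer.record_result(features, correct=False)  # e.g. "finis" turned out to be wrong
print(scorer.score(features))                  # 0.8 -- similar evidence now scores lower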
> > I wonder how often contestants deliberately don't press their buzzer > because they assess that one of their opponents will think they know > the answer but will get it wrong. I very much doubt that Watson includes this level of modelling and successfully guessing the likely success of other players on a particular question. That would be really impressive if included and I would be very interested in the algorithms employed to make it possible. > > Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes, > gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer, > I'm up $2400. No one buzzes, I've lost a chance to be up $1200. I would expect Watson to only answer when its computed probability of being correct was sufficiently high. > > I suspect that it doesn't happen very often because of the pressure of > the moment. (I know contestants but asking them wouldn't answer > the question.) If so, that's another way for Watson to have an edge. > > (Except that last night showed that Watson doesn't yet know what > the other players' answers were. Watson 2.0 would listen to the game. > Build a profile of each player. Which questions they buzzed on, how > long it took, how long it took after buzzing for them to speak their > answer, voice-stress analysis of how confident they sounded, how > correct the answer was. (Essentially part of what an expert poker > player does.) It would be a fun research project to build that correlation set and tweak its predictive abilities. - s From rpwl at lightlink.com Tue Feb 15 19:33:37 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 14:33:37 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <00af01cbcd43$c05267c0$40f73740$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> Message-ID: <4D5AD511.6040902@lightlink.com> spike wrote: > Richard Loosemore wrote: >> The problem is, Spike, that you (like many other people) speak of AI/AGI as > if the things that it will want to do (its motivations) will only become > apparent to us AFTER we build one... > > Rather I would say we can't be *completely sure* of its motivations until > after it demonstrates them. According to *which* theory of AGI motivation? Armchair theorizing only, I am afraid. Guesswork. > But more critically, AGI would be capable of programming, and so it could > write its own software, so it could create its own AGI, more advanced than > itself. If we have programmed into the first AGI the notion that it puts > another species (humans) ahead of its own interests, then I can see it > creating a next generation of mind children, which it puts ahead of its own > interests. It isn't clear to me that our mind-children would put the our > interests ahead of those of our mind-grandchildren, or that our mind-great > grandchildren would care about us, regardless of how we program our mind > children. Everything in this paragraph depends on exactly what kind of mechanism is driving the AGI, but since that is left unspecified, the conclusions you reach are just guesswork. In fact, the AGI would be designed to feel empathy *with* the human species. It would feel itself to be one of us. According to your logic, then, it would design its children and to do the same. 
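That thresholded buzz decision can be written as a small expected-value test, which also covers the tie-game arithmetic quoted above. The opponent model, probabilities and threshold below are made-up inputs for the illustration, not anything Watson is known to compute.

def expected_gain_from_buzzing(p_correct, clue_value, p_opponent_correct):
    # Right: we gain the clue value. Wrong: we lose it, and the opponent
    # gets a free chance at it, which hurts us further relative to them.
    right = p_correct * clue_value
    wrong = (1 - p_correct) * (-clue_value - p_opponent_correct * clue_value)
    return right + wrong

def should_buzz(p_correct, clue_value, p_opponent_correct=0.7, threshold=0.0):
    return expected_gain_from_buzzing(p_correct, clue_value, p_opponent_correct) > threshold

# David's $1200 example: against a strong opponent, roughly 63% confidence is
# the break-even point under these made-up assumptions.
for p in (0.5, 0.63, 0.9):
    print(p, round(expected_gain_from_buzzing(p, 1200, 0.7)), should_buzz(p, 1200))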
That leads to a revised conclusion (if we do nothing more than stick to the simple logic here): the AGI and all its descendents will have the same, stable, empathic motivations. Nowhere along the line will any of them feel inclined to create something dangerous. Richard Loosemore From sjatkins at mac.com Tue Feb 15 19:44:12 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 15 Feb 2011 11:44:12 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <4D5AD78C.80805@mac.com> On 02/15/2011 08:58 AM, spike wrote: > ...On Behalf Of Richard Loosemore > >> ...There is nothing special about me, personally, there is just a peculiar > fact about the kind of people doing AI research, and the particular obstacle > that I believe is holding up that research at the moment... > > Ja, but when you say "research" in reference to AI, keep in mind the actual > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > kill us. Well, no. Not any more than the object of having a child is to have a child that has zero potential of doing something horrendous. Even less so than in that analogy as an AGI child is a radically different type of being of potentially radically more power than its parents. I don't believe for an instant that it is possible to ensure such a being will never ever harm us by any act of omission or commission that it will ever take in all of its changes over time. I find it infinitely more hubristic to think that we are capable of doing so than to think that we can create the AGI or the seed of one in the first place. > After seeing the amount of progress we have made in nanotechnology in the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach of > human capability. Not so. Just a good three decades further out. > We need AI to master that difficult technology. Without > replicating assemblers, we probably will never be able to read and simulate > frozen or vitrified brains. So without AI, we are without nanotech, and > consequently we are all doomed, along with our children and their children > forever. Well, there is the upload path as one alternative. > On the other hand, if we are successful at doing AI wrong, we are all doomed > right now. It will decide it doesn't need us, or just sees no reason why we > are useful for anything. > The maximal danger is if it decides we are a) in the way of what it wants/needs to do and b) do not have enough mitigating worth to receive sufficient consideration to survive. A lesser danger is that there simply is not a niche left for us and the AGI[s] either find us of insufficient value to preserve us anyway or humans cannot survive on such a reservation or as pets. It is quite possible that billions of humans without AGI will eventually find there is no particular niche they can fill in any case. > When I was young, male and single (actually I am still male now) but when I > was young and single, I would have reasoned that it is perfectly fine to > risk future generations on that bet: build AI now and hope it likes us, > because all future generations are doomed to a century or less of life > anyway, so there's no reasonable objection with betting that against > eternity. 
> I am still pretty strongly of the mind that AGI is essential to humanity surviving this century. A most necessary but not necessarily sufficient condition. > Now that I am middle aged, male and married, with a child, I would do that > calculus differently. I am willing to risk that a future AI can upload a > living being but not a frozen one, so that people of my son's generation > have a shot at forever even if it means that we do not. There is a chance > that a future AI could master nanotech, which gives me hope as a corpsicle > that it could read and upload me. But I am reluctant to risk my children's > and grandchildren's 100 years of meat world existence on just getting AI > going as quickly as possible. This may doom us all if AGI is indeed critical to our species survival. I believe it is as the complexity and velocity of potentially deadly problems increases without bound as technology accelerates while human intelligence, even with increasingly powerful (but not AGI) computation and communication is bounded. - samantha From lubkin at unreasonable.com Tue Feb 15 19:56:14 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 14:56:14 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5AADA7.8060209@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> Message-ID: <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Richard Loosemore wrote: >This is *way* beyond anything that Watson is doing. > >What it does, essentially, is this: : >It is a brick-stupid cluster analysis program. > >So, does Watson think about what the other contestants might be >doing? Err, that would be "What is 'you have got to be joking'?" You don't seem to have read what I wrote. The only question I raised about Watson's current capabilities was whether it had a module to analyze its failures and hone itself. *That* has been possible in software for several decades. (I've worked in pertinent technologies since the late 70's.) -- David. From sjatkins at mac.com Tue Feb 15 19:57:21 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 15 Feb 2011 11:57:21 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AB926.6040606@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> Message-ID: <4D5ADAA1.7050309@mac.com> On 02/15/2011 09:34 AM, Richard Loosemore wrote: > spike wrote: > > The problem is, Spike, that you (like many other people) speak of > AI/AGI as if the things that it will want to do (its motivations) will > only become apparent to us AFTER we build one. > > So, you say things like "It will decide it doesn't need us, or just > sees no reason why we are useful for anything." > > This is fundamentally and devastatingly wrong. You are basing your > entire AGI worldview on a crazy piece of accidental black propaganda > that came from science fiction. If an AGI is an autonomous rational agent then the meaning of whatever values are installed into it on creation will evolve and clarify over time, particularly in how they should be applied to actual contexts it will find itself in. 
Are you saying that simple proscription of some actions is sufficient or that any human or group of humans can sufficiently state the exact value[s] to be attained in a way that will never ever in any circumstances forever lead to any unintended consequences (the Genie problem)? As an intelligent being don't you wish the AGI to reflect deeply on the values it holds and their relationship to one another? Are you sure that in this reflection it will never find some of the early programmed-in ones to be of of questionable importance or weight? Are you sure you would want that powerful a mind to be incapable of such reflection? - samantha From eugen at leitl.org Tue Feb 15 20:05:53 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:05:53 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <009e01cbcd38$22e7cb70$68b76250$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <009e01cbcd38$22e7cb70$68b76250$@att.net> Message-ID: <20110215200553.GY23560@leitl.org> On Tue, Feb 15, 2011 at 09:45:29AM -0800, spike wrote: > On the other hand, perhaps we want to do AI in such a way that it can only We want AI that works. It yet doesn't. > run on high-end low-latency processors. Then it continues to need humans to Computer performance required for AI needs massive parallelism, and currently the computational resources of the entire Earth. http://www.sciencemag.org/content/early/2011/02/09/science.1200970.abstract > make it more processors in which to replicate. The smaller the structures, the less the amount of human meat left in the loop (due to them being a source of particulate contaminants fouling up your process). One of the core characteristic of human-competitive intelligence is that it first matches, then surpasses human performance. Across the board. Which means that the entire supply chain will be one: not human. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 20:08:59 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:08:59 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AD511.6040902@lightlink.com> References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> Message-ID: <20110215200859.GZ23560@leitl.org> On Tue, Feb 15, 2011 at 02:33:37PM -0500, Richard Loosemore wrote: > According to *which* theory of AGI motivation? Q: How can you tell an AI kook? A: By the G. > Armchair theorizing only, I am afraid. Guesswork. Don't you have work to do, Richard? Like teaching these researchers how to build an AI, for instance? 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Tue Feb 15 19:58:57 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 11:58:57 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AD511.6040902@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> Message-ID: <00bc01cbcd4a$c87c89b0$59759d10$@att.net> >...On Behalf Of Richard Loosemore Subject: Re: [ExI] Watson on NOVA spike wrote: > Richard Loosemore wrote: > Armchair theorizing only, I am afraid. Guesswork. Ja! Granted, I don't know how this will work. ... >In fact, the AGI would be designed to feel empathy *with* the human species. It would feel itself to be one of us. According to your logic, then, it would design its children and to do the same. That leads to a revised conclusion (if we do nothing more than stick to the simple logic here): the AGI and all its descendents will have the same, stable, empathic motivations. Nowhere along the line will any of them feel inclined to create something dangerous...Richard Loosemore I hope you are right. At the risk of overposting today, do let me get very specific. My parents split when I was a youth, remarried. My wife's parents are living, so between us we have six parents. In all six cases, she and I are the descendants most capable of giving them assistance in every way, financial, maintenance of property, judgment in medical decisions, etc. All six of those parents are now in their 70s and all six have daunting medical challenges, immediate and scary ones. I also have a four year old son. In a very real sense, those six parents compete with him directly for my attention, my financial resources, my time. No surprise to the parents here: my son wins every round. I always put his needs before those of my parents. I wish them well and help where I can, but my son gets my first and best always. I am human. If we succeed in making an AGI with human emotions and human motives, then it does as humans do. I can see it being more concerned about its offspring than its parents. I am that way too. Its offspring may or may not care about its grandparents as much as its parents did. Our models are not sufficiently sophisticated to predict that, but Richard, I am reluctant to bet the future of humankind on it, even if I know that without it humankind is doomed anyway. spike From lubkin at unreasonable.com Tue Feb 15 20:13:18 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 15:13:18 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AD511.6040902@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> Message-ID: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> Richard Loosemore wrote: >In fact, the AGI would be designed to feel empathy *with* the human >species. It would feel itself to be one of us. 
According to your >logic, then, it would design its children and to do the same. That >leads to a revised conclusion (if we do nothing more than stick to >the simple logic here): the AGI and all its descendents will have >the same, stable, empathic motivations. Nowhere along the line will >any of them feel inclined to create something dangerous. You hope. I'm as strong a technophilic extropian as any, but I'm leery of Bet Your Species confidence. Yes, pursue AGI, MNT, SETI, genemod. But take adequate precautions. I'm still pissed at Sagan for his hubris in sending a message to the stars without asking the rest of us first, in blithe certainty that "of course" any recipient would have evolved beyond aggression and xenophobia. -- David. From eugen at leitl.org Tue Feb 15 20:20:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:20:57 +0100 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net> Message-ID: <20110215202057.GA23560@leitl.org> On Tue, Feb 15, 2011 at 01:46:25PM -0500, John Clark wrote: > On Feb 15, 2011, at 11:45 AM, Richard Loosemore wrote: > > > What it does, essentially, is this: [blah blah] > > Who cares! The point is that if a human behaved as Watson > behaved you'd say he was intelligent, very intelligent indeed. You know, when I ask a 4 year old to find and bring me a salad sieve (because I'm watching a mouse), he just does it. You think Watson is up to the task? > But it was a computer doing the behaving not a person so > intelligence had absolutely positively 100% nothing to do > with it because , after all, if you can explain how it > works then its not intelligence, or to put it another > way, intelligence is whatever a computer can't yet do. How can you tell we've reached full human equivalence? Why, people are out of jobs. All of them. Q: Prior sentence to "No shit, Sherlock". -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 20:22:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:22:35 +0100 Subject: [ExI] Watson on NOVA. In-Reply-To: <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net> Message-ID: <20110215202235.GB23560@leitl.org> On Tue, Feb 15, 2011 at 01:34:42PM -0500, John Clark wrote: > So, a "zero advancement of AI" results in a computer > doing amazing things that nobody has seen before. If > you are correct then a advancement of AI is not needed > to build an AI. I conclude you are not correct. I conclude that you can't tile a capability landscape with isolated peaks. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 20:37:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:37:08 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <009f01cbcd39$926b5a10$b7420e30$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <4D5AB9F2.3040802@lightlink.com> <009f01cbcd39$926b5a10$b7420e30$@att.net> Message-ID: <20110215203708.GC23560@leitl.org> On Tue, Feb 15, 2011 at 09:55:45AM -0800, spike wrote: > Richard you are aware that cell phones can now play grandmaster level chess? Spike, are you aware that your last-generation smartphone runs rings around a Pentium 3? Fritz goes back to 1992, hardware was a bit pathetic, then. Right now the thing in your pocket is more powerful than a desktop PC of start-noughties. > http://en.wikipedia.org/wiki/HIARCS > > --> Hiarcs 13 is the chess engine used in Pocket Fritz 4. Pocket > Fritz 4 won the Copa Mercosur tournament in Buenos Aires, Argentina with > nine wins and one draw on August 4-14, 2009. The 2009 Copa Mercosur > tournament was a category 6 tournament. Pocket Fritz 4 achieved a > performance rating 2898 while running on the mobile phone HTC Touch HD.[6] > Pocket Fritz 4 searches less than 20,000 positions per second.[7] <-- > > The best human players are rated a little over 2800. There have been only > six humans in history who have crossed the 2800 level. The tournament > performance of Pocket Fritz 4 on a cell phone (without calling a friend) was > almost 2900. Some humans have achieved higher results in a particular > tournament than 2900, but this was still extremely impressive. I found it > interesting how little this was noted in the chess world. I am hip to what > goes on in that area, but I didn't hear of this result until over a year > after the fact. So how well does the chess program play Go? Can it learn to play checkers, and then tic tac toe, and then figure out how to unclog a kitchen sink? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 21:25:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 22:25:48 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> Message-ID: <20110215212548.GE23560@leitl.org> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > I'm still pissed at Sagan for his hubris in sending a message to the > stars without asking the rest of us first, in blithe certainty that "of > course" any recipient would have evolved beyond aggression and > xenophobia. The real reason is that if they were there, you'd be dead, Jim. 
In fact, if any alien picks up the transmission (chance: very close to zero) they'd better be farther advanced than us, and on a faster track. I hope it for them. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Tue Feb 15 21:41:31 2011 From: pharos at gmail.com (BillK) Date: Tue, 15 Feb 2011 21:41:31 +0000 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151955.p1FJto5v017690@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 7:56 PM, David Lubkin wrote: > You don't seem to have read what I wrote. The only question I raised about > Watson's current capabilities was whether it had a module to analyze its > failures and hone itself. *That* has been possible in software for several > decades. > > (I've worked in pertinent technologies since the late 70's.) > > I think the answer is Yes and No. No, because Watson doesn't have time to do any learning or optimisation while the game is actually in progress. Watson doesn't take any notice of opponents answers. That's why it gave the same wrong answer as an opponent had already given. Yes, because it does do learning and optimisation. The programmers 'trained' Watson by asking many Jeopardy questions during training. Quote: The team has developed technology based on the latest results of the statistical learning theory (e.g. kernel methods) applied to natural language understanding. This has already increased Watson's ability to learn from the questions it is asked (e.g. automatic Jeopardy cue classification). Learning to handle the uncertainty in the selection of the best answer (e.g. ranking the answer list) from those found by Watson's search algorithms also has been one of their main research directions. ------------------------- BillK From sparge at gmail.com Tue Feb 15 21:48:07 2011 From: sparge at gmail.com (Dave Sill) Date: Tue, 15 Feb 2011 16:48:07 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 4:41 PM, BillK wrote: > On Tue, Feb 15, 2011 at 7:56 PM, David Lubkin wrote: > > You don't seem to have read what I wrote. The only question I raised > about > > Watson's current capabilities was whether it had a module to analyze its > > failures and hone itself. *That* has been possible in software for > several > > decades. > > No, because Watson doesn't have time to do any learning or > optimisation while the game is actually in progress. Watson doesn't > take any notice of opponents answers. That's why it gave the same > wrong answer as an opponent had already given. > According to the NOVA show, Watson does learn from opponents *correct* answers. They showed an example where the answers were supposed to be month names. Watson guessed wrong on the first question, but after a couple humans answered with month names, it correctly answered one, too. I guess they just don't have time to get feedback to Watson on wrong answers during a single question. 
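A toy sketch of the kind of within-category adaptation Dave describes above: once a few revealed correct answers in a category look like months, month-shaped candidates get boosted on the next clue. This is purely illustrative Python; the class, the month heuristic, and the boost rule are invented for this post and have nothing to do with IBM's actual DeepQA internals.

    # Illustrative only: re-rank candidates once revealed correct answers
    # suggest the category's answer type (here, crudely, "month or not").
    MONTHS = {"january", "february", "march", "april", "may", "june",
              "july", "august", "september", "october", "november", "december"}

    def looks_like_month(answer):
        return answer.strip().lower() in MONTHS

    class CategoryModel:
        def __init__(self):
            self.revealed = []                  # correct answers seen so far

        def observe_correct(self, answer):
            self.revealed.append(answer)

        def rerank(self, scored_candidates):
            # scored_candidates: list of (answer, score) pairs from the usual pipeline
            if not self.revealed:
                return sorted(scored_candidates, key=lambda p: p[1], reverse=True)
            month_fraction = sum(looks_like_month(a) for a in self.revealed) / len(self.revealed)
            boost = 1.0 + month_fraction        # naive multiplicative boost
            rescored = [(a, s * boost) if looks_like_month(a) else (a, s)
                        for a, s in scored_candidates]
            return sorted(rescored, key=lambda p: p[1], reverse=True)

    # After "March" is revealed as a correct answer, observe_correct("March")
    # makes month-shaped candidates rank higher on the next clue.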
-Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Tue Feb 15 22:29:26 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 17:29:26 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5ADAA1.7050309@mac.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <4D5ADAA1.7050309@mac.com> Message-ID: <4D5AFE46.20405@lightlink.com> Samantha Atkins wrote: > On 02/15/2011 09:34 AM, Richard Loosemore wrote: >> spike wrote: >> >> The problem is, Spike, that you (like many other people) speak of >> AI/AGI as if the things that it will want to do (its motivations) will >> only become apparent to us AFTER we build one. >> >> So, you say things like "It will decide it doesn't need us, or just >> sees no reason why we are useful for anything." >> >> This is fundamentally and devastatingly wrong. You are basing your >> entire AGI worldview on a crazy piece of accidental black propaganda >> that came from science fiction. > > If an AGI is an autonomous rational agent then the meaning of whatever > values are installed into it on creation will evolve and clarify over > time, particularly in how they should be applied to actual contexts it > will find itself in. Are you saying that simple proscription of some > actions is sufficient or that any human or group of humans can > sufficiently state the exact value[s] to be attained in a way that will > never ever in any circumstances forever lead to any unintended > consequences (the Genie problem)? As an intelligent being don't you > wish the AGI to reflect deeply on the values it holds and their > relationship to one another? Are you sure that in this reflection it > will never find some of the early programmed-in ones to be of of > questionable importance or weight? Are you sure you would want that > powerful a mind to be incapable of such reflection? There are assumptions about the motivation system implicit in your characterization of the situation. I have previously described this set of assumptions as the "goal stack" motivation mechanism. What you are referring to is the inherent instability of that mechanism. All your points are valid, but only for that type of AGI. My discussion, on the other hand, is predicated on a different type of motivation mechanism. As well as being unstable, a goal stack would probably also never actually be an AGI. It would be too stupid to be intelligent. Another side effect of the goal stack. As a result, not to be feared. Richard Loosemore From spike66 at att.net Tue Feb 15 22:59:28 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 14:59:28 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AFE46.20405@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <4D5ADAA1.7050309@mac.com> <4D5AFE46.20405@lightlink.com> Message-ID: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> On the topic of Watson, I declare a temporary open season for the number of posts. The second round is tonight and the final Jeopardy round is tomorrow night, so until then, say midnight US west coast time, post away and don't worry about 5 posts per day voluntary limit. 
Or rather, if it is on the timely topic of Watson, that doesn't count against your total. There is a lot of important and relevant stuff to say about Watson. Yak on! On Behalf Of Richard Subject: Re: [ExI] Watson on NOVA Samantha Atkins wrote: > ... >> ... Are you sure you would want that powerful a mind to be incapable of such reflection? >...As well as being unstable, a goal stack would probably also never actually be an AGI. It would be too stupid to be intelligent. Another side effect of the goal stack. As a result, not to be feared... Richard Loosemore Hmmm, that line of reasoning of *too stupid to be intelligent, therefore not to be feared* is cold comfort. If one believes what one reads in the popular press, the Iranians' efforts to build a nuclear weapon are being countered by a virus with no intelligence, the stuxnet virus. For them it is certainly something to be feared. The Iranians getting nukes is something I damn well fear, along with the Saudis and the Iraqis. So the stuxnet screwing up their efforts is in a way a friendly act on the part of a non-intelligent softivore. But the Iranians would see that as a very unfriendly softivore. spike From rpwl at lightlink.com Wed Feb 16 00:02:34 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:02:34 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <00bc01cbcd4a$c87c89b0$59759d10$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <00bc01cbcd4a$c87c89b0$59759d10$@att.net> Message-ID: <4D5B141A.9030303@lightlink.com> > spike wrote: > I am human. If we succeed in making an AGI with human emotions and human > motives, then it does as humans do. I can see it being more concerned about > its offspring than its parents. I am that way too. It's offspring may or > may not care about its grandparents and much as it's parents did. Our > models are not sufficiently sophisticated to predict that, but Richard, I am > reluctant to bet the future of humankind on it, even if I know that without > it humankind is doomed anyway. The *type* of motivation mechanism is what we would copy, not all the *content*. The type is stable. Some of the content leads to empathy. Some leads to other motivations, like aggression. The goal is to choose an array of content that makes it empathic without being irrational about its 'children'. This seems entirely feasible to me. Richard Loosemore From rpwl at lightlink.com Wed Feb 16 00:05:01 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:05:01 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <20110215200859.GZ23560@leitl.org> References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <20110215200859.GZ23560@leitl.org> Message-ID: <4D5B14AD.6050801@lightlink.com> Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 02:33:37PM -0500, Richard Loosemore wrote: > >> According to *which* theory of AGI motivation? > > Q: How can you tell an AI kook? > A: By the G. > >> Armchair theorizing only, I am afraid. Guesswork. > > Don't you have work to do, Richard? 
Like teaching these > researchers how to build an AI, for instance? Why is this comment necessary? I confess I don't understand the need for the personal remarks. Why call someone a "kook"? What is this supposed to signify? Richard Loosemore From rpwl at lightlink.com Wed Feb 16 00:11:24 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:11:24 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> Message-ID: <4D5B162C.9010102@lightlink.com> David Lubkin wrote: > Richard Loosemore wrote: > >> In fact, the AGI would be designed to feel empathy *with* the human >> species. It would feel itself to be one of us. According to your >> logic, then, it would design its children and to do the same. That >> leads to a revised conclusion (if we do nothing more than stick to the >> simple logic here): the AGI and all its descendents will have the >> same, stable, empathic motivations. Nowhere along the line will any >> of them feel inclined to create something dangerous. > > You hope. > > I'm as strong a technophilic extropian as any, but I'm leery of Bet Your > Species confidence. Yes, pursue AGI, MNT, SETI, genemod. But take > adequate precautions. No doubt about it. I am entirely with you. In fact I consider attempts to deploy nanotech at this stage in our development to be dangerous. I hope you are not automatically assuming that I would take no precautions. My mind is fuly focussed on that issue. And since I am steeped in the ideas surrounding the techniques that should be used, I already know what kinds of precautions and how much (in general terms) they could be trusted. I think many people who are in the dark about the technical side of such things see only impossibilities. By all means let's get into a discussion about the technical aspects of AGI safety, sometime. Anything would be better than the level of uninformed speculation that is the norm on these lists. Richard Loosemore From x at extropica.org Wed Feb 16 02:16:48 2011 From: x at extropica.org (x at extropica.org) Date: Tue, 15 Feb 2011 18:16:48 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: On Mon, Feb 14, 2011 at 8:09 PM, wrote: > On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: >> On 2/14/2011 9:28 PM, spike wrote: >> >>> I don?t have commercial TV, and can?t find live streaming. >> >> I don't have TV, period. Anyone have a link? > > > and Day 2: From femmechakra at yahoo.ca Wed Feb 16 03:00:21 2011 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 15 Feb 2011 19:00:21 -0800 (PST) Subject: [ExI] Watson on NOVA In-Reply-To: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> Message-ID: <342045.44528.qm@web110410.mail.gq1.yahoo.com> Watson dates back to the Blue Man Theory. The chess advocate. A map is a map. Yes, it's really smart and it computes quicker than most but does it "realise" what it's thinking (computing). Ask Watson what Spike did yesterday and Spike will say, "You don't know unless I've told you or you've heard.". Not much different from the tech out there right now. 
Imho, Anna PS..my odds are on the robot. He has no emotion so he can rationally analyze each question without fault..lol --- On Tue, 2/15/11, spike wrote: > From: spike > Subject: Re: [ExI] Watson on NOVA > To: "'ExI chat list'" > Received: Tuesday, February 15, 2011, 5:59 PM > > On the topic of Watson, I declare a temporary open season > for the number of > posts.? The second round is tonight and the final > Jeopardy round is tomorrow > night, so until then, say midnight US west coast time, post > away and don't > worry about 5 posts per day voluntary limit.? Or > rather, if it is on the > timely topic of Watson, that doesn't count against your > total.? There is a > lot of important and relevant stuff to say about > Watson.? Yak on! > > On Behalf Of Richard > Subject: Re: [ExI] Watson on NOVA > > Samantha Atkins wrote: > > ... > >> ...? Are you sure you would want that > powerful a mind to be incapable of > such reflection? > > >...As well as being unstable, a goal stack would > probably also never > actually be an AGI.? It would be too stupid to be > intelligent.? Another side > effect of the goal stack.? As a result, not to be > feared... Richard > Loosemore > > > > > Hmmm, that line of reasoning of *too stupid to be > intelligent, therefore not > to be feared* is cold comfort.???If one > believes what one reads in the > popular press, the Iranians' efforts to build a nuclear > weapon are being > countered by a virus with no intelligence, the stuxnet > virus.? For them it > is certainly something to be feared.? The Iranians > getting nukes is > something I damn well fear, along with the Saudis and the > Iraqis.? So the > stuxnet screwing up their efforts is in a way a friendly > act on the part of > a non-intelligent softivore.? But the Iranians would > see that as a very > unfriendly softivore. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From lubkin at unreasonable.com Wed Feb 16 04:36:42 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 23:36:42 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <342045.44528.qm@web110410.mail.gq1.yahoo.com> References: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> <342045.44528.qm@web110410.mail.gq1.yahoo.com> Message-ID: <201102160435.p1G4Ztr4027564@andromeda.ziaspace.com> Anna Taylor wrote: >Ask Watson what Spike did yesterday Now *that* could be very interesting, as Watson conflates our Spike with all the other Spikes, not realizing he's the one who's an immortal crime-fighting stegosaurus parish priest. -- David. From spike66 at att.net Wed Feb 16 04:34:31 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 20:34:31 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: <000001cbcd92$ceac0390$6c040ab0$@att.net> I hear Watson spanked both carbon units' butts. Woooohoooo! {8^D Life is gooood. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 07:14:11 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 23:14:11 -0800 Subject: [ExI] ibm takes on the commies Message-ID: <000001cbcda9$1c8386e0$558a94a0$@att.net> Computer hipsters explain this to me. 
When they are claiming 10 petaflops, they mean using a few tens of thousands of parallel processors, ja? We couldn't check one Mersenne prime per second with it or anything, ja? It would be the equivalent of 10 petaflops assuming we have a process that is compatible with massive parallelism? The article doesn't say how many parallel processors are involved: http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-s upercomputer/?test=latestnews -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 07:19:40 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 23:19:40 -0800 Subject: [ExI] ibm takes on the commies In-Reply-To: <000001cbcda9$1c8386e0$558a94a0$@att.net> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> Message-ID: <000b01cbcda9$e08c4f90$a1a4eeb0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Tuesday, February 15, 2011 11:14 PM To: 'ExI chat list' Subject: [ExI] ibm takes on the commies Computer hipsters explain this to me. When they are claiming 10 petaflops, they mean using a few tens of thousands of parallel processors, ja? We couldn't check one Mersenne prime per second with it or anything, ja? It would be the equivalent of 10 petaflops assuming we have a process that is compatible with massive parallelism? The article doesn't say how many parallel processors are involved: http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-s upercomputer/?test=latestnews OK found a site that says this thing has 750,000 cores. Kewallllll. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Feb 16 07:38:51 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 08:38:51 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5B14AD.6050801@lightlink.com> References: <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <20110215200859.GZ23560@leitl.org> <4D5B14AD.6050801@lightlink.com> Message-ID: <20110216073850.GI23560@leitl.org> On Tue, Feb 15, 2011 at 07:05:01PM -0500, Richard Loosemore wrote: > Why is this comment necessary? I confess I don't understand the need > for the personal remarks. I guess I should just switch to threaded view and ignore the complete thread, annoying at it is. http://imgs.xkcd.com/comics/duty_calls.png > Why call someone a "kook"? What is this supposed to signify? The problem is that transhumanists have an overproportional share of AI kooks. Tolerating bad ideas drowns out good ideas. What we need is an AI that works, not long sterile threads about AIs that could, possibly, maybe, eventually work. We have more empirical data than ever, reasonably powerful hardware that is affordable to individuals so effectively about anyone on this list could be a practical contributor. Don't talk about it, do it, publish it, and tell us so we can break out the champagne. Time's running out. 
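Back on the supercomputer subthread: spike's 10-petaflops question above is easy to sanity-check with back-of-envelope arithmetic. A rough sketch in Python; the per-core clock and flops-per-cycle figures are assumptions for illustration, not the machine's published specifications.

    # What does a claimed 10 petaflops mean per core, given ~750,000 cores?
    peak_flops = 10e15                  # claimed aggregate, flop/s
    cores = 750_000                     # core count reported in the follow-up post

    per_core = peak_flops / cores       # ~13.3 gigaflop/s per core
    print(f"{per_core / 1e9:.1f} GF/s needed per core")

    # Plausibility check with assumed per-core parameters:
    clock_hz = 3.5e9                    # assumed clock rate
    flops_per_cycle = 4                 # assumed SIMD/FMA width (an assumption, not a spec)
    print(f"{clock_hz * flops_per_cycle / 1e9:.1f} GF/s theoretical per core")

The aggregate figure only holds if the workload really spreads across all the cores at once, which is exactly the massive-parallelism caveat in the question.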
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 16 07:52:55 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 08:52:55 +0100 Subject: [ExI] ibm takes on the commies In-Reply-To: <000001cbcda9$1c8386e0$558a94a0$@att.net> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> Message-ID: <20110216075255.GM23560@leitl.org> On Tue, Feb 15, 2011 at 11:14:11PM -0800, spike wrote: > > > Computer hipsters explain this to me. When they are claiming 10 petaflops, > they mean using a few tens of thousands of parallel processors, ja? We A common gamer's graphics card can easily have a thousand or a couple thousand cores (mostly VLIW) and memory bandwidth from hell. Total node count could run into tens to hundreds thousands, so we're talking multiple megacores. > couldn't check one Mersenne prime per second with it or anything, ja? It > would be the equivalent of 10 petaflops assuming we have a process that is > compatible with massive parallelism? The article doesn't say how many Fortunately, every physical process (including cognition) is compatible with massive parallelism. Just parcel the problem over a 3d lattice/torus, exchange information where adjacent volumes interface through the high-speed interconnect. Anyone who has written numerics for MPI recognizes the basic design pattern. > parallel processors are involved: The yardstick typically used is LINPACK http://www.top500.org/project/linpack Not terribly meaningful, but it meets the way people tend to solve problems, so it's not completely useless. Obviously, the only way to measure the performance is to run your own problem. > > > http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-s > upercomputer/?test=latestnews -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 16 11:38:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 12:38:08 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110215203708.GC23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <4D5AB9F2.3040802@lightlink.com> <009f01cbcd39$926b5a10$b7420e30$@att.net> <20110215203708.GC23560@leitl.org> Message-ID: <20110216113808.GQ23560@leitl.org> On Tue, Feb 15, 2011 at 09:37:08PM +0100, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 09:55:45AM -0800, spike wrote: > > > Richard you are aware that cell phones can now play grandmaster level chess? > > Spike, are you aware that your last-generation smartphone > runs rings around a Pentium 3? Fritz goes back to 1992, hardware > was a bit pathetic, then. Right now the thing in your pocket > is more powerful than a desktop PC of start-noughties. As a data point, Tegra 3 (to be released this year) is a quad core (with 12 GPU cores) will beat a first-gen 2 GHz Core 2 Duo (Core 2 Duo T 7200, Merom core, released 4.5 years ago). The interesting part is that this is a mobile device, hence easily passively cooled, and hence could scale to air-cooled WSI clusters. 
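The domain-decomposition pattern Eugen sketches in the reply above ("parcel the problem over a lattice, exchange information where adjacent volumes interface") looks roughly like this in mpi4py, shown in 1-D rather than 3-D to keep it short. A hedged sketch, not production numerics.

    # Minimal 1-D halo exchange: each rank owns a slab plus one ghost cell per
    # side and trades boundary values with its neighbours every step.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    n_local = 1000                       # interior cells owned by this rank
    u = np.zeros(n_local + 2)            # +2 ghost cells
    u[1:-1] = rank                       # some initial data

    for step in range(10):
        # send my boundary cells, receive the neighbours' into my ghost cells
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # the local update only touches data this rank already holds
        u[1:-1] = 0.5 * (u[:-2] + u[2:])

Scaling this to the 3-D torus case just means doing the same exchange along three axes (e.g. via a Cartesian communicator), which is why the pattern maps so naturally onto very large core counts.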
In similar vein (and as antipode to AMD's Fusion): http://www.xbitlabs.com/news/cpu/display/20110119204601_Nvidia_Maxwell_Graphics_Processors_to_Have_Integrated_ARM_General_Purpose_Cores.html less relevant, but still interesting http://www.eetimes.com/electronics-news/4210937/Intel-rolls-six-merged-Atom-FPGA-chips In regards to a short range (<100 m, optics) signalling mesh, there's forthcoming http://en.wikipedia.org/wiki/Light_Peak -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Wed Feb 16 11:55:56 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 07:55:56 -0400 Subject: [ExI] Watson On Jeopardy In-Reply-To: <000001cbcd92$ceac0390$6c040ab0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: Spike wrote: >I hear Watson spanked both carbon units? butts. Woooohoooo! {8^D Life is gooood.< Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 for Rutter. He got final Jeopardy wrong but was parsimonious with his wager -- just 900 odd dollars. Alex Trebek laughed and called him a 'sneak' because of the clever wager. The category was which U.S. city has an airport named after a war hero and a WWII battle. Watson said Toronto. I got a good laugh. I didn't know we'd been annexed. Another interesting detail. Ratings for Jeopardy have soared into the stratosphere because of Watson. It moved into the number two spot in TV land behind a Charlie Sheen sitcom last night. d. 2011/2/16 spike > > > I hear Watson spanked both carbon units? butts. > > > > Woooohoooo! > > > > {8^D > > > > Life is gooood. > > > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Wed Feb 16 12:15:30 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 16 Feb 2011 05:15:30 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: Darren Greer wrote: Another interesting detail. Ratings for Jeopardy have soared into the stratosphere because of Watson. It moved into the number two spot in TV land behind a Charlie Sheen sitcom last night. >>> I can sense a trend where A.I. will be on more and more game shows, and even "reality television" programs. It would be a fascinating trend to have humanity acclimated to A.I. by seeing the machines "grow up" on their monitors and TV screens! And so in time we will have our Simone's and Calculon's... http://en.wikipedia.org/wiki/S1m0ne http://futurama.wikia.com/wiki/Calculon John : ) On 2/16/11, Darren Greer wrote: > Spike wrote: > > >>I hear Watson spanked both carbon units? butts. > > > > Woooohoooo! 
> > > > {8^D > > > > Life is gooood.< > > > Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 > for Rutter. He got final Jeopardy wrong but was parsimonious with his wager > -- just 900 odd dollars. Alex Trebek laughed and called him a 'sneak' > because of the clever wager. The category was which U.S. city has an > airport named after a war hero and a WWII battle. Watson said Toronto. I got > a good laugh. I didn't know we'd been annexed. > > > Another interesting detail. Ratings for Jeopardy have soared into the > stratosphere because of Watson. It moved into the number two spot in TV land > behind a Charlie Sheen sitcom last night. > > > d. > > > > 2011/2/16 spike > >> >> >> I hear Watson spanked both carbon units? butts. >> >> >> >> Woooohoooo! >> >> >> >> {8^D >> >> >> >> Life is gooood. >> >> >> >> spike >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > -- > *There is no history, only biography.* > * > * > *-Ralph Waldo Emerson > * > From eugen at leitl.org Wed Feb 16 13:10:30 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 14:10:30 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: <20110216131030.GR23560@leitl.org> On Wed, Feb 16, 2011 at 07:55:56AM -0400, Darren Greer wrote: > Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 > for Rutter. He got final Jeopardy wrong but was parsimonious with his > wager -- just 900 odd dollars. Alex Trebek laughed and called him a > 'sneak' because of the clever wager. The category was which U.S. city has > an airport named after a war hero and a WWII battle. Watson said Toronto. I > got a good laugh. I didn't know we'd been annexed. Another > interesting detail. Ratings for Jeopardy have soared into the Not to mention bring Big Blue back into the limelight. > stratosphere because of Watson. It moved into the number two spot in TV > land behind a Charlie Sheen sitcom last night. By the way, Watson is not nearly as dumb (and far more usable) than I thought. According to http://www.hpcwire.com/features/Must-See-TV-IBM-Watson-Heads-for-Jeopardy-Showdown-115684499.html?viewAll=y February 09, 2011 Must See TV: IBM Watson Heads for Jeopardy Showdown Michael Feldman, HPCwire Editor Next week the IBM supercomputer known as "Watson" will take on two of the most accomplished Jeopardy players of all time, Ken Jennings and Brad Rutter, in a three-game match starting on February 14. If Watson manages to best the humans, it will represent the most important advance in machine intelligence since IBM's "Deep Blue" beat chess grandmaster Garry Kasparov in 1997. But this time around, the company also plans to make a business case for the technology. Trivial pursuit this is not. And impressive technology it is. On the hardware side, Watson is comprised of 90 Power 750 servers, 16 TB of memory and 4 TB of disk storage, all housed in a relatively compact ten racks. The 750 is IBM's elite Power7-based server targeted for high-end enterprise analytics. (The Power 755 is geared toward high performance technical computing and differs only marginally in CPU speed, memory capacity, and storage options.) 
Although the enterprise version can be ordered with 1 to 4 sockets of 6-core or 8-core Power7 chips, Watson is maxed out with the 4-socket, 8-core configuration using the top bin 3.55 GHz processors. The 360 Power7 chips that make up Watson's brain represent IBM's best and brightest processor technology. Each Power7 is capable of over 500 GB/second of aggregate bandwidth, making it particularly adept at manipulating data at high speeds. FLOPS-wise, a 3.55 GHz Power7 delivers 218 Linpack gigaflops. For comparison, the POWER2 SC processor, which was the chip that powered cyber-chessmaster Deep Blue, managed a paltry 0.48 gigaflops, with the whole machine delivering a mere 11.4 Linpack gigaflops. But FLOPS are not the real story here. Watson's question-answering software presumably makes little use of floating-point number crunching. To deal with the game scenario, the system had to be endowed with a rather advanced version of natural language processing. But according to David Ferrucci, principal investigator for the project, it goes far beyond language smarts. The software system, called DeepQA, also incorporates machine learning, knowledge representation, and deep analytics. Even so, the whole application rests on first understanding the Jeopardy clues, which, because they employ colloquialisms and often obscure references, can be challenging even for humans. That's why this is such a good test case for natural language processing. Ferrucci says the ability to understand language is destined to become a very important aspect of computers. "It has to be that way," he says. "We just cant imagine a future without it." But it's the analysis component that we associate with real "intelligence." The approach here reflects the open domain nature of the problem. According to Ferrucci, it wouldn't have made sense to simply construct a database corresponding to possible Jeopardy clues. Such a model would have supported only a small fraction of the possible topics available to Jeopardy. Rather their approach was to use "as is" information sources -- encyclopedias, dictionaries, thesauri, plays, books, etc. -- and make the correlations dynamically. The trick of course is to do all the processing in real-time. Contestants, at least the successful ones, need to provide an answer in just a few seconds. When the software was run on a lone 2.6 GHz CPU, it took around 2 hours to process a typical Jeopardy clue -- not a very practical implementation. But when they parallelized the algorithms across the 2,880-core Watson, they were able to cut the processing time from a couple of hours to between 2 and 6 seconds. Even at that, Watson doesn't just spit out the answers. It forms hypotheses based on the evidence it finds and scores them at various confidence levels. Watson is programmed not to buzz in until it reaches a confidence of at least 50 percent, although this parameter can be self-adjusted depending on the game situation. To accomplish all this, DeepQA employs an ensemble of algorithms -- about a million lines of code --- to gather and score the evidence. These include temporal reasoning algorithms to correlate times with events, statistical paraphrasing algorithms to evaluate semantic context, and geospatial reasoning to correlate locations. It can also dynamically form associations, both in training and at game time, to connect disparate ideas. For example it can learn that inventors can patent information or that officials can submit resignations. 
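In caricature, the article's "ensemble of algorithms to gather and score the evidence" plus a buzz-in threshold reduces to a weighted combination of scorer outputs squashed into a confidence. The sketch below is deliberately simplified; the scorer values, weights, bias and logistic squashing are inventions for illustration, not DeepQA internals.

    import math

    # Toy evidence combination: each scorer rates a candidate answer in [0, 1];
    # a weighted sum squashed to a "confidence" decides whether to buzz.
    def confidence(scores, weights, bias=-2.0):
        z = bias + sum(w * s for w, s in zip(weights, scores))
        return 1.0 / (1.0 + math.exp(-z))    # logistic squashing

    # Hypothetical outputs for one candidate (temporal, paraphrase, geospatial, ...)
    scores = [0.9, 0.7, 0.2]
    weights = [2.0, 1.5, 0.5]                # stand-ins for weights learned offline
    BUZZ_THRESHOLD = 0.50                    # the article's "at least 50 percent"

    c = confidence(scores, weights)
    print(f"confidence {c:.2f}", "-> buzz" if c >= BUZZ_THRESHOLD else "-> stay silent")

The article's later point about Watson shifting the weight it gives each algorithm amounts to re-fitting those weights as it learns which scorers are actually predictive.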
Watson also shifts the weight it assigns to different algorithms based on which ones are delivering the more accurate correlations. This aspect of machine learning allows Watson to get "smarter" the more it plays the game. The DeepQA programmers have also been refining the algorithms themselves over the past several years. In 2007, Watson could only answer a small fraction of Jeopardy clues with reasonable confidence and even at that, was only correct 47 percent of the time. When forced to answer the majority of the clues, like a grand champion would, it could only answer 15 percent correctly. By IBM's own admission, Watson was playing "terrible." The highest performing Jeopardy grand champions, like Jennings and Rutter, typically buzz in on 70 to 80 percent of the entries and give the correct answer 85 to 95 percent of time. By 2010 Watson started playing at that level. Ferrucci says that while the system can't buzz in on every question, it can now answer the vast majority of them in competitive time. "We can compete with grand champions in terms of precision, in terms of confidence, and in terms of speed," he says. In dozens of practice rounds against former Jeopardy champs, the computer was beating the humans with a 65 percent win rate. Watson also prevailed in a 15-question round against Jennings and Rutter in early January of this year. See the performance below. None of this is a guarantee that Watson will prevail next week. But even if the machine just makes a decent showing, IBM will have pulled off quite possibly the best product placement in television history. Open domain question answering is not only one of the Holy Grails of artificial intelligence but has enormous potential for commercial applications. In areas as disparate as healthcare, tech support, business intelligence, security and finance, this type of platform could change those businesses irrevocably. John Kelly, senior vice president and director of IBM Research, boasts, "We're going to revolutionize industries at a level that has never been done before." In the case of healthcare, it's not a huge leap to imagine "expert" question answering systems helping doctors with medical diagnosis. A differential diagnosis is not much different from what Watson does when it analyzes a Jeopardy clue. Before it replaces Dr. House, though, the machine will have to prove itself in the game show arena. If Jennings and Rutter defeat the supercomputer this time around, IBM will almost certainly ask for a rematch, as it did when Deep Blue initially lost its first chess match with Kasparov in 1996. The engineers will keep stroking the code and retraining the computer until Watson is truly unbeatable. Eventually the machine will prevail. From lubkin at unreasonable.com Wed Feb 16 14:46:02 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 09:46:02 -0500 Subject: [ExI] ibm takes on the commies In-Reply-To: <20110216075255.GM23560@leitl.org> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> Message-ID: <201102161445.p1GEjHpI021917@andromeda.ziaspace.com> Eugen wrote: >The yardstick typically used is LINPACK http://www.top500.org/project/linpack >Not terribly meaningful, but it meets the way people tend to solve problems, >so it's not completely useless. Obviously, the >only way to measure the performance >is to run your own problem. Back in my [ LLNL, Apollo, HP ] days, it was common for hardware manufacturers to "study for the test." 
Since customers and press paid attention to benchmarks like LINPACK or the Livermore Loops, engineering resources were focused on doing well at what the tests measured, at the expense of other facets. For instance, superb vector operation (e.g., the Cray's ability to perform the same operation on 64 sets of floating-point operands in one instruction) was often coupled with mediocre performance for integer scalars. This wasn't just at the hardware level. My boss for a time at Livermore was one of the top compiler guys anywhere, and people used his LRLTRAN over the Fortran that came from Cray (CFT) because he generated higher-performance machine language than Cray knew how to. Similarly, the compiler groups at computer vendors focused on making the benchmarks run fast. Nothing inherently wrong with that except that, as Eugen noted, you need to see if the (computer+compiler) runs fast on what *you* would use it for. *However*, there were vendors caught in cheating. They wrote compilers that detected when a standard benchmark was being compiled and generated better code than they ordinarily could. -- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From mbb386 at main.nc.us Wed Feb 16 15:13:13 2011 From: mbb386 at main.nc.us (MB) Date: Wed, 16 Feb 2011 10:13:13 -0500 Subject: [ExI] Treating Western diseases In-Reply-To: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> Message-ID: <4f3d94724aaf95d52871d09bed7abc62.squirrel@www.main.nc.us> I found the article quite interesting, made copies for my cousin (Chron's) and my co-worker (autistic brother). Since so many dread, intractable diseases now are classed under "auto-immune" it's important that we study these unexpected results. If we never look how will we ever find? Regards, MB > Treating autism, Crohn's disease, multiple sclerosis, etc. with > intentionally ingesting parasites. The squeamish of you (if any) > should get past any "ew, gross!" reaction and read this. It may be > very important for someone you love and have implications on life > extension. I heard about it from Patri. > > http://www.the-scientist.com/2011/2/1/42/1/ > From rpwl at lightlink.com Wed Feb 16 15:20:07 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 10:20:07 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151955.p1FJto5v017690@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: <4D5BEB27.7020204@lightlink.com> David Lubkin wrote: > Richard Loosemore wrote: > >> This is *way* beyond anything that Watson is doing. >> >> What it does, essentially, is this: > : >> It is a brick-stupid cluster analysis program. >> >> So, does Watson think about what the other contestants might be doing? >> Err, that would be "What is 'you have got to be joking'?" > > You don't seem to have read what I wrote. The only question I raised > about Watson's current capabilities was whether it had a module to > analyze its failures and hone itself. *That* has been possible in > software for several decades. > > (I've worked in pertinent technologies since the late 70's.) Misunderstanding: I was addressing the general appropriateness of the question (my intention was certainly not to challenge your level of understanding). 
I was trying to point out that Watson is so close to being a statistical analysis of text corpora, that it hardly makes sense to ask about all those "comprehension" issues that you talked about. Not in the same breath. For example, you brought up the question of self-awareness of your own code-writing strategies, and conscious adjustments that you made to correct for them (... you noticed your own habit of making large numbers of off-by-one errors). That kind of self-awareness is extremely interesting and is being addressed quite deliberately by some AGI researchers (e.g. myself). But to even talk about such stuff in the context of Watson is a bit like asking whether next year's software update to (e.g.) Mathematica might be able to go to math lectures, listen to the lecturer, ask questions in class, send humorous tweets to classmates about what the lecturer is wearing, and get a good mark on the exam at the end of the course. Yes, Watson can hone itself, of course! As you point out, that kind of thing has been done for decades. No question. But statistical adaptation is far removed from awareness of one's own problem solving strategies. Kernel methods do not buy you models of cognition! What is going on here -- what I am trying to point out -- is a fantastic degree of confusion. In one moment there is an admission that Watson is mostly doing a form of statistical analysis (plus tweaks). Then, the next moment people are making statements that jump from ground level up to the stratosphere, suggesting that this is the beginning of the arrival of something like real AGI (the comments of the Watson team certainly imply that this is a major milestone in AI, and the press are practically announcing this as the second coming). I am just trying to inject a dose of sanity. And failing. Richard Loosemore From lubkin at unreasonable.com Wed Feb 16 16:10:04 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 11:10:04 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <201102161609.p1GG960F029008@andromeda.ziaspace.com> Richard Loosemore wrote: >I was trying to point out that Watson is so >close to being a statistical analysis of text >corpora, that it hardly makes sense to ask about >all those "comprehension" issues that you talked >about. Not in the same breath. : >But to even talk about such stuff in the context of Watson : >What is going on here -- what I am trying to >point out -- is a fantastic degree of >confusion. In one moment there is an admission >that Watson is mostly doing a form of >statistical analysis (plus tweaks). Then, the >next moment people are making statements that >jump from ground level up to the stratosphere : >I am just trying to inject a dose of sanity. > >And failing. As far as I can see, the only confusion is on your part, from assuming that posters will stick to the topic. What's happening is that the topic at hand (Watson, in this case) triggers ideas. Thoughts about AGI. Thoughts about how Watson could be made a more sophisticated player. Thoughts about game theory aspects of Jeopardy play. Etc. I don't think I've ever had or heard a conversation among extropians that didn't leap off onto interesting tangents. Often mid-sentence. I think of this as a feature, not a bug. 
-- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From hkeithhenson at gmail.com Wed Feb 16 17:08:45 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 16 Feb 2011 10:08:45 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > >> I'm still pissed at Sagan for his hubris in sending a message to the >> stars without asking the rest of us first, in blithe certainty that "of >> course" any recipient would have evolved beyond aggression and >> xenophobia. > > The real reasons if that they would be there you'd be dead, Jim. > In fact, if any alien picks up the transmission (chance: very close > to zero) they'd better be farther advanced than us, and on a > faster track. I hope it for them. I have been mulling this over for decades. We look out into the Universe and don't (so far) see or hear any evidence of technophilic civilization. I see only two possibilities: 1) Technophilics are so rare that there are no others in our light cone. 2) Or if they are relatively common something wipes them *all* out, or, if not wiped out, they don't do anything which indicates their presence. If 1, then the future is unknown. If 2, it's probably related to local singularities. If that's the case, most of the people reading this list will live to see it. Keith PS. If anyone can suggest something that is not essentially the same two situations, please speak up. From rpwl at lightlink.com Wed Feb 16 17:41:00 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 12:41:00 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: <4D5C0C2C.9030306@lightlink.com> Keith Henson wrote: > On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > >> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >> >>> I'm still pissed at Sagan for his hubris in sending a message to the >>> stars without asking the rest of us first, in blithe certainty that "of >>> course" any recipient would have evolved beyond aggression and >>> xenophobia. >> The real reasons if that they would be there you'd be dead, Jim. >> In fact, if any alien picks up the transmission (chance: very close >> to zero) they'd better be farther advanced than us, and on a >> faster track. I hope it for them. > > I have been mulling this over for decades. > > We look out into the Universe and don't (so far) see or hear any > evidence of technophilic civilization. > > I see only two possibilities: > > 1) Technophilics are so rare that there are no others in our light cone. > > 2) Or if they are relatively common something wipes them *all* out, > or, if not wiped out, they don't do anything which indicates their > presence. > > If 1, then the future is unknown. If 2, it's probably related to > local singularities. If that's the case, most of the people reading > this list will live to see it. Well, not really an extra one, but I count four items in your 2-item list: 1) Technophilics are so rare that there are no others in our light cone. 
2) If they are relatively common, there is something that wipes them *all* out (by the time they reach this stage they foul their own nest and die), or 3) They are relatively common and they don't do anything which indicates their presence, because they are too scared that someone else will zap them, or 4) They are relatively common and they don't do anything which indicates their presence, because they use communications technology that does not leak the way ours does. Richard Loosemore From jonkc at bellsouth.net Wed Feb 16 18:07:49 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Feb 2011 13:07:49 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: > > That kind of self-awareness is extremely interesting and is being addressed quite deliberately by some AGI researchers (e.g. myself). So I repeat my previous request, please tell us all about the wonderful AI program that you have written that does things even more intelligently than Watson. > > But statistical adaptation is far removed from awareness of one's own problem solving strategies. Kernel methods do not buy you models of cognition! To hell with awareness! Consciousness theories are the last refuge of the scoundrel. As there is no data they need to explain consciousness theories are incredibly easy to come up with, any theory will do and one is as good as another. If you really want to establish your gravitas as an AI researcher then come up with an advancement in machine INTELLIGENCE one tenth of one percent as great as Watson. > I confess I don't understand the need for the personal remarks. That irritation probably comes from the demeaning remarks you have made about people in the AI community that are supposed to be your colleagues, scientists who have done more than philosophize but have actually written a program and accomplished something pretty damn remarkable. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 18:15:33 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 10:15:33 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <005b01cbce05$8101d390$83057ab0$@att.net> >... On Behalf Of Richard Loosemore >...(the comments of the Watson team certainly imply that this is a major milestone in AI... Well sure, they have been working on this for years, now it come out taking on humans and whooping ass. I can scarcely fault them for a bit of immodesty. >... and the press are practically announcing this as the second coming). Ja, and of course the press needs to whip up excitement, otherwise their products don't sell and they no longer have a job. Now, who would hire a former journalist? We couldn't trust them at the local elementary school, and McDonald's won't hire them; they don't speak the language. 
But there is something else that really makes this exciting, for we recognize that if a computer can play Jeopardy, it can be modified into being a general conversationalist. Many of us have or had an Alzheimer's family member. From firsthand experience, we know how frustrating that can be. The patient repeats herself over and over; pretty soon no one wants to talk to the patient. The patient feels everyone is angry with her, and often reacts with anger. Most of the time the patient is just bored and lonely, even in a crowded house. She perhaps can no longer read, cannot go out on walks alone, family members don't sit and visit. I think we will be able to modify something like a very limited version of Watson, get him running on a PC, rig up some kind of Bluetooth speech recognition system and we have something that a whole lot of people would pay five digits to have. No sexbots, no tricky mechanical devices, just a good competent yakbot, to keep our aging parent company. > I am just trying to inject a dose of sanity. And failing...Richard Loosemore Richard you are not failing, we hear ya loud and clear. But you are searching for general AI, whereas I and perhaps others here have a far more modest and immediate need, which we recognize has nothing to do with AI. Yes we may lure away a few of your brightest students, but keep in mind they have to pay the rent too. Furthermore, plenty of those students have grandparents whose minds are wasting away, lonely and bored, grandparents who need our help NOW and who richly deserve it. In the long run, this will bring excitement and money into the field, attracting more able minds to AI research than are drawn away into Watson-ish exercises. Everyone wins. Work with us. You are among friends here. spike From spike66 at att.net Wed Feb 16 18:28:30 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 10:28:30 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102161609.p1GG960F029008@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <201102161609.p1GG960F029008@andromeda.ziaspace.com> Message-ID: <005c01cbce07$4ffa5590$efef00b0$@att.net> >... On Behalf Of David Lubkin ... >... I don't think I've ever had or heard a conversation among extropians that didn't leap off onto interesting tangents. Often mid-sentence. -- David. Well said indeed, me lad. From your comments and Richard's, which mention the 1970s, perhaps you gentlemen are not much younger than I am. If so, it might not be so much our parents and grandparents using yakbots, it might be you and I using these products in another two or three decades. Wait, what were we talking about? spike From jonkc at bellsouth.net Wed Feb 16 18:40:24 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Feb 2011 13:40:24 -0500 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: <4D5C0C2C.9030306@lightlink.com> References: <4D5C0C2C.9030306@lightlink.com> Message-ID: <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> I would humbly like to suggest that June 23 (Alan Turing's birthday by the way) be turned into an international holiday called "Image Recognition Appreciation Day". On this day we would all reflect on the intelligence required to recognize images. 
It is important that this be done soon because although computers are not very good at this task right now that will certainly change in the next few years. On the day computers become good at it the laws of physics in the universe will change and intelligence will no longer be required for image recognition. So if we ever intend to salute the brainpower required for this skill it is imperative we do it now while we can. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Wed Feb 16 19:56:00 2011 From: sparge at gmail.com (Dave Sill) Date: Wed, 16 Feb 2011 14:56:00 -0500 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> References: <4D5C0C2C.9030306@lightlink.com> <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> Message-ID: 2011/2/16 John Clark > I would humbly like to suggest that June 23 (Alan Turing's birthday by the > way) be turned into a international holiday called "Image Recognition > Appreciation Day". On this day we would all reflect on the intelligence > required to recognize images. It is important that this be done soon because > although computers are not very good at this task right now that will > certainly change in the next few years. On the day computers become good at > it the laws of physics in the universe will change and intelligence will no > longer be required for image recognition. > > So if we ever intend to salute the brainpower required for this skill it is > imperative we do it now while we can. > John, do you really have trouble seeing the distinction between specialized intelligence and general intelligence? Do you think Deep Blue or Watson could pass the Turing Test? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Wed Feb 16 20:03:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 15:03:53 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> Message-ID: <4D5C2DA9.9050804@lightlink.com> John Clark wrote: > On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >> >> That kind of self-awareness is extremely interesting and is being >> addressed quite deliberately by some AGI researchers (e.g. myself). > > So I repeat my previous request, please tell us all about the wonderful > AI program that you have written that does things even more > intelligently than Watson. Done: read my papers. Questions? Just ask! >> But statistical adaptation is far removed from awareness of one's own >> problem solving strategies. Kernel methods do not buy you models of >> cognition! > > To hell with awareness! Consciousness theories are the last refuge of > the scoundrel. As there is no data they need to explain consciousness > theories are incredibly easy to come up with, any theory will do and one > is as good as another. If you really want to establish your gravitas as > an AI researcher then come up with an advancement in machine > INTELLIGENCE one tenth of one percent as great as Watson. Nice rant -- thank you John -- but I was talking about awareness, not consciousness. 
"Awareness" just means modeling of internal cognitive processes. Very different. >> I confess I don't understand the need for the personal remarks. > > That irritation probably comes from the demeaning remarks you have made > about people in the AI community that are supposed to be your > colleagues, scientists who have done more than philosophize but have > actually written a program and accomplished something pretty damn > remarkable. "... To be generous, guiltless, and of free disposition, is to take those things for Bird-Bolts that you deem Cannon Bullets ..." Richard Loosemore From sjatkins at mac.com Wed Feb 16 23:36:58 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:36:58 -0800 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C0C2C.9030306@lightlink.com> References: <4D5C0C2C.9030306@lightlink.com> Message-ID: <4D5C5F9A.2020204@mac.com> On 02/16/2011 09:41 AM, Richard Loosemore wrote: > Keith Henson wrote: >> On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: >> >>> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >>> >>>> I'm still pissed at Sagan for his hubris in sending a message to the >>>> stars without asking the rest of us first, in blithe certainty that >>>> "of >>>> course" any recipient would have evolved beyond aggression and >>>> xenophobia. >>> The real reasons if that they would be there you'd be dead, Jim. >>> In fact, if any alien picks up the transmission (chance: very close >>> to zero) they'd better be farther advanced than us, and on a >>> faster track. I hope it for them. >> >> I have been mulling this over for decades. >> >> We look out into the Universe and don't (so far) see or hear any >> evidence of technophilic civilization. >> >> I see only two possibilities: >> >> 1) Technophilics are so rare that there are no others in our light >> cone. >> >> 2) Or if they are relatively common something wipes them *all* out, >> or, if not wiped out, they don't do anything which indicates their >> presence. >> >> If 1, then the future is unknown. If 2, it's probably related to >> local singularities. If that's the case, most of the people reading >> this list will live to see it. > Well, the message sent by Sagan was a single transmission aimed at a globular cluster 25,000 light years away. Traveling at near light speed to send a ship back is very expensive and would not happen for a long time. And for what? A lower level species that may or may not survive its own growing pains long enough to ever be any kind of threat at all? The chances that a highly xenophobic advanced species would pick it up and choose to mount the expense to act on it is pretty small. Hmm. Of course if they are particularly advanced they could just engineer a super-nova aimed in our general direction from close enough. Or as some film had it, send us the plans to build a wonder machine that wipes us out or turns us into more of them. > Well, not really an extra one, but I count four items in your 2-item > list: > > 1) Technophilics are so rare that there are no others in our light cone. 
> > 2) If they are relatively common, there is something that wipes them > *all* out (by the time they reach this stage they foul their own nest > and die), or > > 3) They are relatively common and they don't do anything which > indicates their presence, because they are too scared that someone > else will zap them, or > > 4) They are relatively common and they don't do anything which > indicates their presence, because they use communications technology > that does not leak the way ours does. > My theory is that almost no evolved intelligent species meets the challenge of overcoming its evolved limitations fast enough to cope successfully with accelerating technological change. Almost all either wipe themselves out or ding themselves sufficiently hard to miss their window of opportunity. It can be argued that it is very very rare that a technological species survives the period we are entering and emerges more capable on the other side of singularity. - samantha From sjatkins at mac.com Wed Feb 16 23:39:57 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:39:57 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <005b01cbce05$8101d390$83057ab0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> Message-ID: <4D5C604D.3030201@mac.com> On 02/16/2011 10:15 AM, spike wrote: >> ... On Behalf Of Richard Loosemore >> ...(the comments of the Watson team certainly imply that this is a major > milestone in AI... > > Well sure, they have been working on this for years, now it come out taking > on humans and whooping ass. I can scarcely fault them for a bit of > immodesty. > >> ... and the press are practically announcing this as the second coming). > Ja, and of course the press needs to whip up excitement, otherwise their > products don't sell and they no longer have a job. Now, who would hire a > former journalist? We couldn't trust them at the local elementary school, > and McDonald's won't hire them; they don't speak the language. > > But there is something else that really makes this exciting, for we > recognize that if a computer can play Jeopardy, it can be modified into > being a general conversationalist. Not the same problem domain or even all that close. Can you turn it into a really good chatbot? Maybe, maybe not depending on your standard of "good". But that wouldn't be very exciting. Very expensive way to keep folks in the nursing home entertained. - samantha From sjatkins at mac.com Wed Feb 16 23:48:03 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:48:03 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110216131030.GR23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> <20110216131030.GR23560@leitl.org> Message-ID: <4D5C6233.8050902@mac.com> Thanks for the excellent article, Eugen. Watson certainly is not simplistic. And some of its capabilities are ones I did not know we had a good enough handle on. Of course working with such beefy hardware is a big part of its level of RT success. 
Until the algorithms and hardware costs combined give orders of magnitude lower cost I can't see such capabilities making that much of a difference more broadly real soon. - s On 02/16/2011 05:10 AM, Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 07:55:56AM -0400, Darren Greer wrote: > >> Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 >> for Rutter. He got final Jeopardy wrong but was parsimonious with his >> wager -- just 900 odd dollars. Alex Trebek laughed and called him a >> 'sneak' because of the clever wager. The category was which U.S. city has >> an airport named after a war hero and a WWII battle. Watson said Toronto. I >> got a good laugh. I didn't know we'd been annexed. Another >> interesting detail. Ratings for Jeopardy have soared into the > Not to mention bring Big Blue back into the limelight. > >> stratosphere because of Watson. It moved into the number two spot in TV >> land behind a Charlie Sheen sitcom last night. > By the way, Watson is not nearly as dumb (and far more usable) than I > thought. According to > > http://www.hpcwire.com/features/Must-See-TV-IBM-Watson-Heads-for-Jeopardy-Showdown-115684499.html?viewAll=y > > February 09, 2011 > > Must See TV: IBM Watson Heads for Jeopardy Showdown > > Michael Feldman, HPCwire Editor > > Next week the IBM supercomputer known as "Watson" will take on two of the > most accomplished Jeopardy players of all time, Ken Jennings and Brad Rutter, > in a three-game match starting on February 14. If Watson manages to best the > humans, it will represent the most important advance in machine intelligence > since IBM's "Deep Blue" beat chess grandmaster Garry Kasparov in 1997. But > this time around, the company also plans to make a business case for the > technology. Trivial pursuit this is not. > > And impressive technology it is. On the hardware side, Watson is comprised of > 90 Power 750 servers, 16 TB of memory and 4 TB of disk storage, all housed in > a relatively compact ten racks. The 750 is IBM's elite Power7-based server > targeted for high-end enterprise analytics. (The Power 755 is geared toward > high performance technical computing and differs only marginally in CPU > speed, memory capacity, and storage options.) Although the enterprise version > can be ordered with 1 to 4 sockets of 6-core or 8-core Power7 chips, Watson > is maxed out with the 4-socket, 8-core configuration using the top bin 3.55 > GHz processors. > > The 360 Power7 chips that make up Watson's brain represent IBM's best and > brightest processor technology. Each Power7 is capable of over 500 GB/second > of aggregate bandwidth, making it particularly adept at manipulating data at > high speeds. FLOPS-wise, a 3.55 GHz Power7 delivers 218 Linpack gigaflops. > For comparison, the POWER2 SC processor, which was the chip that powered > cyber-chessmaster Deep Blue, managed a paltry 0.48 gigaflops, with the whole > machine delivering a mere 11.4 Linpack gigaflops. > > But FLOPS are not the real story here. Watson's question-answering software > presumably makes little use of floating-point number crunching. To deal with > the game scenario, the system had to be endowed with a rather advanced > version of natural language processing. But according to David Ferrucci, > principal investigator for the project, it goes far beyond language smarts. > The software system, called DeepQA, also incorporates machine learning, > knowledge representation, and deep analytics. 
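A toy sketch may make the DeepQA description above, and the evidence-scoring details the article gives next, a bit more concrete: several independent scoring algorithms each grade a candidate answer, a weighted combination of those grades becomes the confidence, and the system only answers when that confidence clears a threshold. Everything in the snippet below -- the scorer names, the weights, the 50 percent threshold, the sample clue -- is invented purely for illustration and says nothing about IBM's actual implementation:

```python
# Illustrative only: a minimal weighted-ensemble answer scorer in the
# general spirit of the DeepQA description, not IBM's code.

def temporal_score(clue, answer):
    # Dummy stand-in for "temporal reasoning" evidence (fixed value here).
    return 0.7

def paraphrase_score(clue, answer):
    # Dummy stand-in for "statistical paraphrasing" evidence.
    return 0.4

def geospatial_score(clue, answer):
    # Dummy stand-in for "geospatial reasoning" evidence.
    return 0.9

SCORERS = [temporal_score, paraphrase_score, geospatial_score]
WEIGHTS = [0.5, 0.3, 0.2]     # in a real system these would be learned
BUZZ_THRESHOLD = 0.5          # "a confidence of at least 50 percent"

def confidence(clue, answer):
    # Weighted combination of the individual evidence scores.
    return sum(w * s(clue, answer) for w, s in zip(WEIGHTS, SCORERS))

def best_answer(clue, candidates):
    # Score every candidate hypothesis; answer only if the best clears the bar.
    scored = sorted(((confidence(clue, a), a) for a in candidates), reverse=True)
    top_conf, top_answer = scored[0]
    return (top_answer, top_conf) if top_conf >= BUZZ_THRESHOLD else (None, top_conf)

print(best_answer("U.S. city whose airport is named for a war hero", ["Chicago", "Toronto"]))
```

The article's remark that Watson "shifts the weight it assigns to different algorithms" corresponds, in this caricature, to re-estimating WEIGHTS from whichever scorers have been delivering the more accurate evidence.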
> > Even so, the whole application rests on first understanding the Jeopardy > clues, which, because they employ colloquialisms and often obscure > references, can be challenging even for humans. That's why this is such a > good test case for natural language processing. Ferrucci says the ability to > understand language is destined to become a very important aspect of > computers. "It has to be that way," he says. "We just cant imagine a future > without it." > > But it's the analysis component that we associate with real "intelligence." > The approach here reflects the open domain nature of the problem. According > to Ferrucci, it wouldn't have made sense to simply construct a database > corresponding to possible Jeopardy clues. Such a model would have supported > only a small fraction of the possible topics available to Jeopardy. Rather > their approach was to use "as is" information sources -- encyclopedias, > dictionaries, thesauri, plays, books, etc. -- and make the correlations > dynamically. > > The trick of course is to do all the processing in real-time. Contestants, at > least the successful ones, need to provide an answer in just a few seconds. > When the software was run on a lone 2.6 GHz CPU, it took around 2 hours to > process a typical Jeopardy clue -- not a very practical implementation. But > when they parallelized the algorithms across the 2,880-core Watson, they were > able to cut the processing time from a couple of hours to between 2 and 6 > seconds. > > Even at that, Watson doesn't just spit out the answers. It forms hypotheses > based on the evidence it finds and scores them at various confidence levels. > Watson is programmed not to buzz in until it reaches a confidence of at least > 50 percent, although this parameter can be self-adjusted depending on the > game situation. > > To accomplish all this, DeepQA employs an ensemble of algorithms -- about a > million lines of code --- to gather and score the evidence. These include > temporal reasoning algorithms to correlate times with events, statistical > paraphrasing algorithms to evaluate semantic context, and geospatial > reasoning to correlate locations. > > It can also dynamically form associations, both in training and at game time, > to connect disparate ideas. For example it can learn that inventors can > patent information or that officials can submit resignations. Watson also > shifts the weight it assigns to different algorithms based on which ones are > delivering the more accurate correlations. This aspect of machine learning > allows Watson to get "smarter" the more it plays the game. > > The DeepQA programmers have also been refining the algorithms themselves over > the past several years. In 2007, Watson could only answer a small fraction of > Jeopardy clues with reasonable confidence and even at that, was only correct > 47 percent of the time. When forced to answer the majority of the clues, like > a grand champion would, it could only answer 15 percent correctly. By IBM's > own admission, Watson was playing "terrible." The highest performing Jeopardy > grand champions, like Jennings and Rutter, typically buzz in on 70 to 80 > percent of the entries and give the correct answer 85 to 95 percent of time. > > By 2010 Watson started playing at that level. Ferrucci says that while the > system can't buzz in on every question, it can now answer the vast majority > of them in competitive time. 
"We can compete with grand champions in terms of > precision, in terms of confidence, and in terms of speed," he says. > > In dozens of practice rounds against former Jeopardy champs, the computer was > beating the humans with a 65 percent win rate. Watson also prevailed in a > 15-question round against Jennings and Rutter in early January of this year. > See the performance below. > > None of this is a guarantee that Watson will prevail next week. But even if > the machine just makes a decent showing, IBM will have pulled off quite > possibly the best product placement in television history. Open domain > question answering is not only one of the Holy Grails of artificial > intelligence but has enormous potential for commercial applications. In areas > as disparate as healthcare, tech support, business intelligence, security and > finance, this type of platform could change those businesses irrevocably. > John Kelly, senior vice president and director of IBM Research, boasts, > "We're going to revolutionize industries at a level that has never been done > before." > > In the case of healthcare, it's not a huge leap to imagine "expert" question > answering systems helping doctors with medical diagnosis. A differential > diagnosis is not much different from what Watson does when it analyzes a > Jeopardy clue. Before it replaces Dr. House, though, the machine will have to > prove itself in the game show arena. > > If Jennings and Rutter defeat the supercomputer this time around, IBM will > almost certainly ask for a rematch, as it did when Deep Blue initially lost > its first chess match with Kasparov in 1996. The engineers will keep stroking > the code and retraining the computer until Watson is truly unbeatable. > Eventually the machine will prevail. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From sjatkins at mac.com Wed Feb 16 23:53:45 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:53:45 -0800 Subject: [ExI] ibm takes on the commies In-Reply-To: <20110216075255.GM23560@leitl.org> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> Message-ID: <4D5C6389.4050504@mac.com> On 02/15/2011 11:52 PM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 11:14:11PM -0800, spike wrote: >> >> >> Computer hipsters explain this to me. When they are claiming 10 petaflops, >> they mean using a few tens of thousands of parallel processors, ja? We > A common gamer's graphics card can easily have a thousand or a couple > thousand cores (mostly VLIW) and memory bandwidth from hell. Total node > count could run into tens to hundreds thousands, so we're talking > multiple megacores. As you are probably aware those are not general purpose cores. They cannot run arbitrary algorithms efficiently. >> couldn't check one Mersenne prime per second with it or anything, ja? It >> would be the equivalent of 10 petaflops assuming we have a process that is >> compatible with massive parallelism? The article doesn't say how many > Fortunately, every physical process (including cognition) is compatible > with massive parallelism. Just parcel the problem over a 3d lattice/torus, > exchange information where adjacent volumes interface through the high-speed > interconnect. There is no general parallelization strategy. If there was then taking advantage of multiple cores maximally would be a solved problem. It is anything but. 
> Anyone who has written numerics for MPI recognizes the basic design > pattern. > Not everything is reducible in ways that lead to those techniques being generally sufficient. - s From kellycoinguy at gmail.com Thu Feb 17 00:36:58 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 17:36:58 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C2DA9.9050804@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: >> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >> So I repeat my previous request, please tell us all about the wonderful AI >> program that you have written that does things even more intelligently than >> Watson. > > Done: ?read my papers. I've done that. At least all the papers I could find online. I have not seen in your papers anything approaching a utilitarian algorithm, a practical architecture or anything of the sort. Do you have a working program that does ANYTHING? You have some fine theories Richard, but theories that don't lead to some kind of productive result belong in journals of philosophy, not journals of computer science. You have some very interesting philosophical ideas, but I haven't seen anything in your papers that rise to the level of computer science. > Questions? ?Just ask! What is the USEFUL and working application of your theories? Show me the beef! -Kelly From darren.greer3 at gmail.com Thu Feb 17 00:56:02 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 20:56:02 -0400 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> Message-ID: >John, do you really have trouble seeing the distinction between specialized intelligence and general intelligence?< The easiest way I have of conceptualizing this is in terms of an autistic savant. They are often capable of remarkable feats of memory, spatial cognition and even data analysis. But we rarely refer to them as intelligent, because they may not be able to tie their own shoes or tell you what day of the week it is. That being said, Watson is a savant like none we've ever seen before, and it makes sense to me to get excited about him. We're building this thing from the ground up, and if this is not a concrete step forward in developing fully sapient AI (and I'm no expert and can't state definitively whether it is, though it seems on the surface to be so) it is a HUGE step forward in terms of creating a general societal awareness of AI--where it is at and where it can go and what its applications might be. And as anyone here who has ever fought to get funding for a project knows, the latter is just as important--maybe more so--than the former. 2011/2/16 Dave Sill > 2011/2/16 John Clark > > I would humbly like to suggest that June 23 (Alan Turing's birthday by the >> way) be turned into a international holiday called "Image Recognition >> Appreciation Day". On this day we would all reflect on the intelligence >> required to recognize images. It is important that this be done soon because >> although computers are not very good at this task right now that will >> certainly change in the next few years. 
On the day computers become good at >> it the laws of physics in the universe will change and intelligence will no >> longer be required for image recognition. >> >> So if we ever intend to salute the brainpower required for this skill it >> is imperative we do it now while we can. >> > > John, do you really have trouble seeing the distinction between specialized > intelligence and general intelligence? Do you think Deep Blue or Watson > could pass the Turing Test? > > -Dave > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 01:03:14 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 18:03:14 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 2:41 PM, BillK wrote: > No, because Watson doesn't have time to do any learning or > optimisation while the game is actually in progress. Watson doesn't > take any notice of opponents answers. That's why it gave the same > wrong answer as an opponent had already given. On the NOVA show, Watson made the same mistake (giving an answer already given) and the programmers talked about having solved that problem a bit later. I would *guess* that the mechanism they used somehow violated the rules imposed by the Jeopardy producers. It seems like it would be an easy fix if they had a speech recognition algorithm feeding back into the system, but they don't have that capacity (yet). Alex T indicated that Watson wasn't "listening" in the first show. Again, according to the NOVA show, Watson does have a module that learns during the game, related to the interpretation of the Category. I did not get the idea that the real-time learning was very sophisticated or extensive. The IBM materials on DeepQA indicate that there are a number of modules making up the architecture. In other words, you can plug in new algorithms. On the NOVA show they were talking about plugging in a "Gender" module. I would think that each of these modules contributes to an overall score for a good or bad answer. The Spam Assassin algorithm works like this, and it wouldn't surprise me if DeepQA used a similar approach. -Kelly From rpwl at lightlink.com Thu Feb 17 01:13:59 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 20:13:59 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: <4D5C7657.6070405@lightlink.com> Kelly Anderson wrote: >>> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >>> So I repeat my previous request, please tell us all about the wonderful AI >>> program that you have written that does things even more intelligently than >>> Watson. >> Done: read my papers. > > I've done that. At least all the papers I could find online. 
I have > not seen in your papers anything approaching a utilitarian algorithm, > a practical architecture or anything of the sort. Do you have a > working program that does ANYTHING? You have some fine theories > Richard, but theories that don't lead to some kind of productive > result belong in journals of philosophy, not journals of computer > science. You have some very interesting philosophical ideas, but I > haven't seen anything in your papers that rise to the level of > computer science. > >> Questions? Just ask! > > What is the USEFUL and working application of your theories? > > Show me the beef! So demanding, some people. ;-) If you have read McClelland and Rumelhart's two-volume "Parallel Distributed Processing", and if you have then read my papers, and if you are still so much in the dark that the only thing you can say is "I haven't seen anything in your papers that rise to the level of computer science" then, well... (And, in any case, my answer to John Clark was as facetious as his question was silly.) At this stage, what you can get is a general picture of the background theory. That is readily obtainable if you have a good knowledge of (a) computer science, (b) cognitive psychology and (c) complex systems. It also helps, as I say, to be familiar with what was going on in those PDP books. Do you have a fairly detailed knowledge of all three of these areas? Do you understand where McClelland and Rumelhart were coming from when they talked about the relaxation of weak constraints, and about how a lot of cognition seemed to make more sense when couched in those terms? Do you also follow the line of reasoning that interprets M & R's subsequent pursuit of non-complex models as a mistake? And the implication that there is a class of systems that are as yet unexplored, doing what they did but using a complex approach? Put all these pieces together and we have the basis for a dialog. But ... demanding a finished AGI as an essential precondition for behaving in a mature way toward the work I have already published...? I don't think so. :-) Richard Loosemore From darren.greer3 at gmail.com Thu Feb 17 01:16:25 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 21:16:25 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: >I'm still pissed at Sagan for his hubris in sending a message to the stars without asking the rest of us first, in blithe certainty that "of course" any recipient would have evolved beyond aggression and xenophobia.< I'm not sure NASA was so happy with the idea either. It was a last minute thing, and they gave him three weeks to come up with it. Had he had a few weeks longer, he might have reconsidered giving them the 14 pulsars info by which they could triangulate our location. Also, I think gay rights activists weren't happy about the hetero-sexist Adam and Eve thing. I personally think he should have included a lemur or a monkey on the plaque too, to show that we had evolutionary ancestors and they could, in the event of an attack, be called upon to defend us. Seriously though. Sagan did have a 'blithe certainty' that was reflected obliquely in most of what he wrote that any civilization that could get through its technological adolescence intact would have had to get past its stone age evolutionary programming to do so. I always got the sense that the two for him were connected. 
Re-phrased: if you don't evolve past those ancient brain applets of aggression and tribal dominance, you simply don't make it past the nuclear stage of your technological development. A grand assumption, perhaps. But it has some validity. After-all, it remains to see if we're going to graduate. d. On Wed, Feb 16, 2011 at 7:36 PM, Samantha Atkins wrote: > On 02/16/2011 09:41 AM, Richard Loosemore wrote: > >> Keith Henson wrote: >> >>> On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: >>> >>> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >>>> >>>> I'm still pissed at Sagan for his hubris in sending a message to the >>>>> stars without asking the rest of us first, in blithe certainty that "of >>>>> course" any recipient would have evolved beyond aggression and >>>>> xenophobia. >>>>> >>>> The real reasons if that they would be there you'd be dead, Jim. >>>> In fact, if any alien picks up the transmission (chance: very close >>>> to zero) they'd better be farther advanced than us, and on a >>>> faster track. I hope it for them. >>>> >>> >>> I have been mulling this over for decades. >>> >>> We look out into the Universe and don't (so far) see or hear any >>> evidence of technophilic civilization. >>> >>> I see only two possibilities: >>> >>> 1) Technophilics are so rare that there are no others in our light cone. >>> >>> 2) Or if they are relatively common something wipes them *all* out, >>> or, if not wiped out, they don't do anything which indicates their >>> presence. >>> >>> If 1, then the future is unknown. If 2, it's probably related to >>> local singularities. If that's the case, most of the people reading >>> this list will live to see it. >>> >> >> > Well, the message sent by Sagan was a single transmission aimed at a > globular cluster 25,000 light years away. Traveling at near light speed to > send a ship back is very expensive and would not happen for a long time. > And for what? A lower level species that may or may not survive its own > growing pains long enough to ever be any kind of threat at all? The > chances that a highly xenophobic advanced species would pick it up and > choose to mount the expense to act on it is pretty small. > > Hmm. Of course if they are particularly advanced they could just engineer > a super-nova aimed in our general direction from close enough. Or as some > film had it, send us the plans to build a wonder machine that wipes us out > or turns us into more of them. > > > Well, not really an extra one, but I count four items in your 2-item list: >> >> 1) Technophilics are so rare that there are no others in our light cone. >> >> 2) If they are relatively common, there is something that wipes them >> *all* out (by the time they reach this stage they foul their own nest and >> die), or >> >> 3) They are relatively common and they don't do anything which indicates >> their presence, because they are too scared that someone else will zap them, >> or >> >> 4) They are relatively common and they don't do anything which indicates >> their presence, because they use communications technology that does not >> leak the way ours does. >> >> > My theory is that almost no evolved intelligent species meets the challenge > of overcoming its evolved limitations fast enough to cope successfully with > accelerating technological change. Almost all either wipe themselves out > or ding themselves sufficiently hard to miss their window of opportunity. 
> It can be argued that it is very very rare that a technological species > survives the period we are entering and emerges more capable on the other > side of singularity. > > - samantha > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 01:21:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 18:21:26 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5C604D.3030201@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: > On 02/16/2011 10:15 AM, spike wrote: > Not the same problem domain or even all that close. ?Can you turn it into a > really good chatbot? ?Maybe, maybe not depending on your standard of "good". > ?But that wouldn't be very exciting. ? ?Very expensive way to keep folks in > the nursing home entertained. Samantha, are you familiar with Moore's law? Let's assume for purposes of discussion that you are 30, that you will be in the nursing home when you're 70. That means Watson level functionality will cost around $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get it in a box of cracker jacks. -Kelly From lubkin at unreasonable.com Thu Feb 17 01:32:51 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 20:32:51 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <201102170132.p1H1Wc64003850@andromeda.ziaspace.com> Samantha wrote: >Hmm. Of course if they are particularly advanced they could just >engineer a super-nova aimed in our general direction from close >enough. Or as some film had it, send us the plans to build a >wonder machine that wipes us out or turns us into more of them. The first attempt at remote annihilation I know of in sf is astronomer Fred Hoyle's A for Andromeda (BBC 1961, novel 1962): >[I]t concerns a group of scientists who detect a radio signal from a >distant galaxy that contains instructions for the design of an >advanced computer. When the computer is built it gives the >scientists instructions for the creation of a living organism, named >Andromeda. However, one of Andromeda's creators, John Fleming, fears >that Andromeda's purpose is to subjugate humanity. Andromeda was played by Julie Christie, in her first significant role. Sadly, there does not seem to be a complete copy of the seven-episode series. But, thankfully, I'm old enough to have seen it before they were lost. -- David. From darren.greer3 at gmail.com Thu Feb 17 01:34:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 21:34:57 -0400 Subject: [ExI] Acceptance Into Math Program Message-ID: Just to let everyone on here know. When I first joined this group I was ashamed at how little I knew about science compared to the rest of you. 
I'm intensely interested in technology and how it can transform us, and for a while before I came to Exi I was appalled at how little was being done to use it in the proper ways. So I joined your group and am glad I did. I've gotten lots of good ideas and have had great amounts of fun arguing and sparring and sometimes actually agreeing with you all. Last September though I decided to rectify my ignorance and registered for three classes at a local university - physics, mathematics and chemistry. I was pretty nervous. I'm 43. I'm a novelist, and not of science fiction. I have a liberal arts background. But besides wanting to be able to discuss things in this group, I also now believe that any writer who does not have a science background, whether he writes spy novels or technical manuals, may find him self acculturated in the next decade or two. So I bit the bullet. I finish my classes in two months. So far my grade point average is perfect. I actually love the work, and today I found out I have been accepted into a Bsc at a good school here in Canada. I've decided that my major will be in mathematics. I like the chemistry and physics, and perhaps I will switch majors later on. But for now my real interest seems to lie in pure math. I thought you all might like to know this, since it was you as a group who help turn an arts guy into (kind of) a science guy. This group does have tremendous value. Certainly it has had a profound effect on my life. So thanks. And when I get stuck next year, I'll be calling on you. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Thu Feb 17 02:05:40 2011 From: mbb386 at main.nc.us (MB) Date: Wed, 16 Feb 2011 21:05:40 -0500 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: <62eec397476da701d4a2f2b5f3999c21.squirrel@www.main.nc.us> > today I found out I have been accepted into a > Bsc at a good school here in Canada. I've decided that my major will be in > mathematics. I like the chemistry and physics, and perhaps I will switch > majors later on. But for now my real interest seems to lie in pure math. > Congratulations, Darren! That's an impressive step. :) May you continue to do well and find enjoyment in your work. Regards, MB From possiblepaths2050 at gmail.com Thu Feb 17 02:13:26 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 16 Feb 2011 19:13:26 -0700 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: Hello Darren, Congratulations regarding furthering your education!!! : ) This email list has also had an amazing effect on my life, due to the things I've learned and the mental "food for thought buffet" that awaits me here every day. If you have not already done so, I hope you attend a transhumanist conference sometime, so that you can meet some of the people on this list. An H+ or Singularity Institute gathering could really be very thrilling for you. I went to Convergence back in 2008 and had a wonderful time! http://www.convergence08.org/ My best wishes to you, John : ) On 2/16/11, Darren Greer wrote: > Just to let everyone on here know. > > When I first joined this group I was ashamed at how little I knew about > science compared to the rest of you. I'm intensely interested in technology > and how it can transform us, and for a while before I came to Exi I was > appalled at how little was being done to use it in the proper ways. 
So I > joined your group and am glad I did. I've gotten lots of good ideas and have > had great amounts of fun arguing and sparring and sometimes actually > agreeing with you all. > > Last September though I decided to rectify my ignorance and registered for > three classes at a local university - physics, mathematics and chemistry. I > was pretty nervous. I'm 43. I'm a novelist, and not of science fiction. I > have a liberal arts background. But besides wanting to be able to discuss > things in this group, I also now believe that any writer who does not have a > science background, whether he writes spy novels or technical manuals, may > find him self acculturated in the next decade or two. So I bit the bullet. I > finish my classes in two months. So far my grade point average is perfect. I > actually love the work, and today I found out I have been accepted into a > Bsc at a good school here in Canada. I've decided that my major will be in > mathematics. I like the chemistry and physics, and perhaps I will switch > majors later on. But for now my real interest seems to lie in pure math. > > I thought you all might like to know this, since it was you as a group who > help turn an arts guy into (kind of) a science guy. This group does have > tremendous value. Certainly it has had a profound effect on my life. So > thanks. And when I get stuck next year, I'll be calling on you. > > Darren > > -- > *There is no history, only biography.* > * > * > *-Ralph Waldo Emerson > * > From FRANKMAC at RIPCO.COM Thu Feb 17 02:19:19 2011 From: FRANKMAC at RIPCO.COM (FRANK MCELLIGOTT) Date: Wed, 16 Feb 2011 19:19:19 -0700 Subject: [ExI] FORBIN PROJECT Message-ID: From a small, very small, manufacturing background, I understand that the prototype is the major cost; after it finally works, all that is needed is assembly, and the cost descends from there. Right now the cost is way, way out there, but all of us know that the present day 300 dollar computers can dance circles around an IBM 360, which cost over half a million back in the 60's. Soon all major governments will need a Watson, and because of fear of being left behind, money will flow to get a new and improved Watson, and then we all know what happens then:) The major concern will come when the decision making process is given up and placed in the hands of a computer, which will find that the best solution to the rising debt in the United States, caused by Medicare and Social Security underfunding, is not to increase taxes but instead to terminate all the folks who are now receiving benefits. We now walk on thin ice, and most on this list know it: applications that remove human judgement and replace it with code and best-case answers, without moral direction, will remove the species from this planet. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 02:12:43 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 18:12:43 -0800 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: <004001cbce48$29b75900$7d260b00$@att.net> ... On Behalf Of Darren Greer Subject: [ExI] Acceptance Into Math Program >... So far my grade point average is perfect. Mine was kinda like that sorta. My grade point perfect was average. >... But for now my real interest seems to lie in pure math. We are proud of you, my son. >... So thanks. And when I get stuck next year, I'll be calling on you. Darren Do so! 
I have been tutoring two calculus students. I learned I still remember how to integrate and differentiate after all these tragically many years. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Thu Feb 17 01:42:13 2011 From: sparge at gmail.com (Dave Sill) Date: Wed, 16 Feb 2011 20:42:13 -0500 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: 2011/2/16 Darren Greer > > I thought you all might like to know this, since it was you as a group who > help turn an arts guy into (kind of) a science guy. This group does have > tremendous value. Certainly it has had a profound effect on my life. So > thanks. And when I get stuck next year, I'll be calling on you. > Congrats and good luck. This list has had a profound effect on my life, too. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From x at extropica.org Thu Feb 17 03:04:46 2011 From: x at extropica.org (x at extropica.org) Date: Wed, 16 Feb 2011 19:04:46 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: On Tue, Feb 15, 2011 at 6:16 PM, wrote: > On Mon, Feb 14, 2011 at 8:09 PM, ? wrote: >> On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: >>> On 2/14/2011 9:28 PM, spike wrote: >>> >>>> I don?t have commercial TV, and can?t find live streaming. >>> >>> I don't have TV, period. Anyone have a link? >> >> >> > and Day 2: > > > and Day 3: From spike66 at att.net Thu Feb 17 03:00:11 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 19:00:11 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AD78C.80805@mac.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> Message-ID: <005801cbce4e$cba88120$62f98360$@att.net> >... On Behalf Of Samantha Atkins ... >> After seeing the amount of progress we have made in nanotechnology in the quarter century since the K.Eric published Engines of Creation, I have concluded that replicating nanobots are a technology that is out of reach of human capability. >Not so. Just a good three decades further out. - samantha Ja. I just don't know when those three good decades will start. I could be overly pessimistic. Samantha, do you remember about the mid to late 90s, when we were all going great guns on this, investments dollars were flying every which direction, local nanotech miniconferences, the K.Eric was going around giving lectures in the area, and even some universities were starting up nanotech disciplines? One could go to the University of North Carolina and major in nanotechnology. How cool is that! I don't see that any of it gave us much of anything that was true nanotech. The research produced some really excellent technologies, none of which were true bottom up nanotech. In a way, I see that as similar to the debate we have had here the last few days on Watson. It isn't AI, any more than developing submicron scale transistors is nanotechnology, but it has its own advantages. Like the university nanotech major, it attracts young talent, it pays the bills, it definitely fires the imagination. If anyone wanted to argue that these represent indirect paths to nanotech and AGI, well, I wouldn't argue with them. 
spike From spike66 at att.net Thu Feb 17 03:13:16 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 19:13:16 -0800 Subject: [ExI] Watson on NOVA References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> Message-ID: <005f01cbce50$9f4e85a0$ddeb90e0$@att.net> >>> ...I have concluded that replicating nanobots are a technology that is out of reach of human capability. >>Not so. Just a good three decades further out. - samantha >Ja. I just don't know when those three good decades will start...spike Check out the graph at the bottom of this article: http://www.electroiq.com/index/display/nanotech-article-display/6417811327/a rticles/small-times/nanotechmems/research-and_development/2010/august/rankin g-the_nations.html spike From darren.greer3 at gmail.com Thu Feb 17 03:44:11 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 23:44:11 -0400 Subject: [ExI] Watson on NOVA In-Reply-To: <005801cbce4e$cba88120$62f98360$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> <005801cbce4e$cba88120$62f98360$@att.net> Message-ID: > If anyone wanted to argue that these represent indirect paths to nanotech and AGI, well, I wouldn't argue with them. < I wouldn't either. But if human intellectual history has shown us anything, it is that the path to discovery and achievement often *is* indirect. At least the indirect stuff can spawn new insights and applications of technology that just might lead to where we're trying to get too. That probably holds more true now than it ever has, since the days of individual discovery are numbered, as we become more unified in our quests and the individualistic dynamics that have fueled history to this point are replaced by more cooperative, socialized ones. d.. On Wed, Feb 16, 2011 at 11:00 PM, spike wrote: > > > >... On Behalf Of Samantha Atkins > ... > > >> After seeing the amount of progress we have made in nanotechnology in > the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach > of > human capability. > > >Not so. Just a good three decades further out. - samantha > > Ja. I just don't know when those three good decades will start. > > I could be overly pessimistic. Samantha, do you remember about the mid to > late 90s, when we were all going great guns on this, investments dollars > were flying every which direction, local nanotech miniconferences, the > K.Eric was going around giving lectures in the area, and even some > universities were starting up nanotech disciplines? One could go to the > University of North Carolina and major in nanotechnology. How cool is > that! > I don't see that any of it gave us much of anything that was true nanotech. > The research produced some really excellent technologies, none of which > were > true bottom up nanotech. > > In a way, I see that as similar to the debate we have had here the last few > days on Watson. It isn't AI, any more than developing submicron scale > transistors is nanotechnology, but it has its own advantages. Like the > university nanotech major, it attracts young talent, it pays the bills, it > definitely fires the imagination. 
If anyone wanted to argue that these > represent indirect paths to nanotech and AGI, well, I wouldn't argue with > them. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 05:32:38 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 21:32:38 -0800 Subject: [ExI] watson on jeopardy Message-ID: <002d01cbce64$16fc3530$44f49f90$@att.net> Woohoo! Watson wins! http://www.cnn.com/2011/TECH/innovation/02/16/jeopardy.watson/index.html?hpt =T1 Jeopardy isn't over however. It is only a matter of time before a competing team wants to play machine against machine, or even a three-way all machine matchup. Note that there are today about a couple dozen top chess computers cheerfully pummeling each other, with the results being broadcast for all the worlds people with far too much time on their hands to watch in pointless fascination. Those games are in some ways more interesting to watch than human-human or human-machine games, because they tend to be so technically clean and positional, so theoretical. I can imagine there are already teams working to whoop Watson's butt. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 07:24:22 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:24:22 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C7657.6070405@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: On Wed, Feb 16, 2011 at 6:13 PM, Richard Loosemore wrote: > Kelly Anderson wrote: >> Show me the beef! > > So demanding, some people. ?;-) I wouldn't be so demanding if you acknowledged the good work of others, even if it is just a "parlor trick". > If you have read McClelland and Rumelhart's two-volume "Parallel Distributed > Processing", I have read volume 1 (a long time ago), but not volume 2. > and if you have then read my papers, and if you are still so > much in the dark that the only thing you can say is "I haven't seen anything > in your papers that rise to the level of computer science" then, well... Your papers talk the talk, but they don't walk the walk as far as I can tell. There is not a single instance where you say, "And using this technique we can distinguish pictures of cats from pictures of dogs" or "This method leads to differentiating between the works of Bach and Mozart." Or even the ability to answer the question "What do grasshoppers eat?" > (And, in any case, my answer to John Clark was as facetious as his question > was silly.) Sidebar: I have found that humor and facetiousness don't work well on mailing lists. > At this stage, what you can get is a general picture of the background > theory. ?That is readily obtainable if you have a good knowledge of (a) > computer science, Check. > (b) cognitive psychology Eh, so so. > and (c) complex systems. Like the space shuttle? 
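Since the PDP-era notion of "relaxation of weak constraints" is what this exchange keeps circling back to, a small toy may help pin down what the phrase means operationally: units stand for propositions, symmetric weights encode soft constraints between them, and the network settles into the state that violates the fewest weighted constraints. The propositions, weights and biases below are invented for the example; they are not taken from Loosemore's papers or from the PDP volumes:

```python
import numpy as np

# Tiny "relaxation of weak constraints" demo: a Hopfield-style network
# settles into a state that satisfies as many weighted soft constraints
# as possible. All propositions and numbers here are made up.

labels = ["is-bird", "can-fly", "is-penguin", "lives-in-antarctica"]
W = np.array([
    [0.0,  0.6,  0.4,  0.0],   # bird weakly supports flying and penguin-hood
    [0.6,  0.0, -0.8,  0.0],   # flying and penguin-hood contradict each other
    [0.4, -0.8,  0.0,  0.7],   # penguin supports living in Antarctica
    [0.0,  0.0,  0.7,  0.0],
])
bias = np.array([0.5, 0.0, 0.5, 0.0])   # external evidence: "a bird, maybe a penguin"

state = np.zeros(4)

def energy(s):
    # Lower energy means fewer / weaker violated constraints.
    return -0.5 * s @ W @ s - bias @ s

np.random.seed(0)
for sweep in range(20):
    for i in np.random.permutation(4):
        # Turn unit i on or off, whichever better satisfies its constraints.
        state[i] = 1.0 if (W[i] @ state + bias[i]) > 0 else 0.0

print({lab: int(v) for lab, v in zip(labels, state)}, "energy:", round(energy(state), 2))
```

The settled state keeps "is-bird" and "is-penguin" while dropping "can-fly": the weak constraints trade off against one another instead of any single rule deciding the outcome, which is the sense in which McClelland and Rumelhart argued much of cognition looks like constraint relaxation.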
> It also > helps, as I say, to be familiar with what was going on in those PDP books. Like I said, I read the first volume of that book a long time ago (I think I have a copy downstairs), nevertheless, I have a decent grasp of neural networks, relaxation, simulated annealing, pattern recognition, multidimensional search spaces, statistical and Bayesian approaches, computer vision, character recognition (published), search trees in traditional AI and massively parallel architectures. I'm not entirely unaware of various theories of philosophy and religion. I am weak in natural language processing, traditional databases, and sound processing. > Do you have a fairly detailed knowledge of all three of these areas? Fair to middling, although my knowledge is a little outdated. I'm not tremendously worried about that since I used a text book written in the late 1950s when I took pattern recognition in 1986 and you refer to a book published in the late 1980s... I kind of get the idea that progress is fairly slow in these areas except that now we have better hardware on which to run the old algorithms. > Do you understand where McClelland and Rumelhart were coming from when they > talked about the relaxation of weak constraints, and about how a lot of > cognition seemed to make more sense when couched in those terms? Yes, this makes a lot of sense. I don't see how it relates directly to your work. I actually like what you have to say about short vs. long term memory, I think that's a useful way of looking at things. The short term or "working" memory that uses symbols vs the long term memory that work in a more subconscious way is very interesting stuff to ponder. > Do you > also follow the line of reasoning that interprets M & R's subsequent pursuit > of non-complex models as a mistake? Afraid you lose me here. > And the implication that there is a > class of systems that are as yet unexplored, doing what they did but using a > complex approach? Still lost, but willing to listen. > Put all these pieces together and we have the basis for a dialog. > > But ... ?demanding a finished AGI as an essential precondition for behaving > in a mature way toward the work I have already published...? ?I don't think > so. ?:-) If I have treated you in an immature way, I apologize. I just think arguing that four years of work and millions of dollars worth of research being classified as "trivial" when 10,000,000 lines of actually working code is not a strong position to come from. I am an Agilista. I value working code over big ideas. So while I acknowledge that you have some interesting big ideas, it escapes me how you are going to bridge the gap to achieve a notable result. Maybe it is clear to you, but if it is, you should publish something a little more concrete, IMHO. -Kelly From kellycoinguy at gmail.com Thu Feb 17 07:31:34 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:31:34 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: > My theory is that almost no evolved intelligent species meets the challenge > of overcoming its evolved limitations fast enough to cope successfully with > accelerating technological change. ? Almost all either wipe themselves out > or ding themselves sufficiently hard to miss their window of opportunity. 
> ?It can be argued that it is very very rare that a technological species > survives the period we are entering and emerges more capable on the other > side of singularity. Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint. Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form... -Kelly From kellycoinguy at gmail.com Thu Feb 17 07:43:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:43:18 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: > Ja, but when you say "research" in reference to AI, keep in mind the actual > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > kill us. Why is that the goal? As extropians isn't the idea to reduce entropy? Humans may be more prone to entropy than some higher life form. In that case, shouldn't we strive to evolve to that higher form and let go of our physical natures? If our cognitive patterns are preserved, and enhanced, we have achieved a level of immortality, and perhaps become AGIs ourselves. That MIGHT be a good thing. Then again, it might not be a good thing. I just don't see your above statement as being self-evident upon further reflection. > After seeing the amount of progress we have made in nanotechnology in the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach of > human capability. ?We need AI to master that difficult technology. But if humans can create the AI that creates the replicating nanobots, then in a sense it isn't out of human reach. > Without > replicating assemblers, we probably will never be able to read and simulate > frozen or vitrified brains. ?So without AI, we are without nanotech, and > consequently we are all doomed, along with our children and their children > forever. > > On the other hand, if we are successful at doing AI wrong, we are all doomed > right now. ?It will decide it doesn't need us, or just sees no reason why we > are useful for anything. And that is a bad thing exactly how? > When I was young, male and single (actually I am still male now) but when I > was young and single, I would have reasoned that it is perfectly fine to > risk future generations on that bet: build AI now and hope it likes us, > because all future generations are doomed to a century or less of life > anyway, so there's no reasonable objection with betting that against > eternity. > > Now that I am middle aged, male and married, with a child, I would do that > calculus differently. ?I am willing to risk that a future AI can upload a > living being but not a frozen one, so that people of my son's generation > have a shot at forever even if it means that we do not. ?There is a chance > that a future AI could master nanotech, which gives me hope as a corpsicle > that it could read and upload me. ?But I am reluctant to risk my children's > and grandchildren's 100 years of meat world existence on just getting AI > going as quickly as possible. 
Honestly, I don't think we have much of a choice about when AI gets going. We can all make choices as individuals, but I see it as kind of inevitable. Ray K seems to have this mind set as well, so I feel like I'm in pretty good company on this one. > In that sense, having AI researchers wander off into making toys (such as > chess software and Watson) is perfectly OK, and possibly desireable. > >>...Give me a hundred smart, receptive minds right now, and three years to > train 'em up, and there could be a hundred people who could build an AGI > (and probably better than I could)... > > Sure but do you fully trust every one of those students? ?Computer science > students are disproportionately young and male. > >>...So, just to say, don't interpret the previous comment to be too much of > a mad scientist comment ;-) ?Richard Loosemore > > Ja, I understand the reasoning behind those who are focused on the goal of > creating AI, and I agree the idea is not crazed or unreasonable. ?I just > disagree with the notion that we need to be in a desperate hurry to make an > AI. ?We as a species can take our time and think about this carefully, and I > hope we do, even if it means you and I will be lost forever. > > Nuclear bombs preceded nuclear power plants. Yes, and many of the most interesting AI applications are no doubt military in nature. -Kelly From eugen at leitl.org Thu Feb 17 07:58:45 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 08:58:45 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <20110217075845.GJ23560@leitl.org> On Thu, Feb 17, 2011 at 12:43:18AM -0700, Kelly Anderson wrote: > On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: > > Ja, but when you say "research" in reference to AI, keep in mind the actual > > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > > kill us. > > Why is that the goal? As extropians isn't the idea to reduce entropy? Right, that would be a great friendliness metric. > Humans may be more prone to entropy than some higher life form. In Right, let's do away with lower life forms. Minimize entropy. > that case, shouldn't we strive to evolve to that higher form and let Why evolve? Exterminate lower life forms. Minimize entropy. Much more efficient. > go of our physical natures? If our cognitive patterns are preserved, Cognitive patterns irrelevant. Maximize extropy. Exterminate humans. > and enhanced, we have achieved a level of immortality, and perhaps > become AGIs ourselves. That MIGHT be a good thing. Then again, it > might not be a good thing. I just don't see your above statement as > being self-evident upon further reflection. Reflection irrelevant. You will be exterminated. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From kellycoinguy at gmail.com Thu Feb 17 08:09:17 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 01:09:17 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <20110217075845.GJ23560@leitl.org> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <20110217075845.GJ23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 12:58 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 12:43:18AM -0700, Kelly Anderson wrote: >> On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: >> > Ja, but when you say "research" in reference to AI, keep in mind the actual >> > goal isn't the creation of AGI, but rather the creation of AGI that doesn't >> > kill us. >> >> Why is that the goal? As extropians isn't the idea to reduce entropy? > > Right, that would be a great friendliness metric. Not so much. >> Humans may be more prone to entropy than some higher life form. In > > Right, let's do away with lower life forms. Minimize entropy. In all seriousness, we are in the middle of a mass extinction that is driven by just that. Cows are taking over the world, and buffalo are suffering. If we get caught up in the same mass extinction event, I don't think we should be too terribly surprised. PERSONALLY, this is a bad thing. I rather like being me. But I've learned that what I want is only weakly connected with what actually ends up happening. >> that case, shouldn't we strive to evolve to that higher form and let > > Why evolve? Exterminate lower life forms. Minimize entropy. Much more > efficient. Not sure if you are joking here... hard to respond because there are so many ways parse this... :-) >> go of our physical natures? If our cognitive patterns are preserved, > > Cognitive patterns irrelevant. Maximize extropy. Exterminate humans. > >> and enhanced, we have achieved a level of immortality, and perhaps >> become AGIs ourselves. That MIGHT be a good thing. Then again, it >> might not be a good thing. I just don't see your above statement as >> being self-evident upon further reflection. > > Reflection irrelevant. You will be exterminated. It is at least as likely as not. The thing is, I'm actually somewhat OK with that if it leads to significantly better things. I'm sure most homo erectus would be pretty ticked with how things worked out for their species. -Kelly From kellycoinguy at gmail.com Thu Feb 17 09:01:50 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 02:01:50 -0700 Subject: [ExI] How Watson works, a guess Message-ID: There have been a number of guesses on list as to how Watson works. I have spent a fair amount of time looking at everything I can find on the topic, and here is my guess as to how it works based on what I've heard and read weighted somewhat by how I would approach the problem. If I were going to try and duplicate Watson, this is more or less how I would proceed. To avoid confusion, I won't reverse question/answer like Jeopardy. First, keywords from the question are used to find a large number of potential good answers. This is what is meant when people say "a search engine is part of Watson". 
This is likely based on proximity as Richard pointed out, and this is clearly just the first step. There are probably some really interesting indexing techniques used that are a bit different from Google's. I was fascinated by the report on this list that Watson had more RAM than hard drive space. Can someone verify that this is the case? It seems counter-intuitive. What happens if you turn the power off? Do you have to connect Watson to a network to reload all that RAM? Watson's database consists of a reported 200,000,000 documents including wikipedia, IMDB, other encyclopedias, etc. Second, a very large set of heuristic algorithms (undoubtedly, these are the majority of the reported 1,000,000 lines of code) analyze the Question, The Category and/or each potential answer in combination and come up with a "score" indicating whether by this heuristic measure the answer is a good one. I would suspect that each heuristic also generates a "confidence" measurement. Third, a learning algorithm generates "weights" to apply to each heuristic result and perhaps a different weight for each confidence measurement. This may be the "tuning" that is specific to Jeopardy. Another part of tuning is adding more heuristic tests. For example, on the NOVA show, two of the programmers talk about the unfinished "gender" module that comes up after Watson misses. There is also a module referred to as the "geographical" element. One could assume it tried to determine by a variety of algorithms whether what is being proposed as an answer makes spatial sense. The heuristic algorithms no doubt include elements of natural language processing, statistical analysis, hard coded things that were noted by some programmer or other based on a failed answer during testing, etc. The reason that the reports of how Watson works are so seemingly complex and contradictory is, IMO, because someone talks about a particular heuristic, and that makes that heuristic seem a bit more important than the overall architecture. The combination of all the weighted scores probably follows some kind of statistical (probably Bayesian) approach, which is quite amenable to learning feedback. An open source project, SpamAssassin, takes a similar approach to distinguishing Spam from good email. Hundreds of heuristic tests are run on each email, and the results are combined to form a confidence about an email being Spam or not. A cutoff point is determined, and anything above the cutoff is considered spam. It can "learn" to distinguish new Spam by changing the weights used for each heuristic test. It is also an extensible plug-in architecture, in that new heuristics can be added, and the weights can be tweaked over time as the nature of various Spams changes. I would not be surprised if Watson takes a similar approach, based on what people have said. I suspect that each potential answer is evaluated by these heuristic algorithms on the 2800 processors, and that good answers from multiple sources (the multiple sources thing could be part of the heuristics) are given credence. This is why questions about terrorism lead to incorrect answers about 9/11. All the results are put together, and all the confidences are combined, and the winning answer is chosen. If the confidence in the best answer is not above a threshold, Watson does not push the button. Quickly. In fact, one of Watson's advantages may be in its ability to push the button very quickly.
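To make the shape of that guess concrete, here is a rough Python sketch of the pipeline I have in mind: keyword retrieval of candidate answers, a plug-in set of heuristic scorers that each return a score and a confidence, per-heuristic weights, and a buzz threshold. To be clear, every name, weight and heuristic below is invented for illustration; this is my guess at the overall architecture, not anything from IBM's actual code.

# Hypothetical sketch of a Watson-like pipeline (heuristics, weights and
# threshold are all made-up placeholders).
from collections import namedtuple

HeuristicResult = namedtuple("HeuristicResult", "score confidence")

def retrieve_candidates(question, corpus):
    """Step 1: crude keyword retrieval; keep snippets that share words with the question."""
    keywords = set(question.lower().split())
    hits = []
    for doc in corpus:
        overlap = keywords & set(doc.lower().split())
        if overlap:
            hits.append((len(overlap), doc))
    hits.sort(reverse=True)
    return [doc for _, doc in hits[:20]]  # the real system keeps far more candidates

# Step 2: independent heuristic scorers, each returning (score, confidence).
def keyword_overlap_heuristic(question, candidate):
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return HeuristicResult(float(len(q & c)) / max(len(q), 1), 0.5)

def category_match_heuristic(category, candidate):
    return HeuristicResult(1.0 if category.lower() in candidate.lower() else 0.0, 0.3)

HEURISTICS = [
    ("overlap", lambda q, cat, c: keyword_overlap_heuristic(q, c)),
    ("category", lambda q, cat, c: category_match_heuristic(cat, c)),
]

# Step 3: per-heuristic weights; in the real thing these would be learned from play.
WEIGHTS = {"overlap": 0.7, "category": 0.3}
BUZZ_THRESHOLD = 0.5   # don't ring in unless the combined confidence clears this

def answer(question, category, corpus):
    best = None
    for candidate in retrieve_candidates(question, corpus):
        total, conf = 0.0, 0.0
        for name, heuristic in HEURISTICS:
            result = heuristic(question, category, candidate)
            total += WEIGHTS[name] * result.score
            conf += WEIGHTS[name] * result.confidence
        if best is None or total > best[1]:
            best = (candidate, total, conf)
    if best is not None and best[2] >= BUZZ_THRESHOLD:
        return best[0]   # push the button
    return None          # stay quiet on a low-confidence clue

Real Watson obviously does vastly more inside each of those steps, but a SpamAssassin-style weighted combination like this would explain both the tuning people describe and the occasions when Watson simply declines to buzz.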
I haven't done an analysis, but it seemed that there weren't many times that Watson had a high confidence answer and he didn't get the first chance at answering the question. This is an area where a computer has a serious (and somewhat unfair) advantage over humans. I understand from a long ago interview that Ken Jennings basically tries to push the button as fast as he can if he thinks he might know the answer, even if he hasn't yet fished up the whole answer. He wasn't the first to press the button a lot of the time in this tournament. I bet that both the carbon based guys knew a lot of answers that they didn't get a chance to answer because Watson is a fast button pusher. There seems to be another subsystem that determines what to bet. The non-round numbers are funny, but I would bet that's one of the more solid elements of Watson's game. I don't think there is any AI in this part. Again, this is all just a wild semi-educated guess. If you have gotten this far, which do you think is more intelligent Google or Watson? Why? Which leverages human intelligence better? -Kelly From darren.greer3 at gmail.com Thu Feb 17 10:19:21 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:19:21 -0400 Subject: [ExI] Kurzweil On Watson Message-ID: http://news.yahoo.com/s/zd/20110120/tc_zd/259558 John will like this. Kurzweil says in his opening salve some of what he's been saying in the Watson threads. "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I predicted that a computer would defeat the world chess champion by 1998. My estimate was based on the predictable exponential growth of computing power (an example of what I now call the "law of accelerating returns") and my estimate of what level of computing was needed to achieve a chess rating of just under 2800 (sufficient to defeat any human, although lately the best human chess scores have inched above 2800). I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess." d. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 10:26:22 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 11:26:22 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5C604D.3030201@mac.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <20110217102622.GN23560@leitl.org> On Wed, Feb 16, 2011 at 03:39:57PM -0800, Samantha Atkins wrote: > Not the same problem domain or even all that close. Can you turn it > into a really good chatbot? Maybe, maybe not depending on your standard > of "good". But that wouldn't be very exciting. Very expensive way to > keep folks in the nursing home entertained. Think of it like a NL layer for Google, like Wolfram Alpha, but for trivia and fact knowledge, updated in realtime as new publications come. Great tool for researchers and analysts. It can be pretty shallow reasoning. It's a tool, not a person. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 10:33:07 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 11:33:07 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: Message-ID: <20110217103307.GO23560@leitl.org> On Thu, Feb 17, 2011 at 06:19:21AM -0400, Darren Greer wrote: > http://news.yahoo.com/s/zd/20110120/tc_zd/259558 > > John will like this. Kurzweil says in his opening salve some of what he's > been saying in the Watson threads. > > "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I > predicted that a computer would defeat the world chess > champion by > 1998. My estimate was based on the predictable exponential growth of > computing power (an example of what I now call the "law of accelerating > returns") and my estimate of what level of computing was needed to achieve a > chess rating of just under 2800 (sufficient to defeat any human, although > lately the best human chess scores have inched above 2800). I also predicted > that when that happened we would either think better of computer > intelligence, worse of human thinking, or worse of chess, and that if > history was a guide, we would downgrade chess." http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions/ -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Thu Feb 17 10:34:15 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:34:15 -0400 Subject: [ExI] watson on jeopardy In-Reply-To: <002d01cbce64$16fc3530$44f49f90$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: >Note that there are today about a couple dozen top chess computers cheerfully pummeling each other, with the results being broadcast for all the worlds people with far too much time on their hands to watch in pointless fascination.< What interests me about the difference between chess and word/knowledge games is the idea of game tree complexity. Chess has a game tree complexity of 10^123, but how do you measure GTC for something like Jeopardy? For each question there is only one right answer, and therefore only one right move and the next question has no relation or dependence upon the question before it. So comparing jeopardy and chess seems like apples and oranges to me, no? I just read the Kurzweil article and he points out that Watson is much closer to being able to pass the Turing test than a chess playing computer as it is dealing with human language. And so based on that criteria, it is a step forward no matter how you slice it. d. 2011/2/17 spike > Woohoo! Watson wins! > > > > > http://www.cnn.com/2011/TECH/innovation/02/16/jeopardy.watson/index.html?hpt=T1 > > > > Jeopardy isn?t over however. It is only a matter of time before a > competing team wants to play machine against machine, or even a three-way > all machine matchup. 
Note that there are today about a couple dozen top > chess computers cheerfully pummeling each other, with the results being > broadcast for all the worlds people with far too much time on their hands to > watch in pointless fascination. Those games are in some ways more > interesting to watch than human-human or human-machine games, because they > tend to be so technically clean and positional, so theoretical. > > > > I can imagine there are already teams working to whoop Watson?s butt. > > > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 10:47:53 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:47:53 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <20110217103307.GO23560@leitl.org> References: <20110217103307.GO23560@leitl.org> Message-ID: > http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predic tions< Well, his success rate is better than Nostradamus. But then again, he's not relying on a fickle daemon for results. I think what he has to say about Watson and the Turing Test is valid, and rather simply put, regardless of the predictions. Dealing with the complexity of language is a better indicator of intelligence and closer to passing the test than dealing with the purely mathematical game tree complexity of chess. d. On Thu, Feb 17, 2011 at 6:33 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 06:19:21AM -0400, Darren Greer wrote: > > http://news.yahoo.com/s/zd/20110120/tc_zd/259558 > > > > John will like this. Kurzweil says in his opening salve some of what he's > > been saying in the Watson threads. > > > > "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I > > predicted that a computer would defeat the world chess > > champion by > > 1998. My estimate was based on the predictable exponential growth of > > computing power (an example of what I now call the "law of accelerating > > returns") and my estimate of what level of computing was needed to > achieve a > > chess rating of just under 2800 (sufficient to defeat any human, although > > lately the best human chess scores have inched above 2800). I also > predicted > > that when that happened we would either think better of computer > > intelligence, worse of human thinking, or worse of chess, and that if > > history was a guide, we would downgrade chess." > > > http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ > > > http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions/ > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From darren.greer3 at gmail.com Thu Feb 17 10:59:53 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:59:53 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: >Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint. Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form...< I had never considered this scenario until I came to Exi and it was postulated for me. It is the most hopeful compared to the other polar opposite scenarios--self-destruction or mature Zen state (with a no poaching policy) of technological superiority. Alas, self-destruction seems to me to be the most likely, given the bloody and tragic arc of our history at least. D. On Thu, Feb 17, 2011 at 3:31 AM, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: > > My theory is that almost no evolved intelligent species meets the > challenge > > of overcoming its evolved limitations fast enough to cope successfully > with > > accelerating technological change. Almost all either wipe themselves > out > > or ding themselves sufficiently hard to miss their window of opportunity. > > It can be argued that it is very very rare that a technological species > > survives the period we are entering and emerges more capable on the other > > side of singularity. > > Another possibility is that advanced civilizations naturally trend > towards virtual reality, and thus end up leaving a very small > externally detectable footprint. Exploring the endless possibilities > of virtual reality seems potentially a lot more interesting than > crossing tens of thousands of light years of space to try and visit > some lower life form... > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 11:11:24 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 12:11:24 +0100 Subject: [ExI] ibm takes on the commies In-Reply-To: <4D5C6389.4050504@mac.com> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> <4D5C6389.4050504@mac.com> Message-ID: <20110217111124.GP23560@leitl.org> On Wed, Feb 16, 2011 at 03:53:45PM -0800, Samantha Atkins wrote: >> A common gamer's graphics card can easily have a thousand or a couple >> thousand cores (mostly VLIW) and memory bandwidth from hell. Total node >> count could run into tens to hundreds thousands, so we're talking >> multiple megacores. > > As you are probably aware those are not general purpose cores. They > cannot run arbitrary algorithms efficiently. 3d graphics accelerators started as a specific type of physical simulation accelerator, which implies massive parallelism -- our physical reality is made that way, so it's not a coincidence. With each generation the architecture became more and more all-purpose, currently culminating in CPUs factoring in GPUs (AMD Fusion) or GPUs factoring in CPUs (nVidia Project Denver). 
You see progress in this paradigm by tracking CUDA (which hides hardware poorly) or advent of OpenCL (where CPU and GPU are considered as a unity, which is convenient). In many cases extracting maximum performance from GPGPU is optimizing memory accesses. This is due to the fact that the memory is still external (not embedded) nor yet even stacked with through-silicon vias (TSV) atop of your cores (but soon). There's the problem of algorithms. People currently are great fans of intricate, complex designs. Which are sequential in principle (though multiple branches can be evaluated concurrently), map to memory accesses and hardware poorly. The reason we're doing this is because we're monkeys, and are biased that way. Which is ironic, because we *are* an emergent process, made from billions of individual units. In short, complex algorithms are a problem, not a solution. The processes occuring in neural tissue are not complicated. The complexity emerges from state, not transformations upon the state. We've have been converging towards optimal substrate, and we will continue to do so. This is not surprising, because there's just one (or a couple) ways to do it right. Economy and efficiency cannot ignore reality. Not for long. >>> couldn't check one Mersenne prime per second with it or anything, ja? It >>> would be the equivalent of 10 petaflops assuming we have a process that is >>> compatible with massive parallelism? The article doesn't say how many >> Fortunately, every physical process (including cognition) is compatible >> with massive parallelism. Just parcel the problem over a 3d lattice/torus, >> exchange information where adjacent volumes interface through the high-speed >> interconnect. > > There is no general parallelization strategy. If there was then taking Yes, there is. In a relativistic universe the quickest way to know what happens next to you is to send signals. Which are limited to c. This is not programming, this is physics. Programming is constrained by physics. Difference between programming and hardware design shrinks. It will be one thing some day, such as biology doesn't make a difference between the hardware and the software layer. It's all one thing. > advantage of multiple cores maximally would be a solved problem. It is Multiple cores do not work. They fail to scale because shared memory does not exist -- because we're living in a relativistic universe. When it's read only you can do broadcasting, but when you also write you need to factor in light cones of individual systems, nevermind gate delays on top of that. Coherence is an expensive illusion. Which is why threading is a fad, and will be superceded by explicit message passing over shared-nothing asynchronous systems. Yes, people can't deal with billions of asynchronous objects, which is why human design won't produce real intelligence. You have to let the system figure out how to make it work. It is complicated enough, but still feasible for us mere monkeys. > anything but. >> Anyone who has written numerics for MPI recognizes the basic design >> pattern. >> > > Not everything is reducible in ways that lead to those techniques being > generally sufficient. How does your CPU access memory? By sending messages. How is the illusion of cache coherency maintained? By sending messages. How does the Internet work? By sending messages. Don't blame me, I didn't do it. 
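To make that design pattern explicit, here is a toy 1D version of the lattice decomposition (a minimal mpi4py sketch; the grid size, step count and diffusion constant are arbitrary, and the real thing would be 3D):

# Toy domain decomposition with ghost-cell (halo) exchange, the MPI pattern
# described above, shrunk to 1D diffusion. All parameters are arbitrary.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 64                                   # cells owned by this rank
u = np.zeros(N + 2)                      # +2 ghost cells at the ends
if rank == 0:
    u[1] = 1000.0                        # a hot spot somewhere in the domain

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Exchange boundary values with the neighbours; only adjacent ranks talk.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[N+1:N+2], source=right)
    comm.Sendrecv(sendbuf=u[N:N+1], dest=right, recvbuf=u[0:1], source=left)
    # Purely local update (explicit diffusion step) on the interior cells.
    u[1:N+1] = u[1:N+1] + 0.25 * (u[0:N] - 2.0 * u[1:N+1] + u[2:N+2])

Run it under mpiexec with as many ranks as you like; each rank only ever exchanges a single boundary cell with its two neighbours per step. Scale the same pattern up to a 3D lattice or torus and you have the decomposition sketched above.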
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 11:50:41 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 12:50:41 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <20110217115041.GQ23560@leitl.org> On Thu, Feb 17, 2011 at 06:59:53AM -0400, Darren Greer wrote: > >Another possibility is that advanced civilizations naturally trend > towards virtual reality, and thus end up leaving a very small > externally detectable footprint. Exploring the endless possibilities Look, what is your energetical footprint? 1 kW, more or less? Negligible. Now multiply that by 7 gigamonkeys. Problem? Infinitesimally small energy budgets multiplied by very large numbers are turning stars into FIR blackbodies. And whole galaxies, and clusters, and superclusters. You think that would be easy to miss? > of virtual reality seems potentially a lot more interesting than > crossing tens of thousands of light years of space to try and visit > some lower life form...< > > I had never considered this scenario until I came to Exi and it was > postulated for me. It is the most hopeful compared to the other polar When something is postulated to you it's usually bunk. Novelty and too small group for peer review pretty much see to that. > opposite scenarios--self-destruction or mature Zen state (with a no poaching > policy) of technological superiority. Alas, self-destruction seems to me to > be the most likely, given the bloody and tragic arc of our history at least. It's less bloody and tragic than bloody stupid. Our collective intelligence seems to approach that of an overnight culture. http://www.fungionline.org.uk/5kinetics/2batch.html -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 12:04:51 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 13:04:51 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <20110217103307.GO23560@leitl.org> Message-ID: <20110217120451.GR23560@leitl.org> On Thu, Feb 17, 2011 at 06:47:53AM -0400, Darren Greer wrote: > Well, his success rate is better than Nostradamus. But then again, he's not Trying to play Nostradamus in futurism is a fool's game. You can only lose. > relying on a fickle daemon for results. I think what he has to say about > Watson and the Turing Test is valid, and rather simply put, regardless of You see novelty in what Kurzweil says, yes? > the predictions. Dealing with the complexity of language is a better > indicator of intelligence and closer to passing the test than dealing with The best Turing test is unemployment. When everybody is unemployed you know full human equivalence has been reached. Just define something like LD50 (half unemployment reached) for each individual profession as an arbitrary yardstick for approximate equivalence. Integrating over individual professions will be more difficult, since flooding will put some underwater faster than others. > the purely mathematical game tree complexity of chess. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Thu Feb 17 12:46:15 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 07:46:15 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: <4D5D1897.4030906@lightlink.com> Okay, first: although I understand your position as an Agilista, and your earnest desire to hear about concrete code rather than theory ("I value working code over big ideas"), you must surely acknowledge that in some areas of scientific research and technological development, it is important to work out the theory, or the design, before rushing ahead to the code-writing stage. That is not to say that I don't write code (I spent several years as a software developer, and I continue to write code), but that I believe the problem of building an AGI is, at this point in time, a matter of getting the theory right. We have had over fifty years of AI people rushing into programs without seriously and comprehensively addressing the underlying issues. Perhaps you feel that there are really not that many underlying issues to be dealt with, but after having worked in this field, on and off, for thirty years, it is my position that we need deep understanding above all. Maxwell's equations, remember, were dismissed as useless for anything -- just idle theorizing -- for quite a few years after Maxwell came up with them. Not everything that is of value *must* be accompanied by immediate code that solves a problem. Now, with regard to the papers that I have written, I should explain that they are driven by the very specific approach described in the complex systems paper. That described a methodological imperative: if intelligent systems are complex (in the "complex systems" sense, which is not the "complicated systems", aka space-shuttle-like systems, sense), then we are in a peculiar situation that (I claim) has to be confronted in a very particular way. If it is not confronted in that particular way, we will likely run around in circles getting nowhere -- and it is alarming that the precise way in which this running around in circles would happen bears a remarkable resemblance to what has been happening in AI for fifty years. So, if my reasoning in that paper is correct then the only sensible way to build an AGI is to do some very serious theoretical and tool-building work first. And part of that theoretical work involves a detailed understanding of cognitive psychology AND computer science. Not just a superficial acquaintance with a few psychology ideas, which many people have, but an appreciation for the enormous complexity of cog psych, and an understanding of how people in that field go about their research (because their protocols are very different from those of AI or computer science), and a pretty good grasp of the history of psychology (because there have been many different schools of thought, and some of them, like Behaviorism, contain extremely valuable and subtle lessons). 
With regard to the specific comments I made below about McClelland and Rumelhart, what is going on there is that these guys (and several others) got to a point where the theories in cognitive psychology were making no sense, and so they started thinking in a new way, to try to solve the problem. I can summarize it as "weak constraint satisfaction" or "neurally inspired" but, alas, these things can be interpreted in shallow ways that omit the background context ... and it is the background context that is the most important part of it. In a nutshell, a lot of cognitive psychology makes a lot more sense if it can be re-cast in "constraint" terms. The problem, though, is that the folks who started the PDP (aka connectionist, neural net) revolution in the 1980s could only express this new set of ideas in neural terms. They made some progress, but then just as the train appeared to be gathering momentum it ran out of steam. There were some problems with their approach that could not be solved in a principled way. They had hoped, at the beginning, that they were building a new foundation for cognitive psychology, but something went wrong. What I have done is to think hard about why that collapse occurred, and to come to an understanding about how to get around it. The answer has to do with building two distinct classes of constraint systems: either non-complex, or complex (side note: I will have to refer you to other texts to get the gist of what I mean by that... see my 2007 paper on the subject). The whole PDP/connectionist revolution was predicated on a non-complex approach. I have, in essence, diagnosed that as the problem. Fixing that problem is hard, but that is what I am working on. Unfortunately for you -- wanting to know what is going on with this project -- I have been studiously unprolific about publishing papers. So at this stage of the game all I can do is send you to the papers I have written and ask you to fill in the gaps from your knowledge of cognitive psychology, AI and complex systems. Finally, bear in mind that none of this is relevant to the question of whether other systems, like Watson, are a real advance or just a symptom of a malaise. John Clark has been ranting at me (and others) for more than five years now, so when he pulls the old bait-and-switch trick ("Well, if you think XYZ is flawed, let's see YOUR stinkin' AI then!!") I just smile and tell him to go read my papers. So we only got into this discussion because of that: it has nothing to do with delivering critiques of other systems, whether they contain a million lines of code or not. :-) Watson still is a sleight of hand, IMO, whether my theory sucks or not. ;-) Richard Loosemore Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 6:13 PM, Richard Loosemore wrote: >> Kelly Anderson wrote: >>> Show me the beef! >> So demanding, some people. ;-) > > I wouldn't be so demanding if you acknowledged the good work of > others, even if it is just a "parlor trick". > >> If you have read McClelland and Rumelhart's two-volume "Parallel Distributed >> Processing", > > I have read volume 1 (a long time ago), but not volume 2. > >> and if you have then read my papers, and if you are still so >> much in the dark that the only thing you can say is "I haven't seen anything >> in your papers that rise to the level of computer science" then, well... > > Your papers talk the talk, but they don't walk the walk as far as I > can tell. 
There is not a single instance where you say, "And using > this technique we can distinguish pictures of cats from pictures of > dogs" or "This method leads to differentiating between the works of > Bach and Mozart." Or even the ability to answer the question "What do > grasshoppers eat?" > >> (And, in any case, my answer to John Clark was as facetious as his question >> was silly.) > > Sidebar: I have found that humor and facetiousness don't work well on > mailing lists. > >> At this stage, what you can get is a general picture of the background >> theory. That is readily obtainable if you have a good knowledge of (a) >> computer science, > > Check. > >> (b) cognitive psychology > > Eh, so so. > >> and (c) complex systems. > > Like the space shuttle? > >> It also >> helps, as I say, to be familiar with what was going on in those PDP books. > > Like I said, I read the first volume of that book a long time ago (I > think I have a copy downstairs), nevertheless, I have a decent grasp > of neural networks, relaxation, simulated annealing, pattern > recognition, multidimensional search spaces, statistical and Bayesian > approaches, computer vision, character recognition (published), search > trees in traditional AI and massively parallel architectures. I'm not > entirely unaware of various theories of philosophy and religion. I am > weak in natural language processing, traditional databases, and sound > processing. > >> Do you have a fairly detailed knowledge of all three of these areas? > > Fair to middling, although my knowledge is a little outdated. I'm not > tremendously worried about that since I used a text book written in > the late 1950s when I took pattern recognition in 1986 and you refer > to a book published in the late 1980s... I kind of get the idea that > progress is fairly slow in these areas except that now we have better > hardware on which to run the old algorithms. > >> Do you understand where McClelland and Rumelhart were coming from when they >> talked about the relaxation of weak constraints, and about how a lot of >> cognition seemed to make more sense when couched in those terms? > > Yes, this makes a lot of sense. I don't see how it relates directly to > your work. I actually like what you have to say about short vs. long > term memory, I think that's a useful way of looking at things. The > short term or "working" memory that uses symbols vs the long term > memory that work in a more subconscious way is very interesting stuff > to ponder. > >> Do you >> also follow the line of reasoning that interprets M & R's subsequent pursuit >> of non-complex models as a mistake? > > Afraid you lose me here. > >> And the implication that there is a >> class of systems that are as yet unexplored, doing what they did but using a >> complex approach? > > Still lost, but willing to listen. > >> Put all these pieces together and we have the basis for a dialog. >> >> But ... demanding a finished AGI as an essential precondition for behaving >> in a mature way toward the work I have already published...? I don't think >> so. :-) > > If I have treated you in an immature way, I apologize. I just think > arguing that four years of work and millions of dollars worth of > research being classified as "trivial" when 10,000,000 lines of > actually working code is not a strong position to come from. > > I am an Agilista. I value working code over big ideas. 
So while I > acknowledge that you have some interesting big ideas, it escapes me > how you are going to bridge the gap to achieve a notable result. Maybe > it is clear to you, but if it is, you should publish something a > little more concrete, IMHO. From pharos at gmail.com Thu Feb 17 12:38:33 2011 From: pharos at gmail.com (BillK) Date: Thu, 17 Feb 2011 12:38:33 +0000 Subject: [ExI] Kurzweil On Watson In-Reply-To: <20110217120451.GR23560@leitl.org> References: <20110217103307.GO23560@leitl.org> <20110217120451.GR23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 12:04 PM, Eugen Leitl wrote: > The best Turing test is unemployment. When everybody is unemployed > you know full human equivalence has been reached. Just define > something like LD50 (half unemployment reached) for each individual > profession as an arbitrary yardstick for approximate equivalence. > Integrating over individual professions > will be more difficult, since flooding will put some underwater > faster than others. > I think we need a better test than unemployment. The US has got to ~25% unemployment just by moving most of the wealth to the top 1% and using slave labour in China. Robots won't be used until they are cheaper than slave labour and humans can produce a lot of slave labour units. BillK From jonkc at bellsouth.net Thu Feb 17 14:23:48 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 09:23:48 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C2DA9.9050804@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> On Feb 16, 2011, at 3:03 PM, Richard Loosemore wrote: >> So I repeat my previous request, please tell us all about the wonderful AI program that you have written that does things even more intelligently than Watson. > > Done: read my papers. I'm not asking for more endless philosophy, I'm asking for programs. I'm asking you to tell us what you have taught a computer to do that caused it to behave anywhere near as intelligently as Watson; a program you claim to have contempt for as well as for its creators. But to be honest I can't help but wonder if contempt is the right word and if there might be a better one. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 14:41:09 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 15:41:09 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <20110217103307.GO23560@leitl.org> <20110217120451.GR23560@leitl.org> Message-ID: <20110217144109.GV23560@leitl.org> On Thu, Feb 17, 2011 at 12:38:33PM +0000, BillK wrote: > I think we need a better test than unemployment. It's not easy to find a better benchmark than what people are willing to pay other people for. > The US has got to ~25% unemployment just by moving most of the wealth > to the top 1% and using slave labour in China. What makes you think I was talking about just US, or China? Have to integrate over the entire planet, over all professions. > Robots won't be used until they are cheaper than slave labour and > humans can produce a lot of slave labour units. I meant the entire envelope of human professions. Artist, professor, CEO, analyst, plumber. 
It's clear that some niches can be more easily filled than others. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Thu Feb 17 15:18:12 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 10:18:12 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> Message-ID: <4D5D3C34.7080305@lightlink.com> John Clark wrote: > On Feb 16, 2011, at 3:03 PM, Richard Loosemore wrote: > >>> So I repeat my previous request, please tell us all about the >>> wonderful AI program that you have written that does things even more >>> intelligently than Watson. >> >> Done: read my papers. > > I'm not asking for more endless philosophy, I'm asking for programs. I'm > asking you to tell us what you have taught a computer to do that caused > it to behave anywhere near as intelligently as Watson; a program you > claim to have contempt for as well as for its creators. But to be honest > I can't help but wonder if contempt is the right word and if there might > be a better one. Read parallel post addressed to Kelly Anderson. Richard Loosemore From jonkc at bellsouth.net Thu Feb 17 15:42:36 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 10:42:36 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5D3C34.7080305@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: On Feb 17, 2011, at 10:18 AM, Richard Loosemore wrote: > > Read parallel post addressed to Kelly Anderson. Why? Did the parallel post addressed to Kelly Anderson teach a computer to behave anywhere near as intelligently as Watson? If so I am delighted but I really don't see why I need to read it, I didn't need to read Watson's source code to be enormously impressed by it. The truth is I have read the source code of very few human beings, but I still think some of them are intelligent. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Thu Feb 17 16:06:31 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:06:31 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <004e01cbcebc$a46f4440$ed4dccc0$@att.net> bounces at lists.extropy.org] On Behalf Of Kelly Anderson Subject: Re: [ExI] Watson on NOVA On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: >> Ja, but when you say "research" in reference to AI, keep in mind the > actual goal isn't the creation of AGI, but rather the creation of AGI > that doesn't kill us. >Why is that the goal? As extropians isn't the idea to reduce entropy? We need AGI to figure out how to do nanotech to figure out how to upload by mapping the physical configuration of our brains. If they can do it while we are alive, that would be great. If the brain needs to be frozen, well, that's better than the alternative. >But if humans can create the AI that creates the replicating nanobots, then in a sense it isn't out of human reach... Ja. I think AGI is the best and possibly only path to replicating nanotech. ...> >On the other hand, if we are successful at doing AI wrong, we are all > doomed right now. ?It will decide it doesn't need us, or just sees no > reason why we are useful for anything. >And that is a bad thing exactly how? If we do AGI wrong, and it has no empathy with humans, it may decide to convert *all* the available metals in the solar system and use all of it to play chess or search for Mersenne primes. I love both those things, but if every atom of the solar system is set to doing that, it would be a bad thing. >> ... ?But I am reluctant to risk my children's and grandchildren's 100 years of meat world existence on just getting AI going as quickly as possible. >Honestly, I don't think we have much of a choice about when AI gets going. We can all make choices as individuals, but I see it as kind of inevitable. Ray K seems to have this mind set as well, so I feel like I'm in pretty good company on this one. No sir, I disagree with even Ray K. A fatalistic attitude is dangerous in this context. We must do whatever we can to see to it we do have a choice about when AI gets going. >> ...Nuclear bombs preceded nuclear power plants. >Yes, and many of the most interesting AI applications are no doubt military in nature. -Kelly If true AGI is used militarily, then all humanity is finished, for eventually the weaponized AGI will find friend and foe indistinguishable. spike From spike66 at att.net Thu Feb 17 15:52:46 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 07:52:46 -0800 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <004d01cbceba$b8f11f80$2ad35e80$@att.net> ... Behalf Of Kelly Anderson Subject: Re: [ExI] Lethal future was Watson on NOVA On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: >> My theory is that almost no evolved intelligent species meets the >> challenge of overcoming its evolved limitations fast enough to cope >> successfully with accelerating technological change... >Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint. 
Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form...-Kelly This is my favorite theory. Technological civilizations figure out AGI, then nanotech, then they put all the metal in their solar system into computronium, at which time *they don't care* what happens at other stars, because the information takes too long to get there; the latency is insurmountably high. It is analogous to why we don't go searching Outer Elbonia to try to understand whatever technology they have developed to twang arrows at caribou; we don't care how they do that. Anything they have, we can do better. spike From rpwl at lightlink.com Thu Feb 17 16:24:11 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 11:24:11 -0500 Subject: [ExI] A different question about Watson In-Reply-To: <4D5D3C34.7080305@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: <4D5D4BAB.4070102@lightlink.com> I am a little puzzled about one thing: did Watson get its questions from doing speech recognition, or did someone type the questions in ahead of time, and press a button to send the text to Watson at the same time that Alex spoke it? Only reason I ask is that Ben Goertzel, in his H+ essay on the subject: http://hplusmagazine.com/2011/02/17/watson-supercharged-search-engine-or-prototype-robot-overlord/ gives some examples of Jeopardy questions: > ?Whinese? is a language they use on long car trips > > The motto of this 1904-1914 engineering project was ?The land > divided, the world united? > > Built at a cost of more than $200 million, it stretches from > Victoria, B.C. to St. John?s, Newfoundland > > Jay Leno on July 8, 2010: The ?nominations were announced today? > there?s no ?me? in? this award ... and these questions contain some interestingly useful structure in their written form. I am thinking mostly of the very helpful quotation marks. I suspect that there was no speech recognition, and that Watson got direct text, but perhaps someone who actually saw the shows can tell if this is the case? Richard Loosemore From eugen at leitl.org Thu Feb 17 16:26:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:26:57 +0100 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C7657.6070405@lightlink.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: <20110217162657.GB23560@leitl.org> On Wed, Feb 16, 2011 at 08:13:59PM -0500, Richard Loosemore wrote: > So demanding, some people. ;-) > > If you have read McClelland and Rumelhart's two-volume "Parallel I've skimmed PDP when it was new. I have not read your publications because I've asked for a list, here, twice, nicely, and no reply was forthcoming. I presume http://richardloosemore.com/papers are yours? 
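On the separate point Richard raised about the written form of the clues: the quotation marks and dates really are machine-friendly anchors, and you can get surprisingly far before any deep parsing starts. A throwaway Python sketch, with invented clue strings and plain regular expressions, and in no way a claim about how Watson actually treats its input:

import re

# Invented examples in the spirit of the clues quoted above, not the broadcast text.
clues = [
    '"Whinese" is a language they use on long car trips',
    'The motto of this 1904-1914 engineering project was "The land divided, the world united"',
]

QUOTED = re.compile(r'"([^"]+)"')                   # quoted spans: coined words, mottos, titles
YEARS = re.compile(r'\b(1[0-9]{3}|20[0-9]{2})\b')   # four-digit years prune the candidate space

for clue in clues:
    print("clue:  ", clue)
    print("quotes:", QUOTED.findall(clue))
    print("years: ", YEARS.findall(clue))

A coined word in quotes is nearly a lookup key all by itself, and a date range like 1904-1914 narrows the search long before any real language understanding has to happen.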
> Distributed Processing", and if you have then read my papers, and if you > are still so much in the dark that the only thing you can say is "I > haven't seen anything in your papers that rise to the level of computer > science" then, well... You know, I could rattle off a list of books (far more relevant) you have no clue of. It's a pretty stupid game, so let's not play it. > (And, in any case, my answer to John Clark was as facetious as his > question was silly.) > > At this stage, what you can get is a general picture of the background > theory. That is readily obtainable if you have a good knowledge of (a) > computer science, (b) cognitive psychology and (c) complex systems. It I don't see how cognitive psychology is relevant. It's good that complex systems makes your list. > also helps, as I say, to be familiar with what was going on in those PDP > books. > > Do you have a fairly detailed knowledge of all three of these areas? Are you always an arrogant blowhard, Richard? > Do you understand where McClelland and Rumelhart were coming from when > they talked about the relaxation of weak constraints, and about how a > lot of cognition seemed to make more sense when couched in those terms? > Do you also follow the line of reasoning that interprets M & R's > subsequent pursuit of non-complex models as a mistake? And the > implication that there is a class of systems that are as yet unexplored, > doing what they did but using a complex approach? > > Put all these pieces together and we have the basis for a dialog. > > But ... demanding a finished AGI as an essential precondition for > behaving in a mature way toward the work I have already published...? I > don't think so. :-) I think two things apply: you haven't build a lot of systems that make impressive results, and you spend a lot of time on this list, which means you don't have have a lot of quality time for work, whatever it is. I've just skimmed your papers at maximum speed, and preliminary impression is not good. I'll reserve my opinion until I can read them. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From lubkin at unreasonable.com Thu Feb 17 16:26:49 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Thu, 17 Feb 2011 11:26:49 -0500 Subject: [ExI] watson on jeopardy In-Reply-To: References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: <201102171627.p1HGRKKS000848@andromeda.ziaspace.com> Darren wrote: >Jeopardy? For each question there is only one >right answer, and therefore only one right move Aside, at least once a game there's a question with more than one valid answer. (Sometimes it didn't come up but I spotted it anyway.) Contestants must proceed based on Trebek's initial ruling. If the research team confirms that the answer given was actually correct, scores are adjusted. I have seen a contestant brought back another day when they could have plausibly won the game had their answer been deemed correct. There's a similar flaw in many kinds of test-taking, e.g., the Miller Analogies Test. A is to B as C is to ___, (1) D (2) E (3) F (4) G. E is the only answer accepted as correct. But smart-you sees an interpretation whereby it's G. What should be done is provide space with each question to optionally provide a rationale. If the expected answer is given, accept it. 
If a different answer is given, see if there's a rationale and it makes sense. (Still won't help if the test scorer is a dolt who doesn't understand your rationale, but it's an improvement.) Otherwise both Jeopardy and the tests become guessing games. Not what's the right answer, but what's the one they would have thought of. -- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From eugen at leitl.org Thu Feb 17 16:32:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:32:32 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <20110217163232.GC23560@leitl.org> On Wed, Feb 16, 2011 at 06:21:26PM -0700, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: > > On 02/16/2011 10:15 AM, spike wrote: > > Not the same problem domain or even all that close. ?Can you turn it into a > > really good chatbot? ?Maybe, maybe not depending on your standard of "good". > > ?But that wouldn't be very exciting. ? ?Very expensive way to keep folks in > > the nursing home entertained. > > Samantha, are you familiar with Moore's law? Let's assume for purposes Kelly, do you think 3d integration will be just-ready when CMOS runs into a wall? Kelly, do you think that Moore is equivalent to system performance? You sure about that? > of discussion that you are 30, that you will be in the nursing home > when you're 70. That means Watson level functionality will cost around > $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get > it in a box of cracker jacks. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Thu Feb 17 16:22:18 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:22:18 -0800 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: Message-ID: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> On Behalf Of Darren Greer >. I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess." d. That sounds like a good prediction, but it hasn't worked that way really. Computers are better than all humans now, even the commercial versions that run on laptop computers. Human vs human chess is still played, the prize funds are higher than ever, the highest rated human (Carlsen) is dating a supermodel and has been hired to sell clothing for G-Star. This may be a special case however, for Carlsen may be the first male chess grandmaster in history who is not an ugly geek. Odd, for it seems about 80% of the top female chess players are knockout gorgeous, but we lads at that level are 80% radioactive ugly. Actually Darren, you are a valuable one to judge this contest. Scroll all the way down in this link and compare: http://www.chessbase.com/newsdetail.asp?newsid=7014 spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Thu Feb 17 16:43:42 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:43:42 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <004d01cbceba$b8f11f80$2ad35e80$@att.net> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> Message-ID: <20110217164342.GD23560@leitl.org> On Thu, Feb 17, 2011 at 07:52:46AM -0800, spike wrote: > >Another possibility is that advanced civilizations naturally trend towards > virtual reality, and thus end up leaving a very small externally detectable > footprint. Exploring the endless possibilities of virtual reality seems > potentially a lot more interesting than crossing tens of thousands of light > years of space to try and visit some lower life form...-Kelly > > > > This is my favorite theory. Technological civilizations figure out AGI, > then nanotech, then they put all the metal in their solar system into > computronium, at which time *they don't care* what happens at other stars, Just as we never cared what was on the other continents. America was never colonized. Spike is as mythical as an unicorn. Wait, the first pre-life form never made it out of the first puddle, or hot smoker, or wherever it was. > because the information takes too long to get there; the latency is > insurmountably high. It is analogous to why we don't go searching Outer > Elbonia to try to understand whatever technology they have developed to So why was the land of mud and misogyny ever settled? > twang arrows at caribou; we don't care how they do that. Anything they > have, we can do better. Why do people have children? Do children forever remain in their home? Why was America colonized? Why do we have 500 volunteers for a one-way mission to Mars? What are pioneer species and what is ecological succession? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Thu Feb 17 16:36:59 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 11:36:59 -0500 Subject: [ExI] A different question about Watson In-Reply-To: <4D5D4BAB.4070102@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <4D5D4BAB.4070102@lightlink.com> Message-ID: <0292DCA7-C886-4608-A4F8-027F5722D0E8@bellsouth.net> On Feb 17, 2011, at 11:24 AM, Richard Loosemore wrote: > I suspect that there was no speech recognition, and that Watson got direct text, They said on the first show that he did. How is that important? > perhaps someone who actually saw the shows can tell if this is the case I would humbly suggest that it might be wise to see what Watson actually did before you proclaim it trivial. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Thu Feb 17 16:26:11 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:26:11 -0800 Subject: [ExI] watson on jeopardy In-Reply-To: References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: <005501cbcebf$63c855a0$2b5900e0$@att.net> . On Behalf Of Darren Greer . I just read the Kurzweil article and he points out that Watson is much closer to being able to pass the Turing test than a chess playing computer as it is dealing with human language. And so based on that criteria, it is a step forward no matter how you slice it. d. Chess programs have already passed the Turing test in chess, a long time ago. So Rybka wins the Turing test at chess, Watson passes or is getting close in Jeopardy, neither can pass at general language. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Feb 17 16:57:17 2011 From: pharos at gmail.com (BillK) Date: Thu, 17 Feb 2011 16:57:17 +0000 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110217164342.GD23560@leitl.org> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> <20110217164342.GD23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 4:43 PM, Eugen Leitl wrote: > Why do people have children? Do children forever remain in > their home? > > Why was America colonized? Why do we have 500 volunteers for > a one-way mission to Mars? > > What are pioneer species and what is ecological succession? > > Or, alternatively, why don't people have children? Viz. the collapse in first world birth rates. You are comparing people with miserable, short lifespans to very long-lived people with every wish fulfilled by nano-Santa in virtual reality. Apples and Oranges. BillK From eugen at leitl.org Thu Feb 17 17:03:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 18:03:44 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> <20110217164342.GD23560@leitl.org> Message-ID: <20110217170344.GE23560@leitl.org> On Thu, Feb 17, 2011 at 04:57:17PM +0000, BillK wrote: > Or, alternatively, why don't people have children? Why people? Take all the species into account. > Viz. the collapse in first world birth rates. The atheists, you mean. The faithful are breeding like rabbits. Fulfilling the will of the Lord. > You are comparing people with miserable, short lifespans to very > long-lived people with every wish fulfilled by nano-Santa in virtual Extremely short-lived information patterns, some of the complexity of viroids. > reality. > > Apples and Oranges. You're so right, it ain't even funny. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From alfio.puglisi at gmail.com Thu Feb 17 17:52:13 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 17 Feb 2011 18:52:13 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: On Wed, Feb 16, 2011 at 6:08 PM, Keith Henson wrote: > On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > > > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > > > >> I'm still pissed at Sagan for his hubris in sending a message to the > >> stars without asking the rest of us first, in blithe certainty that "of > >> course" any recipient would have evolved beyond aggression and > >> xenophobia. > > > > The real reasons if that they would be there you'd be dead, Jim. > > In fact, if any alien picks up the transmission (chance: very close > > to zero) they'd better be farther advanced than us, and on a > > faster track. I hope it for them. > > I have been mulling this over for decades. > > We look out into the Universe and don't (so far) see or hear any > evidence of technophilic civilization. > > I see only two possibilities: > > 1) Technophilics are so rare that there are no others in our light cone. > > 2) Or if they are relatively common something wipes them *all* out, > or, if not wiped out, they don't do anything which indicates their > presence. > There are a couple of solutions that basically deny that the rest of the Universe is real: 3) the simulation argument 4) you're a Boltzmann brain Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Feb 17 18:08:22 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 13:08:22 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <20110217162657.GB23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <20110217162657.GB23560@leitl.org> Message-ID: <4D5D6416.90300@lightlink.com> Eugen Leitl wrote: > I have not read your publications > because I've asked for a list, here, twice, nicely, and no reply was > forthcoming. > > I presume http://richardloosemore.com/papers are yours? Indeed, those are mine. I must have missed your request for a list: did I not direct you to the web page? My mistake, I'm sure. Richard Loosemore From spike66 at att.net Thu Feb 17 18:11:19 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 10:11:19 -0800 Subject: [ExI] watson on jeopardy In-Reply-To: <005501cbcebf$63c855a0$2b5900e0$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> <005501cbcebf$63c855a0$2b5900e0$@att.net> Message-ID: <009c01cbcece$13cc6be0$3b6543a0$@att.net> I have been watching the traffic on the topic of Watson. The information is mostly relevant to transhumanism, interesting, mostly intelligently written, and the participants are treating each other with respect for the most part. I propose we extend the open season on that topic for a few more days. 
Papal decree: until about Sunday midnight US west coast time, if your comment specifically has to do with Watson, the Jeopardy challenge, fresh AGI material or some direct spinoff of that topic that can legitimately be subject lined "Watson on [*]" then go ahead and post away on that topic, and don't worry about counting it toward the voluntary ~five post per day limit. This has been fun to read this stuff. Play ball! {8-] spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Feb 17 18:50:17 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 13:50:17 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <20110217162657.GB23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <20110217162657.GB23560@leitl.org> Message-ID: <4D5D6DE9.2060406@lightlink.com> Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 08:13:59PM -0500, Richard Loosemore wrote: >> Distributed Processing", and if you have then read my papers, and if you >> are still so much in the dark that the only thing you can say is "I >> haven't seen anything in your papers that rise to the level of computer >> science" then, well... > > You know, I could rattle off a list of books (far more relevant) > you have no clue of. It's a pretty stupid game, so let's not play it. If you actually read the thread you will see that nobody was playing that "pretty stupid game", before you started to do so in the above sentence. ;-) You have drastically, utterly failed to understand or read the context. As I will explain.... I was addressing an implicit question from Kelly Anderson about how anyone could make sense of my *own* papers, and I pointed to those two books because I am claiming that they represent a critical hinge point in the history of cognitive science and AI, and my work is best understood as a path-not-taken from that hinge point. If you think you know my own research better than I do, and can "rattle off a list of far more relevant books" that would help someone understand the context that my work comes from, by all means do so. Granted, there is a problem there. Quite a few computer science people read those McClelland and Rumelhart books looking only at the NN algorithms, but without knowing the cognitive psychology history that came before the PDP books. The problem is that my work springs not from the superficial NN stuff but from that much deeper history. That fact may cause some misunderstanding. In order to gauge the appropriate level at which to respond to Kelly's concerns, therefore, it mattered a good deal whether he was a cognitive psychologist or an AI person, and to that end I went on to explain that and ask some questions..... >> At this stage, what you can get is a general picture of the background >> theory. That is readily obtainable if you have a good knowledge of (a) >> computer science, (b) cognitive psychology and (c) complex systems. It > > I don't see how cognitive psychology is relevant. It's good that > complex systems makes your list. Again, I mentioned cognitive psychology only because I was responding to Kelly's comment about the fact that he read my papers but could not see in them the things I had hoped he would. 
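For anyone following the thread without the PDP books to hand, the flavour of "relaxation of weak constraints" fits in a few lines of toy Python: a generic little network of hypothesis units with invented weights that settles into a mutually consistent reading. It is nobody's actual cognitive architecture, and certainly not Watson; it is only meant to show what the phrase means.

# Toy "relaxation of weak constraints": binary hypothesis units, soft pairwise
# constraints encoded as weights, plus a little external evidence, all settled
# by repeated local updates. Every number here is invented for illustration.

units = {"letter_is_A": 0, "letter_is_H": 0, "word_is_CAT": 0, "word_is_HAT": 0}

bias = {"letter_is_A": 1.0, "letter_is_H": 0.2,      # bottom-up evidence mildly favours 'A'
        "word_is_CAT": 0.0, "word_is_HAT": 0.0}

weights = {                                          # >0: support each other, <0: compete
    ("letter_is_A", "letter_is_H"): -2.0,            # rival letter hypotheses
    ("word_is_CAT", "word_is_HAT"): -2.0,            # rival word hypotheses
    ("letter_is_A", "word_is_CAT"):  1.5,            # 'A' weakly supports CAT
    ("letter_is_H", "word_is_HAT"):  1.5,            # 'H' weakly supports HAT
}

def net_input(u):
    # total push on hypothesis u from its evidence and every constraint it is part of
    return bias[u] + sum(w * units[a if b == u else b]
                         for (a, b), w in weights.items() if u in (a, b))

for _ in range(10):                                  # a few settling sweeps is plenty here
    for u in list(units):
        units[u] = 1 if net_input(u) > 0 else 0

print(units)   # letter_is_A and word_is_CAT end up on; their rivals are pushed off

The only point of the toy is that the final state emerges from many soft, local pressures pushing against one another rather than from any single rule firing, which is roughly the sense in which McClelland and Rumelhart talked about cognition as constraint relaxation.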
I was in the process of explaining the background to my own work. You seem to have interpreted my reference to those areas as something else entirely. Cognitive psychology is critical to an understanding of my approach to AGI. Without an understanding of that field, it might be hard to see why the papers I wrote are outlining a theory of AGI. >> also helps, as I say, to be familiar with what was going on in those PDP >> books. >> >> Do you have a fairly detailed knowledge of all three of these areas? > > Are you always an arrogant blowhard, Richard? Do you always make comments like these without having read the messages that came just before the one you are responding to? To repeat, I was asking the question of Kelly because it was directly relevant to his own comments about my papers. I needed to get a context. Kelly responded politely and factually. You, on the other hand, are an onlooker who the question was not directed at, but you feel inclined to step in, misinterpret the context, and start using comments like "arrogant blowhard". (... the kind of language that, I might point out, has been used as grounds for putting people on moderation! ;-) ). >> Do you understand where McClelland and Rumelhart were coming from when >> they talked about the relaxation of weak constraints, and about how a >> lot of cognition seemed to make more sense when couched in those terms? >> Do you also follow the line of reasoning that interprets M & R's >> subsequent pursuit of non-complex models as a mistake? And the >> implication that there is a class of systems that are as yet unexplored, >> doing what they did but using a complex approach? >> >> Put all these pieces together and we have the basis for a dialog. >> >> But ... demanding a finished AGI as an essential precondition for >> behaving in a mature way toward the work I have already published...? I >> don't think so. :-) > > I think two things apply: you haven't build a lot of systems that > make impressive results, and you spend a lot of time on this list, > which means you don't have have a lot of quality time for work, > whatever it is. > > I've just skimmed your papers at maximum speed, and preliminary impression > is not good. I'll reserve my opinion until I can read them. Sadly, I can tell you in advance that your opinion will be of no value. :-( Richard Loosemore From sjatkins at mac.com Thu Feb 17 19:29:40 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 17 Feb 2011 11:29:40 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <4D5D7724.8060502@mac.com> On 02/16/2011 05:21 PM, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: >> On 02/16/2011 10:15 AM, spike wrote: >> Not the same problem domain or even all that close. Can you turn it into a >> really good chatbot? Maybe, maybe not depending on your standard of "good". >> But that wouldn't be very exciting. Very expensive way to keep folks in >> the nursing home entertained. > Samantha, are you familiar with Moore's law? No, gosh, never heard of it before. :P > Let's assume for purposes > of discussion that you are 30, that you will be in the nursing home > when you're 70. 
That means Watson level functionality will cost around > $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get > it in a box of cracker jacks. Moore's Law is not enough. You need much better algorithmic approaches and in some cases any workable algorithm at all. There are algorithms that have changed enough that running the modern version on a 1980 PC outperforms running the 1980 algorithm on a supercomputer today. Moore's Law is about hardware. Software has notoriously failed to keep pace. For many tasks we don't have vetted algorithms at all yet or a clear idea of how to achieve the desired results. - samantha From msd001 at gmail.com Thu Feb 17 20:16:35 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 17 Feb 2011 15:16:35 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: 2011/2/17 John Clark : > source code to be enormously impressed by it. The truth is I have read > the?source code of very few human beings, but I still think some of them are > intelligent. Bullshit. John Clark has given evidence of the belief that only John Clark is intelligent. :) From jonkc at bellsouth.net Thu Feb 17 20:56:44 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 15:56:44 -0500 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: The cover story of the current issue of Time magazine is entitled "2045: The Year Man Becomes Immortal", its about Ray Kurzweil and the singularity: http://www.time.com/time/health/article/0,8599,2048138,00.html John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 21:08:16 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 17:08:16 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> Message-ID: I kinda like Zhigalko but then I like the thin, intense type. Carlsen is way too basketball A-team for me (memories of getting beat up in high school), but yes, I can see how the novelty of a good-looking jock/chess master would turn on G.Q. and super models. d. 2011/2/17 spike > > > > > *On Behalf Of *Darren Greer > > ** > > *>?* I also predicted that when that happened we would either think better > of computer intelligence, worse of human thinking, or worse of chess, and > that if history was a guide, we would downgrade chess." d. > > > > That sounds like a good prediction, but it hasn?t worked that way really. > Computers are better than all humans now, even the commercial versions that > run on laptop computers. 
Human vs human chess is still played, the prize > funds are higher than ever, the highest rated human (Carlsen) is dating a > supermodel and has been hired to sell clothing for G-Star. > > > > This may be a special case however, for Carlsen may be the first male chess > grandmaster in history who is not an ugly geek. Odd, for it seems about 80% > of the top female chess players are knockout gorgeous, but we lads at that > level are 80% radioactive ugly. > > > > Actually Darren, you are a valuable one to judge this contest. > > > > Scroll all the way down in this link and compare: > > > > http://www.chessbase.com/newsdetail.asp?newsid=7014 > > > > spike > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Feb 17 20:43:20 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 15:43:20 -0500 Subject: [ExI] Watson Jeopardy battle on the net In-Reply-To: <4D5D7724.8060502@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <4D5D7724.8060502@mac.com> Message-ID: <08C7DC57-6358-4C59-B5E6-7233F0E061DA@bellsouth.net> As far as I know the entire 90 minute Watson Jeopardy battle is not on the net yet, but there is an interview with the two defeated human ex champions: http://abcnews.go.com/Technology/video/jeopardy-champs-battling-watson-discuss-challenge-12931204 John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 21:29:39 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 17:29:39 -0400 Subject: [ExI] watson on jeopardy In-Reply-To: <005501cbcebf$63c855a0$2b5900e0$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> <005501cbcebf$63c855a0$2b5900e0$@att.net> Message-ID: >Watson passes or is getting close in Jeopardy, neither can pass at general language.< Yes, I meant, and I guess Kurzweil meant, that Watson is one step closer to passing the general language test though he and computers in general still maybe be very far away. Because it isn't just a matter of the computer finding the right answer to a direct question -- he has to place the words in context first and then hunt for the answer. The example given by one of the programmers was if someone 'runs' down the street and someone else 'runs' for president, Watson has to be able to sort out which meaning is intended before he can begin to word associate. And this ability moves Watson closer to passing the general language test than a computer has ever been before, does it not? I think one of the more interesting aspects of this Watson discussion is that we as a group are very focussed on where we want to be as opposed to where we actually are. I think it also is interesting that many of the things Watson does we do as thinking machines as well. We don't understand the programming, or the platform. And we have only the basest understanding of the hardware. 
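To put the 'runs' example in slightly more concrete terms, here is about the crudest possible sketch of picking a word sense from context: a toy overlap score with invented sense signatures, which is certainly not how Watson's disambiguation actually works.

# Toy word-sense disambiguation: score each candidate sense of "run" by how many
# context words it shares with a hand-invented signature for that sense.
SENSES = {
    "run_physically": {"street", "race", "sprint", "legs", "morning"},
    "run_for_office": {"president", "office", "election", "campaign", "votes"},
}

def best_sense(sentence):
    context = set(sentence.lower().split())
    # pick the sense whose signature overlaps the context the most
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(best_sense("He runs down the street every morning"))        # -> run_physically
print(best_sense("She runs for president in the next election"))  # -> run_for_office

Even this toy sorts the two 'runs' sentences correctly, which hints at why a machine with a vast enough store of contextual associations can start to look as though it knows which meaning is intended.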
But I know when presented with a question I put the words in context and my brain begins to associate. Other than thinking in image as well as language and numbers, I don't understand how Watson, in this one area at least, is radically different from me. Except he has a far greater knowledge base of raw trivia stored in accessible 'cells.' d. 2011/2/17 spike > > > > > *?* *On Behalf Of *Darren Greer > *?* > > > > I just read the Kurzweil article and he points out that Watson is much > closer to being able to pass the Turing test than a chess playing computer > as it is dealing with human language. And so based on that criteria, it is a > step forward no matter how you slice it. > > > > d. > > > > Chess programs have already passed the Turing test in chess, a long time > ago. So Rybka wins the Turing test at chess, Watson passes or is getting > close in Jeopardy, neither can pass at general language. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 22:21:42 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 14:21:42 -0800 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> Message-ID: <004301cbcef1$0e98b480$2bca1d80$@att.net> On Behalf Of Darren Greer Subject: Re: [ExI] Kurzweil On Watson >.I kinda like Zhigalko but then I like the thin, intense type. Carlsen is way too basketball A-team for me (memories of getting beat up in high school), but yes, I can see how the novelty of a good-looking jock/chess master would turn on G.Q. and super models. d. 2011/2/17 spike Well sure, but in any case, my point is we male chess players as a rule are hurting ugly. But that Anna Sharevich, oh my goodness. That stunning creature is enough to make a gay man straight. She is enough to make a straight woman gay. http://www.chessbase.com/newsdetail.asp?newsid=7014 And if that isn't enough of a chess babe, check this! http://en.wikipedia.org/wiki/Alexandra_Kosteniuk And this! http://en.wikipedia.org/wiki/Tatiana_Kosintseva OK this is sufficiently non-Watson I will count that against my total for today. {8^D spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 22:28:32 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 18:28:32 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <004301cbcef1$0e98b480$2bca1d80$@att.net> References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> <004301cbcef1$0e98b480$2bca1d80$@att.net> Message-ID: >Well sure, but in any case, my point is we male chess players as a rule are hurting ugly< Was trying to be tactful and circumspect, but yes, now that you mention it -- that particular group of men could scare the labels off Campbell's soup cans. Darren 2011/2/17 spike > > > > > *On Behalf Of *Darren Greer > *Subject:* Re: [ExI] Kurzweil On Watson > > > > >?I kinda like Zhigalko but then I like the thin, intense type. Carlsen is > way too basketball A-team for me (memories of getting beat up in high > school), but yes, I can see how the novelty of a good-looking jock/chess > master would turn on G.Q. and super models. > > > > d. 
> > > > 2011/2/17 spike > > > > Well sure, but in any case, my point is we male chess players as a rule are > hurting ugly. But that Anna Sharevich, oh my goodness. That stunning > creature is enough to make a gay man straight. She is enough to make a > straight woman gay. > > http://www.chessbase.com/newsdetail.asp?newsid=7014 > > And if that isn?t enough of a chess babe, check this! > > http://en.wikipedia.org/wiki/Alexandra_Kosteniuk > > And this! > > http://en.wikipedia.org/wiki/Tatiana_Kosintseva > > OK this is sufficiently non-Watson I will count that against my total for > today. > > {8^D > > spike > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 18 06:17:34 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 22:17:34 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: <009301cbcf33$88a24a10$99e6de30$@att.net> Even after all the singularity talk that we have had here for years, it was a jolt to see all that in something as mainstream as Time magazine. It will be interesting to see the letters to the editor on this one. spike From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Thursday, February 17, 2011 12:57 PM To: ExI chat list Subject: [ExI] Time magazine cover story on the singularity The cover story of the current issue of Time magazine is entitled "2045: The Year Man Becomes Immortal", its about Ray Kurzweil and the singularity: http://www.time.com/time/health/article/0,8599,2048138,00.html John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 18 07:10:41 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:10:41 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110217115041.GQ23560@leitl.org> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 4:50 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 06:59:53AM -0400, Darren Greer wrote: >> >Another possibility is that advanced civilizations naturally trend >> towards virtual reality, and thus end up leaving a very small >> externally detectable footprint. Exploring the endless possibilities > > Look, what is your energetical footprint? 1 kW, more or less? > Negligible. In a super efficient system, my footprint might be nanowatts. I believe there are theoretical computing models that use zero net electricity. > Now multiply that by 7 gigamonkeys. Problem? > > Infinitesimally small energy budgets multiplied by very large > numbers are turning stars into FIR blackbodies. And whole galaxies, > and clusters, and superclusters. > > You think that would be easy to miss? Yes. 
Seeing the LACK of something is very difficult astronomy. Heck how long did it take astronomers to figure out that the majority of the universe is dark matter? I agree with you that an advanced civilization would eventually create a ring world, and finally a sphere that collected all available solar energy. But that could support an enourmous computational structuree, capable of simulating every mind in a 10,000 year civilization might take only a few watts and a few seconds. -Kelly > >> of virtual reality seems potentially a lot more interesting than >> crossing tens of thousands of light years of space to try and visit >> some lower life form...< >> >> I had never considered this scenario until I came to Exi and it was >> postulated for me. It is the most hopeful compared to the other polar > > When something is postulated to you it's usually bunk. Novelty > and too small group for peer review pretty much see to that. When I look at teenagers lost in iPods, it doesn't seem like bunk to think that they could positively be swallowed alive by an interesting virtual reality. I have relatives who have addiction to WoW that makes a heroin addict look like a weekend social drinker. >> opposite scenarios--self-destruction or mature Zen state (with a no poaching >> policy) of technological superiority. Alas, self-destruction seems to me to >> be the most likely, given the bloody and tragic arc of our history at least. > > It's less bloody and tragic than bloody stupid. Our collective > intelligence seems to approach that of an overnight culture. > > http://www.fungionline.org.uk/5kinetics/2batch.html Competition for limited resources and a recognition that exponential growth cannot continue forever indicates that there will be Darwinian processes for choosing which AGIs get the eventually limited power, and which do not. This leads one inevitably to the conclusion that the surviving AGIs will be the "fittest" in a survival and reproduction sense. It will be a very competitive world for unenhanced human beings to compete in, to say the least. -Kelly From kellycoinguy at gmail.com Fri Feb 18 07:16:33 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:16:33 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110217163232.GC23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 9:32 AM, Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 06:21:26PM -0700, Kelly Anderson wrote: >> On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: >> > On 02/16/2011 10:15 AM, spike wrote: >> > Not the same problem domain or even all that close. ?Can you turn it into a >> > really good chatbot? ?Maybe, maybe not depending on your standard of "good". >> > ?But that wouldn't be very exciting. ? ?Very expensive way to keep folks in >> > the nursing home entertained. >> >> Samantha, are you familiar with Moore's law? Let's assume for purposes > > Kelly, do you think 3d integration will be just-ready when > CMOS runs into a wall? Perhaps, perhaps not. But I think ONE out of the several dozen competing paradigms will be ready to pick up more or less where the last one left off. > Kelly, do you think that Moore is equivalent to system > performance? You sure about that? No. 
Software improves as well, so system performance should go up faster than would be indicated by Moore's law alone would indicate. :-) -Kelly From kellycoinguy at gmail.com Fri Feb 18 07:25:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:25:18 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5D7724.8060502@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <4D5D7724.8060502@mac.com> Message-ID: On Thu, Feb 17, 2011 at 12:29 PM, Samantha Atkins wrote: > On 02/16/2011 05:21 PM, Kelly Anderson wrote: >> >> On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins ?wrote: >>> >>> On 02/16/2011 10:15 AM, spike wrote: >>> Not the same problem domain or even all that close. ?Can you turn it into >>> a >>> really good chatbot? ?Maybe, maybe not depending on your standard of >>> "good". >>> ?But that wouldn't be very exciting. ? ?Very expensive way to keep folks >>> in >>> the nursing home entertained. >> >> Samantha, are you familiar with Moore's law? > > No, gosh, never heard of it before. ?:P Just as I suspected... ;-) >> ?Let's assume for purposes >> of discussion that you are 30, that you will be in the nursing home >> when you're 70. That means Watson level functionality will cost around >> $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get >> it in a box of cracker jacks. > > Moore's Law is not enough. ?You need much better algorithmic approaches and > in some cases any workable algorithm at all. ?There are algorithms that have > changed enough that running the modern version on a 1980 PC outperforms > running the 1980 algorithm on a supercomputer today. ? Moore's Law is about > hardware. ?Software has notoriously failed to keep pace. ?For many tasks we > don't have vetted algorithms at all yet or a clear idea of how to achieve > the desired results. You forget the context here. I was talking about what would be required to run a Watson-like system. That algorithm and software clearly exists today. How did we cross wires here? -Kelly From kellycoinguy at gmail.com Fri Feb 18 08:39:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 01:39:26 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5D1897.4030906@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote: > Okay, first: ?although I understand your position as an Agilista, and your > earnest desire to hear about concrete code rather than theory ("I value > working code over big ideas"), you must surely acknowledge that in some > areas of scientific research and technological development, it is important > to work out the theory, or the design, before rushing ahead to the > code-writing stage. This is the scientist vs. engineer battle. As an engineering type of scientist, I prefer to perform experiments along the way to determine if my theory is correct. 
Newton performed experiments to verify his theories, and this influenced his next theory. Without the experiments it would not be the scientific method, but rather closer to philosophy. I'll let "real" scientists figure out how the organelles of the brain function. I'll pay attention as I can to their findings. I like the idea of being influenced by the designs of nature. I really like the wall climbing robots that copy the techniques of the gecko. Really interesting stuff that. I was reading papers about how the retina of cats worked in computer vision classes twenty years ago. I'll let cognitive scientists and doctors try and unravel the brain using black box techniques, and I'll pay attention as I can to their results. These are interesting from the point of view of devising tests to see if what you have designed is similar to the human brain. Things like optical illusions are very interesting in terms of figuring out how we do it. As an Agilista with an entrepreneurial bent, I have little patience for a self-described scientist working on theories that may not have applications for twenty years. I respect that the mathematics for the CAT scanner were developed in the 1920's, but the guy who developed those techniques got very little out of the exercise. Aside from that, if you can't reduce your theories to practice pretty soon, the practitioners of "parlor tricks" will beat you to your goal. > That is not to say that I don't write code (I spent several years as a > software developer, and I continue to write code), but that I believe the > problem of building an AGI is, at this point in time, a matter of getting > the theory right. ?We have had over fifty years of AI people rushing into > programs without seriously and comprehensively addressing the underlying > issues. ?Perhaps you feel that there are really not that many underlying > issues to be dealt with, but after having worked in this field, on and off, > for thirty years, it is my position that we need deep understanding above > all. ?Maxwell's equations, remember, were dismissed as useless for anything > -- just idle theorizing -- for quite a few years after Maxwell came up with > them. ?Not everything that is of value *must* be accompanied by immediate > code that solves a problem. I believe that many interesting problems are solved by throwing more computational cycles at them. Then, once you have something that works, you can optimize later. Watson is a system that works largely because of the huge number of computational cycles being thrown at the problem. As far as AGI research being off the tracks, the only way you're going to convince anyone is with some kind of intermediate result. Even flawed results would be better than nothing. > Now, with regard to the papers that I have written, I should explain that > they are driven by the very specific approach described in the complex > systems paper. ?That described a methodological imperative: ?if intelligent > systems are complex (in the "complex systems" sense, which is not the > "complicated systems", aka space-shuttle-like systems, sense), then we are > in a peculiar situation that (I claim) has to be confronted in a very > particular way. ?If it is not confronted in that particular way, we will > likely run around in circles getting nowhere -- and it is alarming that the > precise way in which this running around in circles would happen bears a > remarkable resemblance to what has been happening in AI for fifty years. 
> ?So, if my reasoning in that paper is correct then the only sensible way to > build an AGI is to do some very serious theoretical and tool-building work > first. See, I don't think Watson is "getting nowhere"... It is useful today. Let me give you an analogy. I can see that when we can create nanotech robots small enough to get into the human body and work at the cellular level, then all forms of cancer are reduced to sending in those nanobots with a simple program. First, detect cancer cells. How hard can that be? Second, cut a hole in the wall of each cancer cell you encounter. With enough nanobots, cancer, of all kinds, is cured. Of course, we don't have nanotech robots today, but that doesn't matter. I have cured cancer, and I deserve a Nobel prize in medicine!!! On the other hand, there are doctors with living patients today, and they practice all manner of barbarous medicine in the attempt to kill cancer cells without killing patients. The techniques are crude and often unsuccessful causing their patients lots of pain. Nevertheless, these doctors do occasionally succeed in getting a patient into remission. You are the nanotech doctor. I prefer to be the doctor with living patients needing help today. Watson is the second kind. Sure, the first cure to cancer is more general, easier, more effective, easier on the patient, but is simply not available today, even if you can see it as an almost inevitable eventuality. > And part of that theoretical work involves a detailed understanding of > cognitive psychology AND computer science. ?Not just a superficial > acquaintance with a few psychology ideas, which many people have, but an > appreciation for the enormous complexity of cog psych, and an understanding > of how people in that field go about their research (because their protocols > are very different from those of AI or computer science), and a pretty good > grasp of the history of psychology (because there have been many different > schools of thought, and some of them, like Behaviorism, contain extremely > valuable and subtle lessons). Ok, so you care about cognitive psychology. That's great. Are you writing a program that simulates a human psychology? Even on a primitive basis? Or is your real work so secretive that you can't share your ideas? In other words, how SPECIFICALLY does your deep understanding of cognitive psychology contribute to a working program (even if it only solves a simple problem)? > With regard to the specific comments I made below about McClelland and > Rumelhart, what is going on there is that these guys (and several others) > got to a point where the theories in cognitive psychology were making no > sense, and so they started thinking in a new way, to try to solve the > problem. ?I can summarize it as "weak constrain satisfaction" or "neurally > inspired" but, alas, these things can be interpreted in shallow ways that > omit the background context ... and it is the background context that is the > most important part of it. ?In a nutshell, a lot cognitive psychology makes > a lot more sense if it can be re-cast in "constraint" terms. Ok, that starts to make some sense. I have always considered context to be the most important aspect of artificial intelligence, and one of the more ignored. I think Watson does a lot in the area of addressing context. Certainly not perfectly, but well enough to be quite useful. I'd rather have an idiot savant to help me today than a nice theory that might some day result in something truly elegant. 
> The problem, though, is that the folks who started the PDP (aka > connectionist, neural net) revolution in the 1980s could only express this > new set of ideas in neural terms. ?The made some progress, but then just as > the train appeared to be gathering momentum it ran out of steam. There were > some problems with their approach that could not be solved in a principled > way. ?They had hoped, at the beginning, that they were building a new > foundation for cognitive psychology, but something went wrong. They lacked a proper understanding of the system they were simulating. They kept making simplifying assumptions/guesses because they didn't have a full picture of the brain. I agree that neural networks as practiced in the 80s ran out of steam... whether it was because of a lack of hardware to run the algorithms fast enough, or whether the algorithms were flawed at their core is an interesting argument. If the brain is simulated accurately enough, then we should be able to get an AGI machine by that methodology. That will take some time of course. Your approach apparently will also. Which is the shortest path to AGI? Time will tell, I suppose. > What I have done is to think hard about why that collapse occurred, and to > come to an understanding about how to get around it. ?The answer has to do > with building two distinct classes of constraint systems: ?either > non-complex, or complex (side note: ?I will have to refer you to other texts > to get the gist of what I mean by that... see my 2007 paper on the subject). > ?The whole PDP/connectionist revolution was predicated on a non-complex > approach. ?I have, in essence, diagnosed that as the problem. ?Fixing that > problem is hard, but that is what I am working on. > > Unfortunately for you -- wanting to know what is going on with this project > -- I have been studiously unprolific about publishing papers. So at this > stage of the game all I can do is send you to the papers I have written and > ask you to fill in the gaps from your knowledge of cognitive psychology, AI > and complex systems. This kind of sounds like you want me to do your homework for you... :-) You have published a number of papers. The problem from my point of view is that the way you approach your papers is philisophical, not scientific. Interesting, but not immediately useful. > Finally, bear in mind that none of this is relevant to the question of > whether other systems, like Watson, are a real advance or just a symptom of > a malaise. ?John Clark has been ranting at me (and others) for more than > five years now, so when he pulls the old bait-and-switch trick ("Well, if > you think XYZ is flawed, let's see YOUR stinkin' AI then!!") I just smile > and tell him to go read my papers. ?So we only got into this discussion > because of that: ?it has nothing to do with delivering critiques of other > systems, whether they contain a million lines of code or not. ?:-) ? Watson > still is a sleight of hand, IMO, whether my theory sucks or not. ?;-) The problem from my point of view is that you have not revealed enough of your theory to tell whether it sucks or not. I have no personal axe to grind. I'm just curious because you say, "I can solve the problems of the world", and when I ask what those are, you say "read my papers"... I go and read the papers. I think I understand what you are saying, more or less in those papers, and I still don't know how to go about creating an AGI using your model. 
All I know at this point is that I need to separate the working brain from the storage brain. Congratulations, you have recast the brain as a Von Neumann architecture... :-) -Kelly From eugen at leitl.org Fri Feb 18 12:51:46 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Feb 2011 13:51:46 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> Message-ID: <20110218125146.GJ23560@leitl.org> On Fri, Feb 18, 2011 at 12:10:41AM -0700, Kelly Anderson wrote: > > Look, what is your energetical footprint? 1 kW, more or less? > > Negligible. > > In a super efficient system, my footprint might be nanowatts. I Not even for human equivalent, nevermind at 10^6 to 10^9 speedup. I don't think you can go below 1-10 W for a human realtime equivalent. > believe there are theoretical computing models that use zero net > electricity. Reversible logic is slow, and it's not perfectly reversible. And it's still immaterial, because if you use 100 times less energy there will be 100 times the individuals competing for it. Adaptively. > > Now multiply that by 7 gigamonkeys. Problem? > > > > Infinitesimally small energy budgets multiplied by very large > > numbers are turning stars into FIR blackbodies. And whole galaxies, > > and clusters, and superclusters. > > > > You think that would be easy to miss? > > Yes. Seeing the LACK of something is very difficult astronomy. Heck Giant (up to GLYr) spherical voids only emitting in FIR? > how long did it take astronomers to figure out that the majority of > the universe is dark matter? I agree with you that an advanced There was a dedicated search for Dyson FIR emitters. Result: density too low to care. > civilization would eventually create a ring world, and finally a Not ring, optically dense node cloud. > sphere that collected all available solar energy. But that could > support an enourmous computational structuree, capable of simulating Enormous to some, trivial to others. > every mind in a 10,000 year civilization might take only a few watts > and a few seconds. The numbers don't check out. Occam's razor sez: we're not in anyone's smart lightcone. > > When something is postulated to you it's usually bunk. Novelty > > and too small group for peer review pretty much see to that. > > When I look at teenagers lost in iPods, it doesn't seem like bunk to > think that they could positively be swallowed alive by an interesting > virtual reality. I have relatives who have addiction to WoW that makes > a heroin addict look like a weekend social drinker. Have you seen the birth rate and retention rate of Amish? > > It's less bloody and tragic than bloody stupid. Our collective > > intelligence seems to approach that of an overnight culture. > > > > http://www.fungionline.org.uk/5kinetics/2batch.html > > Competition for limited resources and a recognition that exponential I was referring to 7 gigamonkeys in above graph, actually. > growth cannot continue forever indicates that there will be Darwinian > processes for choosing which AGIs get the eventually limited power, You're getting it. > and which do not. This leads one inevitably to the conclusion that the > surviving AGIs will be the "fittest" in a survival and reproduction > sense. It will be a very competitive world for unenhanced human beings > to compete in, to say the least. Exactly. 
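(Two of the orders of magnitude in this exchange are easy to sanity-check with a back-of-envelope script. The brain-throughput figures below -- roughly 10^15 synaptic events per second and ~10 bit erasures per event -- are loose assumptions, not measurements, so treat the output as an illustration of the range being argued over, nothing more.)

# Rough numbers for two claims in this thread: the watts-per-human-equivalent
# question, and the "Dyson FIR emitter" searches.  The brain figures are
# guesses made for illustration only.

import math

k_B  = 1.380649e-23      # Boltzmann constant, J/K
T_op = 300.0             # operating temperature, K (~body/room temperature)

# 1) Landauer floor for a "human realtime equivalent".
#    Assume ~10^15 synaptic events/s and ~10 bit erasures per event.
events_per_s = 1e15
bits_per_event = 10
landauer_joule_per_bit = k_B * T_op * math.log(2)          # ~2.9e-21 J
floor_watts = events_per_s * bits_per_event * landauer_joule_per_bit
print(f"Landauer floor: {floor_watts:.1e} W (the brain itself burns ~20 W)")
# => a few tens of microwatts: the thermodynamic floor is far below 1 W, so
# the 1-10 W figure is presumably about practical irreversible hardware
# overhead, while "nanowatts" would mean running close to reversible
# operation, which (as noted above) is slow and never perfectly reversible.

# 2) Waste heat of a Dyson shell: a shell of radius R around a Sun-like star
#    must re-radiate the star's whole luminosity as a cool blackbody.
L_sun = 3.828e26          # W
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
AU    = 1.496e11          # m
for R in (1 * AU, 3 * AU):
    T_shell = (L_sun / (4 * math.pi * R**2 * sigma)) ** 0.25
    peak_um = 2.898e-3 / T_shell * 1e6                     # Wien's law
    print(f"shell at {R/AU:.0f} AU: ~{T_shell:.0f} K, peak ~{peak_um:.0f} um")
# => roughly 390 K at 1 AU and 230 K at 3 AU: mid/far-infrared blackbodies,
# which is what the dedicated Dyson FIR searches mentioned above looked for.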
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Fri Feb 18 13:03:21 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Feb 2011 14:03:21 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> Message-ID: <20110218130321.GK23560@leitl.org> On Fri, Feb 18, 2011 at 12:16:33AM -0700, Kelly Anderson wrote: > > Kelly, do you think 3d integration will be just-ready when > > CMOS runs into a wall? > > Perhaps, perhaps not. But I think ONE out of the several dozen > competing paradigms will be ready to pick up more or less where the > last one left off. *Which* competing platforms? Technologies don't come out of the blue fully formed, they're incubated for decades in R&D pipeline. Everything is photolitho based so far, self-assembly isn't yet even in the crib. TSM is just 2d piled higher and deeper. > > Kelly, do you think that Moore is equivalent to system > > performance? You sure about that? > > No. Software improves as well, so system performance should go up Software degrades, actually. Software bloat about matches the advances in hardware. In terms of advanced concepts, why is the second-oldest high level language still unmatched? Why are newer environments inferior to already historic ones? > faster than would be indicated by Moore's law alone would indicate. > :-) -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Fri Feb 18 13:33:16 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 18 Feb 2011 08:33:16 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: <4D5E751C.2060008@lightlink.com> Kelly Anderson wrote: > On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote: >> Okay, first: although I understand your position as an Agilista, and your >> earnest desire to hear about concrete code rather than theory ("I value >> working code over big ideas"), you must surely acknowledge that in some >> areas of scientific research and technological development, it is important >> to work out the theory, or the design, before rushing ahead to the >> code-writing stage. > > This is the scientist vs. engineer battle. As an engineering type of > scientist, I prefer to perform experiments along the way to determine > if my theory is correct. Newton performed experiments to verify his > theories, and this influenced his next theory. Without the experiments > it would not be the scientific method, but rather closer to > philosophy. > > I'll let "real" scientists figure out how the organelles of the brain > function. 
I'll pay attention as I can to their findings. I like the > idea of being influenced by the designs of nature. I really like the > wall climbing robots that copy the techniques of the gecko. Really > interesting stuff that. I was reading papers about how the retina of > cats worked in computer vision classes twenty years ago. > > I'll let cognitive scientists and doctors try and unravel the brain > using black box techniques, and I'll pay attention as I can to their > results. These are interesting from the point of view of devising > tests to see if what you have designed is similar to the human brain. > Things like optical illusions are very interesting in terms of > figuring out how we do it. > > As an Agilista with an entrepreneurial bent, I have little patience > for a self-described scientist working on theories that may not have > applications for twenty years. I respect that the mathematics for the > CAT scanner were developed in the 1920's, but the guy who developed > those techniques got very little out of the exercise. Aside from that, > if you can't reduce your theories to practice pretty soon, the > practitioners of "parlor tricks" will beat you to your goal. You've misunderstood so very much of what is really going on here. There are strong theoretical reasons to believe that this approach is the only one that will work, and that the "practitioners of "parlor tricks"" will never actually be able to succeed. This isn't just opinion or speculation, it is the result of a real theoretical analysis. Also, why do you say "self-described scientist"? I don't understand if this is supposed to be me or someone else or scientists in general. And why do you assume that I am not doing experiments?! I am certainly doing that, and doing masive numbers of such experiments is at the core of everything I do. I don't quite understand how these confusions arose, but you've ended up getting quite the opposite idea about what is going on. I have little time today, so may not be able to address your other points. Richard Loosemore From hkeithhenson at gmail.com Fri Feb 18 16:13:44 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 18 Feb 2011 09:13:44 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Fri, Feb 18, 2011 at 12:28 AM, Kelly Anderson wrote: snip > Competition for limited resources and a recognition that exponential > growth cannot continue forever indicates that there will be Darwinian > processes for choosing which AGIs get the eventually limited power, > and which do not. This leads one inevitably to the conclusion that the > surviving AGIs will be the "fittest" in a survival and reproduction > sense. It will be a very competitive world for unenhanced human beings > to compete in, to say the least. The fact that we don't see massive scale manipulation of matter and energy indicates that this has not yet happened in our light cone. That doesn't mean it could not happen here. The human population growth falling below replacement in some places is an indication that reproduction isn't as strong a drive as we thought. Still, to get the observed universe, we have to be wrong on something. Perhaps there is a relatively simple way to escape from the universe. 
Keith From eugen at leitl.org Fri Feb 18 16:55:41 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Feb 2011 17:55:41 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: <20110218165541.GR23560@leitl.org> On Fri, Feb 18, 2011 at 09:13:44AM -0700, Keith Henson wrote: > The fact that we don't see massive scale manipulation of matter and > energy indicates that this has not yet happened in our light cone. We're not in their light cone. Origin being the time they started expanding visibly. > That doesn't mean it could not happen here. > > The human population growth falling below replacement in some places I don't think this will last. Subpopulations still grow exponentially. This is being masked for time being for select location, but the question is for how long. > is an indication that reproduction isn't as strong a drive as we > thought. > > Still, to get the observed universe, we have to be wrong on something. > > Perhaps there is a relatively simple way to escape from the universe. Not every time. Not one which can recall those already on the way. In general, I wonder about the need for the obvious explanation: yes, we're rare, and we're the first about to start expanding (assuming we won't fall flat on our face, and can't get up). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sparge at gmail.com Fri Feb 18 17:11:51 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 18 Feb 2011 12:11:51 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: Maybe we're just first in our neighborhood. -Dave On Feb 16, 2011 12:32 PM, "Keith Henson" wrote: On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > >> I'm still pissed at Sagan for his hubris in sending a message to the >> stars without asking the rest of us first, in blithe certainty that "of >> course" any recipient would have evolved beyond aggression and >> xenophobia. > > The real reasons if that they would be there you'd be dead, Jim. > In fact, if any alien picks up the transmission (chance: very close > to zero) they'd better be farther advanced than us, and on a > faster track. I hope it for them. I have been mulling this over for decades. We look out into the Universe and don't (so far) see or hear any evidence of technophilic civilization. I see only two possibilities: 1) Technophilics are so rare that there are no others in our light cone. 2) Or if they are relatively common something wipes them *all* out, or, if not wiped out, they don't do anything which indicates their presence. If 1, then the future is unknown. If 2, it's probably related to local singularities. If that's the case, most of the people reading this list will live to see it. Keith PS. If anyone can suggest something that is not essentially the same two situations, please speak up. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 18 17:17:28 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 10:17:28 -0700 Subject: [ExI] Watson On Jeopardy. 
In-Reply-To: <4D5E751C.2060008@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> Message-ID: On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore wrote: > > Kelly Anderson wrote: >> >> On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote > You've misunderstood so very much of what is really going on here. It wouldn't be the first time. I'm here to learn. If you have something to teach, I am your humble student. I am quite sincere in this. No kidding. > There are strong theoretical reasons to believe that this approach is the only one that will >work, and that the "practitioners of "parlor tricks"" will never actually be able to succeed. ?This >isn't just opinion or speculation, it is the result of a real theoretical analysis. Risking Clintonese... I suppose Richard, that this depends upon your definition of 'success'. I would guess that most people would declare that Watson already succeeded. You dismiss it as "trivial" and a "parlor trick", while 99% of everyone else thinks it is a great success already.? If there is derision, I think it is because of your dismissive attitude about what is clearly a great milestone in computation, even if it turns out not to be on the path to some "true" AGI. I, for one, think that with another ten years or so of work, the Watson approach might pass some version of the Turing test. If you wrote a paper entitled "Why Watson is an Evolutionary Dead End", and you were convincing to your peers, I think you would get it published and it would be helpful to the AI community. > Also, why do you say "self-described scientist"? ?I don't understand if this is supposed to be >me or someone else or scientists in general. Carl Sagan, a real scientist, said frequently, "Extraordinary claims require extraordinary evidence." (even though he may have borrowed the phrase from Marcello Truzzi.) I understand that you are claiming to follow the scientific method, and that you do not think of yourself as a philosopher. If you claim to be a philosopher, stand up and be proud of that. Some of the most interesting people are philosophers, and there is nothing wrong with that. > And why do you assume that I am not doing experiments?! ?I am certainly doing that, and >doing masive numbers of such experiments is at the core of everything I do. Good to hear. Your papers did not reflect that. Can you point me to some of your experimental results? > I don't quite understand how these confusions arose, but you've ended up getting quite the > opposite idea about what is going on. All I had to go on was your papers. If what you are saying now is correct, your papers don't effectively reflect that. > I have little time today, so may not be able to address your other points. Understandable. 
-Kelly From darren.greer3 at gmail.com Fri Feb 18 17:20:46 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 13:20:46 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110218165541.GR23560@leitl.org> References: <20110218165541.GR23560@leitl.org> Message-ID: >is an indication that reproduction isn't as strong a drive as we thought.< Either it's not as strong or that human beings can extract themselves from evolutionary mandated behavior. Some individuals and groups of individuals seem to be able to do it with aggression and dominance and tribal mentalities, and others don't. The question I have is this: do these individuals and groups adapt out of these behaviors by selection pressures over generational periods based on location (like living in cities for example where xenophobia makes life more difficult and not less so.) Or can you consciously remove yourself from evolutionary imperatives by force of will, or education, or both? I would think, by looking at the Internet and knowing the people that I do, that the drive to have sex may be as strong as ever. But the need in certain populations to have progeny result from it is reduced. Once again, technology, and the relaxation in certain cultures of tribal laws and strictures limiting sexual behavior, have influenced the biological result, but perhaps have not influenced the drive at all. d. On Fri, Feb 18, 2011 at 12:55 PM, Eugen Leitl wrote: > On Fri, Feb 18, 2011 at 09:13:44AM -0700, Keith Henson wrote: > > > The fact that we don't see massive scale manipulation of matter and > > energy indicates that this has not yet happened in our light cone. > > We're not in their light cone. Origin being the time they started > expanding visibly. > > > That doesn't mean it could not happen here. > > > > The human population growth falling below replacement in some places > > I don't think this will last. Subpopulations still grow exponentially. > This is being masked for time being for select location, but the > question is for how long. > > > is an indication that reproduction isn't as strong a drive as we > > thought. > > > > Still, to get the observed universe, we have to be wrong on something. > > > > Perhaps there is a relatively simple way to escape from the universe. > > Not every time. Not one which can recall those already on the way. > > In general, I wonder about the need for the obvious explanation: yes, > we're rare, and we're the first about to start expanding (assuming we > won't fall flat on our face, and can't get up). > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From darren.greer3 at gmail.com Fri Feb 18 17:33:25 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 13:33:25 -0400 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <009301cbcf33$88a24a10$99e6de30$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: >it was a jolt to see all that in something as mainstream as Time magazine. < This may be terribly cynical of me, but I worry when any idea goes mainstream. I always think the media, the moguls and the Wal-marts will try and spin it enough to make a buck off it. Although how you would make money off the singularity I don't know. How about a T-Shirt that says "The Singularity is coming. Get implants!" d. 2011/2/18 spike > Even after all the singularity talk that we have had here for years, it was > a jolt to see all that in something as mainstream as Time magazine. It will > be interesting to see the letters to the editor on this one. > > > > spike > > > > *From:* extropy-chat-bounces at lists.extropy.org [mailto: > extropy-chat-bounces at lists.extropy.org] *On Behalf Of *John Clark > *Sent:* Thursday, February 17, 2011 12:57 PM > *To:* ExI chat list > *Subject:* [ExI] Time magazine cover story on the singularity > > > The cover story of the current issue of Time magazine is entitled "2045: > The Year Man Becomes Immortal", its about Ray Kurzweil and the > singularity: > > http://www.time.com/time/health/article/0,8599,2048138,00.html > > > > John K Clark > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Feb 18 17:48:47 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 18 Feb 2011 12:48:47 -0500 Subject: [ExI] Complex AGI [WAS Watson On Jeopardy] In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: <4D5EB0FF.7000007@lightlink.com> Kelly Anderson wrote: > On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore > wrote: >> Okay, first: although I understand your position as an Agilista, >> and your earnest desire to hear about concrete code rather than >> theory ("I value working code over big ideas"), you must surely >> acknowledge that in some areas of scientific research and >> technological development, it is important to work out the theory, >> or the design, before rushing ahead to the code-writing stage. > > This is the scientist vs. engineer battle. As an engineering type of > scientist, I prefer to perform experiments along the way to determine > if my theory is correct. 
Newton performed experiments to verify his > theories, and this influenced his next theory. Without the > experiments it would not be the scientific method, but rather closer > to philosophy. > > I'll let "real" scientists figure out how the organelles of the brain > function. I'll pay attention as I can to their findings. I like the > idea of being influenced by the designs of nature. I really like the > wall climbing robots that copy the techniques of the gecko. Really > interesting stuff that. I was reading papers about how the retina of > cats worked in computer vision classes twenty years ago. > > I'll let cognitive scientists and doctors try and unravel the brain > using black box techniques, and I'll pay attention as I can to their > results. These are interesting from the point of view of devising > tests to see if what you have designed is similar to the human brain. > Things like optical illusions are very interesting in terms of > figuring out how we do it. > > As an Agilista with an entrepreneurial bent, I have little patience > for a self-described scientist working on theories that may not have > applications for twenty years. I respect that the mathematics for the > CAT scanner were developed in the 1920's, but the guy who developed > those techniques got very little out of the exercise. Aside from > that, if you can't reduce your theories to practice pretty soon, the > practitioners of "parlor tricks" will beat you to your goal. > >> That is not to say that I don't write code (I spent several years >> as a software developer, and I continue to write code), but that I >> believe the problem of building an AGI is, at this point in time, >> a matter of getting the theory right. We have had over fifty years >> of AI people rushing into programs without seriously and >> comprehensively addressing the underlying issues. Perhaps you feel >> that there are really not that many underlying issues to be dealt >> with, but after having worked in this field, on and off, for >> thirty years, it is my position that we need deep understanding >> above all. Maxwell's equations, remember, were dismissed as useless >> for anything -- just idle theorizing -- for quite a few years after >> Maxwell came up with them. Not everything that is of value *must* >> be accompanied by immediate code that solves a problem. > > > I believe that many interesting problems are solved by throwing more > computational cycles at them. Then, once you have something that > works, you can optimize later. Watson is a system that works largely > because of the huge number of computational cycles being thrown at > the problem. As far as AGI research being off the tracks, the only > way you're going to convince anyone is with some kind of intermediate > result. Even flawed results would be better than nothing. > >> Now, with regard to the papers that I have written, I should >> explain that they are driven by the very specific approach >> described in the complex systems paper. That described a >> methodological imperative: if intelligent systems are complex (in >> the "complex systems" sense, which is not the "complicated >> systems", aka space-shuttle-like systems, sense), then we are in a >> peculiar situation that (I claim) has to be confronted in a very >> particular way. 
If it is not confronted in that particular way, we >> will likely run around in circles getting nowhere -- and it is >> alarming that the precise way in which this running around in >> circles would happen bears a remarkable resemblance to what has >> been happening in AI for fifty years. So, if my reasoning in that >> paper is correct then the only sensible way to build an AGI is to >> do some very serious theoretical and tool-building work first. > > See, I don't think Watson is "getting nowhere"... It is useful today. > > > > > Let me give you an analogy. I can see that when we can create > nanotech robots small enough to get into the human body and work at > the cellular level, then all forms of cancer are reduced to sending > in those nanobots with a simple program. First, detect cancer cells. > How hard can that be? Second, cut a hole in the wall of each cancer > cell you encounter. With enough nanobots, cancer, of all kinds, is > cured. Of course, we don't have nanotech robots today, but that > doesn't matter. I have cured cancer, and I deserve a Nobel prize in > medicine!!! > > On the other hand, there are doctors with living patients today, and > they practice all manner of barbarous medicine in the attempt to kill > cancer cells without killing patients. The techniques are crude and > often unsuccessful causing their patients lots of pain. Nevertheless, > these doctors do occasionally succeed in getting a patient into > remission. > > You are the nanotech doctor. I prefer to be the doctor with living > patients needing help today. Watson is the second kind. Sure, the > first cure to cancer is more general, easier, more effective, easier > on the patient, but is simply not available today, even if you can > see it as an almost inevitable eventuality. > > >> And part of that theoretical work involves a detailed understanding >> of cognitive psychology AND computer science. Not just a >> superficial acquaintance with a few psychology ideas, which many >> people have, but an appreciation for the enormous complexity of cog >> psych, and an understanding of how people in that field go about >> their research (because their protocols are very different from >> those of AI or computer science), and a pretty good grasp of the >> history of psychology (because there have been many different >> schools of thought, and some of them, like Behaviorism, contain >> extremely valuable and subtle lessons). > > Ok, so you care about cognitive psychology. That's great. Are you > writing a program that simulates a human psychology? Even on a > primitive basis? Or is your real work so secretive that you can't > share your ideas? In other words, how SPECIFICALLY does your deep > understanding of cognitive psychology contribute to a working program > (even if it only solves a simple problem)? > >> With regard to the specific comments I made below about McClelland >> and Rumelhart, what is going on there is that these guys (and >> several others) got to a point where the theories in cognitive >> psychology were making no sense, and so they started thinking in a >> new way, to try to solve the problem. I can summarize it as "weak >> constrain satisfaction" or "neurally inspired" but, alas, these >> things can be interpreted in shallow ways that omit the background >> context ... and it is the background context that is the most >> important part of it. In a nutshell, a lot cognitive psychology >> makes a lot more sense if it can be re-cast in "constraint" terms. > > Ok, that starts to make some sense. 
I have always considered context > to be the most important aspect of artificial intelligence, and one > of the more ignored. I think Watson does a lot in the area of > addressing context. Certainly not perfectly, but well enough to be > quite useful. I'd rather have an idiot savant to help me today than a > nice theory that might some day result in something truly elegant. > >> The problem, though, is that the folks who started the PDP (aka >> connectionist, neural net) revolution in the 1980s could only >> express this new set of ideas in neural terms. The made some >> progress, but then just as the train appeared to be gathering >> momentum it ran out of steam. There were some problems with their >> approach that could not be solved in a principled way. They had >> hoped, at the beginning, that they were building a new foundation >> for cognitive psychology, but something went wrong. > > They lacked a proper understanding of the system they were > simulating. They kept making simplifying assumptions/guesses because > they didn't have a full picture of the brain. I agree that neural > networks as practiced in the 80s ran out of steam... whether it was > because of a lack of hardware to run the algorithms fast enough, or > whether the algorithms were flawed at their core is an interesting > argument. > > If the brain is simulated accurately enough, then we should be able > to get an AGI machine by that methodology. That will take some time > of course. Your approach apparently will also. Which is the shortest > path to AGI? Time will tell, I suppose. > >> What I have done is to think hard about why that collapse occurred, >> and to come to an understanding about how to get around it. The >> answer has to do with building two distinct classes of constraint >> systems: either non-complex, or complex (side note: I will have >> to refer you to other texts to get the gist of what I mean by >> that... see my 2007 paper on the subject). The whole >> PDP/connectionist revolution was predicated on a non-complex >> approach. I have, in essence, diagnosed that as the problem. >> Fixing that problem is hard, but that is what I am working on. >> >> Unfortunately for you -- wanting to know what is going on with this >> project -- I have been studiously unprolific about publishing >> papers. So at this stage of the game all I can do is send you to >> the papers I have written and ask you to fill in the gaps from your >> knowledge of cognitive psychology, AI and complex systems. > > This kind of sounds like you want me to do your homework for you... > :-) > > You have published a number of papers. The problem from my point of > view is that the way you approach your papers is philisophical, not > scientific. Interesting, but not immediately useful. > >> Finally, bear in mind that none of this is relevant to the question >> of whether other systems, like Watson, are a real advance or just >> a symptom of a malaise. John Clark has been ranting at me (and >> others) for more than five years now, so when he pulls the old >> bait-and-switch trick ("Well, if you think XYZ is flawed, let's see >> YOUR stinkin' AI then!!") I just smile and tell him to go read my >> papers. So we only got into this discussion because of that: it >> has nothing to do with delivering critiques of other systems, >> whether they contain a million lines of code or not. :-) Watson >> still is a sleight of hand, IMO, whether my theory sucks or not. 
>> ;-) > > The problem from my point of view is that you have not revealed > enough of your theory to tell whether it sucks or not. > > I have no personal axe to grind. I'm just curious because you say, "I > can solve the problems of the world", and when I ask what those are, > you say "read my papers"... I go and read the papers. I think I > understand what you are saying, more or less in those papers, and I > still don't know how to go about creating an AGI using your model. > All I know at this point is that I need to separate the working brain > from the storage brain. Congratulations, you have recast the brain > as a Von Neumann architecture... :-) > > -Kelly Kelly, Well, I am struggling to find positive things to say, because you're tending to make very sweeping statements (e.g. "this is just philosophy" and "this is not science") that some people might interpret as quite insulting. And at the same time, some of the things that other people (e.g. John Clark) have said are starting to come back as if *I* was the one who said them! ;-) We need to be clear, first, that what we are discussing now has nothing to do with Watson. John Clark made a silly equation between my work and Watson, and you and I somehow ended up discussing my work. But I will not discuss the two as if they are connected, if you don't mind, because they are not. They are orthogonal. You have also started to imply that certain statements or claims have come from me .... so I need to be absolutely clear about what I have said or claimed, and what I have not. I have not said "I can solve the problems of the world". I am sure you weren't being serious, but even so... ;-) Most importantly I have NOT claimed that I have written down a complete theory of AGI, nor do I claim that I have built a functioning AGI. When John Clark's said to me: > So I repeat my previous request, please tell us all about the > wonderful AI program that you have written that does things even more > intelligently than Watson. ... I assumed that anyone who actually read this patently silly demand, would understand immediately that I was not being serious when I responded: > Done: read my papers. > > Questions? Just ask! John Clark ALWAYS changes the subject, in every debate in which he attacks me, by asking that same idiotic, rude question! :-) I have long ago stopped being bothered by it, and these days I either ignore him or tell him to read my papers if he wants to know about my work. I really don't know how anyone could read that exchange and think that I was quietly agreeing that I really did claim that I had built a "wonderful AI program ... that does things even more intelligently than Watson". So what have I actually claimed? What have I been defending? Well, what I do say is that IMPLICIT in the papers I have written, there is indeed an approach to AGI (a framework, and a specific model within that framework). There is no way that I have described an AGI design explictly, in enough detail for it to be evaluated, and I have never claimed that. Nor have I claimed to have built one yet. But when pressed by people who want to know more, I do point out that if they understand cognitive psychology in enough detail they will easily be able to add up all the pieces and connect all the dots and see where I am going with the work I am doing. The problem is that, after saying that you read my papers already, you were quite prepared to dismiss all of it as "philosophizing" and "not science". 
I tried to explain to you that if you understood the cognitive science and AI and complex systems background from which the work comes, you would be able to see what I meant by there being a theory of AGI implicit in it, and I did try to explain in a little more detail how my work connects to that larger background. I pointed out the thread that stretches from the cog psych of the 1980s, through McClelland and Rumelhart, through the complex systems movement, to the particular (and rather unusual) approach that I have adopted. I even pointed out the very, very important fact that my complex systems paper was all about the need for a radically different AGI methodology. Now, I might well be wrong about my statement that we need to do things in this radically different way, but you could at least realize that I have declared myself to be following that alternate methodology, and therefore understand what I have said about the priority of theory and a particular kind of experiment, over hacking out programs. It is all there, in the complex systems paper. But even after me pointing out that this stuff has a large context that you might not be familiar with, instead of acknowledging that fact, you are still making sweeping condemnations! This is pretty bad.

More generally: I get two types of responses to my work. One (less common) type of response is from people who understand what I am trying to say well enough that they ask specific, focussed questions about things that are unclear or things they want to challenge. Those people clearly understand that there is a "there" there .... if the papers I wrote were empty philosophising, those people would never be ABLE to send coherent challenges or questions in my direction. Papers that really are just empty philosophising CANNOT generate that kind of detailed response, because there is nothing coherent enough in the paper for anyone to get a handle on.

Then there is the second kind of response. From these people I get nothing specific, just handwaving or sweeping condemnations. Nothing that indicates that they really understood what I was trying to say. They reflect back my arguments in a weird, horribly distorted form -- so distorted that it has no relationship whatsoever to what I actually said -- and when I try to clarify their misunderstandings they just make more and more distorted statements, often wandering far from the point. And, above all, this type of response usually involves statements like "Yes, I read it, but you didn't say anything meaningful, so I dismissed it all as empty philosophising".

I always try to explain and respond. I have put many hours into responding to people who ask questions, and I try very hard to help reduce confusions. I waste a lot of time that way. And very often, I do this even as the person at the other end continues to deliver mildly derogatory comments like "this isn't science, this is just speculation" alongside their other questions.

If you want to know why this stuff comes out of cognitive psychology, by all means read the complex systems paper again, and let me know if you find the argument presented there, for why it HAS to come out of cognitive psychology. It is there -- it is the crux of the argument. If you believe it is incorrect, I would be happy to debate the rationale for it. But, please, don't read several papers and just say afterward "All I know at this point is that I need to separate the working brain from the storage brain. Congratulations, you have recast the brain as a Von Neumann architecture". It looks more like I should be saying, if I were less polite, "Congratulations, you just understood the first page of a 700-page cognitive psychology context that was assumed in those papers". But I won't. ;-)

Richard Loosemore

From rpwl at lightlink.com Fri Feb 18 18:01:53 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Fri, 18 Feb 2011 13:01:53 -0500
Subject: [ExI] Watson On Jeopardy.
In-Reply-To:
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com>
Message-ID: <4D5EB411.9090400@lightlink.com>

Kelly Anderson wrote:
> On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore wrote:
>> Kelly Anderson wrote:
>>> On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote
>> You've misunderstood so very much of what is really going on here.
>
> It wouldn't be the first time. I'm here to learn. If you have
> something to teach, I am your humble student. I am quite sincere in
> this. No kidding.

This is good. I am happy to try. Don't interpret the post I just wrote as being too annoyed (just a *little* frustrated is all). ;-)

>> There are strong theoretical reasons to believe that this approach is the only one that will
>> work, and that the "practitioners of "parlor tricks"" will never actually be able to succeed. This
>> isn't just opinion or speculation, it is the result of a real theoretical analysis.
>
> Risking Clintonese... I suppose Richard, that this depends upon your
> definition of 'success'. I would guess that most people would declare
> that Watson already succeeded. You dismiss it as "trivial" and a
> "parlor trick", while 99% of everyone else thinks it is a great
> success already. If there is derision, I think it is because of your
> dismissive attitude about what is clearly a great milestone in
> computation, even if it turns out not to be on the path to some "true"
> AGI. I, for one, think that with another ten years or so of work, the
> Watson approach might pass some version of the Turing test.
>
> If you wrote a paper entitled "Why Watson is an Evolutionary Dead
> End", and you were convincing to your peers, I think you would get it
> published and it would be helpful to the AI community.

Well, can I point out that the numbers are not 99% in favor? Ben Goertzel just published an essay in H+ magazine saying very much the same things that I said here. Ben is very widely respected in the AGI community, so perhaps you would consider comparing and contrasting my remarks with his.

I don't want to write about Watson, because I have seen so many examples of that kind of dead end and I have already analyzed them as a *class* of systems. That is very important. They cannot be fought individually. I am pointing to the pattern.

>> Also, why do you say "self-described scientist"? I don't understand if this is supposed to be
>> me or someone else or scientists in general.
>
> Carl Sagan, a real scientist, said frequently, "Extraordinary claims
> require extraordinary evidence." (even though he may have borrowed the
> phrase from Marcello Truzzi.)
I understand that you are claiming to > follow the scientific method, and that you do not think of yourself as > a philosopher. If you claim to be a philosopher, stand up and be proud > of that. Some of the most interesting people are philosophers, and > there is nothing wrong with that. :-) Well, you may be confused by the fact that I wrote ONE philosophy paper. But have a look through the very small set of publications on my website. One experimental archaeology, several experimental and computational cognitive science papers. One cognitive neuroscience paper..... I was trained as a physicist and mathematician. I just finished teaching a class in electromagnetic theory this morning. I have written all those cognitive science papers. I was once on a team that ported CorelDraw from the PC to the Mac. I am up to my eyeballs in writing a software tool in OS X that is designed to facilitate the construction and experimental investigation of a class of AGI systems that have never been built before..... Isn't it a bit of a stretch to ask me to be proud to be a philosopher? :-) :-) >> And why do you assume that I am not doing experiments?! I am certainly doing that, and >> doing masive numbers of such experiments is at the core of everything I do. > > Good to hear. Your papers did not reflect that. Can you point me to > some of your experimental results? No, but I did not say that they did. It is too early to ask. Context. Physicists back in the 1980s who wanted to work on the frontiers of particle physics had to spend decades just building one tool - the large hadron collider - to answer their theoretical questions with empirical data. I am in a comparable situation, but with one billionth the funding that they had. Do I get cut a *little* slack? :-( More when I can. Richard Loosemore From spike66 at att.net Fri Feb 18 18:21:41 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 10:21:41 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: <014a01cbcf98$b1455240$13cff6c0$@att.net> .On Behalf Of Darren Greer Subject: Re: [ExI] Time magazine cover story on the singularity >>it was a jolt to see all that in something as mainstream as Time magazine. < >This may be terribly cynical of me, but I worry when any idea goes mainstream. Ja. For mainstream media, this particular Time article wasn't half bad. > I always think the media, the moguls and the Wal-marts will try and spin it enough to make a buck off it. Hmmm, so what's the bad news? I didn't even realize there was a way to make a buck off of the singularity. Kewalll. >.Although how you would make money off the singularity I don't know. If you think of one, do share. As soon as it gets the profit motive behind it, the singularity REALLY IS coming. > How about a T-Shirt that says "The Singularity is coming. Get implants!" d. Eeeexcellent Smithers. Other ideas? Darren has hit it. Commercialization is a driving force like nothing else, a quantity which has a quality all its own. Commercialization is our friend. Look what it did for Christmas. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Fri Feb 18 18:40:30 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 14:40:30 -0400 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <014a01cbcf98$b1455240$13cff6c0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <014a01cbcf98$b1455240$13cff6c0$@att.net> Message-ID: >Commercialization is our friend. Look what it did for Christmas.< If you can sell Santa Claus and Justin Beiber, you can sell anything. d. 2011/2/18 spike > *?On Behalf Of *Darren Greer > *Subject:* Re: [ExI] Time magazine cover story on the singularity > > > > >>it was a jolt to see all that in something as mainstream as Time > magazine. < > > > > >This may be terribly cynical of me, but I worry when any idea goes > mainstream? > > > > Ja. For mainstream media, this particular Time article wasn?t half bad. > > > > > I always think the media, the moguls and the Wal-marts will try and spin > it enough to make a buck off it? > > > > Hmmm, so what?s the bad news? I didn?t even realize there was a way to > make a buck off of the singularity. Kewalll? > > > > >?Although how you would make money off the singularity I don't know? > > > > If you think of one, do share. As soon as it gets the profit motive behind > it, the singularity REALLY IS coming. > > > > > How about a T-Shirt that says "The Singularity is coming. Get implants!" > d. > > > > Eeeexcellent Smithers. > > > > Other ideas? Darren has hit it. Commercialization is a driving force like > nothing else, a quantity which has a quality all its own. Commercialization > is our friend. Look what it did for Christmas. > > > > spike > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Feb 18 19:16:51 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 18 Feb 2011 12:16:51 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Fri, Feb 18, 2011 at 11:40 AM, Darren Greer wrote: (Keith) >>is an indication that reproduction isn't as strong a drive as we > thought.< snip > I would think, by looking at the Internet and knowing the people that I do, > that the drive to have sex may be as strong as ever. But the need in certain > populations to have progeny result from it is reduced. Once again, > technology, and the relaxation in certain cultures of tribal laws and > strictures limiting sexual behavior, have influenced the biological result, > but perhaps have not influenced the drive at all. Evolution had good reason to build in a strong drive to have sex. And in the pre birth control era that resulted in reproduction. It's also fairly clear to me that there is a drive directly for reproduction, especially in women. 
You only need to consider what one member who used to be on this group did to have an example. But it's far from clear to me that this direct drive is enough to sustain the population. It probably doesn't matter anyway. Keith From pharos at gmail.com Fri Feb 18 19:49:45 2011 From: pharos at gmail.com (BillK) Date: Fri, 18 Feb 2011 19:49:45 +0000 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: On Fri, Feb 18, 2011 at 7:16 PM, Keith Henson wrote: > Evolution had good reason to build in a strong drive to have sex. ?And > in the pre birth control era that resulted in reproduction. > > It's also fairly clear to me that there is a drive directly for > reproduction, especially in women. ?You only need to consider what one > member who used to be on this group did to have an example. > > No. There isn't. If you look at the groups who have falling birth rates they correlate *very* strongly with women's rights and the empowerment of women. As soon as women get the power to choose they stop having children. Some might have one child, but this is below the rate required to sustain the population. You can also correlate falling birth rates with first world countries, or 'civilization'. Which also correlates with women's rights. I agree with Eugene's claim that there are sub-groups and third world nations that to-date still have high birth rates and growing populations. But it is to be expected that these high birth rates will only continue while their women remain subjugated under male domination. How long that will last is questionable. That is why I disagree strongly that advanced civilizations will be breeding like rabbits. The 'advanced' part means low reproduction by definition. If a civilization is busy breeding furiously and fighting for survival with other breeders, they have no spare capacity to get 'advanced'. Too many mouths to feed. BillK From spike66 at att.net Fri Feb 18 20:45:20 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 12:45:20 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <014a01cbcf98$b1455240$13 cff6c0$@att.net> Message-ID: <018401cbcfac$c2eb5bc0$48c21340$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Friday, February 18, 2011 10:41 AM To: ExI chat list Subject: Re: [ExI] Time magazine cover story on the singularity >Commercialization is our friend. Look what it did for Christmas.< If you can sell Santa Claus and Justin Beiber, you can sell anything. d. I didn't sell Santa Clause and Justin Beiber, but Santa sold him to me. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Fri Feb 18 21:06:10 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Feb 2011 16:06:10 -0500 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: On Feb 18, 2011, at 12:33 PM, Darren Greer wrote: > how you would make money off the singularity I don't know. I know how to make money off the singularity, sell everything you own and borrow every nickel you can and then use the money to short bonds. But you will have to wait until we get to the point where even Mr. Joe Average expects the singularity to happen in his lifetime. When that happens we can expect a HUGE increase in interest rates, because after the singularity one of 2 things is certain to happen: 1) Paying off that huge debt will be easy with Mr. Joe Average being the master of Nanotechnology. 2) The singularity will kill Mr. Joe Average. Either way money in the future will be worth far less than money in the present to Mr. Joe Average, so the logical thing to do is cheerfully take on a crushing debt. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 18 21:48:13 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 13:48:13 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> . On Behalf Of John Clark Subject: Re: [ExI] Time magazine cover story on the singularity On Feb 18, 2011, at 12:33 PM, Darren Greer wrote: how you would make money off the singularity I don't know. >.I know how to make money off the singularity, sell everything you own and borrow every nickel you can and then use the money to short bonds.Either way money in the future will be worth far less than money in the present to Mr. Joe Average, so the logical thing to do is cheerfully take on a crushing debt. John K Clark John the US is doing exactly that. When anyone points out the craziness of this, we respond with a collective "It doesn't matter, the singularity is coming." spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From darren.greer3 at gmail.com Fri Feb 18 23:27:36 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 19:27:36 -0400 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: > cheerfully take on a crushing debt.< Being a visionary, as always, I'm way ahead of you. d. 2011/2/18 John Clark > On Feb 18, 2011, at 12:33 PM, Darren Greer wrote: > > how you would make money off the singularity I don't know. > > > I know how to make money off the singularity, sell everything you own and > borrow every nickel you can and then use the money to short bonds. But you > will have to wait until we get to the point where even Mr. Joe Average > expects the singularity to happen in his lifetime. When that happens we can > expect a HUGE increase in interest rates, because after the singularity one > of 2 things is certain to happen: > > 1) Paying off that huge debt will be easy with Mr. Joe Average being the > master of Nanotechnology. > > 2) The singularity will kill Mr. Joe Average. > > Either way money in the future will be worth far less than money in the > present to Mr. Joe Average, so the logical thing to do is cheerfully take on > a crushing debt. > > John K Clark > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 19 00:00:29 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 20:00:29 -0400 Subject: [ExI] Call To Libertarians Message-ID: I understand there are some libertarians in this group. I am currently embroiled in an e-mail discussion where I find myself in a rather unique (for me) position of defending free markets and smaller government. I am a Canadian, and a proponent of socialized democracy. However, I'm not naive enough to think that full-stop socialization is a good idea. We tried that once, in the Soviet Union, and it didn't work so well. I recognize the need for competition to drive development and promote innovation. So, being a fan of balance, I'm trying to come up with some arguments that a libertarian might give while explaining why that system of could benefit mankind, especially in relation to the development of technology and the philosophies of transhumanism. Problem is, I'm not very good at it. Anyone wanna give my their opinions on this? I will not plagiarize you. I've already stated in this discussion that I will ask some people and get back to them. It's not necessary that I win the argument, but I do think that my beliefs and preferences are simply points of view, and no better (nor worse) than those of others. This may be the point that I'm trying to make -- that libertarians are not by definition inarticulate right wingers or rabid anarchists, which seems to be the point of view of this group I'm talking with. 
Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From algaenymph at gmail.com Fri Feb 18 23:36:10 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Fri, 18 Feb 2011 15:36:10 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: <4D5F026A.9010106@gmail.com> On 2/18/11 1:06 PM, John Clark wrote: > Either way money in the future will be worth far less than money in > the present to Mr. Joe Average, so the logical thing to do is > cheerfully take on a crushing debt. That's the sort of think that got me bitched at for advocating a "passive religion" of blind faith as opposed to an "active religion" of thoughtful questioning. Note that his argument consisted of ZOMYGAWD TEH EVULRICH PEOPLE!!1! And my "friends" just sat aside and watched. From olga.bourlin at gmail.com Sat Feb 19 02:19:21 2011 From: olga.bourlin at gmail.com (Olga Bourlin) Date: Fri, 18 Feb 2011 18:19:21 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) 2011/2/18 Darren Greer : > I understand there are some libertarians in this group. > I am currently embroiled in an e-mail discussion where I find myself in a > rather unique (for me) position of defending free markets and smaller > government. I am a Canadian, and a proponent of socialized democracy. > However, I'm not naive enough to think that full-stop socialization is a > good idea. We tried that once, in the Soviet Union, and it didn't work so > well. I recognize the need for competition to drive development and promote > innovation. > So, being a fan of balance, I'm trying to come up with some arguments that a > libertarian might give while explaining why that system of ?could benefit > mankind, especially in relation to the development of technology and the > philosophies of transhumanism. > Problem is, I'm not very good at it. Anyone wanna give my their opinions on > this? I will not plagiarize you. I've already stated in this discussion that > I will ask some people and get back to them. It's not necessary that I win > the argument, but I do think that my beliefs and preferences are simply > points of view, and no better (nor worse) than those of others. This may be > the point that I'm trying to make -- that libertarians are not by definition > inarticulate right wingers or rabid anarchists, which seems to be the point > of view of this group I'm talking with. > Darren > > -- > There is no history, only biography. > -Ralph Waldo Emerson > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at att.net Sat Feb 19 02:56:34 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 18:56:34 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> ... 
On Behalf Of Olga Bourlin Subject: Re: [ExI] Call To Libertarians Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) Somalia is an example of anarchy, Olga, not libertarian. Two very different things. spike From moulton at moulton.com Sat Feb 19 05:34:57 2011 From: moulton at moulton.com (F. C. Moulton) Date: Fri, 18 Feb 2011 21:34:57 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> Message-ID: <4D5F5681.3040000@moulton.com> Not exactly. First Somalia is not an anarchy in the most strict sense of the word. There is a recognized government but it only controls a very small part of the country. The rest of the country suffers from a civil war A civil war which has gone on for about two decades. What you have is not an anarchy (ie no government) rather you have is more than one group fighting it out to become sole government in Somalia. To refer to Somalia as the Libertarian Paradise makes about as much sense as referring to Cambodia under the Khmer Rouge as a government paradise. Fred spike wrote: > ... On Behalf Of Olga Bourlin > Subject: Re: [ExI] Call To Libertarians > > Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) > > Somalia is an example of anarchy, Olga, not libertarian. Two very different > things. spike > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From eugen at leitl.org Sat Feb 19 06:18:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 07:18:32 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: <20110219061831.GT23560@leitl.org> On Fri, Feb 18, 2011 at 07:49:45PM +0000, BillK wrote: > I agree with Eugene's claim that there are sub-groups and third world > nations that to-date still have high birth rates and growing > populations. But it is to be expected that these high birth rates will > only continue while their women remain subjugated under male > domination. How long that will last is questionable. I am sorry, the trend is unfortunately that the responsible, self-limiting folks are eventually self-selecting into invisibility http://www.scientificamerican.com/blog/post.cfm?id=gods-little-rabbits-religious-peopl-2010-12-22 > That is why I disagree strongly that advanced civilizations will be > breeding like rabbits. The 'advanced' part means low reproduction by > definition. This is why you never meet the 'advanced'. Only the other kind, who doesn't care about your orderly world view. (The Indians sure got a nasty surprise). The US was colonized by people wielding diseases, guns and religions. The Amish have no issues using photovoltaics they don't make. I know it's a hard concept to grasp, but evolution doesn't have a built-in direction. If sentience is holding you back, you will lose it over time and space. It's not a being sitting in a spaceship, it's one single beastie. It's only as smart as it needs to be. > If a civilization is busy breeding furiously and fighting for survival > with other breeders, they have no spare capacity to get 'advanced'. > Too many mouths to feed. It must be hard to live in a http://www.youtube.com/watch?v=Ur3CQE8xB3c world. 
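A rough sketch of the compounding arithmetic this exchange turns on. Every figure in it is invented for illustration (a 95% low-fertility majority at 1.3 children per woman, a 5% minority at 3.0, perfect retention of children in their parents' group, and 2.1 treated as the replacement level); none of these numbers come from the posts above, and real demography adds migration, defection and mortality.

# Toy compounding-fertility sketch; every number below is hypothetical.
def project(share_low=0.95, tfr_low=1.3, tfr_high=3.0, generations=6):
    replacement = 2.1          # conventional replacement-level fertility
    low, high = share_low, 1.0 - share_low
    for g in range(generations + 1):
        print(f"generation {g}: high-fertility share = {high / (low + high):.1%}")
        low *= tfr_low / replacement    # below replacement: shrinks each generation
        high *= tfr_high / replacement  # above replacement: grows each generation

project()   # under these assumptions the 5% minority passes 50% by about generation 4

Whether retention really stays that high is exactly what is being disputed here; the arithmetic only shows how fast the composition shifts if it does.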
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Sat Feb 19 06:28:33 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 07:28:33 +0100 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> Message-ID: <20110219062833.GU23560@leitl.org> On Fri, Feb 18, 2011 at 01:48:13PM -0800, spike wrote: > John the US is doing exactly that. When anyone points out the craziness of > this, we respond with a collective "It doesn't matter, the singularity is > coming." The last stock market Singularity .bombed quite nicely, as you'll recall. A lot of people, particularly on this list really thought that was it, and bought in overproportionally. The problem with exponential growth in a limited resource world is that it frequently looks like http://en.wikipedia.org/wiki/Bacterial_growth We've just left exponential phase and are slowly entering stationary phase. The challenge for this particular culture is to break open this particular Petri dish while they're still able. From eugen at leitl.org Sat Feb 19 06:39:51 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 07:39:51 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <20110219063951.GW23560@leitl.org> On Fri, Feb 18, 2011 at 08:00:29PM -0400, Darren Greer wrote: > the point that I'm trying to make -- that libertarians are not by definition > inarticulate right wingers or rabid anarchists, which seems to be the point I wish I could help you, but as a rabid anarchist I unfortunately can't. > of view of this group I'm talking with. From spike66 at att.net Sat Feb 19 06:46:05 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 22:46:05 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <20110219062833.GU23560@leitl.org> References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org> Message-ID: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> ... On Behalf Of Eugen Leitl Subject: Re: [ExI] Time magazine cover story on the singularity On Fri, Feb 18, 2011 at 01:48:13PM -0800, spike wrote: >> John the US is doing exactly that. When anyone points out the >> craziness of this, we respond with a collective "It doesn't matter, >> the singularity is coming." >The last stock market Singularity .bombed quite nicely, as you'll recall. How well I recall. I am just getting back to where I was back in those heady days. >A lot of people, particularly on this list really thought that was it, and bought in overproportionally... We thought it was the technocalypse. I did anyway. Then the stock market crashed. It wasn't until 9/11/01 that many of us realized we still have yet another world war to fight, and this one may be worse than the three we had in the 20th century. >We've just left exponential phase and are slowly entering stationary phase. 
The challenge for this particular culture is to break open this particular Petri dish while they're still able. We are able. The question is will we break out while we are still willing. spike From moulton at moulton.com Sat Feb 19 07:23:52 2011 From: moulton at moulton.com (F. C. Moulton) Date: Fri, 18 Feb 2011 23:23:52 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <4D5F7008.7020406@moulton.com> Darren Greer wrote: > I understand there are some libertarians in this group. There are some people who are libertarians and then there are some who use the term with little or no comprehension of libertarian history and philosophy. I have at least a modest understanding of libertarian history and philosophy so I will attempt to provide a few comments before I get so tired that I fall asleep. > I am currently embroiled in an e-mail discussion where I find myself > in a rather unique (for me) position of defending free markets and > smaller government. I am a Canadian, and a proponent of socialized > democracy. However, I'm not naive enough to think that full-stop > socialization is a good idea. We tried that once, in the Soviet Union, > and it didn't work so well. I recognize the need for competition to > drive development and promote innovation. I will note that libertarian philosophy covers more that just economic systems and that economics is not "starting point" of the libertarian philosophy. In my comments below I will l attempt to show how this develops. > > So, being a fan of balance, I'm trying to come up with some arguments > that a libertarian might give while explaining why that system of > could benefit mankind, especially in relation to the development of > technology and the philosophies of transhumanism. First a couple of high level points. There are those who hold the position that libertarianism properly understood is anarchism. There are those who hold the position that libertarianism can be either anarchism or a very limited government sometimes called a "night watchman" government. I personally have never seen a convincing argument that the limited government position is intellectually defensible. However I will attempt to provide some insight into it as best I can. As I mentioned above it should be noted that libertarian thought has often historically been divided into the "moralist" derived branch and the "consequentialist" derived branch. Usually on any particular question these two are in agreement but not always. There is not enough space to go into the details here; just be aware of it. Now to get to your specific question about development of technology. One important aspect is the allocation of resources and the knowledge gained from markets. This is the point that Hayek and others have made over the years. One major difficulty arises when government control of part or all of an economic system distorts the feedback loops and can contribute to unwanted outcomes. Thus if a regulator keeps interest rates artificially low that might make the economy rev up just like consuming too much coffee and doughnuts can get a person revved up. But when the caffeine and sugar wear off then that is when the headache arrives. This is not to say that the government is responsible for every economic problem; certainly many people do foolish things on their own. There is not utopia. However I think that a strong argument can be made that we are better off without the distortions inherent in government regulation. 
And also note that the recent financial mess did not occur in a regulatory vacuum; there were many government institutions around supposedly watching things; everyone from the SEC to the FDIC to the FED. People went to the SEC on more than one occasion and told them that Madoff was not proper but the SEC did their investigation and said there was nothing wrong. Now to be honest it should be noted that the regulatory failure we saw in the past few years is not in and of itself a conclusive argument against all regulation; it can be used at most as an argument against regulation which is not done adequately. Thus the foolish (and in same cases criminal) actions of some business on the one hand and the problems of poor regulation on the other hand are not in and of themselves sufficient to serve as a complete argument for either increased regulation or a complete free market. It is all much more complicated and nuanced but hopefully I have at least given a flavor of some of the issues. On the topic of knowledge let me give a small example. Consider some business which is protected by various tariffs that keep out competition and the workers in the business have regulations which keep their wages high. The owner is happy because there is not much competition and the workers are enjoying the good live. But consider the knowledge problem. The children of the workers see how much their parents make and might decide to skip more education or training to go "work the assembly line with the parents" and get a really nice house because the wages are high and they can afford the mortgage. Then there is WTO ruling that the tariff must be dropped. The owner finds out that the business was not as efficient as previously thought and the workers soon realize that on the world market their labor is not worth what they had believed. Knowledge about the relative value of labor and when and how to allocate resources are some of the things which arise out of market activity. Of course this knowledge is not perfect. Many people might miss an opportunity until one person or group figures it out. That is the nature of human activity. When discussing 'free markets' it is important be on guard when someone points to a non-free market and refers to it as if it was. Too often persons who advocate government crony capitalism or mercantilism fraudulently use the term 'free market'. I think that the philosopher Roderick Long (see link below) is developing an interesting way of discussing this with his terminology of left-conflationism and right-conflationism. > Problem is, I'm not very good at it. Anyone wanna give my their > opinions on this? I will not plagiarize you. I've already stated in > this discussion that I will ask some people and get back to them. It's > not necessary that I win the argument, but I do think that my beliefs > and preferences are simply points of view, and no better (nor worse) > than those of others. This may be the point that I'm trying to make -- > that libertarians are not by definition inarticulate right wingers or > rabid anarchists, which seems to be the point of view of this group > I'm talking with. > There are no simple answers on this however let me point you to some additional sources of information. First I suggest avoiding stuff published by the Libertarian Party; occasionally they might put out something worthwhile but it unless you are well versed you can be misled. 
I do not agree in total with any of the follow but I can nitpick almost anything: As a general source of ideas on libertarianism and economics I find that David Friedman usually has an interesting take on things: http://daviddfriedman.com/ Roderick Long is a philosophy professor who has some interesting ideas and links to many others http://aaeblog.com/about-2/ The left-conflationism and right-conflationism discussion is in the following http://aaeblog.com/2010/12/26/how-to-do-things-with-words/ For libertarian history I recommend the podcast series (also available as transcripts) by my friend Jeff Riggenbach. Jeff has covers some very interesting topics and they are easy to listen to when out for a walk: http://mises.org/media.aspx?action=category&ID=208 And while it is not totally libertarian I find that EconTalk is an interesting set of podcasts on economics as well as occasional discussions of biology and other areas: http://www.econtalk.org/ In particular this podcast might answer some of your questions http://www.econtalk.org/archives/2010/10/ridley_on_trade.html Also this is an interesting discussion of the recent financial mess http://www.econtalk.org/archives/2010/05/roberts_on_the_2.html And there is the always interesting Marginal Revolution http://www.marginalrevolution.com/ I hope this info is helpful. Fred > Darren > > -- > /There is no history, only biography./ > / > / > /-Ralph Waldo Emerson > / > > > ------------------------------------------------------------------------ > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From jonkc at bellsouth.net Sat Feb 19 07:38:45 2011 From: jonkc at bellsouth.net (John Clark) Date: Sat, 19 Feb 2011 02:38:45 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> Message-ID: <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net> On Feb 18, 2011, at 9:56 PM, spike wrote: > Somalia is an example of anarchy, Olga, not libertarian. Two very different things. Somalia is an example of chaos, anarchy just means lack of government. Chaos necessarily implies anarchy but anarchy does not necessarily imply chaos. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Feb 19 07:47:28 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 19 Feb 2011 08:47:28 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: Hi Darren, I am a big sympathizer of many libertarian ideas but I don't usually call myself a libertarian, and when I do I call myself a left libertarian, in the sense that I want the government out of my living room but I see nothing wrong if the government builds hospitals and highways. I am definitely _not_ a right winger. I can consider myself as a (_non_ rabid) anarchist who sees a small government (in the sense of a small management committee and not in the sense of a big dictatorship) as a necessary evil and a practical necessity in today's world. I guess the history of the development of the Internet shows the advantages of this approach. Public funding has been used at the beginning, but then there has been an exponential acceleration due to the absence of regulations and low entry barriers, which have permitted individual and small teams to participate in the development. 
The creativity of small spontaneous teams is always orders of magnitude higher than 9-to-5 workers in large companies. 2011/2/19 Darren Greer : > I understand there are some libertarians in this group. > I am currently embroiled in an e-mail discussion where I find myself in a > rather unique (for me) position of defending free markets and smaller > government. I am a Canadian, and a proponent of socialized democracy. > However, I'm not naive enough to think that full-stop socialization is a > good idea. We tried that once, in the Soviet Union, and it didn't work so > well. I recognize the need for competition to drive development and promote > innovation. > So, being a fan of balance, I'm trying to come up with some arguments that a > libertarian might give while explaining why that system of ?could benefit > mankind, especially in relation to the development of technology and the > philosophies of transhumanism. > Problem is, I'm not very good at it. Anyone wanna give my their opinions on > this? I will not plagiarize you. I've already stated in this discussion that > I will ask some people and get back to them. It's not necessary that I win > the argument, but I do think that my beliefs and preferences are simply > points of view, and no better (nor worse) than those of others. This may be > the point that I'm trying to make -- that libertarians are not by definition > inarticulate right wingers or rabid anarchists, which seems to be the point > of view of this group I'm talking with. > Darren > > -- > There is no history, only biography. > -Ralph Waldo Emerson > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From moulton at moulton.com Sat Feb 19 08:08:30 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 00:08:30 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net> Message-ID: <4D5F7A7E.2080406@moulton.com> John Clark wrote: > On Feb 18, 2011, at 9:56 PM, spike wrote: >> Somalia is an example of anarchy, Olga, not libertarian. Two very >> different things. > Somalia is an example of chaos, anarchy just means lack of government. > Chaos necessarily implies anarchy but anarchy does not necessarily > imply chaos. Actually Chaos does not necessarily imply the lack of government (ie anarchy) since chaos can exist along side a government. And occasionally governments are the source of the chaos. Fred From pharos at gmail.com Sat Feb 19 07:57:47 2011 From: pharos at gmail.com (BillK) Date: Sat, 19 Feb 2011 07:57:47 +0000 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org> <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> Message-ID: On Sat, Feb 19, 2011 at 6:46 AM, spike wrote: > How well I recall. ?I am just getting back to where I was back in those > heady days. > We thought it was the technocalypse. ?I did anyway. ?Then the stock market > crashed. 
It wasn't until 9/11/01 that many of us realized we still have yet > another world war to fight, and this one may be worse than the three we had > in the 20th century. But it wasn't an accident, Spike. It was deliberate. And they are doing it again. The transfer of the nation's wealth into a very few hands is progressing as planned. Make sure you cash in this time before the next collapse. Wall Street makes money on the way up and on the way down. Mere mortals have much less choice. (As well as getting told to fight wars to protect the wealth of the rich). BillK

From darren.greer3 at gmail.com  Sat Feb 19 11:37:03 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Sat, 19 Feb 2011 07:37:03 -0400
Subject: [ExI] Call To Libertarians
In-Reply-To:
References:
Message-ID:

Thanks for your responses. Special thanks to Fred for the run-down and the links. I will read them carefully. The Somalia remark is exactly the type of over-simplification that I've been dealing with in the other discussion. One guy said libertarians were people who read Ayn Rand as a teenager and grew up to be self-centered jerks. But even a quick survey of it on the 'net revealed to me that it is a diverse, coherent and extensive set of beliefs, philosophies and principles that cannot easily be dismissed with a simple one-liner. The older I get the less likely I am to denigrate something because I disagree with it. First I'll try to understand it, and then maybe I'll come up with a one-liner.
On Behalf Of Olga Bourlin > Subject: Re: [ExI] Call To Libertarians > > Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) > > Somalia is an example of anarchy, Olga, not libertarian. Two very different > things. spike Only different to those who cannot understand the inevitable end-point of libertarianism. :-) Excellent example, Olga! Richard Loosemore [ducks beneath parapet to get out of the way of incomings] From darren.greer3 at gmail.com Sat Feb 19 15:17:19 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 11:17:19 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D5FCF58.4020407@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: >Only different to those who cannot understand the inevitable end-point of libertarianism.< Just as the end-point of democracy is a stagnant bureaucratic state? The end-point of capitalism is fascism and plutocracy? The end-point of socialism is military dictatorship? The end-point of any system is a situation of extremes and therefore not desirable. When I asked the question I made the assumption that was understood. I was looking for a bit of a nuanced interpretation, much like the one Fred gave. I understand that political discourse tends to evoke passionate responses, but I should have made myself clearer: I was looking for an intellectual response, not a politicized, emotive one. My error. Darren On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore wrote: > spike wrote: > >> ... On Behalf Of Olga Bourlin >> Subject: Re: [ExI] Call To Libertarians >> >> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) >> >> Somalia is an example of anarchy, Olga, not libertarian. Two very >> different >> things. spike >> > > Only different to those who cannot understand the inevitable end-point of > libertarianism. :-) > > Excellent example, Olga! > > > Richard Loosemore > > [ducks beneath parapet to get out of the way of incomings] > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 19 15:26:01 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 11:26:01 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: Sorry. That last post sounded a bit harsh and I'm not usually so confrontational. What I'm saying I guess is that, if Somalia is considered by some to be libertarian, why is that so? Which brand of libertarian is it? What is the movement's history in that country? Why did the writer think it was so? What arguments for? What arguments against? Asking a lot I know, and I'm doing my own research. But I don't find much on the libertarian view regarding technology on-line and I thought some of you might have some interesting things to say about that. Recall: I am most certainly not a libertarian. But I am interested in political systems, especially as they relate to transhumanism. I noticed when I first came here that economic issues were especially important to this group, because progress depends upon them. And economy is difficult to discuss without bring in at least some politics. There. I've back-pedaled enough. 
:) d. On Sat, Feb 19, 2011 at 11:17 AM, Darren Greer wrote: > >Only different to those who cannot understand the inevitable end-point of > libertarianism.< > > Just as the end-point of democracy is a stagnant bureaucratic state? The > end-point of capitalism is fascism and plutocracy? The end-point of > socialism is military dictatorship? > > The end-point of any system is a situation of extremes and therefore not > desirable. When I asked the question I made the assumption that was > understood. I was looking for a bit of a nuanced interpretation, much like > the one Fred gave. I understand that political discourse tends to evoke > passionate responses, but I should have made myself clearer: I was looking > for an intellectual response, not a politicized, emotive one. My error. > > > Darren > > On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore wrote: > >> spike wrote: >> >>> ... On Behalf Of Olga Bourlin >>> Subject: Re: [ExI] Call To Libertarians >>> >>> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;) >>> >>> Somalia is an example of anarchy, Olga, not libertarian. Two very >>> different >>> things. spike >>> >> >> Only different to those who cannot understand the inevitable end-point of >> libertarianism. :-) >> >> Excellent example, Olga! >> >> >> Richard Loosemore >> >> [ducks beneath parapet to get out of the way of incomings] >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > -- > *There is no history, only biography.* > * > * > *-Ralph Waldo Emerson > * > > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Feb 19 15:34:48 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 07:34:48 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D5FCF58.4020407@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: <005901cbd04a$8ad986a0$a08c93e0$@att.net> ... On Behalf Of Richard Loosemore ... Subject: Re: [ExI] Call To Libertarians ... > >> Somalia is an example of anarchy, Olga, not libertarian. Two very different things. spike >Only different to those who cannot understand the inevitable end-point of libertarianism. :-) >Richard Loosemore The description of complex systems cannot be reduced to a bumper sticker. But this is one rare example of a case where the refutation can *almost* be bumper-sticker-ized: Chaos is the endpoint not of libertarianism but rather the endpoint of its opposite, totalitarianism. spike From giulio at gmail.com Sat Feb 19 15:30:38 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 19 Feb 2011 16:30:38 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: Very well said Darren. I usually distrust "pure" political ideologies because they tend to degenerate into fundamentalist extremes. I think there is no magic bullet, one-size-fits-all theoretical solution, and I am much more interested in pragmatic, workable and flexible solutions to actual problems. 2011/2/19 Darren Greer : >>Only different to those who cannot understand the inevitable end-point of >> libertarianism.< > Just as the end-point of democracy is a stagnant bureaucratic state? The > end-point of capitalism is fascism and plutocracy? 
The end-point of > socialism is military dictatorship? > The end-point of any system is a situation of extremes and therefore not > desirable. When I asked the question I made the assumption that was > understood. I was looking for a bit of a nuanced interpretation, much like > the one Fred gave. I understand that political discourse tends to evoke > passionate responses, but I should have made myself clearer: I was looking > for an intellectual response, not a politicized, emotive one. My error. > > Darren > > On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore > wrote: >> >> spike wrote: >>> >>> ... On Behalf Of Olga Bourlin >>> Subject: Re: [ExI] Call To Libertarians >>> >>> Darren, tell them to visit the Libertarian Paradise: ?SOMALIA. ;) >>> >>> Somalia is an example of anarchy, Olga, not libertarian. ?Two very >>> different >>> things. ?spike >> >> Only different to those who cannot understand the inevitable end-point of >> libertarianism. ?:-) >> >> Excellent example, Olga! >> >> >> Richard Loosemore >> >> [ducks beneath parapet to get out of the way of incomings] >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > -- > There is no history, only biography. > -Ralph Waldo Emerson > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From rpwl at lightlink.com Sat Feb 19 16:51:48 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 19 Feb 2011 11:51:48 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: <4D5FF524.7030103@lightlink.com> Darren Greer wrote: >>Only different to those who cannot understand the inevitable end-point > of libertarianism.< > > Just as the end-point of democracy is a stagnant bureaucratic state? The > end-point of capitalism is fascism and plutocracy? The end-point of > socialism is military dictatorship? > > The end-point of any system is a situation of extremes and therefore not > desirable. When I asked the question I made the assumption that was > understood. I was looking for a bit of a nuanced interpretation, much > like the one Fred gave. I understand that political discourse tends to > evoke passionate responses, but I should have made myself clearer: I was > looking for an intellectual response, not a politicized, emotive one. My > error. I think you mistake the seriousness behind my reply (and Olga's). Systems settle down into a balance of exchanges -- a state in which all the players locally are trying to get what they want in various ways, so that a situation emerges in which those players more or less accept a set of exchanges that satisfy them. Looking at the list of political systems you give above -- democracy, captialism, socialism etc. -- we can OBJECTIVELY ask questions about how those kinds of systems will settle down, given enough time. We cannot find perfectly good answers to our questions (or we would all be Hari Seldons), but we can do some "sanity checks" on the basic ideas in those systems. 
One sanity check (according to people like myself and, perhaps Olga (though I make no pretence to speak for her)) yields one glaring, massive difference between the fundamental philosophy held by most libertarians and the philosophies held by those who cheer for the other political philosophies that you list.

Libertarianism contains a glaring contradiction within it, which makes it clear that it could never actually work in practice, but would instead lead to Somalia-like anarchy and chaos. In what follows I will try to explain what I mean by this.

Libertarianism cherishes the idea that "government" should be reduced to the smallest possible size, and that individuals should take full responsibility for paying for -- or cheating others out of -- the things they need. But at the same time Libertarians also want the advantages of civilization. The problem is that the things that they want to cut or drastically reduce are the "commons" aspects of modern civilisation .... all those aspects that have to do with people coming together and realizing that it is in everyone's best interest if the community is forced to pool their resources to pay for things like roads and theaters and bridges and schools and police forces.

The core of the contradiction is that what the Libertarian wants to do is LOCALLY sensible, but globally crazy. From the point of view of the individual libertarian, nothing but good can come from getting the government out of their wallet. Every libertarian on the planet would see an immediate increase in their well-being if that happened. But that increase in their well-being is predicated on the assumption that nothing else changes in the society around them: that all the balances and exchanges now established continue to operate as before. If society continues to operate as normal, the local well-being of every libertarian is immensely increased, without a shadow of a doubt, but that is only true if everything else continues to run as it always has done.

The mistake -- the glaring contradiction -- is this assumption that everything else will stay just as it is while all the libertarians are counting the new money in their pocket, and setting up their own private arrangements to pay for healthcare, to pay road tolls on every street, to hire private police forces to look after them, to pay for their kids to go to school, to pay for a snow plow to come visit their street in the winter, and so on. Why is this assumption wrong? Because the entire edifice of modern civilisation is built on that assumption about taxation and pooling of resources for the common good. Taxation and government and redistribution of wealth are what separate us from the dark ages. The concept of taxation + government + redistribution of wealth was the INCREDIBLE INVENTION that allowed human societies in at least one corner of this planet to emerge from feudal societies where everyone looked after themselves and the devil took the hindmost.

This fact about libertarianism is so easy to model that the conclusion about "SOMALIA == the Libertarian Paradise" is almost a no-brainer. What I mean by "easy to model" is that when we try to understand the end point of other political philosophies it really is pretty hard to see exactly where they will go. But in the case of libertarianism, it only takes a few questions to start revealing that terrifying, inevitable slide toward feudalism.
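The "easy to model" claim above can be made concrete with the standard public-goods game from experimental economics. This is a minimal sketch of that generic game, not a model specified anywhere in this thread; the payoff numbers (100 players, a contribution of 1, a multiplier of 3) are assumptions chosen purely for illustration.

# Toy public-goods game; all payoff parameters are hypothetical.
def payoffs(n_players=100, contribution=1.0, multiplier=3.0, defectors=0):
    # Each contributor pays into a common pool; the pool is multiplied
    # (the public good is worth more than the sum of the payments) and
    # then shared equally by every player, contributor or not.
    contributors = n_players - defectors
    pool = contributors * contribution * multiplier
    share = pool / n_players
    return share - contribution, share   # (net payoff to a contributor, to a defector)

print("everyone pays in:  contributor nets", payoffs(defectors=0)[0])
print("lone free-rider:   defector nets", payoffs(defectors=1)[1],
      "vs contributor", round(payoffs(defectors=1)[0], 2))
print("nobody pays in:    everyone nets", payoffs(defectors=100)[1])

The structure, not the numbers, is the point: the lone free-rider does better than any contributor, while the all-defect outcome leaves everyone worse off than universal contribution.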
The questions we would ask are questions about what exactly would happen when all the libertarians set up accounts to pay for their toll-roads, healthcare, schools, snow plows etc. etc., but the vast underbelly of modern society cannot do the same because they do not have the resources. Questions about what directions the private police forces would go when they have a client base that they must make happy, rather than a hierarchy that goes up to the nation-state level. And so on. We can model those local changes quite easily because we have plenty of examples of what happens when those circumstances are set up. So in the case of libertarianism, the answers to those questions are really REALLY easy to come up with, and they all point toward anarchy and feudalism. There are simply no good answers to those questions (i.e. no answers that clearly demonstrate that there is a way to push the system toward a stable state). This is the reason why the world has had, over the years, plenty of "democracies", "stagnant bureaucratic states", "capitalist states", "fascist states", "plutocracies", "socialist states" and "military dictatorships" ...... but not one "libertarian state". Or rather, according to the analysis of those who have thought about it in an objective way, the world HAS had many libertarian states: they were all the rage in the dark ages, and they are now springing up like wild mushrooms in a bog, in places like Somalia. So, those were really not just shallow comments that I made, and that Olga made, for all that they were delivered with a wry smile. There is a difference between the searches for an end-point of all the various political philosophies: libertarianism is a glaringly obvious "locally-smart + globally dumb" philosophy, whereas the others are all much much harder to call. Richard Loosemore From lubkin at unreasonable.com Sat Feb 19 17:10:25 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sat, 19 Feb 2011 12:10:25 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Darren wrote: >I understand there are some libertarians in this group. It's surreal to read this. I was one of the earliest of subscribers to the original extropian list, twenty or so years ago. I was delighted to join and help build a community that shared so many of my (even then) long-standing interests. One of the ideas was that it was a place where we didn't have to defend or explain the fundamentals. And the dominant sentiment was that anarcho-capitalist libertarianism was one of them. I recognize the drift from that here over the years, and the reasons for it, but your posting still feels weird. Like someone saying "I understand there are some Jews in Israel." I guess the paleo-extropian label is appropriate; it's easy to feel like a living fossil. -- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From rpwl at lightlink.com Sat Feb 19 17:10:22 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 19 Feb 2011 12:10:22 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <005901cbd04a$8ad986a0$a08c93e0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net> Message-ID: <4D5FF97E.2080006@lightlink.com> spike wrote: > ... On Behalf Of Richard Loosemore > ... > Subject: Re: [ExI] Call To Libertarians > ... >>> Somalia is an example of anarchy, Olga, not libertarian. Two very > different things. 
spike > >> Only different to those who cannot understand the inevitable end-point of > libertarianism. :-) > >> Richard Loosemore > > The description of complex systems cannot be reduced to a bumper sticker. > But this is one rare example of a case where the refutation can *almost* be > bumper-sticker-ized: > > Chaos is the endpoint not of libertarianism but rather the endpoint of its > opposite, totalitarianism. Factually inaccurate, I would say: Example 1: Soviet Union (totalitarian) -> Boris Yeltsin (short interregnum) -> Russia Under Putin (totalitarianism again). Example 2: Iran under Shah (totalitarian) -> Revolution (short interregnum) -> Iran under the Mullahs (totalitarianism again). Example 3: Iraq under Saddam Hussein (totalitarian) -> US Invasion Period (short interregnum) -> Iraq under Corrupt Shia Government with Rigged Elections (totalitarianism again, or heading fast in that direction). Example 4: Germany under Hitler (totalitarian) -> 2nd World War (long interregnum during which GDR was totalitarian and West Germany was deomcratic) -> Eventually United Germany (Democracy). This is really not looking good for your bumper sticker. Richard Loosemore From spike66 at att.net Sat Feb 19 17:45:34 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 09:45:34 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D5FF524.7030103@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> Message-ID: <000001cbd05c$d0092520$701b6f60$@att.net> >... On Behalf Of Richard Loosemore >... people coming together and realizing that it is in everyone's best interest if the community is forced to pool their resources to pay for things like roads and theaters and bridges and schools and police forces... Indeed? The critical difference in my thinking and yours is found in this one sentence. People coming together for roads, bridges, schools and police, yes. Theatres? No. That is exclusively the domain of private industry, and the root of the tension between libertarian and statist. It is not in everyone's best interest to pool resources to build theatres. >... the conclusion about "SOMALIA == the Libertarian Paradise" is almost a no-brainer... Richard Loosemore You said it, not me. Somalia is the criminal's paradise, not the libertarian's. spike From eugen at leitl.org Sat Feb 19 18:20:52 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 19:20:52 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: <20110219182052.GD23560@leitl.org> On Sat, Feb 19, 2011 at 12:10:25PM -0500, David Lubkin wrote: > Darren wrote: > >> I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the earliest of subscribers to > the original extropian list, twenty or so years ago. I was delighted to Does the list go back to 1990, or was there a dialup BBS before? It's too bad we cannot read the early archives, but I understand why. > join and help build a community that shared so many of my (even then) > long-standing interests. One of the ideas was that it was a place where > we didn't have to defend or explain the fundamentals. And the dominant > sentiment was that anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, and the reasons for > it, but your posting still feels weird. 
Like someone saying "I > understand there are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; it's easy to feel like > a living fossil. It's nice to be a part of one of the longer-lived Internet communities. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Sat Feb 19 18:33:29 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:33:29 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219182052.GD23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: One of the ideas was that it was a place where > we didn't have to defend or explain the fundamentals. And the dominant > sentiment was that anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, I'm a newcomer to the group, David. Only a year, and like most, I came by drawing my own conclusions based on experience and observation and so by the time I got here I knew some of the fundamentals. The rest I learned quickly. The subtleties and incidentals, however, eluded me for some time and often still do. Politics--and economics--are two elusive issues that are often obliquely referenced here that I still haven't got a handle on. One of the first threads I became interested in was on patent and intellectual property rights, and though no one informed me this group used to have a libertarian bent, I could certainly sense the tendency in some of those early discussions. I also understand that political discussions were for a time here verboten because of some messiness that had occurred in the past. I'm glad that's not the case now. I believe politics, and particularly the economic outlooks that come with them, could not be more relevant to the transhumanist schema, if we can be said to have one (or two, or three.) I'm glad the list has come to a place where we can discuss these things without acrimony or prejudice. For my part, I'm just trying to understand. d. On Sat, Feb 19, 2011 at 2:20 PM, Eugen Leitl wrote: > On Sat, Feb 19, 2011 at 12:10:25PM -0500, David Lubkin wrote: > > Darren wrote: > > > >> I understand there are some libertarians in this group. > > > > It's surreal to read this. I was one of the earliest of subscribers to > > the original extropian list, twenty or so years ago. I was delighted to > > Does the list go back to 1990, or was there a dialup BBS before? > > It's too bad we cannot read the early archives, but I understand > why. > > > join and help build a community that shared so many of my (even then) > > long-standing interests. One of the ideas was that it was a place where > > we didn't have to defend or explain the fundamentals. And the dominant > > sentiment was that anarcho-capitalist libertarianism was one of them. > > > > I recognize the drift from that here over the years, and the reasons for > > it, but your posting still feels weird. Like someone saying "I > > understand there are some Jews in Israel." > > > > I guess the paleo-extropian label is appropriate; it's easy to feel like > > a living fossil. > > It's nice to be a part of one of the longer-lived Internet communities. 
> > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Feb 19 18:33:52 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 19 Feb 2011 13:33:52 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <000001cbd05c$d0092520$701b6f60$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> Message-ID: <4D600D10.2090008@lightlink.com> spike wrote: >> ... On Behalf Of Richard Loosemore > >> ... people coming together and realizing that it is in everyone's best > interest if the community is forced to pool their resources to pay for > things like roads and theaters and bridges and schools and police forces... > > Indeed? The critical difference in my thinking and yours is found in this > one sentence. People coming together for roads, bridges, schools and > police, yes. Theatres? No. That is exclusively the domain of private > industry, and the root of the tension between libertarian and statist. It > is not in everyone's best interest to pool resources to build theatres. The inclusion of "theaters" was strictly optional: not essential to my argument. A throwaway. So let me see if I understand: you are saying that without the word "theater" in my description, what I said bore no resemblance to the philosophy of libertarianism? Would it be more accurate, then, to say that Libertarianism is about SUPPORTING the government funding of: Roads, Bridges, Police, Firefighters, Prisons, Schools, Public transport in places where universal use of cars would bring cities to a standstill, or where poor people would otherwise be unable to escape from ghettos, The armed forces, Universities, and publicly funded scholarships for poor students, National research laboratories like the Centers for Disease Control and Prevention, Snow plows, Public libraries, Emergency and disaster assistance, Legal protection for those too poor to fight against the exploitative power of corporations, Government agencies to scrutinize corrupt practices by corporations and wealthy individuals, Basic healthcare for old people who worked all their lives for corporations who paid them so little in salary that they could not save for retirement without starving to death before they reached retirement, And sundry other programs that keep the very poor just above the subsistence level, so we do not have to step over their dead bodies on the street all the time, and so they do not wander around in feral packs, looking for middle-class people that they can kill and eat... .... but it is about NOT supporting the government funding of theaters? In that case I misunderstood, and all western democracies are more or less libertarian already, give or take the 0.0001 percent of their funding that goes toward things like theaters and opera houses. 
Richard Loosemore From darren.greer3 at gmail.com Sat Feb 19 18:41:39 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:41:39 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D5FF524.7030103@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> Message-ID: Thanks Richard. I wasn't really dismissing the comments. Only the lack of explanation behind them. I don't think it's assuming too much to ask for an explanation of a wry comment to an earnest question. So thank you for providing that. Much food for thought. I turned on the TV shortly after this discussion got cooking and the first words I heard was "Somalian pirates." Thought that was coincidental and amusing. Darren On Sat, Feb 19, 2011 at 12:51 PM, Richard Loosemore wrote: > Darren Greer wrote: > >> Only different to those who cannot understand the inevitable end-point >>> >> of libertarianism.< >> >> Just as the end-point of democracy is a stagnant bureaucratic state? The >> end-point of capitalism is fascism and plutocracy? The end-point of >> socialism is military dictatorship? >> The end-point of any system is a situation of extremes and therefore not >> desirable. When I asked the question I made the assumption that was >> understood. I was looking for a bit of a nuanced interpretation, much like >> the one Fred gave. I understand that political discourse tends to evoke >> passionate responses, but I should have made myself clearer: I was looking >> for an intellectual response, not a politicized, emotive one. My error. >> > > I think you mistake the seriousness behind my reply (and Olga's). > > Systems settle down into a balance of exchanges -- a state in which all the > players locally are trying to get what they want in various ways, so that a > situation emerges in which those players more or less accept a set of > exchanges that satisfy them. > > Looking at the list of political systems you give above -- democracy, > captialism, socialism etc. -- we can OBJECTIVELY ask questions about how > those kinds of systems will settle down, given enough time. We cannot find > perfectly good answers to our questions (or we would all be Hari Seldons), > but we can do some "sanity checks" on the basic ideas in those systems. > > One sanity check (according to people like myself and, perhaps Olga (though > I make no pretence to speak for her)) yields one glaring, massive difference > between the fundamental philosophy held by most libertarians and the > philosophies held by those who cheer for the other political philosophies > that you list. > > Libertarianism contains a glaring contradiction within it, which makes it > clear that it could never actually work in practice, but would instead lead > to Somalia-like anarchy and chaos. In what follows I will try to explain > what I mean by this. > > Libertarianism cherishes the idea that "government" should be reduced to > the smallest possible size, and that individuals should take full > responsibility for paying for -- or cheating others out of -- the things > they need. But at the same time Libertarians also want the advantages of > civilization. The problem is, that the things that they want to cut or > drastically reduce are the "commons" aspects of modern civilisation .... 
all > those aspects that have to do with people coming together and realizing that > it is in everyone's best interest if the community is forced to pool their > resources to pay for things like roads and theaters and bridges and schools > and police forces. > > The core of the contradiction is that what the Libertarian wants to do is > LOCALLY sensible, but globally crazy. From the point of view of the > individual libertarian, nothing but good can come from getting the > government out of their wallet. Every libertarian on the planet would see > an immediate increase in their well-being if that happened. But that > increase in their well being is predicated on the assumption that nothing > else changes in the society around them: that all the balances and exchanges > now established continue to operate as before. If society continues to > operate as normal, the local well-being of every libertarian is immensely > increased, withiout a shadow of a doubt, but that is only true if everthing > else continues to run as it always has done. > > The mistake -- the glaring contradiction -- is this assumption that > everthing else will stay just as it is while all the libertarians are > counting the new money in their pocket, and setting up their own private > arrangements to pay for healthcare, to pay road tolls on every street, to > hire private police forces to look after them, to pay for their kids to go > to school, to pay for a snow plow to come visit their street in the winter, > and so on. Why is this assumption wrong? Because the entire edifice of > modern civilisation is built on that assumption about taxation and pooling > of resources for the common good. Taxation and government and > redistribution of wealth are what separate us from the dark ages. The > concept of taxation + government + redistribution of wealth was the > INCREDIBLE INVENTION that allowed human societies in at least one corner of > this planet to emerge from feudal societies where everyone looked after > themselves and the devil took the hindmost. > > This fact about libertarianism is so easy to model, that the conclusion > about "SOMALIA == the Libertarian Paradise" is almost a no-brainer. What I > mean by "easy to model" is that when we try to understand the end point of > other political philosophies it really is pretty hard to see exactly where > they will go. But in the case of libertarianism, it only takes a few > questions to start revealing that terrifying, inevitable slide toward > feudalism. The questions we would ask are questions about what exactly > would happen when all the libertarians set up accounts to pay for their > toll-roads, healthcare, schools, snow plows etc. etc., but the vast > underbelly of modern society cannot do the same because they do not have the > resources. Questions about what directions the private police forces would > go when they have a client base that they must make happy, rather than a > hierarchy that goes up to the nation-state level. And so on. We can model > those local changes quite easily because we have plenty of examples of what > happens when those circumstances are set up. > > So in the case of libertarianism, the answers to those questions are really > REALLY easy to come up with, and they all point toward anarchy and > feudalism. There are simply no good answers to those questions (i.e. no > answers that clearly demonstrate that there is a way to push the system > toward a stable state). 
> > This is the reason why the world has had, over the years, plenty of > "democracies", "stagnant bureaucratic states", "capitalist states", "fascist > states", "plutocracies", "socialist states" and "military dictatorships" > ...... but not one "libertarian state". > > Or rather, according to the analysis of those who have thought about it in > an objective way, the world HAS had many libertarian states: they were all > the rage in the dark ages, and they are now springing up like wild mushrooms > in a bog, in places like Somalia. > > So, those were really not just shallow comments that I made, and that Olga > made, for all that they were delivered with a wry smile. There is a > difference between the searches for an end-point of all the various > political philosophies: libertarianism is a glaringly obvious > "locally-smart + globally dumb" philosophy, whereas the others are all much > much harder to call. > > > > Richard Loosemore > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 19 18:46:42 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:46:42 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D600D10.2090008@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> Message-ID: >Theatres? No. That is exclusively the domain of private industry, and the root of the tension between libertarian and statist.< The U.S. government did during the 30's fund the theaters, in something called the national theatre program. It turned out to be too socialist for them, so they canned it. Canada has the Canadian Council which provides grants to professional artists for project creation. They don't care what you write as long as it's good. They've saved my butt a bunch a times. d. On Sat, Feb 19, 2011 at 2:33 PM, Richard Loosemore wrote: > spike wrote: > >> ... On Behalf Of Richard Loosemore >>> >> >> ... people coming together and realizing that it is in everyone's best >>> >> interest if the community is forced to pool their resources to pay for >> things like roads and theaters and bridges and schools and police >> forces... >> >> Indeed? The critical difference in my thinking and yours is found in this >> one sentence. People coming together for roads, bridges, schools and >> police, yes. Theatres? No. That is exclusively the domain of private >> industry, and the root of the tension between libertarian and statist. It >> is not in everyone's best interest to pool resources to build theatres. >> > > The inclusion of "theaters" was strictly optional: not essential to my > argument. A throwaway. > > So let me see if I understand: you are saying that without the word > "theater" in my description, what I said bore no resemblance to the > philosophy of libertarianism? 
> > Would it be more accurate, then, to say that Libertarianism is about > SUPPORTING the government funding of: > > Roads, > Bridges, > Police, > Firefighters, > Prisons, > Schools, > Public transport in places where universal use of cars would > bring cities to a standstill, or where poor people would > otherwise be unable to escape from ghettos, > The armed forces, > Universities, and publicly funded scholarships for poor students, > National research laboratories like the Centers > for Disease Control and Prevention, > Snow plows, > Public libraries, > Emergency and disaster assistance, > Legal protection for those too poor to fight against the > exploitative power of corporations, > Government agencies to scrutinize corrupt practices by > corporations and wealthy individuals, > Basic healthcare for old people who worked all their lives > for corporations who paid them so little in salary that > they could not save for retirement without starving to > death before they reached retirement, > And sundry other programs that keep the very poor just above > the subsistence level, so we do not have to step over their > dead bodies on the street all the time, and so they do not > wander around in feral packs, looking for middle-class people > that they can kill and eat... > > > .... but it is about NOT supporting the government funding of theaters? > > > In that case I misunderstood, and all western democracies are more or less > libertarian already, give or take the 0.0001 percent of their funding that > goes toward things like theaters and opera houses. > > > > > Richard Loosemore > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Feb 19 18:59:32 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 19 Feb 2011 19:59:32 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: I think the list has become more inclusive of soft libertarians, or left libertarians, who accept some degree of government and welfare (like me), but I like to think that personal freedom and self-ownership are still considered as fundamental values by most posters. On Sat, Feb 19, 2011 at 6:10 PM, David Lubkin wrote: > Darren wrote: > >> I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the earliest of subscribers to the > original extropian list, twenty or so years ago. I was delighted to join and > help build a community that shared so many of my (even then) long-standing > interests. One of the ideas was that it was a place where we didn't have to > defend or explain the fundamentals. And the dominant sentiment was that > anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, and the reasons for it, > but your posting still feels weird. Like someone saying "I understand there > are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; it's easy to feel like a > living fossil. > > > -- David. > > Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? 
Orkut > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From hkeithhenson at gmail.com Sat Feb 19 19:05:59 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Feb 2011 12:05:59 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Sat, Feb 19, 2011 at 1:27 AM, BillK wrote: > > On Fri, Feb 18, 2011 at 7:16 PM, Keith Henson ?wrote: >> Evolution had good reason to build in a strong drive to have sex. ?And >> in the pre birth control era that resulted in reproduction. >> >> It's also fairly clear to me that there is a drive directly for >> reproduction, especially in women. ?You only need to consider what one >> member who used to be on this group did to have an example. > > No. There isn't. You know who I am talking about? > If you look at the groups who have falling birth rates they correlate > *very* strongly with women's rights and the empowerment of women. As > soon as women get the power to choose they stop having children. Some > might have one child, but this is below the rate required to sustain > the population. Agree on the points of course. But if there was *no* direct drive for reproduction, they would have none. > You can also correlate falling birth rates with first world countries, > or 'civilization'. > Which also correlates with women's rights. > > I agree with Eugene's claim that there are sub-groups and third world > nations that to-date still have high birth rates and growing > populations. But it is to be expected that these high birth rates will > only continue while their women remain subjugated under male > domination. How long that will last is questionable. > > That is why I disagree strongly that advanced civilizations will be > breeding like rabbits. The 'advanced' part means low reproduction by > definition. > > If a civilization is busy breeding furiously and fighting for survival > with other breeders, they have no spare capacity to get 'advanced'. > Too many mouths to feed. > > BillK I don't think you are considering the future angles here. Cloning and gene editing for example, not to mention outright duplication. And if we have vastly longer lives, a low reproductive rate is a good idea. Keith From moulton at moulton.com Sat Feb 19 19:14:48 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:14:48 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219182052.GD23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: <4D6016A8.4090402@moulton.com> Eugen Leitl wrote: > Does the list go back to 1990, or was there a dialup BBS before? > > It's too bad we cannot read the early archives, but I understand > why. > > There have been various discussions about recovering the early archives. Some early posts were really great exploration of ideas. I think the Extropian Institute might have a complete (or near complete) archive however I understand that everyone is busy and the project never gets done. Just like the idea of scanning and posting all of the back issues of Extropy magazine. 
Fred From eugen at leitl.org Sat Feb 19 19:30:30 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 20:30:30 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <4D6016A8.4090402@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> Message-ID: <20110219193030.GJ23560@leitl.org> On Sat, Feb 19, 2011 at 11:14:48AM -0800, F. C. Moulton wrote: > There have been various discussions about recovering the early archives. Some people have complete early archives, or nearly-complete early archives. The problem is that the list was closed, for very good reasons, and it would be impossible to obtain retrograde consent from all the early participans, assuming they're even still around. And there will be definitely members objecting, for abovementioned good reasons. We just have to live with that, I guess. > Some early posts were really great exploration of ideas. I think the > Extropian Institute might have a complete (or near complete) archive > however I understand that everyone is busy and the project never gets done. > > Just like the idea of scanning and posting all of the back issues of > Extropy magazine. I'm helping with publishing historical cryonics documents, so if I can help with that (sadly, I have only a couple dead tree copies of Extropy magazine, never having been a regular member), I'd be happy to. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From moulton at moulton.com Sat Feb 19 19:36:42 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:36:42 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <000001cbd05c$d0092520$701b6f60$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> Message-ID: <4D601BCA.6020300@moulton.com> Here I have to disagree with both Spike and Loosemore spike wrote: >> ... On Behalf Of Richard Loosemore >> ... people coming together and realizing that it is in everyone's best >> interest if the community is forced to pool their resources to pay for >> things like roads and theaters and bridges and schools and police forces... I was going to write a long response but Spike quoted the key passage from the post. Consider the phrase "community is forced" to do things. The libertarian approach is "individuals and groups voluntarily" do things. If anyone want an over simplified bumper sticker summary of the libertarian approach; it is "Anything that is peaceful". > People coming together for roads, bridges, schools and > police, yes. Theatres? No. Spike I think you are fundamentally mistaken. There is no reason why roads, bridges, schools or police can not be created by non-governmental means. Fred From moulton at moulton.com Sat Feb 19 19:41:14 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:41:14 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219193030.GJ23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> Message-ID: <4D601CDA.90803@moulton.com> Eugen Leitl wrote: > And there will be definitely members objecting, for above mentioned > good reasons. 
We just have to live with that, I guess. > I was under the impression that early members who did not want their posts made public was relatively low but my impression was based on casual observation not on a rigorous survey. It would interesting to at least have the early posts of those who agreed to be make available. Fred From lubkin at unreasonable.com Sat Feb 19 20:15:14 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sat, 19 Feb 2011 15:15:14 -0500 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: <4D601CDA.90803@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> Message-ID: <201102192014.p1JKEeST027600@andromeda.ziaspace.com> The terms under which the original list functioned require permission of a posting's author before dissemination beyond that list's membership. It would, however, be legitimate to share one's archives with someone else who'd been on the list at the time of a posting, and I think to someone who joined that list after the date of the posting. Anything beyond that means finding folks and getting permissions. (One of the messy questions to deal with is what if Keith was replying to and quotes something Perry said. Keith gives permission; Perry doesn't.) I am now building systems for other communities I'm part of that have similar problems. I think what I'm doing will be readily adaptable to the original list archive issue. -- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From eugen at leitl.org Sat Feb 19 20:53:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 21:53:08 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <005901cbd04a$8ad986a0$a08c93e0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net> Message-ID: <20110219205308.GP23560@leitl.org> On Sat, Feb 19, 2011 at 07:34:48AM -0800, spike wrote: > Chaos is the endpoint not of libertarianism but rather the endpoint of its > opposite, totalitarianism. We don't want chaos nor crystalline order, we want the boundary in-between. The edge of chaos. http://www.necsi.edu/projects/baranger/cce.pdf etc. http://www.google.com/search?hl=en&q=%22edge+of+chaos%22+entropy From eugen at leitl.org Sat Feb 19 21:32:17 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 22:32:17 +0100 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> References: <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org> <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> Message-ID: <20110219213217.GT23560@leitl.org> On Fri, Feb 18, 2011 at 10:46:05PM -0800, spike wrote: > >The last stock market Singularity .bombed quite nicely, as you'll recall. > > How well I recall. I am just getting back to where I was back in those > heady days. Alas, the next Big One is round the corner. Or, rather, we're still in it. There's no timing the market, but if you're still in it at the time I hope you can afford writing it all off as gambling losses. > We thought it was the technocalypse. I did anyway. Then the stock market > crashed. 
It wasn't until 9/11/01 that many of us realized we still have yet > another world war to fight, and this one may be worse than the three we had > in the 20th century. "We've always been at war with Eastasia"? > The challenge for this particular culture is to break open this particular > Petri dish while they're still able. > > We are able. The question is will we break out while we are still willing. We're definitely able. Still. However, the launch window is slowly (or quickly) closing. The people look less and less up to the skies, unfortunately. I genuinely hope the private sector and new players in the developing world will take up the slack. Because, if we don't make it sometime soon, we're not going to make it at all. Not that we care, but our children will, definitely. From spike66 at att.net Sat Feb 19 21:50:13 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 13:50:13 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: <002401cbd07e$fce824c0$f6b86e40$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer . I also understand that political discussions were for a time here verboten because of some messiness that had occurred in the past. I'm glad that's not the case now. I believe politics, and particularly the economic outlooks that come with them, could not be more relevant to the transhumanist schema. d. We haven't really had a libertarian discussion here for a good while. In light of Darren's comments above, I propose a temporary open season on the specific topic of transhumanism and libertarianism. Free number of posts on all that for five days, and please don't let me down: post stuff that is well-reasoned, humor and even sarcasm allowed, but do keep it respectful and free of personal attack of those with differing or opposing political points of view. As the open season on Watson draws to a close, post away for a few days on "Call to Libertarians." I think we can handle this like transhumanists in which we may take pride. Play ball! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Feb 19 22:20:00 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Feb 2011 15:20:00 -0700 Subject: [ExI] Call To Libertarians Message-ID: On Sat, Feb 19, 2011 at 11:46 AM, David Lubkin wrote: > Darren wrote: > >>I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the > earliest of subscribers to the original extropian > list, twenty or so years ago. I was delighted to > join and help build a community that shared so > many of my (even then) long-standing interests. > One of the ideas was that it was a place where we > didn't have to defend or explain the > fundamentals. And the dominant sentiment was that > anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the > years, and the reasons for it, but your posting > still feels weird. Like someone saying "I > understand there are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; > it's easy to feel like a living fossil. Welcome to the club. :-) As I recall anarcho-capitalist libertarianism was just an underlying assumption. Libertarians come in a lot of flavors, personally I best fit the Space Cadet (Heinlein) variation. 
But as I recall, there was either relatively little discussion on the topic, or I just skipped the posts about it. Of course early days L5 Society members were something like 20% libertarian, and perhaps as high as 50% of the early cryonics members. Keith > -- David. > > Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut
From spike66 at att.net Sat Feb 19 22:08:58 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 14:08:58 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D600D10.2090008@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> Message-ID: <002901cbd081$9bb2b550$d3181ff0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Richard Loosemore Subject: Re: [ExI] Call To Libertarians spike wrote: >> ... On Behalf Of Richard Loosemore >The inclusion of "theaters" was strictly optional: not essential to my argument. A throwaway... Ja, that one caught my attention. If any government builds a theatre, that government dictates what is played there. >Would it be more accurate, then, to say that Libertarianism is about SUPPORTING the government funding of: Keep in mind that I differentiate between libertarianism and Libertarianism. One has a capital L. I use lower case. > Roads, yes > Bridges, yes > Police, yes > Firefighters, yes > Prisons, yes, but perhaps not the luxury outfits we see so commonly today. > Schools, yes > Public transport in places where universal use of cars would bring cities to a standstill yes, if the public transport is self-sustaining without (or perhaps minimal) government subsidy > The armed forces, yes > Universities, and publicly funded scholarships for poor students, Yes if by "poor students" you meant students with little money, as opposed to bad students. High SATers, yes. > National research laboratories like the Centers for Disease Control and Prevention yes > Snow plows, yes, operated by non-union drivers > Public libraries, yes > Emergency and disaster assistance; yes, > Legal protection for those too poor to fight against the exploitative power of corporations; no, let them take their trade elsewhere. > Government agencies to scrutinize corrupt practices by corporations and wealthy individuals, This might be OK if we balance it by having corporations which would scrutinize corrupt practices by government and poor individuals > Basic healthcare for old people who worked all their lives for corporations who paid them so little in salary that they could not save for retirement without starving to death before they reached retirement... yes > And sundry other programs that keep the very poor just above the subsistence level, so we do not have to step over their dead bodies on the street all the time, and so they do not wander around in feral packs, looking for middle-class people that they can kill and eat... {8^D Yes by all means. The whole feral pack thing never really did appeal. I am much too dignified to howl at the moon. Furthermore, all the middle-class people I know just don't look all that tasty to me. >... but it is about NOT supporting the government funding of theaters? Good, you scared me.
>...In that case I misunderstood, and all western democracies are more or less libertarian already, give or take the 0.0001 percent of their funding that goes toward things like theaters and opera houses. Richard Loosemore They could stand to be more libertarian. The US and Europe are likely to be headed that direction anyway, even if not by choice. They will go kicking and screaming, but the previous generation has devoured the seed grain. Now we must all face the consequences. spike From spike66 at att.net Sat Feb 19 23:44:56 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 15:44:56 -0800 Subject: [ExI] kepler results Message-ID: <003601cbd08f$0354a210$09fde630$@att.net> Ooohhhhh life is goooood: http://news.blogs.cnn.com/2011/02/19/scientists-pleasantly-surprised-by-numb er-of-earth-sized-distant-planets/?hpt=C2 The existence of many small planets in the galaxy that Kepler has found also amazed scientists, because there was a possibility that they would have been destroyed by larger planets long ago. "It was a wonderful surprise to see this large number of small planets we have found," Borucki said. {8-] spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sat Feb 19 23:48:34 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Feb 2011 17:48:34 -0600 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: They are but that does not make a person a libertarian. It makes a person an Extropian. I am an Extropian and NOT a libertarian!!@!!!@ Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio Prisco Sent: Saturday, February 19, 2011 1:00 PM To: ExI chat list Subject: Re: [ExI] Call To Libertarians I think the list has become more inclusive of soft libertarians, or left libertarians, who accept some degree of government and welfare (like me), but I like to think that personal freedom and self-ownership are still considered as fundamental values by most posters. On Sat, Feb 19, 2011 at 6:10 PM, David Lubkin wrote: > Darren wrote: > >> I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the earliest of subscribers to > the original extropian list, twenty or so years ago. I was delighted > to join and help build a community that shared so many of my (even > then) long-standing interests. One of the ideas was that it was a > place where we didn't have to defend or explain the fundamentals. And > the dominant sentiment was that anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, and the reasons > for it, but your posting still feels weird. Like someone saying "I > understand there are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; it's easy to feel > like a living fossil. > > > -- David. > > Easy to find on: LinkedIn . Facebook . Twitter . Quora . 
Orkut > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From darren.greer3 at gmail.com Sun Feb 20 00:30:32 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 20:30:32 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: Natasha wrote: > I am an Extropian and NOT a libertarian!!@!!!@< So does that mean Extropianism has a built-in political philosophy? I'm assuming it's not libertarianism, from the discussion here today. :) My original question was about the relation between libertarianism and technological progress in the transhumanist sense. I did the reading you all suggested. But is there a political and economic philosophy that transhumanists are more likely to champion because it furthers technological goals? I don't expect consensus on this. But I'm curious about it, and have been since I joined this group. We certainly have no problem discussing religion. I know, for example, that most of us are atheists. What about, as Spike suggested in his let's-have-an-open-season-on-libertarianism suggestion, politics? Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Feb 20 00:34:05 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 20:34:05 -0400 Subject: [ExI] kepler results In-Reply-To: <003601cbd08f$0354a210$09fde630$@att.net> References: <003601cbd08f$0354a210$09fde630$@att.net> Message-ID: 2011/2/19 spike wrote: > > > >Ooohhhhh life is goooood:< > And rare. ;) d. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Feb 20 01:11:33 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 19 Feb 2011 19:11:33 -0600 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: <4D606A45.4070606@satx.rr.com> On 2/19/2011 6:30 PM, Darren Greer wrote: > > Natasha wrote: > >> I am an Extropian and NOT a libertarian!!@!!!@< > So does that mean Extropianism has a built-in political philosophy? Anti-authoritarianism, at least, I'd say. Damien Broderick [anarcho-communitarian, if there's such a thing] From lubkin at unreasonable.com Sun Feb 20 02:54:48 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sat, 19 Feb 2011 21:54:48 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <4D606A45.4070606@satx.rr.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <4D606A45.4070606@satx.rr.com> Message-ID: <201102200254.p1K2sQZd009642@andromeda.ziaspace.com> Damien wrote: >[anarcho-communitarian, if there's such a thing] I suspect that your anarcho-communitarian could be compatible with being labeled a libertarian anarchist. As David Friedman notes more eloquently than I can ("Love Is Not Enough" in Pournelle's The Survival of Freedom), if you want something I have, you can obtain it by love (I care for you or your goals), trade, or force. 
A minarchist (small-government libertarian) cedes the necessity of government but wants it (i.e., the threat or use of force) kept to an irreducible minimum. A libertarian anarchist rejects the initiation of force or fraud altogether. An anarcho-capitalist libertarian (AnCap) says love is fine where it works, but trade works more often, so let's focus on that. How would you describe anarcho-communitarian? -- David. From natasha at natasha.cc Sun Feb 20 03:29:47 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 19 Feb 2011 21:29:47 -0600 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> Extropy has no built in politics. That is why I am an extropian and not a libertarian, democrat, republican, green or whatever. I am only for human rights and the right to enhance. I am not an atheist. I don't bother with religion. I value empathy more than religion or politics. Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Saturday, February 19, 2011 6:31 PM To: ExI chat list Subject: Re: [ExI] Call To Libertarians Natasha wrote: > I am an Extropian and NOT a libertarian!!@!!!@< So does that mean Extropianism has a built-in political philosophy? I'm assuming it's not libertarianism, from the discussion here today. :) My original question was about the relation between libertarianism and technological progress in the transhumanist sense. I did the reading you all suggested. But is there a political and economic philosophy that transhumanists are more likely to champion because it furthers technological goals? I don't expect consensus on this. But I'm curious about it, and have been since I joined this group. We certainly have no problem discussing religion. I know, for example, that most of us are atheists. What about, as Spike suggested in his let's-have-an-open-season-on-libertarianism suggestion, politics? Darren -- There is no history, only biography. -Ralph Waldo Emerson -------------- next part -------------- An HTML attachment was scrubbed... URL: From krisnotaro at yahoo.com Sun Feb 20 04:16:03 2011 From: krisnotaro at yahoo.com (Kris Notaro) Date: Sat, 19 Feb 2011 20:16:03 -0800 (PST) Subject: [ExI] Call To Libertarians In-Reply-To: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> Message-ID: <85110.17213.qm@web39321.mail.mud.yahoo.com> compassion, human rights, and empathy are of course great ideals, the pinnacle of any civilized society. however capitalism is going to be rendered useless in the future because it is destructive in nature. capitalism creates a world-wide "race to the bottom" of wages. i could say more but that sums it up for me. --- On Sat, 2/19/11, Natasha Vita-More wrote: From: Natasha Vita-More Subject: Re: [ExI] Call To Libertarians To: "'ExI chat list'" Date: Saturday, February 19, 2011, 10:29 PM Extropy has no built in politics. That is why I am an extropian and not a libertarian, democrat, republican, green or whatever.? I am only for human rights and the right to enhance. ? I am not an atheist.? I don't bother with religion. I value empathy more than religion or politics. ? Natasha Vita-More ? From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Saturday, February 19, 2011 6:31 PM To: ExI chat list Subject: Re: [ExI] Call To Libertarians ? 
Natasha wrote: ??> I am an Extropian and NOT a libertarian!!@!!!@< So does that mean Extropianism has a built-in political philosophy? I'm assuming it's not libertarianism, from the discussion here today. :) My original question was about the relation between libertarianism and technological progress in the transhumanist sense. I did the reading you all suggested. But is there a political and economic philosophy that transhumanists are more likely to champion because it furthers technological goals? I don't expect consensus on this. But I'm curious about it, and have been since I joined this group. We certainly have no problem discussing religion. I know, for example, that most of us are atheists. What about, as Spike suggested in his let's-have-an-open-season-on-libertarianism suggestion, politics? Darren -- There is no history, only biography. -Ralph Waldo Emerson -----Inline Attachment Follows----- _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 20 06:44:40 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 22:44:40 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <85110.17213.qm@web39321.mail.mud.yahoo.com> References: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> <85110.17213.qm@web39321.mail.mud.yahoo.com> Message-ID: <007a01cbd0c9$a649f620$f2dde260$@att.net> . On Behalf Of Kris Notaro . Subject: Re: [ExI] Call To Libertarians . capitalism creates a world-wide "race to the bottom" of wages. .. Sure, but that almost makes it sound like a *bad* thing. Wages are prices. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Sun Feb 20 06:54:11 2011 From: max at maxmore.com (Max More) Date: Sat, 19 Feb 2011 23:54:11 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: <85110.17213.qm@web39321.mail.mud.yahoo.com> References: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> <85110.17213.qm@web39321.mail.mud.yahoo.com> Message-ID: How you do you reconcile your view with the reality that over the entire span of "capitalism", real wages have risen and risen and risen? I put "capitalism" in quotes, because it's a term created by Karl Marx and is a vague term used to describe a wide range of both market-based and state-influenced economic systems. --- Max 2011/2/19 Kris Notaro > compassion, human rights, and empathy are of course great ideals, the > pinnacle of any civilized society. however capitalism is going to be > rendered useless in the future because it is destructive in nature. > capitalism creates a world-wide "race to the bottom" of wages. i could say > more but that sums it up for me. > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sun Feb 20 08:40:45 2011 From: pharos at gmail.com (BillK) Date: Sun, 20 Feb 2011 08:40:45 +0000 Subject: [ExI] Call To Libertarians In-Reply-To: References: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> <85110.17213.qm@web39321.mail.mud.yahoo.com> Message-ID: 2011/2/20 Max More wrote: > > How you do you reconcile your view with the reality that over the entire span > of "capitalism", real wages have risen and risen and risen? 
> > I put "capitalism" in quotes, because it's a term created by Karl Marx and is > a vague term used to describe a wide range of both market-based and > state-influenced economic systems. > > That is a difficult claim to justify. I suspect many sources would quote the opposite. We live in an inflationary society, so in the past, prices generally rose along with wages, but not necessarily all at the same time or at the same rate. Results depend on which periods of time that you try to measure. Price inflation is difficult to measure because different items have different rates of inflation. e.g. food, energy, housing, gold, etc. And mass production tends to decrease the price of manufactured goods to offset increases in other prices. Similarly with wages, some wages might increase, some might decrease, and the job market itself might increase or decrease. With globalisation, since about 1990, western real wages have felt the effect of cheap labour from third world countries, both by reducing wages and increasing unemployment in first world countries. Kris's claim that capitalism creates a world-wide "race to the bottom" of wages could perhaps be rephrased as 'aggressive globalisation creates a world-wide "race to the bottom" of wages'. So, overall. I would say the jury is still out on the question. BillK From moulton at moulton.com Sun Feb 20 10:16:07 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sun, 20 Feb 2011 02:16:07 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: <4D60E9E7.6050208@moulton.com> Darren Greer wrote: > So does that mean Extropianism has a built-in political philosophy? I'm > assuming it's not libertarianism, from the discussion here today. :) My > original question was about the relation between libertarianism and > technological progress in the transhumanist sense. I did the reading you all > suggested. But is there a political and economic philosophy that > transhumanists are more likely to champion because it furthers technological > goals? > > Let me provide a little more context and emphasize some points which should hopefully clarify the matter. The libertarian philosophy covers much more that just economics. Libertarianism at its core derives from a couple of high level principles which I list here in greatly oversimplified form: 1. The principle that a person has personal rights. This is sometimes expressed as self-ownership or as right to life, liberty and property. 2. Human societies (as we know them today) work best when individuals can make their own choices. These are the "moralistic" and "consequentialist" principles that I mentioned in a previous message. Most libertarians start with one or the other of these positions or some amalgamation. Now I would be the first to point out that as presented both of the above need a lot of fleshing out. Remember that we are discussing political philosophy and we are attempting to cover in a few emails what is normally covered in dense books. So at best we can give a broad picture and highlight a few common misunderstandings. Of course we face the similar issues with any other political philosophy; for example if we were discussing socialism we would likely be in the same position. The issues are too complex to cover in a few emails. And like all philosophies the libertarian philosophy undergoes refinement over time. 
In the first couple of years of the Exi email list virtually everyone was either a libertarian or understood libertarian philosophy sufficiently so as to avoid common mistakes. So although many of the early list participants were libertarian that does not mean they all were; I think I remember at least one who self identified as a socialist. My memory of those days is that we all got along and we did not deliberately misrepresent anyone else's position. And whatever the topic - be it economics, space travel, cryonics or music - people generally did not frequently post on topics about which they were clueless. This led to a very high quality list. The early list and the current list typically have the same socially liberal atmosphere. For example I would expect everyone on the list not have a problem with same sex marriage. And the libertarian would say that the government should stay out marriage altogether. Let a marriage contract involve a man and a woman or a man and a man or a woman and a women or two men and a women and so on. Similarly there was view that the War on Drugs was a terrible thing and should be stopped immediately. Of course this was also the libertarian view. And similar on the view that the government has no business censoring speech and press. And of course that is in line with the libertarian position. Thus the attitudes (sometimes explicit and sometimes implicit) of both the early list and the current list shared much in common with the libertarian philosophy. And consider that "spontaneous order" was specifically part of the principles. And lest us not forget the discussions of Polycentric Law; http://www.tomwbell.com/writings/JurisPoly.html Plus many of the early members of the list were recruited by other libertarians. For example I first heard that list was being formed by an email sent to a libertarian email list and so I signed up at the beginning. Perhaps things would have been different if a bunch of socialists had started the list, who knows? Also to provide a historical background I think it is important to list a few of the books which influenced many of the early members of the list. The first is Engines of Creation by Eric Drexler which is I am sure well known to all so I will not dwell on it here. Here is where I think much of the answer to your question about technology and the early list can be found. It is not just a brilliant book about technology in general and nano-technology in particular but it also discusses the implications of the technology. And that book seems to resonate well with libertarians interested in science and technology; at least the ones here in Silicon Valley. The second book is The Retreat to Commitment by W. W. Bartley. I know that here in the Silicon Valley area this book and the idea of Pan-Critical Rationalism were important in the ideas of many early list participants. Personally it is one of the most important books I have read. Also Max wrote a very good essay on the topic and I highly recommend Max's essay. There is a third book that should be mentioned and that is The Fatal Conceit by F. A. Hayek. I know this book was read by several of the early list participants here in the Silicon Valley and the ideas did percolate in some of the early list discussions. What I have written above is from memory so keep that in mind. Anyone who was on the list during the first few years might want to jump in and add their recollections. And just for old time's sake: BEST DO IT SO. 
Fred From stefano.vaj at gmail.com Sun Feb 20 12:59:26 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 20 Feb 2011 13:59:26 +0100 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: 2011/2/17 Darren Greer : > Just to let everyone on here know. > When I first joined this group I was ashamed at how little I knew about > science compared to the rest of you. Well, it is still better than being ashamed of how little the rest of us know of science compared with you. :-))) -- Stefano Vaj From darren.greer3 at gmail.com Sun Feb 20 13:30:58 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 20 Feb 2011 09:30:58 -0400 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: On Sun, Feb 20, 2011 at 8:59 AM, Stefano Vaj wrote: > > > Well, it is still better than being ashamed of how little the rest of > us know of science compared with you. :-))) > > No, I'm pretty comfortable when the shoe is on the other foot. :) -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Feb 20 12:39:36 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 20 Feb 2011 13:39:36 +0100 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: 2011/2/12 Darren Greer : > Damien wrote: >>The technical name for what you prefer is "corporate fascism". That doesn't >> have a really compelling history.< Strangely enough, "corporations" this side of the Atlantic, especially in Italian and Neolatin languages, have an apposite meaning to the ordinary usage of the US English word (we would rather say "companies" here). In fact, the word "corporations" in Europe refer to authoritarian state-controlled agencies whose purpose is that of regulating, of "harmonising", and sooner or later of taking over (see the hysterical denounciations of "communism" against Ugo Spirito in 1937 by Italian capitalists), private businesses in each given sector, both at a proprietary and at a union level. Thus, the "textile corporation", the "mechanical corporation", the "building industry corporation", the "agricultural corporation"... For some time, the catholic church itself was promoting such a system as a solution to the social problems and a "third way" between Marxist socialism and capitalism. On the contrary, what goes for "corporate fascism" today seems to have little to do with fascist or Leninist state control of the means of production and rather with multi-sector umbrella conglomerates or horizonal cartels (such as Japanese kairetsus) taking over state prerogatives or subjugating public powers, as in the typical cyberpunk scenario. -- Stefano Vaj From darren.greer3 at gmail.com Sun Feb 20 13:42:42 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 20 Feb 2011 09:42:42 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D60E9E7.6050208@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <4D60E9E7.6050208@moulton.com> Message-ID: On Sun, Feb 20, 2011 at 6:16 AM, F. C. Moulton wrote >The first is Engines of Creation by Eric Drexler which is I am sure well known to all so I will not dwell on it here. < Thanks Fred. Excellent summary. I know the book but haven't read it (read my Acceptance into Math Program to understand why I'm so poorly educated re: some of these topics.) 
But it has been mentioned before and it is on my list. Right now I'm still trying to get caught up on evolutionary psychology with texts that Keith recommended, as well as keep up with my my paid work and my class work. There is a steep learning curve here in this group, but it's worth the effort. darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 20 14:49:19 2011 From: spike66 at att.net (spike) Date: Sun, 20 Feb 2011 06:49:19 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <492F443FDF9B48C69B86DFFC1E4096A5@DFC68LF1> <85110.17213.qm@web39321.mail.mud.yahoo.com> Message-ID: <001b01cbd10d$5b4a5810$11df0830$@att.net> ... On Behalf Of BillK >...Kris's claim that capitalism creates a world-wide "race to the bottom" of wages could perhaps be rephrased as 'aggressive globalisation creates a world-wide "race to the bottom" of wages'. If you go that route, perhaps you would argue the equivalent "Trade protectionism inhibits a world-wide race to the bottom of wages." It looks to me like inhibition of globalization creates a race to the bottom of wages along a parallel track. Either way the race happens. Without globalization, wages remain high, but the high wage earners look around one day and realize that will all their money, they can't afford anything. >...So, overall. I would say the jury is still out on the question...BillK Looks to me like that jury is back, and the hangman is with them. spike From spike66 at att.net Sun Feb 20 14:53:53 2011 From: spike66 at att.net (spike) Date: Sun, 20 Feb 2011 06:53:53 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D60E9E7.6050208@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <4D60E9E7.6050208@moulton.com> Message-ID: <001c01cbd10d$fe5fc3a0$fb1f4ae0$@att.net> ... On Behalf Of F. C. Moulton Darren Greer wrote: >> So does that mean Extropianism has a built-in political philosophy? ... >And just for old time's sake: BEST DO IT SO. Fred http://www.maxmore.com/extprn3.htm From atymes at gmail.com Sun Feb 20 18:00:38 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 20 Feb 2011 10:00:38 -0800 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: 2011/2/20 Darren Greer : > On Sun, Feb 20, 2011 at 8:59 AM, Stefano Vaj wrote: >> Well, it is still better than being ashamed of how little the rest of >> us know of science compared with you. :-))) > > No, I'm pretty comfortable when the shoe is on the other foot. :) Shame is only for those things you can control. The lack of knowledge of others is not that, to the same degree. From darren.greer3 at gmail.com Sun Feb 20 19:24:36 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 20 Feb 2011 15:24:36 -0400 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: On Sun, Feb 20, 2011 at 2:00 PM, Adrian Tymes wrote: > > > Shame is only for those things you can control. The lack of knowledge of > others is not that, to the same degree. > > Agreed. Actually it's kind of nice being on the steepest learning curve in the room. Every day is a feast of ideas and experiences. When I make a mathematical mistake in my class work, as long as it's not a dumb one of misperception, I get grateful. I learn a lot from making mistakes and asking questions. 
Once in my first year of college this guy was sitting at the table in caf spouting forth some nonsensical jargon about english literature deconstruction or the conscious universe and Shakespeare's sister or something. He used the word epiphany, and being a back-woods country boy in the city for the first time I politely (and sincerely) stopped him and asked what the word meant. The table was crowded and everyone was simply tolerating the mindless condescension. Turned out he couldn't answer my question. He had used the word without knowing its meaning. Every since then I've been OK with admitting ignorance by asking for clarification. Darren P.S. The guy came back to me a day later with a definition. But by then it was too late, I had looked it up myself. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From js_exi at gnolls.org Sun Feb 20 19:31:11 2011 From: js_exi at gnolls.org (J. Stanton) Date: Sun, 20 Feb 2011 11:31:11 -0800 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) Message-ID: <4D616BFF.2000502@gnolls.org> On 2/19/11 10:46 AM, Richard Loosemore wrote: > Taxation and government and redistribution of wealth are what > separate us from the dark ages. The concept of taxation + government > + redistribution of wealth was the INCREDIBLE INVENTION that allowed > human societies in at least one corner of this planet to emerge from > feudal societies where everyone looked after themselves and the devil > took the hindmost. This is a breathtakingly counterfactual statement. Feudal economies were and are entirely supported by "taxation + government + redistribution of wealth". The only difference is that in a feudal economy, the redistribution is from the masses to the already rich, in the form of "lords" -- whereas in our modern government-contronlled economy, the redistribution is from the masses to the already rich in the form of "corporations" and "banks". The difference of income and assets between a feudal serf and his lord in the Middle Ages is not proportionally larger than the difference in income and assets today between the average world citizen and its richest citizens. The only difference is that we modern serfs have a better standard of living than serfs in the Dark Ages due to sterile medicine, antibiotics, and mass production of technology. If anyone thinks there is a difference of kind between medieval serfdom and what we have in America ("oh, we can OWN LAND") just stop paying your property tax -- or any other tax -- and you'll see that the state owns everything, just as in the Dark Ages. What we call "ownership" is a finder's fee for the privilege of paying below-market rent. There is no difference between the Domesday Book and the county recorder's office. As far as libertarianism, I find the standard statist critique to be nonsense: claims that the government can be less corrupt than the people assume that government is made up of something other than people, which fails trivially. My critique rests on the blindness of the Libertarian Party (and libertarians) towards banks and corporations. Both are government-granted exceptions to the rules of liability for one's actions and debts ('limited liability') and monetary exchange ('fractional reserve banking', i.e. the ability to create money from thin air by issuing debt) -- and both produce the inevitable result that institutions granted such exceptional powers control the world economy. 
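As a quick illustration of the "money from thin air" mechanism being criticized here, the textbook money-multiplier arithmetic can be sketched in a few lines of Python. The fixed 10% reserve ratio and the assumption that every loan is re-deposited in full are illustrative simplifications, not a description of any actual bank:

    # Toy money-multiplier illustration (assumes a fixed 10% reserve ratio
    # and that every loan is re-deposited in full -- a textbook simplification).
    reserve_ratio = 0.10
    initial_deposit = 100.0

    deposit = initial_deposit
    broad_money = 0.0
    for _ in range(200):                 # iterate until the geometric series converges
        broad_money += deposit
        deposit *= (1 - reserve_ratio)   # the lendable fraction returns as a new deposit

    print(f"Broad money created: ~{broad_money:.0f}")   # -> ~1000 = deposit / reserve_ratio

The limit is initial_deposit / reserve_ratio, which is the sense in which lending against fractional reserves expands the money supply well beyond the base deposit.
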
If libertarians genuinely believed what they said, the first plank on every libertarian's political platform would be "abolish corporations and banks". Yet the LPUSA, and every libertarian I've personally met, remains blind to this yawning contradiction. Therefore I discard modern libertarianism as self-contradictory, irrelevant, and inevitably producing the same corporate/banking hegemony as any other statist philosophy. JS http://www.gnolls.org From atymes at gmail.com Sun Feb 20 19:59:35 2011 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 20 Feb 2011 11:59:35 -0800 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: 2011/2/20 Darren Greer : > Once in my first year of college this guy was sitting at the table in caf > spouting forth some nonsensical jargon about english literature > deconstruction or the conscious universe and Shakespeare's sister or > something. He used the word epiphany, and being a back-woods country boy in > the city for the first time I politely (and sincerely) stopped him and asked > what the word meant. > The table was crowded and everyone was simply tolerating the mindless > condescension. > Turned out he couldn't answer my question. He had used the word without > knowing its meaning. > Every since then I've been OK with admitting ignorance by asking for > clarification. > Darren > P.S. The guy came back to me a day later with a definition. But by then it > was too late, I had looked it up myself. It doesn't stop with college. Just last week, I was interviewing for a job as a software engineering manager. The guy asked me about design patterns - which are a relatively new idea, and have some benefits, but I'm not yet entirely sold on them. (For instance: people keep saying the singleton design pattern is the thing to use to ensure consistency of data between different parts of an application. I ask them how it's different from a global variable, which just happens to be accessed through a class structure, and thus has the same warnings about, e.g., shared memory and potential for subroutines to affect it. No one has been able to give me a coherent answer yet.) And that's all he asked about, picking away at my not being a master of design patterns. I asked him if he had questions about documenting the architecture, managing people (this being a management position, after all), and so on. He said they didn't need any of that, because they used design patterns. I have rarely been so grateful not to get a job I had tried to get. From moulton at moulton.com Sun Feb 20 20:37:28 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sun, 20 Feb 2011 12:37:28 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <001c01cbd10d$fe5fc3a0$fb1f4ae0$@att.net> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <4D60E9E7.6050208@moulton.com> <001c01cbd10d$fe5fc3a0$fb1f4ae0$@att.net> Message-ID: <4D617B88.3080207@moulton.com> spike wrote: > ... On Behalf Of F. C. Moulton >> And just for old time's sake: BEST DO IT SO. Fred > > http://www.maxmore.com/extprn3.htm For those who wondering what this is about let me elaborate. The phrase BEST DO IT SO was a short hand phrase incorporating the earlier version 2.6 principles; see http://www.maxmore.com/extprn26.htm I included BEST DO IT SO for a couple of reasons; one was that I had already mentioned the "SO" Spontaneous Order as being one of the ideas circulating in the early days of the list. 
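Returning for a moment to Adrian's singleton question further up in the digest: a minimal sketch, using hypothetical Python names rather than anyone's production code, of why the textbook singleton behaves like a global variable hidden behind an accessor:

    # Minimal singleton sketch -- hypothetical example, not from any cited codebase.
    class Config:
        _instance = None

        @classmethod
        def instance(cls):
            # Lazily create the single shared object on first access.
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    # Any two call sites get the same mutable object...
    Config.instance().debug = True
    assert Config.instance().debug is True   # state set in one place is visible everywhere

    # ...which is exactly the behaviour of a plain module-level global:
    CONFIG = {}
    CONFIG["debug"] = True

Both give every caller shared mutable state, so the same cautions about hidden coupling and concurrent access apply; what the pattern adds is mainly lazy construction and a class to hang behaviour on, which is roughly the point Adrian was pressing.
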
And to give an "Ahh I remember when" moment to anyone on the list now who was around in the early days. Spike provided the link to the newer version. And since we are discussing history to is important to remember the other groups and movements which had influences on and often common participants with the early Exi list members. Some have already been mentioned in prior posts such cryonics, nanotechnology and space exploration. Another one that had an impact was cryptography in general and cypherpunks in particular. Here in the Silicon Valley area it was not uncommon to see the some of the same persons at a cypherpunks gathering one month and at a party of Extropians the next. Topics such as data havens, anonymous transactions and reputation capital were common in both. For those interested in a walk down memory lane there is a scholarly legal paper on HavenCo published at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1760151 And the author discussed the HavenCo in some blog postings:http://volokh.com/2011/02/12/prof-james-grimmelmann-guest-blogging-this-coming-week-on-sealand-havenco-and-the-rule-of-law/ http://volokh.com/2011/02/14/sealand-and-havenco-part-i-the-history-of-sealand/ http://volokh.com/2011/02/15/sealand-and-havenco-part-ii-the-rise-and-fall-of-havenco/ http://volokh.com/2011/02/16/sealand-and-havenco-part-iii-why-did-havenco-fail/ http://volokh.com/2011/02/17/sealand-and-havenco-part-iv-international-law-and-sealand-law/ http://volokh.com/2011/02/18/sealand-and-havenco-part-v-learning-from-havenco/ Fred From spike66 at att.net Sun Feb 20 20:45:13 2011 From: spike66 at att.net (spike) Date: Sun, 20 Feb 2011 12:45:13 -0800 Subject: [ExI] jop search, was RE: Acceptance Into Math Program Message-ID: <003f01cbd13f$12c8de90$385a9bb0$@att.net> ... On Behalf Of Adrian Tymes ... >And that's all he asked about, picking away at my not being a master of design patterns. ... >I have rarely been so grateful not to get a job I had tried to get. I am tempted to write a book about job search nightmares. Last week I saw a posting go up in which I was able to figure out who had written the requisition. I posted a note to him the day after the requisition went up on the job board, and learned that the job had already been filled. Coincidentally the same day, I saw a requisition go up which used a nonstandard acronym without definition. I googled with quotation marks around the exact acronymed skill they were listing as a requirement. I got exactly three hits. One was a typo. The other two were found on two different versions of a resume of the same guy. I was tempted to post the guy a note of congratulations on his new position, the same day the requisition went on the external board. spike From moulton at moulton.com Sun Feb 20 21:10:32 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sun, 20 Feb 2011 13:10:32 -0800 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) In-Reply-To: <4D616BFF.2000502@gnolls.org> References: <4D616BFF.2000502@gnolls.org> Message-ID: <4D618348.5090102@moulton.com> J. Stanton wrote: > > My critique rests on the blindness of the Libertarian Party (and > libertarians) towards banks and corporations. Both are > government-granted exceptions to the rules of liability for one's > actions and debts ('limited liability') and monetary exchange > ('fractional reserve banking', i.e. 
the ability to create money from > thin air by issuing debt) -- and both produce the inevitable result > that institutions granted such exceptional powers control the world > economy. > I am somewhat baffled by your comments because your comments ignore reality. Libertarians have long complained about government privileged banking. And obviously all anarchists by definition are opposed to government granted privilege in commerce or any other area. Plus economists discuss free banking and it is easy to find. Since I have been providing so many text links here are some video links: http://www.youtube.com/watch?v=5P7W1G1hbiQ http://www.youtube.com/watch?v=0PyS2NtW3xA I have not reviewed either of these but I wanted to provide some non-text sources so I easily found these online. I have read some of the texts by White and found them interesting. > If libertarians genuinely believed what they said, the first plank on > every libertarian's political platform would be "abolish corporations > and banks". > I disagree. As much as I oppose the current government role in banking I strongly feel that if there is going to be a libertarian political plank at least in the context of the USA the first plank should be to stop the War on Drugs. The War on Drugs has in my opinion done much more harm. The harm the War on Drugs has done in poor and minority areas is terrible and we are seeing this harm in much of the violence in Mexico. > Yet the LPUSA, and every libertarian I've personally met, remains > blind to this yawning contradiction. As for the LPUSA in my message of Feb 18 I warned against using LPUSA as any source of libertarian info. I am assuming the reason that you are still bring the LPUSA up is that there was a delivery glitch and that message has not arrived in you inbox yet since I am sure that everyone on this list reads every word I write; even when I attempting a small bit of humor. And as for your comment about every libertarian being blind to the situation well I suppose it is possible that you are meeting people who call themselves libertarian but have no idea of the history and philosophy of the libertarian movement. But if that is the case then it has nothing to do with our discussion of libertarian philosophy unless of course we want to have a long digression on how people self identify with terms about which they are clueless. > Therefore I discard modern libertarianism as self-contradictory, > irrelevant, and inevitably producing the same corporate/banking > hegemony as any other statist philosophy. You can discard whatever you want however I suggest that you do a bit of research so that know about the issues first so that you do not mislabel what you are discarding. > > JS > http://www.gnolls.org > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Sun Feb 20 22:00:33 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 20 Feb 2011 16:00:33 -0600 Subject: [ExI] ripping off a book nobody's read Message-ID: <4D618F01.1000506@satx.rr.com> http://www.epubbud.com/book.php?g=QB38WNUB I suppose I should be grateful that someone cares enough to pirate my recent novel QUIPU... 
(and now that the pirated version exists, extropes should not hesitate to download the thing, should the impulse move anyone): The piracy is even somewhat recursive, for as the note at the end states: From bbenzai at yahoo.com Sun Feb 20 21:58:29 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 20 Feb 2011 13:58:29 -0800 (PST) Subject: [ExI] Call To Libertarians In-Reply-To: Message-ID: <895132.47768.qm@web114413.mail.gq1.yahoo.com> Richard Loosemore wrote: >Would it be more accurate, then, to say that Libertarianism is about SUPPORTING the government funding of: > Roads, Bridges, Police, Firefighters, Prisons,... >... but it is about NOT supporting the government funding of theaters? >...In that case I misunderstood, and all western democracies are more or less libertarian already, give or take the 0.0001 percent of their funding that goes toward things like theaters and opera houses. Richard Loosemore 'More or less', except that they assume the right to tell you if you can have an abortion or not, who you should and shouldn't have sex with, whether you can marry someone of the same sex as yourself, what you can do with your own body, whether you even *own* your own body, whether you can be forced to risk your life, mandate that there is such a thing as a 'lawful killing', and dictate what that is, declare that a belief in an imaginary sky-fairy is necessary for the holding of public office (at least in some places), dictate what you can and can't do for fun, forcefully prevent you from using 'unapproved' therapies regardless of whether you wish to risk your own life doing so or not, dictating whether or not you're allowed to end your own life, ... (just a few objectionable things off the top of my head). You get the idea, I hope. 'Libertarianism' is not just about economics. Ben Zaiboc From spike66 at att.net Sun Feb 20 22:09:43 2011 From: spike66 at att.net (spike) Date: Sun, 20 Feb 2011 14:09:43 -0800 Subject: [ExI] watson again Message-ID: <005301cbd14a$e0dc4c30$a294e490$@att.net> During the Watson discussion I mentioned a technology we need: something to convert text to talking head. We already have pretty good text to speech and speech recognition, but this video is an example of text to avatar. A civil engineer sent me the video, thinking it was hilarious. City planners actually do sound exactly like this, according to her: http://vimeo.com/17784798 The point here is not to make fun of city planners (although that is allowed) but rather to look at current state-of-the-art text to video. With speech recognition+Watson+Eliza(?)+text-to-video, we are most of the way there to an artificial companion. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at harveynewstrom.com Sun Feb 20 22:37:49 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Sun, 20 Feb 2011 15:37:49 -0700 Subject: [ExI] Call To Libertarians Message-ID: <20110220153749.d32794d095cdfcc0018508d9c136b552.4eb506ce65.wbe@email09.secureserver.net> On Sat, Feb 19, 2011 at 2:20 PM, Eugen Leitl wrote: > It's nice to be a part of one of the longer-lived Internet communities. I agree. I have fond memories of the early days. And we aren't done yet! 
-- Harvey Newstrom, Security Consultant, CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From max at maxmore.com Mon Feb 21 01:48:49 2011 From: max at maxmore.com (Max More) Date: Sun, 20 Feb 2011 18:48:49 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: <20110220153749.d32794d095cdfcc0018508d9c136b552.4eb506ce65.wbe@email09.secureserver.net> References: <20110220153749.d32794d095cdfcc0018508d9c136b552.4eb506ce65.wbe@email09.secureserver.net> Message-ID: Hey, isn't the Extropy-Chat/Extropians email list about to have it's 20th anniversary? That's remarkably long-lived for an Internet forum. What can we do to celebrate? And when, exactly, did the list start? I seem to recall that it was August or September of 1991. However, my memory is far from the best in the world. Can someone tell us the exact date? (I could if I had the backup archives, but they are back in Austin.) --- Max On Sun, Feb 20, 2011 at 3:37 PM, wrote: > On Sat, Feb 19, 2011 at 2:20 PM, Eugen Leitl wrote: > > It's nice to be a part of one of the longer-lived Internet communities. > > I agree. I have fond memories of the early days. And we aren't done > yet! > > -- > Harvey Newstrom, Security Consultant, > > > CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 21 02:22:54 2011 From: spike66 at att.net (spike) Date: Sun, 20 Feb 2011 18:22:54 -0800 Subject: [ExI] watson again In-Reply-To: <005301cbd14a$e0dc4c30$a294e490$@att.net> References: <005301cbd14a$e0dc4c30$a294e490$@att.net> Message-ID: <000f01cbd16e$3fcb7fe0$bf627fa0$@att.net> . On Behalf Of spike . The point here is not to make fun of city planners (although that is allowed) but rather to look at current state-of-the-art text to video. With speech recognition+Watson+Eliza(?)+text-to-video, we are most of the way there to an artificial companion.. spike OK here's one I made, a text-to-video: http://www.xtranormal.com/watch/11199505/ It looks like they let you make one movie, then after that they charge you a nominal fee. I haven't figured out the fee structure. This algorithm might be too slow for a real-time yakbot. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Feb 21 03:20:04 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 20 Feb 2011 22:20:04 -0500 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: <895132.47768.qm@web114413.mail.gq1.yahoo.com> References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> Message-ID: <4D61D9E4.90607@lightlink.com> Ben Zaiboc wrote: > Richard Loosemore wrote: > >> Would it be more accurate, then, to say that Libertarianism is >> about > SUPPORTING the government funding of: > >> Roads, Bridges, Police, Firefighters, Prisons,... > >> ... but it is about NOT supporting the government funding of >> theaters? 
> >> ...In that case I misunderstood, and all western democracies are >> more or > less libertarian already, give or take the 0.0001 percent of their > funding that goes toward things like theaters and opera houses. > Richard Loosemore > > > 'More or less', except that they assume the right to tell you if you > can have an abortion or not, who you should and shouldn't have sex > with, whether you can marry someone of the same sex as yourself, what > you can do with your own body, whether you even *own* your own body, > whether you can be forced to risk your life, mandate that there is > such a thing as a 'lawful killing', and dictate what that is, declare > that a belief in an imaginary sky-fairy is necessary for the holding > of public office (at least in some places), dictate what you can and > can't do for fun, forcefully prevent you from using 'unapproved' > therapies regardless of whether you wish to risk your own life doing > so or not, dictating whether or not you're allowed to end your own > life, ... (just a few objectionable things off the top of my head). > > You get the idea, I hope. > > 'Libertarianism' is not just about economics. > > > Ben Zaiboc I'm not sure, but at least one level of irony may have been missed here (e.g. I do not in the least believe that western democracies are more or less libertarian). Be that as it may, however, could I point out that the list of objectionable things apply largely to ONE western democracy. The European democracies, for the most part, are not guilty of these objectionable practices. Richard Loosemore From moulton at moulton.com Mon Feb 21 06:54:48 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sun, 20 Feb 2011 22:54:48 -0800 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: <4D61D9E4.90607@lightlink.com> References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: <4D620C38.5080704@moulton.com> Richard Loosemore wrote: > Ben Zaiboc wrote: >> >> 'More or less', except that they assume the right to tell you if you >> can have an abortion or not, > The European democracies, for the most part, are not guilty of these > objectionable practices. > It should be noted that Ben started his paragraph by saying the countries involved assumed "the right to tell you if" and then continued on the with the list. So in that regard his statement true regardless of what laws were in place. However Richard wants to point to the European democracies by which I assume he means something like the European Union. But Richard provides no evidence. So let us look at the first item on the list abortion and the EU countries; according to the info I found online it looks like about 3 out of the EU countries have not abortion restrictions (Iceland, Sweden, UK). The rest have various levels of restriction depending on a variety of factors and based on what I found online the Republic of Ireland only allows abortion to save the life of the woman. Since there are about 27 EU countries that means based on the info I found around 11% of the EU do not restrict abortion and about 89% do restrict it to varying degrees. While I think both Ben and Richard need to be more exacting and nuanced in their language it appears that at least on the question of abortion Ben is more correct than Richard. I am not going to research the rest of the list I am going to leave that to Richard. 
Fred From pharos at gmail.com Mon Feb 21 09:29:46 2011 From: pharos at gmail.com (BillK) Date: Mon, 21 Feb 2011 09:29:46 +0000 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: <4D620C38.5080704@moulton.com> References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> <4D620C38.5080704@moulton.com> Message-ID: On Mon, Feb 21, 2011 at 6:54 AM, F. C. Moulton wrote: > However Richard wants to point to the European democracies by which I > assume he means something like the European Union. ?But Richard provides > no evidence. > > So let us look at the first item on the list abortion and the EU > countries; ?according to the info I found online it looks like about 3 > out of the EU countries have not abortion restrictions (Iceland, Sweden, > UK). ?The rest have various levels of restriction depending on a variety > of factors and based on what I found online the Republic of Ireland only > allows abortion to save the life of the woman. ? Since there are about > 27 EU countries that means based on the info I found ?around 11% of the > EU do not restrict abortion and about 89% do restrict it to varying degrees. > > While I think both Ben and Richard need to be more exacting and nuanced > in their language it appears that at least on the question of abortion > Ben is more correct than Richard. > > I am not going to research the rest of the list I am going to leave that > to Richard. > > Thanks for pointing out that some countries in the European Union are not quite perfect yet. (Irony alert!). This graphic from the NY Times is self explanatory. The main point, of course, is that libertarian policies on a national level are a recipe for total disaster and chaos. Around the world almost anything is preferable. BillK From spike66 at att.net Mon Feb 21 12:15:17 2011 From: spike66 at att.net (spike) Date: Mon, 21 Feb 2011 04:15:17 -0800 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> <4D620C38.5080704@moulton.com> Message-ID: <000301cbd1c1$005cb4c0$01161e40$@att.net> ... On Behalf Of BillK ... >...The main point, of course, is that libertarian policies on a national level are a recipe for total disaster and chaos. Around the world almost anything is preferable...BillK Don't worry, BillK. There is a culture spreading across Europe which is diametrically opposed to libertarianism. I understand it is growing quite popular in places such as France and Italy. Thirty years from now, the total disaster and chaos will be the result of any serious attempt to resist this growing anti-libertarian culture, such as by allowing your wife or daughter go outdoors uncovered. spike From mail at harveynewstrom.com Mon Feb 21 22:50:20 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Mon, 21 Feb 2011 15:50:20 -0700 Subject: [ExI] Call To Libertarians Message-ID: <20110221155020.d32794d095cdfcc0018508d9c136b552.90590f9419.wbe@email09.secureserver.net> Max More wrote, > Hey, isn't the Extropy-Chat/Extropians email list about to have it's 20th > anniversary? That's remarkably long-lived for an Internet forum. > > What can we do to celebrate? And when, exactly, did the list start? I seem > to recall that it was August or September of 1991. However, my memory is far > from the best in the world. Can someone tell us the exact date? 
(I could if > I had the backup archives, but they are back in Austin.) All my old references agree that it was founded in 1991. But I can't find a more precise date. -- Harvey Newstrom, Security Consultant, CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From mail at harveynewstrom.com Mon Feb 21 23:09:51 2011 From: mail at harveynewstrom.com (mail at harveynewstrom.com) Date: Mon, 21 Feb 2011 16:09:51 -0700 Subject: [ExI] watson again Message-ID: <20110221160951.d32794d095cdfcc0018508d9c136b552.718f250857.wbe@email09.secureserver.net> "spike" wrote, > OK here's one I made, a text-to-video: > http://www.xtranormal.com/watch/11199505/ The voice is a perfect balance between Casey Kasem and Stephen Hawking. -- Harvey Newstrom, Security Consultant, CISSP CISA CISM CGEIT CSSLP CRISC CIFI NSA-IAM ISSAP ISSMP ISSPCS IBMCP From spike66 at att.net Tue Feb 22 00:06:59 2011 From: spike66 at att.net (spike) Date: Mon, 21 Feb 2011 16:06:59 -0800 Subject: [ExI] watson again In-Reply-To: <20110221160951.d32794d095cdfcc0018508d9c136b552.718f250857.wbe@email09.secureserver.net> References: <20110221160951.d32794d095cdfcc0018508d9c136b552.718f250857.wbe@email09.secureserver.net> Message-ID: <008501cbd224$6d08f3f0$471adbd0$@att.net> ... On Behalf Of mail at harveynewstrom.com "spike" wrote, >> OK here's one I made, a text-to-video: >> http://www.xtranormal.com/watch/11199505/ >The voice is a perfect balance between Casey Kasem and Stephen Hawking.-- Harvey Newstrom... Kasem, coast to coooooooast! For this weekend, we bring you Casey's Countdown, of rock and roll's American Top 40... Until next time, keep your feet on the ground and keep reaching for the stars." Anyone here much under about 50, ask your parents. {8^D Explanation for our young friends: In addition to American Top 40, Casey Kasem also did the voice of Shaggy on the original Scooby Doo Where Are You show from the late 60s, which featured a talking dog (of sorts). It wasn't quite as retarded as it sounds, because the dog played the part of a dumb superstitious human, sorta. The shows were half hour mysteries, in which all paranormal or superstitious anything was always the non-paranormal act of a criminal. And he would have gotten away with it, except for those meddling kids! (The bad guy said that at the end of every episode.) Thanks Harvey for resurrecting a long forgotten memory from a pleasantly squandered youth. {8-] {8^D spike From kellycoinguy at gmail.com Tue Feb 22 00:27:59 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 21 Feb 2011 17:27:59 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110218125146.GJ23560@leitl.org> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> <20110218125146.GJ23560@leitl.org> Message-ID: On Fri, Feb 18, 2011 at 5:51 AM, Eugen Leitl wrote: > On Fri, Feb 18, 2011 at 12:10:41AM -0700, Kelly Anderson wrote: > >> > Look, what is your energetical footprint? 1 kW, more or less? >> > Negligible. >> >> In a super efficient system, my footprint might be nanowatts. I > > Not even for human equivalent, nevermind at 10^6 to 10^9 speedup. > I don't think you can go below 1-10 W for a human realtime equivalent. Are you assuming the use of today's technology? Or the best that may be created in the future? According to several sites on the Internet the human brain uses 20-40 Watts. Some of that undoubtedly goes for biological purposes that are not directly supportive of computation. 
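A back-of-envelope Landauer calculation makes the disputed floor concrete. The figures below, roughly 1e16 irreversible bit erasures per second for a human-equivalent at room temperature, are assumptions chosen for illustration, not measurements:

    # Back-of-envelope Landauer estimate (illustrative assumptions only:
    # ~1e16 irreversible bit erasures/s for a human-equivalent, T = 300 K).
    import math

    k_B = 1.380649e-23               # Boltzmann constant, J/K
    T = 300.0                        # room temperature, K
    e_bit = k_B * T * math.log(2)    # minimum energy to erase one bit, ~2.9e-21 J

    ops_per_second = 1e16            # assumed irreversible bit erasures per second
    power_floor = ops_per_second * e_bit

    print(f"Landauer energy per bit erased: {e_bit:.2e} J")
    print(f"Thermodynamic power floor: {power_floor:.2e} W")   # ~3e-5 W

On those assumptions the room-temperature thermodynamic floor is in the tens of microwatts, far below 1 W; real, non-reversible hardware sits many orders of magnitude above that floor, which is roughly where Eugen's 1-10 W figure and the brain's 20-40 W both live.
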
It seems very pessimistic to say that we could only improve by 2-40 times over nature. Granted nanowatts may be overly optimistic, and is based on no currently known technology. Nevertheless, I see no reason to believe that that the bottom is 1 Watt. >> believe there are theoretical computing models that use zero net >> electricity. > > Reversible logic is slow, and it's not perfectly reversible. Not yet, of course. And human brains are very slow too. An interesting question to be answered is what is the most limiting factor? Is it matter out of which to build intelligence? Is it energy to power it? Time to run it? Or space to house it? Or is there some other limiting factor? I think it will take a while for the exponential growth to stop, but it must eventually stop. I'm just not sure which of the above is the most limiting factor. Only time and technology will tell. I'm not sure we can even guess at this point what the most limiting factor will be. When the actual limiting factor is determined, then Darwinism will kick in and we'll see what the results are. > And it's still immaterial, because if you use 100 times less energy > there will be 100 times the individuals competing for it. Adaptively. Software is a gas. I have understood that for a very long time. >> > Now multiply that by 7 gigamonkeys. Problem? >> > >> > Infinitesimally small energy budgets multiplied by very large >> > numbers are turning stars into FIR blackbodies. And whole galaxies, >> > and clusters, and superclusters. >> > >> > You think that would be easy to miss? >> >> Yes. Seeing the LACK of something is very difficult astronomy. Heck > > Giant (up to GLYr) spherical voids only emitting in FIR? You are probably right on this one. I hadn't thought that through and I don't have a lot of experience with the technical aspects of astronomy. >> how long did it take astronomers to figure out that the majority of >> the universe is dark matter? I agree with you that an advanced > > There was a dedicated search for Dyson FIR emitters. Result: > density too low to care. OK. I am sure we all would have heard on the evening news had something like that been located. >> civilization would eventually create a ring world, and finally a > > Not ring, optically dense node cloud. Most likely. >> sphere that collected all available solar energy. But that could >> support an enourmous computational structuree, capable of simulating > > Enormous to some, trivial to others. > >> every mind in a 10,000 year civilization might take only a few watts >> and a few seconds. > > The numbers don't check out. Occam's razor sez: we're not in anyone's > smart lightcone. The proof of extraterrestrial life wasn't the fundamental portion of my argument. Closer to the center is that with human psychology running the show, many, if not most, people would likely retreat into VR given the choice. >> > When something is postulated to you it's usually bunk. Novelty >> > and too small group for peer review pretty much see to that. >> >> When I look at teenagers lost in iPods, it doesn't seem like bunk to >> think that they could positively be swallowed alive by an interesting >> virtual reality. I have relatives who have addiction to WoW that makes >> a heroin addict look like a weekend social drinker. > > Have you seen the birth rate and retention rate of Amish? No. What is your point? Really, I don't understand what you're implying. 
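On the Dyson FIR emitters a few exchanges up, a rough blackbody estimate shows why the waste heat would show up in the infrared. The 1 AU shell radius and the assumption that the shell re-radiates the full solar luminosity as an ideal blackbody are illustrative choices:

    # Rough equilibrium temperature of a shell re-radiating a star's full
    # luminosity from its outer surface (assumed: 1 AU radius, solar
    # luminosity, ideal blackbody emission).
    import math

    L_star = 3.828e26        # solar luminosity, W
    R = 1.496e11             # shell radius, m (1 AU, assumed)
    sigma = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

    T = (L_star / (4 * math.pi * R**2 * sigma)) ** 0.25
    peak = 2.898e-3 / T      # Wien's displacement law, metres

    print(f"Shell temperature: {T:.0f} K")                      # ~390 K at 1 AU
    print(f"Emission peak wavelength: {peak * 1e6:.1f} um")     # ~7 um, infrared

A shell built further out runs cooler (T scales as 1/sqrt(R)), pushing the peak toward the far infrared; either way the signature is an object of stellar luminosity radiating only in the IR, which is what the dedicated searches were looking for.
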
>> growth cannot continue forever indicates that there will be Darwinian >> processes for choosing which AGIs get the eventually limited power, > > You're getting it. > >> and which do not. This leads one inevitably to the conclusion that the >> surviving AGIs will be the "fittest" in a survival and reproduction >> sense. It will be a very competitive world for unenhanced human beings >> to compete in, to say the least. > > Exactly. I don't hold out a lot of optimism for humans as they now exist. I hold out a LOT of optimism for our progeny (whatever form that progeny takes). Responding to the thread as a whole, Darwinian forces in the future will probably be more focused on the replication of temes, (techno memes) rather than physical reproduction. Memes and temes reproduce much faster than genes, and overpower the slower replicators without much effort (Dawkins, Blackmore). No matter how smart the machines are in the future, no one of them will know everything that all the others know, and so there will always remain a necessity for some kind of networking or shared memory. Even viewed as one super organism, there are organelles or cells that contain unique information that will want to be copied and processed. If some kind of economy survives the singularity, then some information will still have some value that will have to be paid for by some form of money. I haven't seen much written to this point on money and economics in the coming Singularity. Have I missed something? -Kelly From anders at aleph.se Tue Feb 22 00:03:39 2011 From: anders at aleph.se (Anders Sandberg) Date: Tue, 22 Feb 2011 00:03:39 +0000 Subject: [ExI] Celebrating the list In-Reply-To: References: <20110220153749.d32794d095cdfcc0018508d9c136b552.4eb506ce65.wbe@email09.secureserver.net> Message-ID: <4D62FD5B.7070102@aleph.se> Max More wrote: > Hey, isn't the Extropy-Chat/Extropians email list about to have it's > 20th anniversary? That's remarkably long-lived for an Internet forum. Cool! > > What can we do to celebrate? And when, exactly, did the list start? I > seem to recall that it was August or September of 1991. However, my > memory is far from the best in the world. Can someone tell us the > exact date? (I could if I had the backup archives, but they are back > in Austin.) As a relative newcomer (I think I first joined around 1993) I hjave unfortunately no good data on exactly when it began. But I do think we ought to have a mailinglist birthday party online :-) -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From lubkin at unreasonable.com Tue Feb 22 00:42:33 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 21 Feb 2011 19:42:33 -0500 Subject: [ExI] Extropy anniversary (Was: Re: Call To Libertarians) In-Reply-To: <20110221155020.d32794d095cdfcc0018508d9c136b552.90590f9419 .wbe@email09.secureserver.net> References: <20110221155020.d32794d095cdfcc0018508d9c136b552.90590f9419.wbe@email09.secureserver.net> Message-ID: <201102220042.p1M0gAYD019698@andromeda.ziaspace.com> Max wrote: > Hey, isn't the Extropy-Chat/Extropians email list about to have it's 20th > anniversary? That's remarkably long-lived for an Internet forum. > > What can we do to celebrate? And when, exactly, did the list start? I seem > to recall that it was August or September of 1991. However, my memory is far > from the best in the world. Can someone tell us the exact date? (I could if > I had the backup archives, but they are back in Austin.) 
Harvey replied: >All my old references agree that it was founded in 1991. But I can't >find a more precise date. I have the original announcement that was posted to sci.nanotech (I think that's where I saw it) but it will take a while to locate. I'm converting data formats, consolidating, and indexing. Meanwhile, I just did a quick poke in Google. I didn't find the reference, but I did see a copy of a 1991 report on the suspension of Jerry Leaf from Cryonics magazine. Some guy named Max More appears in the listed Crew for Transfer Dry Ice to Liquid Nitrogen Cooling. His task was listed as "Strong back." http://www.alcor.org/Library/html/JerryLeafEntersCryonicSuspension.htm -- David. From kellycoinguy at gmail.com Mon Feb 21 23:55:31 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 21 Feb 2011 16:55:31 -0700 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: <201102192014.p1JKEeST027600@andromeda.ziaspace.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> Message-ID: Wikipedia went through a similar issue a few years back contacted every single publisher to Wikipedia and ask them to click through a contract that assigned their text through Creative Commons. I don't know what happened to content that was anonymous, or where the author failed to sign the agreement. They changed their user agreement around that time so that everyone by default was contributing through Creative Commons. Maybe there is a useful precedent there. -Kelly On Sat, Feb 19, 2011 at 1:15 PM, David Lubkin wrote: > The terms under which the original list functioned require permission of a > posting's author before dissemination beyond that list's membership. It > would, however, be legitimate to share one's archives with someone else > who'd been on the list at the time of a posting, and I think to someone who > joined that list after the date of the posting. Anything beyond that means > finding folks and getting permissions. (One of the messy questions to deal > with is what if Keith was replying to and quotes something Perry said. Keith > gives permission; Perry doesn't.) > > I am now building systems for other communities I'm part of that have > similar problems. I think what I'm doing will be readily adaptable to the > original list archive issue. > > > -- David. > > Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From kanzure at gmail.com Tue Feb 22 01:07:40 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 21 Feb 2011 19:07:40 -0600 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> Message-ID: On Mon, Feb 21, 2011 at 7:02 PM, Bryan Bishop wrote: > On Mon, Feb 21, 2011 at 5:55 PM, Kelly Anderson wrote: > >> contract that assigned their text through Creative Commons. I don't >> know what happened to content that was anonymous, or where the author >> failed to sign the agreement. 
They changed their user agreement around >> that time so that everyone by default was contributing through >> Creative Commons. > > > I thought that FSF published an update to the GFDL that allowed Wikimedia > to transfer over their content to a Creative Commons license? Presumably, > everyone making an edit to Wikipedia is assigning the copyright of their > original content to either the Free Software Foundation (per the GFDL) or to > Wikimedia, who would then be able to control whether or not to port it to a > Creative Commons license. > > or am I horribly misinformed? http://en.wikipedia.org/wiki/GNU_Free_Documentation_License#Compatibility_with_Creative_Commons_licensing_terms """ Although the two licenses work on similar copyleft principles, the GFDL is not compatible with the Creative Commons Attribution-ShareAlikelicense. However, version 1.3 added a new section allowing specific types of websites using the GFDL to additionally offer their work under the CC-BY-SA license. These exemptions allow a GFDL-based collaborative project with multiple authors to transition to the CC-BY-SA 3.0 license (which would normally require the permission of every author), if the work satisfies several conditions:[2] - The work must have been produced on a "Massive Multiauthor Collaboration Site" (MMC), such as a public wikifor example. - If external content originally published on a MMC is present on the site, the work must have been licensed under Version 1.3 of the GNU FDL, or an earlier version but with the "or any later version" declaration, with no cover texts or invariant sections. If it was not originally published on an MMC, it can only be relicensed if it were added to an MMC before November 1, 2008. To prevent the clause from being used as a general compatibility measure, the license itself only allowed the change to occur before August 1, 2009. At the release of version 1.3, the FSF stated that all content added before November 1, 2008 to Wikipedia as an example satisfied the conditions. The Wikimedia Foundation itself after a public referendum, invoked this process to dual-license content released under the GFDL under the CC-BY-SA license in June 2009, and adopted a foundation-wide attribution policy for the use of content from Wikimedia Foundation projects.[7] """ - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Tue Feb 22 01:02:34 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 21 Feb 2011 19:02:34 -0600 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> Message-ID: On Mon, Feb 21, 2011 at 5:55 PM, Kelly Anderson wrote: > contract that assigned their text through Creative Commons. I don't > know what happened to content that was anonymous, or where the author > failed to sign the agreement. They changed their user agreement around > that time so that everyone by default was contributing through > Creative Commons. I thought that FSF published an update to the GFDL that allowed Wikimedia to transfer over their content to a Creative Commons license? 
Presumably, everyone making an edit to Wikipedia is assigning the copyright of their original content to either the Free Software Foundation (per the GFDL) or to Wikimedia, who would then be able to control whether or not to port it to a Creative Commons license. or am I horribly misinformed? - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Feb 22 01:04:51 2011 From: sparge at gmail.com (Dave Sill) Date: Mon, 21 Feb 2011 20:04:51 -0500 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> Message-ID: On Mon, Feb 21, 2011 at 6:55 PM, Kelly Anderson wrote: > Wikipedia went through a similar issue a few years back contacted > every single publisher to Wikipedia and ask them to click through a > contract that assigned their text through Creative Commons. I don't > know what happened to content that was anonymous, or where the author > failed to sign the agreement. They changed their user agreement around > that time so that everyone by default was contributing through > Creative Commons. > > Maybe there is a useful precedent there. Why don't we just anonymize the archives for people who don't want their articles associated with them? Just assign them IDs like "Anonymous #1", "Anonymous #2", etc. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Tue Feb 22 02:10:38 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 21 Feb 2011 21:10:38 -0500 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> Message-ID: <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Dave Sill wrote: >Why don't we just anonymize the archives for people who don't want >their articles associated with them? Just assign them IDs like >"Anonymous #1", "Anonymous #2", etc. Because that's theft and breach of contract. We original-list authors own what we wrote. As such, we (largely) control what may lawfully be done with it. Taking our names off doesn't change that. Also, separately we, as a society unto ourselves, practicing "privately produced law," stated very clearly and repeatedly that nothing posted to the list could be used off the list without the specific consent of the authors in question. And this list is not that list. -- David. From kellycoinguy at gmail.com Tue Feb 22 03:00:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 21 Feb 2011 20:00:44 -0700 Subject: [ExI] Watson On Jeopardy. 
In-Reply-To: <4D5EB411.9090400@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> Message-ID: On Fri, Feb 18, 2011 at 11:01 AM, Richard Loosemore wrote: > Kelly Anderson wrote: >> >> On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore >> wrote: > > This is good. ?I am happy to try. ?Don't interpret the post I just wrote as > being too annoyed (just a *little* frustrated is all). ?;-) Don't think I'm too annoyed with you either. It is frustrating to ask a seemingly straightforward question, and then get an answer to a different question. >> If you wrote a paper entitled "Why Watson is an Evolutionary Dead >> End", and you were convincing to your peers, I think you would get it >> published and it would be helpful to the AI community. > > Well, can I point out that the numbers are not 99% in favor? That sentence does not quite parse. >?Ben Goertzel > just published an essay in H+ magazine saying very much the same things that > I said here. ?Ben is very widely respected in the AGI community, so perhaps > you would consider comparing and constrasting my remarks with his. I assume you are talking about this: http://hplusmagazine.com/2011/02/17/watson-supercharged-search-engine-or-prototype-robot-overlord/ Quoting from his article... "My initial reaction ... was a big fat ho-hum. ... I?m an AI guru so I know pretty much exactly what kind of specialized trickery they?re using under the hood. It?s not really a high-level mind, just a fancy database lookup system.? "But while that cynical view is certainly technically accurate, I have to admit that when I actually watched Watson play Jeopardy! on TV ? and beat the crap out of its human opponents ? I felt some real excitement ? and even some pride for the field of AI. Sure, Watson is far from a human-level AI, and doesn?t have much general intelligence. But even so, it was pretty bloody cool to see it up there on stage, defeating humans in a battle of wits created purely by humans for humans ? playing by the human rules and winning." "But even so, the technologies underlying Watson are likely to be part of the story when human-level and superhuman AGI robots finally do emerge." >End Quote And you have said exactly the opposite of this, on this list. Quote> "Both the Watson strategy and the human strategy are valid ways of playing Jeopardy! But, the human strategy involves skills that are fairly generalizable to many other sorts of learning (for instance, learning to achieve diverse goals in the physical world), whereas the Watson strategy involves skills that are only extremely useful for domains where the answers to one?s questions already lie in knowledge bases someone else has produced." >End quote I think this comes closest to saying something both reasonable, and in agreement with what you've said. Note however, the gentler softer language used. Watson was not denigrated as "trivial" but pointed out (correctly) as having solved the problem in an entirely different manner than human beings would. 
The final question is whether the eventual AI that evolves from all of the current experimentation will be a huge collection of parlor tricks, or something that reasons more like a real human being. I would assume you think the latter, and in that you may well be correct. Give Watson the assignment to collect "common sense" from the Internet for a few years, and he might be able to assemble a very large collection of common sense. Perhaps large enough to never make obvious stupid mistakes. Perhaps. > I don't want to write about Watson, because I have seen so many examples of > that kind of dead end and I have already analzed them as a *class* of > systems. ?That is very important. ?They cannot be fought individually. I am > pointng to the pattern. What is this pattern? I am (as a human being) an expert pattern recognizer. I am familiar with a number of approaches to AGI. Yet, I am having trouble recognizing the pattern you seem to think is so clear. Can you spell it out succinctly? (I realize this is a true challenge) >>> Also, why do you say "self-described scientist"? ?I don't understand if >>> this is supposed to be >>> me or someone else or scientists in general. >> >> Carl Sagan, a real scientist, said frequently, "Extraordinary claims >> require extraordinary evidence." (even though he may have borrowed the >> phrase from Marcello Truzzi.) I understand that you are claiming to >> follow the scientific method, and that you do not think of yourself as >> a philosopher. If you claim to be a philosopher, stand up and be proud >> of that. Some of the most interesting people are philosophers, and >> there is nothing wrong with that. > > :-) ?Well, you may be confused by the fact that I wrote ONE philosophy > paper. Hehe... which one was that? They all seemed pretty philosophical to my mind. None of them said... here is an algorithm that might lead to general artificial intelligence... > But have a look through the very small set of publications on my website. > ?One experimental archaeology, several experimental and computational > cognitive science papers. ?One cognitive neuroscience paper..... > > I was trained as a physicist and mathematician. For Odin's sake man!!! Why didn't you say this in the beginning?!? This explains EVERYTHING!! > I just finished teaching a > class in electromagnetic theory this morning. ?I have written ?all those > cognitive science papers. ?I was once on a team that ported CorelDraw from > the PC to the Mac. Did you live in Orem? Perhaps we have run into each other before. Your name sounds familiar. > I am up to my eyeballs in writing a software tool in OS > X that is designed to facilitate the construction and experimental > investigation of a class of AGI systems that have never been built > before..... ? ?Isn't it a bit of a stretch to ask me to be proud to be a > philosopher? :-) :-) I am only going off of the papers I could find. Point me to the one specific paper that you feel is most scientific and I'll read it again. Happily. >>> And why do you assume that I am not doing experiments?! ?I am certainly >>> doing that, and >>> doing masive numbers of such experiments is at the core of everything I >>> do. >> >> Good to hear. Your papers did not reflect that. Can you point me to >> some of your experimental results? > > No, but I did not say that they did. ?It is too early to ask. Sigh. This reminds me of a story. A mathematician was asked to build a fence around a herd of cattle. 
He built a small corral being careful not to surround any cows, and
then defined the larger area to be "inside". Problem solved.

> Context. Physicists back in the 1980s who wanted to work on the frontiers
> of particle physics had to spend decades just building one tool - the large
> hadron collider - to answer their theoretical questions with empirical data.
> I am in a comparable situation, but with one billionth the funding that
> they had. Do I get cut a *little* slack? :-(

I will never say "It will never fly Orville"... (other perhaps than in
some very specific narrow issue) that is the slack I will give you.
When you share the results of your experimentation such that other
scientists can replicate your amazing results, then I will say "well
done." As for the physicists, they built a lot of smaller colliders
along the way. There was one about ten feet across in the basement of
the science building at BYU... I'm sure that the preliminary results
that they achieved with those smaller colliders gave the people funding
the hadron collider confidence that they weren't throwing their money
down a rat hole.

> More when I can.

Fair enough.

-Kelly

From msd001 at gmail.com Tue Feb 22 04:16:58 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Mon, 21 Feb 2011 23:16:58 -0500
Subject: [ExI] Watson On Jeopardy.
In-Reply-To:
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
 <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
 <4D5AADA7.8060209@lightlink.com>
 <201102151955.p1FJto5v017690@andromeda.ziaspace.com>
 <4D5BEB27.7020204@lightlink.com>
 <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net>
 <4D5C2DA9.9050804@lightlink.com>
 <4D5C7657.6070405@lightlink.com>
 <4D5D1897.4030906@lightlink.com>
 <4D5E751C.2060008@lightlink.com>
 <4D5EB411.9090400@lightlink.com>
Message-ID:

On Mon, Feb 21, 2011 at 10:00 PM, Kelly Anderson wrote:
> done." As for the physicists, they built a lot of smaller colliders
> along the way. There was one about ten feet across in the basement of
> the science building at BYU... I'm sure that the preliminary results
> that they achieved with those smaller colliders gave the people
> funding the hadron collider confidence that they weren't throwing
> their money down a rat hole.

True enough. However, if there really is a tipping point in the number
of neurons in a network that makes a "dumb" slug emerge as a
higher-order intelligence, then any smaller-scale system would likely
be ridiculed as a narrow-domain toy. If you doubt this, consider
Watson's recent achievement: "Yeah sure but can it accomplish anything
I would ask of a biological 4 year old?" If you want to study an
anthill or a beehive and you show someone pictures/movies/papers/etc.
of a single ant or bee, will they have any appreciation of the scale at
which the group dynamic is operating?

Could you have predicted the existence of puffers, gliders, glider
guns, etc. from Conway's basic Game of Life rules? If/when Richard
shows you a paper that proves a 3 cell "organism" (for lack of a better
term) oscillates with a period of 2 and is completely stable in that
configuration, would you be willing to invest in a larger strategy? No?
How many cells at what duration would be convincing enough to spend
your first $million? 'just curious...
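Mike's three-cell example is easy to check by running Conway's rules
directly: a dead cell is born with exactly three live neighbours, a
live cell survives with two or three. The short sketch below is only an
illustration (the step() helper and the coordinates are ad hoc choices,
not anything from the thread); it shows the "blinker" flipping between
a vertical and a horizontal line with period 2 -- and, as with the
larger patterns he mentions, the only way to find that out is to run it.

# Minimal Game of Life sketch, just enough to verify the three-cell
# "blinker" mentioned above.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells that are alive."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(1, 0), (1, 1), (1, 2)}   # three live cells in a vertical line
gen1 = step(blinker)                 # becomes a horizontal line
gen2 = step(gen1)                    # back to the vertical line
print(gen1)                          # {(0, 1), (1, 1), (2, 1)}
print(gen2 == blinker)               # True: a period-2 oscillator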
From possiblepaths2050 at gmail.com Tue Feb 22 03:45:36 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 21 Feb 2011 20:45:36 -0700 Subject: [ExI] Celebrating the list In-Reply-To: <4D62FD5B.7070102@aleph.se> References: <20110220153749.d32794d095cdfcc0018508d9c136b552.4eb506ce65.wbe@email09.secureserver.net> <4D62FD5B.7070102@aleph.se> Message-ID: Anders, I deeply approve of your idea of having a mailing list birthday party! But how are such things done? Please tell me! John : ) On 2/21/11, Anders Sandberg wrote: > Max More wrote: >> Hey, isn't the Extropy-Chat/Extropians email list about to have it's >> 20th anniversary? That's remarkably long-lived for an Internet forum. > > Cool! > >> >> What can we do to celebrate? And when, exactly, did the list start? I >> seem to recall that it was August or September of 1991. However, my >> memory is far from the best in the world. Can someone tell us the >> exact date? (I could if I had the backup archives, but they are back >> in Austin.) > > As a relative newcomer (I think I first joined around 1993) I hjave > unfortunately no good data on exactly when it began. But I do think we > ought to have a mailinglist birthday party online :-) > > > -- > Anders Sandberg, > Future of Humanity Institute > James Martin 21st Century School > Philosophy Faculty > Oxford University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From max at maxmore.com Tue Feb 22 05:37:29 2011 From: max at maxmore.com (Max More) Date: Mon, 21 Feb 2011 22:37:29 -0700 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: Yes, it's a pain in the butt in terms of promulgating those brilliant and stimulating early posts, but we absolutely will continue to abide by the conditions agreed to two decades ago. I am NEVER going to set aside the agreement of that time in favor of satisfying today's (perfectly natural) desire to see the earliest posting to the older transhumanist email list (and one of the very earliest email lists of all). If the early archives are never generally available because of this, then so be it. However, it may be that most of that early material might eventually become available with the consent of the authors. *** Writers from the early 1990s: If you agree to allow all of your postings to the original Extropians email list to be publically available, please let me (and the world -- or at least the Extropians-Chat email list) know. --- Max On Mon, Feb 21, 2011 at 7:10 PM, David Lubkin wrote: > Dave Sill wrote: > > Why don't we just anonymize the archives for people who don't want their >> articles associated with them? Just assign them IDs like "Anonymous #1", >> "Anonymous #2", etc. >> > > Because that's theft and breach of contract. > > We original-list authors own what we wrote. As such, we (largely) control > what may lawfully be done with it. Taking our names off doesn't change that. 
> > Also, separately we, as a society unto ourselves, practicing "privately > produced law," stated very clearly and repeatedly that nothing posted to the > list could be used off the list without the specific consent of the authors > in question. And this list is not that list. > > > -- David. > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Tue Feb 22 07:39:55 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 22 Feb 2011 08:39:55 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <20110221155020.d32794d095cdfcc0018508d9c136b552.90590f9419.wbe@email09.secureserver.net> References: <20110221155020.d32794d095cdfcc0018508d9c136b552.90590f9419.wbe@email09.secureserver.net> Message-ID: <20110222073955.GN23560@leitl.org> On Mon, Feb 21, 2011 at 03:50:20PM -0700, mail at harveynewstrom.com wrote: > All my old references agree that it was founded in 1991. But I can't > find a more precise date. So no dialup BBS prior to that? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Tue Feb 22 09:50:11 2011 From: pharos at gmail.com (BillK) Date: Tue, 22 Feb 2011 09:50:11 +0000 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: <000301cbd1c1$005cb4c0$01161e40$@att.net> References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> <4D620C38.5080704@moulton.com> <000301cbd1c1$005cb4c0$01161e40$@att.net> Message-ID: On Mon, Feb 21, 2011 at 12:15 PM, spike wrote: > Don't worry, BillK. ?There is a culture spreading across Europe which is > diametrically opposed to libertarianism. ?I understand it is growing quite > popular in places such as France and Italy. ?Thirty years from now, the > total disaster and chaos will be the result of any serious attempt to resist > this growing anti-libertarian culture, such as by allowing your wife or > daughter go outdoors uncovered. > > Yes, but......... societies change anyway. Sociology shows that people who have large families in their original country (because so many children die) find that succeeding generations growing up in the West have smaller families and their Western children demand more freedom. There is much stress in immigrant family groups because of these changes. I also understand there is a bit of excitement sweeping through countries like Tunisia, Egypt, Libya, etc. at present. Societies are always in a state of flux. The USA of 2011 is very different from the USA of 1950. BillK From darren.greer3 at gmail.com Tue Feb 22 10:18:20 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 22 Feb 2011 06:18:20 -0400 Subject: [ExI] watson again In-Reply-To: <008501cbd224$6d08f3f0$471adbd0$@att.net> References: <20110221160951.d32794d095cdfcc0018508d9c136b552.718f250857.wbe@email09.secureserver.net> <008501cbd224$6d08f3f0$471adbd0$@att.net> Message-ID: On Mon, Feb 21, 2011 at 8:06 PM, spike wrote: > ... 
On Behalf Of mail at harveynewstrom.com >Anyone here much under about 50, ask< Under forty five. I'm 43 and I remember him well. Sunday afternoon when the countdown came on I'd settle into my room with my homework or a novel and listen to the entire thing on a ghetto-blaster I'd won for selling the most chocolate bars for a funding drive in Junior High school. I was there when Madonna first hit the charts with Material Girl, and the Canadian Band Sheriff hit number one with I Miss You and Twisted Sister rocked up the airwaves with We're Not Gonna Take It. You're right, Spike. Good memories. But I think I'm kinda getting to be a relic from the paleolithic. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Feb 22 10:56:45 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Feb 2011 11:56:45 +0100 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) In-Reply-To: <4D616BFF.2000502@gnolls.org> References: <4D616BFF.2000502@gnolls.org> Message-ID: On 20 February 2011 20:31, J. Stanton wrote: > On 2/19/11 10:46 AM, Richard Loosemore wrote: >> >> Taxation and government and redistribution of wealth are what >> separate us from the dark ages. ?The concept of taxation + government >> + redistribution of wealth was the INCREDIBLE INVENTION that allowed >> human societies in at least one corner of this planet to emerge from >> feudal societies where everyone looked after themselves and the devil >> took the hindmost. > > This is a breathtakingly counterfactual statement. I am inclined to agree. "Taxation" is an old invention indeed, and not a very clear-cut one for that matter (what about compulsory or heavily-encouraged community services in hunting-and-gathering tribes?). On the other hands, it is absolute private property of wealth in the modern sense which is a relatively new concept. The feodal lords were not the *owner" of their land in the modern sense, they were rather enjoying a privilege which could be accorded and under some circumstances revoked, had a limited if any transferability, was supposed to be parcelled through further concessions to lower lords (vavasours, vassals of vavasours), etc. Moreover, commonal property took the place to some extent of wealth distribution through services paid by taxes levied by a central bureacracy. -- Stefano Vaj From bbenzai at yahoo.com Tue Feb 22 12:53:07 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 22 Feb 2011 04:53:07 -0800 (PST) Subject: [ExI] Brief correction re Western Democracies In-Reply-To: Message-ID: <764392.59473.qm@web114411.mail.gq1.yahoo.com> "spike" wroet: > > ... On Behalf Of BillK > ... > > >...The main point, of course, is that libertarian > policies on a national > level are a recipe for total disaster and chaos. Around the > world almost > anything is preferable...BillK > > Don't worry, BillK.? There is a culture spreading > across Europe which is > diametrically opposed to libertarianism.? I understand > it is growing quite > popular in places such as France and Italy.? Thirty > years from now, the > total disaster and chaos will be the result of any serious > attempt to resist > this growing anti-libertarian culture, such as by allowing > your wife or > daughter go outdoors uncovered. Indeed. 
Or educating them: http://tinyurl.com/65q8csf Ben Zaiboc From kellycoinguy at gmail.com Tue Feb 22 13:30:14 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 22 Feb 2011 06:30:14 -0700 Subject: [ExI] Complex AGI [WAS Watson On Jeopardy] In-Reply-To: <4D5EB0FF.7000007@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5EB0FF.7000007@lightlink.com> Message-ID: On Fri, Feb 18, 2011 at 10:48 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > Well, I am struggling to find positive things to say, because > you're tending to make very sweeping statements (e.g. "this is just > philosophy" and "this is not science") that some people might interpret > as quite insulting. I don't mean to be insulting. I am trying to draw out something real, useful and substantial from you. I have some degree of faith that you have something interesting to say, and I'm trying to get at it. I don't think I am confused about what you have said vs what John has said. Twenty years on mailing lists has focused my mind fairly well on keeping who said what straight. Funny that I've learned to do that without any conscious effort... the brain really is amazing. > So what have I actually claimed? ?What have I been defending? YES, YES, YES, that is what I want to know! >?Well, > what I do say is that IMPLICIT in the papers I have written, there is > indeed an approach to AGI (a framework, and a specific model within that > framework). ?There is no way that I have described an AGI design > explictly, in enough detail for it to be evaluated, and I have never > claimed that. ?Nor have I claimed to have built one yet. ?But when pressed > by people who want to know more, I do point out that if they understand > cognitive psychology in enough detail they will easily be able to add up all > the pieces and connect all the dots and see where I am going with the work I > am doing. Well it's good to know that when you fly the planes into the established AI buildings, we will be able to say we should have connected the dots. :-) > The problem is that, after saying that you read my papers already, you > were quite prepared to dismiss all of it as "philosophizing" and "not > science". ? I tried to explain to you that if you understood the > cognitive science and AI and complex systems background from which the > work comes, you would be able to see what I meant by there being a > theory of AGI implicit in it, and I did try to explain in a little more > detail how my work connects to that larger background. ?I pointed out > the thread that stretches from the cog psych of the 1980s, through > McClelland and Rumelhart, through the complex systems movement, to the > particular (and rather unusual) approach that I have adopted. Referring to a group of other people's work, saying, "read this with this other thing in mind", is a little like me saying, if you read Wikipedia thinking about some topic, you'll come up with the result. Be a little more explicit. A little less vague. That's all I'm asking for. I didn't get much of a specific nature about your particular approach from your papers. 
> I even pointed out the very, very important fact that my complex systems > paper was all about the need for a radically different AGI methodology. > ?Now, I might well be wrong about my statement that we need to do things in > this radically different way, but you could at least realize that I have > declared myself to be following that alternate methodology, and therefore > understand what I have said about the priority of theory and a particular > kind of experiment, over hacking out programs. ?It is all there, in the > complex systems paper. What I hear is you railing against the current state of the art, but without suggesting something different in a specific way. You do suggest a vague framework generator, which is interesting, but not useful in a SCIENTIFIC way. i.e. it does not immediately suggest an experiment that I can reproduce. > But even after me pointing out that this stuff has a large context that > you might not be familiar with, instead of acknowledging that fact, you are > still making sweeping condemnations! ?This is pretty bad. I am roughly familiar with most of the context you give. The only sweeping condemnation I have given is that you sweepingly condemn your "competition" and that you haven't yet shared any useful results. You have admitted the second now, so I see that as progress. It is your callous negation of the work of others that I condemn, not your work. I don't understand enough about your work to condemn it, and I haven't condemned your work, just your approach to everyone else. > More generally: > > I get two types of responses to my work. ?One (less common) type of > response is from people who understand what I am trying to say well > enough that they ask specific, focussed questions about things that are > unclear or things they want to challenge. ?Those people clear understand > that there is a "there" there .... if the papers I wrote were empty > philosophising, those people would never be ABLE to send coherent > challenges or questions in my direction. ?Papers that really are just > empty philosophising CANNOT generate that kind of detailed response, > because there is nothing coherent enough in the paper for anyone to get > a handle on. OK. I haven't given any of those types of responses at this time, but I give some of my thoughts later in this (longish) email. > Then there is the second kind of response. ?From these people I get > nothing specific, just handwaving or sweeping condemnations. ?Nothing > that indicates that they really understood what I was trying to say. I think I have stated fairly clearly that I haven't understood the details of your ideas. You haven't shared enough for me to do that. Perhaps I have unfairly blamed you for that. Perhaps if I were to spend months digging into your ideas I would come up with something solid to refute or agree with. > They reflect back my arguments in a weird, horribly distorted form > -- so distorted that it has no relationship whatsoever to what I > actually said -- and when I try to clarify their misunderstandings > they just make more and more distorted statements, often wandering > far from the point. ?And, above all, this type of response usually > involves statements like "Yes, I read it, but you didn't say anything > meaningful, so I dismissed it all as empty philosophising". Richard, there is nothing *empty* about your philosophy. But as a computer scientist I don't see anything concrete, reproducible and useful from your papers so far. 
It isn't a put down to call it philosophy when all it is is a general idea about where things should go. > I always try to explain and respond. ?I have put many hours into > responding to people who ask questions, and I try very hard to help > reduce confusions. ?I waste a lot of time that way. ?And very often, I > do this even as the person at the other end continues to deliver mildly > derogatory comments like "this isn't science, this is just speculation" > alongside their other questions. It must be frustrating. I have a glimpse of where you are going. It isn't speculation, and some day it may become science. Today, however, in the paper you had me look at, it is not yet presented in a scientific manner. That's all I've said, and if you feel that is a poor description of what you do, all I can say is that is how your paper reads. > If you want to know why this stuff comes out of cognitive psychology, by > all means read the complex systems paper again, and let me know if you > find the argument presented there, for why it HAS to come out of > cogntive psychology. ?It is there -- it is the crux of the argument. ?If you > believe it is incorrect, I would be happy to debate the rationale for it. Here I assume you are referring to your 2007 paper entitled "Complex Systems, Artificial Intelligence and Theoretical Psychology" (I would point out that my spending 10 minutes finding what I THINK is the right paper is indicative of the kind of useless goose chasing I have to go on to have a conversation with you)... 8:37PM ... Carefully Reading ... "It is arguable that intelligent systems must involve some amount of complexity, and so the global behavior of AI systems would therefore not be expected to have an analytic relation to their constituent mechanisms." What other kind of relation would a global result have to the constituent algorithms? After reading the whole paper, I know what you are getting at, but this particular sentence doesn't grok well in an abstract. "the results were both impressive and quick to arrive." Quick results, I like the sound of that. Of course the paper is now nearly four years old... have you achieved any quick results that you can share? "If the only way to solve the problem is to declare the personal philosophy and expertise of many AI researchers to be irrelevant at best, and obstructive at worst, the problem is unlikely to even be acknowledged by the community, let alone addressed." i.e. Everyone else is stupid. That's a good way to get people interested in your research. I, for one, am trying to get past your ego. "A complex system is one in which the local interactions between the components of the system lead to regularities in the overall, global behavior of the system that appear to be impossible to derive in a rigorous, analytic way from knowledge of the local interactions." To paraphrase, a complex system is non-deterministic, or semi-random, or at least incomprehensible. I suppose that describes the brain to some extent, so despite the confrontational definition, I admit that you might be onto something here. At least it is a clear definition of what you mean by "complex system." It sounds similar to chaos theory as well, and perhaps you are thinking in that direction. You discuss the "problem space", and then declare that there is only a small portion of that space that can be dealt with analytically. 
Fair enough, AND the human brain can only attack a small part of the "problem space" which overlaps partially with the part of the space that analysis can crack. Think of a Venn Diagram. We obviously need more kinds of intelligence to crack all of the problems out there in the "problem space". I would point out that Google and Watson are the first of a series of problem solvers that may create another circle in the Venn diagram, but it is hard to say if this really is the case at this point. I suspect that we will eventually see many circles in such a Venn diagram. I have spoken to my friends for years about Gestalt emergent intelligence (as of ant colonies, neural nets, etc.). I believe in that. I don't think your global-local disconnect is terribly different from the common sense notion that the "whole is greater than the sum of its parts" so I accept that. Wolfram's "computational irreducibility" comments relate to the idea that you can't know the results of running some programs until you actually run them. That is, there is no shortcut to the answer. While you explain this concept well, I don't see how it applies to AI systems. That may be my limitation. However, the only way to see how Watson is going to answer a particular question is to ask it. That seems to be approximate to computational irreducibility. I saw how Wolfram himself described how computational irreducibility relates to Life. He explained it well, and in a manner consistent with your paper. "This seems entirely reasonable?and if true, it would mean that our search for systems that exhibit intelligence is a search for systems that (a) have a characteristic that we may never be able to define precisely, and (b) exhibit a global-local-disconnect between intelligence and the local mechanisms that cause it." I agree with this statement. I think I understand this statement in a deep way. Perhaps even in a similar way as you intend it to be understood. It is more of a philosophical statement than a scientific one (I don't mean that in a negative way, just a descriptive way, in that while you can believe it, it may be impossible to prove). I also would add that systems like Watson have both of these characteristics. It surprises it's creators every time it plays. In many cases, I think they are stupefied as to how Watson does it. "In the very worst case we might be forced to do exactly what he was obliged to do: large numbers of simulations to discover empirically what components need to be put together to make a system intelligent." This sounds like the Kurzweil 'reverse engineer the brain' and then optimize approach. Thus far, this is one of the more plausible methodologies I've heard suggested, and there is a lot of great work going on in this direction. Its a little more directed and understandible than your search for multiple kinds of intelligences. "If, as part of this process, we make gradual progress and finally reach the goal of a full, general purpose AI system that functions in a completely autonomous way, then the story ends happily." I think this is what I said about Watson. To be fair, you do quickly counter this statement. "This is fairly straightforward. All we need to do is observe that a symbol system in which (a) the symbols engage in massive numbers of interactions, with (b) extreme nonlinearity everywhere, and (c) with symbols being allowed to develop over time, with (d) copious interaction with an environment, is a system that possesses all the ingredients of complexity listed earlier. 
On this count, there are very strong grounds for suspicion." You go on to praise the earlier work in back propagation neural networks. I think those are pretty cool too, and that sort of approach is inherently more human-like and "complex" than the historical large LISP symbol processing programs. The problem (IMHO) historically has been that neural networks haven't been commonly realized in hardware (this may be changing with FPGAs and such), and that they are typically implemented as digital systems instead of analog systems. You encourage your reader to "[remain] as agnostic as possible about the local mechanisms that might give rise to the global characteristics we desire" and "we should organize our work so that we can look at the behavior of large numbers of different approaches in a structured manner." I would suggest that you take this more to heart. If one of the local mechanisms is a lexical analyzer of text, or a gender analysis, or a neural network weighing the importance of the various lower levels of the complex system, all that should be good for you. Yet, you dismiss JUST SUCH a system as "trivial". Watson is a good test subject for your framework analyzer. Yes, it was produced by hand by nearly 100 people over four years, but you didn't put any effort into it. "But if the Complex Systems Problem is valid, this reliance on mathematical tractability would be a mistake, because it restricts the scope of the field to a very small part of the pace of possible systems. There is simply no reason why the systems that show intelligent behavior must necessarily have global behaviors that are mathematically tractable (and therefore computationally reducible). Rather than confine ourselves to systems that happen to have provable global properties, we should take a broad, empirical look at the properties of large numbers of systems, without regard to their tractability." I agree with this statement. That may surprise you. I think that a neural network that can be mathematically proven to be equivalent to a Bayesian analysis should be replaced with the Bayesian analysis (unless the NN can be implemented in hardware, and then there is good reason from an efficiency aspect to use that approach). "The only way to find this out is to do some real science." Here is a statement that I can support 100%. No reservations. You eschew the study of neurons directly in part because "we have little ability to report subjective events at the millisecond timescale." While I grant that was mostly true in 2007 when you wrote the paper, it is MUCH less true today. You then discuss a kind of framework generating system that would use something analogous to a genetic algorithm to create (complex) frameworks that could then be evaluated for their ability to exhibit cognition. The question in genetic terms is what should the fitness test be? You don't really answer that. Other than that, this is an interesting idea that might be reducible to a concrete approach if more details were forthcoming. "The way to make it possible is by means of a software development environment specifically designed to facilitate the building and testing of large numbers of similar, parameterized cognitive systems. The author is currently working on such a tool, the details of which will be covered in a later paper." I assume we are still waiting for this paper. What I can't begin to understand is how a researcher would be able to determine if such a system was good or bad at the rate of several per day. 
That seems analogous to taking every infant in the hospital, interacting with them for two hours, and trying to determine which one would make the best theoretical physicist. That part seems hard. I am definitely one of the "scruffs" you describe in your conclusion. I am not tied to mathematical elegance in any way. I'm more impressed with what works. Watson works, and therefore I am impressed by that. > But, please, don't read several papers and just say afterward "All I > know at this point is that I need to separate the working brain from the > storage brain. Congratulations, you have recast the brain as a Von > Neumann architecture". ? It looks more like I should be saying, if I were > less polite, "Congratulations, you just understood the first page of a > 700-page cognitive pscychology context that was assumed in those papers". > ?But won't ;-). It is now 10:15 PM. I have spent nearly two hours reading the paper you described as being among your best efforts. Much of what is in your paper is true. Some is conjecture, and you make it pretty clear when it is. The argument that intelligence requires an irreducible system is interesting, and possibly mathematically true, even though you don't necessarily claim that. Knowing that doesn't seem to help much in designing a system, but that could be a lack of imagination and/or knowledge on my side. The proposal to develop a framework generator is interesting too, and like Einstein's thought experiments (riding light beams and so forth) it may lead in a fruitful direction. I know enough reading this paper that I am interested in reading the next paper promised (if and when it is ever finished). All that being said, I stick by what I said earlier. This particular paper is more a work of philosophy than science. Please don't be offended by that. I have a GREAT deal of respect for philosophy. This paper may, in fact, be a great work of philosophy. Remember that the meaning of philosophy is a love of thought. It is clear that a lot of thought has gone into it. But there is no evidence in the paper other than an appeal to common sense (albeit a very vertical kind of common sense) that the assertions made therein are correct. There are no experiments to be repeated (other than the thought experiments). There is no program to run to verify your results (partially because you don't claim any yet). There are no algorithms shared. There are descriptions of complex systems, but only conjecture that it may be important. In addition, there is a contempt for the work of others. Now, being a big Ayn Rand fan, I can actually admire that kind of individualism, but you only get the right to impose that level of self assurance on others AFTER your work has produced some results. In the mean time, you will have to live with working in the rock quarry. (See The Fountainhead - Ayn Rand) Richard, you are a good, and perhaps a great philosopher. You may be a good or great scientist too, but that is indiscernible from that paper. I reviewed this twice for tone... hopefully, it isn't too insulting. It isn't meant to be. 
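The "framework generating system" passage a few paragraphs up is
concrete enough to sketch in code. What follows is only an illustration
of the general shape of such a harness, under invented assumptions: the
parameter names, sample_candidate() and the fitness() stub are all
placeholders, not Loosemore's tool, and the hard part -- what the
fitness test should actually measure -- is left exactly as open as the
post says it is.

# Illustrative parameter-sweep harness for "large numbers of similar,
# parameterized cognitive systems". All names here are hypothetical.
import random

PARAM_RANGES = {                        # knobs for one candidate framework
    "connection_density": (0.01, 0.5),
    "learning_rate": (1e-4, 1e-1),
    "noise_level": (0.0, 1.0),
}

def sample_candidate():
    """Draw one random point in the parameter space."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def fitness(params):
    """Placeholder score for how 'cognitive' the resulting system behaves.
    A dummy objective is used only so the harness runs end to end."""
    return -abs(params["connection_density"] - 0.1)

def search(n_candidates=1000):
    """Evaluate many candidates and keep the best -- selection only,
    without crossover or mutation, to keep the sketch short."""
    return max((sample_candidate() for _ in range(n_candidates)), key=fitness)

if __name__ == "__main__":
    print(search())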
-Kelly From anders at aleph.se Tue Feb 22 14:20:10 2011 From: anders at aleph.se (Anders Sandberg) Date: Tue, 22 Feb 2011 14:20:10 +0000 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> <20110218125146.GJ23560@leitl.org> Message-ID: <4D63C61A.301@aleph.se> Kelly Anderson wrote: > On Fri, Feb 18, 2011 at 5:51 AM, Eugen Leitl wrote: > >> Not even for human equivalent, nevermind at 10^6 to 10^9 speedup. >> I don't think you can go below 1-10 W for a human realtime equivalent. >> > ... > > According to several sites on the Internet the human brain uses 20-40 > Watts. Some of that undoubtedly goes for biological purposes that are > not directly supportive of computation. > > It seems very pessimistic to say that we could only improve by 2-40 > times over nature. Granted nanowatts may be overly optimistic, and is > based on no currently known technology. Nevertheless, I see no reason > to believe that that the bottom is 1 Watt. > The basal metabolic rate for humans is about 70-80 Watts, so assuming an average weight of 70-80 kg, we get a basic dissipation of about 1 Watt/kg. The brain dissipates the rest because it runs lots of ion pumps to restore membrane potentials, as well as some possibly costly synaptic remodeling. It is a horrendously inefficient Rube-Goldberg scheme, yet surprisingly tough to beat. The real issue is how much computation you need to replace brains and how much this has to dissipate. I have made some estimates that the likely range for brain emulation is 10^22 to 10^25 flops. Right now the Roadrunner does 376 Mflops/W, so we are *far* away. But the Darpa exascale study suggests we can do 10^12 flops per watt using extrapolated but not blue sky technology - a lot of current computation is very wasteful, and it is just recently heat dissipation has become a towering problem. Quantum dot cellular automata could give 10^19 flops per watt, putting the energy needs at 200-2000 watts per brain. http://netalive.startlogic.com/debenedictis.org/erik/Publications-2005/Reversible-logic-for-supercomputing-p391-debenedictis.pdf As I noted in my essay on this, http://www.aleph.se/andart/archives/2009/03/a_really_green_and_sustainable_humanity.html while this energy demand is higher than the biological brain it can be supplied more efficiently than growing organisms, harvesting them, possibly passing them through other animals, and then digesting them. Even this kind of not-Drexlerian nanotech computing would be very green. Estimating the ultimate limits is hard, since we do not know how many dissipative calculations we need. Assuming one irreversible operation every millisecond at every synapse leads to 10^17 dissipating operations per second and an energy dissipation of 3*10^-6 watts per degree (colder computers are more efficient). So even here nanowatts is going to be tough (cooling below a few Kelvin is expensive), but less than a milliwatt per brain seems entirely feasible using LN- if we have reversible computers with little need for error correction. >> Reversible logic is slow, and it's not perfectly reversible. >> Not necessarily, just a lot of the current proof-of-concept designs. I expect that once we actually start working on it seriously we are going to optimize it quite a lot, including how to get the error correction (which dissipates) done in a clean fashion. 
It wouldn't surprise me if there was a practical tradeoff between speed and dissipation, though (all those quantum limits to computation involve energy, and fast changes do involve high wattages that are hard to keep dissipationless). > An interesting question to be answered is what is the most limiting > factor? Is it matter out of which to build intelligence? Is it energy > to power it? Time to run it? Or space to house it? Or is there some > other limiting factor? I think it will take a while for the > exponential growth to stop, but it must eventually stop. I'm just not > sure which of the above is the most limiting factor. Only time and > technology will tell. I'm not sure we can even guess at this point > what the most limiting factor will be. > In the really long run you cannot get more mass than around 10^52 kg, due to the accelerated expansion of the universe. And there are time limits due to proton decay and quantum noise. But long before that lightspeed lags will make it hard to maintain cohesive thinking systems when the communications delays become much longer than the local processing cycles. A lot of the limits depend on what you *want* minds to do. Experiencing pleasure doesn't require long-range communications or even much storage space, while having the smartest possible mind requires a lot of communications and resources. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From hkeithhenson at gmail.com Tue Feb 22 14:22:43 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 22 Feb 2011 07:22:43 -0700 Subject: [ExI] Original list Was: Re: Call To Libertarians (Max More) Message-ID: On Tue, Feb 22, 2011 at 3:56 AM, Max More wrote: snip > *** Writers from the early 1990s: If you agree to allow all of your postings > to the original Extropians email list to be publically available, please let > me (and the world -- or at least the Extropians-Chat email list) know. Max, I probably would. But it's been 20 years and while I found a little of it on some old disks, there is no way I can remember what was in those posting. If a vague memory serves, there was one posting which I would not like to be public (not until a certain cult goes out of business). I agree with David Lubkin that we were 'practicing "privately produced law," where we stated very clearly and repeatedly that nothing posted to the list could be used off the list without the specific consent of the authors in question.' If you want "specific consent," the people who were on the list in that day should be sent an archive of the list. You mentioned you had it. I don't think there were more than a few dozen people. Keith PS. It's even possible that an edited and published version could be a money making project for the original list members. It is the origin of most current day transhumanist memes. From spike66 at att.net Tue Feb 22 15:13:23 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 07:13:23 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <764392.59473.qm@web114411.mail.gq1.yahoo.com> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> Message-ID: <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> ... On Behalf Of Ben Zaiboc ... >Indeed. Or educating them: > http://tinyurl.com/65q8csf >Ben Zaiboc Oy vey, so sorry to hear. This is the part that caught me: "Detectives made secret recordings of the gang's plot to attack Mr Smith prior to the brutal assault." 
Couldn't they make arrangements to protect him beforehand, rather than punish the attackers after the fact? spike From natasha at natasha.cc Tue Feb 22 15:37:58 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 22 Feb 2011 09:37:58 -0600 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com><20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com><20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com><201102192014.p1JKEeST027600@andromeda.ziaspace.com><201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: Thanks Max. I agree. Natasha Vita-More _____ From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More Sent: Monday, February 21, 2011 11:37 PM To: ExI chat list Subject: Re: [ExI] Original list Was: Re: Call To Libertarians Yes, it's a pain in the butt in terms of promulgating those brilliant and stimulating early posts, but we absolutely will continue to abide by the conditions agreed to two decades ago. I am NEVER going to set aside the agreement of that time in favor of satisfying today's (perfectly natural) desire to see the earliest posting to the older transhumanist email list (and one of the very earliest email lists of all). If the early archives are never generally available because of this, then so be it. However, it may be that most of that early material might eventually become available with the consent of the authors. *** Writers from the early 1990s: If you agree to allow all of your postings to the original Extropians email list to be publically available, please let me (and the world -- or at least the Extropians-Chat email list) know. --- Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Feb 22 15:40:50 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 07:40:50 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> Message-ID: <004201cbd2a6$e1ea1690$a5be43b0$@att.net> >>... On Behalf Of Ben Zaiboc ... >>Indeed. Or educating them: >> http://tinyurl.com/65q8csf >Oy vey, so sorry to hear. This is the part that caught me: >"Detectives made secret recordings of the gang's plot to attack Mr Smith prior to the brutal assault." >Couldn't they make arrangements to protect him beforehand, rather than punish the attackers after the fact? Also: "...The covert audio probe captured the gang condemning Mr Smith for 'teaching other religions to our sisters'... This something to think about seriously. That effect isn't changing. When we teach immigrant children other religions, if they convert, their families and locals might kill them. We put them in a lot of danger by teaching them other religions. "...The RE teacher was targeted as he made his way on foot along Burdett Road..." This is madness. When one is teaching other religions, one needs to drive a car, for it affords a certain amount of physical protection. In the US, this of course can be supplemented by a much more effective means of physical protection, which can be effectively carried in the personal automobile. I would recommend not walking for anyone teaching other religions. 
"...Prosecutor Sarah Whitehouse told the court: 'The evidence from what was said on the probe points overwhelmingly to a religious motive for this attack.' " Well now, let's not jump to any conclusions. When the attackers were recorded saying "...for teaching them other religions..." it might have been a slip of the tongue, and they really meant to murder the professor for teaching their sisters other ways of adding numbers besides the Arabic, such as you know, Roman numerals or binary arithmetic. "...It is believed the gang had made two earlier attempts to get at the teacher..." Yet they were helpless to do anything. And he was walking. Unarmed. What happens when each of these guys gets out of jail 10 to 20 minutes from now and each of them have a big family of believers? To avoid a tragic repeat, they must stop teaching other religions immediately. spike From rpwl at lightlink.com Tue Feb 22 16:13:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 22 Feb 2011 11:13:53 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> Message-ID: <4D63E0C1.4000901@lightlink.com> Kelly Anderson wrote: > I assume you are talking about this: > http://hplusmagazine.com/2011/02/17/watson-supercharged-search-engine-or-prototype-robot-overlord/ > Oh, make no mistake, Ben and I do not agree about a lot of AGI stuff (though that does not stop us from collaborating on a joint paper, which we happen to be doing right now). I am very much aware that he had nice things to say about Watson, but my point was that he expressed many reservations about Watson, so I was using his article as a counterfoil to your statement that "99% of everyone else thinks it is a great success already". I just felt that it was not accurate to paint me as the lone, 1% voice against, with 99% declaring Watson to be a great achievement on the road to real AGI. Now, you did a very good job of picking out out the pro-Watson excerpts from Ben's essay :-) , but I think you would do better to focus on one of his concluding remarks... "Some AI researchers believe that this sort of artificial general intelligence will eventually come out of incremental improvements to 'narrow AI' systems like Deep Blue, Watson and so forth. Many of us, on the other hand, suspect that Artificial General Intelligence (AGI) is a vastly different animal." My position is more strongly negative than his (and his position, in turn, is more negative than Kurzweil's). >> Richard Loosemore wrote: >> I don't want to write about Watson, >> because I have seen so many examples of that kind of dead end and I >> have already analyzed them as a *class* of systems. That is very >> important. They cannot be fought individually. I am pointng to the >> pattern. > > What is this pattern? I am (as a human being) an expert pattern > recognizer. I am familiar with a number of approaches to AGI. Yet, I > am having trouble recognizing the pattern you seem to think is so > clear. Can you spell it out succinctly? (I realize this is a true > challenge) Well, a lot of that was explained in the complex systems paper. 
At the risk of putting a detailed argument in so small a space, it goes something like this. AI researchers have, over the years, publicized many supposedly great advances, or big new systems that were supposed to be harbingers of real AI, just around the corner. People were very excited about SHRDLU. The Japanese went wild over Prolog. Then there was the "knowledge based systems" approach, aka "expert systems". Earlier on there was a 1960s craze for "machine translation". In the late 1980s there were "neural networks" vendors springing up all over the place. And these were just the paradigms or general clusters of ideas ... never mind the specific systems or programs themselves.

Now, the pattern is that all these ideas were good at bringing down some low-hanging fruit, and every time the proponents would say "Of course, this is just meant to be a demonstration of the potential of this new technique/approach/program: what we want to do next is expand on this breakthrough and find ways to apply it to more significant problems". But in each case it turned out that extending it beyond the toy cases was fiendishly hard, and eventually the effort was abandoned when the next bandwagon came along.

What I tried to do in my 2007 paper was to ask whether there was an underlying reason for these failures. The answer was that, yes, there is indeed a pattern, but the reason is subtle. So, the last thing I am going to do is analyze Watson for its limitations, because the limitations are not at the surface level of Watson's specific architecture, they are in the paradigm from which it comes.

>> :-) Well, you may be confused by the fact that I wrote ONE >> philosophy paper.

> Hehe... which one was that? They all seemed pretty philosophical to > my mind. None of them said... here is an algorithm that might lead to > general artificial intelligence... Kelly

:-(. You do not know what a real philosophy paper is, eh? The word "philosophy", the way you use it in the above, seems to mean "anything that is not an algorithm".

>> But have a look through the very small set of publications on my >> website. One experimental archaeology, several experimental and >> computational cognitive science papers. One cognitive neuroscience >> paper..... >> >> I was trained as a physicist and mathematician.

> For Odin's sake man!!! Why didn't you say this in the beginning?!? > This explains EVERYTHING!!

This is not helping.

>> I just finished teaching a class in electromagnetic theory this >> morning. I have written all those cognitive science papers. I >> was once on a team that ported CorelDraw from the PC to the Mac.

> Did you live in Orem? Perhaps we have run into each other before. > Your name sounds familiar.

Well, I lived in Salt Lake City for a short period. I was working for an extremely annoying company that claimed to be building FPGA supercomputers.

>> I am up to my eyeballs in writing a software tool in OS X that is >> designed to facilitate the construction and experimental >> investigation of a class of AGI systems that have never been built >> before..... Isn't it a bit of a stretch to ask me to be proud to >> be a philosopher? :-) :-)

> I am only going off of the papers I could find. Point me to the one > specific paper that you feel is most scientific and I'll read it > again. Happily.

You can make your mind up without my help. Not my problem.

>>>> And why do you assume that I am not doing experiments?!
I am >>>> certainly doing that, and doing masive numbers of such >>>> experiments is at the core of everything I do. >>> Good to hear. Your papers did not reflect that. Can you point me >>> to some of your experimental results? >> No, but I did not say that they did. It is too early to ask. > > Sigh. This reminds me of a story. A mathematician was asked to build > a fence around a herd of cattle. He built a small corral being > careful not to surround any cows, and then defined the larger area to > be "inside". Problem solved. Well, now you have trivialized the situation one time too many. You are on your own. >> Context. Physicists back in the 1980s who wanted to work on the >> frontiers of particle physics had to spend decades just building >> one tool - the large hadron collider - to answer their theoretical >> questions with empirical data. I am in a comparable situation, but >> with one billionth the funding that they had. Do I get cut a >> *little* slack? :-( > > I will never say "It will never fly Orville"... (other perhaps than > in some very specific narrow issue) that is the slack I will give > you. When you share the results of your experimentation such that > other scientists can replicate your amazing results, then I will say > "well done." As for the physicists, they built a lot of smaller > colliders along the way. There was one about ten feet across in the > basement of the science building at BYU... I'm sure that the > preliminary results that they achieved with those smaller colliders > gave the people funding the hadron collider confidence that they > weren't throwing their money down a rat hole. Sigh. Richard Loosemore From spike66 at att.net Tue Feb 22 16:35:33 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 08:35:33 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <004201cbd2a6$e1ea1690$a5be43b0$@att.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> Message-ID: <004c01cbd2ae$86f50c60$94df2520$@att.net> ... >> http://tinyurl.com/65q8csf ... >Yet they were helpless to do anything. And he was walking. Unarmed. >What happens when each of these guys gets out of jail 20 minutes from now (or 10 with good behavior) and each of them have a big family of believers? To avoid a tragic repeat, they must stop teaching other religions immediately... The Mormon religion specifically forbids apostasy. Their literature specifies "Whoever changes his religion, kill him." It might have been one of the other major religions, I get confused, Presbyterian or Amish perhaps. But the point is, we must recognize that teaching their children other religions is irresponsible and dangerous when they are not free to convert. It is dangerous for the students and the professors, and of course the families who are morally obligated to murder the confused and erring convert. People do sometimes get hurt in that process. The Mormons in England are there as missionaries. They are not there to have their children converted to the ways of the heathen. They are there to convert you. 
spike

From atymes at gmail.com Tue Feb 22 17:25:04 2011 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 22 Feb 2011 09:25:04 -0800 Subject: Re: [ExI] Happy Birthday Extropy Email List In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID:

> *** Writers from the early 1990s: If you agree to allow all of your postings > to the original Extropians email list to be publically available, please let > me (and the world -- or at least the Extropians-Chat email list) know. > > --- Max

I forget if I was on the original list, but if I was, then sure, permission given for my posts. Same goes for my posts to this list (but then, they already have been publicly available).

From stefano.vaj at gmail.com Tue Feb 22 16:57:45 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Feb 2011 17:57:45 +0100 Subject: Re: [ExI] Call To Libertarians In-Reply-To: <4D5FF97E.2080006@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net> <4D5FF97E.2080006@lightlink.com> Message-ID:

On 19 February 2011 18:10, Richard Loosemore wrote: > spike wrote: > Factually inaccurate, I would say: > Example 1: Soviet Union (totalitarian) -> Boris Yeltsin (short > interregnum) -> Russia Under Putin (totalitarianism again). > Example 2: Iran under Shah (totalitarian) -> Revolution (short > interregnum) -> Iran under the Mullahs (totalitarianism again). > Example 3: Iraq under Saddam Hussein (totalitarian) -> US Invasion Period > (short interregnum) -> Iraq under Corrupt Shia Government with Rigged > Elections (totalitarianism again, or heading fast in that direction). > Example 4: Germany under Hitler (totalitarian) -> 2nd World War (long > interregnum during which GDR was totalitarian and West Germany was > democratic) -> Eventually United Germany (Democracy). > > This is really not looking good for your bumper sticker.

I really object to a usage of the word "totalitarian" which simply puts together a number of political regimes with little else in common than one's possible dislike for them. Moreover, technically, it would appear much more plausible to describe Iran as an ayatollah regime than a mullah regime. :-)

-- Stefano Vaj

From lubkin at unreasonable.com Tue Feb 22 17:45:29 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 22 Feb 2011 12:45:29 -0500 Subject: Re: [ExI] Original list Was: Re: Call To Libertarians (Max More) In-Reply-To: References: Message-ID: <201102221745.p1MHjW5j013693@andromeda.ziaspace.com>

Since I was on the list from virtually the beginning, am still in touch with most of the participants, and am building systems now with the same technical needs, I'm happy to take the lead in making this finally happen, if there are no objections. (For any questions about my technical chops, see my LinkedIn profile.) -- David.
From kellycoinguy at gmail.com Tue Feb 22 17:52:30 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 22 Feb 2011 10:52:30 -0700 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: <4D61D9E4.90607@lightlink.com> References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: On Sun, Feb 20, 2011 at 8:20 PM, Richard Loosemore wrote: > Ben Zaiboc wrote: >> >> Richard Loosemore wrote: >> >>> Would it be more accurate, then, to say that Libertarianism is >>> about >> >> SUPPORTING the government funding of: >> >>> Roads, Bridges, Police, Firefighters, Prisons,... Some libertarians go so far as to shorten this list to Army, Courts and Police. There is no reason today for all roads not to be toll roads IMHO. Why not regulate, then privatize prisons? The first fire station in America was a libertarian establishment founded by Benjamin Franklin. Buy fire insurance from us, and we'll fight the fire when your house goes up. If not, we'll come and protect your insured neighbors. THAT is libertarianism at its farthest point. -Kelly From stefano.vaj at gmail.com Tue Feb 22 17:44:36 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Feb 2011 18:44:36 +0100 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: On 22 February 2011 18:25, Adrian Tymes wrote: >> *** Writers from the early 1990s: If you agree to allow all of your postings >> to the original Extropians email list to be publically available, please let >> me (and the world -- or at least the Extropians-Chat email list) know. >> >> --- Max > > I forget if I was on the original list, but if I was, then sure, > permission given for > my posts. ?Same goes for my posts to this list (but then, they already have > been publicly available). I certainly wasn't there, but let me iterate once more, just for the record, that everything I write and sign or has at any time written and signed in my life is forever public. Such engagement does expose one to slight embarrassments, but it wonderfully keeps you focused on avoiding statements you are not ready to stand by in any court for an undefinite time in the future... :-) -- Stefano Vaj From kellycoinguy at gmail.com Tue Feb 22 18:04:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 22 Feb 2011 11:04:44 -0700 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) In-Reply-To: <4D616BFF.2000502@gnolls.org> References: <4D616BFF.2000502@gnolls.org> Message-ID: On Sun, Feb 20, 2011 at 12:31 PM, J. Stanton wrote: > My critique rests on the blindness of the Libertarian Party (and > libertarians) towards banks and corporations. Both are government-granted > exceptions to the rules of liability for one's actions and debts ('limited > liability') and monetary exchange ('fractional reserve banking', i.e. the > ability to create money from thin air by issuing debt) -- and both produce > the inevitable result that institutions granted such exceptional powers > control the world economy. > > If libertarians genuinely believed what they said, the first plank on every > libertarian's political platform would be "abolish corporations and banks". 
I have considered eliminating banks, but my question would be what would you replace them with? There is a necessity for capital investment, and economies of scale in managing capital are important. I can see the point that every man is his own bank, and you go make a loan with another individual, but then you have a problem with the risk not being dispersed. You would immediately have some kind of insurance pop up, it would seem. As for business, do you think the CEO of a business should be PERSONALLY responsible for the actions of each of his employees?

I like the thought of getting rid of banks, but I don't know how you could really make it work. I think you could get nearly the results you want by simply deregulating the banks to an extent. What would you propose to do with stock exchanges and the like?

The first baby step would be to get rid of the Federal Reserve. That I would be behind today, immediately. I think that is a fairly common stand amongst Libertarians, but I could be wrong. -Kelly

From ethersong at gmail.com Tue Feb 22 17:32:37 2011 From: ethersong at gmail.com (Josh) Date: Tue, 22 Feb 2011 12:32:37 -0500 Subject: Re: [ExI] Brief correction re Western Democracies In-Reply-To: <004c01cbd2ae$86f50c60$94df2520$@att.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <004c01cbd2ae$86f50c60$94df2520$@att.net> Message-ID:

Hey, new to the list and was planning on just reading through for a couple of days but I have to jump in right quick. I think I might have missed a part of the convo though so if I say something amiss, I apologize. As a former Mormon whose family is still Mormon, I couldn't not butt in. My sister went on a mission to Venezuela and my brother is currently on one in Chile.

We (agnostics, atheists, or those with other spiritual interests) have to be careful about the way we characterize religious sects. This is not helping your authority at all. The Mormons do not prohibit apostasy and they especially don't kill them for leaving the church. If so I would be dead right now. In fact, I'm not really aware of any full religions which doctrinally support such a practice, although some radical *sects* of some religions might. Now granted there are many beliefs that do lead to violence, which I believe is what this thread is about. But we have to be careful and not eschew all religions because of the radical beliefs of a few. I know the extropians are quite down on religion but I think this has to be tread carefully. Making assumptions and mistakes like this only alienates people who may have good reasons for their beliefs.

That's all I'll say for now. etherfire

On Tue, Feb 22, 2011 at 11:35 AM, spike wrote: > ... > > >> http://tinyurl.com/65q8csf > > ... > > >Yet they were helpless to do anything. And he was walking. Unarmed. > > >What happens when each of these guys gets out of jail 20 minutes from now > (or 10 with good behavior) and each of them have a big family of believers? > To avoid a tragic repeat, they must stop teaching other religions > immediately... > > The Mormon religion specifically forbids apostasy. Their literature > specifies "Whoever changes his religion, kill him." It might have been one > of the other major religions, I get confused, Presbyterian or Amish > perhaps. > But the point is, we must recognize that teaching their children other > religions is irresponsible and dangerous when they are not free to convert.
> It is dangerous for the students and the professors, and of course the > families who are morally obligated to murder the confused and erring > convert. People do sometimes get hurt in that process. > > The Mormons in England are there as missionaries. They are not there to > have their children converted to the ways of the heathen. They are there > to > convert you. > > spike > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Feb 22 17:40:25 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 22 Feb 2011 12:40:25 -0500 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: <31A10575-EA19-4BDB-A0BF-4C491F971468@bellsouth.net> On Feb 22, 2011, at 12:37 AM, Max More wrote: > Writers from the early 1990s: If you agree to allow all of your postings to the original Extropians email list to be publically available, please let me (and the world -- or at least the Extropians-Chat email list) know. My first post was on September 29 1993, you can make any of my stuff publicly available if you like. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Feb 22 16:47:29 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 22 Feb 2011 17:47:29 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: 2011/2/19 Darren Greer : > I am a Canadian, and a proponent of socialized democracy. > However, I'm not naive enough to think that full-stop socialization is a > good idea. *We tried that once*, in the Soviet Union, and it didn't work so > well. Ah, so it was Canadians who actually tried that in the Soviet Union. This explains a lot of things. :-))) -- Stefano Vaj From rpwl at lightlink.com Tue Feb 22 18:16:18 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 22 Feb 2011 13:16:18 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net> <4D5FF97E.2080006@lightlink.com> Message-ID: <4D63FD72.6080802@lightlink.com> Stefano Vaj wrote: > On 19 February 2011 18:10, Richard Loosemore wrote: >> spike wrote: >> Factually inaccurate, I would say: >> Example 1: Soviet Union (totalitarian) -> Boris Yeltsin (short >> interregnum) -> Russia Under Putin (totalitarianism again). >> Example 2: Iran under Shah (totalitarian) -> Revolution (short >> interregnum) -> Iran under the Mullahs (totalitarianism again). >> Example 3: Iraq under Saddam Hussein (totalitarian) -> US Invasion Period >> (short interregnum) -> Iraq under Corrupt Shia Government with Rigged >> Elections (totalitarianism again, or heading fast in that direction). >> Example 4: Germany under Hitler (totalitarian) -> 2nd World War (long >> interregnum during which GDR was totalitarian and West Germany was >> deomcratic) -> Eventually United Germany (Democracy). >> >> This is really not looking good for your bumper sticker. 
> > I really object to a usage of the word "totalitarian" which simply > puts together a number of political regimes with little else in common > than one's possible dislike for them. Take it up with spike, I was just responding to his comment: > Chaos is the endpoint not of libertarianism but rather the endpoint of its > opposite, totalitarianism. Most people, when pressed, would say that: - Soviet Union - Russia Under Putin - Iran under Shah - Present day Iran - Iraq under Saddam Hussein - Germany under Hitler ... are quite a close fit to the definition that Wikipedia gives for "totalitarianism": "A political system where the state, usually under the control of a single political person, faction, or class, recognizes no limits to its authority and strives to regulate every aspect of public and private life wherever feasible". You think the only thing these regimes have in common is my dislike for them? I doubt many people would find that accurate or fair. Richard Loosemore From sjatkins at mac.com Tue Feb 22 18:41:45 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 10:41:45 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <4D640369.9050802@mac.com> On 02/18/2011 04:00 PM, Darren Greer wrote: > I understand there are some libertarians in this group. > > I am currently embroiled in an e-mail discussion where I find myself > in a rather unique (for me) position of defending free markets and > smaller government. I am a Canadian, and a proponent of socialized > democracy. However, I'm not naive enough to think that full-stop > socialization is a good idea. We tried that once, in the Soviet Union, > and it didn't work so well. I recognize the need for competition to > drive development and promote innovation. > > So, being a fan of balance, I'm trying to come up with some arguments > that a libertarian might give while explaining why that system of > could benefit mankind, especially in relation to the development of > technology and the philosophies of transhumanism. > You might want to read Rothbard or in economics read Hayek, von Mises and other Austrian economists. Or economics in one lesson by Hazlitt. We can't do your homework for you. :) - s From sjatkins at mac.com Tue Feb 22 18:45:39 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 10:45:39 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <4D640453.9010309@mac.com> On 02/18/2011 04:00 PM, Darren Greer wrote: > I understand there are some libertarians in this group. > > I am currently embroiled in an e-mail discussion where I find myself > in a rather unique (for me) position of defending free markets and > smaller government. I am a Canadian, and a proponent of socialized > democracy. However, I'm not naive enough to think that full-stop > socialization is a good idea. We tried that once, in the Soviet Union, > and it didn't work so well. I recognize the need for competition to > drive development and promote innovation. > > So, being a fan of balance, I'm trying to come up with some arguments > that a libertarian might give while explaining why that system of > could benefit mankind, especially in relation to the development of > technology and the philosophies of transhumanism. > > Problem is, I'm not very good at it. Anyone wanna give my their > opinions on this? I will not plagiarize you. I've already stated in > this discussion that I will ask some people and get back to them. 
It's > not necessary that I win the argument, but I do think that my beliefs > and preferences are simply points of view, and no better (nor worse) > than those of others. This may be the point that I'm trying to make -- > that libertarians are not by definition inarticulate right wingers or > rabid anarchists, which seems to be the point of view of this group > I'm talking with.

Anarchist technically means without government. It does not mean chaos. Read Rothbard for arguments why not. But most Libertarians are minarchist - minimal government, only the minimum necessary that cannot be done privately. They differ in how much or little that is.

The essential element of libertarianism is the Non-Aggression Principle. No one has the right to initiate force against another. This is equivalent to total freedom to do anything that does not harm, physically force, threaten physical force or defraud another.

Right wing, left wing has nothing to do with it.

- samantha

From sjatkins at mac.com Tue Feb 22 18:48:01 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 10:48:01 -0800 Subject: Re: [ExI] Call To Libertarians In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> Message-ID: <4D6404E1.3000905@mac.com>

On 02/19/2011 07:17 AM, Darren Greer wrote: > >Only different to those who cannot understand the inevitable end-point > of libertarianism.< > > Just as the end-point of democracy is a stagnant bureaucratic state? > The end-point of capitalism is fascism and plutocracy? The end-point > of socialism is military dictatorship?

The endpoint of unrestrained democracy is so much bread and circuses promised that the government, economy, and country collapse. We have a front row seat for the process.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From alfio.puglisi at gmail.com Tue Feb 22 18:22:06 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Tue, 22 Feb 2011 19:22:06 +0100 Subject: Re: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID:

On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson wrote: > On Sun, Feb 20, 2011 at 8:20 PM, Richard Loosemore > wrote: > > Ben Zaiboc wrote: > >> > >> Richard Loosemore wrote: > >> > >>> Would it be more accurate, then, to say that Libertarianism is > >>> about > >> > >> SUPPORTING the government funding of: > >> > >>> Roads, Bridges, Police, Firefighters, Prisons,... > > Some libertarians go so far as to shorten this list to Army, Courts > and Police. There is no reason today for all roads not to be toll > roads IMHO. Why not regulate, then privatize prisons?

Because it creates an incentive to incarcerate people? The more people in prison, the more profits from prison management.

> The first fire > station in America was a libertarian establishment founded by Benjamin > Franklin. Buy fire insurance from us, and we'll fight the fire when > your house goes up. If not, we'll come and protect your insured > neighbors. THAT is libertarianism at its farthest point. >

Wow. I don't know how to say this without sounding offensive, but this is remarkably similar to how the mafia operates in southern Italy. Buy protection from us, and we'll make sure that nothing happens to your business. If not, don't call us when some random guy happens to start a fire on your door or your delivery truck in the middle of the night...
Alfio > > -Kelly > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Tue Feb 22 19:10:22 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 22 Feb 2011 14:10:22 -0500 Subject: [ExI] Complex AGI [WAS Watson On Jeopardy] In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5EB0FF.7000007@lightlink.com> Message-ID: <4D640A1E.20909@lightlink.com> Kelly Anderson wrote: > On Fri, Feb 18, 2011 at 10:48 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >> Well, I am struggling to find positive things to say, because >> you're tending to make very sweeping statements (e.g. "this is just >> philosophy" and "this is not science") that some people might interpret >> as quite insulting. > > I don't mean to be insulting. I am trying to draw out something real, > useful and substantial from you. I have some degree of faith that you > have something interesting to say, and I'm trying to get at it. > > I don't think I am confused about what you have said vs what John has > said. Twenty years on mailing lists has focused my mind fairly well on > keeping who said what straight. Funny that I've learned to do that > without any conscious effort... the brain really is amazing. > >> So what have I actually claimed? What have I been defending? > > YES, YES, YES, that is what I want to know! > >> Well, >> what I do say is that IMPLICIT in the papers I have written, there is >> indeed an approach to AGI (a framework, and a specific model within that >> framework). There is no way that I have described an AGI design >> explictly, in enough detail for it to be evaluated, and I have never >> claimed that. Nor have I claimed to have built one yet. But when pressed >> by people who want to know more, I do point out that if they understand >> cognitive psychology in enough detail they will easily be able to add up all >> the pieces and connect all the dots and see where I am going with the work I >> am doing. > > Well it's good to know that when you fly the planes into the > established AI buildings, we will be able to say we should have > connected the dots. :-) > >> The problem is that, after saying that you read my papers already, you >> were quite prepared to dismiss all of it as "philosophizing" and "not >> science". I tried to explain to you that if you understood the >> cognitive science and AI and complex systems background from which the >> work comes, you would be able to see what I meant by there being a >> theory of AGI implicit in it, and I did try to explain in a little more >> detail how my work connects to that larger background. I pointed out >> the thread that stretches from the cog psych of the 1980s, through >> McClelland and Rumelhart, through the complex systems movement, to the >> particular (and rather unusual) approach that I have adopted. 
> > Referring to a group of other people's work, saying, "read this with > this other thing in mind", is a little like me saying, if you read > Wikipedia thinking about some topic, you'll come up with the result. > Be a little more explicit. A little less vague. That's all I'm asking > for. I didn't get much of a specific nature about your particular > approach from your papers. > >> I even pointed out the very, very important fact that my complex systems >> paper was all about the need for a radically different AGI methodology. >> Now, I might well be wrong about my statement that we need to do things in >> this radically different way, but you could at least realize that I have >> declared myself to be following that alternate methodology, and therefore >> understand what I have said about the priority of theory and a particular >> kind of experiment, over hacking out programs. It is all there, in the >> complex systems paper. > > What I hear is you railing against the current state of the art, but > without suggesting something different in a specific way. You do > suggest a vague framework generator, which is interesting, but not > useful in a SCIENTIFIC way. i.e. it does not immediately suggest an > experiment that I can reproduce. > >> But even after me pointing out that this stuff has a large context that >> you might not be familiar with, instead of acknowledging that fact, you are >> still making sweeping condemnations! This is pretty bad. > > I am roughly familiar with most of the context you give. The only > sweeping condemnation I have given is that you sweepingly condemn your > "competition" and that you haven't yet shared any useful results. You > have admitted the second now, so I see that as progress. It is your > callous negation of the work of others that I condemn, not your work. > I don't understand enough about your work to condemn it, and I haven't > condemned your work, just your approach to everyone else. > >> More generally: >> >> I get two types of responses to my work. One (less common) type of >> response is from people who understand what I am trying to say well >> enough that they ask specific, focussed questions about things that are >> unclear or things they want to challenge. Those people clear understand >> that there is a "there" there .... if the papers I wrote were empty >> philosophising, those people would never be ABLE to send coherent >> challenges or questions in my direction. Papers that really are just >> empty philosophising CANNOT generate that kind of detailed response, >> because there is nothing coherent enough in the paper for anyone to get >> a handle on. > > OK. I haven't given any of those types of responses at this time, but > I give some of my thoughts later in this (longish) email. > >> Then there is the second kind of response. From these people I get >> nothing specific, just handwaving or sweeping condemnations. Nothing >> that indicates that they really understood what I was trying to say. > > I think I have stated fairly clearly that I haven't understood the > details of your ideas. You haven't shared enough for me to do that. > Perhaps I have unfairly blamed you for that. Perhaps if I were to > spend months digging into your ideas I would come up with something > solid to refute or agree with. 
> >> They reflect back my arguments in a weird, horribly distorted form >> -- so distorted that it has no relationship whatsoever to what I >> actually said -- and when I try to clarify their misunderstandings >> they just make more and more distorted statements, often wandering >> far from the point. And, above all, this type of response usually >> involves statements like "Yes, I read it, but you didn't say anything >> meaningful, so I dismissed it all as empty philosophising". > > Richard, there is nothing *empty* about your philosophy. But as a > computer scientist I don't see anything concrete, reproducible and > useful from your papers so far. It isn't a put down to call it > philosophy when all it is is a general idea about where things should > go. > >> I always try to explain and respond. I have put many hours into >> responding to people who ask questions, and I try very hard to help >> reduce confusions. I waste a lot of time that way. And very often, I >> do this even as the person at the other end continues to deliver mildly >> derogatory comments like "this isn't science, this is just speculation" >> alongside their other questions. > > It must be frustrating. I have a glimpse of where you are going. It > isn't speculation, and some day it may become science. Today, however, > in the paper you had me look at, it is not yet presented in a > scientific manner. That's all I've said, and if you feel that is a > poor description of what you do, all I can say is that is how your > paper reads. > >> If you want to know why this stuff comes out of cognitive psychology, by >> all means read the complex systems paper again, and let me know if you >> find the argument presented there, for why it HAS to come out of >> cogntive psychology. It is there -- it is the crux of the argument. If you >> believe it is incorrect, I would be happy to debate the rationale for it. > > Here I assume you are referring to your 2007 paper entitled "Complex > Systems, Artificial Intelligence and Theoretical Psychology" (I would > point out that my spending 10 minutes finding what I THINK is the > right paper is indicative of the kind of useless goose chasing I have > to go on to have a conversation with you)... > > 8:37PM > ... Carefully Reading ... > > "It is arguable that intelligent systems must involve some amount of > complexity, and so the global behavior of AI systems would therefore > not be expected to have an analytic relation to their constituent > mechanisms." > > What other kind of relation would a global result have to the > constituent algorithms? After reading the whole paper, I know what you > are getting at, but this particular sentence doesn't grok well in an > abstract. > > "the results were both impressive and quick to arrive." > > Quick results, I like the sound of that. Of course the paper is now > nearly four years old... have you achieved any quick results that you > can share? > > "If the only way to solve the problem is to declare the personal > philosophy and expertise of many AI researchers to be irrelevant at > best, and obstructive at worst, the problem is unlikely to even be > acknowledged by the community, let alone addressed." > > i.e. Everyone else is stupid. That's a good way to get people > interested in your research. I, for one, am trying to get past your > ego. 
> > "A complex system is one in which the local interactions between the > components of the > system lead to regularities in the overall, global behavior of the > system that appear to be > impossible to derive in a rigorous, analytic way from knowledge of the > local interactions." > > To paraphrase, a complex system is non-deterministic, or semi-random, > or at least incomprehensible. I suppose that describes the brain to > some extent, so despite the confrontational definition, I admit that > you might be onto something here. At least it is a clear definition of > what you mean by "complex system." It sounds similar to chaos theory > as well, and perhaps you are thinking in that direction. > > You discuss the "problem space", and then declare that there is only a > small portion of that space that can be dealt with analytically. Fair > enough, AND the human brain can only attack a small part of the > "problem space" which overlaps partially with the part of the space > that analysis can crack. Think of a Venn Diagram. We obviously need > more kinds of intelligence to crack all of the problems out there in > the "problem space". I would point out that Google and Watson are the > first of a series of problem solvers that may create another circle in > the Venn diagram, but it is hard to say if this really is the case at > this point. I suspect that we will eventually see many circles in such > a Venn diagram. > > I have spoken to my friends for years about Gestalt emergent > intelligence (as of ant colonies, neural nets, etc.). I believe in > that. I don't think your global-local disconnect is terribly different > from the common sense notion that the "whole is greater than the sum > of its parts" so I accept that. > > Wolfram's "computational irreducibility" comments relate to the idea > that you can't know the results of running some programs until you > actually run them. That is, there is no shortcut to the answer. While > you explain this concept well, I don't see how it applies to AI > systems. That may be my limitation. However, the only way to see how > Watson is going to answer a particular question is to ask it. That > seems to be approximate to computational irreducibility. > > I saw how Wolfram himself described how computational irreducibility > relates to Life. He explained it well, and in a manner consistent with > your paper. > > "This seems entirely reasonable?and if true, it would mean that our > search for systems that exhibit intelligence is a search for systems > that (a) have a characteristic that we may never be able to define > precisely, and (b) exhibit a global-local-disconnect between > intelligence and the local mechanisms that cause it." > > I agree with this statement. I think I understand this statement in a > deep way. Perhaps even in a similar way as you intend it to be > understood. It is more of a philosophical statement than a scientific > one (I don't mean that in a negative way, just a descriptive way, in > that while you can believe it, it may be impossible to prove). I also > would add that systems like Watson have both of these characteristics. > It surprises it's creators every time it plays. In many cases, I think > they are stupefied as to how Watson does it. > > "In the very worst case we might be forced to do exactly what he was > obliged to do: large numbers of simulations to discover empirically > what components need to be put together to make a system intelligent." 
> > This sounds like the Kurzweil 'reverse engineer the brain' and then > optimize approach. Thus far, this is one of the more plausible > methodologies I've heard suggested, and there is a lot of great work > going on in this direction. Its a little more directed and > understandible than your search for multiple kinds of intelligences. > > "If, as part of this process, we make gradual progress and finally > reach the goal of a full, general purpose AI system that functions in > a completely autonomous way, then the story ends happily." > > I think this is what I said about Watson. To be fair, you do quickly > counter this statement. > > "This is fairly straightforward. All we need to do is observe that a > symbol system in which (a) the symbols engage in massive numbers of > interactions, with (b) extreme nonlinearity everywhere, and (c) with > symbols being allowed to develop over time, with (d) copious > interaction with an environment, is a system that possesses all the > ingredients of complexity listed earlier. On this count, there are > very strong grounds for suspicion." > > You go on to praise the earlier work in back propagation neural > networks. I think those are pretty cool too, and that sort of approach > is inherently more human-like and "complex" than the historical large > LISP symbol processing programs. The problem (IMHO) historically has > been that neural networks haven't been commonly realized in hardware > (this may be changing with FPGAs and such), and that they are > typically implemented as digital systems instead of analog systems. > > You encourage your reader to "[remain] as agnostic as possible about the > local mechanisms that might give rise to the global characteristics we > desire" and "we > should organize our work so that we can look at the behavior of large > numbers of different > approaches in a structured manner." I would suggest that you take this > more to heart. If one of the local mechanisms is a lexical analyzer of > text, or a gender analysis, or a neural network weighing the > importance of the various lower levels of the complex system, all that > should be good for you. Yet, you dismiss JUST SUCH a system as > "trivial". Watson is a good test subject for your framework analyzer. > Yes, it was produced by hand by nearly 100 people over four years, but > you didn't put any effort into it. > > "But if the Complex Systems Problem is valid, this reliance on > mathematical tractability would be a mistake, because it restricts the > scope of the field to a very small part of the pace of possible > systems. There is simply no reason why the systems that show > intelligent behavior must necessarily have global behaviors that are > mathematically tractable (and therefore computationally reducible). > Rather than confine ourselves to systems that happen to have provable > global properties, we should take a broad, empirical look at the > properties of large numbers of systems, without regard to their > tractability." > > I agree with this statement. That may surprise you. I think that a > neural network that can be mathematically proven to be equivalent to a > Bayesian analysis should be replaced with the Bayesian analysis > (unless the NN can be implemented in hardware, and then there is good > reason from an efficiency aspect to use that approach). > > "The only way to find this out is to do some real science." > > Here is a statement that I can support 100%. No reservations. 
> > You eschew the study of neurons directly in part because "we have > little ability to report subjective events at the millisecond > timescale." > > While I grant that was mostly true in 2007 when you wrote the paper, > it is MUCH less true today. > > You then discuss a kind of framework generating system that would use > something analogous to a genetic algorithm to create (complex) > frameworks that could then be evaluated for their ability to exhibit > cognition. The question in genetic terms is what should the fitness > test be? You don't really answer that. Other than that, this is an > interesting idea that might be reducible to a concrete approach if > more details were forthcoming. > > "The way to make it possible is by means of a software development > environment specifically designed to facilitate the building and > testing of large numbers of similar, parameterized cognitive systems. > The author is currently working on such a tool, the details of which > will be covered in a later paper." > > I assume we are still waiting for this paper. What I can't begin to > understand is how a researcher would be able to determine if such a > system was good or bad at the rate of several per day. That seems > analogous to taking every infant in the hospital, interacting with > them for two hours, and trying to determine which one would make the > best theoretical physicist. That part seems hard. > > I am definitely one of the "scruffs" you describe in your conclusion. > I am not tied to mathematical elegance in any way. I'm more impressed > with what works. Watson works, and therefore I am impressed by that. > >> But, please, don't read several papers and just say afterward "All I >> know at this point is that I need to separate the working brain from the >> storage brain. Congratulations, you have recast the brain as a Von >> Neumann architecture". It looks more like I should be saying, if I were >> less polite, "Congratulations, you just understood the first page of a >> 700-page cognitive pscychology context that was assumed in those papers". >> But won't ;-). > > It is now 10:15 PM. I have spent nearly two hours reading the paper > you described as being among your best efforts. Much of what is in > your paper is true. Some is conjecture, and you make it pretty clear > when it is. The argument that intelligence requires an irreducible > system is interesting, and possibly mathematically true, even though > you don't necessarily claim that. Knowing that doesn't seem to help > much in designing a system, but that could be a lack of imagination > and/or knowledge on my side. The proposal to develop a framework > generator is interesting too, and like Einstein's thought experiments > (riding light beams and so forth) it may lead in a fruitful direction. > I know enough reading this paper that I am interested in reading the > next paper promised (if and when it is ever finished). > > All that being said, I stick by what I said earlier. This particular > paper is more a work of philosophy than science. Please don't be > offended by that. I have a GREAT deal of respect for philosophy. This > paper may, in fact, be a great work of philosophy. Remember that the > meaning of philosophy is a love of thought. It is clear that a lot of > thought has gone into it. But there is no evidence in the paper other > than an appeal to common sense (albeit a very vertical kind of common > sense) that the assertions made therein are correct. 
There are no > experiments to be repeated (other than the thought experiments). There > is no program to run to verify your results (partially because you > don't claim any yet). There are no algorithms shared. There are > descriptions of complex systems, but only conjecture that they may be > important. > > In addition, there is a contempt for the work of others. Now, being a > big Ayn Rand fan, I can actually admire that kind of individualism, > but you only get the right to impose that level of self assurance on > others AFTER your work has produced some results. In the meantime, > you will have to live with working in the rock quarry. (See The > Fountainhead - Ayn Rand) > > Richard, you are a good, and perhaps a great philosopher. You may be a > good or great scientist too, but that is indiscernible from that > paper. > > I reviewed this twice for tone... hopefully, it isn't too insulting. > It isn't meant to be. Alas, time pressure makes it impossible to respond with a line-by-line commentary. The best I can do, at this point, is to say that when I read your very detailed remarks above, I find that you have interpreted the central concept that I tried to explain AS IF it was a weak, extremely general statement that could have had any number of interpretations, whereas other people who have read it have said that they understand it to be a specific, focused statement about different classes of systems. The central thesis is that there are some *known* systems in which it is simply not possible to work backward from a desired overall behavior to the mechanisms that will generate that behavior. If someone wanted to build a Game-of-Life-like system in which they specified ahead of time what kinds of patterns should emerge, there would be no scientific approach they could use to achieve their goal. That is really a very simple idea. Most people who know about complex systems accept that this much is true. The question is: how much does this kind of problem carry over to other systems that have similar underlying interactions between their components? To answer that we have to understand why it happens in the cellular automaton cases, like GoL, but not in the planetary orbits case, where Newton was able to successfully work backward from orbital behavior to underlying mechanism. Looking at the characteristics of complex systems that tend to put them in that "too difficult to reverse engineer" category, we notice that it all seems to have to do with the fact that interactions are profoundly "tangled", in the sense that I tried to articulate. Then, we switch focus to AI and human intelligence. Surprise surprise, this is one of the very few cases where tangled interactions between components are unavoidable. Where it could well be that the system gets its stability by a completely empirical, "arbitrary" balance of tangled interactions. Now, you (along with many others) can just wave your hands and say "I'm optimistic that this won't be a problem". But my point was that (a) there is no way for you to come up with reasons to back up that optimism, except your opinion, (b) there is circumstantial evidence that this really is a problem, and (c) getting around this problem, if it is real, would involve some very drastic changes in methodology that are not happening, and that many people are resisting in quite dramatic ways. The problem itself is not "philosophy". It is simply a property of systems we are talking about here. 
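To make the asymmetry concrete: the forward direction for a system like the Game of Life really is just a few lines of code, while nothing comparably mechanical exists for the reverse direction, from a desired emergent pattern back to rules or initial states. A toy sketch, added here purely for illustration, using the standard birth-on-3 / survive-on-2-or-3 rule in plain Python/NumPy:

import numpy as np

def life_step(grid):
    # One Game-of-Life update: count each cell's eight neighbours
    # (toroidal wrap-around), then apply birth-on-3 / survive-on-2-or-3.
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Running a glider forward is mechanical; there is no analogous routine that
# takes "make a glider-like pattern emerge" and returns rules or an initial
# state; that is the reverse-engineering problem described above.
state = np.zeros((8, 8), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    state[r, c] = 1
for _ in range(4):
    state = life_step(state)
print(int(state.sum()))   # still 5 live cells; the glider has simply translated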
Richard Loosemore From natasha at natasha.cc Tue Feb 22 19:12:58 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 22 Feb 2011 14:12:58 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <4D640453.9010309@mac.com> References: <4D640453.9010309@mac.com> Message-ID: <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Quoting Samantha Atkins : > The essential element of libertarianism is the Non-Aggression Principle. > No one has the right to initiate force against another. This is > equivalent to total freedom to do anything that does not harm, > physically force, threaten physical force or defraud another. > > Right wing, left wing has nothing to do with it. Well said. Natasha From sjatkins at mac.com Tue Feb 22 19:20:37 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 11:20:37 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <002901cbd081$9bb2b550$d3181ff0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> Message-ID: <4D640C85.6060007@mac.com> On 02/19/2011 02:08 PM, spike wrote: > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Richard > Loosemore > Subject: Re: [ExI] Call To Libertarians > > spike wrote: >>> ... On Behalf Of Richard Loosemore >> The inclusion of "theaters" was strictly optional: not essential to my > argument. A throwaway... > > Ja, that one caught my attention. If any government builds a theatre, that > government dictates what is played there. > >> Would it be more accurate, then, to say that Libertarianism is about > SUPPORTING the government funding of: No. This is the very epitome of definition by non-essentials. We can do better than this. A minarchist generally believes that the only valid functions for government are formulating and enforcing laws and the military. Things that they think cannot be done privately. It is a very short list. But for what it is worth from this libertarian: > Keep in mind that I differentiate between libertarianism and Libertarianism. > One has a capital L. I use lower case. > >> Roads, yes No. Private road building worked fine and most private toll roads, unlike public ones were paid off ages ago. >> Bridges, yes No. Most bridges were not built by government. >> Police, yes Perhaps but only with very constrained laws that follow the NAP. Not enforcement of whatever any politician thinks up regardless of whether it is consistent with individual rights. Arguably you do not need this to be a government function at all or to have any such specialized body. Read Rothbard for details. >> Firefighters, yes No. Private firefighters work fine. >> Prisons, yes, but perhaps not the luxury outfits we see so commonly > today. > No. There is also an interesting argument (Rothbard and others) that prisons are actually unnecessary for the putative purpose they are claimed to be justified by. >> Schools, yes No way. Government should not be involved in education whatsoever. >> Public transport in places where universal use of cars would bring > cities to a standstill yes, if the public transport is > self-sustaining without (or perhaps minimal) government subsidy > No. If the excuse is accurate the need can be fulfilled privately much better. 
>> The armed forces, yes Not necessarily but commonly argued by minarchists. But no wars declared by government with forced participation. Individuals decide whether the war is worth fighting or not. >> Universities, and publicly funded scholarships for poor students, No. You are free to contribute to the education funds of any individual students or to a pool administered by private persons to distribute funding to those in need of it for education. Government involvement is not remotely required. > Yes if by "poor students" you meant students with little money, as opposed > to bad students. High SATers, yes. > > > National research laboratories like the Centers for Disease Control and > Prevention yes > No. There is no need for government to do this job. >> Snow plows, yes, operated by non-union drivers No. >> Public libraries, yes No. Private persons and groups can and do create libraries open to the public. > > Emergency and disaster assistance; yes, No. Private groups and individuals can do this. > >> Legal protection for those too poor to fight against the exploitative > power of corporations; no, let them take their trade elsewhere. > Non starter BS. All have the same rights under rational individual rights NAP based law. > > Government agencies to scrutinize corrupt practices by corporations > and wealthy individuals, This might be OK if we balance it by having > corporations which would scrutinize corrupt practices by government and poor > individuals Nope. Either people or businesses broke rational laws or they did not. No classist BS. >> Basic healthcare for old people who worked all their lives > for corporations who paid them so little in salary that > they could not save for retirement without starving to > death before they reached retirement... yes > Highly biased BS. No one has a valid claim on the resources of anyone else irrespective of the wishes of those others. Ever. >> And sundry other programs that keep the very poor just above > the subsistence level, so we do not have to step over their > dead bodies on the street all the time, and so they do not > wander around in feral packs, looking for middle-class people > that they can kill and eat... > Utter BS. Poverty is created quite well by the Welfare State. We are all impoverished compared to what we could have had by the huge bloated state and its manifold takings from us by force. - samantha From sjatkins at mac.com Tue Feb 22 19:25:37 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 11:25:37 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <4D640DB1.9060702@mac.com> On 02/19/2011 03:37 AM, Darren Greer wrote: > Thanks for your responses. Special thanks to Fred for the run-down and > the links. I will read them carefully. The Somalia remark is exactly > the type of over-simplification that I've been dealing with in the > other discussion. One guy said libertarians were people who read Ayn > Rand as a teenager and grew up to be self-centered jerks. But even a > quick survey of it on the 'net revealed to me that it is a diverse, > coherent and extensive set of beliefs, philosophies and principles > that cannot easily be dismissed with a simple one-liner. The older I > get the less likely I am to denigrate something because I disagree > with it. First I'll try to understand it, and then maybe I'll come up > with a one-liner. :) > Ayn Rand's philosophy is not remotely about being a self-centered jerk. 
But that is an entire other thread largely to me populated, if it arises, by those that have no idea what they are talking about or are unable or unwilling to discuss the matter intelligently without dismissive ranting. That is probably my rule of 5 for the day so shutting up. -s From sjatkins at mac.com Tue Feb 22 19:28:06 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 11:28:06 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D601CDA.90803@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> Message-ID: <4D640E46.2030702@mac.com> On 02/19/2011 11:41 AM, F. C. Moulton wrote: > Eugen Leitl wrote: >> And there will be definitely members objecting, for above mentioned >> good reasons. We just have to live with that, I guess. >> > I was under the impression that the number of early members who did not want their > posts made public was relatively low but my impression was based on > casual observation not on a rigorous survey. It would be interesting to > at least have the early posts of those who agreed to be made available. > Couldn't we just assign non-identifying names to get around most of the objection? From lubkin at unreasonable.com Tue Feb 22 19:50:53 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 22 Feb 2011 14:50:53 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D63E0C1.4000901@lightlink.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> <4D63E0C1.4000901@lightlink.com> Message-ID: <201102221950.p1MJobQt009702@andromeda.ziaspace.com> Richard Loosemore wrote: >AI researchers have, over the years, publicized >many supposedly great advances, or big new >systems that were supposed to be harbingers of >real AI, just around the corner. People were >very excited about SHRDLU. The Japanese went >wild over Prolog. Then there was the "knowledge >based systems" approach, aka "expert >systems". Earlier on there was a 1960s craze >for "machine translation". In the late 1980s >there were "neural networks" vendors springing >up all over the place. And these were just the >paradigms or general clusters of ideas ... never >mind the specific systems or programs themselves. > >Now, the pattern is that all these ideas were >good at bringing down some low-hanging fruit, >and every time the proponents would say "Of >course, this is just meant to be a demonstration >of the potential of this new >technique/approach/program: what we want to do >next is expand on this breakthrough and find >ways to apply it to more significant problems". >But in each case it turned out that extending it >beyond the toy cases was fiendishly hard, and >eventually the effort was abandoned when the next bandwagon came along. I think it was my graduate advisor, Carl Page, who told me that whenever you saw a paper in AI with "examples" of sentences understood, theorems proven, etc., the "examples" were in fact the *only* ones the system could cope with. (I was predisposed to be enraptured by AI, but his doctoral classes were the best I've taken in any field. 
My parenting philosophy began with a conversation with him. Dr. Page sent his two sons to a Montessori school. He extolled the value of Montessori in raising a strong, confident child who thinks for himself, but warned me that it comes at a price: When you have a Montessori kid, they will never accept "Because I say so." The consequences of his parenting style are that his namesake, Carl Jr., co-founded eGroups, which was sold to Yahoo for 0.5 billion and we now know as Yahoo Groups. The other son, Larry, co-founded Google, and is worth $15 billion.) -- David. Easy to find on: LinkedIn ? Facebook ? Twitter ? Quora ? Orkut From spike66 at att.net Tue Feb 22 18:33:14 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 10:33:14 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <004c01cbd2ae$86f50c60$94df2520$@att.net> Message-ID: <007701cbd2be$f74e15a0$e5ea40e0$@att.net> . On Behalf Of Josh .Subject: Re: [ExI] Brief correction re Western Democracies >.Hey, new to the list and was planning on just reading through for a couple of days but I have to jump in right quick. I think I might have missed a part of the convo though so if I say something amiss, I apologize. Welcome Josh! >.As a former Mormon.The Mormons do not prohibit apostasy . Good thanks, my mistake, apologies. I must be conflating it with some other religion. Seventh Day Adventist or something. . >.That's all I'll say for now. etherfire Do introduce yourself a little more, etherfire. There are other former Mormons here, and possibly current ones. You are among friends. Most of us are flaming heatherns, but friends all the same. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Tue Feb 22 20:07:16 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 22 Feb 2011 15:07:16 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <4D640C85.6060007@mac.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> Message-ID: <4D641774.6020609@lightlink.com> Samantha Atkins wrote: > On 02/19/2011 02:08 PM, spike wrote: >> >> -----Original Message----- >> From: extropy-chat-bounces at lists.extropy.org >> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Richard >> Loosemore >> Subject: Re: [ExI] Call To Libertarians >> >> spike wrote: >>>> ... On Behalf Of Richard Loosemore >>> The inclusion of "theaters" was strictly optional: not essential to my >> argument. A throwaway... >> >> Ja, that one caught my attention. If any government builds a theatre, >> that >> government dictates what is played there. >> >>> Would it be more accurate, then, to say that Libertarianism is about >> SUPPORTING the government funding of: > > No. This is the very epitome of definition by non-essentials. We can > do better than this. > > A minarchist generally believes that the only valid functions for > government are formulating and enforcing laws and the military. Things > that they thing cannot be done privately. It is a very short list. > > But for what it is worth from this libertarian: > >> Keep in mind that I differentiate between libertarianism and >> Libertarianism. >> One has a capital L. 
I use lower case. >> >>> Roads, yes > > No. Private road building worked fine and most private toll roads, > unlike public ones were paid off ages ago. > > >>> Bridges, yes > > No. Most bridges were not built by government. > >>> Police, yes > > Perhaps but only with very constrained laws that follow the NAP. Not > enforcement of whatever any politician things up regardless of whether > it is consistent with individual rights. > > Arguably you do not need this to be a government function at all or to > have any such specialized body. Read Rothbard for details. > >>> Firefighters, yes > > No. Private firefighters work fine. > > >>> Prisons, yes, but perhaps not the luxury outfits we see so >>> commonly >> today. >> > > No. There is also an interesting argument (Rothbard and others) that > prisons are actually unnecessary for the putative purpose they are > claimed to be justified by. > >>> Schools, yes > > No way. Government should not be involved in education whatsoever. > > >>> Public transport in places where universal use of cars would bring >> cities to a standstill yes, if the public transport is >> self-sustaining without (or perhaps minimal) government subsidy >> > > No. If the excuse is accurate the need can be fulfilled privately much > better. > >>> The armed forces, yes > > Not necessarily but commonly argued by minarchist. But no wars > declared by government with forced participation. Individuals decide > whether the war is worth fighting or not. > > >>> Universities, and publicly funded scholarships for poor students, > > No. You are free to contribute to the education funds of any individual > students or to a pool administered by private persons to distribute > funding to those in need of it for education. Government involvement is > not remotely required. > >> Yes if by "poor students" you meant students with little money, as >> opposed >> to bad students. High SATers, yes. >> >> > National research laboratories like the Centers for Disease >> Control and >> Prevention yes >> > > No. There is no need for government to do this job. > >>> Snow plows, yes, operated by non-union drivers > > No. > >>> Public libraries, yes > > No. Private persons and groups can and do create libraries open to the > public. > >> > Emergency and disaster assistance; yes, > > No. Private groups and individuals can do this. > >> >>> Legal protection for those too poor to fight against the exploitative >> power of corporations; no, let them take their trade elsewhere. >> > > Non starter BS. All have the same rights under rational individual > rights NAP based law. > > >> > Government agencies to scrutinize corrupt practices by >> corporations >> and wealthy individuals, This might be OK if we balance it by having >> corporations which would scrutinize corrupt practices by government >> and poor >> individuals > > Nope. Either people or businesses broke rational laws or they did not. > No classist BS. > >>> Basic healthcare for old people who worked all their lives >> for corporations who paid them so little in salary that >> they could not save for retirement without starving to >> death before they reached retirement... yes >> > > Highly biased BS. No one has a valid claim on the resources of anyone > else irrespective of the wishes of the those others. Ever. 
> >>> And sundry other programs that keep the very poor just above >> the subsistence level, so we do not have to step over their >> dead bodies on the street all the time, and so they do not >> wander around in feral packs, looking for middle-class people >> that they can kill and eat... >> > > Utter BS. Poverty is created quite well by the Welfare State. We are > all impoverished compared to what we could have had by the huge bloated > state and its manifold takings from us by force. Now, this is between you and spike, since he was the one who responded to my questions ..... but you indirectly commented on the *framing* of my questions to spike, so I have some observations... In a parallel post, you said: >> Ayn Rand's philosophy is not remotely about being a self-centered >> jerk. But that is an entire other thread largely to me populated, >> if it arises, by those that have no idea what they are talking >> about or are unable or unwilling to discuss the matter >> intelligently without dismissive ranting. Hmmmmm. Can't help but notice that you just responded to my very polite and mild-mannered list of questions directed at spike, with language that dismissed my words as "Non starter BS", "Classist BS", "Highly biassed BS" and "Utter BS". Then you complain about some hypothetical people who are "unable or unwilling to discuss the matter intelligently without dismissive ranting". Very interesting. Thoroughly consistent with other experiences I have had from people who defend extreme libertarian views. *Some* people (not me, for sure, so don't get me wrong) would summarize that kind of behavior as .... well, I won't say it. ;-) But, do please continue your dispute with spike: it is instructive to see libertarians disputing what the L word is actually about. Glad I could help by framing the debate. Richard Loosemore From rpwl at lightlink.com Tue Feb 22 20:25:56 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 22 Feb 2011 15:25:56 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <201102221950.p1MJobQt009702@andromeda.ziaspace.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> <4D63E0C1.4000901@lightlink.com> <201102221950.p1MJobQt009702@andromeda.ziaspace.co! m> Message-ID: <4D641BD4.6030701@lightlink.com> David Lubkin wrote: > Richard Loosemore wrote: > >> AI researchers have, over the years, publicized many supposedly great >> advances, or big new systems that were supposed to be harbingers of >> real AI, just around the corner. People were very excited about >> SHRDLU. The Japanese went wild over Prolog. Then there was the >> "knowledge based systems" approach, aka "expert systems". Earlier on >> there was a 1960s craze for "machine translation". In the late 1980s >> there were "neural networks" vendors springing up all over the place. >> And these were just the paradigms or general clusters of ideas ... >> never mind the specific systems or programs themselves. 
>> >> Now, the pattern is that all these ideas were good at bringing down >> some long-hanging fruit, and every time the proponents would say "Of >> course, this is just meant to be a demonstration of the potential of >> this new technique/approach/program: what we want to do next is >> expand on this breakthrough and find ways to apply it to more >> significant problems". But in each case it turned out that extending >> it beyond the toy cases was fiendishly hard, and eventually the effort >> was abandoneed when the next bandwagon came along. > > I think it was my graduate advisor, Carl Page, who told me that whenever > you saw a paper in AI with "examples" of sentences understood, theorems > proven, etc., the "examples" were in fact the *only* ones the system > could cope with. > > (I was predisposed to be enraptured by AI, but his doctoral classes were > the best I've taken in any field. Indeed. What puzzles me about this whole Watson discussion is that the skeptical perspective I have been presenting here is, well, something that many people in the cognitive science world of about 10 to 20 years ago would have considered a no-brainer. And many in AI, too, paradoxical as that might seem. *Many* people working in those fields, back in the late 80s, were becoming frustrated with AI systems and AI claims that were overblown. Just about everyone was talking about the silliness of programs that could handle a selected examples, but which were not generalizable. The basic mechanism under the Watson hood was known quite some time ago, and it was kind of obvious that if you went to the trouble of stuffing it with a huge amount of data, you could do something superficially impressive like handling Jeopardy questions. But why bother to do that when it did not address the underlying issues? Issues that were known decades ago. Beats me. Anyhow, seems I have to take the rap for saying what was common knowledge. Tell you what, hang on folks and I will do a quick survey of my cognitive science friends to see what they think about Watson as an advance in AI. Get back to you on that. > My parenting philosophy began with a conversation with him. Dr. Page > sent his two sons to a Montessori school. He extolled the value of > Montessori in raising a strong, confident child who thinks for himself, > but warned me that it comes at a price: When you have a Montessori kid, > they will never accept "Because I say so." > > The consequences of his parenting style are that his namesake, Carl Jr., > co-founded eGroups, which was sold to Yahoo for 0.5 billion and we now > know as Yahoo Groups. The other son, Larry, co-founded Google, and is > worth $15 billion.) Great story! You have suddenly made my frustratingly independent-minded son, who argues about everything under the sun, and who we sent to Montessori, a lot easier to deal with.... ;-) Richard Loosemore From kellycoinguy at gmail.com Tue Feb 22 21:01:36 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 22 Feb 2011 14:01:36 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D63C61A.301@aleph.se> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> <20110218125146.GJ23560@leitl.org> <4D63C61A.301@aleph.se> Message-ID: On Tue, Feb 22, 2011 at 7:20 AM, Anders Sandberg wrote: > Kelly Anderson wrote: > The real issue is how much computation you need to replace brains and how > much this has to dissipate. 
I have made some estimates that the likely range > for brain emulation is 10^22 to 10^25 flops. I suspect that in the very long term, some super intelligence will figure out how to optimize the computational activity of the brain. Nature has done a good job of it, but I suspect it can be improved upon. So brain emulation may not be the ultimate goal, but rather advanced computation that doesn't tax the brain so much. For example, for many purposes, you might not need a visual cortex at all, or at least not a fully functional one. Vision processing is a huge proportion of the brain's capacity, is it not? > Right now the Roadrunner does > 376 Mflops/W, so we are *far* away. But the Darpa exascale study suggests we > can do 10^12 flops per watt using extrapolated but not blue sky technology - > a lot of current computation is very wasteful, and it is just recently heat > dissipation has become a towering problem. Quantum dot cellular automata > could give 10^19 flops per watt, putting the energy needs at 200-2000 watts > per brain. > http://netalive.startlogic.com/debenedictis.org/erik/Publications-2005/Reversible-logic-for-supercomputing-p391-debenedictis.pdf > > As I noted in my essay on this, > http://www.aleph.se/andart/archives/2009/03/a_really_green_and_sustainable_humanity.html > while this energy demand is higher than the biological brain it can be > supplied more efficiently than growing organisms, harvesting them, possibly > passing them through other animals, and then digesting them. Even this kind > of not-Drexlerian nanotech computing would be very green. Very interesting paper. Thank you for sharing those thoughts. > Estimating the ultimate limits is hard, since we do not know how many > dissipative calculations we need. Assuming one irreversible operation every > millisecond at every synapse leads to 10^17 dissipating operations per > second and an energy dissipation of 3*10^-6 watts per degree (colder > computers are more efficient). So even here nanowatts is going to be tough > (cooling below a few Kelvin is expensive), but less than a milliwatt per > brain seems entirely feasible using LN- if we have reversible computers with > little need for error correction. > > >>> Reversible logic is slow, and it's not perfectly reversible. >>> > > Not necessarily, just a lot of the current proof-of-concept designs. I > expect that once we actually start working on it seriously we are going to > optimize it quite a lot, including how to get the error correction (which > dissipates) done in a clean fashion. It wouldn't surprise me if there was a > practical tradeoff between speed and dissipation, though (all those quantum > limits to computation involve energy, and fast changes do involve high > wattages that are hard to keep dissipationless). > > >> An interesting question to be answered is what is the most limiting >> factor? Is it matter out of which to build intelligence? Is it energy >> to power it? Time to run it? Or space to house it? Or is there some >> other limiting factor? I think it will take a while for the >> exponential growth to stop, but it must eventually stop. I'm just not >> sure which of the above is the most limiting factor. Only time and >> technology will tell. I'm not sure we can even guess at this point >> what the most limiting factor will be. >> > > In the really long run you cannot get more mass than around 10^52 kg, due to > the accelerated expansion of the universe. And there are time limits due to > proton decay and quantum noise. 
But long before that lightspeed lags will > make it hard to maintain cohesive thinking systems when the communications > delays become much longer than the local processing cycles. Even light speed may prove not to be a barrier in the sense that some of the small loop time travel of information may make some of that go away. The future isn't limitless, but it is WAY out there by any measure. > A lot of the limits depend on what you *want* minds to do. Experiencing > pleasure doesn't require long-range communications or even much storage > space, while having the smartest possible mind requires a lot of > communications and resources. What I would like to do is have a conversation with every other living human being all at the same time, perhaps more than one conversation with very interesting people... :-) Thank goodness that we'll eventually figure out fusion. :-) -Kelly From lubkin at unreasonable.com Tue Feb 22 21:30:59 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 22 Feb 2011 16:30:59 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <4D640DB1.9060702@mac.com> References: <4D640DB1.9060702@mac.com> Message-ID: <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Samantha wrote: >That is probably my rule of 5 for the day so shutting up. Spike waived the limit for discussing libertarianism through the 24th, at least as it applies to transhumanism -- >We haven't really had a libertarian discussion here for a good >while. In light of Darren's comments above, I propose a temporary >open season on the specific topic of transhumanism and >libertarianism. Free number of posts on all that for five days My rule of thumb for assessing the ethics of a governmental policy is to consider whether it would be ethical if it involved a handful of people, e.g., "Spike and Samantha "vote" that I don't need both a laptop and a tablet and I must give my tablet to Samantha. If I refuse, they will beat me up." is equivalent to a progressive tax code. This simplification also highlights the differences in political philosophies. Probably all of us would agree that (A) Initiation of force is bad. (B) Starving children is bad. The question is which is worse. A libertarian would say initiation of force is unacceptable; figure out some other way to feed starving children. A liberal would say that starving children is unacceptable and so be it if force is necessary to avoid it. Two equally smart, rational, caring people can reasonably prioritize differently and rigorously derive different conclusions. Looking in a transhumanist future, as long as we are distinct individuals, there will be room for competition, cooperation, and trade. That is something we talked about on the original list. Keith, if I'm not mistaken, wanted to have a quadrillion copies of himself with starships off exploring the universe, to report back to each other at our end-of-the-universe party. There are no limits to want. You can always want more than you have or more than exists. Could we be a single computronium borganism? I suspect not. I think that as long as there's transmission lag, a system that big will be a society. And, therefore, the same choices apply if I want what you have as they do now. Democracy, though, doesn't seem a viable concept in a transhumanist future. We'd all be too different in capabilities for "one being, one vote." -- David. 
From darren.greer3 at gmail.com Tue Feb 22 21:34:22 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 22 Feb 2011 17:34:22 -0400 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <004c01cbd2ae$86f50c60$94df2520$@att.net> Message-ID: 2011/2/22 Josh > Hey, new to the list and was planning on just reading through for a couple > of days but I have to jump in right quick. > Another welcome. I'm a relative newcomer too. I joined a year ago. > > We (agnostics, atheists, or those with other spiritual interests) have to > be careful about the way we characterize religious sects. This is not > helping your authority at all. Strangely enough, my first posts to this group were exactly along these lines, and with the same sentiments -- that some religious people are not foaming at the mouth and therefore are relatively harmless, and so we should be careful not to paint all with the same brush. I still believe that to a degree, but I also now believe that we have not just a right but a responsibility to question any system that spreads ignorance, intolerance, exclusion and fear. And most, if not all, religions are guilty of some of what is listed. Even the Baha'is, arguably the most inclusive and tolerant religion on the planet, has a stricture against homosexuals and drug addicts. We question and criticize political systems all the time, even those we agree with, but when it comes to religion there has been this 'hands off' policy that has gone on far too long and has created a political and social vacuum in which hatred and intolerance has been allowed to operate with impunity. It is the source of much conflict on the planet. Even political demagogues hide behind it, because they know it is safe. Maybe it's time we spoke up and made it not so. darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Feb 22 21:47:24 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 22 Feb 2011 17:47:24 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D640369.9050802@mac.com> References: <4D640369.9050802@mac.com> Message-ID: > > > Samantha wrote: > You might want to read Rothbard or in economics read Hayek, von Mises and other Austrian economists. Or Economics in One Lesson by Hazlitt.< thanks. > We can't do your homework for you. :)< Of course. But one of the reasons I came to this group first was that even libertarians do not agree on an exact definition. Your suggestions are fairly heavy on the economics, for example, where other posts in reply to my query stressed that libertarianism was far more than economics. As a result I've decided to look at the socio-political aspects of the philosophy for now and save the economics for later. d. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Feb 22 21:46:24 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 13:46:24 -0800 Subject: [ExI] trim your replies please, was: RE: Complex AGI [WAS Watson On Jeopardy] Message-ID: <00ad01cbd2d9$f3df8780$db9e9680$@att.net> ... >> I reviewed this twice for tone... hopefully, it isn't too insulting. >> It isn't meant to be. >Alas, time pressure makes it impossible to respond with a line-by-line commentary. 
Posters please trim your replies to just the relevant information. Otherwise the archives are diluted and repetitive. Ideally we want those archives to be a rich stew of ideas as opposed to a thin gruel of repetition. spike From spike66 at att.net Tue Feb 22 22:10:39 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 14:10:39 -0800 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: <00c701cbd2dd$56be0450$043a0cf0$@att.net> ... On Behalf Of Stefano Vaj ... >..Such engagement does expose one to slight embarrassments, but it wonderfully keeps you focused on avoiding statements you are not ready to stand by in any court for an undefinite time in the future... :-) -- Stefano Vaj Slight embarrassment yes, court no. The prosecution would need to prove it was actually *you* who wrote the post. Having been sent from your computer is insufficient proof. I do urge all, if you have ExI-ish friends visit your home, to have them write something that sounds a little like you and post it with your signature, then do something to be able to later prove it actually was not you who wrote it. For instance, if you visit your doctor and your medical records show you were actually under anesthetic at a particular date and time, you could have a neighbor or friend come over and post some wacky thing. I myself have been misquoted here, just today in fact, as having written something that was actually written in rebuttal to something I had posted. I didn't bother scolding the person who did that, rather assumed it accidental, with the usual no blood, no foul attitude. It is good to have provable deniability however. I have it. I had the pseudo-spikes add in a particular code word so that I can later search the archives and I know exactly which were the faux-spike posts and which were genuine. But you don't know that code word, nor do any future prosecutors. Even with our expert data-miners, I don't think there is any way to figure out that code word. spike From spike66 at att.net Tue Feb 22 22:38:59 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 14:38:59 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D640DB1.9060702@mac.com> References: <4D640DB1.9060702@mac.com> Message-ID: <00da01cbd2e1$4c620d40$e52627c0$@att.net> ... On Behalf Of Samantha Atkins Subject: Re: [ExI] Call To Libertarians On 02/19/2011 03:37 AM, Darren Greer wrote: > Thanks for you responses. Special thanks to Fred for the run-down and > the links. ... > >Ayn Rand's philosophy is not remotely about being a self-centered jerk. >But that is an entire other thread largely to me populated, if it arises, by those that have no idea what they are talking about >or are unable or unwilling to discuss the matter intelligently without dismissive ranting. >That is probably my rule of 5 for the day so shutting up. -s Samantha we have a temporary open season on libertarianism that has a couple more days in it, or until people start posting disrespectful or non-well-thought-out commentary. So that doesn't count against your five today. {8^D Also, this was a short but excellent reply, thanks. 
spike From lubkin at unreasonable.com Tue Feb 22 23:00:31 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 22 Feb 2011 18:00:31 -0500 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: <00c701cbd2dd$56be0450$043a0cf0$@att.net> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <00c701cbd2dd$56be0450$043a0cf0$@att.net> Message-ID: <201102222259.p1MMxoKB005184@andromeda.ziaspace.com> Spike posted: >It is good to have provable deniability however. I have it. I had the >pseudo-spikes add in a particular code word so that I can later search the >archives and I know exactly which were the faux-spike posts and which were >genuine. But you don't know that code word, nor do any future prosecutors. >Even with our expert data-miners, I don't think there is any way to figure >out that code word. So you're saying you spiked the evidence.... -- David. From kellycoinguy at gmail.com Tue Feb 22 23:26:08 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 22 Feb 2011 16:26:08 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110218130321.GK23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> <20110218130321.GK23560@leitl.org> Message-ID: On Fri, Feb 18, 2011 at 6:03 AM, Eugen Leitl wrote: > On Fri, Feb 18, 2011 at 12:16:33AM -0700, Kelly Anderson wrote: > >> Perhaps, perhaps not. But I think ONE out of the several dozen >> competing paradigms will be ready to pick up more or less where the >> last one left off. > > *Which* competing platforms? Technologies don't come out of > the blue fully formed, they're incubated for decades in > R&D pipeline. Everything is photolitho based so far, self-assembly > isn't yet even in the crib. TSM is just 2d piled higher and > deeper. Photo lithography has a number of years left in it. As you say, it can extend into the third dimension if the heat problem is solved. I have seen one solution to the heat problem that impressed the hell out of me, and no doubt there are more out there that I haven't seen. By the time they run out of gas on photo lithography, something, be it carbon nano tube based, or optical, or something else will come out. A company like Intel isn't going to make their very best stuff public immediately. You can be sure they and IBM have some great stuff in the back room. I am not fearful of where the next S curve will come from, except that it might come out of a lab in China, Thor help us all then! >> > Kelly, do you think that Moore is equivalent to system >> > performance? You sure about that? >> >> No. Software improves as well, so system performance should go up > > Software degrades, actually. Software bloat about matches the advances > in hardware. I know what you are talking about. You are stating that Java and C# are less efficient than C++ and that is less efficient than C and that is less efficient than Assembly. In that sense, you are very right. It does take new hardware to run the new software systems. 
The next step will probably be to run everything on whole virtual machines OS and all, no doubt, not just virtual CPUs... That being said, algorithms continue to improve. The new, slower paradigms allow programmers to create software with less concern for the underlying hardware. I remember the bad old days of dealing with the segmented Intel architecture, switching memory banks and all that crap. I for one am glad to be done with it. But algorithms do improve. Not as fast as hardware, but it does. For example, we now have something like 7 or 8 programs playing chess above 2800, and I hear at least one of them runs on a cell phone. In 1997, it was a supercomputer. Now, today's cell phones are dandy, but they aren't equivalent to a high end 1997 supercomputer, so something else had to change. The algorithms. They continued to improve so that a computer with a small fraction of the power available in 1997 can now beat a grand master. > In terms of advanced concepts, why is the second-oldest high > level language still unmatched? Why are newer environments > inferior to already historic ones? Are you speaking with a LISP? I don't think that Eclipse is inferior to the LISP environment I used on HP workstations in the 80s. I think it is far better. I remember waiting for that damn thing to do garbage compaction for 2-3 minutes every half hour or so. Good thing I didn't drink coffee in those days... could have been very bad. :-) We tend to glorify the things of the past. I very much like playing with my NeXT cube, and do so every now and again (It's great when you combine Moore's law with Ebay, I could never have afforded that machine new.) The nostalgia factor is fantastic. But the NeXT was fairly slow even at word processing when you use it now. It was a fantastic development environment, only recently equaled again in regularity and sophistication. Eugen, don't be a software pessimist. We now have two legged walking robots, thanks to a combination of software employing feedback and better hardware, but mostly better software in this case. Picassa does a fairly good job of recognizing faces. I would never have predicted that would be a nut cracked in my time. -Kelly From spike66 at att.net Tue Feb 22 23:02:58 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 15:02:58 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D641774.6020609@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> Message-ID: <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> ... On Behalf Of Richard Loosemore ... >But, do please continue your dispute with spike: it is instructive to see libertarians disputing what the L word is actually >about. Glad I could help by framing the debate... Richard Loosemore Do trim responses please. There are many different kinds of libertarians, which explains why we seldom or never win elections. A lot of times, we don't even vote for our own candidates. I am a libertarian who would argue that you can do a whole bunch of things in a publicly funded way, but do it in such a way that private entities can compete, and usually win against it. Lets take a particularly difficult one that the US is tripping over right now: health care. 
We can set up a public health system perfectly in parallel with the private one, in such a way that both can compete and the cost is not ruinous. We set up a publicly funded emergency care system, which will try to patch you back up if you take a bad fall in your home, or you are shot and stabbed by the local youth organizations. No legal authorities are involved in any way, no courts, no malpractice anything, you just go there and they do what they do. If you aren't injured but just not feeling well, you have the option to go there as well, no appointment necessary, and if the youth organizations aren't too busy right then sending rival organizations' members to the hospital, they may be able to help you. But if you go that route, you take what you get, and you do not enter the lawsuit lottery. In parallel, the private system would still exist, with all its high-capital high skillset doctors. If you have money, to them you go. They are not involved in the legal system either. If the patient wants to buy insurance against the medics' slaying her, then that is between her and the insurance company, with private arbitration being the decider. Similarly, public roads and private for-profit roads can go side-by-side, as they in fact are doing within 200 meters of where I sit typing this message. A "Lexus lane" was added on the far left of the freeway. If you are in a hurry and have money, you can pay to drive in that. Otherwise get over to the right and slow down with the rest of the proles. I am in the process of choosing a kindergarten for my son. There is a perfect example of where public and private institutions operate in parallel, and effectively compete with each other. spike From darren.greer3 at gmail.com Wed Feb 23 00:45:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 22 Feb 2011 20:45:57 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: Quoting Samantha Atkins : > > The essential element of libertarianism is the Non-Agression Principle. >> No one has the right to initiate force against another. This is >> equivalent to total freedom to do anything that does not harm, >> physically force, threaten physical force or defraud another. >> >> > I like that principle Samantha. Very much. I am personally committed to it. But I wonder how does one go about establishing system where the principle non-aggression is paramount, when natural aggression, both tribal and individual, seems to be a dominant feature of the human psyche nurtured by millions of years of evolutionary development? I don't ask this question facetiously, or in an attempt to disparage. I'm truly interested in your response. I've been thinking about this kind of society for awhile, and how it would work. One of the answers I've come up with, that sounds similar to what you describe, is to establish a system of territorial morality, where the doctrine is "you do your thing and I'll do mine and it's all OK as long as it doesn't hurt anyone else." Because morality, along with status (again both tribal and individual) and economics, are most certainly related to aggression, from what I've perceived. Alister Crowley came up with something similar in his Thelema doctrine, as a recipe for modern utopia. The problem as I see it is what to do when people in your society violate this principle. 
Currently we're paralyzed by political correctness and cultural relativism. Also it might be difficult to come up with valid definitions of what harming somebody actually means. Just figuring that out on a case-to-case basis might be as daunting as coming up with a universal moral consensus. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 23 02:45:26 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 18:45:26 -0800 Subject: [ExI] watson on nova Message-ID: <002601cbd303$ba326aa0$2e973fe0$@att.net> Funny Dilbert strip: http://dilbert.com/strips/comic/2008-03-30/ {8^D -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Wed Feb 23 03:25:32 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 19:25:32 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D641774.6020609@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> Message-ID: <4D647E2C.5070204@mac.com> On 02/22/2011 12:07 PM, Richard Loosemore wrote: > Samantha Atkins wrote: >> On 02/19/2011 02:08 PM, spike wrote: >>> >>> -----Original Message----- >>> From: extropy-chat-bounces at lists.extropy.org >>> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Richard >>> Loosemore >>> Subject: Re: [ExI] Call To Libertarians >>> >>> spike wrote: >>>>> ... On Behalf Of Richard Loosemore >>>> The inclusion of "theaters" was strictly optional: not essential >>>> to my >>> argument. A throwaway... >>> >>> Ja, that one caught my attention. If any government builds a >>> theatre, that >>> government dictates what is played there. >>> >>>> Would it be more accurate, then, to say that Libertarianism is about >>> SUPPORTING the government funding of: >> >> No. This is the very epitome of definition by non-essentials. We >> can do better than this. >> >> A minarchist generally believes that the only valid functions for >> government are formulating and enforcing laws and the military. >> Things that they thing cannot be done privately. It is a very short >> list. >> >> But for what it is worth from this libertarian: >> >>> Keep in mind that I differentiate between libertarianism and >>> Libertarianism. >>> One has a capital L. I use lower case. >>> >>>> Roads, yes >> >> No. Private road building worked fine and most private toll roads, >> unlike public ones were paid off ages ago. >> >> >>>> Bridges, yes >> >> No. Most bridges were not built by government. >> >>>> Police, yes >> >> Perhaps but only with very constrained laws that follow the NAP. Not >> enforcement of whatever any politician things up regardless of >> whether it is consistent with individual rights. >> >> Arguably you do not need this to be a government function at all or >> to have any such specialized body. Read Rothbard for details. >> >>>> Firefighters, yes >> >> No. Private firefighters work fine. >> >> >>>> Prisons, yes, but perhaps not the luxury outfits we see so >>>> commonly >>> today. >>> >> >> No. There is also an interesting argument (Rothbard and others) that >> prisons are actually unnecessary for the putative purpose they are >> claimed to be justified by. 
>> >>>> Schools, yes >> >> No way. Government should not be involved in education whatsoever. >> >> >>>> Public transport in places where universal use of cars would >>>> bring >>> cities to a standstill yes, if the public transport is >>> self-sustaining without (or perhaps minimal) government subsidy >>> >> >> No. If the excuse is accurate the need can be fulfilled privately >> much better. >> >>>> The armed forces, yes >> >> Not necessarily but commonly argued by minarchist. But no wars >> declared by government with forced participation. Individuals decide >> whether the war is worth fighting or not. >> >> >>>> Universities, and publicly funded scholarships for poor students, >> >> No. You are free to contribute to the education funds of any >> individual students or to a pool administered by private persons to >> distribute funding to those in need of it for education. Government >> involvement is not remotely required. >> >>> Yes if by "poor students" you meant students with little money, as >>> opposed >>> to bad students. High SATers, yes. >>> >>> > National research laboratories like the Centers for Disease >>> Control and >>> Prevention yes >>> >> >> No. There is no need for government to do this job. >> >>>> Snow plows, yes, operated by non-union drivers >> >> No. >> >>>> Public libraries, yes >> >> No. Private persons and groups can and do create libraries open to >> the public. >> >>> > Emergency and disaster assistance; yes, >> >> No. Private groups and individuals can do this. >> >>> >>>> Legal protection for those too poor to fight against the >>>> exploitative >>> power of corporations; no, let them take their trade elsewhere. >>> >> >> Non starter BS. All have the same rights under rational individual >> rights NAP based law. >> >> >>> > Government agencies to scrutinize corrupt practices by >>> corporations >>> and wealthy individuals, This might be OK if we balance it by having >>> corporations which would scrutinize corrupt practices by government >>> and poor >>> individuals >> >> Nope. Either people or businesses broke rational laws or they did >> not. No classist BS. >> >>>> Basic healthcare for old people who worked all their lives >>> for corporations who paid them so little in salary that >>> they could not save for retirement without starving to >>> death before they reached retirement... yes >>> >> >> Highly biased BS. No one has a valid claim on the resources of >> anyone else irrespective of the wishes of the those others. Ever. >> >>>> And sundry other programs that keep the very poor just above >>> the subsistence level, so we do not have to step over their >>> dead bodies on the street all the time, and so they do not >>> wander around in feral packs, looking for middle-class people >>> that they can kill and eat... >>> >> >> Utter BS. Poverty is created quite well by the Welfare State. We >> are all impoverished compared to what we could have had by the huge >> bloated state and its manifold takings from us by force. > > Now, this is between you and spike, since he was the one who responded > to my questions ..... but you indirectly commented on the *framing* > of my questions to spike, so I have some observations... > > In a parallel post, you said: > > >> Ayn Rand's philosophy is not remotely about being a self-centered > >> jerk. 
But that is an entire other thread largely to me populated, > >> if it arises, by those that have no idea what they are talking > >> about or are unable or unwilling to discuss the matter > >> intelligently without dismissive ranting. > > Hmmmmm. Can't help but notice that you just responded to my very > polite and mild-mannered list of questions directed at spike, with > language that dismissed my words as "Non starter BS", "Classist BS", > "Highly biassed BS" and "Utter BS". > Are you saying your questions were without spin? It was the spin that I responded to. You may call it ranting if you like. > Then you complain about some hypothetical people who are "unable or > unwilling to discuss the matter intelligently without dismissive > ranting". > Those questions are not usable for an intelligent examination of libertarian thought as I mentioned at the beginning of this post. I was hardly trying to do an intelligent reasoned discussion as the questions were throwaways and I said it was from my perspective for what it was worth to answer them. So your criticism is not that well placed in this case. > Very interesting. Thoroughly consistent with other experiences I have > had from people who defend extreme libertarian views. > Whatever. You baited the hook. > *Some* people (not me, for sure, so don't get me wrong) would > summarize that kind of behavior as .... well, I won't say it. ;-) > Don't be passive aggressive about it. > But, do please continue your dispute with spike: it is instructive to > see libertarians disputing what the L word is actually about. Glad I > could help by framing the debate. Sigh. It is starting to not be worth my time to even attempt to answer the pernicious nonsense that flies past on such subjects. - s From kellycoinguy at gmail.com Wed Feb 23 03:02:59 2011 From: kellycoinguy at gmail.com (kellycoinguy at gmail.com) Date: Tue, 22 Feb 2011 20:02:59 -0700 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: Message-ID: <4d6478f0.06ead80a.48e8.301b@mx.google.com> The current US system already incarcerates more people per capita than any other prison system in the history of the world. Almost 2% of the adult population. The answer from my POV is to have fewer laws. For example decriminalize drugs ... Both prescription and illicit... And you will reduce the prison population over night. Kelly -- Sent from my Palm Pre On Feb 22, 2011 11:53 AM, Alfio Puglisi <alfio.puglisi at gmail.com> wrote: On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson <kellycoinguy at gmail.com> wrote: On Sun, Feb 20, 2011 at 8:20 PM, Richard Loosemore <rpwl at lightlink.com> wrote: > Ben Zaiboc wrote: >> >> Richard Loosemore wrote: >> >>> Would it be more accurate, then, to say that Libertarianism is >>> about >> >> SUPPORTING the government funding of: >> >>> Roads, Bridges, Police, Firefighters, Prisons,... Some libertarians go so far as to shorten this list to Army, Courts and Police. There is no reason today for all roads not to be toll roads IMHO. Why not regulate, then privatize prisons? Because it creates an incentive to incarcerate people? The more people in prison, the more profits from prison management.  The first fire station in America was a libertarian establishment founded by Benjamin Franklin. Buy fire insurance from us, and we'll fight the fire when your house goes up. If not, we'll come and protect your insured neighbors. THAT is libertarianism at its farthest point. Wow. 
I don't know how to say this without sounding offensive, but this is remarkably similar to how the mafia operates in southern Italy. Buy protection from us, and we'll make sure that nothing happens to your business. If not, don't call us when some random guy happens to start a fire on your door or your delivery truck in the middle of the night... Alfio  -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From algaenymph at gmail.com Wed Feb 23 03:34:52 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Tue, 22 Feb 2011 19:34:52 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? Message-ID: <4D64805C.7040501@gmail.com> We seem to spend more time in these lists debating the merits of libertarianism or socialism as opposed to, say, how to improve our image with the public. Why is that? From sjatkins at mac.com Wed Feb 23 03:32:22 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 19:32:22 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> Message-ID: <4D647FC6.1080406@mac.com> On 02/22/2011 03:02 PM, spike wrote: > ... On Behalf Of Richard Loosemore > ... >> But, do please continue your dispute with spike: it is instructive to see > libertarians disputing what the L word is actually>about. Glad I could > help by framing the debate... Richard Loosemore > > > Do trim responses please. > > There are many different kinds of libertarians, which explains why we seldom > or never win elections. A lot of times, we don't even vote for our own > candidates. > > I am a libertarian who would argue that you can do a whole bunch of things > in a publicly funded way, but do it in such a way that private entities can > compete, and usually win against it. > > Lets take a particularly difficult one that the US is tripping over right > now: health care. We can set up a public health system perfectly in > parallel with the private one, in such a way that both can compete and the > cost is not ruinous. Spike. How is a public healthcare system, that is government run system paid for with money forcefully taken from individuals, remotely in keeping with the NAP which is the cornerstone of libertarianism? Do you really think that the government can be involved in healthcare without grossly inflating costs? With the government health programs we have now we have tens of trillions of unfunded liabilities. So even on a fiscal basis, much less a libertarian one, I don't see why you would support this. > We set up a publicly funded emergency care system, > which will try to patch you back up if you take a bad fall in your home, or > you are shot and stabbed by the local youth organizations. No legal > authorities are involved in any way, no courts, no malpractice anything, you > just go there and they do what they do. 
If you aren't injured but just not > feeling well, you have the option to go there as well, no appointment > necessary, and if the youth organizations aren't too busy right then sending > rival organizations' members to the hospital, they may be able to help you. > But if you go that route, you take what you get, and you do not enter the > lawsuit lottery. > > In parallel, the private system would still exist, with all its high-capital > high skillset doctors. If you have money, to them you go. They are not > involved in the legal system either. If the patient wants to buy insurance > against the medics' slaying her, then that is between her and the insurance > company, with private arbitration being the decider. What? That isn't libertarian particularly either unless you are removing all government created law and enforcement. Doing it only for medical stuff doesn't seem very reasonable or at all sound. If a doctor harms you through negligence then you do have a legitimate legal grievance. Hmm. I may be falling for the famous spike tongue-in-cheek remarks again. :) - s From sjatkins at mac.com Wed Feb 23 03:38:27 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 19:38:27 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: <4D648133.9060706@mac.com> On 02/22/2011 01:30 PM, David Lubkin wrote: > Samantha wrote: > >> That is probably my rule of 5 for the day so shutting up. > > Spike waived the limit for discussing libertarianism through the 24th, > at least as it applies to transhumanism -- > >> We haven't really had a libertarian discussion here for a good >> while. In light of Darren's comments above, I propose a temporary >> open season on the specific topic of transhumanism and >> libertarianism. Free number of posts on all that for five days > > My rule of thumb for assessing the ethics of a governmental policy is > to consider whether it would be ethical if it involved a handful of > people, e.g., "Spike and Samantha "vote" that I don't need both a > laptop and a tablet and I must give my tablet to Samantha. If I > refuse, they will beat me up." is equivalent to a progressive tax code. > > This simplification also highlights the differences in political > philosophies. Probably all of us would agree that > > (A) Initiation of force is bad. > (B) Starving children is bad. > > The question is which is worse. I don't think that is the question at all. You can't cure an ill by introducing another one. If you think starving children are bad and have some means to do so I am sure you would happily donate those means. If I think starving children are bad and coming to your house with a armed group to take a percentage of your savings purportedly to help them then there is no question that is wrong no matter how much we both believe (B). > A libertarian would say initiation of force is unacceptable; figure > out some other way to feed starving children. A liberal would say that > starving children is unacceptable and so be it if force is necessary > to avoid it. > They would force other people > Two equally smart, rational, caring people can reasonably prioritize > differently and rigorously derive different conclusions. > They are not equally caring at all. The 'liberal' doesn't really buy (A) based on their actions. 
> Looking in a transhumanist future, as long as we are distinct > individuals, there will be room for competition, cooperation, and > trade. That is something we talked about on the original list. Keith, > if I'm not mistaken, wanted to have a quadrillion copies of himself > with starships off exploring the universe, to report back to each > other at our end-of-the-universe party. There are no limits to want. > You can always want more than you have or more than exists. Sure. And reality is the measure of which wants can be satisfied, including the reality of the rights of others involved. - samantha From sjatkins at mac.com Wed Feb 23 03:40:58 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 19:40:58 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640369.9050802@mac.com> Message-ID: <4D6481CA.1000405@mac.com> On 02/22/2011 01:47 PM, Darren Greer wrote: > > > Samantha wrote: > > > You might want to read Rothbard or in economics read Hayek, von > Mises and other Austrian economists. Or economics in one lesson by > Hazlitt.< > > thanks. > > > > We can't do your homework for you. :)< > > Of course. But one of the reasons I came to this group first was that > even libertarians do not agree on an exact definition. Your > suggestions are fairly heavy on the economics, for example, where > other posts in reply to my query stressed that libertarianism was far > more than economics. As a result I've decided to look at the > socio-political aspects of the philosophy for now and save the > economics for later. > Rothbard road about a lot more than the economic aspects. He is a general libertarian source. Hayek's The Road to Serfdom is also a libertarian classic about a lot more than economics. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Feb 23 04:01:24 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 22 Feb 2011 22:01:24 -0600 Subject: [ExI] Atlas Shrugged Movie Trailer Message-ID: <4D648694.4080107@satx.rr.com> Atlas Shrugged, Part 1, opens in theaters April 15th, 2011. Casting looks promising. From sjatkins at mac.com Wed Feb 23 04:36:29 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 20:36:29 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: <298EDAA1-4FFD-4C5D-8C3C-D1022683A71F@mac.com> On Feb 22, 2011, at 4:45 PM, Darren Greer wrote: > > > > Quoting Samantha Atkins : > > The essential element of libertarianism is the Non-Agression Principle. > No one has the right to initiate force against another. This is > equivalent to total freedom to do anything that does not harm, > physically force, threaten physical force or defraud another. > > > > I like that principle Samantha. Very much. I am personally committed to it. But I wonder how does one go about establishing system where the principle non-aggression is paramount, when natural aggression, both tribal and individual, seems to be a dominant feature of the human psyche nurtured by millions of years of evolutionary development? > We establish it already whenever we deal with one another peaceably enough to derive maximum value from our association. We are a social interdependent species. Much of our thriving is contingent upon peaceable interactions with one another to mutual benefit. 
We cannot maximize the value of our interaction if we are subjecting one another to initiated force and the threat of the same. In particular human beings principle unique means of survival and thriving is our mind. Minds do not function optimally under coercion. Economics, broadly speaking, is the free, non-coerced interactions of human beings. Exchanges of all kinds occur that are willing entered into by the parties involved. Exchanges of value for value without sacrificing one's values or the values of the other. So non-aggression arises rather naturally as an optimum value enhancing strategy over time. However, there are indeed many aspects of our evolved psychology that are not so rational. It is also all too easy to conclude that the majority need to be coerced somehow for their own good if one not only disapproves but also considers downright dangerous what their voluntary choices end up being. > I don't ask this question facetiously, or in an attempt to disparage. I'm truly interested in your response. I've been thinking about this kind of society for awhile, and how it would work. One of the answers I've come up with, that sounds similar to what you describe, is to establish a system of territorial morality, where the doctrine is "you do your thing and I'll do mine and it's all OK as long as it doesn't hurt anyone else." Because morality, along with status (again both tribal and individual) and economics, are most certainly related to aggression, from what I've perceived. Alister Crowley came up with something similar in his Thelema doctrine, as a recipe for modern utopia. The about "do your own thing as long as it doesn't hurt anybody" is actually subsumed by the NAP. Crowley was a mystic so he is not the best choice for any sort of ethical clarity. > > The problem as I see it is what to do when people in your society violate this principle. Currently we're paralyzed by political correctness and cultural relativism. Also it might be difficult to come up with valid definitions of what harming somebody actually means. Just figuring that out on a case-to-case basis might be as daunting as coming up with a universal moral consensus. Are we paralyzed by those things or by a refusal or perhaps disbelieve that there is any rational basis for ethics at all? If we do believe there is a rational basis then you can make a more objective argument that A is better than B. If there is no basis in our estimation then in the realms pertaining to ethics (which include economics and politics in large part) we can't really judge whether A is better than B and thus are forced into relativism or subjectivism. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Wed Feb 23 04:39:38 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 22 Feb 2011 20:39:38 -0800 Subject: [ExI] Atlas Shrugged Movie Trailer In-Reply-To: <4D648694.4080107@satx.rr.com> References: <4D648694.4080107@satx.rr.com> Message-ID: <56F7E963-CA2A-473E-9EB4-5C901F3AAAA3@mac.com> On Feb 22, 2011, at 8:01 PM, Damien Broderick wrote: > > > Atlas Shrugged, Part 1, opens in theaters April 15th, 2011. Casting looks promising. It may make be a bit less ticked off and depressed on Tax Day. I am really looking forward to it! 
- samantha From spike66 at att.net Wed Feb 23 05:13:14 2011 From: spike66 at att.net (spike) Date: Tue, 22 Feb 2011 21:13:14 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D647FC6.1080406@mac.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> Message-ID: <002f01cbd318$601d9b60$2058d220$@att.net> >... On Behalf Of Samantha Atkins ... > > >> Do trim responses please. > >> There are many different kinds of libertarians, which explains why we > seldom or never win elections. A lot of times, we don't even vote for > our own candidates. > >> I am a libertarian who would argue that you can do a whole bunch of > things in a publicly funded way, but do it in such a way that private > entities can compete, and usually win against it. > ... >Spike. How is a public healthcare system, that is government run system paid for with money forcefully taken from individuals, remotely in keeping with the NAP which is the cornerstone of libertarianism? This proposed system is kinda what is evolving anyways. Our emergency rooms routinely take patients they know cannot or will not pay. Currently in the US we are telling people that health care is a right. So how do we expect them to pay? Here's my scheme. The emergency room will be open to all, and the medics who work there will be on salary, not a particularly good one. They will hire pretty much any doctor who is willing to work there, and allow nurses to do plenty of what doctors usually do. These doctors and the hospital will be immune from lawsuit. There will be plenty of inexperienced doctors working there, not so different from my favorite old TV show, that I miss still. No charge, and mostly worth it. Yes it is risky as hell to go there for treatment, but it's free to all. Yes it might be crammed full of illegals, but treatment goes to whoever is sickest. I'm not actually claiming this is a good solution. >Do you really think that the government can be involved in healthcare without grossly inflating costs? Sure do. The government-run ERs wouldn't do any of the high techy stuff. With this system you can still have private emergency rooms which would have CAT scan machines and all the stuff we are accustomed to seeing in the hospital. > With the government health programs we have now we have tens of trillions of unfunded liabilities... Ja, because we try to make our government-run ERs state-of-the-art high tech envy of the world hospitals. I am saying we can have government run ERs where anyone can get treatment, but I am not saying they will be good hospitals. With that arrangement, for-profit hospitals will do fine in competition with the free clinics. Now we are telling everyone that they are entitled to high end medical treatment, when most people can solve many of their own most serious health problems just by throwing away the cigarettes and losing weight. >>... If the patient wants to buy insurance against the medics' slaying her, then that is >> between her and the insurance company, with private arbitration being the decider. >What? That isn't libertarian particularly either unless you are removing all government created law and enforcement... 
I am proposing removing all legal infrastructure from this one area, medicine, since by involving the legal system we have created a system which we cannot afford. I am proposing we acknowledge that going to the free hospital is dangerous, and they might screw up and we might suffer. We accept risk when we ride motorcycles, when we live unhealthful life styles, when we go into bad neighborhoods. I am proposing that we give away our legal protections, not because we don't need them or that the doctor will not mess up, but rather that we cannot afford all the current protection we demand. It causes doctors to perform defensive medicine, which not only runs up the bill but makes for lousy care. My old favorite example: The doctors can't afford to risk not checking men over 50 for prostate cancer, so when we older lads go to the doctor *for any reason* including an ingrown toenail or the UPS guy delivering a package, he gets the jelly finger. Result: men over 50 avoid the doctor, even when they are sick. The risk of prostate cancer isn't high, but the doctor doesn't want to get sued, so... we get treatments that are not necessary, while losing some that are necessary. > Doing it only for medical stuff doesn't seem very reasonable or at all sound. If a doctor harms you through negligence then you do have a legitimate legal grievance. This I am acknowledging, and proposing that we remove the lawyers from this one specific field, and yes I know it introduces risks of bad doctors (as often shown on ER) but I am saying that it is a good tradeoff. Bad doctors usually are not criminals, rather just bad doctors, and for the most part are likely good people trying to do right. > Hmm. I may be falling for the famous spike tongue-in-cheek remarks again. :) - s Well not really. I do goof around far too much, but this isn't one of those times. I do want us to think of all the alternatives and realize none of them are particularly good. The system the US government is proposing, and has actually passed into law (until the Supreme Court knocks it down) will not work, even if it were legal. It requires everyone to buy medical insurance, but the penalty for not having it is a small fraction of the cost of even basic insurance. So you know most people will drop their insurance, then buy the insurance only when they get sick, go to the doctors to get everything taken care of, then drop the policy as soon as the last doctor is consulted. Even with the non-insurance penalty, there is no reasonable way to extract money for insurance from those who don't have an income. Hell a child could see that system will collapse. Some like to say single payer is the answer, but what happens if that payer can't pay? Answer: you don't get much. So, all I am proposing is a formalized system where it is single payer, but it isn't a particularly competent medical system. It is cheap enough that the government can pay for it. In parallel, you have a pay system, which ends up with all the high-techy medical gear and all the real doctors, and is really the place to be if one can afford it. When you think about it, that is pretty much what's evolving anyway. 
spike From brent.allsop at canonizer.com Wed Feb 23 05:19:00 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 22 Feb 2011 22:19:00 -0700 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: <4D6498C4.2040306@canonizer.com> Happy Birthday Natasha, also. Brent Allsop On 2/22/2011 10:25 AM, Adrian Tymes wrote: >> *** Writers from the early 1990s: If you agree to allow all of your postings >> to the original Extropians email list to be publically available, please let >> me (and the world -- or at least the Extropians-Chat email list) know. >> >> --- Max > I forget if I was on the original list, but if I was, then sure, > permission given for > my posts. Same goes for my posts to this list (but then, they already have > been publicly available). > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From thespike at satx.rr.com Wed Feb 23 06:06:06 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 23 Feb 2011 00:06:06 -0600 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: <4D6498C4.2040306@canonizer.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <4D6498C4.2040306@canonizer.com> Message-ID: <4D64A3CE.8090207@satx.rr.com> On 2/22/2011 11:19 PM, Brent Allsop wrote: > Happy Birthday Natasha, also. 61, says Wiki! How time flies... Damien Broderick [far older] From possiblepaths2050 at gmail.com Wed Feb 23 06:23:09 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 22 Feb 2011 23:23:09 -0700 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: <4D64A3CE.8090207@satx.rr.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <4D6498C4.2040306@canonizer.com> <4D64A3CE.8090207@satx.rr.com> Message-ID: Natasha, HAPPY BIRTHDAY!!!!! And may you have at least 100 more.... Warm wishes, John : ) On 2/22/11, Damien Broderick wrote: > On 2/22/2011 11:19 PM, Brent Allsop wrote: > >> Happy Birthday Natasha, also. > > 61, says Wiki! How time flies... > > Damien Broderick > [far older] > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From kellycoinguy at gmail.com Wed Feb 23 07:51:13 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 23 Feb 2011 00:51:13 -0700 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) Message-ID: Hi... My name is Kelly... and I'm a libertarian... 
I think it was Fred who said that he would not expect anyone on the list to have a problem with same sex marriage. Sounds like an interesting topic. I don't have any problems whatsoever with adults living in whatever sexual arrangement works for them. The government doesn't belong in the bedroom. I have polygamists in my family tree. Most polygamists that get a bad name today are marrying minors, which I don't think is proper. Some people do believe minors should get to participate in these kinds of relationships, NAMBLA, for example. I don't go to that extreme, but acknowledge that it has been normal in past civilizations (the ancient Greeks, for example) and that it could become normal again in our civilization. Seems unlikely, but today's zeitgeist towards gay couples would have seemed very unlikely in 1955. I also think prostitution should be legalized (and maybe regulated just a little regarding the spread of disease), but that perhaps is going far afield of my main point. I only mention it to show that I'm open minded on the subject. Living together is not marriage. Marriage is a government defined and sanctioned institution that comes with a huge number of both benefits and liabilities. In a more purely libertarian environment, the number of these benefits and liabilities would be smaller and it wouldn't matter so much if marriage were extended to any group of people (or whatever) who wanted to have a recognized relationship. But in our socialist leaning (from my point of view) governmental system, there are a lot of areas where there is a leak from the private relationship into both public interest and liability. If we opened up marriage to couples, that would allow for traditional marriage, homosexual marriages, but not polygamist or polyandrous unions. That doesn't seem very fair. Why extend rights to couples, but not to larger groups? Once you extent the rights to groups, it gets a bit more complicated to deal with divorce, child custody and the like. What is the standard visitation schedule for the fellow who leaves a union of 5 men and 7 women? Do you have to prove genetic relationship to the child? In a gay male union, if they have a child where their sperm is intermingled and a surrogate womb is used, then you have to do a DNA test when they are divorced to figure out where the child goes. Why should that matter? What if their DNA is somehow co-mingled using as yet uninvented technology? In the longer term, if I am allowed to marry an artificial cyborg, does that grant the cyborg citizenship? Status as an "individual" or "person"? If I as an employer extend health insurance to the family of the employee, do I have to then pay for insurance for his ten "spouses"? Is that really fair? Will it be required by government edict? Don't even get me started on what this would do to our current tax code. Will virtual people get to marry or just physical cyborgs? It gets messy. Even with a constitutional amendment stating that marriage is between a man and a woman, in the future what constitutes a man and what constitutes a woman will get fuzzy. Just think Bicentennial Man for one interesting example. The point is that extending the definition of marriage leads to a very large number of very messy legal issues. Particularly when you get into relationships involving more than two people. All by itself, that isn't a terrific argument against opening up marriage, but it does give me reason to doubt whether it is a good idea for society. 
If you say you're OK with gay people being married, but have a problem with polygamous or polyandrous relationships, I think you've got some 'splainin ta do. That doesn't seem like a tenable position to me. I totally get the emotional issue of "these two guys LOVE each other and should be allowed to declare that to the world just like that couple over there." My heart reaches out to those folks, it really does. But I'm not quite ready to say that gay marriage is a good thing for society, even if they think it is a good thing for them. Convince me. -Kelly From eugen at leitl.org Wed Feb 23 08:00:54 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Feb 2011 09:00:54 +0100 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64805C.7040501@gmail.com> References: <4D64805C.7040501@gmail.com> Message-ID: <20110223080054.GD23560@leitl.org> On Tue, Feb 22, 2011 at 07:34:52PM -0800, AlgaeNymph wrote: > We seem to spend more time in these lists debating the merits of > libertarianism or socialism as opposed to, say, how to improve our image Who is this 'we' kemo sabe? > with the public. Why is that? Do we have a budget? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From kellycoinguy at gmail.com Wed Feb 23 08:05:13 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 23 Feb 2011 01:05:13 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: <002f01cbd318$601d9b60$2058d220$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <002f01cbd318$601d9b60$2058d220$@att.net> Message-ID: On Tue, Feb 22, 2011 at 10:13 PM, spike wrote: > Well not really. ?I do goof around far too much, but this isn't one of those > times. ?I do want us to think of all the alternatives and realize none of > them are particularly good. ?The system the US government is proposing, and > has actually passed into law (until the Supreme Court knocks it down) will > not work, even if it were legal. I agree. > It requires everyone to buy medical > insurance, but the penalty for not having it is a small fraction of the cost > of even basic insurance. ?So you know most people will drop their insurance, > then buy the insurance only when they get sick, go to the doctors to get > everything taken care of, then drop the policy as soon as the last doctor is > consulted. ?Even with the non-insurance penalty, there is no reasonable way > to extract money for insurance from those who don't have an income. ?Hell a > child could see that system will collapse. The thing that frightens me is the thought that maybe that is the intent of that particular law... I hate thinking that way, but it is hard not to sometimes. As for health care, I think you can figure out a lot about what people believe by talking about spending a million dollars to save a single premature baby from a natural premature death. 
The discussion usually changes somewhat depending on who's million dollars is being discussed, the family and friends of the child, a charity organization, the state, the hospital or the insurance company. It usually gets pretty messy pretty quickly, and exposes basic philosophical and political outlooks quickly. -Kelly From kellycoinguy at gmail.com Wed Feb 23 08:30:11 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 23 Feb 2011 01:30:11 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D63E0C1.4000901@lightlink.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> <4D63E0C1.4000901@lightlink.com> Message-ID: On Tue, Feb 22, 2011 at 9:13 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > I am very much aware that he had nice things to say about Watson, but my > point was that he expressed many reservations about Watson, so I was > using his article as a counterfoil to your statement that "99% of > everyone else thinks it is a great success already". ?I just felt that > it was not accurate to paint me as the lone, 1% voice against, with 99% > declaring Watson to be a great achievement on the road to real AGI. Your turn to misunderstand what I said. I did not say that 99% of people would say that Watson was on the road to AGI, but merely that it was a substantial achievement that SUCCEEDED at it's stated goal of defeating human Jeopardy champions. Surely, you aren't arguing that Watson lost... :-) So by any reasonable examination of their short term goal, the Watson team succeeded. And 99% of everyone would say "cool". You are the 1% that says, "so what." This "bah humbug" attitude is what I find so off putting. Only that. > ? ?"Some AI researchers believe that this sort of artificial > ? ? general intelligence will eventually come out of incremental > ? ? improvements to 'narrow AI' systems like Deep Blue, Watson > ? ? and so forth. ? Many of us, on the other hand, suspect that > ? ? Artificial General Intelligence (AGI) is a vastly different > ? ? animal." I don't disagree with that. I don't think I've stated that Watson is definitely on the evolutionary road to AGI, merely that it was successful and cool. You just won't give them that, and I guess that's what bugs me the most about your position. I won't definitively say that Watson isn't on the road to some kind of AGI either. Admittedly, it probably wouldn't be a very human like AGI... but it could be intelligent and general. maybe. > My position is more strongly negative than his (and his position, in > turn, is more negative than Kurzweil's). Kurzweil gives Watson a little too much IMHO. In the sense that Watson can make money, he and I are on the same page. > Well, a lot of that was explained in the complex systems paper. As you know, I read that with a fine tooth comb. > At the risk of putting a detailed argument in so small a space, it goes > something like this. > > AI researchers have, over the years, publicized many supposedly great > advances, or big new systems that were supposed to be harbingers of real AI, > just around the corner. ?People were very excited about SHRDLU. ?The > Japanese went wild over Prolog. 
?Then there was the "knowledge based > systems" approach, aka "expert systems". ?Earlier on there was a 1960s craze > for "machine translation". ?In the late 1980s there were "neural networks" > vendors springing up all over the place. ?And these were just the paradigms > or general clusters of ideas ... never mind the specific systems or programs > themselves. So the problem is that people have over promised, and under delivered. Are you absolutely sure you aren't over promising? > Now, the pattern is that all these ideas were good at bringing down some > long-hanging fruit, and every time the proponents would say "Of course, this > is just meant to be a demonstration of the potential of this new > technique/approach/program: ?what we want to do next is expand on this > breakthrough and find ways to apply it to more significant problems". But in > each case it turned out that extending it beyond the toy cases was > fiendishly hard, and eventually the effort was abandoneed when the next > bandwagon came along. Yup, that's a really big problem. You do propose (vaguely) a method for getting around that, and that is quite exciting. I'll be very interested when you publish more about that. >> Hehe... which one was that? They all seemed pretty philosophical to my >> mind. None of them said... here is an algorithm that might lead to >> ?general artificial intelligence... > > Kelly :-(. ?You do not know what a real philosophy paper is, eh? > > The word "philosophy", the way you use it in the above, seems to mean > "anything that is not an algorithm". No, a philosophy paper is one that says, "here is what I think. what do you think of that?" A scientific paper says "here is what I did, and here is how you can do it too." The critical difference is reproducible results. Kind of like how a patent has to explain to one "skilled in the art" how to do something. I like your paper very much in the sense that it made me really think hard, and potentially in a productive direction. I will finish with one last question. When do you anticipate publishing your paper on your framework generator? -Kelly From algaenymph at gmail.com Wed Feb 23 08:09:23 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Wed, 23 Feb 2011 00:09:23 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <20110223080054.GD23560@leitl.org> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> Message-ID: <4D64C0B3.3060008@gmail.com> On 2/23/11 12:00 AM, Eugen Leitl wrote: > Who is this 'we' kemo sabe? Us transhumanists. > Do we have a budget? So we should just not even bother? I'm not saying we need a mass media blitz, just that we should look for how we're looking bad, figure out why, and respond to that on an individual level. The closest to mass-anything I can think of doing is creating a how-to guide for anyone who wants to start up their own advocacy group. From giulio at gmail.com Wed Feb 23 09:33:02 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 23 Feb 2011 10:33:02 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: <> Remember Churchill: democracy is the worse form of government, with the exception of all others tried so far. Democracy is two wolves and a lamb deciding, by majority vote, what to have for dinner. 
In other words it tends to degenerate into dictatorship of the majority and oppression of all minorities. The question is what is better than democracy. I am not able to answer it. -- Giulio Prisco giulio at gmail.com (39)3387219799 (1)7177giulio -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Feb 23 09:35:12 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Feb 2011 10:35:12 +0100 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64C0B3.3060008@gmail.com> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> Message-ID: <20110223093512.GF23560@leitl.org> On Wed, Feb 23, 2011 at 12:09:23AM -0800, AlgaeNymph wrote: >> Who is this 'we' kemo sabe? > > Us transhumanists. Absolutely not. Some transhumanists who are subscribed to a mailing list. Do you see all the non-participants non-participating vigorously? >> Do we have a budget? > > So we should just not even bother? I'm not saying we need a mass media A herd of cats attempts to appeal to dogs. (Why, actually?) > blitz, just that we should look for how we're looking bad, figure out A herd of cats cannot form a common front. Particularly, a common front appealing to dogs. The easiest way is to hire a dog with a good track record. > why, and respond to that on an individual level. The closest to > mass-anything I can think of doing is creating a how-to guide for anyone > who wants to start up their own advocacy group. A group targeting whom? Advocating what? Advocating how? What is the added value to the target audience? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From giulio at gmail.com Wed Feb 23 09:14:07 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 23 Feb 2011 10:14:07 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: <> Well put. It is not easy when primary values are in conflict. In these cases I tend to look for midway solutions, like feeding children as much as possible while reducing initiation of force to the strictly necessary minimum. Needless to say, both fundamentalist libertarians and fundamentalist liberals dislike midway solutions. -- Giulio Prisco giulio at gmail.com (39)3387219799 (1)7177giulio -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Wed Feb 23 09:46:40 2011 From: pharos at gmail.com (BillK) Date: Wed, 23 Feb 2011 09:46:40 +0000 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64805C.7040501@gmail.com> References: <4D64805C.7040501@gmail.com> Message-ID: On Wed, Feb 23, 2011 at 3:34 AM, AlgaeNymph wrote: > We seem to spend more time in these lists debating the merits of > libertarianism or socialism as opposed to, say, how to improve our image > with the public. ?Why is that? > And libertarians filling the list with detailed nit-picking arguments about how the poor half of the nation might have to let sick children die because they can't afford medical bills and the state shouldn't give handouts is a complete turnoff for the public. (That's only one of many turnoffs!) 
Libertarianism enthusiasts ruin any chance of appealing to the public. BillK From eugen at leitl.org Wed Feb 23 09:57:30 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Feb 2011 10:57:30 +0100 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: References: <4D64805C.7040501@gmail.com> Message-ID: <20110223095730.GG23560@leitl.org> Can we please let this thread die? The ban was there for a reason. Kthxbai. On Wed, Feb 23, 2011 at 09:46:40AM +0000, BillK wrote: > On Wed, Feb 23, 2011 at 3:34 AM, AlgaeNymph wrote: > > We seem to spend more time in these lists debating the merits of > > libertarianism or socialism as opposed to, say, how to improve our image > > with the public. ?Why is that? > > > > > And libertarians filling the list with detailed nit-picking arguments > about how the poor half of the nation might have to let sick children > die because they can't afford medical bills and the state shouldn't > give handouts is a complete turnoff for the public. > (That's only one of many turnoffs!) > > Libertarianism enthusiasts ruin any chance of appealing to the public. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From protokol2020 at gmail.com Wed Feb 23 10:09:04 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Wed, 23 Feb 2011 11:09:04 +0100 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> <4D63E0C1.4000901@lightlink.com> Message-ID: > Many of us, on the other hand, suspect that > Artificial General Intelligence (AGI) is a vastly different > animal." I suspect it isn't. It is more probable, that the AGI will be somewhere in this direction than not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From algaenymph at gmail.com Wed Feb 23 10:14:50 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Wed, 23 Feb 2011 02:14:50 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <20110223093512.GF23560@leitl.org> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> <20110223093512.GF23560@leitl.org> Message-ID: <4D64DE1A.2080402@gmail.com> On 2/23/11 1:35 AM, Eugen Leitl wrote: > Absolutely not. Some transhumanists who are subscribed > to a mailing list. Do you see all the non-participants > non-participating vigorously? I do see the same names and the same arguments. > A herd of cats attempts to appeal to dogs. (Why, actually?) Politicians listen to dogs. > A herd of cats cannot form a common front. Particularly, a common > front appealing to dogs. The easiest way is to hire a dog with > a good track record. Oo, I'll keep an eye out for that. > A group targeting whom? Advocating what? Advocating how? > What is the added value to the target audience? 
These are good questions, this is what we should be asking. :) I'll start with the ideas I have at the moment. ? Targeting whom? Whoever has the moral high ground. ? Advocating what? Transhumanism, of course. ? Advocating how? I'd begin by having us prepare answers to the hardest possible questions we can get asked, particularly in regards to equity. ? What is the added value to the target audience? You mean why would they be interested? I'd like to think for the same reason H+ adds value to us but you probably want something more politically specific. My best guess is to find a way to tie H+ to anti-corporatism (*not* anti-business or anti-free market, mind). Still, how to politically frame H+ is a line of questioning we should give some thought. Again, those were some useful questions, thanks. :) From eugen at leitl.org Wed Feb 23 10:57:45 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Feb 2011 11:57:45 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> <20110218130321.GK23560@leitl.org> Message-ID: <20110223105745.GH23560@leitl.org> On Tue, Feb 22, 2011 at 04:26:08PM -0700, Kelly Anderson wrote: > > *Which* competing platforms? Technologies don't come out of > > the blue fully formed, they're incubated for decades in > > R&D pipeline. Everything is photolitho based so far, self-assembly > > isn't yet even in the crib. TSM is just 2d piled higher and > > deeper. > > Photo lithography has a number of years left in it. As you say, it can Not so many more years. > extend into the third dimension if the heat problem is solved. I have Photolitho can't extend into third dimension because each subsequent fabbing step degrades underlying structures. You need a purely additive, iterable deposition process which doesn't damage underlying layers. > seen one solution to the heat problem that impressed the hell out of Cooling is only a part of the problem. There are many easy fixes which are cumulative in regards to reducing heat dissipation. > me, and no doubt there are more out there that I haven't seen. By the > time they run out of gas on photo lithography, something, be it carbon > nano tube based, or optical, or something else will come out. A Completely new technologies do not come out of the blue. We're about to hit 11 nm http://en.wikipedia.org/wiki/11_nanometer Still think Moore's got plenty of wind yet? > company like Intel isn't going to make their very best stuff public > immediately. You can be sure they and IBM have some great stuff in the The very best stuff is called technology demonstrations. It is very public for obvious reasons: shareholder value. > back room. I am not fearful of where the next S curve will come from, The only good optimism is well-informed optimism. Optimists would have expected clock doublings and memory bandwidth doublings to match structure shrink. > except that it might come out of a lab in China, Thor help us all > then! > > >> > Kelly, do you think that Moore is equivalent to system > >> > performance? You sure about that? > >> > >> No. Software improves as well, so system performance should go up > > > > Software degrades, actually. Software bloat about matches the advances > > in hardware. > > I know what you are talking about. 
> You are stating that Java and C#
> are less efficient than C++ and that is less efficient than C and that

I am saying that people don't bother with algorithms, because "hardware
will be fast enough", or just build layers upon layers of external
dependencies, because "storage is cheap, hurr durr". Let's face it, 98% of
developers are retarded monkeys, and need their programming license
revoked.

> is less efficient than Assembly. In that sense, you are very right. It
> does take new hardware to run the new software systems. The next step
> will probably be to run everything on whole virtual machines, OS and
> all, no doubt, not just virtual CPUs...

Virtualization is only good for increasing hardware utilization,
accelerating deployment and enhancing security by way of
compartmentalization. It doesn't work like Inception. If your hardware is
already saturated, it will degrade performance due to virtualization
overhead.

> That being said, algorithms continue to improve. The new, slower
> paradigms allow programmers to create software with less concern for
> the underlying hardware. I remember the bad old days of dealing with

Yes, software as ideal gas.

> the segmented Intel architecture, switching memory banks and all that
> crap. I for one am glad to be done with it.

Isn't relevant to my point.

> But algorithms do improve. Not as fast as hardware, but they do. For
> example, we now have something like 7 or 8 programs playing chess
> above 2800, and I hear at least one of them runs on a cell phone. In

Current smartphones are desktop equivalents of about half a decade ago.
An iPhone has about 20 MFlops, a Tegra 2 50-60 (and this is JIT). This is
roughly Cray 1 level of performance. Of course there are graphics
accelerators in there, too, and ARM is chronically anemic in the float
department.

> 1997, it was a supercomputer. Now, today's cell phones are dandy, but
> they aren't equivalent to a high end 1997 supercomputer, so something

Fritz has been beating the pants off most humans for a long time now:
http://en.wikipedia.org/wiki/Fritz_%28chess%29

Chess is a particularly narrow field; look at Go, where progress is far
less stellar:
http://en.wikipedia.org/wiki/Go_%28game%29#Computers_and_Go

> else had to change. The algorithms. They continued to improve so that
> a computer with a small fraction of the power available in 1997 can
> now beat a grand master.

There's no Moore's law for software, that's for sure.

> > In terms of advanced concepts, why is the second-oldest high
> > level language still unmatched? Why are newer environments
> > inferior to already historic ones?
>
> Are you speaking with a LISP? I don't think that Eclipse is inferior

Why, yeth. How observanth of you, Thir.

> to the LISP environment I used on HP workstations in the 80s. I think

We're not talking implementations, but power of the concepts.

> it is far better. I remember waiting for that damn thing to do garbage
> compaction for 2-3 minutes every half hour or so. Good thing I didn't
> drink coffee in those days... could have been very bad. :-)

Irrelevant to my point.

> We tend to glorify the things of the past. I very much like playing

Lisp is doing dandy. My question is why no current language or environment
has been able to improve on the second-oldest language conceptually. Never
mind that many human developers are mentally poorly equipped to deal with
such simple things as macros.

> with my NeXT cube, and do so every now and again (It's great when you
> combine Moore's law with Ebay, I could never have afforded that
> machine new.)
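To put rough numbers on the feature-size scaling discussed above, here is a
back-of-the-envelope sketch in Python. The 32 nm starting node, the 0.7x
linear shrink every two years, and the ~0.5 nm floor are assumptions chosen
for illustration, not figures taken from this thread:

# How many full process nodes are left before feature sizes reach
# atomic dimensions?  Assumed: 32 nm as the 2011 node, a 0.7x linear
# shrink per ~2-year full node, and ~0.5 nm (a few silicon lattice
# spacings) as a hard floor for a working feature.
feature_nm = 32.0   # assumed 2011 starting point
year = 2011
floor_nm = 0.5      # assumed physical floor, order of magnitude only

while feature_nm > floor_nm:
    feature_nm *= 0.7          # one full node shrink
    year += 2
    print("%d: ~%.1f nm" % (year, feature_nm))

# Prints roughly 22 nm for 2013, 16 nm for 2015, 11 nm for 2017, and
# runs into the assumed floor around the mid-2030s.

Under these assumptions there are only about a dozen full nodes left, which
is the arithmetic behind the skepticism about classic photolithographic
shrink; it says nothing about what non-lithographic successors might do.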
> The nostalgia factor is fantastic. But the NeXT was fairly slow even at
> word processing when you use it now. It was a fantastic development
> environment, only recently equaled again in regularity and
> sophistication.
>
> Eugen, don't be a software pessimist. We now have two legged walking
> robots, thanks to a combination of software employing feedback and
> better hardware, but mostly better software in this case.

I'm sorry, I'm in the trade. Not seeing this progress thing you mention.

> Picasa does a fairly good job of recognizing faces. I would never
> have predicted that would be a nut cracked in my time.

We were supposed to have human-grade AI twenty years ago. I can tell you
one thing: we won't have human-grade AI in 2030.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From darren.greer3 at gmail.com  Wed Feb 23 13:33:02 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Wed, 23 Feb 2011 09:33:02 -0400
Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here?
In-Reply-To: <20110223093512.GF23560@leitl.org>
References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org>
	<4D64C0B3.3060008@gmail.com> <20110223093512.GF23560@leitl.org>
Message-ID: 

On Wed, Feb 23, 2011 at 5:35 AM, Eugen Leitl wrote:

> A herd of cats

Actually, it's a clowder or a colony of cats, kemo sabe. No wonder there's
no common front, eh? :)

Darren

-- 
*There is no history, only biography.*

*-Ralph Waldo Emerson*

From eugen at leitl.org  Wed Feb 23 13:54:39 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Wed, 23 Feb 2011 14:54:39 +0100
Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here?
In-Reply-To: <4D64DE1A.2080402@gmail.com>
References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org>
	<4D64C0B3.3060008@gmail.com> <20110223093512.GF23560@leitl.org>
	<4D64DE1A.2080402@gmail.com>
Message-ID: <20110223135439.GJ23560@leitl.org>

On Wed, Feb 23, 2011 at 02:14:50AM -0800, AlgaeNymph wrote:

>> A herd of cats attempts to appeal to dogs. (Why, actually?)
>
> Politicians listen to dogs.

Oh, you're aiming for policy changes. Then you need a lot more money (for
lobby). Unpaid experts. Think tank churning out reports, which have a
reputation of having a good track record, or at least can be sold as if
they do. Voters are pretty mangy dogs, unfortunately. You need a lot of
these nipping at the heels before top dogs take note.

>> A herd of cats cannot form a common front. Particularly, a common
>> front appealing to dogs. The easiest way is to hire a dog with
>> a good track record.
>
> Oo, I'll keep an eye out for that.

Perception management run by professionals ain't cheap, unfortunately.
And unlike CoS, we're not a cult, see herd of cats. Meow.

>> A group targeting whom? Advocating what? Advocating how?
>> What is the added value to the target audience?
>
> These are good questions, this is what we should be asking. :) I'll
> start with the ideas I have at the moment.
>
> ? Targeting whom?
> Whoever has the moral high ground.

Moral high ground = negligible impact. But lots of points for style, I
grant you that.

> ? Advocating what?
> Transhumanism, of course.

Transhumanism is just a word. You need a list of specific activities.

> ? Advocating how?
> I'd begin by having us prepare answers to the hardest possible questions > we can get asked, particularly in regards to equity. Equity? Explain. > ? What is the added value to the target audience? > You mean why would they be interested? I'd like to think for the same > reason H+ adds value to us but you probably want something more We're cats. They're dogs. The mainstream is authority figure driven. Weirdos do not make good authority figures, unless it's a cult. > politically specific. My best guess is to find a way to tie H+ to > anti-corporatism (*not* anti-business or anti-free market, mind). Still, Anti-corporatism is a pretty small niche. Still, maybe possible to ride that. There are a some people who're unhappy with sustainability as promoted by classical environmentalists. Pushing sustainable technology would be a (small) niche. > how to politically frame H+ is a line of questioning we should give some > thought. > > Again, those were some useful questions, thanks. :) -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Wed Feb 23 13:20:02 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 23 Feb 2011 09:20:02 -0400 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: Message-ID: On Wed, Feb 23, 2011 at 3:51 AM, Kelly Anderson wrote: But I'm not quite ready to say that gay marriage is a good thing > for society, even if they think it is a good thing for them. That is far from a uniform sentiment among the gay men I know. Maybe I hang out with iconoclasts, but many of them couldn't care less about a marriage certificate.. What they do care about is health coverage, tax benefits, and not having the in-laws of your partner who haven't talked to you or him for ten ten years march in the day after he dies and claim the furniture, the house and even the dog because neither they nor the government recognize your right to sleep and live with and love who you wish. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From amon at doctrinezero.com Wed Feb 23 14:02:03 2011 From: amon at doctrinezero.com (Amon Zero) Date: Wed, 23 Feb 2011 14:02:03 +0000 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> Message-ID: On 22 February 2011 17:44, Stefano Vaj wrote: > > I certainly wasn't there, but let me iterate once more, just for the > record, that everything I write and sign or has at any time written > and signed in my life is forever public. Same goes for me, although I'm not an old-timer (by this list's standards), have had extremely sporadic list membership and maybe posted a dozen times in the last ten years. I *think* I'm probably under the posting limit... ;-) - Amon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amon at doctrinezero.com Wed Feb 23 14:28:49 2011 From: amon at doctrinezero.com (Amon Zero) Date: Wed, 23 Feb 2011 14:28:49 +0000 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <4D6498C4.2040306@canonizer.com> <4D64A3CE.8090207@satx.rr.com> Message-ID: On 23 February 2011 06:23, John Grigg wrote: > Natasha, HAPPY BIRTHDAY!!!!! > > And may you have at least 100 more.... Yes, Happy Birthday Natasha! - A -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Wed Feb 23 14:29:26 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 23 Feb 2011 09:29:26 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: <201102231429.p1NETDe7002125@andromeda.ziaspace.com> Giulio wrote: ><transhumanist future. >> > >Remember Churchill: democracy is the worse form of government, with >the exception of all others tried so far. > >Democracy is two wolves and a lamb deciding, by majority vote, what >to have for dinner. In other words it tends to degenerate into >dictatorship of the majority and oppression of all minorities. > >The question is what is better than democracy. I am not able to answer it. As Lenny Bruce noted raucously, the free market is. << Capitalism is the best. It's free enterprise. Barter. Gimbels, if I get really rank with the clerk, "Well, I don't like this," how I can resolve it? If it really gets ridiculous, I go, "Frig it, man, I walk." What can this guy do at Gimbels, even if he was the president of Gimbels? He can always reject me from that store, but I can always go to Macy's. He can't really hurt me. Communism is like one big phone company. Government control, man. And if I get too rank with that phone company, where can I go? I'll end up like a schmuck with a dixie cup on a thread. >> Democracy is like getting together once a year to vote on the one model of car that will be built, that everyone has to buy. The free market is a hundred models and everyone buys whatever they want, regardless of what other people do. [I forget whose analogy this is; I want to credit them.] Back to <> The American Founding Fathers specifically did not want everyone to be able to vote. But it's become holy writ that each of our votes is of equal value, whether Snooki or Thomas Sowell. The electorate span is now roughly 6 SD in IQ, from about 70 to 160 (SD 15), all of a single species. What happens when the society includes revived corpsicles, clones, AIs, uploads, uplifted species, Matrioshka brains, etc.? There are central problems. As the variety of sentience grows, the needs and wants of individual voters will also spread; it seems ever less likely that we could reach majority agreement on anything. The gap in electorate knowledge and intelligence would now pit Snooki's vote against Colossus. *Now*, we see the politics of nations altered by changing the demographic mix through high reproductive rates of subgroups. Imagine showing up with a quadrillion new voters because you used nanotech to turn the Kuiper Belt into sentient individuals. 
It seems like the nearest thing to a viable answer is the free market. -- David. From kanzure at gmail.com Wed Feb 23 15:03:16 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Feb 2011 09:03:16 -0600 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64C0B3.3060008@gmail.com> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> Message-ID: On Wed, Feb 23, 2011 at 2:09 AM, AlgaeNymph wrote: > So we should just not even bother? I'm not saying we need a mass media > blitz, just that we should look for how we're looking bad, figure out why, > and respond to that on an individual level. The closest to mass-anything I > can think of doing is creating a how-to guide for anyone who wants to start > up their own advocacy group. I can't believe nobody has said this yet: Humanity+ has been trying the media angle. It's just not interesting. Give it a rest already. Unfortunately the transhumanists communicating on these email lists tend to be unskilled in transhuman technologies... so I don't know what to do with you guys. - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Wed Feb 23 15:30:14 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 23 Feb 2011 10:30:14 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> <4D5E751C.2060008@lightlink.com> <4D5EB411.9090400@lightlink.com> <4D63E0C1.4000901@lightlink.com> Message-ID: <4D652806.2020303@lightlink.com> Kelly Anderson wrote: > On Tue, Feb 22, 2011 at 9:13 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >> I am very much aware that he had nice things to say about Watson, but my >> point was that he expressed many reservations about Watson, so I was >> using his article as a counterfoil to your statement that "99% of >> everyone else thinks it is a great success already". I just felt that >> it was not accurate to paint me as the lone, 1% voice against, with 99% >> declaring Watson to be a great achievement on the road to real AGI. > > Your turn to misunderstand what I said. I did not say that 99% of > people would say that Watson was on the road to AGI, but merely that > it was a substantial achievement that SUCCEEDED at it's stated goal of > defeating human Jeopardy champions. Surely, you aren't arguing that > Watson lost... :-) So the *only* thing you were saying was that 99% of people would say that Watson [...] SUCCEEDED at it's stated goal of defeating human Jeopardy champions .... ??!! No shit Sherlock! ;-) I was assuming that you were making some kind of substantive claim, but I guess you're right: I misunderstood you. Richard Loosemore From hkeithhenson at gmail.com Wed Feb 23 16:00:24 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 23 Feb 2011 09:00:24 -0700 Subject: [ExI] Call To Libertarians (David Lubkin) Message-ID: On Tue, Feb 22, 2011 at 9:36 PM, David Lubkin wrote: snip > This simplification also highlights the differences in political > philosophies. Probably all of us would agree that > > (A) Initiation of force is bad. > (B) Starving children is bad. 
> > The question is which is worse. A libertarian would say initiation of
> force is unacceptable; figure out some other way to feed starving
> children. A liberal would say that starving children is unacceptable
> and so be it if force is necessary to avoid it.

An evolutionary psychologist would say that even the future possibility of starving children (bleak economic times) turns up the gain on xenophobic memes, a situation that led to war among hunter-gatherer people. This was noted on NPR this morning as a major cause of an upsurge in hate groups in the US. (Without an EP explanation of why this happens.)

> Two equally smart, rational, caring people can reasonably prioritize
> differently and rigorously derive different conclusions.

An evolutionary psychologist would make the case that "smart, rational or caring" matters only at the margins of evolved psychological brain mechanisms. I.e., this is as wired in as ducks flying south in response to shortening days. People do differ in how much it takes to turn on "war mode" mechanisms.

> Looking in a transhumanist future, as long as we are distinct
> individuals, there will be room for competition, cooperation, and
> trade. That is something we talked about on the original list. Keith,
> if I'm not mistaken, wanted to have a quadrillion copies of himself
> with starships off exploring the universe, to report back to each
> other at our end-of-the-universe party. There are no limits to want.
> You can always want more than you have or more than exists.
>
> Could we be a single computronium borganism? I suspect not. I think
> that as long as there's transmission lag, a system that big will be a society.

Given expected switching times in the picosecond range or faster, borgs might be limited to the dimensions of a human and off-planet societies highly decoupled from each other. Interstellar society might be impossible--which may be why we don't see any. We have historical records as to how much communication delay is tolerable before a society splits. A million-to-one subjective speed-up puts round-trip communication to the moon at 4 (subjective) weeks. You don't want to even think about Mars.

> And, therefore, the same choices apply if I want what you have as they do now.
>
> Democracy, though, doesn't seem a viable concept in a transhumanist
> future. We'd all be too different in capabilities for "one being, one vote."

Charles Stross, member of the early list, discusses this at some length in Accelerando.

Keith

From spike66 at att.net Wed Feb 23 15:50:27 2011
From: spike66 at att.net (spike)
Date: Wed, 23 Feb 2011 07:50:27 -0800
Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians)
In-Reply-To: 
References: 
Message-ID: <006001cbd371$6562cc90$302865b0$@att.net>

... On Behalf Of Kelly Anderson
Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians)

>...Hi... My name is Kelly... and I'm a libertarian...

Welcome Kelly!

>... I have polygamists in my family...

Me too. My second and third wives are polygamists. If they don't cut it out, I may hafta divorce them.

>...If I as an employer extend health insurance to the family of the employee, do I have to then pay for insurance for his ten "spouses"?

The real problem isn't even the ten spouses, it's the 70 children. Especially now, when you are being required to keep those "children" on the insurance policy until they are aged 26.
>...If you say you're OK with gay people being married, but have a problem with polygamous or polyandrous relationships, I think you've got some 'splainin ta do... -Kelly OK, here's my splanation: What really costs money for the government and the employer is the children. Same sex couples are less likely to breed. If it's two men, they can only adopt, which actually removes a cost from the government. As a kind of an affirmative action, I propose about ten years when only same-sex are allowed to marry. Simultaneously I propose removal of all requirements for employers to offer health insurance, and removal of all legal restrictions on health insurance companies. With those changes, a bunch of old problems go away. Granted there are new ones, but we can deal. Government needs to be out of the marriage business. That whole tax filing as married business needs to go too. Once that tax arrangement is eliminated, family groups can assemble in any size or mix of gender they want, which to me is how it should be. I recognize it really does introduce new problems, and yes I know we have a subculture which would force underage girls into marrying their elderly relatives. But I think we can solve that. spike From phoenix at ugcs.caltech.edu Wed Feb 23 16:34:41 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 23 Feb 2011 08:34:41 -0800 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) In-Reply-To: References: <4D616BFF.2000502@gnolls.org> Message-ID: <20110223163441.GB15944@ofb.net> On Tue, Feb 22, 2011 at 11:56:45AM +0100, Stefano Vaj wrote: > "Taxation" is an old invention indeed, and not a very clear-cut one > for that matter (what about compulsory or heavily-encouraged community > services in hunting-and-gathering tribes?). Taxes can be any of: outright theft; club or homeownser asscoation dues and obligations; dividend return on public capital; rent for use of environmental capital; premiums for social insurance or insurance of last resort. If 10% of my income goes to build a private palace, that's theft. If 50% of my income goes to zero-fare public transit, universal health care, funding for basic research, good law enforcement, safe housing, and many other public services, I may consider that a good deal. -xx- Damien X-) From phoenix at ugcs.caltech.edu Wed Feb 23 16:24:00 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 23 Feb 2011 08:24:00 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <004201cbd2a6$e1ea1690$a5be43b0$@att.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> Message-ID: <20110223162400.GA15944@ofb.net> On Tue, Feb 22, 2011 at 07:40:50AM -0800, spike wrote: > This is madness. When one is teaching other religions, one needs to drive a > car, for it affords a certain amount of physical protection. In the US, Yes, one violent incident means that driving in London is the obivous choice for one's health and safety. As for other religions, Deuteronomy 13 specifices execution for apostates from Judaism. Holy writ is not all-important for determining behavior. 
-xx- Damien X-)

From phoenix at ugcs.caltech.edu Wed Feb 23 16:48:14 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Wed, 23 Feb 2011 08:48:14 -0800
Subject: Re: [ExI] Call To Libertarians
In-Reply-To: <4D647FC6.1080406@mac.com>
References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com>
Message-ID: <20110223164814.GC15944@ofb.net>

On Tue, Feb 22, 2011 at 07:32:22PM -0800, Samantha Atkins wrote:

> Do you really think that the government can be involved in healthcare
> without grossly inflating costs? With the government health programs

A rational person looks at the evidence, which is that government involvement with health care reduces costs. Half of US medical spending is from the government, which is covering the older and sicker part of the population, at less overhead. Overall the US spends the most on health care, with among the shortest life expectancies, even after filtering out some of our disadvantaged groups. The most socialized medicine in Europe, Britain's NHS, is also the cheapest, spending less than half per capita what the USA does.

-xx- Damien X-)

From phoenix at ugcs.caltech.edu Wed Feb 23 16:55:48 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Wed, 23 Feb 2011 08:55:48 -0800
Subject: Re: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians]
In-Reply-To: <000301cbd1c1$005cb4c0$01161e40$@att.net>
References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> <4D620C38.5080704@moulton.com> <000301cbd1c1$005cb4c0$01161e40$@att.net>
Message-ID: <20110223165548.GD15944@ofb.net>

On Mon, Feb 21, 2011 at 04:15:17AM -0800, spike wrote:

> Don't worry, BillK. There is a culture spreading across Europe which is
> diametrically opposed to libertarianism. I understand it is growing quite
> popular in places such as France and Italy. Thirty years from now, the
> total disaster and chaos will be the result of any serious attempt to resist
> this growing anti-libertarian culture, such as by allowing your wife or
> daughter go outdoors uncovered.

Yes, I'm sure that an increasingly assimilated 10% of the population will completely dominate politics.

http://www.economist.com/node/18008022

-xx- Damien X-)

From phoenix at ugcs.caltech.edu Wed Feb 23 16:59:54 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Wed, 23 Feb 2011 08:59:54 -0800
Subject: Re: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians]
In-Reply-To: 
References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com>
Message-ID: <20110223165952.GE15944@ofb.net>

On Tue, Feb 22, 2011 at 07:22:06PM +0100, Alfio Puglisi wrote:

> The first fire
> station in America was a libertarian establishment founded by
> Benjamin
> Franklin. Buy fire insurance from us, and we'll fight the fire when
> your house goes up. If not, we'll come and protect your insured
> neighbors. THAT is libertarianism at its farthest point.
>
> Wow. I don't know how to say this without sounding offensive, but this
> is remarkably similar to how the mafia operates in southern Italy. Buy
> protection from us, and we'll make sure that nothing happens to your
> business.
If not, don't call us when some random guy happens to start > a fire on your door or your delivery truck in the middle of the > night... AIUI starting fires or sabotaging each other's fire equipment was a feature in US private firefighting companies. Private firefighting is also how Crassus came to own much of Rome. I'm a bit bemused by the libertarian enthusiasm for private toll roads. I mean, yes, it's a way to get roads built, but is a society full of piecemeal tolls, and possibilities to be denied passage by a road owner whod oesn't like you, actually desirable? How does an economy of tolls everywhere compare to that of a free travel zone? -xx- Damien X-) From phoenix at ugcs.caltech.edu Wed Feb 23 17:16:39 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 23 Feb 2011 09:16:39 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: Message-ID: <20110223171636.GF15944@ofb.net> On Fri, Feb 18, 2011 at 08:00:29PM -0400, Darren Greer wrote: > I am currently embroiled in an e-mail discussion where I find myself > in a rather unique (for me) position of defending free markets and > smaller government. I am a Canadian, and a proponent of socialized > democracy. However, I'm not naive enough to think that full-stop You might get better responses if you gave the context. Are you arguing with fairly informed and fully dedicated socialists? Social democrats by default who don't understand economics and are dismissive of the value of markets even if they don't outright call for getting rid of markets? Some other permutation? Are you trying to convince people to change their mind on policy, or convince them that libertarians aren't all selfish or insane? If you want policy convincingness for ignorant social democrats, you might be better off asking informed social democrats, or mainstream economists, who can share the assumptions of your target audience, rather than libertarians who'll start out with "taxation is theft" and make even less sense to your targets from there. I actualy am a social democrat, so could help, but it'd be nice to know the point. I used to be libertarian, so I might be able to help with "not totally insane" too. > socialization is a good idea. We tried that once, in the Soviet Union, > and it didn't work so well. I recognize the need for competition to > drive development and promote innovation. IMO, evidence is good. Total socialization has been failure. Point to the USSR, for that matter point to homeowner associations. Not everything needs to be or should be under social control. Do your friends believe the inside color of one's living room should be voted on? If not, there's a wedge for private spheres. (But maybe this is a silly extreme; again, who's your audience?) Defending markets can be as simple as "people have different tastes and like to trade things to get what they want, and that shouldn't be banned unless there's a good reason to do so". That can get you into when private trades should be banned or regulated, and why some areas of society are or should be subject to social control, which gets you to externalities, natural monopolies, and minimum-income ideas of fairness. Why is medicine typically socialized or socially paid for, and food production private, and industrial production private but pollution-regulated? There are non-arbitrary reasons for this state of affairs, for mixed economies being mixed and in the particular ways that they are. 
Many libertarians deny the existence, relevance, or morality of externalities, natural monopolies, and economic egalitarianism, so they're not likely to give you arguments that support "government should be somewhat less intrusive than what you're saying but still significant". -xx- Damien X-) From atymes at gmail.com Wed Feb 23 17:28:38 2011 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 23 Feb 2011 09:28:38 -0800 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <4D6498C4.2040306@canonizer.com> <4D64A3CE.8090207@satx.rr.com> Message-ID: On Tue, Feb 22, 2011 at 10:23 PM, John Grigg wrote: > Natasha, HAPPY BIRTHDAY!!!!! > > And may you have at least 100 more.... Tack on at least one more 0 there. ;) From natasha at natasha.cc Wed Feb 23 17:41:43 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 23 Feb 2011 11:41:43 -0600 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com><20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com><20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com><201102192014.p1JKEeST027600@andromeda.ziaspace.com><201102220210.p1M2A4ah005359@andromeda.ziaspace.com><4D6498C4.2040306@canonizer.com> <4D64A3CE.8090207@satx.rr.com> Message-ID: Thank you all for the Birthday Greetings! I am reborn every day. http://www.natasha.cc/ageless.htm Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes Sent: Wednesday, February 23, 2011 11:29 AM To: ExI chat list Subject: Re: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) On Tue, Feb 22, 2011 at 10:23 PM, John Grigg wrote: > Natasha, HAPPY BIRTHDAY!!!!! > > And may you have at least 100 more.... Tack on at least one more 0 there. 
;) _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From natasha at natasha.cc Wed Feb 23 17:47:29 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Wed, 23 Feb 2011 11:47:29 -0600 Subject: [ExI] Humanity+ @ Parsons NYC May 14-15 2011 - Call for Abstracts Message-ID: Transhumanism Meets Design invites you to submit abstracts relating to the following themes: * Architecture and the Future * Neuroculture and the Transhuman * Fashion and Human Futures * Cities, Systems, Infrastructures * Legal Scapes of IP Bodies and Trademark Identities * Mind Scapes of the Transhuman: AGI / Uploads * Human Enhancement * agentCODE, Hacking * Communicating with the Alien: xenobio * Global computation: the GeoTechnoScape http://humanityplus.org/conferences/parsons/ Abstract Submission Requirements: Please submit abstracts of no more than 250 words, including four keywords, and a short CV by March 31, 2011, either in Word or PDF formats to Conference Co-Chairs Ed Keller, Associate Dean at Parsons the New School for Design and Natasha Vita-More, Vice Chair of Humanity+ -- by emailing: natasha at natasha.cc -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: atte4d8f.png Type: image/png Size: 45668 bytes Desc: not available URL: From lubkin at unreasonable.com Wed Feb 23 18:21:34 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 23 Feb 2011 13:21:34 -0500 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <20110223162400.GA15944@ofb.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <20110223162400.GA15944@ofb.net> Message-ID: <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> Damien Sullivan wrote: >As for other religions, Deuteronomy 13 specifices execution for >apostates from Judaism. For the sake of accuracy, no, it doesn't. It says nothing about apostasy. It specifies execution for attempting to woo *others* *to a different religion*, i.e., if you say "Let us go and serve other gods." At least with regard to Deut. 13, there is no penalty for non-belief, conversion from Judaism, or for voicing your non-belief (absent advocating an alternate religion). I have not looked into how the Talmud, etc. interpret the matter. (Orthodox Jews will assert that you can't correctly interpret Biblical passages from just reading the Bible, and on your own. Consider just the US Second Amendment. What does "well-regulated" mean? How does the introductory clause affect the rest of the meaning? Who are "the people"? The Bible's much worse than that for linguistic nuance.) But generally speaking, when looking at harsh passages in the Bible, one should examine the historic record of "implementing regulation." The typical "stone someone who does this" commandment has such a high burden of proof that it was rarely if ever met. (I'm speaking as an Israeli agnostic with a daughter in rabbinical school. I don't believe this stuff, but I know a bit about it.) -- David. 
From kellycoinguy at gmail.com Wed Feb 23 18:27:59 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 23 Feb 2011 11:27:59 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110223105745.GH23560@leitl.org> References: <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> <20110218130321.GK23560@leitl.org> <20110223105745.GH23560@leitl.org> Message-ID: On Wed, Feb 23, 2011 at 3:57 AM, Eugen Leitl wrote: > On Tue, Feb 22, 2011 at 04:26:08PM -0700, Kelly Anderson wrote: > >> > *Which* competing platforms? Technologies don't come out of >> > the blue fully formed, they're incubated for decades in >> > R&D pipeline. Everything is photolitho based so far, self-assembly >> > isn't yet even in the crib. TSM is just 2d piled higher and >> > deeper. >> >> Photo lithography has a number of years left in it. As you say, it can > > Not so many more years. I understand that Intel thinks that they can stay on track until 2018 using their current approach. Now their current approach seems to mostly be to just but more cores on one chip, which requires intelligent compilers and/or programmers. Hopefully, more on the compiler side as that leverages better. >> extend into the third dimension if the heat problem is solved. I have > > Photolitho can't extend into third dimension because each subsequent > fabbing step degrades underlying structures. You need a purely additive, > iterable deposition process which doesn't damage underlying layers. What about fabbing slices, and "gluing" them together afterwards? Is there anything in that direction? (I am not a fab expert by any means, I'm just asking) >> seen one solution to the heat problem that impressed the hell out of > > Cooling is only a part of the problem. There are many easy fixes which > are cumulative in regards to reducing heat dissipation. The mechanism I saw was a water circulation mechanism driven off of the heat created in the chip itself. It was extremely cool (no pun intended). Get the water out of the chip and you can cool the water using conventional means. Their pitch indicated that cooling was one of the biggest problems with going to 3D. There are probably many more. >> me, and no doubt there are more out there that I haven't seen. By the >> time they run out of gas on photo lithography, something, be it carbon >> nano tube based, or optical, or something else will come out. A > > Completely new technologies do not come out of the blue. > We're about to hit 11 nm http://en.wikipedia.org/wiki/11_nanometer > Still think Moore's got plenty of wind yet? Not forever, but seemingly for a few more years. >> company like Intel isn't going to make their very best stuff public >> immediately. You can be sure they and IBM have some great stuff in the > > The very best stuff is called technology demonstrations. It is very > public for obvious reasons: shareholder value. What about race track memory? They were saying that might be available by 2015 the last time I saw anything on it. >> back room. I am not fearful of where the next S curve will come from, > > The only good optimism is well-informed optimism. Optimists > would have expected clock doublings and memory bandwidth > doublings to match structure shrink. I am optimistic, and on these issues particularly I'm going off of the stuff Ray put in TSIN. If he's wrong, then I'm wrong. 
Knowing the cool stuff they see at MIT all the time, perhaps he is right. >> except that it might come out of a lab in China, Thor help us all >> then! >> >> >> > Kelly, do you think that Moore is equivalent to system >> >> > performance? You sure about that? >> >> >> >> No. Software improves as well, so system performance should go up >> > >> > Software degrades, actually. Software bloat about matches the advances >> > in hardware. >> >> I know what you are talking about. You are stating that Java and C# >> are less efficient than C++ and that is less efficient than C and that > > I am talking that people don't bother with algorithms, because "hardware > will be fast enough" or just build layers of layers upon external > dependencies, because "storage is cheap, hurr durr". A lot of business software is exactly that way because it doesn't have to be great. Just good enough. In things like computer vision and AI where the computational requirements are above the current level of hardware performance, care is still taken to optimize. > Let's face it, 98% of developers are retarded monkeys, and need > their programming license revoked. Only those whos degrees are in engineering, mathematics, physics, political science and the like (in my experience). Those with actual degrees in computer science are pretty decent for the most part. That training matters IMNSHO. >> is less efficient than Assembly. In that sense, you are very right. It >> does take new hardware to run the new software systems. The next step >> will probably be to run everything on whole virtual machines OS and >> all, no doubt, not just virtual CPUs... > > Virtualization is only good for increasing hardware utilization, > accelerate deployment and enhance security by way of compartmentalization. > It doesn't work like Inception. If your hardware is already saturated, > it will degrade performance due to virtualization overhead. Yes, that's what I meant. Agreed. >> But algorithms do improve. Not as fast as hardware, but it does. For >> example, we now have something like 7 or 8 programs playing chess >> above 2800, and I hear at least one of them runs on a cell phone. In > > Current smartphones are desktop equivalents of about half a decade > ago. An iPhone has about 20 MFlops, Tegra 2 50-60 (and this is JIT). > This is roughly Cray 1 level of performance. Of course there's > graphics accelerators in there, too, and ARM are chronically > anemic in the float department. > >> 1997, it was a supercomputer. Now, today's cell phones are dandy, but >> they aren't equivalent to a high end 1997 supercomputer, so something > > Fritz has been beating the pants of most humans for a long time > now http://en.wikipedia.org/wiki/Fritz_%28chess%29 > > Chess is particular narrow field, look at Go where progress > is far less stellar http://en.wikipedia.org/wiki/Go_%28game%29#Computers_and_Go I am a go player, so yes I'm familiar with this... :-) >> else had to change. The algorithms. They continued to improve so that >> a computer with a small fraction of the power available in 1997 can >> now beat a grand master. > > There's no Moore's law for software, that's for sure. Nope. >> > In terms of advanced concepts, why is the second-oldest high >> > level language still unmatched? Why are newer environments >> > inferior to already historic ones? >> > I'm sorry, I'm in the trade. Not seeing this progress thing > you mention. I do. 
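To put rough numbers on the chess point argued in the exchange above, here is a back-of-envelope sketch in Python. The 20 MFLOPS phone figure is Eugen's own estimate from this thread; the Deep Blue figure is an assumed ballpark (its LINPACK rating is usually quoted at around 11 GFLOPS), so the output is an order-of-magnitude illustration rather than a measurement.

# Rough sketch: how much of the 1997-vs-2011 chess gap must be algorithmic.
# phone_flops is Eugen's estimate from this thread; deep_blue_flops is an
# assumed ballpark for the 1997 machine, used only for illustration.
import math

phone_flops = 20e6        # ~20 MFLOPS for a 2011 iPhone (figure quoted above)
deep_blue_flops = 11.4e9  # assumed ~11.4 GFLOPS for Deep Blue's 1997 hardware

ratio = deep_blue_flops / phone_flops
print(f"1997 machine vs. 2011 phone: ~{ratio:,.0f}x raw compute, "
      f"i.e. the phone is ~{math.log2(ratio):.1f} hardware doublings short")

# If a phone engine nevertheless plays above 2800, those missing doublings have
# to be made up by better search heuristics and evaluation, not by hardware;
# that is the sense in which "the algorithms changed".

None of this contradicts the point that there is no Moore's law for software; it only quantifies how large the algorithmic share of the 1997-to-2011 improvement has to be.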
-Kelly From phoenix at ugcs.caltech.edu Wed Feb 23 18:39:55 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 23 Feb 2011 10:39:55 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <20110223162400.GA15944@ofb.net> <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> Message-ID: <20110223183955.GA20360@ofb.net> On Wed, Feb 23, 2011 at 01:21:34PM -0500, David Lubkin wrote: > Damien Sullivan wrote: > >> As for other religions, Deuteronomy 13 specifices execution for >> apostates from Judaism. > > For the sake of accuracy, no, it doesn't. It says nothing about > apostasy. It specifies execution for attempting to woo *others* *to a > different religion*, i.e., if you say "Let us go and serve other gods." > At least with regard to Deut. 13, there is no penalty for non-belief, > conversion from Judaism, or for voicing your non-belief (absent > advocating an alternate religion). Well, it starts out talking about prophets and dreamers, whom you should kill. Then it moves on to relatives like your brother, "secretly enticing" you. Kill them too. Don't just reject him, don't conceal him with silence, but be the first to stone him. And then it moves on to, if you've heard the inhabitants of some city have withdrawn to serve other gods, then you should search them out and kill them, and destroy the city. Deut 17 also specifices excution of those who have served other gods, without even a hint of them having to have proselytized as well. > But generally speaking, when looking at harsh passages in the Bible, one > should examine the historic record of "implementing regulation." The > typical "stone someone who does this" commandment has such a high burden > of proof that it was rarely if ever met. Mmm, I've heard that said now, but I'm not sure it was historically true in old Judah and Israel. It's a good way of getting out of the obligation though, which is what we want for good behavior. The other one I've heard is that the court empowered to hand out capital punishments can only exist in the Temple. But as for simplistically reading passages from a holy book and invoking them without context, well, critics of Islam do that all the time. Never mind that most Muslim countries don't have death penalty for apostasy. -xx- Damien X-) From lubkin at unreasonable.com Wed Feb 23 19:02:43 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 23 Feb 2011 14:02:43 -0500 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <20110223183955.GA20360@ofb.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <20110223162400.GA15944@ofb.net> <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> <20110223183955.GA20360@ofb.net> Message-ID: <201102231904.p1NJ4ACW011677@andromeda.ziaspace.com> Damien Sullivan wrote: >Well, it starts out talking about prophets and dreamers, whom you should >kill. Then it moves on to relatives like your brother, "secretly >enticing" you. Kill them too. Don't just reject him, don't conceal him >with silence, but be the first to stone him. > >And then it moves on to, if you've heard the inhabitants of some city >have withdrawn to serve other gods, then you should search them out and >kill them, and destroy the city. 
I say again: Nothing in Deut. 13 penalizes apostasy. >Deut 17 also specifices excution of those who have served other gods, >without even a hint of them having to have proselytized as well. Deut. 17 is not Deut. 13. >But as for simplistically reading passages from a holy book and invoking >them without context, well, critics of Islam do that all the time. Tu quoque is not a rebuttal. Can't you simply concede that you were wrong in writing << Deuteronomy 13 specifices execution for apostates from Judaism. >> and thank me for the correction? -- David. From spike66 at att.net Wed Feb 23 19:30:29 2011 From: spike66 at att.net (spike) Date: Wed, 23 Feb 2011 11:30:29 -0800 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <20110223162400.GA15944@ofb.net> <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> Message-ID: <00a601cbd390$21a53e60$64efbb20$@att.net> ... On Behalf Of David Lubkin ...Orthodox Jews will assert that you can't correctly interpret Biblical passages from just reading the Bible, and on your own... -- David. What, you mean as in that passage in Exodus 20? God wrote ten rules in stone, including one which forbids adultery. Yet it includes no actual definition of the term adultery, nor any definition of marriage before that. The term written on stone is the first time it shows up in the bible. Apparently the children of Israel were left to take their best guess at what this new term meant, and whatever they decided it was, they weren't to do it. Apparently having concubines was not adultery, it was fair game, because the Israelis were doing that both before and after the tablets of stone were presented by Moses. If we argue that they were young women captured from a warring tribe, the very first mention of a concubine doesn't say a word about any war or battle. So we would guess that concubines were sort of... like... sex slaves? Kewallllll... {8^D It doesn't actually say one must be female. So, how can I become one? And what would I be called? And so on. These lines of reasoning are likely exactly why the rabbis demanded NO BIBLE READING ON YOUR OWN dammit. If one reads the bible oneself and knows how to reason, there is no end to these kinds of difficulties. spike From eugen at leitl.org Wed Feb 23 21:14:25 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 23 Feb 2011 22:14:25 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> <20110218130321.GK23560@leitl.org> <20110223105745.GH23560@leitl.org> Message-ID: <20110223211425.GU23560@leitl.org> On Wed, Feb 23, 2011 at 11:27:59AM -0700, Kelly Anderson wrote: > > Not so many more years. > > I understand that Intel thinks that they can stay on track until 2018 > using their current approach. Now their current approach seems to 11 nm node is scheduled for 2015. I'm not at all sure 11 nm will be a cakewalk, and beyond that things become really interesting (Si-Si is 0.233 nm, and of course CMOS no longer works way before, arguably after 11 nm you're in quantum electronics country, aka molecular circuitry without chemistry). > mostly be to just but more cores on one chip, which requires More cores with shared memory don't scale. 
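A minimal sketch of that scaling claim, assuming a memory-bound workload on one socket: with a single shared memory interface, per-core bandwidth shrinks as cores are added, and total throughput flatlines once the bus saturates. All the figures below (25 GB/s of bandwidth, one byte of traffic per useful flop, 10 GFLOP/s per core) are illustrative assumptions, not measurements of any real chip.

# Illustrative roofline-style model: cores sharing one memory bus.
# All numbers are assumptions chosen only to show the saturation effect.
shared_bandwidth_gb_s = 25.0   # assumed total memory bandwidth of the socket
bytes_per_flop = 1.0           # assumed memory traffic per useful flop
per_core_gflops = 10.0         # assumed per-core peak if never starved for data

for cores in (1, 2, 4, 8, 16, 32):
    bus_limit = shared_bandwidth_gb_s / bytes_per_flop   # GFLOP/s the bus can feed
    core_limit = per_core_gflops * cores                 # GFLOP/s the cores could do
    achieved = min(bus_limit, core_limit)
    efficiency = achieved / core_limit
    print(f"{cores:2d} cores: {achieved:5.1f} GFLOP/s ({efficiency:4.0%} of peak)")

# Past a handful of cores the bus is saturated and extra cores add nothing,
# which is what motivates the shared-nothing, message-passing designs
# discussed next.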
The only way to go much beyond that is SCC/"Tera scale". > intelligent compilers and/or programmers. Hopefully, more on the Intelligent compilers don't work with shared-nothing asynchronous message passing over kilonodes. Humans are even more lousy at that, vide MPI debuggers. > compiler side as that leverages better. Doesn't work either, though you can emulate a shared memory architecture. It will certainly force people to either stagnate, or to learn new tricks. > What about fabbing slices, and "gluing" them together afterwards? Is Stacking with TSV is not as good as true 3d integration, and it doesn't offer anything like Moore's law. You're basically just doing a lot of Si real estate, then thinning it, then stacking it. I don't think anything beyond 400 mm wafer size will happen, so without structure shrink you're stuck with paying real thalers for actual silicon real estate. > there anything in that direction? (I am not a fab expert by any means, > I'm just asking) Yes, the next step will be TSV-stacked DIMMs, and then memory stacks on top of dies and then individual cores, and then WSI with stacked memory on top. After that you'll have to go on the real 3D integration train -- assuming you can. > >> seen one solution to the heat problem that impressed the hell out of > > > > Cooling is only a part of the problem. There are many easy fixes which > > are cumulative in regards to reducing heat dissipation. > > The mechanism I saw was a water circulation mechanism driven off of > the heat created in the chip itself. It was extremely cool (no pun There are tricks to prevent power from being wasted in the first place. Killing pin drivers, optical signalling, clockless designs, static designs (MRAM/spintronics, memristors, and such), reversible logic. Immersion cooling is definitely coming, and 60 deg C watercooling is already happening. > intended). Get the water out of the chip and you can cool the water > using conventional means. Their pitch indicated that cooling was one > of the biggest problems with going to 3D. There are probably many The biggest problem with going to 3D is going to 3D. > more. > > >> me, and no doubt there are more out there that I haven't seen. By the > >> time they run out of gas on photo lithography, something, be it carbon > >> nano tube based, or optical, or something else will come out. A > > > > Completely new technologies do not come out of the blue. > > We're about to hit 11 nm http://en.wikipedia.org/wiki/11_nanometer > > Still think Moore's got plenty of wind yet? > > Not forever, but seemingly for a few more years. I remember discrete transistor minis and punched cards/tape. That's 35+ years? We certainly won't have another decade nevermind many of these. Not with semiconductor photolithography. The question is whether there will be a smooth takeover. Given that clock doubling has died about 6 years ago, without too many noticing it might be well that we'll hit a discontinuity as Moore in CMOS will hit a wall, until a successor technology comes online. Any progress in that lacune must happen by way of architecture, which isn't out of question, but tougher. > What about race track memory? They were saying that might be available > by 2015 the last time I saw anything on it. I have no idea. I consider everything vaporware until it ships (bubble memory, MRAM). Racetrack would be more like a flash killer, assuming it ever happens. > I am optimistic, and on these issues particularly I'm going off of the > stuff Ray put in TSIN. If he's wrong, then I'm wrong. 
Knowing the cool > stuff they see at MIT all the time, perhaps he is right. I haven't read any Shortwhile in anger, but most of his predictions are either trivial, or cherrypicked and/or outright bunk like http://www.guardian.co.uk/environment/2011/feb/21/ray-kurzweill-climate-change > > I am talking that people don't bother with algorithms, because "hardware > > will be fast enough" or just build layers of layers upon external > > dependencies, because "storage is cheap, hurr durr". > > A lot of business software is exactly that way because it doesn't have > to be great. Just good enough. In things like computer vision and AI You can say awful. It's ok. We're among friends. > where the computational requirements are above the current level of > hardware performance, care is still taken to optimize. Anyone who's tracking the field care to give a number of literature reviews, preferably online, and not behind paywalls? > > I'm sorry, I'm in the trade. Not seeing this progress thing > > you mention. > > I do. This glass is half empty, dammit! -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jrd1415 at gmail.com Wed Feb 23 22:08:39 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 23 Feb 2011 15:08:39 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: 2011/2/22 Darren Greer : >> Quoting Samantha Atkins : >>> The essential element of libertarianism is the Non-Agression Principle. ?No one has the right to initiate force against another. ?This is equivalent to total freedom to do anything that does not harm, physically force, threaten physical force or defraud another. > I like that principle Samantha. Very much. I am personally committed to it. But I wonder how does one go about establishing system where the principle non-aggression is paramount, when natural aggression, both tribal and individual, seems to be a dominant feature of the human psyche nurtured by millions of years of evolutionary development? Me: Ah, yes, the question of the transition from where we are "here" to the glorious Llibertarian utopia "there". This is my problem with libertarians -- particularly zealots-slash-purists. They say our current system is crappy. (I agree). They say life would be perfect in the Llibertarian utopia "over there". But they .rarely seem willing to propose a reality-based plan for getting from "here" to "there"-- a plan for the transition. And by reality-based I mean a plan which acknowledges that any responsible transition must be incremental. They don't like the old way -- understandable, who outside of the kleptocratic elite does? -- but they won't dirty themselves with the sort of compromise with the current system that an orderly transition implies. This annoys me. There's real substance to Llibertarian principles. I'm looking for less bitching and moaning, and more progress re implementation. Which brings us back to Darren's question: "...how do we go about establishing a system where the principle non-aggression is paramount,..." Let's talk about the US of A in the year 2011. How to begin the transition? Oddly, it seems to require only that enough people behind the curtain in the polling booth mark their ballot correctly. Which is to say, for the candidates put forth by The Accountability Party. 
"The Accountability Party? What's that?" you ask, puzzled, thinking you've missed some newsworthy "announcement". You haven't. The Accountability Party is my little fantasy, created at this most opportune moment, when the Dems and Repubs are both out of favor. To be robustly resistant to destruction by fragmentation, The Accountability Party is deliberately "preconfigured" to be broad-based, having only two planks: Accountability and Jobs. No other issue is relevant except as relates to these two concerns. So, regarduing any other issue: the AP takes no position. No position means NO POSITION. No position means being "agnostic" on EVERYTHING else. Individual AP members have their own views of course, but as a unified organization, the AP takes no position on: abortion, taxes, gay marriage, gun rights, defense policy, campaign finance, racial discrimination, immigration, terrorism, hate-speech, Israel, education policy, environmentalism, global warming, etc. The two issues which the AP devotes its exclusive focus are: accountability: no one is above the law. Everyone, but in particular persons in high position who have traditionally 'enjoyed' immunity from prosecution, will now have their get out of jail free cards voided. And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. Now you might well ask -- certainly others will -- "How you gonna implement the jobs program, and more to the point, how you gonna pay for it?" To which I reply, "You must always remember that the AP subordinates ALL OTHER ISSUES to paychecks/jobs and accountability, so the details of the fiscal policy behind the "JOBS" commitment is for the most part irrelevant. That said, the Treasury has a machine that prints checks, so the policy is secured, "Move right along. Nothing to see here." Whatever may be the details required to reconcile the jobs program with fiscal reality, the program itself is in stone, and non-negotiable. For the curious though, I would state the obvious: print the money, borrow the money, or tax someone. In terms of practical economics, it would be quite simple: The more robust the private sector economy, the greater the proportion of jobs it provides. The rest to be provided by govt, and financed,... however. (Personally, I like a progressive income tax, or a flat tax based on net worth, or a financial transaction tax, but I'll go along with whatever the AP figures out AFTER THE ELECTIONS HAVE BEEN WON.) A major innovation: the AP does not conduct its campaigns by traditional methods. No TV, no radio, no interviews with mainstream journalists. TV, radio, and other conventional media are corporate. They are part of the illegitimate "mainstream", of the illegitimate corporate statist ruling elite. They are part of the political opposition. They are gatekeepers of the political process. If you pay them for TV and radio ads, you are giving material support to your political adversaries. The AP therefore, chooses to conduct its campaigns DIRECTLY with the voters, over the internet, no gatekeeper, no middleman -- no corporate mediation-for-profit of the political process. A not-for-profit political process is crucial to the elimination of corporate/govt corruption, and the restoration of a healthy society. In this way, the AP terminates the age old linkage between money and political power. There's more, but this is a start. Best, Jeff Davis "Everything's hard till you know how to do it." 
Ray Charles From kanzure at gmail.com Wed Feb 23 23:35:24 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Feb 2011 17:35:24 -0600 Subject: [ExI] Fwd: [chicago-reprap] well well, how is everyone? In-Reply-To: References: <10e3f00c-2eef-4cf3-9041-83d488f94f7f@u23g2000pro.googlegroups.com> Message-ID: ---------- Forwarded message ---------- From: smonkey Date: Wed, Feb 23, 2011 at 5:33 PM Subject: Re: [chicago-reprap] well well, how is everyone? To: chicago-reprap at googlegroups.com And then of course, there is what I like to think of as the "3d printer singularity" which I feel is rapidly approaching. True, it won't be a real "singularity" but the tech is changing super fast. There are now a bunch of3d printer variants out there that work on some type of filament deposition setup (reprap, makerbot things, etc) that are all basically the same bits that one can easily port improvements back and forth at whim. This means that hacker 3d printing is advancing fast! Super fast! Just in the past couple of days Makerbot released a water disolvable plastic, and some guy on thingiverse dropped in a double head extruder. This means that within a year or so there'll be multiple material printing which will allow support materials, allowing....well.....craziness. Oh, and that's not even going into the people printing circuit boards. Whoa! I'm betting the Makerbot cats are going to regret selling the thing-o-matic and quite soon the main demand will be for a basic bot just to learn basic skills on. After that one's going to build oneself a nice big reprap anyway. nat nat On Wed, Feb 23, 2011 at 12:37 PM, andres wrote: > I've got some issues with the cost associated with the ToM and makerbot > stuff in general. I understand they are a business, but it seems like they > have suffered from having margins that were started as way too thin. now > they are scrambling away from that however they can and putting cost where > people (some may say suckers) are willing to pay for it (unicorn, 3d > scanner, frostruder prices come to mind). i do like the look of their > current hotend, but there are actually a ton of good options for those these > days. there has been a bunch of learning since the first crummy ones, (one > of which i am still using, though hopefully my arcol.hu one gets delivered > sometime in the next week.) > > the cost of a thing-o-matic shouldn't ACTUALLY be much off of the cost of a > cupcake. just plywood sheeting differences. the added build area is "free" > with the design change. The new mk6 stepstruder, doesn't have to be $200, > when it only works as well as one of the printed geared stepper extruders > (that are capable of working with a much less beefy stepper.) for some > reason makerbot is making a big deal about the "direct drive" aspect of > their new design, but honestly a bit of backlash makes zero difference on > print quality when you have settings dialed in, in fact with gearing you > actually get greater resolution. Maybe you can print with faster extrusion > rates on direct drive, but print speeds tend to be limited by many other > factors anyway. (the ultimaker's print rate seems to be the reigning champ, > and that one is geared. most of their gain seems to be in lowering the > moving mass by going bowden with their extruder motor and keeping a moving > XY like the darwin design) > > i mean $1300 is quite a bit of money, and they havne't really upped the > reliability of their electronics much (honestly a huge issue with makerbots > currently. 
i've had several "random" resets through esd and comm issues and > ruined prints as a result, i'm going to be upgrading to better reprap > electronics sometime soon). at least they don't rely on the crummy DC motor > they used to. the new mendel designs (specifically the Prusa one) is super > simplified from the days of Darwin and at most $600 to completely build, > including buying the printed pieces. but i'll let you know how setup and > dialing in goes when i build mine. I made it through the struggle of > adjustment and calibration of a darwin, and i've probably sunk more than the > cost of the ToM over time. i'm looking forward to building a prusa. > > I've actually said to some people that i'd actually have considered that > getting a cupcake to print mendel parts on would have been the most cost > effective way to get started (back when they first started selling it, which > was after i started my own investment), but that is kind of out of the > window now that they are no longer selling a $900 kit. > > for what it is worth. i'd say the best way of getting printing today is > buying solder kits for the boards (all components presourced, and i'd say to > go RAMPs instead of the makerbot versions), motors separately through > whatever manufacturer (Lin, zapp, etc), and printed parts and hardware for a > mendel (plenty of sites sell these as kits, and you can always go ebay from > printers like myself). > > some sites to check out - makergear.com (USA), mendel-parts.com (EU), > ultimachine.com (USA) > > definitely much easier than the route i had to go. laser cut darwin parts > (ponoko, back when they were only in NZ), hardware individually sourced > through mcmaster & amazon, preprinted boards from makerbot, sourced > components through digikey and mouser, hoped that things printed well enough > to build a stepper extruder. some extra boards along the way, separate > replacement belts (and having to carve my own belt gears out of acrylic > scraps as i couldn't print gears well enough without them.) > > > as for the business model of a printed parts market, it is going to be > fairly tough to find a space between the reprap irc group where people can > get help with printed parts for little or no cost, and companies like > shapeways and any 3d printing house where high quality parts can be gotten > out of a variety of materials. there might be a middle ground, but I think > the variation of print quality in the hobby community might lack quality > control for requests and fulfillment to match. at least in a professional > environment (i'm an ME professionally) you can go to hitting dimensional > tolerances as a measure if getting what you pay for, but with reprap and > other homemade prints you are also talking about strength and layer adhesion > variation (and cosmetic blob/string variations) that is probably all over > the place for the producers who would be into such a venture. it would be > tough to nail down expectations, though it may be possible. let me know if > you want any more input from me, but i'm kind of just one point of view, > you'll need to get some input from the customer point of view too. > > anyway. i've got to get back into blogging my experiences, but i'll try to > keep the group up to date with my prusa mendel build i'll be doing next > month. right now i am switching over to printing PLA for the bushings, and > taking care of a few more optimizations on my darwin along the way. 
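The gearing-versus-direct-drive point above (a geared extruder trades speed for finer control of the filament feed and lets a much weaker stepper do the job) reduces to simple arithmetic. Here is a minimal Python sketch using assumed round numbers -- a 200-step motor, 16x microstepping, an effective hobbed-bolt diameter of about 10.6 mm, and a roughly 4:1 gear reduction; none of these are the specs of any particular Makerbot or RepRap extruder.

import math

# Why a geared extruder resolves filament feed more finely than direct drive.
# All numbers are illustrative assumptions, not specs of a particular machine.
MOTOR_STEPS_PER_REV = 200      # typical 1.8-degree stepper
MICROSTEPPING = 16             # common stepper-driver setting
HOB_DIAMETER_MM = 10.6         # assumed effective hobbed-bolt diameter
GEAR_RATIOS = {"direct drive": 1.0, "geared (approx. 4:1)": 4.0}

def steps_per_mm_of_filament(gear_ratio):
    """Microsteps needed to advance 1 mm of filament."""
    hob_circumference_mm = math.pi * HOB_DIAMETER_MM
    steps_per_hob_rev = MOTOR_STEPS_PER_REV * MICROSTEPPING * gear_ratio
    return steps_per_hob_rev / hob_circumference_mm

for name, ratio in GEAR_RATIOS.items():
    spm = steps_per_mm_of_filament(ratio)
    print("%-22s %7.1f steps/mm  (%.1f microns of filament per microstep)"
          % (name, spm, 1000.0 / spm))

With these assumed numbers the geared drive advances roughly a quarter as much filament per microstep, which is the extra resolution being referred to; the same reduction also multiplies available torque, which is why the geared designs get away with smaller motors.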
> > Can i just say how sad i am about having to disassemble and cannibalize my > darwin. i mean we have kind of been through a lot over the last 2 years. > > excuse me while i go get some tissues. > > andres. > > > On Sun, Feb 20, 2011 at 11:12 PM, Brian Chamberlain < > blchamberlain at gmail.com> wrote: > >> Yeah, same here. I have drifted a bit and been busy with other things. >> Namely trying to finish school. >> >> I hope to get back on this 3D printing band wagon. I'm really liking the >> new Makerbot Thing-o-matic but have not set aside the $$$ to get one yet. >> >> I have also been kicking around the idea for a 3D parts site like >> Thingiverse but where people could request parts from other people who have >> 3D printers (for exchange of $$$, or equivalent). Though, I'm not sure how >> the transactions would work. Would people post requests for parts and give a >> price?(buyer sets price) Would there be bidding on a part from multiple >> people who can print it?(reverse ebay) Would the people printing things set >> the price of stuff they've printed before and know can reproduce well?(Etsy >> model) I'd like to avoid any model that encourages a "race to the bottom" >> for the price of an item. I'm thinking the focus would be more on building a >> relationship (whatever that means in an online context) with the person >> printing the thing. Anyone heard of a site or something like this? Or ideas >> on what model might work as most of you all have printers which sit idle >> from time-to-time and *might* want to have that idle time filled with print >> jobs which can earn you money. What would you be comfortable with? >> >> Thanks! >> Brian >> >> >> On Thu, Feb 17, 2011 at 12:44 PM, smonkey wrote: >> >>> Hi! >>> >>> I guess I just drifted from this list like everyone else. >>> >>> I bought myself a cupcake last spring and slowly got it up and running. >>> Then a busy summer/winter, but now with lots of time again its running >>> really well. I've got a heated build platform >>> and can crank out basic shapes with consistency and accuracy. >>> >>> I've still got some tuning issues to get overhangs better. I suspect I'm >>> running a bit hot and might need some cooling time in there. >>> >>> I haven't been by PS1 in months and months. Having my own studio and >>> workspace and Makerbot means if I have time to hack, I do it here. >>> >>> But I always mean to drop by or meetings. I just never get around to it. >>> >>> >>> >>> >>> Nat >>> >>> >>> >>> On Thu, Feb 17, 2011 at 11:55 AM, Andres Huertas < >>> andres at healthytiger.com> wrote: >>> >>>> Hey all, >>>> >>>> Just wondering how people were getting along. have people given up >>>> hope? has anyone bought their way into a working system? (cupcake? >>>> ToM? mendel?) have people just forgotten about this community as a >>>> whole, on to bigger and better things? I'm also curious of who >>>> frequents pumping station one and what you all are working on if you >>>> are there. >>>> >>>> I've continued my odyssey and actually i'm printing quite reliably and >>>> with very decent quality on my darwin repstrap. (software issues be >>>> damned, i've learned enough to be dangerous) I've just about printed >>>> full parts to build a prusa mendel (a few PLA bushings left), and i'll >>>> probably be cannibalizing my old friend for parts and rebuilding it >>>> smaller stronger and faster sometime in the next few months. 
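The three candidate transaction models floated above for a print-on-request parts site (buyer sets the price, reverse-eBay bidding, Etsy-style seller pricing) are easy to lay side by side. A minimal Python sketch follows, with hypothetical printer names and prices purely for illustration; a real site would also need reputation, tolerance/quality expectations, and shipping.

from typing import Dict, List, Optional, Tuple

# Each bid is (printer_name, asking_price). All data here is made up.
Bid = Tuple[str, float]

def buyer_sets_price(buyer_offer: float, bids: List[Bid]) -> Optional[Bid]:
    """Buyer names a price; the first printer willing to accept it gets the job."""
    takers = [b for b in bids if b[1] <= buyer_offer]
    return takers[0] if takers else None

def reverse_auction(bids: List[Bid]) -> Optional[Bid]:
    """'Reverse eBay': printers underbid each other and the lowest ask wins."""
    return min(bids, key=lambda b: b[1]) if bids else None

def seller_catalog(catalog: Dict[str, float], part: str) -> Optional[float]:
    """Etsy model: each printer lists fixed prices for parts they know print well."""
    return catalog.get(part)

bids = [("darwin_repstrap", 18.0), ("prusa_no3", 12.5), ("cupcake_cnc", 15.0)]
print(buyer_sets_price(14.0, bids))                        # ('prusa_no3', 12.5)
print(reverse_auction(bids))                               # ('prusa_no3', 12.5)
print(seller_catalog({"x_carriage": 9.0}, "x_carriage"))   # 9.0

The "race to the bottom" worry maps onto reverse_auction; the relationship-based approach described above is closer to seller_catalog plus a short list of printers the buyer already trusts.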
>>>> >>>> I wanted to write the group to remind / let people know that the cost >>>> of getting a printer up and running has really gone down lately. also, >>>> there is enough knowledge between me and others who are printing here >>>> in chicago (be it makerbot or homebuilt) to help through any issues. >>>> here is a link for bare bones costs of a reprap system : >>>> >>>> http://repraplogphase.blogspot.com/2011/01/cheap-skates-guide-to-510ish-mendel-360.html >>>> >>>> i'm not quite in a place to print "free" parts for others, i've got a >>>> few more mods and stuff for myself, and then i'm going to print a >>>> couple of sets of prusa mendels to sell into the web community to >>>> recoup some cash, but like i said, i'm totally willing to help with my >>>> time and experience. >>>> >>>> anyway, i'm willing to help so just get at me. >>>> >>>> andres. >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "Chicago 3D Printer Enthusiasts" group. >>>> To post to this group, send email to chicago-reprap at googlegroups.com. >>>> To unsubscribe from this group, send email to >>>> chicago-reprap+unsubscribe at googlegroups.com. >>>> For more options, visit this group at >>>> http://groups.google.com/group/chicago-reprap?hl=en. >>>> >>>> >>> >>> >>> -- >>> "Science is a Differential Equation. Religion is a Boundry Condition." >>> - A. Turing >>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "Chicago 3D Printer Enthusiasts" group. >>> To post to this group, send email to chicago-reprap at googlegroups.com. >>> To unsubscribe from this group, send email to >>> chicago-reprap+unsubscribe at googlegroups.com. >>> For more options, visit this group at >>> http://groups.google.com/group/chicago-reprap?hl=en. >>> >> >> >> >> -- >> -Brian >> >> blchamberlain at gmail.com >> @breakpointer >> >> -- >> You received this message because you are subscribed to the Google Groups >> "Chicago 3D Printer Enthusiasts" group. >> To post to this group, send email to chicago-reprap at googlegroups.com. >> To unsubscribe from this group, send email to >> chicago-reprap+unsubscribe at googlegroups.com. >> For more options, visit this group at >> http://groups.google.com/group/chicago-reprap?hl=en. >> > > -- > You received this message because you are subscribed to the Google Groups > "Chicago 3D Printer Enthusiasts" group. > To post to this group, send email to chicago-reprap at googlegroups.com. > To unsubscribe from this group, send email to > chicago-reprap+unsubscribe at googlegroups.com. > For more options, visit this group at > http://groups.google.com/group/chicago-reprap?hl=en. > -- "Science is a Differential Equation. Religion is a Boundry Condition." - A. Turing -- You received this message because you are subscribed to the Google Groups "Chicago 3D Printer Enthusiasts" group. To post to this group, send email to chicago-reprap at googlegroups.com. To unsubscribe from this group, send email to chicago-reprap+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/chicago-reprap?hl=en. -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kcadmus at gmail.com Wed Feb 23 23:11:52 2011 From: kcadmus at gmail.com (Kevin Cadmus) Date: Wed, 23 Feb 2011 18:11:52 -0500 Subject: [ExI] Same Sex Marriage Message-ID: Thanks, Kelly, for bringing up a favorite topic of mine. 
Perhaps the best way to get government out of the marriage business is to foment revolution within the huge mass of single folks. They are discriminated against in many ways, some subtle and some not so subtle. By educating this group about how they are getting the shaft by government's bestowal of privilege to married folk, maybe there will be a new faction saying that, "We aren't going to take this anymore!" Is it so hard to imagine a push for a new U.S. constitutional amendment along the lines of "Congress shall pass no legislation that discriminates according to marital status." It has a nice parallelism with other similar civil rights legislation. So people will readily understand the issue. But how can the existing laws be retrofitted to abide by this new amendment? It may be easier than it appears. A person's marital status is referenced by a relatively small handful of existing laws. Rescinding these few laws will end the injustice, simplify the tax code, avoid promoting marriages for trumped up reasons, and (maybe best of all) end the endless and irresolvable blathering and bickering about who should or should not be allowed to be considered by the state to be "married". In essence, the answer becomes "no one". The government finally is removed from the ugly business of defining what "being married" means. If private parties want to discriminate for or against single persons, fine! But it will force them to define what they want "married" to mean. Most might conclude that it simply isn't worth the effort. Would there be a down side to this that I'm just not seeing? -Kevin kcadmus at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Thu Feb 24 00:02:06 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Feb 2011 18:02:06 -0600 Subject: [ExI] Fwd: [chicago-reprap] well well, how is everyone? In-Reply-To: References: <10e3f00c-2eef-4cf3-9041-83d488f94f7f@u23g2000pro.googlegroups.com> Message-ID: ---------- Forwarded message ---------- From: John Stoner Date: Wed, Feb 23, 2011 at 6:00 PM Subject: Re: [chicago-reprap] well well, how is everyone? To: chicago-reprap at googlegroups.com I think on the software end this underlines the demand for an open source, flexible 3D printing API, a set of building blocks on which you can make a basic download-and-print tool for grandma, a more powerful design-and-print fast-cycle prototyping system for designers, and a mass manufacturing tool to drive many printing devices for mass manufacture of parts and Repowulf operations. A lot of these different tools need the same code. I have my attention elsewhere but if I won the lottery I'd hire a team to build it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 24 00:55:23 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 23 Feb 2011 20:55:23 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: On Wed, Feb 23, 2011 at 6:08 PM, Jeff Davis wrote: the AP takes no position. No position > > means NO POSITION. No position means being "agnostic" on EVERYTHING > else. 
Individual AP members have their own views of course, but as a > unified organization, the AP takes no position on: abortion, taxes, > gay marriage, gun rights, defense policy, campaign finance, racial > discrimination, immigration, terrorism, hate-speech, Israel, education > policy, environmentalism, global warming, etc. > Ah, yes, the question of the transition from where we are "here" to > the glorious Llibertarian utopia "there". This is my problem with > libertarians -- particularly zealots-slash-purists. They say our > current system is crappy. (I agree). They say life would be perfect > in the Llibertarian utopia "over there". But they .rarely seem > willing to propose a reality-based plan for getting from "here" to > "there"-- a plan for the transition. And by reality-based I mean a > plan which acknowledges that any responsible transition must be > incremental. They don't like the old way -- understandable, who > outside of the kleptocratic elite does? -- but they won't dirty > themselves with the sort of compromise with the current system that an > orderly transition implies. This annoys me. There's real substance to > Llibertarian principles. I'm looking for less bitching and moaning, > and more progress re implementation. > > Which brings us back to Darren's question: "...how do we go about > establishing a system where the principle non-aggression is > paramount,..." > > Let's talk about the US of A in the year 2011. > > How to begin the transition? > > Oddly, it seems to require only that enough people behind the curtain > in the polling booth mark their ballot correctly. Which is to say, for > the candidates put forth by The Accountability Party. > > "The Accountability Party? What's that?" you ask, puzzled, thinking > you've missed some newsworthy "announcement". You haven't. > > The Accountability Party is my little fantasy, created at this most > opportune moment, when the Dems and Repubs are both out of favor. To > be robustly resistant to destruction by fragmentation, The > Accountability Party is deliberately "preconfigured" to be > broad-based, having only two planks: Accountability and Jobs. > > No other issue is relevant except as relates to these two concerns. > So, regarduing any other issue: the AP takes no position. No position > means NO POSITION. No position means being "agnostic" on EVERYTHING > else. Individual AP members have their own views of course, but as a > unified organization, the AP takes no position on: abortion, taxes, > gay marriage, gun rights, defense policy, campaign finance, racial > discrimination, immigration, terrorism, hate-speech, Israel, education > policy, environmentalism, global warming, etc. > > The two issues which the AP devotes its exclusive focus are: > accountability: no one is above the law. Everyone, but in particular > persons in high position who have traditionally 'enjoyed' immunity > from prosecution, will now have their get out of jail free cards > voided. > > And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. > > Now you might well ask -- certainly others will -- "How you gonna > implement the jobs program, and more to the point, how you gonna pay > for it?" To which I reply, "You must always remember that the AP > subordinates ALL OTHER ISSUES to paychecks/jobs and accountability, so > the details of the fiscal policy behind the "JOBS" commitment is for > the most part irrelevant. That said, the Treasury has a machine that > prints checks, so the policy is secured, "Move right along. 
Nothing to > see here." Whatever may be the details required to reconcile the jobs > program with fiscal reality, the program itself is in stone, and > non-negotiable. For the curious though, I would state the obvious: > print the money, borrow the money, or tax someone. In terms of > practical economics, it would be quite simple: The more robust the > private sector economy, the greater the proportion of jobs it > provides. The rest to be provided by govt, and financed,... however. > (Personally, I like a progressive income tax, or a flat tax based on > net worth, or a financial transaction tax, but I'll go along with > whatever the AP figures out AFTER THE ELECTIONS HAVE BEEN WON.) > > A major innovation: the AP does not conduct its campaigns by > traditional methods. No TV, no radio, no interviews with mainstream > journalists. > > TV, radio, and other conventional media are corporate. They are part > of the illegitimate "mainstream", of the illegitimate corporate > statist ruling elite. They are part of the political opposition. They > are gatekeepers of the political process. If you pay them for TV and > radio ads, you are giving material support to your political > adversaries. The AP therefore, chooses to conduct its campaigns > DIRECTLY with the voters, over the internet, no gatekeeper, no > middleman -- no corporate mediation-for-profit of the political > process. A not-for-profit political process is crucial to the > elimination of corporate/govt corruption, and the restoration of a > healthy society. In this way, the AP terminates the age old linkage > between money and political power. > > There's more, but this is a start. > > Best, Jeff Davis > > "Everything's hard till you know how to do it." > Ray Charles > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Thu Feb 24 00:59:22 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 23 Feb 2011 17:59:22 -0700 Subject: [ExI] Call To Libertarians Message-ID: As Fred mentioned a while back, upper and lower case libertarians are different and there is an entire thicket of ideas that go under that general heading. The big influences in that area are Ayn Rand and Robert A. Heinlein. To give the younger members a sense of how far back this goes, here is a snip from The Moon is a Harsh Mistress, 1966 (serial version in 1965). This is a discussion among three of the main characters before they start the revolution to free Luna (the 4th character is Mike, an awakened AI.) (Whoh) "But Professor, what are your political beliefs?" (Professor de la Paz) "I'm a rational anarchist." (Whoh) " I don't know that brand. Anarchist individualist, anarchist Communist, Christian anarchist, philosophical anarchist, syndicalist, libertarian those I know. What what's this? Randite?" (Professor de la Paz) "I can get along with a Randite. A rational anarchist believes that concepts such as 'state' and 'society' and 'government' have no existence save as physically exemplified in the acts of self-responsible individuals. He believes that it is impossible to shift blame, share blame, distribute blame...as blame, guilt, responsibility are matters taking place inside human beings singly and nowhere else. 
But being rational, he knows that not all individuals hold his evaluations, so he tries to live perfectly in an imperfect world...aware that his effort will be less than perfect yet undismayed by self-knowledge of self-failure." (Mannie) "Hear, hear!" I said. "'Less than perfect.' What I've been aiming for all my life." "You've achieved it," said Wyoh. "Professor, your words sound good but there is something slippery about them. Too much power in the hands of individuals -- surely you would not want...well, H-missiles for example ? to be controlled by one irresponsible person?" (Professor de la Paz) "My point is that one person is responsible. Always. If H-bombs exist -- and they do -- some man controls them. In tern of morals there is no such thing as 'state.' Just men. Individuals. Each responsible for his own acts." Page 84 I didn't put this up to display the thinking ascribed to a Heinlein character, but to show that there is a lot of variety in those who describe themselves as "libertarian". And I should note that while I have formerly described myself as (lower case) libertarian, my thinking has been oriented toward understanding people and societies through the lens of evolutionary psychology. So far a political movement has not been built around this view of humans. Keith PS This is part of the cultural background needed to understand early extropian discussions. I doubt there was a person on that list who had not read most of Heinlein's works as well as essential books such as Drexler's Engines of Creation. From thespike at satx.rr.com Thu Feb 24 01:07:19 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 23 Feb 2011 19:07:19 -0600 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: <4D65AF47.1030606@satx.rr.com> Why the hell have so many posters started vomiting back every word of what they, and we, have just read, usually adding a sentence or two of their own dazzling insights at the end? What Spike has now said a couple of times: Trim yer posts! That doesn't mean Write Less, it means Stop Quoting Incontinently. Electrons are suffering. Damien Broderick From darren.greer3 at gmail.com Thu Feb 24 01:16:18 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 23 Feb 2011 21:16:18 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: On Wed, Feb 23, 2011 at 6:08 PM, Jeff Davis wrote: the AP takes no position. No position > means NO POSITION. No position means being "agnostic" on EVERYTHING > else. Individual AP members have their own views of course, but as a > unified organization, the AP takes no position on: abortion, taxes, > gay marriage, gun rights, defense policy, campaign finance, racial > discrimination, immigration, terrorism, hate-speech, Israel, education > policy, environmentalism, global warming, etc. > > The two issues which the AP devotes its exclusive focus are: > accountability: no one is above the law. Everyone, but in particular > persons in high position who have traditionally 'enjoyed' immunity > from prosecution, will now have their get out of jail free cards > voided. The idea that a society, or a party, takes no official position on anything other than the law or accountability is not that far afield from the territorial morality idea I posted about. 
If the general rule of law were that anybody was free to do anything as long as it didn't hurt anyone else, then not only would official moral positions not need to be taken, but individual ones wouldn't have to be either, except as they related to how you chose to govern yourself. But if you do hurt someone you are prosecuted, and swiftly. Not for crimes of moral turpitude, but for violating the rights of another. Period. After 9/11, this principle would have served us well. If instead of declaring war, which actually gives a legal sanction to atrocities, we had simply said we are dealing with criminals and mass murderers who have violated not just the rights of Americans but all human beings, we probably could have forged a better consensus. Initially Iran was on board, and even Hezbollah was appalled at what happened in New York. That event proved, at least to my mind, that there is a lowering threshold for horror among human beings. But instead we moralized it, and politicized it, and the whole thing fell apart. Of course, to get there, religion would need to go. Its bread and butter is morally judging the behavior of others. The master stroke of Christianity was removing sex from the realm of the sacred, where the pagans kept it, and tossing it into the pits of the profane. This ensured that all human beings were sinful by nature and could not stray far from the pulpits of power. (Forgive that little run of alliteration.) :) Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Feb 24 01:47:50 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 17:47:50 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: <4D65B8C6.5090408@mac.com> On 02/23/2011 02:08 PM, Jeff Davis wrote: > 2011/2/22 Darren Greer: > >>> Quoting Samantha Atkins: >>>> The essential element of libertarianism is the Non-Aggression Principle. No one has the right to initiate force against another. This is equivalent to total freedom to do anything that does not harm, physically force, threaten physical force or defraud another. >> I like that principle Samantha. Very much. I am personally committed to it. But I wonder how does one go about establishing a system where the principle of non-aggression is paramount, when natural aggression, both tribal and individual, seems to be a dominant feature of the human psyche nurtured by millions of years of evolutionary development? > Me: > > Ah, yes, the question of the transition from where we are "here" to > the glorious Libertarian utopia "there". This is my problem with > libertarians -- particularly zealots-slash-purists. They say our > current system is crappy. (I agree). They say life would be perfect > in the Libertarian utopia "over there". But they rarely seem > willing to propose a reality-based plan for getting from "here" to > "there" -- a plan for the transition. Here is mine. Give up on the USA. Start a new country fresh on roughly Rothbardian principles. Now, who has 10-20 million for our starter island and set up costs? :) On the here to there, the first utterly critical thing is to agree on what should be, sufficiently to take the necessary actions. Saying that without a detailed plan it is all more or less bullshit is grossly unfair. The first question is what is right, what should be the case. 
The problem of unravelling the current mess and replacing it is a secondary endeavor. Very important of course and I agree we need to think a lot about this. But to just dismiss the discussion in lieu of such a detailed plan is not reasonable. Here is my partial sketch of an incremental approach to a few aspects: 1) remove all non-violent, non-fraud "victimless" crimes from the books by executive order; 2) release all imprisoned for such crimes immediately (also by XO); 3) get Constitutional amendments passed to limit severely what new things government can do, starting with absolute limit that it cannot borrow money except perhaps in rigrorously limited wartime conditions; 4) sell almost all government held assets outside its operational assets as quickly as possible with proceeds applied to its debts; 5) on programs like Social Security, allow everyone below some cut-off, say 40, to opt out if they desire immediately. Those over some cutoff get the benefits they have paid into. Optionally allow everyone that desires to to opt out and pay the benefits accrued out of general accounts (where the money has ended up being spent anyway) until some cutoff date that is say 20 years or so out; 6) roughly same idea on many other entitlement programs. The point is not the leave people hanging but to have a planned phase out over time with no new promises. The end goal being to get government out of these areas entirely; 7) on education any parent can opt out of the public school system and spend the equivalent amount for schooling by the school of their choice. This amount to be provided out of the same government accounts in the beginning but over some period of time moving gradually to total private market transactions. Government gets out of determining curriculum much earlier. 8) No more sort of maybe wars and troops posted all over the globe just because we can or want to brag that the Navy is all over the world or whatever. We cannot remotely afford this. There are many more and the problem solution cannot simply be written down in one go. - samantha From sjatkins at mac.com Thu Feb 24 01:56:23 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 17:56:23 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> Message-ID: <4D65BAC7.1090507@mac.com> On 02/23/2011 02:08 PM, Jeff Davis wrote: > > Ah, yes, the question of the transition from where we are "here" to > the glorious Llibertarian utopia "there". This is my problem with > libertarians -- particularly zealots-slash-purists. They say our > current system is crappy. (I agree). They say life would be perfect > in the Llibertarian utopia "over there". But they .rarely seem > willing to propose a reality-based plan for getting from "here" to > "there"-- a plan for the transition. And by reality-based I mean a > plan which acknowledges that any responsible transition must be > incremental. They don't like the old way -- understandable, who > outside of the kleptocratic elite does? -- but they won't dirty > themselves with the sort of compromise with the current system that an > orderly transition implies. This annoys me. There's real substance to > Llibertarian principles. I'm looking for less bitching and moaning, > and more progress re implementation. > > Which brings us back to Darren's question: "...how do we go about > establishing a system where the principle non-aggression is > paramount,..." 
> > Let's talk about the US of A in the year 2011. > > How to begin the transition? > > Oddly, it seems to require only that enough people behind the curtain > in the polling booth mark their ballot correctly. Which is to say, for > the candidates put forth by The Accountability Party. Problem with this is that the vast majority (roughly 99%) of the government machinery is not subject to election at all. And it is very resistant to major change by incumbents. > "The Accountability Party? What's that?" you ask, puzzled, thinking > you've missed some newsworthy "announcement". You haven't. > > The Accountability Party is my little fantasy, created at this most > opportune moment, when the Dems and Repubs are both out of favor. To > be robustly resistant to destruction by fragmentation, The > Accountability Party is deliberately "preconfigured" to be > broad-based, having only two planks: Accountability and Jobs. > > No other issue is relevant except as relates to these two concerns. > So, regarduing any other issue: the AP takes no position. No position > means NO POSITION. No position means being "agnostic" on EVERYTHING > else. Individual AP members have their own views of course, but as a > unified organization, the AP takes no position on: abortion, taxes, > gay marriage, gun rights, defense policy, campaign finance, racial > discrimination, immigration, terrorism, hate-speech, Israel, education > policy, environmentalism, global warming, etc. > Being agnostic on everything but these two ungrounded concepts cannot possibly lead to a good outcome. No principles means no standards for what is desired and no means to judge future proposals systematically. > The two issues which the AP devotes its exclusive focus are: > accountability: no one is above the law. Everyone, but in particular > persons in high position who have traditionally 'enjoyed' immunity > from prosecution, will now have their get out of jail free cards > voided. > Which laws? Which laws are legitimate to start with? How do you know? All now are equal under the law as a standing principle. How would you make it more so? > And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. > WHAT? Even if that can offer no value whatsoever in exchange? How is this just and how does it lead to a better world? > Now you might well ask -- certainly others will -- "How you gonna > implement the jobs program, and more to the point, how you gonna pay > for it?" To which I reply, "You must always remember that the AP > subordinates ALL OTHER ISSUES to paychecks/jobs and accountability, so > the details of the fiscal policy behind the "JOBS" commitment is for > the most part irrelevant. There is no accountability if there is no accounting for how wished for things can actually be done and what the implications of doing those things really are. > That said, the Treasury has a machine that > prints checks, so the policy is secured, "Move right along. Nothing to > see here." Whatever may be the details required to reconcile the jobs > program with fiscal reality, the program itself is in stone, and > non-negotiable. For the curious though, I would state the obvious: > print the money, borrow the money, or tax someone. In terms of > practical economics, it would be quite simple: The more robust the > private sector economy, the greater the proportion of jobs it > provides. The rest to be provided by govt, and financed,... however. 
> (Personally, I like a progressive income tax, or a flat tax based on > net worth, or a financial transaction tax, but I'll go along with > whatever the AP figures out AFTER THE ELECTIONS HAVE BEEN WON.) That will finish destroying the value of the dollar very very quickly and the country with it. Progressive tax is regressive to actually growing an economy. It has been seen over and over again. Not to mention be utterly unjust and immoral. > A major innovation: the AP does not conduct its campaigns by > traditional methods. No TV, no radio, no interviews with mainstream > journalists. > > TV, radio, and other conventional media are corporate. They are part > of the illegitimate "mainstream", of the illegitimate corporate > statist ruling elite. They are part of the political opposition. They > are gatekeepers of the political process. If you pay them for TV and > radio ads, you are giving material support to your political > adversaries. The AP therefore, chooses to conduct its campaigns > DIRECTLY with the voters, over the internet, no gatekeeper, no > middleman -- no corporate mediation-for-profit of the political > process. A not-for-profit political process is crucial to the > elimination of corporate/govt corruption, and the restoration of a > healthy society. In this way, the AP terminates the age old linkage > between money and political power. > > There's more, but this is a start. > This a total non-starter. - s From sjatkins at mac.com Thu Feb 24 02:03:04 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 18:03:04 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <20110223164814.GC15944@ofb.net> References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> Message-ID: <4D65BC58.9010203@mac.com> On 02/23/2011 08:48 AM, Damien Sullivan wrote: > On Tue, Feb 22, 2011 at 07:32:22PM -0800, Samantha Atkins wrote: > >> Do you really think that the government can be involved in healthcare >> without grossly inflating costs? With the government health programs > A rational person looks at the evidence, which is that goverrnment > involvement with health care reduces costs. Historically there is no basis for such a claim. Take Medicare/Medicaid and take Bush prescription drug plan as starters. Trillions in unfunded liabilities. Since when does government run anything more economically than private enterprise? > Half of US medical spending > is from the government, which is covering the older and sicker part of > the population, at less overhead. Overall the US spends the most on > health care, with among the shortest life expectanccies, even after > filtering out some of our disadvantaged groups. The most socialized > medicine in Europe, Britian's NHS, is also the cheapest, spending less > than half per capita what the USA does. > With notorious and life threatening waits for many procedures. No, thank you. 
- s From darren.greer3 at gmail.com Thu Feb 24 02:11:18 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 23 Feb 2011 22:11:18 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D65BC58.9010203@mac.com> References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: On Wed, Feb 23, 2011 at 10:03 PM, Samantha Atkins wrote: With notorious and life threatening waits for many procedures. No, thank > you. > > According to who? Health insurance companies? Lobbyists? Because that is simply not true. In Canada, when you need it, you get it. The more more life-threatening it is, the faster you get it. If you believe otherwise, you've bought into propaganda. darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Thu Feb 24 02:23:44 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 23 Feb 2011 20:23:44 -0600 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: <4D65C130.3080304@satx.rr.com> On 2/23/2011 8:11 PM, Darren Greer wrote: > In Canada, when you need it, you get it. The more more > life-threatening it is, the faster you get it. If you believe otherwise, > you've bought into propaganda. Likewise in Australia, as Stathis (a public hospital doctor) has pointed out here more than once. The dull repeated impacts of such propaganda claims against the unmentionable truth is very depressing, on a par with "Those so-called Moon 'landings' were faked!" But by and large there's no point saying that here (so why am I saying it? --what the hell, a few more electrons). Damien Broderick From spike66 at att.net Thu Feb 24 03:35:08 2011 From: spike66 at att.net (spike) Date: Wed, 23 Feb 2011 19:35:08 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D65AF47.1030606@satx.rr.com> References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> <4D65AF47.1030606@satx.rr.com> Message-ID: <006d01cbd3d3$d606c840$821458c0$@att.net> ... On Behalf Of Damien Broderick ... Trim yer posts! That doesn't mean Write Less, it means Stop Quoting Incontinently. Electrons are suffering. Damien Broderick Ja, do let me encourage once again please, trim yer posts! There are those who follow ExI-chat on tiny screens on telephones. Let's not make them scroll thru reams of quotations, thanks. spike From kanzure at gmail.com Thu Feb 24 04:14:13 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Feb 2011 22:14:13 -0600 Subject: [ExI] An "open source party"? Message-ID: I wrote a reply on H+ Magazine for some reason, see below: http://hplusmagazine.com/2011/02/24/open-source-party-2-0-liberty-democracy-transparency/ """ One Comment Submitted by kanzure on February 24, 2011 at 3:42 am. 
The way I see it, open source paints a larger trend, not one of mere transparency in our current politics, but rather a complete re-envisioning of society entirely. This is why we have individuals like Patri Friedman (Seasteading Institute) working on a "startups of governments" framework. Transhumanists have known Patri for some time now. He was recently re-elected to the board of Humanity+ and has presented at these conferences before. His borrowed concept is to make land out on the high seas available to "entrepreneurial governments". What would an entirely open source seasteading distribution look like? There's been no doubt that Debian and Ubuntu have been huge forces in the free software world -- will Seasteading Institute be as influential in development? This is also why we have Marcin Jakubowski (Factor E Farm) working on the global village construction set. He's creating the Global Village Construction Set, an open source, low-cost, high performance technological platform that allows for the easy, DIY fabrication of the 50 different Industrial Machines that it takes to build a sustainable civilization with modern comforts. Holy crap, mount that on Patri's friggin' seasteading platform. Marcin presented at H+ Summit 2009 in Irvine, California. His farm out in Missouri has sort of been like a Zeitgeist or Venus Project for people who have an urge to get down to business. He'll be presenting at TED sometime this year. And he really, really deserves his TED talk. This is why we have Adrian Bowyer (University of Bath) working on RepRap, an open-source 3D printer that hopes to one day make all of its own components. It's not really just Adrian now, but thousands of developers and hundreds of repraps and derivatives, even businesses like Makerbot Industries and MakerGear. This technology has ignited global, open development. Humanity+ (this blog) thinks that open technology development has tremendous acceleration benefits, especially in open manufacturing. I helped organize the Gada Prizes at Humanity+ including the just-recently-announced Grand RepRap Prize... and there's $80,000 at stake. This is why Robert Freitas (Institute for Molecular Manufacturing) has provided hundreds of hours of research in his book Kinematic Self-Replicating Machines. For many of the reprappers it (and Advanced Automation for Space Missions) has been a guiding star in both mechanical devices and nanotech. I flip through these almost daily now. Christopher Kelty once published an interesting seed of an idea about recursive republics -- societies that continuously use their technologies to update their mandate in a giant feedback loop. At least, that's the thought he came to after chronicling the historical trends in the free software movement ... er, which isn't his video. ((On a related note, it's always amused me how Chris Peterson @ Foresight Institute was more involved in open source back in the late 90s. There's a few edge cases in the transhumanist communities, but in general, it seems that the futurists missed out on open source. To be fair, open source isn't easy to make. It's hard work. But nobody is going to hand-deliver you the future. Biocurious (the open source, DIY biohacking hackerspace) is run by a few transhumanists, so the future is looking bright for the Bay Area transhumanists.)) The future of "open source politics" is going to be about technology development. Don't like your current government? 
You'll get to spawn off a spore and take a recent version of technological civilization with you -- for yourself, your family, and your friends to go with you, if they think your proposed system is worth leaving for (just don't "fork-and-forget"! ah, GitHub's one weakness). That's the power of open source. But there seems to be a chasm or disconnect between the events and trends I've outlined... and the article's take on Open Source, which is definitely crippling in dangerous ways. BTW: I'll be in the Bay Area at the end of the month in case anyone wants to hang out or, you know, feed me. *Edit*: Also, there's a BIL meetup in Long Beach, California on March 3rd-5th. Joseph Jackson has been recruiting lots of DIYbio folks to talk about directed evolution, EEG, open source hardware projects, a mass spectrometer project, etc. etc. (deets) - Bryan http://heybryan.org/ 1 512 203 0507 irc.freenode.net ##hplusroadmap """ -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Thu Feb 24 04:25:40 2011 From: moulton at moulton.com (F. C. Moulton) Date: Wed, 23 Feb 2011 20:25:40 -0800 Subject: [ExI] A plea for us to do better In-Reply-To: References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: <4D65DDC4.5050106@moulton.com> Darren Greer wrote: > On Wed, Feb 23, 2011 at 10:03 PM, Samantha Atkins > wrote: > With notorious and life threatening waits for many procedures. No, thank >> you. >> > According to who? Health insurance companies? Lobbyists? Because that is > simply not true. In Canada, when you need it, you get it. The more > life-threatening it is, the faster you get it. If you believe otherwise, > you've bought into propaganda. > > darren Would everyone please cut out the hyperbole. Please. The situation is much more complex and nuanced than either of the above positions. Instead of just making bold assertions can I instead suggest defining our terms, agreeing on a set of metrics and processes of measurement and then taking a calm look at all available data? To just bring up single instances and anecdotes is not very helpful; I suspect that we could go on for days just trading jabs with one side talking about some Canadian official who went to the USA for heart surgery and someone else would bring up someone who was denied surgery by an insurance company in the USA. Instead let us get beyond simplistic slogans. Let us please avoid mis-characterizations of the positions of others in the discussion. I realize that some people must get a great thrill from it because I have seen it all too often on this list. This is the ExI list and I really think we can do better whether we are discussing AI or religion or whatever. 
Fred From phoenix at ugcs.caltech.edu Thu Feb 24 04:28:16 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Wed, 23 Feb 2011 20:28:16 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <4D65BC58.9010203@mac.com> References: <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: <20110224042816.GA13834@ofb.net> On Wed, Feb 23, 2011 at 06:03:04PM -0800, Samantha Atkins wrote: > On 02/23/2011 08:48 AM, Damien Sullivan wrote: >> On Tue, Feb 22, 2011 at 07:32:22PM -0800, Samantha Atkins wrote: >> >>> Do you really think that the government can be involved in healthcare >>> without grossly inflating costs? With the government health programs >> A rational person looks at the evidence, which is that goverrnment >> involvement with health care reduces costs. > > Historically there is no basis for such a claim. Take Medicare/Medicaid Except for actual spending data. http://mindstalk.net/socialhealth/financial.html > and take Bush prescription drug plan as starters. Trillions in unfunded > liabilities. Since when does government run anything more economically Unfunded liabilities means there won't bee taxes in the future to cover expenses in the future. It has nothing to do with whether government is more or less efficient than private enterprise. > than private enterprise? Well, Medicare's overhead is 2%, that of private US insurers, 14%. Government doesn't have to return profit to shareholders, or spend on advertising, and spends less on trying to deny care to customers. > With notorious and life threatening waits for many procedures. No, > thank you. What is notorious is not necessarily true. -xx- Damien X-) From sjatkins at mac.com Thu Feb 24 04:41:52 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 20:41:52 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64C0B3.3060008@gmail.com> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> Message-ID: <40434C26-6B36-489A-B674-A091AD5AF2C4@mac.com> On Feb 23, 2011, at 12:09 AM, AlgaeNymph wrote: > On 2/23/11 12:00 AM, Eugen Leitl wrote: > >> Who is this 'we' kemo sabe? > > Us transhumanists. > >> Do we have a budget? > > So we should just not even bother? I'm not saying we need a mass media blitz, just that we should look for how we're looking bad, figure out why, and respond to that on an individual level. The closest to mass-anything I can think of doing is creating a how-to guide for anyone who wants to start up their own advocacy group. I am a bit tired of advocacy and of talking about bizarre and interesting things like what our attitude and relationship will be to our N upload clones. I am much more interested in groups, especially with money, to actually tackle and or fund the tackling of various projects essential for progress toward our dreams. I think we have succeeded pretty well getting the ideas out into the world (with caveats). It is now time to show as clearly as we can that it isn't simply the latest variant of sweet smelling smoke blown up the proverbial public undies. Beware the possibility of Transhumanist Winter. 
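Moulton's request for agreed metrics, together with the overhead and per-capita figures traded in the health-care exchange above (roughly 2% overhead for Medicare versus about 14% for private insurers, and NHS per-capita spending under half the US level), can at least be made concrete. A tiny sketch follows; the per-capita total below is an assumed round number for the arithmetic only, not a sourced statistic.

# Turn the percentages quoted in this thread into per-person dollar figures.
# US_PER_CAPITA_SPEND is an assumed round number, not a sourced statistic.
US_PER_CAPITA_SPEND = 7500.0   # assumed annual US health spending per person, USD
NHS_RELATIVE_SPEND = 0.45      # "less than half per capita", per the thread
MEDICARE_OVERHEAD = 0.02       # ~2% overhead, as cited above
PRIVATE_OVERHEAD = 0.14        # ~14% overhead, as cited above

def overhead_dollars(per_capita_spend, overhead_rate):
    """Share of each person's spending that goes to administration rather than care."""
    return per_capita_spend * overhead_rate

print("Overhead at Medicare-like 2%%:  $%.0f per person per year"
      % overhead_dollars(US_PER_CAPITA_SPEND, MEDICARE_OVERHEAD))
print("Overhead at private-like 14%%: $%.0f per person per year"
      % overhead_dollars(US_PER_CAPITA_SPEND, PRIVATE_OVERHEAD))
print("Implied NHS-style per-capita spend: $%.0f (vs. assumed US $%.0f)"
      % (US_PER_CAPITA_SPEND * NHS_RELATIVE_SPEND, US_PER_CAPITA_SPEND))

None of this settles which system is better -- wait times, outcomes, and coverage matter too -- which is exactly the point about agreeing on metrics before trading anecdotes.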
- samantha From sjatkins at mac.com Thu Feb 24 04:47:22 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 20:47:22 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: On Feb 23, 2011, at 1:14 AM, Giulio Prisco wrote: > < > (A) Initiation of force is bad. > (B) Starving children is bad. > > The question is which is worse. A libertarian would say initiation of force is unacceptable; figure out some other way to feed starving children. A liberal would say that starving children is unacceptable and so be it if force is necessary to avoid it.>> > > Well put. It is not easy when primary values are in conflict. In these cases I tend to look for midway solutions, like feeding children as much as possible while reducing initiation of force to the strictly necessary minimum. Needless to say, both fundamentalist libertarians and fundamentalist liberals dislike midway solutions. > > Which children, and what are the root causes of their plight really? Does any child anywhere in the world who is starving have a more valid claim to some of my possessions, and the disposal thereof, than I and those I know and value highly do? How did that unknown child somewhere on earth automatically acquire a valid claim check on the property of those that had nothing to do with the conceiving or upbringing or the condition of that child? Is simply being born enough to give one a claim on the property and part of the life and time of others, as long as one is needy enough? What exactly is being presumed underneath this? -samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Feb 24 04:52:48 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 20:52:48 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: References: <4D64805C.7040501@gmail.com> Message-ID: On Feb 23, 2011, at 1:46 AM, BillK wrote: > On Wed, Feb 23, 2011 at 3:34 AM, AlgaeNymph wrote: >> We seem to spend more time in these lists debating the merits of >> libertarianism or socialism as opposed to, say, how to improve our image >> with the public. Why is that? >> > > > And libertarians filling the list with detailed nit-picking arguments > about how the poor half of the nation might have to let sick children > die because they can't afford medical bills and the state shouldn't > give handouts is a complete turnoff for the public. > (That's only one of many turnoffs!) Please do not mix up being libertarian with any flavor of "conservative". That word is even more bereft of meaning than "left" or "right" are. I don't give a fig about my image or the image of this list with people that largely refuse to think or can't see that exploring some of these questions is far more important than merely how some nebulous "they" may or may not see us. > > Libertarianism enthusiasts ruin any chance of appealing to the public. > I am not at all interested in mere appeal to the public to the point of being afraid to talk about anything that may not be appealing to the public. - samantha From sjatkins at mac.com Thu Feb 24 04:54:41 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 20:54:41 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? 
In-Reply-To: <20110223095730.GG23560@leitl.org> References: <4D64805C.7040501@gmail.com> <20110223095730.GG23560@leitl.org> Message-ID: <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> On Feb 23, 2011, at 1:57 AM, Eugen Leitl wrote: > > Can we please let this thread die? The ban was there for a reason. Kthxbai. > Hmm? If we can't handle this then this list is probably utterly worthless. So typical. Important issues are brought up, talked about a while, then there are calls to shut it down and even appeals to a ban. This really really depresses me. - s From sjatkins at mac.com Thu Feb 24 04:57:19 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 20:57:19 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D64DE1A.2080402@gmail.com> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> <20110223093512.GF23560@leitl.org> <4D64DE1A.2080402@gmail.com> Message-ID: <5C11B7E1-26D6-421F-A39A-5DF513E5C531@mac.com> On Feb 23, 2011, at 2:14 AM, AlgaeNymph wrote: > On 2/23/11 1:35 AM, Eugen Leitl wrote: > >> Absolutely not. Some transhumanists who are subscribed >> to a mailing list. Do you see all the non-participants >> non-participating vigorously? > > I do see the same names and the same arguments. > >> A herd of cats attempts to appeal to dogs. (Why, actually?) > > Politicians listen to dogs. > >> A herd of cats cannot form a common front. Particularly, a common >> front appealing to dogs. The easiest way is to hire a dog with >> a good track record. > > Oo, I'll keep an eye out for that. > >> A group targeting whom? Advocating what? Advocating how? >> What is the added value to the target audience? > > These are good questions, this is what we should be asking. :) I'll start with the ideas I have at the moment. > > ? Targeting whom? > Whoever has the moral high ground. > > ? Advocating what? > Transhumanism, of course. > > ? Advocating how? > I'd begin by having us prepare answers to the hardest possible questions we can get asked, particularly in regards to equity. Which brings you right back to those so bothersome questions regarding ethics, individual rights, what are and are not rights, how that effects economics and so on. How could we possibly talk about equity without addressing those things? Yet many here don't seem to have the stomach for it. > > ? What is the added value to the target audience? > You mean why would they be interested? I'd like to think for the same reason H+ adds value to us but you probably want something more politically specific. My best guess is to find a way to tie H+ to anti-corporatism (*not* anti-business or anti-free market, mind). Still, how to politically frame H+ is a line of questioning we should give some thought. The difference is lost without again delving pretty deep into those troublesome subjects. 
From kanzure at gmail.com Thu Feb 24 05:02:00 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 23 Feb 2011 23:02:00 -0600 Subject: [ExI] Fwd: [diybio] Re: Urgent final BIL participant confirmations, press coverage In-Reply-To: <17b11efb-b94c-419b-890a-ee8c8db73f6a@u24g2000prn.googlegroups.com> References: <17b11efb-b94c-419b-890a-ee8c8db73f6a@u24g2000prn.googlegroups.com> Message-ID: ---------- Forwarded message ---------- From: JonathanCline Date: Wed, Feb 23, 2011 at 10:58 PM Subject: [diybio] Re: Urgent final BIL participant confirmations, press coverage To: SoCal DIYBio Cc: jcline On Feb 23, 3:21 pm, Joseph Jackson wrote: > LA and SD bio people. I am down to the wire for BIL now and we have a few > publicity opportunities cropping up. You can take them or leave them and > you can be featured individually or as part of your group. Eg, "Natalia, a > UCLA grad student was drawn to DIY bio bla bla bla" > The Perl Robotics open source software project allows the typical young computer hacker to control $100,000 laboratory robotics in order to run biochem experiments just like the pro's. It's published online for anyone to download and has attracted the attention of Syn Bio labs, MIT, etc. It competes with some of the best professional software out there and in fact exceeds their capabilities. A junior high school student could use it to become the next Amyris.. in her basement [*]. With focused development work typical of open source projects, it could convert the entire 30-volume set of Current Protocols in Molecular Biology into computer instructions, automatically, or it could make great martinis. http://search.cpan.org/dist/Robotics/ This software is not for the meek: laboratory robotics are heavy machines with precise moving parts and only semi-intelligent circuits. The author is currently looking for an open-source-license-friendly partner lab. (Reality check: "it's in alpha." Plus the cost of the reagents per machine run is outrageous.) [*] Actual results may differ in practice. ## Jonathan Cline ## jcline at ieee.org ## Mobile: +1-805-617-0223 ######################## -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Thu Feb 24 05:01:10 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 23 Feb 2011 22:01:10 -0700 Subject: [ExI] women's rights: an amusing video Message-ID: This five year-old girl is very adamant about having a job before she marries! I just wonder if she gained this wisdom from a relative, or instead from television? lol But I think she should have included in her statement about how she would also be getting a good education before marrying! Well, maybe that will come when she's six... 
http://www.bing.com/videos/watch/video/little-girl-needs-a-job/20mhcwyt?q=Viral+video&rel=msn&from=en-us_msnhp&form=msnrll>1=42010 John : ) From sjatkins at mac.com Thu Feb 24 05:19:31 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 21:19:31 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110223105745.GH23560@leitl.org> References: <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> <20110218130321.GK23560@leitl.org> <20110223105745.GH23560@leitl.org> Message-ID: <2CD47B82-AFEE-4E96-9565-97F438E9E6B7@mac.com> On Feb 23, 2011, at 2:57 AM, Eugen Leitl wrote: > On Tue, Feb 22, 2011 at 04:26:08PM -0700, Kelly Anderson wrote: > >>> *Which* competing platforms? Technologies don't come out of >>> the blue fully formed, they're incubated for decades in >>> R&D pipeline. Everything is photolitho based so far, self-assembly >>> isn't yet even in the crib. TSM is just 2d piled higher and >>> deeper. >> >> Photo lithography has a number of years left in it. As you say, it can > > Not so many more years. > >> extend into the third dimension if the heat problem is solved. I have > > Photolitho can't extend into third dimension because each subsequent > fabbing step degrades underlying structures. You need a purely additive, > iterable deposition process which doesn't damage underlying layers. > >> seen one solution to the heat problem that impressed the hell out of > > Cooling is only a part of the problem. There are many easy fixes which > are cumulative in regards to reducing heat dissipation. > >> me, and no doubt there are more out there that I haven't seen. By the >> time they run out of gas on photo lithography, something, be it carbon >> nano tube based, or optical, or something else will come out. A > > Completely new technologies do not come out of the blue. > We're about to hit 11 nm http://en.wikipedia.org/wiki/11_nanometer > Still think Moore's got plenty of wind yet? Yes, but due to newer technologies. Some like optical connects (partially empowered by nanoscale light sensors) between components enable 3D architectures. Others involve things not as close to out of the lab like memristor based designs, molecular chips, racetrack memory, graphene transistors, quantum dot memory - to name a few. While we may not continue Moore's law by current means there are many contenders that should enable continuance of that pace for some time. > >> company like Intel isn't going to make their very best stuff public >> immediately. You can be sure they and IBM have some great stuff in the > > The very best stuff is called technology demonstrations. It is very > public for obvious reasons: shareholder value. > >> back room. I am not fearful of where the next S curve will come from, > > The only good optimism is well-informed optimism. Optimists > would have expected clock doublings and memory bandwidth > doublings to match structure shrink. I do, largely. > >> except that it might come out of a lab in China, Thor help us all >> then! >> >>>>> Kelly, do you think that Moore is equivalent to system >>>>> performance? You sure about that? >>>> >>>> No. Software improves as well, so system performance should go up >>> >>> Software degrades, actually. Software bloat about matches the advances >>> in hardware. >> >> I know what you are talking about. 
You are stating that Java and C# >> are less efficient than C++ and that is less efficient than C and that > > I am talking that people don't bother with algorithms, because "hardware > will be fast enough" or just build layers of layers upon external > dependencies, because "storage is cheap, hurr durr". True too often. But almost always a better algorithm buys you more cheaper than buying next year's latest and greatest or even the latest and greatest from five year in the future. > > Let's face it, 98% of developers are retarded monkeys, and need > their programming license revoked. > True but we don't need no stinking licenses! >> > > There's no Moore's law for software, that's for sure. > >>> In terms of advanced concepts, why is the second-oldest high >>> level language still unmatched? Why are newer environments >>> inferior to already historic ones? >> >> Are you speaking with a LISP? I don't think that Eclipse is inferior > > Why, yeth. How observanth of you, Thir. Because LISP sought to capture the essence and full power of function abstractions and did a good job of doing so while not introducing a lot of canned syntax to get in the way. The most empowering software environment I ever encountered was a Symbolics workstation in the early 80s. That says a lot for the dearth of improvement is programming environment tools. Part of it is economics. Pleasing programmers is hard, they are tight with their money, most won't bother with becoming fluent in the needed abstractions and there are not nearly as many of them as people that want to pay a lot for Microsoft office or the latest iPhone fart app equivalent. We software folks spend the majority of our careers automating other people's workflow (at best). > >> to the LISP environment I used on HP workstations in the 80s. I think > > We're not talking implementations, but power of the concepts. > >> it is far better. I remember waiting for that damn thing to do garbage >> compaction for 2-3 minutes every half hour or so. Good thing I didn't >> drink coffee in those days... could have been very bad. :-) > > Irrelevant to my point. GC is actually much faster in modern designs than reference counting - provably so. And you do not want to even dream of programming in the large without some effective means of GC. Reference counting is also provably fallible. > >> We tend to glorify the things of the past. I very much like playing > > Lisp is doing dandy. My question is no current language or > environment was capable to improve upon the second-oldest > language conceptually. Nevermind that many human developers > are mentally poorly equipped to deal with such simple things > like macros. > >> with my NeXT cube, and do so every now and again (It's great when you >> combine Moore's law with Ebay, I could never have afforded that >> machine new.) The nostalgia factor is fantastic. But the NeXT was >> fairly slow even at word processing when you use it now. It was a >> fantastic development environment, only recently equaled again in >> regularity and sophistication. >> >> Eugen, don't be a software pessimist. We now have two legged walking >> robots, thanks to a combination of software employing feedback and >> better hardware, but mostly better software in this case. > > I'm sorry, I'm in the trade. Not seeing this progress thing > you mention. > Me either and I have done little but software for the last 30 years. >> Picassa does a fairly good job of recognizing faces. I would never >> have predicted that would be a nut cracked in my time. 
> > We're supposed to have human-grade AI twenty years ago. I can > tell you one thing: we won't have human-grade AI in 2030. > Now that I am not sure of. Except I am afraid the economic meltdown this decade may kill to much that such an accomplishment needs to rest on. - samantha From spike66 at att.net Thu Feb 24 05:11:16 2011 From: spike66 at att.net (spike) Date: Wed, 23 Feb 2011 21:11:16 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <40434C26-6B36-489A-B674-A091AD5AF2C4@mac.com> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> <40434C26-6B36-489A-B674-A091AD5AF2C4@mac.com> Message-ID: <009b01cbd3e1$43f393d0$cbdabb70$@att.net> ... On Behalf Of Samantha Atkins >... Beware the possibility of Transhumanist Winter. - samantha Transhumanist Winter? Do explain please? Did you mean transhumanism could come to have a negative public image? spike From jrd1415 at gmail.com Thu Feb 24 05:27:45 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 23 Feb 2011 22:27:45 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: <4D65BAC7.1090507@mac.com> References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> <4D65BAC7.1090507@mac.com> Message-ID: Samantha, how sweet. We haven't chatted in a while. And now i see that you're a studied libertarian (L?). I suppose if I'd been paying closer attention, I'd have known that. On Wed, Feb 23, 2011 at 6:56 PM, Samantha Atkins wrote: > On 02/23/2011 02:08 PM, Jeff Davis wrote: >> Oddly, it seems to require only that enough people behind the curtain >> in the polling booth mark their ballot correctly. Which is to say, for >> the candidates put forth by The Accountability Party. > > Problem with this is that the vast majority (roughly 99%) of the government > machinery is not subject to election at all. ? And it is very resistant to > major change by incumbents. Bureaucratic inertia. Subject to legislative direction, no? But also a giant interest group/voting block in its own right. Don'[t you just hate democracy sometimes? Paraphrasing something de Toqueville may have said, 'the American republic (democracy?) will last until the govt discovers that it can bribe its citizens with their own money.' [Googled it: "The American Republic will endure until the day Congress discovers that it can bribe the public with the public's money."] >> The Accountability Party is deliberately "preconfigured" to be >> broad-based, having only two planks: Accountability and Jobs. >> >> No other issue is relevant except as relates to these two concerns. > Being agnostic on everything but these two ungrounded concepts cannot > possibly lead to a good outcome. ?No principles means Not committing the party a priori to a menu of positions hardly means having no principles. Why take a position that can only splinter the party and weaken it. With the result being lost power, and interment in the ash heap of history. The party can poll its members later during the legislative session, work out niggling details, and get on with exercising power on issues that matter. > >> The two issues which the AP devotes its exclusive focus are: >> accountability: no one is above the law. Everyone, but in particular >> persons in high position who have traditionally 'enjoyed' immunity >> from prosecution, will now have their get out of jail free cards >> voided. >> > > Which laws? Honestly? I would start with war crimes. 
By "accountability" I essentially mean subject the ruling class in general and the power elite in particular to a strong dose of "ethic cleansing", so the entire society could start over with a clean slate. Start over, but with the former upper reaches of society on notice that the law now applies to them. No, really. > Which laws are legitimate to start with? Sort that out later. > How do you know? Apply libertarian principles? Why not? We'll certainly have to sort that out. Let's talk it over. >?All now are equal under the law as a standing principle. A standing principle for the semi-washed masses, perhaps. We both know that US Presidents and legislators have never been prosecuted for war crimes. > ?How would you make it more so? Easy. Prosecute the formerly unprosecuted. All of them. This doesn't imply draconian penalties. It isn't about revenge. It's about starting over with a clean slate and a "rule of law" that does its job. >> And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. >> > > WHAT? Now, now, don't get upset. People vote their pocketbooks. Economics is all. Establish a principle that everyone is ***ENTITLED*** to their piece of the economic pie, and they should vote for you in large enough numbers to guarantee that you get the power to implement necessary reforms. It's the Lombardi principle: winning is everything. > Even if that can offer no value whatsoever in exchange? Yes, if needs be. (But your question presumes no value. I do not propose a "no value" exchange.) > How is this just It reconfigures the economic system, eliminating the "war of all against all". High level political and economic crime will be deterred. There will be a societal shift away from parasitism and toward greater productivity. Economic activity will then equilibrate, and life will go on. But better. Rinse and repeat. > and how does it lead to a better world? See above. And by the way, if at first you don't succeed, tweak , and tweak again. (Till you get it right, or stop breathing. Is there another choice?) >> the Treasury has a machine that >> prints checks, so the policy is secured, "Move right along. Nothing to >> see here." > That will finish destroying the value of the dollar very very quickly and the country with it. No it won't. > Progressive tax is regressive to actually growing an > economy. No it isn't. > It has been seen over and over again. No it hasn't. >Not to mention be utterly unjust and immoral. Nothing could be more moral and just than to confirm, and apply, the principle that every person is ENTITLED to a living wage from the economic pie. By the way, I base my challenge to your assertions about the economic consequences of taxation, on the claim that it's just ruling class propaganda. No doubt you will counter with some conservative or "Austrian" economist as authority. It's the same old story from the dim recesses of time. The intellectual class provides "scholarly" justifications for the predation of the wealthy. And one other thing: we're on the same side , seek the same end. Hard to believe, but true. Libertarian principles-wise. >> There's more, but this is a start. >> > > This a total non-starter. Glad to have you on board. Best, Jeff Davis "First they ignore you, then they laugh at you, then they fight you, then you win." 
Mahatma Gandhi From sjatkins at mac.com Thu Feb 24 05:35:11 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 21:35:11 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: On Feb 23, 2011, at 6:11 PM, Darren Greer wrote: > > > On Wed, Feb 23, 2011 at 10:03 PM, Samantha Atkins wrote: > > > With notorious and life threatening waits for many procedures. No, thank you. > > > According to who? Health insurance companies? Lobbyists? Because that is simply not true. In Canada, when you need it, you get it. The more more life-threatening it is, the faster you get it. If you believe otherwise, you've bought into propaganda. Unfortunately for your argument I have too many friends in Canada and a few in England to buy your claim. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Feb 24 05:38:29 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 21:38:29 -0800 Subject: [ExI] A plea for us to do better In-Reply-To: <4D65DDC4.5050106@moulton.com> References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> <4D65DDC4.5050106@moulton.com> Message-ID: <788BA700-2638-4718-BEAF-03F0ECD36B50@mac.com> On Feb 23, 2011, at 8:25 PM, F. C. Moulton wrote: > Darren Greer wrote: >> On Wed, Feb 23, 2011 at 10:03 PM, Samantha Atkins >> wrote: >> With notorious and life threatening waits for many procedures. No, thank >>> you. >>> >> According to who? Health insurance companies? Lobbyists? Because that is >> simply not true. In Canada, when you need it, you get it. The more more >> life-threatening it is, the faster you get it. If you believe otherwise, >> you've bought into propaganda. >> >> darren > > Would everyone please cut out the hyperbole. Please. The situation is > much more complex and nuanced that either of the above positions. > Instead of just making bold assertions can I instead suggest defining > our terms, agreeing on a set of metrics and processes of measurement and > then taking a calm look at all available data. In point of fact I tried pretty hard to speak in terms of essentials for most of this libertarianism related discussion. Admittedly the above was a throwaway remark when I was feeling a bit annoyed that is not my best. - samantha From sjatkins at mac.com Thu Feb 24 05:47:07 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 23 Feb 2011 21:47:07 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? 
In-Reply-To: <009b01cbd3e1$43f393d0$cbdabb70$@att.net> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> <40434C26-6B36-489A-B674-A091AD5AF2C4@mac.com> <009b01cbd3e1$43f393d0$cbdabb70$@att.net> Message-ID: <64C5D9A5-6778-4280-A3F8-CF96469E1CB2@mac.com> On Feb 23, 2011, at 9:11 PM, spike wrote: > > ... On Behalf Of Samantha Atkins > >> ... Beware the possibility of Transhumanist Winter. > > - samantha > > > Transhumanist Winter? Do explain please? Did you mean transhumanism could > come to have a negative public image? > Yes. Now that Kurweilian Singularity in particular is becoming very widespread along with many other H+ memes, beware the backwash when things don't go as people dreamed they could in a reasonable seeming amount of time. Especially beware that if the economic mess really does go as horrible as I and others think it will. Also beware the future minded entitlement mentality that is going to be really really really pissed at someone (who doesn't matter as long as they don't have the goodies) when it becomes clear they aren't going to become gods with no effort at all required on their part. There are more than a few who cheer H+ now that more or less thing it will bring us to utopia and is so inevitable they needn't sweat much. - samantha From algaenymph at gmail.com Thu Feb 24 06:12:54 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Wed, 23 Feb 2011 22:12:54 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <20110223135439.GJ23560@leitl.org> References: <4D64805C.7040501@gmail.com> <20110223080054.GD23560@leitl.org> <4D64C0B3.3060008@gmail.com> <20110223093512.GF23560@leitl.org> <4D64DE1A.2080402@gmail.com> <20110223135439.GJ23560@leitl.org> Message-ID: <4D65F6E6.9020301@gmail.com> On 2/23/11 5:54 AM, Eugen Leitl wrote: > Moral high ground = negligible impact. But lots of > points for style, I grant you that. Thanks. :) > Transhumanism is just a word. You need a list of specific > activities. Noted, I'll see if I can break it down. Making healthspan indefinite, making gender change easy, enabling DIY projects that'll break the monopoly of Big Pharma. > Equity? Explain. Making sure everyone gets the toys, not just the rich. > Anti-corporatism is a pretty small niche. Still, maybe possible to ride > that. There are a some people who're unhappy with sustainability as > promoted by classical environmentalists. Pushing sustainable technology > would be a (small) niche. Better than nothing. From possiblepaths2050 at gmail.com Thu Feb 24 07:43:24 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 24 Feb 2011 00:43:24 -0700 Subject: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday Extropy Email List) In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <4D6498C4.2040306@canonizer.com> <4D64A3CE.8090207@satx.rr.com> Message-ID: On Tue, Feb 22, 2011 at 10:23 PM, John Grigg wrote: > Natasha, HAPPY BIRTHDAY!!!!! > > And may you have at least 100 more.... Adrian Tymes wrote: >Tack on at least one more 0 there. ;) If Natasha can make it to 161, then she has definitely reached longevity escape velocity!! John : ) On 2/23/11, Natasha Vita-More wrote: > Thank you all for the Birthday Greetings! 
> > I am reborn every day. > > http://www.natasha.cc/ageless.htm > > > Natasha Vita-More > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes > Sent: Wednesday, February 23, 2011 11:29 AM > To: ExI chat list > Subject: Re: [ExI] Happy Birthday Natasha also (was Re: Happy Birthday > Extropy Email List) > > On Tue, Feb 22, 2011 at 10:23 PM, John Grigg > wrote: >> Natasha, HAPPY BIRTHDAY!!!!! >> >> And may you have at least 100 more.... > > Tack on at least one more 0 there. ;) > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From eugen at leitl.org Thu Feb 24 09:28:24 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 24 Feb 2011 10:28:24 +0100 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> References: <4D64805C.7040501@gmail.com> <20110223095730.GG23560@leitl.org> <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> Message-ID: <20110224092824.GC23560@leitl.org> On Wed, Feb 23, 2011 at 08:54:41PM -0800, Samantha Atkins wrote: > > On Feb 23, 2011, at 1:57 AM, Eugen Leitl wrote: > > > > > Can we please let this thread die? The ban was there for a reason. Kthxbai. > > > > Hmm? If we can't handle this then this list is probably utterly worthless. There are some hot buttons which reliably produce a lot of heat and very little light. If human learning is not a myth we should figure out that keeping pressing them is not a good idea. > So typical. Important issues are brought up, talked about > a while, then there are calls to shut it down and even > appeals to a ban. This really really depresses me. People tend to exaggerate their differences when they're particularly similar. See the donklephant, where different parts of the same animal's anatomy apparently seem to in a fight with each other. Most curious. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Thu Feb 24 09:28:48 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 24 Feb 2011 05:28:48 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: 2011/2/24 Samantha Atkins >Unfortunately for your argument I have too many friends in Canada and a few in England to buy your claim.< Samantha: the claims is yours actually, that there are routinely life-threatening waits for medical procedures in my country. The burden of proof is on you. I considered briefly citing my own experience as a person with a serious illness having lived in both Canada and the U.S., but then considered that it is a weak defense compared to actual documented evidence. 
Therefore the "I have a friend" approach gets the same treatment. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 24 09:49:07 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 24 Feb 2011 05:49:07 -0400 Subject: [ExI] A plea for us to do better In-Reply-To: <788BA700-2638-4718-BEAF-03F0ECD36B50@mac.com> References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> <4D65DDC4.5050106@moulton.com> <788BA700-2638-4718-BEAF-03F0ECD36B50@mac.com> Message-ID: > > On Feb 23, 2011, at 8:25 PM, F. C. Moulton wrote: > Would everyone please cut out the hyperbole. Please. The situation is > much more complex and nuanced that either of the above positions. > Instead of just making bold assertions can I instead suggest defining > our terms, agreeing on a set of metrics and processes of measurement and > then taking a calm look at all available data. Thanks Fred. I normally don't get riled. But about healthcare I can lose my head sometimes. I was forced to move out of my beloved San Francisco back to Canada because of it, and it's hard for me to be analytical about it. But I will try, and if I am unable, I'll simply withdraw from the conversation. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 24 10:10:48 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 24 Feb 2011 06:10:48 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> <002901cbd081$9bb2b550$d3181ff0$@att.net> <4D640C85.6060007@mac.com> <4D641774.6020609@lightlink.com> <00e701cbd2e4$a63aa9f0$f2affdd0$@att.net> <4D647FC6.1080406@mac.com> <20110223164814.GC15944@ofb.net> <4D65BC58.9010203@mac.com> Message-ID: On Thu, Feb 24, 2011 at 5:28 AM, Darren Greer wrote: > > > 2011/2/24 Samantha Atkins > > >Unfortunately for your argument I have too many friends in Canada and a > few in England to buy your claim.< > > Samantha: the claims is yours actually, that there > are routinely life-threatening waits for medical procedures in my country. > The burden of proof is on you. I considered briefly citing my own experience > as a person with a serious illness having lived in both Canada and the U.S., > but then considered that it is a weak defense compared to actual documented > evidence. Therefore the "I have a friend" approach gets the same treatment. > Samatha: in light of Fred's comments, I have decided to forward you the Romonow report. The Romonow commission was set up to investigate the problems with the Canadian health care system over a two year period, and the report was released only last year. I was an advocate against, and professional critic of, our health care system for a number of years, and worked as a consultant for Health Canada to devise ways to improve the system in specific areas and increase accountability, most notably for Aboriginal health issues. 
There is a small section on diagnostic wait times that admits the need for improvement, but what the report doesn't say, and that you can find elsewhere on the net, is that those who wait for diagnostics or treatments with serious illnesses do so almost entirely because of initial mis-diagnosis based on symptoms. These cases have been used by insurance lobbyists and private health care advocates in other countries who cherry pick data to support their own agendas. As Fred said, for every Canadian that claims he had to go the U.S. for cancer treatment, you'll find someone from the U.S. who had to go India for a heart operation or to Poland for the new MS procedure. That kind of thing is not helpful, and is the source of much of the confusion and mis- (and dis-) information, on both sides. Start with this: it will give you an idea of the problems. If your country (and I'm not sure what that is) has any equivalent criticism, national in scope, of the health care system you champion, it would be helpful to see. I don't have a lot of time these days, but I'm willing to have a civilized debate about the subject. http://dsp-psd.pwgsc.gc.ca/Collection/CP32-85-2002E.pdf Darren > > Darren > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 24 11:21:22 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 24 Feb 2011 07:21:22 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <20110223171636.GF15944@ofb.net> References: <20110223171636.GF15944@ofb.net> Message-ID: On Wed, Feb 23, 2011 at 1:16 PM, Damien Sullivan wrote: You might get better responses if you gave the context. Are you arguing > with fairly informed and fully dedicated socialists? > Dedicated, yes. Informed? Not particularly. Although there is a fair amount of context and back-ground I'll try and give you some of it. It may help to cool this discussion down a bit too, because the original question was quite innocent and I got way more than I bargained for in the responses here. I am used to Exi being a fairly balanced place to go when looking for an intellectual assessment of a topic. The other discussion had degenerated into rhetoric and even name-calling and I thought I might find some reasoned arguments here to help bring it back on track. Unfortunately, the same thing has happened here, which is in itself an education of sorts. I'm as guilty as the next guy, getting embroiled in this one the way I did not get in the other. Perhaps it is because I'm more invested here. > > > >Are you trying to convince people to > change their mind on policy< Absolutely not. But I despise mis-statements, or at least statements that have no evidence other than "my Auntie Jane says so." Canada has for some time been laboring under an ultra conservative right wing government that slid into the barest minority after a financial scandal with the ruling party eight years ago. They have managed to hold on to that minority through various sleaze-ball politics, and the fact that the other party hasn't been able to recover from the blows it took and the third is admittedly too socialist for most tastes. Thinking people watch this government and know that they are trouble. Recently regarding Egypt we stood in true solidarity with Saudi Arabia, Israel and Libya in refusing to criticize Mubarek's treatment of protesters. 
Canada lost its seat on the U.N security council last year because of our foreign policy, which now makes U.S. foreign policy look balanced. We have alienated China by invoking human rights abuse claims, when anyone can see that it was done to keep that country away from our oil reserves, which are some of the largest in the world. The business community has now realized that government policy is hurting trade ( I have yet to look into the specifics of that) because of its eagerness to get reelected. It has consolidated enough power in the federal government and prime minister's office to become a dictatorship tomorrow, if it wants. There is now media blackout on inner party workings. It has silenced critics in the bureaucracy by gutting it (and Canada relies on a strong bureaucracy to ensure continuity when new political parties take control of the house and senate.) Now it is trying to make sweeping changes to the CRTC so that media outlets do not actually have to report the truth. They have couched this in political jargon, but it is being done to facilitate the entry of a new 24 hour news station similar to Fox news that is about to go on the air and that the prime minister's former aide was directed --openly and without fear of reprisal-- by the government to create. I do not exaggerate these issues. It is very serious. Canada's infamous politeness is being used against it. Or was our politeness just apathy in disguise? > or convince them that libertarians aren't > all selfish or insane?< > Aye, there's the rub, as Hamlet said. This party wears many masks. And one of the masks it wears is libertarian. It attempted to appeal to libertarians in the country by scrapping the long form census and citing privacy issues. I don't know if any libertarians were dumb enough to buy that, but a lot of smart social democrats were ( dumb people do dumb things. Smart people do really dumb things.) I pointed out to this group, mostly writers and poets, that I could not see how a libertarian, from what little I knew about them, could support this government. When I tried to articulate why, that privacy was less haloed now that they had instituted new CRTC regulations and were reading our mail, and that they had increased the national debt and centralized government and concentrated power even more, I was attacked as being libertarian. Which is kind of funny, given Samantha and I's discussion about health care. There seemed to be no distinction between libertarian and ultra-conservative. To them, they were one and the same. I suspected this wasn't true. I did some of my own research on-line and came back with some evidence. Not pretty. So I went to you guys thinking I could find a way to explain what libertarianism was, from those who held those views rather than from documents on-line. Also the discussion sparked an interest in me about why some transhumanists championed it. Was it for personal reasons, or was there a link between the advancement of technology and libertarian politics? I got some of what I needed, though I've abandoned the other discussion as hopeless. And I have to say, while I have been interested in this thread, and even got my blood pumping a few times, I'm gonna go back to Watson. :) Darren -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eugen at leitl.org Thu Feb 24 12:07:27 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 24 Feb 2011 13:07:27 +0100 Subject: [ExI] Q&A with Watson Message-ID: <20110224120727.GE23560@leitl.org> http://blog.reddit.com/2011/02/ibm-watson-research-team-answers-your.html IBM Watson Research Team Answers Your Questions Posted by Erik Martin (hueypriest) at 12:13 | Labels: can it find my pants?, iama, interviews Last week you requested someone who worked on Watson over in IAMA, and IBM Watson Research team was game to answer your top questions about Watson, how it was developed and how IBM plans to use it in the future. Below are answers to your top 10 questions, along with some bonus common ones as well. Thanks for taking the time to answer, Watson Team! -- 1. Could you give an example of a question (or question style) that Watson always struggled with? (Chumpesque) Any questions that require the resolution of very opaque references especially to everyday knowledge that no one might have written about in an explicit way. For example, ?If you're standing, it's the direction you should look to check out the wainscoting.? Or questions that require a resolving and linking opaque and remote reference, for example ?A relative of this inventor described him as a boy staring at the tea kettle for an hour watching it boil.? The answer is James Watt, but he might have many relatives and there may be very many ways in which one of them described him as studying tea boil. So first, find every possible inventor (and there may be 10,000's of inventors), then find each relative, then what they said about the inventor (which should express that he stared at boiling tea). Watson attempts to do exactly this kind of thing but there are many possible places to fail to build confident evidence in just a few seconds. 2. What was the biggest technological hurdle you had to overcome in the development of Watson? (this_is_not_the_cia) Accelerating the innovation process ? making it easy to combine, weigh evaluate and evolve many different independently developed algorithms that analyze language form different perspectives. Watson is a leap in computers being able to understand natural language, which will help humans be able to find the answers they need from the vast amounts of information they deal with everyday. Think of Watson as a technology that will enable people to have the exact information they need at their fingertips 3. Can you walk us through the logic Watson would go through to answer a question such as, "The antagonist of Stevenson's Treasure Island." (Who is Long John Silver?) (elmuchoprez) Step One: Parses sentence to get some logical structure describing the answer X is the answer. antagonist(X). antagonist_of(X, Stevenson's Treasure Island). modifies_possesive(Stevenson, Treasure Island). modifies(Treasure, Island) Step Two: Generates Semantic Assumptions island(Treasure Island) location(Treasure Island) resort(Treasure Island) book(Treasure Island) movie(Treasure Island) person(Stevenson) organization(Stevenson) company(Stevenson) author(Stevenson) director(Stevenson) person(antagonist) person(X) Step Three: Builds different semantic queries based on phrases, keywords and semantic assumptions. Step Four: Generates 100s of answers based on passage, documents and facts returned from 3. Hopefully Long-John Silver is one of them. Step Five: For each answer formulates new searches to find evidence in support or refutation of answer -- score the evidence. 
Positive Examples: Long-John Silver the main character in Treasure Island..... The antagonist in Treasure Island is Long-John Silver Treasure Island, by Stevenson was a great book. One of the great antagonists of all time was Long-John Silver Richard Lewis Stevenson's book, Treasure Island features many great characters, the greatest of which was Long-John Silver. Step Six: Generate, get evidence and score new assumptions Positive Examples: (negative examples would support other characters, people, books, etc associated with any Stevenson, Treasure or Island) Stevenson = Richard Lewis Stevenson "by Stevenson" --> Stevenson's main character --> antagonist Step Seven: Combine all the evidence and their scores Based on analysis of evidence for all possible answer compute a final confidence and link back to the evidence. Watson's correctness will depend on evidence collection, analysis and scoring algorithms and the machine learning used to weight and combine the scores. 4. What is Watson?s strategy for seeking out Daily Doubles, and how did it compute how much to wager on the Daily Doubles and the final clue? (AstroCreep5000) Watson?s strategy for seeking out Daily Doubles is the same as humans -- Watson hunts around the part of the grid where they typically occur. In order to compute how much to wager, Watson uses input like its general confidence, the current state of the game (how much ahead or behind), its confidence in the category and prior clues, what is at risk and known human betting behaviors. We ran Watson through many, many simulations to learn the optimal bet for increasing chances of winning. 5. It seems like Watson had an unfair advantage with the buzzer. How did Jeopardy! and IBM try to level the playing field? (Raldi) Jeopardy! and IBM tried to ensure that both humans and machines had equivalent interfaces to the game. For example, they both had to press down on the same physical buzzer. IBM had to develop a mechanical device that grips and physically pushes the button. Any given player however has different strengths and weakness relative to his/her/its competitors. Ken had a fast hand relative to his competitors and dominated many games because he had the right combination of language understanding, knowledge, confidence, strategy and speed. Everyone knows you need ALL these elements to be a Jeopardy! champion. Both machine and human got the same clues at the same time -- they read differently, they think differently, they play differently, they buzz differently but no player had an unfair advantage over the other in terms of how they interfaced with the game. If anything the human players could hear the clue being read and could anticipate when the buzzer would enable. This allowed them the ability to buzz in almost instantly and considerably faster than Watson's fastest buzz. By timing the buzz just right like this, humans could beat Watson's fastest reaction. At the same time, one of Watson's strength was its consistently fast buzz -- only effective of course if it could understand the question in time, compute the answer and confidence and decide to buzz in before it was too late. The clues are in English -- Brad and Ken's native language; not Watson's. Watson analyzes the clue in natural language to understand what the clue is asking for. Once it has done that, it must sift through the equivalent of one million books to calculate an accurate response in 2-3 seconds and determine if it's confident enough to buzz in, because in Jeopardy! 
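That last point, computing a confidence and deciding whether to buzz, is essentially Step Seven from question 3 followed by a threshold test. Below is a minimal sketch of the idea in Python; the evidence features, weights and buzz threshold are invented for illustration, since Watson's actual models are much larger and are trained by machine learning over many simulated games.

import math

# Hypothetical evidence scores for two candidate answers to one clue,
# each in [0, 1]; the feature names are invented for illustration.
evidence = {
    "Long John Silver": {"passage_support": 0.9, "type_match": 0.8, "popularity": 0.6},
    "Jim Hawkins":      {"passage_support": 0.4, "type_match": 0.8, "popularity": 0.7},
}

# Weights a learner might assign to each evidence type (made up here).
weights = {"passage_support": 3.0, "type_match": 1.5, "popularity": 0.5}
bias = -2.5

def confidence(scores):
    """Logistic combination of weighted evidence into a 0-1 confidence."""
    z = bias + sum(weights[name] * value for name, value in scores.items())
    return 1.0 / (1.0 + math.exp(-z))

BUZZ_THRESHOLD = 0.50   # buzz only when expected gain outweighs the penalty for being wrong

ranked = sorted(evidence.items(), key=lambda kv: confidence(kv[1]), reverse=True)
best_answer, best_scores = ranked[0]
best_conf = confidence(best_scores)

print(best_answer, round(best_conf, 2), "buzz" if best_conf > BUZZ_THRESHOLD else "pass")

The arithmetic is trivial; what matters in the real system is where the weights come from, namely training against prior question sets, and how quickly all the evidence scores can be produced.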
you lose money if you buzz in and respond incorrectly. This is a huge challenge, especially because humans tend to know what they know and know what they don't know. Watson has to do thousands of calculations before it knows what it knows and what it doesn't. The calculating of confidence based on evidence is a new technological capability that is going to be very significant in helping people in business and their personal lives, as it means a computer will be able to not only provide humans with suggested answers, but also provide an explanation of where the answers came from and why they seem correct. 6. What operating system does Watson use? What language is he written in? (RatherDashing) Watson is powered by 10 racks of IBM Power 750 servers running Linux, and uses 15 terabytes of RAM, 2,880 processor cores and is capable of operating at 80 teraflops. Watson was written in mostly Java but also significant chunks of code are written C++ and Prolog, all components are deployed and integrated using UIMA. Watson contains state-of-the-art parallel processing capabilities that allow it to run multiple hypotheses ? around one million calculations ? at the same time. Watson is running on 2,880 processor cores simultaneously, while your laptop likely contains four cores, of which perhaps two are used concurrently. Processing natural language is scientifically very difficult because there are many different ways the same information can be expressed. That means that Watson has to look at the data from scores of perspectives and combine and contrast the results. The parallel processing power provided by IBM Power 750 systems allows Watson to do thousands of analytical tasks simultaneously to come up with the best answer in under three seconds. 7. Are you pleased with Watson's performance on Jeopardy!? Is it what you were expecting? (eustis) We are pleased with Watson's performance on Jeopardy! While at times, Watson did provide the wrong response to the clues, such as its Toronto response, it is still a giant leap in a computer?s understanding of natural human language; in its ability to understand what the Jeopardy! clue was asking for and respond with the correct response the majority of the time. 8. Will Watson ever be available public [sic] on the Internet? (i4ybrid) We envision Watson-like cloud services being offered by companies to consumers, and we are working to create a cloud version of Watson's natural language processing. However, IBM is focused on creating technologies that help businesses make sense of data in order to enable companies to provide the best service to the consumer. So, we are first focused on providing this technology to companies so that those companies can then provide improved services to consumers. The first industry we will provide the Watson technology to is the healthcare industry, to help physicians improve patient care. Consider these numbers: Primary care physicians spend an average of only 10.7 - 18.7 minutes face-to-face with each patient per visit. Approximately 81% average 5 hours or less per month ? or just over an hour a week -- reading medical journals. An estimated 15% of diagnoses are inaccurate or incomplete. In today?s healthcare environment, where physicians are often working with limited information and little time, the results can be fragmented care and errors that raise costs and threaten quality. What doctors need is an assistant who can quickly read and understand massive amounts of information and then provide useful suggestions. 
In terms of other applications we?re exploring, here are a few examples of how Watson might some day be used: Watson technology offered through energy companies could teach us about our own energy consumption. People querying Watson on how they might improve their energy management would draw on extensive knowledge of detailed smart meter data, weather and historical information. Watson technology offered through insurance companies would allow us to get the best recommendations from insurance agents and help us understand our policies more easily. For our questions about insurance coverage, the question answering system would access the text for that person?s actual policy, the other policies that they might have purchased, and any exclusions, endorsements, and riders. Watson technology offered through travel agents would more easily allow us to plan our vacations based on our interests, budget, desired temperature, and more. Instead of having to do lots of searching, Watson-like technology could help us quickly get the answers we need among all of the information that is out there on the Internet about hotels, destinations, events, typical weather, etc, to plan our travel faster. 9. How raw is your source data? I am sure that you distilled down whatever source materials you were using into something quick to query, but I noticed that on some of the possible answers Watson had, it looked like you weren't sanitizing your sources too much; for example, some words were in all caps, or phrases included extraneous and unrelated bits. Did such inconsistencies not cause you any problems? Couldn't Watson trip up an answer as a result? (knorby) Some of the source data was very messy and we did several things to clean it up. It was relatively rare, less than 1% of the time that this issue overtly surfaced in a confident answer. Evidentiary passages might have been weighed differently if they were cleaner, however. We did not measure how much of problem messy data effected evidence assessment. 10. I'm interested in how Watson is able to (sometimes) use object-specific questions like "Who is --" or "Where is --". In the training/testing materials I saw, it seemed to be limited to "What is--" regardless of what is being talked about ("What is Shakespeare?"), which made me think that words were only words and Watson had no way of telling if a word was a person, place, or thing. Then in the Jeopardy challenge, there was plenty of "Who is--." Was there a last-minute change to enable this, or was it there all along and I just never happened to catch it? I think that would help me understand the way that Watson stores and relates data. (wierdaaron) Watson does distinguish between and people, things, dates, events, etc. certainly for answering questions. It does not do it perfectly of course, there are many ambiguous cases where it struggles to resolve. When formulating a response, however, since "What is...." was acceptable regardless, early on in the project, we did not make the effort to classify the answer for the response. Later in the project, we brought more of the algorithms used in determining the answer to help formulate the more accurate response phrase. So yes, there was a change in that we applied those algorithms, or the results there-of, to formulate the "who"/"what" response. 11. Now that both Deep Blue and Watson have proven to be successful, what is IBM's next "great challenge"? 
(xeones) We don?t assign grand challenges, grand challenges arrive based on our scientists' insights and inspiration. One of the great things about working for IBM Research is that we have so much talent that we have ambitious projects going on in a wide variety of areas today. For example: We are working to make computing systems 1,000 times more powerful than they are today from the petascale to the exascale. We are working to make nanoelectronic devices 1,000 times smaller than they are today, moving us from an era of nanodevices to nanosystems. One of those systems we are working on is a DNA transistor, which could decode a human genome for under $1000, to help enable personalized medicine to become reality. We are working on technologies that move from an era of wireless connectivity -- which we all enjoy today -- to the Internet of Things and people, where all sorts of unexpected things can be connected to the Internet. 12. Can we have Watson itself / himself do an AMA? If you give him traditional questions, ie not phrased in the form they are on jeopardy, how well will he perform- how tailored is he to those questions, and how easy would it be to change that? Would it be unfeasible to hook him up to a website and let people run queries? At this point, all Watson can do is play Jeopardy and provide responses in the Jeopardy format. However, we are collaborating with Nuance, Columbia University Medical Center and the University of Maryland School of Medicine to apply Watson technology to healthcare. You can read more about that here: http://www-03.ibm.com/press/us/en/pressrelease/33726.wss 13. After seeing the description of how Watson works, I found myself wondering whether what it does is really natural language processing, or something more akin to word association. That is to say, does Watson really need to understand syntax and meaning to just search its database for words and phrases associated with the words and phrases in the clue? How did Waston's approach differ from simple phrase association (with some advanced knowledge of how Jeopardy clues work, such as using the word "this" to mean "blank"), and what would the benefit/drawback have been to taking that approach? (ironicsans) Watson performs deep parsing on questions and on background content to extract the syntactic structure of sentences (e.g., grammatical and logical structure) and then assign semantics (e.g., people, places, time, organization, actions, relationship etc). Watson does this analysis on the Jeopardy! clue, but also on hundreds of millions of sentences from which it abstracts propositional knowledge about how different things relate to one another. This is necessary to generate plausible answers or to relate an evidentiary passage to a question even if they are expressed with different words or structures. Consider more complex clues like: ?A relative of this inventor described him as a boy staring at the tea kettle for an hour watching it boil.? Sometimes, of course, Jeopardy questions are best answered based on the weight of a simple word associations. For example, "Got ___ !" ? well if "Milk" occurs mostly frequently in association with this phrase in everything Watson processed, then Watson should answer "Milk". It?s a very quick and direct association based on the frequency of exposure to that context. Other questions require a much deeper analysis. 
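The "Got ___ !" case is easy to mock up: a purely associative answerer just counts which candidate occurs most often alongside the clue phrase in whatever text it has seen. A minimal sketch of that baseline follows; the toy corpus, candidates and scoring are invented for illustration and stand in for the much richer association evidence described above.

from collections import Counter
import re

# Toy corpus standing in for "everything Watson processed"; these
# sentences are invented for illustration only.
corpus = [
    "Got milk? The famous ad campaign asked everyone: got milk?",
    "Got milk posters appeared in schools across the country.",
    "He got home late and poured himself a glass of milk.",
    "Got questions about the milk board's got milk slogan?",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def association_scores(clue_tokens, candidates, corpus):
    """Score each candidate by how often it appears in sentences
    that also contain the clue tokens (simple co-occurrence count)."""
    scores = Counter()
    for sentence in corpus:
        tokens = tokenize(sentence)
        if all(t in tokens for t in clue_tokens):
            for cand in candidates:
                scores[cand] += tokens.count(cand)
    return scores

clue = ["got"]                      # the visible part of "Got ___ !"
candidates = ["milk", "home", "questions", "posters"]

print(association_scores(clue, candidates, corpus).most_common())

As the answer notes, the deeper analysis is what separates Watson from this baseline: the same machinery would happily answer "milk" to any clue containing "got".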
Watson has to try many different techniques, some deeper than others, for almost all questions and all at the same time to learn which produces the most compelling evidence. That is how it gets its confidence scores for its best answer. So even the ones that might have been answered based on word-association evidence, Watson also tried to answer other ways requiring much deeper analysis. If word association evidence produced strong evidence (high confidence scores) then that is what Watson goes with. We imagine this is to the way a person might quickly peruse many different paths toward an answer simultaneously but then will provide the answer they are most confident in being correct. 14. In the time it takes a human to even know they are hearing something (about .2 seconds) Watson has already read the question and done several million computations. It's got a huge head start. Do you agree or disagree with that assessment? (robotpirateninja) The clues are in English -- Brad and Ken's native language; not Watson's. Watson must calculate its response in 2-3 seconds and determine if it's confident enough to buzz in, because as you know, you lose money if you buzz in and respond incorrectly. This is a huge challenge, especially because humans tend to know what they know and know what they don't know. Watson has to do thousands of calculations before it knows what it knows and what it doesn't. The calculating of confidence based on evidence is a new technological capability that is going to be very significant in helping people in business and their personal lives, as it means a computer will be able to not only provide humans with suggested answers, but also provide an explanation of where the answers came from and why they seem correct. This will further human ability to make decisions. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Thu Feb 24 12:50:28 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 24 Feb 2011 08:50:28 -0400 Subject: [ExI] Q&A with Watson In-Reply-To: <20110224120727.GE23560@leitl.org> References: <20110224120727.GE23560@leitl.org> Message-ID: On Thu, Feb 24, 2011 at 8:07 AM, Eugen Leitl wrote: > Or questions that require a resolving and linking opaque and remote reference, for example ?A relative of this inventor described him as a boy staring at the tea kettle for an hour watching it boil.? The answer is James Watt, but he might have many relatives and there may be very many ways in which one of them described him as studying tea boil. So first, find every possible inventor (and there may be 10,000's of inventors), then find each relative, then what they said about the inventor (which should express that he stared at boiling tea). Watson attempts to do exactly this kind of thing but there are many possible places to fail to build confident evidence in just a few seconds.< This question is fairly easy for a human. Associate inventor and tea kettle (steam) and most would come up with Watt. I see how the tea kettle (and relative) phrasing would throw him, though, as opposed to asking directly who was James Watt. A human could know this answer without ever having the quote in his memory banks, because of the metaphorical association between a tea kettle and a steam engine ( probably drawn from image-based association, or at least it is for me.) 
So far, Watson and computers can't. If the quote isn't there -- toast. darren > http://blog.reddit.com/2011/02/ibm-watson-research-team-answers-your.html
-- *There is no history, only biography.* *-Ralph Waldo Emerson* -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Thu Feb 24 15:32:26 2011 From: natasha at natasha.cc (Natasha Vita-More) Date: Thu, 24 Feb 2011 09:32:26 -0600 Subject: [ExI] META: Overposting Message-ID: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> Please remember the list member's diet: 8 posts a day. Thank you! Natasha Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mrjones2020 at gmail.com Thu Feb 24 15:09:52 2011 From: mrjones2020 at gmail.com (Mr Jones) Date: Thu, 24 Feb 2011 10:09:52 -0500 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> References: <4D64805C.7040501@gmail.com> <20110223095730.GG23560@leitl.org> <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> Message-ID: On Wed, Feb 23, 2011 at 11:54 PM, Samantha Atkins wrote: > Hmm? If we can't handle this then this list is probably utterly worthless. > So typical. Important issues are brought up, talked about a while, then > there are calls to shut it down and even appeals to a ban. This really > really depresses me. > You would like the INTP list I'm on then. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbenzai at yahoo.com Thu Feb 24 17:39:31 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Thu, 24 Feb 2011 09:39:31 -0800 (PST) Subject: [ExI] Call To Libertarians In-Reply-To: Message-ID: <155926.8297.qm@web114408.mail.gq1.yahoo.com> Darren Greer wrote: > I've been thinking about > this kind of > society for awhile, and how it would work. One of the > answers I've come up > with, that sounds similar to what you describe, is to > establish a system of > territorial morality, where the doctrine is "you do your > thing and I'll do > mine and it's all OK as long as it doesn't hurt anyone > else." So how do you cope with a group of people who believe that, for instance, foetuses are people, and you are hurting someone else if you have an abortion? Or that you're hurting your children if you don't bring them up believing in a particular god, thus condemning them to eternal torture? Doesn't that give them grounds to interfere in the way you want to live, since, by their lights, you've violated the "as long as it doesn't hurt anyone else" principle? I don't see much chance of coming to an agreement of what 'hurting someone else' actually means, that would satisfy everyone. Ben Zaiboc From lubkin at unreasonable.com Thu Feb 24 18:08:43 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Thu, 24 Feb 2011 13:08:43 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: <20110223171636.GF15944@ofb.net> Message-ID: <201102241808.p1OI8nep020838@andromeda.ziaspace.com> Darren wrote: >The other discussion had degenerated into rhetoric and even >name-calling and I thought I might find some reasoned arguments here >to help bring it back on track. Unfortunately, the same thing has >happened here, which is in itself an education of sorts. I'm as >guilty as the next guy, getting embroiled in this one the way I did >not get in the other. Perhaps it is because I'm more invested here. I run an assortment of lists with the ground rules of civility and absence of hostility and a few whose membership overlaps with the former where nearly anything goes. On the civil lists, we've been able to productively discuss all sorts of precarious topics, by focusing on facts, rigor, and assuming good faith. Some choose to be on the civil lists only, some don't need no steenkin' rules. Among those on both, there are people like me, who are pretty much the same everywhere, and some who adapt -- jerks there, civil here. >Aye, there's the rub, as Hamlet said. I thought that was Berlusconi. >There seemed to be no distinction between libertarian and >ultra-conservative. To them, they were one and the same. "ultra-conservative" is an even-more meaningless term than conservative. 
(Especially when we're talking beyond the US. An American libertarian and a Brazilian libertarian have similar goals, policies, and beliefs. Consider what you're describing by a conservative in the PRC.) And most libertarians I know detest being labeled conservative by the left or liberal by the right. There is, however, an interesting bifurcation. Some libertarians are more dismayed by the left and some are more dismayed by the right. When your guy isn't going to win an election, which of the other candidates appalls you least? -- David. From spike66 at att.net Thu Feb 24 17:58:28 2011 From: spike66 at att.net (spike) Date: Thu, 24 Feb 2011 09:58:28 -0800 Subject: [ExI] META: Overposting In-Reply-To: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> Message-ID: <00ab01cbd44c$71919380$54b4ba80$@att.net> And the libertarian thread temporary open season is now back to our regularly scheduled programming, thanks. spike From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Natasha Vita-More Sent: Thursday, February 24, 2011 7:32 AM To: 'ExI chat list' Subject: [ExI] META: Overposting Please remember the list member's diet: 8 posts a day. Thank you! Natasha Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Thu Feb 24 19:01:16 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Thu, 24 Feb 2011 14:01:16 -0500 Subject: [ExI] Brief correction re Western Democracies In-Reply-To: <00a601cbd390$21a53e60$64efbb20$@att.net> References: <764392.59473.qm@web114411.mail.gq1.yahoo.com> <003e01cbd2a3$0cdcf2e0$2696d8a0$@att.net> <004201cbd2a6$e1ea1690$a5be43b0$@att.net> <20110223162400.GA15944@ofb.net> <201102231821.p1NIL4FY027163@andromeda.ziaspace.com> <00a601cbd390$21a53e60$64efbb20$@att.net> Message-ID: <201102241900.p1OJ0iBv000623@andromeda.ziaspace.com> Spike wrote: >What, you mean as in that passage in Exodus 20? God wrote ten rules in >stone, including one which forbids adultery. Yet it includes no actual >definition of the term adultery, nor any definition of marriage before that. >The term written on stone is the first time it shows up in the bible. >Apparently the children of Israel were left to take their best guess at what >this new term meant, and whatever they decided it was, they weren't to do >it. : >And so on. These lines of reasoning are likely exactly why the rabbis >demanded NO BIBLE READING ON YOUR OWN dammit. If one reads the bible >oneself and knows how to reason, there is no end to these kinds of >difficulties. Your analysis is a good example of the problem. You don't speak Hebrew, let alone Biblical Hebrew. You're relying on someone else's translation, centuries or millennia later. Consider how essential a preposition is to interpreting a verb -- knock up, knock down, knock on. (And how different what's meant when a Brit says he knocked up your sister.) Or how the meaning might change if the article is definite or indefinite, the noun is singular or plural, or there's a choice of what a pronoun is referring to. Or, as Bill Clinton noted, what the meaning of "is" is. This all folds back to the Watson discussion. ISTM we talked once about disambiguated postings, by tagging each word with the index of the intended meaning from a reference dictionary: "John-3 likes-2 peanuts-4." (Some semantic nets use a similar approach.) -- David. 
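[A small Python sketch of the disambiguated-posting idea David describes, assuming a made-up miniature sense inventory; a real system would index into a reference dictionary such as WordNet, and none of the indices or glosses below are taken from one.]

# Invented sense inventory -- the numbers are illustrative, not a real dictionary's.
SENSES = {
    ("John", 3): "the given name, here referring to one specific person",
    ("likes", 2): "finds agreeable or enjoyable",
    ("likes", 5): "resembles; is similar to",
    ("peanuts", 4): "edible seeds of the plant Arachis hypogaea",
    ("peanuts", 7): "a trivially small amount of money",
}

def gloss(tagged_posting):
    # Expand "word-N" tokens into (word, gloss) pairs; untagged words pass through.
    result = []
    for token in tagged_posting.split():
        word, sep, idx = token.rpartition("-")
        if sep and idx.isdigit():
            result.append((word, SENSES.get((word, int(idx)), "sense " + idx + " (not in inventory)")))
        else:
            result.append((token, None))
    return result

print(gloss("John-3 likes-2 peanuts-4"))
# [('John', 'the given name, ...'), ('likes', 'finds agreeable or enjoyable'),
#  ('peanuts', 'edible seeds of the plant Arachis hypogaea')]

The point, as in some semantic nets, is that the writer rather than the reader resolves the ambiguity, so a Watson-style system never has to guess which sense was meant.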
From eugen at leitl.org Thu Feb 24 19:43:22 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 24 Feb 2011 20:43:22 +0100 Subject: [ExI] building Watson Jr. in your basement Message-ID: <20110224194322.GL23560@leitl.org> https://www.ibm.com/developerworks/mydeveloperworks/blogs/InsideSystemStorage/entry/ibm_watson_how_to_build_your_own_watson_jr_in_your_basement7?lang=en From algaenymph at gmail.com Thu Feb 24 20:33:28 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Thu, 24 Feb 2011 12:33:28 -0800 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: References: <4D64805C.7040501@gmail.com> <20110223095730.GG23560@leitl.org> <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> Message-ID: <4D66C098.8010606@gmail.com> On 2/24/11 7:09 AM, Mr Jones wrote: > > > On Wed, Feb 23, 2011 at 11:54 PM, Samantha Atkins > wrote: > > Hmm? If we can't handle this then this list is probably utterly > worthless. So typical. Important issues are brought up, talked > about a while, then there are calls to shut it down and even > appeals to a ban. This really really depresses me. > > > > You would like the INTP list I'm on then. I'm interested. Have a link? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Thu Feb 24 20:14:23 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Thu, 24 Feb 2011 13:14:23 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: <155926.8297.qm@web114408.mail.gq1.yahoo.com> References: <155926.8297.qm@web114408.mail.gq1.yahoo.com> Message-ID: On Thu, Feb 24, 2011 at 10:39 AM, Ben Zaiboc wrote: > Darren Greer wrote: > >> I've been thinking about... a system of >> territorial morality, where the doctrine is "you do your >> thing and I'll do mine and it's all OK as long as it >> doesn't hurt anyone else." > > So how do you cope with a group of people who believe that, for instance, foetuses are people, and you are hurting someone else if you have an abortion? The current system of distinct jurisdictions with their own laws would seem adequate. Know the laws in the jurisdiction, and act lawfully. If the laws don't suit you, relocate to a jurisdiction where the laws are a better fit for your values. Best, Jeff Davis "There is only one basic human right, the right to do as you damn well please. And with it comes the only basic human duty, the duty to take the consequences." P.J. O'Rourke From mrjones2020 at gmail.com Thu Feb 24 21:04:03 2011 From: mrjones2020 at gmail.com (Mr Jones) Date: Thu, 24 Feb 2011 16:04:03 -0500 Subject: [ExI] Economic liberalism vs. conservatism: Why the debate here? In-Reply-To: <4D66C098.8010606@gmail.com> References: <4D64805C.7040501@gmail.com> <20110223095730.GG23560@leitl.org> <7E56279F-9F75-410E-82CF-AA0D39AEED1F@mac.com> <4D66C098.8010606@gmail.com> Message-ID: 2011/2/24 AlgaeNymph > I'm interested. Have a link? http://intp-list.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Thu Feb 24 22:57:51 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Thu, 24 Feb 2011 15:57:51 -0700 Subject: [ExI] democracy sucks Message-ID: I think I've completely cleansed the last vestige of "democracy is the way the truth and the light" from my meme set. Let others worship at that alter. I've moved on. "None of that democracy stuff for me, thanks. It doesn't agree with me. Multiple problems. I'm looking for something better." I can't stand "the stupids". 
Okay, so I'm elitist, have an inflated opinion of myself, and am rude and insensitive. Yeah, yeah. Get over it. What a flippin' idiotic system. There's just too much stupidity (not to mention bias, self-interest, greed, vanity, and perhaps the worst, misguided good intentions) for any hope of good governance. I don't have the time to waste writing, and you haven't the time to waste reading, about my notions of the multi-factorial clusterf*ck that is democracy. Stupid voters vote stupidly. Smart voters are overwhelmed. Powerful interests propagandize all voters, and pre-select pre-purchased candidates. They also pre-select and pre-purchase the experts who advise the candidates. There's no escape from the kleptogarchy. Not in a democratic system, anyway. "Democracy is the worst form of government except for all those others that have been tried." Winston Churchill Churchill missed his chance, and now he's dead. What he should have said was, "Democracy sucks. We have to do better." Can we identify the pros and cons re governance in general, and democracy in particular, and come up with something better, or some suggestions, or at least get pointed in the right direction? I'll start it off. What we like about democracy is that "we, the people" get a say. That seems good when compared to tyranny where "we, the people" only get to say, "How high?" It's an "Enlightenment values" thing. Two and a half centuries later, having a say is clearly important, but having a life is clearly MORE important. I can't help but notice the irony that the tribal cultures we are currently at war with have a tradition of governance by tribal elders. Putatively ***wise*** tribal elders. Which brings me to tribe vs nation state. The small tribal unit is the "natural" social form for humans. The tribal leader has personal contact and a human relationship with "his" people. The nation state on the other hand is a gathering together of large numbers of diverse "tribes". Why? Because once you master the management of populations substantially larger than a tribe -- think Roman Empire -- then you mobilize the resources necessary for conquest and expansion. At his scale, no authentic personal relationship is possible between leader and people. Rather, there is a leadership class, essentially a leadership tribe. This in my view is the basis of the modern nation state, and may explain the social pathologies that result from the unavoidable human distance between leadership (upper class) and people (lower class). I like the idea (yes, Samantha, there is a Santa Claus) of an ocean-borne society of small, value-homogenous social units (tribes), federated(informally?) for the purpose of defense, and where questions we would normally associate with "governance", are dealt with 'tribally', at the local level. This social structure would also work off-planet. YMMV. Best, Jeff Davis "We call someone insane who does not believe as we do to an outrageous extent." Charles McCabe From mrjones2020 at gmail.com Thu Feb 24 23:11:36 2011 From: mrjones2020 at gmail.com (Mr Jones) Date: Thu, 24 Feb 2011 18:11:36 -0500 Subject: [ExI] democracy sucks In-Reply-To: References: Message-ID: On Thu, Feb 24, 2011 at 5:57 PM, Jeff Davis wrote: > Which brings me to tribe vs nation state. The small tribal unit is > the "natural" social form for humans. > The tribal talk reminds me of a book I recently read. Have you ever read/heard of a book called "The Last Hours of Ancient Sunlight"? Good read. 
I agree with you, that governance should take place at a much more local, more 'connected' level. 'Tribes' are the way to go. Not for xenophobic reasons, but for practical reasons. Humans just don't have a large enough 'monkeysphere' to thrive in these huge metro areas we've grown accustomed. I'm willing to bet you'd see mental health increase the closer we got to 'tribal' societies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Fri Feb 25 01:12:59 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 24 Feb 2011 17:12:59 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640453.9010309@mac.com> <20110222141258.surcjqee0wssocgg@webmail.natasha.cc> <4D65BAC7.1090507@mac.com> Message-ID: <4D67021B.8030309@mac.com> On 02/23/2011 09:27 PM, Jeff Davis wrote: > Samantha, how sweet. We haven't chatted in a while. And now i see > that you're a studied libertarian (L?). I suppose if I'd been paying > closer attention, I'd have known that. *smiles* > On Wed, Feb 23, 2011 at 6:56 PM, Samantha Atkins wrote: >> On 02/23/2011 02:08 PM, Jeff Davis wrote: >>> Oddly, it seems to require only that enough people behind the curtain >>> in the polling booth mark their ballot correctly. Which is to say, for >>> the candidates put forth by The Accountability Party. >> Problem with this is that the vast majority (roughly 99%) of the government >> machinery is not subject to election at all. And it is very resistant to >> major change by incumbents. > Bureaucratic inertia. Worse than that. Much of effective law and more than a little of its enforcement is done by unelected regulatory agencies and their employees. Some of these bodies are only very loosely accountable to much less controllable by Congress. > Subject to legislative direction, no? Not so much as you might wish as per the above. > But also > a giant interest group/voting block in its own right. Don'[t you just > hate democracy sometimes? Paraphrasing something de Toqueville may > have said, 'the American republic (democracy?) will last until the > govt discovers that it can bribe its citizens with their own money.' Even better, it can just print up money and not even squeeze the citizens further or borrow it from other countries on the basis of its theoretical ability to squeeze its citizens even more in the future. >>> The Accountability Party is deliberately "preconfigured" to be >>> broad-based, having only two planks: Accountability and Jobs. >>> >>> No other issue is relevant except as relates to these two concerns. > > >> Being agnostic on everything but these two ungrounded concepts cannot >> possibly lead to a good outcome. No principles means > Not committing the party a priori to a menu of positions hardly means > having no principles. "No other issue is relevant" doesn't leave a lot of room for bringing up principles this may fly in the face of. > Why take a position that can only splinter the party and weaken it. > With the result being lost power, and interment in the ash heap of > history. The party can poll its members later during the legislative > session, work out niggling details, and get on with exercising power > on issues that matter. Free floating wish list items with no grounding in any principles whatsoever are a BS basis for any party and cannot last because there is no grounding. 
I mean you can satisfy everyone has a job by simply enslaving the entire country and putting any excess workers (newly employed) to work digging holes and then filling them back up. Nothing in the party planks precludes this implementation. >>> The two issues which the AP devotes its exclusive focus are: >>> accountability: no one is above the law. Everyone, but in particular >>> persons in high position who have traditionally 'enjoyed' immunity >>> from prosecution, will now have their get out of jail free cards >>> voided. >>> >> Which laws? > Honestly? I would start with war crimes. > By what? Geneva convention? > By "accountability" I essentially mean subject the ruling class in > general and the power elite in particular to a strong dose of "ethic > cleansing", so the entire society could start over with a clean slate. > Start over, but with the former upper reaches of society on notice > that the law now applies to them. No, really. > This seems like blaming the powerful politically and or the rich-er as a class. This has been so busted so many times when it has been tried before. Simple envy would make it very popular as it has been before. The results would be unlikely to be much better without considerable more refinement and statement of and adherence to some of those pesky principles. >> Which laws are legitimate to start with? >> Sort that out later. >> >> How do you know? > Apply libertarian principles? Why not? We'll certainly have to sort > that out. Let's talk it over. > If you don't start with any principles I don't see how you can safely leave it to later. >> All now are equal under the law as a standing principle. > A standing principle for the semi-washed masses, perhaps. We both > know that US Presidents and legislators have never been prosecuted for > war crimes. No. It is a firm part of what we are already supposed to be about. Fixing instances where it is not the case is a fine thing. I would press criminal charges if not treason on many a past and present politician as many violate their oath of office wholesale. >> How would you make it more so? > Easy. Prosecute the formerly unprosecuted. All of them. That includes all those not prosecuted for "crimes" that are victimless and not possible to apply to everyone "guilty" without imprisoning the entire country. No principles means no basis for discrimination among laws. > This doesn't imply draconian penalties. It isn't about revenge. It's > about starting over with a clean slate and a "rule of law" that does > its job. If you are picking on the powerful for being more powerful than you or I and the richer for having more money than you or I and you are also speaking of and to the sentiments of the "average person" then you are in revenge territory. >>> And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. >>> >> WHAT? > Now, now, don't get upset. People vote their pocketbooks. Economics > is all. Establish a principle that everyone is ***ENTITLED*** to > their piece of the economic pie, and they should vote for you in large > enough numbers to guarantee that you get the power to implement > necessary reforms. > Economics, while maybe not all, is not served by pretending their are limitless means to satisfy limitless wants. That is a denial of economic reality. You can't spend your way out of bankruptcy. Ask Zimbabwe whether you can print enough money to get out of bankruptcy. People are not in the least entitled to a slice of the economic pie just by virtue of being born. 
Not when the pie is finite and produced by the work of others. This would be a denial of justice and reality. So you want to gain power by promising things that are irrational (counter to reality), unjust and will destroy the economy if implemented? Go the the end of the line. There are a lot of would be politicians lined up to do that. > It's the Lombardi principle: winning is everything. That entirely depends on what exactly you have "won" and what is left when you have done so that you care about or even to look at. >> Even if that can offer no value whatsoever in exchange? > Yes, if needs be. (But your question presumes no value. I do not > propose a "no value" exchange.) Yes, you do. You propose to give everyone a paycheck regardless of whether their skills and/or labor have any real value in a free market or not. >> How is this just > It reconfigures the economic system, eliminating the "war of all > against all". High level political and economic crime will be > deterred. There will be a societal shift away from parasitism and > toward greater productivity. Economic activity will then > equilibrate, and life will go on. But better. That is not remotely a meaningful answer. Why would their be greater productivity when you print and borrow money like mad to make sure everyone has a paycheck thus destroying the financial basis of the economy and producing (sooner or later) rampant inflation? Why would their be greater productivity when everyone knows they have a roof over their had, food on the table and other essential things as a matter of entitlement even if they play games all day or spend everyday in a stupor? Why would the productive remain productive and become more so when they have to pay more and more in taxes or the money they make is worth less and less and they have to support many more parasites on the system? > Rinse and repeat. > or Flush. >> and how does it lead to a better world? > See above. And by the way, if at first you don't succeed, tweak , and > tweak again. (Till you get it right, or stop breathing. Is there > another choice?) There is nothing above but empty claims that disintegrate under even rudimentary analysis. >>> the Treasury has a machine that >>> prints checks, so the policy is secured, "Move right along. Nothing to >>> see here." >> That will finish destroying the value of the dollar very very quickly and the country with it. > No it won't. Please explain and show your work. >> Progressive tax is regressive to actually growing an >> economy. > No it isn't. Whatever. If you aren't interested in any real dialogue I am wasting my time. >> It has been seen over and over again. > No it hasn't. > >> Not to mention be utterly unjust and immoral. > Nothing could be more moral and just than to confirm, and apply, the > principle that every person is ENTITLED to a living wage from the > economic pie. By what standard of morality validated how? The above is simply a claim with no argument whatsoever for its validity. > By the way, I base my challenge to your assertions about the economic > consequences of taxation, on the claim that it's just ruling class > propaganda. Which is another empty assertion. > No doubt you will counter with some conservative or > "Austrian" economist as authority. It's the same old story from the > dim recesses of time. The intellectual class provides "scholarly" > justifications for the predation of the wealthy. 
Oh, so now you are going to pull a classist argument claiming all counter-arguments are bourgeois conditioning and rationalisation. Is see. Glad we cleared that up. The Communist did a more convincing job of that. > And one other thing: we're on the same side , seek the same end. Hard > to believe, but true. Libertarian principles-wise. No, we are not remotely on the same side judging from what you have said above. - samantha From anders at aleph.se Fri Feb 25 01:55:05 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 25 Feb 2011 01:55:05 +0000 Subject: [ExI] democracy sucks In-Reply-To: References: Message-ID: <4D670BF9.4090004@aleph.se> Jeff Davis wrote: > Can we identify the pros and cons re governance in general, and > democracy in particular, and come up with something better, or some > suggestions, or at least get pointed in the right direction? > Hmm, from reading your post I think your problem is with the complex representative democracy systems we have today. Not so much with democracy, just that it is not direct. The problem with direct democracy is that it does not scale. And the problem with tribal democracy is that *tribes* don't scale. If you are OK living in a small society you would probably get both egalitarianism and a direct political say for free, assuming it can be kept small enough to at most reach the anthropological "big man" stage but avoid the big man becoming a chief. This means groups of 50-100 people. This might fit the evolved human psyche fine, but it is economically hopeless: there is not room for economic specialisation, it misses economies of scale, and it is not possible to maintain rare but important skills (think chip designers). One can try to patch it by having the tribes trade and send kids to each other for higher education, but as soon as the ties become strong enough to be useful you end up with a larger society and the original problems. The isolated little space/sea habitat might be egalitarian and free, but the larger network of minds in the mainstream civilization will be roaring past it in terms of productivity and progress despite their limited freedom and bureaucratic overheads. Basically, I think there is no way you can avoid a complex, remote government if you want to have a complex big society. And most of the time we do not want to have a say, since most questions are irrelevant or incomprehensible to us. Just as there are benefits in economic specialisation there are benefits in political specialisation. What we should be aiming for is *open societies*: societies where it is possible to observe, criticise and change the activities and structure of the government. Democracy (plus free press) is useful because it tends to maintain open societies, not so much because democracy itself is good. Having competition elements in the political system is a good idea for the same reason it is a good idea in markets: it rewards efficient and successful policies, while it punishes bad policies. So my way of rephrasing the question is: what governance structures enable open societies to function well and maintain their governance? 
It seems to me that they should have a high degree of transparency/traceability so problems can be found and the relevant parts held accountable, modularity so that corrections of one part does not mess up other parts, a suitable level of responsivity so that they adapt but are not too affected by noise (current political fashions, the latest blogquake), and provide a reward mechanism for constructive criticism/modification that is not easily short-circuited. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From anders at aleph.se Fri Feb 25 01:05:52 2011 From: anders at aleph.se (Anders Sandberg) Date: Fri, 25 Feb 2011 01:05:52 +0000 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: <31A10575-EA19-4BDB-A0BF-4C491F971468@bellsouth.net> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <31A10575-EA19-4BDB-A0BF-4C491F971468@bellsouth.net> Message-ID: <4D670070.4020906@aleph.se> On Feb 22, 2011, at 12:37 AM, Max More wrote: > >> Writers from the early 1990s: If you agree to allow all of your >> postings to the original Extropians email list to be publically >> available, please let me (and the world -- or at least the >> Extropians-Chat email list) know. I'm happy to have my posts public. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From brent.allsop at canonizer.com Fri Feb 25 04:07:42 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Thu, 24 Feb 2011 21:07:42 -0700 Subject: [ExI] a fun brain in which to live In-Reply-To: <00c801cbc62a$7aedec10$70c9c430$@att.net> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net> <4D4DA986.1030802@canonizer.com> <00c801cbc62a$7aedec10$70c9c430$@att.net> Message-ID: <4D672B0E.3@canonizer.com> Spike, Hope you don't mind if I get back to this discussion, and probe a bit more. This is definitely a good response, and I know what you mean by being interested mostly just in the world which can be represented by equations. I just have one question. Do you ever think of uploading? Or of having your mind being run on another computer? Obviously, there is the stuff going on in your mind, and there is the stuff going on in my mind, and with the uploaded mind, there would be the stuff going on in that mind also... Right now, the left side of your visual field is represented in the right hemisphere, and the right side in the other hemisphere. Obviously these two are merged together, so you know what each is like, all together in the same mind. I'm glad you'd like to try on my brain, and I'd hope you'd like to also experience what your upload brain is like, at the same time you are experiencing your brain. Sure, all this behavior could be modeled by equations, but how do you glue it all together, in a phenomenal way, so you know what it is all phenomenally like? If you are interested in imagining what uploading, and the future is going to be like, is not this kind of stuff one of the most important parts? Isn't knowing that a red quality is not a property of the strawberry, but instead is a property of your knowledge of such? 
A property of something in your brain, and that this property can be merged with all the rest of your phenomenal knowledge? We should be able to capture such merging together with equations, but till you eff them, and share them, we'll never really know what they are like. Brent Allsop On 2/6/2011 11:20 AM, spike wrote: >>> Likewise, yours is a brain I would like to try on, just to figure out >>> what is qualia. I confess I have never understood that concept, but >>> do not feel you must attempt to explain it to me... spike > > Brent it isn't so much a problem with the concept of qualia, rather it is > just me. I live in a world of equations. I love math, tend to see things > in terms of equations and mathematical models. Numbers are my friends. I > even visualize social structures in terms of feedback control systems, > insofar as it is possible. Beyond that, I don't understand social systems, > or for that matter, anything which cannot be described in terms of systems > of simultaneous differential equations. If I can get it to differential > equations, I can use the tools I know. Otherwise not, which is why I seldom > participate here in the discussions which require actual understanding > outside that limited domain. > > The earth going around the sun is a great example. With that, I can write > the equations, all from memory. I can tweak with this mass and see what > happens there, I can move that term, derive this and the other, come up with > a whole mess of cool new insights, using only algebra and calculus. > Mathematical symbols are rigidly defined. But I am not so skilled with > adjectives, nouns and verbs. Their definitions to me are approximations. I > don't know how to take a set of sentences and create a matrix, or use a > Fourier transform on them, or a Butterworth or Kalman filter, or any of the > mind-blowing tools we have for creating insights with mathematized systems. > > All is not lost. In the rocket science biz, we know we cannot master every > aspect of everything in that field. Life is too short. So we have a > saying: You don't need to know the answer, you only need to know the cat who > knows the answer. > > In the field of qualia, pal, that cat is you. Qualia is the reason > evolution has given us a Brent Allsop. > > So live long, very long. > > spike > > > > > > > From brent.allsop at canonizer.com Fri Feb 25 05:09:08 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Thu, 24 Feb 2011 22:09:08 -0700 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups? Message-ID: <4D673974.7040207@canonizer.com> Extropians, It seems to me that if you could some way have an easy way to reliably, easily, and in real time, know concisely and quantitatively, what the entire population of Libya wanted, war could easily be avoided. Why do we all have to spend so much effort protesting before anyone finally gets a clue as to what the people want? If you could easily know, concicely and quantitatively what everyone wanted, obviously, if the leader was diviating from this, especially if he wanted to kill anyone, everyone could just ignore him, and just do what the people wanted, instead, couldn't they? Problem solved? Does anyone think differently? Everyone is asking the question, what should the US, and other countries do, to help out Libya and similarly struggling countries? Why is everyone only asking or talking to the leaders, at the tops of all the hierarchies, and if that one is taken out, find another. 
Is nobody interested in what the people of Libya want? Isn't that the only problem? Of course, primitive survey systems, like simple voting, or checking 1 of 4 possible choices, don't work very well, and are so difficult. Who gets to decide what the 4 choices are, and what they mean? You need some kind of open survey system that is set up in a way that is constantly improving, bottom up, and one that will build as much consensus as possible (pushing the banal disagreeable stuff to lower camps - out of the way), and follow that consensus as it morphs into the future and jumps away from falsified camps, to far better ones.... And it needs to be some kind of expert-based system. So uneducated people in a particular field can easily delegate their vote / support to someone they trust more than themselves. And that delegatee can do the same to someone else, and so on... so the real experts at the top of such trees can make much more educated choices than all the clueless idiots... And of course, all transhumanists are just one big group of individuals all warring and criticizing each other on the most trivial details, and we never get anything done at all, and never have any influence over anything. But I bet you if we had the right consensus-building system (where the trivial, less important disagreeable stuff we spend all our time on could be pushed to lower level camps), all the real moral and scientific experts at the tops of such delegated tree structures would be far more transhumanist than the general clueless population. With such a system dictating the morals of society (rather than all the primitive, war-mongering, bottlenecked hierarchies) and telling us what our priorities are and so on, I bet we could rule the world and finally bring the singularity to pass. My hypothesis is that it is all simply a matter of communication. How do you know what the best experts in the crowd want, concisely and quantitatively? What is the moral expert consensus? What is the scientific consensus? What is the transhumanist consensus? If you can know that, suddenly there is no more reason for war and fighting. This hypothesis has led me to try building something like canonizer.com, but everyone seems to hate it, and like everything else, everyone just wants to criticize, fight it and destroy it, and go back to doing everything on their own in a do-it-yourself lonely way - damn everyone else. So maybe someone can come up with some kind of better method of knowing what all us experts want, concisely and quantitatively, in any kind of consensus-building way, so maybe we can work together and get something done, other than just finding disagreeable things and focusing and criticizing everything and everyone on that, as we continue to watch the world still wallow in primitive rotting misery? We just buried our mother-in-law. Despite my obvious horror, I couldn't even talk about it, the family just put her in the grave to rot. Yes, she is the one I told you about that was asking me about transhumanism the other day. But that is about as far as she got. I'm getting tired of rotting these people in the grave and sitting around watching as if we can do nothing. I just want to know what all you experts believe is best, and want to get to work on doing it all, together. We obviously still aren't getting much done as lone individuals.
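[A toy Python sketch of the delegation idea described above: people either state a position ("camp") directly or hand their support to someone they trust, and support flows up the tree to whoever finally takes a position. The names, camps and data structure are hypothetical, and this is not canonizer.com's actual algorithm.]

def resolve_support(direct_choice, delegates_to, voter):
    # Follow delegations until we reach someone with a direct position,
    # or give up if we hit a cycle or a dangling delegation.
    seen = set()
    current = voter
    while current not in direct_choice:
        if current in seen or current not in delegates_to:
            return None
        seen.add(current)
        current = delegates_to[current]
    return direct_choice[current]

def tally(direct_choice, delegates_to, voters):
    counts = {}
    for v in voters:
        camp = resolve_support(direct_choice, delegates_to, v)
        if camp is not None:
            counts[camp] = counts.get(camp, 0) + 1
    return counts

# Hypothetical data: ann and bob state positions; the others delegate upward.
direct_choice = {"ann": "camp A", "bob": "camp B"}
delegates_to = {"carol": "ann", "dave": "carol", "erin": "bob", "frank": "frank"}
print(tally(direct_choice, delegates_to, ["ann", "bob", "carol", "dave", "erin", "frank"]))
# {'camp A': 3, 'camp B': 2} -- frank delegates to himself, so his support is dropped.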
Can we not do more than just spending an eternity in eternal yes no arguments over such things as Libertarianism vs what, over and over again, year after year after year, for now more than 20 years? Lets just find some way to definitively state what everyone wants, concisely and quantitatively, and finally just get started on doing it all, for everyone. Brent Allsop From spike66 at att.net Fri Feb 25 07:09:11 2011 From: spike66 at att.net (spike) Date: Thu, 24 Feb 2011 23:09:11 -0800 Subject: [ExI] a fun brain in which to live In-Reply-To: <4D672B0E.3@canonizer.com> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net> <4D4DA986.1030802@canonizer.com> <00c801cbc62a$7aedec10$70c9c430$@att.net> <4D672B0E.3@canonizer.com> Message-ID: <003401cbd4ba$e7418ef0$b5c4acd0$@att.net> ... On Behalf Of Brent Allsop Subject: Re: [ExI] a fun brain in which to live ... >...Spike, I just have one question. Do you ever think of uploading? Or of having your mind being run on another computer?... Of course, all the time. In the long run, I see that as the only reasonable future of mankind. Eventually in some form, software will write software, and the result will be recursive self-improvement, and my fond hope is that the result will want to upload us. I am one who is convinced that consciousness is not strictly substrate dependent. Once we exist as software, the things we can do with our brains will be astonishing in variety. An example is one I brought up before. I want to be able to view the world, at least temporarily, through female eyes. That would allow me to understand the things women think, and that would make me a better husband. Some things I just utterly fail to understand, starting with what in the heck to women see in us? >...Obviously, there is the stuff going on in your mind, and there is the stuff going on in my mind, and with the uploaded mind, there would be the stuff going on in that mind also... Imagine that we can unify two or more different brains, and have a being that is the superset of each individual. Then you might choose a person who is wildly different from you, with which to temporarily unify. I don't know what happens when people merge their consciousness, but we can't do it now. We might be able to in the uploaded condition. >...Sure, all this behavior could be modeled by equations, but how do you glue it all together, in a phenomenal way, so you know what it is all phenomenally like? I don't know. I want a shot at it, which is why I am probably going to go in for cryonics. I actually don't think the singularity will happen in 30 years (it might) but rather about 50, at which time I would be 100. I might not make it that far. >...If you are interested in imagining what uploading, and the future is going to be like, is not this kind of stuff one of the most important parts?...but till you eff them, and share them, we'll never really know what they are like...Brent Allsop Ja. I still just don't know with so much of this. I will sadly confess that fifteen years ago I thought we would be farther along by now than we are. But the singularity is still coming eventually, and when it does, I can imagine no logical stopping place for it short of all the metals in the solar system converted to computronium to form an MBrain, with humans uploaded. 
spike From spike66 at att.net Fri Feb 25 07:22:28 2011 From: spike66 at att.net (spike) Date: Thu, 24 Feb 2011 23:22:28 -0800 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups? In-Reply-To: <4D673974.7040207@canonizer.com> References: <4D673974.7040207@canonizer.com> Message-ID: <003b01cbd4bc$c29388e0$47ba9aa0$@att.net> ... On Behalf Of Brent Allsop ... Brent, so sorry to hear of your mother-in-law's passing. Sincerest condolences to the family. >...It seems to me that if you could some way have an easy way to reliably, easily, and in real time, know concisely and quantitatively, what the entire population of Libya wanted, war could easily be avoided...Brent Allsop Any war over Libya has little or nothing to do with what the Libyans want. It has to do with the fact that they are sitting on a huge reserve of light sweet crude. There are so many refineries, especially in Europe and Asia, which can only deal with those high grades of oil. Some US refineries need the light sweet grades, which we buy from Algeria and Nigeria. If the mad dictator of Libya manages to interrupt the flow of light sweet crude to Europe, then they are forced into a bidding war with the US for equivalent grades in those places. We have a big problem on our hands, a huge problem. None of this has anything to do with what the Libyans want. It really is all about that oil. They have it, we need it. spike From js_exi at gnolls.org Fri Feb 25 07:45:50 2011 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 24 Feb 2011 23:45:50 -0800 Subject: [ExI] Banking, corporations, and rights (Re: Serfdom and libertarian critiques) Message-ID: <4D675E2E.8030901@gnolls.org> Kelly Anderson wrote: > The first baby step would be to get rid of the Federal Reserve. That I > would be behind today, immediately. I think that is a fairly common > stand amongst Libertarians, but I could be wrong. Absolutely. But as I state above, the fundamental problem remains: a special class of people ("banks") with the special privilege of creating money from thin air. > I have considered eliminating banks, but my question would be what > would you replace them with? There is a necessity for capital > investment, and economies of scale in managing capital are important. The problem isn't banks: it's fractional reserve banking. The function of a "bank" is to keep your money safe, for which you would likely be charged a small fee. If you didn't want to pay that fee, or you wanted to offset it, you would likely permit the bank to invest some fraction of your money for you on your behalf. In other words, your "banking" account would look just like your brokerage account currently does. Stocks, bonds, and money market funds are very liquid, but they're not "same as cash". You can't write checks or use your ATM card against investments...only against your cash balance. This would be far superior to our current system, in which you have no choice where your money is invested. As I mentioned before, all of your money in a "checking account" is forced into shares of a hedge fund making 30:1 leveraged investments in mortgage-backed securities, and which you are forced by "legal tender" laws to accept as if it were real money. > As for business, do you think the CEO of a business should be > PERSONALLY responsible for the actions of each of his employees? Absolutely. All people should be equal under the law. 
Allowing the creation of a virtual person ("corporation") onto which liability can be deflected gives officers of the "corporation" special legal privileges which the rest of us do not enjoy. *** The very concept of the "corporation" violates the most basic tenet of human rights: equality under the law. *** Consider: We've created a race of virtual beings which are immortal, cannot be physically punished, have the money and resources of tens of thousands, and which can dissociate and reorganize their own component parts whenever and wherever it's convenient. And then we're surprised that these "corporations" run everything...? JS http://www.gnolls.org From giulio at gmail.com Fri Feb 25 09:13:25 2011 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 25 Feb 2011 10:13:25 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: I just left this comment to the new article of RU Sirius on H+ Magazine: Open Source Party 2.0: Liberty, Democracy, Transparency! http://hplusmagazine.com/2011/02/24/open-source-party-2-0-liberty-democracy-transparency/ I have been reading this article again, and also the two previous articles The Open Source Party Proposal and The QuestionAuthority Proposal. As in my previous comment, the events of the last 3 years show that the time may be right to seriously thinking about giving power back to the people, to whom it belongs. The open source and free software movements, the Pirate Parties, Wikileaks, Anonymous, Bitcoin and the possible renaissance of the cypherpunk movement are steps in the right directions. In the original Open Source Party proposal I like very much the Liberal/Libertarian characterization of this emerging approach to politics. Of course (look at the comments in the original article) both fundamentalist Libertarians and fundamentalist Liberals reject it with outrage, which makes me think that we are moving in the right direction: fundamentalism means abandoning reason in favor of a prepackaged one-line world-view which fits on a t-shirt. Instead, like for many other issues, the only solution is that there is no solution. The conflict between pure Libertarianism and pure Liberalism is here to stay and become worse, and we should stay away from both extremes and look for pragmatic, workable, ad-hoc midway local solutions. Liberals want to protect citizens and the government from evil big corporations, and Libertarians want to protect citizens and corporations from evil big governments. I want to protect citizens from both evil big governments and evil big corporations, and I think the Open Source Party proposal represents a good initiative in the right direction. As far as the implementation of the proposal is concerned, I would not recommend starting new political movements and parties. Rather, I would recommend joining forces with the Pirate Party, which the Party of the Free Internet and the only really novel and innovative political force to emerge in this century. The Pirate Party and its local national instances have achieved a certain success by linking theoretical open source politics with practical initiatives in defense of the citizens. At this moment the Pirate Party is only focused on IT technologies, but I see its stance in favor of individual empowerment and against current IP and copyright laws as a much more general platform which, in the future, could include bio-hacking, neuro-hacking and support morphological freedom. 
Therefore, I think transhumanists should support the Pirate Party. On Wed, Feb 23, 2011 at 10:14 AM, Giulio Prisco wrote: > < > (A) Initiation of force is bad. > (B) Starving children is bad. > > The question is which is worse. A libertarian would say initiation of force > is unacceptable; figure out some other way to feed starving children. A > liberal would say that starving children is unacceptable and so be it if > force is necessary to avoid it.>> > > Well put. It is not easy when primary values are in conflict. In these cases > I tend to look for midway solutions, like feeding children as much as > possible while reducing initiation of force to the strictly necessary > minimum. Needless to say, both fundamentalist libertarians and > fundamentalist liberals dislike midway solutions. > > -- > Giulio Prisco > giulio at gmail.com > (39)3387219799 > (1)7177giulio From darren.greer3 at gmail.com Fri Feb 25 15:03:25 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 25 Feb 2011 11:03:25 -0400 Subject: [ExI] democracy sucks In-Reply-To: References: Message-ID: On Thu, Feb 24, 2011 at 6:57 PM, Jeff Davis wrote: "None of that democracy stuff for me, > thanks. It doesn't agree with me. Multiple problems. > Funny enough, Aristotle agreed with you, and for similar reasons. In The Politics he characterizes it as tyranny by the majority. He classes it right up their with oligarchy and monarchy as potentially corrupt systems of governance. I've always thought it strange that we claimed to be looking to the ancient Greeks for our inspiration for it, when their greatest political thinker and philosopher next to Plato (and better than than Plato in my opinion, because he made no mind-body duality distinction and didn' t set us up for the one God-heaven-soul-original sin crap that was to come later like Plato did) dumped all over it. > > > Can we identify the pros and cons re governance in general, and > democracy in particular, and come up with something better, or some > suggestions, or at least get pointed in the right direction? > I have no suggestions other than my one-note symphony that if you can somehow remove moral judgement from your system, it would work better and people would be happier. But I'm not a fan of pure political ideologies anyway. Human collectives move and grow organically, and should be treated that way. We hold onto our system come hell or high water because it worked in the past and it's a good idea on paper. But like Thomas Hardy said, "nothing bears out in practice what it promises incipiently." Time to take the blinders off and admit it's not working, or at least tweak it or change it or do something before you're left standing in the rain in nothing but you're underwear. And based on what is currently happening in my country, I'd have to say I'm getting a tad elitist too. Thank Newton for Entertainment Tonight, so people have something to do besides investigating why a bunch of low-lifes are dismantling the mechanisms originally put in place to protect them from oligarchs and tyrants. Ahem. Clearly I need some coffee. D. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 25 15:27:37 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 25 Feb 2011 08:27:37 -0700 Subject: [ExI] Same Sex Marriage In-Reply-To: References: Message-ID: 2011/2/23 Kevin Cadmus : > Thanks, Kelly, for bringing up a favorite topic of mine. 
Glad it hasn't started a flame war. :-) I think there is room for some interesting discussion in this area. > Perhaps the best way to get government out of the marriage business is to > foment revolution within the huge mass of single folks. ?They are > discriminated against in many ways, some subtle and some not so subtle. ?By > educating this group about how they are getting the shaft by government's > bestowal of privilege to married folk, maybe there will be a new faction > saying that, "We aren't going to take this anymore!" There seems to be something of a consensus (in the US) these days that there should be a "separation of church and state", although I don't know the exact history of how this got started. The current form of the idea clearly doesn't go back to the founding. Marriage is a weird construct that combines state and church in a most unusual way. Separating the state view of marriage from the religious view of marriage seems like a really complex and difficult thing to achieve. At first, it would seem like the easiest way to do it would be to simply get rid of the government version, but I'm not sure that comes without some societal costs. > Is it so hard to imagine a push for a new U.S. constitutional amendment > along the lines of "Congress shall pass no legislation that discriminates > according to marital status." It's hard to see it getting passed. The government wants to be in the social engineering business (even though they mess it up every time they try, IMNSHO) and there really is a lot of research supporting the idea that kids with two parents do better than kids with only one parent (avoiding the issue of whether the parents are straight or gay for the moment). Since there is this strong evidence, the government wants to promote the raising of citizens in more stable two parent homes. On the face of it, that seems to be in society's best interest, so the government gives married couples tax breaks to promote marriage, and the raising of stable citizenry. This is a difficult argument to overcome, even though part of me really wants to agree with your proposal. I'm surprised that the government by and large has kept getting a divorce so simple (from a legal standpoint). > It has a nice parallelism with other similar > civil rights legislation. ?So people will readily understand the issue. ?But > how can the existing laws be retrofitted to abide by this new amendment? ?It > may be easier than it appears. ?A person's marital status is referenced by a > relatively small handful of existing laws. ?Rescinding these few laws will > end the injustice, simplify the tax code, avoid promoting marriages for > trumped up reasons, Marriage and immigration is a particularly messy area. How do you get married to a foreign national? Do they then get to stay in the country? How would your proposed amendment be interpreted by the courts in this area? Would you be discriminating against foreign nationals wanting to become citizens who are not married? > and (maybe best of all) end the endless and irresolvable > blathering and bickering about who should or should not be allowed to be > considered by the state to be "married". ?In essence, the answer becomes "no > one". ?The government finally is removed from the ugly business of defining > what "being married" means. I like that on the surface, but there are trade offs. I think it is a complex matter to be quite honest. > If private parties want to discriminate for or against single persons, fine! 
So it's not quite like the civil rights discrimination then? And why not? You're blowing your parallelism argument a bit here. > ?But it will force them to define what they want "married" to mean. ?Most > might conclude that it simply isn't worth the effort. > Would there be a down side to this that I'm just not seeing? Yes, I think there would be a number of downsides. What they are exactly is hard to say, but you can look at the population of inner city black people, who by and large don't get married as much as the rest of the population for a clue as to what some of those downsides might be. Lydon B. Johnson really screwed that population over by marketing government programs (food stamps, welfare, etc.) to that population. One of the biggest screw ups in the history of the nation imho. And a large part of the problem is probably related to what it did to the family structure in that populace. Just to be very clear, I don't have anything against black people. Some of my favorite children are black. :-) I do have a big problem with what the government has done to them. In some ways, it's even worse than what they did to the Native Americans, although that was really bad too. -Kelly From kellycoinguy at gmail.com Fri Feb 25 16:07:34 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 25 Feb 2011 09:07:34 -0700 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: <006001cbd371$6562cc90$302865b0$@att.net> References: <006001cbd371$6562cc90$302865b0$@att.net> Message-ID: On Wed, Feb 23, 2011 at 8:50 AM, spike wrote: > The real problem isn't even the ten spouses, it's the 70 children. > Especially now, when you are being required to keep those "children" on the > insurance policy until they are aged 26. When you talk about the cost of children, it hits me very close to home. I have adopted 8 children out of the foster care system, so I have a certain amount of familiarity with both the system and its costs. I understand you are talking about polygamy, and there are abusers of the system that are polygamous, but I'd bet that is a very small part of the overall budget. There are something like 700,000 kids in foster care today. That costs a lot more than a few thousand children of polygamists soaking the system. Since a lot of those polygamists live in Utah, I can also tell you that they go after folks like that pretty heavy. >>...If you say you're OK with gay people being married, but have a problem > with polygamous or polyandrous relationships, I think you've got some > 'splainin ta do... -Kelly > > OK, here's my splanation: ?What really costs money for the government and > the employer is the children. ?Same sex couples are less likely to breed. Yet, they do manage it some of the time through partnerships with gay couples of the opposite gender, and they also adopt where that is allowed. > If it's two men, they can only adopt, which actually removes a cost from the > government. ?As a kind of an affirmative action, I propose about ten years > when only same-sex are allowed to marry. ?Simultaneously I propose removal > of all requirements for employers to offer health insurance, and removal of > all legal restrictions on health insurance companies. ?With those changes, a > bunch of old problems go away. ?Granted there are new ones, but we can deal. I think the new problems likely outweigh the old ones in this matter. > Government needs to be out of the marriage business. ?That whole tax filing > as married business needs to go too. 
?Once that tax arrangement is > eliminated, family groups can assemble in any size or mix of gender they > want, which to me is how it should be. ?I recognize it really does introduce > new problems, and yes I know we have a subculture which would force underage > girls into marrying their elderly relatives. ?But I think we can solve that. Getting government out of the marriage business is an interesting idea, but I think it would have some negative long term consequences. My favorite libertarian solution to a social problem is a program that I heard about a few years back. A group got together some money, went into a drug infested area of an inner city neighborhood with a doctor and offered a few hundred dollars to any young lady that wished to be relieved of her reproductive capacity. To me this is a win-win-win-small lose situation. If I had a million dollars, I think this might be my favorite charity. The problem is that drug babies cost the country a HUGE amount of money. Many hundreds of thousands of dollars per child, over their life time. The girls don't really want to get pregnant for the most part. The beauty of this solution is that everyone is a volunteer in the equation. We don't NEED government to solve our problems with creative thinking like this. -Kelly From hkeithhenson at gmail.com Fri Feb 25 16:17:49 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 25 Feb 2011 09:17:49 -0700 Subject: [ExI] democracy sucks (Anders Sandberg) Message-ID: On Fri, Feb 25, 2011 at 5:00 AM, Anders Sandberg wrote: > Jeff Davis wrote: >> Can we identify the pros and cons re governance in general, and >> democracy in particular, and come up with something better, or some >> suggestions, or at least get pointed in the right direction? >> > Hmm, from reading your post I think your problem is with the complex > representative democracy systems we have today. Not so much with > democracy, just that it is not direct. snip > > Basically, I think there is no way you can avoid a complex, remote > government if you want to have a complex big society. It's a marvel that government works as well as it does when you consider our evolutionary history. "Complex, remote government" combined with the drift into an oligarchy is a formula for disaster. http://voices.washingtonpost.com/blog-post/2011/02/stephen_colbert_explains_anony.html At this point Anonymous seems to be the most competent group on the planet. I wonder if this concept could eventually evolve into a world government? Keith From kellycoinguy at gmail.com Fri Feb 25 17:04:20 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 25 Feb 2011 10:04:20 -0700 Subject: [ExI] Banking, corporations, and rights (Re: Serfdom and libertarian critiques) In-Reply-To: <4D675E2E.8030901@gnolls.org> References: <4D675E2E.8030901@gnolls.org> Message-ID: On Fri, Feb 25, 2011 at 12:45 AM, J. Stanton wrote: > Kelly Anderson wrote: >> The first baby step would be to get rid of the Federal Reserve. That I >> would be behind today, immediately. I think that is a fairly common >> stand amongst Libertarians, but I could be wrong. > > Absolutely. ?But as I state above, the fundamental problem remains: a > special class of people ("banks") with the special privilege of creating > money from thin air. OK. But without the central bank backing the creation of money through this mechanism, the banks would either a) be more conservative in their money multiplier or b) purchase insurance against bank runs. 
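A small illustration of the textbook money-multiplier arithmetic at issue here, assuming a uniform reserve ratio and full re-deposit of every loan; the figures are hypothetical:

# With reserve ratio r, an initial deposit is re-lent and re-deposited in
# shrinking rounds, so total deposits approach initial / r. A stricter
# reserve ratio is the "more conservative" multiplier mentioned above.

def broad_money(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the deposits created as banks lend out (1 - reserve_ratio) each round."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)   # the lent fraction gets re-deposited
    return total

if __name__ == "__main__":
    base = 1000.0
    for r in (0.10, 0.25, 0.50):           # higher reserves, smaller multiplier
        print(f"reserve ratio {r:.0%}: ~{broad_money(base, r):,.0f} "
              f"(limit {base / r:,.0f})")

The geometric series converges to the initial deposit divided by the reserve ratio, which is why moving the ratio from 10% to 50% cuts the amount of broad money created from a given deposit by a factor of five.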
In either case, removing the central bank reduces the problems related to fractional reserve banking. I guess the real problem is understanding the alternatives. There just isn't enough physical gold to run the economy (unless gold were $1000000 an ounce or something... which would make the industrial use of gold prohibitive... which would have its own downsides) >> I have considered eliminating banks, but my question would be what >> would you replace them with? There is a necessity for capital >> investment, and economies of scale in managing capital are important. > > The problem isn't banks: it's fractional reserve banking. ?The function of a > "bank" is to keep your money safe, for which you would likely be charged a > small fee. ?If you didn't want to pay that fee, or you wanted to offset it, > you would likely permit the bank to invest some fraction of your money for > you on your behalf. Without fractional reserve banking, the bank would not be able to pay you interest, so what then is your incentive to invest? > In other words, your "banking" account would look just like your brokerage > account currently does. ?Stocks, bonds, and money market funds are very > liquid, but they're not "same as cash". ?You can't write checks or use your > ATM card against investments...only against your cash balance. OK. I think I understand this. > This would be far superior to our current system, in which you have no > choice where your money is invested. ?As I mentioned before, all of your > money in a "checking account" is forced into shares of a hedge fund making > 30:1 leveraged investments in mortgage-backed securities, and which you are > forced by "legal tender" laws to accept as if it were real money. But aren't professionals better equipped to invest money than a bunch of amateurs? The same arguments that make Democratic Republics better than pure Democracies seem to apply here. I want to deposit money in a bank, have a Representative invest the money, and pay me interest. If I want to do my own investing, then I have an investment account for that. No? >> As for business, do you think the CEO of a business should be >> PERSONALLY responsible for the actions of each of his employees? > > Absolutely. ?All people should be equal under the law. Nobody would start a business were this the case. I suppose this is what you want, but how do you get economies of scale without scalable corporations? > Allowing the creation of a virtual person ("corporation") onto which > liability can be deflected gives officers of the "corporation" special legal > privileges which the rest of us do not enjoy. ?*** The very concept of the > "corporation" violates the most basic tenet of human rights: equality under > the law. *** So does patent law, but for the same reason. Even the strictest libertarians want some form of patent law (although we could argue all day about whether software or DNA sequences should be covered by patent law). > Consider: We've created a race of virtual beings which are immortal, cannot > be physically punished, have the money and resources of tens of thousands, > and which can dissociate and reorganize their own component parts whenever > and wherever it's convenient. ?And then we're surprised that these > "corporations" run everything...? I for one welcome our new corporate overlords. ;-) I see your point of course. I just don't know how we would get the necessary economic scales to run the economy without this exception. I'm here to learn, not to argue on points like this... 
so don't feel that I'm being disagreeable just to disagree. I just want to understand how such a system would function. -Kelly From darren.greer3 at gmail.com Fri Feb 25 17:10:22 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 25 Feb 2011 13:10:22 -0400 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: <006001cbd371$6562cc90$302865b0$@att.net> Message-ID: On Fri, Feb 25, 2011 at 12:07 PM, Kelly Anderson wrote: >A group got together some money, went into a drug infested area of an inner city neighborhood with a doctor and offered a few hundred dollars to any young lady that wished to be relieved of her reproductive capacity. To me this is a win-win-win-small lose situation. If I had a million dollars, I think this might be my favorite charity. The problem is that drug babies cost the country a HUGE amount of money. Many hundreds of thousands of dollars per child, over their life time. The girls don't really want to get pregnant for the most part. The beauty of this solution is that everyone is a volunteer in the equation.< This seems reasonable, at first blush. The problem is, and I'm making this assumption based on your post, that you're speaking of drug addicts Cravings for certain types of drugs such as heroine and crystal meth and cocaine are so powerful and so dominate in the mid-brain--over-powering sex drive, hunger and even the instinct for self-preservation--that you can offer a drug addict money to do just about anything and they will likely do it. People would, and gladly do, kill someone because they are offered a couple hundred dollars to do so. Granted there is at least one involuntary participant in this transaction. But you could offer money or something else she really wanted or saw herself as needing to someone who was mentally challenged to have herself sterilized and perhaps convince her, because she didn't have the mental capacity to know what she was agreeing to. Drug addicts in the throes of their addictions need to be treated the same way, as if they have a disability. A more complex but equally cost-effective solution would be to get that person in treatment and clean, where they could make better decisions about reproduction and everything else. But then again, here treatment is covered by the state, and I know in some places it is not. About ten percent of entrants get clean, which is a pretty low rate. But it saves the state and individuals an enormous amount of money in the end, for it is not just drug babies that are expensive. Drug adults are too -- hospitals and detoxes, shelters and foodbanks, welfare and crime. And the war on drugs is useless, as all wars ultimately are. As Dr. Terry Tafoya said, "'Just say no' to a drug addict is as about as useful as 'have a nice day' is to a manic depressive." Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From clementlawyer at gmail.com Fri Feb 25 14:49:54 2011 From: clementlawyer at gmail.com (James Clement) Date: Fri, 25 Feb 2011 10:49:54 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: Giulio Prisco said: > As in my previous comment, the events of the last 3 years show that > the time may be right to seriously thinking about giving power back to > the people, to whom it belongs. > > > Giulio, Just for the record, no one "gives" power to anyone. 
A person or group has (some amount of) power and either uses it or doesn't. Both Anonymous and the (sometimes) peaceful revolutions in the Middle East demonstrate that power doesn't necessarily mean "arms." You can wage a revolution with tools beside gunpowder and plastique. I highly recommend John Robb's "Brave New War," to see examples of how small guerrilla movements use the creation of chaos to bring down governments. In a sense, this is also what DRM pirates do to record and film distribution companies, when they knock off a movie and post it for free on the internet. Regards, James Clement -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 25 17:53:11 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 25 Feb 2011 10:53:11 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: 2011/2/23 Giulio Prisco : > Democracy is two wolves and a lamb deciding, by majority vote, what to have > for dinner. In other words it tends to degenerate into dictatorship of the > majority and oppression of all minorities. > > The question is what is better than democracy. I am not able to answer it. One of my pet peeves is when people use the word Democracy (literally Mob Rule) when referring to Democratic Republics. In a Democracy, every eligible citizen votes on every piece of legislation. In a Democratic Republic, every eligible citizen chooses representatives (usually for a period of time) to vote in their behalf on every piece of legislation. The ability of every citizen to become familiar with every piece of legislation is extremely limited. Heck, even our elected representatives in the United States apparently can't be bothered to read legislation prior to voting on it. I love the Nancy Pelosi quote about health care, "We have to pass the (health care) bill so you can find out what is in it." As an aside, I think if we knew who wrote most of the actual words in most of the bills put before congress, we would be astonished and afraid. Nevertheless, the ability of the average citizen to become familiar with the legislative content is far inferior to that of our elected officials. In a more libertarian state, the amount of legislative content would be much lower, which is part of the appeal of libertarian thought for me. -Kelly From giulio at gmail.com Fri Feb 25 18:02:09 2011 From: giulio at gmail.com (Giulio Prisco) Date: Fri, 25 Feb 2011 19:02:09 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: Very true. Power is not given but taken. Let me rephrase as "the time is right for the people to take the power back, and use it". 2011/2/25 James Clement : > Giulio Prisco said: > >> >> As in my previous comment, the events of the last 3 years show that >> the time may be right to seriously thinking about giving power back to >> the people, to whom it belongs. >> >> > Giulio, > Just for the record, no one "gives" power to anyone. ?A person or group has > (some amount of) power and either uses it or doesn't. ?Both Anonymous and > the (sometimes) peaceful revolutions in the Middle East demonstrate that > power doesn't necessarily mean "arms." ?You can wage a revolution with tools > beside gunpowder and plastique. 
?I highly recommend John Robb's "Brave New > War," to see examples of how small?guerrilla?movements use the creation of > chaos to bring down governments. ?In a sense, this is also what DRM pirates > do to record and film distribution companies, when they knock off a movie > and post it for free on the internet. > Regards, > > James Clement > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From jonkc at bellsouth.net Fri Feb 25 17:38:09 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 25 Feb 2011 12:38:09 -0500 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups?. In-Reply-To: <4D673974.7040207@canonizer.com> References: <4D673974.7040207@canonizer.com> Message-ID: <9124D70E-4639-4D57-A336-096F6997E558@bellsouth.net> On Feb 25, 2011, at 12:09 AM, Brent Allsop wrote: > If you could easily know, concicely and quantitatively what everyone wanted, obviously, if the leader was diviating from this, especially if he wanted to kill anyone, everyone could just ignore him, and just do what the people wanted, instead, couldn't they? Problem solved? Does anyone think differently? If 99% of the people want candidate A, that is to say they are willing to vote for A if the polling place is not too far away and the weather is good and they don't have anything better to do, and only 1% want candidate B to rule the country but they are willing to die for him, then candidate B will become the new boss because most people love their life more than they hate candidate B. > Is nobody interested in what the people of Libya want? In a word, no. If the people of Libya want a fundamentalistic Islamic dictatorship sympathetic with al-Qaida, and I hope not but that very well could be what they want, then that is not what I want. If the CIA can figure out some sinister underhanded way full of dirty tricks to covertly discourage that then I'm all for it, especially if you can engineer deniability into it in case the operation doesn't work, and many CIA ideas don't work but are worth a try. Don't misunderstand, I'd much rather Libya gets a western style open society, but if that's not in the cards I'll settle for another dictator as long as he's our dictator. That's worked before in Iran, yes I know eventually there came a day of reckoning, but it was successful for 3 decades and not many government programs can say that. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 25 18:19:15 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 25 Feb 2011 11:19:15 -0700 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: <006001cbd371$6562cc90$302865b0$@att.net> Message-ID: 2011/2/25 Darren Greer : > On Fri, Feb 25, 2011 at 12:07 PM, Kelly Anderson > wrote: >>offered a few hundred dollars to any young lady that wished to be > relieved of her reproductive capacity. < > > This seems reasonable, at first blush. The problem is, and I'm making this > assumption based on your post, that you're speaking of drug addicts Cravings > for certain types of drugs such as heroine and crystal meth and cocaine are > so powerful and so dominate in the mid-brain--over-powering sex drive, > hunger and even the instinct for self-preservation--that you can offer a > drug addict money to do just about anything and they will likely do it. 
The point of the exercise is not to protect the drug addict, but to protect the potential child from the drug addict, and to protect society from the burden of the potential child. A $500 investment through this program saves society somewhere around $500,000+. That is a good economic trade off. There is a moral question here too, of course. That question is harder to answer, but I think it is answerable. > ?People would, and gladly do, kill someone because they are offered a couple > hundred dollars to do so. Granted there is at least one involuntary > participant in this transaction. The murder victim. The point of the program isn't to say "see you can get people to do anything for money." The point is to prevent suffering in the world. The suffering relieved is that of the child, and that of the already overburdened taxpayer. Potentially, the suffering of the drug addict now having to care for a baby is also reduced... but that is of smaller consequence to me. The only condom a drug addict is likely to buy is one containing crack. Handing out free condoms is a reasonable program, but not as effective as voluntary sterilization. Reduction of overall suffering as the driving force is a tricky bit of philosophy, but it is widely accepted by most people. > But you could offer money or something else she really wanted or saw herself > as needing to someone who was mentally challenged to have herself sterilized > and perhaps convince her, because she didn't have the mental capacity to > know what she was agreeing to. One difference between the mentally ill and a drug addict is that the mentally ill person (usually) didn't make a choice that led to their being mentally ill. Most drug addicts made the choice to take that first dose of their drug of choice. I believe that by making that first choice, they may make all the subsequent choices. That is, they give up future choices by making a limiting choice today. But that's pretty universal. When I made the choice to marry my first wife, I screwed my life up as much as I would have had I chosen heroin instead. :-| I have had to live with the consequences of that choice (99% bad 1% very good) for the rest of my life. I think drug addicts are the same. Equating drug addicts to the mentally disabled disregards this first choice. > Drug addicts in the throes of their > addictions need to be treated the same way, as if they have a disability. Why? What is the moral basis of that statement? I know it's the politically correct position, but is it philosophically correct? > A more complex but equally cost-effective solution would be to get that person > in treatment and clean, where they could make better decisions about > reproduction and everything else. But then again, here treatment is covered > by the state, and I know in some places it is not. About ten percent of > entrants get clean, which is a pretty low rate. But it saves the state and > individuals an enormous amount of money in the end, for it is not just drug > babies that are expensive. Drug adults are too -- hospitals and detoxes, > shelters and foodbanks, welfare and crime. I separate the drug addict and how we should treat her from the child of the drug addict and how we should treat him. I have eight children who were children of a drug addict prior to being my children. They have suffered substantially from the poor choices of their mothers. Society is paying a high price for their "reproductive rights." 
As a libertarian, I like that the mothers had a choice, but I don't like that the state takes care of the mess afterwords. If someone wants to pay to try and get someone off of drugs, more power to them. It should be their choice. Paying taxes is not a choice. So using government money to cure addicts is theft in my book. Using private funds to do so is entirely permissible of course. The average bill for a month in rehab is what? Thousands? Only one in ten is cured... Economically, is that a good way to spend money? It may be, but it isn't as economically profitable as the pay for sterilization program. The government would never make this deal. It is left to individuals to be that efficient. > And the war on drugs is useless, as all wars ultimately are. As Dr. Terry > Tafoya said, "'Just say no' to a drug addict is as about as useful as 'have > a nice day' is to a manic depressive." Good one Darren. :-) Would you be happier with the program if it included a month of rehab and counseling prior to the sterilization? -Kelly From stefano.vaj at gmail.com Fri Feb 25 18:07:58 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 25 Feb 2011 19:07:58 +0100 Subject: [ExI] Happy Birthday Extropy Email List In-Reply-To: <00c701cbd2dd$56be0450$043a0cf0$@att.net> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> <201102192014.p1JKEeST027600@andromeda.ziaspace.com> <201102220210.p1M2A4ah005359@andromeda.ziaspace.com> <00c701cbd2dd$56be0450$043a0cf0$@att.net> Message-ID: On 22 February 2011 23:10, spike wrote: > Slight embarrassment yes, court no. ?The prosecution would need to prove it > was actually *you* who wrote the post. Sure, but I was being metaphorical, as "in the court of public opinion". That is, I claim the right to change of opinion as anybody else, but I do not like the idea of profiting from copyright rules to reserve the right to retract the statements I may be proud to issue today. -- Stefano Vaj From eric at m056832107.syzygy.com Fri Feb 25 18:18:16 2011 From: eric at m056832107.syzygy.com (Eric Messick) Date: 25 Feb 2011 18:18:16 -0000 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: References: <4D675E2E.8030901@gnolls.org> Message-ID: <20110225181816.5.qmail@syzygy.com> Kelly Anderson wrote (re: fractional reserve banking): >I guess the real problem is understanding the alternatives. There just >isn't enough physical gold to run the economy (unless gold were >$1000000 an ounce or something... which would make the industrial use >of gold prohibitive... which would have its own downsides) I propose the Joule as the future unit of currency. What you want is a money supply that grows along with your economy, so you don't have deflation. Fractional reserve is supposed to provide that growth, but it has the potential for abuse. Economists have been thinking about "commodity bundle" currencies as a replacement for the gold standard, but things would have to be moved into and out of the bundle as technologies and needs change. In essence, the Joule acts as a single commodity, just like gold. Want to print more money? Go out and bring some energy production online. Manufacturing, even distributed manufacturing by nano-factories, requires energy. The cost of manufactured goods would be the energy cost of creating them, plus some extra for the materials and design. 
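A rough sketch of the bookkeeping such notes might imply, assuming a producer may only issue notes against energy it can plausibly deliver within a stated redemption window; the producer name, window, and figures below are hypothetical illustrations, not part of the proposal itself:

# A producer tracks its sustained output and the face value of notes already
# outstanding, and refuses to issue a note it could not cover within the
# redemption horizon.

from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    output_watts: float          # sustained generating capacity
    horizon_seconds: float       # how far ahead notes may be issued
    issued_joules: float = 0.0   # face value of notes already outstanding

    def issuable_joules(self) -> float:
        """Energy deliverable within the horizon, minus notes already issued."""
        return self.output_watts * self.horizon_seconds - self.issued_joules

    def issue_note(self, joules: float) -> bool:
        """Issue a note only if it stays within deliverable capacity."""
        if joules <= self.issuable_joules():
            self.issued_joules += joules
            return True
        return False

if __name__ == "__main__":
    one_year = 365 * 24 * 3600
    sp = Producer("SpacePowerCo", output_watts=1e6, horizon_seconds=one_year)
    print(sp.issue_note(1e5))        # a 100,000 J note, easily covered
    print(sp.issuable_joules())      # remaining backing for further notes

How long the horizon should be, and who audits the claimed capacity, are exactly the open questions raised below.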
You can transform materials from one form to another, at an energy cost. The higher energy forms would be more expensive. Oil companies might like this, and be willing to back it. It basically turns them into money printers. There might be a shift in attitude though. "Hey wait! We're pumping money out of the ground, then BURNING it?!" Ultimately, it should get people thinking about renewable energy resources, maybe even funding solar power satellites. There are some details to work out (in addition to the question of transitioning to this). What would a banknote be? "SpacePowerCo will redeem this note for 100,000 Joules of electric power delivered to bearer over existing infrastructure." Power producers would issue such notes at the rate at which they produce power, but the note is for future power production. How far into the future are they allowed to issue notes for? Perhaps the notes need a redemption time range. Would you take a note for power you can only redeem in 10 years? They could be issued to finance construction of new power sources. I think we could scrap fractional reserve banking if we went to this scheme, and avoid the incentive to inflate the currency. The banks, of course, won't go for it. Is this something worth thinking about? -eric From spike66 at att.net Fri Feb 25 18:42:11 2011 From: spike66 at att.net (spike) Date: Fri, 25 Feb 2011 10:42:11 -0800 Subject: [ExI] kurzweil in the mainstream press Message-ID: <003501cbd51b$b6d5cf90$24816eb0$@att.net> Ray Kurzweil sure is making the mainstream buzz these days: http://video.foxnews.com/v/4556962/rise-of-the-machines-not-so-farfetched/?p laylist_id=86861 spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 25 19:05:15 2011 From: spike66 at att.net (spike) Date: Fri, 25 Feb 2011 11:05:15 -0800 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: <20110225181816.5.qmail@syzygy.com> References: <4D675E2E.8030901@gnolls.org> <20110225181816.5.qmail@syzygy.com> Message-ID: <004a01cbd51e$f0119390$d034bab0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Eric Messick ... >...I propose the Joule as the future unit of currency... The problem with that is that not all joules are created equal. You need another measure to go with any measure of energy, its entropy. The ocean is full of joules in its thermal energy, but the entropy is so high it cannot be converted to useful anything. High entropy = bad, low entropy = good. >... There might be a shift in attitude though. "Hey wait! We're pumping money out of the ground, then BURNING it?!" The government is pumping money out of me, then burning it. That doesn't seem to bother them much. >...Is this something worth thinking about? -eric Ja, but keep thinking how we can describe the entropy level along with the joule unit of currency. spike From sjatkins at mac.com Fri Feb 25 19:47:39 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 25 Feb 2011 11:47:39 -0800 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: <20110225181816.5.qmail@syzygy.com> References: <4D675E2E.8030901@gnolls.org> <20110225181816.5.qmail@syzygy.com> Message-ID: <4D68075B.2040105@mac.com> On 02/25/2011 10:18 AM, Eric Messick wrote: > Kelly Anderson wrote (re: fractional reserve banking): >> I guess the real problem is understanding the alternatives. 
There just >> isn't enough physical gold to run the economy (unless gold were >> $1000000 an ounce or something... which would make the industrial use >> of gold prohibitive... which would have its own downsides) > I propose the Joule as the future unit of currency. > > What you want is a money supply that grows along with your economy, so > you don't have deflation. Actually, I think this is a mistaken notion. Not that the price per unit of functionality in computer chips has fallen rapidly toward zero. Yet this is still a booming business. Deflation across an entire economy by severely contracting the money supply is very very different than all case of the same unit of currency buying more and more. You actually want the latter in the more benign cases. You can't get it if you insist on adding to the money supply to keep prices roughly the same. - samantha From sjatkins at mac.com Fri Feb 25 20:12:36 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 25 Feb 2011 12:12:36 -0800 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups? In-Reply-To: <4D673974.7040207@canonizer.com> References: <4D673974.7040207@canonizer.com> Message-ID: <4D680D34.3010800@mac.com> On 02/24/2011 09:09 PM, Brent Allsop wrote: > > Extropians, > > It seems to me that if you could some way have an easy way to > reliably, easily, and in real time, know concisely and quantitatively, > what the entire population of Libya wanted, war could easily be avoided. Are you assuming that this collective state of mind is particularly rational or a good [enough] decision maker? Why would you assume that when experience seems to show that relatively few people are reasonably sane and competent about a great number of questions? This is a problem I had with Eliezer's CEV concept as well. Even a powerful AGI that deeply ferreted out and did what humanity collectively wanted or what it extrapolated that humanity when most rational/wise would want would not be a clear win. > Why do we all have to spend so much effort protesting before anyone > finally gets a clue as to what the people want? If you could easily > know, concicely and quantitatively what everyone wanted, obviously, if > the leader was diviating from this, especially if he wanted to kill > anyone, everyone could just ignore him, and just do what the people > wanted, instead, couldn't they? Problem solved? Does anyone think > differently? > That is a system of strong individual rights. It is not democracy as in democracy your wishes and rights can always be overridden by a majority. > Everyone is asking the question, what should the US, and other > countries do, to help out Libya and similarly struggling countries? > Why is everyone only asking or talking to the leaders, at the tops of > all the hierarchies, and if that one is taken out, find another. Is > nobody interested in what the people of Libya want? Isn't that the > only problem? They should leave it alone. > > Of course, primitive survey systems, like simple voting, or checking 1 > of 4 possible choices doesn't work very well, and is so difficult. > Who gets to decide what the 4 choices are, and what they mean? You > need some kind of open survey system that is set up in a way that is > constantly improving, bottom up, and one that will build as much > consensus as possible (pushing the banal disagreeable stuff to lower > camps - out of the way), and follow that consensus as it morphs into > the future and jumps away from falsified camps, to far better ones.... 
> > And it needs to be some kind of expert based system. So uneducated > people in a particular field can easily delegate their vote / support, > to someone they trust more than themselves. And that delegatee can do > the same to someone else, and so on... so the real experts at the top > of such trees can make much more educated choices than all the > clueless idiots... > And the clueless idiots are going to delegate on what basis well? How is this different from all current representative democracies and their known considerable ills? This is not to say that the open survey is unworkable at all. It is a very fine idea in many respects. For sure what I think is not at all represented by any conceivable vote in the current system. I have a design in mind for an open end survey and matching system that the above sort of reminds me of. > And of course, all transhumanists are just one big group of > individuals all waring and criticizing each other on the most trivial > details, and we never get anything done at all, and never have any > influence over anything. But I bet you if we had the right consensus > building system (where the trivial less important disagreeable stuff > we spend all our time on could be pushed to lower level camps), all > the real moral and scientific experts at the tops of such delegated > tree structures, would be far more transhumanist than the general > clueless population. With such a system dictating the morals of > society, (rather than all the primitive war mongering bottle necked > hierarchies) and telling us what our priorities are and so on. I bet > we could rule the world and finally bring the singularity to pass. In actuality it doesn't happen. Central decision making everyone has to obey even if they are an outlier with a better idea is inherently broken. Such can at best provide general guidance. It can never have enough capacity to outperform localized decision making. You cannot construct a good centralized or expert run system generally that will retain its good qualities or have good qualities if it grows to be the "decider" for too many things which it enforces using force. > > My hypothosis is that it is all simply a matter of communication. How > do you know what the best experts in the crowd want, concisely and > quantitatively? How do you, John Q. Public, know who those "experts" are? > What is the moral expert consensus? Define "moral". We cannot find the above without such a definition. > What is the scientific consensus? Are you sure consensus strongly approximates best? > What is the transhumanist consensus? If you can know that, suddenly > there is no more reason for war and fighting. False. There will be dissenting viewpoints. If they have no space to doing things their way that is a cause for conflict. > This hypothesis has led me to try building something like > canonizer.com, but everyone seems to hate it, and like everything > else, everyone just wants to criticize, fight it and destroy it, and > go back to doing everything on their own in a do it yourself lonely > way - damn everyone else. So maybe someone can come up with some kind > of better method of knowing what all us experts want, concisely and > quantitatively, in any kind of consensus building way, so maybe we can > work together and get something done, other than just finding > disagreeable things and focusing and criticizing everything and > everyone on that, as we continue to watch the world still wallow in > primitive rotting misery? 
> Build you own walled community with people that you find sane. Don't expect to convert the world or get the consensus to do anything but run over you. Canonizer was an interesting idea but the implementation is too weak/ not so useful. I am not sure what could be better or if some of its goals are doable. > We just buried our mother in law. Despite my obvious horror, I > couldn't even talk about it, the family just put her in the grave to > rot. Yes, she is the one I told you about that was asking me about > transhumanism the other day. But that is about as far as she got. > I'm getting tired of rotting these people in the grave and sitting > around watching as if we can do nothing. I just want to know what all > you experts believe is best, and want to get to work on doing it all, > together. I feel for you and do every time death touches my own life or the lives of my friends or I even see death strike for strangers. When you know there is potentially a "cure" for much of it it becomes many times more unacceptable and it is much more difficult to go numb than it was before understanding this. For many things only relative experts can do the doing although many others can contribute money and other resources to the efforts. It would be fabulous to build a Foundation or some other structure to gather funds to be distributed by a board of transhumanist experts to H+ projects. I don't know the legal details of such. Anyone? > We obviously still aren't getting much done as lone individuals. Can > we not do more than just spending an eternity in eternal yes no > arguments over such things as Libertarianism vs what, over and over > again, year after year after year, for now more than 20 years? I hear you. I should shut up more of the time and build some things I believe will add real value along transhumanist lines. Like a series of Intelligence Augmentation software tools. > Lets just find some way to definitively state what everyone wants, > concisely and quantitatively, and finally just get started on doing it > all, for everyone. I don't give a fig what "everyone wants". Really. - samantha From sjatkins at mac.com Fri Feb 25 20:17:18 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 25 Feb 2011 12:17:18 -0800 Subject: [ExI] Banking, corporations, and rights (Re: Serfdom and libertarian critiques) In-Reply-To: <4D675E2E.8030901@gnolls.org> References: <4D675E2E.8030901@gnolls.org> Message-ID: <4D680E4E.40707@mac.com> On 02/24/2011 11:45 PM, J. Stanton wrote: > > Allowing the creation of a virtual person ("corporation") onto which > liability can be deflected gives officers of the "corporation" special > legal privileges which the rest of us do not enjoy. *** The very > concept of the "corporation" violates the most basic tenet of human > rights: equality under the law. *** I very much disagree. Limiting liability to just the funds/resources owned by the business entity (except for criminal liability) is the only way many highly financially risky projects (thing space business) can ever be undertaken. There is a very sound reason such entities developed over 600 years ago. > > Consider: We've created a race of virtual beings which are immortal, > cannot be physically punished, Sure they can. They can be totally torn asunder. > have the money and resources of tens of thousands, Having more money is a moral issue? > and which can dissociate and reorganize their own component parts > whenever and wherever it's convenient. No, they can't except in very proscribed ways. 
> And then we're surprised that these "corporations" run everything...? > No, they do not "run everything". Governments run far far more. - s From mrjones2020 at gmail.com Fri Feb 25 20:31:19 2011 From: mrjones2020 at gmail.com (Mr Jones) Date: Fri, 25 Feb 2011 15:31:19 -0500 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: <20110225181816.5.qmail@syzygy.com> References: <4D675E2E.8030901@gnolls.org> <20110225181816.5.qmail@syzygy.com> Message-ID: On Fri, Feb 25, 2011 at 1:18 PM, Eric Messick wrote: > I think we could scrap fractional reserve banking if we went to this > scheme, and avoid the incentive to inflate the currency. > I think it's time we scrap money altogether. I'm thinking more along the lines of The Venus Project . A resource based economy. But that would require humanity to actual work together, and short of a common enemy (threat to humanity as a WHOLE, undeniable, unquestionable), that's unlikely to happen anytime soon. > The banks, > of course, won't go for it. > All the more reason to do it. I like the idea overall. At least energy is the currency, instead of some easily manipulated, valueless piece of paper. > > Is this something worth thinking about? > Absolutely. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Fri Feb 25 22:30:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 25 Feb 2011 23:30:35 +0100 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: <20110225181816.5.qmail@syzygy.com> References: <4D675E2E.8030901@gnolls.org> <20110225181816.5.qmail@syzygy.com> Message-ID: <20110225223035.GW23560@leitl.org> On Fri, Feb 25, 2011 at 06:18:16PM -0000, Eric Messick wrote: > I propose the Joule as the future unit of currency. Currency isn't consumed in the process of payment. A slightly different issue is with precious metals, which are typically highly recyclable, but cause environmental load and consume energy when mined. But I *would* link currency to a diversified basket of raw resources, such as PSE elements or just minerals. The dangers would lie in twiddling the coefficients, though. Anything manipulable eventually will be. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Fri Feb 25 23:51:29 2011 From: spike66 at att.net (spike) Date: Fri, 25 Feb 2011 15:51:29 -0800 Subject: [ExI] this is me in another forty years... Message-ID: <00a901cbd546$ec26bda0$c47438e0$@att.net> Check this, a three minute story which could be subtitled: when they pry the handlebars from my cold dead hands: http://www.youtube.com/watch?v=vksdBSVAM6g &feature=player_embedded This is how they make commercials in countries where they still have actual attention spans. Hey it worked on me. {8-] spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Sat Feb 26 00:31:33 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 25 Feb 2011 17:31:33 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: On Thu, Feb 24, 2011 at 6:12 PM, Samantha Atkins wrote: > On 02/23/2011 09:27 PM, Jeff Davis wrote: >> Not committing the party a priori to a menu of positions hardly means having no principles. 
> > "No other issue is relevant" doesn't leave a lot of room for bringing up principles this may fly in the face of. Let me clarify. My hypothetical Accountability Party (AP) is composed of principled members who subscribe to a broad spectrum of specific, varying, and yes, sometimes conflicting principles. This spectrum of principles will be brought to bear in the POST-ELECTION discussions that accompany the legislative and executive decision-making processes. Libertarian principles will be represented in those discussions to the degree that libertarians have assisted in the AP electoral effort. The dictatorial imposition of Libertarian principles won't happen, but the libertarians will get to make their pitch, and if persuasive their principles will be enabled/implemented. But be forewarned, compromise accompanied by disappointment will almost certainly be the order of the day. (Democracy just sucks! Aaaargh!) I'm trying to explain that it's not that the AP doesn't have other principles. But that there are three elements to successful "rule",: (1)winning campaigns, (2)governing -- these two enable the final element --and (3) having the vision to lead society to "a better place". The AP campaigns -- element (1) above -- on just two "principles" -- accountability and jobs -- because almost EVERYONE supports these principles. Almost EVERYONE is the essence of "broad-based", which means a lot of voters, which wins campaigns. " Accountability and Jobs" serves first duty as a campaign-winning tactic, then after the win, they become the top two principles, but by no means the only principles, of the AP. > Free floating wish list items with no grounding in any principles whatsoever are a BS basis for any party and cannot last because there is no grounding. I think I've explained away this objection above > ?I mean you can satisfy everyone has a job by simply enslaving the entire country and putting any excess workers (newly employed) to work digging holes and then filling them back up. Nothing in the party planks precludes this implementation. ME: True, but that implementation is not my plan but rather your dystopian prediction. Bear with me. Despite your fears, we're not going dystopian. >>>> accountability: no one is above the law. Everyone, but in particular persons in high position who have traditionally 'enjoyed' immunity from prosecution, will now have their "get out of jail free "cards voided. I would start with war crimes. > By what? Geneva convention? My personal preference is the UN Charter. When the US signed on to it, it became US law. It proscribes both war and threats of war, except according to Charter criteria. But that would be only a start. There are other laws which are applicable as well. I understand your objections, as a libertarian, to certain laws -- the victimless crime laws for example. I hold the same libertarian objections. The AP party -- to whatever degree you , I, and others of like mind can assert our principles -- would apply the principle of Accountability based on "just" law, while setting out to eliminate "unjust" law. >> By "accountability", I essentially mean to subject the ruling class in general and the power elite in particular to a strong dose of "ethic cleansing", so the entire society could start over with a clean slate. Start over, but with the former upper reaches of society on notice that the law now applies to them. ?No, really. > This seems like blaming the powerful politically and or the rich-er as a class. 
> This has been so busted so many times when it has been tried before.

ME: Now I KNOW you're not saying the richer are above the law! And I know,... you surely wouldn't say that,... on account of some little bias you might hold,... arising, say, from your love of the richer, admiration of the richer, or aspiration to be one of the richer. So I'm sure you would agree that if a rich or powerful person committed a big ass crime, enabled by and proportionate to their big ass richness or big ass powerfulness, that they should be held accountable and, if found culpable, punished proportionately to the big ass-ness of the crime.

> Simple envy would make it very popular as it has been before.

ME: This is a class-based argument, Samantha. "The poor envy the rich, and want to kill them and steal their money." And you disappoint me by deploying it. Shame on you. Neither the pre-conviction envy, nor the post-conviction schadenfreude of the poor, is a "get out of jail free" card for the rich.

> The results would be unlikely to be much better without considerably more refinement and statement of and adherence to some of those pesky principles.

Tell you what I'll do. We'll put our heads together and restrict prosecutions to murder, felony murder, mass murder, and conspiracy to, or facilitation of, the commission of any of these. Then we'll discuss which other offenses qualify as actionable offenses according to libertarian principles. Deal?

> [The equal application of the law] is a firm part of what we are already supposed to be about. Fixing instances where it is not the case is a fine thing. I would press criminal charges if not treason on many a past and present politician as many violate their oath of office wholesale.

ME: Okay, it seems we're coming into agreement. Good. Regarding the charges of treason, and violations of oaths of office, show me the particulars, and I'll be happy to work with you on this.

>> This doesn't imply draconian penalties. It isn't about revenge. It's about starting over with a clean slate and a "rule of law" that actually does its job.

> If you are picking on the powerful for being more powerful than you or I and the richer for having more money than you or I, and you are also speaking of and to the sentiments of the "average person", then you are in revenge territory.

ME: I commend you for your vigilance and insistence on fairness in dealing with the powerful and richer. That said, it is not unlawful, though some -- the once rich and powerful in particular -- may find it unseemly, when the poor dance in the street to celebrate the richer and powerful finally joining the rest of us in being subject to the JUST AND PROPORTIONATE penalty for their misdeeds.

>>>> And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE.

> Economics, while maybe not all, is not served by pretending there are limitless means to satisfy limitless wants. That is a denial of economic reality. You can't spend your way out of bankruptcy. Ask Zimbabwe whether you can print enough money to get out of bankruptcy.

ME: Look, I deserve your little lecture, okay? I'll take the blame. I was too lazy, too pressed for time, to explain the reality-based details of my "Jobs" proposal. So I substituted: "The govt has machines that print money, so it's a done deal. Get over it." I still don't have the time for much more. But I have time for a little bit more.

The US is a rich, massively productive giant, with the world's largest economy.
As a nation it has sizable assets, liquid and illiquid, public and private, and also sizable debts (liabilities?). Scare talk aside, no economic catastrophe is going to cause the Earth to open up and swallow the US, leaving behind a seawater-filled crater. Life goes on. Life will go on. (Barring an asteroid collision or Gamma Ray Burst.) Even during the Great Depression, with its 30% unemployment, there must correspondingly have been, if numbers mean anything, 70% employment. Life goes on. The economy goes on. It has its ups and downs. Human suffering as related to these ups and downs correlates with employment. During the "ups", things are good, and everyone can find work. During the "downs", not so much.

The Accountability Party says to the voters, "We're going to end the cycle of misery by seeing to it that everyone who needs a paycheck can find work." This position, while principled (I know, I know, you hate it, and consider it entirely unprincipled), is also TACTICAL. In that role, it is the first step in implementing change in an electoral system: getting one's hands on power by winning elections. The best I can do for you is promise that when the libertarian system replaces the current system of manipulated casino predation, I'll join you in shutting down the guaranteed paycheck program.

> People are not in the least entitled to a slice of the economic pie just by virtue of being born. Not when the pie is finite and produced by the work of others. This would be a denial of justice and reality.

We disagree. My argument will not persuade you, however, so I will limit my response to a simple counter assertion: Au contraire, it is the very essence of justice and realism.

>> ... it is **your** question that presumes no value. I do not
>> propose a "no value" exchange.)

> Yes, you do.

NO I DO NOT. You interpret it that way because you are ideologically-blinded, compassion-challenged, reality-alienated, and pragmatism-resistant.

> You propose to give everyone a paycheck regardless
> of whether their skills and/or labor have any real value
> in a free market or not.

(There ain't no kind of free market around nowhere. And you know it. But that's a subject for another time.)

>>> How is this just

>> It reconfigures the economic system, eliminating the
>> "war of all against all". High level political and economic
>> crime will be deterred. There will be a societal shift
>> away from parasitism and
>> toward greater productivity. Economic activity will then
>> equilibrate, and life will go on. But better.

> That is not remotely a meaningful answer.

I've been very polite, and deliberately agreeable, but only your ideological rigidity could explain how you miss the depth of meaning inherent in my proposed reduction of human misery.

> Why would there be greater productivity when you print and borrow money like mad

Your words, not mine.

> to make sure everyone has a paycheck thus destroying the financial basis of the economy

Your conclusions, not reality's.

> and producing (sooner or later) rampant inflation?

You have the future all mapped out, ehhh? Is that Ms. God or ms. god?

> Why would there be greater productivity

Business cycles stabilized, labor prices stabilized, and other bullshit economic prognostications. But my ideologically-biased bullshit is just as good as your ideologically-biased bullshit.

> when everyone knows they have a roof over their head, food on the table and other essential things as a matter of entitlement even if they play games all day or spend every day in a stupor?
The entitlement is to a job, not a paycheck; they still have to work for the paycheck. Oh, and by the way, you can eliminate unemployment benefits, a genuine 'no value' payment.

> Why would the productive remain productive and become more so when they have to pay more and more in taxes or the money they make is worth less and less and they have to support many more parasites on the system?

Because if "the productive" -- whose shit, by the way, still stinks, even in a libertarian utopia -- want to maintain their standard of living, they're GOING TO HAVE TO PRODUCE MORE. Perhaps they can do this by hiring more people who will then be taken off the govt paycheck rolls.

> There is nothing above but empty claims that disintegrate under even rudimentary analysis.

You are blocked from "even rudimentary analysis" by ideological blindness. You've seen the light, and now you cover your eyes.

> Please explain and show your work.

Show me yours and I'll show you mine. You're not Swedish, are you?

>>> Progressive tax is regressive to actually growing an
>>> economy.

>> No it isn't.

> Whatever. If you aren't interested in any real dialogue I am wasting my time.

And I mine. But I'm retired, financially secure, and have my cryonics contract in place, so I'm golden.

>> Nothing could be more moral and just than to confirm, and apply, the principle that every person is ENTITLED to a living wage from the economic pie.

> By what standard of morality validated how?

The standard of morality that says human suffering is bad, validated by the subjectively undeniable -- if ineffable -- good feeling you get from seeing that suffering lifted and replaced by rampant displays of human foolishness.

> The above is simply a claim with
> no argument whatsoever for its validity.

You spoke too soon. Here's my parsimonious snark at validation.

>> By the way, I base my challenge to your assertions about the economic consequences of taxation, on the claim that it's just ruling class propaganda.

> Which is another empty assertion.

History slash empiricism constitutes an unbroken confirmation of my assertion.

>> No doubt you will counter with some conservative or "Austrian" economist as authority. It's the same old story from the dim recesses of time. The intellectual class provides "scholarly" justifications for the predation of the wealthy.

> Oh, so now you are going to pull a classist argument claiming all counter-arguments are bourgeois conditioning and rationalisation.

If you can't do the time, don't do the crime.

> I see.
> Glad we cleared that up. The Communists did a more convincing job of that.

>> And one other thing: we're on the same side, seek the same end. Hard to believe, but true. Libertarian principles-wise.

> No, we are not remotely on the same side judging from what you have said above.

No, we're on the same side all right. You're just a little confused. And magnificently stubborn. I do love you so. The proof of my love is that I spent a whole flippin day on this silliness. I need to check into some clinic somewhere.

Best, Jeff Davis

"Everything you see I owe to spaghetti."
Sophia Loren From lubkin at unreasonable.com Sat Feb 26 00:33:48 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Fri, 25 Feb 2011 19:33:48 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: <201102260033.p1Q0X1Su017856@andromeda.ziaspace.com> Kelly wrote: >Nevertheless, the ability of the average citizen to become familiar >with the legislative content is far inferior to that of our elected >officials. In a more libertarian state, the amount of legislative >content would be much lower, which is part of the appeal of >libertarian thought for me. I live in New Hampshire, which has one of the largest legislative bodies in the world. With 400 state reps for 1.3M people, that's 3300 people/rep or roughly 650 households/rep. Candidates can and do personally knock on every door in their district. State reps are paid $100/year plus mileage. Then there's the New England tradition of town meetings, except for the cities. http://en.wikipedia.org/wiki/Town_Meeting#New_Hampshire I routinely run into state reps or senators. (And presidential candidates, but that's a different matter.) The home address and phone number of candidates and elected politicians are typically printed in the newspaper, up to and including the governor. Because of the state focus on "Live free or die" and the lowest taxes in the country, there are far fewer laws. Every town library has a complete set of the laws, for anyone to check, and they only take up about two three-foot shelves. A consequence of this is that if you want to be a corrupt politician, you go elsewhere. The biggest scandal I can think of in >20 years here is that when Chuck Douglas was a supreme court justice and his divorce came before the court, his fellow justices let him participate in their deliberation. It is less libertarian than when I first moved here, thanks to influx and then to sway in the south of Massachusetts liberals. But apart from the snow, it's still an excellent place to live in virtually every respect. -- David. From js_exi at gnolls.org Sat Feb 26 04:03:35 2011 From: js_exi at gnolls.org (J. Stanton) Date: Fri, 25 Feb 2011 20:03:35 -0800 Subject: [ExI] Free banking and fractional reserve banking (Re: Serfdom and libertarian critiques) Message-ID: <4D687B97.9090001@gnolls.org> [I hope that, since this discussion is more about banking at this point, I can still respond to these messages even though the libertarian quarantine has been re-established.] F. C. Moulton wrote: > I am somewhat baffled by your comments because your comments ignore > reality. Libertarians have long complained about government privileged > banking. And obviously all anarchists by definition are opposed to > government granted privilege in commerce or any other area. Plus > economists discuss free banking and it is easy to find. Since I have > been providing so many text links here are some video links: > http://www.youtube.com/watch?v=5P7W1G1hbiQ > http://www.youtube.com/watch?v=0PyS2NtW3xA This is a common point of confusion. "Free banking" still allows banks the privilege of creating money by issuing debt ("fractional reserve banking"), a fraudulent practice that any of us would go to jail for, and which is a special power granted only to "banks" by governments. Example: If you give me $100 and I lend $90 to Spike, I have $10. If you ask for your money back, I have to tell you "I don't have it." If you give "Bank of J. 
Stanton" $100 and it lends $90 to Spike, it has $10...but it tells you that you have $100, and that you can withdraw it at any time. In other words, your money is immediately replaced by an IOU for the repayment of BoJS' loan to Spike. In practice, the bank packages and sells Spike's loan...so right now, in our current system, *** all of your money in a "checking account" is actually in a hedge fund making 30:1 leveraged investments in mortgage-backed securities. *** And, in our current system, you are forced by "legal tender" laws to accept this share in a highly-leveraged hedge fund as if it were real money! Yes, you can withdraw "your" money, but what you're really getting is other depositors' $10 (the "reserve" in fractional-reserve). Which works so long as not too many other people try the same thing -- about 6%, at current reserve and capital ratios. Any more, and the fraud collapses ("bankruptcy"). (This is why the Fed holds over $2 trillion of worthless bank debt: the banks all know they are insolvent, so they've transferred their bad debt to the US taxpayers through the Fed. It's the biggest swindle in history.) Think about it for a moment...if I told you or anyone here "I have a great scheme by which we can all make lots of money, but which collapses if more than 6% of its participants try to take money out," you'd rightfully dismiss it as a fraud. *Yet this is the foundation of the entire world banking system!* All that "free banking" does is deregulate the oligopoly on fraud to some degree. It's still a fraudulent system with perverse incentives -- both money creation and money destruction are positive feedback loops, and the fastest path to economic growth involves going into debt as quickly as possible. The only benefit to free banking is that these crashes happen more quickly because the debt is not backstopped by a central bank...which I agree is a good thing, but it's polishing deck rails on the Titanic. Stefano Vaj wrote: > On the other hands, it is absolute private property of wealth in the > modern sense which is a relatively new concept. The feodal lords > were not the *owner" of their land in the modern sense, they were > rather enjoying a privilege which could be accorded and under some > circumstances revoked, had a limited if any transferability, was > supposed to be parcelled through further concessions to lower lords > (vavasours, vassals of vavasours), etc. True. But we're not "owners" of our land in the modern sense, either: if we stop paying taxes or responding to random demands at random times, our privilege of occupation is revoked. Then there is eminent domain. And all transfers have to recorded by the local governmental agency: you can't just "sell" land directly to someone else. JS http://www.gnolls.org From darren.greer3 at gmail.com Sat Feb 26 05:21:23 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 26 Feb 2011 01:21:23 -0400 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: <006001cbd371$6562cc90$302865b0$@att.net> Message-ID: On Fri, Feb 25, 2011 at 2:19 PM, Kelly Anderson wrote: 2011/2/25 Darren Greer : > > On Fri, Feb 25, 2011 at 12:07 PM, Kelly Anderson > > > wrote: > Would you be happier with the program if it included a month of rehab > and counseling prior to the sterilization? I think I'd be happier with the program if it targeted men instead of women. For a couple of reasons. 
One, if you wanted to reduce the number of unwanted and/or drug addicted children coming into the world, wouldn't men be the obvious choice? Women get pregnant and they're out of commission reproduction-wise for the gestation period. While the man that impregnated her can go out impregnate a dozen more in that time. From a pure numbers stand-point (and I dislike talking about people in such terms, but will for the sake of this discussion) such a program would make more sense if it used money to entice men to have vasectomys rather than women to have their tubes tied. Secondly, if a man does get clean and regrets the poor decisions he made during his drug using period, and becomes a responsible member of society and is willing to take on the burden of having children, he can have his procedure reversed. A woman currently can't. > > > The point of the exercise is not to protect the drug addict, but to > protect the potential child from the drug addict, and to protect > society from the burden of the potential child. A $500 investment > through this program saves society somewhere around $500,000+. And yes, that is where it is reasonable. But there may be another hidden cost that you may not have considered. I'm not a lawyer, but I think one could make a reasonably good argument in a courtroom that offering a drug-addicted woman money for sterilization was coercion. Especially if that woman had cleaned up, got her life together, and decided that she was ready to have children and couldn't. Also, if she got religion, which is likely given the nature of most twelve step programs, someone might really be in trouble. Justice may be blind, but God can make the blind see again. > > > Most drug addicts made the choice to take that > first dose of their drug of choice. Sometimes they do, sometimes they don't. And even if they do, does that one choice alone damn them to their fate? Have you ever chosen to take a drink of alcohol? If so( and nearly everyone has) than an alcoholic who has made the same choice of you and whose body and mind somehow reacts differently than yours is at fault for making the same choice that you did? Yet in your case your not an alcoholic (I presume :) ) and in his case he is. So is it the choice that condemns him or the predisposition to the condition? > > > Drug addicts in the throes of their > > addictions need to be treated the same way, as if they have a disability. > > Why? What is the moral basis of that statement? I know it's the > politically correct position, but is it philosophically correct? > There is good medical evidence for it. There is an identifiable symptom list of drug addiction. There are stages for relapse that are remarkably similar in each person. (Read Dr. Terrance Gorsky) It is listed in the compendium of psychiatric illnesses as a bona-fide disease. There seems to be some genetic link, as it often runs in families. And I think the real indicator is that there seems to be no environmental, cultural and racial factors that predispose a person to it: it runs across all sectors of society. > > > If someone wants to pay to try and get someone off of drugs, more > power to them. It should be their choice. Paying taxes is not a > choice. So using government money to cure addicts is theft in my book. > Using private funds to do so is entirely permissible of course. But using private funds is as damaging as using public. Imagine if all the money that currently goes to help untreated drug addicts living on the street was invested into the economy. 
Or paid in taxes. Or even kept in low interest bank accounts. But the big pay-off for getting drug addicts treatment is in lowering crime rates. And not just drug addicts. People who use illicit drugs period. For every gram of cocaine you hold in your hand, someone has likely been killed to get it there. Organized crime and biker gangs thrive on it and cost societies billions each year. Rescuing babies or limiting their births is an easy sell compared to getting the adults help, but it makes sound economic sense to do so. >I separate the drug addict and how we should treat her from the child of the drug addict and how we should treat him. I have eight children who were children of a drug addict prior to being my children. They have suffered substantially from the poor choices of their mothers.< I read all your posts related to this, and recognized that you have good reason for supporting this program. I would probably support it too, if I were in your position. I was careful to criticize the program as respectfully as I could without criticizing you for supporting it. I went to a boxing match with my Dad tonight. It was the gold medal round for the Canadian Games and since I had never been to a live blood-sport before, I was curious. Some guy was getting his face pummeled in the ring and I was busy trying to list in my head all the English language turns of phrase and cliches associated with the sport (I came up with about seven.) Incidental aside. But I was also thinking about your post. And how I would respond when you responded, as I knew you would. I recognized your very intense personal involvement with the issue. And I was thinking that for me at least there is an emotional under-current, some kernel of experience that cannot be analyzed or intellectualized, that is the foundation upon which I build much of my theory and base my positions. I think that is true for many of us. So, to give you my example, and not that it makes my argument any more cogent or relevant, because it doesn't, I was at the age of twenty-seven addicted to morphine and cocaine. At the age of twenty-eight I cleaned up. Now, some fifteen years later, and I have a rich, varied life, the career I always wanted and give back to society as much or perhaps more than it has ever given me. But then I was living in a homeless shelter and I did things for money for drugs that I still firmly regret. However, none of my actions were irreversible. I was able to get it all back. When I read your post I thought of my own experience, and how someone in my position might feel if they gave up their ability to reproduce for a hit of crack and then rediscovered the world as I have. That's why I kept my response as logical and rational as possible, because I have such a close emotional attachment to it. Thanks for posting Kelly. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 26 06:08:06 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 26 Feb 2011 02:08:06 -0400 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups?. 
In-Reply-To: <9124D70E-4639-4D57-A336-096F6997E558@bellsouth.net>
References: <4D673974.7040207@canonizer.com> <9124D70E-4639-4D57-A336-096F6997E558@bellsouth.net>
Message-ID:

2011/2/25 John Clark wrote:
> If the people of Libya want a fundamentalistic Islamic dictatorship
> sympathetic with al-Qaida, and I hope not but that very well could be what
> they want, then that is not what I want.

Hate to do a 'me too,' but feel I must. As happy as I am to see some countries throwing off or trying to throw off oppressive regimes, we could well end up with worse ones. We once supported the Taliban because it was thought they would help stabilize Afghanistan after the Soviet war. And as illogical as this may seem, given the choice between the devil and the witch--theocracy or secular dictatorship--I'll take the dictator. They're less likely to push the button with the thought that there will be a recount in the after-life.

Darren

--
*There is no history, only biography.*

*-Ralph Waldo Emerson *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From moulton at moulton.com Sat Feb 26 07:01:33 2011
From: moulton at moulton.com (F. C. Moulton)
Date: Fri, 25 Feb 2011 23:01:33 -0800
Subject: [ExI] Free banking and fractional reserve banking (Re: Serfdom and libertarian critiques)
In-Reply-To: <4D687B97.9090001@gnolls.org>
References: <4D687B97.9090001@gnolls.org>
Message-ID: <4D68A54D.5060201@moulton.com>

J. Stanton wrote:
> [I hope that, since this discussion is more about banking at this point,
> I can still respond to these messages even though the libertarian
> quarantine has been re-established.]

In your reply you confuse different concepts. This is not about libertarianism; it is about fundamental definitions. Below I try to draw the distinction again.

> F. C. Moulton wrote:
>> I am somewhat baffled by your comments because your comments ignore
>> reality. Libertarians have long complained about government privileged
>> banking. And obviously all anarchists by definition are opposed to
>> government granted privilege in commerce or any other area. Plus
>> economists discuss free banking and it is easy to find. Since I have
>> been providing so many text links here are some video links:
>> http://www.youtube.com/watch?v=5P7W1G1hbiQ
>> http://www.youtube.com/watch?v=0PyS2NtW3xA
>
> This is a common point of confusion. "Free banking" still allows banks
> the privilege of creating money by issuing debt ("fractional reserve
> banking"), a fraudulent practice that any of us would go to jail for,
> and which is a special power granted only to "banks" by governments

Currently most banks are chartered by governments, but remember this is not necessary. If governments got out of the bank regulation business then we would probably still have banks. Consider the situation where the government is not involved in giving any special privilege to banks. Opening a bank is just like opening a flower shop as far as the government is concerned. You could start the Stanton Bank and engage in taking deposits and clearing checks, and you could charge a fee for holding people's money. I could start Moulton Bank and tell people that, unlike your bank, the Moulton Bank was a fractional reserve bank and would pay some percentage on deposits and a large amount on Certificates of Deposit and no extra charge for clearing checks. Note that there is no fraud involved. This is such a crucial point that I will repeat it one more time: there is no fraud involved.
Each bank is very clear about what it does. People can take their choice; they would know their risks. At your bank inflation would reduce the value of the deposits; in my bank there is the possibility of my bank making a bunch of bad loans and going out of business and depositors being left short. Some people choose one risk and some the other. But the key is that there is also no government privilege.

So please do not confuse government privilege, fractional reserve banking, and free banking. There are very important differences.

Fred

From phoenix at ugcs.caltech.edu Sat Feb 26 07:27:12 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Fri, 25 Feb 2011 23:27:12 -0800
Subject: [ExI] democracy sucks
In-Reply-To:
References:
Message-ID: <20110226072711.GA30316@ofb.net>

On Fri, Feb 25, 2011 at 11:03:25AM -0400, Darren Greer wrote:
> Funny enough, Aristotle agreed with you, and for similar reasons. In
> The Politics he characterizes it as tyranny by the majority. He
> classes it right up there with oligarchy and monarchy as potentially
> corrupt systems of governance. I've always thought it strange that we

Which is kind of unhelpful by itself, since that exhausts the logical possibilities.

> claimed to be looking to the ancient Greeks for our inspiration for
> it, when their greatest political thinker and philosopher next to
> Plato (and better than Plato in my opinion, because he made no
> mind-body duality distinction and didn't set us up for the one
> God-heaven-soul-original sin crap that was to come later like Plato
> did) dumped all over it.

Well, this greatest thinker thought women had fewer teeth than men. For a bit of perspective.

Also, from a modern perspective, Democritus or Epicurus might have been better or equivalent thinkers. Unfortunately not much of their writings survives, so it's rather hard to be sure. Democritus thought the world was made of atoms and the Milky Way made of stars and wrote the first encyclopedia, so there's some intriguing potential there.

More relevantly, what Aristotle called democracy isn't what we call democracy. Athenian democracy was a mix -- varying over two centuries -- of New England town meetings and decision-making by giant juries. In Aristotelian terms, almost any large democracy today is actually a democratically selected and replaceable (that's the important part) oligarchy, with the occasional plebiscite (a Roman concept).

Even more relevantly, as I understand it Athenian democracy ended with the conquest by Macedon (though it had bounced back from conquest by Sparta and imposed oligarchy) -- and Aristotle was the tutor of Alexander. I suggest that Aristotle's career would have been rather different, and possibly shorter, if he had extolled the virtues of democratic self-government. Great thinkers aren't immune to bias, whether selling out to pay the bills or stay alive. Or Plato's ideal society, a totalitarian state ruled by people remarkably like Plato. Or being a great thinker with property and slaves and coming up with ways to disparage attempts to redistribute property and free slaves. Contemplating elitism is great when you envision being part of the elite.
-xx- Damien X-) From kellycoinguy at gmail.com Sat Feb 26 08:03:56 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 26 Feb 2011 01:03:56 -0700 Subject: [ExI] Voluntary Sterilization (was Re: Same Sex Marriage (was Re: Call To Libertarians)) Message-ID: 2011/2/25 Darren Greer : > On Fri, Feb 25, 2011 at 2:19 PM, Kelly Anderson > wrote: >> 2011/2/25 Darren Greer : >> > On Fri, Feb 25, 2011 at 12:07 PM, Kelly Anderson >> > >> > wrote: >> Would you be happier with the program if it included a month of rehab >> and counseling prior to the sterilization? > > I think I'd be happier with the program if it targeted men instead of women. > For a couple of reasons. One, if you wanted to reduce the number of unwanted > and/or drug addicted children coming into the world, wouldn't men be the > obvious choice? Women get pregnant and they're out of commission > reproduction-wise for the gestation period. While the man that impregnated > her can go out impregnate a dozen more in that time. From a pure numbers > stand-point (and I dislike talking about people in such terms, but will for > the sake of this discussion) such a program would make more sense if it used > money to entice men to have vasectomys rather than women to have their tubes > tied. Secondly, if a man does get clean and regrets the poor decisions he > made during his drug using period, and becomes a responsible member of > society and is willing to take on the burden of having children, he can have > his procedure reversed. A woman currently can't. The problem with applying this procedure to men is that if ONE man is still fertile, he can still impregnate hundreds of women per year. Something like 25% of the population of Mongolia are direct descendents of Ghengis Khan. Genetic Eve lived many hundreds of thousands of years ago (400,000 or so I think?), while genetic Adam was only 60,000 years ago. While I respect your opinions, choosing women over men for this program is just plain good math and science. It is interesting regarding the reversibility. I would be happy to pay a woman $20 just to have a depo provera shot... which would wear off in a couple of months. It would be a little harder to keep her coming back, but it might make the program more palatable. http://en.wikipedia.org/wiki/Depo-Provera Understand that many of the women participating in the program were HAPPY to not have to worry about bringing a baby into their situation. >> The point of the exercise is not to protect the drug addict, but to >> protect the potential child from the drug addict, and to protect >> society from the burden of the potential child. A $500 investment >> through this program saves society somewhere around $500,000+. Sorry, I should have said $500,000 per child. If she would have several children, you can multiply that. Four of my children share the same mother. She started having babies at 13 and now has seven that I know of. Probably she's up to ten or eleven by now. She's only about 30. She is a drug addict, and has several serious mental disorders. > And yes, that is where it is reasonable. But there may be another hidden > cost that you may not have considered. I'm not a lawyer, but I think one > could make a reasonably good argument in a courtroom that offering a > drug-addicted woman money for sterilization was coercion. Especially if that > woman had cleaned up, got her life together, and decided that she was ready > to have children and couldn't. 
Also, if she got religion, which is likely > given the nature of most twelve step programs, someone might really be in > trouble. Justice may be blind, but God can make the blind see again. I think you could set up a foundation that would administer the program that would be fairly immune to such legal efforts. Such a foundation could simply fold at the site of such a law suit. I am not a lawyer, it just seems that such could be possible. The savings to society are difficult to argue with (other than perhaps exact numbers, the order of magnitude is pretty well a fact.) >> Most drug addicts made the choice to take that >> first dose of their drug of choice. > > Sometimes they do, sometimes they don't. And if that first time isn't a choice, maybe they still have a choice the second time, except for crack cocaine, meth and other VERY highly addictive substances. > And even if they do, does that one choice alone damn them to their fate? One choice gets you pregnant, and damns you to that fate. Life is full of such decisions. We seem to be free to make choices, but I do not think we are free from the consequences of our choices. > Have you ever chosen to take a drink of alcohol? I'm weird in this respect. The answer is no. Not one drink, ever. Not one cigarette, ever. Not even a cup of coffee, I've never seen an illicit drug outside of a police presentation. This is clearly NOT normal. It is a side effect of being brought up as a Mormon, and as an atheist I have seen no reason to change my behavior in this respect. It has served me well. I'm pretty sure that I would be the kind to be addicted after just one drink. > If so( and nearly everyone has) than an alcoholic who has made > the same choice of you and whose body and mind somehow reacts differently > than yours is at fault for making the same choice that you did? Yet in your > case your not an alcoholic (I presume :) ) and in his case he is. So is it > the choice that condemns him or the predisposition to the condition? The predisposition isn't his fault. The choice is still his choice. He may not have been informed that one drink can make some people into immediate alcoholics. I don't think anyone can honestly make that mistake about crack. >> > Drug addicts in the throes of their >> > addictions need to be treated the same way, as if they have a >> > disability. >> >> Why? What is the moral basis of that statement? I know it's the >> politically correct position, but is it philosophically correct? > > There is good medical evidence for it. There is an identifiable symptom list > of drug addiction. There are stages for relapse that are remarkably similar > in each person. (Read Dr. Terrance Gorsky) It is listed in the compendium of > psychiatric illnesses as a bona-fide disease. There seems to be some genetic > link, as it often runs in families. And I think the real indicator is that > there seems to be no environmental, cultural and racial factors that > predispose a person to it: it runs across all sectors of society. Poverty has to be correlated with drug use. It just has to be. Are you SURE of this statement??? Libertarianism is a two sided coin. Side one gives you a great deal of freedom to make stupid choices if you wish. Side two is that you are personally responsible for your stupid choices and their consequences. The difficulty is what happens when your stupid personal choice hurts someone else like an unborn child, or someone you kill or maim in an automobile accident. 
My cousin hurt someone in a car accident that was his fault, he ran a stop sign and it ruined his life. The first rule of life is SHIT HAPPENS. Getting yourself addicted to drugs is more easily preventable than running a stop sign. >> If someone wants to pay to try and get someone off of drugs, more >> power to them. It should be their choice. Paying taxes is not a >> choice. So using government money to cure addicts is theft in my book. >> Using private funds to do so is entirely permissible of course. > > But using private funds is as damaging as using public. Remember this is a libertarian conversation. It is ALWAYS better from that viewpoint to allow people to spend money by their own choice than by governmental force. > Imagine if all the > money that currently goes to help untreated drug addicts living on the > street was invested into the economy. Drugs are bad. They should be legal. But drugs are bad. If they were legal, we could do a better job of teaching people why they are bad, at least that's my theory. > Or paid in taxes. Or even kept in low > interest bank accounts. But the big pay-off for getting drug addicts > treatment is in lowering crime rates. And not just drug addicts. People who > use illicit drugs period. For every gram of cocaine you hold in your hand, > someone has likely been killed to get it there. Organized crime and biker > gangs thrive on it and cost societies billions each year. In my ideal world, drugs are legal. Anyone who understands the history of Prohibition should understand this. And not just Heroin, but Oxycontin too. There is little reason for the current prescription racket. If you are a responsible citizen and you look up the dangers of Oxycontin, you would be an idiot not to talk to a doctor before you start taking it, but you should have the liberty to do so if you are stupid. Libertarians believe in stupidity. Everyone else seems to have given up on it. I believe that you should have the right to starve yourself to death if you aren't ambitious enough to make a living for yourself. (Those who are truly mentally ill or truly mentally impaired should be cared for, but not by the government.) > Rescuing babies or > limiting their births is an easy sell compared to getting the adults help, > but it makes sound economic sense to do so. It may, but it makes MORE economic sense to prevent the birth of a drug baby in the first place. There is no way you can argue that. No way. In a libertarian society, someone would probably choose to help adults. For Zoroaster's sake, there are citizens that spend all their spare money saving homeless reptiles!!! I saw one on the news last week. Imagine what people would do without the shackles of government. Charitable contributions are about 2% of the GDP in the US (less elsewhere). And those contributions are used with an effectiveness many times that of the money confiscated by the government and overseen by bureaucrats. >>I separate the drug addict and how we should treat her from the child > of the drug addict and how we should treat him. I have eight children > who were children of a drug addict prior to being my children. They > have suffered substantially from the poor choices of their mothers.< > > I read all your posts related to this, and recognized that you have good > reason for supporting this program. I would probably support it too, if I > were in your position. I was careful to criticize the program as > respectfully as I could without criticizing you for supporting it. I am not offended. 
I have a pretty thick skin. I understand that there are aspects of this program that are deeply troubling. Nevertheless, it is a contract entered into by adults, in a FREE way. Freedom is so wonderful that I am overwhelmed by the free act over and above the horror of taking away a woman's ability to reproduce. One mistake that I think is commonly made is that you have a RIGHT to reproduce. And that right can't be lost easily in our society. I think that's balderdash. You commit a felony, you lose the right to bear arms, and vote. I say, you hurt a child, you should lose your right to reproduce. There really is no difference. Also, supposing that one of these young ladies does, by some miracle, change her life. She is free to adopt a baby, even one born addicted to drugs itself. That would be a much greater tribute to her victory over drugs than to merely give birth to a child! I would really support such a person doing that. > I went to a boxing match with my Dad tonight. It was the gold medal round > for the Canadian Games and since I had never been to a live blood-sport > before, I was curious. Some guy was getting his face pummeled in the ring > and I was busy trying to list in my head all the English language turns of > phrase and cliches associated with the sport (I came up with about seven.) > Incidental aside. But I was also thinking about your post. And how I would > respond when you responded, as I knew you would. I recognized your very > intense personal involvement with the issue. And I was thinking that for me > at least there is an emotional under-current, some kernel of experience that > cannot be analyzed or intellectualized, that is the foundation upon which I > build much of my theory and base my positions. I think that is true for many > of us. Sure. And emotions are how we run. They are potentially our highest kind of intelligence. I don't think you are stupid, nor ignorant for having a different opinion than I do. You are wrong about the men vs. women issue, but that's just a minor mistake that anyone could make. > So, to give you my example, and not that it makes my argument any more > cogent or relevant, because it doesn't, I was at the age of twenty-seven > addicted to morphine and cocaine. At the age of twenty-eight I cleaned up. > Now, some fifteen years later, and I have a rich, varied life, the career I > always wanted and give back to society as much or perhaps more than it has > ever given me. But then I was living in a homeless shelter and I did things > for money for drugs that I still firmly regret. However, none of my actions > were irreversible. I was able to get it all back. When I read your post I > thought of my own experience, and how someone in my position might feel if > they gave up their ability to reproduce for a hit of crack and then > rediscovered the world as I have. > That's why I kept my response as logical and rational as possible, because I > have such a close emotional attachment to it. How many of your friends from those days have recovered? How many are dead? Probably, you don't know, because if you still had those friends, you would likely still be addicted... but you must realize that your story is not the most usual outcome. It's not rare, but it is in the minority of outcomes. If your idea is to protect the innocent victims, which is more innocent, the drug addict who made a choice? Or the child who had no choice at all? > Thanks for posting Kelly. You're welcome Darren. 
I understand your feeling of horror when looking at this sort of thing. I would just encourage you to think of it in terms of the freedom enjoyed by all the participants. -Kelly From anders at aleph.se Sat Feb 26 14:10:55 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 26 Feb 2011 14:10:55 +0000 Subject: [ExI] democracy sucks (Anders Sandberg) In-Reply-To: References: Message-ID: <4D6909EF.10808@aleph.se> Keith Henson wrote: > On Fri, Feb 25, 2011 at 5:00 AM, Anders Sandberg wrote: > > >> Basically, I think there is no way you can avoid a complex, remote >> government if you want to have a complex big society. >> > > It's a marvel that government works as well as it does when you > consider our evolutionary history. > Yes, it is surprising. A bit like how we can learn to become so good at reading/writing or math, which are also non-evolved abilities yet we can make use of our evolved brain systems to do them fairly well, with amazing consequences. The fact that we can construct arbitrary and very complex social structures is quite impressive - and many of them do work, as long as they do not break our evolved social affordances too much. > "Complex, remote government" combined with the drift into an oligarchy > is a formula for disaster. > Sure. But complex remote governments have also been very successful in improving quality of life, at the very least by allowing large-scale societies that can efficiently reduce homicide rates (see Pinker's essays on the subject, very interesting). Slides towards oligarchy are not irreversible, as evidenced by the fact that once upon a time all governments were oligarchies or autarchies, and now we have moved to a situation where the real power does get quite distributed. Given my research I have come to the conclusion that most people, even libertarians, underestimate the threat of governments (surveillance; powerlaw distributed wars and democides; singletons). The solution seems to me to find better ways of keeping governments in check. > http://voices.washingtonpost.com/blog-post/2011/02/stephen_colbert_explains_anony.html > > At this point Anonymous seems to be the most competent group on the > planet. I wonder if this concept could eventually evolve into a world > government? > Imagine a military run by Anonymous. Or a power grid. Or a hospital. I think there is something here: voluntary, self-organized and decentralized groups clearly can do amazing things - IETF, Wikipedia, Linux, various social movements etc. Not to mention the free market. But we do not yet have a good theory for when they work and when they don't work, with maybe the exception of the free market (which we still do not understand that well). Some hints can be found in considering their incentive structures and past successful examples. I am looking forward to read Jane McGonigal's "Reality is Broken" to see what lessons we can get from the game community. The deep problem is to create spontaneous orders that do what we want and evolve to become better at it. Yes, this is basically the friendly AI problem too. And about as easy. But we have some good examples, we have some theory, and even a modestly functional case can be practically very useful. 
We can hardly do worse than previous thinkers, since we have information they did not have (complexity theory, lots of new science and data, information about past failures and successes ), new tools (Internet, distributed communications, lots of clever software) and new ways of testing out ideas (simulations, online games, economic experiments). -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Sat Feb 26 14:16:56 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 26 Feb 2011 14:16:56 +0000 Subject: [ExI] Joule currency (Re: Banking, corporations, and rights) In-Reply-To: <004a01cbd51e$f0119390$d034bab0$@att.net> References: <4D675E2E.8030901@gnolls.org> <20110225181816.5.qmail@syzygy.com> <004a01cbd51e$f0119390$d034bab0$@att.net> Message-ID: <4D690B58.80608@aleph.se> spike wrote: > The problem with that is that not all joules are created equal. You need > another measure to go with any measure of energy, its entropy. The ocean is > full of joules in its thermal energy, but the entropy is so high it cannot > be converted to useful anything. > > High entropy = bad, low entropy = good. > Would a negentropy currency make sense? (I hesitate to call it an extropy currency) Earth can only dissipate a fixed amount of entropy into space as long as its temperature remains constant, there is a fundamental limit/link to erasing information, and most processes we tend to regard as 'bad' seem to involve a lot of entropy increase. The big problem with resource based currencies is that they tend to inflate when you invent or find new resources - Spain got into serious trouble by conquering the New World and then getting flooded with gold. Invent a fusion reactor and the energy currency inflates. Maybe negentropy is the only way around it, since it is so tough to make. (Another problem with resource based currencies is that prices are not resource-based but due to subjective human desires and needs) -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From anders at aleph.se Sat Feb 26 14:40:05 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 26 Feb 2011 14:40:05 +0000 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups? In-Reply-To: <4D673974.7040207@canonizer.com> References: <4D673974.7040207@canonizer.com> Message-ID: <4D6910C5.8070206@aleph.se> Brent Allsop wrote: > It seems to me that if you could some way have an easy way to > reliably, easily, and in real time, know concisely and quantitatively, > what the entire population of Libya wanted, war could easily be > avoided. Why do we all have to spend so much effort protesting before > anyone finally gets a clue as to what the people want? Graeme Robertson has the following interesting theory about why mass protests are currently bringing down regimes: http://www.themonkeycage.org/2011/02/why_do_protests_bring_down_reg.html Basically, they are a costly form of signalling (and hence trustworthy as being real sentiments, unlike pro-regime protests). Authoritarian regimes are usually coalitions of various elites, and protests are giving them information they can use to decide whether to continue working with the incumbents or defect. So, at first this seems to be a strong case for sentiment mapping. But it must be truthful sentiment mapping that is not easily spoofed. 
Yet it must also be sufficiently anonymous that people with the wrong views do not get victimized (if not by the government, then by their neighbors). I'm not entirely sure these two factors go well together. > If you could easily know, concicely and quantitatively what everyone > wanted, obviously, if the leader was diviating from this, especially > if he wanted to kill anyone, everyone could just ignore him, and just > do what the people wanted, instead, couldn't they? Problem solved? > Does anyone think differently? "Hmm, Our Glorious Leader wants to kill those layabouts in the Northern Provinces. I don't like that. But I do also like my cushy job here in the Department for Departmental Salaries. Any change in regime will threaten my job. So I do not really want to protest against those killings, I have a family to feed..." Most regimes stay in power through both carrots and sticks, plus a great deal of human inertia. Place the right carrots in the right hands, and you get a lot of people who are going to support you and argue against changes. Now, I still think it is a good thing to give people access to communications and ability to develop their own online institutions (and the media savy that comes after one or two generations - a lot of the newly connected areas are still unused to handle a high meme density environment). Most authoritarian regimes work by limiting information, while open societies work well with unlimited information. But it is not going to be an easy solution. Consensus building is important but hard. Especially since many forms of new communications makes it easy to ignore people with different views and form a consensus (i.e. groupthink) with the people who agree with us. It will not just happen because people can talk to each other. There has to be incentives and social constraints available to make the consensus something that sticks. But the right "communications primitives" like some good sentiment measurement method might help enabling it. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From algaenymph at gmail.com Sat Feb 26 14:44:38 2011 From: algaenymph at gmail.com (AlgaeNymph) Date: Sat, 26 Feb 2011 06:44:38 -0800 Subject: [ExI] RPGs and transhumanism In-Reply-To: <4D6909EF.10808@aleph.se> References: <4D6909EF.10808@aleph.se> Message-ID: <4D6911D6.5030105@gmail.com> On 2/26/11 6:10 AM, Anders Sandberg wrote: > I am looking forward to read Jane McGonigal's "Reality is Broken" to > see what lessons we can get from the game community. Interesting you of all people should say this, as I first remember you from the days of Mage: the Ascension. Any ideas for how today's RPGs can promote transhumanism? From agrimes at speakeasy.net Fri Feb 25 15:26:10 2011 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 25 Feb 2011 10:26:10 -0500 Subject: [ExI] a fun brain in which to live In-Reply-To: <003401cbd4ba$e7418ef0$b5c4acd0$@att.net> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net> <4D4DA986.1030802@canonizer.com> <00c801cbc62a$7aedec10$70c9c430$@att.net> <4D672B0E.3@canonizer.com> <003401cbd4ba$e7418ef0$b5c4acd0$@att.net> Message-ID: <4D67CA12.60101@speakeasy.net> spike wrote: > Of course, all the time. In the long run, I see that as the only reasonable > future of mankind. Byte me. 
> Eventually in some form, software will write software, > and the result will be recursive self-improvement, and my fond hope is that > the result will want to upload us. I am one who is convinced that > consciousness is not strictly substrate dependent. Once we exist as > software, the things we can do with our brains will be astonishing in > variety. I, on the other hand, am a monist. > An example is one I brought up before. I want to be able to view the world, > at least temporarily, through female eyes. That would allow me to > understand the things women think, and that would make me a better husband. > Some things I just utterly fail to understand, starting with what in the > heck to women see in us? Uploading doesn't let you do that due to the numerous structural differences between the two brain types. I don't think such dimorphisms are very positive for humanity, I'd like to develop a hybrid design that has the best of both, but anyway. > Imagine that we can unify two or more different brains, and have a being > that is the superset of each individual. Then you might choose a person who > is wildly different from you, with which to temporarily unify. I don't know > what happens when people merge their consciousness, but we can't do it now. > We might be able to in the uploaded condition. Funny, I just read an interview of a woman working on doing that with neural interfaces. -- A prospect I actually find interesting up to the point of proclaiming it the ultimate future of humanity, at which point I switch to strong opposition for the same reason I oppose uploading. > I don't know. I want a shot at it, which is why I am probably going to go > in for cryonics. I actually don't think the singularity will happen in 30 > years (it might) but rather about 50, at which time I would be 100. I might > not make it that far. Then cause it to happen sooner. > Ja. I still just don't know with so much of this. I will sadly confess > that fifteen years ago I thought we would be farther along by now than we > are. But the singularity is still coming eventually, and when it does, I > can imagine no logical stopping place for it short of all the metals in the > solar system converted to computronium to form an MBrain, with humans > uploaded. If that is true, then it is imperative that the singularity be prevented. =| -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From darren.greer3 at gmail.com Sat Feb 26 15:09:36 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 26 Feb 2011 11:09:36 -0400 Subject: [ExI] democracy sucks In-Reply-To: <20110226072711.GA30316@ofb.net> References: <20110226072711.GA30316@ofb.net> Message-ID: On Sat, Feb 26, 2011 at 3:27 AM, Damien Sullivan wrote: > Well, this greatest thinker thought women had fewer teeth than men. > True. He also came up with a defense for slavery and some other lulus. And Plato thought it was a good idea to lie to the populace to keep them in line, which influenced Leo Strauss who became the philosophical father of the neocons. The anti-abolitionists used Aristotle's arguments to defend their brand of slavery, and of course the bible has been used to justify all kinds of deadly nonsense. Democritus thought that atoms were the smallest units of divisibility and that there were air atoms and earth atoms and thought atoms, etc. One of the hallmarks of being great is that you're going to have a lot of opinions about a lot of things, and time will prove you dead wrong on some of them. 
Our own great minds will meet that fate one day, including Einstein et al. If we dismissed every thinker from the past who was wrong about some things, we'd have a pretty short reading list. > > Which is kind of unhelpful by itself, since that exhausts the > logical possibilities. > > Not quite. He championed a polity, which, similar to what Jeff originally posted, elected officials sought only virtue and justice in the system and nothing else. I'm not sure if you have read the Politics, but in it, he goes on in great lengths about what happens if economy is the entire focus of government. The people chase an unattainable grail, wealth through regulation, which, let's face it, doesn't happen, and the officials become corrupt. And by virtue, Aristotle does not mean morally upright behavior, but rather decency through conduct -- honesty, integrity, accountability and the like. Elected officials are elected because they demonstrate these qualities, and once elected they work in concert to achieve justice. That's where it gets tricky, because Aristotle never clearly defines what that is. Plato did, but for him it was a perfect form and the ancient Greek precursor to one God, which has been the source of much woe. Aristotle also thought that virtue was dispensed at birth in fixed amounts and was usually found in the upper classes, so some problems there. Democritus thought the world > was made of atoms and the Milky Way made of stars and wrote the first > encyclopedia, so there's some intriguing potential there. > A scientific bias perhaps? :) I'm a fan of Democritus too. Many people I know with scientific bents are. He also surmised there might be life on other planets, which was pretty heady thinking for a guy who lived 2500 years ago. I guess that's why he is considered the father of Greek philosophy. Epicurean philosophy is closer to libertarianism, I think, than any of the others. And it is often misunderstood. Diogenes and the barrel philosophers were also interesting. > More relevantly, what Aristotle called democracy isn't what we call > democracy. Athenian democracy was a mix -- varying over two centuries > -- of New England town meetings and decision-making by giant juries. In > Aristotelian terms, almost any large democracy today is actually a > democratically selected and replaceable (that's the important part) > oligarchy, with the occasional plebiscite (a Roman concept.) > That's undeniably true. What Aristotle championed was representative government, very much of the kind we have today. True democracy with a vote for every citizen was possible in times when Troy, the legendary great city, was no more than five thousand inhabitants. (Actually Troy was a bit before A's time, but you know what I mean.) But the important point again is that Aristotle's concern was not so much how the mechanics of government were put in place, but what the focus was. If it was in the hands of the many, and the many were only focused on money, then the government would be focused on money. And that was in his purview a recipe for corruption. I would say he wasn't that far off. He suggested we needed to rise above that and focus on higher goals, and the money would take care of itself. That's why I was struck by Jeff's description of his accountability party and forcibly reminded of Aristotle. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anders at aleph.se Sat Feb 26 15:14:31 2011 From: anders at aleph.se (Anders Sandberg) Date: Sat, 26 Feb 2011 15:14:31 +0000 Subject: [ExI] RPGs and transhumanism In-Reply-To: <4D6911D6.5030105@gmail.com> References: <4D6909EF.10808@aleph.se> <4D6911D6.5030105@gmail.com> Message-ID: <4D6918D7.1080208@aleph.se> AlgaeNymph wrote: > On 2/26/11 6:10 AM, Anders Sandberg wrote: >> I am looking forward to read Jane McGonigal's "Reality is Broken" to >> see what lessons we can get from the game community. > > Interesting you of all people should say this, as I first remember you > from the days of Mage: the Ascension. Any ideas for how today's RPGs > can promote transhumanism? Eclipse Phase. This is my current favorite RPG, a fairly hard sf game. The motto of the game is "Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it." The game setting is the solar system, colonized by various transhuman clades and trying to survive in the aftermath of a bad singularity. It explicitly refers to transhumanism, existential risk - and the extropians is one of the political factions! http://www.eclipsephase.com/ The reason I like it, besides the fact that it is a pretty good game, is that it allows people to play out various transhumanist technologies and explore their consequences. The game is one of the few that has a serious section thinking about the sociological effects of being able to switch bodies, live virtually or modify minds. It is fun to have my players discuss whether they ought to stop or help the growth of Robin Hanson-style massive copying of uploads, characters quarreling over the proper treatment of forked selves or whether it is OK to blow up your body in a "temporary suicide" to get to a remote destination faster, and Tanzania trying to reboot itself in the rings of Saturn using a Farmville-like game and an army of psychiatrists. There are other transhuman games. GURPS Transhuman Space is around [ADVERT MEME: and "Cities of the Edge" by yours truly is appearing about now! ] That is a slightly less extreme setting, but players can again play with the technologies and ideas and see where they lead. I have never tried Freemarket, but it seems interesting. In a sense you can play transhumanism in any game. My gaming group ran a fantasy campaign that involved a kingdom that used a mixture of magic to enhnance people and build what was essentially an early industrial society. The idea is after all to see what happens when you start to change the human condition. -- Anders Sandberg, Future of Humanity Institute Philosophy Faculty of Oxford University From spike66 at att.net Sat Feb 26 15:32:53 2011 From: spike66 at att.net (spike) Date: Sat, 26 Feb 2011 07:32:53 -0800 Subject: [ExI] farmville, was RE: RPGs and transhumanism Message-ID: <007a01cbd5ca$6f45b320$4dd11960$@att.net> ...On Behalf Of Anders Sandberg >...Tanzania trying to reboot itself in the rings of Saturn using a Farmville-like game ...--Anders Sandberg, Would anyone here speculate about the wildly popular Farmville increasing the demand for actual farmland? My guess is that for every thousand people who spend time playing simulated farmer, there would be one or more who would like to try her hand at the real dirt and sweat version. If for no other reason, it would give the player street cred with the others, and perhaps lead to improvements in the simulation. 
spike From hkeithhenson at gmail.com Sat Feb 26 16:08:25 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 26 Feb 2011 09:08:25 -0700 Subject: [ExI] Money and banking Message-ID: I am slightly surprised that I have seen no mentions of http://en.wikipedia.org/wiki/Bitcoin but maybe I missed it. "Walls of text" are not very readable. Keith From kanzure at gmail.com Sat Feb 26 16:18:09 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 26 Feb 2011 10:18:09 -0600 Subject: [ExI] Money and banking In-Reply-To: References: Message-ID: On Sat, Feb 26, 2011 at 10:08 AM, Keith Henson wrote: > "Walls of text" are not very readable. Neither are exceedingly short messages. what do you want us to say about bitcoin? -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Feb 26 16:26:44 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 26 Feb 2011 11:26:44 -0500 Subject: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <007a01cbd5ca$6f45b320$4dd11960$@att.net> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> Message-ID: <4D6929C4.8040104@lightlink.com> spike wrote: > Would anyone here speculate about the wildly popular Farmville increasing > the demand for actual farmland? My guess is that for every thousand people > who spend time playing simulated farmer, there would be one or more who > would like to try her hand at the real dirt and sweat version. If for no > other reason, it would give the player street cred with the others, and > perhaps lead to improvements in the simulation. LOL. If they *do* try the real dirt and sweat version, they're gonna be some unhappy campers. ;-) And incidentally, one of my goals for my AGI work is to develop inexpensive systems to help turn land husbandry into something that is less smelly, brutish and boring. Organic, self-sufficient, High Farming types of farm can be exquisitely elegant systems. With the availability of abundant but cheap robot intelligence, that kind of land husbandry can be done with very little energy input, and produce more food than it does now. And by the way save the environment. Richard Loosemore From giulio at gmail.com Sat Feb 26 16:22:30 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 26 Feb 2011 17:22:30 +0100 Subject: [ExI] Money and banking In-Reply-To: References: Message-ID: 2011/2/26 Bryan Bishop : > On Sat, Feb 26, 2011 at 10:08 AM, Keith Henson > wrote: >> >> "Walls of text" are not very readable. > > Neither are exceedingly short messages. > > what do you want us to say about bitcoin? 
http://giulioprisco.blogspot.com/2011/02/bitcoin-cryptocurrency-for-free.html http://giulioprisco.blogspot.com/2011/02/donate-bitcoins-to-pioneer-one.html > > -- > - Bryan > http://heybryan.org/ > 1 512 203 0507 > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From rpwl at lightlink.com Sat Feb 26 17:43:46 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 26 Feb 2011 12:43:46 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: Message-ID: <4D693BD2.2050903@lightlink.com> Since this list is now *saturated* with political speculation (not to mention the occasional bit of hyperbolic ranting and liberal use - er sorry, I mean ;-) libertarian use - of the term "BS"), I am going to take this opportunity to try a smackdown, using a weapon that other people LOVE to use against anyone who presents ideas about AGI on this list. Which is to say: if you think these libertarian/anarchist proposals are so great, WHERE IS THE CODE? I mean that literally. Where are your system simulations, which show that society will remain stable when, for example, most government funded institutions are abolished? Where are your simulations -- your detailed, supercomputer-sized models, containing large numbers of realistic constraints -- of how economic systems will be able to function normally when the Federal Reserve is abolished and currency is replaced by, e.g., Joule-Credits? As I explained in a previous (widely ignored) post, most people on this planet have a pretty good understanding of the fact that these kinds of libertarian/anarchist ideas might sound good to some people, in theory, but they would actually lead to a degeneration of civilized human society. If people here claim that these ideas would NOT lead to such a degeneration, PROVE IT. The majority of humanity has a strong intuition that the consequences would be disastrous, so give us some EVIDENCE that our intuitions are mistaken by showing us actual, believable simulations of human society. All I hear at the moment is empty philosophizing and BS :-). Richard Loosemore From eugen at leitl.org Sat Feb 26 18:00:24 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 26 Feb 2011 19:00:24 +0100 Subject: [ExI] Money and banking In-Reply-To: References: Message-ID: <20110226180024.GE23560@leitl.org> On Sat, Feb 26, 2011 at 09:08:25AM -0700, Keith Henson wrote: > I am slightly surprised that I have seen no mentions of > http://en.wikipedia.org/wiki/Bitcoin but maybe I missed it. Not the right list for that. > "Walls of text" are not very readable. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jrd1415 at gmail.com Sat Feb 26 20:01:32 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Sat, 26 Feb 2011 13:01:32 -0700 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: On Fri, Feb 25, 2011 at 2:13 AM, Giulio Prisco > As far as the implementation of the proposal is concerned, I would not recommend starting new political movements and parties. 
One innovation I would suggest is encouraging/allowing members of other parties to join, without requiring them to terminate their membership in that other party. > Rather, I would recommend joining forces with the Pirate Party, which is the Party of the Free Internet and the only really novel and innovative political force to emerge in this century. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From eugen at leitl.org Sat Feb 26 20:50:58 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 26 Feb 2011 21:50:58 +0100 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D693BD2.2050903@lightlink.com> References: <4D693BD2.2050903@lightlink.com> Message-ID: <20110226205058.GO23560@leitl.org> On Sat, Feb 26, 2011 at 12:43:46PM -0500, Richard Loosemore wrote: > > Since this list is now *saturated* with political speculation (not to > mention the occasional bit of hyperbolic ranting and liberal use - er > sorry, I mean ;-) libertarian use - of the term "BS"), I am going to > take this opportunity to try a smackdown, using a weapon that other > people LOVE to use against anyone who presents ideas about AGI on this > list. > > Which is to say: if you think these libertarian/anarchist proposals are > so great, WHERE IS THE CODE? Right, for them to work you need to patch the human primate. > I mean that literally. Where are your system simulations, which show > that society will remain stable when, for example, most government Relevant aspects of human societies cannot yet be modelled effectively. > funded institutions are abolished? Where are your simulations -- your > detailed, supercomputer-sized models, containing large numbers of > realistic constraints -- of how economic systems will be able to > function normally when the Federal Reserve is abolished and currency is > replaced by, e.g., Joule-Credits? Energy as currency backing is not useful, because it gets consumed in the process. However, it would make sense to tie currency value to a basket of raw resources, with periodically adjustable composition and coefficients, which however have to be resistant to gaming. It would be a return to metal-backed/non-fiats, but without the disadvantages. Since we've been there, and we know the system is currently poorly managed, the risk would probably be low. > As I explained in a previous (widely ignored) post, most people on this > planet have a pretty good understanding of the fact that these kinds of > libertarian/anarchist ideas might sound good to some people, in theory, > but they would actually lead to a degeneration of civilized human > society. If people here claim that these ideas would NOT lead to such a Yes, you have to fix the agent. Current agents won't do. > degeneration, PROVE IT. The majority of humanity has a strong intuition > that the consequences would be disastrous, so give us some EVIDENCE that > our intuitions are mistaken by showing us actual, believable simulations > of human society. > > All I hear at the moment is empty philosophizing and BS :-). I agree.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Sat Feb 26 21:29:32 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 26 Feb 2011 16:29:32 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <20110226205058.GO23560@leitl.org> References: <4D693BD2.2050903@lightlink.com> <20110226205058.GO23560@leitl.org> Message-ID: <4D6970BC.5030209@lightlink.com> Eugen Leitl wrote: > ... for [these libertarian/anarchist proposals] to work you need to > patch the human primate. Did you really say that? You are suggesting that libertarians need to ... what?... brainwash people? ... cut out bits of their brains to change their behavior? ... genetically engineer them? :-( I am not sure if you are advocating that idea, or telling me that all these proposed libertarian/anarchist ideas are dumb *because* they impplicitly assume that humans have been "patched" in some way. > Relevant aspects of human societies cannot yet be modelled > effectively. So there is really no code, no proof whatsoever, that these mechanisms will actually work? You know, when some AGI researchers don't have working code, they get slammed. But those AGI researchers are actually working to produce the code, even as they get slammed ... whereas you seem to be saying that libertarian fantasies *cannot* yet be modeled, so I guess those fantasies deserve to be slammed even more than AGI theories, for which code is on the way. > Energy as currency backing is not useful, because it gets consumed > in the process. However, it would make sense to tie currency value > to a basket of raw resources, with periodically adjustable > composition and coefficients, which have to be however resistant to > gaming. > > It would be a return to metal-backed/non-fiats, but without the > disadvantages. Since we've been there, and we know the system is > currently poorly managed the risk would be probably low. But all that was pure speculation, and more speculation is hardly a response to my request for proof. (And, BTW, the vast majority of economists seem to think that going back to a metals standard is crazy in spades.) > Yes, you have to fix the agent. Current agent's won't do. Again, this is mind-boggling. I really want to hear more about this "fixing the agent" business. I am puzzled as to how libertarians propose to "fix" people. It sounds profoundly ominous. Richard Loosemore From eugen at leitl.org Sat Feb 26 22:08:53 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 26 Feb 2011 23:08:53 +0100 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6970BC.5030209@lightlink.com> References: <4D693BD2.2050903@lightlink.com> <20110226205058.GO23560@leitl.org> <4D6970BC.5030209@lightlink.com> Message-ID: <20110226220853.GT23560@leitl.org> On Sat, Feb 26, 2011 at 04:29:32PM -0500, Richard Loosemore wrote: > > Eugen Leitl wrote: > > ... for [these libertarian/anarchist proposals] to work you need to > > patch the human primate. > > Did you really say that? You are suggesting that libertarians need Yes. Yes! I did. Apart from the libertarian part, of course. > to ... what?... brainwash people? ... cut out bits of their brains to Nothing so pedestrian. Human augmentation. We're transhumanists, after all. Immanentize the Eschaton! > change their behavior? 
... genetically engineer them? :-( Mere genetic engineering would be a) too slow b) not enough. A simple patch would be a wearable which allows nym tamperproof reputation tracking and easy querying. A better approach is to factor the external system in. > I am not sure if you are advocating that idea, or telling me that all > these proposed libertarian/anarchist ideas are dumb *because* they > impplicitly assume that humans have been "patched" in some way. I am suggesting that with the current agent makeup higher co-operative strategies are not stable. I do not know this, of course, but it does seem a bit like kicking a dead whale up a beach for a living. > > Relevant aspects of human societies cannot yet be modelled > > effectively. > > So there is really no code, no proof whatsoever, that these > mechanisms will actually work? You know, when some AGI researchers I'm afraid the best way to model is to make it happen. > don't have working code, they get slammed. But those AGI researchers The advantage of real physical societies, is that they're real physical societies. Once they happen, they can be observed. > are actually working to produce the code, even as they get slammed ... I have no beef with people who build systems. I have problems with the usual brand of AI mental masturbation, which is sterile. > whereas you seem to be saying that libertarian fantasies *cannot* yet I'm not interested in libertarian fantasies. Just emergent higher co-operative behaviour as a side effect of smarter agents. It's probably a series of spatiotemporal phase transitions, according to what little we know from ALife simulations. > be modeled, so I guess those fantasies deserve to be slammed even more > than AGI theories, for which code is on the way. How can you tell a kook? By the G. By all means, feel free to produce a working system. > > Energy as currency backing is not useful, because it gets consumed > > in the process. However, it would make sense to tie currency value > > to a basket of raw resources, with periodically adjustable > > composition and coefficients, which have to be however resistant to > > gaming. > > > > It would be a return to metal-backed/non-fiats, but without the > > disadvantages. Since we've been there, and we know the system is > > currently poorly managed the risk would be probably low. > > But all that was pure speculation, and more speculation is hardly a > response to my request for proof. The proof of the pudding is in the eating. (And, no, you can't have your cake, and eat it, too). > (And, BTW, the vast majority of economists seem to think that going > back to a metals standard is crazy in spades.) Which is why you notice I'm *not* suggesting a metal-backed currency, or any backed (or baked) currency, but to tie currency to a diverse, periodically readjusted resource basket with built-in checks against manipulation by adjusting the composition and weight of such basket. See the difference? I thought you would. > > Yes, you have to fix the agent. Current agent's won't do. > > Again, this is mind-boggling. If you think the humanity is perfect, you're on the wrong list. > I really want to hear more about this "fixing the agent" business. > > I am puzzled as to how libertarians propose to "fix" people. It > sounds profoundly ominous. I wouldn't know. Ask the libertarians. 
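For readers who want to see the arithmetic behind the resource-basket idea in the exchange above, here is a minimal Python sketch of one possible reading: the currency unit is defined as a weighted sum of resource prices, and the weights (the "composition and coefficients") are periodically readjusted, with a crude cap standing in for the anti-gaming checks mentioned. All resource names, prices, and weights are hypothetical placeholders for illustration, not part of any actual proposal.

    # Hypothetical basket: the weights are the periodically adjustable coefficients.
    BASKET = {"steel_tonne": 0.40, "wheat_tonne": 0.35, "mwh_electricity": 0.25}

    # Made-up spot prices in some existing reference unit (illustration only).
    SPOT_PRICES = {"steel_tonne": 700.0, "wheat_tonne": 300.0, "mwh_electricity": 90.0}

    def rebalance(weights, max_share=0.5):
        """Crude anti-gaming rule: cap any single resource's share, then renormalize."""
        capped = {name: min(w, max_share) for name, w in weights.items()}
        total = sum(capped.values())
        return {name: w / total for name, w in capped.items()}

    def basket_index(weights, prices):
        """Value of one currency unit = weighted sum of resource prices."""
        return sum(w * prices[name] for name, w in weights.items())

    if __name__ == "__main__":
        weights = rebalance(BASKET)
        print("one currency unit is worth", basket_index(weights, SPOT_PRICES))

The parts the sketch leaves out - who adjusts the weights, on what schedule, and how manipulation of any single commodity market is detected - are exactly the hard parts of the proposal.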
From brent.allsop at canonizer.com Sat Feb 26 22:23:30 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sat, 26 Feb 2011 15:23:30 -0700 Subject: [ExI] Easy solution to wars like that in Libya, and in these groups? In-Reply-To: <4D680D34.3010800@mac.com> References: <4D673974.7040207@canonizer.com> <4D680D34.3010800@mac.com> Message-ID: <4D697D62.6050802@canonizer.com> Hi Samantha, On 2/25/2011 1:12 PM, Samantha Atkins wrote: > I have a design in mind for an open end survey and matching system > that the above sort of reminds me of. This is very great and exciting news to me. So many people have said exactly this to me. I would love to hear more about this. Would it be driven bottom up or top down (i.e. filtered/censored)? What is your motivation for such? How would it work? Would it encourage as much consensus as possible (like having a way to push less important disagreeable things to sub camps)? If so, then how? What are some specific examples of results that would exist if such were created and used?.... Please tell us more. > On 02/24/2011 09:09 PM, Brent Allsop wrote: >> >> Extropians, >> >> It seems to me that if you could some way have an easy way to >> reliably, easily, and in real time, know concisely and >> quantitatively, what the entire population of Libya wanted, war could >> easily be avoided. > > Are you assuming that this collective state of mind is particularly > rational or a good [enough] decision maker? Why would you assume > that when experience seems to show that relatively few people are > reasonably sane and competent about a great number of questions? The fact that so relatively few people are still morally reasonable, or morally educated to any level above moral insanity, is precisely the problem. The closest thing to moral education, moral reference information, or moral direction we have today for the masses is primitive scriptures or hierarchical institutions claiming to get moral direction through one top man directly from god. I think we all agree how morally insane and bottlenecked that is. That is why everything today is so war and hate based and why we still value rotting everyone in the grave. All these primitive hierarchies have evolved in a survival of the fittest way. Nobody listens to any other hierarchy, and instead builds walls, demonizes, and attempts to frustrate, destroy, and convert all of them as much as possible. There are very distinct walls that separate everyone. That is why we still have war, both violent and non. It's all because there is zero communication. What we need is to find some way to recognize or quantitatively measure who the moral experts are. And this determination method should be based on, or chosen by, each individual. Then you need to find some consensus building way to have as many of those experts as possible collaboratively develop concise descriptions and arguments for the various competing moral theories. Then you need to come up with some rigorous way to measure for expert consensus, by these moral experts, amongst the competing moral theories. In other words, each individual can select who they think are the best moral experts (i.e. select a canonization or filtering algorithm on their own) and then see what their chosen experts believe is the current state of the art of the most moral theory.
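A minimal sketch, in Python, of the per-reader "canonization" step described above: each reader supplies their own list of trusted experts, and the competing camps are then ranked by how much of that trust sits in each camp. The camp names, people, and weights below are invented for illustration only; this is not Canonizer.com's actual algorithm.

    # Competing "camps" (theories) and their declared supporters - invented data.
    CAMPS = {
        "Theory A": ["alice", "bob", "carol"],
        "Theory B": ["dave", "erin"],
        "Theory C": ["frank"],
    }

    # One reader's personal filter: whom they regard as experts, and how much.
    MY_EXPERTS = {"alice": 3.0, "erin": 2.0, "frank": 1.0}

    def camp_scores(camps, expert_weights):
        """Rank camps by the total weight of this reader's chosen experts in each."""
        return {camp: sum(expert_weights.get(person, 0.0) for person in members)
                for camp, members in camps.items()}

    if __name__ == "__main__":
        for camp, score in sorted(camp_scores(CAMPS, MY_EXPERTS).items(),
                                  key=lambda item: -item[1]):
            print(camp, score)

Two readers with different expert lists will see different rankings from the same underlying data, which is the point: the filter is chosen by each individual rather than imposed from the top.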
Having all the competing theories, and arguments for such, together in one open and unbiased place, with quantitative measures of who is in what camp, or who wants what, can finally be a source of moral education reference material for the masses. It will finally be a way for everyone to communicate, and find out what EVERYONE still wants, in a way where we have as our goal to get all of it, not just what our particular hierarchy wants (while building walls, frustrating and warring against everything else). Because of the lack of good moral educational sources, everyone tends not to think too much about morals. Without thinking about them much, people fall into Luddite, primitive, and traditional morals, like rotting people in the grave, and tend to fear progress of any kind. If they were forced to think about it a bit more, if they were asked or expected to explicitly specify which moral hypothesis they are currently working with, and why, and had a good unbiased source of expert-based educational reference material to help them make that choice, their moral expertise could finally start progressing at the rate of other technologies. If you make people responsible for the moral theories they are working with, they will take much more responsibility for being more morally educated, and for supporting increasingly improving morals. Of course, there is usually a set of leading moral experts who are in the minority, and who think differently than the still primitive majority. The problem is, their signal is being drowned out by all the noise on the internet from the traditional masses. You can't tell the brilliantly leading moral loner from the simply still insane loner. You end up with something like 20k publications on theories of consciousness in Chalmers' bibliography of 'scientific journals'. Hidden deep within that pile is some really good scientific consensus information which a few of the leading experts are barely able to make out. The problem is, the masses can't discern the great expert scientific consensus contained in that pile of junk. And if any expert claimed there was a consensus, everyone in a still primitive camp would simply doubt such. You need some way for the experts to work together collaboratively to come up with concise agreed-on theory descriptions, using state-of-the-art agreed-on terminology, and some rigorous way to measure how much expert consensus there is for each, in a way each individual can trust in his own way. And as these moral experts that really know collaborate, work together, and monitor what the popular consensus believes and how it is or is not changing, compared to the expert consensus that really knows, you can measure the progress they are making. Even the experts must know, concisely and quantitatively, what everyone else believes, so all the experts can collaborate and help. When you find an argument, or a scientific result, that starts to work at falsifying immoral theories (i.e. converting people to morally better camps), you focus on those things that work. That which you measure will always improve. You can't improve if you can't measure your progress, and find out how convincing any argument, or demonstrable scientific evidence, you are using or may find, works.
If you could easily >> know, concicely and quantitatively what everyone wanted, obviously, >> if the leader was diviating from this, especially if he wanted to >> kill anyone, everyone could just ignore him, and just do what the >> people wanted, instead, couldn't they? Problem solved? Does anyone >> think differently? >> > > That is a system of strong individual rights. It is not democracy as > in democracy your wishes and rights can always be overridden by a > majority. Again, it is because of the hate based censoring, filtering, and demonizing nature of our hierarchical survival of the fittest organizations with walls around them, that we suffer from this so much. We need to have a system where there are no walls, no censoring - where everyone can express what they want, and what they still believe, a place where nothing is filtered, especially that of the leading minority moral experts. Then we need to teach everyone the immorality and primitive devilishness of hating, or seeking to destroy, in any way, what anyone else wants, especially minority people. We need to have a system that values and rewards all differences, especially minority differences. >> Everyone is asking the question, what should the US, and other >> countries do, to help out Libya and similarly struggling countries? >> Why is everyone only asking or talking to the leaders, at the tops of >> all the hierarchies, and if that one is taken out, find another. Is >> nobody interested in what the people of Libya want? Isn't that the >> only problem? > > They should leave it alone. > There is nothing we can do to help? Surely trying to force them to be like us isn't good. But if we can know, concisely and quantitatively, and in an their selected expert based way, what the people want, can we not then do all we can to help them get that? Getting them what they want first, while still keeping them educated about what we might rather have them be like or what we want also, as a secondary after what they want, priority...? >> And of course, all transhumanists are just one big group of >> individuals all waring and criticizing each other on the most trivial >> details, and we never get anything done at all, and never have any >> influence over anything. But I bet you if we had the right consensus >> building system (where the trivial less important disagreeable stuff >> we spend all our time on could be pushed to lower level camps), all >> the real moral and scientific experts at the tops of such delegated >> tree structures, would be far more transhumanist than the general >> clueless population. With such a system dictating the morals of >> society, (rather than all the primitive war mongering bottle necked >> hierarchies) and telling us what our priorities are and so on. I bet >> we could rule the world and finally bring the singularity to pass. > > In actuality it doesn't happen. Central decision making everyone has > to obey even if they are an outlier with a better idea is inherently > broken. Such can at best provide general guidance. It can never have > enough capacity to outperform localized decision making. You cannot > construct a good centralized or expert run system generally that will > retain its good qualities or have good qualities if it grows to be the > "decider" for too many things which it enforces using force. With an expert based, educational, open survey system nobody 'has' to do anything. 
It is all entirely volunteer driven: everyone that wants the same thing works together in crowd-sourced collaboration to get it, while ensuring they are aware of, value, and seek not to get in the way of what anyone else wants, especially minorities, as much as possible. Also, as far as effectiveness and speed go, people always assume hierarchical institutions can change directions faster than any network-based system. But for the network-based system, where there are entirely different hierarchies of delegated experts for each individual moral issue (natural division/networking of powers, and if anyone screws up, their hard-won hierarchy of constituents vanishes instantly), just the opposite is the case. I've described in a short fiction story how this can be so, or how a network-managed system can make an expert-based decision for the entire organization to change on a dime much more efficiently and rapidly, with near-instantaneous 100% buy-in and consensus building, than any hierarchical one. (see: http://canonizer.com/files/Meat_Supplier_Decision_at_Future_Burger.doc ) >> >> My hypothesis is that it is all simply a matter of communication. >> How do you know what the best experts in the crowd want, concisely >> and quantitatively? > > How do you, John Q. Public, know who those "experts" are? See above. You need to have a better way to be morally educated, and to measure how educated you are, and take responsibility for such... > >> What is the moral expert consensus? See above. It is based on each individual having the ability to select who they think the moral experts are, in a delegated tree of moral expertise way. It is rigorously measured and tracked... > > Define "moral". We cannot find the above without such a definition. > Anything anyone truly wants has moral value. Everyone should seek after all of that, the more diversity the better. The most important first step is knowing all this concisely and quantitatively; then, if you aren't interested in what someone else wants, at least be aware of it, and acknowledge it (i.e. don't build a wall, or censor it), so you can avoid frustrating it, so you can work with it, help it, include it, and love it, as much as possible. >> What is the scientific consensus? See above. > > Are you sure consensus strongly approximates best? That which is best is what everyone wants. And the more morally educated people are, and the more you explicitly declare and measure for such, in an accept-responsibility way, the better moral choosers everyone will become. > > >> What is the transhumanist consensus? If you can know that, suddenly >> there is no more reason for war and fighting. > > False. There will be dissenting viewpoints. If they have no space to > do things their way that is a cause for conflict. Hopefully, I've completely answered this question above. > >> This hypothesis has led me to try building something like >> canonizer.com, but everyone seems to hate it, and like everything >> else, everyone just wants to criticize, fight it and destroy it, and >> go back to doing everything on their own in a do it yourself lonely >> way - damn everyone else.
So maybe someone can come up with some >> kind of better method of knowing what all us experts want, concisely >> and quantitatively, in any kind of consensus building way, so maybe >> we can work together and get something done, other than just finding >> disagreeable things and focusing and criticizing everything and >> everyone on that, as we continue to watch the world still wallow in >> primitive rotting misery? >> > > Build your own walled community with people that you find sane. Don't > expect to convert the world or get the consensus to do anything but > run over you. To me, this is anti-social hate. I have the same problem with people that think all socialists should move to China, or something. This kind of hate and lack of diversity is the entire problem. We've got to have more interaction, support, and inter-group communication, where everyone seeks to tear down all walls, and everyone seeks to get everything for everyone, all together. Not build a wall, push everyone different outside, and seek to frustrate everything outside it, or at best just "leave it alone". > > Canonizer was an interesting idea but the implementation is too weak/ > not so useful. I am not sure what could be better or if some of its > goals are doable. What, specifically, are its weaknesses? It is obviously still a work in progress, and if anyone can come up with something better, where I can say what I want, where I can find all who agree with me in a consensus building way, and we will not be filtered even though we are a minority, and I can select who I think the experts are, ... I will quickly jump camps to whatever that system is. > > I don't give a fig what "everyone wants". Really. This is totally shocking to me. I had no idea the extent of this moral hypothesis everyone is still working with. If this canonizer project has taught me anything, it is this - that most people still couldn't care less about what everyone wants. They hate it, they loathe it, they want to push it outside their wall, and at best just want to ignore it or 'leave it alone'. They only care about what they, themselves, want. They never seek to know what others are saying or believing, whether mistaken or not. They have no interest in measuring how much they might be progressing, or not, with others, and why; they just seek to write their own blog and shout it out from behind this wall, who cares how many people buy into what they are saying. They have no interest in knowing, concisely or quantitatively, what all the other blogs are saying. In the transhumanist community, everyone is an individual with huge walls around themselves. If anyone outside their wall starts getting some power, or any kind of consensus, or any size of co-operating group, they descend to ever less important levels where they can find some disagreement and focus on that with eternally repeated "yes, no, yes, no, we can't have strawberry, chocolate is the only way" arguments. Everyone is blind to how much consensus there is, after all, on the most important issues. They just seek to destroy and frustrate or convert all of that at the lowest possible trivial level, as much as possible, in the survival-of-the-fittest way we've all been evolved to think is best. I have faith and hope that we can all do much better, and that we can finally find a way to help everyone's morals, including those of the still morally backward, start improving and keeping up with our technologies. Samantha, thank you so much for responding.
For communicating with me what your thoughts and concerns are. Thank you for asking questions. Thank you for not ignoring me and for wanting to help me. Thank you for not putting up a wall and censoring me (the WTA list censored my initial post with no explanation provided.) Brent Allsop From pharos at gmail.com Sat Feb 26 22:37:36 2011 From: pharos at gmail.com (BillK) Date: Sat, 26 Feb 2011 22:37:36 +0000 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <20110226220853.GT23560@leitl.org> References: <4D693BD2.2050903@lightlink.com> <20110226205058.GO23560@leitl.org> <4D6970BC.5030209@lightlink.com> <20110226220853.GT23560@leitl.org> Message-ID: On Sat, Feb 26, 2011 at 10:08 PM, Eugen Leitl wrote: > I'm not interested in libertarian fantasies. Just emergent higher > co-operative behaviour as a side effect of smarter agents. It's > probably a series of spatiotemporal phase transitions, according to > what little we know from ALife simulations. > Is that an assumption I see before me? Are you assuming that smarter agents will be more co-operative? I doubt that all the agents will all be equally smart at the same time. And discrepancies in smartness tend to give the smarter entities the opportunity to advantage themselves at the expense of the less smart. The chaos that this will cause may drive smarter agents to form Borg-like co-operative intelligences. Is this what you see in the future? BillK From kellycoinguy at gmail.com Sun Feb 27 05:39:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 26 Feb 2011 22:39:18 -0700 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D693BD2.2050903@lightlink.com> References: <4D693BD2.2050903@lightlink.com> Message-ID: On Sat, Feb 26, 2011 at 10:43 AM, Richard Loosemore wrote: > Which is to say: ?if you think these libertarian/anarchist proposals are so > great, WHERE IS THE CODE? Funny coming from you Richard... :-) > I mean that literally. ?Where are your system simulations, which show that > society will remain stable when, for example, most government funded > institutions are abolished? ?Where are your simulations -- your detailed, > supercomputer-sized models, containing large numbers of realistic > constraints -- of how economic systems will be able to function normally > when the Federal Reserve is abolished and currency is replaced by, e.g., > Joule-Credits? Richard, a simulation wouldn't prove anything, nor change anyone's mind. A simulation only reflects the mind of the writer of the simulation. The closest thing that I can think of to a simulation of libertarian views is the novel Atlas Shrugged. I suggest you go watch the movie when it comes out as the book is very long. There are some holes in it, but it does point out that getting to the pure libertarian from where we are is going to be painful for a lot of the hangers on. -Kelly From phoenix at ugcs.caltech.edu Sun Feb 27 05:57:51 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sat, 26 Feb 2011 21:57:51 -0800 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <4D693BD2.2050903@lightlink.com> Message-ID: <20110227055751.GA28772@ofb.net> On Sat, Feb 26, 2011 at 10:39:18PM -0700, Kelly Anderson wrote: > the movie when it comes out as the book is very long. There are some > holes in it, but it does point out that getting to the pure > libertarian from where we are is going to be painful for a lot of the > hangers on. 
Yeah, the parasites of society might have to do hard work for a living. http://www.angryflower.com/atlass.gif I found this while searching for the above. http://forums.spacebattles.com/showpost.php?s=2d8cca834f9771b28c587edb72409d7c&p=4621352&postcount=13 -xx- Damien X-) From sjatkins at mac.com Sun Feb 27 07:54:24 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 26 Feb 2011 23:54:24 -0800 Subject: [ExI] RPGs and transhumanism In-Reply-To: <4D6918D7.1080208@aleph.se> References: <4D6909EF.10808@aleph.se> <4D6911D6.5030105@gmail.com> <4D6918D7.1080208@aleph.se> Message-ID: <10BF5637-FC65-4201-B8DD-E94C7B767A91@mac.com> On Feb 26, 2011, at 7:14 AM, Anders Sandberg wrote: > AlgaeNymph wrote: >> On 2/26/11 6:10 AM, Anders Sandberg wrote: >>> I am looking forward to read Jane McGonigal's "Reality is Broken" to see what lessons we can get from the game community. >> >> Interesting you of all people should say this, as I first remember you from the days of Mage: the Ascension. Any ideas for how today's RPGs can promote transhumanism? > > Eclipse Phase. > > This is my current favorite RPG, a fairly hard sf game. The motto of the game is "Your mind is software. Program it. Your body is a shell. Change it. Death is a disease. Cure it. Extinction is approaching. Fight it." The game setting is the solar system, colonized by various transhuman clades and trying to survive in the aftermath of a bad singularity. It explicitly refers to transhumanism, existential risk - and the extropians is one of the political factions! > > http://www.eclipsephase.com/ "Post-apocalyptic transhuman conspiracy and horror"? You had me interested until I saw that. It seems mainly about fighting off various disasters more than actually building positive futures. OK, it still looks pretty interesting. But is this really part of a message we want to send about the future? Definitely looks very worth exploring and more fun than simply talking about various futuristic ideas without end. - samantha From sjatkins at mac.com Sun Feb 27 07:58:23 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 26 Feb 2011 23:58:23 -0800 Subject: [ExI] this is me in another forty years... In-Reply-To: <00a901cbd546$ec26bda0$c47438e0$@att.net> References: <00a901cbd546$ec26bda0$c47438e0$@att.net> Message-ID: On Feb 25, 2011, at 3:51 PM, spike wrote: > Check this, a three minute story which could be subtitled: when they pry the handlebars from my cold dead hands: > > http://www.youtube.com/watch?v=vksdBSVAM6g&feature=player_embedded > > This is how they make commercials in countries where they still have actual attention spans. Hey it worked on me. Thank you. That was beautiful! - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Sun Feb 27 08:21:57 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 27 Feb 2011 00:21:57 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <4D640DB1.9060702@mac.com> <201102222130.p1MLUUSJ025704@andromeda.ziaspace.com> Message-ID: On Feb 25, 2011, at 4:31 PM, Jeff Davis wrote: > On Thu, Feb 24, 2011 at 6:12 PM, Samantha Atkins wrote: >> On 02/23/2011 09:27 PM, Jeff Davis wrote: > >>> Not committing the party a priori to a menu of positions hardly means having no principles. >> >> "No other issue is relevant" doesn't leave a lot of room for bringing up principles this may fly in the face of. > > Let me clarify. 
My hypothetical Accountability Party (AP) is composed > of principled members who subscribe to a broad spectrum of specific, > varying, and yes, sometimes conflicting principles. This spectrum of > principles will be brought to bear in the POST-ELECTION discussions > that accompany the legislative and executive decision-making > processes. Libertarian principles will be represented in those > discussions to the degree that libertarians have assisted in the AP > electoral effort. > > The dictatorial imposition of Libertarian principles won't happen, but > the libertarians will get to make their pitch, and if persuasive their > principles will be enabled/implemented. What? Libertarianism is the very opposite of dictatorial imposition of anything. Are you sure you know what the heck you are talking about here? > But be forewarned, compromise > accompanied by disappointment will almost certainly be the order of > the day. (Democracy just sucks! Aaaargh!) > > I'm trying to explain that it's not that the AP doesn't have other > principles. But that there are three elements to successful "rule",: > (1)winning campaigns, (2)governing -- these two enable the final > element --and (3) having the vision to lead society to "a better > place". > > The AP campaigns -- element (1) above -- on just two "principles" -- > accountability and jobs -- because almost EVERYONE supports these > principles. Almost EVERYONE is the essence of "broad-based", which > means a lot of voters, which wins campaigns. " Accountability and > Jobs" serves first duty as a campaign-winning tactic, then after the > win, they become the top two principles, but by no means the only > principles, of the AP. > >> Free floating wish list items with no grounding in any principles whatsoever are a BS basis for any party and cannot last because there is no grounding. > > I think I've explained away this objection above > No, you haven't at all. What you have described that the AP does have are not principles at all but out of context, ungrounded planks designed only to be appealing to many with no real bother to say what they mean or clarify their grounding in reality. That I find quite disappointing. At first I thought you must be joking. >> I mean you can satisfy everyone has a job by simply enslaving the entire country and putting any excess workers (newly employed) to work digging holes and then filling them back up. Nothing in the party planks precludes this implementation. > > ME: True, but that implementation is not my plan but rather your > dystopian prediction. Bear with me. Despite your fears, we're not > going dystopian. Huh? It is not a prediction but a simple illustration of one option that satisfies plank #2 but that I hope you would not find at all palatable. The point being that that plank by itself without grounding invites many quite ugly results in keeping with what you insist is sufficient basis for a party. It is meant to wake you up to the need for much more than this in any viable party likely to do more good than harm. > > I understand your objections, as a libertarian, to certain laws -- the > victimless crime laws for example. I hold the same libertarian > objections. The AP party -- to whatever degree you , I, and others > of like mind can assert our principles -- would apply the principle of > Accountability based on "just" law, while setting out to eliminate > "unjust" law. > Fine. Now on what non-existent principles or standards to you decide which are in fact the just laws? 
>>> By "accountability", I essentially mean to subject the ruling class in general and the power elite in particular to a strong dose of "ethic cleansing", so the entire society could start over with a clean slate. Start over, but with the former upper reaches of society on notice that the law now applies to them. No, really. > >> This seems like blaming the powerful politically and or the rich-er as a class. This has been so busted so many times when it has been tried before. > > ME: Now I KNOW you're not saying the richer are above the law! And I > know,... you surely wouldn't say that,... on account of some little > bias you might hold,... arising say, from your love of the richer, > admiration of the richer, or aspiration to be one of the richer. > How about not guessing what I am saying or why and instead responding to what I actually did say. That would be most refreshing. > >> Simple envy would make it very popular as it has been before. > > ME: This is a class-based argument, Samantha. "The poor envy the > rich, and want to kill them and steal their money." And you > disappoint me by deploying it. Shame on you. Neither the > pre-conviction envy, nor the post-conviction schadenfreude of the > poor, is a "get out of jail free" card for the rich. > Are you saying that envy does not exist and would not enter into this? If so, why do you think so? Shame on me for what, pointing out the obvious? I said nothing at all about any get out of jail free cards. >> The results would be unlikely to be much better without considerable more refinement and statement of and adherence to some of those pesky principles. > > Tell you what I'll do. We'll put our heads together and restrict > prosecutions to murder, felony murder, mass murder, and conspiracy to, > or facilitation of, the commission of any of these. Then we'll > discuss which other offenses qualify as actionable offenses according > to libertarian principles. Deal? > I am confused. I thought you said your party has no libertarian principles or any others except these two planks that you chose to call "principles" although they aren't really. I thought you said above that you thought libertarian principles were an imposition? And why would you and I get to say exactly how others decided these things in your hypothetical party anyway? If workable principles are not built in the actual results can vary quite widely from anything you or I would wish on our conscience or to live in. >> [The equal application of the law] is a firm part of what we are already supposed to be about. Fixing instances where it is not the case is a fine thing. I would press criminal charges if not treason on many a past and present politician as many violate their oath of office wholesale. > > ME: Okay, it seems we're coming into agreement. Good. Regarding the > charges of treason, and violations of oaths of office, show me the > particulars, and I'll be happy to work with you on this > >>> This doesn't imply draconian penalties. It isn't about revenge. It's about starting over with a clean slate and a "rule of law" that actually does its job. > >> If you are picking on the powerful for being more powerful than you or I and the richer for having more money than you or I and you are also speaking of and to the sentiments of the "average person" then you are in revenge territory. > > ME: > > I commend you for your vigilance and insistence on fairness in dealing > with the powerful and richer. 
That said, it is not unlawful, though > some -- the once rich and powerful in particular -- may find it > unseemly, when the poor dance in the street to celebrate the richer > and powerful finally joining the rest of us in being subject to the > JUST AND PROPORTIONATE penalty for their misdeeds. That requires strong standards of what is Just and Proportionate - standards that you have said you don't need and aren't part of your proposed party. > >>>>> And jobs: everyone who wants a paycheck gets a paycheck. EV-REE-ONE. > >> Economics, while maybe not all, is not served by pretending there are limitless means to satisfy limitless wants. > That is a denial of economic reality. You can't spend your way out of > bankruptcy. Ask Zimbabwe whether you can print enough money to get > out of bankruptcy. > > ME: Look, I deserve your little lecture, okay? I'll take the blame. > I was too lazy, too pressed for time, to explain the reality-based > details of my "Jobs" proposal. So I substituted: "The govt has > machines that print money, so it's a done deal. Get over it." > > I still don't have the time for much more. But I have time for a > little bit more. > > The US is a rich, massively-productive giant, with the world's largest > economy. As a nation it has sizable assets, liquid and illiquid, > public and private, and also sizable debts (liabilities?). > Actually, far far less than you might think as we are massively, at nearly all levels, in debt. We have very little actual manufacturing capability compared to our consumption. > Scare talk aside, no economic catastrophe is going to cause the Earth > to open up and swallow the US, leaving behind a seawater-filled > crater. Life goes on. Life will go on. (Barring an asteroid > collision or Gamma Ray Burst.) Without an economic miracle we are most certainly in for very high inflation (over 30%) and most likely, the collapse of the US dollar, before the end of this decade. Care to make a wager? In gold? > > Even during the Great Depression, with its 30% unemployment, there > must correspondingly have been, if numbers mean anything, 70% > employment. Life goes on. The Great Depression is nothing compared to what is coming. > > The economy goes on. It has its ups and downs. Human suffering as > related to these ups and downs correlates with employment. During the > "ups", things are good, and everyone can find work. During the > "downs", not so much. The Accountability Party says to the voters, > "We're going to end the cycle of misery, by seeing to it that everyone > who needs a paycheck can find work. > How? This is not remotely answering any of my objections but simply reiterating what you have already asserted. > This position, while principled (I know, I know, you hate it, and > consider it entirely unprincipled), is also TACTICAL. In that role, > it is the first step in implementing change in an electoral system: > getting one's hands on power by winning elections. The best I can do > for you is promise that when the libertarian system replaces the > current system of manipulated casino predation, I'll join you in > shutting down the guaranteed paycheck program. After you have told everyone they are *entitled* to a paycheck at someone else's expense regardless of the wishes of those others and regardless of whether they are giving any value equal to the size of this entitlement? Once you have baked that into the cake, do you really think you can just take it back?
> >> People are not in the least entitled to a slice of the economic pie just by virtue of being born. Not when the pie is finite and produced by the work of others. This would be a denial of justice and reality. > > We disagree. My argument will not persuade you, however, so I will > limit my response to a simple counter assertion: > > Au contraire, it is the very essence of justice and realism. Wait. You are going to make not even offer an argument? You don't have time to discuss any of this seriously? Then why am I wasting time on this as if you are interested in actually understanding or resolving anything? Good day. - samantha From giulio at gmail.com Sun Feb 27 08:28:20 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 27 Feb 2011 09:28:20 +0100 Subject: [ExI] META: Overposting In-Reply-To: <00ab01cbd44c$71919380$54b4ba80$@att.net> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> Message-ID: I don't see why we should refrain from discussing important things. I am very interested in the libertarian trend, but the problem is that it always degenerates into a hormone-driven fight between fundamentalist libertarians and fundamentalist anti-libertarians. I wonder why it is like that. 2011/2/24 spike : > And the libertarian thread temporary open season is now back to our > regularly scheduled programming, thanks.? spike > > > > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Natasha > Vita-More > Sent: Thursday, February 24, 2011 7:32 AM > To: 'ExI chat list' > Subject: [ExI] META: Overposting > > > > Please remember the list member's diet:? 8 posts a day. > > > > Thank you! > > Natasha > > > > Natasha Vita-More > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From anders at aleph.se Sun Feb 27 11:45:23 2011 From: anders at aleph.se (Anders Sandberg) Date: Sun, 27 Feb 2011 11:45:23 +0000 Subject: [ExI] RPGs and transhumanism In-Reply-To: <10BF5637-FC65-4201-B8DD-E94C7B767A91@mac.com> References: <4D6909EF.10808@aleph.se> <4D6911D6.5030105@gmail.com> <4D6918D7.1080208@aleph.se> <10BF5637-FC65-4201-B8DD-E94C7B767A91@mac.com> Message-ID: <4D6A3953.5000403@aleph.se> Samantha Atkins wrote: > "Post-apocalyptic transhuman conspiracy and horror"? You had me interested until I saw that. It seems mainly about fighting off various disasters more than actually building positive futures. OK, it still looks pretty interesting. But is this really part of a message we want to send about the future? Definitely looks very worth exploring and more fun than simply talking about various futuristic ideas without end. > GURP Transhuman Space is much closer to the kind of future we probably want (except that the rate of technological growth is a bit slower)... and as a result it is not as exciting. As I wrote in my Eclipse Phase review, http://www.aleph.se/andart/archives/2009/08/eclipse_phase_review.html the THS world doesn't need saving, but the EP one does. That makes the game far more interesting to play, since there is a feeling that there is something at stake. If you want positive futures, check out Freemarket. http://projectdonut.com/freemarket/ "We are a society of functionally immortal, cybernetically modified, telepathic infovores. You are now one of us. Welcome!" 
I think the problem with positive transhumanist futures is that they gloss over the bad aspects, and people do recognize this. It is much more interesting to figure out how to make positive futures work when subjected to stress. Take the manufacturing technology of Eclipse Phase as an example. There exist fairly mature MNT fabrication, so in principle the only limit of personal manufacturing is energy (that can be collected as sunlight in the inner system or come from a fusion reactor in the outer) and the rarest elements needed (cheap, unless you live in the middle of nowhere). Except that now every society needs to decide how to handle it: allow anybody to make anything they got a blueprint for (the answer of the Autonomist Alliance), limit manufacturing to approved blueprints (the Planetary Consortium) or keep this technology in the hands of responsible government people (the Jovian Republic). I think most can recognize that the last answer is pretty limiting and repressive... yet the bioconservative republic does have a point in that the free use of these technologies led to a near-extinction disaster. The autonomists are free to print what they want, including weapons of mass destruction and personal weapons... makes things rather dangerous and unstable unless they impose social controls on what you do (and the more I think about social controls in anarchist societies, the scarier they look - the extropian habitats can at least handle it through PPL companies and insurance). And of course, open matter printers allow for all sorts of nasty physical/digital viruses - malware in EP is *evil*. The Consortium approach requires heavy DRM and certification, avoiding the risk of people making too dangerous stuff but also creating artificial scarcities that help incumbent content producers. So, which one do you choose? How do you make it better? These issues are all part of the game setting. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From anders at aleph.se Sun Feb 27 12:06:46 2011 From: anders at aleph.se (Anders Sandberg) Date: Sun, 27 Feb 2011 12:06:46 +0000 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> Message-ID: <4D6A3E56.9020805@aleph.se> Giulio Prisco wrote: > I don't see why we should refrain from discussing important things. > > I am very interested in the libertarian trend, but the problem is that > it always degenerates into a hormone-driven fight between > fundamentalist libertarians and fundamentalist anti-libertarians. I > wonder why it is like that. > I think it can be explained by Jonathan Haidt's moral foundations theory and Phil Tetlock's sacred values theory. Basically, libertarians and anti-libertarians step on each other's sacred values. According to Haidt, morality across cultures tend to be based on five fundamental values that are given different weight between different cultures and individuals: 1. Care for others, protecting them from harm. 2. Fairness, Justice, treating others equally. 3. Loyalty to your group, family, nation. 4. Respect for tradition and legitimate authority. 5. Purity, avoiding disgusting things, foods, actions. Liberals (american sense) value care and fairness higher than the others, while american conservatives value all five at the same time. 
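The structure of the theory is easy to caricature in a few lines of Python: treat each group as a weight vector over the five foundations and an issue as a vector of perceived violations, and the predicted heat of the reaction is just the weighted sum. All the numbers below are made up purely for illustration, not Haidt's data:

FOUNDATIONS = ['care', 'fairness', 'loyalty', 'authority', 'purity']

# How much each group cares about each foundation (illustrative guesses only).
profiles = {
    'liberal':      dict(care=0.9, fairness=0.9, loyalty=0.3, authority=0.2, purity=0.1),
    'conservative': dict(care=0.6, fairness=0.6, loyalty=0.7, authority=0.7, purity=0.7),
    'libertarian':  dict(care=0.4, fairness=0.5, loyalty=0.2, authority=0.2, purity=0.1),
}

# How strongly some hypothetical proposal is felt to violate each foundation
# (again invented numbers - say, legalising a traditionally taboo trade).
proposal = dict(care=0.1, fairness=0.2, loyalty=0.5, authority=0.8, purity=0.6)

def outrage(weights, violations):
    # predicted emotional heat: violations weighted by how much the group values each foundation
    return sum(weights[f] * violations[f] for f in FOUNDATIONS)

for group, weights in profiles.items():
    print(group, round(outrage(weights, proposal), 2))

The same proposal then lands very differently on the three profiles, which is roughly why one camp finds a thread outrageous and another finds it harmless.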
Tetlock observed that certain things are "sacred" values to people, and that trading them for a "secular" value triggers strong emotional reactions - these tradeoffs are taboo: you are not supposed to even *think* about how much money a human life is worth (if you seriously do, then you are seen as a bad person) and people forced into tradeoffs often do interesting self-purification actions afterwards like washing hands or giving more to charity. http://www.scribd.com/doc/311935/Tetlock-2003-Thinking-the-unthinkable-sacred-values-and-taboo-cognitions In political discussions a lot of heat is generated when one side doesn't feel anything for something sacred to the other side and accidentally threatens it. What is sacred to libertarians? I think freedom is an obvious sacred value, which might go into the fairness foundation. But beyond that I don't think libertarians (by being libertarians) have that many strong sacred values - *just like transhumanists*. We are all happy to question the human condition and all accepted morals in profound ways. This is borne out in experiments. Libertarians are less unwilling to *refuse* making sacred tradeoffs for money than other groups, and find the five foundations less sacred altogether. http://faculty.virginia.edu/haidtlab/mft/GHN.final.JPSP.2008.12.09.pdf http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1665934 This has consequences for discussing politics. Conservatives get enraged by liberals trading purity or respect for fairness. But both get riled up by libertarians trading almost anything for freedom. And libertarians get upset by how readily everybody else trades their sacred value for mere care, purity or other less important things. So that is my general explanation why libertarians (and transhumanists!) generally tend to end up in hot discussions. Solution? -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From eugen at leitl.org Sun Feb 27 12:20:06 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 27 Feb 2011 13:20:06 +0100 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <4D6A3E56.9020805@aleph.se> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: <20110227122006.GB23560@leitl.org> On Sun, Feb 27, 2011 at 12:06:46PM +0000, Anders Sandberg wrote: > So that is my general explanation why libertarians (and transhumanists!) > generally tend to end up in hot discussions. > > Solution? Jack-booted goose-stepping fascist moderation from hell. Nothing lesser will do. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From possiblepaths2050 at gmail.com Sun Feb 27 12:28:15 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 27 Feb 2011 05:28:15 -0700 Subject: [ExI] this is me in another forty years... In-Reply-To: <00a901cbd546$ec26bda0$c47438e0$@att.net> References: <00a901cbd546$ec26bda0$c47438e0$@att.net> Message-ID: Spike Jones wrote: > http://www.youtube.com/watch?v=vksdBSVAM6g > > &feature=player_embedded > > This is how they make commercials in countries where they still have actual > attention spans. Hey it worked on me. > > {8-] > I don't think a commercial has ever made me teary eyed before! Wow! It was really something special. 
I'd like to hope there will be a group of Extrope/transhumanist bikers forty years from now (you, Max, and a bunch of others), who will have the spunk and hell-for-leather attitude of the cool old geezer characters in the video. But I would like to think *forty* years from now, in the year 2051, you will all have at least partially rejuvenated bodies (well, for those of you who can afford it...)! I dearly wish you will be a much more healthy, long-lived, and photogenic bunch than what was seen in the commercial. Unfortunately, some of us will not live to make it to that day, and so (especially for those not cryonically preserved, but also for those who are) pictures of them will need to be brought along to be lifted aloft in remembrance. Spike, I must admit that I hope to be a member of this future motorcycle (or a vehicle even more amazing) group... And I recommend that we call ourselves the Singularity Riders! John : ) From anders at aleph.se Sun Feb 27 12:49:44 2011 From: anders at aleph.se (Anders Sandberg) Date: Sun, 27 Feb 2011 12:49:44 +0000 Subject: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <007a01cbd5ca$6f45b320$4dd11960$@att.net> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> Message-ID: <4D6A4868.7010804@aleph.se> spike wrote: > ...On Behalf Of Anders Sandberg > > >> ...Tanzania trying to reboot itself in the rings of Saturn using a >> > Farmville-like game ...--Anders Sandberg, > > Would anyone here speculate about the wildly popular Farmville increasing > the demand for actual farmland? My guess is that for every thousand people > who spend time playing simulated farmer, there would be one or more who > would like to try her hand at the real dirt and sweat version. If for no > other reason, it would give the player street cred with the others, and > perhaps lead to improvements in the simulation. > I am sceptical about how many actually do farming due to farmville. It compresses farming into a series of quick actions and rewards, while real farming seems to be about having a really long time horizon. Plants vs. Zombies is not quite like real gardening. I wonder what games actually make people go out and do things in the real world? RPGs have certainly stimulated me to learn odd subjects, and even helped my research. But what about other games (computers and boardgames)? (In my RPG game, the Farmville-like game is actually a clever interface to the nanotech infrastructure underlying the construction of a space habitat. Players are playing a game but actually, just as a lot of people filling in captchas are together doing reliable text recognition, solving morphogenesis problems and controlling the evolution of various nanosystems. Ideally this should all have been done with cheap AI, but it turned out that it was cheaper to run the 3.5 million surviving Tanzanian uploads on the servers... and they can get enticed by using game points in their virtual economy. And yes, the whole project will be in monumental trouble if people start tiring of the game before the critical control period is over.
) -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From possiblepaths2050 at gmail.com Sun Feb 27 12:31:26 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 27 Feb 2011 05:31:26 -0700 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <20110227122006.GB23560@leitl.org> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <20110227122006.GB23560@leitl.org> Message-ID: Eugen Leitl wrote: >Jack-booted goose-stepping fascist moderation from hell. Nothing lesser will do. Eugen, I definitely do not want to live in any post-singularity society that you control! But Anders on the other hand... John ; ) On 2/27/11, Eugen Leitl wrote: > On Sun, Feb 27, 2011 at 12:06:46PM +0000, Anders Sandberg wrote: > >> So that is my general explanation why libertarians (and transhumanists!) >> generally tend to end up in hot discussions. >> >> Solution? > > Jack-booted goose-stepping fascist moderation from hell. Nothing lesser will > do. > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From possiblepaths2050 at gmail.com Sun Feb 27 12:57:28 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 27 Feb 2011 05:57:28 -0700 Subject: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <4D6A4868.7010804@aleph.se> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> <4D6A4868.7010804@aleph.se> Message-ID: Anders wrote: I wonder what games actually make people go out and do things in the real world? RPGs have certainly stimulated me to learn odd subjects, and even helped my research. But what about other games (computers and boardgames)? >>> I have several friends who loved the action/combat/militeristic aspect of science fiction and fantasy roleplaying games, and I think they later on joined the armed services to at least up to a point try to live a real life of adventure. John On 2/27/11, Anders Sandberg wrote: > spike wrote: >> ...On Behalf Of Anders Sandberg >> >> >>> ...Tanzania trying to reboot itself in the rings of Saturn using a >>> >> Farmville-like game ...--Anders Sandberg, >> >> Would anyone here speculate about the wildly popular Farmville increasing >> the demand for actual farmland? My guess is that for every thousand >> people >> who spend time playing simulated farmer, there would be one or more who >> would like to try her hand at the real dirt and sweat version. If for no >> other reason, it would give the player street cred with the others, and >> perhaps lead to improvements in the simulation. >> > > I am sceptical about how many actually do farming due to farmville. It > compresses farming into a series of quick actions and rewards, while > real farming seems to be about having a really long time horizon. Plants > vs. zombies is not quite like real gardening. > > I wonder what games actually make people go out and do things in the > real world? RPGs have certainly stimulated me to learn odd subjects, and > even helped my research. But what about other games (computers and > boardgames)? 
> > > (In my RPG game, the Farmville-like game is actually a clever interface > to the nanotech infrastructure underlying the construction of a space > habitat. Players are playing a game but actually, just as lot of people > filling in captchas are together doing reliable text recognition, > solving morphogenesis problems and controlling the evolution of various > nanosystems. Ideally this should all have been done with cheap AI, but > it turned out that it was cheaper to run the 3.5 million surviving > Tanzanian uploads on the servers... and they can get enticed by using > game points in their virtual economy. And yes, the whole project will be > in monumental trouble if people start tiring of the game before the > critical control period is over. ) > > > > -- > Anders Sandberg, > Future of Humanity Institute > James Martin 21st Century School > Philosophy Faculty > Oxford University > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From giulio at gmail.com Sun Feb 27 12:58:22 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 27 Feb 2011 13:58:22 +0100 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <4D6A3E56.9020805@aleph.se> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: Thanks Anders, this is really relevant. I think freedom should be an independent "sacred" value, separate from the the fairness foundation (otherwise the model says that libertarians = liberals). Let's call it 6. Then my own position is something like: 1, 2 and 6 are very important values. I don't really care for 3, 4 and 5 I prefer calling 1, 2 and 6 "very important" instead of "sacred", because I prefer not holding anything sacred and considering everything as open to discussion. On Sun, Feb 27, 2011 at 1:06 PM, Anders Sandberg wrote: > Giulio Prisco wrote: >> >> I don't see why we should refrain from discussing important things. >> >> I am very interested in the libertarian trend, but the problem is that >> it always degenerates into a hormone-driven fight between >> fundamentalist libertarians and fundamentalist anti-libertarians. I >> wonder why it is like that. >> > > I think it can be explained by Jonathan Haidt's moral foundations theory and > Phil Tetlock's sacred values theory. Basically, libertarians and > anti-libertarians step on each other's sacred values. > > According to Haidt, morality across cultures tend to be based on five > fundamental values that are given different weight between different > cultures and individuals: > > ?1. Care for others, protecting them from harm. ? 2. Fairness, Justice, > treating others equally. > ?3. Loyalty to your group, family, nation. ? 4. Respect for tradition and > legitimate authority. ? 5. Purity, avoiding disgusting things, foods, > actions. > > Liberals (american sense) value care and fairness higher than the others, > while american conservatives value all five at the same time. 
> > Tetlock observed that certain things are "sacred" values to people, and that > trading them for a "secular" value triggers strong emotional reactions - > these tradeoffs are taboo: you are not supposed to even *think* about how > much money a human life is worth (if you seriously do, then you are seen as > a bad person) and people forced into tradeoffs often do interesting > self-purification actions afterwards like washing hands or giving more to > charity. > http://www.scribd.com/doc/311935/Tetlock-2003-Thinking-the-unthinkable-sacred-values-and-taboo-cognitions > In political discussions a lot of heat is generated when one side doesn't > feel anything for something sacred to the other side and accidentally > threatens it. > > What is sacred to libertarians? I think freedom is an obvious sacred value, > which might go into the fairness foundation. But beyond that I don't think > libertarians (by being libertarians) have that many strong sacred values - > *just like transhumanists*. We are all happy to question the human condition > and all accepted morals in profound ways. This is borne out in experiments. > Libertarians are less unwilling to *refuse* making sacred tradeoffs for > money than other groups, and find the five foundations less sacred > altogether. > http://faculty.virginia.edu/haidtlab/mft/GHN.final.JPSP.2008.12.09.pdf > http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1665934 > > This has consequences for discussing politics. Conservatives get enraged by > liberals trading purity or respect for fairness. But both get riled up by > libertarians trading almost anything for freedom. And libertarians get upset > by how readily everybody else trades their sacred value for mere care, > purity or other less important things. > > So that is my general explanation why libertarians (and transhumanists!) > generally tend to end up in hot discussions. > > Solution? > > -- > Anders Sandberg, > Future of Humanity Institute James Martin 21st Century School Philosophy > Faculty Oxford University > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From rpwl at lightlink.com Sun Feb 27 14:15:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 27 Feb 2011 09:15:53 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <4D693BD2.2050903@lightlink.com> Message-ID: <4D6A5C99.1030908@lightlink.com> Kelly Anderson wrote: > On Sat, Feb 26, 2011 at 10:43 AM, Richard Loosemore wrote: >> Which is to say: if you think these libertarian/anarchist proposals are so >> great, WHERE IS THE CODE? > > Funny coming from you Richard... :-) So it seems that you missed the joke completely :-( even though I spelled it out in my post. (And you took the opportunity to make another disparaging personal remark ... that's pretty sad.) I hate to have to point out the obvious, Kelly, but: I asked for code precisely *because* of your own stance, which appears to be that someone with no code is saying nothing. So apparently you don't get irony. That's no fun. > Richard, a simulation wouldn't prove anything, nor change anyone's > mind. A simulation only reflects the mind of the writer of the > simulation. The closest thing that I can think of to a simulation of > libertarian views is the novel Atlas Shrugged. I suggest you go watch > the movie when it comes out as the book is very long. 
There are some > holes in it, but it does point out that getting to the pure > libertarian from where we are is going to be painful for a lot of the > hangers on. Sounds to me like somebody is making excuses. So "a simulation wouldn't prove anything", huh? And - oh, deary me - you cite a horribly bad novel, filled to the brim with naked propaganda, written by an egomaniacal, hypocritical cult leader, as a substitute for the computer simulations that you should be doing...? Funny coming from you Kelly... :-) But seriously, this is excellent news. Now I don't have to write any AGI simulations, since they won't prove anything. And if you ask me for code in the future, I can just tell you to go read "Winnie the Pooh and the Blustery Day". (Or you could watch the movie version, if that's easier.) ;-) Richard Loosemore From stefano.vaj at gmail.com Sun Feb 27 14:35:45 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 27 Feb 2011 15:35:45 +0100 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: 2011/2/22 Alfio Puglisi : > On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson >> Some libertarians go so far as to shorten this list to Army, Courts >> and Police. There is no reason today for all roads not to be toll >> roads IMHO. Why not regulate, then privatize prisons? > > Because it creates an incentive to incarcerate people? The more people in > prison, the more profits from prison management. Come on. I can hardly be described as a libertarian, but to argue that allowing private healh care services risks to encourage the deliberate spreadinng of epidemics by their managers or shaheolders, biowarfare-style, sounds as a rather bizarre and far-fetched argument. -- Stefano Vaj From bbenzai at yahoo.com Sun Feb 27 15:01:33 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 27 Feb 2011 07:01:33 -0800 (PST) Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: Message-ID: <283746.86498.qm@web114401.mail.gq1.yahoo.com> Richard Loosemore suggested: > ... > > Which is to say:? if you think these > libertarian/anarchist proposals are > so great, WHERE IS THE CODE? > > I mean that literally.? Where are your system > simulations, which show > that society will remain stable when, for example, most > government > funded institutions are abolished?? Where are your > simulations...? Nice idea. What would it take to create and run a series of simulation experiments, that we can visit, like Second Life or more likely Sim City, that contains lots of agents representing all kinds of v.1.0 humans, living in a particular society for long enough to get a good idea of the direction the society was going in? This could be run many times, with different scenarios, and test out some of these ideas that are being bruited about here? I'm sure this list has enough relevant talent and resources to give something like this a decent try. Possibly a game environment would be a good basis, but run as a sim, not a game, i.e. the majority of the action would be carried out by software agents, not game players. Might something like this be able to weed out the more unlikely ideas? I'd love to see how an Abundance Economy scenario would pan out (like Iain M Bank's Culture, but without the FTL travel). 
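A bare-bones skeleton of such an experiment fits in a page of Python: a population of agents that produce, trade at random, and optionally pay into a flat redistribution pool, with a Gini coefficient as a crude outcome measure. Every rule and number in it is a placeholder assumption rather than a claim about how real economies behave:

import random

def gini(values):
    # Gini coefficient via the standard sorted-rank formula.
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    weighted = sum(rank * v for rank, v in enumerate(vals, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def run(n_agents=500, ticks=200, redistribution=0.0, seed=1):
    rng = random.Random(seed)
    wealth = [10.0] * n_agents
    for _ in range(ticks):
        # production: every agent earns a random amount (the spread is an arbitrary assumption)
        for i in range(n_agents):
            wealth[i] += rng.uniform(0.0, 2.0)
        # exchange: random pairs shift a small random stake from one agent to the other
        for _ in range(n_agents):
            a, b = rng.randrange(n_agents), rng.randrange(n_agents)
            transfer = rng.uniform(0.0, 0.1 * min(wealth[a], wealth[b]))
            wealth[a] -= transfer
            wealth[b] += transfer
        # policy knob: a flat levy pooled and returned as an equal dividend
        if redistribution > 0.0:
            pool = sum(redistribution * w for w in wealth)
            wealth = [w * (1.0 - redistribution) + pool / n_agents for w in wealth]
    return gini(wealth)

for rate in (0.0, 0.05, 0.2):
    print("redistribution %.2f -> Gini %.3f" % (rate, run(redistribution=rate)))

Whether runs like these say anything about real societies depends entirely on how realistic the agents and constraints are made, which is the hard part.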
Ben Zaiboc From lubkin at unreasonable.com Sun Feb 27 15:32:28 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 27 Feb 2011 10:32:28 -0500 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <4D6A3E56.9020805@aleph.se> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: <201102271531.p1RFVqH7007419@andromeda.ziaspace.com> Anders wrote: >I think it can be explained by Jonathan Haidt's moral foundations >theory and Phil Tetlock's sacred values theory. Basically, >libertarians and anti-libertarians step on each other's sacred values. Thanks. I'll definitely have to look into these. They seem like more thought-out versions of my point about starving children vs. force. It would be interesting to build a morality version of the Nolan political quiz. Correlate one with the other. I also see personality correlations. Extropians and libertarians are much more likely to be xNTP. Eugen replied: >Jack-booted goose-stepping fascist moderation from hell. Nothing >lesser will do. and then John quipped: >Eugen, I definitely do not want to live in any post-singularity >society that you control! But Anders on the other hand... I'm reminded of the 1984 vision of the future of mankind: "If you want a picture of the future, imagine a boot stamping on a human face -- forever." which Harlan Ellison morphed into his AGI-induced hell "I Have No Mouth and I Must Scream." -- David. From spike66 at att.net Sun Feb 27 15:49:18 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 07:49:18 -0800 Subject: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <4D6A4868.7010804@aleph.se> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> <4D6A4868.7010804@aleph.se> Message-ID: <013901cbd695$e55efd70$b01cf850$@att.net> ... On Behalf Of Anders Sandberg Subject: Re: [ExI] farmville, was RE: RPGs and transhumanism spike wrote: > ...On Behalf Of Anders Sandberg ... > >> Would anyone here speculate about the wildly popular Farmville increasing the demand for actual farmland? ... > >...I am sceptical about how many actually do farming due to farmville. It compresses farming into a series of quick actions and rewards, while real farming seems to be about having a really long time horizon. Plants vs. zombies is not quite like real gardening... Ja. We keep getting annoying requests from a dear cousin to go in and unwilt her wheat. What kind of yahoo wrote a sim in which you can bug someone else to come in and unwilt your crops? Absurd. >...I wonder what games actually make people go out and do things in the real world? ... -- Anders Sandberg I know the answer! Flying. I took flying lessons in college in Cessna. Later flew battle simulators with WW2 style planes. Then I got a chance to fly with a colleague in his Pitts Special aerobatic stunt biplane: http://www.google.com/images?hl=en&sugexp=ldymls&xhr=t&q=pitts+special&cp=7& qe=cGl0dHMgcw&qesig=EDSWfjuoobKZ_xritaf6Qw&pkc=AFgZ2tnh5IeVvB3SPqXAiA-LSyU1Y aiU2rNj_3395lN5Jjaw55jkIrBqjFng4k4gWuoogF-zkQDAfEqQbqxI1mb8b6iFcw3gcQ&bav=on .1,or.&wrapid=tljp1298821434369012&um=1&ie=UTF-8&source=univ&sa=X&ei=PnFqTe2 TMYKusAPuuOSmBA&sqi=2&ved=0CC0QsAQ&biw=1078&bih=681 His was a dual seat dual controls, so he let me fly it. It could do things I never imagined a plane could do, far more power to weight than the WW2 birds. I had an intuition for flying aerobatics from that flight simulator. 
Aerobatic maneuvers are actually most of what I did with the sim: fly stunts until the goddam Nazis showed up, shooting at me and spoiling my fun. spike From rpwl at lightlink.com Sun Feb 27 16:10:03 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 27 Feb 2011 11:10:03 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <283746.86498.qm@web114401.mail.gq1.yahoo.com> References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> Message-ID: <4D6A775B.6080201@lightlink.com> Ben Zaiboc wrote: > Richard Loosemore suggested: >> I mean that literally. Where are your system simulations, which >> show that society will remain stable when, for example, most >> government funded institutions are abolished? Where are your >> simulations...? > > What would it take to create and run a series of simulation > experiments, that we can visit, like Second Life or more likely Sim > City, that contains lots of agents representing all kinds of v.1.0 > humans, living in a particular society for long enough to get a good > idea of the direction the society was going in? This could be run > many times, with different scenarios, and test out some of these > ideas that are being bruited about here? > > I'm sure this list has enough relevant talent and resources to give > something like this a decent try. > > Possibly a game environment would be a good basis, but run as a sim, > not a game, i.e. the majority of the action would be carried out by > software agents, not game players. I am not so sure a game version would work, because of the computing requirements. I envisaged a purely abstract system of agents + constraints, but in that kind of simulation the detail is what matters: realistic detail in modeling both agents and constraints. I supposed the simulation could be interfaced to the existing Second Life engine, though.... Not sure about that, I have to say. What I do not think would work at all would be an idealized parameter model, in which there were no entities representing individual agents and constraints, but only parameters representing populations of agents, with equations that encoded what some economist *believed* was the relationship between the parameters. Brittle. Limited scope. I am guessing it was THIS type of economics-and-politics model that allowed the U.S. intelligence agencies to predict the Middle East revolutions ahead of time (NOT). ;-) These last two approaches represent the difference between complex-system economics and classical economics. I am very much a proponent of the former school. And having said all of that, we do have something like the required simulations already: game theory. Libertarianism is something close to an "all defection, all the time" strategy ;-), and I believe those don't do very well.... Richard Loosemore From eugen at leitl.org Sun Feb 27 16:13:14 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 27 Feb 2011 17:13:14 +0100 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <201102271531.p1RFVqH7007419@andromeda.ziaspace.com> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <201102271531.p1RFVqH7007419@andromeda.ziaspace.com> Message-ID: <20110227161314.GF23560@leitl.org> On Sun, Feb 27, 2011 at 10:32:28AM -0500, David Lubkin wrote: > Eugen replied: > >> Jack-booted goose-stepping fascist moderation from hell. Nothing >> lesser will do.
> > and then John quipped: > >> Eugen, I definitely do not want to live in any post-singularity >> society that you control! But Anders on the other hand... > > I'm reminded of the 1984 vision of the future of mankind: "If you want a > picture of the future, imagine a boot stamping on a human face -- > forever." which Harlan Ellison morphed into his AGI-induced hell "I Have > No Mouth and I Must Scream." I'm only pro-fascism when it comes to forum moderation. But then very, very strongly. Empirically, it's the only thing that has worked. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Sun Feb 27 16:23:17 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sun, 27 Feb 2011 17:23:17 +0100 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <283746.86498.qm@web114401.mail.gq1.yahoo.com> References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> Message-ID: <20110227162317.GG23560@leitl.org> On Sun, Feb 27, 2011 at 07:01:33AM -0800, Ben Zaiboc wrote: > Richard Loosemore suggested: > > > ... > > > > Which is to say: if you think these > > libertarian/anarchist proposals are > > so great, WHERE IS THE CODE? > > > > I mean that literally. Where are your system > > simulations, which show > > that society will remain stable when, for example, most > > government > > funded institutions are abolished? Where are your > > simulations...? > > Nice idea. Not really. There's no way to model social cohesion and human interaction in a meaningful way. While we do have meaningful evolving agent and strategy simulations, these are cartoon versions, and do not produce quantitative results. > What would it take to create and run a series > of simulation experiments, that we can visit, > like Second Life or more likely Sim City, that > contains lots of agents representing all kinds > of v.1.0 humans, living in a particular society > for long enough to get a good idea of the > direction the society was going in? This > could be run many times, with different scenarios, > and test out some of these ideas that are being bruited about here? Your next-best bet is to let people's avatars work, which is still a far cry from reality, because human activities and motivations in the simulated world are a poor model of reality. If you want to run a more realistic study, you have to recruit strata-representative samples and pay people real wages for simulated work. > I'm sure this list has enough relevant talent and > resources to give something like this a decent try. It's a lot of money, for a very dubious result. > Possibly a game environment would be a good basis, > but run as a sim, not a game, i.e. the majority > of the action would be carried out by software agents, not game players. No good, you want results representing people, there needs to be a warm, interested body at the console. > Might something like this be able to weed out the more unlikely ideas? > > I'd love to see how an Abundance Economy scenario > would pan out (like Iain M Bank's Culture, but without the FTL travel). There's no such thing as an abundance economy. Demand is always elastically matching supply.
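Those cartoon agent-and-strategy simulations are easy to write down; the classic one is an Axelrod-style iterated prisoner's dilemma, which is also where Richard's "all defection, all the time" quip points. A minimal Python sketch, with the usual textbook payoffs and a deliberately tiny strategy set chosen purely for illustration:

import itertools

def always_defect(my_moves, their_moves):
    return 'D'

def tit_for_tat(my_moves, their_moves):
    return their_moves[-1] if their_moves else 'C'

def grim_trigger(my_moves, their_moves):
    return 'D' if 'D' in their_moves else 'C'

# payoffs: (my score, their score) keyed by (my move, their move)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5), ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def match(s1, s2, rounds=200):
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

strategies = {'always-defect': always_defect, 'tit-for-tat': tit_for_tat, 'grim-trigger': grim_trigger}
totals = dict.fromkeys(strategies, 0)
# round-robin tournament: every strategy plays every other once
for (n1, f1), (n2, f2) in itertools.combinations(strategies.items(), 2):
    s1, s2 = match(f1, f2)
    totals[n1] += s1
    totals[n2] += s2

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)

Against reciprocators the always-defect strategy finishes last; add an unconditional cooperator to the pool and it comes out on top instead, which is roughly why such cartoons suggest directions but settle nothing quantitatively.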
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From lubkin at unreasonable.com Sun Feb 27 16:57:50 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 27 Feb 2011 11:57:50 -0500 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <20110227161314.GF23560@leitl.org> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <201102271531.p1RFVqH7007419@andromeda.ziaspace.com> <20110227161314.GF23560@leitl.org> Message-ID: <201102271658.p1RGw3VQ022415@andromeda.ziaspace.com> Eugen wrote: >I'm only pro-fascism when it comes to forum moderation. >But then very, very strongly. Empirically, it's the only >thing that has worked. The way I look at it, it's an AnCap libertarian argument. The lists that run best have an owner that asserts property rights. My own lists are online versions of my periodic extropian parties. (One coming up fairly soon. If any of you are or will be in New England in the next month, worth letting me know off-list.) My house is not a democracy. I decide who's welcome, who won't be invited back, and who will be forcibly removed. If I'm an ass about it, people will vote with their feet. Where one of my lists has multiple moderators, all the moderators are friends with one another and not people who would be problems in of themselves. Moderator decisions are never made by voting. We find answers we're all ok with or defer to whoever feels strongest about the issue at hand. I and we have a bias against lawyering and pilpul. Some moderated lists won't act against misconduct unless it had been specifically prohibited. Which seems fair, but often attracts posters who will keep misbehaving just this side of the rule, often in order to goad someone else into stepping over bright lines. -- David. From thespike at satx.rr.com Sun Feb 27 16:56:22 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Feb 2011 10:56:22 -0600 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A5C99.1030908@lightlink.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> Message-ID: <4D6A8236.8000008@satx.rr.com> On 2/27/2011 8:15 AM, Richard Loosemore wrote: >> Funny coming from you Richard... :-) > So it seems that you missed the joke completely :-( > So apparently you don't get irony. That's no fun. Yes, I winced at Kelly's obtuseness there. But it's hard for Brits (and Aussies) to grasp how irony-deficient many USians are. I keep tumbling into the same trap. >> ... Atlas Shrugged. I suggest you go watch >> the movie when it comes out as the book is very long. Another wince, reminding me that Kelly also asked if Samantha had ever heard of Moore's Law. Newbies need to bear in mind that if you're excited about X, chances are it's been a much-hashed-over topic here for a decade or two. 
> ...you cite a horribly bad novel, filled to the brim with naked propaganda, > written by an egomaniacal, hypocritical cult leader A lot of extropes are very fond of Rand (while usually avoiding Randroid lockstep), so I don't anticipate many nodding heads at this accurate thumbnail characterization, or at Damien Sullivan's link to a flowchart of "How to succeed as an Ayn Rand character": Yes, the flowchart's an unbalanced picture of Rand, but close enough that it should make any fan at least slightly uncomfortable. Damien Broderick From bbenzai at yahoo.com Sun Feb 27 17:08:44 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 27 Feb 2011 09:08:44 -0800 (PST) Subject: [ExI] a fun brain in which to live In-Reply-To: Message-ID: <537041.6459.qm@web114405.mail.gq1.yahoo.com> Alan Grimes wrote: > spike wrote: > > > Ja.? I still just don't know with so much of > > this.? I will sadly confess > > that fifteen years ago I thought we would be farther > > along by now than we > > are.? But the singularity is still coming > > eventually, and when it does, I > > can imagine no logical stopping place for it short of > > all the metals in the > > solar system converted to computronium to form an > > MBrain, with humans > > uploaded. > > If that is true, then it is imperative that the singularity > be prevented. =| Intentionally preventing the singularity (if that were possible) would be equivalent to signing the death warrant of the human race. We might not survive the singularity, but we damn sure won't survive (in the long-term) without it. I wonder if an australopithecine ever said "It's imperative that Homo Sapiens be prevented!"? Well, of course not, but I hope you get the idea. In general, people want their children to do well, usually better then they themselves have done, even at the expense of their own lives. This is really no different. Wishing for the singularity to be prevented is rather like a couple who, expecting their first child, think about how much it will cost them and how it may not grow up the way they hope, and so decide to abort it. Ben Zaiboc From kellycoinguy at gmail.com Sun Feb 27 17:31:42 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 10:31:42 -0700 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A5C99.1030908@lightlink.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> Message-ID: On Sun, Feb 27, 2011 at 7:15 AM, Richard Loosemore wrote: > Kelly Anderson wrote: >> >> On Sat, Feb 26, 2011 at 10:43 AM, Richard Loosemore >> wrote: >>> >>> Which is to say: ?if you think these libertarian/anarchist proposals are >>> so >>> great, WHERE IS THE CODE? >> >> Funny coming from you Richard... :-) > > So it seems that you missed the joke completely :-( even though I spelled it > out in my post. ?(And you took the opportunity to make another disparaging > personal remark ... that's pretty sad.) > > I hate to have to point out the obvious, Kelly, but: ?I asked for code > precisely *because* of your own stance, which appears to be that someone > with no code is saying nothing. > > So apparently you don't get irony. ?That's no fun. Richard, I've been around mailing lists for twenty years. My conclusion is that irony is incompatible with the medium. Now that I get it, I'm laughing. >> Richard, a simulation wouldn't prove anything, nor change anyone's >> mind. A simulation only reflects the mind of the writer of the >> simulation. 
The closest thing that I can think of to a simulation of >> libertarian views is the novel Atlas Shrugged. I suggest you go watch >> the movie when it comes out as the book is very long. There are some >> holes in it, but it does point out that getting to the pure >> libertarian from where we are is going to be painful for a lot of the >> hangers on. > > Sounds to me like somebody is making excuses. Absolutely not. I honestly believe libertarianism would work, but that it would be EXTREMELY painful for a while. The reason being that we have created a system where more than half of the population is dependent in some pretty big way on the government carrying their water. The American experiment started out as a libertarian experiment. All of the founding fathers would today be considered strong Ron Paul type libertarians. > So "a simulation wouldn't prove anything", huh? ? And - oh, deary me - you > cite a horribly bad novel, filled to the brim with naked propaganda, written > by an egomaniacal, hypocritical cult leader, as a substitute for the > computer simulations that you should be doing...? Believe me, I'm not the one to be writing any such simulation. Not my specialty. I wouldn't know how to start. I would agree that Ayn Rand is an egomaniac, but that's what she espoused, so I don't see how that is hypocritical. She lived her life pretty much as she believed life should be lived. And her position on altruism being evil is very compelling, if you understand it. Anyone who dismisses her as an asshole is missing out on a very interesting philosophy. > Funny coming from you Kelly... :-) > > But seriously, this is excellent news. ?Now I don't have to write any AGI > simulations, since they won't prove anything. I'm going to assume you are attempting Irony again here. Haha. > And if you ask me for code in the future, I can just tell you to go read > "Winnie the Pooh and the Blustery Day". ?(Or you could watch the movie > version, if that's easier.) Now I'm sure you are attempting some sort of humor that I won't attempt to label. -Kelly From spike66 at att.net Sun Feb 27 17:06:24 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 09:06:24 -0800 Subject: [ExI] sim city hipsters please: RE: this is me in another forty years... Message-ID: <016201cbd6a0$aa6eca50$ff4c5ef0$@att.net> On Behalf Of John Grigg Spike Jones wrote: http://www.youtube.com/watch?v=vksdBSVAM6g >> This is how they make commercials in countries where they still have >> actual attention spans. Hey it worked on me. >I don't think a commercial has ever made me teary eyed before! Wow! Ja, me too! Of course I get teary eyed whenever anyone utters the suggestion "Let's ride motorcycles!" But that's just me. The reason I posted this is that I have been doing more deep thinking about an idea that has been rattling around in my brain for some time. Note the scene just before the cancer guy's outburst. He looks around the table at his four surviving comrades, perhaps imagining his next (and perhaps last) 100 or so days on this old planet, seeing one guy unable to hear, one guy dozing, one guy mourning the loss of his sweetheart, one guy contemplating another eggroll. If you go to some nursing homes, it really is that way. Our society has evolved to where we spend the last 5% or more of our lives mourning, lonely and bored beyond comprehension. It need not be that way. I need some of you Second Life hipsters to advise me, or SimCity hipsters: can we set up one of these VRs to operate using head or eye motion? 
We don't need a dome shaped display. An ordinary large screen TV will work well enough. We can create a sim especially for impaired elderly (with money of course) which would make their last year more fun than that scene just before "Let's ride motorcycles." If they can ride of course, well good, let them. But most inmates there never rode a motorcycle in their lives, and the nursing home isn't the time or place to start. But I can see riding in a simulated VR biker gang. >...Spike, I recommend that we call ourselves the Singularity Riders! John : ) Singularity Riders. I like that. There was a headline the other day about a big apartment fire started by candles used in a black magic ritual of some sort. The headline was "Fire started by voodoo sex candles." Now there's your motorcycle gang name "Voodoo Sex Candles." Let's have both, then have the Singularity Riders and the Voodoo Sex Candles have a simulated rumble. Help me estimate what would be needed, internet bandwidth, the computer, the device that clamps to a hat or something to use as a means of commanding the sim, a joystick? Handlebars? Keyboard? Voice recognition? Speech synthesis? Realtime talking avatars? What can we do now? Is there a Second Life version that can be adapted? Assume using only currently available commercial off the shelf technology. Is there already a product that is virtual worlds sims for the impaired? spike From alfio.puglisi at gmail.com Sun Feb 27 17:28:27 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Sun, 27 Feb 2011 18:28:27 +0100 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: On Sun, Feb 27, 2011 at 3:35 PM, Stefano Vaj wrote: > 2011/2/22 Alfio Puglisi : > > On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson > >> Some libertarians go so far as to shorten this list to Army, Courts > >> and Police. There is no reason today for all roads not to be toll > >> roads IMHO. Why not regulate, then privatize prisons? > > > > Because it creates an incentive to incarcerate people? The more people in > > prison, the more profits from prison management. > > Come on. I can hardly be described as a libertarian, but to argue that > allowing private healh care services risks to encourage the deliberate > spreadinng of epidemics by their managers or shaheolders, > biowarfare-style, sounds as a rather bizarre and far-fetched argument. > When I wrote that private prisons would be an incentive to bogus incarceration I was hypothesizing, but now I found out that it has already happened: http://en.wikipedia.org/wiki/Kids_for_cash_scandal "Two judges, President Judge Mark Ciavarella and Senior Judge Michael Conahan, were accused of accepting money from the co-owner and builder of two private, for-profit juvenile facilities, in return for contracting with the facilities and imposing harsh sentences on juvenile offenders in order to ensure that the detention centers would be utilized" In southern Italy, a significant percentage of fires are started by seasonal workers employed in fire-fighting. Apparently to make sure that they will not be unemployed: http://www.sisef.it/iforest/show.php?id=521 One could argue that those two judges, and the arsonists, are just criminals. 
Now for some laws: http://www.npr.org/templates/story/story.php?storyId=130833741 "What they show is a quiet, behind-the-scenes effort to help draft and pass Arizona Senate Bill 1070 by an industry that stands to benefit from it: the private prison industry." I don't know much about NPR, and how reliable this information can be. But the very fact that something like this has been suggested is troubling and, in my opinion, entirely predictable precisely because of the profit motive. Now don't get me wrong: I regard human greediness as a completely natural trait which came into being for obvious evolutionary reasons. This doesn't mean that it is good or bad, just that it exists, and that it must be taken into account. The role of regulation is to make sure that profit incentives are aligned with public interest. Like original IP laws, for example (nevermind the last ridiculous incarnations). Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix at ugcs.caltech.edu Sun Feb 27 17:34:09 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sun, 27 Feb 2011 09:34:09 -0800 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A8236.8000008@satx.rr.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A8236.8000008@satx.rr.com> Message-ID: <20110227173409.GA26298@ofb.net> On Sun, Feb 27, 2011 at 10:56:22AM -0600, Damien Broderick wrote: > A lot of extropes are very fond of Rand (while usually avoiding Randroid > lockstep), so I don't anticipate many nodding heads at this accurate > thumbnail characterization, or at Damien Sullivan's link to a flowchart > of "How to succeed as an Ayn Rand character": > > I don't remember that. I am laughing at it, though. -xx- Damien X-) From sparge at gmail.com Sun Feb 27 17:10:19 2011 From: sparge at gmail.com (Dave Sill) Date: Sun, 27 Feb 2011 12:10:19 -0500 Subject: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <4D6A4868.7010804@aleph.se> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> <4D6A4868.7010804@aleph.se> Message-ID: On Sun, Feb 27, 2011 at 7:49 AM, Anders Sandberg wrote: > I wonder what games actually make people go out and do things in the real > world? I know people who've bought and are learning to play real instruments after getting hooked on Rock Band. And, of course, Rock Band 3's pro mode and pro instruments take advantage of and encourage that. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Sun Feb 27 17:39:26 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 27 Feb 2011 10:39:26 -0700 Subject: [ExI] this is me in another forty years... In-Reply-To: References: <00a901cbd546$ec26bda0$c47438e0$@att.net> Message-ID: <4D6A8C4E.30000@canonizer.com> Spike, I second that thanks, it was beautiful. Brent Allsop On 2/27/2011 12:58 AM, Samantha Atkins wrote: > > On Feb 25, 2011, at 3:51 PM, spike wrote: > >> Check this, a three minute story which could be subtitled: when they >> pry the handlebars from my cold dead hands: >> http://www.youtube.com/watch?v=vksdBSVAM6g&feature=player_embedded >> >> This is how they make commercials in countries where they still have >> actual attention spans. Hey it worked on me. > > Thank you. That was beautiful!
> > - s > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sun Feb 27 17:47:16 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 10:47:16 -0700 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A8236.8000008@satx.rr.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A8236.8000008@satx.rr.com> Message-ID: On Sun, Feb 27, 2011 at 9:56 AM, Damien Broderick wrote: > On 2/27/2011 8:15 AM, Richard Loosemore wrote: > Another wince, reminding me that Kelly also asked if Samantha had ever heard > of Moore's Law. Newbies need to bear in mind that if you're excited about X, > chances are it's been a much-hashed-over topic here for a decade or two. Sorry, that Moore's Law reference was my attempt at Irony. I knew it was much hashed over. >> ...you cite a horribly bad novel, filled to the brim with naked >> propaganda, >> written by an egomaniacal, hypocritical cult leader > > A lot of extropes are very fond of Rand (while usually avoiding Randroid > lockstep), so I don't anticipate many nodding heads at this accurate > thumbnail characterization, or at Damien Sullivan's link to a flowchart of > "How to succeed as an Ayn Rand character": > > > > Yes, the flowchart's an unbalanced picture of Rand, but close enough that it > should make any fan at least slightly uncomfortable. I really don't like Rand's view of sex. In that area she is deeply disturbed. The Dominique character in The Fountainhead is particularly disturbing to me. It's too bad she had to go there, because it makes the rest of her ideas easier to dismiss as the workings of a disturbed mind. If you believe that much of the evil in the world comes from the twisted thinking that is caused by religion; and I'd bet there are a few of you here; then you would probably be closer to Rand's core philosophy than you might think. -Kelly From phoenix at ugcs.caltech.edu Sun Feb 27 17:50:45 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sun, 27 Feb 2011 09:50:45 -0800 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <4D6A3E56.9020805@aleph.se> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: <20110227175045.GB26298@ofb.net> On Sun, Feb 27, 2011 at 12:06:46PM +0000, Anders Sandberg wrote: > Giulio Prisco wrote: >> I don't see why we should refrain from discussing important things. >> >> I am very interested in the libertarian trend, but the problem is that >> it always degenerates into a hormone-driven fight between >> fundamentalist libertarians and fundamentalist anti-libertarians. I >> wonder why it is like that. >> > > I think it can be explained by Jonathan Haidt's moral foundations theory > and Phil Tetlock's sacred values theory. Basically, libertarians and > anti-libertarians step on each other's sacred values. I recently ran into an extreme case of this: http://volokh.com/2011/02/15/asteroid-defense-and-libertarianism/ Sasha Volokh saying it would be immoral to tax people to save the human race from an asteroid. Natural disasters don't violate rights, after all. 
The fun bit is that Volokh does grant taxes for fighting crime or defensive war, on the grounds of minimizing overall rights violation, but commenters called Volokh out for not being pure enough. Needless to say, for anyone not committed to anarchism, this seems not so much like blasphemy as simply insane. > 1. Care for others, protecting them from harm. > 2. Fairness, Justice, treating others equally. > 3. Loyalty to your group, family, nation. > 4. Respect for tradition and legitimate authority. > 5. Purity, avoiding disgusting things, foods, actions. > > Liberals (american sense) value care and fairness higher than the > others, while american conservatives value all five at the same time. Plus, of course, fairness and treating others equally have multiple definitions. -xx- Damien X-) From kellycoinguy at gmail.com Sun Feb 27 18:02:01 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 11:02:01 -0700 Subject: Re: [ExI] farmville, was RE: RPGs and transhumanism In-Reply-To: <4D6A4868.7010804@aleph.se> References: <007a01cbd5ca$6f45b320$4dd11960$@att.net> <4D6A4868.7010804@aleph.se> Message-ID: On Sun, Feb 27, 2011 at 5:49 AM, Anders Sandberg wrote: > spike wrote: > I wonder what games actually make people go out and do things in the real > world? RPGs have certainly stimulated me to learn odd subjects, and even helped my research. But what about other games (computers and boardgames)? I know that when I was a teenager, some of my friends were playing Dungeons and Dragons in the tunnels underneath the local college. Does that count? I heard about a study, perhaps 20 years ago, about an attempt to teach people various skills in VR. At that point of the technology, the only skill they had been able to measure as having improved using VR was juggling. Apparently, they turned down the gravity so that things slowed down enough to learn the moves, then they slowly ramped it up to earth gravity, and finally, the subjects were able to juggle in reality. I'm sure there are more successes by now, but I found the report to be very interesting at the time. I looked on the Internet, but was unable to find the research paper, so it may have just been a rumor, you know how "studies" are... :-) -Kelly From rpwl at lightlink.com Sun Feb 27 18:15:40 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 27 Feb 2011 13:15:40 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> Message-ID: <4D6A94CC.1070801@lightlink.com> Kelly Anderson wrote: >> Kelly Anderson wrote: >>> Richard, a simulation wouldn't prove anything, nor change anyone's >>> mind. A simulation only reflects the mind of the writer of the >>> simulation. [snip] > Richard Loosemore wrote: >> Sounds to me like somebody is making excuses. > > Absolutely not. I honestly believe libertarianism would work, but that > it would be EXTREMELY painful for a while. The reason being that we > have created a system where more than half of the population is > dependent in some pretty big way on the government carrying their > water. Okay, so let's dispense with the irony for a moment.
You are still telling me that in your opinion libertarianism would work (= unsupported speculation, = mere philosophizing), whereas I am asking you to ante up and give me some objective evidence to back up this claim, by pointing to some code (or equivalent scientific evidence) that shows that human societies would not break down under those circumstances. After all, you did exactly this to me, when I wrote a paper giving a theoretical analysis of AGI. You gave me no quarter, eventually insinuating that nothing I was doing was "science", asking me to point to which of my papers contained some "science", etc etc. And you even (in the last post) implied that I was not developing some code to support my research (which I am). Libertarians COULD, if they wanted, do some systems-modeling to examine the stability of human societies under various conditions, to show that their ideas would not lead to anarchy and chaos. But when I ask to see such evidence, you dodge the question and tell me that simulations would demonstrate nothing (nonsense: simulations have been of great value to economists and intelligence analysts) or you just give me more of your opinion again. I accept that you personally might not be able to write such a simulation, but there is an entire community that espouses these ideas (whereas I, with my unusual approach to AGI, am just one overstretched researcher) .... so why do you and your entire community of libertarian believers dodge the question of giving us a simulation? Turnabout is fair play. Richard Loosemore From stefano.vaj at gmail.com Sun Feb 27 18:22:13 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 27 Feb 2011 19:22:13 +0100 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <4D6A3E56.9020805@aleph.se> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: On 27 February 2011 13:06, Anders Sandberg wrote: > Giulio Prisco wrote: >> I am very interested in the libertarian trend, but the problem is that >> it always degenerates into a hormone-driven fight between >> fundamentalist libertarians and fundamentalist anti-libertarians. I >> wonder why it is like that. > > I think it can be explained by Jonathan Haidt's moral foundations theory and > Phil Tetlock's sacred values theory. Basically, libertarians and > anti-libertarians step on each other's sacred values. > > According to Haidt, morality across cultures tend to be based on five > fundamental values that are given different weight between different > cultures and individuals: What about differentialist fundamentalism, such as my own version thereof, the adept of which suffer from an immediate adrenalyne and testosterone discharge every time somebody mentions such gross cross-cultural generalisations? :-))) -- Stefano Vaj From phoenix at ugcs.caltech.edu Sun Feb 27 18:23:04 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sun, 27 Feb 2011 10:23:04 -0800 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> Message-ID: <20110227182304.GC26298@ofb.net> On Sun, Feb 27, 2011 at 10:31:42AM -0700, Kelly Anderson wrote: > Absolutely not. I honestly believe libertarianism would work, but that > it would be EXTREMELY painful for a while. 
The reason being that we > have created a system where more than half of the population is > dependent in some pretty big way on the government carrying their > water. The American experiment started out as a libertarian > experiment. Someone would suggest there are good reasons we moved away from said experiment. At Boskone last week, Charles Stross said the night watchman state of 19th century England died with the Big Stink; when Parliament had to evacuate due to the failures of private waste removal, public sewers were acepted and quickly spread through all the cities. > All of the founding fathers would today be considered strong Ron Paul > type libertarians. I don't think that's true. They certainly weren't consciously in the modern libertarian mindset. The Constitution is less libertarian than the Articles of Confederation: the explicit goal of it was creating a stronger federal government, even if there was some fear of that strength. But that fear doesn't extend to the state governments! Then Jefferson promptly bent the Constitution to make the Louisiana Purchase. The Constitution provides for eminent domain and spending for the general welfare. And then in their writings, they had some "socialist" ideas. Progressive taxation, wealth redistribution, public pensions, the ultimate and legitimate authority of the republic over any private property not essential to life. I'll put the quotes at the end, since they'll get long. This is a main source though: http://www.thedemocraticstrategist.org/strategist/2009/08/the_attack_on_redistribution.php Plus, any Founding Fathers from New England were probably totally cool with public, tax-funded, and compulsory education, that being the local practice. > I would agree that Ayn Rand is an egomaniac, but that's what she > espoused, so I don't see how that is hypocritical. She lived her life > pretty much as she believed life should be lived. And her position on She lived it on Medicare: http://www.boingboing.net/2011/01/28/ayn-rand-took-govern.html > altruism being evil is very compelling, if you understand it. Anyone > who dismisses her as an asshole is missing out on a very interesting > philosophy. Like her defense of a child-killer: http://michaelprescott.net/hickman.htm -xx- Damien X-) Jefferson's letter advocating 19 year limit on debt and law http://press-pubs.uchicago.edu/founders/documents/v1ch2s23.html "that the earth belongs in usufruct to the living" "And the half of those of 21. years and upwards living at any one instant of time will be dead in 18. years 8. months, or say 19. years as the nearest integral number. Then 19. years is the term beyond which neither the representatives of a nation, nor even the whole nation itself assembled, can validly extend a debt." "Another means of silently lessening the inequality of [landed] property is to exempt all from taxation below a certain point, and to tax the higher portions of property in geometrical progression as they rise. Whenever there is in any country, uncultivated lands and unemployed poor, it is clear that the laws of property have been so far extended as to violate natural right. The earth is given as common stock for man to labour and live on. If, for the encouragement of industry we allow it to be appropriated, we must take care that other employment be furnished to those excluded from the appropriation. If we do not the fundamental right to labour the earth returns to the unemployed" (Thomas Jefferson, The Republic of Letters, p. 390). 
http://en.wikisource.org/wiki/Letter_to_James_Madison_-_October_28,_1785 Madison: Papers 14:197--98 In every political society, parties are unavoidable. A difference of interests, real or supposed, is the most natural and fruitful source of them. The great object should be to combat the evil: 1. By establishing a political equality among all. 2. By withholding unnecessary opportunities from a few, to increase the inequality of property, by an immoderate, and especially an unmerited, accumulation of riches. 3. By the silent operation of laws, which, without violating the rights of property, reduce extreme wealth towards a state of mediocrity, and raise extreme indigence towards a state of comfort. Benjamin Franklin to Robert Morris 25 Dec. 1783 Writings 9:138 All Property, indeed, except the Savage's temporary Cabin, his Bow, his Matchcoat, and other little Acquisitions, absolutely necessary for his Subsistence, seems to me to be the Creature of public Convention. Hence the Public has the Right of Regulating Descents, and all other Conveyances of Property, and even of limiting the Quantity and the Uses of it. All the Property that is necessary to a Man, for the Conservation of the Individual and the Propagation of the Species, is his natural Right, which none can justly deprive him of: But all Property superfluous to such purposes is the Property of the Publick, who, by their Laws, have created it, and who may therefore by other Laws dispose of it, whenever the Welfare of the Publick shall demand such Disposition. He that does not like civil Society on these Terms, let him retire and live among Savages. He can have no right to the benefits of Society, who will not pay his Club towards the Support of it. Benjamin Franklin, Queries and Remarks respecting Alterations in the Constitution of Pennsylvania 1789 Private Property therefore is a Creature of Society, and is subject to the Calls of that Society, whenever its Necessities shall require it, even to its last Farthing; its Contributions therefore to the public Exigencies are not to be considered as conferring a Benefit on the Publick, entitling the Contributors to the Distinctions of Honour and Power, but as the Return of an Obligation previously received, or the Payment of a just Debt. "Men did not make the earth. It is the value of the improvements only, and not the earth itself, that is individual property. Every proprietor owes to the community a ground rent for the land which he holds." (Thomas Paine, Agrarian Justice, paragraphs 11 to 15) Thomas Paine's Agrarian Justice http://en.wikisource.org/wiki/Agrarian_Justice equal and common right to natural property, not personal property produced by individuals, though personal property is produced with the help of society. 15 pounds at age 21, 10 pounds per year for the old (over 50) and disabled, funded by 10% inheritance tax. From spike66 at att.net Sun Feb 27 18:24:28 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 10:24:28 -0800 Subject: [ExI] the road home: RE: this is me in another forty years... Message-ID: <001e01cbd6ab$927a4770$b76ed650$@att.net> . Samantha Atkins and Brent Allsop wrote: On Feb 25, 2011, at 3:51 PM, spike wrote: Check this, a three minute story which could be subtitled: when they pry the handlebars from my cold dead hands: http://www.youtube.com/watch?v=vksdBSVAM6g &feature=player_embedded This is how they make commercials in countries where they still have actual attention spans. Hey it worked on me. Thank you. That was beautiful! 
- samantha and Brent Allsop Isn't it remarkable that the spoken script was only three words? Or five, depending on how you count the two "Eh?"s. Even then, all three to five of those words could have been edited out and the story told completely with visuals and a few subtitles. People in every country in the world would get exactly what is going on: motorcycle gang rides to the sea, fifty years goes by, they end up in the nursing home, one dies, another is given a grim diagnosis, convinces the others to get up off their commie asses and relive the time of their lives and watch the sun setting over the sea one last glorious time. If you want another good full length example of that kind of genre, check out this Chinese film: http://en.wikipedia.org/wiki/The_Road_Home_(1999_film) There is Chinese dialog in there, with some subtitles, but it is easy to figure out the story from only that little bit of text. It mentions the commies in there, but that isn't what the story is about. Rather it is more the old ways vs the new ways. I do recommend it. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoenix at ugcs.caltech.edu Sun Feb 27 18:40:15 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sun, 27 Feb 2011 10:40:15 -0800 Subject: Re: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A94CC.1070801@lightlink.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A94CC.1070801@lightlink.com> Message-ID: <20110227184015.GD26298@ofb.net> On Sun, Feb 27, 2011 at 01:15:40PM -0500, Richard Loosemore wrote: > You are still telling me that in your opinion libertarianism would work > (= unsupported speculation, = mere philosophizing), whereas I am asking > you to ante up and give me some objective evidence to back up this > claim, by pointing to some code (or equivalent scientific evidence) that > shows that human societies would not break down under those > circumstances. In particular, are libertarian societies robust in the face of foreign invasion, inequalities of wealth, environmental degradation, and disease? I note that a favorite example, Iceland, was a remote island no one would want to invade, which according to David Friedman himself stopped being so libertarian after wealth differences became sufficiently extreme. And libertarianish economic policies are frequently correlated with rising income inequality. Not to mention pollution of air and public water, and overharvesting of migratory wildlife. Note that even if you don't think extreme income inequality is morally problematic, there's the stability issue of people being able to trivially afford weregild and get away with murder, or raising their own armies. -xx- Damien X-) From phoenix at ugcs.caltech.edu Sun Feb 27 18:49:17 2011 From: phoenix at ugcs.caltech.edu (Damien Sullivan) Date: Sun, 27 Feb 2011 10:49:17 -0800 Subject: Re: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A775B.6080201@lightlink.com> References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> <4D6A775B.6080201@lightlink.com> Message-ID: <20110227184914.GE26298@ofb.net> On Sun, Feb 27, 2011 at 11:10:03AM -0500, Richard Loosemore wrote: > And having said all of that, we do have something like the required > simulations already: game theory. Libertarianism is something close to > an "all defection, all the time" strategy ;-), and I believe those don't > do very well.... That's unfair, I'd grant that it's more like Tit For Tat.
The problem is that while TfT largely solves the iterated two-person Prisoner's Dilemma given certain population assumptions, the multiplayer game is less amenable to solution and the real population is less ideal. There is a non-governmental solution, but it's a second-order norm of such strength as to make a democratic government seem lax, where you punish defectors and anyone who isn't punishing a defector. The coercion to cooperate is distributed, but still coercive. When I was libertarian, it was for the sake of real freedom, not replacing government with social oppression. -xx- Damien X-) From thespike at satx.rr.com Sun Feb 27 18:56:03 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Feb 2011 12:56:03 -0600 Subject: Re: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <001e01cbd6ab$927a4770$b76ed650$@att.net> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> Message-ID: <4D6A9E43.6020903@satx.rr.com> On 2/27/2011 12:24 PM, spike wrote: > convinces the others to get up off > their commie asses You think Taiwanese, and their bankers, have commie asses? Damien Broderick From rpwl at lightlink.com Sun Feb 27 19:09:12 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 27 Feb 2011 14:09:12 -0500 Subject: Re: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <20110227184914.GE26298@ofb.net> References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> <4D6A775B.6080201@lightlink.com> <20110227184914.GE26298@ofb.net> Message-ID: <4D6AA158.7080109@lightlink.com> Damien Sullivan wrote: > On Sun, Feb 27, 2011 at 11:10:03AM -0500, Richard Loosemore wrote: >> And having said all of that, we do have something like the required >> simulations already: game theory. Libertarianism is something close to >> an "all defection, all the time" strategy ;-), and I believe those don't >> do very well.... > > That's unfair, I'd grant that it's more like Tit For Tat. Fair cop. Though I was being a little tongue-in-cheek there. I don't seriously think that even multi-player PD games can tell us much. New question: are there *really* no open-source society-models out there, that anyone knows about? I am kind of surprised at that. I don't really have the time to do a search right now. Richard Loosemore From pharos at gmail.com Sun Feb 27 19:12:19 2011 From: pharos at gmail.com (BillK) Date: Sun, 27 Feb 2011 19:12:19 +0000 Subject: Re: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <4D6A9E43.6020903@satx.rr.com> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> Message-ID: On Sun, Feb 27, 2011 at 6:56 PM, Damien Broderick wrote: > On 2/27/2011 12:24 PM, spike wrote: > >> convinces the others to get up off >> their commie asses > > You think Taiwanese, and their bankers, have commie asses? > Would it rain on your parade too much to point at the much higher death and injury rates for motorcycle riders? If that young motorcycle gang had continued riding motorcycles throughout their life the chances of all of them making it to the rest home are pretty remote. And survivors would probably have a selection of injuries to boast about. That's why there are so many motorists who reminisce about the good ol' days when they rode motorcycles, but don't do it any more. (Or only occasionally as a treat).
BillK From thespike at satx.rr.com Sun Feb 27 19:15:17 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Feb 2011 13:15:17 -0600 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <20110227182304.GC26298@ofb.net> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <20110227182304.GC26298@ofb.net> Message-ID: <4D6AA2C5.9000405@satx.rr.com> On 2/27/2011 12:23 PM, Damien Sullivan wrote: >> > I would agree that Ayn Rand is an egomaniac, but that's what she >> > espoused, so I don't see how that is hypocritical. She lived her life >> > pretty much as she believed life should be lived. > She lived it on Medicare: Surely this is an unjust complaint. She'd paid for many years for it (grudgingly, perhaps, at the point of a gun, yeah yeah, but she had her money invested in the Medicare insurance program and had every right to get it back). Her most egregious hypocrisy was fucking her protege while insisting that he and she lie about it to his wife. Maybe A is A, but apparently SEX =/= SEX. Maybe it depends what the meaning of "is" is. (Her wrongdoing is not that it was sexual, but that it was a betrayal of both trust and the truth; it also showed revealingly that her emotional drives easily overwhelmed her supposedly dominant rationality.) Damien Broderick From spike66 at att.net Sun Feb 27 19:20:43 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 11:20:43 -0800 Subject: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <4D6A9E43.6020903@satx.rr.com> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> Message-ID: <003501cbd6b3$6dc66640$495332c0$@att.net> >...On Behalf Of Damien Broderick Subject: Re: [ExI] the road home: RE: this is me in another forty years... On 2/27/2011 12:24 PM, spike wrote: http://www.youtube.com/watch?v=vksdBSVAM6g&feature=player_embedded >> convinces the others to get up off their commie asses >You think Taiwanese, and their bankers, have commie asses? Damien Broderick Kidding bygones. {8^D I know Taiwan isn't communist. Yet. Earlier today someone commented (Kelly?) about the internet being a poor medium for irony. It can be, but it needs to be exaggerated to the point it loses much of its edge. I think of it as related to the early silent movies, where they needed to express emotion using the face and body language. They had to greatly exaggerate every subtle nuance. You can see remnants of that style in the visual comedy of early television (think Lucille Ball, or Gleason and Carney doing Kramden and Norton. Those guys could crack you up with the sound off.) Times goes by, photography gets waaaay better, allows closeups, it doesn't need to rush like the early silent movies did because film was so expensive and they had fewer frames per second. The Taiwanese bikers emoted perfectly. Sans dialog, you can tell what is up with them in every scene, you hurt with them, you are filled with grim resolve, your spirit rides like the wind and soars with eagles when they make it at the end of their three minute struggle; you want to jump out of your chair and cheer "You GO grandpa!" The Road Home is a good (mostly) non-political reminder that commies are people too. If you view that, come on back here after and let's discuss it. I saw some stuff in there I want to see if it was just me, or intentional. 
http://en.wikipedia.org/wiki/The_Road_Home_(1999_film) spike From jrd1415 at gmail.com Sun Feb 27 20:01:16 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 27 Feb 2011 13:01:16 -0700 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> Message-ID: This slightly off-topic, but I figured, what the hell, I just got a chuckle from that Rand character flow chart, and I'm in a good mood, so what the hey. Kelly, welcome to the list. Maybe a week or so back I noticed a new name, and then right off the bat, I found this little gem: "Exploring the endless possibilities of virtual reality seems a lot more interesting than crossing tens of thousands of light years to visit some lower life form..." That one single word, "lower" makes the quote. Now, if you track down the original, you'll see that I've taken some liberties and trimmed it a bit. Nevertheless, I've set it in a folder on my desktop, pending a final decision regarding adding it to my sig file. Whatever the final decision,...thank you, Kelly. Keep up the good work. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From thespike at satx.rr.com Sun Feb 27 21:07:28 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Feb 2011 15:07:28 -0600 Subject: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <003501cbd6b3$6dc66640$495332c0$@att.net> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> <003501cbd6b3$6dc66640$495332c0$@att.net> Message-ID: <4D6ABD10.8040801@satx.rr.com> On 2/27/2011 1:20 PM, spike wrote: > I know Taiwan isn't communist. Yet. > The Road Home is a good (mostly) non-political reminder that commies are > people too. I still don't understand. You say you know they're not communist (in fact, they have been traditionally and foundationally notable ANTIcommunists, which is what gives PRC the shits with them), but you still call them "commies". In this case I'm missing the irony or facetiousness or joke or something. Is it like calling the POTUS a commie? Or Canadians, or Australians, or just about everyone except die-hard libertarians? Damien Broderick From spike66 at att.net Sun Feb 27 21:24:51 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 13:24:51 -0800 Subject: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <4D6ABD10.8040801@satx.rr.com> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> <003501cbd6b3$6dc66640$495332c0$@att.net> <4D6ABD10.8040801@satx.rr.com> Message-ID: <000a01cbd6c4$c58a2450$509e6cf0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Sent: Sunday, February 27, 2011 1:07 PM To: ExI chat list Subject: Re: [ExI] the road home: RE: this is me in another forty years... On 2/27/2011 1:20 PM, spike wrote: > I know Taiwan isn't communist. Yet. > The Road Home is a good (mostly) non-political reminder that commies > are people too. I still don't understand. You say you know they're not communist (in fact, they have been traditionally and foundationally notable ANTIcommunists, which is what gives PRC the shits with them), but you still call them "commies". In this case I'm missing the irony or facetiousness or joke or something... Damien Broderick Ja. But we are talking two different things. 
The grandpas on bikes were Taiwanese. That was the three minute bank commercial. The Road Home crowd were Chinese, mostly pre-Mao and orthogonal to politics. That was a 90 minute movie. Even the one presumed commie seemed like a decent chap. spike From thespike at satx.rr.com Sun Feb 27 21:45:02 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 27 Feb 2011 15:45:02 -0600 Subject: Re: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: <000a01cbd6c4$c58a2450$509e6cf0$@att.net> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> <003501cbd6b3$6dc66640$495332c0$@att.net> <4D6ABD10.8040801@satx.rr.com> <000a01cbd6c4$c58a2450$509e6cf0$@att.net> Message-ID: <4D6AC5DE.4010709@satx.rr.com> On 2/27/2011 3:24 PM, spike wrote: > we are talking two different things. The grandpas on bikes were > Taiwanese. That was the three minute bank commercial. The Road Home crowd > were Chinese, mostly pre-Mao and orthogonal to politics. That was a 90 > minute movie. Ah. Dang. My careless hasty goof. Sorry. Ah Dang From anders at aleph.se Sun Feb 27 23:27:03 2011 From: anders at aleph.se (Anders Sandberg) Date: Sun, 27 Feb 2011 23:27:03 +0000 Subject: Re: [ExI] META: Overposting (psychology of morals) In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> Message-ID: <4D6ADDC7.7090905@aleph.se> Stefano Vaj wrote: > On 27 February 2011 13:06, Anders Sandberg wrote: > >> According to Haidt, morality across cultures tend to be based on five >> fundamental values that are given different weight between different >> cultures and individuals: >> > > What about differentialist fundamentalism, such as my own version > thereof, the adept of which suffer from an immediate adrenaline and > testosterone discharge every time somebody mentions such gross > cross-cultural generalisations? :-))) > Seems to be a more individual moral foundation. Of course, as you know in order to actually be a real moral reaction and not just a knee-jerk reaction based on surface characteristics (which of course underlie a lot of the "moral intuitions" of people), you should react to the real content of Haidt's thesis and not just my thumbnail sketch. His papers are worth reading, even if there might be moral foundations he missed or a slightly different taxonomy. They have caused a lot of excited analysis among my colleagues. It is actually not too hard to give an evolutionary psychology explanation for them (is it *ever* hard to do that? ;-) ) After all, they seem to fit in nicely with evolved cognitive systems, and evolutionary exaptation nicely explains why the purity system uses the same disgust emotion to avoid unhealthy food as to avoid certain social actions or people. I doubt his list is complete, but I think he nailed a few human universals. -- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From msd001 at gmail.com Sun Feb 27 22:52:40 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 27 Feb 2011 17:52:40 -0500 Subject: Re: [ExI] META: Overposting In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> Message-ID: On Sun, Feb 27, 2011 at 3:28 AM, Giulio Prisco wrote: > I am very interested in the libertarian trend, but the problem is that > it always degenerates into a hormone-driven fight between > fundamentalist libertarians and fundamentalist anti-libertarians.
I > wonder why it is like that. Seems like we could generalize "libertarian trend" to most forms of human communication and the fight is between any parties with a difference of opinion no matter how fundamentally important or picayune. I wonder why it is like that too. :) From lubkin at unreasonable.com Sun Feb 27 23:52:44 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 27 Feb 2011 18:52:44 -0500 Subject: Re: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A8236.8000008@satx.rr.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A8236.8000008@satx.rr.com> Message-ID: <201102272352.p1RNpwMF007287@andromeda.ziaspace.com> Damien B wrote: >Another wince, reminding me that Kelly also asked if Samantha had >ever heard of Moore's Law. Newbies need to bear in mind that if >you're excited about X, chances are it's been a much-hashed-over >topic here for a decade or two. It's like each generation thinking they discovered sex. On the other hand, when newbies are excited about X, we coelacanths need a measure of patience. T'other day I refrained from noting that we'd talked a given topic to death in 1992. Just as well we're living in the pre-immortallish years. Once we're into the long time, it's harder to qualify as a cranky old coot. -- David. From femmechakra at yahoo.ca Mon Feb 28 00:07:29 2011 From: femmechakra at yahoo.ca (Anna Taylor) Date: Sun, 27 Feb 2011 16:07:29 -0800 (PST) Subject: [ExI] Wishing You In-Reply-To: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> Message-ID: <521746.63989.qm@web110405.mail.gq1.yahoo.com> You know I wanted to thank you for the great link. I still keep it on my facebook. I thought it was really inspirational. I hope you have a wonderful birthday :) http://current.com/entertainment/music/89969127_sound-of-music-train-station-dance.htm Anna -------------- next part -------------- An HTML attachment was scrubbed... URL: From moulton at moulton.com Mon Feb 28 02:05:38 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sun, 27 Feb 2011 18:05:38 -0800 Subject: [ExI] Some suggestions for improvement - was Re: General comment about all this quasi-libertarianism discussion In-Reply-To: <201102272352.p1RNpwMF007287@andromeda.ziaspace.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A8236.8000008@satx.rr.com> <201102272352.p1RNpwMF007287@andromeda.ziaspace.com> Message-ID: <4D6B02F2.9010505@moulton.com> On 02/27/2011 03:52 PM, David Lubkin wrote: > Damien B wrote: > >> Another wince, reminding me that Kelly also asked if Samantha had >> ever heard of Moore's Law. Newbies need to bear in mind that if >> you're excited about X, chances are it's been a much-hashed-over >> topic here for a decade or two. > > It's like each generation thinking they discovered sex. > Perhaps the more inventive try to put something creative. > On the other hand, when newbies are excited about X, we coelacanths > need a measure of patience. T'other day I refrained from noting that > we'd talked a given topic to death in 1992. > Some of the recent topics have been hashed over years ago. Yet in many cases not much progress is made in the sense that the same topics get done over again a few years later. There are I think a couple of reasons. One reason is the difficulty in dealing with inaccurate definitions.
It is very easy to see the inaccurate definition used by someone else but less easy when it is our own inaccuracy particularly when that definition reinforces a view that I am correct and the other person is incorrect. And substitute "good" for "correct" and "evil" for "incorrect" and the situation can get tense. Now it is possible for there to be an honest difference about definitions and some terms do have different meanings however if this is the case then it needs to be dealt with at that level; not continuing the discussion as if the issue of definitions does not matter. I have seen this problem with definitions derail discussion of atheism just to give an example. Another reason is introducing side issues as if they were central to the discussion. For example if the topic of discussion is libertarian philosophy it makes no sense arguing about whether Rand's novels are good literature or not or if Rand herself was a nice person because those are not relevant to libertarian philosophy. But often people find it easier to use a proxy rather than deal with ideas. For example I once heard someone criticize Marxism because Marx supposedly neglected his wife. That is ridiculous. We need to stop using proxies instead of the actual idea either in defense of the idea or in attacking the idea. And a third is to avoid misusing terms that cover a variety of topics. For example Extropian or Transhuman cover many things not just one little thing like just AI or just Nanotech. So for example if you disagree with one specific thing then if you want to be understood by everyone be very specific and not use the broader term. This is particularly a good point to remember when discussing complex political topics. Perhaps the best way to deal with this is to set up a page like a FAQ page but with a bit more structure to actually show what has already been discussed, what are the various definitions and to create a list of red herrings that are often introduced and which throw off the discussion. Fred > Just as well we're living in the pre-immortallish years. Once we're > into the long time, it's harder to qualify as a cranky old coot. > > > -- David. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From darren.greer3 at gmail.com Mon Feb 28 02:16:18 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 27 Feb 2011 22:16:18 -0400 Subject: [ExI] Wizard Calculating Device Message-ID: I went to a flea market today with my Mom and bought a lot (as in auction lot) of antique mathematical devices and tools. A fifty year old slide rule, a complete sixty year old Myers and Sons mathematics set in a tin case with a brass compass and protractor and ruler and the original pencil. All very cool stuff for five bucks in total. The lot also came with an analog Wizard calculating device, circa 1950, and made by a German company. It has a metal stylus, and nothing is seized up, but I can't figure out how the sucker works. Anyone have one of these dinosaurs and could give me a hint? I only bought them for my desk as a neat conversation piece and to marvel at while I'm working. I couldn't find any tips on the web. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hkeithhenson at gmail.com Mon Feb 28 02:40:44 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 27 Feb 2011 19:40:44 -0700 Subject: [ExI] Serious topic Message-ID: http://www.ultimax.com/whitepapers/ETP1_ThreeSigns.pdf Robert is a fellow traveler, though I don't remember if he has ever posted here. Keith From spike66 at att.net Mon Feb 28 02:34:16 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 18:34:16 -0800 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: Message-ID: <004401cbd6ef$fef8b0f0$fcea12d0$@att.net> I bought a slide rule about 20 yrs ago at a garage sale for 5 bucks. I made it my educational toy by trying to figure out how it works, knowing only one fundamental rule: adding logs is multiplication. Study it Darren and figure it out. It is a mind expander. spike From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Sunday, February 27, 2011 6:16 PM To: ExI chat list Subject: [ExI] Wizard Calculating Device I went to a flea market today with my Mom and bought a lot (as in auction lot) of antique mathematical devices and tools. A fifty year old slide rule, a complete sixty year old Myers and Sons mathematics set in a tin case with a brass compass and protractor and ruler and the original pencil. All very cool stuff for five bucks in total. The lot also came with an analog Wizard calculating device, circa 1950, and made by a German company. It has a metal stylus, and nothing is seized up, but I can't figure out how the sucker works. Anyone have one of these dinosaurs and could give me a hint? I only bought them for my desk as a neat conversation piece and to marvel at while I'm working. I couldn't find any tips on the web. Darren -- There is no history, only biography. -Ralph Waldo Emerson -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 28 03:06:00 2011 From: spike66 at att.net (spike) Date: Sun, 27 Feb 2011 19:06:00 -0800 Subject: [ExI] Serious topic In-Reply-To: References: Message-ID: <006301cbd6f4$6db47b60$491d7220$@att.net> ... On Behalf Of Keith Henson Subject: [ExI] Serious topic http://www.ultimax.com/whitepapers/ETP1_ThreeSigns.pdf Robert is a fellow traveler, though I don't remember if he has ever posted here. Keith Keith, I like to imagine the kinds of transitions that can be made quickly if necessary, should these kinds of scenarios play out. We have three areas in which energy use can be reduced: home lighting and heating, food, and transportation. I see potential in all three areas for reductions, although we will not like them. In our food production cycle, we can go vegetarian and trend toward far less processed foods. In home lighting and heating, we can transition (quickly if necessary) to LED lighting and far lower use of HVAC systems. In transportation (an area I have pondered long) we can transition to 2 wheels, or very light 3 wheelers if we really need to. If we look around us, everywhere I see astonishing energy waste, just because energy is cheap and plentiful. Oil is still so cheap it strangles out most alternative energy sources. 
spike From lubkin at unreasonable.com Mon Feb 28 04:02:31 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sun, 27 Feb 2011 23:02:31 -0500 Subject: [ExI] Serious topic In-Reply-To: <006301cbd6f4$6db47b60$491d7220$@att.net> References: <006301cbd6f4$6db47b60$491d7220$@att.net> Message-ID: <201102280402.p1S42AG7006229@andromeda.ziaspace.com> Spike wrote: >If we look around us, everywhere I see astonishing energy waste, just >because energy is cheap and plentiful. Oil is still so cheap it strangles >out most alternative energy sources. And there's a lot of known energy, e.g., nuclear, coal, and natural gas, that can be readily tapped without much technical fuss. We may balk at one or other but someone else won't, and energy is fungible. Aside, any idea how fast we could build gigawatt reactors in a real crunch? (That is, in a WW II grade focus, bypassing all current hurdles.) -- David. From kellycoinguy at gmail.com Mon Feb 28 05:29:42 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 22:29:42 -0700 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: <20110227175045.GB26298@ofb.net> References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <20110227175045.GB26298@ofb.net> Message-ID: On Sun, Feb 27, 2011 at 10:50 AM, Damien Sullivan wrote: > On Sun, Feb 27, 2011 at 12:06:46PM +0000, Anders Sandberg wrote: > I recently ran into an extreme case of this: > http://volokh.com/2011/02/15/asteroid-defense-and-libertarianism/ The fact that some libertarians (myself included) would prefer not to pay taxes to protect against rogue asteroids isn't because we dismiss the importance of doing the job. Too often, when a libertarian individual suggests that we shouldn't have a public fire department, the other side immediately jumps to the incorrect conclusion that libertarians don't wish to fight fires. It isn't that we are hard hearted and wish for everyone who isn't careful to have their house burn down. It's just that we see a different way of paying for things. The total number of people currently employed in looking for asteroids in the NASA Near Earth Object program is reportedly less than the number of people working in a typical McDonalds. Since actuaries indicate that we each have a 1:20,000 chance of being killed by such an asteroid, that is a silly small number. Consider the number of people trying to predict tornadoes, even though your chances of getting killed in a tornado is 1:50,000. Or even sillier the huge number of people involved in preventing airline crashes when your chances of getting killed in a commercial airliner are astronomically small. Funnier still is how little we spend on heart disease research when a HUGE number of people die from that each year. The problem is that being a government program, the NEO program is subject to the "look what the silly congressmen are supporting now" argument. Thus they can't fund the program to the degree that actuarial statistics would indicate would be advisable. Same deal with AIDS getting many times more funding per dead patient than cancer or heart disease. It's all politicized. Those who have political power get the funding regardless of the actuarial size of the problem. If, in fact, we had a libertarian government, I would be surprised if the NEO program wasn't bigger by 3-10x. Let me explain exactly why and how. 
The prevention of airliner crashes gets considerable private funding because of the liability faced by airlines. Yes, there is a significant amount of money spent by the government too, but that's because of the sensational headlines. John Stossel did a really nice piece on this and how the media is complicit in the problem a few years back. If private insurance companies sold asteroid insurance, which they should, then there would be a significant desire to avoid payout. That would lead to the spending of money to avoid the disaster in the first place. Of all potential mega disasters we could face, asteroid hits are the most easily preventable... (compared to such things as super volcanos, subduction earthquakes and tsunamis and the like, where we are simply powerless at this point.) Additionally, in a libertarian society, someone might set up a non profit organization to search for and disable near earth objects. If everyone in America donated 25 cents to such an organization, it would be funded well over current funding levels. So, far from being oblivious to the danger of near earth objects, libertarians point out that the prevention of a visitor from outer space would be MORE likely under a libertarian system than it is now. So, when we say things like, "the government shouldn't pay for X" don't jump immediately to the conclusion that libertarians are against X. That is a simplistic and fallacious argument. We should be better than that. If you want to ask "how would libertarians pay for X?" that is a much better way to challenge a true libertarian proposal. -Kelly From kellycoinguy at gmail.com Mon Feb 28 06:27:40 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 23:27:40 -0700 Subject: Re: [ExI] Wizard Calculating Device In-Reply-To: References: Message-ID: I've seen old manuals for this sort of thing come up occasionally on Ebay. There are also a couple of web sites that sell old manuals, but don't put them online. Playing around with it is probably the best trick. Think of it as a kind of Rubik's cube. :-) -Kelly 2011/2/27 Darren Greer : > I went to a flea market today with my Mom and bought a lot (as in auction > lot) of antique mathematical devices and tools. A fifty year old slide rule, > a complete sixty year old Myers and Sons mathematics set in a tin case with > a brass compass and protractor and ruler and the original pencil. All very > cool stuff for five bucks in total. The lot also came with an analog Wizard > calculating device, circa 1950, and made by a German company. It has a metal > stylus, and nothing is seized up, but I can't figure out how the sucker > works. Anyone have one of these dinosaurs and could give me a hint? I only > bought them for my desk as a neat conversation piece and to marvel at while > I'm working. I couldn't find any tips on the web. > Darren > -- > There is no history, only biography. > -Ralph Waldo Emerson > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From kellycoinguy at gmail.com Mon Feb 28 06:31:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 23:31:12 -0700 Subject: Re: [ExI] the road home: RE: this is me in another forty years...
In-Reply-To: <003501cbd6b3$6dc66640$495332c0$@att.net> References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> <003501cbd6b3$6dc66640$495332c0$@att.net> Message-ID: On Sun, Feb 27, 2011 at 12:20 PM, spike wrote: >>...On Behalf Of Damien Broderick > Subject: Re: [ExI] the road home: RE: this is me in another forty years... > > On 2/27/2011 12:24 PM, spike wrote: > Earlier today someone commented (Kelly?) about the internet being a poor > medium for irony. email and Mailing lists, specifically. Youtube is a GREAT medium for irony, just look up Steven Colbert... :-) > It can be, but it needs to be exaggerated to the point it > loses much of its edge. I think of it as related to the early silent > movies, where they needed to express emotion using the face and body > language. They had to greatly exaggerate every subtle nuance. You can see > remnants of that style in the visual comedy of early television (think > Lucille Ball, or Gleason and Carney doing Kramden and Norton. Those guys > could crack you up with the sound off.) Emoticons can help... a little :-) -Kelly From kellycoinguy at gmail.com Mon Feb 28 06:51:19 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 27 Feb 2011 23:51:19 -0700 Subject: Re: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: 2011/2/27 Alfio Puglisi : > On Sun, Feb 27, 2011 at 3:35 PM, Stefano Vaj wrote: >> 2011/2/22 Alfio Puglisi : >> > On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson > When I wrote that private prisons would be an incentive to bogus > incarceration I was hypothesizing, but now I found out that it has already > happened: > http://en.wikipedia.org/wiki/Kids_for_cash_scandal Alfio, the number of people inappropriately incarcerated in ANY system is non-zero. Corruption exists everywhere. I think it is likely that some AGIs will end up being corrupt too. Think of the number of people incarcerated inappropriately by Stalin, Castro, today's China (where prisoners are allegedly executed to be organ transplant donors), Iran, Saudi Arabia... Even occasionally today in the US. There is the very interesting and controversial case of Leonard Peltier. And what about Ignacio Ramos and Jose Compean? The number of people in the US who are NOT incarcerated but should be is much larger, heck that would include my ex-wife :-| By this type of argument, we should ban axes and hammers because they have been used as murder weapons. In the Old West, the number one murder weapon was not the Colt 45, but the shovel. (Apparently, lots of arguments came up at water turns...) To tell us that we should give up a good tool just because SOME number of people have misused it is to completely stop all human progress. Just because someone died of arson, does not mean we should go back to pre-fire days. I find anecdotal stories to be unconvincing. How many people have seen big foot, UFOs, and the like? These are statistically uninteresting to me. If it were a widespread problem, such as is the case with corruption in Mexico, then you would get my attention. This is no more convincing than the arguments based on single cases for or against government health care.
-Kelly From max at maxmore.com Mon Feb 28 06:20:45 2011 From: max at maxmore.com (Max More) Date: Sun, 27 Feb 2011 23:20:45 -0700 Subject: [ExI] Serious topic In-Reply-To: <201102280402.p1S42AG7006229@andromeda.ziaspace.com> References: <006301cbd6f4$6db47b60$491d7220$@att.net> <201102280402.p1S42AG7006229@andromeda.ziaspace.com> Message-ID: Yes, I'd like to see a good analysis of the real costs of (various forms of ) nuclear, minus the unnecessary regulatory costs and delays. It's true that nuclear in the US is currently not cheap with subsidy (but not expensive either), but it does look like a good chunk of that cost could be done away with and might never have existed if not for persistent and intensive pressure from environmentalists who wanted to kill this energy source. --- Max On Sun, Feb 27, 2011 at 9:02 PM, David Lubkin wrote: > Spike wrote: > > If we look around us, everywhere I see astonishing energy waste, just >> because energy is cheap and plentiful. Oil is still so cheap it strangles >> out most alternative energy sources. >> > > And there's a lot of known energy, e.g., nuclear, coal, and natural gas, > that can be readily tapped without much technical fuss. We may balk at one > or other but someone else won't, and energy is fungible. > > Aside, any idea how fast we could build gigawatt reactors in a real crunch? > (That is, in a WW II grade focus, bypassing all current hurdles.) > > > -- David. > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Feb 28 07:58:15 2011 From: pharos at gmail.com (BillK) Date: Mon, 28 Feb 2011 07:58:15 +0000 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <20110227175045.GB26298@ofb.net> Message-ID: On Mon, Feb 28, 2011 at 5:29 AM, Kelly Anderson wrote: > So, when we say things like, "the government shouldn't pay for X" > don't jump immediately to the conclusion that libertarians are against > X. That is a simplistic and fallacious argument. We should be better > than that. If you want to ask "how would libertarians pay for X?" that > is a much better way to challenge a true libertarian proposal. > > The obvious answer is that the libertarians wouldn't pay for it. Libertarians only pay for things which would benefit themselves personally. The 'free rider' problem leads them into much complicated theorising. If such were possible, ideally libertarians would each want an individual force field to keep asteroids away from their property. :) BillK From pharos at gmail.com Mon Feb 28 08:23:41 2011 From: pharos at gmail.com (BillK) Date: Mon, 28 Feb 2011 08:23:41 +0000 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: Message-ID: 2011/2/28 Darren Greer wrote: > I went to a flea market today with my Mom and bought a lot (as in auction > lot) of antique mathematical devices and tools. A fifty year old slide rule, > a complete sixty year old Myers and Sons mathematics set in a tin case with > a brass compass and protractor and ruler and the original pencil. 
All very > cool stuff for five bucks in total. The lot also came with an analog Wizard > calculating device, circa 1950, and made by a German company. It has a metal > stylus, and nothing is seized up, but I can't figure out how the sucker > works. Anyone have one of these dinosaurs and could give me a hint? I only > bought them for my desk as a neat conversation piece and to marvel at while > I'm working. I couldn't find any tips on the web. > Tut! Google knows everything. (Not necessarily all in English, though). English instructions here: BillK From js_exi at gnolls.org Fri Feb 25 07:29:36 2011 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 24 Feb 2011 23:29:36 -0800 Subject: [ExI] Free banking and fractional reserve banking (Re: Serfdom and libertarian critiques) Message-ID: <4D675A60.7020107@gnolls.org> [I hope that, since this discussion is more about banking at this point, I can still respond to these messages even though the libertarian quarantine has been re-established.] F. C. Moulton wrote: > I am somewhat baffled by your comments because your comments ignore > reality. Libertarians have long complained about government privileged > banking. And obviously all anarchists by definition are opposed to > government granted privilege in commerce or any other area. Plus > economists discuss free banking and it is easy to find. Since I have > been providing so many text links here are some video links: > http://www.youtube.com/watch?v=5P7W1G1hbiQ > http://www.youtube.com/watch?v=0PyS2NtW3xA This is a common point of confusion. "Free banking" still allows banks the privilege of creating money by issuing debt ("fractional reserve banking"), a fraudulent practice that any of us would go to jail for, and which is a special power granted only to "banks" by governments. Example: If you give me $100 and I lend $90 to Spike, I have $10. If you ask for your money back, I have to tell you "I don't have it." If you give "Bank of J. Stanton" $100 and it lends $90 to Spike, it has $10...but it tells you that you have $100, and that you can withdraw it at any time. In other words, your money is immediately replaced by an IOU for the repayment of BoJS' loan to Spike. In practice, the bank packages and sells Spike's loan...so right now, in our current system, *** all of your money in a "checking account" is actually in a hedge fund making 30:1 leveraged investments in mortgage-backed securities. *** And, in our current system, you are forced by "legal tender" laws to accept this share in a highly-leveraged hedge fund as if it were real money! Yes, you can withdraw "your" money, but what you're really getting is other depositors' $10 (the "reserve" in fractional-reserve). Which works so long as not too many other people try the same thing -- about 6%, at current reserve and capital ratios. Any more, and the fraud collapses ("bankruptcy"). (This is why the Fed holds over $2 trillion of worthless bank debt: the banks all know they are insolvent, so they've transferred their bad debt to the US taxpayers through the Fed. It's the biggest swindle in history.) Think about it for a moment...if I told you or anyone here "I have a great scheme by which we can all make lots of money, but which collapses if more than 6% of its participants try to take money out," you'd rightfully dismiss it as a fraud. *Yet this is the foundation of the entire world banking system!* All that "free banking" does is deregulate the oligopoly on fraud to some degree. 
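To make the deposit-and-relend arithmetic concrete, here is a quick sketch (my own toy numbers -- a round 10% reserve rather than the exact current ratios) of how that first $100 turns into roughly $1000 of account balances backed by only about $100 of actual cash:

# Toy money-multiplier loop: each deposit is re-lent minus the reserve,
# and the loan comes back as a new deposit somewhere in the system.
reserve_ratio = 0.10          # 10% kept back, 90% lent on
deposit = 100.0               # the original $100
total_deposits = 0.0
total_reserves = 0.0

for _ in range(400):          # iterate the deposit -> loan -> redeposit loop
    total_deposits += deposit
    reserve = deposit * reserve_ratio
    total_reserves += reserve
    deposit = deposit - reserve          # the part lent out and redeposited

print("deposits on the books: %.2f" % total_deposits)   # -> 1000.00
print("actual cash held:      %.2f" % total_reserves)   # -> 100.00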
It's still a fraudulent system with perverse incentives -- both money creation and money destruction are positive feedback loops, and the fastest path to economic growth involves going into debt as quickly as possible. The only benefit to free banking is that these crashes happen more quickly because the debt is not backstopped by a central bank...which I agree is a good thing, but it's polishing deck rails on the Titanic. Stefano Vaj wrote: > On the other hands, it is absolute private property of wealth in the > modern sense which is a relatively new concept. The feodal lords were > not the *owner" of their land in the modern sense, they were rather > enjoying a privilege which could be accorded and under some > circumstances revoked, had a limited if any transferability, was > supposed to be parcelled through further concessions to lower lords > (vavasours, vassals of vavasours), etc. True. But we're not "owners" of our land in the modern sense, either: if we stop paying taxes or responding to random demands at random times, our privilege of occupation is revoked. Then there is eminent domain. And all transfers have to recorded by the local governmental agency: you can't just "sell" land directly to someone else. JS http://www.gnolls.org From darren.greer3 at gmail.com Mon Feb 28 10:19:06 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 28 Feb 2011 06:19:06 -0400 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: Message-ID: On Mon, Feb 28, 2011 at 2:27 AM, Kelly Anderson wrote: >Playing around with it is probably the best trick. Think of it as a kind of Rubik's cube. :-)< Funny enough, there was a Rubik's cube in the lot as well. Actually a "cube puzzle" in its original box and made in 1981. I think Spike's right. I'll just mess with it and challenge myself to get it to work without googling it.(Thanks though Bill.) D. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders at aleph.se Mon Feb 28 10:29:23 2011 From: anders at aleph.se (Anders Sandberg) Date: Mon, 28 Feb 2011 10:29:23 +0000 Subject: [ExI] Serious topic In-Reply-To: References: Message-ID: <4D6B7903.5000200@aleph.se> Keith Henson wrote: > http://www.ultimax.com/whitepapers/ETP1_ThreeSigns.pdf > > I'm having serious problems with Hubbert curves, for the same reason I have become less happy with using Moore's law for long range prediction. Just like Bass technology diffusion curves they fit data very well in retrospect. But when used to extrapolate the future they seem too unstable: the predicted peaks or sigmoids jump all over the place due to noise in the data. When you have a fairly well developed curve (i.e. you are far beyond the hump of a peak or the inflexion point of a sigmoid) extrapolation is robust. But before that - at the hump or inflexion point, or even a bit after - the extrapolation is almost useless. It is worth testing it yourself with a toy model with noisy data, the effect is a bit surprising. Those confidence intervals get very wide. It is better to try to shore things up using other kinds of data or modeling. UK oil production is clearly moribound, but it is also a particular rather small pocket. Oil reserve data is famously uncertain and biased by various interests. It might be more revealing to look into how the markets are investing long-term. 
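If anyone wants to try the experiment, here is a minimal sketch of the kind of toy model I have in mind (my own illustrative parameters; assumes numpy and scipy): fit a Hubbert curve to noisy data that stops right around the peak, and watch how much the fitted peak year moves from run to run.

import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_total, k, t_peak):
    # derivative of a logistic: annual production for total recovery q_total
    e = np.exp(-k * (t - t_peak))
    return q_total * k * e / (1.0 + e) ** 2

t = np.arange(1950.0, 2011.0)                 # "observed" years, ending at the peak
true = hubbert(t, 2000.0, 0.08, 2010.0)       # true peak year is 2010

np.random.seed(1)
for trial in range(5):
    noisy = true * (1.0 + 0.05 * np.random.randn(t.size))    # 5% noise
    p, _ = curve_fit(hubbert, t, noisy, p0=[1500.0, 0.05, 2000.0],
                     maxfev=20000)
    print("trial %d: fitted peak year %.1f, fitted total %.0f"
          % (trial, p[2], p[0]))

Run it a few times with different seeds or noise levels and the spread in the fitted peak year (and especially in the fitted total recovery) is striking, even though the underlying curve and the noise level never change.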
-- Anders Sandberg, Future of Humanity Institute James Martin 21st Century School Philosophy Faculty Oxford University From eugen at leitl.org Mon Feb 28 11:03:50 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 12:03:50 +0100 Subject: [ExI] Serious topic In-Reply-To: <201102280402.p1S42AG7006229@andromeda.ziaspace.com> References: <006301cbd6f4$6db47b60$491d7220$@att.net> <201102280402.p1S42AG7006229@andromeda.ziaspace.com> Message-ID: <20110228110350.GS23560@leitl.org> On Sun, Feb 27, 2011 at 11:02:31PM -0500, David Lubkin wrote: > Spike wrote: > >> If we look around us, everywhere I see astonishing energy waste, just >> because energy is cheap and plentiful. Oil is still so cheap it strangles >> out most alternative energy sources. > > And there's a lot of known energy, e.g., nuclear, coal, and natural gas, Not a lot unfortunately, that's the whole point of demand rate (growing) eclipsing supply rate (stagnating), at simultaneously declining EROEI. > that can be readily tapped without much technical fuss. We may balk at Anything involving infrastructure isn't 'readily' or 'without much technical fuss', unfortunately. If you consider that the much-touted nuclear option isn't, nevermind it doesn't help with liquids and gases, and you'll realize that polyannas like Shortwhile (1 TW/year substitution rate, for the next 20 years, sure) are Not Helping. > one or other but someone else won't, and energy is fungible. It's fungible only if you have it. > Aside, any idea how fast we could build gigawatt reactors in a real Not just GW worth of reactors, they have to be breeders. Orelse you're in peak fissible Really Soon. And of course you can't get the money, and you would syphon the money away from renewables, as budgets are zero sum. > crunch? (That is, in a WW II grade focus, bypassing all current hurdles.) The only way to address that is the deeply unpopular A and F (austerity and frugality). Oh, and if you think that's Not So Good, consider food. And constraints for food -- there's a damn reason everybody is buying up Africa. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Mon Feb 28 11:34:20 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 12:34:20 +0100 Subject: [ExI] Serious topic In-Reply-To: <4D6B7903.5000200@aleph.se> References: <4D6B7903.5000200@aleph.se> Message-ID: <20110228113420.GU23560@leitl.org> On Mon, Feb 28, 2011 at 10:29:23AM +0000, Anders Sandberg wrote: > I'm having serious problems with Hubbert curves, for the same reason I The basic bell-shape is sound though. > have become less happy with using Moore's law for long range prediction. Projecting Moore into the future is bunk, simply because it's the result of a simple 2d feature shrink, at simultaneously higher entry costs for the next node. The only way to give Moore second wind is to leap into 3d, and I'm very leery expecting just-in-time new technologies. > Just like Bass technology diffusion curves they fit data very well in > retrospect. But when used to extrapolate the future they seem too > unstable: the predicted peaks or sigmoids jump all over the place due to > noise in the data. I see little magic in graphs, but in fundamental processes giving rise to the graphs. > When you have a fairly well developed curve (i.e. 
you are far beyond the > hump of a peak or the inflexion point of a sigmoid) extrapolation is > robust. But before that - at the hump or inflexion point, or even a bit > after - the extrapolation is almost useless. It is worth testing it > yourself with a toy model with noisy data, the effect is a bit > surprising. Those confidence intervals get very wide. > > It is better to try to shore things up using other kinds of data or > modeling. UK oil production is clearly moribound, but it is also a > particular rather small pocket. Oil reserve data is famously uncertain Each pocket follows the same curve. Local oil peaks have come and gone, now we're in a global peak. It's completely expected. > and biased by various interests. It might be more revealing to look into > how the markets are investing long-term. Markets are chronically poor long-term predictors. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Mon Feb 28 12:46:55 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 13:46:55 +0100 Subject: [ExI] META: Overposting In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> Message-ID: <20110228124655.GF23560@leitl.org> On Sun, Feb 27, 2011 at 05:52:40PM -0500, Mike Dougherty wrote: > Seems like we could generalize "libertarian trend" to most forms of > human communication and the fight is between any parties with a > difference of opinion no matter how fundamentally important or > picayune. > > I wonder why it is like that too. :) I don't. I just want it gone. We've got work to do, and this isn't helping. From eugen at leitl.org Mon Feb 28 14:15:09 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 15:15:09 +0100 Subject: [ExI] Serious topic In-Reply-To: <006301cbd6f4$6db47b60$491d7220$@att.net> References: <006301cbd6f4$6db47b60$491d7220$@att.net> Message-ID: <20110228141509.GP23560@leitl.org> On Sun, Feb 27, 2011 at 07:06:00PM -0800, spike wrote: > Keith, I like to imagine the kinds of transitions that can be made quickly > if necessary, should these kinds of scenarios play out. We have three areas > in which energy use can be reduced: home lighting and heating, food, and Home lighting takes electricity, and is the easiest to fix, though many lack the means of buying solid state or metal halide lightings, nevermind refitting their home electric infrastructure. Changing heating is far more expensive, and to what? I heat with locally sourced wood from renewably managed forests/my other place is deep geothermal, but that's not an option for many, especially overnight. Food is far more difficult, and not just energy-constrained. In general people don't seem to see what these >15 TW total mean, and what doubling and tripling electrification to substitute for missing fossil liquids and gases mean (1 TW/year conversion rate, for the next 20 years, and photovoltaic surface doesn't fabricate itself, put itself up, and connects to the grid, while rebuilding it in the process, and adding energy buffering capacity). http://en.wikipedia.org/wiki/World_energy_resources_and_consumption http://www.theoildrum.com/tag/fake_fire_brigade > transportation. I see potential in all three areas for reductions, although > we will not like them. 
In our food production cycle, we can go vegetarian Won't help much if there's crop failures, and grain exporting countries have stopped exporting. > and trend toward far less processed foods. In home lighting and heating, we > can transition (quickly if necessary) to LED lighting and far lower use of Not an option for poor people. > HVAC systems. In transportation (an area I have pondered long) we can > transition to 2 wheels, or very light 3 wheelers if we really need to. Or simple public transport, electrified. > If we look around us, everywhere I see astonishing energy waste, just > because energy is cheap and plentiful. Oil is still so cheap it strangles > out most alternative energy sources. The problem is that it's cheap, until it suddenly isn't. We've been down that road before, remember the oil crisis and Carter, and what has been started, and then shut down. From bbenzai at yahoo.com Mon Feb 28 14:11:36 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Mon, 28 Feb 2011 06:11:36 -0800 (PST) Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: Message-ID: <484492.44014.qm@web114406.mail.gq1.yahoo.com> Kelly Anderson stated: > Of all potential mega disasters we could face, > asteroid hits > are the most easily preventable... (compared to such things > as super > volcanos, subduction earthquakes and tsunamis and the like, > where we > are simply powerless at this point.) > That's an interesting assertion. I presume you mean this in the same way that we might say "Of all the coronal mass ejections in the galaxy, the ones produced by our own sun are the most easily preventable"? At this point in history, I think our ability to protect ourselves from a dinosaur-killer asteroid is doubtful, to say the least. Ben Zaiboc From rpwl at lightlink.com Mon Feb 28 14:32:50 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 28 Feb 2011 09:32:50 -0500 Subject: [ExI] Serious topic In-Reply-To: <201102280402.p1S42AG7006229@andromeda.ziaspace.com> References: <006301cbd6f4$6db47b60$491d7220$@att.net> <201102280402.p1S42AG7006229@andromeda.ziaspace.com> Message-ID: <4D6BB212.6090508@lightlink.com> David Lubkin wrote: > And there's a lot of known energy, e.g., nuclear, coal, and natural gas, > that can be readily tapped without much technical fuss. I have to say that this kind of talk, in this context, makes my blood boil with rage. Plenty of natural gas, you say? Just yesterday, some friends of mine a few miles away discovered that a Pennsylvania Hydro-fracking corporation managed to persuade someone up here, in New York State, to take their poisoned fracking water and DUMP it in a local waterway. Dump it, in exchange for money. So while some people in this community talk about there being a big untapped reserve of natural gas, my water - and the water of hundreds of thousands of people in this region - is in imminent risk of being poisoned. Or is actually being poisoned, right now, as we debate this issue. This is not a time to be debating the niceties of Hubbert curves, or talking about there being a lot of known energy. It is a time for emergency action to find alternative sources. 
Richard Loosemore From kellycoinguy at gmail.com Mon Feb 28 14:46:36 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 28 Feb 2011 07:46:36 -0700 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A775B.6080201@lightlink.com> References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> <4D6A775B.6080201@lightlink.com> Message-ID: On Sun, Feb 27, 2011 at 9:10 AM, Richard Loosemore wrote: > Ben Zaiboc wrote: >> Richard Loosemore suggested: > And having said all of that, we do have something like the required > simulations already: ?game theory. ?Libertarianism is something close to an > "all defection, all the time" strategy ;-), and I believe those don't do > very well.... Assuming you are speaking of the Prisoner's Dilemma, that would be closer to the Anarchist strategy. Or perhaps the role of the government in communism or socialism. I believe libertarianism is more akin to the tit for tat strategy. According to Dawkins, this is one of the most successful strategies. -Kelly From eugen at leitl.org Mon Feb 28 14:54:20 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 15:54:20 +0100 Subject: [ExI] Serious topic In-Reply-To: <4D6BB212.6090508@lightlink.com> References: <006301cbd6f4$6db47b60$491d7220$@att.net> <201102280402.p1S42AG7006229@andromeda.ziaspace.com> <4D6BB212.6090508@lightlink.com> Message-ID: <20110228145420.GR23560@leitl.org> On Mon, Feb 28, 2011 at 09:32:50AM -0500, Richard Loosemore wrote: > This is not a time to be debating the niceties of Hubbert curves, or > talking about there being a lot of known energy. It is a time for > emergency action to find alternative sources. He's entirely correct here, of course. In practice the last 45 years (at least, some few saw it coming in 1930s, others in 1950s ) vividly demonstrate our proactive problem solving behaviour. Which is... not so very good. We definitely do not have another 45 years. What are you doing about it? From rpwl at lightlink.com Mon Feb 28 15:01:06 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 28 Feb 2011 10:01:06 -0500 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: References: <283746.86498.qm@web114401.mail.gq1.yahoo.com> <4D6A775B.6080201@lightlink.com> Message-ID: <4D6BB8B2.3020200@lightlink.com> Kelly Anderson wrote: > On Sun, Feb 27, 2011 at 9:10 AM, Richard Loosemore wrote: >> Ben Zaiboc wrote: >>> Richard Loosemore suggested: >> And having said all of that, we do have something like the required >> simulations already: game theory. Libertarianism is something close to an >> "all defection, all the time" strategy ;-), and I believe those don't do >> very well.... > > Assuming you are speaking of the Prisoner's Dilemma, that would be > closer to the Anarchist strategy. Or perhaps the role of the > government in communism or socialism. I believe libertarianism is more > akin to the tit for tat strategy. According to Dawkins, this is one of > the most successful strategies. I explained, in reply to Damien Sullivan yesterday, that I was being tongue in cheek when I said this (hence the ";-)"). There is no serious way to make a comparison between a game theory strategy and a real world political philosophy. And besides, TfT has actually been proven to work through SIMULATIONS, whereas libertarians seem too scared ;-) to put their theories to that kind of objective scientific test.... 
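For anyone who has not seen such a simulation, a toy version takes about twenty lines (standard Axelrod-style payoffs; my own sketch, not anyone's published code):

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # cooperate first, then copy the opponent's last move
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb); sa += pa; sb += pb
    return sa, sb

print("TfT  vs TfT: ", play(tit_for_tat, tit_for_tat))      # (600, 600)
print("TfT  vs AllD:", play(tit_for_tat, always_defect))    # (199, 204)
print("AllD vs AllD:", play(always_defect, always_defect))  # (200, 200)

Against itself tit-for-tat earns the full cooperative payoff, and against a pure defector it loses only the first round, which is why it does so well in round-robin tournaments.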
Richard Loosemore From spike66 at att.net Mon Feb 28 15:49:39 2011 From: spike66 at att.net (spike) Date: Mon, 28 Feb 2011 07:49:39 -0800 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: Message-ID: <004201cbd75f$1bd0e110$5372a330$@att.net> On Mon, Feb 28, 2011 at 2:27 AM, Kelly Anderson wrote: >>Playing around with it is probably the best trick. Think of it as a kind of Rubik's cube. :-)< >Funny enough, there was a Rubik's cube in the lot as well. Actually a "cube puzzle" in its original box and made in 1981. I think Spike's right. I'll just mess with it and challenge myself to get it to work without googling it.(Thanks though Bill.) D. An original Rubik's cube if in it's unopened original package is a valuable collector's item. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Mon Feb 28 16:20:53 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 28 Feb 2011 12:20:53 -0400 Subject: [ExI] Wizard Calculating Device In-Reply-To: <004201cbd75f$1bd0e110$5372a330$@att.net> References: <004201cbd75f$1bd0e110$5372a330$@att.net> Message-ID: 2011/2/28 spike : >An original Rubik?s cube if in it?s unopened original package is a valuable collector?s item. spike< Yeah, it's not original. It's a "cube puzzle" knock off, and though the box is in great shape it has been opened though the instructions are still folded up inside it. The math set might be worth a few dollars, for it too has original instructions with it, is brass, and is the oldest of the lot. It was issued in the 40's. I also bought a first edition Hardy Boys novel from 1933 for 2 dollars. Even without the dust jacket it's worth about fifty times more than I paid for it. I love small town garage sales around here. People have no idea what their stuff is worth. :) D On Mon, Feb 28, 2011 at 2:27 AM, Kelly Anderson > wrote: > > > > >>Playing around with it is probably the best trick. Think of it as a > kind of Rubik's cube. :-)< > > > > >Funny enough, there was a Rubik's cube in the lot as well. Actually a > "cube puzzle" in its original box and made in 1981. I think Spike's right. > I'll just mess with it and challenge myself to get it to work without > googling it.(Thanks though Bill.) D. > > > > > > > > > > An original Rubik?s cube if in it?s unopened original package is a valuable > collector?s item. spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Mon Feb 28 16:53:19 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 28 Feb 2011 11:53:19 -0500 Subject: [ExI] Wizard Calculating Device In-Reply-To: <004401cbd6ef$fef8b0f0$fcea12d0$@att.net> References: <004401cbd6ef$fef8b0f0$fcea12d0$@att.net> Message-ID: <201102281653.p1SGrVJF025032@andromeda.ziaspace.com> Spike wrote: >I bought a slide rule about 20 yrs ago at a garage sale for 5 >bucks. I made it my educational toy by trying to figure out how it >works, knowing only one fundamental rule: adding logs is >multiplication. Study it Darren and figure it out. It is a mind expander. He meant the Wizard. But while we're on slide rules: When I was in high school in Israel, we all had four-place log tables that we were expected to use for exams. 
I asked my teacher if I could use a calculator. He said no, because not everyone in the class could afford one. But he would let me use a slide rule. I tried to argue that a good one cost as much as a calculator, so he should just let me use that. No deal. I have a few (mostly linear, a couple circular) that had belonged to my mother (physicist), father (EE, mathematician), or grandfather (EE, mathematician). For instance, a 19-scale Pickett & Eckel from 1948. The chief routine benefit of slide rules is that you *have* to have an idea of the order of magnitude of the correct answer. But if any of you are concerned about preparedness, they're worth having. No batteries to wear out or leak, resistant to EMP, less fragile, indefinite shelf life if stored properly. -- David. From spike66 at att.net Mon Feb 28 16:48:52 2011 From: spike66 at att.net (spike) Date: Mon, 28 Feb 2011 08:48:52 -0800 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: <004201cbd75f$1bd0e110$5372a330$@att.net> Message-ID: <006601cbd767$61cab8f0$25602ad0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Monday, February 28, 2011 8:21 AM To: ExI chat list Subject: Re: [ExI] Wizard Calculating Device 2011/2/28 spike : >An original Rubik's cube if in it's unopened original package is a valuable collector's item. spike< Yeah, it's not original. It's a "cube puzzle" knock off, and though the box is in great shape it has been opened though the instructions are still folded up inside it. .D Ja I have a couple of those. The original Rubik's cube was about 7 bucks, but almost immediately the Chinese were making cheapy knock offs, which my college roommate brought back from Singapore as an early lesson to all of us engineering students in intellectual property and how difficult it is to defend. The Chinese version was about 2 bucks. We compared the original with the Chinese version, and found the manufacturing tolerances in the original were better, but that the Chinese version was in some ways easier to disassemble to modify for racing purposes. A racing cube had ground and polished catch tracks and corners. That was in 1981. I wrote a routine for a TI59 programmable calculator. It would make the same series of moves repeatedly on a simulated cube and note the number of moves required to cycle back to a solved cube. Then I repeated the task on a counterfeit Apple II, which ran at a blazing 0.0028 GHz, which finished the task about 100 times faster than the calculator. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Mon Feb 28 17:18:34 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 28 Feb 2011 10:18:34 -0700 Subject: [ExI] Serious topic Message-ID: On Mon, Feb 28, 2011 at 5:00 AM, "spike" wrote: > > Keith, I like to imagine the kinds of transitions that can be made quickly > if necessary, should these kinds of scenarios play out. I posted this for the *meta* information: At the political level, there is no awareness of the problem. The good point is the US gets energy independence but there are downsides. 
Keith From spike66 at att.net Mon Feb 28 17:10:34 2011 From: spike66 at att.net (spike) Date: Mon, 28 Feb 2011 09:10:34 -0800 Subject: [ExI] Wizard Calculating Device In-Reply-To: <201102281653.p1SGrVJF025032@andromeda.ziaspace.com> References: <004401cbd6ef$fef8b0f0$fcea12d0$@att.net> <201102281653.p1SGrVJF025032@andromeda.ziaspace.com> Message-ID: <007701cbd76a$696b8370$3c428a50$@att.net> ... On Behalf Of David Lubkin Subject: Re: [ExI] Wizard Calculating Device Spike wrote: >>I bought a slide rule about 20 yrs ago at a garage sale for 5 bucks. I >>made it my educational toy by trying to figure out how it works... >... I asked my teacher if I could use a calculator. He said no, because not everyone in the class could afford one... Ja I remember hearing that argument too and it made me crazy. We intentionally wasted our time learning obsolete technologies because not everyone could afford the new. I wanted to say "Not everyone has a brain, but that shouldn't prevent me from using mine." Of course I was a cocky ass back in these days. THOSE days I meant, back in THOSE days. This is a problem I contemplate a lot now that I am preparing to find a kindergarten for my son. I see it everywhere: ways in which teachers subtly slow down the able students so that the age groups can be taught together. This in itself is an obsolete technology: the notion of teaching a classroom of similarly aged students by one teacher standing up front talking. He could learn so much more so much faster with an individualized computer based system. He already has in fact: he reads well and can do two digit from two digit subtraction. Most of what they offer in kindergarten will be a sinful waste of time for him. >... But he would let me use a slide rule. I tried to argue that a good one cost as much as a calculator, so he should just let me use that. No deal. -- David. What if you argued for the use a slide rule now? If you can find one, they are likely a lot more expensive than a basic calculator. Would the same old argument be used to disallow the slide rule? Education is probably the most Luddite area in our modern existence, and it is falling steadily farther behind, to our peril. If I went into the schools as a volunteer and tried to explain to the kids the challenge their generation faces, humanity's energy equation, it would scare them so badly they would never invite me back. spike From stefano.vaj at gmail.com Mon Feb 28 17:37:15 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Feb 2011 18:37:15 +0100 Subject: [ExI] Banking, corporations, and rights (Re: Serfdom and libertarian critiques) In-Reply-To: References: <4D675E2E.8030901@gnolls.org> Message-ID: On 25 February 2011 18:04, Kelly Anderson wrote: > I guess the real problem is understanding the alternatives. There just > isn't enough physical gold to run the economy (unless gold were > $1000000 an ounce or something... which would make the industrial use > of gold prohibitive... which would have its own downsides) That money, as a unity of measure, be created out of thin air is fine. Kilometers are. The prob is when such money is "lent" by a private central bank, for a price, to the State which grants it the monopolistic power to do so, and yet has to recover through taxes a total amount in capital and interest which exceed the total money available in the system. 
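With made-up round numbers the shortfall looks like this (illustrative only, not a model of any actual central bank):

# If every unit of money enters circulation as an interest-bearing loan,
# the sum owed back always exceeds the money that exists to repay it.
money_in_circulation = 1000.0   # all of it created as central-bank loans
interest_rate = 0.05            # the "price" charged for issuing it
owed_back = money_in_circulation * (1.0 + interest_rate)
shortfall = owed_back - money_in_circulation
print("in circulation: %.0f   owed back: %.0f   shortfall: %.0f"
      % (money_in_circulation, owed_back, shortfall))   # 1000, 1050, 50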
-- Stefano Vaj From eugen at leitl.org Mon Feb 28 17:47:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 28 Feb 2011 18:47:35 +0100 Subject: [ExI] Banking, corporations, and rights (Re: Serfdom and libertarian critiques) In-Reply-To: References: <4D675E2E.8030901@gnolls.org> Message-ID: <20110228174735.GY23560@leitl.org> On Mon, Feb 28, 2011 at 06:37:15PM +0100, Stefano Vaj wrote: > On 25 February 2011 18:04, Kelly Anderson wrote: > > I guess the real problem is understanding the alternatives. There just > > isn't enough physical gold to run the economy (unless gold were > > $1000000 an ounce or something... which would make the industrial use > > of gold prohibitive... which would have its own downsides) > > That money, as a unity of measure, be created out of thin air is fine. > Kilometers are. Metrology uses references. In case of monetary units you need the amount of a current essential commodity (diversified, weight-adjusted, made resistant to gaming) the monetary unit can buy. You do not need to stock said essential commodity, so that each monetary unit in circulation is backed up by thorium, grain, or light sweet crude. Such reference baskets will ground fiats to a known potential, and force them be not free-floating. > The prob is when such money is "lent" by a private central bank, for a > price, to the State which grants it the monopolistic power to do so, > and yet has to recover through taxes a total amount in capital and > interest which exceed the total money available in the system. Another problem is that compound interest is linear semi-log plot, while the underlying economy growth isn't. There's a reason why most religion's sacred texts contain some ranting against usury. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From stefano.vaj at gmail.com Mon Feb 28 18:10:00 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Feb 2011 19:10:00 +0100 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: <006001cbd371$6562cc90$302865b0$@att.net> Message-ID: 2011/2/26 Darren Greer : >> > Drug addicts in the throes of their >> > addictions need to be treated the same way, as if they have a >> > disability. >> >> Why? What is the moral basis of that statement? I know it's the >> politically correct position, but is it philosophically correct? I doubt it. Why somebody who is in the throes of a haute couture-deprivation should not be deemed in a similar position? I can testify that such scenario may lead to a total loss of control and/or to the acceptance of worse scenarios than the mere loss of reproductive capacity... > For every gram of cocaine you hold in your hand, > someone has likely been killed to get it there. Mmhhh. As much as I am against prohibitionism, this sounds a little emphatic, and likely to overestimate the number of murders, or to underestimate the number of cocaine grams in circulation. Any hard data in this respect? -- Stefano Vaj From stefano.vaj at gmail.com Mon Feb 28 17:47:08 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 28 Feb 2011 18:47:08 +0100 Subject: [ExI] Same Sex Marriage (was Re: Call To Libertarians) In-Reply-To: References: Message-ID: 2011/2/23 Darren Greer : > That is far from a uniform?sentiment among the gay men I know. 
Maybe I hang > out with?iconoclasts, ?but many of them?couldn't?care less about a marriage > certificate.. What they do care about is?health coverage, tax benefits, and > not having the in-laws of your partner who haven't talked to you or him for > ten ten years march in the day after he dies and claim the furniture, the > house and even the dog because neither they nor the government recognize > your right to sleep and live with and love who you wish. Yes. The point would be of course that of liberalising succession law, not of extending access to marriage. I appreciate the feelings of "discrimination" a gay may perceive in its ability to have a same-sex, formally monogamous, theoretically long-term, relationship "blessed", but let us say that there is a law preventing minors to send themselves as slaves, contrary to everybody else. I do not mean to compare marriage of any kind with slavery, but what would be the point to fight for its abrogation? What would prevent a minor to act "as if"? As a second best, I am all in favour of making the social norms involved in marriage simply implode by allowing gay, incestuous, chaste, pedophilic, poligynic, polyandric, group, post-mortem, inter-species weddings. Nice ceremonies are not to be denied to anybody. -- Stefano Vaj From darren.greer3 at gmail.com Mon Feb 28 18:38:50 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 28 Feb 2011 14:38:50 -0400 Subject: [ExI] Wizard Calculating Device In-Reply-To: <006601cbd767$61cab8f0$25602ad0$@att.net> References: <004201cbd75f$1bd0e110$5372a330$@att.net> <006601cbd767$61cab8f0$25602ad0$@att.net> Message-ID: >We compared the original with the Chinese version, and found the manufacturing tolerances in the original were better, but that the Chinese version was in some ways easier to disassemble to modify for racing purposes. < Yes, I was thinking that this morning, how the original was much smoother to use and didn't break like the cheaper ones did. I won the local racing competition but lost out in the regional. The Rubik's cube contests were the first "sport" I was actually any good at. I loved my little collection. Like most that used them for racing I guess, I'd break them down and coat the working parts inside with vaseline in order to get it to slide better. Fond memories. I played with it a little last night, but I don't remember how to solve it. I'll work on it after my math test on Tuesday. They actually have blind-folded 3X3X3 cube contests now. The champion is (or was last year) a teenage girl in Asia somewhere. Maybe Taiwan. She can do it in under fifty seconds. Darren P.S. My pet peeve re cubes were the idiots in junior high school that used to peel off the stickers and solve it that way. A completely pointless exercise and a waste of a good cube. :) 2011/2/28 spike > > > > > *From:* extropy-chat-bounces at lists.extropy.org [mailto: > extropy-chat-bounces at lists.extropy.org] *On Behalf Of *Darren Greer > *Sent:* Monday, February 28, 2011 8:21 AM > *To:* ExI chat list > *Subject:* Re: [ExI] Wizard Calculating Device > > > > > > 2011/2/28 spike : > > > > >An original Rubik?s cube if in it?s unopened original package is a > valuable collector?s item. spike< > > > > Yeah, it's not original. It's a "cube puzzle" knock off, and though the box > is in great shape it has been opened though the instructions are still > folded up inside it. ?D > > > > > > Ja I have a couple of those. 
The original Rubik?s cube was about 7 bucks, > but almost immediately the Chinese were making cheapy knock offs, which my > college roommate brought back from Singapore as an early lesson to all of us > engineering students in intellectual property and how difficult it is to > defend. The Chinese version was about 2 bucks. We compared the original > with the Chinese version, and found the manufacturing tolerances in the > original were better, but that the Chinese version was in some ways easier > to disassemble to modify for racing purposes. A racing cube had ground and > polished catch tracks and corners. > > > > That was in 1981. I wrote a routine for a TI59 programmable calculator. > It would make the same series of moves repeatedly on a simulated cube and > note the number of moves required to cycle back to a solved cube. Then I > repeated the task on a counterfeit Apple II, which ran at a blazing 0.0028 > GHz, which finished the task about 100 times faster than the calculator. > > > > spike > > > > > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Feb 28 19:08:58 2011 From: spike66 at att.net (spike) Date: Mon, 28 Feb 2011 11:08:58 -0800 Subject: [ExI] value of a human life, was RE: Same Sex Marriage (was Re: Call To Libertarians) Message-ID: <00af01cbd77a$f3fff150$dbffd3f0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj ... >> For every gram of cocaine you hold in your hand, someone has likely been killed to get it there. Mmhhh. As much as I am against prohibitionism, this sounds a little emphatic, and likely to overestimate the number of murders, or to underestimate the number of cocaine grams in circulation. Any hard data in this respect? -- Stefano Vaj Wiki says annual world cocaine consumption is about 600 tons per year. This can likely get us an order of magnitude estimate. If we use the 1 murder per gram ratio, that's 600 million murders per year or about a tenth of the world population murdered per year. This sounds high to me. My (admittedly uneducated) estimate would run to somewhere in the order of 600 to 6000 murders per year over cocaine, so that for every 100 to 1000 kilos of cocaine you hold in your hand, someone has been killed, even if we ignore the practical difficulty of holding a ton of cocaine in one's hand. If we use that criterion to tackle the difficult ethical question regarding the dollar value of a human life, I would start with the estimate that cocaine is worth (well it was when I was a teenager) about 100 bucks a gram, and there has been inflation but simultaneously far more new suppliers from what I hear, so let me use 100 US bucks a gram, so a human life is worth between 10 million and 100 million bucks. 
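The back-of-envelope check, using the same figures (the murders-per-year range is my guess, not data):

tons_per_year = 600.0                         # wiki's world consumption figure
grams_per_year = tons_per_year * 1e6          # 6e8 grams of cocaine per year
price_per_gram = 100.0                        # USD, my rough figure

for murders_per_year in (600.0, 6000.0):
    grams_per_murder = grams_per_year / murders_per_year
    implied_value = grams_per_murder * price_per_gram
    print("%.0f murders/yr -> one per %.0f kg -> life 'worth' $%.0f million"
          % (murders_per_year, grams_per_murder / 1000.0, implied_value / 1e6))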
spike From alfio.puglisi at gmail.com Mon Feb 28 19:23:47 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Mon, 28 Feb 2011 20:23:47 +0100 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: On Mon, Feb 28, 2011 at 7:51 AM, Kelly Anderson wrote: > 2011/2/27 Alfio Puglisi : > > On Sun, Feb 27, 2011 at 3:35 PM, Stefano Vaj > wrote: > >> > >> 2011/2/22 Alfio Puglisi : > >> > On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson < > kellycoinguy at gmail.com> > > When I wrote that private prisons would be an incentive to bogus > > incarceration I was hypothesizing, but now I found out that it has > already > > happened: > > http://en.wikipedia.org/wiki/Kids_for_cash_scandal > > Alfio, the number of people inappropriately incarcerated in ANY system > is non-zero. Corruption exists everywhere. I think it is likely that > some AGIs will end up being corrupt too. Sure, don't take it personally :-) . I was just responding to Stefano's argument that mine was "a bizarre and far-fetched argument", pointing out that it was based on some real-world example. > In the Old West, the number one > murder weapon was not the Colt 45, but the shovel. (Apparently, lots > of arguments came up at water turns...) To tell us that we should give > up a good tool just because SOME number of people have misused it is > to completely stop all human progress. > > Just because someone died of arson, does not mean we should go back to > pre-fire days. > Now please don't attribute to me things I didn't say. The arsonist example was to point out that, when personal profit incentives are aligned with setting a forest to fire, fires will occur. If those seasonal workers had nothing to gain from lighting fires up, say for example if they had permanent, iron-clad contracts, they wouldn't become arsonists. This may be regarded as economically inefficient, but in this limited example, it must be weighted against the loss from all the fires (plus related externalities like increased fire insurance, etc). > I find anecdotal stories to be unconvincing. How many people have seen > big foot, UFOs, and the like? These are statistically uninteresting to > me. If it were a widespread problem, such as is the case with > corruption in Mexico, then you would get my attention. This is no more > convincing than the arguments based on single cases for or against > government health care. > Now that you talk about health care, I don't remember exactly where I read that there was some ancient Greek doctor who was regularly paid by his clients as long as they were healthy, and was *not* paid by anyone who was sick. I'm sure that the doctor made his best effort to keep everyone as healthy as possible! This is a perfect example of how profit motives can be aligned to everyone's best interest. Alfio. > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjatkins at mac.com Mon Feb 28 19:29:02 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 28 Feb 2011 11:29:02 -0800 Subject: [ExI] General comment about all this quasi-libertarianism discussion In-Reply-To: <4D6A8236.8000008@satx.rr.com> References: <4D693BD2.2050903@lightlink.com> <4D6A5C99.1030908@lightlink.com> <4D6A8236.8000008@satx.rr.com> Message-ID: <4D6BF77E.30506@mac.com> On 02/27/2011 08:56 AM, Damien Broderick wrote: > > >> ...you cite a horribly bad novel, filled to the brim with naked >> propaganda, >> written by an egomaniacal, hypocritical cult leader > > A lot of extropes are very fond of Rand (while usually avoiding > Randroid lockstep), so I don't anticipate many nodding heads at this > accurate thumbnail characterization, or at Damien Sullivan's link to a > flowchart of "How to succeed as an Ayn Rand character": Accurate? I am sorry but that is such a livid utter miss on both the work and the author that I am left stunned by the amount of evil misplaced rancour involved. I don't think I want to know or communicate with anyone that would say this is accurate. - samantha From jrd1415 at gmail.com Mon Feb 28 19:25:08 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Mon, 28 Feb 2011 12:25:08 -0700 Subject: [ExI] the road home: RE: this is me in another forty years... In-Reply-To: References: <001e01cbd6ab$927a4770$b76ed650$@att.net> <4D6A9E43.6020903@satx.rr.com> Message-ID: On Sun, Feb 27, 2011 at 12:12 PM, BillK wrote: > Would it rain on your parade too much to point at the much higher death and injury rates for motorcycle riders? Had the same thought. Definitely have to have a cryonic suspension team among the ga...er club members. Best, Jeff Davis "Death is really just an engineering problem." Regina Pancake From js_exi at gnolls.org Sat Feb 19 20:38:05 2011 From: js_exi at gnolls.org (J. Stanton) Date: Sat, 19 Feb 2011 12:38:05 -0800 Subject: [ExI] Serfdom and libertarian critiques (Was: Call to Libertarians) In-Reply-To: References: Message-ID: <4D602A2D.9060902@gnolls.org> On 2/19/11 10:46 AM, Richard Loosemore wrote: > Taxation and > government and redistribution of wealth are what separate us from the > dark ages. The concept of taxation + government + redistribution of > wealth was the INCREDIBLE INVENTION that allowed human societies in at > least one corner of this planet to emerge from feudal societies where > everyone looked after themselves and the devil took the hindmost. This is a breathtakingly counterfactual statement. Feudal economies were and are entirely supported by "taxation + government + redistribution of wealth". The only difference is that in a feudal economy, the redistribution is from the masses to the already rich, in the form of "lords" -- whereas in our modern government-contronlled economy, the redistribution is from the masses to the already rich in the form of "corporations" and "banks". The difference of income and assets between a feudal serf and his lord in the Middle Ages is not proportionally larger than the difference in income and assets today between the average world citizen and its richest citizens. The only difference is that we serfs have a better standard of living than in the Dark Ages due to sterile medicine, antibiotics, and mass production of technology. 
If anyone thinks there is a difference of kind between medieval serfdom and what we have in America ("oh, we can OWN LAND") just stop paying your property tax -- or any other tax -- and you'll see that the state owns everything, just as in the Dark Ages. What we call "ownership" is a finder's fee for the privilege of paying below-market rent. As far as libertarianism, I find the standard statist critique to be nonsense: claims that the government ca be less corrupt than the people assume that government is made up of something other than people, which fails trivially. I think Bob Black's critique is much more trenchant: "The Libertarian As Conservative" http://www.inspiracy.com/black/abolition/libertarian.html "Silly doctrinaire theories which regard the state as a parasitic excrescence on society cannot explain its centuries-long persistence, its ongoing encroachment upon what was previously market terrain, or its acceptance by the overwhelming majority of people including its demonstrable victims." JS http://www.gnolls.org From sjatkins at mac.com Mon Feb 28 19:51:34 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 28 Feb 2011 11:51:34 -0800 Subject: [ExI] META: Overposting (psychology of morals) In-Reply-To: References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1> <00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se> <20110227175045.GB26298@ofb.net> Message-ID: <4D6BFCC6.2080806@mac.com> On 02/27/2011 09:29 PM, Kelly Anderson wrote: > On Sun, Feb 27, 2011 at 10:50 AM, Damien Sullivan > wrote: >> On Sun, Feb 27, 2011 at 12:06:46PM +0000, Anders Sandberg wrote: >> I recently ran into an extreme case of this: >> http://volokh.com/2011/02/15/asteroid-defense-and-libertarianism/ > The fact that some libertarians (myself included) would prefer not to > pay taxes to protect against rogue asteroids isn't because we dismiss > the importance of doing the job. Too often, when a libertarian > individual suggests that we shouldn't have a public fire department, > the other side immediately jumps to the incorrect conclusion that > libertarians don't wish to fight fires. It isn't that we are hard > hearted and wish for everyone who isn't careful to have their house > burn down. It's just that we see a different way of paying for things. > > The total number of people currently employed in looking for asteroids > in the NASA Near Earth Object program is reportedly less than the > number of people working in a typical McDonalds. Since actuaries > indicate that we each have a 1:20,000 chance of being killed by such > an asteroid, that is a silly small number. What?? Are you seriously saying that 5 persons in every 100,000 will be or have been killed by an asteroid? The actual number historically is closer to 1:1,000,000,000. Or are you referring to the chance of dying IF an asteroid over a certain size impacts the earth without bothering to factor in the actual chances of that happening? 
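If it is the latter, the arithmetic behind such figures usually looks something like this (my illustrative numbers, averaging a very rare, very large event over everyone alive):

impact_interval_years = 5.0e5   # assumed recurrence of a civilization-scale hit
fraction_killed = 0.25          # assumed share of humanity killed by one
lifetime_years = 70.0

annual_personal_risk = fraction_killed / impact_interval_years
lifetime_odds = 1.0 / (annual_personal_risk * lifetime_years)
print("1 in %.0f" % lifetime_odds)   # -> about 1 in 29,000

Which is a statement about expected values, not about anyone who has actually been killed.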
- samantha From sjatkins at mac.com Mon Feb 28 19:54:19 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 28 Feb 2011 11:54:19 -0800 Subject: [ExI] Brief correction re Western Democracies [WASI am Call To Libertarians] In-Reply-To: References: <895132.47768.qm@web114413.mail.gq1.yahoo.com> <4D61D9E4.90607@lightlink.com> Message-ID: <4D6BFD6B.2080308@mac.com> On 02/27/2011 09:28 AM, Alfio Puglisi wrote: > > > On Sun, Feb 27, 2011 at 3:35 PM, Stefano Vaj > wrote: > > 2011/2/22 Alfio Puglisi >: > > On Tue, Feb 22, 2011 at 6:52 PM, Kelly Anderson > > > >> Some libertarians go so far as to shorten this list to Army, Courts > >> and Police. There is no reason today for all roads not to be toll > >> roads IMHO. Why not regulate, then privatize prisons? > > > > Because it creates an incentive to incarcerate people? The more > people in > > prison, the more profits from prison management. > > Come on. I can hardly be described as a libertarian, but to argue that > allowing private healh care services risks to encourage the deliberate > spreadinng of epidemics by their managers or shaheolders, > biowarfare-style, sounds as a rather bizarre and far-fetched argument. > > > When I wrote that private prisons would be an incentive to bogus > incarceration I was hypothesizing, but now I found out that it has > already happened: > > http://en.wikipedia.org/wiki/Kids_for_cash_scandal It does not require private prisons for this to happen. The "prison industry" has been big business for decades using publicly funded prisons. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Mon Feb 28 19:18:02 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Mon, 28 Feb 2011 12:18:02 -0700 Subject: [ExI] this is me in another forty years... In-Reply-To: <00a901cbd546$ec26bda0$c47438e0$@att.net> References: <00a901cbd546$ec26bda0$c47438e0$@att.net> Message-ID: Why wait forty years? We could have an Extropian motorcycle ga... er, club right now. I need a cool new leather jacket. My old one is shot. We could cruise around the planet, stopping at all those places where are buds reside, party a while, and then move on. Best, Jeff Davis "You are what you think." Jeff Davis From darren.greer3 at gmail.com Mon Feb 28 18:44:20 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 28 Feb 2011 14:44:20 -0400 Subject: [ExI] Wizard Calculating Device In-Reply-To: References: <004201cbd75f$1bd0e110$5372a330$@att.net> <006601cbd767$61cab8f0$25602ad0$@att.net> Message-ID: Darren wrote: >Rubik's cube contests were the first "sport" I was actually any good at. < Damn. I just remembered something. That was also the first book I ever wrote. I wrote a manual on how to solve it. I can't believe that completely slipped my mind for all these years. I got my buddy Donald to type it up for me and sold it to my friends for fifty cents a copy. They said later it was completely incomprehensible. :) But I had diagrams and everything, which I drew in by hand on the typed master copy. Cool, to get that memory back. Darren On Mon, Feb 28, 2011 at 2:38 PM, Darren Greer wrote: > >We compared the original with the Chinese version, and found the > manufacturing tolerances in the original were better, but that the Chinese > version was in some ways easier to disassemble to modify for racing > purposes. < > > Yes, I was thinking that this morning, how the original was much smoother > to use and didn't break like the cheaper ones did. 
> I won the local racing competition but lost out in the regional. The
> Rubik's cube contests were the first "sport" I was actually any good at.
> I loved my little collection. Like most that used them for racing I
> guess, I'd break them down and coat the working parts inside with
> vaseline in order to get it to slide better. Fond memories. I played with
> it a little last night, but I don't remember how to solve it. I'll work
> on it after my math test on Tuesday. They actually have blind-folded
> 3X3X3 cube contests now. The champion is (or was last year) a teenage
> girl in Asia somewhere. Maybe Taiwan. She can do it in under fifty
> seconds.
>
> Darren
>
> P.S. My pet peeve re cubes were the idiots in junior high school that
> used to peel off the stickers and solve it that way. A completely
> pointless exercise and a waste of a good cube. :)
>
> 2011/2/28 spike
>
>> *From:* extropy-chat-bounces at lists.extropy.org [mailto:
>> extropy-chat-bounces at lists.extropy.org] *On Behalf Of *Darren Greer
>> *Sent:* Monday, February 28, 2011 8:21 AM
>> *To:* ExI chat list
>> *Subject:* Re: [ExI] Wizard Calculating Device
>>
>> 2011/2/28 spike :
>>
>> >An original Rubik's cube if in its unopened original package is a
>> valuable collector's item. spike<
>>
>> Yeah, it's not original. It's a "cube puzzle" knock off, and though the
>> box is in great shape it has been opened though the instructions are
>> still folded up inside it.  D
>>
>> Ja I have a couple of those.  The original Rubik's cube was about 7
>> bucks, but almost immediately the Chinese were making cheapy knock offs,
>> which my college roommate brought back from Singapore as an early lesson
>> to all of us engineering students in intellectual property and how
>> difficult it is to defend.  The Chinese version was about 2 bucks.  We
>> compared the original with the Chinese version, and found the
>> manufacturing tolerances in the original were better, but that the
>> Chinese version was in some ways easier to disassemble to modify for
>> racing purposes.  A racing cube had ground and polished catch tracks and
>> corners.
>>
>> That was in 1981.  I wrote a routine for a TI59 programmable calculator.
>> It would make the same series of moves repeatedly on a simulated cube
>> and note the number of moves required to cycle back to a solved cube.
>> Then I repeated the task on a counterfeit Apple II, which ran at a
>> blazing 0.0028 GHz, which finished the task about 100 times faster than
>> the calculator.
>>
>> spike
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>
>
> --
> *There is no history, only biography.*
>
> *-Ralph Waldo Emerson
> *

--
*There is no history, only biography.*

*-Ralph Waldo Emerson
*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com  Mon Feb 28 20:38:45 2011
From: pharos at gmail.com (BillK)
Date: Mon, 28 Feb 2011 20:38:45 +0000
Subject: [ExI] this is me in another forty years...
In-Reply-To: 
References: <00a901cbd546$ec26bda0$c47438e0$@att.net>
Message-ID: 

On Mon, Feb 28, 2011 at 7:18 PM, Jeff Davis wrote:
> Why wait forty years?  We could have an Extropian motorcycle ga... er,
> club right now.  I need a cool new leather jacket.  My old one is
> shot.
> We could cruise around the planet, stopping at all those places
> where our buds reside, party a while, and then move on.
>

That's not quite as unrealistic as it might look at first glance.  :)

There is already a reasonably large retirement community that wanders
around the US, Europe and Australia in large mobile homes. Florida in
winter, further north in summer. They have sold up their home and used the
cash to buy a Winnebago and supplement their pension.

Most already tow a small car as a runabout, so just strap a motorcycle on
the back as well.

Sounds like fun to me.

BillK

From spike66 at att.net  Mon Feb 28 21:22:05 2011
From: spike66 at att.net (spike)
Date: Mon, 28 Feb 2011 13:22:05 -0800
Subject: [ExI] this is me in another forty years...
In-Reply-To: 
References: <00a901cbd546$ec26bda0$c47438e0$@att.net>
Message-ID: <00e601cbd78d$8c9de630$a5d9b290$@att.net>

... On Behalf Of Jeff Davis
Subject: Re: [ExI] this is me in another forty years...

>Why wait forty years?  We could have an Extropian motorcycle ga... er,
club right now.  I need a cool new leather jacket.  My old one is shot...

Best, Jeff Davis

Jeff, your old leather jacket is fine.  But that bike of yours, oh my
goodness, that one will hafta go pal.  Don't get me wrong, I like rat
bikes, rode them for years.  My four motorcycles are aged 24, 25, 26 and
29 years.  But jeez, none of them look like that Kawasaki you ride, oy
freaking vey.

Now all we need to do is decide if we will be the Singularity Riders or
the Voodoo Sex Candles.

spike

From phoenix at ugcs.caltech.edu  Mon Feb 28 21:39:34 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Mon, 28 Feb 2011 13:39:34 -0800
Subject: [ExI] libertarian (asteroid) defense
In-Reply-To: 
References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1>
	<00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se>
	<20110227175045.GB26298@ofb.net>
Message-ID: <20110228213934.GA1344@ofb.net>

On Sun, Feb 27, 2011 at 10:29:42PM -0700, Kelly Anderson wrote:

> > I recently ran into an extreme case of this:
> > http://volokh.com/2011/02/15/asteroid-defense-and-libertarianism/

> If private insurance companies sold asteroid insurance, which they
> should, then there would be a significant desire to avoid payout. That

Why should they?  Can they make money off of it?  Why aren't they selling
asteroid insurance right now?  Who would buy end of the world insurance --
who would make or receive payments?

Unsubsidized insurers go for little disasters that happen a lot and are
spread out in a statistically averagable manner.  They avoid things that
strike lots of people at once, like floods, earthquakes, and fission plant
accidents.

> would lead to the spending of money to avoid the disaster in the first
> place. Of all potential mega disasters we could face, asteroid hits
> are the most easily preventable... (compared to such things as super
> volcanos, subduction earthquakes and tsunamis and the like, where we

Actually I imagine volcanoes might be pretty tameable.  Drill down and
release gases/magma in a controlled manner, rather than letting them blow
all at once.  Though the BP oil spill highlights the safety concerns of
drilling into a pressure chamber.  Would want to practice on the small
volcanoes first.

Alternately, being able to trigger a volcano or earthquake at a specific
time would be helpful, rather than having them strike at once.

> Additionally, in a libertarian society, someone might set up a non
> profit organization to search for and disable near earth objects.
> If everyone in America donated 25 cents to such an organization, it
> would be funded well over current funding levels.

Someone might?  Why don't they do so now?  Why would everyone donating 25
cents be more likely then than it is now?

In a libertarian society, you get to specify less government, that's all.
You don't get to specify magically more altruistic people than we have
now.  And anything not actively banned by government is perfectly doable
today, so if people aren't doing it now, that bodes ill for doing it in a
libertarian world.

-xx- Damien X-)

From spike66 at att.net  Mon Feb 28 22:28:45 2011
From: spike66 at att.net (spike)
Date: Mon, 28 Feb 2011 14:28:45 -0800
Subject: [ExI] propose temporary moratorium on a topic
Message-ID: <00f001cbd796$dcf52270$96df6750$@att.net>

As a counterpart to our recent temporary open season on libertarian
discussions, I now propose a temporary halt to the discussions on that
topic, thanks.

When Atlas Shrugged reaches the theatres, I expect we will have another
temporary open season.  Fair enough?

spike

From anders at aleph.se  Mon Feb 28 22:42:41 2011
From: anders at aleph.se (Anders Sandberg)
Date: Mon, 28 Feb 2011 22:42:41 +0000
Subject: [ExI] libertarian (asteroid) defense
In-Reply-To: <4D6BFCC6.2080806@mac.com>
References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1>
	<00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se>
	<20110227175045.GB26298@ofb.net> <4D6BFCC6.2080806@mac.com>
Message-ID: <4D6C24E1.7000002@aleph.se>

Samantha Atkins wrote:
> On 02/27/2011 09:29 PM, Kelly Anderson wrote:
>> The total number of people currently employed in looking for asteroids
>> in the NASA Near Earth Object program is reportedly less than the
>> number of people working in a typical McDonalds. Since actuaries
>> indicate that we each have a 1:20,000 chance of being killed by such
>> an asteroid, that is a silly small number.
>
> What??  Are you seriously saying that 5 persons in every 100,000 will
> be or have been killed by an asteroid?  The actual number historically
> is closer to 1:1,000,000,000.  Or are you referring to the chance of
> dying IF an asteroid over a certain size impacts the earth without
> bothering to factor in the actual chances of that happening?

If you calculate the expected number of fatalities times the estimated
frequency distribution of impacts, you get numbers like an annual risk of
dying of 1 in 2 million (a lifetime risk of 1 in 30,000). These numbers
jiggle depending on which dataset you use in the calculation, but
basically the risk seems to be on the order of 10^-4 that we will have an
asteroid GCR during our lifetime. It is megatsunamis or asteroid winters
from 1+ km impactors that provide most of the hazard. Fortunately the size
distribution has a rapidly declining power law tail and we have not seen
any dangerous NEOs so far, but if you like to worry there is always the
possibility that there are "black comets" that we are bad at detecting
(sure, there might be just (say) 10% chance of the theory being right, but
if it is right the risk might jump more than one order of magnitude).

I will be giving a keynote speech at the 2011 IAA Planetary Defense
Conference this May with the title "The billion-body problem: taking human
(ir)rationality into account for planetary defense". So I would love to
mine this thread for good ideas.
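(For anyone who wants to sanity-check the orders of magnitude above, here
is a minimal back-of-envelope sketch in Python. The impact classes,
intervals and fatality counts in it are illustrative assumptions invented
for the sketch, not the datasets behind the published estimates, so treat
the output as an order-of-magnitude check only.)

    # Back-of-envelope expected-fatality estimate for asteroid impacts.
    # The impact classes below are illustrative assumptions, not survey data.
    world_population = 7.0e9
    lifetime_years = 70.0

    # (mean interval between impacts in years, assumed fatalities per impact)
    impact_classes = [
        (1.0e2, 1.0e3),    # small airburst over a populated area (assumed)
        (1.0e4, 1.0e6),    # Tunguska-class hit on a city or region (assumed)
        (5.0e5, 1.5e9),    # ~1 km impactor: megatsunami / impact winter (assumed)
        (1.0e8, 6.0e9),    # ~10 km impactor: near-extinction event (assumed)
    ]

    deaths_per_year = sum(deaths / interval for interval, deaths in impact_classes)
    annual_risk = deaths_per_year / world_population
    lifetime_risk = annual_risk * lifetime_years

    print("expected deaths per year: %.0f" % deaths_per_year)
    print("annual risk of dying:     1 in %.0f" % (1.0 / annual_risk))
    print("lifetime risk of dying:   1 in %.0f" % (1.0 / lifetime_risk))

With assumptions of this order the output lands near 1 in 2 million per
year and 1 in 30,000 per lifetime, and the 1+ km class contributes almost
all of the expected deaths, which is the point about megatsunamis and
impact winters carrying most of the hazard.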
Here is my abstract:

"Of all the types of global catastrophic risks facing mankind, NEO impacts
almost represent a best case: they follow dynamical laws that are fairly
well understood and deterministic, we have data on past effects, estimated
probability distributions, there is some public understanding of the
issue, and there is a community that is actually working towards ways of
ameliorating the risk. Few other threats have this many positive factors
working against them. Yet we should not expect planetary defense to be
unproblematic even if the scientific and technical problems can be
resolved. Part of this is due to the public goods nature of planetary
defense: who will pay for something that benefits everybody? Another
aspect is the long timeframes involved: how well can societies plan for
predictable but remote threats when there are many more urgent issues?
More fundamentally, human cognition suffers from several biases that are
likely to both impair proper understanding and decisionmaking for this
kind of rare but hazardous event. Can we get around the problems posed by
human rationality for coordinating planetary defense? This is the
billion-body problem."

Libertarian asteroid defense is a nice example of how a worst case
scenario can be helpful for planning: if one can come up with a good
private solution, then the public good issue might be solvable. The
timeframe issue might be worse from our perspective than most, since if
you are in a civilization hurtling towards singularity discounting of the
future might go down: the next generation will be far more capable than
the current one, and investing your capital in more growth gives better
returns than investing in asteroid deflection.

-- 
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University

From darren.greer3 at gmail.com  Mon Feb 28 23:09:06 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Mon, 28 Feb 2011 19:09:06 -0400
Subject: [ExI] value of a human life, was RE: Same Sex Marriage (was Re: Call To Libertarians)
In-Reply-To: <00af01cbd77a$f3fff150$dbffd3f0$@att.net>
References: <00af01cbd77a$f3fff150$dbffd3f0$@att.net>
Message-ID: 

On Mon, Feb 28, 2011 at 3:08 PM, spike wrote:

>Wiki says annual world cocaine consumption is about 600 tons per year.
This can likely get us an order of magnitude estimate.  If we use the 1
murder per gram ratio, that's 600 million murders per year or about a
tenth of the world population murdered per year.  This sounds high to me.
My (admittedly uneducated) estimate would run to somewhere in the order of
600 to 6000 murders per year over cocaine, so that for every 100 to 1000
kilos of cocaine you hold in your hand, someone has been killed, even if
we ignore the practical difficulty of holding a ton of cocaine in one's
hand.<

My bad. That was a bit of info (and I now see propaganda) that I bought
into in treatment and never really analyzed. Makes sense. Humble apologies
and another warning to self to number crunch and check facts before I
post.

But it would be nice to know how many drug-related deaths occur each year
across the planet. Difficult to guess, I suppose. From originating
drug-country wars, mafia and biker hits, over-doses, suicides, car
accidents. And if you count alcohol, the numbers would seriously spike.
(Not you spike.)

Darren

> > -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org
> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj
> ...
>
> >> For every gram of cocaine you hold in your hand, someone has likely been
> killed to get it there.
>
> Mmhhh. As much as I am against prohibitionism, this sounds a little
> emphatic, and likely to overestimate the number of murders, or to
> underestimate the number of cocaine grams in circulation.
>
> Any hard data in this respect?
>
> --
> Stefano Vaj
>
> Wiki says annual world cocaine consumption is about 600 tons per year.
> This can likely get us an order of magnitude estimate.  If we use the 1
> murder per gram ratio, that's 600 million murders per year or about a
> tenth of the world population murdered per year.  This sounds high to me.
> My (admittedly uneducated) estimate would run to somewhere in the order
> of 600 to 6000 murders per year over cocaine, so that for every 100 to
> 1000 kilos of cocaine you hold in your hand, someone has been killed,
> even if we ignore the practical difficulty of holding a ton of cocaine in
> one's hand.
>
> If we use that criterion to tackle the difficult ethical question
> regarding the dollar value of a human life, I would start with the
> estimate that cocaine is worth (well it was when I was a teenager) about
> 100 bucks a gram, and there has been inflation but simultaneously far
> more new suppliers from what I hear, so let me use 100 US bucks a gram,
> so a human life is worth between 10 million and 100 million bucks.
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

--
*There is no history, only biography.*

*-Ralph Waldo Emerson
*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anders at aleph.se  Mon Feb 28 23:19:17 2011
From: anders at aleph.se (Anders Sandberg)
Date: Mon, 28 Feb 2011 23:19:17 +0000
Subject: [ExI] libertarian (asteroid) defense
In-Reply-To: <20110228213934.GA1344@ofb.net>
References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1>
	<00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se>
	<20110227175045.GB26298@ofb.net> <20110228213934.GA1344@ofb.net>
Message-ID: <4D6C2D75.8070601@aleph.se>

Damien Sullivan wrote:
> Actually I imagine volcanoes might be pretty tameable.  Drill down and
> release gases/magma in a controlled manner, rather than letting them
> blow all at once.  Though the BP oil spill highlights the safety
> concerns of drilling into a pressure chamber.  Would want to practice on
> the small volcanoes first.
>
I think there is a bit of a problem with this method. A typical volcano
has a lava chamber on the order of a cubic kilometer and contains up
towards 10^19 Joules or so. A drilled hole has a diameter ~0.1 m. Assuming
blackbody radiation from 2000 K lava only allows 9000 W to escape the
hole. Pumping up something like hydrogen (thermal capacity 20 kJ/kgK) at
the speed of sound (~1300 m/s) at a density of 1000 kg/m^3 gives an energy
flux of 520*10^9 W, able to defuse the volcano in a year or so... assuming
the massive erosion near-supersonic very hot gas would have on the pipe
does not just turn it into a steam vent leading to the volcano boiling off
(the risk is always a sudden pressure decrease in the chamber, since then
gases start to release in the lava and it spurts out).

Garden variety volcanos are anyway not big threats except to locals and
airlines. It is defusing supervolcanos we ought to seriously
think about, but the liability problems of getting them wrong are...
severe.

Most common natural GCRs are fairly manageable if you can warn ahead
(tsunamis, volcanos, hurricanes, even asteroids) and build resiliency
(good civil society infrastructure, food storage). The ones to fear are
the ones that are global (major climate fluctuation impairing agriculture,
pandemics, cosmic eruptions). Anthropogenic GCRs are somewhat similar, but
IMHO more dangerous because they are often adaptive and potentially
larger.

-- 
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University

From natasha at natasha.cc  Mon Feb 28 22:53:21 2011
From: natasha at natasha.cc (natasha at natasha.cc)
Date: Mon, 28 Feb 2011 17:53:21 -0500
Subject: [ExI] propose temporary moratorium on a topic
In-Reply-To: <00f001cbd796$dcf52270$96df6750$@att.net>
References: <00f001cbd796$dcf52270$96df6750$@att.net>
Message-ID: <20110228175321.ny7qmngikg0800k4@webmail.natasha.cc>

Quoting spike :

>
> As a counterpart to our recent temporary open season on libertarian
> discussions, I now propose a temporary halt to the discussions on that
> topic, thanks.
>
> When Atlas Shrugged reaches the theatres, I expect we will have another
> temporary open season.  Fair enough?

Yes.  (And that will be about April 15th. )

From phoenix at ugcs.caltech.edu  Mon Feb 28 23:38:29 2011
From: phoenix at ugcs.caltech.edu (Damien Sullivan)
Date: Mon, 28 Feb 2011 15:38:29 -0800
Subject: [ExI] libertarian (asteroid) defense
In-Reply-To: <4D6C2D75.8070601@aleph.se>
References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1>
	<00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se>
	<20110227175045.GB26298@ofb.net> <20110228213934.GA1344@ofb.net>
	<4D6C2D75.8070601@aleph.se>
Message-ID: <20110228233829.GA9672@ofb.net>

On Mon, Feb 28, 2011 at 11:19:17PM +0000, Anders Sandberg wrote:
> Damien Sullivan wrote:
> >Actually I imagine volcanoes might be pretty tameable.  Drill down and
> >release gases/magma in a controlled manner, rather than letting them
> >blow all at once.  Though the BP oil spill highlights the safety
> >concerns of drilling into a pressure chamber.  Would want to practice on
> >the small volcanoes first.

Oh cool, you give the first response I've gotten to this that isn't a
simple "cool" or "that's silly".

> I think there is a bit of a problem with this method. A typical
> volcano has a lava chamber on the order of a cubic kilometer and
> contains up towards 10^19 Joules or so. A drilled hole has a
> diameter ~0.1 m. Assuming blackbody radiation from 2000 K lava only
> allows 9000 W to escape the hole. Pumping up something like hydrogen
> (thermal capacity 20 kJ/kgK) at the speed of sound (~1300 m/s) at a
> density of 1000 kg/m^3 gives an energy flux of 520*10^9 W, able to
> defuse the volcano in a year or so... assuming the massive erosion

Those seem like silly mechanisms, especially radiation, though they give a
bracket, I guess.  I'd envisioned letting CO2 out, to lessen the pressure;
letting lava come up and cool; pumping water down, so you're both cooling
the chamber and recouping or even generating geothermal power.  And no
need to be limited to one hole.

> turn it into a steam vent leading to the volcano boiling off (the
> risk is always a sudden pressure decrease in the chamber, since then
> gases start to release in the lava and it spurts out).

Ah, that's good to know.  I've never studied vulcanology much.

> Garden variety volcanos are anyway not big threats except to locals
> and airlines.
> It is defusing supervolcanos we ought to seriously

Well, the high end of ordinary volcanoes can bobble climate and
agriculture for a year or three.  Survivable in a robust system but an
unfortunate extra stress at bad times.

Speaking of robustness, I was appalled to investigate and learn that the
US doesn't seem to have a strategic food reserve.  I think we used to, but
now we trust to the markets, as if markets will provide food in the case
of a global crop failure.  Well, we can pay more, but still.  Weren't
public granaries one of the first services of non-thuggish government?

> think about, but the liability problems of getting them wrong are...
> severe.

Yeah.  Of course, getting surprised by one is also severe.  But I guess no
*liability*.

> Most common natural GCRs are fairly manageable if you can warn ahead

GCR = global catastrophic risk

-xx- Damien X-)

From anders at aleph.se  Mon Feb 28 23:40:54 2011
From: anders at aleph.se (Anders Sandberg)
Date: Mon, 28 Feb 2011 23:40:54 +0000
Subject: [ExI] Serious topic
In-Reply-To: <20110228113420.GU23560@leitl.org>
References: <4D6B7903.5000200@aleph.se> <20110228113420.GU23560@leitl.org>
Message-ID: <4D6C3286.1070806@aleph.se>

Eugen Leitl wrote:
> The only way to give Moore second wind is to leap into 3d, and
> I'm very leery expecting just-in-time new technologies.
>
The real Moore's law is flops per dollar rather than transistors, anyway.
Right now it runs on increasing cores, it seems. I think *Sahal's law* is
strong enough: given the still remaining enormous demand for processors of
all sizes, we will see continued exponential improvement for at least two
more decades.
http://192.12.12.16/events/workshops/images/4/4f/Nagy.ModelingOrganizationalComplexity.pdf

>> Just like Bass technology diffusion curves they fit data very well in
>> retrospect. But when used to extrapolate the future they seem too
>> unstable: the predicted peaks or sigmoids jump all over the place due to
>> noise in the data.
>>
> I see little magic in graphs, but in fundamental processes giving
> rise to the graphs.
>
Exactly. But that rarely gives insight enough to make the quantitative
predictions we care about. The fact that there is a finite amount of oil
(modulo slow abiogenesis) doesn't tell us whether we should be heading for
the hills, investing in solar power or forming a bike gang. That Bass
curve dynamics is well understood will not help you predict how many
gizmos you will eventually sell.

> Markets are chronically poor long-term predictors.
>
Maybe. But they seem to be extremely quick at changing when there is a
constraint. Remember what happened to the energy use per capita vs GDP per
capita post WWII?
http://www.aleph.se/andart/archives/images/fig2007userg.html
A very strong linear relation from 1949 to 1970, followed by what has
essentially been a horizontal curve (growing GDP with no more energy use).
The reason was the oil crisis. It induced a surprisingly quick shift of a
long-term, heavily invested infrastructure.

-- 
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University

From spike66 at att.net  Mon Feb 28 23:32:58 2011
From: spike66 at att.net (spike)
Date: Mon, 28 Feb 2011 15:32:58 -0800
Subject: [ExI] value of a human life, was RE: Same Sex Marriage (was Re: Call To...
Message-ID: <00f701cbd79f$d50fb760$7f2f2620$@att.net>

From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer

On Mon, Feb 28, 2011 at 3:08 PM, spike wrote:

>>. so that for every 100 to 1000 kilos of cocaine you hold in your hand,
someone has been killed, even if we ignore the practical difficulty of
holding a ton of cocaine in one's hand.<

.

>.From originating drug-country wars, mafia and biker hits, over-doses,
suicides, car accidents.

BIKER HITS???  What's this biker hits jazz?  Sorry, we bikers can be a
little over sensitive at times.  We have that whole leather jacket and
rumble chain around the neck ruffian reputation, but we are gentle souls
who are deeply hurt at the idea we would ever harm a soul.  Sometimes I
want to hit someone with my purse.  {8^D

The nuclear war people use the term megadeath, so I see no reason why we
couldn't use the term microdeaths per gram of cocaine.

>.And if you count alcohol, the numbers would seriously spike. (Not you
spike.). Darren

Oddly enough, this puts me in the position of sounding like I am defending
drug and alcohol abuse, when in actuality I am not.  I reluctantly agree
that all recreational pharmaceuticals should be legal.  At the same time,
I will unapologetically state I think people would be better off leaving
it alone, all of it.  Live sober me lads!  It's the polar opposite to that
feller who said something about tune in, turn on, drop out.  My version is
live sober, eat light, think hard and write smart!  {8-]

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dan_ust at yahoo.com  Mon Feb 28 23:19:53 2011
From: dan_ust at yahoo.com (Dan)
Date: Mon, 28 Feb 2011 15:19:53 -0800 (PST)
Subject: [ExI] libertarian (asteroid) defense
In-Reply-To: <20110228213934.GA1344@ofb.net>
References: <51C5DD94C093479AAB3A5C8A55B5F961@DFC68LF1>
	<00ab01cbd44c$71919380$54b4ba80$@att.net> <4D6A3E56.9020805@aleph.se>
	<20110227175045.GB26298@ofb.net> <20110228213934.GA1344@ofb.net>
Message-ID: <878649.30684.qm@web30108.mail.mud.yahoo.com>

One minor point: given that people tend to look toward government for
these kinds of things, I don't know if it's the best test to look at what
people are doing now to assess what they'd do were there no government. (I
wouldn't say it's not the worst test -- as I do believe people would
generally do many of the same things they do now sans government. It's
just that what will happen absent government depends not just on what
governments actually do or prohibit, but also on expectations.) And one
could also make the case that, were there no government and this were a
basically libertarian world, there'd be more wealth to devote to things
like highly unlikely risks. Of course, this is accepting that free
societies (and a basically free world) would be more productive. This does
not mean, of course, that free people would necessarily devote any extra
wealth, accepting the argument, to asteroid defense.

Finally, a case could be made that the cost of spaceflight would be lower
without government subsidies in the space market so that the costs of
asteroidal defense would be lower. Again, accepting this as so doesn't
mean people would say, "Oh, the cost is much lower, so let's invest in
it." Maybe they wouldn't.
Regards,

Dan

From: Damien Sullivan
To: ExI chat list
Sent: Mon, February 28, 2011 4:39:34 PM
Subject: [ExI] libertarian (asteroid) defense

On Sun, Feb 27, 2011 at 10:29:42PM -0700, Kelly Anderson wrote:
> > I recently ran into an extreme case of this:
> > http://volokh.com/2011/02/15/asteroid-defense-and-libertarianism/

> If private insurance companies sold asteroid insurance, which they
> should, then there would be a significant desire to avoid payout. That

Why should they?  Can they make money off of it?  Why aren't they selling
asteroid insurance right now?  Who would buy end of the world insurance --
who would make or receive payments?

Unsubsidized insurers go for little disasters that happen a lot and are
spread out in a statistically averagable manner.  They avoid things that
strike lots of people at once, like floods, earthquakes, and fission plant
accidents.

> would lead to the spending of money to avoid the disaster in the first
> place. Of all potential mega disasters we could face, asteroid hits
> are the most easily preventable... (compared to such things as super
> volcanos, subduction earthquakes and tsunamis and the like, where we

Actually I imagine volcanoes might be pretty tameable.  Drill down and
release gases/magma in a controlled manner, rather than letting them blow
all at once.  Though the BP oil spill highlights the safety concerns of
drilling into a pressure chamber.  Would want to practice on the small
volcanoes first.

Alternately, being able to trigger a volcano or earthquake at a specific
time would be helpful, rather than having them strike at once.

> Additionally, in a libertarian society, someone might set up a non
> profit organization to search for and disable near earth objects. If
> everyone in America donated 25 cents to such an organization, it would
> be funded well over current funding levels.

Someone might?  Why don't they do so now?  Why would everyone donating 25
cents be more likely then than it is now?

In a libertarian society, you get to specify less government, that's all.
You don't get to specify magically more altruistic people than we have
now.  And anything not actively banned by government is perfectly doable
today, so if people aren't doing it now, that bodes ill for doing it in a
libertarian world.

-xx- Damien X-)

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
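A toy calculation makes the point above about "little disasters that are
spread out" versus everyone-at-once catastrophes concrete. The Python
sketch below uses invented numbers purely for illustration; the design
point is that independent losses keep an insurer's worst year close to its
average year, while a correlated event pays out the whole book at once:

    import math

    # Toy insurance book: same expected loss per policy, very different tail.
    # All numbers are invented for illustration only.
    policies = 100000
    p_loss = 0.001              # annual probability of a claim (or of the shared event)
    loss = 100000.0             # payout per claim, in dollars

    expected_annual_payout = policies * p_loss * loss

    # Independent risks (house fires): claim counts fluctuate by roughly
    # sqrt(N), so even a very bad year stays near the expectation.
    mean_claims = policies * p_loss
    bad_year_payout = (mean_claims + 5.0 * math.sqrt(mean_claims)) * loss

    # Fully correlated risk (regional impact, flood, quake): when the event
    # happens, every policy claims at once.
    correlated_event_payout = policies * loss

    print("expected annual payout:        $%.2e" % expected_annual_payout)
    print("very bad year, independent:    $%.2e" % bad_year_payout)
    print("payout if shared event occurs: $%.2e" % correlated_event_payout)

With these made-up numbers the all-at-once event costs about a thousand
times the expected annual payout in a single year, which is roughly why
unsubsidized insurers stick to statistically averagable risks or lay
correlated ones off on reinsurers and governments.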