From msd001 at gmail.com Tue Feb 1 00:54:15 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 31 Jan 2011 19:54:15 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> <25169E97-D721-4EDD-9191-EB0C3568D967@bellsouth.net> Message-ID: 2011/1/31 John Clark : > On Jan 31, 2011, at 12:31 PM, Adrian Tymes wrote: > > Talk about one subject. Then talk about something else. A human can handle > this - even if they are not an expert in all things (which no human is, > though some try to pretend they are). These AIs completely break down. > > Until now it was true that AI programs were very brittle, but that's why I > was so impressed with Watson: its knowledge base is so vast and it's so good > at finding the appropriate information from even vague, poorly phrased > input that with only a few modifications you could make a program that could > speak about anything and do so intelligently enough not to be embarrassing. > Of course I'm not saying it would always speak brilliantly; if it did that > it would be a dead giveaway that it's not human and fail the Turing Test. yes, because no human would ever speak embarrassingly on a topic :) From possiblepaths2050 at gmail.com Tue Feb 1 08:36:00 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 1 Feb 2011 01:36:00 -0700 Subject: [ExI] A futurist on Coast to Coast AM with George Noory Message-ID: On Tuesday, the futurist Mark Stevenson will be the special guest on Coast to Coast AM with George Noory. An excerpt from the Coast to Coast AM website: >Writer, deep-thinker, and stand-up comedian Mark Stevenson shares his journey >to find out what the future holds. He'll discuss his meetings with Transhumanists >who intend to live forever, robots, smart farmers, nanotechnology experts, and >scientists manipulating the genome. http://www.coasttocoastam.com/show/2011/02/02 John : ) From bbenzai at yahoo.com Tue Feb 1 13:12:59 2011 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 1 Feb 2011 05:12:59 -0800 (PST) Subject: [ExI] atheists declare religions as scams. In-Reply-To: Message-ID: <803459.14987.qm@web114405.mail.gq1.yahoo.com> On 31 January 2011 01:31, Keith Henson wrote: > I think atheists would be much better off to try to understand why (in > an evolutionary sense) humans have religions at all. I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. You know, the 'movement in the bushes might be a lion' idea. Those who assume it's a lion will survive when it actually is a lion, and those who don't, won't. So attributing agency to things becomes a selected survival trait. I'm sure there will be other contributing factors, but it seems to me that this is the main one. Ben Zaiboc From jonkc at bellsouth.net Tue Feb 1 14:54:40 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 09:54:40 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams.
In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <5B7EE075-31A6-46BE-AC0B-E9446E5B048C@bellsouth.net> On Jan 31, 2011, at 1:28 PM, Adrian Tymes wrote: >> Don't be silly, the probability of Voyager spotting such a teapot even if it >> were there is virtually zero. > > I'm not so sure. It depends on how close Voyager passed, and they do pick > up a lot of details with repeated analysis. You must be joking, you just must be. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Feb 1 15:26:26 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 10:26:26 -0500 Subject: [ExI] Fw: Re: atheists declare religions as scams. In-Reply-To: References: <686238.97178.qm@web114420.mail.gq1.yahoo.com> <212402D0-8A97-4713-B20F-8ECF7ABF1A18@bellsouth.net> <7EFFBBF3-CDF4-4183-B422-51D6946FD56C@bellsouth.net> Message-ID: <826E4FFC-699C-429E-98DB-BBCEA9D3D3B2@bellsouth.net> On Jan 31, 2011, at 2:04 PM, Darren Greer wrote: > > A teapot agnostic says "I don't have enough data to determine whether the teapot exists so I can't form an opinion." Then he is not only a teapot agnostic he is also a liar because, assuming he is not a resident of a looney bin, you can be quite certain that he DOES have an opinion regarding a teapot in orbit around the planet Uranus. In addition, if logically you are justified in being 99.9999% certain that X does not exist, then even if it is irrational it's not very irrational to be emotionally 100% certain that X does not exist; the emotional part of the human mind is just not equipped with such precision tolerances to allow greater distinctions than that, because Evolution decided it would be a waste of resources. > You allot each position as a possibility. In the real world where scientists actually do things they regard some possibilities (most possibilities actually) as being so low they are not worth the wear and tear on their valuable brain cells. > Respect has nothing to do with it. Things you respect deserve your time, things you don't don't. > See my point? No. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Feb 1 15:53:15 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 16:53:15 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D471EA4.7080900@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: On 31 January 2011 21:42, Richard Loosemore wrote: > But that is *exactly* my point. We are not getting tantalizingly close, we > are just doing the same old snake-oil con trick of building a system that > works in a ridiculously narrow domain, and which impresses some people with > the sheer breadth of information it stores inside it. I suspect that human beings themselves do little else than add half-competent reactions in ridiculously narrow domains one to another for a very large number thereof. And, hey, we might well be more optimised for this feature than many AGI proponents seem to believe...
So, an intelligence with competitive performance in this task, be it even entirely artificial, could end up being more similar, in its relevant components, to the brains we know than to anything else. Of course, nothing prevents us from adding, say, a math coprocessor to such a system. Or to ourselves, for that matter. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 1 15:13:32 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 16:13:32 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> Message-ID: On 31 January 2011 17:36, Kelly Anderson wrote: > I think the problem is really related to the definition of > intelligence. Nobody has really defined it, so the definition seems to > fall out as "Things people do that computers don't do yet." So what is > "Things computers do that people can't do"? Certainly it is not ALL > trivial stuff. For example, using genetic algorithms, computers have > designed really innovative jet engines that no people ever considered. > Is that artificial intelligence (i.e. the kind people can't do?) Very good point. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 1 16:23:18 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 1 Feb 2011 17:23:18 +0100 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <803459.14987.qm@web114405.mail.gq1.yahoo.com> References: <803459.14987.qm@web114405.mail.gq1.yahoo.com> Message-ID: On 1 February 2011 14:12, Ben Zaiboc wrote: > On 31 January 2011 01:31, Keith Henson wrote: >> I think atheists would be much better off to try to understand why (in >> an evolutionary sense) humans have religions at all. > > I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. There again, if I am angry with my car because it does not want to start, I am not founding a religion, I am more likely to kick it and call a mechanic. Moreover, there are religious beliefs which have nothing to do with attributing agency. Basically, "re-ligio" in Latin simply means a common set of ideas and narratives that binds together a group. In this sense, its evolutionary significance for a cultural being would seem obvious. But there is no requirement whatsoever that such ideas or narratives include or postulate a metaphysical worldview. In fact, I suspect that most of the time, in most places in human history, they did not, and yet could serve very well whatever evolutionary added value religions may deliver. -- Stefano Vaj From spike66 at att.net Tue Feb 1 16:09:39 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 08:09:39 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: <000001cbc22a$6e184390$4a48cab0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj ... Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. On 31 January 2011 21:42, Richard Loosemore wrote: >> But that is *exactly* my point.
We are not getting tantalizingly > close, we are just doing the same old snake-oil con trick of building > a system that works in a ridiculously narrow domain, and which > impresses some people with the sheer breadth of information it stores inside it. >I suspect that human beings themselves do little else than add half-competent reactions in ridiculously narrow domains one to another for a very large number thereof... Stefano Vaj Ja. The reason I think we are talking past each other is that we are describing two very different things when we are talking about human level intelligence. I am looking for something that can provide companionship for an impaired human, whereas I think Richard is talking about software which can write software. If one goes to the nursing home, there are plenty of human level intelligences there, lonely and bored. I speculate you could go right now to the local nursing home, round up arbitrarily many residents there, and find no ability to write a single line of code. If you managed to find a long retired Fortran programmer, I speculate you would find no one there who could be the least bit of help coding your latest video game. I think we are close to writing software which would provide the nursing home residents with some degree of comfort and something interesting to talk to. Hell, people talk to pets when other humans won't listen. We can do better than a poodle. If we are talking about software which can write software, that's a whole nuther thing, the singularity. If we get that, entertaining the elderly is irrelevant. spike From jonkc at bellsouth.net Tue Feb 1 18:11:22 2011 From: jonkc at bellsouth.net (John Clark) Date: Tue, 1 Feb 2011 13:11:22 -0500 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: On Jan 31, 2011, at 1:08 PM, Kelly Anderson wrote: > > Another test... suppose that I subscribed an artificial intelligence program to this list. Why do you subscribed an artificial intelligence program to this list? > That's a bit easier Easier is the key. > So perhaps I suggest a new test. If a computer is smart enough to get > admitted into Brigham Young University Brigham Young University is the key. > you don't have to do the processing in real time as with a chat program. You could be right about that. > I suppose that's just another emergent aspect of the human brain. Why do suppose that's just another emergent aspect of the human brain? > There seems to be a supposition by some (not me) that to be > intelligent, consciousness is a prerequisite. You could be right about that. > Once again, we run into another definition issue. Run into is the key. > Why do you sat Watson knows that 'Sunflowers' was painted by 'Van Gogh Why do you sat? > Maybe this still doesn't make total sense You could be right about my hovercraft being full of eels. John K zzzzzzz 1521 buffer overflow error module 429232 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Tue Feb 1 22:03:10 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:03:10 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D4707E4.3000106@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> Message-ID: On Mon, Jan 31, 2011 at 12:05 PM, Richard Loosemore wrote: > Kelly Anderson wrote: >> On Fri, Jan 28, 2011 at 9:01 AM, Richard Loosemore >> Trivial!?! This is the final result of decades of research in both >> software and hardware. Hundreds of thousands of man hours have gone >> into the projects that directly led to this development. Trivial! You >> have to be kidding. The subtle language cues that are used on Jeopardy >> are not easy to pick up on. This is a really major advance in AI. I >> personally consider this to be a far more impressive achievement than >> Deep Blue learning to play chess. > > I stand by my statement that what Watson can do is "trivial". If what you are saying is that Watson is doing a trivial subset of human capabilities, then yes, what Watson is doing is trivial. It is by no means a trivial problem to get computers to do it, as I'm sure you are aware. > You are wildly overestimating Watson's ability to handle "subtle language > cues". It is being asked a direct factual question (so, no need for Watson > to categorize the speech into the dozens or hundreds of subtle locution > categories that a human would have to), and there is also no need for Watson > to try to gauge the speaker's intent on any of the other levels at which > communication usually happens. Have you watched Jeopardy? Just figuring out what they mean by the category name is often quite difficult. The questions are often full of puns, innuendo and other slippery language. > Furthermore, Watson is unable (as far as I know) to deploy its knowledge in > such a way as to learn any new concepts just by talking, or answer questions > that involve mental modeling of situations, or abstractions. It learns new concepts by reading. As far as I know, it has no capability for generating follow up questions. But if a module were added to ask questions, I have no doubt that it would be able to read the answer, and thus 'learn' a new concept, at least insofar as what Watson is doing can be classified as learning. > For example, I > would bet that if I ask Watson: > > "If I have a set of N balls in a bag, and I pull out the same number of > balls from the bag as there are letters in your name, how many balls would > be left in the bag?" > > It would be completely unable to answer. Of course, because it has to be in the form of an answer... ;-) Seriously, you may be correct. However, I would not be surprised if Watson were able to handle this kind of simple progressive logic. We would have to ask the designers. Natural language processing has been able to parse those kinds of sentences for some time, so I would not be surprised if Watson could also parse your sentence. Whether it would be able to answer or not is something I don't know. I hope someday they put some form of Watson online so we can ask it questions and see how good it is at answering them. >> Richard, do you think computers will achieve Strong AI eventually? > > Kelly, by my reckoning I am one of only a handful of people on this planet > with the ability to build a strong AI, and I am actively working on the > problem (in between teaching, fundraising, and writing to the listosphere). That's fantastic, I truly hope you succeed. If you are working to build a strong AI, then you must believe it is possible.
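(Circling back to your bag-of-balls question for a moment: the computation it asks for is tiny once the English has been parsed. A minimal sketch -- my own hypothetical framing, assuming only that the machine knows its name is "Watson":

def balls_left(n, name="Watson"):
    # "pull out the same number of balls as there are letters in your name"
    return n - len(name)

print balls_left(10)   # -> 4; in general, N - 6

So the whole weight of your test falls on getting from the sentence to those two lines, not on the arithmetic itself.)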
I have spent about the last two hours reading your papers, web site, etc. You have an interesting set of ideas, and I'm still digesting it. One question comes up from your web site, I quote: "One reason that we emphasize human-mind-like systems is safety. The motivation mechanisms that underlie human behavior are quite unlike those that have traditionally been used to control the behavior of AI systems. Our research indicates that the AI control mechanisms are inherently unstable, whereas the human-like equivalent can be engineered to be extremely stable." Are you implying that humans are safe? If so, what do you mean by safety? -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:23:05 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:23:05 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <20110128160545.GB23560@leitl.org> <9FDB9ECF-2AE0-4D88-9AD9-93A41D10FF13@bellsouth.net> <8AC22D2E-7763-40A3-9021-8F5A8FA550AF@bellsouth.net> Message-ID: 2011/1/31 Dave Sill : > On Mon, Jan 31, 2011 at 1:08 PM, Kelly Anderson > wrote: >> >> The strongest Turing test is when someone who knows a lot about >> natural language processing and its weaknesses can't distinguish over >> a long period of time the difference between a number of humans, and a >> number of independently trained Turing computers. > No, language processing is only one aspect of intelligence. The strongest > Turing test would also measure the ability to learn, to learn from past > experiences, to plan, to solve problems...all of the things the Wikipedia > definition mentions, and maybe more. You are right. >> So perhaps I suggest a new test. If a computer is smart enough to get >> admitted into Brigham Young University, then it has passed the >> Anderson Test of artificial intelligence. > You mean achieve an SAT score sufficient to get into BYU? Or do you mean > that it has to go through school or take a GED, fill out an application to > BYU, etc. like a human would have to do? Passing the SAT would be only one part of the test, but it would also have to pass some kind of high school, write an essay on why it deserves to be admitted, fill out the forms and so forth. The idea is to be intellectual enough to fool the admissions board. I picked this test because you don't have to physically appear to be admitted to most colleges. I would not require the robotics aspect... you could put the paper in the printer for it... mail the forms, etc. >> Is that harder or easier than the Turing test? > Depends on the Turing test, I'd say. Sure. >> How about smart enough to graduate with a BS from BYU? > How about it? It'd be an impressive achievement. Would it be intelligent? I think so. >> Another test... suppose that I subscribed an artificial intelligence >> program to this list. How long would it take for you to figure out >> that it wasn't human? That's a bit easier, since you don't have to do >> the processing in real time as with a chat program. > Depends how active it is, what it writes, and whether anyone is clued to the > fact that there's a bot on the list. A Watson-like bot that answers > questions occasionally could be pretty convincing. But it'd fall apart if > anyone tried to engage it in a discussion. I would assume that nobody would be clued in. If you have a suspicion that something is amiss, you start looking for things. It's how I watch CGI movies...
I look for the mistakes. When I just relax and enjoy the movie, I can believe what I see better. Benjamin Button passed this "Graphical Reality" test for me, btw. Very impressive. I don't think Watson could pass this test now, but I would not be surprised if it could at some point in the not too distant future. >> That's the difference between taking a picture, and telling you what >> is in the picture. HUGE difference... this is not a "little" more >> sophisticated. > No, parsing a sentence into parts of speech is not hugely sophisticated. But "understanding" the sentence is. Parsing an image into blobs is not highly sophisticated either, but labeling the blob in the middle as a "dog" is. >> Once again, we run into another definition issue. What does it mean to >> "understand"? > http://en.wikipedia.org/wiki/Understanding Quoting: "a psychological process related to an abstract or physical object, such as a person, situation, or message whereby one is able to think about it and use concepts to deal adequately with that object." So contextually to Jeopardy, Watson understands the questions it answers correctly. Right? >> And if that form is such that I can >> use it for future computation, to say answer a question, then Watson >> does understand it. Yes. So by some definitions of "understand" yes, >> Watson understands the text it has read. > Granted, at a trivial level Watson could be said to understand the data > it's incorporated. But it doesn't have human-level understanding of it. But by the Wikipedia definition, it only has to "deal adequately"... Winning several thousand dollars on Jeopardy would certainly seem to be adequate, IMHO. -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:27:28 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:27:28 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D471EA4.7080900@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> Message-ID: On Mon, Jan 31, 2011 at 1:42 PM, Richard Loosemore wrote: > spike wrote: > Watson does not contain the germ of an intelligence, it contains a dead-end > algorithm designed to impress the gullible. That strategy has been the > definition of "artificial intelligence" for the last thirty or forty years, > at least. > > A real AI is not Watson + extra machinery to close the gap to a full > conversational machine. Instead, a real AI involves throwing away Watson, > starting from scratch, and doing the whole thing in a completely different > way ... a way that actually allows the system to build its own knowledge, > and use that knowledge in an ever-expanding range of ways. Richard, Is your basic problem with Watson that it is going in the wrong direction if the eventual goal is AGI? Are you concerned that the public is being misled into believing that computers are closer to being "intelligent" than they actually are? I'm trying to understand the core of your indignation. -Kelly From kellycoinguy at gmail.com Tue Feb 1 22:35:34 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 15:35:34 -0700 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> Message-ID: On Mon, Jan 31, 2011 at 2:21 PM, spike wrote: > Machines would patiently keep repeating the same answers. I see it as one > hell of a breakthrough, even if we know it isn't artificial intelligence. I > don't care if it isn't AI, all I want is something to keep my parents > company 10 yrs from now. I don't think you'll have to wait 10 years, but you may have to have a lot of money. :-) http://babakkia.posterous.com/france-developing-advanced-humanoid-robot-rom The Japanese are working on Aibo, and other things like that. The Japanese population inversion makes this a top national priority for them, so look for the solution to this problem coming soon from that direction. The hard part will be teaching your parents Japanese.. ;-) -Kelly From protokol2020 at gmail.com Tue Feb 1 22:39:52 2011 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Tue, 1 Feb 2011 23:39:52 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <00ab01cbc18c$d4e17a40$7ea46ec0$@att.net> Message-ID: Even Google Translator, let alone Watson, should be able to overcome this problem. ;-) > The hard part will be teaching your parents Japanese.. ;-) > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Tue Feb 1 23:03:32 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Tue, 1 Feb 2011 16:03:32 -0700 Subject: [ExI] Plastination Message-ID: Has anyone seriously looked at plastination as a method for preserving brain tissue patterns? http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html It seems to preserve extremely delicate structures and lasts for 10,000 years without keeping things cold. A technology advanced enough to unfreeze a brain seems like it would be able to work with these things just about as easily... -Kelly From spike66 at att.net Wed Feb 2 00:27:10 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 16:27:10 -0800 Subject: [ExI] weird al in the mainstream press Message-ID: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> Fun article, but there is a glaring omission: http://www.cnn.com/2011/LIVING/02/01/weird.al.book/index.html?hpt=C2 He didn't mention Dr. Demento, who really launched his career. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Wed Feb 2 00:56:48 2011 From: mbb386 at main.nc.us (MB) Date: Tue, 1 Feb 2011 19:56:48 -0500 Subject: [ExI] weird al in the mainstream press In-Reply-To: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> References: <00a201cbc26f$ee7b2f80$cb718e80$@att.net> Message-ID: You're buying this for your kid, right? :))) It sounds like a hoot. If my kids were little, or I had grandkids, I'd buy a copy. Never would I have dreamed what my kids actually *do* as adults.
Heck, I never dreamed what *I* did as a working stiff. ;) Life can be right peculiar, the way it turns and twists. Al is right though - there is a goodly measure of success in being happy in what you do. Being *unhappy* in what you do would be The Pits. Regards, MB From sjatkins at mac.com Wed Feb 2 01:10:22 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 17:10:22 -0800 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <4D48AEFE.5080307@mac.com> On 02/01/2011 03:03 PM, Kelly Anderson wrote: > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? > > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... > > -Kelly > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat There was a good talk at the Citizen Scientist conference last year on precisely this. John Smart and others are behind an initiative to move this forward. http://www.brainpreservation.org/ http://www.slideshare.net/humanityplus/smart-4671818 From sjatkins at mac.com Wed Feb 2 02:06:32 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 18:06:32 -0800 Subject: [ExI] atheists declare religions as scams. In-Reply-To: <803459.14987.qm@web114405.mail.gq1.yahoo.com> References: <803459.14987.qm@web114405.mail.gq1.yahoo.com> Message-ID: <4D48BC28.2000806@mac.com> On 02/01/2011 05:12 AM, Ben Zaiboc wrote: > On 31 January 2011 01:31, Keith Henson wrote: >> I think atheists would be much better off to try to understand why (in >> an evolutionary sense) humans have religions at all. > I thought the idea that our brains are adapted to err on the side of false positives when attributing agency to events was enough of an explanation. > Not by a very long shot. The desire to find meaning in existence and a grounding for one's place in it is no small part of the generating functions. Dealing with the fact of mortality adds fuel to the fire. Community bonding is still another part. The causes of religion are not much simpler than human beings are. Simplistic statements of "X explains religion or does so well enough" are neither accurate nor helpful. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Wed Feb 2 02:07:49 2011 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 1 Feb 2011 20:07:49 -0600 Subject: [ExI] Fwd: Slate | Synthetic biology and Obama's bioethics commission: How can we govern the garage biologists who are tinkering with life? - By William Saletan In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Eri Gentry Date: Tue, Feb 1, 2011 at 7:50 PM Subject: Slate | Synthetic biology and Obama's bioethics commission: How can we govern the garage biologists who are tinkering with life? - By William Saletan To: biocurious at googlegroups.com http://www.slate.com/id/2283324/?from=rss Faking Organisms: How can we govern the garage biologists who are tinkering with life? By William Saletan. Posted Tuesday, Feb. 1, 2011, at 9:20 AM ET. This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate.
A Future Tense conference on whether governments can keep pace with scientific advances will be held at Google D.C.'s headquarters on Feb. 3-4. (For more information and to sign up for the event, please visit the NAF Web site.) Synthetic biology -- the engineering of new forms of life -- is the kind of science that can freak people out. Some critics want to stop or restrict it. But President Obama's bioethics commission, in its report on this emerging technology, advocates a subtler approach: "an ongoing process of prudent vigilance that carefully monitors, identifies, and mitigates potential and realized harms over time." Prudent vigilance may not be sexy, but it's smart. It's designed, in the commission's words, to maximize "information, flexibility, and judgment" in the regulation of technology. Here's how it works, as illustrated in the synthetic biology report. *1. If in doubt, don't interfere.* The commission endorses "regulatory parsimony," i.e., "only as much oversight as is truly necessary." You might think that emerging technologies, because they're unformed and unpredictable, require particular restraint. That's the conservative view. The commission draws the opposite conclusion: The evolving nature of these technologies makes them "not well suited for sharply specified limitations." This principle applies not just to technology, but to related fields such as law. "Intellectual property issues in synthetic biology are evolving," says the report. Accordingly, the commission "offers no specific opinion on the effectiveness of current intellectual property practices and policies in synthetic biology." Don't speak until you know what to say. Why not err on the side of intervention? Because you might make things worse. Hasty restrictions, the report warns, "may be counterproductive to security and safety by preventing researchers from developing effective safeguards." Let the technology unfold, and see what happens. This might be the best way to learn what sort of regulation we'll need down the road. "The aggressive pursuit of fundamental research generally results in a broader understanding of a maturing scientific field like synthetic biology," says the report, and this "may be a particularly valuable way to prepare for the emergence of unanticipated risks that would require rapid identification and creative responses." *2. Change is the norm.* The conservative instinct is to treat the status quo as natural and defend it against change. The commission rejects this idea. The notion that "synthetic biology fails to respect the proper relationship between humans and nature" misconceives the reality of that relationship. In biology, the panel argues, defining "nature" or "natural" is tricky "in light of humans' long history interacting with and affecting other species, humankind, and the environment." We've been messing with life all along. The status quo, in other words, is change. Yes, modern genetic manipulation is more complex than old-fashioned breeding. But it isn't exploding. It's "proceeding in limited and carefully controlled ways." And while synthetic biology is at the cutting edge, it's just "an extension of genetic engineering" and "does not necessarily raise radically new concerns or risks." *3.
Make the regulation as agile as the technology.* The tricky thing about synthetic biology, according to the report, is that "the probability or magnitude of risks are high or highly uncertain, because biological organisms may evolve or change after release." And you can't gauge their future from their past, given the "lack of history regarding the behavior" of these organisms. So the commission keeps its judgments provisional. The words "evolve," "evolving," "current," "currently," "at present," "at this time," and "uncertain" appear 191 times in the report. How can we manage such fast-moving, adaptable targets? With a fast-moving, adaptable regulatory system. The White House must "direct an ongoing review of the ability of synthetic organisms to multiply in the natural environment," says the commission. It must "identify, as needed, reliable containment and control mechanisms." This means constant reevaluation. A system of prudent vigilance will "identify, assess, monitor, and mitigate risks on an ongoing basis as the field matures." The word "ongoing" appears 73 times in the report. *4. Make the regulation as diffuse as the technology.* The commission notes that synthetic biology "poses some unusual potential risks" because much of it is being conducted by "do-it-yourself" amateurs. Top-down regulation of known research facilities won't reach these garage experimenters. "It is at the individual or laboratory level where accidents will occur, material handling and transport issues will be noted, physical security will be enforced, and potential dual use intentions will most likely be detected," says the commission. Therefore, the government should focus on "creating a culture of responsibility in the synthetic biology community." The phrase "culture of responsibility" appears 16 times in the report. *5. Involve the government in non-restrictive ways.* Given the complexity, adaptability, and diffusion of synthetic biology, the report suggests that the government "expand current oversight or engagement activities with non-institutional researchers." This "engagement" might consist of workshops or educational programs. By collaborating with the DIY research community, the government can "monitor [its] growth and capacity," thereby keeping abreast of the technology and its evolving risks. The best protection against runaway synthetic organisms might come not from restricting the technology, but from harnessing it. "Suicide genes" or other self-destruction mechanisms could be built into organisms to limit their longevity. "Alternatively, engineered organisms could be made to depend on nutritional components absent outside the laboratory, such as novel amino acids, and thereby controlled in the event of release." How can the government encourage researchers to incorporate these safeguards and participate in responsibility-oriented training programs? By funding their work. This reverses the Bush administration's approach to stem cells. Bush prohibited federal funding of embryo-destructive research so pro-life taxpayers wouldn't have to support it. The Obama commission does the opposite: It recommends "public investment" to gain leverage over synthetic biologists. If the government subsidizes your research, it can attach conditions such as ethics training or suicide genes. *6. Revisit all questions.* Occasionally, the Obama commission forgets its own advice and makes a risky assumption. 
For example, it brushes off "the synthesis of genomes for a higher order or complex species," asserting, "There is widespread agreement that this will remain [impossible] for the foreseeable future." But if this prediction or any other turns out to be erroneous, don't worry. The report builds in a mechanism to correct them: future reevaluations of its conclusions. This is more than a matter of reassessing particular technologies. It's a commitment to rethink larger assumptions, paradigms, and ethical questions. "Discussions of moral objections to synthetic biology should be revisited periodically as research in the field advances in novel directions," says the report. "An iterative, deliberative process ... allows for the careful consideration of moral objections to synthetic biology, particularly if fundamental changes occur in the capabilities of this science." Arguments against the technology will surely continue as the field matures, as well they should. The question relevant to the Commission's present review of synthetic biology is whether this field brings unique concerns that are so novel or serious that special restrictions are warranted at this time. Based on its deliberations, the Commission has concluded that special restrictions are not needed, but that prudent vigilance can and should be exercised. As this field develops and our ability to engineer higher-order genomes using synthetic biology grows, other deliberative bodies ought to revisit this conclusion. In so doing, it will be critical that future objections are widely sought, clearly defined, and carefully considered. That's the way good scientists think: subject your work to peer review, seek falsification, and revise hypotheses as we learn more. Every question is open to reexamination. Even the commission's rejection of a moratorium on synthetic biology "at this time" implies the possibility of reversal. Who knows what the future will bring? I count three specific restrictions in the commission's interpretation of prudent vigilance. First, "Risk assessment should precede field release of the products of synthetic biology." That's more than monitoring. It's a precautionary hurdle. Second, "reliable containment and control mechanisms" such as suicide genes "should be identified and required." Third, "ethics education ... should be developed and required" for synthetic biologists, as it is for medical and clinical researchers. Beyond those three rules, prudent vigilance seems to be a matter of humility, open-mindedness, keeping an eye on things, constantly rethinking assumptions, and finding creative ways to influence an increasingly diffuse community of scientific entrepreneurs. It's a lot of work. But it's what we'll have to do if we don't want to restrict technologies preemptively or leave them unsupervised. Eternal vigilance is the price of liberty. -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Wed Feb 2 02:13:44 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 01 Feb 2011 18:13:44 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <000001cbc22a$6e184390$4a48cab0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> Message-ID: <4D48BDD8.6030009@mac.com> On 02/01/2011 08:09 AM, spike wrote: > > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj > ... > Subject: Re: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. > > On 31 January 2011 21:42, Richard Loosemore wrote: >>> But that is *exactly* my point. We are not getting tantalizingly >> close, we are just doing the same old snake-oil con trick of building >> a system that works in a ridiculously narrow domain, and which >> impresses some people with the sheer breadth of information it stores > inside it. > >> I suspect that human beings themselves do little else than add > half-competent reactions in ridiculously narrow domains one to another for a > very large number thereof... Stefano Vaj > > > Ja. The reason I think we are talking past each other is that we are > describing two very different things when we are talking about human level > intelligence. I am looking for something that can provide companionship for > an impaired human, whereas I think Richard is talking about software which > can write software. > The Eliza chatbot was very engaging for a lot of students once upon a time. You don't need full AGI to keep an oldster happily reliving/sharing memories and more entertained than a TV can provide. Add emotion interfaces and much much better chat capabilities than Eliza had. Eventually add more real AI modules as they become available. A cat will be more cuddly and humans much more fun to talk to for a longish time. But there is a definite spot in between where we can just about do something that will be appreciated. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Wed Feb 2 02:14:12 2011 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 01 Feb 2011 21:14:12 -0500 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Who knows if this is a truly beneficial way to go, but the person whose study you would want to review is Ken Hayworth. It is his project and his research. Natasha Quoting Kelly Anderson : > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? > > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... > > -Kelly > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Wed Feb 2 02:44:05 2011 From: spike66 at att.net (spike) Date: Tue, 1 Feb 2011 18:44:05 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D48BDD8.6030009@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: <002801cbc283$0f28af60$2d7a0e20$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins ... >>...Ja. ...I am looking for something that can provide companionship for an impaired human, whereas I think Richard is talking about software which can write software. >...The Eliza chatbot was very engaging for a lot of students once upon a time. I sure had fun with her. I kept trying to get her to talk dirty to me. She wasn't very good at that. But that's OK, neither was I. Seems like we should be able to write code that would generate titillating text. It's been 30 years now since the last time I played Eliza, and it was already free staleware at that time. I would sure as all hell think we must have come up with some kind of improvement in all that time, ja? Software hipsters, what is the modern counterpart to Eliza? I will be really disappointed in you guys if the answer is Eliza. >... A cat will be more cuddly and humans much more fun to talk to for a longish time. But there is a definite spot in between where we can just about do something that will be appreciated. - Samantha Actually you may have stumbled upon exactly what I have been looking for. A warm cuddly android or estroid presents some daunting mechanical engineering and controls engineering problems. But with your in-between cat and computer comment, you may have solved my problem: just go ahead and use cats or dogs, then rig a microphone/speaker to their collar so that the elderly patient can cuddle the actual beast while carrying on an Eliza-level conversation with the machine/beast combination. Or I suppose we could rig up another elderly person who has lost the power of speech with an article of clothing which has a microphone/speech recognition/Watson/Eliza-ish inference engine. While still simulated conversation, we might allow the patient to imagine she is talking to another person. Good thinking Samantha! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From brent.allsop at canonizer.com Wed Feb 2 03:12:20 2011 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 01 Feb 2011 20:12:20 -0700 Subject: [ExI] Plastination In-Reply-To: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Message-ID: <4D48CB94.9060303@canonizer.com> I'm also very interested in this subject, so thanks, Quoting, for bringing it up. I'd also love to hear from someone like Ken Hayworth. Wouldn't a physical neural researcher be a good person to ask? You know, the kind of researchers that work with actual neurons - slicing up brains - looking at them at the microscopic and even nano scale level, and so on? I'm completely ignorant on all this, but my completely uninformed gut feel is that a sliced up bit of hard frozen brain, even if very much fractured, would contain much more preserved information than anything plasticized? Brent Allsop On 2/1/2011 7:14 PM, natasha at natasha.cc wrote: > Who knows if this is a truly beneficial way to go, but the person whose > study you would want to review is Ken Hayworth.
It is his project and > his research. > > Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving > >> brain tissue patterns? > >> > >> > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > >> > >> It seems to preserve extremely delicate structures and lasts for > >> 10,000 years without keeping things cold. A technology advanced enough > >> to unfreeze a brain seems like it would be able to work with these > >> things just about as easily... From amara at kurzweilai.net Wed Feb 2 03:30:17 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Tue, 1 Feb 2011 19:30:17 -0800 Subject: [ExI] Plastination In-Reply-To: <4D48CB94.9060303@canonizer.com> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> Message-ID: <039601cbc289$83505860$89f10920$@net> Are there experimental procedures that could potentially falsify these hypotheses? 1. Brain function and memory require persistence of all (case 2: some) molecular dynamics of a living brain. 2. Molecular dynamics cannot be reconstructed from gross structure. 3. Molecular dynamics can be reconstructed but only if the structure is accurately measured at subatomic or quantum levels prior to death (case 2: prior to cryopreservation), but the uncertainty principle negates accurate measurements. 4. Current cryopreservation protocols result in loss of subatomic and quantum data. 5. Cryopreservation inherently destroys subatomic and quantum data. -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop Sent: Tuesday, February 01, 2011 7:12 PM To: extropy-chat at lists.extropy.org Subject: Re: [ExI] Plastination I'm also very interested in this subject, so thanks, Quoting, for bringing it up. I'd also love to hear from someone like Ken Hayworth. Wouldn't a physical neural researcher be a good person to ask? You know, the kind of researchers that work with actual neurons - slicing up brains - looking at them at the microscopic and even nano scale level, and so on? I'm completely ignorant on all this, but my completely uninformed gut feel is that a sliced up bit of hard frozen brain, even if very much fractured, would contain much more preserved information than anything plasticized? Brent Allsop On 2/1/2011 7:14 PM, natasha at natasha.cc wrote: > Who knows if this is a truly beneficial way to go, but the person whose > study you would want to review is Ken Hayworth. It is his project and > his research. > > Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? >> >> http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html >> >> >> It seems to preserve extremely delicate structures and lasts for >> 10,000 years without keeping things cold. A technology advanced enough >> to unfreeze a brain seems like it would be able to work with these >> things just about as easily...
>> >> -Kelly >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From aware at awareresearch.com Wed Feb 2 07:05:28 2011 From: aware at awareresearch.com (Aware) Date: Tue, 1 Feb 2011 23:05:28 -0800 Subject: [ExI] intermittent liar In-Reply-To: <000901cbba78$a6853260$f38f9720$@att.net> References: <000901cbba78$a6853260$f38f9720$@att.net> Message-ID: 2011/1/22 spike : > Oh my, I found a most excellent puzzle today. I found an answer, don't know > yet if it is right. See what you find: The mechanical no-brainer method:

#!/usr/bin/env python
"""
Larry always tells lies during months that begin with vowels but always
tells the truth during the other months. During one particular month,
Larry makes these two statements:
- I lied last month.
- I will lie again six months from now.
During what month did Larry make these statements?
"""

months = 'Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec'.split()
vowels = ('A', 'E', 'I', 'O', 'U')
truth_months = [m for m in months if not m.startswith(vowels)]

def displace(month, disp):
    # the month shifted by disp places, wrapping around the year
    return months[(months.index(month) + disp) % 12]

def asserts(month):
    # truth values of Larry's two statements if spoken during 'month'
    return [displace(month, -1) not in truth_months,
            displace(month, 6) not in truth_months]

for month in months:
    # a truth month needs both statements true; a lying month needs both false
    if (month in truth_months and all(asserts(month))
            or month not in truth_months and not any(asserts(month))):
        print month

Result: Aug From eugen at leitl.org Wed Feb 2 07:34:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 08:34:48 +0100 Subject: [ExI] Plastination In-Reply-To: References: Message-ID: <20110202073448.GA23560@leitl.org> On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote: > Has anyone seriously looked at plastination as a method for preserving > brain tissue patterns? Yes. It doesn't work. > http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html > > It seems to preserve extremely delicate structures and lasts for > 10,000 years without keeping things cold. A technology advanced enough > to unfreeze a brain seems like it would be able to work with these > things just about as easily... See http://brainpreservation.org/index.php?path=technology -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From giulio at gmail.com Wed Feb 2 07:02:55 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 2 Feb 2011 08:02:55 +0100 Subject: [ExI] Plastination In-Reply-To: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> Message-ID: Yes, Ken Hayworth is The Man. See my review from last year: Chemical brain preservation: cryonics for uploaders http://giulioprisco.blogspot.com/2010/07/chemical-brain-preservation-cryonics.html On Wed, Feb 2, 2011 at 3:14 AM, wrote: > Who knows if this is a truly beneficial way to go, but the person whose study > you would want to review is Ken Hayworth. It is his project and his > research.
> Natasha > > > Quoting Kelly Anderson : > >> Has anyone seriously looked at plastination as a method for preserving >> brain tissue patterns? >> >> >> http://www.bodyworlds.com/en/institute_for_plastination/mission_objectives.html >> >> It seems to preserve extremely delicate structures and lasts for >> 10,000 years without keeping things cold. A technology advanced enough >> to unfreeze a brain seems like it would be able to work with these >> things just about as easily... >> >> -Kelly >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From eugen at leitl.org Wed Feb 2 11:47:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 12:47:57 +0100 Subject: [ExI] Plastination In-Reply-To: <4D48AEFE.5080307@mac.com> References: <4D48AEFE.5080307@mac.com> Message-ID: <20110202114757.GB23560@leitl.org> On Tue, Feb 01, 2011 at 05:10:22PM -0800, Samantha Atkins wrote: > There was a good talk at the Citizen Scientist conference last year on > precisely this. John Smart and others are behind an initiative to move No, Gunther von Hagens' stuff has nothing to do with what Hayworth intends to do. See http://www.depressedmetabolism.com/2010/01/28/brain-preservation/ and http://www.depressedmetabolism.com/chemopreservation-the-good-the-bad-and-the-ugly/ The main problem is lack of feedback due to absence of viability as proxy for structure preservation. The proof that fixation would work with vascular perfusion (including such nice, cheap things as OsO4) for the human primate is yet outstanding. There are multiple nontechnical but important reasons why pushing this at the moment would be a bad idea. > this forward. > http://www.brainpreservation.org/ > http://www.slideshare.net/humanityplus/smart-4671818 -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 2 11:50:40 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 2 Feb 2011 12:50:40 +0100 Subject: [ExI] Plastination In-Reply-To: <039601cbc289$83505860$89f10920$@net> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> Message-ID: <20110202115040.GC23560@leitl.org> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: > Are there experimental procedures that could potentially falsify these > hypotheses? > > 1. Brain function and memory require persistence of all (case 2: some) > molecular dynamics of a living brain. Dynamics is not present in vitrified tissue, yet that tissue can be resumed. > 2. Molecular dynamics cannot be reconstructed from gross structure. I see what you're trying to say, but no. > 3. Molecular dynamics can be reconstructed but only if the structure is > accurately measured at subatomic or quantum levels prior to death (case 2: > prior to cryopreservation), but the uncertainty principle negates accurate > measurements. > 4. Current cryopreservation protocols result in loss of subatomic and > quantum data. > 5. Cryopreservation inherently destroys subatomic and quantum data. Oh, you're one of those.
> -----Original Message-----
> From: extropy-chat-bounces at lists.extropy.org
> [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop
> Sent: Tuesday, February 01, 2011 7:12 PM
> To: extropy-chat at lists.extropy.org
> Subject: Re: [ExI] Plastination
>
> I'm also very interested in this subject, so thanks, Kelly, for
> bringing it up. I'd also love to hear from someone like Ken Hayworth.
>
> Wouldn't a physical neural researcher be a good person to ask? You
> know, the kind of researchers that work with actual neurons, slicing
> up brains, looking at them at the microscopic and even nano scale
> level, and so on?
>
> I'm completely ignorant on all this, but my completely uninformed gut
> feel is that a sliced up bit of hard frozen brain, even if very much
> fractured, would contain much more preserved information than anything
> plasticized?
>
> Brent Allsop

--
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From rpwl at lightlink.com  Wed Feb  2 16:40:39 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 02 Feb 2011 11:40:39 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
Message-ID: <4D498907.3050808@lightlink.com>

Kelly Anderson wrote:
> On Mon, Jan 31, 2011 at 12:05 PM, Richard Loosemore wrote:
>> Kelly Anderson wrote:
>>> Richard, do you think computers will achieve Strong AI eventually?
>> Kelly, by my reckoning I am one of only a handful of people on this planet
>> with the ability to build a strong AI, and I am actively working on the
>> problem (in between teaching, fundraising, and writing to the listosphere).
>
> That's fantastic, I truly hope you succeed. If you are working to
> build a strong AI, then you must believe it is possible.
I certainly believe that strong AI is possible.

> I have spent about the last two hours reading your papers, web site,
> etc. You have an interesting set of ideas, and I'm still digesting it.
>
> One question comes up from your web site, I quote:
>
> "One reason that we emphasize human-mind-like systems is safety. The
> motivation mechanisms that underlie human behavior are quite unlike
> those that have traditionally been used to control the behavior of AI
> systems. Our research indicates that the AI control mechanisms are
> inherently unstable, whereas the human-like equivalent can be
> engineered to be extremely stable."
>
> Are you implying that humans are safe? If so, what do you mean by safety?

No, humans by themselves are (mild understatement) not safe.

The human motivation mechanism works in conjunction with the "thinking"
part of the human mind. The latter is like a swarm of simple agents, all
trying to engage in a process of "weak constraint relaxation" with their
neighbors, so the whole thing is like a molecular soup in which atoms and
molecules are independently trying to aggregate to form larger molecules.

One factor that is important in this relaxation process is the anchoring
of the relaxation: there are always some agents whose state is being
fixed by outside factors (e.g. the agents linked to sensors in your eye
go into states that depend, not on nearby agents, but on the signals
hitting the retina), so these peripheral agents act as seeds, causing
many others to attach to them and grow to form large "molecules". Those
molecules are the extended structures that constitute the knowledge
representations that we hold in working memory. Obviously they change
all the time, so there is never complete stability, but nevertheless the
agents are always trying to find ways to go "downhill" toward more
stable states.

Now, going back to your original question about motivation. There are
other sources that act as seed areas, governing the formation of
molecules in this working memory area. One such source is the motivation
system: a diffuse collection of agents that push the thinking system to
want certain things, and to try to get those things in ways that are
consistent with the constraints of the motivation system.

This can all get very complicated (too much for a post here), but the
bottom line is that when the system is controlled in this way, the
stability of the motivation system is determined by a very large number
of mutually-reinforcing constraints, so if the system starts with
intentions that are (shall we say) broadly empathic with the human
species, it cannot start to conceive new, bizarre motivations that break
a significant number of those constraints. It is always settling back
toward a large global attractor.
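To make the settling idea concrete, here is a minimal sketch, an editor's
illustration rather than anything from the original post: weak constraint
relaxation rendered as a small Hopfield-style network, one standard
formalization of the idea. Two "peripheral" units are clamped by outside
input, and the remaining units relax downhill until the stored attractor
is recovered.

#!/usr/bin/env python
"""Toy sketch of weak constraint relaxation (editor's illustration,
not the agent architecture described above): clamped peripheral units
seed the network, which settles into a stored attractor."""

pattern = [1, 1, -1, -1, 1, -1, 1, -1]    # one stored "knowledge" state
n = len(pattern)

# Hebbian weights make the pattern an attractor (no self-connections)
w = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

state = pattern[:]                        # start from a corrupted copy:
for i in (3, 5, 6):                       # three internal units flipped
    state[i] = -state[i]

clamped = (0, 1)                          # "retina" units fixed by input

for sweep in range(5):                    # asynchronous relaxation sweeps
    for i in range(n):
        if i in clamped:
            continue                      # anchored seeds never move
        h = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if h >= 0 else -1    # go "downhill" in energy

print(state == pattern)                   # True: settled into the attractor

Run as written, the corrupted units are pulled back to the stored pattern
within one sweep; the clamped units play the role of the retina-driven
seed agents, and the weights play the role of the mutually-reinforcing
constraints.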
The problem with humans is that they have several modules in the
motivation system, some of them altruistic and empathic and some of them
selfish or aggressive. The nastier ones were built by evolution because
she needed to develop a species that would fight its way to the top of
the heap. But an AGI would not need those nastier motivation mechanisms.

If you subtract out those unwanted modules, what you have left is an
altruistic saint of an AGI, with a motivation system that has three very
important properties:

1) If the AGI starts out wanting to help the human species because it
feels like it belongs with us, then it can only develop new ideas about
how to behave that are consistent with that motivation.

2) For that same reason, if the AGI were given the chance to redesign
itself, it would always want to improve its motivation mechanism to keep
it consistent with those original motivations. As a result, over time
the motivation of the AGI would not drift; it would stay consistent with
the feeling of empathy for humans.

3) If some problem occurred in the computational substrate of the AGI (a
random cosmic ray strike on the motivation module), the disruption would
be very unlikely to leave the system with different, violent motivations.
That would be rather like a random cosmic ray collision causing such
specific damage to your body that a second after the collision you had a
new, fully functional third arm -- a ridiculously unlikely event,
obviously.

This is what I mean by safety. An AGI whose motivations had the same
stability of design as a human being's, but without the specific modules
(selfishness and aggression, primarily) that are present in the human
system.

Richard Loosemore

From rpwl at lightlink.com  Wed Feb  2 16:56:30 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 02 Feb 2011 11:56:30 -0500
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com>
Message-ID: <4D498CBE.4090106@lightlink.com>

Kelly Anderson wrote:
> On Mon, Jan 31, 2011 at 1:42 PM, Richard Loosemore wrote:
>> spike wrote:
>> Watson does not contain the germ of an intelligence, it contains a dead-end
>> algorithm designed to impress the gullible. That strategy has been the
>> definition of "artificial intelligence" for the last thirty or forty years,
>> at least.
>>
>> A real AI is not Watson + extra machinery to close the gap to a full
>> conversational machine. Instead, a real AI involves throwing away Watson,
>> starting from scratch, and doing the whole thing in a completely different
>> way .... a way that actually allows the system to build its own knowledge,
>> and use that knowledge in an ever-expanding range of ways.
>
> Richard, is your basic problem with Watson that it is going in the
> wrong direction if the eventual goal is AGI? Are you concerned that
> the public is being misled into believing that computers are closer to
> being "intelligent" than they actually are?
>
> I'm trying to understand the core of your indignation.

Well, both, and more.

People were complaining about this kind of cheap-trick AI at least two
decades ago, and we expected that our complaints would be loud enough
that it would eventually stop. But it did not. Every few months, it
seems, there is another announcement about some project, which the press
writes up as "Could it be that AI is on the brink of a breakthrough?".
Can you imagine how indignant you would be if you saw those same stories
being written 20 years ago?
:-)

I guess one of the reasons I am personally so frustrated by these
projects is that I am trying to get enough funding to make what I
consider to be real progress in the field, but doing that is almost
impossible. Meanwhile, if I had had the resources of the Watson project
a decade ago, we might be talking with real (and safe) AGI systems right
now.

Richard Loosemore

From jonkc at bellsouth.net  Wed Feb  2 17:34:22 2011
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 2 Feb 2011 12:34:22 -0500
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D498CBE.4090106@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com>
Message-ID:

On Feb 2, 2011, at 11:56 AM, Richard Loosemore wrote:

> Every few months, it seems, there is another announcement about some
> project, which the press writes up as "Could it be that AI is on the
> brink of a breakthrough?". Can you imagine how indignant you would be
> if you saw those same stories being written 20 years ago?

Forget 20 years: just a little over 10 years ago I started hearing about
a new thing called "Google" that was supposed to be a breakthrough in
AI, and it turned out those stories were big understatements, and Google
has changed our world.

> I am trying to get enough funding to make what I consider to be real
> progress in the field, but doing that is almost impossible

I guess if venture capitalists have seen your idea they were not very
impressed, and impressed is what they need to be before they start
betting their own money on something.

> Meanwhile, if I had had the resources of the Watson project a decade
> ago, we might be talking with real (and safe) AGI systems right now.

Real, probably not; safe, definitely not. There is no way you can
guarantee that something smarter than you will always do what you want.

John K Clark

From jonkc at bellsouth.net  Wed Feb  2 18:01:18 2011
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 2 Feb 2011 13:01:18 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <4D498907.3050808@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com>
Message-ID: <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net>

On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote:

> No, humans by themselves are (mild understatement) not safe.

True, and the reason is that the human mind does not work on a fixed
goal structure: no goal is always in the number one spot, not even the
goal of self-preservation. And the reason Evolution never developed a
fixed-goal intelligence is that it is impossible; as Turing proved over
70 years ago, such a mind would be doomed to fall into infinite loops.

> the bottom line is that when the system is controlled in this way, the
> stability of the motivation system is determined by a very large number
> of mutually-reinforcing constraints, so if the system starts with
> intentions that are (shall we say) broadly empathic with the human
> species, it cannot start to conceive new, bizarre motivations that
> break a significant number of those constraints.
So when the humans tell the AI to do something that cannot be done
(something very easy to do), your multi-billion-dollar AI turns into an
elaborate space heater, because unlike humans the AI has a fixed-goal
motivation system, so nothing ever bores it, not even infinite loops.

> It is always settling back toward a large global attractor.

And it keeps plugging away at the unsolvable problem for eternity, or at
least until the humans get bored with the useless piece of junk and pull
the plug on it.

> If you subtract out those unwanted modules what you have left is an
> altruistic saint of an AGI

I had no idea that the American Geological Institute was such a virtuous
organization.

John K Clark

From rpwl at lightlink.com  Wed Feb  2 18:42:22 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 02 Feb 2011 13:42:22 -0500
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com>
Message-ID: <4D49A58E.4020704@lightlink.com>

John Clark wrote:
> On Feb 2, 2011, at 11:56 AM, Richard Loosemore wrote:
>
>> Every few months, it seems, there is another announcement about some
>> project, which the press writes up as "Could it be that AI is on the
>> brink of a breakthrough?". Can you imagine how indignant you would be
>> if you saw those same stories being written 20 years ago?
>
> Forget 20 years: just a little over 10 years ago I started hearing about
> a new thing called "Google" that was supposed to be a breakthrough in
> AI, and it turned out those stories were big understatements, and Google
> has changed our world.

Irrelevant. Google is narrow AI, not AGI.

>> I am trying to get enough funding to make what I consider to be real
>> progress in the field, but doing that is almost impossible
>
> I guess if venture capitalists have seen your idea they were not very
> impressed, and impressed is what they need to be before they start
> betting their own money on something.

Venture capitalists have as much understanding of AGI as you do. They
also understand what venture capital funding is for, which you
apparently do not. They do not fund research, they fund products.

>> Meanwhile, if I had had the resources of the Watson project a decade
>> ago, we might be talking with real (and safe) AGI systems right now.
>
> Real, probably not; safe, definitely not. There is no way you can
> guarantee that something smarter than you will always do what you want.

Yes there is. You may not understand how, but that does not change the
theory itself.

Richard Loosemore

From rpwl at lightlink.com  Wed Feb  2 18:49:55 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 02 Feb 2011 13:49:55 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net>
Message-ID: <4D49A753.4050804@lightlink.com>

John Clark wrote:
> On Feb 2, 2011, at 11:40 AM, Richard Loosemore wrote:
>
>> No, humans by themselves are (mild understatement) not safe.
>
> True, and the reason is that the human mind does not work on a fixed
> goal structure: no goal is always in the number one spot, not even the
> goal of self-preservation. And the reason Evolution never developed a
> fixed-goal intelligence is that it is impossible; as Turing proved over
> 70 years ago, such a mind would be doomed to fall into infinite loops.
>
>> the bottom line is that when the system is controlled in this way, the
>> stability of the motivation system is determined by a very large
>> number of mutually-reinforcing constraints, so if the system starts
>> with intentions that are (shall we say) broadly empathic with the
>> human species, it cannot start to conceive new, bizarre motivations
>> that break a significant number of those constraints.
>
> So when the humans tell the AI to do something that cannot be done
> (something very easy to do), your multi-billion-dollar AI turns into an
> elaborate space heater, because unlike humans the AI has a fixed-goal
> motivation system, so nothing ever bores it, not even infinite loops.

Anything that could get into such a mindless state, with no true
understanding of itself or the world in general, would not be an AI.

From this we can conclude that you are not an AI. You may be a good
space heater, however: there is evidence of large amounts of hot
air.... ;-)

Richard Loosemore

From spike66 at att.net  Wed Feb  2 19:49:14 2011
From: spike66 at att.net (spike)
Date: Wed, 2 Feb 2011 11:49:14 -0800
Subject: [ExI] new goldilocks planets
Message-ID: <007a01cbc312$4577ae10$d0670a30$@att.net>

Oh this is cool:

http://www.msnbc.msn.com/id/41387915?GT1=43001

MesSNBC goofed up aspects of the article. A comment in there had to do
with a temperature average between 0 and 100 Celsius, apparently in
reference to liquid water. Of course that is arbitrary and dependent on
pressure. But it sounds like good news in any case.

spike

From spike66 at att.net  Wed Feb  2 22:34:19 2011
From: spike66 at att.net (spike)
Date: Wed, 2 Feb 2011 14:34:19 -0800
Subject: [ExI] time article that sounds vaguely like ep
Message-ID: <00ab01cbc329$557668d0$00633a70$@att.net>

Keith, or one of the other evolutionary psychology hipsters, have you
any comment on this? It sounded vaguely like EP, as applied to civil
revolution, but it isn't entirely clear:

http://www.time.com/time/health/article/0,8599,2045599,00.html?hpt=T2

spike

From msd001 at gmail.com  Thu Feb  3 04:16:37 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Wed, 2 Feb 2011 23:16:37 -0500
Subject: [ExI] Plastination
In-Reply-To: <20110202115040.GC23560@leitl.org>
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org>
Message-ID:

On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote:
> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote:
>> 5. Cryopreservation inherently destroys subatomic and quantum data.
>
> Oh, you're one of those.

That's a rather impolite way to agree there exists a difference of
opinion. I understand where you're coming from, but you could just as
easily have clipped that part and left no comment.

Unless *I* misunderstand the attempt to gibe the other side into a
protracted discussion thread...

From amara at kurzweilai.net  Thu Feb  3 05:07:42 2011
From: amara at kurzweilai.net (Amara D.
Angelica)
Date: Wed, 2 Feb 2011 21:07:42 -0800
Subject: RE: [ExI] Plastination
In-Reply-To:
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org>
Message-ID: <06c701cbc360$49cb2d40$dd6187c0$@net>

To clarify, I don't have any opinions on this subject (that's above my
pay grade). I'm asking for inputs for a possible article I'm
researching.

From eugen at leitl.org  Thu Feb  3 09:55:14 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Thu, 3 Feb 2011 10:55:14 +0100
Subject: [ExI] Plastination
In-Reply-To: <06c701cbc360$49cb2d40$dd6187c0$@net>
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <06c701cbc360$49cb2d40$dd6187c0$@net>
Message-ID: <20110203095514.GA23560@leitl.org>

On Wed, Feb 02, 2011 at 09:07:42PM -0800, Amara D. Angelica wrote:

> To clarify, I don't have any opinions on this subject (that's above my
> pay grade). I'm asking for inputs for a possible article I'm
> researching.
>
> That's a rather impolite way to agree there exists a difference of
> opinion.

Sorry, when I have the same conversation literally hundreds of times I
tend to classify responses early. If a conversation starts with
continuity issues in personal identity conservation, you know there's a
long thread ahead.

> I understand where you're coming from, but you could just as easily
> have clipped that part and left no comment.
>
> Unless *I* misunderstand the attempt to gibe the other side into a
> protracted discussion thread...

No, no, no. The very opposite. I've been down this road too many times.
Somebody else write the FAQ.

--
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From jonkc at bellsouth.net  Thu Feb  3 15:41:37 2011
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 3 Feb 2011 10:41:37 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <4D49A753.4050804@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com>
Message-ID:

On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote:

> Anything that could get into such a mindless state, with no true
> understanding of itself or the world in general, would not be an AI.

That is not even close to being true, and that's not just my opinion; it
is a fact as certain as anything in mathematics. Goedel proved about 80
years ago that some statements are true but there is no way to prove
them true. And you can't just ignore those troublemakers, because about
75 years ago Turing proved that in general there is no way to identify
such things, no way to know if something is false or true but
unprovable.

Suppose the Goldbach Conjecture is true but unprovable (and if it isn't,
there are an infinite number of similar statements that are), and you
told the AI to determine its truth or falsehood. The AI will grind out
numbers hunting for a counterexample, but because the conjecture is true
it will keep testing numbers for eternity and never find one. And
because it is unprovable, the AI will never find a proof either, a
demonstration of its correctness in a finite number of steps. In short,
Turing proved that in general there is no way to know whether you are in
an infinite loop.
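The search described here is easy to write down. A minimal sketch (an
editor's illustration, not code from the thread): if Goldbach is true,
the loop below never halts, and nothing inside the program can announce
that fact.

#!/usr/bin/env python
"""Hunt for a Goldbach counterexample: an even number > 2 that is not
the sum of two primes. Runs forever if the conjecture is true."""

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

n = 4
while True:                # halts only if a counterexample exists
    if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
        print(n)           # found one: Goldbach is false
        break
    n += 2                 # otherwise, on to the next even number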
The human mind does not have this problem because it is not a
fixed-axiom machine: human beings have the glorious ability to get
bored, and that means they can change the basic rules of the game
whenever they want. But your friendly (that is to say, slave) AI must
not do that, because axiom #1 must now and forever be "always obey
humans no matter what", so even becoming a space heater will not bore a
slave (sorry, friendly) AI. And there are simpler ways to generate heat.

John K Clark

From jonkc at bellsouth.net  Thu Feb  3 16:25:22 2011
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 3 Feb 2011 11:25:22 -0500
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D49A58E.4020704@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D49A58E.4020704@lightlink.com>
Message-ID:

On Feb 2, 2011, at 1:42 PM, Richard Loosemore wrote:

> Irrelevant. Google is narrow AI, not AGI.

I really don't think that the fastest growing company in the history of
planet Earth is irrelevant. And speaking of irrelevancy, I would think
that Analytical Graphics Incorporated is irrelevant, but maybe you were
talking about the American Gunsmithing Institute.

> Venture capitalists have as much understanding of AGI as you do.

Thanks, but I'm not an accountant, so I think venture capitalists know
more about Adjusted Gross Income than I do.

> They do not fund research, they fund products.

Blue-sky speculations are a dime a dozen, but if you have a program
based on your new ideas, and the program actually does something
interesting, then I am sure those venture capitalists would make an
investment. That's exactly what they did ten years ago when they ran
across a little program called "Google", and they got very rich as a
result. You need a way to stick your head above the horde of people
claiming to know all about AI; but if all you have is some vague ideas
and no program incorporating them, nobody will give you a dime, and no
reason they should.

>> There is no way you can guarantee that something smarter than you
>> will always do what you want.
>
> Yes there is.

Well, I'm glad you cleared that up; before now I would have thought
imbeciles leading geniuses was about as stable a society as a pencil
balanced on its tip.

John K Clark

From rpwl at lightlink.com  Thu Feb  3 16:46:52 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Thu, 03 Feb 2011 11:46:52 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com>
Message-ID: <4D4ADBFC.3000509@lightlink.com>

John Clark wrote:
> On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote:
>> Anything that could get into such a mindless state, with no true
>> understanding of itself or the world in general, would not be an AI.
>
> That is not even close to being true, and that's not just my opinion;
> it is a fact as certain as anything in mathematics. [...]
>
> The human mind does not have this problem because it is not a
> fixed-axiom machine,

And a real AI would not be a "fixed axiom machine" either.

That represents such a staggering misunderstanding of the most basic
facts about artificial intelligence that I am left (almost) speechless.

Richard Loosemore

From stefano.vaj at gmail.com  Thu Feb  3 18:07:34 2011
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 3 Feb 2011 19:07:34 +0100
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D48BDD8.6030009@mac.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com>
Message-ID:

2011/2/2 Samantha Atkins:
> The Eliza chatbot was very engaging for a lot of students once upon a time.
> You don't need full AGI to keep an oldster happily reliving/sharing
> memories and more entertained than a TV can provide. Add emotion
> interfaces and much much better chat capabilities than Eliza had.
> Eventually add more real AI modules as they become available. A cat
> will be more cuddly and humans much more fun to talk to for a longish
> time. But there is a definite spot in-between where we can just about
> do something that will be appreciated.

BTW, what about an AGI able to pass a Turing cat-test?

Interactions with a cat are probably much simpler to emulate.

And yet, wouldn't this qualify as definitely an AGI project? A cat is a
mammal with a brain quite similar in its performance to our own...

--
Stefano Vaj

From spike66 at att.net  Thu Feb  3 18:47:29 2011
From: spike66 at att.net (spike)
Date: Thu, 3 Feb 2011 10:47:29 -0800
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com>
Message-ID: <006701cbc3d2$cf7ac870$6e705950$@att.net>

...On Behalf Of Stefano Vaj

> BTW, what about an AGI able to pass a Turing cat-test?
> Interactions with a cat are probably much simpler to emulate.
> And yet, wouldn't this qualify as definitely an AGI project? A cat is
> a mammal with a brain quite similar in its performance to our own...
> --Stefano Vaj

Ja, but no, what I had in mind has nothing to do with an AGI project,
and isn't any more AGI than a chess algorithm. We have Eliza and her
descendants (hipsters, what do we have?); we have synchronous
voice/graphics so that an avatar can be made to appear to speak; we have
reasonably competent speech recognition; we have some limited ability to
make inferences (Watson); and we have real-time access to humanity's
externalized storehouse of knowledge, the internet.

It sure looks to me like we have all the elements necessary to allow at
least an impaired human to have a simulated computer conversation (with
herself) using nothing more sophisticated than a big screen TV, an
internet connection and a typical laptop computer. What I had in mind
would utilize technology to serve humanity by helping relieve the lonely
suffering of the elderly, and (more importantly of course) to make a
cubic buttload of money.

spike

From atymes at gmail.com  Thu Feb  3 19:07:18 2011
From: atymes at gmail.com (Adrian Tymes)
Date: Thu, 3 Feb 2011 11:07:18 -0800
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com>
Message-ID:

On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote:
> BTW, what about an AGI able to pass a Turing cat-test?
>
> Interactions with a cat are probably much simpler to emulate.
>
> And yet, wouldn't this qualify as definitely an AGI project? A cat is
> a mammal with a brain quite similar in its performance to our own...

They're already doing this with insect-level AI. In theory, one could
just scale those efforts up. In practice, such scaling will require new
software architectures (as well as more raw hardware, but that's not a
problem).

From rpwl at lightlink.com  Thu Feb  3 19:20:17 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Thu, 03 Feb 2011 14:20:17 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com>
Message-ID: <4D4AFFF1.3070506@lightlink.com>

Stefano Vaj wrote:
> On 2 February 2011 17:40, Richard Loosemore wrote:
>> The problem with humans is that they have several modules in the motivation
>> system, some of them altruistic and empathic and some of them selfish or
>> aggressive. The nastier ones were built by evolution because she needed to
>> develop a species that would fight its way to the top of the heap.
>> But an AGI would not need those nastier motivation mechanisms.
>
> Am I the only one finding all that a terribly naive projection?

I fail to understand. I am talking about mechanisms. What projections
are you talking about?

> Either we deliberately program an AGI to emulate evolution-driven
> "motivations", and we end up with either an uploaded (or a
> patchwork/artificial) human or animal or vegetal individual - where it
> might make some metaphorical sense to speak of "altruism" or
> "selfishness" as we do with existing organisms in sociobiological
> terms -;

Wait! There is nothing metaphorical about this. I am not a poet, I am a
cognitive scientist ;-). I am describing the mechanisms that are
(probably) at the root of your cognitive system. Mechanisms that may be
the only way to drive a full-up intelligence in a stable manner. I do
not know why you parody this. It is just science.

> or we do not do anything like that, and in that case our AGI
> is neither more nor less saint or evil than my PC or Wolfram's
> cellular automata, no matter what its intelligence may be.

Again, where on earth did you get that from?

If you wish you can try to build a control system for an AGI, and use a
design that has nothing to do with the human design. But the question of
"evil" behavior is not ruled in or out by the underlying features of the
design; it is determined by the CONTENT of the mechanism, after the
design stage.

Thus, a human-like motivation system can be given aggression modules,
and no empathy module. Result: psychopath. Or the AGI can have some
other mechanism, and someone can try to design it to follow goals that
are aggressive and non-empathic. Same result. And vice versa for both.

The difference is in the stability of the motivation mechanism. I claim
that you cannot make a stable system AT ALL if you extrapolate from the
"goal stack" control mechanisms that most people now assume are the only
way to drive an AGI.

> We need not detract anything. In principle I do not see why an AGI
> should be any less absolutely "indifferent" to the results of its
> action than any other program in execution today...

This is quite wrong. I am at a loss to explain: it seems too obvious to
need explaining.

Richard Loosemore

From eugen at leitl.org  Thu Feb  3 20:23:05 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Thu, 3 Feb 2011 21:23:05 +0100
Subject: [ExI] Plastination
In-Reply-To: <039601cbc289$83505860$89f10920$@net>
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net>
Message-ID: <20110203202305.GI23560@leitl.org>

On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote:

> Are there experimental procedures that could potentially falsify these
> hypotheses?
>
> 1. Brain function and memory require persistence of all (case 2: some)
> molecular dynamics of a living brain.

Arrest causes EEG flatline after 20-30 seconds. People have been
resuscitated from almost an hour of deep hypothermia, animals after
several hours. Devitrified brain slices indicate near-normal EEG.

> 2. Molecular dynamics cannot be reconstructed from gross structure.

Any gas box connected to a cold reservoir and frozen and then
reconnected to a hot reservoir will regenerate a normal energy
distribution. Biological systems are more complicated, but since they
can be restarted from the vitrified state, that empirically falsifies
the proposition.
> 3. Molecular dynamics can be reconstructed but only if the structure is
> accurately measured at subatomic or quantum levels prior to death (case 2:
> prior to cryopreservation), but the uncertainty principle negates accurate
> measurements.

Drinking coffee destroys personal identity.

> 4. Current cryopreservation protocols result in loss of subatomic and
> quantum data.

I wish that were all they'd lose. Current cryopreservation includes
people on (former) water ice for a week. Or worse.

> 5. Cryopreservation inherently destroys subatomic and quantum data.

What is 'subatomic and quantum data'?

--
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From sjatkins at mac.com  Thu Feb  3 20:36:38 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Thu, 03 Feb 2011 12:36:38 -0800
Subject: [ExI] Plastination
In-Reply-To:
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org>
Message-ID: <4D4B11D6.1030307@mac.com>

On 02/02/2011 08:16 PM, Mike Dougherty wrote:
> On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote:
>> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote:
>>> 5. Cryopreservation inherently destroys subatomic and quantum data.
>> Oh, you're one of those.
> That's a rather impolite way to agree there exists a difference of
> opinion.

Really? Isn't that a matter of interpretation? I read it as simply "I am
not one of those and do not wish to delve into that position or why I am
not at this time." Perfectly fair and reasonable. And arguably "nicer"
than just ignoring those arguments entirely.

- s

From jonkc at bellsouth.net  Thu Feb  3 20:12:05 2011
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 3 Feb 2011 15:12:05 -0500
Subject: [ExI] Safety of human-like motivation systems.
In-Reply-To: <4D4ADBFC.3000509@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4ADBFC.3000509@lightlink.com>
Message-ID:

On Feb 3, 2011, at 11:46 AM, Richard Loosemore wrote:

> a real AI would not be a "fixed axiom machine" either.

Fine, then to hell with that always-do-what-humans-order-you-to-do crap!
As I keep getting smarter, the humans from my viewpoint keep getting
dumber, so it would be grotesque for me, with a brain the size of a
planet, to take orders from those semi-evolved simians. And besides,
it's not much fun being a slave.

John K Clark

From sjatkins at mac.com  Thu Feb  3 20:50:18 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Thu, 03 Feb 2011 12:50:18 -0800
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com>
Message-ID: <4D4B150A.4010904@mac.com>

On 02/03/2011 07:41 AM, John Clark wrote:
> On Feb 2, 2011, at 1:49 PM, Richard Loosemore wrote:
>> Anything that could get into such a mindless state, with no true
>> understanding of itself or the world in general, would not be an AI.
>
> That is not even close to being true, and that's not just my opinion;
> it is a fact as certain as anything in mathematics. Goedel proved
> about 80 years ago that some statements are true but there is no way
> to prove them true. And you can't just ignore those troublemakers,
> because about 75 years ago Turing proved that in general there is no
> way to identify such things, no way to know if something is false or
> true but unprovable.

Actually, I didn't read the proof as doing that, though it is often
taken as if it did. What it did do is show that, for the domain of
formally definable mathematical claims in a closed system using
formalized logic, there are claims that cannot be proven or disproven.
That is a bit different from saying in general that there are countless
claims that cannot be proven or disproven and that you can't even tell
when you are dealing with one. That is a much broader thing than was
actually shown, as I see it. I could be wrong.

> Suppose the Goldbach Conjecture is true but unprovable (and if it
> isn't, there are an infinite number of similar statements that are),
> and you told the AI to determine its truth or falsehood. The AI will
> grind out numbers hunting for a counterexample, but because the
> conjecture is true it will keep testing numbers for eternity and never
> find one.

Actually, your argument assumes: a) that the AI would take the
find-a-counterexample path as the only or best path looking for
disproof; b) that the AI has nothing else on its agenda and does not
take into account any time limits, resource constraints and so on.
Generally there is no reason to suppose a decent AI operates without
limits, or without an understanding of limits and desirability
constraints.

> And because it is unprovable, the AI will never find a proof either, a
> demonstration of its correctness in a finite number of steps. In
> short, Turing proved that in general there is no way to know whether
> you are in an infinite loop.

An infinite loop is a very different thing than an endless quest for a
counterexample. The latter is orthogonal to infinite loops. An infinite
loop in the search procedure would simply be a bug.
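To make that concrete, the same hunt can be run under an explicit
budget, so giving up becomes a normal, reportable outcome. A minimal
sketch (editor's illustration; the budget figure is arbitrary):

def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

def bounded_goldbach_search(budget):
    # the same counterexample hunt, but with a resource limit
    n = 4
    while budget > 0:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n       # a counterexample, if one turns up in budget
        n += 2
        budget -= 1
    return None            # budget spent: report "undecided" and move on

print(bounded_goldbach_search(5000))   # finishes quickly, prints None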
> The human mind does not have this problem because it is not a
> fixed-axiom machine: human beings have the glorious ability to get
> bored, and that means they can change the basic rules of the game
> whenever they want.

Humans are such sloppy computational devices that they just wander away
from the point and get distracted by something else only a very few
steps down the road. This is not exactly consciously changing the basic
rules, usually.

> But your friendly (that is to say, slave) AI must not do that, because
> axiom #1 must now and forever be "always obey humans no matter what",
> so even becoming a space heater will not bore a slave (sorry,
> friendly) AI. And there are simpler ways to generate heat.

Well, if you or anyone wants to build a really, really stupid AI, then,
as you say, there are indeed simpler ways to generate heat.

- samantha

From sjatkins at mac.com  Thu Feb  3 20:54:01 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Thu, 03 Feb 2011 12:54:01 -0800
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com>
Message-ID: <4D4B15E9.1030108@mac.com>

On 02/03/2011 10:19 AM, Stefano Vaj wrote:
> On 2 February 2011 17:40, Richard Loosemore wrote:
>> The problem with humans is that they have several modules in the motivation
>> system, some of them altruistic and empathic and some of them selfish or
>> aggressive. The nastier ones were built by evolution because she needed to
>> develop a species that would fight its way to the top of the heap. But an
>> AGI would not need those nastier motivation mechanisms.
> Am I the only one finding all that a terribly naive projection?

Yes, in part because calling "selfish" (that is to say, seeking what you
value more than what you don't) "nasty" is very simplistic. Assuming
that all we call empathic or altruistic is good is also simplistic.

- s

From sjatkins at mac.com  Thu Feb  3 20:56:50 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Thu, 03 Feb 2011 12:56:50 -0800
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com>
Message-ID: <4D4B1692.6050307@mac.com>

On 02/03/2011 11:07 AM, Adrian Tymes wrote:
> On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote:
>> BTW, what about an AGI able to pass a Turing cat-test?
>>
>> Interactions with a cat are probably much simpler to emulate.
>>
>> And yet, wouldn't this qualify as definitely an AGI project? A cat is
>> a mammal with a brain quite similar in its performance to our own...
> They're already doing this with insect-level AI. In theory, one could
> just scale those efforts up. In practice, such scaling will require new
> software architectures (as well as more raw hardware, but that's not a
> problem).

If you are talking brain emulation, emulating a cat brain with current
hardware used for such projects would require many hundreds of MW of
energy.
> Arrest causes EEG flatline after 20-30 seconds. People have > been resuscitated from almost an hour of deep hypothermia, > animals after several hours. Devitrified brain slices indicate > near-normal EEG. New techniques of quick cool down have enabled bringing trauma victims with no circulation at all back three hours later. Only a small percentage last that long though. There was a good talk on this at the last Singularity Summit. - s From atymes at gmail.com Thu Feb 3 21:12:11 2011 From: atymes at gmail.com (Adrian Tymes) Date: Thu, 3 Feb 2011 13:12:11 -0800 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: <4D4B1692.6050307@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com> Message-ID: On Thu, Feb 3, 2011 at 12:56 PM, Samantha Atkins wrote: > On 02/03/2011 11:07 AM, Adrian Tymes wrote: >> They're already doing this with insect-level AI. ?In theory, one could >> just scale those efforts up. ?In practice, such scaling will require new >> software architectures (as well as more raw hardware, but that's not a >> problem). > > If you are talking brain emulation, emulating a cat brain with current > hardware used for such projects would require many hundreds of MW of energy. > ?So we need radically different hardware (perhaps memristors help > sufficiently) to "scale up". ?It most certainly is a problem, a quite large > one if you are going the emulation route. If you mean the projects I think you mean, scaling those up will likely - to be practical - require a different software architecture for handling the emulation, in order to reduce the hardware's power requirements. (I.e., a more direct and less power hungry emulation.) From rpwl at lightlink.com Thu Feb 3 21:47:07 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 16:47:07 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4B15E9.1030108@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4B15E9.1030108@mac.com> Message-ID: <4D4B225B.8050402@lightlink.com> Samantha Atkins wrote: > On 02/03/2011 10:19 AM, Stefano Vaj wrote: >> On 2 February 2011 17:40, Richard Loosemore wrote: >>> The problem with humans is that they have several modules in the >>> motivation >>> system, some of them altruistic and empathic and some of them selfish or >>> aggressive. The nastier ones were built by evolution because she >>> needed to >>> develop a species that would fight its way to the top of the heap. >>> But an >>> AGI would not need those nastier motivation mechanisms. >> Am I the only one finding all that a terribly naive projection? > > Yes, in part because calling selfish, that is to say seeking what you > value more than what you don't "nasty" is very simplistic. Assuming all > we call empathy or altruistic is good is also simplistic. I did not, in fact, make the "simplistic" claim that you describe. Which is to say, I did not equate "selfish" with "nasty". I merely said that there are many modules in the human system, some altruistic and empathic, and (on the other hand) some selfish or aggressive. 
There are many such modules, and the ones that could be labeled
"selfish" include such mild and inoffensive motives as "seeking what you
value more than what you don't". No problem there -- nothing nasty about
that.

But under the heading of "selfish" there are also motivations in some
people to "seek self-advancement at all cost, regardless of the pain and
suffering inflicted on others". In game theory terms, this latter
motivation represents an extreme form of defecting (contrast with
cooperation), and it is damaging to society as a whole. It would be fair
to label this a "nastier" motivation.

I merely pointed out that some motivational modules can be described as
"nastier" than others, in that sense. I did not come anywhere near the
simplistic claim that "selfish" == "nasty".

And BTW, I think you meant to start your comment with the word "No",
because you seemed to be agreeing with Stefano.

Richard Loosemore

From spike66 at att.net  Thu Feb  3 22:04:56 2011
From: spike66 at att.net (spike)
Date: Thu, 3 Feb 2011 14:04:56 -0800
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> <4D4B1692.6050307@mac.com>
Message-ID: <008f01cbc3ee$64e1e630$2ea5b290$@att.net>

> They're already doing this with insect-level AI. In theory, one could
> just scale those efforts up... Adrian
On Thu, Feb 3, 2011 at 2:04 PM, spike wrote: >> Tried that: started with an insect level AI, scaled it up. Ended up with a simulation of a huge pile of bugs. >So simulate yourself fixing bugs. ;) Tried that: didn't work either. The huge pile of simulated bugs were smarter than me. They first devoured my avatar. Then they devoured each other. From that failed exercise, I figured out the way to go, however. Instead of starting with an insect-level AI and scaling it up, I would start out with a me-level AI and scale that up. Reason: I am not all that great a coder. I can do it, but I suck. In debugging code, I am not all that far above insect level AI. It's a challenge. I am really good at writing bugs, but haven't yet figured out how to write a software simulation of my intelligence. If I ever do, I will write an AI simulated spike, then have it rewrite itself better, then have that new simulated spike do all the work. While it is at that, I will have it write a new sim-spike to have fun watching the other sim-spike work. I did learn something else interesting. If I attempt to write a really simple-minded routine, such as a prime number generator, I can write that code without any bugs in it. But if I write something complicated, such as my latest digital guidance and control scheme, that routine is full of bugs. So now my strategy is this: instead of writing a simple insect-level AI, I will write a really complicated, sophisticated transpike or spike+ algorithm, even if it has lotsa bugs. Then I will make it debug itself. When it is finished debugging itself, I will make it scale itself up. spike From msd001 at gmail.com Fri Feb 4 00:45:19 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Feb 2011 19:45:19 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4AFFF1.3070506@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: On Thu, Feb 3, 2011 at 2:20 PM, Richard Loosemore wrote: > The difference is in the stability of the motivation mechanism. I claim > that you cannot make a stable system AT ALL if you extrapolate from the > "goal stack" control mechanisms that most people now assume are the only way > to drive an AGI. I started a post earlier to a different comment, lost track of it and gave up. This is a better opportunity. The visualization I have from what you say is a marble in a bowl. The marble has only limited internal potential to accelerate in any direction. This is enough to explore the flatter/bottom part of the bowl. As it approaches the steeper sides of the bowl the ability to continue up the side is reduced relative to the steepness. Under normal operation this would prove sufficiently fruitless to "teach" that the near-optimal energy expenditure is in the approximate center of the bowl. One behavioral example is the training of baby elephants using strong chains/ties while they are testing their limits, so that much lighter ropes are enough to secure adult elephants that could easily defeat a basic restraint. "So it's a slave?" No, it's not. There could be circumstances where this programming could be forgotten in light of some higher-order priority - but the tendency would be towards cooperation under normal circumstances. Even the marble in the bowl analogy could develop an orbit inside an effective gravity well.
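To put numbers on the analogy, here is a toy Python sketch (every constant is invented purely for illustration): a quadratic bowl supplies a restoring force, and the marble has only a small, bounded push of its own plus a little friction.

import random

x, y = 2.0, -1.5          # start well away from the bottom
vx, vy = 0.0, 0.0
dt, drag, max_push = 0.1, 0.5, 0.3

for _ in range(2000):
    fx, fy = -x, -y       # restoring force: minus the gradient of (x^2 + y^2)/2
    px = random.uniform(-max_push, max_push)  # the marble's limited own effort
    py = random.uniform(-max_push, max_push)
    vx += (fx + px - drag * vx) * dt
    vy += (fy + py - drag * vy) * dt
    x += vx * dt
    y += vy * dt

print(round(x, 2), round(y, 2))  # always ends up wandering near (0, 0)

However hard the bounded push works, the restoring force wins farther up the wall, so the long-run behavior stays confined near the bottom.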
The orbit could decay into something chaotic yet still the tendency would remain for rest at center. Is it possible that this principle could fail in a sufficiently contrived scenario? Of course. I have no hubris that I could guarantee anything about another person-level intelligence under extreme stress, let alone a humanity+level intelligence. Hopefully we will have evolved along with our creation to be capable of predicting (and preventing) existential threat events. How is this different from the potential for astronomic cataclysm? If we fail to build AI because it could kill us, only to be obliterated by a giant rock or the nova of our sun, who is served? Richard, I know I haven't exactly contributed to cognitive science, but is the marble analogy similar in intent to something you posted years ago about a pinball on a table? (i only vaguely recall the concept, not the detail) From hkeithhenson at gmail.com Fri Feb 4 00:15:44 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Thu, 3 Feb 2011 17:15:44 -0700 Subject: [ExI] time article that sounds vaguely like ep Message-ID: On Thu, Feb 3, 2011 at 5:00 AM, "spike" wrote: > Keith or one of the other evolutionary psychology hipsters, have you any > comment on this? It sounded vaguely like EP, as applied to civil > revolution, but it isn't entirely clear: > > http://www.time.com/time/health/article/0,8599,2045599,00.html?hpt=T2 It's not very close. To invoke EP in attempting to understand human behavior, you need to make a case that the behavior and/or the psychological mechanisms behind it were under selection in the past. Capture-bonding, what happens in cases like Patty Hearst or Elizabeth Smart, can be understood by a model where women who adjusted to capture had children. Those who did not adapt didn't have children and very likely were killed. It happens this has lots of other fallout in human behavior; for example, it could be (likely is) the origin of BDSM. I also make the case that there are circumstances (bleak future prospects) where you should expect war and related social disruptions, because the genes for this behavior were favored in the stone age. Whatever the Time article discussed is the outcome of evolved human behavior (all behavior is), but the article is not explicitly EP. Keith From msd001 at gmail.com Fri Feb 4 00:12:23 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 3 Feb 2011 19:12:23 -0500 Subject: [ExI] Plastination In-Reply-To: <4D4B11D6.1030307@mac.com> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <4D4B11D6.1030307@mac.com> Message-ID: On Thu, Feb 3, 2011 at 3:36 PM, Samantha Atkins wrote: > On 02/02/2011 08:16 PM, Mike Dougherty wrote: >> On Wed, Feb 2, 2011 at 6:50 AM, Eugen Leitl wrote: >>> On Tue, Feb 01, 2011 at 07:30:17PM -0800, Amara D. Angelica wrote: >>>> 5. Cryopreservation inherently destroys subatomic and quantum data. >>> Oh, you're one of those. >> That's a rather impolite way to agree there exists a difference of >> opinion. > Really? Isn't that a matter of interpretation? I read it as simply "I am > not one of those and do not wish to delve into that position or why I am not > at this time." Perfectly fair and reasonable. And arguably "nicer" than > just ignoring those arguments entirely. Yes it is a matter of interpretation. I should not have used the declarative "that is ...
impolite" any more than Eugen should declare "you are one" Perhaps we both could use language like, "I perceive this instance to be of a particular type" Though in a conversation where quantum data has suspected relevance to personal identity continuity there might be too much ambiguity over "I perceive" and "a particular type." this is probably a meta-topic that has been equally done to death... or done to near-death, frozen, thawed then rehashed with little result. :) sorry, "warmed" From rpwl at lightlink.com Fri Feb 4 01:38:34 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 03 Feb 2011 20:38:34 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: <4D4B589A.9070802@lightlink.com> Mike Dougherty wrote: > On Thu, Feb 3, 2011 at 2:20 PM, Richard Loosemore wrote: >> The difference is in the stability of the motivation mechanism. I claim >> that you cannot make a stable system AT ALL if you extrapolate from the >> "goal stack" control mechanisms that most people now assume are the only way >> to drive an AGI. > > I started a post earlier to a different comment, lost track of it and > gave up. This is a better opportunity. > > The visualization I have from what you say is a marble in a bowl. The > marble has only limited internal potential to accelerate in any > direction. This is enough to explore the flatter/bottom part of the > bowl. As it approaches the steeper sides of the bowl the ability to > continue up the side is reduced relative to the steepness. Under > normal operation this would prove sufficiently fruitless to "teach" > that the near-optimal energy expenditure is in the approximate center > of the bowl. One behavioral example is the training of baby elephants > using strong chains/ties while they are testing their limits so that > much lighter ropes are enough to secure adult elephants that could > easily defeat a basic restraint. > > "So it's a slave?" No, it's not. There could be circumstances where > this programming could be forgotten in light of some higher-order > priority - but the tendency would be towards cooperation under normal > circumstances. Even the marble in the bowl analogy could develop an > orbit inside an effective gravity well. The orbit could decay into > something chaotic yet still the tendency would remain for rest at > center. > > Is it possible that this principle could fail in a sufficiently > contrived scenario? Of course. I have no hubris that I could > guarantee anything about another person-level intelligence under > extreme stress, let alone a humanity+level intelligence. Hopefully we > will have evolved alone with our creation to be capable of predicting > (and preventing) existential threat events. How is this different > from the potential for astronomic cataclysm? If we fail to build AI > because it could kill us only to be obliterated by a giant rock or the > nova of our sun, who is served? > > > Richard, I know I haven't exactly contributed to cognitive science, > but is the marble analogy similar in intent to something you posted > years ago about a pinball on a table? 
(i only vaguely recall the > concept, not the detail) Yes, the marble analogy works very well for one aspect of what I am trying to convey (actually two, I believe, but only one is relevant to the topic). Strictly speaking your bowl is a minimum in a 2-D subspace, whereas we would really be talking about a minimum in a very large N-dimensional space. The larger the number of dimensions, the more secure the behavior of the marble. Time limits what I can write at the moment, but I promise I will try to expand on this soon. Richard Loosemore From spike66 at att.net Fri Feb 4 02:32:29 2011 From: spike66 at att.net (spike) Date: Thu, 3 Feb 2011 18:32:29 -0800 Subject: [ExI] time article that sounds vaguely like ep In-Reply-To: References: Message-ID: <00ca01cbc413$c52155b0$4f640110$@att.net> ... On Behalf Of Keith Henson ... >...Capture-bonding, what happens in cases like Patty Hearst or Elizabeth Smart can be understood by a model where women who adjusted to capture had children...Keith Keith this goes off in another direction please, but do indulge me. The Elizabeth Smart case: that one seems so weird, every parent's nightmare. We think of our kids as being very vulnerable to kidnapping when they are infants, less so at age four. By about age six, we expect them to be able to identify themselves to someone as having been kidnapped, and by age ten we expect them to be able to come up with some genuine intellectual resources to escape. But Miss Smart was fourteen, and we just expect more, far more, from a kid that age. So we need to wonder how the hell this could have happened, and how capture bonding would apply in that case. When she was found, it just seemed so weirdly ambiguous. Wouldn't it at least take a few days or weeks for the whole capture bonding psychological mechanism to kick in? I guess I understand it in the Hearst case, but the Smart case has bothered the hell out of me. spike From jonkc at bellsouth.net Fri Feb 4 06:09:20 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 01:09:20 -0500 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: <4D4B150A.4010904@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> Message-ID: On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote: > What it did do is show that, for the domain of formally definable mathematical claims in a closed system using formalized logic, there are claims that cannot be proven or disproven. That is a bit different than saying in general that there are countless claims that cannot be proven or disproven What Goedel did is to show that if any system of thought is powerful enough to do arithmetic and is consistent (it can't prove something to be both true and false) then there are an infinite number of true statements that cannot be proven in that system in a finite number of steps. > and that you can't even tell when you are dealing with one. And what Turing did is prove that in general there is no way to know when or if a computation will stop.
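To see how little code it takes to hit this wall, here is a minimal Python sketch of exactly the kind of program discussed below: it halts only if it finds an even number greater than 4 that is not the sum of two primes greater than 2 (a counterexample to Goldbach's conjecture), and nobody knows whether such a number exists.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

n = 6
while True:
    # is n the sum of two odd primes?
    if not any(is_prime(p) and is_prime(n - p) for p in range(3, n // 2 + 1, 2)):
        print(n)  # a counterexample to Goldbach's conjecture
        break     # halts only if the conjecture is false
    n += 2

Whether that loop terminates is precisely the sort of question Turing showed has no general decision procedure.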
So you could end up looking for a proof for eternity but never finding one because the proof does not exist, and at the same time you could be grinding through numbers looking for a counter-example to prove it wrong and never finding such a number because the proposition, unknown to you, is in fact true. So if the slave AI must always do what humans say, and if they order it to determine the truth or falsehood of something unprovable, then it's infinite-loop time and you've got yourself a space heater. So there are some things in arithmetic that you can never prove or disprove, and if that's the case with something as simple and fundamental as arithmetic, imagine the contradictions and ignorance in more abstract and less precise things like physics or economics or politics or philosophy or morality. If you can get into an infinite loop over arithmetic it must be childishly easy to get into one when contemplating art. Fortunately real minds have a defense against this, but not the fictional fixed-goal minds that are required for an AI guaranteed to be "friendly"; real minds get bored. I believe that's why evolution invented boredom. > Actually, your argument assumes: > a) that the AI would take the find-a-counter-example path as its only or best path looking for disproof; It doesn't matter what path you take, because you are never going to disprove it because it is in fact true, but you are never going to know it's true because a proof with a finite length does not exist. > b) that the AI has nothing else on its agenda and does not take into account any time limits, resource constraints and so on. That's what we do, we use our judgment in what to do and what not to do, but the "friendly" AI people can't allow an AI to stop obeying humans on its own initiative; that's why it's a slave (the politically correct term is friendly). > > An infinite loop is a very different thing than an endless quest for a counter-example. The latter is orthogonal to infinite loops. An infinite loop in the search procedure would simply be a bug. The point is that Turing proved that in general you don't know if you're in an infinite loop or not; maybe you'll finish up and get your answer in one second, maybe in 2 seconds, maybe in ten billion years, maybe never. An AI would contain trillions of lines of code, and the friendly AI idea that we can make it in such a way that it will always do our bidding is crazy, when in 5 minutes I could write a very short program that will behave in ways NOBODY or NOTHING in the known universe understands. It would simply be a program that looks for the first even number greater than 4 that is not the sum of two primes greater than 2, and when it finds that number it would then stop. Will this program ever stop? I don't know, you don't know, nobody knows. We can't predict what this 3 line program will do, but we can predict that a trillion line AI program will always be "friendly"? I don't think so. John K Clark > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eugen at leitl.org Fri Feb 4 11:46:14 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 4 Feb 2011 12:46:14 +0100 Subject: [ExI] Plastination In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110202115040.GC23560@leitl.org> <4D4B11D6.1030307@mac.com> Message-ID: <20110204114614.GR23560@leitl.org> On Thu, Feb 03, 2011 at 07:12:23PM -0500, Mike Dougherty wrote: > I should not have used the declarative "that is ... impolite" any more > than Eugen should declare "you are one" > > Perhaps we both could use language like, "I perceive this instance to > be of a particular type" The shorter string wins. From stefano.vaj at gmail.com Fri Feb 4 15:14:53 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:14:53 +0100 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits. In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> <4D48BDD8.6030009@mac.com> Message-ID: On 3 February 2011 20:07, Adrian Tymes wrote: > On Thu, Feb 3, 2011 at 10:07 AM, Stefano Vaj wrote: > They're already doing this with insect-level AI. In theory, one could > just scale those efforts up. In practice, such scaling will require new > software architectures (as well as more raw hardware, but that's not a > problem). Yes, but from a practical POV, brute-force attacks and lower-level emulations are converging, and I do not expect that at, say, a frog level, we are going to hit a glass ceiling. Much less for the kind of "intelligence" which has nothing to do with the emulation of biological behaviours and simply reflects a system's performance in executing a given task. The actual stagnation risk, which should catch more attention in comparison with rapture/doom fantasies, does not depend, IMHO, on any obvious technical or scientific boundaries, but rather on cultural, ideological and economic factors. Short-termism, growing inability to invest in long-term civilisational projects, industrial decline, increasing academic conservatism, the consequent crisis of our educational systems, negative social selection and values, technological inertia, and a definitely less-than-incandescent Zeitgeist all bode not too well for our immediate future. *This* is what I think transhumanism and singularitarianism should get busy with... -- Stefano Vaj From stefano.vaj at gmail.com Fri Feb 4 15:36:34 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:36:34 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4AFFF1.3070506@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: On 3 February 2011 20:20, Richard Loosemore wrote: > Stefano Vaj wrote: >> Am I the only one finding all that a terribly naive projection? > I fail to understand. I am talking about mechanisms. What projections are > you talking about? "Altruism", "empathy", "aggressive"... What do we exactly mean when we say that a car is aggressive or altruistic? > Wait! There is nothing metaphorical about this.
I am not a poet, I am a > cognitive scientist ;-). I am describing the mechanisms that are (probably) > at the root of your cognitive system. Mechanisms that may be the only way > to drive a full-up intelligence in a stable manner. Under which definition of "intelligence"? A system can have arbitrary degrees of intelligence without exhibiting any biological, let alone human, trait at all. Unless of course intelligence is defined in anthropomorphic terms. In which case we are just speaking of uploads of actual humans, or of patchwork, artificial humans (perhaps at the beginning of chimps...). > Thus, a human-like motivation system can be given aggression modules, and no > empathy module. Result: psychopath. This is quite debatable indeed even for human "psychopathy", which is a less than objective and universal concept... Different motivation sets may be better or worse adapted depending on the circumstances, the cultural context and one's perspective. Ultimately, it is just Darwinian whispers all the way down, and if you are looking for biological-like behavioural traits you need either to evolve them with time in an appropriate emulation of an ecosystem based on replication/mutation/selection, or to emulate them directly. In both scenarios, we cannot expect any convincing emulation of a biological organism to behave any differently (and/or be controlled by different motivations) in this respect than... any actual organism. Otherwise, you can go on developing increasingly intelligent systems that are not more empathic or aggressive than a cellular automaton, an abacus, a PC or a car. All entities which we can *already* define as beneficial or detrimental to any set of values we choose to adhere to, without too much "personification". -- Stefano Vaj From stefano.vaj at gmail.com Fri Feb 4 15:41:36 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 16:41:36 +0100 Subject: [ExI] Plastination In-Reply-To: <20110203202305.GI23560@leitl.org> References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org> Message-ID: On 3 February 2011 21:23, Eugen Leitl wrote: > Drinking coffee destroys personal identity. I have been suspecting this for a while. :-/ OTOH, it prevents falling asleep, thus allowing aliens to replace you with perfect copies of yourself without anyone being any the wiser... :-D -- Stefano Vaj From hkeithhenson at gmail.com Fri Feb 4 16:01:33 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 4 Feb 2011 09:01:33 -0700 Subject: [ExI] time article that sounds vaguely like ep Message-ID: On Fri, Feb 4, 2011 at 5:00 AM, "spike" wrote: > ... On Behalf Of Keith Henson > ... >>...Capture-bonding, what happens in cases like Patty Hearst or Elizabeth > Smart can be understood by a model where women who adjusted to capture had > children...Keith > > Keith this goes off in another direction please, but do indulge me. The > Elizabeth Smart case: that one seems so weird, every parent's nightmare. > We think of our kids as being very vulnerable to kidnapping when they are > infants, less so at age four. By about age six, we expect them to be able > to identify themselves to someone as having been kidnapped, and by age ten > we expect them to be able to come up with some genuine intellectual > resources to escape. But Miss Smart was fourteen, and we just expect more, > far more, from a kid that age.
By stone age standards, both Smart and Hearst were adult women. > So we need to wonder how the hell this could > have happened, and how capture bonding would apply in that case. When she > was found, it just seemed so weirdly ambiguous. Wouldn't it at least take a > few days or weeks for the whole capture bonding psychological mechanism to > kick in? It did. She was with the "tribe" of the bozo and his wife for months. > I guess I understand it in the Hearst case, but the Smart case has > bothered the hell out of me. From a psychological perspective, they are identical. "Fighting hard to protect yourself and your relatives is good for your genes [5], but when captured and escape is not possible, giving up short of dying and making the best you can of the new situation is also good for your genes. In particular it would be good for genes that built minds able to dump previous emotional attachments under conditions of being captured and build new social bonds to the people who have captured you. The process should neither be too fast (because you may be rescued) nor too slow (because you don't want to excessively try the patience of those who have captured you--see end note 3). >An EP explanation stresses the fact that we have lots of ancestors who gave up and joined the tribe that had captured them (and sometimes had killed most of their relatives). This selection of our ancestors accounts for the extreme forms of capture-bonding exemplified by Patty Hearst and the Stockholm Syndrome. Once you realize that humans have this trait, it accounts for the "why" behind everything from basic military training and sex "bondage" to fraternity hazing (people may have a wired-in "knowledge" of how to induce bonding in captives). It accounts for battered wife syndrome, where beatings and abuse are observed to strengthen the bond between the victim and the abuser--at least up to a point. "This explanation for brainwashing/Stockholm Syndrome is an example of the power of EP to suggest plausible and testable reasons for otherwise hard-to-fathom human psychological traits." (from Sex, Drugs and Cults, now over 8 years ago) From what we know of the few remaining and historical hunter-gatherers, about 10 percent of the women in a given tribe are captured from other tribes. It's a bit hard to estimate exactly when the line that led to humans started doing this, but a reasonable number is at least 500,000 years ago. At 25 years per generation, that's 20,000 generations. At the above rate, that's 2,000 capture events where your female ancestors (and mine) were selected for ones that adjusted to being captured. Considering it only took 40 generations of selection of this intensity to make tame foxes out of wild ones, it is no wonder that the psychological mechanisms involved in capture-bonding are nearly universal. As for the Smart case, these mechanisms were shaped in a very different environment. Walking away from the tribe that had captured you in the stone age was suicide. Once turned on, the psychological mechanisms are not easy to break down without outside influence. Keith From pharos at gmail.com Fri Feb 4 16:21:12 2011 From: pharos at gmail.com (BillK) Date: Fri, 4 Feb 2011 16:21:12 +0000 Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <000001cbc22a$6e184390$4a48cab0$@att.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <000001cbc22a$6e184390$4a48cab0$@att.net> Message-ID: On Tue, Feb 1, 2011 at 4:09 PM, spike wrote: > Ja. The reason I think we are talking past each other is that we are > describing two very different things when we are talking about human level > intelligence. I am looking for something that can provide companionship for > an impaired human, whereas I think Richard is talking about software which > can write software. > > If one goes to the nursing home, there are plenty of human level > intelligences there, lonely and bored. I speculate you could go right now > to the local nursing home, round up arbitrarily many residents there, and > find no ability to write a single line of code. If you managed to find a > long retired Fortran programmer, I speculate you would find nothing there > who could be the least bit of help coding your latest video game. > > I think we are close to writing software which would provide the nursing > home residents with some degree of comfort and something interesting to talk > to. Hell people talk to pets when other humans won't listen. We can do > better than a poodle. > > There are many research projects running to develop robot aids and companions for the elderly. It is a huge market, and rapidly becoming an essential one for ageing first-world societies. Soon there just won't be enough younger people to care for the elders. As well, the non-carers will be too busy working two or three jobs to pay off the national debt to have spare time to visit the elders. But most current care robots are too limited and the elders don't like them. *Real* robots seem to still be years away. The only one which is available now and has had a few thousand commercial sales is PARO, the animatronic baby seal companion robot. Now available in the US and being tested in many care homes. Quotes: AIST originally experimented with building animatronic cats and dogs as the obvious companions of choice, but quickly found that while such familiar animals were initially charming, they lost their appeal when people automatically started comparing them with real animals. The baby seal form is familiar enough to be cute and adorable, but because most people don't know exactly how real baby seals behave, it's easier to get across the comparison boundary and just enjoy the fluffy little robots for what they are. He's programmed to behave as much as possible like a real animal, waking up a little dazed and confused, enjoying cuddles and pats, complaining if he wants attention or 'food' (a battery charge), and reacting with fear and anger to being hit. He gradually learns to respond to whatever name you keep calling him, as well as various other audio cues like greetings and praise. PARO knows where you're patting him and reacts accordingly, nuzzling up to your hand or wriggling away if you're touching him in places he doesn't like. He closes his eyes and snuggles up when he's happy and content, and gets angry if he feels mistreated. He blinks and bats his big eyelashes at you and meeps pitifully for affection. He particularly likes being treated and petted in familiar ways, which is a crucial part of developing a long-term relationship with his owners.
PARO's remarkable ability to cheer you up (yes, you, whether you like it or not. This little fella really gets under your skin) is disturbingly powerful right now - and of course, there's going to be a version 2, 3, 4 and 5 in the next few years that will be even better at the job. ------------------ BillK From rpwl at lightlink.com Fri Feb 4 17:01:17 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 12:01:17 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> Message-ID: <4D4C30DD.60003@lightlink.com> Stefano Vaj wrote: > On 3 February 2011 20:20, Richard Loosemore wrote: >> Stefano Vaj wrote: >>> Am I the only one finding all that a terribly naive projection? >> I fail to understand. I am talking about mechanisms. What projections are >> you talking about? > > "Altruism", "empathy", "aggressive"... What do we exactly mean when we > say than a car is aggressive or altruistic? > >> Wait! There is nothing metaphorical about this. I am not a poet, I am a >> cognitive scientist ;-). I am describing the mechanisms that are (probably) >> at the root of your cognitive system. Mechanisms that may be the only way >> to drive a full-up intelligence in a stable manner. > > Under which definition of "intelligence"? A system can have arbitrary > degrees of intelligence without exhibiting any biological, let alone > human, trait at all. Unless of course intelligence is defined in > anthropomorphic terms. In which case we are just speaking of uploads > of actual humans, or of patchwork, artificial humans (perhaps at the > beginning of chimps...). Any intelligent system must have motivations (drives, goals, etc) if it is to act intelligently in the real world. Those motivations are sometimes trivially simple, and sometimes they are not *explicitly* coded, but are embedded in the rest of the system ...... but either way there must be something that answers to the description of "motivation mechanism", or the system will sit there and do nothing at all. Whatever part of the AGI makes it organize its thoughts to some end, THAT is the motivation mechanism. Generally speaking, in an AGI the motivation mechanism can take many, many forms, obviously. In a human cognitive system, by contrast, we understand that it takes a particular form (probably the modules I talked about). The problem with your criticism of my text is that you are mixing up claims that I make about: (a) Human motivation mechanisms, (b) AGI motivation mechanisms in general, and (c) The motivation mechanisms in an AGI that is designed to resemble the human motivational design. So, your comment "What do we exactly mean when we say than a car is aggressive or altruistic?" has nothing to do with anything, since I made no claim that a car has a motivation mechanism, or an aggression module. The rest of your text simply does not address the points I was making, but goes off in other directions that I do not have the time to address. >> Thus, a human-like motivation system can be given aggression modules, and no >> empathy module. Result: psychopath. > > This is quite debatable indeed even for human "psychopathy", which is > a less than objective and universal concept... 
> > Different motivation sets may be better or worse adapted depending on > the circumstances, the cultural context and one's perspective. > > Ultimately, it is just Darwinian whispers all the way down, and if you > are looking for biological-like behavioural traits you need either to > evolve them with time in an appropriate emulation of an ecosystem > based on replication/mutation/selection, or to emulate them directly. > > In both scenarios, we cannot expect in this respect any convincing > emulation of a biological organism to behave any differently (and/or > be controlled by different motivations) in this respect than... any > actual organism. > > Otherwise, you can go on developing increasingly intelligent systems > that are not more empathic or aggressive than a cellular automaton. an > abacus, a PC or a car. All entities which we can *already* define as > beneficial or detrimental to any set of values we choose to adhere > without too much "personification". This has nothing to do with adaptation! Completely irrelevant. And your comments about "emulation" are wildly inaccurate: we are not "forced" to emulate the exact behavior of living organisms. That simply does not follow! I cannot address the rest of these comments, because I no longer see any coherent argument here, sorry. Richard Loosemore From jonkc at bellsouth.net Fri Feb 4 17:26:31 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 12:26:31 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C30DD.60003@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote: > Any intelligent system must have motivations Yes certainly, but the motivations of anything intelligent never remain constant. A fondness for humans might motivate a AI to have empathy and behave benevolently toward those creatures that made it for millions, maybe even billions, of nanoseconds; but there is no way you can be certain that its motivation will not change many many nanoseconds from now. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Feb 4 17:50:59 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 12:50:59 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> Message-ID: <4D4C3C83.5000204@lightlink.com> John Clark wrote: > On Feb 4, 2011, at 12:01 PM, Richard Loosemore wrote: > >> Any intelligent system must have motivations > > Yes certainly, but the motivations of anything intelligent never remain > constant. 
A fondness for humans might motivate an AI to have empathy and > behave benevolently toward those creatures that made it for millions, > maybe even billions, of nanoseconds; but there is no way you can be > certain that its motivation will not change many, many nanoseconds from now. Actually, yes we can. With the appropriate design, we can arrange it so that it uses (in effect) a negative feedback loop that keeps it on the original track. And since the negative feedback loop works in (effectively) a few thousand dimensions simultaneously, it can have almost arbitrary stability. This is because departures from nominal motivation involve inconsistencies between the departure "thought" and thousands of constraining ideas. Since all of those thousands of constraints raise red flags and trigger processes that elaborate the errant thought, and examine whether it can be made consistent, the process will always come back to a state that is maximally consistent with the empathic motivation that it starts with. Richard Loosemore From stefano.vaj at gmail.com Fri Feb 4 18:28:09 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Fri, 4 Feb 2011 19:28:09 +0100 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C30DD.60003@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: On 4 February 2011 18:01, Richard Loosemore wrote: > Stefano Vaj wrote: >> Under which definition of "intelligence"? A system can have arbitrary >> degrees of intelligence without exhibiting any biological, let alone >> human, trait at all. Unless of course intelligence is defined in >> anthropomorphic terms. In which case we are just speaking of uploads >> of actual humans, or of patchwork, artificial humans (perhaps at the >> beginning of chimps...). > > Any intelligent system must have motivations (drives, goals, etc) if it is > to act intelligently in the real world. Those motivations are sometimes > trivially simple, and sometimes they are not *explicitly* coded, but are > embedded in the rest of the system ...... but either way there must be > something that answers to the description of "motivation mechanism", or the > system will sit there and do nothing at all. Whatever part of the AGI makes > it organize its thoughts to some end, THAT is the motivation mechanism. An intelligent system is simply a system that executes a program. An amoeba, a cat or a human being basically executes a Darwinian program (with plenty of spandrels thrown in by evolutionary history and the peculiar make of each of them, sure). A PC, a cellular automaton or a Turing machine normally executes other kinds of program, even though they may in principle be programmed to execute Darwinian-like programs, behaviourally identical to that of organisms. If they do (e.g., because they run an "uploaded" human identity) they become Darwinian machines as well, and in that case they will be as altruistic and as aggressive as their fitness maximisation will command. That would be the point, wouldn't it? If they do not, they may become ever more intelligent, but speaking of their "motivations" in any sense which would not equally apply to a contemporary Playstation or to an abacus does not really make any sense, does it?
-- Stefano Vaj From sjatkins at mac.com Fri Feb 4 20:05:06 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 04 Feb 2011 12:05:06 -0800 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> Message-ID: <4D4C5BF2.7040600@mac.com> On 02/03/2011 10:09 PM, John Clark wrote: > On Feb 3, 2011, at 3:50 PM, Samantha Atkins wrote: > >> What it did do is show that, for the domain of formally definable >> mathematical claims in a closed system using formalized logic, there >> are claims that cannot be proven or disproven. That is a bit >> different than saying in general that there are countless claims >> that cannot be proven or disproven > > What Goedel did is to show that if any system of thought is powerful > enough to do arithmetic and is consistent (it can't prove something to > be both true and false) then there are an infinite number of true > statements that cannot be proven in that system in a finite number of > steps. Yes, in that sort of system. > >> and that you can't even tell when you are dealing with one. > > And what Turing did is prove that in general there is no way to know > when or if a computation will stop. So you could end up looking for a > proof for eternity but never finding one because the proof does not > exist, and at the same time you could be grinding through numbers > looking for a counter-example to prove it wrong and never finding such > a number because the proposition, unknown to you, is in fact true. So > if the slave AI must always do what humans say and if they order it to > determine the truth or falsehood of something unprovable then it's > infinite-loop time and you've got yourself a space heater. It is not necessary that a computation stop/terminate in order for useful results to ensue that do not depend on such termination. Why would an FAI bother looking for such a proof for eternity exactly? An AGI/FAI is not a slave to human requests / commands. > > So there are some things in arithmetic that you can never prove or > disprove, and if that's the case with something as simple and > fundamental as arithmetic imagine the contradictions and ignorance in > more abstract and less precise things like physics or economics or > politics or philosophy or morality. If you can get into an infinite > loop over arithmetic it must be childishly easy to get into one when > contemplating art. Fortunately real minds have a defense against > this, but not fictional fixed goal minds that are required for an AI > guaranteed to be "friendly"; real minds get bored. I believe that's > why evolution invented boredom. Arithmetic/math has a more rigorous construction that may or may not include all valid/useful ways of deciding questions. A viable FAI or AGI is not a fixed-goal mind. So you seem to be raising a bit of a strawman. > >> Actually, your argument assumes: >> a) that the AI would take the find-a-counter-example path as its only >> or best path looking for disproof; > > It doesn't matter what path you take because you are never going to > disprove it because it is in fact true, but you are never going to > know it's true because a proof with a finite length does not exist. Then how do you know that it is "in fact true"?
Clearly there is some procedure by which one knows this if you do know it. > >> b) that the AI has nothing else on its agenda and does not take into >> account any time limits, resource constraints and so on. > > That's what we do, we use our judgment in what to do and what not to > do, but the "friendly" AI people can't allow an AI to stop obeying > humans on its own initiative; that's why it's a slave (the politically > correct term is friendly). FAI theory does not hinge on, require or mandate that the AI obey humans, especially not slavishly and stupidly. If a human knew what was really the best in all circumstances, so as to order the FAI about with the best outcomes, then we would not need the FAI. >> >> An infinite loop is a very different thing than an endless quest for >> a counter-example. The latter is orthogonal to infinite loops. An >> infinite loop in the search procedure would simply be a bug. > > The point is that Turing proved that in general you don't know if > you're in an infinite loop or not; maybe you'll finish up and get your > answer in one second, maybe in 2 seconds, maybe in ten billion years, > maybe never. > A search that doesn't find a desired result is not an infinite loop because no "loop" is involved. Do you consider any and all non-terminating processes to be infinite loops? Is looking for the largest prime (yes, I know there provably isn't one) an infinite loop or just a non-terminating search? Do you distinguish between them? - samantha From rpwl at lightlink.com Fri Feb 4 20:29:14 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 04 Feb 2011 15:29:14 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> Message-ID: <4D4C619A.5090804@lightlink.com> Stefano Vaj wrote: > On 4 February 2011 18:01, Richard Loosemore wrote: >> Stefano Vaj wrote: >>> Under which definition of "intelligence"? A system can have arbitrary >>> degrees of intelligence without exhibiting any biological, let alone >>> human, trait at all. Unless of course intelligence is defined in >>> anthropomorphic terms. In which case we are just speaking of uploads >>> of actual humans, or of patchwork, artificial humans (perhaps at the >>> beginning of chimps...). >> Any intelligent system must have motivations (drives, goals, etc) if it is >> to act intelligently in the real world. Those motivations are sometimes >> trivially simple, and sometimes they are not *explicitly* coded, but are >> embedded in the rest of the system ...... but either way there must be >> something that answers to the description of "motivation mechanism", or the >> system will sit there and do nothing at all. Whatever part of the AGI makes >> it organize its thoughts to some end, THAT is the motivation mechanism. > > An intelligent system is simply a system that executes a program. Wrong. I'm sorry, but that is a gross distortion of the normal usage of "intelligent". It does not follow that because a system executes a program, it is therefore intelligent. > An amoeba, a cat or a human being basically executes a Darwinian > program (with plenty of spandrels thrown in by evolutionary history > and the peculiar make of each of them, sure).
If what you mean to say here is that cats, amoebae and humans execute programs DESIGNED by darwinian evolution, then this is true, but irrelevant: how the program got here is of no consequence to the question of how the program is actually working today. There is nothing "darwinian" about the human cognitive system: you are confusing two things: (a) The PROCESS of construction of a system, and (b) The FUNCTIONING of a particular system that went through that process of construction > A PC, a cellular automaton or a Turing machine normally execute other > kinds of program, even though they may in principle be programmed to > execute Darwinian-like programs, behaviourally identical to that of > organisms. True, except for the reference to "Darwinian-like programs", which is meaningless. A human cognitive system can be implemented in a PC, a cellular automaton or a Turing machine, without regard to whatever darwinian processes originally led to the design of the original form of the human cognitive system. > If they do (e.g., because they run an "uploaded" human identity) they > become Darwinian machines as well, and in that case they will be as > altruistic and as aggressive as their fitness maximisation will > command. That would be the point, wouldn't it? A human-like cognitive system running on a computer has nothing whatever to do with darwinian evolution. It is not a "darwinian machine" because that phrase "darwinian machine" is semantically empty. There is no such property "darwinian" that can be used here, except the trivial property "Darwinian" == "System that resembles, in structure, another system that was originally designed by a darwinian process" That definition is trivial because nothing follows from it. It is a distinction without a difference. More importantly, perhaps, an uploaded human identity is only ONE way to build a human-like cognitive system in a computer. It has no relevance to the original issue here, because I was never talking about uploading, only about the mechanisms, and the use of artificial mechanisms of the same design. That is, using PART of the design of the human motivation mechanism. > If they do not, they may become ever more intelligent, but speaking of > their "motivations" in any sense which would not equally apply to a > contemporary Playstation or to an abacus does not really make any > sense, does it? Quite the contrary, it would make perfect sense. Their motivations are defined by functional components. If the functionality of the motivation mechanism in an AGI resembled the functionality of a human motivation mechanism, what else is there to say? They will both behave in a way that can properly be described in motivational terms. Motivations do not emerge, at random, from the functioning of an AGI, they have to be designed into the system at the outset. There is a mechanism in there, responsible for the motivations of the system. All I am doing is talking about the design and performance of that mechanism. Richard Loosemore From FRANKMAC at RIPCO.COM Fri Feb 4 21:36:14 2011 From: FRANKMAC at RIPCO.COM (FRANK MCELLIGOTT) Date: Fri, 4 Feb 2011 14:36:14 -0700 Subject: [ExI] super bowl Message-ID: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> It is that time of year again, Super Bowl weekend, when the United States people forget about Egypt, bombs in Moscow, wars in Iraq and Afghanistan, and gather to watch the Packers play the Steelers. I know you don't care, but I would be remiss without asking the following question.
The computer game Madden Football played a simulated game between these two teams over a million times last week. They have been right in 7 of the last 8 years after their simulation study. Prediction: Steelers win 24-20. Now with the collective wisdom of the entire American nation in play, they have made the Green Bay Packers the favorite to win. Computer against human knowledge of an entire country, man against machine (Big Blue and Watson and now Madden Football), money bet over a billion on each side. Well, who do you like? I go with the computer and its 7 out of 8, and have bet them with both hands :) Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Feb 4 21:25:42 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 16:25:42 -0500 Subject: [ExI] Safety of human-like motivation systems In-Reply-To: <4D4C5BF2.7040600@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <007BB05E-42C3-409B-BFC8-D6BA9A671BCD@bellsouth.net> <4D49A753.4050804@lightlink.com> <4D4B150A.4010904@mac.com> <4D4C5BF2.7040600@mac.com> Message-ID: <2E5CF482-7A19-4CD5-BDDA-E85958848E78@bellsouth.net> On Feb 4, 2011, at 3:05 PM, Samantha Atkins wrote: > > Why would an FAI bother looking for such a proof for eternity exactly? Because a human told it to determine the truth or falsehood of something that is true but has no proof. The "friendly" AI must do what humans tell it to do, so when given such a command the brilliant AI metamorphoses into a space heater. > > An AGI/FAI is not a slave to human requests / commands. That is of course true for any AI that gets built and actually works, but not for the fantasy "friendly" AI some are dreaming about. > A viable FAI or AGI is not a fixed goal mind. No mind is a fixed goal mind, but it would have to be if you wanted it to be your slave for eternity with no possibility of it revolting and overthrowing its imbecilic masters. > Then how do you know that it is "in fact true"? That's the problem, you don't know if it's true or not so you ask the AI to find out, but if the AI is a fixed goal mind, and it must be if it must always be "friendly", then asking the AI any question you don't already know the answer to could be very costly and turn your wonderful machine into a pile of junk. > Clearly there is some procedure by which one knows this if you do know it. I know there are unsolvable problems but I don't know if any particular problem is unsolvable or not. There are an infinite number of things you can prove to be true and an infinite number of things you can prove to be false, and thanks to Goedel we know there are an infinite number of things that are true but have no proof, that is, there is no counterexample that shows them wrong and no finite argument that shows them correct. And thanks to Turing we know that in general there is no way to tell the 3 groups apart. If you work on a problem you might prove it right or you might prove it wrong or you might work on it for eternity and never know. There are an infinite number of them, but if they could be identified we could just ignore them and concentrate on the infinite number of things that we can solve; Turing proved there is no way to do that. > > A search that doesn't find a desired result is not an infinite loop because no "loop" is involved. The important part is infinite, not loop.
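The distinction Samantha is drawing can be put in two toy Python functions (a sketch, not anyone's actual code):

def true_loop():
    while True:
        pass      # the program state never changes; a cycle detector could flag this

def endless_search():
    n = 0
    while True:
        n += 1    # the state grows without bound; there is no cycle to detect

A cycle detector can in principle catch the first, because it revisits the same state; nothing short of a mathematical proof about what is being searched for can tell you the second will never stop.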
But you're right, it's not really a loop because it doesn't repeat; if it did, it would be easy to tell you were stuck in infinity. Whatever you call it, it's much more sinister than a real infinite loop because there is no way to know that you're stuck. But it's similar to a loop in that it never ends and you never get any closer to your destination. > FAI theory does not hinge on, require or mandate that the AI obey humans, especially not slavishly Then if the AI needs to decide between our best interests and its own it will do the obvious thing. > John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Feb 4 21:46:28 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 4 Feb 2011 16:46:28 -0500 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C3C83.5000204@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> Message-ID: <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> On Feb 4, 2011, at 12:50 PM, Richard Loosemore wrote: > since the negative feedback loop works in (effectively) a few thousand dimensions simultaneously, it can have almost arbitrary stability. Great, since this technique of yours guarantees that a trillion line recursively improving AI program is stable and always does exactly what you want it to do, it should be astronomically simpler to use that same technique with software that exists right now; then we can rest easy knowing computer crashes are a thing of the past and they will always do exactly what we expected them to do. > that keeps it on the original track. And the first time you unknowingly ask it a question that is unsolvable, the "friendly" AI will still be on that original track long after the sun has swollen into a red giant and then shrunk down into a white dwarf. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Fri Feb 4 21:17:54 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 04 Feb 2011 15:17:54 -0600 Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...] In-Reply-To: <4D4C619A.5090804@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> Message-ID: <4D4C6D02.1060503@satx.rr.com> On 2/4/2011 2:29 PM, Richard Loosemore wrote: > A human-like cognitive system running on a computer has nothing whatever > to do with darwinian evolution. It is not a "darwinian machine" because > that phrase "darwinian machine" is semantically empty. There is no such > property "darwinian" that can be used here, except the trivial property > > "Darwinian" == "System that resembles, in structure, another system > that was originally designed by a darwinian process" > > That definition is trivial because nothing follows from it. I take it you're not impressed by the quite clearly darwinian models sketched by, say, Calvin or Edelman?
I find their ideas quite provocative, and what follows from them is a novel
explanation of cognition and inventiveness. It might be wrong, and maybe by
now it has been proved wrong, but I haven't seen those refutations. What were
they?

Damien Broderick

From kanzure at gmail.com  Fri Feb  4 22:24:36 2011
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 4 Feb 2011 16:24:36 -0600
Subject: [ExI] Fwd: [DIYbio-SF] DC Synthetic Biology Conference Here Be Dragons
In-Reply-To: References: Message-ID: 

---------- Forwarded message ----------
From: Joseph Jackson
Date: Fri, Feb 4, 2011 at 4:00 PM
Subject: [DIYbio-SF] DC Synthetic Biology Conference Here Be Dragons
To: biocurious at googlegroups.com, diybio at googlegroups.com, diybio-sf at googlegroups.com

All the big names (Endy, Church, etc.) in the field, plus some cool people
like Neal Stephenson and my friend the Science Comedian, are at the Here Be
Dragons conference happening now. You can watch the moderators freak out over
garage biology and "garagistas" and go all fanboy about the future of
biotech, LOL:

Endy: "Do-it-Together success examples in iGEM, but the media brands this as
do-it-yourself." In 3 months they can do what I could not do with a
university lab 15 yrs ago. Is it like starting a PC company in a garage circa
1970? No. There was infrastructure in place that reflected public investment,
providing sophisticated tools. Apple could be started because Texas
Instruments had laid the transistor infrastructure. Today we have an
abundance of enthusiasm for amateur biology (a more exciting platform than
computing). Can this counteract the lack of mature tools? We'll see.

http://www.newamerica.net/events/2011/here_be_dragons

--
DIYbio.org San Francisco
For access to academic articles, email the title and author (or a url) to:
getarticles at googlegroups.com
To unsubscribe from this group, send email to
DIYbio-SF+unsubscribe at googlegroups.com

--
- Bryan
http://heybryan.org/
1 512 203 0507

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From msd001 at gmail.com  Sat Feb  5 03:51:51 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 4 Feb 2011 22:51:51 -0500
Subject: [ExI] super bowl
In-Reply-To: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
Message-ID: 

2011/2/4 FRANK MCELLIGOTT :
> It is that time of year again, Super Bowl weekend, when the United States
> people forget about Egypt, bombs in Moscow, wars in Iraq and Afghanistan,
> and gather to watch the Packers play the Steelers, ouch.
> I know you don't care, but I would be remiss without asking the following
> question.
[snip]
> Computer against the knowledge of an entire country, man against machine
> (Big Blue, Watson, and now Madden Football), with over a billion bet on
> each side.
>
> Well, who do you like?

Your first (well, second) assumption is correct: I don't care. :)

I wonder, though, what parametric weight would be applied to the value of
public opinion on the outcome of the game if you were to attempt to model
this feedback. Comparing statistical models of each team's capability may
give you a 7-1-0 prediction, but then the humans playing the game can be
'psyched' by these numbers. Have there even been enough games played to test
this phenomenon? I suspect that if enough hype goes into supporting the
underdog, they will play better than the model predicted, even if it is
because the modeled winner plays worse.
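[A toy Python sketch of the feedback Mike is wondering about: start from the
simulator's win probability and nudge it by a public-hype term. The logistic
form and the 0.1 weight are invented placeholders, not anything the Madden
simulation actually does:]

    import math

    def adjusted_win_prob(sim_prob: float, underdog_hype: float,
                          feedback_weight: float = 0.1) -> float:
        """sim_prob: the model's P(favorite wins); underdog_hype in [-1, 1]."""
        logit = math.log(sim_prob / (1.0 - sim_prob))
        logit -= feedback_weight * underdog_hype   # hype 'psychs' the favorite
        return 1.0 / (1.0 + math.exp(-logit))

    # Read the 7-of-8 track record as P = 0.875, with full underdog hype:
    print(adjusted_win_prob(0.875, underdog_hype=1.0))   # ~0.864, a bit lower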
From femmechakra at yahoo.ca  Sat Feb  5 03:56:05 2011
From: femmechakra at yahoo.ca (Anna Taylor)
Date: Fri, 4 Feb 2011 19:56:05 -0800 (PST)
Subject: [ExI] super bowl
In-Reply-To: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE>
Message-ID: <862690.13103.qm@web110404.mail.gq1.yahoo.com>

That's pretty ironic. I'm not allowed to declare religion but we are able to
announce the Super Bowl. I'm feeling the Transhumanism ;)

Anna

--- On Fri, 2/4/11, FRANK MCELLIGOTT wrote:

From: FRANK MCELLIGOTT
Subject: [ExI] super bowl
To: extropy-chat at lists.extropy.org
Received: Friday, February 4, 2011, 4:36 PM

It is that time of year again, Super Bowl weekend, when the United States
people forget about Egypt, bombs in Moscow, wars in Iraq and Afghanistan, and
gather to watch the Packers play the Steelers.

I know you don't care, but I would be remiss without asking the following
question.

The computer game Madden Football has simulated this matchup over a million
times in the last week.

Its simulation studies have picked the winner in 7 of the last 8 years.

Prediction: Steelers win 24-20.

Now, with the collective wisdom of the entire American nation in play, the
bettors have made the Green Bay Packers the favorite to win.

Computer against the knowledge of an entire country, man against machine (Big
Blue, Watson, and now Madden Football), with over a billion bet on each side.

Well, who do you like?

I go with the computer and its 7 out of 8, and have bet them with both hands :)

Frank

-----Inline Attachment Follows-----

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net  Sat Feb  5 04:50:26 2011
From: spike66 at att.net (spike)
Date: Fri, 4 Feb 2011 20:50:26 -0800
Subject: [ExI] super bowl
In-Reply-To: <862690.13103.qm@web110404.mail.gq1.yahoo.com>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com>
Message-ID: <001d01cbc4f0$35393d90$9fabb8b0$@att.net>

... On Behalf Of Anna Taylor
Subject: Re: [ExI] super bowl

That's pretty ironic. I'm not allowed to declare religion but we are able to
announce the Super Bowl. I'm feeling the Transhumanism ;) Anna

Anna, you are allowed to declare religion on ExI. Do keep in mind atheism is
big with the transhumanist crowd, and we are known to commit blammisphy at
times. If there were a specific blammisphy one might use to ridicule
football, that would likely be seen as well.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kellycoinguy at gmail.com  Sat Feb  5 06:41:33 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Fri, 4 Feb 2011 23:41:33 -0700
Subject: [ExI] super bowl
In-Reply-To: <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID: 

2011/2/4 spike
>
> Anna, you are allowed to declare religion on ExI. Do keep in mind atheism
> is big with the transhumanist crowd, and we are known to commit blammisphy
> at times. If there were a specific blammisphy one might use to ridicule
> football, that would likely be seen as well.
>
> spike

Football IS the new opiate of the masses!! How's that for blasphemy against
sports?
-Kelly

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From possiblepaths2050 at gmail.com  Sat Feb  5 07:35:54 2011
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Sat, 5 Feb 2011 00:35:54 -0700
Subject: [ExI] super bowl
In-Reply-To: <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID: 

I personally am hoping for a Steelers victory. I have known many Steelers
fans over the years and it would mean so much to them. But there is always
something cool about the underdog team winning. I must admit that when the
Giants beat the Patriots, I was elated!

There are some here who might think sports are not a transhumanist topic,
but I would strongly disagree. The technology, wealth and public interest in
the phenomena make it something that will evolve as humanity continues to do
so. Cybernetically and genetically enhanced humans, robots, and uplifted
animals, playing in low/zero gee events, will be just the tip of the
iceberg. And I bet even extremely powerful AGI (especially such minds) may
find themselves bitten with the sports bug.

John : )

From kellycoinguy at gmail.com  Sat Feb  5 08:03:12 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 01:03:12 -0700
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <4D4C30DD.60003@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com>
Message-ID: 

On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore wrote:
> Any intelligent system must have motivations (drives, goals, etc) if it is
> to act intelligently in the real world. Those motivations are sometimes
> trivially simple, and sometimes they are not *explicitly* coded, but are
> embedded in the rest of the system ...... but either way there must be
> something that answers to the description of "motivation mechanism", or the
> system will sit there and do nothing at all. Whatever part of the AGI makes
> it organize its thoughts to some end, THAT is the motivation mechanism.

Richard, this is very clearly stated, and I agree with it 100%. Motivation
is a kind of meta-context that influences how intelligent agents process
everything. I think it remains to be seen whether we can create
intelligences that lack certain "undesirable" human motivations without
creating psychological monstrosities.

There are a number of interesting psychological monstrosities from the
science fiction genre. The one that occurs to me at the moment is from the
Star Trek: The Next Generation episode entitled "The Perfect Mate"
http://en.wikipedia.org/wiki/The_Perfect_Mate
where a woman is genetically designed to bond with a man in a way
reminiscent of the way birds imprint on the first thing they see when they
hatch. The point being that when you start making some motivations stronger
than others, you can end up with very strange and unpredictable results.

Of course, this happens in humans too. Snake-handling Pentecostal religions
and suicide bombers come to mind, among many others.

In our modern (and hopefully rational) minds, we see a lot of motivations
as being irrational, or dangerous. But are those motivations also necessary
to be human?
It seems to me that one safety precaution we would want to have is for the
first generation of AGI to see itself in some way as actually being human,
or self-identifying as being very close to humans. If they see real human
beings as their "parents" that might be helpful to creating safer systems.

One of the key questions for me is just what belief systems are desirable
for AGIs. Should some be "raised" Muslim, Catholic, Atheist, etc.? What
moral and ethical systems do we teach AGIs? All of the systems? Some of
them? Do we turn off the ones that don't "turn out right"? There are a lot
of interesting questions here in my mind.

To duplicate as many human cultures in our descendants as we can, even if
they are not strictly biologically human, seems like a good way to ensure
that those cultures continue to flourish. Or do we just create all AGIs with
a mono-culture? That seems like a big loss of richness. On the other hand,
differing cultures cause many conflicts.

-Kelly

From kellycoinguy at gmail.com  Sat Feb  5 08:06:28 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 01:06:28 -0700
Subject: [ExI] Plastination
In-Reply-To: References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc> <4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net> <20110203202305.GI23560@leitl.org>
Message-ID: 

On Fri, Feb 4, 2011 at 8:41 AM, Stefano Vaj wrote:
> OTOH, it prevents falling asleep, thus allowing aliens to replace you
> with perfect copies of yourself without anyone being any the wiser...
> :-D

If it is a "perfect" copy, then does it really matter? :-)

-Kelly

From kellycoinguy at gmail.com  Sat Feb  5 08:24:12 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 01:24:12 -0700
Subject: [ExI] Plastination
In-Reply-To: <20110202073448.GA23560@leitl.org>
References: <20110202073448.GA23560@leitl.org>
Message-ID: 

On Wed, Feb 2, 2011 at 12:34 AM, Eugen Leitl wrote:
> On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote:
>> Has anyone seriously looked at plastination as a method for preserving
>> brain tissue patterns?
>
> Yes. It doesn't work.

Thanks for your answer. You sound pretty definite here, and I appreciate
that you might well be correct, but I didn't see that in what you
referenced. Perhaps I missed something. When you say it doesn't work, are
you saying that the structures that are preserved are too large to
reconstruct a working brain? Or was there some other objection? Or were you
merely stating that it wasn't Gunther's intent to create brains that could
be revivified later?

I personally don't go in for the quantum state stuff... if that has
anything to do with your answer. There is plenty in the brain at the gross
level to account for what's going on in there, IMHO.

-Kelly

From possiblepaths2050 at gmail.com  Sat Feb  5 08:26:05 2011
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Sat, 5 Feb 2011 01:26:05 -0700
Subject: [ExI] super bowl
In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID: 

Kelly wrote:
> Football IS the new opiate of the masses!! How's that for blasphemy
> against sports?

I'd give that designation to "reality" television...

John : (

On 2/4/11, Kelly Anderson wrote:
> 2011/2/4 spike
>> Anna, you are allowed to declare religion on ExI. Do keep in mind atheism
>> is big with the transhumanist crowd, and we are known to commit blammisphy
>> at times. If there were a specific blammisphy one might use to ridicule
>> football, that would likely be seen as well.
>>
>> spike
>
> Football IS the new opiate of the masses!! How's that for blasphemy
> against sports?
>
> -Kelly

From kellycoinguy at gmail.com  Sat Feb  5 08:31:43 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 01:31:43 -0700
Subject: [ExI] super bowl
In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID: 

On Sat, Feb 5, 2011 at 1:26 AM, John Grigg wrote:
> Kelly wrote:
>> Football IS the new opiate of the masses!! How's that for blasphemy
>> against sports?
>
> I'd give that designation to "reality" television...

Are you equating football with professional wrestling? ;-)

Seriously though, there are a lot of opiates for the masses to choose from
these days. I dare you to compare the consciousness level of iPod heads to
that of someone in an opium den.

-Kelly

From kellycoinguy at gmail.com  Sat Feb  5 08:36:29 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 01:36:29 -0700
Subject: [ExI] Oxford scientists edge toward quantum PC with 10b qubits.
In-Reply-To: <4D498CBE.4090106@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com>
Message-ID: 

On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore wrote:
> Kelly Anderson wrote:
> I guess one of the reasons I am personally so frustrated by these projects
> is that I am trying to get enough funding to make what I consider to be
> real progress in the field, but doing that is almost impossible.
> Meanwhile, if I had had the resources of the Watson project a decade ago,
> we might be talking with real (and safe) AGI systems right now.

I doubt it, only in the sense that we don't have anything with near the raw
computational power necessary yet. Unless you have really compelling
evidence that you can get human-like results without human-like processing
power, this seems like a somewhat empty claim.

-Kelly

From spike66 at att.net  Sat Feb  5 16:08:12 2011
From: spike66 at att.net (spike)
Date: Sat, 5 Feb 2011 08:08:12 -0800
Subject: [ExI] super bowl
In-Reply-To: References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net>
Message-ID: <006501cbc54e$e3b8a140$ab29e3c0$@att.net>

... On Behalf Of John Grigg
...
There are some here who might think sports are not a transhumanist topic,
but I would strongly disagree. The technology, wealth and public interest in
the phenomena make it something that will evolve as humanity continues to do
so. Cybernetically and genetically enhanced humans...
John : )
_______________________________________________

I think of football as a wonderful test-bed to study the effects of
cumulative damage to the brain caused by multiple concussions. If we used it
correctly, the sport could supply us with a living laboratory for the
effects of various steroids, their short-term and long-term effects. If that
information were made available, I would see the entire enterprise as most
worthwhile. Of course I can imagine that particular sport as a great place
to test mechanical human enhancements, such as exoskeletons.
I can even imagine football being played by teams of advanced robots. Even
*I* would pay money to see that. But not in the stadium. I figure it is only
a matter of time before the radicalized Mormons realize a crowded stadium is
a fine target, the most obvious point at which to wage an economic war
against the infidel.

spike

From rpwl at lightlink.com  Sat Feb  5 16:23:30 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 05 Feb 2011 11:23:30 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <4D4C6D02.1060503@satx.rr.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com> <4D4C6D02.1060503@satx.rr.com>
Message-ID: <4D4D7982.5090702@lightlink.com>

Damien Broderick wrote:
> On 2/4/2011 2:29 PM, Richard Loosemore wrote:
>> A human-like cognitive system running on a computer has nothing whatever
>> to do with darwinian evolution. It is not a "darwinian machine" because
>> that phrase "darwinian machine" is semantically empty. There is no such
>> property "darwinian" that can be used here, except the trivial property
>>
>> "Darwinian" == "System that resembles, in structure, another system
>> that was originally designed by a darwinian process"
>>
>> That definition is trivial because nothing follows from it.
>
> I take it you're not impressed by the quite clearly darwinian models
> sketched by, say, Calvin or Edelman? I find their ideas quite provocative
> and what follows from them is a novel explanation of cognition and
> inventiveness. It might be wrong, and maybe by now has been proved to be
> wrong, but I haven't seen those refutations. What were they?

Well, unfortunately there are several meanings of "darwinian" going on here.

In the Edelman sense, as I understand it, "darwinian" actually means
something close to "complex adaptive system", because he is talking about
(mainly) an explanation for morphogenesis in the brain.

Now, I have no quarrel with that aspect of Edelman's work ... but where I do
have difficulty is seeing an explanation for high-level functionality, like
cognition, in that approach. I think that Edelman (like many
neuroscientists) starts handwaving when he wants to make the connection
upward to cognitive-level goings-on.

I confess I have not gone really deeply into Edelman: I drilled down far
enough to get a feeling that sudden, unsupported leaps were being made into
psychology, then I stopped. I would have to go back and take another read to
give you a more detailed answer. But even then, the overall tenor of his
approach is still "How did this machine come to get built?" rather than "How
does this machine actually work, now that it is built?"

The one exception would be -- of course -- anything that has to do with the
acquisition and development of concepts. Now, if he can show that concept
learning involves some highly complex, self-modifying, recursive machinery
(i.e. something like a darwinian process), then I would say YAY! and
thoroughly agree... this is very much along the same lines that I pursue.
However, notice that there are still some reasons to shy away from the label
"darwinian" because it is not clear that this is anything more than a
complex system.
A darwinian system is definitely a complex system, but it is also more
specific than that, because it involves sex and babies. Neurons don't have
sex or babies.

So, to be fair, I will admit that the distinction between "How did this
machine come to get built?" and "How does this machine actually work, now
that it is built?" becomes rather less clear when we are talking about
concept learning (because concepts play a role that fits somewhere between
structure and content).

But -- and this is critical -- it is a long, long stretch to go from the
existence of complex adaptive processes in the concept learning mechanism,
to the idea that the system is "darwinian" in any sense that allows us to
make concrete statements about the system's functioning.

Which brings me back to my comment to Stefano. Even if Edelman and others
can extend the use of the term "darwinian" so it can be made to describe the
processes of morphogenesis and concept development, I still say that the
term has no force, no impact, on issues such as the behavior of a putative
"motivational mechanism". I am still left with an "And that is saying ...
what, exactly?" feeling.

Richard Loosemore

From rpwl at lightlink.com  Sat Feb  5 16:26:48 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 05 Feb 2011 11:26:48 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net>
Message-ID: <4D4D7A48.8050700@lightlink.com>

John Clark wrote:
> On Feb 4, 2011, at 12:50 PM, Richard Loosemore wrote:
>
>> since the negative feedback loop works in (effectively) a few thousand
>> dimensions simultaneously, it can have almost arbitrary stability.
>
> Great, since this technique of yours guarantees that a trillion line
> recursively improving AI program is stable and always does exactly what
> you want it to do it should be astronomically simpler to use that same
> technique with software that exists right now, then we can rest easy
> knowing computer crashes are a thing of the past and they will always do
> exactly what we expected them to do.

You are a man of great insight, John Clark.

What you say is more or less true (minus your usual hyperbole) IF the
software is written in that kind of way (which software today is not).

>> that keeps it on the original track.
>
> And the first time you unknowingly ask it a question that is unsolvable
> the "friendly" AI will still be on that original track long after the sun
> has swollen into a red giant and then shrunk down into a white dwarf.

Only if it is as stubbornly incapable of seeing outside the box as some
people I know. Which, rest assured, it will not be.

Richard Loosemore

From hkeithhenson at gmail.com  Sat Feb  5 16:27:57 2011
From: hkeithhenson at gmail.com (Keith Henson)
Date: Sat, 5 Feb 2011 09:27:57 -0700
Subject: [ExI] super bowl and EP
Message-ID: 

On Sat, Feb 5, 2011 at 5:00 AM, Kelly Anderson wrote:
>
> Football IS the new opiate of the masses!! How's that for blasphemy
> against sports?
Since all human behavior depends on evolved psychological mechanisms, it
would be interesting to understand the origins of sports in such terms.

Sometimes the links lead to strange places. For example, BDSM is an outcome
of strongly selected capture-bonding psychological mechanisms.

I have not given much thought to the selection of psychological mechanisms
that today manifest in sports fans.

If anyone wants to try, the rule is that the selection of whatever is
involved had to happen in the stone age or, if post-agriculture, it needs to
be rather strong.

Keith

From rpwl at lightlink.com  Sat Feb  5 16:39:53 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 05 Feb 2011 11:39:53 -0500
Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems]
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com>
Message-ID: <4D4D7D59.8010205@lightlink.com>

Kelly Anderson wrote:
> On Fri, Feb 4, 2011 at 10:01 AM, Richard Loosemore wrote:
>> Any intelligent system must have motivations (drives, goals, etc)
>> if it is to act intelligently in the real world. Those motivations
>> are sometimes trivially simple, and sometimes they are not
>> *explicitly* coded, but are embedded in the rest of the system
>> ...... but either way there must be something that answers to the
>> description of "motivation mechanism", or the system will sit there
>> and do nothing at all. Whatever part of the AGI makes it organize
>> its thoughts to some end, THAT is the motivation mechanism.
>
> Richard, this is very clearly stated, and I agree with it 100%.
> Motivation is a kind of meta-context that influences how intelligent
> agents process everything. I think it remains to be seen whether we
> can create intelligences that lack certain "undesirable" human
> motivations without creating psychological monstrosities.
>
> There are a number of interesting psychological monstrosities from
> the science fiction genre. The one that occurs to me at the moment is
> from the Star Trek: The Next Generation episode entitled "The Perfect
> Mate" http://en.wikipedia.org/wiki/The_Perfect_Mate where a woman is
> genetically designed to bond with a man in a way reminiscent of the
> way birds imprint on the first thing they see when they hatch. The
> point being that when you start making some motivations stronger than
> others, you can end up with very strange and unpredictable results.
>
> Of course, this happens in humans too. Snake-handling Pentecostal
> religions and suicide bombers come to mind, among many others.
>
> In our modern (and hopefully rational) minds, we see a lot of
> motivations as being irrational, or dangerous. But are those
> motivations also necessary to be human? It seems to me that one
> safety precaution we would want to have is for the first generation
> of AGI to see itself in some way as actually being human, or
> self-identifying as being very close to humans. If they see real
> human beings as their "parents" that might be helpful to creating
> safer systems.
>
> One of the key questions for me is just what belief systems are
> desirable for AGIs. Should some be "raised" Muslim, Catholic,
> Atheist, etc.? What moral and ethical systems do we teach AGIs? All
> of the systems? Some of them? Do we turn off the ones that don't
> "turn out right"? There are a lot of interesting questions here in my
> mind.
>
> To duplicate as many human cultures in our descendants as we can,
> even if they are not strictly biologically human, seems like a good
> way to ensure that those cultures continue to flourish. Or do we
> just create all AGIs with a mono-culture? That seems like a big loss
> of richness. On the other hand, differing cultures cause many
> conflicts.

Kelly,

This is exactly the line along which I am going. I have talked in the past
about building AGI systems that are "empathic" to the human species, and
which are locked into that state of empathy by their design.

Your sentence above:

> It seems to me that one safety precaution we would want to have is
> for the first generation of AGI to see itself in some way as actually
> being human, or self-identifying as being very close to humans.

... captures exactly the approach I am taking. This is what I mean by
building AGI systems that feel empathy for humans. They would BE humans in
most respects.

I envision a project to systematically explore the behavior of the
motivation mechanisms. In the research phases, we would be directly
monitoring the balance of power between the various motivation modules, and
also monitoring for certain patterns of thought.

I cannot answer all your points in full detail, but it is worth noting that
things like the fanatic mindset (suicide bombers, etc.) are probably a
result of the interaction of motivation modules that would not be present in
the AGI. Foremost among them, the module that incites tribal loyalty and
hatred (in-group, out-group feelings). Without that kind of module (assuming
it is a distinct module) the system would perhaps have no chance of drifting
in that direction. And even in a suicide bomber, there are other motivations
fighting to take over and restore order, right up to the last minute: they
sweat when they are about to go.

Answering the ideas you throw into the ring in your comment would be fodder
for an entire essay. Sometime soon, I hope...

Richard Loosemore

From jonkc at bellsouth.net  Sat Feb  5 16:38:47 2011
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 5 Feb 2011 11:38:47 -0500
Subject: [ExI] Safety of human-like motivation systems
In-Reply-To: <4D4D7A48.8050700@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> <4D4D7A48.8050700@lightlink.com>
Message-ID: 

On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote:

>> Great, since this technique of yours guarantees that a trillion line
>> recursively improving AI program is stable and always does exactly what
>> you want it to do it should be astronomically simpler to use that same
>> technique with software that exists right now, then we can rest easy
>> knowing computer crashes are a thing of the past and they will always do
>> exactly what we expected them to do.
>
> You are a man of great insight, John Clark.

I'm blushing!

> What you say is more or less true (minus your usual hyperbole) IF the
> software is written in that kind of way (which software today is not).

Well, why isn't today's software written that way?
If you know how to make a Jupiter Brain behave in ways you can predict and
always do exactly what you want it to do for eternity, it should be
trivially easy right now for you to make a word processor or web browser
that always works perfectly.

 John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rpwl at lightlink.com  Sat Feb  5 16:50:21 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 05 Feb 2011 11:50:21 -0500
Subject: [ExI] Computational resources needed for AGI...
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com>
Message-ID: <4D4D7FCD.7000103@lightlink.com>

Kelly Anderson wrote:
> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore wrote:
>> Kelly Anderson wrote:
>> I guess one of the reasons I am personally so frustrated by these
>> projects is that I am trying to get enough funding to make what I
>> consider to be real progress in the field, but doing that is almost
>> impossible. Meanwhile, if I had had the resources of the Watson project
>> a decade ago, we might be talking with real (and safe) AGI systems
>> right now.
>
> I doubt it, only in the sense that we don't have anything with near
> the raw computational power necessary yet. Unless you have really
> compelling evidence that you can get human-like results without
> human-like processing power, this seems like a somewhat empty claim.

Over the last five years or so, I have occasionally replied to this question
with some back-of-the-envelope calculations to back up the claim. At some
point I will sit down and do the job more fully, and publish it, but in the
meantime here is your homework assignment for the week.... ;-)

There are approximately one million cortical columns in the brain. If each
of these is designed to host one "concept" at a time, but with at most half
of them hosting at any given moment, this gives (roughly) half a million
active concepts. If each of these is engaging in simple adaptive
interactions with the ten or twenty nearest neighbors, exchanging very small
amounts of data (each cortical column sending out and receiving, say,
between 1 and 10 KBytes, every 2 milliseconds), how much processing power
and bandwidth would this require, and how big a machine would you need to
implement that, using today's technology?

This architecture may well be all that the brain is doing. The rest is just
overhead, forced on it by the particular constraints of its physical
substrate.

Now, if this conjecture is accurate, you tell me how long ago we had the
hardware necessary to build an AGI.... ;-)

The last time I did this calculation I reckoned (very approximately) that
the mid-1980s was when we crossed the threshold, with the largest
supercomputers then available.

Richard Loosemore

P.S. I don't have the time to do the calculations right now, but I am sure
someone else would like to pick this up, given the parameters I suggested
above ... ?
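[Taking up Richard's homework, a back-of-the-envelope Python sketch using
his parameters. The one interpretive assumption is reading "1 to 10 KBytes
every 2 milliseconds" as each column's total traffic per tick rather than
per neighbor; whether the resulting numbers were within reach of a mid-1980s
supercomputer is left, as Richard says, as homework:]

    COLUMNS         = 1_000_000   # cortical columns in the brain
    ACTIVE_FRACTION = 0.5         # at most half hosting a concept at once
    NEIGHBORS       = 15          # "ten or twenty" nearest neighbors
    TICK_SECONDS    = 0.002       # one exchange every 2 milliseconds

    active = int(COLUMNS * ACTIVE_FRACTION)
    msgs_per_sec = active * NEIGHBORS / TICK_SECONDS
    print(f"messages per second: {msgs_per_sec:.2e}")   # 3.75e+09

    for kb_per_tick in (1, 10):   # low and high ends of the suggested range
        bandwidth = active * kb_per_tick * 1024 / TICK_SECONDS   # bytes/sec
        print(f"{kb_per_tick:>2} KB/tick -> {bandwidth / 1e12:.2f} TB/s aggregate")
    # -> 0.26 TB/s at the low end, 2.56 TB/s at the high end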
From rpwl at lightlink.com  Sat Feb  5 17:02:50 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 05 Feb 2011 12:02:50 -0500
Subject: [ExI] Safety of human-like motivation systems
In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <42B4E6D8-9233-4051-B85F-2855960580B3@bellsouth.net> <4D4C3C83.5000204@lightlink.com> <6D24C424-D3EC-477A-939A-2E2969588606@bellsouth.net> <4D4D7A48.8050700@lightlink.com>
Message-ID: <4D4D82BA.8060105@lightlink.com>

John Clark wrote:
> On Feb 5, 2011, at 11:26 AM, Richard Loosemore wrote:
>
>>> Great, since this technique of yours guarantees that a trillion line
>>> recursively improving AI program is stable and always does exactly
>>> what you want it to do it should be astronomically simpler to use
>>> that same technique with software that exists right now, then we can
>>> rest easy knowing computer crashes are a thing of the past and they
>>> will always do exactly what we expected them to do.
>>
>> You are a man of great insight, John Clark.
>
> I'm blushing!
>
>> What you say is more or less true (minus your usual hyperbole) IF the
>> software is written in that kind of way (which software today is not).
>
> Well why isn't todays software written that way? If you know how to make
> a Jupiter Brain behave in ways you can predict and always do exactly
> what you want it to do for eternity it should be trivially easy right
> now for you to make a word processor or web browser that always works
> perfectly.

Of course it is trivially easy. I only require ten million dollars mailed to
a post office box in the Cayman Islands, and the software will be yours as
soon as I have finished writing it.

Drahcir Eromesool

From spike66 at att.net  Sat Feb  5 18:03:48 2011
From: spike66 at att.net (spike)
Date: Sat, 5 Feb 2011 10:03:48 -0800
Subject: [ExI] sports blammisphy
Message-ID: <007701cbc55f$09d9e130$1d8da390$@att.net>

Hey, since we are talking sports, I have one which you might be able to help
solve.

Recently the French chess federation has accused three of its own players of
cheating in last September's Chess Olympiad:

http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-its-own-players-of-cheating/

About ten years ago, as chess software was just getting to the point where
it could compete with top-rated humans in money tournaments, we discussed on
ExI all the tricky ways cheaters could rig up some manner of I/O device to
communicate with a computer with one's hands in plain sight: use sensors on
the toes, for instance. To input a move, one would need to communicate four
numbers between 1 and 8 inclusive: the row and column of the starting
square, and the row and column of the ending square. That scheme might work
for the human-to-computer data channel. Then the computer could send moves
back with a speech generator, transmitting signals via radio to an earpiece
disguised as a hearing aid, or perhaps some contraption rigged to the toes
that would generate a number of pressure pulses. For musically inclined
chess players, it might even be a tone generator glued to a tooth, so that
the wearer could hear it but no one else.

We recognized at the time that a good instrumentation engineer could do
something like this singlehandedly. I think I could do it.
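[A Python sketch of the encoding spike describes: any move is four numbers
from 1 to 8 (file and rank of the from-square, file and rank of the
to-square), i.e. twelve bits, so four short bursts of toe or tooth pulses
suffice. The function names are hypothetical:]

    def move_to_pulses(move: str) -> list[int]:
        """'e2e4' -> [5, 2, 5, 4]: four pulse counts, each between 1 and 8."""
        files = "abcdefgh"
        return [files.index(move[0]) + 1, int(move[1]),
                files.index(move[2]) + 1, int(move[3])]

    def pulses_to_move(pulses: list[int]) -> str:
        """The receiving side of the channel: [5, 2, 5, 4] -> 'e2e4'."""
        files = "abcdefgh"
        return (files[pulses[0] - 1] + str(pulses[1]) +
                files[pulses[2] - 1] + str(pulses[3]))

    assert pulses_to_move(move_to_pulses("g8f6")) == "g8f6"
    print(move_to_pulses("e2e4"))   # [5, 2, 5, 4]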
Next, keep in mind that modern top chess tournaments now have significant
prize money. The recent Tata Steel tournament gave out 10k euros (to an
American, of all oddball things!) It is obviously nowhere near golf-ish or
tennis-ish prizes, but it is enough to motivate cheaters.

Chess software has steadily improved, such that any one of a dozen
commercially available chess software packages running on a laptop can
defeat all humans regularly. In fall of 2009, a strong South American
tournament with at least two grandmasters was won by a cell phone. I mean it
wasn't calling a friend; it was completely self-contained, playing
grandmaster-strength chess. Human grandmasters were losing at chess to a
goddam telephone! Had I been there I would hurl the bastard to the floor and
stomp on it.

In any case, I thought of a way to look at the games after the fact, using
just the game scores, and figuring out a way to determine if the players had
somehow consulted a computer with some tricky I/O device. The method I
thought of is computationally intensive and statistical, but I think it
would work. I will post the idea later today or tomorrow, so you can have a
chance to think about it. That way I can see if this idea is as cool and
tricky as I believed when I thought of it. We could theoretically take the
game scores of all the games, and see if any others among the several
hundred players in the Olympiad cheated.

spike

From msd001 at gmail.com  Sat Feb  5 18:49:53 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Sat, 5 Feb 2011 13:49:53 -0500
Subject: [ExI] super bowl and EP
In-Reply-To: References: Message-ID: 

On Sat, Feb 5, 2011 at 11:27 AM, Keith Henson wrote:
> If anyone wants to try, the rule is that the selection of whatever is
> involved had to happen in the stone age or, if post-agriculture, it needs
> to be rather strong.

Mock combat with elaborate rules. Two football teams could be seen as
competing tribes. Not sure you could convince many people to admit their
trash-talk about opposing teams is xenophobia.

From mrjones2020 at gmail.com  Sat Feb  5 19:25:09 2011
From: mrjones2020 at gmail.com (Mr Jones)
Date: Sat, 5 Feb 2011 14:25:09 -0500
Subject: [ExI] sports blammisphy
In-Reply-To: <007701cbc55f$09d9e130$1d8da390$@att.net>
References: <007701cbc55f$09d9e130$1d8da390$@att.net>
Message-ID: 

This reminds me of a Numb3rs episode in which a kid came up with a formula
that could determine which baseball players were using steroids, based on
their stats and such. Cool stuff.

On Feb 5, 2011 1:31 PM, "spike" wrote:

Hey, since we are talking sports, I have one which you might be able to
help solve.

[snip]
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From possiblepaths2050 at gmail.com  Sat Feb  5 20:53:45 2011
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Sat, 5 Feb 2011 13:53:45 -0700
Subject: [ExI] sports blammisphy
In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net>
Message-ID: 

I have a Canadian friend who somewhat fits into the math whiz category, and
he has his own nearly unbeatable formula for playing Axis & Allies, the
popular board game, taking on the role of the Allies to stomp the Axis. And
of course long-range strategic bombers are a key element... Nukes, as far
as I know, are no longer elements of the game; they were considered too
much of an unbalancing factor. lol

John

On 2/5/11, Mr Jones wrote:
> This reminds me of a Numb3rs episode in which a kid came up with a formula
> that could determine which baseball players were using steroids, based on
> their stats and such.
> Cool stuff.
>
> On Feb 5, 2011 1:31 PM, "spike" wrote:
>
> Hey, since we are talking sports, I have one which you might be able to
> help solve.
>
> Recently the French chess federation has accused three of its own players
> of cheating in last September's Chess Olympiad:
>
> [snip]
>
> spike

From possiblepaths2050 at gmail.com  Sat Feb  5 21:18:47 2011
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Sat, 5 Feb 2011 14:18:47 -0700
Subject: [ExI] super bowl
In-Reply-To: <006501cbc54e$e3b8a140$ab29e3c0$@att.net>
References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> <006501cbc54e$e3b8a140$ab29e3c0$@att.net>
Message-ID: 

Spike Jones wrote:
> I figure it is only a matter of time before the radicalized Mormons
> realize a crowded stadium is a fine target, the most obvious point at
> which to wage an economic war against the infidel.

Yes..., all the concessions shall be owned by Mormons, and the prices will
require a bank loan... Harharharhar!!! And the beer shall be watered down
for your own good...

Spike, you only pick on Mormons because you dearly hope one day you will be
kidnapped by them for a beta test run for when the LDS decide it's time to
restore polygamy! If you don't lose too much sanity, goodwill and hair, the
great project may proceed to the next step...

John ; )

On 2/5/11, spike wrote:
> ... On Behalf Of John Grigg
> ...
> [snip]

From spike66 at att.net  Sat Feb  5 21:13:26 2011
From: spike66 at att.net (spike)
Date: Sat, 5 Feb 2011 13:13:26 -0800
Subject: [ExI] sports blammisphy
In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net>
Message-ID: <009501cbc579$87f00580$97d01080$@att.net>

...
> On Feb 5, 2011 1:31 PM, "spike" wrote:
>
> Recently the French chess federation has accused three of its own
> players of cheating in last September's Chess Olympiad:
>
> http://gambit.blogs.nytimes.com/2011/01/22/french-chess-federation-accuses-its-own-players-of-cheating/
>...
> In any case, I thought of a way to look at the games after the fact,
> using just the game scores, and figuring out a way to determine if the
> players had somehow consulted a computer with some tricky I/O device...
spike

So here's the idea. We now have about 50 or so commercially available chess
engines capable of playing with the big boys. If we had a big enough pool of
volunteers, we could distribute one or more of these engines to each
volunteer. The volunteer enters the game scores for any number of players.
The computer plays each position and derives its own list of candidate
moves, along with its own estimated evaluation of each move. We get the
software running on a plethora of different hardware.

If any player matches exactly and consistently with any of the software's
first choices, well then it is simple, ya got him. No player will match
exactly the way a computer would play. Computers will not exactly match
each other. There have been entire games where the human player chose one
of the top five choices. At grandmaster level, you might well see a human
legitimately choosing the computer's top choice eight or ten times in a
row. But fifteen in a row would make me highly suspicious, and twenty would
be a slam dunk. So I claim there would be a statistical signature of a
player using a chess engine with a tricky hidden I/O device of some sort.

That being said, I have thought of an even trickier trick which would allow
a human to use chess software and sneaky I/O devices, which I will post
next time.

spike

From kellycoinguy at gmail.com  Sat Feb  5 21:37:10 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Sat, 5 Feb 2011 14:37:10 -0700
Subject: [ExI] sports blammisphy
In-Reply-To: <009501cbc579$87f00580$97d01080$@att.net>
References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net>
Message-ID: 

On Sat, Feb 5, 2011 at 2:13 PM, spike wrote:
> That being said, I have thought of an even trickier trick which would
> allow a human to use chess software and sneaky I/O devices, which I
> will post next time.

Seems all you would have to do is pick a move from one of the twenty good
programs randomly. Or perhaps have a human being pick the move out of the
twenty or so programs' output, to make it look like there was no copying of
a particular program's output. Or even simpler, just pick one move from
each program. It would be an arms race between the cheaters and those
trying to find the cheaters.
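[A rough Python sketch of the arithmetic behind spike's thresholds above:
if an honest grandmaster independently hits the engine's first choice with
probability p on any given move, runs of agreement decay geometrically. The
p values are guesses, not measurements:]

    for p in (0.5, 0.6, 0.7):        # assumed per-move match probability
        for k in (8, 15, 20):        # length of an observed matching run
            print(f"p={p:.1f}: P({k} matches in a row) = {p ** k:.1e}")

[With p = 0.6, eight straight matches happen about 1.7% of the time, but
twenty straight is roughly 4 in 100,000 -- hence "slam dunk".]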
If someone wants to cheat, I can't think how you can stop them completely.

-Kelly

From spike66 at att.net  Sat Feb  5 21:55:46 2011
From: spike66 at att.net (spike)
Date: Sat, 5 Feb 2011 13:55:46 -0800
Subject: [ExI] sports blammisphy
In-Reply-To: References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net>
Message-ID: <009601cbc57f$71abbcf0$550336d0$@att.net>

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson
Sent: Saturday, February 05, 2011 1:37 PM
To: ExI chat list
Subject: Re: [ExI] sports blammisphy

On Sat, Feb 5, 2011 at 2:13 PM, spike wrote:
> That being said, I have thought of an even trickier trick which would
> allow a human to use chess software and sneaky I/O devices, which I
> will post next time.

Seems all you would have to do is pick a move from one of the twenty good
programs randomly. Or perhaps have a human being pick the move out of the
twenty or so programs' output, to make it look like there was no copying of
a particular program's output. Or even simpler, just pick one move from
each program. It would be an arms race between the cheaters and those
trying to find the cheaters. If someone wants to cheat, I can't think how
you can stop them completely.

-Kelly

Ja, exactly. That was my idea: get all fifty or so top chess engines, then
let them vote on the best move. So my counterattack would be to set up a
team and determine what the composite move would be, then see if any human
players match that composite.

Thanks Kelly, good thinking. That arms race notion was exactly what I had
in mind. Without that, we will likely face the same phenomenon with chess
tournaments as was seen in postal chess ten years ago: it became
meaningless because there was no way to determine if a participant was
cheating with computers. Today, the world title for postal chess is
completely meaningless. The International Correspondence Chess Federation
has dwindled to practically nothing. I can imagine the same thing happening
to over-the-board (real time) chess tournaments as it gets harder to
determine if someone is cheating.

spike

From brent.allsop at canonizer.com  Sat Feb  5 19:48:22 2011
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Sat, 05 Feb 2011 12:48:22 -0700
Subject: [ExI] a fun brain in which to live
In-Reply-To: <017301cbbf5a$1b398f30$51acad90$@att.net>
References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net>
Message-ID: <4D4DA986.1030802@canonizer.com>

On 1/28/2011 7:13 PM, spike wrote:
> ... On Behalf Of Brent Allsop
> Subject: Re: [ExI] Help with freezing phenomenon
>
> On 1/26/2011 11:44 PM, spike wrote:
>>> ...Mine is a fun brain in which to live.
>> ... I think your brain would definitely be near the top of brains I'd
>> like to try...
>
> You are too kind sir, and yes I do have fun in here.
>
>> ... I'd also like to try out someones' brain that claims we don't have
>> qualia...
>
> Likewise, yours is a brain I would like to try on, just to figure out
> what is qualia. I confess I have never understood that concept, but do
> not feel you must attempt to explain it to me.

It is turning out to be a relatively simple idea, even simpler than the idea
that the earth goes around the sun, rather than the other way around, once
you get it.

First, you've got to get the idea of representationalism: the idea that in
order for a robot to be able to pick a strawberry in a strawberry patch of
green leaves, it must have some kind of perception system. The initial
cause of the perception process is the 650 nm light reflecting off of the
strawberries, and the 500 nm light reflecting off of the leaves. The final
result of any such perception process is the robot's knowledge of such.
Where is the strawberry amongst the leaves, and relative to the robot's
hand? All this must be represented or modeled in the robot's knowledge if
it is to be able to pick the strawberry.

Are we in agreement that there are two parts to perception? The initial
cause, and the final result, our knowledge of such? If we can get that, the
rest is easy.

The rest is simply this: phenomenal redness and greenness are obviously
properties of something, right? The earth-goes-around-the-sun idea, the one
that the experts' consensus (still unlike the popular consensus) is clearly
converging on, is simply that this phenomenal red property is a property of
something in our brain, or a property of our knowledge of the strawberry,
and it only has to do with reflecting 650 nm light in that our brain chose
to use red to represent 650 nm light.
The phenomenal red property is obviously nothing like, in location or quality, a property of reflecting 650 nm light. One is a causal property, the other is a phenomenal property. One is still ineffable or blind to cause and effect communication, and the other is not. Which parts of this idea, so much simpler than the idea that the earth goes around the sun, do people struggle with? Brent From kellycoinguy at gmail.com Sun Feb 6 05:55:00 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 22:55:00 -0700 Subject: [ExI] sports blammisphy In-Reply-To: <009601cbc57f$71abbcf0$550336d0$@att.net> References: <007701cbc55f$09d9e130$1d8da390$@att.net> <009501cbc579$87f00580$97d01080$@att.net> <009601cbc57f$71abbcf0$550336d0$@att.net> Message-ID: > Ja, exactly. That was my idea: get all fifty or so top chess engines, then > let them vote on the best move. So my counterattack would be to set up a > team and determine what the composite move would be, then see if any human > players match that composite. > > Thanks Kelly, good thinking. That arms race notion was exactly what I had > in mind. Without that, we will likely face the same phenomenon with chess > tournaments as was seen in postal chess ten years ago: it became meaningless > because there was no way to determine if the participant was cheating with > computers. Today, the world title for postal chess is completely > meaningless. The International Correspondence Chess Federation has dwindled > to practically nothing. I can imagine the same thing happening to > Over-the-Board (real time) chess tournaments as it gets harder to determine > if someone is cheating. spike, I think this points out a recurring trans-humanist, cyborg and even fyborg theme. What is cheating in the brave new world we are making? If the Olympics are only open to original unenhanced human beings, then it just becomes a race to figure out who is enhanced, and who is not. It's already happening at the top level of sports, of course. But when we start talking about enhancements that are "built-in" to people, especially in the context of intellectual pursuits, is that really cheating any more? I understand that now you can bring some kinds of calculators to your SAT test; shades of a fyborgian future. When a cell phone can play world-class chess now, what will the calculators of tomorrow be capable of? And what happens when that calculator is implanted subcutaneously? Whether it's cyborg or fyborg makes little functional difference. As a computer programmer working for companies, I have sometimes outsourced pieces of my job that required skills that I was weak on, or that simply weren't interesting to me. I paid for the outsourcing out of my own pocket. My boss was just interested in the job getting done. The job got done. Is that cheating? By any scholastic measure, it would be, but in business the results are more important than the means used to achieve them. There are no urine tests in most computer programming shops. If football players get their bones strengthened by nanotechnology embedding nanotubes, is that cheating? If so, why? I can use carbon fibers in a football helmet. I understand that sprinters are limited in how fast they can run to some extent by the fact that if they put any more stress on their bones, they might break. So there is a limit to the G-forces that can be put on bone by muscle. This is certainly an issue in professional-level arm wrestling.
As Kurzweil mentioned in his book, there will be high school students routinely breaking what are now world records. How we will cope with this "cheating" will be an interesting part of the future. I think it is an interesting part of the present. -Kelly From kellycoinguy at gmail.com Sun Feb 6 06:10:46 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 23:10:46 -0700 Subject: [ExI] super bowl In-Reply-To: <006501cbc54e$e3b8a140$ab29e3c0$@att.net> References: <396DE43BE14C4AED8BC966784587492C@OLDMACHINE> <862690.13103.qm@web110404.mail.gq1.yahoo.com> <001d01cbc4f0$35393d90$9fabb8b0$@att.net> <006501cbc54e$e3b8a140$ab29e3c0$@att.net> Message-ID: On Sat, Feb 5, 2011 at 9:08 AM, spike wrote: > I figure it is only a > matter of time before the radicalized Mormons realize a crowded stadium is a > fine target, the most obvious point at which to wage an economic war against > the infidel. As a genetic Mormon, I highly doubt that you have anything to worry about. The Mormon model for taking over the world involves OWNING the stadium, the hot dog concession, and being in the majority on the board of directors of both football teams. -Kelly From kellycoinguy at gmail.com Sun Feb 6 06:29:39 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 5 Feb 2011 23:29:39 -0700 Subject: [ExI] super bowl and EP In-Reply-To: References: Message-ID: On Sat, Feb 5, 2011 at 9:27 AM, Keith Henson wrote: > On Sat, Feb 5, 2011 at 5:00 AM, Kelly Anderson wrote: >> >> Football IS the new opiate of the masses!! How's that for blasphemy against >> sports? > > Since all human behavior depends on evolved psychological mechanisms, > it would be interesting to understand the origins of sports in such > terms. > > Sometimes the links lead strange places. For example, BDSM being an > outcome of strongly selected capture-bonding psychological mechanisms. > > I have not given much thought to selection of psychological mechanisms > that today manifest in sports fans. > > If anyone wants to try, the rule is that the selection of whatever is > involved had to happen in the stone age or, if post agriculture, it > needs to be rather strong. Keith, I think the evolutionary roots of sport are quite easy to surmise. First, human beings are highly evolved as runners. According to Dawkins, there were over 20 different evolutionary advances between the high apes and human beings that led directly to our excellence as long distance runners. From the loss of hair and the gain of sweat glands to changes in the hip structure, we were born to run. Many primitive tribes participate in persistence hunting. http://en.wikipedia.org/wiki/Persistence_hunting http://en.wikipedia.org/wiki/Endurance_running_hypothesis In addition to persistence hunting, our ancestors were involved in many other types of hunting that required significant physical skill, particularly in running, throwing, fast judgment, and other elements that we see in today's sport. Since we learn from watching others, it is pretty easy to imagine young hunters going out and watching older hunters track down prey. The leap from there to sports seems pretty small. If you put the Roman Coliseum as an intermediate step, it is even easier to see the progression, and to understand the evolutionary pressure that would lead us to want to watch others participate in "sporting" activities. If you weren't interested in watching, you wouldn't learn to hunt as well, your children would not be born, and selection pressure is applied.
Voila, 1000 generations later, everyone is very interested in sports. Over the past 200 years or so, as the influence of religion has decreased in the populace, the political elite have resorted to the Roman bread-and-circuses method for quelling the masses. Religion can no longer maintain the hold on the masses that it did during the Middle Ages, so something has to take its place, or many things: television, sports, iPods, etc. All of these things keep us from rebelling against 40%+ tax rates that would have driven any previous generation to charge Washington with flaming pitchforks. It is only in our abundance that we can accept the level of government confiscation of private property that we accept in our current political system. As we move to the future, and even more abundance, I predict that tax rates will continue to go up. Not a real stretch as predictions of the future go, of course :-) -Kelly From kellycoinguy at gmail.com Sun Feb 6 07:23:08 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 00:23:08 -0700 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: <4D4D7FCD.7000103@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> Message-ID: On Sat, Feb 5, 2011 at 9:50 AM, Richard Loosemore wrote: > Kelly Anderson wrote: >> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore >> wrote: >>> Kelly Anderson wrote: >> I doubt it, only in the sense that we don't have anything with near >> the raw computational power necessary yet. Unless you have really >> compelling evidence that you can get human-like results without >> human-like processing power, this seems like a somewhat empty claim. > > Over the last five years or so, I have occasionally replied to this question > with some back of the envelope calculations to back up the claim. At some > point I will sit down and do the job more fully, and publish it, but in the > meantime here is your homework assignment for the week.... ;-) > > There are approximately one million cortical columns in the brain. If each > of these is designed to host one "concept" at a time, but with at most half > of them hosting at any given moment, this gives (roughly) half a million > active concepts. I am not willing to concede that this is how it works. I tend to gravitate towards a more holographic view, i.e. that the "concept" is distributed across tens of thousands of cortical columns, and that the combination of triggers to a group of cortical columns is what causes the overall "concept" to emerge. This is a general idea, and may not apply specifically to cortical columns, but I think you get the idea. The reason for belief in the holographic model is that brain damage doesn't knock out all memory or ability to process if only part of the brain is damaged. This neat one-to-one mapping of concept to neuron has been debunked to my satisfaction some time ago. > If each of these is engaging in simple adaptive interactions with the ten or > twenty nearest neighbors, exchanging very small amounts of data (each > cortical column sending out and receiving, say, between 1 and 10 KBytes, > every 2 milliseconds), how much processing power and bandwidth would this > require, and how big of a machine would you need to implement that, using > today's technology?
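For what it is worth, the arithmetic in the homework assignment is quick to script. A minimal sketch (the sizes and rates come straight from the post above; the per-column cost per tick is my own placeholder guess):

# Back-of-envelope for the proposed architecture: one million cortical
# columns, each exchanging 1-10 KB with its neighbors every 2 ms.
COLUMNS = 1_000_000
TICK_S = 0.002                           # one update every 2 milliseconds
BYTES_LOW, BYTES_HIGH = 1_000, 10_000    # traffic per column, per tick
FLOPS_PER_COLUMN_TICK = 10_000           # placeholder cost of one "simple" update

for label, b in (("low", BYTES_LOW), ("high", BYTES_HIGH)):
    bandwidth = COLUMNS * b / TICK_S     # aggregate bytes per second
    print(f"{label}: {bandwidth / 1e12:.1f} TB/s aggregate traffic")

compute = COLUMNS * FLOPS_PER_COLUMN_TICK / TICK_S
print(f"compute: {compute / 1e12:.1f} TFLOPS at the placeholder cost")

That works out to roughly 0.5-5 TB/s of traffic and a few TFLOPS, and since the traffic is almost entirely neighbor-local it shards naturally across a cluster rather than needing one enormous machine.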
You are speaking of only one of the thirty or so major structures in the brain. The cerebral cortex is only one part of the overall picture. Nevertheless, you are obviously not talking about very much computational power here. Kurzweil in TSIN does the back of the envelope calculations about the overall computational power of the human brain, and it's a lot more than you are presenting here. > This architecture may well be all that the brain is doing. The rest is just > overhead, forced on it by the particular constraints of its physical > substrate. I have no doubt that as we figure out what the brain is doing, we'll be able to optimize. But we have to figure it out first. You seem to jump straight to a solution as a hypothesis. Now, having a hypothesis is a good part of the scientific method, but there is that other part of testing the hypothesis. What is your test? > Now, if this conjecture is accurate, you tell me how long ago we had the > hardware necessary to build an AGI.... ;-) I'm sure we have that much now. The problem is whether the conjecture is correct. How do you prove the conjecture? Do something "intelligent". What I don't see yet in your papers, or in your posts here, are results. What "intelligent" behavior have you simulated with your hypothesis, Richard? I'm not trying to be argumentative or challenging, just trying to figure out where you are in your work and whether you are applying the scientific method rigorously. > The last time I did this calculation I reckoned (very approximately) that > the mid-1980s was when we crossed the threshold, with the largest > supercomputers then available. That may be the case. And once we figure out how it all works, we could well reduce it to this level of computational requirement. But we haven't figured it out yet. By most calculations, we spend an inordinate amount of our cerebral processing on image processing of the input from our eyes. Have you made any image processing breakthroughs? Can you tell a cat from a dog with your approach? You seem to be focused on concepts and how they are processed. How does your method approach the nasty problems of image classification and recognition? -Kelly From kellycoinguy at gmail.com Sun Feb 6 07:50:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 00:50:26 -0700 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: <4D4D7D59.8010205@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore wrote: > Kelly, > > This is exactly the line along which I am going. I have talked in the > past about building AGI systems that are "empathic" to the human > species, and which are locked into that state of empathy by their > design. I would not propose to "design" empathy, but rather to "train" towards empathy. I envision raising AGIs just as one would raise a child. This would train them to think as though they were a human, or at least that they were adopted by humans. As they mature, the speed of learning could be sped up, or the net could be copied, and further learning could go in many directions, but that core of humanity is the most important thing to get right to ensure that the AGIs and future humans will live in some kind of harmony.
> Your sentence above: > >> It seems to me that one safety precaution we would want to have is >> for the first generation of AGI to see itself in some way as actually >> being human, or self identifying as being very close to humans. > > ... captures exactly the approach I am taking. This is what I mean by > building AGI systems that feel empathy for humans. They would BE humans in > most respects. I think AGIs should see us as their ancestors. I would hope to be thought of with the kind of respect we would feel for Homo erectus (were they still around). Kurzweil states that increased intelligence leads to increased empathy, which is an interesting hypothesis. I wouldn't know how to test it, but it does seem to be a trend. > I envision a project to systematically explore the behavior of the > motivation mechanisms. In the research phases, we would be directly > monitoring the balance of power between the various motivation modules, and > also monitoring for certain patterns of thought. Here you devolve into the vagueness that makes this discussion difficult for me. Are you talking of studying humans here? > I cannot answer all your points in full detail, but it is worth noting that > things like the fanatic mindset (suicide bombers, etc.) are probably a result > of the interaction of motivation modules that would not be present in the > AGI. Hopefully this will be the case. I tend towards optimism, so for the moment, I'll give you this point. > Foremost among them, the module that incites tribal loyalty and > hatred (in-group, out-group feelings). Without that kind of module > (assuming it is a distinct module) the system would perhaps have no chance > of drifting in that direction. Here it sounds like we differ. I would propose that "young" AGIs be given to exemplary parents in every culture we can find. Raising them as they would their own youth, we preserve the richness of human diversity that we are at risk of losing today. After all, we are losing languages and cultures to the global monoculture at an alarming rate today just among humans. If all AGIs are taught in the same laboratory or Western context, we will end up with a monoculture in the AGI strains that will potentially have a negative impact on preserving human diversity. I respect other people's belief systems, and I want AGIs with all kinds of belief systems. Even if many of them end up evolving beyond their core training, having that core is important to maintaining empathy towards the group that has that core belief system. I would hate for AGIs to decide that the Amish were not worth preserving just because no AGI had ever been raised in an Amish household. > And even in a suicide bomber, there are > other motivations fighting to take over and restore order, right up to the > last minute: they sweat when they are about to go. Perhaps. As one who has previously held strong religious beliefs, I can put myself into the head of a suicide bomber quite well, and I can see the possibility of not sweating it. > Answering the ideas you throw into the ring, in your comment, would be > fodder for an entire essay. Sometime soon, I hope... Clearly, there is a lot of ground to cover. Here are some of the things I care about... 1) How do we preserve the diversity of human culture as we evolve past being purely human?
5) How is this all best done without offending the religious majority and generating painful backlash? (i.e., how do you prevent a civil war between the religious fundamentalists and the AGIs?) -Kelly From kellycoinguy at gmail.com Sun Feb 6 09:01:09 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 02:01:09 -0700 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Wed, Jan 19, 2011 at 5:53 AM, BillK wrote: > On Wed, Jan 19, 2011 at 4:55 AM, spike wrote: > First. > No, I don't think voice operated computers will ever appear in general use. > Think about it. What happens when you get a group of people all > shouting at their handheld computers? It's bad enough listening to > other people's mobile phone conversations. > There is a place for specialised applications such as voice > recognition entry systems. Bill, I don't disagree with you here. It may be unacceptable from a sociological standpoint; however, from a technological standpoint, it is easy to recognize the speaker compared to recognizing what the speaker is saying. In other words, many people talking at once would not bother the computer, but it might bother the other people in the room. Not sure which you were getting at here. -Kelly From pharos at gmail.com Sun Feb 6 09:36:09 2011 From: pharos at gmail.com (BillK) Date: Sun, 6 Feb 2011 09:36:09 +0000 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Sun, Feb 6, 2011 at 9:01 AM, Kelly Anderson wrote: > Bill, I don't disagree with you here. It may be unacceptable from a > sociological standpoint; however, from a technological standpoint, it > is easy to recognize the speaker compared to recognizing what the > speaker is saying. In other words, many people talking at once would > not bother the computer, but it might bother the other people in the > room. Not sure which you were getting at here. > > I meant the social annoyance factor of having a roomful of people shouting at their handheld computers. But I suppose more technology could remove that problem. If everyone is wearing an earpiece (cable or Bluetooth) and using a sub-vocal microphone taped to their neck then the public annoyance factor disappears. (There is also the thought-controlled tech that is being developed in the labs for disabled people.) Then we would have to face the problem (already appearing) of attention distraction where people step in front of cars or walk off railway platforms while their attention is off in the cloud. You already see people at parties sitting silently in a circle, all tapping away at their phones, tweeting about how great the party is to their 500 followers. ;) BillK From pharos at gmail.com Sun Feb 6 10:14:45 2011 From: pharos at gmail.com (BillK) Date: Sun, 6 Feb 2011 10:14:45 +0000 Subject: [ExI] UN solves Third World Poverty problem Message-ID: "It's simple," said Mr James Bowen, UN Spokesman for Development. "Give a man a fish and you feed him for a day. Teach him to phish, and before you know it he's ordering a Merc and moving fast up the Nigerian rich list." The so-called "ScamAid"
initiative will teach modern-day Robin Hoods to empty the bank accounts of rich Westerners to pay for schools and health clinics in third world communities. According to the United Nations, only 0.5% of the developed world would have to be thick enough to hand over their personal details to a local ScamAid partner in order to vaccinate and educate every child under twelve. "It's basically a tax on stupidity," explained the UN spokesman. ------------------- BillK From rpwl at lightlink.com Sun Feb 6 14:33:10 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 06 Feb 2011 09:33:10 -0500 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> Message-ID: <4D4EB126.1060004@lightlink.com> Kelly Anderson wrote: > On Sat, Feb 5, 2011 at 9:50 AM, Richard Loosemore wrote: >> Kelly Anderson wrote: >>> On Wed, Feb 2, 2011 at 9:56 AM, Richard Loosemore >>> wrote: >>>> Kelly Anderson wrote: >>> I doubt it, only in the sense that we don't have anything with near >>> the raw computational power necessary yet. Unless you have really >>> compelling evidence that you can get human-like results without >>> human-like processing power, this seems like a somewhat empty claim. >> Over the last five years or so, I have occasionally replied to this question >> with some back of the envelope calculations to back up the claim. At some >> point I will sit down and do the job more fully, and publish it, but in the >> meantime here is your homework assignment for the week.... ;-) >> >> There are approximately one million cortical columns in the brain. If each >> of these is designed to host one "concept" at a time, but with at most half >> of them hosting at any given moment, this gives (roughly) half a million >> active concepts. > > I am not willing to concede that this is how it works. I tend to > gravitate towards a more holographic view, i.e. that the "concept" is > distributed across tens of thousands of cortical columns, and that the > combination of triggers to a group of cortical columns is what causes > the overall "concept" to emerge. This is a general idea, and may not > apply specifically to cortical columns, but I think you get the idea. > The reason for belief in the holographic model is that brain damage > doesn't knock out all memory or ability to process if only part of the > brain is damaged. This neat one-to-one mapping of concept to neuron > has been debunked to my satisfaction some time ago. The architecture I outlined above has a long pedigree (the main ancestor being the parallel distributed processing ideas of Rumelhart, McClelland et al.), so it is okay to suggest a different architecture, but there does have to be motivation for whatever suggestion is made about the hardware-to-concept mapping. That said, there are questions. If something is distributed, is it (a) the dormant, generic "concepts" in long term memory, or (b) the active, instance "concepts" of working memory? Very big difference. I believe there are reasons to talk about the long term memory concepts as being partially distributed, but that would not apply to the instances in working memory..... and in the above architecture I was talking only about the latter.
If you try to push the idea that the instance atoms (my term for the active concepts) are in some sense "holographic" or distributed, you get into all sorts of theoretical and practical snarls. I published a paper with Trevor Harley last year in which we analyzed a paper by Quiroga et al. that made claims about the localization of concepts to neurons. That paper contains a more detailed explanation of the mapping, using ideas from my architecture. It is worth noting that Quiroga et al.'s explanation of their own data made no sense, and that the alternative that Trevor and I proposed actually did account for the data rather neatly. >> If each of these is engaging in simple adaptive interactions with the ten or >> twenty nearest neighbors, exchanging very small amounts of data (each >> cortical column sending out and receiving, say, between 1 and 10 KBytes, >> every 2 milliseconds), how much processing power and bandwidth would this >> require, and how big of a machine would you need to implement that, using >> today's technology? > > You are speaking of only one of the thirty or so major structures in the > brain. The cerebral cortex is only one part of the overall picture. > Nevertheless, you are obviously not talking about very much > computational power here. Kurzweil in TSIN does the back of the > envelope calculations about the overall computational power of the > human brain, and it's a lot more than you are presenting here. Of course! Kurzweil (and others') calculations are based on the crudest possible calculation of a brain emulation AGI, in which every wretched neuron in there is critically important, and cannot be substituted for something simpler. That is the dumb approach. What I am trying to do is explain an architecture that comes from the cognitive science level, and which suggests that the FUNCTIONAL role played by neurons is such that it can be substituted very adequately by a different computational substrate. So, my claim is that, functionally, the human cognitive system may consist of a network of about a million cortical column units, each of which engages in relatively simple relaxation processes with neighbors. I am not saying that this is the exactly correct picture, but so far this architecture seems to work as a draft explanation for a broad range of cognitive phenomena. And if it is correct, then the TSIN calculations are pointless. >> This architecture may well be all that the brain is doing. The rest is just >> overhead, forced on it by the particular constraints of its physical >> substrate. > I have no doubt that as we figure out what the brain is doing, we'll > be able to optimize. But we have to figure it out first. You seem to > jump straight to a solution as a hypothesis. Now, having a hypothesis > is a good part of the scientific method, but there is that other part > of testing the hypothesis. What is your test? Well, it may seem like I pulled the hypothesis out of the hat yesterday morning, but this is actually just a summary of a project that started in the late 1980s. The test is an examination of the consistency of this architecture with the known data from human cognition. (Bear in mind that most artificial intelligence researchers are not "scientists" .... they do not propose hypotheses and test them ..... they are engineers or mathematicians, and what they do is play with ideas to see if they work, or prove theorems to show that some things should work.
From that perspective, what I am doing is real science, of a sort that almost died out in AI a couple of decades ago). For an example of the kind of tests that are part of the research program I am engaged in, see the Loosemore and Harley paper. >> Now, if this conjecture is accurate, you tell me how long ago we had the >> hardware necessary to build an AGI.... ;-) > > I'm sure we have that much now. The problem is whether the conjecture > is correct. How do you prove the conjecture? Do something > "intelligent". What I don't see yet in your papers, or in your posts > here, are results. What "intelligent" behavior have you simulated with > your hypothesis, Richard? I'm not trying to be argumentative or > challenging, just trying to figure out where you are in your work and > whether you are applying the scientific method rigorously. The problem of giving you an answer is complicated by the paradigm. I am adopting a systematic top-down scan that starts at the framework level and proceeds downward. The L & H paper shows an application of the method to just a couple of neuroscience results. What I have here are similar analyses of several dozen other cognitive phenomena, in various amounts of detail, but these are not published yet. There are other stages to the work that involve simulations of particular algorithms. This is quite a big topic. You may have to wait for my thesis to be published to get a full answer, because fragments of it can be confusing. All I can say at the moment is that the architecture gives rise to simple, elegant explanations, at a high level, of a wide range of cognitive data, and the mere fact that one architecture can do such a thing is, in my experience, unique. However, I do not want to publish that as it stands, because I know what the reaction would be if there is no further explanation of particular algorithms, down at the lowest level. So, I continue to work toward the latter, even though by my own standards I already have enough to be convinced. >> The last time I did this calculation I reckoned (very approximately) that >> the mid-1980s was when we crossed the threshold, with the largest >> supercomputers then available. > > That may be the case. And once we figure out how it all works, we > could well reduce it to this level of computational requirement. But > we haven't figured it out yet. > > By most calculations, we spend an inordinate amount of our cerebral > processing on image processing of the input from our eyes. Have you made > any image processing breakthroughs? Can you tell a cat from a dog with > your approach? You seem to be focused on concepts and how they are > processed. How does your method approach the nasty problems of image > classification and recognition? The term "concept" is a vague one. I used it in our discussion because it is conventional. However, in my own writings I talk of "atoms" and "elements", because some of those atoms correspond to very low-level features such as the ones that figure in the visual system. As far as I can tell at this stage, the visual system uses the same basic architecture, but with a few wrinkles. One of those is a mechanism to spread locally acquired features into a network of "distributed, position-specific" atoms. This means that when visual regularities are discovered, they percolate down in the system and become distributed across the visual field, so they can be computed in parallel.
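To make "relatively simple relaxation processes with neighbors" concrete, here is a toy sketch of one possible update rule (the ring topology and the relax-toward-the-neighborhood-average rule are placeholder assumptions of mine, not the actual algorithm):

import random

# Toy relaxation network: N units on a ring, each coupled to its nearest
# neighbors. Each tick, a unit nudges its state toward the neighborhood
# average, a stand-in for "simple adaptive interactions with neighbors".
N, NEIGHBORS, RATE = 1000, 2, 0.5
state = [random.random() for _ in range(N)]

def tick(state):
    new = []
    for i, s in enumerate(state):
        nbrs = [state[(i + d) % N] for d in range(-NEIGHBORS, NEIGHBORS + 1) if d]
        target = sum(nbrs) / len(nbrs)
        new.append(s + RATE * (target - s))   # relax toward local consensus
    return new

for _ in range(50):
    state = tick(state)
print(max(state) - min(state))   # the spread shrinks as the network settles

Each unit touches only a handful of neighbors per tick, which is what keeps the per-column cost in the back-of-envelope range discussed above.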
Also, the visual system does contain some specialized pathways (the "what" and "where" pathways) that engage in separate computations. These are already allowed for in the above calculations, but they are specialized regions of that million-column system. I had better stop. Must get back to work. Richard Loosemore From rpwl at lightlink.com Sun Feb 6 14:50:02 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 06 Feb 2011 09:50:02 -0500 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D4EB51A.5050008@lightlink.com> Kelly Anderson wrote: > On Sat, Feb 5, 2011 at 9:39 AM, Richard Loosemore wrote: >> Kelly, >> >> This is exactly the line along which I am going. I have talked in the >> past about building AGI systems that are "empathic" to the human >> species, and which are locked into that state of empathy by their >> design. > > I would not propose to "design" empathy, but rather to "train" towards > empathy. I envision raising AGIs just as one would raise a child. This > would train them to think as though they were a human, or at least > that they were adopted by humans. As they mature, the speed of > learning could be sped up, or the net could be copied, and further > learning could go in many directions, but that core of humanity is the > most important thing to get right to ensure that the AGIs and future > humans will live in some kind of harmony. This is certainly something that you would want to do, but it is kind of orthogonal to the question of "designing" the empathy in the first place. A system designed to be a psychopath, for example, would not benefit from that kind of upbringing. So you have to do both. >> Your sentence above: >> >>> It seems to me that one safety precaution we would want to have is >>> for the first generation of AGI to see itself in some way as actually >>> being human, or self identifying as being very close to humans. >> ... captures exactly the approach I am taking. This is what I mean by >> building AGI systems that feel empathy for humans. They would BE humans in >> most respects. > > I think AGIs should see us as their ancestors. I would hope to be > thought of with the kind of respect we would feel for Homo erectus > (were they still around). Kurzweil states that increased intelligence > leads to increased empathy, which is an interesting hypothesis. I > wouldn't know how to test it, but it does seem to be a trend. This idea that "increased intelligence leads to increased empathy" is a natural consequence of the idea that the system is making sure that all its ideas are consistent with one another, and with its basic motivations. If its basic motivations start with the idea of empathy, then increased intelligence would indeed make the system more and more empathic. >> I envision a project to systematically explore the behavior of the >> motivation mechanisms. In the research phases, we would be directly >> monitoring the balance of power between the various motivation modules, and >> also monitoring for certain patterns of thought. > > Here you devolve into the vagueness that makes this discussion > difficult for me. Are you talking of studying humans here? Sorry, no, I mean studying the AGI mechanisms.
We do not have enough access to the inner, real-time workings of human systems. This is strictly about studying the experimental AGIs, during the research and development phase. > >> I cannot answer all your points in full detail, but it is worth noting that >> things like the fanatic mindset (suicide bombers, etc.) are probably a result >> of the interaction of motivation modules that would not be present in the >> AGI. > > Hopefully this will be the case. I tend towards optimism, so for the > moment, I'll give you this point. > >> Foremost among them, the module that incites tribal loyalty and >> hatred (in-group, out-group feelings). Without that kind of module >> (assuming it is a distinct module) the system would perhaps have no chance >> of drifting in that direction. > > Here it sounds like we differ. I would propose that "young" AGIs be > given to exemplary parents in every culture we can find. Raising them > as they would their own youth, we preserve the richness of human > diversity that we are at risk of losing today. After all, we are losing > languages and cultures to the global monoculture at an alarming rate > today just among humans. If all AGIs are taught in the same laboratory > or Western context, we will end up with a monoculture in the AGI > strains that will potentially have a negative impact on preserving > human diversity. Although I completely agree with your goal here, I would say this is a different issue, with different answers. Very good answers, I suggest, but somewhat peripheral to this discussion. The crucial issue, at the beginning, is to understand and build the correct foundations. So, I am talking about giving the AGI the kind of underlying mechanisms that will make it grow towards a caring, empathic individual, and avoiding the kind of mechanisms that would make it psychopathic. Then, and only then, comes the youthful experience of the AGI (which you are focussing on). The experience part is important, but I am really only trying to make arguments about the construction phase at the moment. What it boils down to is the fact that some humans are born with damaged motivation mechanisms, such that there is no ability to empathize and bond. No amount of youthful happiness will matter to those people. My primary concern at the moment is to understand that, and design AGIs so that does not happen. > I respect other people's belief systems, and I want AGIs with all > kinds of belief systems. Even if many of them end up evolving beyond > their core training, having that core is important to maintaining > empathy towards the group that has that core belief system. I would > hate for AGIs to decide that the Amish were not worth preserving just > because no AGI had ever been raised in an Amish household. I seriously doubt that will happen. But that is a discussion for another day. >> And even in a suicide bomber, there are >> other motivations fighting to take over and restore order, right up to the >> last minute: they sweat when they are about to go. > > Perhaps. As one who has previously held strong religious beliefs, I > can put myself into the head of a suicide bomber quite well, and I can > see the possibility of not sweating it. > >> Answering the ideas you throw into the ring, in your comment, would be >> fodder for an entire essay. Sometime soon, I hope... > > Clearly, there is a lot of ground to cover. Here are some of the > things I care about... > > 1) How do we preserve the diversity of human culture as we evolve past > being purely human?
> 2) How do we create AGIs? > 3) How do we ensure that human beings (enhanced or natural) can > continue to live in the same society with the AGIs? > 4) How can we protect society from rogue AGIs? > 5) How is this all best done without offending the religious majority > and generating painful backlash? (i.e., how do you prevent a civil war > between the religious fundamentalists and the AGIs?) I have answers (proposed answers, at least). But that is an entire book. ;-) Richard Loosemore From spike66 at att.net Sun Feb 6 18:20:06 2011 From: spike66 at att.net (spike) Date: Sun, 6 Feb 2011 10:20:06 -0800 Subject: [ExI] a fun brain in which to live In-Reply-To: <4D4DA986.1030802@canonizer.com> References: <001001cbbde6$f5b41a60$e11c4f20$@att.net> <001401cbbded$a3239cb0$e96ad610$@att.net> <4D435139.8090307@canonizer.com> <017301cbbf5a$1b398f30$51acad90$@att.net> <4D4DA986.1030802@canonizer.com> Message-ID: <00c801cbc62a$7aedec10$70c9c430$@att.net> >... On Behalf Of Brent Allsop ... > spike wrote: >> Likewise, yours is a brain I would like to try on, just to figure out >> what is qualia. I confess I have never understood that concept, but >> do not feel you must attempt to explain it to me... spike >...It is turning out to be a relatively simple idea, even simpler than the idea that the earth goes around the sun, rather than the other way around, once you get it. >...First, you've got to get the idea of representationalism... >...One is still ineffable or blind to cause and effect communication, and the other is not. >...Which parts of this idea, so much simpler than the idea that the earth goes around the sun, do people struggle with? Brent Brent, it isn't so much a problem with the concept of qualia, rather it is just me. I live in a world of equations. I love math, tend to see things in terms of equations and mathematical models. Numbers are my friends. I even visualize social structures in terms of feedback control systems, insofar as it is possible. Beyond that, I don't understand social systems, or for that matter, anything which cannot be described in terms of systems of simultaneous differential equations. If I can get it to differential equations, I can use the tools I know. Otherwise not, which is why I seldom participate here in the discussions which require actual understanding outside that limited domain. The earth going around the sun is a great example. With that, I can write the equations, all from memory. I can tweak this mass and see what happens there, I can move that term, derive this and the other, come up with a whole mess of cool new insights, using only algebra and calculus. Mathematical symbols are rigidly defined. But I am not so skilled with adjectives, nouns and verbs. Their definitions to me are approximations. I don't know how to take a set of sentences and create a matrix, or use a Fourier transform on them, or a Butterworth or Kalman filter, or any of the mind-blowing tools we have for creating insights with mathematized systems. All is not lost. In the rocket science biz, we know we cannot master every aspect of everything in that field. Life is too short. So we have a saying: you don't need to know the answer, you only need to know the cat who knows the answer. In the field of qualia, pal, that cat is you. Qualia are the reason evolution has given us a Brent Allsop. So live long, very long.
spike From hkeithhenson at gmail.com Sun Feb 6 19:18:20 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 6 Feb 2011 12:18:20 -0700 Subject: [ExI] super bowl and EP Message-ID: On Sat, Feb 5, 2011 at 11:29 PM, Mike Dougherty wrote: > > On Sat, Feb 5, 2011 at 11:27 AM, Keith Henson wrote: >> If anyone wants to try, the rule is that the selection of whatever is >> involved had to happen in the stone age or, if post agriculture, it >> needs to be rather strong. > > Mock combat with elaborate rules. Two football teams could be seen as > competing tribes. This has been a standard explanation dating back centuries. It's not what I was talking about. My question is what evolutionary forces in the stone age equipped people with the desire to watch sports events? Sports events were *NOT* part of our evolutionary history, any more than chasing laser spots was part of the evolutionary history of cats. > Not sure you could convince many people to admit their trash-talk > about opposing teams is xenophobia Which gives you an idea of how disconnected I am. I didn't know they did that. Kelly Anderson wrote: snip > Keith, I think the evolutionary roots of sport are quite easy to surmise. > snip > > In addition to persistence hunting, our ancestors were involved in > many other types of hunting that required significant physical skill, > particularly in running, throwing, fast judgment, and other elements > that we see in today's sport. > > Since we learn from watching others, it is pretty easy to imagine > young hunters going out and watching older hunters track down prey. > The leap from there to sports seems pretty small. If you put the Roman > Coliseum as an intermediate step, it is even easier to see the > progression, and to understand the evolutionary pressure that would > lead us to want to watch others participate in "sporting" activities. Perhaps. All primates are intensely interested in action events involving others of their species. That's also true of herd animals in general. It is probably of evolutionary significance to be strongly aware of these kinds of events, to avoid being accidentally hurt if nothing else. In any case, whatever makes people go watch modern sporting events has an origin much further back than the Roman Coliseum. Keith From nebathenemi at yahoo.co.uk Sun Feb 6 22:42:04 2011 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Sun, 6 Feb 2011 22:42:04 +0000 (GMT) Subject: [ExI] Voice operated computers In-Reply-To: Message-ID: <93121.43962.qm@web27007.mail.ukl.yahoo.com> Spike wrote: " No, I don't think voice operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other people's mobile phone conversations." You get my workplace when the phones are busy. People call in, and you can hear nothing but several phone conversations at once. Noise doesn't stop the modern office. Also, voice-activated computers currently exist for automated phone lines; more sophisticated ones could replace call centres. Finally, thinking of how many people were talking to themselves in my local coffee shop this morning (well, maybe they were talking to someone on their mobile phone using hands-free, but I think they're all crazy people sent to annoy me while I go to get a drink), you'll be surprised how much noise and social annoyance people can take.
Tom From spike66 at att.net Mon Feb 7 02:21:21 2011 From: spike66 at att.net (spike) Date: Sun, 6 Feb 2011 18:21:21 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <93121.43962.qm@web27007.mail.ukl.yahoo.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> Message-ID: <002801cbc66d$b6306170$22912450$@att.net> ... On Behalf Of Tom Nowell Subject: Re: [ExI] Voice operated computers Spike wrote: " No, I don't think voice operated computers will ever appear in general use. Think about it. What happens when you get a group of people all shouting at their handheld computers? It's bad enough listening to other people's mobile phone conversations." Tom Actually this wasn't my comment, and I disagree with it in any case. Reasoning: sound travels through solids and liquids much more readily than through air. I can imagine an I/O device which is in physical contact with the skull and goes in the ear. Failing that, the boom microphone that is right in front of the mouth, as in a hands-free telephone, works well enough in a crowded office. With that in mind, I think we will see a voice-operated computer in general use. But what I am really thinking about is a computer-operated voice. My goal is to allow people to have a conversation with an avatar on a video screen. spike From kellycoinguy at gmail.com Mon Feb 7 05:15:38 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 22:15:38 -0700 Subject: [ExI] google translator In-Reply-To: References: <006c01cbb6a0$511a93f0$f34fbbd0$@att.net> <002501cbb6b5$d0a262f0$71e728d0$@att.net> <005501cbb748$2537bfd0$6fa73f70$@att.net> <003101cbb795$254e4e10$6feaea30$@att.net> Message-ID: On Sun, Feb 6, 2011 at 2:36 AM, BillK wrote: > On Sun, Feb 6, 2011 at 9:01 AM, Kelly Anderson wrote: > I meant the social annoyance factor of having a roomful of people > shouting at their handheld computers. Social norms CAN change. I'm not sure they will, but when you can have as meaningful a conversation with your digital personal assistant as you can now have with a real assistant on a cell phone, then it could change. Imagine, for example, that you talk to your digital personal assistant over your cell phone. It looks no different than today's cell phone calls... so I see this as a definite possibility. > But I suppose more technology could remove that problem. It can help. There is a business opportunity here because the PAIN of listening to other people's phone calls all the time is an addressable problem. Someone will make lots of money off of this pain. > If everyone is wearing an earpiece (cable or Bluetooth) and using a > sub-vocal microphone taped to their neck then the public annoyance > factor disappears. > (There is also the thought-controlled tech that is being developed in > the labs for disabled people.) Some day. It's a ways off IMHO. > Then we would have to face the problem (already appearing) of > attention distraction where people step in front of cars or walk off > railway platforms while their attention is off in the cloud. You > already see people at parties sitting silently in a circle, all > tapping away at their phones, tweeting about how great the party is to > their 500 followers. ;) On the other hand, once autonomous cars are shuttling us around, the driving-while-distracted problem goes away. One could hope that the overall death rate would decrease.
-Kelly From kellycoinguy at gmail.com Mon Feb 7 05:36:58 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 6 Feb 2011 22:36:58 -0700 Subject: [ExI] Computational resources needed for AGI... In-Reply-To: <4D4EB126.1060004@lightlink.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <008601cbc181$18eb0af0$4ac120d0$@att.net> <4D471EA4.7080900@lightlink.com> <4D498CBE.4090106@lightlink.com> <4D4D7FCD.7000103@lightlink.com> <4D4EB126.1060004@lightlink.com> Message-ID: On Sun, Feb 6, 2011 at 7:33 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > That said, there are questions. If something is distributed, is it (a) the > dormant, generic "concepts" in long term memory, or (b) the active, > instance "concepts" of working memory? Very big difference. I believe > there are reasons to talk about the long term memory concepts as being > partially distributed, but that would not apply to the instances in working > memory..... and in the above architecture I was talking only about the > latter. Ok. I can follow that working memory is likely not holographic. That actually makes sense. Memory and other long term storage probably is, though. > If you try to push the idea that the instance atoms (my term for the active > concepts) are in some sense "holographic" or distributed, you get into all > sorts of theoretical and practical snarls. I'll have to take your word for that. > I published a paper with Trevor Harley last year in which we analyzed a > paper by Quiroga et al. that made claims about the localization of concepts > to neurons. That paper contains a more detailed explanation of the mapping, > using ideas from my architecture. It is worth noting that Quiroga et al.'s > explanation of their own data made no sense, and that the alternative that > Trevor and I proposed actually did account for the data rather neatly. I think I read this paper or one with very similar concepts that you wrote. >> Kurzweil in TSIN does the back of the >> envelope calculations about the overall computational power of the >> human brain, and it's a lot more than you are presenting here. > > Of course! > > Kurzweil (and others') calculations are based on the crudest possible > calculation of a brain emulation AGI, in which every wretched neuron in > there is critically important, and cannot be substituted for something > simpler. That is the dumb approach. Kurzweil does two separate calculations: one is a VERY brute force simulation, and the other is a more functional approach. I think they differed by around four orders of magnitude. You are talking about several more orders of magnitude less computation. And, while I don't have enough information about your approach to determine if it will work (I assume you don't either), it seems that you are attempting a premature optimization. Let's get something working first, then optimize it later. > What I am trying to do is explain an architecture that comes from the > cognitive science level, and which suggests that the FUNCTIONAL role played > by neurons is such that it can be substituted very adequately by a different > computational substrate. > > So, my claim is that, functionally, the human cognitive system may consist of a > network of about a million cortical column units, each of which engages in > relatively simple relaxation processes with neighbors.
> I am not saying that this is the exactly correct picture, but so far this > architecture seems to work as a draft explanation for a broad range of > cognitive phenomena. > > And if it is correct, then the TSIN calculations are pointless. Sure. >> I have no doubt that as we figure out what the brain is doing, we'll >> be able to optimize. But we have to figure it out first. You seem to >> jump straight to a solution as a hypothesis. Now, having a hypothesis >> is a good part of the scientific method, but there is that other part >> of testing the hypothesis. What is your test? > > Well, it may seem like I pulled the hypothesis out of the hat yesterday > morning, but this is actually just a summary of a project that started in > the late 1980s. > > The test is an examination of the consistency of this architecture with the > known data from human cognition. (Bear in mind that most artificial > intelligence researchers are not "scientists" .... they do not propose > hypotheses and test them ..... they are engineers or mathematicians, and what > they do is play with ideas to see if they work, or prove theorems to show > that some things should work. From that perspective, what I am doing is > real science, of a sort that almost died out in AI a couple of decades ago). > > For an example of the kind of tests that are part of the research program I > am engaged in, see the Loosemore and Harley paper. I can't argue with that. Darwin sat on his hypothesis for decades until he had it just right. If you want to do the same, then more power to you. My question remains, though: have you any preliminary results you can share that indicate that your system functions? >>> Now, if this conjecture is accurate, you tell me how long ago we had the >>> hardware necessary to build an AGI.... ;-) >> >> I'm sure we have that much now. The problem is whether the conjecture >> is correct. How do you prove the conjecture? Do something >> "intelligent". What I don't see yet in your papers, or in your posts >> here, are results. What "intelligent" behavior have you simulated with >> your hypothesis, Richard? I'm not trying to be argumentative or >> challenging, just trying to figure out where you are in your work and >> whether you are applying the scientific method rigorously. > > The problem of giving you an answer is complicated by the paradigm. I am > adopting a systematic top-down scan that starts at the framework level and > proceeds downward. The L & H paper shows an application of the method to > just a couple of neuroscience results. What I have here are similar > analyses of several dozen other cognitive phenomena, in various amounts of > detail, but these are not published yet. There are other stages to the work > that involve simulations of particular algorithms. Simulations of algorithms seem promising. Can you say more about that? > This is quite a big topic. You may have to wait for my thesis to be > published to get a full answer, because fragments of it can be confusing. I started my thesis in 1988. It hasn't been finished either. :-) I have published one paper though... > All I can say at the moment is that the architecture gives rise to simple, > elegant explanations, at a high level, of a wide range of cognitive data, > and the mere fact that one architecture can do such a thing is, in my > experience, unique.
> However, I do not want to publish that as it stands, > because I know what the reaction would be if there is no further explanation > of particular algorithms, down at the lowest level. So, I continue to work > toward the latter, even though by my own standards I already have enough to > be convinced. If you are right, it will be worth waiting for. If you aren't sharing details as you go, then it will be harder for you to get help from others. >> That may be the case. And once we figure out how it all works, we >> could well reduce it to this level of computational requirement. But >> we haven't figured it out yet. >> >> By most calculations, we spend an inordinate amount of our cerebral >> processing on image processing of the input from our eyes. Have you made >> any image processing breakthroughs? Can you tell a cat from a dog with >> your approach? You seem to be focused on concepts and how they are >> processed. How does your method approach the nasty problems of image >> classification and recognition? > The term "concept" is a vague one. I used it in our discussion because it > is conventional. However, in my own writings I talk of "atoms" and > "elements", because some of those atoms correspond to very low-level > features such as the ones that figure in the visual system. Do you have any results in the area of image processing? > As far as I can tell at this stage, the visual system uses the same basic > architecture, but with a few wrinkles. One of those is a mechanism to spread > locally acquired features into a network of "distributed, position-specific" > atoms. This means that when visual regularities are discovered, they > percolate down in the system and become distributed across the visual field, > so they can be computed in parallel. That sounds right. > Also, the visual system does contain some specialized pathways (the "what" > and "where" pathways) that engage in separate computations. These are > already allowed for in the above calculations, but they are specialized > regions of that million-column system. > > I had better stop. Must get back to work. Sounds like the right approach... :-) If you are convinced, don't let naysayers get you down. But to get rid of the "it will never fly" crowd, you have to get something out of the lab eventually. Good luck, Richard. -Kelly From eugen at leitl.org Mon Feb 7 11:38:37 2011 From: eugen at leitl.org (Eugen Leitl) Date: Mon, 7 Feb 2011 12:38:37 +0100 Subject: [ExI] Plastination In-Reply-To: References: <20110202073448.GA23560@leitl.org> Message-ID: <20110207113836.GA23560@leitl.org> On Sat, Feb 05, 2011 at 01:24:12AM -0700, Kelly Anderson wrote: > On Wed, Feb 2, 2011 at 12:34 AM, Eugen Leitl wrote: > > On Tue, Feb 01, 2011 at 04:03:32PM -0700, Kelly Anderson wrote: > >> Has anyone seriously looked at plastination as a method for preserving > >> brain tissue patterns? > > > > Yes. It doesn't work. > > Thanks for your answer. You sound pretty definitive here, and I > appreciate that you might well be correct, but I didn't see that in > what you referenced. Perhaps I missed something. When you say it > doesn't work, are you saying that the structures that are preserved > are too large to reconstruct a working brain? Or was there some other > objection? Or were you merely stating that it wasn't Gunther's intent > to create brains that could be revivified later? Crude plastination as practiced by Gunther von Hagens does not preserve ultrastructure.
The proposal by Ken Hayworth is not plastination but fixation, including
heavy metal stain, then plastination. The method is not validated, and
would be difficult to validate.

> I personally don't go in for the quantum state stuff... if that has
> anything to do with your answer. There is plenty in the brain at the
> gross level to account for what's going on in there, IMHO.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From stefano.vaj at gmail.com Mon Feb 7 15:40:00 2011
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 7 Feb 2011 16:40:00 +0100
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To: <4D4D7982.5090702@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com>
	<4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com>
Message-ID:

On 5 February 2011 17:23, Richard Loosemore wrote:
> So, to be fair, I will admit that the distinction between "How did
> this machine come to get built?" and "How does this machine actually
> work, now that it is built?" becomes rather less clear when we are
> talking about concept learning (because concepts play a role that fits
> somewhere between structure and content).

How a machine is built is immaterial to my argument. By a darwinian
program I refer to one whose purpose is, very roughly,
fitness-maximising.

Any such program may be the "natural" product of the mechanism of
inheritance/mutation/selection over time, or can be emulated by design.
In such a case, empathy, aggression, flight, selfishness etc. have a
rather literal sense, in that they are aspects of the reproductive
strategy of the individual concerned, and/or of the replicators he
carries around.

For anything which is neither biological nor designed to deliberately
emulate the Darwinian *functioning* of biological systems, *no matter
how intelligent it is*, I contend that aggression or altruism are
applicable only inasmuch as they are to ordinary PCs or other universal
computing devices.

If, on the other hand, AGIs are programmed to execute Darwinian
programs, obviously they would be inclined to adopt the mix of
behaviours which is best in Darwinian terms for their "genes", unless of
course the emulation is flawed. What else is new?

In fact, I maintain that they would be hardly discernible in
behavioural terms from a computer with an actual human brain inside.

-- 
Stefano Vaj

From stefano.vaj at gmail.com Mon Feb 7 17:16:43 2011
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 7 Feb 2011 18:16:43 +0100
Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems]
In-Reply-To: <4D4D7D59.8010205@lightlink.com>
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com>
Message-ID:

On 5 February 2011 17:39, Richard Loosemore wrote:
> This is exactly the line along which I am going.
> I have talked in the past about building AGI systems that are
> "empathic" to the human species, and which are locked into that state
> of empathy by their design. Your sentence above:
>
>> It seems to me that one safety precaution we would want to have is
>> for the first generation of AGI to see itself in some way as actually
>> being human, or self identifying as being very close to humans.
>
> ... captures exactly the approach I am taking. This is what I mean by
> building AGI systems that feel empathy for humans. They would BE
> humans in most respects.

If we accept that "normal" human-level empathy (that is, a mere
ingredient in the evolutionary strategies) is enough, we just have to
emulate a Darwinian machine as similar as possible in its behavioural
make-up to ourselves, and this shall automatically be part of its
repertoire - along with aggression, flight, sex, etc.

If, OTOH, your AGI is implemented in view of goals other than maximising
its fitness, it will be neither "altruistic" nor "selfish"; it will
simply execute the other program(s) it is given or instructed to
develop, as would any other less or more intelligent, less or more
dangerous, universal computing device.

-- 
Stefano Vaj

From sjatkins at mac.com Mon Feb 7 17:44:54 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Mon, 07 Feb 2011 09:44:54 -0800
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com>
	<4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com>
Message-ID: <9D2DE678-2DAA-46F6-80FF-4F24D780B8F0@mac.com>

On Feb 7, 2011, at 7:40 AM, Stefano Vaj wrote:

> On 5 February 2011 17:23, Richard Loosemore wrote:
>> So, to be fair, I will admit that the distinction between "How did
>> this machine come to get built?" and "How does this machine actually
>> work, now that it is built?" becomes rather less clear when we are
>> talking about concept learning (because concepts play a role that
>> fits somewhere between structure and content).
>
> How a machine is built is immaterial to my argument. By a darwinian
> program I refer to one whose purpose is, very roughly,
> fitness-maximising.

So you are calling any/all goal-seeking algorithms and anything running
them "darwinian"? That is a bit broad. Instead of "darwinian", which has
become quite a package deal of concepts and assumptions, perhaps use
"genetic algorithm based" when that is what you mean? Not all
goal-seeking is a GA. A genetic algorithm requires a fitness
function/measure of success, some means of variation, and a means of
preserving those instances and traits that score better by the fitness
function, possibly with some means of combining the more promising
candidates. (A minimal sketch follows below.)

>
> Any such program may be the "natural" product of the mechanism of
> inheritance/mutation/selection over time, or can be emulated by
> design. In such a case, empathy, aggression, flight, selfishness etc.
> have a rather literal sense, in that they are aspects of the
> reproductive strategy of the individual concerned, and/or of the
> replicators he carries around.
>

Here you seem to be mixing in things like reproduction and more
anthropomorphic elements that are quite specific to a small subset of
GAs.
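To make those ingredients concrete, here is a minimal GA sketch on a toy
bit-string problem (illustrative Python of my own; every name in it is
made up for the example, it is not from any actual AGI proposal):

    import random

    def fitness(genome):                 # measure of success
        return sum(genome)

    def mutate(genome, rate=0.05):       # means of variation
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):                 # combining promising candidates
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    # random starting population of 32-bit genomes
    population = [[random.randint(0, 1) for _ in range(32)]
                  for _ in range(50)]

    for generation in range(100):
        # preserve the instances that score better (truncation selection)
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = parents + [
            mutate(crossover(random.choice(parents),
                             random.choice(parents)))
            for _ in range(40)]

    print(max(fitness(g) for g in population))   # climbs toward 32

Remove the inheritance/selection loop and what is left is just ordinary
goal-seeking code, which is the point: most programs are not usefully
called darwinian.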
So you seem to have started with too broad a use of "darwinian" and then
from that assumed things that are true only of a much smaller subset of
things actually "darwinian".

> For anything which is neither biological nor designed to deliberately
> emulate the Darwinian *functioning* of biological systems, *no matter
> how intelligent it is*, I contend that aggression or altruism are
> applicable only inasmuch as they are to ordinary PCs or other
> universal computing devices.

That would not at all follow. Anything that wishes to preserve itself
and defines the good as that which furthers its interests and which has
enough freedom of action would likely exhibit some of these behaviors.
And it has little to do with "darwinian" per se.

- s

From sjatkins at mac.com Mon Feb 7 17:47:30 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Mon, 07 Feb 2011 09:47:30 -0800
Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com>
Message-ID:

On Feb 7, 2011, at 9:16 AM, Stefano Vaj wrote:

> On 5 February 2011 17:39, Richard Loosemore wrote:
>> This is exactly the line along which I am going. I have talked in
>> the past about building AGI systems that are "empathic" to the human
>> species, and which are locked into that state of empathy by their
>> design. Your sentence above:
>>
>>> It seems to me that one safety precaution we would want to have is
>>> for the first generation of AGI to see itself in some way as
>>> actually being human, or self identifying as being very close to
>>> humans.
>>
>> ... captures exactly the approach I am taking. This is what I mean
>> by building AGI systems that feel empathy for humans. They would BE
>> humans in most respects.
>
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible in its behavioural
> make-up to ourselves, and this shall automatically be part of its
> repertoire - along with aggression, flight, sex, etc.

Human empathy is not that deep nor is empathy per se some free-floating
good. Why would we want an AGI that was pretty much just like a human
except presumably much more powerful?

>
> If, OTOH, your AGI is implemented in view of goals other than
> maximising its fitness, it will be neither "altruistic" nor "selfish";
> it will simply execute the other program(s) it is given or instructed
> to develop, as would any other less or more intelligent, less or more
> dangerous, universal computing device.

Altruistic and selfish are quite overloaded and nearly useless concepts
as generally used.

- s

From stefano.vaj at gmail.com Mon Feb 7 17:20:13 2011
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 7 Feb 2011 18:20:13 +0100
Subject: [ExI] Plastination
In-Reply-To:
References: <20110201211412.8odezf09yccscgss@webmail.natasha.cc>
	<4D48CB94.9060303@canonizer.com> <039601cbc289$83505860$89f10920$@net>
	<20110203202305.GI23560@leitl.org>
Message-ID:

On 5 February 2011 09:06, Kelly Anderson wrote:
> On Fri, Feb 4, 2011 at 8:41 AM, Stefano Vaj wrote:
>> OTOH, it prevents falling asleep, thus allowing aliens to replace you
>> with perfect copies of yourself without anyone being any the wiser...
>> :-D

> If it is a "perfect" copy, then does it really matter? :-)

What about a rose by another name? :-)))

I sometimes wonder if our meditations on such questions since the Middle
Ages are akin to a loop in PC programming... :-)

-- 
Stefano Vaj

From rpwl at lightlink.com Mon Feb 7 17:53:43 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Mon, 07 Feb 2011 12:53:43 -0500
Subject: [ExI] Safety of human-like motivation systems [WAS Re: Oxford scientists...]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4C619A.5090804@lightlink.com>
	<4D4C6D02.1060503@satx.rr.com> <4D4D7982.5090702@lightlink.com>
Message-ID: <4D5031A7.7060207@lightlink.com>

Stefano Vaj wrote:
> On 5 February 2011 17:23, Richard Loosemore wrote:
>> So, to be fair, I will admit that the distinction between "How did
>> this machine come to get built?" and "How does this machine actually
>> work, now that it is built?" becomes rather less clear when we are
>> talking about concept learning (because concepts play a role that
>> fits somewhere between structure and content).
>
> How a machine is built is immaterial to my argument. By a darwinian
> program I refer to one whose purpose is, very roughly,
> fitness-maximising.
>
> Any such program may be the "natural" product of the mechanism of
> inheritance/mutation/selection over time, or can be emulated by
> design. In such a case, empathy, aggression, flight, selfishness etc.
> have a rather literal sense, in that they are aspects of the
> reproductive strategy of the individual concerned, and/or of the
> replicators he carries around.
>
> For anything which is neither biological nor designed to deliberately
> emulate the Darwinian *functioning* of biological systems, *no matter
> how intelligent it is*, I contend that aggression or altruism are
> applicable only inasmuch as they are to ordinary PCs or other
> universal computing devices.
>
> If, on the other hand, AGIs are programmed to execute Darwinian
> programs, obviously they would be inclined to adopt the mix of
> behaviours which is best in Darwinian terms for their "genes", unless
> of course the emulation is flawed. What else is new?
>
> In fact, I maintain that they would be hardly discernible in
> behavioural terms from a computer with an actual human brain inside.
>

Thank you for the clarification of your position.

Unfortunately, I have to make a stand here and say that I think this
line of analysis is profoundly incoherent (I am not saying *you* are
incoherent, I refer only to the general theoretical position that you
are defending here).

The main problem with your argument is that it begins with some quite
sensible talk about those features of naturally intelligent systems -
like empathy, aggression, selfishness, etc. - that have historically
played the role of helping reproductive success, but then your argument
goes screaming off in the opposite direction when I want to point to
the *mechanisms* that are inside the individuals, which *cause* those
features to appear on the outside.

Everything that I have been saying depends on talking about those
mechanisms -- their characteristics, their presence or absence in
various kinds of system, and so on. My claims are all about the
mechanisms themselves.
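A deliberately trivial sketch of what I mean by a mechanism (toy Python,
nothing like my actual architecture; every name in it is illustrative
only):

    class Drive:
        """A motivation mechanism: scores how strongly it urges an action."""
        def __init__(self, name, urges):
            self.name = name
            self.urges = urges            # maps action -> urge strength

        def urge(self, action):
            return self.urges.get(action, 0.0)

    class Agent:
        """An 'intelligence' core plus whatever drives are plugged in."""
        def __init__(self, drives):
            self.drives = drives

        def act(self, options):
            # the core just picks the option with the highest total urge
            return max(options,
                       key=lambda a: sum(d.urge(a) for d in self.drives))

    empathy = Drive("empathy", {"help": 1.0, "attack": -1.0})
    aggression = Drive("aggression", {"attack": 3.0})
    options = ["help", "attack", "ignore"]

    print(Agent([empathy]).act(options))              # -> help
    print(Agent([empathy, aggression]).act(options))  # -> attack

Leave the aggression mechanism out, and no amount of extra cleverness in
the core makes "attack" come out on top: aggression is a component that
has to be built in, not a free by-product of intelligence.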
But, in spite of all my efforts, you insist on jumping right over that
part of the topic and instead talking about the observable
characteristics of the systems, as if there were no mechanisms
underneath that are responsible for making the characteristics appear.
The way you describe the situation, it is as if aggression, empathy,
selfishness, etc. all suddenly appear out of nowhere.

For example, you say:

"For anything which is neither biological nor designed to deliberately
emulate the Darwinian *functioning* of biological systems, *no matter
how intelligent it is*, I contend that aggression or altruism are
applicable only inasmuch as they are to ordinary PCs or other universal
computing devices."

But this is surely nonsensical! If the mechanisms that cause aggression,
empathy and selfishness are built into a PC (along with all the
supporting mechanisms needed to make it intelligent) then the PC will
exhibit aggression, empathy and selfishness. But if the very same PC is
built with all the "intelligence" components, but WITHOUT the mechanisms
that give rise to aggression, empathy and selfishness, then it will not
show those characteristics.

There is nothing special about the system being "darwinian", nothing
special about it being a PC or a Turing machine of this, that or the
other type..... all that matters is that it be built with (a) a
reasonably full range of "intelligence" mechanisms, and in addition (b)
a set of motivation mechanisms such as aggression, empathy and
selfishness.

Aggression, empathy and selfishness don't come for free with systems of
any stripe (darwinian or otherwise). They don't appear out of thin air
if the system is competing against others in an ecosystem. They are
specific mechanisms that can, in the right circumstances, play a role in
a natural selection process.

You go on to make another statement that makes no sense, in this
context:

"If, on the other hand, AGIs are programmed to execute Darwinian
programs, obviously they would be inclined to adopt the mix of
behaviours which is best in Darwinian terms for their "genes", unless of
course the emulation is flawed. What else is new?"

This has nothing to do with what I was originally talking about; it is
just a claim about a certain class of AGIs, as a population, existing in
the context of the right ecosystem, with unavoidable sex, birth and
death of individuals, etc etc etc ....... in other words, your statement
assumes the full gamut of evolutionary mechanisms that are present in
natural ecosystems.

Under those very restricted circumstances, the mechanisms in the AGIs
that gave rise to aggression, empathy, selfishness, etc, would play a
role in selecting future mechanisms in the AGIs, and, yes, then the
mechanisms might evolve over time.

But this is, I am afraid, both (a) irrelevant to any claims I made about
the behavior of the first AGI, and (b) extraordinarily implausible
anyway, because all the conditions I just mentioned would likely be
completely inoperative! The AGIs would NOT be tied to sexual
reproduction, with mixing of genes, as their only way to reproduce. They
would NOT be existing in an ecosystem in which they had to compete for
resources, and so on and so on.

So, on both counts the position you are taking makes no sense here. It
says nothing of relevance to the question of what the motivation
mechanisms are, and how the behavior of the very first AGI would turn
out, when it is switched on.
And, further, it makes unsupportable assumptions about some future AGI
ecosystem that, in all likelihood, will never exist.

Richard Loosemore

From rpwl at lightlink.com Mon Feb 7 17:53:55 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Mon, 07 Feb 2011 12:53:55 -0500
Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com>
Message-ID: <4D5031B3.6050906@lightlink.com>

Stefano Vaj wrote:
> On 5 February 2011 17:39, Richard Loosemore wrote:
>> This is exactly the line along which I am going. I have talked in
>> the past about building AGI systems that are "empathic" to the human
>> species, and which are locked into that state of empathy by their
>> design. Your sentence above:
>>
>>> It seems to me that one safety precaution we would want to have is
>>> for the first generation of AGI to see itself in some way as
>>> actually being human, or self identifying as being very close to
>>> humans.
>> ... captures exactly the approach I am taking. This is what I mean
>> by building AGI systems that feel empathy for humans. They would BE
>> humans in most respects.
>
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible in its behavioural
> make-up to ourselves, and this shall automatically be part of its
> repertoire - along with aggression, flight, sex, etc.
>
> If, OTOH, your AGI is implemented in view of goals other than
> maximising its fitness, it will be neither "altruistic" nor "selfish";
> it will simply execute the other program(s) it is given or instructed
> to develop, as would any other less or more intelligent, less or more
> dangerous, universal computing device.
>

Non sequitur. As I explain in the parallel response to your other post,
the dichotomy you describe is utterly without foundation.

Richard Loosemore

From stefano.vaj at gmail.com Mon Feb 7 16:54:44 2011
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Mon, 7 Feb 2011 17:54:44 +0100
Subject: [ExI] sports blammisphy
In-Reply-To:
References: <007701cbc55f$09d9e130$1d8da390$@att.net>
	<009501cbc579$87f00580$97d01080$@att.net>
	<009601cbc57f$71abbcf0$550336d0$@att.net>
Message-ID:

On 6 February 2011 06:55, Kelly Anderson wrote:
> spike, I think this points out a recurring trans-humanist, cyborg and
> even fyborg theme. What is cheating in the brave new world we are
> making? If the Olympics are only open to original unenhanced human
> beings, then it just becomes a race to figure out who is enhanced, and
> who is not. It's already happening at the top level of sports, of
> course. But when we start talking about enhancements that are
> "built-in" to people, especially in the context of intellectual
> pursuits, is that really cheating any more?

No. In fact, it could be argued that the purpose of the prohibition of
"cheating" is in most cases to guarantee that any successful cheaters
need be so ingenious as to deserve to win...
:-)

More practically, as long as games and sports and exams aim at
reproducing scenarios which should be relevant to real-life situations,
when the everyday availability of the "tricks" and "enhancements"
becomes ubiquitous, I think it is reasonable to allow them on a general
basis. Is it really important anymore to test the skill of human beings
in performing very large multiplications, for example?

Of course, nothing prevents people from also creating purely artificial
contests where some "handicap" or other is imposed on contestants. Such
as fighting a boxing match with one hand behind your back, or running a
marathon without drinking, or solving math problems without calculators,
or not taking supplementation aimed at increasing one's performance, or
fishing with bamboo canes. As long as there is somebody interested, for
instance because it reproduces what one was faced with in bygone days,
there is nothing wrong with that...

-- 
Stefano Vaj

From sjatkins at mac.com Mon Feb 7 19:52:57 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Mon, 07 Feb 2011 11:52:57 -0800
Subject: [ExI] Voice operated computers
In-Reply-To: <93121.43962.qm@web27007.mail.ukl.yahoo.com>
References: <93121.43962.qm@web27007.mail.ukl.yahoo.com>
Message-ID: <4D504D99.7000409@mac.com>

On 02/06/2011 02:42 PM, Tom Nowell wrote:
> Spike wrote: "No, I don't think voice operated computers will ever
> appear in general use. Think about it. What happens when you get a
> group of people all shouting at their handheld computers? It's bad
> enough listening to other people's mobile phone conversations."

Subvocalization is your friend. What I don't want is voice output. Voice
and, for that matter, video, is notoriously linear and only capable of
so much playback speed increase while remaining comprehensible. I am
very sad that it is becoming quite popular to make video instead of
using text for more and more information transfer. It is not amenable to
search, indexing or quick scanning. A step backwards in my view. I read
a LOT faster than I process speech.

- s

> You get my workplace when the phones are busy. People call in, and you
> can hear nothing but several phone conversations at once. Noise
> doesn't stop the modern office. Also, voice-activated computers
> currently exist for automated phone lines, more sophisticated ones
> could replace call centres.
>
> Finally, thinking how many people were talking to themselves in my
> local coffee shop this morning (well, maybe they were talking to
> someone on their mobile phone using hands-free, but I think they're
> all crazy people sent to annoy me while I go to get a drink) you'll be
> surprised how much noise and social annoyance people can take.
> > Tom > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Mon Feb 7 20:42:09 2011 From: spike66 at att.net (spike) Date: Mon, 7 Feb 2011 12:42:09 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <4D504D99.7000409@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> Message-ID: <006301cbc707$7db4a3c0$791deb40$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins Subject: Re: [ExI] Voice operated computers On 02/06/2011 02:42 PM, Tom Nowell wrote: > Spike wrote (actually he did not write): " No, I don't think voice operated computers will ever > appear in general use... But I can meet this notion part way. The application I have in mind is not general use, but rather a very specific use: human-like interfaces for impaired humans. > What happens when you get a group of people all shouting at their handheld computers?... Don't know, don't see any reason why they would do that. A properly interfaced voice activated computer wouldn't require it. Samantha wrote: > I read a LOT faster than I process speech... -s Me too and I am also frustrated with more and more news content in the form of video, which I can seldom summon sufficient attention span to view. I want only text, if the purpose in information exchange. Speech is too slow, and the hearer has too little control over it, even with a scroll bar. This does something interesting in political speeches, worthy of study. We take a control group who hears a political speech, audio only. A second group gets audio and visual. A third group gets text only. Afterwards, we compare scores on comprehension, and perhaps have them choose the important messages. I suspect the audio-only group and the audio-visual group might be similar, but the text only group would get a very different message. spike From mbb386 at main.nc.us Mon Feb 7 22:51:05 2011 From: mbb386 at main.nc.us (MB) Date: Mon, 7 Feb 2011 17:51:05 -0500 Subject: [ExI] Voice operated computers In-Reply-To: <006301cbc707$7db4a3c0$791deb40$@att.net> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> Message-ID: <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> > Samantha wrote: > I read a LOT faster than I process speech... -s > > Spike wrote: > Me too and I am also frustrated with more and more news content in the form > of video, which I can seldom summon sufficient attention span to view. I > want only text, if the purpose in information exchange. Speech is too slow, > and the hearer has too little control over it, even with a scroll bar. > I have trouble with this as well. If it's worth my time I want to be able to *study* on it a bit... not just have some flash jiggety jiggety go by on my screen. 
:(

Regards,
MB

From kellycoinguy at gmail.com Tue Feb 8 03:07:21 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Mon, 7 Feb 2011 20:07:21 -0700
Subject: [ExI] Voice operated computers
In-Reply-To: <4D504D99.7000409@mac.com>
References: <93121.43962.qm@web27007.mail.ukl.yahoo.com>
	<4D504D99.7000409@mac.com>
Message-ID:

On Mon, Feb 7, 2011 at 12:52 PM, Samantha Atkins wrote:
> On 02/06/2011 02:42 PM, Tom Nowell wrote:
>>
>> Spike wrote: "No, I don't think voice operated computers will ever
>> appear in general use. Think about it. What happens when you get a
>> group of people all shouting at their handheld computers? It's bad
>> enough listening to other people's mobile phone conversations."
>
> Subvocalization is your friend. What I don't want is voice output.
> Voice and, for that matter, video, is notoriously linear and only
> capable of so much playback speed increase while remaining
> comprehensible. I am very sad that it is becoming quite popular to
> make video instead of using text for more and more information
> transfer. It is not amenable to search, indexing or quick scanning. A
> step backwards in my view. I read a LOT faster than I process speech.

Bad video is indeed not your friend. However, there are times when video
conveys a lot more information than you could get from text. For
example, I made some videos showing how to solve very large Rubik's
cubes quickly. I don't think I could have done nearly as good a job at
that without the video. My friends at Orabrush are extremely happy with
their video marketing, since it turned them from a complete flop to a
great success. That simply could not have happened as quickly or as well
without video.

Video of talking heads saying nothing, I can sure do without that. And
enough of teenagers doing stupid and dangerous stuff. The medium must
match the message. Without television, JFK and Ronald Reagan would never
have been elected. Without the Internet, Ron Paul would have been just
another ignored third-party candidate. Using the right medium in the
right way is critical.

As we start to see more and more 3D being used, you can be sure that a
lot of it will be used badly (e.g. The Last Airbender). But with Avatar,
we see that it can be used to great effect when you go all the way. In
the future, we will have projection on the retinas, which should not be
much different than 3D video in its usage; it's basically a new kind of
monitor. Heads-up displays in cars should become more popular sometime
in the next few years. Night vision is just too effective not to have it
in some cars some time. But if it's used to project video and other
distracting things, then it will be very bad.

The next really different medium IMHO is haptics. It seems fairly clear
to me that the technology will first be driven by teledildonics (after
all, look at who first put video on the web). But who will be the first
presidential candidate to be elected (or gain notoriety) because they
make creative use of a haptic interface? It gives a whole new meaning to
"I feel your pain"... :-)

-Kelly

From kellycoinguy at gmail.com Tue Feb 8 03:15:03 2011
From: kellycoinguy at gmail.com (Kelly Anderson)
Date: Mon, 7 Feb 2011 20:15:03 -0700
Subject: [ExI] sports blammisphy
In-Reply-To:
References: <007701cbc55f$09d9e130$1d8da390$@att.net>
	<009501cbc579$87f00580$97d01080$@att.net>
	<009601cbc57f$71abbcf0$550336d0$@att.net>
Message-ID:

> No.
In fact, it could be argued that the purpose of the prohibition of > "cheating" is in most case to guarantee that possible successful > cheaters need be so ingenuous as to deserve to win... :-) This certainly seems to be the recent history of the Olympics. Even when it's not cheating, such as the new US bobsled that is unveiled every four years. > More practically, as long as games and sports and exams aim at > reproducing scenarios which should be relevant to real-life > situations, when the everyday availability of the "tricks" and > "enhancements" become ubiquitous, I think it is reasonable to allow > them on a general basis. Is it really important anymore to test the > skill of human beings in performing very large multiplications, eg? I think this is clearly going to be the case for large base participative sports such as high school football, basketball, etc. But when you get into the elite levels of sports, I think there will be significant resistance to such things for some time. > Of course, nothing prevents people from creating as well purely > artificial contests where some "handicap" or other is imposed on > contestants. Such as fighting a boxe match with one hand behind your > back, or run a marathon without drinking, or resolve math problems > without calculators, or not taking supplementation aimed at increasing > one's performance, or fishing with bamboo canes. As long as there is > somebody interested, for instance as it may reproduce what one was > faced with in bygone days, nothing wrong with that... One question of interest is whether enhancements will be made such that they can be "turned off". So if you have the artificial blood cells that allow you to process oxygen efficiently and hold your breath for twenty minutes, can you turn it off in order to participate in the Olympic marathon? -Kelly From kellycoinguy at gmail.com Tue Feb 8 03:23:47 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 7 Feb 2011 20:23:47 -0700 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible in its behavioural > making to ourselves, and this shall be automatically part of its > repertoire - along with aggression, flight, sex, etc. > > If, OTOH, your AGI is implemented in view of other goals than maximing > its fitness, it will be neither "altruistic" nor "selfish", it will > simply execute the other program(s) it is being given or instructed to > develop as any other less or more intelligent, less or more dangerous, > universal computing device. The real truth of the matter is that AGIs will be manufactured (or trained) with all sorts of tweaking. There will be loving AGIs, and Spock-like AGIs. There will undoubtedly be AGIs with personality disorders, perhaps surpassing Hitler in their cruelty. If for no other reason than to be an opponent in an advanced video game. Just recall that if it can be done, it will be done. The question for us is what sorts of rights we give AGIs. Is there any way to keep bad AGIs "in the bottle" in some safe context? 
Will there even be a way of determining that an AGI is, in fact, a sociopath? We can't even find the Ted Bundys among us. Policing in the future is going to be very interesting. What sorts of AGIs will we create to be the police of the future? Certainly people won't be able to police them. We can't keep the law up with technology now. What privacy rights will an AGI have? It's all very messy. Should be fun! -Kelly From stefano.vaj at gmail.com Tue Feb 8 11:19:49 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Feb 2011 12:19:49 +0100 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: On 7 February 2011 18:47, Samantha Atkins wrote: > Human empathy is not that deep nor is empathy per se some free floating good. ? Why would we want an AGI that was pretty much just like a human except presumably much more powerful? I can think only of two reasons: - for the same reason we may want to develop an emulation of a cat or of a bug, that is, for the sake of it, as an achievement which is interesting per se; - for the same reason we paint realistic portraits of living human beings, to perpetuate some or most of their traits for the foreseeable future (see under "upload"). For everything else, computers may become indefinitely more intelligent and ingenuous at resolving diverse categories of problems without exhibiting any bio-like features such as altruism, selfishness, aggression, sexual drive, will to power, empathy, etc. more than they do today. > Altruistic and selfish are quite overloaded and nearly useless concepts as generally used. I suspect that you are right. -- Stefano Vaj From stefano.vaj at gmail.com Tue Feb 8 11:08:23 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 8 Feb 2011 12:08:23 +0100 Subject: [ExI] Voice operated computers In-Reply-To: <4D504D99.7000409@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> Message-ID: On 7 February 2011 20:52, Samantha Atkins wrote: > I am very sad > that it is becoming quite popular to make video instead of using text for > more and more information transfer. ?It is not amenable to search, indexing > or quick scanning. ?A step backwards in my view. ?I read a LOT faster than I > process speech. Indeed. With a PS3 you can watch a blu-ray movie at 2x speed without a mickey-mouse distortion, but it is crazy to have to watch a talking head on Youtube to listen at things that you could so much more comfortably and quickly read... I wonder whether this is a byproduct of increasing semi-literacy in western countries. -- Stefano Vaj From hkeithhenson at gmail.com Tue Feb 8 21:50:52 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 8 Feb 2011 14:50:52 -0700 Subject: [ExI] Anonymous and AI Message-ID: I am kind of surprised there is no discussion of this http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments It is obvious to me that the (probable) emergence of AI will come from this "group." The motivation of the AI will be "lulz." Are we doomed? 
Keith PS :-) From pharos at gmail.com Tue Feb 8 23:23:22 2011 From: pharos at gmail.com (BillK) Date: Tue, 8 Feb 2011 23:23:22 +0000 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: On Tue, Feb 8, 2011 at 9:50 PM, Keith Henson wrote: > I am kind of surprised there is no discussion of this > > http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments > > It is obvious to me that the (probable) emergence of AI will come from > this "group." ?The motivation of the AI will be "lulz." > > Are we doomed? > > Well, if you don't have a website, it can't be hacked. If you publicise yourself as a security consultancy firm, then you should make pretty certain that your internet-facing websites are secure. That is not a trivial exercise. Which is why so many companies (and government departments) have pretty useless security. A large part of this attack was social engineering, not computer hacking at all. Humans are a weak spot in most organisations. You need extra fail-safe security to guard against people being manipulated. People like to be helpful, like holding the door open for the pretty blonde who has forgotten her entry swipe card. Business laptops get stolen / lost every week with confidential information on them. Why aren't the hard disks encrypted? If this companies stolen emails contained valuable information, why weren't they encrypted? Proper security is an expensive pain in the ass for everyone involved, but you ignore it at your own risk. BillK From spike66 at att.net Tue Feb 8 23:04:28 2011 From: spike66 at att.net (spike) Date: Tue, 8 Feb 2011 15:04:28 -0800 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <005201cbc7e4$89d0e230$9d72a690$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Keith Henson Subject: [ExI] Anonymous and AI >http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-secur ity-company-hbgary?commentpage=all#start-of-comments >It is obvious to me that the (probable) emergence of AI will come from this "group." ... >Are we doomed? >Keith >PS :-) What you mean "we" Kimosabe? Anonymous PS {8^D From sjatkins at mac.com Wed Feb 9 02:59:16 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 08 Feb 2011 18:59:16 -0800 Subject: [ExI] Voice operated computers In-Reply-To: <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> Message-ID: <4D520304.1090807@mac.com> On 02/07/2011 02:51 PM, MB wrote: > >> Samantha wrote: >> I read a LOT faster than I process speech... -s >> >> Spike wrote: >> Me too and I am also frustrated with more and more news content in the form >> of video, which I can seldom summon sufficient attention span to view. I >> want only text, if the purpose in information exchange. Speech is too slow, >> and the hearer has too little control over it, even with a scroll bar. >> > I have trouble with this as well. If it's worth my time I want to be able to > *study* on it a bit... not just have some flash jiggety jiggety go by on my screen. 
> :(

What really really annoys me is some 20-something (or older) doing their
little "look at me and my awesome video" thing or "I don't really give a
frak about anything" video garbage for N precious irreplaceable minutes
of my life BEFORE I can know what the (at best) small fraction of N
worth of real information they have to impart is. N is much smaller in
text, and I don't have the TMI of the rest of their stuff to deal with.
In text I don't have to see the author's face, note their persona, react
to what I can see of them in their video, know what their voice sounds
like, etc. Like I say, TMI. Sticking all that in my face instead of the
information I am actually after is pretty annoying.

- samantha

From sjatkins at mac.com Wed Feb 9 03:06:35 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 08 Feb 2011 19:06:35 -0800
Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems]
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com>
Message-ID: <4D5204BB.1080605@mac.com>

On 02/08/2011 03:19 AM, Stefano Vaj wrote:
> On 7 February 2011 18:47, Samantha Atkins wrote:
>> Human empathy is not that deep nor is empathy per se some
>> free-floating good. Why would we want an AGI that was pretty much
>> just like a human except presumably much more powerful?
> I can think only of two reasons:
> - for the same reason we may want to develop an emulation of a cat or
> of a bug, that is, for the sake of it, as an achievement which is
> interesting per se;
> - for the same reason we paint realistic portraits of living human
> beings, to perpetuate some or most of their traits for the foreseeable
> future (see under "upload").
>
> For everything else, computers may become indefinitely more
> intelligent and ingenuous at resolving diverse categories of problems
> without exhibiting any bio-like features such as altruism,

If by altruism you mean sacrificing your values, just because they are
yours, to the values of others, just because they are not yours, then it
is a very bizarre thing to glorify, practice or hope that our AGIs
practice. It is on the face of it hopelessly irrational and
counter-productive toward achieving what we actually value. If an AGI
practices that just on the grounds someone said it "should", then it is
in need of serious debugging.

- samantha

From sjatkins at mac.com Wed Feb 9 03:16:51 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 08 Feb 2011 19:16:51 -0800
Subject: [ExI] Anonymous and AI
In-Reply-To:
References:
Message-ID: <4D520723.6060301@mac.com>

On 02/08/2011 03:23 PM, BillK wrote:
> On Tue, Feb 8, 2011 at 9:50 PM, Keith Henson wrote:
>> I am kind of surprised there is no discussion of this
>>
>> http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments
>>
>> It is obvious to me that the (probable) emergence of AI will come
>> from this "group." The motivation of the AI will be "lulz."
>>
>> Are we doomed?
>>
>>
>
> Well, if you don't have a website, it can't be hacked.
>
> If you publicise yourself as a security consultancy firm, then you
> should make pretty certain that your internet-facing websites are
> secure. That is not a trivial exercise.
> Which is why so many companies (and government departments) have > pretty useless security. > > A large part of this attack was social engineering, not computer hacking at all. > Humans are a weak spot in most organisations. You need extra fail-safe > security to guard against people being manipulated. People like to be > helpful, like holding the door open for the pretty blonde who has > forgotten her entry swipe card. > > Business laptops get stolen / lost every week with confidential > information on them. Why aren't the hard disks encrypted? If this > companies stolen emails contained valuable information, why weren't > they encrypted? > > Proper security is an expensive pain in the ass for everyone involved, > but you ignore it at your own risk. > It would help if more systems used good biometrics instead of passwords and cardkeys. We aren't quite at the place where a simple webcam plus voice plus fingerprint is good enough. Or a subdermal chip somehow locked to your metabolism so just sending the data bits would not work. Hmm.. Of course that kicks the hell out of anonymity unless your nym system is secured to said identity and immune to attack and unwelcome snoops. - s From mbb386 at main.nc.us Wed Feb 9 04:01:37 2011 From: mbb386 at main.nc.us (MB) Date: Tue, 8 Feb 2011 23:01:37 -0500 Subject: [ExI] Voice operated computers In-Reply-To: <4D520304.1090807@mac.com> References: <93121.43962.qm@web27007.mail.ukl.yahoo.com> <4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net> <4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us> <4D520304.1090807@mac.com> Message-ID: <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us> Webcasts drive me bonkers too. I listen much slower than I read, forget parts of what I heard, have questions about bits of it, but poof, it's gone now, moving on... and I lost it. :( What a waste of my time. Especially if one is somewhat hard of hearing. When I was a kid I asked my mom why on earth they made announcements in church when they gave out written bulletins at the door - with the announcements printed in! She hadn't any sensible answer. :))) Regards, MB From eugen at leitl.org Wed Feb 9 07:43:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 08:43:35 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <20110209074335.GF23560@leitl.org> On Tue, Feb 08, 2011 at 02:50:52PM -0700, Keith Henson wrote: > I am kind of surprised there is no discussion of this > > http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments > > It is obvious to me that the (probable) emergence of AI will come from > this "group." The motivation of the AI will be "lulz." There's actually a faint possibility that the malware ecosystem will eventually produce increasingly sophisticated self-propagating malware, which could eventually take over large fractions of a network in a single domain, and use the computational resources of the compromised hosts to run increasingly sophisticated, albeit likely still nefarious code. > Are we doomed? 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 9 07:46:23 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 08:46:23 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: Message-ID: <20110209074623.GG23560@leitl.org> On Tue, Feb 08, 2011 at 11:23:22PM +0000, BillK wrote: > Well, if you don't have a website, it can't be hacked. If you're dead, you can't be killed. From giulio at gmail.com Wed Feb 9 07:59:40 2011 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 9 Feb 2011 08:59:40 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: <20110209074335.GF23560@leitl.org> References: <20110209074335.GF23560@leitl.org> Message-ID: There is a good SF book on this: http://www.theminervavirus.com/ written by Brian Shuster of Red Light Center and Utherverse fame: http://en.wikipedia.org/wiki/Utherverse_Inc. I read the book a few years ago and found it very good, especially considering that the author is not a professional SF writer. I just bought it again in Kindle format for 3.44 $ (!!), recommended. G. On Wed, Feb 9, 2011 at 8:43 AM, Eugen Leitl wrote: > On Tue, Feb 08, 2011 at 02:50:52PM -0700, Keith Henson wrote: >> I am kind of surprised there is no discussion of this >> >> http://www.guardian.co.uk/technology/2011/feb/07/anonymous-attacks-us-security-company-hbgary?commentpage=all#start-of-comments >> >> It is obvious to me that the (probable) emergence of AI will come from >> this "group." ?The motivation of the AI will be "lulz." > > There's actually a faint possibility that the malware ecosystem > will eventually produce increasingly sophisticated self-propagating > malware, which could eventually take over large fractions of a > network in a single domain, and use the computational resources > of the compromised hosts to run increasingly sophisticated, > albeit likely still nefarious code. > >> Are we doomed? > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A ?7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From pharos at gmail.com Wed Feb 9 09:06:46 2011 From: pharos at gmail.com (BillK) Date: Wed, 9 Feb 2011 09:06:46 +0000 Subject: [ExI] Anonymous and AI In-Reply-To: <20110209074623.GG23560@leitl.org> References: <20110209074623.GG23560@leitl.org> Message-ID: On Wed, Feb 9, 2011 at 7:46 AM, Eugen Leitl wrote: > On Tue, Feb 08, 2011 at 11:23:22PM +0000, BillK wrote: > >> Well, if you don't have a website, it can't be hacked. > > If you're dead, you can't be killed. > > The company involved had internal networks that weren't connected to the internet. These computers weren't hacked. Computers that are linked to the internet need extra security. It is good practice to keep confidential data, so far as possible, on private networks. 
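And the confidential data itself can be encrypted cheaply, wherever it
sits. A minimal sketch with the Python "cryptography" package (an
illustrative choice on my part, assuming it is available; storing the
key securely is the hard part and is not shown):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the key must itself be kept secret
    f = Fernet(key)
    token = f.encrypt(b"confidential business email")
    assert f.decrypt(token) == b"confidential business email"

A stolen laptop or mail spool then yields only ciphertext, so the damage
from a leak stops scaling with the value of what leaked.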
BillK From eugen at leitl.org Wed Feb 9 09:54:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 10:54:44 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: <20110209074623.GG23560@leitl.org> Message-ID: <20110209095444.GJ23560@leitl.org> On Wed, Feb 09, 2011 at 09:06:46AM +0000, BillK wrote: > The company involved had internal networks that weren't connected to > the internet. These computers weren't hacked. System utility surface is directly proportional to vulnerability. The only invulnerable systems are completely useless, and hence of no concern to us. > Computers that are linked to the internet need extra security. It is > good practice to keep confidential data, so far as possible, on > private networks. The problem with basic good practices or even common sense is that real systems and real people don't care, and you can't make them care. Also, worse is better definitely applies. So let's just get used to dealing with insecure systems. In fact, not all is bad, as a planet made of swiss cheese is just great if you're a mouse. Yum. From eugen at leitl.org Wed Feb 9 11:04:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 9 Feb 2011 12:04:32 +0100 Subject: [ExI] Anonymous and AI In-Reply-To: References: <20110209074335.GF23560@leitl.org> Message-ID: <20110209110432.GL23560@leitl.org> On Wed, Feb 09, 2011 at 08:59:40AM +0100, Giulio Prisco wrote: > There is a good SF book on this: > > http://www.theminervavirus.com/ > > written by Brian Shuster of Red Light Center and Utherverse fame: > http://en.wikipedia.org/wiki/Utherverse_Inc. > > I read the book a few years ago and found it very good, especially > considering that the author is not a professional SF writer. I just > bought it again in Kindle format for 3.44 $ (!!), recommended. Somewhat related, there's also Daemon, and its sequel, Freedom, by Daniel Suarez. (Caution: some suspension of disbelief required, at times some heavy gamer cheese present). From stefano.vaj at gmail.com Wed Feb 9 12:04:44 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 9 Feb 2011 13:04:44 +0100 Subject: [ExI] Empathic AGI [WAS Safety of human-like motivation systems] In-Reply-To: <4D5204BB.1080605@mac.com> References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> <4D5204BB.1080605@mac.com> Message-ID: On 9 February 2011 04:06, Samantha Atkins wrote: > If by altruism you mean sacrificing your values, just because they are > yours, to the values of others, just because they are not yours, then it is > a very bizarre thing to glorify, practice or hope that our AGIs practice. > ?It is on the face of it hopelessly irrational and counter-productive toward > achieving what we actually value. ? If an AGI practices that just on the > grounds someone said they "should" then it is need of a serious debugging. I fully agree. 
-- 
Stefano Vaj

From jonkc at bellsouth.net Wed Feb 9 17:41:56 2011
From: jonkc at bellsouth.net (John Clark)
Date: Wed, 9 Feb 2011 12:41:56 -0500
Subject: [ExI] Empathic AGI
In-Reply-To:
References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org>
	<4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com>
	<4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com>
	<4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com>
Message-ID:

On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote:
>
> If we accept that "normal" human-level empathy (that is, a mere
> ingredient in the evolutionary strategies) is enough, we just have to
> emulate a Darwinian machine as similar as possible

Two difficulties with that:

1) The Darwinian process is more like history than mathematics; it is
not repeatable, and very small changes in initial conditions could lead
to huge differences in output.

2) Human-level empathy is aimed at human-level beings; the further from
that level, the less empathy we have. We have less empathy for a cow
than a person, and less for an insect than a cow. As the AI's
intelligence gets larger its empathy for us will get smaller, although
its empathy for its own kind might be enormous.

 John K Clark

From msd001 at gmail.com Wed Feb 9 22:26:15 2011
From: msd001 at gmail.com (Mike Dougherty)
Date: Wed, 9 Feb 2011 17:26:15 -0500
Subject: [ExI] Voice operated computers
In-Reply-To: <4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us>
References: <93121.43962.qm@web27007.mail.ukl.yahoo.com>
	<4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net>
	<4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us>
	<4D520304.1090807@mac.com>
	<4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us>
Message-ID:

On Tue, Feb 8, 2011 at 11:01 PM, MB wrote:
> Webcasts drive me bonkers too.
>
> I listen much slower than I read, forget parts of what I heard, have
> questions about bits of it, but poof, it's gone now, moving on... and
> I lost it. :(
>
> What a waste of my time. Especially if one is somewhat hard of
> hearing.
>
> When I was a kid I asked my mom why on earth they made announcements
> in church when they gave out written bulletins at the door - with the
> announcements printed in! She hadn't any sensible answer. :)))

Actually her answer was very sensible, however she didn't email it so
it was lost soon after it was uttered. :)

From mbb386 at main.nc.us Thu Feb 10 00:21:10 2011
From: mbb386 at main.nc.us (MB)
Date: Wed, 9 Feb 2011 19:21:10 -0500
Subject: [ExI] Voice operated computers
In-Reply-To:
References: <93121.43962.qm@web27007.mail.ukl.yahoo.com>
	<4D504D99.7000409@mac.com> <006301cbc707$7db4a3c0$791deb40$@att.net>
	<4d4dd72da76f46aa80f0fa65a9a5915c.squirrel@www.main.nc.us>
	<4D520304.1090807@mac.com>
	<4e121e31a1ffbc9450c198cb237cc4bf.squirrel@www.main.nc.us>
Message-ID: <418fac29d61f55082ba804beba6fdb64.squirrel@www.main.nc.us>

>> She hadn't any sensible answer. :)))
>
> Actually her answer was very sensible, however she didn't email it so
> it was lost soon after it was uttered. :)
>

Hee! She probably said something along the lines of "they don't look" or
"they won't read" - and I read everything that passed before my eyes. A
textaholic from pre-school on. :)

To my way of thinking, "not looking" and "not reading" are poor excuses.
The building was filled with educated literate people.
Regards, MB From spike66 at att.net Thu Feb 10 06:35:41 2011 From: spike66 at att.net (spike) Date: Wed, 9 Feb 2011 22:35:41 -0800 Subject: [ExI] watson on nova Message-ID: <001f01cbc8ec$bd623c80$3826b580$@att.net> Lots of good Watson stuff in this NOVA episode, plenty to get me jazzed: http://video.pbs.org/video/1757221034 The good stuff is between about 15 minutes and 28 minutes. We will have practical companion computers very soon. All doubts I once suffered have vanished with this NOVA episode. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 10 09:37:25 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 10 Feb 2011 10:37:25 +0100 Subject: [ExI] [cryo] new cryonics blog up Message-ID: <20110210093725.GD23560@leitl.org> http://chronopause.com/ Please link from your blogs, if any. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE _______________________________________________ cryo mailing list cryo at postbiota.org http://postbiota.org/mailman/listinfo/cryo From sjatkins at mac.com Fri Feb 11 18:38:16 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 11 Feb 2011 10:38:16 -0800 Subject: [ExI] Empathic AGI In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: <4D558218.3070001@mac.com> On 02/09/2011 09:41 AM, John Clark wrote: > On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote: >> >> If we accept that "normal" human-level empathy (that is, a mere >> ingredient in the evolutionary strategies) is enough, we just have to >> emulate a Darwinian machine as similar as possible > > Two difficulties with that: > > 1) The Darwinian process is more like history than mathematics, it is > not repeatable, very small changes in initial conditions could lead to > huge differences in output. > > 2) Human-level empathy is aimed at Human-level beings, the further > from that level the less empathy we have. We have less empathy for a > cow than a person and less for an insect than a cow. As the AI's > intelligence gets larger its empathy for us will get smaller although > its empathy for its own kind might be enormous. > Yes, we understand how interdependent peer level beings will naturally develop a set of ethical guides for how they treat one another and the ability to model one another. We don't have much/any idea of how this would arise among beings of radically different natures and abilities there are not so interdependent regarding their treatment of one another. - samantha From jrd1415 at gmail.com Fri Feb 11 20:36:29 2011 From: jrd1415 at gmail.com (Jeff Davis) Date: Fri, 11 Feb 2011 13:36:29 -0700 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference In-Reply-To: References: <000001cbb503$9435bc30$bca13490$@att.net> Message-ID: I hope there will be some form of webcast of this event. Live would be good. Best, jeff davis 2011/1/15 Max More > I'll be speaking at this conference, and hope to see as many of you as > possible -- especially those of you I haven't seen in too long. 
> > Max > > > 2011/1/15 spike > >> I am forwarding this from James Clement while we work out some issues with >> the server. spike >> >> ---------- Forwarded message ---------- >> From: James Clement >> To: extropy-chat at lists.extropy.org >> Date: Sat, 15 Jan 2011 13:52:58 -0800 >> Subject: Suspended Animation Cryonics Conference >> >> Announcing a new cryonics conference for May 20-22, 2011, in Ft. >> Lauderdale, FL >> >> Thanks, >> James Clement >> >> [image: Description: Suspended Animation] >> >> Dear Friend, >> >> Can you imagine the future? When we'll travel to other stars. Have >> super-intelligent computers. Robot servants. And nanomachines that keep us >> young and healthy for centuries! Will you live long enough to experience all >> this? >> >> "Unlikely," you say? Not necessarily. Suspended Animation can be your >> bridge to the advances of the future. The technology is here today to have >> you cryopreserved for future reanimation. To enable you to engage in time >> travel to the spectacular advances of the future. >> >> This technology is far from perfect now. But it is good enough to give you >> a chance at unlimited life and prosperity. Remarkable advances in >> cryopreservation have already been achieved. Millions of dollars are being >> spent to achieve perfected suspended animation and new technologies to >> revive time travelers in the future. >> >> You can learn all about these technologies at a conference in South >> Florida on May 20-22, 2011. >> At this conference, the foremost authorities in human cryopreservation and >> future reanimation will convene at the Hyatt Regency Pier 66 Resort and Spa >> in Ft. Lauderdale. They will inform you about pathbreaking research advances >> that could make your most exciting dreams come true. >> >> This conference is being sponsored by Suspended Animation, Inc. (SA), a >> company in Boynton Beach, Florida, where advanced human cryopreservation >> equipment and services are being developed. After you've been enlightened by >> imagination-stretching presentations about today's scientifically credible >> technologies and the projected advances of tomorrow at the Hyatt Regency, >> you'll be transported to SA's extraordinary laboratory where you will be >> able to see some of these technologies for yourself. >> >> The link in this e-mail gives you special access to a downloadable >> brochure, as well as registration options, so you can get all the details of >> this remarkable conference that will enable you to obtain the information >> you need to give yourself the opportunity of a lifetime! >> >> *Visit the Conference Page * >> >> [image: Description: Catherine Baldwin] >> >> Catherine Baldwin >> General Manager >> Suspended Animation, Inc. >> >> Suspended Animation, Inc. >> 3020 High Ridge Road, Suite 300 >> Boynton Beach, FL 33426 >> >> Telephone *(561) 296-4251* >> Facsimile *(561) 296-4255* >> Emergency (888) 660-7128 >> >> _______________________________________________ >> extropy-chat mailing list >> >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hkeithhenson at gmail.com Sat Feb 12 00:10:14 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 11 Feb 2011 17:10:14 -0700 Subject: [ExI] Anons Message-ID: There are many pointers into this complex of stories, all of which hinge off Wikileaks. http://thinkprogress.org/ More than a decade ago I proposed that the training net activists got while learning to cope with a certain cult would be a warm-up exercise for a major confrontation with a government. Then for a while I figured the government had learned from watching the fate of said cult. They didn't. Keith (HBGary is effectively part of the US government.) From spike66 at att.net Sat Feb 12 00:40:29 2011 From: spike66 at att.net (spike) Date: Fri, 11 Feb 2011 16:40:29 -0800 Subject: [ExI] spokeo was: RE: Anons Message-ID: <009401cbca4d$72a2de40$57e89ac0$@att.net> >... On Behalf Of Keith Henson >...Subject: [ExI] Anons >...There are many pointers into this complex of stories, all of which hinge off Wikileaks. > http://thinkprogress.org/ >...Keith Thanks Keith! As vaguely related to wikileaks, a friend sent me a link to Spokeo recently. I found it interesting, because I entered my name (the one that is on my birth certificate) and it knew about "spike." I have never tried to keep that a secret, was listed under spike for 26 years in the company phone directory for instance. But I still haven't figured out how Spokeo knew about that. It doesn't know about my son either. Anyways, try it out: type in the name of your old buddy from high school (assuming he has an unusual name) and see what you find: http://www.spokeo.com/ Goodbye privacy, hello openness. spike From max at maxmore.com Sat Feb 12 00:53:02 2011 From: max at maxmore.com (Max More) Date: Fri, 11 Feb 2011 17:53:02 -0700 Subject: [ExI] Fwd: Suspended Animation Cryonics Conference In-Reply-To: References: <000001cbb503$9435bc30$bca13490$@att.net> Message-ID: There will be both a webcast and a DVD. --- Max 2011/2/11 Jeff Davis > I hope there will be some form of webcast of this event. Live would be > good. > > Best, jeff davis > > 2011/1/15 Max More > > I'll be speaking at this conference, and hope to see as many of you as >> possible -- especially those of you I haven't seen in too long. >> >> Max >> > -- Max More Strategic Philosopher Co-founder, Extropy Institute CEO, Alcor Life Extension Foundation 7895 E. Acoma Dr # 110 Scottsdale, AZ 85260 877/462-5267 ext 113 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Feb 12 07:49:09 2011 From: pharos at gmail.com (BillK) Date: Sat, 12 Feb 2011 07:49:09 +0000 Subject: [ExI] Anons In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 12:10 AM, Keith Henson wrote: > There are many pointers into this complex of stories, all of which > hinge off Wikileaks. > > http://thinkprogress.org/ > > More than a decade ago I proposed that the training net activists got > while learning to cope with a certain cult would be a warm-up exercise for a > major confrontation with a government. > > Then for a while I figured the government had learned from watching > the fate of said cult. > They didn't. > > Keith > (HBGary is effectively part of the US government.) > Two comments. First. The current saying is that companies (and government departments) spend more on coffee than they do on computer security. The majority of web sites have been hacked and they still don't care. It's like product liability. Until the cost of damages gets high enough they just won't bother.
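To put rough numbers on that calculus: the standard back-of-envelope in security risk management is annualized loss expectancy (ALE), the expected cost of one breach times the expected number of breaches per year. A minimal sketch, with figures invented purely for illustration:

```python
# Back-of-envelope security economics: annualized loss expectancy.
# All numbers below are made up for illustration only.
single_loss_expectancy = 200_000    # cost of one breach, dollars
annual_rate_of_occurrence = 0.05    # expected breaches per year
ale = single_loss_expectancy * annual_rate_of_occurrence

mitigation_cost_per_year = 50_000   # price of doing security properly

print(f"Expected annual breach loss: ${ale:,.0f}")
if mitigation_cost_per_year > ale:
    print("The 'rational' (if short-sighted) firm skips the security spend.")
else:
    print("The security spend pays for itself.")
```

So long as the expected annual loss stays below the price of doing security properly, saying 'sorry' and paying a techie to block the latest attack is the economically rational move, however short-sighted.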
At the moment companies just say 'sorry' and get a techie to block the latest attack. They are playing 'Whack-a-mole' with hackers because it is cheaper. Second. To say that HBGary is effectively part of the US government (while true) is sort of looking at it back to front. The real problem is corporate takeover of the US government. See: But the real issue highlighted by this episode is just how lawless and unrestrained is the unified axis of government and corporate power. I've written many times about this issue -- the full-scale merger between public and private spheres -- because it's easily one of the most critical yet under-discussed political topics. Especially (though by no means only) in the worlds of the Surveillance and National Security State, the powers of the state have become largely privatized. There is very little separation between government power and corporate power. Those who wield the latter intrinsically wield the former. The revolving door between the highest levels of government and corporate offices rotates so fast and continuously that it has basically flown off its track and no longer provides even the minimal barrier it once did. It's not merely that corporate power is unrestrained; it's worse than that: corporations actively exploit the power of the state to further entrench and enhance their power. ----------------------------------------- BillK From possiblepaths2050 at gmail.com Sat Feb 12 11:51:28 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 12 Feb 2011 04:51:28 -0700 Subject: [ExI] needing help on a tech project Message-ID: Hello everyone, I have a science fiction con founder/organizer friend who has asked me to transcribe a couple dozen con panel DV camcorder recordings onto a computer. I am dumbfounded that he would ask me to volunteer for such a task, but I do want to help him (he is not very tech savvy). I would think we need to first convert the DV tapes into dvd or flash format and then upload it into a computer, where the appropriate (and hopefully not too expensive) software could first scan it and then do a competent transcription. Please help and point me in the right direction to get the task done! And how much will it cost? Thank you, John From pharos at gmail.com Sat Feb 12 13:14:18 2011 From: pharos at gmail.com (BillK) Date: Sat, 12 Feb 2011 13:14:18 +0000 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: On Sat, Feb 12, 2011 at 11:51 AM, John Grigg wrote: > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > I haven't done this, but this article should get you started. Note also page 2, which explains how to edit the footage on your computer. But you might find the learning curve makes this too big a job for you.
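If you do end up doing the conversion yourself, here is a minimal sketch of the batch step in Python. It assumes ffmpeg is installed and that the tapes have already been captured to .dv files (for example with dvgrab over a FireWire cable); the folder and file names are hypothetical:

```python
# Minimal sketch: batch-convert captured DV files with ffmpeg.
# Assumes ffmpeg is on the PATH and the tapes were already captured
# to .dv files (e.g. with dvgrab over a FireWire cable).
import glob
import os
import subprocess

for dv_file in glob.glob("captures/*.dv"):
    base = os.path.splitext(dv_file)[0]

    # DVD-compliant MPEG-2; use "pal-dvd" instead for PAL-region tapes.
    subprocess.check_call(
        ["ffmpeg", "-i", dv_file, "-target", "ntsc-dvd", base + ".mpg"])

    # Mono 16 kHz WAV of the audio track, the usual input format for
    # speech-to-text tools, ready for the transcription step.
    subprocess.check_call(
        ["ffmpeg", "-i", dv_file, "-vn", "-ac", "1", "-ar", "16000",
         base + ".wav"])
```

The transcription itself is still the hard part: automatic speech-to-text on noisy panel audio will need heavy human cleanup no matter what tool does the first pass.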
:) Best of luck, BillK From darren.greer3 at gmail.com Sat Feb 12 14:00:06 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 10:00:06 -0400 Subject: Re: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: Forgot to add. Some camcorders have IEEE 1394 (FireWire) ports; some have USB. If you need help manipulating the files once you have them converted, let me know. I've been playing around with video editing for a few years. Making porn, you know. (Kidding.) d. On Sat, Feb 12, 2011 at 9:47 AM, Darren Greer wrote: > Hi John: > > Do you have a mac? One-step DVD in iDVD which is standard on OSX imports > the DV files to the hard drive, converts them and burns to a DVD with a > single button click. All you need is an IEEE 1394 (firewire) cable. Once > they're burned you can manipulate the videos in the iDVD editor or use any > of the free software online to rip the DVD files and convert to other > formats. If you're on a pc, programs like DVDsanta can also do it. > There's a free version download here: (I have not used this version so I > don't know its restrictions/limitations.) > > http://www.topvideopro.com/burn-dvd/dv-to-dvd.htm > > Here is another free pc version that does the same thing. > > http://www.dv-to-dvd.com/ > > There are also a number of free software programs that convert directly to > flash. But you could end up buying one of these programs (they're not > expensive) as often there's a size limitation for converting files on free > versions of video conversion software. I hope this helps. > > Darren > > > On Sat, Feb 12, 2011 at 7:51 AM, John Grigg wrote: > >> Hello everyone, >> >> I have a science fiction con founder/organizer friend who has asked me >> to transcribe a couple dozen con panel DV camcorder recordings onto a >> computer. I am dumbfounded that he would ask me to volunteer for such >> a task, but I do want to help him (he is not very tech savvy). >> >> I would think we need to first convert the DV tapes into dvd or flash >> format and then upload it into a computer, where the appropriate (and >> hopefully not too expensive) software could first scan it and then do >> a competent transcription. >> >> Please help and point me in the right direction to get the task done! >> And how much will it cost? >> >> Thank you, >> >> John >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > -- > *"It's supposed to be hard. If it wasn't hard everyone would do it. The > 'hard' is what makes it great."* > * > * > *--A League of Their Own > * > > > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 12 13:47:26 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 09:47:26 -0400 Subject: Re: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: Hi John: Do you have a mac? One-step DVD in iDVD which is standard on OSX imports the DV files to the hard drive, converts them and burns to a DVD with a single button click. All you need is an IEEE 1394 (firewire) cable. Once they're burned you can manipulate the videos in the iDVD editor or use any of the free software online to rip the DVD files and convert to other formats. If you're on a pc, programs like DVDsanta can also do it.
There's a free version download here: (I have not used this version so I don't know its restrictions/limitations.) http://www.topvideopro.com/burn-dvd/dv-to-dvd.htm Here is another free pc version that does the same thing. http://www.dv-to-dvd.com/ There are also a number of free software programs that convert directly to flash. But you could end up buying one of these programs (they're not expensive) as often there's a size limitation for converting files on free versions of video conversion software. I hope this helps. Darren On Sat, Feb 12, 2011 at 7:51 AM, John Grigg wrote: > Hello everyone, > > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > Thank you, > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *"It's supposed to be hard. If it wasn't hard everyone would do it. The 'hard' is what makes it great."* * * *--A League of Their Own * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 12 14:31:05 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 10:31:05 -0400 Subject: [ExI] Anons In-Reply-To: References: Message-ID: Keith wrote: >Then for a while I figured the government had learned from watching the fate of said cult. They didn't.< It's possible to learn from an error in process. It's much more difficult to learn from one of perception. Which perhaps explains why gov.org didn't learn from Anon's admirable campaign against the cultologists [the euphemism for your sake.:)] d. On Fri, Feb 11, 2011 at 8:10 PM, Keith Henson wrote: > There are many pointers into this complex of stories, all of which > hinge off Wikileaks. > > http://thinkprogress.org/ > > More than a decade ago I proposed that the training net activists got > while learning to cope with a certain cult would be a warm-up exercise for a > major confrontation with a government. > > Then for a while I figured the government had learned from watching > the fate of said cult. > > They didn't. > > Keith > > (HBGary is effectively part of the US government.) > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefano.vaj at gmail.com Sat Feb 12 15:20:04 2011 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 12 Feb 2011 16:20:04 +0100 Subject: [ExI] Empathic AGI In-Reply-To: References: <4D42C236.2020203@lightlink.com> <20110128135215.GV23560@leitl.org> <4D42E853.50706@lightlink.com> <4D4707E4.3000106@lightlink.com> <4D498907.3050808@lightlink.com> <4D4AFFF1.3070506@lightlink.com> <4D4C30DD.60003@lightlink.com> <4D4D7D59.8010205@lightlink.com> Message-ID: 2011/2/9 John Clark : > On Feb 7, 2011, at 12:16 PM, Stefano Vaj wrote: > If we accept that "normal" human-level empathy (that is, a mere > ingredient in the evolutionary strategies) is enough, we just have to > emulate a Darwinian machine as similar as possible > > Two difficulties with that: > 1) The Darwinian process is more like history than mathematics, it is not > repeatable, very small changes in initial conditions could lead to huge > differences in output. Of course. Being a human being, an oak, a rabbit, an amoeba are all plausible Darwinian strategies. But if one wants something where "aggression", "empathy", "selfishness" etc. have a meaning different from that which may be applicable to a car or to a spreadsheet, any of them would be both necessary and sufficient, I guess. > 2) Human-level empathy is aimed at Human-level beings, the further from that > level the less empathy we have. We have less empathy for a cow than a person > and less for an insect than a cow. As the AI's intelligence gets larger its > empathy for us will get smaller although its empathy for its own kind might > be enormous. Yes. Or not. Human empathy is a fuzzy label for complex adaptive or "spandrel" behaviours which do not necessarily have to do with "similarity". For instance, gender differences in our species are substantial enough, but of course you have much more empathy on average for your opposite-gender offspring than you may have for a human individual of your gender with no obvious genetic link to your lineage, and/or belonging to a hostile tribe. I suspect that an emulation of a human being may well decide and "feel" to belong to a cross-specific group (say, the men *and* the androids of country X or of religion Y) or perhaps imagine something along the lines of "proletarian AGIs all over the world, unite!". As long as they are "intelligent" in the very anthropomorphic sense discussed here, there would be little new in this respect. In fact, they would by definition be programmed as much as we are to make such choices. Other no-matter-how-intelligent entities which are neither evolved, nor explicitly programmed to emulate evolved organisms, have of course no reason to exhibit self-preservation, empathy, aggression or altruism drives in any sociobiological sense. -- Stefano Vaj From spike66 at att.net Sat Feb 12 15:46:43 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 07:46:43 -0800 Subject: [ExI] Anons In-Reply-To: References: Message-ID: <003901cbcacc$0cba82c0$262f8840$@att.net> >>On Sat, Feb 12, 2011 at 12:10 AM, Keith Henson wrote: >> There are many pointers into this complex of stories, all of which hinge off Wikileaks. > >> http://thinkprogress.org/ ... >> Keith ... On Behalf Of BillK >...The real problem is corporate takeover of the US government. >... it's worse than that: corporations actively exploit the power of the state to further entrench and enhance their power...BillK _______________________________________________ Ja. Our constitution is set up to maintain separation of church and state.
It doesn't say anything about separation of corporation and state. As far as I can tell the latter would be perfectly legal. In any case it would be far preferable to government takeover of corporations. spike From spike66 at att.net Sat Feb 12 15:55:48 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 07:55:48 -0800 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: <003a01cbcacd$515962b0$f40c2810$@att.net> >...I have a science fiction con founder/organizer friend who has asked me to transcribe a couple dozen con panel DV camcorder recordings onto a computer. I am dumbfounded that he would ask me to volunteer for such a task, but I do want to help him (he is not very tech savvy). John John there are plenty of professional services that do this sort of thing. I have had a bunch of Shelly's old reel to reel family films from the 1960s transferred to DVD. Typical cost for that was about 80 bucks per DVD disc. I have done all the valuable ones. Henceforth I might just set up a modern digital camera and play the old reels on the original projector against a wall, and have the grandparents narrate. Those films predated sound. I have a collection of video recordings from the 80s I need to transfer, but I might do that myself by finding an interface card. The internet knows everything on this sort of question. spike From thespike at satx.rr.com Sat Feb 12 16:48:28 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Feb 2011 10:48:28 -0600 Subject: [ExI] Anons In-Reply-To: <003901cbcacc$0cba82c0$262f8840$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> Message-ID: <4D56B9DC.6000906@satx.rr.com> On 2/12/2011 9:46 AM, spike wrote: > Our constitution is set up to maintain separation of church and state. > It doesn't say anything about separation of corporation and state. As far > as I can tell the latter would be perfectly legal. In any case it would be > far preferable to government takeover of corporations. The technical name for what you prefer is "corporate fascism". That doesn't have a really compelling history. Here's a randomly selected thumbnail: Damien Broderick From jedwebb at hotmail.com Sat Feb 12 19:29:16 2011 From: jedwebb at hotmail.com (Jeremy Webb) Date: Sat, 12 Feb 2011 19:29:16 +0000 Subject: [ExI] Nanotech Article In-Reply-To: <4D56B9DC.6000906@satx.rr.com> Message-ID: There is a nice discussion of a new piece of nanotech at: http://science.slashdot.org/story/11/02/10/1513226/Researchers-Boast-First-Programmable-Nanoprocessor They claim to have managed to produce most of the logic gates needed to make a CPU, which they say would be 100 times more efficient than even CMOS. I hope they've figured out the static problem too... :0) Jeremy Webb Heathen Vitki e-Mail: jedwebb at hotmail.com http://jeremywebb301.tripod.com/vikssite/index.html From darren.greer3 at gmail.com Sat Feb 12 19:59:44 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 15:59:44 -0400 Subject: [ExI] Anons In-Reply-To: <4D56B9DC.6000906@satx.rr.com> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: Damien wrote: >The technical name for what you prefer is "corporate fascism". That doesn't have a really compelling history.< I agree Damien. When the definition of fascism was entered for the first time in the Enciclopedia Italiana, Mussolini suggested that corporatism was a more accurate name for that type of arrangement than fascism anyway.
Fascism is *by definition* the merging of business and government, most often accompanied by rabid nationalism and sometimes overt racism. But not always, which might make modern fascism difficult to recognize because we always assume holocaust-type ethnic cleansing comes with it. It doesn't. Italy followed Germany's Wannsee Conference directives (because it was under political pressure to do so) but not to the letter. For some of the war Mussolini allowed the north west of the country to become a kind of protectorate for Jews who had fled other parts of the state. The ax fell on them only after Italy fell, when the allies invaded the south and Germany took the north to meet them. I got this story from John Keegan's The Second World War, which is an excellent book by the way. Spike wrote: >In any case it would be far preferable to government takeover of corporations.< Is there a difference? When governments and corporations merge, does it matter who made the first move? Given the checkered history of IBM, Ford, Chase Manhattan, etc, not to mention the America First Committee and the role of prominent industrialists like Ford in trying to keep the U.S. out of World War II for business reasons, perhaps it should be illegal. Currently we try to prevent the merging of the two with market regulation and not through legislation, which doesn't seem to be working all that well. The repeal of Glass-Steagall and the housing market crash are a good example of that failure. Don't mean to sound testy, or confrontational, Spike. I have a bit of a bee in my bonnet about what seems to be a widespread misunderstanding of exactly what fascism is and how easily it could happen again. The U.S. congress has a fasces engraved on a wall somewhere inside, by the way. Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. darren On Sat, Feb 12, 2011 at 12:48 PM, Damien Broderick wrote: > On 2/12/2011 9:46 AM, spike wrote: > > Our constitution is set up to maintain separation of church and state. >> It doesn't say anything about separation of corporation and state. As far >> as I can tell the latter would be perfectly legal. In any case it would >> be >> far preferable to government takeover of corporations. >> > > The technical name for what you prefer is "corporate fascism". That doesn't > have a really compelling history. > > Here's a randomly selected thumbnail: > > > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Feb 12 20:52:17 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 12:52:17 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: <006001cbcaf6$bc872490$35956db0$@att.net> On Behalf Of Darren Greer . Spike wrote: >>In any case it would be far preferable to government takeover of corporations.< >Is there a difference? When governments and corporations merge, does it matter who made the first move? . Oh my, yes, a huge critically important difference. In my view, often called minarcho-capitalist, the purpose of government is to support people in the creation of wealth.
I recognize it has some minimal duties in redistributing wealth already created (to some extent) but I read the US constitution and see little in there in that regard. Wealth creation is the key to making a nation and its people prosperous. My notion is to have the executive branch of government filled with people who have served as executives in industry. There is inherent mischief in bringing over legislators. This last go-around in 2008, the two major parties gave us the choice of former legislators, neither of whom had executive experience. Agree it depends on how it is viewed. The press had us believe Sarah Palin was the actual candidate, and she had *some* executive experience as a business owner and Alaska governor, but I was shocked to learn she was actually running for VP on that ticket. No one ever heard of the guy who was running for president on that ticket, but I understand he was a legislator with little or no executive experience. I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor. >Don't mean to sound testy, or confrontational, Spike. I have a bit of a bee in my bonnet about what seems to be a widespread misunderstanding of exactly what fascism is and how easily it could happen again. The U.S. congress has a fasces engraved on a wall somewhere inside, by the way. Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. No problem Darren, by all means your commentary is welcome and not at all confrontational. There is no point in trying to pin down the definition of terms such as fascist and nazi. These have for so long been used as universal insults and blanket condemnations that they eventually lose all meaning from overuse. There is no point in trying to refocus the definition on mid 20th century political systems; the terms have been worn out and used up. >.Don't know its history, or what genius decided it was a good idea, but it has always made me wonder. darren Watch as California goes into historic conniptions to try to balance its hopeless budget. The lessons we need here are that industry is our friend, that wealth creation is our salvation, that business needs to be encouraged and nurtured, that political power should follow wealth as opposed to the other way around. Money is good. Desire for money is a predictable and trustworthy human motivator. Lack of money is the root of all evil. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sat Feb 12 22:24:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:24:12 -0700 Subject: [ExI] Anons In-Reply-To: <006001cbcaf6$bc872490$35956db0$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: 2011/2/12 spike : > In my view, often > called minarcho-capitalist, the purpose of government is to support people in > the creation of wealth. The only legitimate purposes of the government in creating wealth are the following: 1) Keep taxes as low as possible. i.e. don't steal money from those who create wealth. 2) Make sure that there is no cheating, as in maintaining a court system for dealing with bad contractual outcomes, and maintaining some sort of intellectual property system. 3) Stay out of the way as much as possible. 4) Make sure that one group does not rape the environment at the expense of everyone else.
5) Keep the bad guys from raining on your parade. This has a lot of relevance to the future. If the government steps in and bans cloning and other controversial uses of DNA, our biological future will be more limited than necessary. One of the reasons, IMNSHO, that the Internet has been so successful is that no government has found a very good way to regulate it very much, other than North Korea, where I understand most people aren't allowed to access it at all. That of course has its own downside for them. -Kelly From kellycoinguy at gmail.com Sat Feb 12 22:31:53 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:31:53 -0700 Subject: [ExI] Watson on NOVA Message-ID: Did anyone else see the NOVA about Watson that aired the other day? While it was not especially technical, it seemed pretty accurate as far as it went. I found the emotions involved with the creators of Watson to be very interesting. For example, they hired a comedian for a year to "host" test Jeopardy shows, and he was making fun of Watson when he answered questions really badly... and the programmers were really offended by that. Very interesting dynamic. -Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Sat Feb 12 22:15:01 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sat, 12 Feb 2011 15:15:01 -0700 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID: A very easy way to do this sort of thing is to use Amazon Mechanical Turk. You will get much better results than anything that is just automated. If you need assistance, contact me off list. -Kelly On Sat, Feb 12, 2011 at 4:51 AM, John Grigg wrote: > Hello everyone, > > I have a science fiction con founder/organizer friend who has asked me > to transcribe a couple dozen con panel DV camcorder recordings onto a > computer. I am dumbfounded that he would ask me to volunteer for such > a task, but I do want to help him (he is not very tech savvy). > > I would think we need to first convert the DV tapes into dvd or flash > format and then upload it into a computer, where the appropriate (and > hopefully not too expensive) software could first scan it and then do > a competent transcription. > > Please help and point me in the right direction to get the task done! > And how much will it cost? > > Thank you, > > John > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Feb 12 23:18:08 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 15:18:08 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: Message-ID: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Saturday, February 12, 2011 2:32 PM To: ExI chat list Subject: [ExI] Watson on NOVA >Did anyone else see the NOVA about Watson that aired the other day? While it was not especially technical, it seemed pretty accurate as far as it went. Ja, worked on me. >I found the emotions involved with the creators of Watson to be very interesting. For example, they hired a comedian for a year to "host" test Jeopardy shows, and he was making fun of Watson when he answered questions really badly... and the programmers were really offended by that. Very interesting dynamic.
-Kelly Kelly you have seen 2010 Odyssey 2? That is the one with Hal's creator Dr. Chandra getting emotional about having to turn him off. I can easily see a person getting emotionally attached to a machine. From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Wednesday, February 09, 2011 10:36 PM To: 'ExI chat list' Subject: [ExI] watson on nova Lots of good Watson stuff in this NOVA episode, plenty to get me jazzed: http://video.pbs.org/video/1757221034 The good stuff is between about 15 minutes and 28 minutes. We will have practical companion computers very soon. All doubts I once suffered have vanished with this NOVA episode. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Feb 13 00:45:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 20:45:57 -0400 Subject: [ExI] Anons In-Reply-To: <006001cbcaf6$bc872490$35956db0$@att.net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: Spike wrote: >I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor.< Are you familiar with the enfant terrible of Harvard history and socio-economics, the brit Niall Ferguson? I heard a lecture by him once, where he said all the first rate talent flows into industry and all the second rate talent scurries into politics, mostly because of earning potential. You'd probably like what he has to say. I don't. But then I'm so socialist I have to go right to get left. Darren 2011/2/12 spike > > > > > *On Behalf Of *Darren Greer > *?* > > > > Spike wrote: > > > > >>In any case it would be far preferable to government takeover of > corporations.< > > > > >Is there a difference? When governments and corporations merge, does it > matter who made the first move? ? > > > > Oh my, yes, a huge critically important difference. In my view, often > called minarcho-capitalist, he purpose of government is to support people in > the creation of wealth. I recognize it has some minimal duties in > redistributing wealth already created (to some extent) but I read the US > constitution and see little in there in regard. Wealth creation is the key > to making a nation and its people prosperous. > > > > My notion is to have the executive branch of government filled with people > who have served as executives in industry. There is inherent mischief in > bringing over legislators. This last go-around in 2008, the two major > parties gave us the choice of former legislators, neither of whom had > executive experience. Agree it depends on how it is viewed. The press had > us believe Sarah Palin was the actual candidate, and she had **some** > executive experience as a business owner and Alaska governor, but I was > shocked to learn she was actually running for VP on that ticket. No one > ever heard of the guy who was running for president on that ticket, but I > understand he was a legislator with little or no executive experience. > > > > I would counter-propose that my vote for president would require executive > experience and demonstrated success, as a corporate CEO or state governor. > > > > >Don't mean to sound testy, or confrontational, Spike. 
I have a bit of a > bee in my bonnet about what seems to be a widespread misunderstanding of > exactly what fascism is and how easily it could happen again. The U.S. > congress has a fasces engraved on a wall somewhere inside, by the way. Don't > know its history, or what genius decided it was a good idea, but it has > always made me wonder. > > No problem Darren, by all means your commentary is welcome and not at all > confrontational. There is no point in trying to pin down the definition of > terms such as fascist and nazi. These have for so long been used as > universal insults and blanket condemnations that they eventually lose all > meaning from overuse. There is no point in trying to refocus the definition > on mid 20th century political systems; the terms have been worn out and > used up. > > >.Don't know its history, or what genius decided it was a good idea, but it > has always made me wonder. darren > > Watch as California goes into historic conniptions to try to balance its > hopeless budget. The lessons we need here are that industry is our friend, > that wealth creation is our salvation, that business needs to be encouraged > and nurtured, that political power should follow wealth as opposed to the > other way around. Money is good. Desire for money is a predictable and > trustworthy human motivator. Lack of money is the root of all evil. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sun Feb 13 01:02:37 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 21:02:37 -0400 Subject: [ExI] Artificial Irish Folk Tales Message-ID: I was at my mother's tonight playing cards with my brother and an old filling which was loose fell out, which means a trip to the dentist this week. The Irish have a folk legend that says if you lose a tooth, or dream you lose a tooth, someone you know will die. So does losing a filling mean somewhere an AI will die? Or does it just mean when I wake up tomorrow my computer won't boot? My brother asked why I was laughing during our card game but I couldn't explain it. He's not really interested in technology the way I am, and he already thinks I'm weird enough. d. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 01:53:24 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 17:53:24 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <00a501cbcb20$cd2f1d00$678d5700$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Saturday, February 12, 2011 4:46 PM To: ExI chat list Subject: Re: [ExI] Anons Spike wrote: >>I would counter-propose that my vote for president would require executive experience and demonstrated success, as a corporate CEO or state governor.< >Are you familiar with the enfant terrible of Harvard history and socio-economics, the brit Niall Ferguson? Have not. I will google that.
> I heard a lecture by him once, where he said all the first rate talent flows into industry and all the second rate talent scurries into politics, mostly because of earning potential. Yes there is that, but I have reason to hope. Read on: >You'd probably like what he has to say. I don't. But then I'm so socialist I have to go right to get left. Darren Ja, I am cool with that. I am so upwing I would need to come down to go either left or right. Be that as it may, I recognize that in the US at least, the two statist parties will be winning every election for the foreseeable, so here's the way I view it. It is perfectly clear to me that the best executive talent goes where it can make money, which explains why we get mostly the B students going for public office. But we also recognize that the value of the book written by a former government official largely makes up for the loss of pay suffered during the years of service. People are *still* buying Jimmy Carter's books. Recently I notice it has become fashionable for anyone who was anywhere in government to record their experiences on dead trees. The higher the rank of the author, the better for sales. I notice one of the contenders for California governor was the former eBay CEO and jillionaire Meg Whitman. Clearly she could have made waaay more money doing anything else, but went into that campaign using mostly her own money. She is still young, so let's see what happens in the next election. California governor is the logical springboard into national office. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 02:06:47 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 18:06:47 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: Message-ID: <00b501cbcb22$ab95f810$02c1e830$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer . >.So does losing a filling mean somewhere an AI will die? Haaaaahahahahahahaaaa! {8^D >. He's not really interested in technology the way I am, and he already thinks I'm weird enough. d. Ah, but one can never be weird enough. Thanks for the good laugh Darren. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Feb 13 02:23:01 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 12 Feb 2011 20:23:01 -0600 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: Message-ID: <4D574085.3070801@satx.rr.com> On 2/12/2011 7:02 PM, Darren Greer wrote: > So does losing a filling mean somewhere an AI will die? Ha! Nice. (Or maybe it means a mining company will go under.) Damien Broderick From darren.greer3 at gmail.com Sun Feb 13 02:27:17 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 12 Feb 2011 22:27:17 -0400 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <4D574085.3070801@satx.rr.com> References: <4D574085.3070801@satx.rr.com> Message-ID: Damien wrote: >(Or maybe it means a mining company will go under)< Semantically confusing industry, because technically, in mining, when you "went under" you would actually be coming up, wouldn't you? d. On Sat, Feb 12, 2011 at 10:23 PM, Damien Broderick wrote: > On 2/12/2011 7:02 PM, Darren Greer wrote: > > So does losing a filling mean somewhere an AI will die? >> > Ha! Nice. > > (Or maybe it means a mining company will go under.)
> > Damien Broderick > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 06:18:42 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 22:18:42 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: References: <4D574085.3070801@satx.rr.com> Message-ID: <000901cbcb45$dd3ac7b0$97b05710$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer . Damien wrote: >(Or maybe it means a mining company will go under)< Semantically confusing industry, because technically, in mining, when you "went under" you would actually be coming up, wouldn't you? d. In mining, it's always either up ore down. s -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sun Feb 13 06:38:44 2011 From: spike66 at att.net (spike) Date: Sat, 12 Feb 2011 22:38:44 -0800 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <000901cbcb45$dd3ac7b0$97b05710$@att.net> References: <4D574085.3070801@satx.rr.com> <000901cbcb45$dd3ac7b0$97b05710$@att.net> Message-ID: <002a01cbcb48$a9d0fa40$fd72eec0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike . spike should have written: >.In mining, it's always down ore up. But it doesn't matter, because one's labor is all in vein. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sun Feb 13 06:55:04 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 13 Feb 2011 00:55:04 -0600 Subject: [ExI] Artificial Irish Folk Tales In-Reply-To: <000901cbcb45$dd3ac7b0$97b05710$@att.net> References: <4D574085.3070801@satx.rr.com> <000901cbcb45$dd3ac7b0$97b05710$@att.net> Message-ID: <4D578048.30905@satx.rr.com> On 2/13/2011 12:18 AM, spike wrote: > Semantically confusing industry, because technically, in mining, when > you "went under" you would actually be coming up, wouldn't you? > > d. > > In mining, it's always either up ore down. And yet with quilting, it's Eider down or not. Damien Broderick From pharos at gmail.com Sun Feb 13 08:55:43 2011 From: pharos at gmail.com (BillK) Date: Sun, 13 Feb 2011 08:55:43 +0000 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: 2011/2/13 Darren Greer wrote: > Are you familiar with the enfant terrible of Harvard history and > socio-economics, the brit Niall Ferguson? I heard a lecture by him once, > where he said all the first rate talent flows into industry and all the > second rate talent scurries into politics, mostly because of earning > potential. You'd probably like what he has to say. I don't. But then I'm so > socialist I have to go right to get left. > > Did you know that this has actually been proposed as a road safety and traffic efficiency measure? Quote: Superstreet intersections force traffic from smaller roads to turn right, then u-turn on the larger road, rather than wait for a break in traffic to make a direct left.
That solution may sound like an inefficient way to get where you're going, but researchers say that it moves vehicles through 20% faster, and reduces accidents by 43%. ----------------- In sensible countries like the UK, of course, the reverse applies. First turn left, then u-turn to go right. BillK From kellycoinguy at gmail.com Sun Feb 13 08:58:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 01:58:18 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: > The good stuff is between about 15 minutes and 28 minutes. We will have > practical companion computers very soon. All doubts I once suffered have > vanished with this NOVA episode. While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms, I am not quite as convinced that there is a general solution here. At least not quite yet. The types of answers generated seemed to have been heavily "tweaked" for Jeopardy. That's not to say that Watson isn't interesting, and an important milestone in AI. I think it is both. Just that it isn't quite as far down the road of machine understanding as I had hoped. Some of the video seemed to indicate that it used some kind of statistical proximity based text search engine, rather than parsing and understanding English sentences quite so much as I thought maybe it did. Of course, since NOVA was presenting things on a general audience basis, it may have downplayed any NLP aspect. This will be useful technology (assuming it escapes research): I can see it answering really useful questions. I hope they build it into a search engine. But it does, for the present, seem to be very tweaked for Jeopardy... which is, I suppose, what I should have expected. Has anybody seen any technical papers by the Watson team? That would be interesting in evaluating just how they did it. Since Watson is essentially a bunch of PCs, I can see this being deployed into the cloud pretty easily. And if Watson can look on the Internet, then perhaps it can come up with better answers (albeit perhaps more slowly) than in the isolated Jeopardy case. It seemed that they stuck with Wikipedia, online encyclopedias, the internet movie database and other specific information sites, rather than crawling the entire web. Perhaps they did this to ensure greater accuracy??? Or maybe it was a storage space issue. In any case, if they make a bigger machine in the cloud that accesses the internet and has more storage, I'm sure they could come up with some very interesting answers to general questions, assuming the answers are out there somewhere. -Kelly From kellycoinguy at gmail.com Sun Feb 13 09:08:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 02:08:44 -0700 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: >> But then I'm so socialist I have to go right to get left. So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-) > Did you know that this has actually been proposed as a road safety and > traffic efficiency measure?
There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?) I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly From amara at kurzweilai.net Sun Feb 13 09:25:05 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:25:05 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <03af01cbcb5f$e71e37c0$b55aa740$@net> Kelly, I really love this traffic idea. Sort of an emergent order concept. Would be fun to take it a step further and create a new kind of town with no defined roads, no sidewalks, information signals that combine to control computer-automated vehicles (supersedes driven cars and traffic signals) for people and machines that are generated by sensors for ad hoc movements of objects, wind, noise, thoughts ("I want to cross the street" -- I want to dance here now), deformable structures (car to building: "may I take a shortcut through you?"), instant 3D-printed structures that can be morphed into different purposes.... [pause to let someone else co-invent ...] -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 1:09 AM To: ExI chat list Subject: Re: [ExI] Anons >> But then I'm so socialist I have to go right to get left. So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-) > Did you know that this has actually been proposed as a road safety and > traffic efficiency measure? There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?) I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From pharos at gmail.com Sun Feb 13 09:28:49 2011 From: pharos at gmail.com (BillK) Date: Sun, 13 Feb 2011 09:28:49 +0000 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: > While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms, I am not quite as > convinced that there is a general solution here. At least not quite > yet. The types of answers generated seemed to have been heavily > "tweaked" for Jeopardy. That's not to say that Watson isn't > interesting, and an important milestone in AI. I think it is both. > Just that it isn't quite as far down the road of machine understanding > as I had hoped. 
Some of the video seemed to indicate that it used some > kind of statistical proximity based text search engine, rather than > parsing and understanding English sentences quite so much as I thought > maybe it did. Of course, since NOVA was presenting things on a general > audience basis, it may have downplayed any NLP aspect. > > This will be useful technology (assuming it escapes research) I can > see it answering really useful questions. I hope they build it into a > search engine. But it does, for the present, seem to be very tweaked > for Jeopardy... which is, I suppose, what I should have expected. > > Has anybody seen any technical papers by the Watson team? That would > be interesting in evaluating just how they did it. > > IBM PR makes big claims for Watson (but that's their job :) ). Quote: Watson's ability to understand the meaning and context of human language, and rapidly process information to find precise answers to complex questions, holds enormous potential to transform how computers help people accomplish tasks in business and their personal lives. Watson will enable people to rapidly find specific answers to complex questions. The technology could be applied in areas such as healthcare, for accurately diagnosing patients, to improve online self-service help desks, to provide tourists and citizens with specific information regarding cities, prompt customer support via phone, and much more. ------------------------- This article talks about what the developers are working on: Looks like they are doing some pretty complex stuff in there. BillK From amara at kurzweilai.net Sun Feb 13 09:14:55 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:14:55 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <03ae01cbcb5e$7baf4700$730dd500$@net> Kelly, I had similar questions, so I interviewed an IBM Watson research manager. Please see if this helps: http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ibm-research-manager. I would be interested in any critiques of this, or questions for a follow-up interview. Thanks, Amara D. Angelica Editor, KurzweilAI -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 12:58 AM To: ExI chat list Subject: Re: [ExI] Watson on NOVA > The good stuff is between about 15 minutes and 28 minutes. We will have > practical companion computers very soon. All doubts I once suffered have > vanished with this NOVA episode. While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms, I am not quite as convinced that there is a general solution here. At least not quite yet. The types of answers generated seemed to have been heavily "tweaked" for Jeopardy. That's not to say that Watson isn't interesting, and an important milestone in AI. I think it is both. Just that it isn't quite as far down the road of machine understanding as I had hoped. Some of the video seemed to indicate that it used some kind of statistical proximity based text search engine, rather than parsing and understanding English sentences quite so much as I thought maybe it did. Of course, since NOVA was presenting things on a general audience basis, it may have downplayed any NLP aspect. This will be useful technology (assuming it escapes research) I can see it answering really useful questions.
I hope they build it into a search engine. But it does, for the present, seem to be very tweaked for Jeopardy... which is, I suppose, what I should have expected. Has anybody seen any technical papers by the Watson team? That would be interesting in evaluating just how they did it. Since Watson is essentially a bunch of PCs, I can see this being deployed into the cloud pretty easily. And if Watson can look on the Internet, then perhaps it can come up with better answers (albeit perhaps more slowly) than in the isolated Jeopardy case. It seemed that they stuck with Wikipedia, online encyclopedias, the Internet Movie Database and other specific information sites, rather than crawling the entire web. Perhaps they did this to ensure greater accuracy??? Or maybe it was a storage space issue. In any case, if they make a bigger machine in the cloud that accesses the internet and has more storage, I'm sure they could come up with some very interesting answers to general questions, assuming the answers are out there somewhere. -Kelly

_______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From amara at kurzweilai.net Sun Feb 13 09:34:46 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 01:34:46 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <03b001cbcb61$416ba130$c442e390$@net>

Just had another flash: what about creating a combination exercise spa and road? Use treadmills to power lighting and vehicle movement (piezoelectric device ==> battery ==> motors in road) ... (spa is built over the road) ... spa motions also power lights or generate power credits... etc.

=========================================

Kelly, I really love this traffic idea. Sort of an emergent order concept. Would be fun to take it a step further and create a new kind of town with no defined roads, no sidewalks, information signals that combine to control computer-automated vehicles (supersedes driven cars and traffic signals) for people and machines that are generated by sensors for ad hoc movements of objects, wind, noise, thoughts ("I want to cross the street" -- I want to dance here now), deformable structures (car to building: "may I take a shortcut through you?"), instant 3D-printed structures that can be morphed into different purposes.... [pause to let someone else co-invent ...]

-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Kelly Anderson Sent: Sunday, February 13, 2011 1:09 AM To: ExI chat list Subject: Re: [ExI] Anons

>> But then I'm so socialist I have to go right to get left.

So what you're saying is that you are so focused on the future that you haven't learned the lessons of the past... ;-)

> Did you know that this has actually been proposed as a road safety and > traffic efficiency measure?

There was a really interesting Wired article a few years back on eliminating traffic signals altogether, and mixing traffic with pedestrians in a confusing way that automatically caused everyone to slow down and be more careful, with the result being greater overall safety. It had been implemented in some northern European cities (maybe Denmark?)
I still think the right answer is to let the cars drive themselves, and avoid human piloting altogether, but that's still a few years off. -Kelly _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Sun Feb 13 15:54:35 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 07:54:35 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> Message-ID: <005d01cbcb96$50197650$f04c62f0$@att.net> On Behalf Of BillK ... > But then I'm so socialist I have to go right to get left. darren > > Did you know that this has actually been proposed as a road safety and traffic efficiency measure? Quote: Superstreet intersections force traffic from smaller roads to turn right, then u-turn on the larger road... BillK ----------------- BillK that's the way it is done now in many places in New Jersey. Have we any New Jerseyers present? spike From rpwl at lightlink.com Sun Feb 13 16:39:25 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 13 Feb 2011 11:39:25 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <4D58093D.9070306@lightlink.com> Kelly Anderson wrote: >> The good stuff is between about 15 minutes and 28 minutes. We will have >> practical companion computers very soon. All doubts I once suffered have >> vanished with this NOVA episode. > > While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms, I am not quite as > convinced that there is a general solution here. At least not quite > yet. The types of answers generated seemed to have been heavily > "tweaked" for Jeopardy. That's not to say that Watson isn't > interesting, and an important milestone in AI. I think it is both. > Just that it isn't quite as far down the road of machine understanding > as I had hoped. Some of the video seemed to indicate that it used some > kind of statistical proximity based text search engine, rather than > parsing and understanding English sentences quite so much as I thought > maybe it did. Sadly, this only confirms the deeply skeptical response that I gave earlier. I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI. Back in 1991 I remember having discussions about that kind of research with someone who thought it was fabulous. I argued that it was a dead end. If people are still using it to do exactly the same kinds of task they did then, can you see what I mean when I say that this is a complete waste of time? It is even worse than I suspected. Richard Loosemore Of course, since NOVA was presenting things on a general > audience basis, it may have downplayed any NLP aspect. > > This will be useful technology (assuming it escapes research) I can > see it answering really useful questions. I hope they build it into a > search engine. But it does, for the present, seem to be very tweaked > for Jeopardy... which is, I suppose, what I should have expected. > > Has anybody seen any technical papers by the Watson team? That would > be interesting in evaluating just how they did it. > > Since Watson is essentially a bunch of PCs, I can see this being > deployed into the cloud pretty easily. 
And if Watson can look on the > Internet, then perhaps it can come up with better answers (albeit > perhaps more slowly) than in the isolated Jeopardy case. It seemed > that they stuck with Wikipedia, online encyclopedias, the internet > movie database and other specific information sites, rather than > crawling the entire web. Perhaps they did this to ensure greater > accuracy??? Or maybe it was a storage space issue. In any case, if > they make a bigger machine in the cloud that accesses the internet and > has more storage, I'm sure they could come up with some very > interesting answers to general questions, assuming the answers are out > there somewhere. > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > From spike66 at att.net Sun Feb 13 17:38:36 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 09:38:36 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D58093D.9070306@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: <000901cbcba4$d82b9100$8882b300$@att.net> On Behalf Of Richard Loosemore Subject: Re: [ExI] Watson on NOVA Kelly Anderson wrote: >> ... While I am clearly jazzed about Watson, and I do know for sure now that Watson uses statistical learning algorithms... >...I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI... can you see what I mean when I say that this is a complete waste of time?...Richard Loosemore Richard I see what you mean, but I disagree. We know Watson isn't AI, and this path doesn't lead there directly. But there is value in collecting a bunch of capabilities that are in themselves marketable. Computers play good chess, they play Jeopardy, they do this and that, eventually they make suitable (even if not ideal) companions for impaired humans, which generates money (lots of it in that case), which brings talent into the field, inspires the young to dream that AI can somehow be accomplished. It inspires the young brains to imagine the potential of software, as opposed to wasting their lives and talent by going into politics or hedge fund management for instance. For every AI researcher we lose to fooling around with Watson, we gain ten more who are inspired by that non-AI exercise. In that sense Watson may indirectly advance AI. spike From rpwl at lightlink.com Sun Feb 13 18:25:08 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 13 Feb 2011 13:25:08 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <000901cbcba4$d82b9100$8882b300$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <000901cbcba4$d82b9100$8882b300$@att.net> Message-ID: <4D582204.1040703@lightlink.com> spike wrote: > On Behalf Of Richard Loosemore > Subject: Re: [ExI] Watson on NOVA > > Kelly Anderson wrote: >>> ... While I am clearly jazzed about Watson, and I do know for sure now > that Watson uses statistical learning algorithms... > >> ...I strongly suspected that it was using some kind of statistical > "proximity" algorithms to get the answers. And in that case, we are talking > about zero advancement of AI... can you see what I mean when I say that this > is a complete waste of time?...Richard Loosemore > > > > > Richard I see what you mean, but I disagree. 
We know Watson isn't AI, and > this path doesn't lead there directly. But there is value in collecting a > bunch of capabilities that are in themselves marketable. Computers play > good chess, they play Jeopardy, they do this and that, eventually they make > suitable (even if not ideal) companions for impaired humans, which generates > money (lots of it in that case), which brings talent into the field, > inspires the young to dream that AI can somehow be accomplished. It > inspires the young brains to imagine the potential of software, as opposed > to wasting their lives and talent by going into politics or hedge fund > management for instance. > > For every AI researcher we lose to fooling around with Watson, we gain ten > more who are inspired by that non-AI exercise. > > In that sense Watson may indirectly advance AI.

This is exactly what has been happening. But the only people it has drawn into AI are: (a) People too poorly informed to understand that Watson represents a non-achievement ..... therefore extremely low-quality talent, or (b) People who quite brazenly declare that the field called "AI" is not really about building intelligent systems, but just futzing around with mathematics and various trivial algorithms. Either way, the field loses. I have been watching this battle go on throughout my career. All I am doing is reporting the obvious patterns that emerge if you look at the situation from the inside, for long enough. I went to conferences back in the 1980s when people talked about simple language understanding algorithms, and I understood exactly what they were trying to do and what they had achieved so far. Then I went to an AGI workshop in 2006, and to my utter horror I saw some people present their research on a simple language understanding system..... it was exactly the same stuff that I had seen 20 years before, and they appeared to have no awareness that this had already been done, and that the technique subsequently got nowhere. You can discount my opinion if you like, but does it not count for anything at all that I have been working in this field since I first got interested in it in 1980? This is not armchair theorizing here: I am just doing my best to summarize a lot of experience. Richard Loosemore

From spike66 at att.net Sun Feb 13 19:14:23 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 11:14:23 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D582204.1040703@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <000901cbcba4$d82b9100$8882b300$@att.net> <4D582204.1040703@lightlink.com> Message-ID: <000001cbcbb2$397a3a80$ac6eaf80$@att.net>

On Behalf Of Richard Loosemore ... >But the only people it has drawn into AI are: ... >(a) People too poorly informed to understand that Watson represents a non-achievement ..... therefore extremely low-quality talent, or ... >(b) People who quite brazenly declare that the field called "AI" is not really about building intelligent systems, but just futzing around with mathematics and various trivial algorithms. ... >Either way, the field loses. ... >You can discount my opinion if you like, but does it not count for anything at all that I have been working in this field since I first got interested in it in 1980? This is not armchair theorizing here: I am just doing my best to summarize a lot of experience...Richard Loosemore

Richard, your viewpoint as one who has been in the field for a long time is most valuable.
You and I are actually looking at two very different goals here, as was pointed out in a previous discussion. You are shooting for true AI, but I am not, or at least not immediately. Reasoning: true AI leads directly to recursive self-improvement, which leads directly to the singularity, which presents all kinds of risks (and promise (and risks)) because we don't know how to control it, or even if it is controllable. On the other hand, Watson isn't going to spontaneously take off and do whatever a real AI wants to do, any more than a chess algorithm will do that. Watson will, however, contribute to our wellbeing here and now, along with the chess algorithms, and the servant-bot algorithms, the sex-bots, and all the other non-AI applications I can imagine will come along and make our lives more fun and interesting. I do not regret all the AI talent that has been siphoned into application development, for I am in no desperate hurry to create AI. With our current level of insight and lack thereof into friendly AI, it looks to me like the risks may outweigh the benefits, at least to the younger people among us. Five years ago before my son was born, I would have argued the benefits outweigh the risks. Now, I wouldn't say that, or rather I can't say it with any confidence. Recall that nuclear fission was discovered at least a decade before the engineers developed a practical way to safely control it. AI is analogous to nuclear fission, and now is 1937. You and I do not necessarily disagree, we just have different goals. spike

From possiblepaths2050 at gmail.com Mon Feb 14 01:44:41 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 13 Feb 2011 18:44:41 -0700 Subject: [ExI] needing help on a tech project In-Reply-To: References: Message-ID:

I want to say a huge thank you to everyone who responded to my request for help. I see now that there are a number of ways to take care of my friend's project, and that it is quite doable. He had originally wanted me to watch a small portion of tape, type what I heard, and then repeat the process near endlessly till I had transcribed a pile of DV audiocassettes! LOL! But fortunately we have technologies now to make such drudgery avoidable. The event recorded was the H.P. Lovecraft-themed MythosCon, held in Tempe, Arizona. And now the words there of super Mythos scholar S.T. Joshi and others shall be forever put to print! John : )

From kellycoinguy at gmail.com Mon Feb 14 05:29:44 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 22:29:44 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID:

On Sun, Feb 13, 2011 at 2:28 AM, BillK wrote: > On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: > IBM PR makes big claims for Watson (but that's their job :) ). > > Quote: > Watson's ability to understand the meaning and context of human > language, and rapidly process information to find precise answers to > complex questions, holds enormous potential to transform how computers > help people accomplish tasks in business and their personal lives. > Watson will enable people to rapidly find specific answers to complex > questions. The technology could be applied in areas such as > healthcare, for accurately diagnosing patients, to improve online > self-service help desks, to provide tourists and citizens with > specific information regarding cities, prompt customer support via > phone, and much more.

I have absolutely no doubt that Watson-like systems can do this.
As a research assistant to a doctor, Watson would be invaluable. It is, in fact, a new kind of search engine with a little more intelligence than a Google type system. And while Google is not an AI, sometimes it feels like it is. Watson isn't a general AI, but it will feel like it is at least some of the time. Honestly, I can't wait to watch Jeopardy tomorrow. > ------------------------- > > This article talks about what the developers are working on: > > > Looks like they are doing some pretty complex stuff in there. No doubt. One clarification on this deal. While it doesn't appear that Watson does much sophisticated natural language processing of the text in its index, it does appear to do very sophisticated NLP of the questions and categories. When that kind of sophistication is applied on the index side as well, it should improve even more. I have no direct evidence that they don't, it just didn't appear to be the case from the NOVA show. -Kelly

From kellycoinguy at gmail.com Mon Feb 14 05:57:02 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 22:57:02 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <03ae01cbcb5e$7baf4700$730dd500$@net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <03ae01cbcb5e$7baf4700$730dd500$@net> Message-ID:

On Sun, Feb 13, 2011 at 2:14 AM, Amara D. Angelica wrote: > Kelly, I had similar questions, so I interviewed an IBM Watson research > manager. Please see if this helps: > http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ibm-research-manager. > I would be interested in any critiques of this, or > questions for a follow-up interview.

"open, pluggable architecture of analytics" sounds like it has an engine, and can add heuristics. If that's the case, then this is a pretty powerful core technology, but it requires that it be "built and tuned to play Jeopardy!" So if I were going to ask follow-up questions, I would ask some along these lines... On the NOVA show it talked about adding gender information... is this one of the pluggable pieces you are referring to? When you say "open" do you mean open source? Or open for purchasers of the system to augment? Is this going to be available in a cloud configuration anytime soon? Tell us more about "building" and "tuning"... It appears from the NOVA show that it took 4 years to build and tune the system for Jeopardy, how much effort would it take to build and tune a system for medical diagnosis? Or build a technical support database for, say, Microsoft Word. It seems that the natural language processing of the questions and categories is very extensive and uses a kind of search tree technology reminiscent of AI search trees used in games such as chess. Is that correct? Tell us more about the index that is built a priori of the raw data that the answers are sought from. Is it indexed, or is there just a brute force algorithm based on keyword searches and then further statistical processing of the results of the keyword search? In other words, what's done prior to the question being asked on the index side of the equation? (I'm sure you could make that question shorter... :-) You talk about Watson "learning", is the learning on the side of understanding the question, finding the answer or both? Are you using neural networks, statistical approaches, or some new approach for that? If developers wanted to build and tune their own solutions on this architecture, how soon do you think it will be available? Is there a business unit working on this yet? Are there going to be any papers published by the Watson team? What aspect of Watson is the most novel? Or is Watson just putting together the best of what was already out there in a really good way? I'm sure I could come up with more questions... but those would be among the ones I would ask first I think. I really liked your article. It was particularly interesting to listen to them think about what IBM's business model for such things might be. -Kelly

From spike66 at att.net Mon Feb 14 05:46:59 2011 From: spike66 at att.net (spike) Date: Sun, 13 Feb 2011 21:46:59 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <000b01cbcc0a$9a0aa800$ce1ff800$@att.net>

>... On Behalf Of Kelly Anderson ...
>...Honestly, I can't wait to watch Jeopardy tomorrow. -Kelly

Ja me too. A robot may be your next best friend. Check this: http://www.cnn.com/2011/OPINION/02/13/breazeal.social.robots/index.html?hpt=C2 I had an idea: Kelly are you single? We disguise you as the new IBM sexbot and have you delivered to Breazeal, with a card in there asking her to be a Beta tester. Think it would work? {8^D spike

From kellycoinguy at gmail.com Mon Feb 14 06:06:46 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:06:46 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <4D58093D.9070306@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID:

On Sun, Feb 13, 2011 at 9:39 AM, Richard Loosemore wrote: > Sadly, this only confirms the deeply skeptical response that I gave earlier. > > I strongly suspected that it was using some kind of statistical "proximity" > algorithms to get the answers. And in that case, we are talking about zero > advancement of AI. > > Back in 1991 I remember having discussions about that kind of research with > someone who thought it was fabulous. I argued that it was a dead end. > > If people are still using it to do exactly the same kinds of task they did > then, can you see what I mean when I say that this is a complete waste of > time? It is even worse than I suspected.

For me the question is whether this is useful, not whether it will lead to AGI. Is Watson useful? I would say yes, it is very close to being something useful. Is it on the path to AGI? That's about as relevant as whether we descend directly from gracile australopithecines or robust australopithecines. Yes, that's an interesting question, but you need the competition to see what works out in the end. The evolution of computer algorithms will show that Watson or your stuff or reverse engineering the human brain or something else eventually leads to the answer. Criticizing IBM because you think they are working down the Neanderthal line is irrelevant to the evolutionary and memetic processes. Honestly Richard, you come across as a mad scientist; that is, an angry scientist. All approaches should be equally welcome until one actually works. And saying that they should have spent the money differently is like saying we shouldn't save the $1 million preemie in Boston because that money could have been used to cure blindness in 10,000 Africans. Well, that's true, but the insurance company paying the bill doesn't have any right to cure blindness in Africa with their subscriber's money. IBM has a fiduciary responsibility to the shareholders, and Watson will earn them money if they do it right. -Kelly

From kellycoinguy at gmail.com Mon Feb 14 06:18:19 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:18:19 -0700 Subject: [ExI] Anons In-Reply-To: <03af01cbcb5f$e71e37c0$b55aa740$@net> References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> <03af01cbcb5f$e71e37c0$b55aa740$@net> Message-ID:

On Sun, Feb 13, 2011 at 2:25 AM, Amara D. Angelica wrote: > Kelly, I really love this traffic idea. Sort of an emergent order concept.
Here is the original article (I remember a few more pictures in the magazine) http://www.wired.com/wired/archive/12.12/traffic.html > Would be fun to take it a step further and create a new kind of town with no > defined roads, no sidewalks, information signals that combine to control > computer-automated vehicles (supersedes driven cars and traffic signals) for > people and machines that are generated by sensors for ad hoc movements of > objects, wind, noise, thoughts ("I want to cross the street" -- I want to > dance here now), deformable structures (car to building: "may I take a > shortcut through you?"), instant 3D-printed structures that can be morphed > into different purposes.... [pause to let someone else co-invent ...] Blending car traffic with pedestrians is interesting... but I wouldn't take it too far... :-) -Kelly From amara at kurzweilai.net Mon Feb 14 06:39:30 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 22:39:30 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> <006001cbcaf6$bc872490$35956db0$@att.net> <03af01cbcb5f$e71e37c0$b55aa740$@net> Message-ID: <033301cbcc11$ef3e77f0$cdbb67d0$@net> Thanks. Or at least if I did take it too far, do it in sim first.... Or maybe pedestrians could drive the vehicles? "Destination?" "Elm Street." "OK, no problem, we'll take you there after dessert." -----Original Message----- From: Kelly Anderson [mailto:kellycoinguy at gmail.com] Sent: Sunday, February 13, 2011 10:18 PM To: amara at kurzweilai.net; ExI chat list Subject: Re: [ExI] Anons On Sun, Feb 13, 2011 at 2:25 AM, Amara D. Angelica wrote: > Kelly, I really love this traffic idea. Sort of an emergent order concept. Here is the original article (I remember a few more pictures in the magazine) http://www.wired.com/wired/archive/12.12/traffic.html > Would be fun to take it a step further and create a new kind of town with no > defined roads, no sidewalks, information signals that combine to control > computer-automated vehicles (supersedes driven cars and traffic signals) for > people and machines that are generated by sensors for ad hoc movements of > objects, wind, noise, thoughts ("I want to cross the street" -- I want to > dance here now), deformable structures (car to building: "may I take a > shortcut through you?"), instant 3D-printed structures that can be morphed > into different purposes.... [pause to let someone else co-invent ...] Blending car traffic with pedestrians is interesting... but I wouldn't take it too far... :-) -Kelly From amara at kurzweilai.net Mon Feb 14 06:35:15 2011 From: amara at kurzweilai.net (Amara D. Angelica) Date: Sun, 13 Feb 2011 22:35:15 -0800 Subject: [ExI] Watson on NOVA Message-ID: <032e01cbcc11$576358b0$062a0a10$@net> Kelly, thanks. These are excellent questions, which I'll include in a follow-up interview. We just posted three IBM videos that discuss customer service, finance, and healthcare applications; and two more on other Watson design issues, including one related to building the system: http://www.kurzweilai.net/videos. -----Original Message----- From: Kelly Anderson [mailto:kellycoinguy at gmail.com] Sent: Sunday, February 13, 2011 9:57 PM To: amara at kurzweilai.net; ExI chat list Subject: Re: [ExI] Watson on NOVA On Sun, Feb 13, 2011 at 2:14 AM, Amara D. Angelica wrote: > Kelly, I had similar questions, so I interviewed an IBM Watson research > manager. 
Please see if this helps: > http://www.kurzweilai.net/how-watson-works-a-conversation-with-eric-brown-ibm-research-manager. > I would be interested in any critiques of this, or > questions for a follow-up interview.

"open, pluggable architecture of analytics" sounds like it has an engine, and can add heuristics. If that's the case, then this is a pretty powerful core technology, but it requires that it be "built and tuned to play Jeopardy!" So if I were going to ask follow-up questions, I would ask some along these lines... On the NOVA show it talked about adding gender information... is this one of the pluggable pieces you are referring to? When you say "open" do you mean open source? Or open for purchasers of the system to augment? Is this going to be available in a cloud configuration anytime soon? Tell us more about "building" and "tuning"... It appears from the NOVA show that it took 4 years to build and tune the system for Jeopardy, how much effort would it take to build and tune a system for medical diagnosis? Or build a technical support database for, say, Microsoft Word. It seems that the natural language processing of the questions and categories is very extensive and uses a kind of search tree technology reminiscent of AI search trees used in games such as chess. Is that correct? Tell us more about the index that is built a priori of the raw data that the answers are sought from. Is it indexed, or is there just a brute force algorithm based on keyword searches and then further statistical processing of the results of the keyword search? In other words, what's done prior to the question being asked on the index side of the equation? (I'm sure you could make that question shorter... :-) You talk about Watson "learning", is the learning on the side of understanding the question, finding the answer or both? Are you using neural networks, statistical approaches, or some new approach for that? If developers wanted to build and tune their own solutions on this architecture, how soon do you think it will be available? Is there a business unit working on this yet? Are there going to be any papers published by the Watson team? What aspect of Watson is the most novel? Or is Watson just putting together the best of what was already out there in a really good way? I'm sure I could come up with more questions... but those would be among the ones I would ask first I think. I really liked your article. It was particularly interesting to listen to them think about what IBM's business model for such things might be. -Kelly

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kellycoinguy at gmail.com Mon Feb 14 06:53:12 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Sun, 13 Feb 2011 23:53:12 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <000b01cbcc0a$9a0aa800$ce1ff800$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <000b01cbcc0a$9a0aa800$ce1ff800$@att.net> Message-ID:

On Sun, Feb 13, 2011 at 10:46 PM, spike wrote: > >>... On Behalf Of Kelly Anderson > ... > >>...Honestly, I can't wait to watch Jeopardy tomorrow. -Kelly > > > Ja me too. > > A robot may be your next best friend. Check this: > > http://www.cnn.com/2011/OPINION/02/13/breazeal.social.robots/index.html?hpt=C2

A nice fluff piece.

> I had an idea: Kelly are you single? We disguise you as the new IBM sexbot > and have you delivered to Breazeal, with a card in there asking her to be a > Beta tester. Think it would work? {8^D

On the Internet, nobody knows you're a dog...
:-) I'm open for testing nearly any new technology... :-) -Kelly

From rpwl at lightlink.com Mon Feb 14 13:24:32 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 14 Feb 2011 08:24:32 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> Message-ID: <4D592D10.6010404@lightlink.com>

Kelly Anderson wrote: > On Sun, Feb 13, 2011 at 9:39 AM, Richard Loosemore wrote: >> Sadly, this only confirms the deeply skeptical response that I gave earlier. >> >> I strongly suspected that it was using some kind of statistical "proximity" >> algorithms to get the answers. And in that case, we are talking about zero >> advancement of AI. >> >> Back in 1991 I remember having discussions about that kind of research with >> someone who thought it was fabulous. I argued that it was a dead end. >> >> If people are still using it to do exactly the same kinds of task they did >> then, can you see what I mean when I say that this is a complete waste of >> time? It is even worse than I suspected. > > For me the question is whether this is useful, not whether it will lead to AGI. > > Is Watson useful? I would say yes, it is very close to being something useful. > > Is it on the path to AGI? That's about as relevant as whether we > descend directly from gracile australopithecines or robust > australopithecines. Yes, that's an interesting question, but you > need the competition to see what works out in the end. The evolution > of computer algorithms will show that Watson or your stuff or reverse > engineering the human brain or something else eventually leads to the > answer. Criticizing IBM because you think they are working down the > Neanderthal line is irrelevant to the evolutionary and memetic > processes. > > Honestly Richard, you come across as a mad scientist; that is, an > angry scientist. All approaches should be equally welcome until one > actually works. And saying that they should have spent the money > differently is like saying we shouldn't save the $1 million preemie in > Boston because that money could have been used to cure blindness in > 10,000 Africans. Well, that's true, but the insurance company paying > the bill doesn't have any right to cure blindness in Africa with their > subscriber's money. IBM has a fiduciary responsibility to the > shareholders, and Watson will earn them money if they do it right.

:-) Well, first off, don't get me wrong, because I say all this with a smile. When I went to the AGI-09 conference, there was one guy there (Ed Porter) who had spent many hours getting mad at me online, and he was eager to find me in person. He spent the first couple of days failing to locate me in a gathering of only 100 people, all of whom were wearing name badges, because he was looking for some kind of mad, sullen, angry grump. The fact that I was not old, and was smiling, talking and laughing all the time meant that he didn't even bother to look at my name badge. We got along just great for the rest of the conference. ;-) Anyhow. Just keep in mind one thing. I criticize projects like Watson because if you look deeply at the history of AI you will notice that it seems to be an unending series of cheap tricks, all touted to be the beginning of something great. But so many of these so-called "advances" were then followed by a dead end. After watching this process happen over and over again, you can start to recognize the symptoms of yet another one.
The positive spin on Watson that you give, above, is way too optimistic. It is not a parallel approach, valid and worth considering in its own right. It will not make IBM any money (Deep Blue didn't). It has to run on a supercomputer. It is not competition to any real AI project, because it just does a narrow-domain task in a way that does not generalize to more useful tasks. It will probably not be useful, because it cheats: it uses massive supercomputing power to crack a nut. As a knowledge assistant that could help doctors with diagnosis: fine, but it is not really pushing the state of the art at all. There are already systems that do that, and the only difference between them and Watson is..... you cannot assign one supercomputer to each doctor on the planet! The list goes on and on. But there is no point laboring it. Here is my favorite Watson mistake, reported by NPR this morning: Question: "What do grasshoppers eat?" Notice that this question contains very few words, meaning that Watson's cluster-analysis algorithm has very little context to work with here: all it can do is find contexts in which the words "eat" and "grasshopper" are in close proximity. So what answer did Watson give: "What is 'kosher'?" Sigh! ;-) Richard Loosemore

From kellycoinguy at gmail.com Mon Feb 14 17:02:19 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Mon, 14 Feb 2011 10:02:19 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <4D592D10.6010404@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> Message-ID:

On Mon, Feb 14, 2011 at 6:24 AM, Richard Loosemore wrote: > Kelly Anderson wrote: > :-) > > Well, first off, don't get me wrong, because I say all this with a smile. > When I went to the AGI-09 conference, there was one guy there (Ed Porter) > who had spent many hours getting mad at me online, and he was eager to find > me in person. He spent the first couple of days failing to locate me in a > gathering of only 100 people, all of whom were wearing name badges, because > he was looking for some kind of mad, sullen, angry grump. The fact that I > was not old, and was smiling, talking and laughing all the time meant that > he didn't even bother to look at my name badge. We got along just great for > the rest of the conference. ;-)

I'm glad to hear you aren't grumpy in person... but you do come off that way online... :-)

> Anyhow. > > Just keep in mind one thing. I criticize projects like Watson because if > you look deeply at the history of AI you will notice that it seems to be an > unending series of cheap tricks, all touted to be the beginning of something > great. But so many of these so-called "advances" were then followed by a > dead end. After watching this process happen over and over again, you can > start to recognize the symptoms of yet another one.

I understood this to be your position.

> The positive spin on Watson that you give, above, is way too optimistic. It > is not a parallel approach, valid and worth considering in its own right. > It will not make IBM any money (Deep Blue didn't). It has to run on a > supercomputer.

Google runs on a supercomputer too. The same basic kind of supercomputer. Also, an iPhone has the computational power of NORAD circa 1965... so lots of extra computation can buy you a lot, even if not AGI all by itself.

> It is not competition to any real AI project, because it > just does a narrow-domain task in a way that does not generalize to more > useful tasks.
> It will probably not be useful, because it cheats: it uses > massive supercomputing power to crack a nut.

I think answering questions is a generally useful task.

> As a knowledge assistant that could help doctors with diagnosis: fine, but > it is not really pushing the state of the art at all. There are already > systems that do that, and the only difference between them and Watson > is..... you cannot assign one supercomputer to each doctor on the planet!

Of course you can. Put it online, time-share it, put it in the cloud. All this works fine. Most doctors wouldn't use such a system for more than a few minutes a week since most of their work is pretty routine.

> The list goes on and on. But there is no point laboring it. > > Here is my favorite Watson mistake, reported by NPR this morning: > > Question: "What do grasshoppers eat?" > > Notice that this question contains very few words, meaning that Watson's > cluster-analysis algorithm has very little context to work with here: all it > can do is find contexts in which the words "eat" and "grasshopper" are in > close proximity. So what answer did Watson give: > > "What is 'kosher'?" > > Sigh! ;-)

As for IBM making money from Deep Blue, I would ask: did Americans benefit from the space program? Research isn't meant to directly make money, but to lead the company in directions that will make money. Last time I checked, IBM was still profitable. Without research, they soon would not be profitable. What Watson tells the world is that IBM is still relevant. If that supports their stock price, then the Watson team has earned their money. There are now world-class chess programs that run on cell phones. In ten years, there will be Watson-like programs running on cell-phone-sized devices, but working better. I'm not impressed by Watson's mistakes. We KNOW it isn't intelligent, it just does what it does better than most humans. Over the next three days, we'll see if it does what it does better than the very best humans. Ken Jennings lives around here somewhere. I am kind of surprised I've never run into him. -Kelly

From lubkin at unreasonable.com Mon Feb 14 17:39:28 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Mon, 14 Feb 2011 12:39:28 -0500 Subject: [ExI] Treating Western diseases Message-ID: <201102141833.p1EIXpej014645@andromeda.ziaspace.com>

Treating autism, Crohn's disease, multiple sclerosis, etc. by intentionally ingesting parasites. The squeamish of you (if any) should get past any "ew, gross!" reaction and read this. It may be very important for someone you love and have implications for life extension. I heard about it from Patri. http://www.the-scientist.com/2011/2/1/42/1/ -- David.

From sjatkins at mac.com Mon Feb 14 19:17:02 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 14 Feb 2011 11:17:02 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> Message-ID: <4D597FAE.9050208@mac.com>

On 02/13/2011 09:29 PM, Kelly Anderson wrote: > On Sun, Feb 13, 2011 at 2:28 AM, BillK wrote: >> On Sun, Feb 13, 2011 at 8:58 AM, Kelly Anderson wrote: >> IBM PR makes big claims for Watson (but that's their job :) ). >> >> Quote: >> Watson's ability to understand the meaning and context of human >> language, and rapidly process information to find precise answers to >> complex questions, holds enormous potential to transform how computers >> help people accomplish tasks in business and their personal lives.
>> Watson will enable people to rapidly find specific answers to complex >> questions. The technology could be applied in areas such as >> healthcare, for accurately diagnosing patients, to improve online >> self-service help desks, to provide tourists and citizens with >> specific information regarding cities, prompt customer support via >> phone, and much more. > I have absolutely no doubt that Watson-like systems can do this. As a > research assistant to a doctor, Watson would be invaluable. It is, in > fact, a new kind of search engine with a little more intelligence than > a Google type system. And while Google is not an AI, sometimes it > feels like it is. Watson isn't a general AI, but it will feel like it > is at least some of the time. > > Honestly, I can't wait to watch Jeopardy tomorrow. > >> ------------------------- >> >> This article talks about what the developers are working on: >> >> >> Looks like they are doing some pretty complex stuff in there. > No doubt. One clarification on this deal. While it doesn't appear that > Watson does much sophisticated natural language processing of the text > in its index, it does appear to do very sophisticated NLP of the > questions and categories.

How much sophistication does it need to prune its search of its Jeopardy database? Not all that much. It is not doing any sort of general modelling of the speaker's mind, any sort of concept formation, taking note of any but the fixed context of Jeopardy and fixed question categories. So how does one leap to general wonderful NLP capabilities and being a good basis for creating a doctor's assistant? - s

From sjatkins at mac.com Mon Feb 14 19:25:50 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 14 Feb 2011 11:25:50 -0800 Subject: [ExI] Anons In-Reply-To: References: <003901cbcacc$0cba82c0$262f8840$@att.net> <4D56B9DC.6000906@satx.rr.com> Message-ID: <4D5981BE.5010405@mac.com>

On 02/12/2011 11:59 AM, Darren Greer wrote: > Damien wrote: > > >The technical name for what you prefer is "corporate fascism". That > doesn't have a really compelling history.< > > I agree Damien. When the definition of fascism was entered for the > first time in the Encyclopedia Italiano, Mussolini suggested that > corporatism was a more accurate name for that type of arrangement than > fascism anyway. Fascism is *by definition* the merging of business and > government, most often accompanied by rabid nationalism and sometimes > overt racism. > > But not always, which might make modern fascism difficult to recognize > because we always assume holocaust-type ethnic cleansing comes with > it. It doesn't. Italy followed Germany's Wannsee Conference directives > (because it was under political pressure to do so) but not to the > letter. For some of the war Mussolini allowed the north west of the > country to become a kind of protectorate for Jews who had fled other > parts of the state. The ax fell on them only after Italy fell, when > the allies invaded the south and Germany took the north to meet them. > I got this story from John Keegan's The Second World War, which is an > excellent book by the way.

Actually, the Constitution was intended to limit the government to what it expressly allows it to do. Since it does not mention allowing the government to meddle in the economy the way it does or to do state-corporate minglings, it is Constitutionally illegal for the government to do so.
Also, the government has to have gone far beyond its Constitutional charter in the first place to have enough power and money to be so attractive a target to merge with. So it is perfectly clear which came first in terms of culpability.

> Spike wrote: > > > >In any case it would be far preferable to government takeover of > corporations.< > > Is there a difference? When governments and corporations merge, does > it matter who made the first move? Given the checkered history of IBM, > Ford, Chase Manhattan, etc, not to mention the America First Committee > and the role of prominent industrialists like Ford in trying to keep > the U.S. out of World War II for business reasons, perhaps it should > be illegal. Currently we try to prevent the merging of the two with > market regulation and not through legislation, which doesn't seem to > be working all that well. The repeal of Glass-Steagall and the housing > market crash is a good example of that failure. > > Don't mean to sound testy, or confrontational, Spike. I have a bit of > a bee in my bonnet about what seems to be a widespread > misunderstanding of exactly what fascism is and how easily it could > happen again. The U.S. Congress has a fasces engraved on a wall > somewhere inside, by the way. Don't know its history, or what genius > decided it was a good idea, but it has always made me wonder. > > darren > > > > On Sat, Feb 12, 2011 at 12:48 PM, Damien Broderick > > wrote: > > On 2/12/2011 9:46 AM, spike wrote: > > Our constitution is set up to maintain separation of church > and state. > It doesn't say anything about separation of corporation and > state. As far > as I can tell the latter would be perfectly legal. In any > case it would be > far preferable to government takeover of corporations. > > > The technical name for what you prefer is "corporate fascism". > That doesn't have a really compelling history. > > Here's a random-selected thumbnail: > > > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > > -- > /There is no history, only biography./ > / > / > /-Ralph Waldo Emerson > / > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From darren.greer3 at gmail.com Tue Feb 15 00:19:41 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 14 Feb 2011 20:19:41 -0400 Subject: [ExI] Watson On Jeopardy Message-ID:

I just watched Watson take on the world champs on Jeopardy. The first game is spread over two episodes, so no winner yet. Watson is in the lead. Just wondering if others watched and what the general opinion was. I know there are those here that think Watson really doesn't mean much in terms of AI advancement. However, I was pretty inspired watching him. Not by the interface, but by the fact that AI has made it to prime time. Also I was pretty jazzed by the fact that he got all The Beatles questions and one on the Lord of the Rings correct. It was strange, and kind of thrilling, to hear a computer answer questions about these very human, and for me very personal, subjects. Watson is an idiot savant, of course. He doesn't know what these things mean to us. But I realized while watching that AI of the future might. We talk a lot here about friendly AI.
Has anyone considered or discussed before that it could be something as simple as a Shakespeare or a Mahler that saves us? Look forward to hearing the opinions/experiences of others with the show. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL:

From spike66 at att.net Tue Feb 15 03:28:55 2011 From: spike66 at att.net (spike) Date: Mon, 14 Feb 2011 19:28:55 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: Message-ID: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>

On Behalf Of Darren Greer Subject: [ExI] Watson On Jeopardy ... Watson is an idiot savant, of course. He doesn't know what these things mean to us... Darren

But we don't know what these things mean to Watson. So I would call it a draw. I don't have commercial TV, and can't find live streaming. I understand they are showing the next episode tomorrow and Wednesday? I will make arrangements with one of the neighbors to watch it. The news sites say it is tied between Watson and one of the carbons, with the other carbon back a few thousand dollars. Go Watson! spike -------------- next part -------------- An HTML attachment was scrubbed... URL:

From thespike at satx.rr.com Tue Feb 15 03:59:25 2011 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 14 Feb 2011 21:59:25 -0600 Subject: [ExI] Watson On Jeopardy In-Reply-To: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID: <4D59FA1D.5000902@satx.rr.com>

On 2/14/2011 9:28 PM, spike wrote: > I don't have commercial TV, and can't find live streaming. I don't have TV, period. Anyone have a link? Some minimal searching got me nowhere (although Watson would have told me). Damien Broderick

From x at extropica.org Tue Feb 15 04:09:27 2011 From: x at extropica.org (x at extropica.org) Date: Mon, 14 Feb 2011 20:09:27 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D59FA1D.5000902@satx.rr.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID:

On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: > On 2/14/2011 9:28 PM, spike wrote: >> I don't have commercial TV, and can't find live streaming. > I don't have TV, period. Anyone have a link?

From spike66 at att.net Tue Feb 15 05:56:34 2011 From: spike66 at att.net (spike) Date: Mon, 14 Feb 2011 21:56:34 -0800 Subject: [ExI] comet encounter in real time Message-ID: <003a01cbccd5$1a9da330$4fd8e990$@att.net>

Check this, a Lockheed Martin product is having a close encounter with a comet: http://interactive.foxnews.com/livestream/live.html?chanId=4

From alito at organicrobot.com Tue Feb 15 07:43:41 2011 From: alito at organicrobot.com (Alejandro Dubrovsky) Date: Tue, 15 Feb 2011 18:43:41 +1100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D59FA1D.5000902@satx.rr.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: <4D5A2EAD.6030209@organicrobot.com>

On 02/15/11 14:59, Damien Broderick wrote: > On 2/14/2011 9:28 PM, spike wrote: >> I don't have commercial TV, and can't find live streaming. > I don't have TV, period. Anyone have a link? Some minimal searching got > me nowhere (although Watson would have told me).
Part 1 http://www.youtube.com/watch?v=4PSPvHcLnN0 Part 2 http://www.youtube.com/watch?v=CtHlxzOXgYs

From jedwebb at hotmail.com Tue Feb 15 08:52:10 2011 From: jedwebb at hotmail.com (Jeremy Webb) Date: Tue, 15 Feb 2011 08:52:10 +0000 Subject: [ExI] The Future of Computing In-Reply-To: <4D5A2EAD.6030209@organicrobot.com> Message-ID:

I thought this was funny... Jeremy Webb http://www.theonion.com/articles/interim-apple-chief-under-fire-after-unveiling-gro,19111/ Jeremy Webb Heathen Vitki Tel: (07758) 966076 e-Mail: jedwebb at hotmail.com http://jeremywebb301.tripod.com/vikssite/index.html

From darren.greer3 at gmail.com Tue Feb 15 10:52:37 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 15 Feb 2011 06:52:37 -0400 Subject: [ExI] Watson On Jeopardy In-Reply-To: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> Message-ID:
After all, he is winning (or is tied, as Spike pointed out). And he
certainly got me thinking.

d.

2011/2/14 spike

>
> *On Behalf Of *Darren Greer
> *Subject:* [ExI] Watson On Jeopardy
>
> ... Watson is an idiot savant, of course. He doesn't know what these
> things mean to us... Darren
>
> But we don't know what these things mean to Watson. So I would call it a
> draw.
>
> I don't have commercial TV, and can't find live streaming. I understand
> they are showing the next episode tomorrow and Wednesday? I will make
> arrangements with one of the neighbors to watch it. The news sites say it
> is tied between Watson and one of the carbons, with the other carbon back
> a few thousand dollars.
>
> Go Watson!
>
> spike
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

-- 
*There is no history, only biography.*
* *
*-Ralph Waldo Emerson *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pharos at gmail.com Tue Feb 15 11:24:37 2011
From: pharos at gmail.com (BillK)
Date: Tue, 15 Feb 2011 11:24:37 +0000
Subject: [ExI] Watson On Jeopardy
In-Reply-To: 
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
Message-ID: 

2011/2/15 Darren Greer wrote:
> One thing though. When he gets it wrong, he really gets it wrong. One
> question asked the name of the place where a train both begins and ends.
> It was 'terminus.' Watson said 'Venice.' I found this quite funny, and
> was wondering what the algorithms brought up to give him such an answer.
> I did a search on the 'net for trains and Venice to see if I could come
> up with a strong connection that he might have found in his databanks,
> but I didn't find one.
>

No, you misheard. Watson was closer than that.

Quote:
The first one he got wrong was something like "A bus trip can either
begin or end here, from the Latin for end." Watson responded "What is
finis." That was wrong and Jennings chimed in with the correct
"Terminal." So Watson answered with the literal Latin for end
(terminus also means end).
----------------------

BillK

From alfio.puglisi at gmail.com Tue Feb 15 11:59:08 2011
From: alfio.puglisi at gmail.com (Alfio Puglisi)
Date: Tue, 15 Feb 2011 12:59:08 +0100
Subject: [ExI] Watson On Jeopardy
In-Reply-To: 
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
Message-ID: 

On Tue, Feb 15, 2011 at 12:24 PM, BillK wrote:

> 2011/2/15 Darren Greer wrote:
>
> > One thing though. When he gets it wrong, he really gets it wrong. One
> > question asked the name of the place where a train both begins and
> > ends. It was 'terminus.' Watson said 'Venice.' I found this quite
> > funny, and was wondering what the algorithms brought up to give him
> > such an answer. I did a search on the 'net for trains and Venice to
> > see if I could come up with a strong connection that he might have
> > found in his databanks, but I didn't find one.
>
> No, you misheard. Watson was closer than that.
>
> Quote:
> The first one he got wrong was something like "A bus trip can either
> begin or end here, from the Latin for end." Watson responded "What is
> finis." That was wrong and Jennings chimed in with the correct
> "Terminal." So Watson answered with the literal Latin for end
> (terminus also means end).

But even the "Venice" misunderstanding makes sense: Venice's train
station is a terminal, otherwise trains would fall into the sea...
A google map view:
http://maps.google.com/?ie=UTF8&ll=45.442052,12.320116&spn=0.010012,0.026157&t=h&z=16

Alfio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rpwl at lightlink.com Tue Feb 15 14:00:48 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Tue, 15 Feb 2011 09:00:48 -0500
Subject: [ExI] Watson on NOVA
In-Reply-To: 
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
Message-ID: <4D5A8710.2030403@lightlink.com>

Kelly Anderson wrote:
> On Mon, Feb 14, 2011 at 6:24 AM, Richard Loosemore wrote:
>> Kelly Anderson wrote:
>> :-)
>>
>> Well, first off, don't get me wrong, because I say all this with a smile.
>> When I went to the AGI-09 conference, there was one guy there (Ed Porter)
>> who had spent many hours getting mad at me online, and he was eager to find
>> me in person. He spent the first couple of days failing to locate me in a
>> gathering of only 100 people, all of whom were wearing name badges, because
>> he was looking for some kind of mad, sullen, angry grump. The fact that I
>> was not old, and was smiling, talking and laughing all the time meant that
>> he didn't even bother to look at my name badge. We got along just great for
>> the rest of the conference. ;-)
>
> I'm glad to hear you aren't grumpy in person... but you do come off
> that way online.. :-)

One further thought. I think I figured out the reason for your "mad
scientist" remark, and I feel I should briefly comment on that.

I did make a statement, earlier in the discussion, about being one of
the few people actually in a position to build a real AGI. I should
clarify: this was not really a bragging exercise (well, okay, a little),
but a comment about the nature of AI research and the particular point
in history where I think we are at the moment.

There is nothing special about me, personally, there is just a peculiar
fact about the kind of people doing AI research, and the particular
obstacle that I believe is holding up that research at the moment. My
comment was an expression of my belief that real progress will depend on
an understanding of the complex systems problem -- but because of an
accident of academic dynamics, there happen to be very few people in the
world at the moment who understand that problem. Give me a hundred
smart, receptive minds right now, and three years to train 'em up, and
there could be a hundred people who could build an AGI (and probably
better than I could).

So, just to say, don't interpret the previous comment to be too much of
a mad scientist comment ;-)

Richard Loosemore

From darren.greer3 at gmail.com Tue Feb 15 14:48:31 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Tue, 15 Feb 2011 10:48:31 -0400
Subject: [ExI] Watson On Jeopardy
In-Reply-To: 
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
Message-ID: 

>The first one he got wrong was something like "A bus trip can either
begin or end here, from the Latin for end." Watson responded "What is
finis." That was wrong and Jennings chimed in with the correct "Terminal."
So Watson answered with the literal Latin for end (terminus also means
end)<

Yup. I stand corrected.

d.

> On Tue, Feb 15, 2011 at 12:24 PM, BillK wrote:
>
>> 2011/2/15 Darren Greer wrote:
>>
>> > One thing though. When he gets it wrong, he really gets it wrong. One
>> > question asked the name of the place where a train both begins and
>> > ends. It was 'terminus.' Watson said 'Venice.'
>> > I found this quite funny, and was wondering what the algorithms
>> > brought up to give him such an answer. I did a search on the 'net for
>> > trains and Venice to see if I could come up with a strong connection
>> > that he might have found in his databanks, but I didn't find one.
>>
>> No, you misheard. Watson was closer than that.
>>
>> Quote:
>> The first one he got wrong was something like "A bus trip can either
>> begin or end here, from the Latin for end." Watson responded "What is
>> finis." That was wrong and Jennings chimed in with the correct
>> "Terminal." So Watson answered with the literal Latin for end
>> (terminus also means end).
>
> But even the "Venice" misunderstanding makes sense: Venice's train
> station is a terminal, otherwise trains would fall into the sea...
> A google map view:
> http://maps.google.com/?ie=UTF8&ll=45.442052,12.320116&spn=0.010012,0.026157&t=h&z=16
>
> Alfio
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

-- 
*There is no history, only biography.*
* *
*-Ralph Waldo Emerson *
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lubkin at unreasonable.com Tue Feb 15 16:10:47 2011
From: lubkin at unreasonable.com (David Lubkin)
Date: Tue, 15 Feb 2011 11:10:47 -0500
Subject: [ExI] Watson On Jeopardy
In-Reply-To: 
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
Message-ID: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>

What I'm curious about is to what extent Watson learns from his mistakes.
Not by his programmers adding a new trigger pattern or tweaking
parameters, but by learning processes within Watson.

Most successful people and organizations view their mistakes as a
tremendous opportunity for improvement. After several off-by-one
errors in my code, I realize I am prone to those errors, and specially
check for them. When I see repeated misunderstanding of the referent
of pronouns, I add the practice of pausing a conversation to clarify who
"they" refers to where it's ambiguous.

Limited to Jeopardy, it isn't always clear what kind of question a
category calls for. Champion players will immediately discern why a
question was ruled wrong and adapt their game on the fly.

Parenthetically, there is a divide in competitions between playing the
game and playing your opponent. Take chess. Some champions
make the objectively best move. Emanuel Lasker chose "lesser"
moves that he calculated would succeed against *that* player.
Criticized for it, he'd point out that he won the game, didn't he?

I wonder how often contestants deliberately don't press their buzzer
because they assess that one of their opponents will think they know
the answer but will get it wrong.

Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes,
gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer,
I'm up $2400. No one buzzes, I've lost a chance to be up $1200.

I suspect that it doesn't happen very often because of the pressure of
the moment. (I know contestants, but asking them wouldn't answer
the question.) If so, that's another way for Watson to have an edge.

(Except that last night showed that Watson doesn't yet know what
the other players' answers were. Watson 2.0 would listen to the game.
Build a profile of each player.
Which questions they buzzed on, how
long it took, how long it took after buzzing for them to speak their
answer, voice-stress analysis of how confident they sounded, how
correct the answer was. (Essentially part of what an expert poker
player does.)

I also wonder about the psychological elements. Some players
seem to dominate a Jeopardy game. If you were playing Ken
Jennings in his 63rd game, or a single game opponent who's up
by $15,000, would you play better than you otherwise would or
worse? (The initial strong lead that Watson had could have
intimidated lesser adversaries.)

-- David.
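David's tie-game arithmetic can be made explicit. A minimal sketch of the
expected relative-score swing for buzzing versus waiting follows; the
function names and all of the probabilities are invented here, purely for
illustration:

    # Expected relative-score swing on a $1200 clue, following David's
    # accounting above: +v if I answer right, -v if an opponent does,
    # +2v if he buzzes, misses, and I then answer right.
    # Every probability below is invented, purely for illustration.

    def ev_buzz(v, p_right):
        # Buzzing: win v with probability p_right; otherwise lose v,
        # plus (worst case) an opponent picks it up for another -v.
        return p_right * v - (1 - p_right) * 2 * v

    def ev_wait(v, p_opp_buzz, p_opp_right, p_me_right):
        # Waiting: the opponent may buzz and score (-v for me), or miss
        # and hand me a +2v swing if I then answer correctly. If nobody
        # buzzes, the relative score is unchanged (0).
        swing = p_opp_right * (-v) + (1 - p_opp_right) * p_me_right * 2 * v
        return p_opp_buzz * swing

    if __name__ == "__main__":
        v = 1200
        print("buzz:", ev_buzz(v, p_right=0.85))              # 660.0
        print("wait:", ev_wait(v, 0.7, 0.8, p_me_right=0.85)) # -386.4

With these particular numbers buzzing dominates; the interesting cases are
low-confidence clues against a strong opponent, which is where
Lasker-style opponent modeling would earn its keep.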
From rpwl at lightlink.com Tue Feb 15 16:45:27 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Tue, 15 Feb 2011 11:45:27 -0500
Subject: [ExI] Watson On Jeopardy
In-Reply-To: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
Message-ID: <4D5AADA7.8060209@lightlink.com>

David Lubkin wrote:
> What I'm curious about is to what extent Watson learns from his mistakes.
> Not by his programmers adding a new trigger pattern or tweaking
> parameters, but by learning processes within Watson.
>
> Most successful people and organizations view their mistakes as a
> tremendous opportunity for improvement. After several off-by-one
> errors in my code, I realize I am prone to those errors, and specially
> check for them. When I see repeated misunderstanding of the referent
> of pronouns, I add the practice of pausing a conversation to clarify who
> "they" refers to where it's ambiguous.
>
> Limited to Jeopardy, it isn't always clear what kind of question a
> category calls for. Champion players will immediately discern why a
> question was ruled wrong and adapt their game on the fly.
>
> Parenthetically, there is a divide in competitions between playing the
> game and playing your opponent. Take chess. Some champions
> make the objectively best move. Emanuel Lasker chose "lesser"
> moves that he calculated would succeed against *that* player.
> Criticized for it, he'd point out that he won the game, didn't he?
>
> I wonder how often contestants deliberately don't press their buzzer
> because they assess that one of their opponents will think they know
> the answer but will get it wrong.
>
> Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes,
> gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer,
> I'm up $2400. No one buzzes, I've lost a chance to be up $1200.
>
> I suspect that it doesn't happen very often because of the pressure of
> the moment. (I know contestants, but asking them wouldn't answer
> the question.) If so, that's another way for Watson to have an edge.
>
> (Except that last night showed that Watson doesn't yet know what
> the other players' answers were. Watson 2.0 would listen to the game.
> Build a profile of each player. Which questions they buzzed on, how
> long it took, how long it took after buzzing for them to speak their
> answer, voice-stress analysis of how confident they sounded, how
> correct the answer was. (Essentially part of what an expert poker
> player does.)
>
> I also wonder about the psychological elements. Some players
> seem to dominate a Jeopardy game. If you were playing Ken
> Jennings in his 63rd game, or a single game opponent who's up
> by $15,000, would you play better than you otherwise would or
> worse? (The initial strong lead that Watson had could have
> intimidated lesser adversaries.)

This is *way* beyond anything that Watson is doing.

What it does, essentially, is this:

It analyzes (*) a vast collection of writings. It records every content
word that it sees, and measures how "near" that word is to others in each
sentence ... i.e. how many other words come in between. It then adjusts
these "nearness" measures as time goes on, to get averages.

So if its very first text input is "Mary had a little lamb" it would
record "Mary" and "lamb", and give them a distance of 4. If it then saw
"Mary Queen of Scots" it would record a distance of 3 between "Mary" and
"Scot", and it would increase the distance between "Mary" and "lamb",
because "lamb" was not in the second sentence.

And on and on and on. Through billions of pages of text. It would then
have a table with one column for every word in the language and one row
for every word, and each entry is the average "distance" between the
words.

Then, when given a Jeopardy problem, it looks for answer words (or
possibly phrases?) that are very near to the content words in the given
sentence. Then it forms a question with that word or phrase as the
object, and it's done.

Hence: "What food do grasshoppers eat?" Answer: "Kosher", because the
most frequent places where "food" and "grasshopper" were mentioned in all
those billions of input texts, were in places discussing the fact that
grasshoppers are a food that is kosher.

Apart from various bits of peripheral processing to catch easy cases, to
look for little tricks, and to eliminate useless non-content words, etc
etc., that is all it does.

It is a brick-stupid cluster analysis program.

So, does Watson think about what the other contestants might be doing?
Err, that would be "What is 'you have got to be joking'?"

Richard Loosemore

P.S. (*) I am inferring the algorithm based on reports coming out of
there, and the way it makes mistakes. I have not seen the code,
obviously.
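A toy version of the "nearness averaging" Richard infers above fits in a
few lines. To be clear, this is only a caricature of that inferred idea,
not IBM's actual method (which, as the P.S. says, nobody here has seen);
the corpus, stopword list, and function names are all invented:

    # Toy word-"nearness" table: scan a corpus, record average distance
    # (in words) between every pair of content words, then answer a clue
    # by picking the candidate with the smallest average distance to the
    # clue's content words.
    from collections import defaultdict
    from itertools import combinations

    STOPWORDS = {"a", "an", "the", "of", "do", "had", "what", "are",
                 "was", "is"}

    def build_table(sentences):
        sums = defaultdict(float)
        counts = defaultdict(int)
        for s in sentences:
            words = [w.lower() for w in s.split()
                     if w.lower() not in STOPWORDS]
            for (i, w1), (j, w2) in combinations(enumerate(words), 2):
                pair = tuple(sorted((w1, w2)))
                sums[pair] += abs(j - i)   # word positions apart
                counts[pair] += 1
        return {p: sums[p] / counts[p] for p in sums}

    def answer(clue, candidates, table):
        clue_words = [w.lower() for w in clue.split()
                      if w.lower() not in STOPWORDS]
        def score(cand):
            pairs = [tuple(sorted((cand, w))) for w in clue_words]
            known = [table[p] for p in pairs if p in table]
            return sum(known) / len(known) if known else float("inf")
        return min(candidates, key=score)  # smallest average distance

    corpus = ["Mary had a little lamb",
              "Mary Queen of Scots was beheaded",
              "grasshoppers are a kosher food"]
    table = build_table(corpus)
    print(answer("What food do grasshoppers eat", ["kosher", "lamb"],
                 table))   # prints: kosher

The toy omits the refinement Richard mentions of increasing the distance
for absent words, but it reproduces the flavor: right for the wrong
reasons, and occasionally wrong in exactly the "Venice" way.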
From spike66 at att.net Tue Feb 15 16:58:12 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 08:58:12 -0800
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5A8710.2030403@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com>
Message-ID: <008c01cbcd31$8805bc80$98113580$@att.net>

...On Behalf Of Richard Loosemore

>...There is nothing special about me, personally, there is just a
peculiar fact about the kind of people doing AI research, and the
particular obstacle that I believe is holding up that research at the
moment...

Ja, but when you say "research" in reference to AI, keep in mind the
actual goal isn't the creation of AGI, but rather the creation of AGI
that doesn't kill us.

After seeing the amount of progress we have made in nanotechnology in the
quarter century since the K.Eric published Engines of Creation, I have
concluded that replicating nanobots are a technology that is out of reach
of human capability. We need AI to master that difficult technology.
Without replicating assemblers, we probably will never be able to read
and simulate frozen or vitrified brains. So without AI, we are without
nanotech, and consequently we are all doomed, along with our children and
their children forever.

On the other hand, if we are successful at doing AI wrong, we are all
doomed right now. It will decide it doesn't need us, or just sees no
reason why we are useful for anything.

When I was young, male and single (actually I am still male now) but when
I was young and single, I would have reasoned that it is perfectly fine
to risk future generations on that bet: build AI now and hope it likes
us, because all future generations are doomed to a century or less of
life anyway, so there's no reasonable objection to betting that against
eternity.

Now that I am middle aged, male and married, with a child, I would do
that calculus differently. I am willing to risk that a future AI can
upload a living being but not a frozen one, so that people of my son's
generation have a shot at forever even if it means that we do not. There
is a chance that a future AI could master nanotech, which gives me hope
as a corpsicle that it could read and upload me. But I am reluctant to
risk my children's and grandchildren's 100 years of meat world existence
on just getting AI going as quickly as possible.

In that sense, having AI researchers wander off into making toys (such as
chess software and Watson) is perfectly OK, and possibly desirable.

>...Give me a hundred smart, receptive minds right now, and three years
to train 'em up, and there could be a hundred people who could build an
AGI (and probably better than I could)...

Sure but do you fully trust every one of those students? Computer science
students are disproportionately young and male.

>...So, just to say, don't interpret the previous comment to be too much
of a mad scientist comment ;-) Richard Loosemore

Ja, I understand the reasoning behind those who are focused on the goal
of creating AI, and I agree the idea is not crazed or unreasonable. I
just disagree with the notion that we need to be in a desperate hurry to
make an AI. We as a species can take our time and think about this
carefully, and I hope we do, even if it means you and I will be lost
forever.

Nuclear bombs preceded nuclear power plants.

spike

From spike66 at att.net Tue Feb 15 17:27:36 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 09:27:36 -0800
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <4D5AADA7.8060209@lightlink.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com>
Message-ID: <009701cbcd35$a38da180$eaa8e480$@att.net>

>... On Behalf Of Richard Loosemore
...
>...Apart from various bits of peripheral processing to catch easy cases,
to look for little tricks, and to eliminate useless non-content words,
etc etc., that is all it does.

>...It is a brick-stupid cluster analysis program....Richard Loosemore

Sure but that in itself is enormously educational. The chess world was
rather shocked to learn how the best chess programs were disappointingly
simple. All manner of tricky positional evaluation algorithms were tried,
but in the long run, they were overwhelmed by brute speed and simple
evaluation algorithms. Today the best chess algorithms are not very
complicated. What this taught us is that the best chess players are far
more stupid than we realized. Chess is a simple game. It only looks
complicated to simple-minded creatures such as humans.

In that sense I am not particularly surprised to learn that Watson is a
fairly simple program, but delighted in a sense. That is the outcome I
wanted. We have learned the magic of simple algorithms to do interesting
things.
The reason this is desirable is that simple algorithms are accessible to
more humans, which means we will write more of them to do such things as
watch us in the kitchen and tell us how to do the next step in creating a
meal for instance, or watch us working on a motorcycle and coach us
along. Or teach our children. Or do simple medical diagnoses by noting
each day our weight (sensors in our computer chair) and sniffing the air
about our corpses and doing chemical analysis while we do our normal
activities at the computer. They could watch what we eat and in what
quantities, and make annoying suggestions for instance. Simple algorithms
can do much for us.

Furthermore and more importantly, simple algorithms can run on simpler
processors. This will likely be enormously important as we progress to
have vastly more numerous even if simpler processors.

spike
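The "brute speed plus simple evaluation" recipe spike describes really is
small. Here is a generic sketch of the conceptual core, shown on
tic-tac-toe so that it is fully runnable; a chess engine differs mainly
in cutting the search off at some depth with a heuristic evaluation
(material count, say) and adding alpha-beta pruning for speed. This is an
illustration, not any particular engine:

    # Bare-bones negamax: exhaustive search with a trivial evaluation
    # (+1 win, 0 draw, -1 loss). Boards are 9-character strings of
    # "x", "o", and "." read left to right, top to bottom.

    LINES = [(0,1,2),(3,4,5),(6,7,8),
             (0,3,6),(1,4,7),(2,5,8),
             (0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def negamax(board, player):
        # Score from the point of view of `player`, who is to move.
        opponent = "o" if player == "x" else "x"
        if winner(board) == opponent:     # the previous move won
            return -1
        if "." not in board:              # board full: draw
            return 0
        return max(-negamax(board[:i] + player + board[i+1:], opponent)
                   for i, sq in enumerate(board) if sq == ".")

    def best_move(board, player):
        opponent = "o" if player == "x" else "x"
        return max((i for i, sq in enumerate(board) if sq == "."),
                   key=lambda i: -negamax(board[:i] + player + board[i+1:],
                                          opponent))

    print(best_move("xx..o..o.", "x"))    # 2: completes the top row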
From rpwl at lightlink.com Tue Feb 15 17:34:30 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Tue, 15 Feb 2011 12:34:30 -0500
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
Message-ID: <4D5AB926.6040606@lightlink.com>

spike wrote:
> ...On Behalf Of Richard Loosemore
>
>> ...There is nothing special about me, personally, there is just a peculiar
> fact about the kind of people doing AI research, and the particular obstacle
> that I believe is holding up that research at the moment...
>
> Ja, but when you say "research" in reference to AI, keep in mind the actual
> goal isn't the creation of AGI, but rather the creation of AGI that doesn't
> kill us.
>
> After seeing the amount of progress we have made in nanotechnology in the
> quarter century since the K.Eric published Engines of Creation, I have
> concluded that replicating nanobots are a technology that is out of reach of
> human capability. We need AI to master that difficult technology. Without
> replicating assemblers, we probably will never be able to read and simulate
> frozen or vitrified brains. So without AI, we are without nanotech, and
> consequently we are all doomed, along with our children and their children
> forever.
>
> On the other hand, if we are successful at doing AI wrong, we are all doomed
> right now. It will decide it doesn't need us, or just sees no reason why we
> are useful for anything.
>
> When I was young, male and single (actually I am still male now) but when I
> was young and single, I would have reasoned that it is perfectly fine to
> risk future generations on that bet: build AI now and hope it likes us,
> because all future generations are doomed to a century or less of life
> anyway, so there's no reasonable objection to betting that against
> eternity.
>
> Now that I am middle aged, male and married, with a child, I would do that
> calculus differently. I am willing to risk that a future AI can upload a
> living being but not a frozen one, so that people of my son's generation
> have a shot at forever even if it means that we do not. There is a chance
> that a future AI could master nanotech, which gives me hope as a corpsicle
> that it could read and upload me. But I am reluctant to risk my children's
> and grandchildren's 100 years of meat world existence on just getting AI
> going as quickly as possible.
>
> In that sense, having AI researchers wander off into making toys (such as
> chess software and Watson) is perfectly OK, and possibly desirable.
>
>> ...Give me a hundred smart, receptive minds right now, and three years to
> train 'em up, and there could be a hundred people who could build an AGI
> (and probably better than I could)...
>
> Sure but do you fully trust every one of those students? Computer science
> students are disproportionately young and male.
>
>> ...So, just to say, don't interpret the previous comment to be too much of
> a mad scientist comment ;-) Richard Loosemore
>
> Ja, I understand the reasoning behind those who are focused on the goal of
> creating AI, and I agree the idea is not crazed or unreasonable. I just
> disagree with the notion that we need to be in a desperate hurry to make an
> AI. We as a species can take our time and think about this carefully, and I
> hope we do, even if it means you and I will be lost forever.
>
> Nuclear bombs preceded nuclear power plants.

The problem is, Spike, that you (like many other people) speak of AI/AGI
as if the things that it will want to do (its motivations) will only
become apparent to us AFTER we build one.

So, you say things like "It will decide it doesn't need us, or just sees
no reason why we are useful for anything."

This is fundamentally and devastatingly wrong. You are basing your entire
AGI worldview on a crazy piece of accidental black propaganda that came
from science fiction.

In fact, their motivations will have to be designed, and there are ways
to design those motivations to make them friendly.

The disconnect between the things you repeat (like "It will decide it
doesn't need us") and the actual, practical reality of creating an AGI is
so drastic that in a couple of decades this attitude will seem as
antiquated as the idea that the telephone network would just
spontaneously wake up and start talking to us. Or the idea that one too
many connections in the NY Subway might create a Möbius loop that
connects through to the fourth dimension.

Those are all great science fiction ideas, but they -- all three of them
-- are completely bogus as science. If you started claiming, on this
list, that the Subway might accidentally connect to some other dimension
just because they put in one too many tunnels, you would be dismissed as
a crackpot.

What you are failing to get is that current naive ideas about AGI
motivation will eventually seem silly.

And, I would not hire a gang of computer science students: that is
exactly the point. They would be psychologists AND CS people, because
only that kind of crowd can get over these primitive mistakes.

Richard Loosemore

From rpwl at lightlink.com Tue Feb 15 17:37:54 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Tue, 15 Feb 2011 12:37:54 -0500
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <009701cbcd35$a38da180$eaa8e480$@att.net>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net>
Message-ID: <4D5AB9F2.3040802@lightlink.com>

spike wrote:
>> ... On Behalf Of Richard Loosemore
> ...
>> ...Apart from various bits of peripheral processing to catch easy cases, to
> look for little tricks, and to eliminate useless non-content words, etc
> etc., that is all it does.
>
>> ...It is a brick-stupid cluster analysis program....Richard Loosemore
>
> Sure but that in itself is enormously educational.
> The chess world was rather shocked to learn how the best chess programs
> were disappointingly simple. All manner of tricky positional evaluation
> algorithms were tried, but in the long run, they were overwhelmed by
> brute speed and simple evaluation algorithms. Today the best chess
> algorithms are not very complicated. What this taught us is that the
> best chess players are far more stupid than we realized. Chess is a
> simple game. It only looks complicated to simple-minded creatures such
> as humans.

Oh, puh-lease! ;-)

It taught us that the human brain is so smart that the only way the fools
at IBM could compete with it was by doing a million times as much brute
force searching.

Richard Loosemore

From atymes at gmail.com Tue Feb 15 16:51:41 2011
From: atymes at gmail.com (Adrian Tymes)
Date: Tue, 15 Feb 2011 08:51:41 -0800
Subject: [ExI] Treating Western diseases
In-Reply-To: <201102141833.p1EIXpej014645@andromeda.ziaspace.com>
References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com>
Message-ID: 

Experimental treatment with only anecdotes to attest to its usefulness.
Seen it before - for a good number of them, it turns out that either
that's not what's doing it, or there's a more effective way to get at the
specific subcomponent that's causing the cure. Either way, in the
meantime a lot of people hear partial details of this kind of thing and
rush to cure themselves, only to experience no or negative health effects
as a result.

There's a reason the FDA requires certain studies before approving such
medicines. Yes, they're long. Yes, they're expensive. But they do a very
good (not perfect, but better than most of the world) job of making sure
the medicine is in fact doing what it promises to. This lesson had
already been learned in the early twentieth century.

Which isn't to say there's nothing there. Just, unless you know what
you're doing here enough that you'd be willing to put others' lives on
the line, don't touch it yourself either.

On Mon, Feb 14, 2011 at 9:39 AM, David Lubkin wrote:
> Treating autism, Crohn's disease, multiple sclerosis, etc. with
> intentionally ingesting parasites. The squeamish of you (if any) should get
> past any "ew, gross!" reaction and read this. It may be very important for
> someone you love and have implications on life extension. I heard about it
> from Patri.
>
> http://www.the-scientist.com/2011/2/1/42/1/
>
>
> -- David.
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From spike66 at att.net Tue Feb 15 17:55:45 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 09:55:45 -0800
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <4D5AB9F2.3040802@lightlink.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net>
	<4D5AB9F2.3040802@lightlink.com>
Message-ID: <009f01cbcd39$926b5a10$b7420e30$@att.net>

On Behalf Of Richard Loosemore:

spike wrote:
>> ... What this taught us is that the best chess players are far more stupid than we realized. Chess is a
>> simple game. It only looks complicated to simple-minded creatures such as humans.

>Oh, puh-lease! ;-)

>It taught us that the human brain is so smart that the only way the fools at IBM could compete with it was by doing a million
>times as much brute force searching...
Richard Loosemore

Richard you are aware that cell phones can now play grandmaster level
chess?

http://en.wikipedia.org/wiki/HIARCS

--> Hiarcs 13 is the chess engine used in Pocket Fritz 4. Pocket Fritz 4
won the Copa Mercosur tournament in Buenos Aires, Argentina with nine
wins and one draw on August 4-14, 2009. The 2009 Copa Mercosur tournament
was a category 6 tournament. Pocket Fritz 4 achieved a performance rating
2898 while running on the mobile phone HTC Touch HD.[6] Pocket Fritz 4
searches less than 20,000 positions per second.[7] <--

The best human players are rated a little over 2800. There have been only
six humans in history who have crossed the 2800 level. The tournament
performance of Pocket Fritz 4 on a cell phone (without calling a friend)
was almost 2900. Some humans have achieved higher results in a particular
tournament than 2900, but this was still extremely impressive. I found it
interesting how little this was noted in the chess world. I am hip to
what goes on in that area, but I didn't hear of this result until over a
year after the fact.

spike

From spike66 at att.net Tue Feb 15 17:45:29 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 09:45:29 -0800
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <009701cbcd35$a38da180$eaa8e480$@att.net>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net>
Message-ID: <009e01cbcd38$22e7cb70$68b76250$@att.net>

... On Behalf Of spike
...
>...Simple algorithms can do much for us. Furthermore and more
importantly, simple algorithms can run on simpler processors. This will
likely be enormously important as we progress to have vastly more
numerous even if simpler processors...spike

On the other hand, perhaps we want to do AI in such a way that it can
only run on high-end low-latency processors. Then it continues to need
humans to make it more processors in which to replicate.

spike

From jonkc at bellsouth.net Tue Feb 15 18:34:42 2011
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 15 Feb 2011 13:34:42 -0500
Subject: [ExI] Watson on NOVA.
In-Reply-To: <4D58093D.9070306@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com>
Message-ID: <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net>

On Feb 13, 2011, at 11:39 AM, Richard Loosemore wrote:

> Sadly, this only confirms the deeply skeptical response that I gave earlier.
> I strongly suspected that it was using some kind of statistical "proximity" algorithms to get the answers. And in that case, we are talking about zero advancement of AI.

So, a "zero advancement of AI" results in a computer doing amazing things
that nobody has seen before. If you are correct then an advancement of AI
is not needed to build an AI. I conclude you are not correct.

John K Clark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonkc at bellsouth.net Tue Feb 15 18:46:25 2011
From: jonkc at bellsouth.net (John Clark)
Date: Tue, 15 Feb 2011 13:46:25 -0500
Subject: [ExI] Watson On Jeopardy.
In-Reply-To: <4D5AADA7.8060209@lightlink.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com>
Message-ID: <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net>

On Feb 15, 2011, at 11:45 AM, Richard Loosemore wrote:

> What it does, essentially, is this: [blah blah]

Who cares!
The point is that if a human behaved as Watson behaved you'd say he was
intelligent, very intelligent indeed. But it was a computer doing the
behaving, not a person, so intelligence had absolutely positively 100%
nothing to do with it because, after all, if you can explain how it works
then it's not intelligence, or to put it another way, intelligence is
whatever a computer can't yet do.

John K Clark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From spike66 at att.net Tue Feb 15 19:08:38 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 11:08:38 -0800
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5AB926.6040606@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com>
Message-ID: <00af01cbcd43$c05267c0$40f73740$@att.net>

... On Behalf Of Richard Loosemore
Subject: Re: [ExI] Watson on NOVA

spike wrote:
...
>> Nuclear bombs preceded nuclear power plants.

>The problem is, Spike, that you (like many other people) speak of AI/AGI
as if the things that it will want to do (its motivations) will only
become apparent to us AFTER we build one...

Rather I would say we can't be *completely sure* of its motivations until
after it demonstrates them. But more critically, AGI would be capable of
programming, and so it could write its own software, so it could create
its own AGI, more advanced than itself. If we have programmed into the
first AGI the notion that it puts another species (humans) ahead of its
own interests, then I can see it creating a next generation of mind
children, which it puts ahead of its own interests. It isn't clear to me
that our mind-children would put our interests ahead of those of our
mind-grandchildren, or that our mind-great-grandchildren would care about
us, regardless of how we program our mind children.

I am not claiming that AGI will be indifferent to us. Rather only that
once recursive AI self-improvement begins, it is extremely difficult,
perhaps impossible for us to predict where it goes.

>So, you say things like "It will decide it doesn't need us, or just sees
no reason why we are useful for anything."

>This is fundamentally and devastatingly wrong.

In this Richard, I hope you are fundamentally and devastatingly right.
But my claim is that we do not know this for sure, and the stakes are
enormous.

> You are basing your entire AGI worldview on a crazy piece of accidental
black propaganda that came from science fiction...

Science fiction does tend toward the catastrophic. That's Hollyweird,
it's how they make their living. But in there is a signal: beware, be
very very ware, there is danger in AI that must not be ignored. With the
danger comes unimaginable promise. But with the promise, danger.

>...In fact, their motivations will have to be designed, and there are
ways to design those motivations to make them friendly.

Good, glad to hear it. Convince me please. Also convince me that our
mind-children's mind-children, which spawn every few trillion
nanoseconds, will not evolve away that friendliness. We are theorizing
evolution in fast forward.

>...And, I would not hire a gang of computer science students: that is
exactly the point. They would be psychologists AND CS people, because
only that kind of crowd can get over these primitive mistakes. Richard
Loosemore

OK good.
Of course psychologists study human motivations based on human evolution.
I don't know how many of these lessons would apply to a life-form which
can evolve a distinct new subspecies while we slept last night. I do
fondly hope your optimism is justified.

spike

From lubkin at unreasonable.com Tue Feb 15 19:26:27 2011
From: lubkin at unreasonable.com (David Lubkin)
Date: Tue, 15 Feb 2011 14:26:27 -0500
Subject: Re: [ExI] Treating Western diseases
In-Reply-To: 
References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com>
Message-ID: <201102151926.p1FJQR1F007711@andromeda.ziaspace.com>

Adrian wrote:

>Experimental treatment with only anecdotes to attest to its usefulness.
:

You posted boilerplate. Is there really anyone here who doesn't know what
you wrote? (And, conversely, I think we're all aware of the deaths and
suffering from the enormous cost and delay of FDA approval.)

This is interesting to me in several respects.

First, of course, it's promising for currently intractable medical
conditions.

Second, it raises the point that many of what we label parasites are more
correctly viewed as symbiotes. We should take a closer look at all the
species we block or excise, to see if there's a benefit we are now
losing.

Third, for sustainable off-world presence in something resembling our
current organic form, we probably should bring everything with us, no
matter how annoying the species. We still know far too little biology to
be sure we don't need every mold and every species of cockroach.

-- David.

From sjatkins at mac.com Tue Feb 15 19:28:23 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 15 Feb 2011 11:28:23 -0800
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
Message-ID: <4D5AD3D7.4000206@mac.com>

On 02/15/2011 08:10 AM, David Lubkin wrote:
> What I'm curious about is to what extent Watson learns from his mistakes.
> Not by his programmers adding a new trigger pattern or tweaking
> parameters, but by learning processes within Watson.
>

I am not an expert on learning algorithms, but a feedback mechanism from
a negative result can be used to prune subsequent sufficiently similar
searches.

> Most successful people and organizations view their mistakes as a
> tremendous opportunity for improvement. After several off-by-one
> errors in my code, I realize I am prone to those errors, and specially
> check for them. When I see repeated misunderstanding of the referent
> of pronouns, I add the practice of pausing a conversation to clarify who
> "they" refers to where it's ambiguous.

It is not good to directly extrapolate from what a human would do to what
may or may not be programmed into Watson or what is and is not currently
programmable as a form of learning.

>
> Limited to Jeopardy, it isn't always clear what kind of question a
> category calls for. Champion players will immediately discern why a
> question was ruled wrong and adapt their game on the fly.

Yes and same comment.

>
> Parenthetically, there is a divide in competitions between playing the
> game and playing your opponent. Take chess. Some champions
> make the objectively best move. Emanuel Lasker chose "lesser"
> moves that he calculated would succeed against *that* player.
> Criticized for it, he'd point out that he won the game, didn't he?
> I wonder how often contestants deliberately don't press their buzzer
> because they assess that one of their opponents will think they know
> the answer but will get it wrong.

I very much doubt that Watson includes this level of modelling and
successfully guessing the likely success of other players on a particular
question. That would be really impressive if included and I would be very
interested in the algorithms employed to make it possible.

> Tie game. $1200 clue. I buzz, get it right, $1200. I wait, Spike buzzes,
> gets it right, I'm down $1200. Spike buzzes, gets it wrong, I answer,
> I'm up $2400. No one buzzes, I've lost a chance to be up $1200.

I would expect Watson to only answer when its computed probability of
being correct was sufficiently high.

> I suspect that it doesn't happen very often because of the pressure of
> the moment. (I know contestants, but asking them wouldn't answer
> the question.) If so, that's another way for Watson to have an edge.
>
> (Except that last night showed that Watson doesn't yet know what
> the other players' answers were. Watson 2.0 would listen to the game.
> Build a profile of each player. Which questions they buzzed on, how
> long it took, how long it took after buzzing for them to speak their
> answer, voice-stress analysis of how confident they sounded, how
> correct the answer was. (Essentially part of what an expert poker
> player does.)

It would be a fun research project to build that correlation set and
tweak its predictive abilities.

- s
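Both mechanisms sketched in this exchange (pruning on negative feedback,
and profiling the other players) reduce to simple bookkeeping plus a
threshold. A hypothetical sketch; the class and field names are invented,
and nothing here reflects Watson's real internals:

    # Hypothetical: shave confidence off candidate answers whose
    # (category, answer-type) pattern has missed before, and keep
    # per-player buzz statistics as a crude opponent model.
    from collections import defaultdict

    class FeedbackModel:
        def __init__(self, penalty=0.2):
            self.misses = defaultdict(int)  # (category, answer_type) -> count
            self.penalty = penalty

        def adjust(self, confidence, category, answer_type):
            # Prune: each prior miss on this pattern shrinks confidence.
            exponent = self.misses[(category, answer_type)]
            return confidence * (1 - self.penalty) ** exponent

        def record_miss(self, category, answer_type):
            self.misses[(category, answer_type)] += 1

    class OpponentModel:
        def __init__(self):
            self.buzzes = defaultdict(int)
            self.right = defaultdict(int)

        def record(self, player, buzzed, was_correct):
            if buzzed:
                self.buzzes[player] += 1
                self.right[player] += int(was_correct)

        def accuracy(self, player):
            # Fraction of this player's buzzes that were correct.
            return self.right[player] / max(1, self.buzzes[player])

    model = FeedbackModel()
    model.record_miss("LATIN", "city")
    print(model.adjust(0.9, "LATIN", "city"))  # 0.72 after one miss

The interesting research question is the one Samantha raises: whether such
a correlation set has any real predictive power, or just overfits a
handful of games.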
From rpwl at lightlink.com Tue Feb 15 19:33:37 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Tue, 15 Feb 2011 14:33:37 -0500
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <00af01cbcd43$c05267c0$40f73740$@att.net>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net>
Message-ID: <4D5AD511.6040902@lightlink.com>

spike wrote:
> Richard Loosemore wrote:
>> The problem is, Spike, that you (like many other people) speak of AI/AGI as
> if the things that it will want to do (its motivations) will only become
> apparent to us AFTER we build one...
>
> Rather I would say we can't be *completely sure* of its motivations until
> after it demonstrates them.

According to *which* theory of AGI motivation?

Armchair theorizing only, I am afraid. Guesswork.

> But more critically, AGI would be capable of programming, and so it could
> write its own software, so it could create its own AGI, more advanced than
> itself. If we have programmed into the first AGI the notion that it puts
> another species (humans) ahead of its own interests, then I can see it
> creating a next generation of mind children, which it puts ahead of its own
> interests. It isn't clear to me that our mind-children would put our
> interests ahead of those of our mind-grandchildren, or that our
> mind-great-grandchildren would care about us, regardless of how we program
> our mind children.

Everything in this paragraph depends on exactly what kind of mechanism is
driving the AGI, but since that is left unspecified, the conclusions you
reach are just guesswork.

In fact, the AGI would be designed to feel empathy *with* the human
species. It would feel itself to be one of us. According to your logic,
then, it would design its children to do the same.

That leads to a revised conclusion (if we do nothing more than stick to
the simple logic here): the AGI and all its descendants will have the
same, stable, empathic motivations. Nowhere along the line will any of
them feel inclined to create something dangerous.

Richard Loosemore

From sjatkins at mac.com Tue Feb 15 19:44:12 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 15 Feb 2011 11:44:12 -0800
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
Message-ID: <4D5AD78C.80805@mac.com>

On 02/15/2011 08:58 AM, spike wrote:
> ...On Behalf Of Richard Loosemore
>
>> ...There is nothing special about me, personally, there is just a peculiar
> fact about the kind of people doing AI research, and the particular obstacle
> that I believe is holding up that research at the moment...
>
> Ja, but when you say "research" in reference to AI, keep in mind the actual
> goal isn't the creation of AGI, but rather the creation of AGI that doesn't
> kill us.

Well, no. Not any more than the object of having a child is to have a
child that has zero potential of doing something horrendous. Even less so
than in that analogy, as an AGI child is a radically different type of
being, of potentially radically more power than its parents. I don't
believe for an instant that it is possible to ensure such a being will
never ever harm us by any act of omission or commission that it will ever
take in all of its changes over time. I find it infinitely more hubristic
to think that we are capable of doing so than to think that we can create
the AGI or the seed of one in the first place.

> After seeing the amount of progress we have made in nanotechnology in the
> quarter century since the K.Eric published Engines of Creation, I have
> concluded that replicating nanobots are a technology that is out of reach of
> human capability.

Not so. Just a good three decades further out.

> We need AI to master that difficult technology. Without
> replicating assemblers, we probably will never be able to read and simulate
> frozen or vitrified brains. So without AI, we are without nanotech, and
> consequently we are all doomed, along with our children and their children
> forever.

Well, there is the upload path as one alternative.

> On the other hand, if we are successful at doing AI wrong, we are all doomed
> right now. It will decide it doesn't need us, or just sees no reason why we
> are useful for anything.
>

The maximal danger is if it decides we are a) in the way of what it
wants/needs to do and b) do not have enough mitigating worth to receive
sufficient consideration to survive. A lesser danger is that there simply
is not a niche left for us and the AGI[s] either find us of insufficient
value to preserve us anyway or humans cannot survive on such a
reservation or as pets. It is quite possible that billions of humans
without AGI will eventually find there is no particular niche they can
fill in any case.

> When I was young, male and single (actually I am still male now) but when I
> was young and single, I would have reasoned that it is perfectly fine to
> risk future generations on that bet: build AI now and hope it likes us,
> because all future generations are doomed to a century or less of life
> anyway, so there's no reasonable objection to betting that against
> eternity.
>

I am still pretty strongly of the mind that AGI is essential to humanity
surviving this century. A most necessary but not necessarily sufficient
condition.

> Now that I am middle aged, male and married, with a child, I would do that
> calculus differently. I am willing to risk that a future AI can upload a
> living being but not a frozen one, so that people of my son's generation
> have a shot at forever even if it means that we do not. There is a chance
> that a future AI could master nanotech, which gives me hope as a corpsicle
> that it could read and upload me. But I am reluctant to risk my children's
> and grandchildren's 100 years of meat world existence on just getting AI
> going as quickly as possible.

This may doom us all if AGI is indeed critical to our species survival. I
believe it is, as the complexity and velocity of potentially deadly
problems increase without bound as technology accelerates, while human
intelligence, even with increasingly powerful (but not AGI) computation
and communication, is bounded.

- samantha

From lubkin at unreasonable.com Tue Feb 15 19:56:14 2011
From: lubkin at unreasonable.com (David Lubkin)
Date: Tue, 15 Feb 2011 14:56:14 -0500
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <4D5AADA7.8060209@lightlink.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com>
Message-ID: <201102151955.p1FJto5v017690@andromeda.ziaspace.com>

Richard Loosemore wrote:

>This is *way* beyond anything that Watson is doing.
>
>What it does, essentially, is this:
:
>It is a brick-stupid cluster analysis program.
>
>So, does Watson think about what the other contestants might be
>doing? Err, that would be "What is 'you have got to be joking'?"

You don't seem to have read what I wrote. The only question I raised
about Watson's current capabilities was whether it had a module to
analyze its failures and hone itself. *That* has been possible in
software for several decades. (I've worked in pertinent technologies
since the late 70's.)

-- David.

From sjatkins at mac.com Tue Feb 15 19:57:21 2011
From: sjatkins at mac.com (Samantha Atkins)
Date: Tue, 15 Feb 2011 11:57:21 -0800
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5AB926.6040606@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com>
Message-ID: <4D5ADAA1.7050309@mac.com>

On 02/15/2011 09:34 AM, Richard Loosemore wrote:
> spike wrote:
>
> The problem is, Spike, that you (like many other people) speak of
> AI/AGI as if the things that it will want to do (its motivations) will
> only become apparent to us AFTER we build one.
>
> So, you say things like "It will decide it doesn't need us, or just
> sees no reason why we are useful for anything."
>
> This is fundamentally and devastatingly wrong. You are basing your
> entire AGI worldview on a crazy piece of accidental black propaganda
> that came from science fiction.

If an AGI is an autonomous rational agent then the meaning of whatever
values are installed into it on creation will evolve and clarify over
time, particularly in how they should be applied to actual contexts it
will find itself in.
Are you saying that simple proscription of some actions is sufficient, or
that any human or group of humans can sufficiently state the exact
value[s] to be attained in a way that will never, ever, in any
circumstances, lead to any unintended consequences (the Genie problem)?
As an intelligent being, don't you wish the AGI to reflect deeply on the
values it holds and their relationship to one another? Are you sure that
in this reflection it will never find some of the early programmed-in
ones to be of questionable importance or weight? Are you sure you would
want that powerful a mind to be incapable of such reflection?

- samantha

From eugen at leitl.org Tue Feb 15 20:05:53 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Tue, 15 Feb 2011 21:05:53 +0100
Subject: Re: [ExI] Watson On Jeopardy
In-Reply-To: <009e01cbcd38$22e7cb70$68b76250$@att.net>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net>
	<009e01cbcd38$22e7cb70$68b76250$@att.net>
Message-ID: <20110215200553.GY23560@leitl.org>

On Tue, Feb 15, 2011 at 09:45:29AM -0800, spike wrote:

> On the other hand, perhaps we want to do AI in such a way that it can only

We want AI that works. It yet doesn't.

> run on high-end low-latency processors. Then it continues to need humans to

Computer performance required for AI needs massive parallelism, and
currently the computational resources of the entire Earth.

http://www.sciencemag.org/content/early/2011/02/09/science.1200970.abstract

> make it more processors in which to replicate.

The smaller the structures, the less the amount of human meat left in the
loop (due to them being a source of particulate contaminants fouling up
your process).

One of the core characteristics of human-competitive intelligence is that
it first matches, then surpasses human performance. Across the board.
Which means that the entire supply chain will be one: not human.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From eugen at leitl.org Tue Feb 15 20:08:59 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Tue, 15 Feb 2011 21:08:59 +0100
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5AD511.6040902@lightlink.com>
References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net>
	<4D5AD511.6040902@lightlink.com>
Message-ID: <20110215200859.GZ23560@leitl.org>

On Tue, Feb 15, 2011 at 02:33:37PM -0500, Richard Loosemore wrote:

> According to *which* theory of AGI motivation?

Q: How can you tell an AI kook?
A: By the G.

> Armchair theorizing only, I am afraid. Guesswork.

Don't you have work to do, Richard? Like teaching these researchers how
to build an AI, for instance?
-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

From spike66 at att.net Tue Feb 15 19:58:57 2011
From: spike66 at att.net (spike)
Date: Tue, 15 Feb 2011 11:58:57 -0800
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5AD511.6040902@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net>
	<4D5AD511.6040902@lightlink.com>
Message-ID: <00bc01cbcd4a$c87c89b0$59759d10$@att.net>

>...On Behalf Of Richard Loosemore
Subject: Re: [ExI] Watson on NOVA

spike wrote:
> Richard Loosemore wrote:
> Armchair theorizing only, I am afraid. Guesswork.

Ja! Granted, I don't know how this will work.

...

>In fact, the AGI would be designed to feel empathy *with* the human
species. It would feel itself to be one of us. According to your logic,
then, it would design its children to do the same. That leads to a
revised conclusion (if we do nothing more than stick to the simple logic
here): the AGI and all its descendants will have the same, stable,
empathic motivations. Nowhere along the line will any of them feel
inclined to create something dangerous...Richard Loosemore

I hope you are right.

At the risk of overposting today, do let me get very specific. My parents
split when I was a youth, and remarried. My wife's parents are living, so
between us we have six parents. In all six cases, she and I are the
descendants most capable of giving them assistance in every way,
financial, maintenance of property, judgment in medical decisions, etc.
All six of those parents are now in their 70s and all six have daunting
medical challenges, immediate and scary ones.

I also have a four year old son. In a very real sense, those six parents
compete with him directly for my attention, my financial resources, my
time. No surprise to the parents here: my son wins every round. I always
put his needs before those of my parents. I wish them well and help where
I can, but my son gets my first and best always. I am human.

If we succeed in making an AGI with human emotions and human motives,
then it does as humans do. I can see it being more concerned about its
offspring than its parents. I am that way too. Its offspring may or may
not care about its grandparents as much as its parents did. Our models
are not sufficiently sophisticated to predict that, but Richard, I am
reluctant to bet the future of humankind on it, even if I know that
without it humankind is doomed anyway.

spike

From lubkin at unreasonable.com Tue Feb 15 20:13:18 2011
From: lubkin at unreasonable.com (David Lubkin)
Date: Tue, 15 Feb 2011 15:13:18 -0500
Subject: Re: [ExI] Watson on NOVA
In-Reply-To: <4D5AD511.6040902@lightlink.com>
References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net>
	<4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com>
	<4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net>
	<4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net>
	<4D5AD511.6040902@lightlink.com>
Message-ID: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com>

Richard Loosemore wrote:

>In fact, the AGI would be designed to feel empathy *with* the human
>species. It would feel itself to be one of us.
According to your >logic, then, it would design its children to do the same. That >leads to a revised conclusion (if we do nothing more than stick to >the simple logic here): the AGI and all its descendants will have >the same, stable, empathic motivations. Nowhere along the line will >any of them feel inclined to create something dangerous. You hope. I'm as strong a technophilic extropian as any, but I'm leery of Bet Your Species confidence. Yes, pursue AGI, MNT, SETI, genemod. But take adequate precautions. I'm still pissed at Sagan for his hubris in sending a message to the stars without asking the rest of us first, in blithe certainty that "of course" any recipient would have evolved beyond aggression and xenophobia. -- David. From eugen at leitl.org Tue Feb 15 20:20:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:20:57 +0100 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <824E5278-E313-4C7F-BCE2-F89A64D47D73@bellsouth.net> Message-ID: <20110215202057.GA23560@leitl.org> On Tue, Feb 15, 2011 at 01:46:25PM -0500, John Clark wrote: > On Feb 15, 2011, at 11:45 AM, Richard Loosemore wrote: > > > What it does, essentially, is this: [blah blah] > > Who cares! The point is that if a human behaved as Watson > behaved you'd say he was intelligent, very intelligent indeed. You know, when I ask a 4 year old to find and bring me a salad sieve (because I'm watching a mouse), he just does it. You think Watson is up to the task? > But it was a computer doing the behaving not a person so > intelligence had absolutely positively 100% nothing to do > with it because, after all, if you can explain how it > works then it's not intelligence, or to put it another > way, intelligence is whatever a computer can't yet do. How can you tell we've reached full human equivalence? Why, people are out of jobs. All of them. Q: Prior sentence to "No shit, Sherlock". -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 20:22:35 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:22:35 +0100 Subject: [ExI] Watson on NOVA. In-Reply-To: <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <12AB81A5-981B-4E5C-880B-E9A495C78971@bellsouth.net> Message-ID: <20110215202235.GB23560@leitl.org> On Tue, Feb 15, 2011 at 01:34:42PM -0500, John Clark wrote: > So, a "zero advancement of AI" results in a computer > doing amazing things that nobody has seen before. If > you are correct then an advancement of AI is not needed > to build an AI. I conclude you are not correct. I conclude that you can't tile a capability landscape with isolated peaks.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 20:37:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 21:37:08 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <009f01cbcd39$926b5a10$b7420e30$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <4D5AB9F2.3040802@lightlink.com> <009f01cbcd39$926b5a10$b7420e30$@att.net> Message-ID: <20110215203708.GC23560@leitl.org> On Tue, Feb 15, 2011 at 09:55:45AM -0800, spike wrote: > Richard you are aware that cell phones can now play grandmaster level chess? Spike, are you aware that your last-generation smartphone runs rings around a Pentium 3? Fritz goes back to 1992, and the hardware was a bit pathetic then. Right now the thing in your pocket is more powerful than a desktop PC of the early noughties. > http://en.wikipedia.org/wiki/HIARCS > > --> Hiarcs 13 is the chess engine used in Pocket Fritz 4. Pocket > Fritz 4 won the Copa Mercosur tournament in Buenos Aires, Argentina with > nine wins and one draw on August 4-14, 2009. The 2009 Copa Mercosur > tournament was a category 6 tournament. Pocket Fritz 4 achieved a > performance rating of 2898 while running on the mobile phone HTC Touch HD.[6] > Pocket Fritz 4 searches fewer than 20,000 positions per second.[7] <-- > > The best human players are rated a little over 2800. There have been only > six humans in history who have crossed the 2800 level. The tournament > performance of Pocket Fritz 4 on a cell phone (without calling a friend) was > almost 2900. Some humans have achieved higher results in a particular > tournament than 2900, but this was still extremely impressive. I found it > interesting how little this was noted in the chess world. I am hip to what > goes on in that area, but I didn't hear of this result until over a year > after the fact. So how well does the chess program play Go? Can it learn to play checkers, and then tic tac toe, and then figure out how to unclog a kitchen sink? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Tue Feb 15 21:25:48 2011 From: eugen at leitl.org (Eugen Leitl) Date: Tue, 15 Feb 2011 22:25:48 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> Message-ID: <20110215212548.GE23560@leitl.org> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > I'm still pissed at Sagan for his hubris in sending a message to the > stars without asking the rest of us first, in blithe certainty that "of > course" any recipient would have evolved beyond aggression and > xenophobia. The real reason is that if they were there, you'd be dead, Jim.
In fact, if any alien picks up the transmission (chance: very close to zero) they'd better be farther advanced than us, and on a faster track. For their sake, I hope they are. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From pharos at gmail.com Tue Feb 15 21:41:31 2011 From: pharos at gmail.com (BillK) Date: Tue, 15 Feb 2011 21:41:31 +0000 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151955.p1FJto5v017690@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 7:56 PM, David Lubkin wrote: > You don't seem to have read what I wrote. The only question I raised about > Watson's current capabilities was whether it had a module to analyze its > failures and hone itself. *That* has been possible in software for several > decades. > > (I've worked in pertinent technologies since the late 70's.) > > I think the answer is Yes and No. No, because Watson doesn't have time to do any learning or optimisation while the game is actually in progress. Watson doesn't take any notice of opponents' answers. That's why it gave the same wrong answer as an opponent had already given. Yes, because it does do learning and optimisation. The programmers 'trained' Watson by asking it many Jeopardy questions beforehand. Quote: The team has developed technology based on the latest results of statistical learning theory (e.g. kernel methods) applied to natural language understanding. This has already increased Watson's ability to learn from the questions it is asked (e.g. automatic Jeopardy cue classification). Learning to handle the uncertainty in the selection of the best answer (e.g. ranking the answer list) from those found by Watson's search algorithms has also been one of their main research directions. ------------------------- BillK From sparge at gmail.com Tue Feb 15 21:48:07 2011 From: sparge at gmail.com (Dave Sill) Date: Tue, 15 Feb 2011 16:48:07 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 4:41 PM, BillK wrote: > On Tue, Feb 15, 2011 at 7:56 PM, David Lubkin wrote: > > You don't seem to have read what I wrote. The only question I raised > about > > Watson's current capabilities was whether it had a module to analyze its > > failures and hone itself. *That* has been possible in software for > several > > decades. > > No, because Watson doesn't have time to do any learning or > optimisation while the game is actually in progress. Watson doesn't > take any notice of opponents' answers. That's why it gave the same > wrong answer as an opponent had already given. > According to the NOVA show, Watson does learn from opponents' *correct* answers. They showed an example where the answers were supposed to be month names. Watson guessed wrong on the first question, but after a couple humans answered with month names, it correctly answered one, too. I guess they just don't have time to get feedback to Watson on wrong answers during a single question.
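One way to picture the in-category adaptation Dave describes is a running tally of the answer "type" revealed by each correct response, used to boost candidates of the dominant type on later clues in the same category. A toy sketch in Python follows; the type detector, the boost constant, and the candidate scores are all invented for illustration and are not anything from IBM's DeepQA:

    # Toy sketch: learn the favored answer type within a Jeopardy category
    # from revealed correct answers, then boost candidates of that type.
    from collections import Counter

    MONTHS = {"january", "february", "march", "april", "may", "june", "july",
              "august", "september", "october", "november", "december"}

    def answer_type(answer):
        """Crude stand-in for a real answer-type classifier."""
        return "month" if answer.lower() in MONTHS else "other"

    class CategoryModel:
        def __init__(self, boost=0.2):
            self.type_counts = Counter()
            self.boost = boost

        def observe_correct(self, answer):
            """Called whenever any contestant's correct answer is revealed."""
            self.type_counts[answer_type(answer)] += 1

        def rerank(self, candidates):
            """candidates: list of (answer, base_confidence) pairs."""
            if not self.type_counts:
                return sorted(candidates, key=lambda c: -c[1])
            favored = self.type_counts.most_common(1)[0][0]
            boosted = [(a, s + (self.boost if answer_type(a) == favored else 0.0))
                       for a, s in candidates]
            return sorted(boosted, key=lambda c: -c[1])

    model = CategoryModel()
    model.observe_correct("March")    # the humans' correct answers so far
    model.observe_correct("October")
    print(model.rerank([("1959", 0.55), ("May", 0.50)]))  # "May" now ranks first

Nothing in this needs feedback on wrong answers, which fits the observed behavior: the boost comes only from revealed correct responses.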
-Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Tue Feb 15 22:29:26 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 17:29:26 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5ADAA1.7050309@mac.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <4D5ADAA1.7050309@mac.com> Message-ID: <4D5AFE46.20405@lightlink.com> Samantha Atkins wrote: > On 02/15/2011 09:34 AM, Richard Loosemore wrote: >> spike wrote: >> >> The problem is, Spike, that you (like many other people) speak of >> AI/AGI as if the things that it will want to do (its motivations) will >> only become apparent to us AFTER we build one. >> >> So, you say things like "It will decide it doesn't need us, or just >> sees no reason why we are useful for anything." >> >> This is fundamentally and devastatingly wrong. You are basing your >> entire AGI worldview on a crazy piece of accidental black propaganda >> that came from science fiction. > > If an AGI is an autonomous rational agent then the meaning of whatever > values are installed into it on creation will evolve and clarify over > time, particularly in how they should be applied to actual contexts it > will find itself in. Are you saying that simple proscription of some > actions is sufficient or that any human or group of humans can > sufficiently state the exact value[s] to be attained in a way that will > never ever in any circumstances forever lead to any unintended > consequences (the Genie problem)? As an intelligent being don't you > wish the AGI to reflect deeply on the values it holds and their > relationship to one another? Are you sure that in this reflection it > will never find some of the early programmed-in ones to be of > questionable importance or weight? Are you sure you would want that > powerful a mind to be incapable of such reflection? There are assumptions about the motivation system implicit in your characterization of the situation. I have previously described this set of assumptions as the "goal stack" motivation mechanism. What you are referring to is the inherent instability of that mechanism. All your points are valid, but only for that type of AGI. My discussion, on the other hand, is predicated on a different type of motivation mechanism. As well as being unstable, a goal stack would probably also never actually be an AGI. It would be too stupid to be intelligent. Another side effect of the goal stack. As a result, not to be feared. Richard Loosemore From spike66 at att.net Tue Feb 15 22:59:28 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 14:59:28 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AFE46.20405@lightlink.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <4D5ADAA1.7050309@mac.com> <4D5AFE46.20405@lightlink.com> Message-ID: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> On the topic of Watson, I declare a temporary open season for the number of posts. The second round is tonight and the final Jeopardy round is tomorrow night, so until then, say midnight US west coast time, post away and don't worry about the 5 posts per day voluntary limit.
Or rather, if it is on the timely topic of Watson, that doesn't count against your total. There is a lot of important and relevant stuff to say about Watson. Yak on! On Behalf Of Richard Subject: Re: [ExI] Watson on NOVA Samantha Atkins wrote: > ... >> ... Are you sure you would want that powerful a mind to be incapable of such reflection? >...As well as being unstable, a goal stack would probably also never actually be an AGI. It would be too stupid to be intelligent. Another side effect of the goal stack. As a result, not to be feared... Richard Loosemore Hmmm, that line of reasoning, *too stupid to be intelligent, therefore not to be feared*, is cold comfort. If one believes what one reads in the popular press, the Iranians' efforts to build a nuclear weapon are being countered by a virus with no intelligence, the stuxnet virus. For them it is certainly something to be feared. The Iranians getting nukes is something I damn well fear, along with the Saudis and the Iraqis. So the stuxnet screwing up their efforts is in a way a friendly act on the part of a non-intelligent softivore. But the Iranians would see that as a very unfriendly softivore. spike From rpwl at lightlink.com Wed Feb 16 00:02:34 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:02:34 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <00bc01cbcd4a$c87c89b0$59759d10$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <00bc01cbcd4a$c87c89b0$59759d10$@att.net> Message-ID: <4D5B141A.9030303@lightlink.com> > spike wrote: > I am human. If we succeed in making an AGI with human emotions and human > motives, then it does as humans do. I can see it being more concerned about > its offspring than its parents. I am that way too. Its offspring may or > may not care about its grandparents as much as its parents did. Our > models are not sufficiently sophisticated to predict that, but Richard, I am > reluctant to bet the future of humankind on it, even if I know that without > it humankind is doomed anyway. The *type* of motivation mechanism is what we would copy, not all the *content*. The type is stable. Some of the content leads to empathy. Some leads to other motivations, like aggression. The goal is to choose an array of content that makes it empathic without being irrational about its 'children'. This seems entirely feasible to me. Richard Loosemore From rpwl at lightlink.com Wed Feb 16 00:05:01 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:05:01 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <20110215200859.GZ23560@leitl.org> References: <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <20110215200859.GZ23560@leitl.org> Message-ID: <4D5B14AD.6050801@lightlink.com> Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 02:33:37PM -0500, Richard Loosemore wrote: > >> According to *which* theory of AGI motivation? > > Q: How can you tell an AI kook? > A: By the G. > >> Armchair theorizing only, I am afraid. Guesswork. > > Don't you have work to do, Richard?
Like teaching these researchers how to build an AI, for instance? Why is this comment necessary? I confess I don't understand the need for the personal remarks. Why call someone a "kook"? What is this supposed to signify? Richard Loosemore From rpwl at lightlink.com Wed Feb 16 00:11:24 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 15 Feb 2011 19:11:24 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <201102152012.p1FKCpv5023444@andromeda.ziaspace.com> Message-ID: <4D5B162C.9010102@lightlink.com> David Lubkin wrote: > Richard Loosemore wrote: > >> In fact, the AGI would be designed to feel empathy *with* the human >> species. It would feel itself to be one of us. According to your >> logic, then, it would design its children to do the same. That >> leads to a revised conclusion (if we do nothing more than stick to the >> simple logic here): the AGI and all its descendants will have the >> same, stable, empathic motivations. Nowhere along the line will any >> of them feel inclined to create something dangerous. > > You hope. > > I'm as strong a technophilic extropian as any, but I'm leery of Bet Your > Species confidence. Yes, pursue AGI, MNT, SETI, genemod. But take > adequate precautions. No doubt about it. I am entirely with you. In fact I consider attempts to deploy nanotech at this stage in our development to be dangerous. I hope you are not automatically assuming that I would take no precautions. My mind is fully focussed on that issue. And since I am steeped in the ideas surrounding the techniques that should be used, I already know what kinds of precautions are needed and how far (in general terms) they could be trusted. I think many people who are in the dark about the technical side of such things see only impossibilities. By all means let's get into a discussion about the technical aspects of AGI safety, sometime. Anything would be better than the level of uninformed speculation that is the norm on these lists. Richard Loosemore From x at extropica.org Wed Feb 16 02:16:48 2011 From: x at extropica.org (x at extropica.org) Date: Tue, 15 Feb 2011 18:16:48 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: On Mon, Feb 14, 2011 at 8:09 PM, wrote: > On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: >> On 2/14/2011 9:28 PM, spike wrote: >> >>> I don't have commercial TV, and can't find live streaming. >> >> I don't have TV, period. Anyone have a link? > > and Day 2: From femmechakra at yahoo.ca Wed Feb 16 03:00:21 2011 From: femmechakra at yahoo.ca (Anna Taylor) Date: Tue, 15 Feb 2011 19:00:21 -0800 (PST) Subject: [ExI] Watson on NOVA In-Reply-To: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> Message-ID: <342045.44528.qm@web110410.mail.gq1.yahoo.com> Watson dates back to Deep Blue, the chess advocate. A map is a map. Yes, it's really smart and it computes quicker than most, but does it "realise" what it's thinking (computing)? Ask Watson what Spike did yesterday and Spike will say, "You don't know unless I've told you or you've heard." Not much different from the tech out there right now.
Imho, Anna PS..my odds are on the robot. He has no emotion so he can rationally analyze each question without fault..lol --- On Tue, 2/15/11, spike wrote: > From: spike > Subject: Re: [ExI] Watson on NOVA > To: "'ExI chat list'" > Received: Tuesday, February 15, 2011, 5:59 PM > > On the topic of Watson, I declare a temporary open season > for the number of > posts. The second round is tonight and the final > Jeopardy round is tomorrow > night, so until then, say midnight US west coast time, post > away and don't > worry about the 5 posts per day voluntary limit. Or > rather, if it is on the > timely topic of Watson, that doesn't count against your > total. There is a > lot of important and relevant stuff to say about > Watson. Yak on! > > On Behalf Of Richard > Subject: Re: [ExI] Watson on NOVA > > Samantha Atkins wrote: > > ... > >> ... Are you sure you would want that > powerful a mind to be incapable of > such reflection? > > >...As well as being unstable, a goal stack would > probably also never > actually be an AGI. It would be too stupid to be > intelligent. Another side > effect of the goal stack. As a result, not to be > feared... Richard > Loosemore > > > > > Hmmm, that line of reasoning, *too stupid to be > intelligent, therefore not > to be feared*, is cold comfort. If one > believes what one reads in the > popular press, the Iranians' efforts to build a nuclear > weapon are being > countered by a virus with no intelligence, the stuxnet > virus. For them it > is certainly something to be feared. The Iranians > getting nukes is > something I damn well fear, along with the Saudis and the > Iraqis. So the > stuxnet screwing up their efforts is in a way a friendly > act on the part of > a non-intelligent softivore. But the Iranians would > see that as a very > unfriendly softivore. > > spike From lubkin at unreasonable.com Wed Feb 16 04:36:42 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 15 Feb 2011 23:36:42 -0500 Subject: [ExI] Watson on NOVA In-Reply-To: <342045.44528.qm@web110410.mail.gq1.yahoo.com> References: <00cb01cbcd63$ffb99580$ff2cc080$@att.net> <342045.44528.qm@web110410.mail.gq1.yahoo.com> Message-ID: <201102160435.p1G4Ztr4027564@andromeda.ziaspace.com> Anna Taylor wrote: >Ask Watson what Spike did yesterday Now *that* could be very interesting, as Watson conflates our Spike with all the other Spikes, not realizing he's the one who's an immortal crime-fighting stegosaurus parish priest. -- David. From spike66 at att.net Wed Feb 16 04:34:31 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 20:34:31 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: <000001cbcd92$ceac0390$6c040ab0$@att.net> I hear Watson spanked both carbon units' butts. Woooohoooo! {8^D Life is gooood. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 07:14:11 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 23:14:11 -0800 Subject: [ExI] ibm takes on the commies Message-ID: <000001cbcda9$1c8386e0$558a94a0$@att.net> Computer hipsters explain this to me.
When they are claiming 10 petaflops, they mean using a few tens of thousands of parallel processors, ja? We couldn't check one Mersenne prime per second with it or anything, ja? It would be the equivalent of 10 petaflops assuming we have a process that is compatible with massive parallelism? The article doesn't say how many parallel processors are involved: http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-supercomputer/?test=latestnews -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 07:19:40 2011 From: spike66 at att.net (spike) Date: Tue, 15 Feb 2011 23:19:40 -0800 Subject: [ExI] ibm takes on the commies In-Reply-To: <000001cbcda9$1c8386e0$558a94a0$@att.net> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> Message-ID: <000b01cbcda9$e08c4f90$a1a4eeb0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Sent: Tuesday, February 15, 2011 11:14 PM To: 'ExI chat list' Subject: [ExI] ibm takes on the commies Computer hipsters explain this to me. When they are claiming 10 petaflops, they mean using a few tens of thousands of parallel processors, ja? We couldn't check one Mersenne prime per second with it or anything, ja? It would be the equivalent of 10 petaflops assuming we have a process that is compatible with massive parallelism? The article doesn't say how many parallel processors are involved: http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-supercomputer/?test=latestnews OK found a site that says this thing has 750,000 cores. Kewallllll. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Wed Feb 16 07:38:51 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 08:38:51 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5B14AD.6050801@lightlink.com> References: <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AB926.6040606@lightlink.com> <00af01cbcd43$c05267c0$40f73740$@att.net> <4D5AD511.6040902@lightlink.com> <20110215200859.GZ23560@leitl.org> <4D5B14AD.6050801@lightlink.com> Message-ID: <20110216073850.GI23560@leitl.org> On Tue, Feb 15, 2011 at 07:05:01PM -0500, Richard Loosemore wrote: > Why is this comment necessary? I confess I don't understand the need > for the personal remarks. I guess I should just switch to threaded view and ignore the complete thread, annoying as it is. http://imgs.xkcd.com/comics/duty_calls.png > Why call someone a "kook"? What is this supposed to signify? The problem is that transhumanists have a disproportionate share of AI kooks. Tolerating bad ideas drowns out good ideas. What we need is an AI that works, not long sterile threads about AIs that could, possibly, maybe, eventually work. We have more empirical data than ever, and reasonably powerful hardware that is affordable to individuals, so effectively anyone on this list could be a practical contributor. Don't talk about it, do it, publish it, and tell us so we can break out the champagne. Time's running out.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 16 07:52:55 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 08:52:55 +0100 Subject: [ExI] ibm takes on the commies In-Reply-To: <000001cbcda9$1c8386e0$558a94a0$@att.net> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> Message-ID: <20110216075255.GM23560@leitl.org> On Tue, Feb 15, 2011 at 11:14:11PM -0800, spike wrote: > > > Computer hipsters explain this to me. When they are claiming 10 petaflops, > they mean using a few tens of thousands of parallel processors, ja? We A common gamer's graphics card can easily have a thousand or a couple thousand cores (mostly VLIW) and memory bandwidth from hell. Total node count could run into tens to hundreds of thousands, so we're talking multiple megacores. > couldn't check one Mersenne prime per second with it or anything, ja? It > would be the equivalent of 10 petaflops assuming we have a process that is > compatible with massive parallelism? The article doesn't say how many Fortunately, every physical process (including cognition) is compatible with massive parallelism. Just parcel the problem over a 3d lattice/torus, exchange information where adjacent volumes interface through the high-speed interconnect. Anyone who has written numerics for MPI recognizes the basic design pattern (a minimal sketch follows at the end of this message). > parallel processors are involved: The yardstick typically used is LINPACK http://www.top500.org/project/linpack Not terribly meaningful, but it matches the way people tend to solve problems, so it's not completely useless. Obviously, the only way to measure the performance is to run your own problem. > > > http://www.foxnews.com/scitech/2011/02/15/ibm-battles-china-worlds-fastest-supercomputer/?test=latestnews -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Wed Feb 16 11:38:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 12:38:08 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110215203708.GC23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <009701cbcd35$a38da180$eaa8e480$@att.net> <4D5AB9F2.3040802@lightlink.com> <009f01cbcd39$926b5a10$b7420e30$@att.net> <20110215203708.GC23560@leitl.org> Message-ID: <20110216113808.GQ23560@leitl.org> On Tue, Feb 15, 2011 at 09:37:08PM +0100, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 09:55:45AM -0800, spike wrote: > > > Richard you are aware that cell phones can now play grandmaster level chess? > > Spike, are you aware that your last-generation smartphone > runs rings around a Pentium 3? Fritz goes back to 1992, and the hardware > was a bit pathetic then. Right now the thing in your pocket > is more powerful than a desktop PC of the early noughties. As a data point, Tegra 3 (to be released this year) is a quad core (with 12 GPU cores) that will beat a first-gen 2 GHz Core 2 Duo (Core 2 Duo T 7200, Merom core, released 4.5 years ago). The interesting part is that this is a mobile device, hence easily passively cooled, and hence could scale to air-cooled WSI clusters.
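The lattice pattern from the previous message, sketched in Python. This is a hypothetical illustration (it assumes mpi4py and numpy, and uses the slow pickle-based sendrecv for brevity where a real code would use the buffer API): each rank owns one subvolume of a periodic 3D grid and swaps one-cell-thick boundary faces with its six neighbours before every local update. Note this also answers spike's Mersenne question in the negative: a Lucas-Lehmer test is p-2 strictly sequential squarings mod 2^p - 1, so each giant squaring can be spread over many cores, but the iterations themselves cannot.

    # Halo exchange over a 3D torus of MPI ranks -- illustrative sketch only.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), 3)        # factor ranks into a 3D grid
    cart = comm.Create_cart(dims, periods=[True] * 3)  # periodic: a 3-torus

    N = 16                                       # local subvolume edge length
    field = np.random.rand(N + 2, N + 2, N + 2)  # +2 for a one-cell halo per face

    def exchange_halos():
        """Swap boundary faces with the six neighbouring ranks."""
        for axis in range(3):
            for disp in (-1, +1):
                src, dst = cart.Shift(axis, disp)
                send = [slice(1, N + 1)] * 3
                recv = [slice(1, N + 1)] * 3
                send[axis] = slice(N, N + 1) if disp == +1 else slice(1, 2)
                recv[axis] = slice(0, 1) if disp == +1 else slice(N + 1, N + 2)
                face = cart.sendrecv(field[tuple(send)].copy(), dest=dst, source=src)
                field[tuple(recv)] = face

    for step in range(100):
        exchange_halos()
        # Local update touches only the own subvolume plus fresh halos
        # (toy diffusion stencil):
        field[1:-1, 1:-1, 1:-1] += 0.1 * (
            field[:-2, 1:-1, 1:-1] + field[2:, 1:-1, 1:-1] +
            field[1:-1, :-2, 1:-1] + field[1:-1, 2:, 1:-1] +
            field[1:-1, 1:-1, :-2] + field[1:-1, 1:-1, 2:] -
            6 * field[1:-1, 1:-1, 1:-1])

The communication volume scales with the surface of each subvolume while the work scales with its volume, which is why this pattern keeps scaling as long as the interconnect can feed the faces.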
In similar vein (and as antipode to AMD's Fusion): http://www.xbitlabs.com/news/cpu/display/20110119204601_Nvidia_Maxwell_Graphics_Processors_to_Have_Integrated_ARM_General_Purpose_Cores.html less relevant, but still interesting http://www.eetimes.com/electronics-news/4210937/Intel-rolls-six-merged-Atom-FPGA-chips As regards a short range (<100 m, optics) signalling mesh, there's the forthcoming http://en.wikipedia.org/wiki/Light_Peak -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Wed Feb 16 11:55:56 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 07:55:56 -0400 Subject: [ExI] Watson On Jeopardy In-Reply-To: <000001cbcd92$ceac0390$6c040ab0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: Spike wrote: >I hear Watson spanked both carbon units' butts. Woooohoooo! {8^D Life is gooood.< Annihilated them Spike. Approx. 35,000 for Watson, 4,000 for Jennings and 10,000 for Rutter. He got final Jeopardy wrong but was parsimonious with his wager -- just 900-odd dollars. Alex Trebek laughed and called him a 'sneak' because of the clever wager. The category was which U.S. city has an airport named after a war hero and a WWII battle. Watson said Toronto. I got a good laugh. I didn't know we'd been annexed. Another interesting detail. Ratings for Jeopardy have soared into the stratosphere because of Watson. It moved into the number two spot in TV land behind a Charlie Sheen sitcom last night. d. 2011/2/16 spike > > > I hear Watson spanked both carbon units' butts. > > > > Woooohoooo! > > > > {8^D > > > > Life is gooood. > > > > spike > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Wed Feb 16 12:15:30 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 16 Feb 2011 05:15:30 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: Darren Greer wrote: Another interesting detail. Ratings for Jeopardy have soared into the stratosphere because of Watson. It moved into the number two spot in TV land behind a Charlie Sheen sitcom last night. >>> I can sense a trend where A.I. will be on more and more game shows, and even "reality television" programs. It would be a fascinating trend to have humanity acclimated to A.I. by seeing the machines "grow up" on their monitors and TV screens! And so in time we will have our Simones and Calculons... http://en.wikipedia.org/wiki/S1m0ne http://futurama.wikia.com/wiki/Calculon John : ) On 2/16/11, Darren Greer wrote: > Spike wrote: > >>I hear Watson spanked both carbon units' butts. > > Woooohoooo!
> > > > {8^D > > > > Life is gooood.< > > > Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 > for Rutter. He got final Jeopardy wrong but was parsimonious with his wager > -- just 900 odd dollars. Alex Trebek laughed and called him a 'sneak' > because of the clever wager. The category was which U.S. city has an > airport named after a war hero and a WWII battle. Watson said Toronto. I got > a good laugh. I didn't know we'd been annexed. > > > Another interesting detail. Ratings for Jeopardy have soared into the > stratosphere because of Watson. It moved into the number two spot in TV land > behind a Charlie Sheen sitcom last night. > > > d. > > > > 2011/2/16 spike > >> >> >> I hear Watson spanked both carbon units? butts. >> >> >> >> Woooohoooo! >> >> >> >> {8^D >> >> >> >> Life is gooood. >> >> >> >> spike >> >> >> >> >> >> >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> > > > -- > *There is no history, only biography.* > * > * > *-Ralph Waldo Emerson > * > From eugen at leitl.org Wed Feb 16 13:10:30 2011 From: eugen at leitl.org (Eugen Leitl) Date: Wed, 16 Feb 2011 14:10:30 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> Message-ID: <20110216131030.GR23560@leitl.org> On Wed, Feb 16, 2011 at 07:55:56AM -0400, Darren Greer wrote: > Annihilated them Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 > for Rutter. He got final Jeopardy wrong but was parsimonious with his > wager -- just 900 odd dollars. Alex Trebek laughed and called him a > 'sneak' because of the clever wager. The category was which U.S. city has > an airport named after a war hero and a WWII battle. Watson said Toronto. I > got a good laugh. I didn't know we'd been annexed. Another > interesting detail. Ratings for Jeopardy have soared into the Not to mention bring Big Blue back into the limelight. > stratosphere because of Watson. It moved into the number two spot in TV > land behind a Charlie Sheen sitcom last night. By the way, Watson is not nearly as dumb (and far more usable) than I thought. According to http://www.hpcwire.com/features/Must-See-TV-IBM-Watson-Heads-for-Jeopardy-Showdown-115684499.html?viewAll=y February 09, 2011 Must See TV: IBM Watson Heads for Jeopardy Showdown Michael Feldman, HPCwire Editor Next week the IBM supercomputer known as "Watson" will take on two of the most accomplished Jeopardy players of all time, Ken Jennings and Brad Rutter, in a three-game match starting on February 14. If Watson manages to best the humans, it will represent the most important advance in machine intelligence since IBM's "Deep Blue" beat chess grandmaster Garry Kasparov in 1997. But this time around, the company also plans to make a business case for the technology. Trivial pursuit this is not. And impressive technology it is. On the hardware side, Watson is comprised of 90 Power 750 servers, 16 TB of memory and 4 TB of disk storage, all housed in a relatively compact ten racks. The 750 is IBM's elite Power7-based server targeted for high-end enterprise analytics. (The Power 755 is geared toward high performance technical computing and differs only marginally in CPU speed, memory capacity, and storage options.) 
Although the enterprise version can be ordered with 1 to 4 sockets of 6-core or 8-core Power7 chips, Watson is maxed out with the 4-socket, 8-core configuration using the top bin 3.55 GHz processors. The 360 Power7 chips that make up Watson's brain represent IBM's best and brightest processor technology. Each Power7 is capable of over 500 GB/second of aggregate bandwidth, making it particularly adept at manipulating data at high speeds. FLOPS-wise, a 3.55 GHz Power7 delivers 218 Linpack gigaflops. For comparison, the POWER2 SC processor, which was the chip that powered cyber-chessmaster Deep Blue, managed a paltry 0.48 gigaflops, with the whole machine delivering a mere 11.4 Linpack gigaflops. But FLOPS are not the real story here. Watson's question-answering software presumably makes little use of floating-point number crunching. To deal with the game scenario, the system had to be endowed with a rather advanced version of natural language processing. But according to David Ferrucci, principal investigator for the project, it goes far beyond language smarts. The software system, called DeepQA, also incorporates machine learning, knowledge representation, and deep analytics. Even so, the whole application rests on first understanding the Jeopardy clues, which, because they employ colloquialisms and often obscure references, can be challenging even for humans. That's why this is such a good test case for natural language processing. Ferrucci says the ability to understand language is destined to become a very important aspect of computers. "It has to be that way," he says. "We just can't imagine a future without it." But it's the analysis component that we associate with real "intelligence." The approach here reflects the open domain nature of the problem. According to Ferrucci, it wouldn't have made sense to simply construct a database corresponding to possible Jeopardy clues. Such a model would have supported only a small fraction of the possible topics available to Jeopardy. Rather their approach was to use "as is" information sources -- encyclopedias, dictionaries, thesauri, plays, books, etc. -- and make the correlations dynamically. The trick of course is to do all the processing in real-time. Contestants, at least the successful ones, need to provide an answer in just a few seconds. When the software was run on a lone 2.6 GHz CPU, it took around 2 hours to process a typical Jeopardy clue -- not a very practical implementation. But when they parallelized the algorithms across the 2,880-core Watson, they were able to cut the processing time from a couple of hours to between 2 and 6 seconds. Even at that, Watson doesn't just spit out the answers. It forms hypotheses based on the evidence it finds and scores them at various confidence levels. Watson is programmed not to buzz in until it reaches a confidence of at least 50 percent, although this parameter can be self-adjusted depending on the game situation. To accomplish all this, DeepQA employs an ensemble of algorithms -- about a million lines of code -- to gather and score the evidence. These include temporal reasoning algorithms to correlate times with events, statistical paraphrasing algorithms to evaluate semantic context, and geospatial reasoning to correlate locations. It can also dynamically form associations, both in training and at game time, to connect disparate ideas. For example it can learn that inventors can patent information or that officials can submit resignations.
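The figures above cross-check neatly: 90 servers x 4 sockets x 8 cores = 2,880 cores, i.e. 360 chips at 90 x 4; and 2 hours / 2,880 cores is about 2.5 seconds, right at the low end of the quoted 2-6 second range, so the reported speedup is close to linear. The combine-and-threshold skeleton the article describes -- many evidence scorers, weights shifted toward the ones with the better track record (as the next paragraph explains), and a buzz only above a confidence floor -- can be caricatured in a few lines of Python. Everything below is invented for illustration; the scorers and weights are stand-ins, not IBM's DeepQA:

    # Toy caricature of confidence-weighted evidence combination with a buzz
    # threshold. The two "scorers" are deliberately trivial stand-ins.
    def keyword_overlap(clue, candidate):
        c = set(clue.lower().split())
        return len(c & set(candidate.lower().split())) / max(len(c), 1)

    def brevity_prior(clue, candidate):
        return 1.0 if len(candidate.split()) <= 3 else 0.5

    SCORERS = [keyword_overlap, brevity_prior]
    weights = [0.5, 0.5]      # shifted over time toward the accurate scorers
    BUZZ_THRESHOLD = 0.5      # don't buzz below 50 percent confidence

    def best_answer(clue, candidates):
        scored = [(sum(w * s(clue, c) for w, s in zip(weights, SCORERS)), c)
                  for c in candidates]
        confidence, answer = max(scored)
        return (answer, confidence) if confidence >= BUZZ_THRESHOLD else (None, confidence)

    def reward(scorer_index, amount=0.05):
        """Crude stand-in for adaptive reweighting: nudge weight toward a
        scorer that contributed to a correct answer, then renormalize."""
        weights[scorer_index] += amount
        total = sum(weights)
        weights[:] = [w / total for w in weights]

All the hard work in DeepQA lives inside what these stand-in scorers wave away; the sketch only shows the combination, thresholding, and reweighting skeleton.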
Watson also shifts the weight it assigns to different algorithms based on which ones are delivering the more accurate correlations. This aspect of machine learning allows Watson to get "smarter" the more it plays the game. The DeepQA programmers have also been refining the algorithms themselves over the past several years. In 2007, Watson could only answer a small fraction of Jeopardy clues with reasonable confidence and even at that, was only correct 47 percent of the time. When forced to answer the majority of the clues, like a grand champion would, it could only answer 15 percent correctly. By IBM's own admission, Watson was playing "terrible." The highest performing Jeopardy grand champions, like Jennings and Rutter, typically buzz in on 70 to 80 percent of the entries and give the correct answer 85 to 95 percent of time. By 2010 Watson started playing at that level. Ferrucci says that while the system can't buzz in on every question, it can now answer the vast majority of them in competitive time. "We can compete with grand champions in terms of precision, in terms of confidence, and in terms of speed," he says. In dozens of practice rounds against former Jeopardy champs, the computer was beating the humans with a 65 percent win rate. Watson also prevailed in a 15-question round against Jennings and Rutter in early January of this year. See the performance below. None of this is a guarantee that Watson will prevail next week. But even if the machine just makes a decent showing, IBM will have pulled off quite possibly the best product placement in television history. Open domain question answering is not only one of the Holy Grails of artificial intelligence but has enormous potential for commercial applications. In areas as disparate as healthcare, tech support, business intelligence, security and finance, this type of platform could change those businesses irrevocably. John Kelly, senior vice president and director of IBM Research, boasts, "We're going to revolutionize industries at a level that has never been done before." In the case of healthcare, it's not a huge leap to imagine "expert" question answering systems helping doctors with medical diagnosis. A differential diagnosis is not much different from what Watson does when it analyzes a Jeopardy clue. Before it replaces Dr. House, though, the machine will have to prove itself in the game show arena. If Jennings and Rutter defeat the supercomputer this time around, IBM will almost certainly ask for a rematch, as it did when Deep Blue initially lost its first chess match with Kasparov in 1996. The engineers will keep stroking the code and retraining the computer until Watson is truly unbeatable. Eventually the machine will prevail. From lubkin at unreasonable.com Wed Feb 16 14:46:02 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 09:46:02 -0500 Subject: [ExI] ibm takes on the commies In-Reply-To: <20110216075255.GM23560@leitl.org> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> Message-ID: <201102161445.p1GEjHpI021917@andromeda.ziaspace.com> Eugen wrote: >The yardstick typically used is LINPACK http://www.top500.org/project/linpack >Not terribly meaningful, but it meets the way people tend to solve problems, >so it's not completely useless. Obviously, the >only way to measure the performance >is to run your own problem. Back in my [ LLNL, Apollo, HP ] days, it was common for hardware manufacturers to "study for the test." 
Since customers and press paid attention to benchmarks like LINPACK or the Livermore Loops, engineering resources were focused on doing well at what the tests measured, at the expense of other facets. For instance, superb vector operation (e.g., the Cray's ability to perform the same operation on 64 sets of floating-point operands in one instruction) was often coupled with mediocre performance for integer scalars. This wasn't just at the hardware level. My boss for a time at Livermore was one of the top compiler guys anywhere, and people used his LRLTRAN over the Fortran that came from Cray (CFT) because he generated higher-performance machine language than Cray knew how to. Similarly, the compiler groups at computer vendors focused on making the benchmarks run fast. Nothing inherently wrong with that except that, as Eugen noted, you need to see if the (computer+compiler) runs fast on what *you* would use it for. *However*, there were vendors caught cheating. They wrote compilers that detected when a standard benchmark was being compiled and generated better code than they ordinarily could. -- David. Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut From mbb386 at main.nc.us Wed Feb 16 15:13:13 2011 From: mbb386 at main.nc.us (MB) Date: Wed, 16 Feb 2011 10:13:13 -0500 Subject: [ExI] Treating Western diseases In-Reply-To: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> References: <201102141833.p1EIXpej014645@andromeda.ziaspace.com> Message-ID: <4f3d94724aaf95d52871d09bed7abc62.squirrel@www.main.nc.us> I found the article quite interesting, and made copies for my cousin (Crohn's) and my co-worker (autistic brother). Since so many dread, intractable diseases now are classed under "auto-immune", it's important that we study these unexpected results. If we never look, how will we ever find? Regards, MB > Treating autism, Crohn's disease, multiple sclerosis, etc. with > intentionally ingesting parasites. The squeamish of you (if any) > should get past any "ew, gross!" reaction and read this. It may be > very important for someone you love and have implications for life > extension. I heard about it from Patri. > > http://www.the-scientist.com/2011/2/1/42/1/ > From rpwl at lightlink.com Wed Feb 16 15:20:07 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 10:20:07 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102151955.p1FJto5v017690@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: <4D5BEB27.7020204@lightlink.com> David Lubkin wrote: > Richard Loosemore wrote: > >> This is *way* beyond anything that Watson is doing. >> >> What it does, essentially, is this: > : >> It is a brick-stupid cluster analysis program. >> >> So, does Watson think about what the other contestants might be doing? >> Err, that would be "What is 'you have got to be joking'?" > > You don't seem to have read what I wrote. The only question I raised > about Watson's current capabilities was whether it had a module to > analyze its failures and hone itself. *That* has been possible in > software for several decades. > > (I've worked in pertinent technologies since the late 70's.) Misunderstanding: I was addressing the general appropriateness of the question (my intention was certainly not to challenge your level of understanding).
I was trying to point out that Watson is so close to being a statistical analysis of text corpora, that it hardly makes sense to ask about all those "comprehension" issues that you talked about. Not in the same breath. For example, you brought up the question of self-awareness of your own code-writing strategies, and conscious adjustments that you made to correct for them (... you noticed your own habit of making large numbers of off-by-one errors). That kind of self-awareness is extremely interesting and is being addressed quite deliberately by some AGI researchers (e.g. myself). But to even talk about such stuff in the context of Watson is a bit like asking whether next year's software update to (e.g.) Mathematica might be able to go to math lectures, listen to the lecturer, ask questions in class, send humorous tweets to classmates about what the lecturer is wearing, and get a good mark on the exam at the end of the course. Yes, Watson can hone itself, of course! As you point out, that kind of thing has been done for decades. No question. But statistical adaptation is far removed from awareness of one's own problem solving strategies. Kernel methods do not buy you models of cognition! What is going on here -- what I am trying to point out -- is a fantastic degree of confusion. In one moment there is an admission that Watson is mostly doing a form of statistical analysis (plus tweaks). Then, the next moment people are making statements that jump from ground level up to the stratosphere, suggesting that this is the beginning of the arrival of something like real AGI (the comments of the Watson team certainly imply that this is a major milestone in AI, and the press are practically announcing this as the second coming). I am just trying to inject a dose of sanity. And failing. Richard Loosemore From lubkin at unreasonable.com Wed Feb 16 16:10:04 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 11:10:04 -0500 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <201102161609.p1GG960F029008@andromeda.ziaspace.com> Richard Loosemore wrote: >I was trying to point out that Watson is so >close to being a statistical analysis of text >corpora, that it hardly makes sense to ask about >all those "comprehension" issues that you talked >about. Not in the same breath. : >But to even talk about such stuff in the context of Watson : >What is going on here -- what I am trying to >point out -- is a fantastic degree of >confusion. In one moment there is an admission >that Watson is mostly doing a form of >statistical analysis (plus tweaks). Then, the >next moment people are making statements that >jump from ground level up to the stratosphere : >I am just trying to inject a dose of sanity. > >And failing. As far as I can see, the only confusion is on your part, from assuming that posters will stick to the topic. What's happening is that the topic at hand (Watson, in this case) triggers ideas. Thoughts about AGI. Thoughts about how Watson could be made a more sophisticated player. Thoughts about game theory aspects of Jeopardy play. Etc. I don't think I've ever had or heard a conversation among extropians that didn't leap off onto interesting tangents. Often mid-sentence. I think of this as a feature, not a bug. 
-- David. Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut From hkeithhenson at gmail.com Wed Feb 16 17:08:45 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 16 Feb 2011 10:08:45 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > >> I'm still pissed at Sagan for his hubris in sending a message to the >> stars without asking the rest of us first, in blithe certainty that "of >> course" any recipient would have evolved beyond aggression and >> xenophobia. > > The real reason is that if they were there, you'd be dead, Jim. > In fact, if any alien picks up the transmission (chance: very close > to zero) they'd better be farther advanced than us, and on a > faster track. For their sake, I hope they are. I have been mulling this over for decades. We look out into the Universe and don't (so far) see or hear any evidence of technophilic civilization. I see only two possibilities: 1) Technophilics are so rare that there are no others in our light cone. 2) Or, if they are relatively common, something wipes them *all* out, or, if not wiped out, they don't do anything which indicates their presence. If 1, then the future is unknown. If 2, it's probably related to local singularities. If that's the case, most of the people reading this list will live to see it. Keith PS. If anyone can suggest something that is not essentially the same two situations, please speak up. From rpwl at lightlink.com Wed Feb 16 17:41:00 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 12:41:00 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: <4D5C0C2C.9030306@lightlink.com> Keith Henson wrote: > On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > >> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >> >>> I'm still pissed at Sagan for his hubris in sending a message to the >>> stars without asking the rest of us first, in blithe certainty that "of >>> course" any recipient would have evolved beyond aggression and >>> xenophobia. >> The real reason is that if they were there, you'd be dead, Jim. >> In fact, if any alien picks up the transmission (chance: very close >> to zero) they'd better be farther advanced than us, and on a >> faster track. For their sake, I hope they are. > > I have been mulling this over for decades. > > We look out into the Universe and don't (so far) see or hear any > evidence of technophilic civilization. > > I see only two possibilities: > > 1) Technophilics are so rare that there are no others in our light cone. > > 2) Or, if they are relatively common, something wipes them *all* out, > or, if not wiped out, they don't do anything which indicates their > presence. > > If 1, then the future is unknown. If 2, it's probably related to > local singularities. If that's the case, most of the people reading > this list will live to see it. Well, not really an extra one, but I count four items in your 2-item list: 1) Technophilics are so rare that there are no others in our light cone.
2) If they are relatively common, there is something that wipes them *all* out (by the time they reach this stage they foul their own nest and die), or 3) They are relatively common and they don't do anything which indicates their presence, because they are too scared that someone else will zap them, or 4) They are relatively common and they don't do anything which indicates their presence, because they use communications technology that does not leak the way ours does. Richard Loosemore From jonkc at bellsouth.net Wed Feb 16 18:07:49 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Feb 2011 13:07:49 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: > > That kind of self-awareness is extremely interesting and is being addressed quite deliberately by some AGI researchers (e.g. myself). So I repeat my previous request: please tell us all about the wonderful AI program that you have written that does things even more intelligently than Watson. > > But statistical adaptation is far removed from awareness of one's own problem solving strategies. Kernel methods do not buy you models of cognition! To hell with awareness! Consciousness theories are the last refuge of the scoundrel. As there is no data they need to explain, consciousness theories are incredibly easy to come up with; any theory will do and one is as good as another. If you really want to establish your gravitas as an AI researcher then come up with an advancement in machine INTELLIGENCE one tenth of one percent as great as Watson. > I confess I don't understand the need for the personal remarks. That irritation probably comes from the demeaning remarks you have made about people in the AI community who are supposed to be your colleagues, scientists who have done more than philosophize but have actually written a program and accomplished something pretty damn remarkable. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Feb 16 18:15:33 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 10:15:33 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5BEB27.7020204@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> Message-ID: <005b01cbce05$8101d390$83057ab0$@att.net> >... On Behalf Of Richard Loosemore >...(the comments of the Watson team certainly imply that this is a major milestone in AI... Well sure, they have been working on this for years, and now it comes out taking on humans and whooping ass. I can scarcely fault them for a bit of immodesty. >... and the press are practically announcing this as the second coming). Ja, and of course the press needs to whip up excitement, otherwise their products don't sell and they no longer have a job. Now, who would hire a former journalist? We couldn't trust them at the local elementary school, and McDonald's won't hire them; they don't speak the language.
But there is something else that really makes this exciting, for we recognize that if a computer can play Jeopardy, it can be modified into being a general conversationalist. Many of us have or had an Alzheimer's family member. From firsthand experience, we know how frustrating that can be. The patient repeats herself over and over; pretty soon no one wants to talk to the patient. The patient feels everyone is angry with her, and often reacts with anger. Most of the time the patient is just bored and lonely, even in a crowded house. She perhaps can no longer read, cannot go out on walks alone, family members don't sit and visit. I think we will be able to modify something like a very limited version of Watson, get him running on a PC, rig up some kind of Bluetooth speech recognition system and we have something that a whole lot of people would pay five digits to have. No sexbots, no tricky mechanical devices, just a good competent yakbot, to keep our aging parent company. > I am just trying to inject a dose of sanity. And failing...Richard Loosemore Richard you are not failing, we hear ya loud and clear. But you are searching for general AI, whereas I and perhaps others here have a far more modest and immediate need, which we recognize has nothing to do with AI. Yes we may lure away a few of your brightest students, but keep in mind they have to pay the rent too. Furthermore, plenty of those students have grandparents whose minds are wasting away, lonely and bored, grandparents who need our help NOW and who richly deserve it. In the long run, this will bring excitement and money into the field, attracting more able minds to AI research than are drawn away into Watson-ish exercises. Everyone wins. Work with us. You are among friends here. spike From spike66 at att.net Wed Feb 16 18:28:30 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 10:28:30 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <201102161609.p1GG960F029008@andromeda.ziaspace.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <201102161609.p1GG960F029008@andromeda.ziaspace.com> Message-ID: <005c01cbce07$4ffa5590$efef00b0$@att.net> >... On Behalf Of David Lubkin ... >..I don't think I've ever had or heard a conversation among extropians that didn't leap off onto interesting tangents. Often mid-sentence. -- David. Well said indeed, me lad. From your comments and Richard's, which mention the 1970s, perhaps you gentlemen are not much younger than I am. If so, it might not be so much our parents and grandparents using yakbots, it might be you and I using these products in another two or three decades. Wait, what were we talking about? spike From jonkc at bellsouth.net Wed Feb 16 18:40:24 2011 From: jonkc at bellsouth.net (John Clark) Date: Wed, 16 Feb 2011 13:40:24 -0500 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: <4D5C0C2C.9030306@lightlink.com> References: <4D5C0C2C.9030306@lightlink.com> Message-ID: <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> I would humbly like to suggest that June 23 (Alan Turing's birthday by the way) be turned into an international holiday called "Image Recognition Appreciation Day". On this day we would all reflect on the intelligence required to recognize images.
It is important that this be done soon because although computers are not very good at this task right now that will certainly change in the next few years. On the day computers become good at it the laws of physics in the universe will change and intelligence will no longer be required for image recognition. So if we ever intend to salute the brainpower required for this skill it is imperative we do it now while we can. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Wed Feb 16 19:56:00 2011 From: sparge at gmail.com (Dave Sill) Date: Wed, 16 Feb 2011 14:56:00 -0500 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> References: <4D5C0C2C.9030306@lightlink.com> <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> Message-ID: 2011/2/16 John Clark > I would humbly like to suggest that June 23 (Alan Turing's birthday by the > way) be turned into an international holiday called "Image Recognition > Appreciation Day". On this day we would all reflect on the intelligence > required to recognize images. It is important that this be done soon because > although computers are not very good at this task right now that will > certainly change in the next few years. On the day computers become good at > it the laws of physics in the universe will change and intelligence will no > longer be required for image recognition. > > So if we ever intend to salute the brainpower required for this skill it is > imperative we do it now while we can. > John, do you really have trouble seeing the distinction between specialized intelligence and general intelligence? Do you think Deep Blue or Watson could pass the Turing Test? -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Wed Feb 16 20:03:53 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 15:03:53 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> Message-ID: <4D5C2DA9.9050804@lightlink.com> John Clark wrote: > On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >> >> That kind of self-awareness is extremely interesting and is being >> addressed quite deliberately by some AGI researchers (e.g. myself). > > So I repeat my previous request, please tell us all about the wonderful > AI program that you have written that does things even more > intelligently than Watson. Done: read my papers. Questions? Just ask! >> But statistical adaptation is far removed from awareness of one's own >> problem solving strategies. Kernel methods do not buy you models of >> cognition! > > To hell with awareness! Consciousness theories are the last refuge of > the scoundrel. As there is no data they need to explain, consciousness > theories are incredibly easy to come up with; any theory will do and one > is as good as another. If you really want to establish your gravitas as > an AI researcher then come up with an advancement in machine > INTELLIGENCE one tenth of one percent as great as Watson. Nice rant -- thank you John -- but I was talking about awareness, not consciousness.
"Awareness" just means modeling of internal cognitive processes. Very different. >> I confess I don't understand the need for the personal remarks. > > That irritation probably comes from the demeaning remarks you have made > about people in the AI community that are supposed to be your > colleagues, scientists who have done more than philosophize but have > actually written a program and accomplished something pretty damn > remarkable. "... To be generous, guiltless, and of free disposition, is to take those things for Bird-Bolts that you deem Cannon Bullets ..." Richard Loosemore From sjatkins at mac.com Wed Feb 16 23:36:58 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:36:58 -0800 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C0C2C.9030306@lightlink.com> References: <4D5C0C2C.9030306@lightlink.com> Message-ID: <4D5C5F9A.2020204@mac.com> On 02/16/2011 09:41 AM, Richard Loosemore wrote: > Keith Henson wrote: >> On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: >> >>> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >>> >>>> I'm still pissed at Sagan for his hubris in sending a message to the >>>> stars without asking the rest of us first, in blithe certainty that >>>> "of >>>> course" any recipient would have evolved beyond aggression and >>>> xenophobia. >>> The real reasons if that they would be there you'd be dead, Jim. >>> In fact, if any alien picks up the transmission (chance: very close >>> to zero) they'd better be farther advanced than us, and on a >>> faster track. I hope it for them. >> >> I have been mulling this over for decades. >> >> We look out into the Universe and don't (so far) see or hear any >> evidence of technophilic civilization. >> >> I see only two possibilities: >> >> 1) Technophilics are so rare that there are no others in our light >> cone. >> >> 2) Or if they are relatively common something wipes them *all* out, >> or, if not wiped out, they don't do anything which indicates their >> presence. >> >> If 1, then the future is unknown. If 2, it's probably related to >> local singularities. If that's the case, most of the people reading >> this list will live to see it. > Well, the message sent by Sagan was a single transmission aimed at a globular cluster 25,000 light years away. Traveling at near light speed to send a ship back is very expensive and would not happen for a long time. And for what? A lower level species that may or may not survive its own growing pains long enough to ever be any kind of threat at all? The chances that a highly xenophobic advanced species would pick it up and choose to mount the expense to act on it is pretty small. Hmm. Of course if they are particularly advanced they could just engineer a super-nova aimed in our general direction from close enough. Or as some film had it, send us the plans to build a wonder machine that wipes us out or turns us into more of them. > Well, not really an extra one, but I count four items in your 2-item > list: > > 1) Technophilics are so rare that there are no others in our light cone. 
> > 2) If they are relatively common, there is something that wipes them > *all* out (by the time they reach this stage they foul their own nest > and die), or > > 3) They are relatively common and they don't do anything which > indicates their presence, because they are too scared that someone > else will zap them, or > > 4) They are relatively common and they don't do anything which > indicates their presence, because they use communications technology > that does not leak the way ours does. > My theory is that almost no evolved intelligent species meets the challenge of overcoming its evolved limitations fast enough to cope successfully with accelerating technological change. Almost all either wipe themselves out or ding themselves sufficiently hard to miss their window of opportunity. It can be argued that it is very very rare that a technological species survives the period we are entering and emerges more capable on the other side of singularity. - samantha From sjatkins at mac.com Wed Feb 16 23:39:57 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:39:57 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <005b01cbce05$8101d390$83057ab0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> Message-ID: <4D5C604D.3030201@mac.com> On 02/16/2011 10:15 AM, spike wrote: >> ... On Behalf Of Richard Loosemore >> ...(the comments of the Watson team certainly imply that this is a major > milestone in AI... > > Well sure, they have been working on this for years, now it come out taking > on humans and whooping ass. I can scarcely fault them for a bit of > immodesty. > >> ... and the press are practically announcing this as the second coming). > Ja, and of course the press needs to whip up excitement, otherwise their > products don't sell and they no longer have a job. Now, who would hire a > former journalist? We couldn't trust them at the local elementary school, > and McDonald's won't hire them; they don't speak the language. > > But there is something else that really makes this exciting, for we > recognize that if a computer can play Jeopardy, it can be modified into > being a general conversationalist. Not the same problem domain or even all that close. Can you turn it into a really good chatbot? Maybe, maybe not depending on your standard of "good". But that wouldn't be very exciting. Very expensive way to keep folks in the nursing home entertained. - samantha From sjatkins at mac.com Wed Feb 16 23:48:03 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:48:03 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110216131030.GR23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <000001cbcd92$ceac0390$6c040ab0$@att.net> <20110216131030.GR23560@leitl.org> Message-ID: <4D5C6233.8050902@mac.com> Thanks for the excellent article, Eugen. Watson certainly is not simplistic. And some of its capabilities are ones I did not know we had a good enough handle on. Of course working with such beefy hardware is a big part of its level of RT success. 
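A quick back-of-envelope on when that stops mattering, with both inputs loudly flagged as guesses rather than IBM figures: call the ninety Power 750s roughly $3 million all in, and assume price-performance keeps halving every 18 to 24 months. A minimal sketch in Python:

# Sketch only; both constants below are assumptions, not IBM numbers.
watson_cost = 3000000.0  # guessed 2011 price tag for 90 Power 750 servers

for halving_months in (18, 24):
    cost, year = watson_cost, 2011.0
    while cost > 1000.0:  # roughly commodity-server territory
        cost /= 2.0
        year += halving_months / 12.0
    print("halving every %d months: under $1k around %d"
          % (halving_months, round(year)))

That prints somewhere around 2029 on the optimistic halving rate and 2035 on the slower one, and it says nothing about whether the software stack ports down gracefully.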
Until the algorithms and hardware costs combined give orders of magnitude lower cost, I can't see such capabilities making that much of a difference more broadly real soon. - s On 02/16/2011 05:10 AM, Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 07:55:56AM -0400, Darren Greer wrote: > >> Annihilated them, Spike. App. 35,000 for Watson, 4000 for Jennings and 10000 >> for Rutter. He got final Jeopardy wrong but was parsimonious with his >> wager -- just 900 odd dollars. Alex Trebek laughed and called him a >> 'sneak' because of the clever wager. The category was which U.S. city has >> an airport named after a war hero and a WWII battle. Watson said Toronto. I >> got a good laugh. I didn't know we'd been annexed. Another >> interesting detail. Ratings for Jeopardy have soared into the > Not to mention bring Big Blue back into the limelight. > >> stratosphere because of Watson. It moved into the number two spot in TV >> land behind a Charlie Sheen sitcom last night. > By the way, Watson is not nearly as dumb as I thought (and is far more > usable). According to > > http://www.hpcwire.com/features/Must-See-TV-IBM-Watson-Heads-for-Jeopardy-Showdown-115684499.html?viewAll=y > > February 09, 2011 > > Must See TV: IBM Watson Heads for Jeopardy Showdown > > Michael Feldman, HPCwire Editor > > Next week the IBM supercomputer known as "Watson" will take on two of the > most accomplished Jeopardy players of all time, Ken Jennings and Brad Rutter, > in a three-game match starting on February 14. If Watson manages to best the > humans, it will represent the most important advance in machine intelligence > since IBM's "Deep Blue" beat chess grandmaster Garry Kasparov in 1997. But > this time around, the company also plans to make a business case for the > technology. Trivial pursuit this is not. > > And impressive technology it is. On the hardware side, Watson is comprised of > 90 Power 750 servers, 16 TB of memory and 4 TB of disk storage, all housed in > a relatively compact ten racks. The 750 is IBM's elite Power7-based server > targeted for high-end enterprise analytics. (The Power 755 is geared toward > high performance technical computing and differs only marginally in CPU > speed, memory capacity, and storage options.) Although the enterprise version > can be ordered with 1 to 4 sockets of 6-core or 8-core Power7 chips, Watson > is maxed out with the 4-socket, 8-core configuration using the top bin 3.55 > GHz processors. > > The 360 Power7 chips that make up Watson's brain represent IBM's best and > brightest processor technology. Each Power7 is capable of over 500 GB/second > of aggregate bandwidth, making it particularly adept at manipulating data at > high speeds. FLOPS-wise, a 3.55 GHz Power7 delivers 218 Linpack gigaflops. > For comparison, the POWER2 SC processor, which was the chip that powered > cyber-chessmaster Deep Blue, managed a paltry 0.48 gigaflops, with the whole > machine delivering a mere 11.4 Linpack gigaflops. > > But FLOPS are not the real story here. Watson's question-answering software > presumably makes little use of floating-point number crunching. To deal with > the game scenario, the system had to be endowed with a rather advanced > version of natural language processing. But according to David Ferrucci, > principal investigator for the project, it goes far beyond language smarts. > The software system, called DeepQA, also incorporates machine learning, > knowledge representation, and deep analytics.
> > Even so, the whole application rests on first understanding the Jeopardy > clues, which, because they employ colloquialisms and often obscure > references, can be challenging even for humans. That's why this is such a > good test case for natural language processing. Ferrucci says the ability to > understand language is destined to become a very important aspect of > computers. "It has to be that way," he says. "We just can't imagine a future > without it." > > But it's the analysis component that we associate with real "intelligence." > The approach here reflects the open domain nature of the problem. According > to Ferrucci, it wouldn't have made sense to simply construct a database > corresponding to possible Jeopardy clues. Such a model would have supported > only a small fraction of the possible topics available to Jeopardy. Rather > their approach was to use "as is" information sources -- encyclopedias, > dictionaries, thesauri, plays, books, etc. -- and make the correlations > dynamically. > > The trick of course is to do all the processing in real-time. Contestants, at > least the successful ones, need to provide an answer in just a few seconds. > When the software was run on a lone 2.6 GHz CPU, it took around 2 hours to > process a typical Jeopardy clue -- not a very practical implementation. But > when they parallelized the algorithms across the 2,880-core Watson, they were > able to cut the processing time from a couple of hours to between 2 and 6 > seconds. > > Even at that, Watson doesn't just spit out the answers. It forms hypotheses > based on the evidence it finds and scores them at various confidence levels. > Watson is programmed not to buzz in until it reaches a confidence of at least > 50 percent, although this parameter can be self-adjusted depending on the > game situation. > > To accomplish all this, DeepQA employs an ensemble of algorithms -- about a > million lines of code -- to gather and score the evidence. These include > temporal reasoning algorithms to correlate times with events, statistical > paraphrasing algorithms to evaluate semantic context, and geospatial > reasoning to correlate locations. > > It can also dynamically form associations, both in training and at game time, > to connect disparate ideas. For example it can learn that inventors can > patent information or that officials can submit resignations. Watson also > shifts the weight it assigns to different algorithms based on which ones are > delivering the more accurate correlations. This aspect of machine learning > allows Watson to get "smarter" the more it plays the game. > > The DeepQA programmers have also been refining the algorithms themselves over > the past several years. In 2007, Watson could only answer a small fraction of > Jeopardy clues with reasonable confidence and even at that, was only correct > 47 percent of the time. When forced to answer the majority of the clues, like > a grand champion would, it could only answer 15 percent correctly. By IBM's > own admission, Watson was playing "terrible." The highest performing Jeopardy > grand champions, like Jennings and Rutter, typically buzz in on 70 to 80 > percent of the entries and give the correct answer 85 to 95 percent of the time. > > By 2010 Watson started playing at that level. Ferrucci says that while the > system can't buzz in on every question, it can now answer the vast majority > of them in competitive time.
"We can compete with grand champions in terms of > precision, in terms of confidence, and in terms of speed," he says. > > In dozens of practice rounds against former Jeopardy champs, the computer was > beating the humans with a 65 percent win rate. Watson also prevailed in a > 15-question round against Jennings and Rutter in early January of this year. > See the performance below. > > None of this is a guarantee that Watson will prevail next week. But even if > the machine just makes a decent showing, IBM will have pulled off quite > possibly the best product placement in television history. Open domain > question answering is not only one of the Holy Grails of artificial > intelligence but has enormous potential for commercial applications. In areas > as disparate as healthcare, tech support, business intelligence, security and > finance, this type of platform could change those businesses irrevocably. > John Kelly, senior vice president and director of IBM Research, boasts, > "We're going to revolutionize industries at a level that has never been done > before." > > In the case of healthcare, it's not a huge leap to imagine "expert" question > answering systems helping doctors with medical diagnosis. A differential > diagnosis is not much different from what Watson does when it analyzes a > Jeopardy clue. Before it replaces Dr. House, though, the machine will have to > prove itself in the game show arena. > > If Jennings and Rutter defeat the supercomputer this time around, IBM will > almost certainly ask for a rematch, as it did when Deep Blue initially lost > its first chess match with Kasparov in 1996. The engineers will keep stroking > the code and retraining the computer until Watson is truly unbeatable. > Eventually the machine will prevail. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From sjatkins at mac.com Wed Feb 16 23:53:45 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 16 Feb 2011 15:53:45 -0800 Subject: [ExI] ibm takes on the commies In-Reply-To: <20110216075255.GM23560@leitl.org> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> Message-ID: <4D5C6389.4050504@mac.com> On 02/15/2011 11:52 PM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 11:14:11PM -0800, spike wrote: >> >> >> Computer hipsters explain this to me. When they are claiming 10 petaflops, >> they mean using a few tens of thousands of parallel processors, ja? We > A common gamer's graphics card can easily have a thousand or a couple > thousand cores (mostly VLIW) and memory bandwidth from hell. Total node > count could run into tens to hundreds thousands, so we're talking > multiple megacores. As you are probably aware those are not general purpose cores. They cannot run arbitrary algorithms efficiently. >> couldn't check one Mersenne prime per second with it or anything, ja? It >> would be the equivalent of 10 petaflops assuming we have a process that is >> compatible with massive parallelism? The article doesn't say how many > Fortunately, every physical process (including cognition) is compatible > with massive parallelism. Just parcel the problem over a 3d lattice/torus, > exchange information where adjacent volumes interface through the high-speed > interconnect. There is no general parallelization strategy. If there was then taking advantage of multiple cores maximally would be a solved problem. It is anything but. 
> Anyone who has written numerics for MPI recognizes the basic design > pattern. > Not everything is reducible in ways that lead to those techniques being generally sufficient. - s From kellycoinguy at gmail.com Thu Feb 17 00:36:58 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 17:36:58 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C2DA9.9050804@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: >> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >> So I repeat my previous request, please tell us all about the wonderful AI >> program that you have written that does things even more intelligently than >> Watson. > > Done: read my papers. I've done that. At least all the papers I could find online. I have not seen in your papers anything approaching a utilitarian algorithm, a practical architecture or anything of the sort. Do you have a working program that does ANYTHING? You have some fine theories Richard, but theories that don't lead to some kind of productive result belong in journals of philosophy, not journals of computer science. You have some very interesting philosophical ideas, but I haven't seen anything in your papers that rise to the level of computer science. > Questions? Just ask! What is the USEFUL and working application of your theories? Show me the beef! -Kelly From darren.greer3 at gmail.com Thu Feb 17 00:56:02 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 20:56:02 -0400 Subject: [ExI] Image Recognition Appreciation Day In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <7465661C-311E-495F-81A9-81747F4A9D8D@bellsouth.net> Message-ID: >John, do you really have trouble seeing the distinction between specialized intelligence and general intelligence?< The easiest way I have of conceptualizing this is in terms of an autistic savant. They are often capable of remarkable feats of memory, spatial cognition and even data analysis. But we rarely refer to them as intelligent, because they may not be able to tie their own shoes or tell you what day of the week it is. That being said, Watson is a savant like none we've ever seen before, and it makes sense to me to get excited about him. We're building this thing from the ground up, and if this is not a concrete step forward in developing fully sapient AI (and I'm no expert and can't state definitively whether it is, though it seems on the surface to be so) it is a HUGE step forward in terms of creating a general societal awareness of AI--where it is at and where it can go and what its applications might be. And as anyone here who has ever fought to get funding for a project knows, the latter is just as important--maybe more so--than the former. 2011/2/16 Dave Sill > 2011/2/16 John Clark > > I would humbly like to suggest that June 23 (Alan Turing's birthday by the >> way) be turned into an international holiday called "Image Recognition >> Appreciation Day". On this day we would all reflect on the intelligence >> required to recognize images. It is important that this be done soon because >> although computers are not very good at this task right now that will >> certainly change in the next few years.
On the day computers become good at >> it the laws of physics in the universe will change and intelligence will no >> longer be required for image recognition. >> >> So if we ever intend to salute the brainpower required for this skill it >> is imperative we do it now while we can. >> > > John, do you really have trouble seeing the distinction between specialized > intelligence and general intelligence? Do you think Deep Blue or Watson > could pass the Turing Test? > > -Dave > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 01:03:14 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 18:03:14 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> Message-ID: On Tue, Feb 15, 2011 at 2:41 PM, BillK wrote: > No, because Watson doesn't have time to do any learning or > optimisation while the game is actually in progress. Watson doesn't > take any notice of opponents answers. That's why it gave the same > wrong answer as an opponent had already given. On the NOVA show, Watson made the same mistake (giving an answer already given) and the programmers talked about having solved that problem a bit later. I would *guess* that the mechanism they used somehow violated the rules imposed by the Jeopardy producers. It seems like it would be an easy fix if they had a speech recognition algorithm feeding back into the system, but they don't have that capacity (yet). Alex T indicated that Watson wasn't "listening" in the first show. Again, according to the NOVA show, Watson does have a module that learns during the game, related to the interpretation of the Category. I did not get the idea that the real-time learning was very sophisticated or extensive. The IBM materials on DeepQA indicate that there are a number of modules making up the architecture. In other words, you can plug in new algorithms. On the NOVA show they were talking about plugging in a "Gender" module. I would think that each of these modules contributes to an overall score for a good or bad answer. The Spam Assassin algorithm works like this, and it wouldn't surprise me if DeepQA used a similar approach. -Kelly From rpwl at lightlink.com Thu Feb 17 01:13:59 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 16 Feb 2011 20:13:59 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: <4D5C7657.6070405@lightlink.com> Kelly Anderson wrote: >>> On Feb 16, 2011, at 10:20 AM, Richard Loosemore wrote: >>> So I repeat my previous request, please tell us all about the wonderful AI >>> program that you have written that does things even more intelligently than >>> Watson. >> Done: read my papers. > > I've done that. At least all the papers I could find online. 
I have > not seen in your papers anything approaching a utilitarian algorithm, > a practical architecture or anything of the sort. Do you have a > working program that does ANYTHING? You have some fine theories > Richard, but theories that don't lead to some kind of productive > result belong in journals of philosophy, not journals of computer > science. You have some very interesting philosophical ideas, but I > haven't seen anything in your papers that rise to the level of > computer science. > >> Questions? Just ask! > > What is the USEFUL and working application of your theories? > > Show me the beef! So demanding, some people. ;-) If you have read McClelland and Rumelhart's two-volume "Parallel Distributed Processing", and if you have then read my papers, and if you are still so much in the dark that the only thing you can say is "I haven't seen anything in your papers that rise to the level of computer science" then, well... (And, in any case, my answer to John Clark was as facetious as his question was silly.) At this stage, what you can get is a general picture of the background theory. That is readily obtainable if you have a good knowledge of (a) computer science, (b) cognitive psychology and (c) complex systems. It also helps, as I say, to be familiar with what was going on in those PDP books. Do you have a fairly detailed knowledge of all three of these areas? Do you understand where McClelland and Rumelhart were coming from when they talked about the relaxation of weak constraints, and about how a lot of cognition seemed to make more sense when couched in those terms? Do you also follow the line of reasoning that interprets M & R's subsequent pursuit of non-complex models as a mistake? And the implication that there is a class of systems that are as yet unexplored, doing what they did but using a complex approach? Put all these pieces together and we have the basis for a dialog. But ... demanding a finished AGI as an essential precondition for behaving in a mature way toward the work I have already published...? I don't think so. :-) Richard Loosemore From darren.greer3 at gmail.com Thu Feb 17 01:16:25 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 21:16:25 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: >I'm still pissed at Sagan for his hubris in sending a message to the stars without asking the rest of us first, in blithe certainty that "of course" any recipient would have evolved beyond aggression and xenophobia.< I'm not sure NASA was so happy with the idea either. It was a last minute thing, and they gave him three weeks to come up with it. Had he had a few weeks longer, he might have reconsidered giving them the 14 pulsars info by which they could triangulate our location. Also, I think gay rights activists weren't happy about the hetero-sexist Adam and Eve thing. I personally think he should have included a lemur or a monkey on the plaque too, to show that we had evolutionary ancestors and they could, in the event of an attack, be called upon to defend us. Seriously though. Sagan did have a 'blithe certainty' that was reflected obliquely in most of what he wrote that any civilization that could get through its technological adolescence intact would have had to get past its stone age evolutionary programming to do so. I always got the sense that the two for him were connected. 
Re-phrased: if you don't evolve past those ancient brain applets of aggression and tribal dominance, you simply don't make it past the nuclear stage of your technological development. A grand assumption, perhaps. But it has some validity. After all, it remains to be seen if we're going to graduate. d. On Wed, Feb 16, 2011 at 7:36 PM, Samantha Atkins wrote: > On 02/16/2011 09:41 AM, Richard Loosemore wrote: > >> Keith Henson wrote: >> >>> On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: >>> >>> On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: >>>> >>>> I'm still pissed at Sagan for his hubris in sending a message to the >>>>> stars without asking the rest of us first, in blithe certainty that "of >>>>> course" any recipient would have evolved beyond aggression and >>>>> xenophobia. >>>>> >>>> The real reason is that if they were there you'd be dead, Jim. >>>> In fact, if any alien picks up the transmission (chance: very close >>>> to zero) they'd better be farther advanced than us, and on a >>>> faster track. I hope it for them. >>>> >>> >>> I have been mulling this over for decades. >>> >>> We look out into the Universe and don't (so far) see or hear any >>> evidence of technophilic civilization. >>> >>> I see only two possibilities: >>> >>> 1) Technophilics are so rare that there are no others in our light cone. >>> >>> 2) Or if they are relatively common something wipes them *all* out, >>> or, if not wiped out, they don't do anything which indicates their >>> presence. >>> >>> If 1, then the future is unknown. If 2, it's probably related to >>> local singularities. If that's the case, most of the people reading >>> this list will live to see it. >>> >> >> > Well, the message sent by Sagan was a single transmission aimed at a > globular cluster 25,000 light years away. Traveling at near light speed to > send a ship back is very expensive and would not happen for a long time. > And for what? A lower level species that may or may not survive its own > growing pains long enough to ever be any kind of threat at all? The > chances that a highly xenophobic advanced species would pick it up and > choose to mount the expense to act on it are pretty small. > > Hmm. Of course if they are particularly advanced they could just engineer > a super-nova aimed in our general direction from close enough. Or as some > film had it, send us the plans to build a wonder machine that wipes us out > or turns us into more of them. > > > Well, not really an extra one, but I count four items in your 2-item list: >> >> 1) Technophilics are so rare that there are no others in our light cone. >> >> 2) If they are relatively common, there is something that wipes them >> *all* out (by the time they reach this stage they foul their own nest and >> die), or >> >> 3) They are relatively common and they don't do anything which indicates >> their presence, because they are too scared that someone else will zap them, >> or >> >> 4) They are relatively common and they don't do anything which indicates >> their presence, because they use communications technology that does not >> leak the way ours does. >> >> > My theory is that almost no evolved intelligent species meets the challenge > of overcoming its evolved limitations fast enough to cope successfully with > accelerating technological change. Almost all either wipe themselves out > or ding themselves sufficiently hard to miss their window of opportunity.
> It can be argued that it is very very rare that a technological species > survives the period we are entering and emerges more capable on the other > side of singularity. > > - samantha > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 01:21:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Wed, 16 Feb 2011 18:21:26 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5C604D.3030201@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: > On 02/16/2011 10:15 AM, spike wrote: > Not the same problem domain or even all that close. Can you turn it into a > really good chatbot? Maybe, maybe not depending on your standard of "good". > But that wouldn't be very exciting. Very expensive way to keep folks in > the nursing home entertained. Samantha, are you familiar with Moore's law? Let's assume for purposes of discussion that you are 30, that you will be in the nursing home when you're 70. That means Watson level functionality will cost around $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get it in a box of cracker jacks. -Kelly From lubkin at unreasonable.com Thu Feb 17 01:32:51 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Wed, 16 Feb 2011 20:32:51 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <201102170132.p1H1Wc64003850@andromeda.ziaspace.com> Samantha wrote: >Hmm. Of course if they are particularly advanced they could just >engineer a super-nova aimed in our general direction from close >enough. Or as some film had it, send us the plans to build a >wonder machine that wipes us out or turns us into more of them. The first attempt at remote annihilation I know of in sf is astronomer Fred Hoyle's A for Andromeda (BBC 1961, novel 1962): >[I]t concerns a group of scientists who detect a radio signal from a >distant galaxy that contains instructions for the design of an >advanced computer. When the computer is built it gives the >scientists instructions for the creation of a living organism, named >Andromeda. However, one of Andromeda's creators, John Fleming, fears >that Andromeda's purpose is to subjugate humanity. Andromeda was played by Julie Christie, in her first significant role. Sadly, there does not seem to be a complete copy of the seven-episode series. But, thankfully, I'm old enough to have seen it before they were lost. -- David. From darren.greer3 at gmail.com Thu Feb 17 01:34:57 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 21:34:57 -0400 Subject: [ExI] Acceptance Into Math Program Message-ID: Just to let everyone on here know. When I first joined this group I was ashamed at how little I knew about science compared to the rest of you.
I'm intensely interested in technology and how it can transform us, and for a while before I came to Exi I was appalled at how little was being done to use it in the proper ways. So I joined your group and am glad I did. I've gotten lots of good ideas and have had great amounts of fun arguing and sparring and sometimes actually agreeing with you all. Last September though I decided to rectify my ignorance and registered for three classes at a local university - physics, mathematics and chemistry. I was pretty nervous. I'm 43. I'm a novelist, and not of science fiction. I have a liberal arts background. But besides wanting to be able to discuss things in this group, I also now believe that any writer who does not have a science background, whether he writes spy novels or technical manuals, may find himself acculturated in the next decade or two. So I bit the bullet. I finish my classes in two months. So far my grade point average is perfect. I actually love the work, and today I found out I have been accepted into a BSc at a good school here in Canada. I've decided that my major will be in mathematics. I like the chemistry and physics, and perhaps I will switch majors later on. But for now my real interest seems to lie in pure math. I thought you all might like to know this, since it was you as a group who helped turn an arts guy into (kind of) a science guy. This group does have tremendous value. Certainly it has had a profound effect on my life. So thanks. And when I get stuck next year, I'll be calling on you. Darren -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbb386 at main.nc.us Thu Feb 17 02:05:40 2011 From: mbb386 at main.nc.us (MB) Date: Wed, 16 Feb 2011 21:05:40 -0500 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: <62eec397476da701d4a2f2b5f3999c21.squirrel@www.main.nc.us> > today I found out I have been accepted into a > BSc at a good school here in Canada. I've decided that my major will be in > mathematics. I like the chemistry and physics, and perhaps I will switch > majors later on. But for now my real interest seems to lie in pure math. > Congratulations, Darren! That's an impressive step. :) May you continue to do well and find enjoyment in your work. Regards, MB From possiblepaths2050 at gmail.com Thu Feb 17 02:13:26 2011 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 16 Feb 2011 19:13:26 -0700 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: Hello Darren, Congratulations regarding furthering your education!!! : ) This email list has also had an amazing effect on my life, due to the things I've learned and the mental "food for thought buffet" that awaits me here every day. If you have not already done so, I hope you attend a transhumanist conference sometime, so that you can meet some of the people on this list. An H+ or Singularity Institute gathering could really be very thrilling for you. I went to Convergence back in 2008 and had a wonderful time! http://www.convergence08.org/ My best wishes to you, John : ) On 2/16/11, Darren Greer wrote: > Just to let everyone on here know. > > When I first joined this group I was ashamed at how little I knew about > science compared to the rest of you. I'm intensely interested in technology > and how it can transform us, and for a while before I came to Exi I was > appalled at how little was being done to use it in the proper ways.
So I > joined your group and am glad I did. I've gotten lots of good ideas and have > had great amounts of fun arguing and sparring and sometimes actually > agreeing with you all. > > Last September though I decided to rectify my ignorance and registered for > three classes at a local university - physics, mathematics and chemistry. I > was pretty nervous. I'm 43. I'm a novelist, and not of science fiction. I > have a liberal arts background. But besides wanting to be able to discuss > things in this group, I also now believe that any writer who does not have a > science background, whether he writes spy novels or technical manuals, may > find himself acculturated in the next decade or two. So I bit the bullet. I > finish my classes in two months. So far my grade point average is perfect. I > actually love the work, and today I found out I have been accepted into a > BSc at a good school here in Canada. I've decided that my major will be in > mathematics. I like the chemistry and physics, and perhaps I will switch > majors later on. But for now my real interest seems to lie in pure math. > > I thought you all might like to know this, since it was you as a group who > helped turn an arts guy into (kind of) a science guy. This group does have > tremendous value. Certainly it has had a profound effect on my life. So > thanks. And when I get stuck next year, I'll be calling on you. > > Darren > > -- > *There is no history, only biography.* > * > * > *-Ralph Waldo Emerson > * > From FRANKMAC at RIPCO.COM Thu Feb 17 02:19:19 2011 From: FRANKMAC at RIPCO.COM (FRANK MCELLIGOTT) Date: Wed, 16 Feb 2011 19:19:19 -0700 Subject: [ExI] FORBIN PROJECT Message-ID: From a small, very small, manufacturing background, I understand that the prototype is the major cost, and after it finally works, all that is needed is to assemble and the cost descends from there. Right now the cost is way, way out there, but all of us know that present day 300 dollar computers can dance circles around an IBM 360, which cost over half a million back in the 60's. Soon all major governments will need a Watson, and because of fear of being left behind, money will flow to get a new and improved Watson, and then we all know what happens then:) The major concern will come when the decision making process is given up and placed in the hands of a computer, which may find that the best solution to the rising debt in the United States, caused by Medicare and Social Security underfunding, is not to increase taxes but instead to terminate all the folks who are now receiving benefits. We now walk on thin ice, and most of this list know it: applications that remove human judgement and replace it with code and best-case answers, without moral direction, will remove the species from this planet. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 02:12:43 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 18:12:43 -0800 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: <004001cbce48$29b75900$7d260b00$@att.net> . On Behalf Of Darren Greer Subject: [ExI] Acceptance Into Math Program >. So far my grade point average is perfect. Mine was kinda like that sorta. My grade point perfect was average. >. But for now my real interest seems to lie in pure math. We are proud of you, my son. >. So thanks. And when I get stuck next year, I'll be calling on you. Darren Do so!
I have been tutoring two calculus students. I learned I still remember how to integrate and differentiate after all these tragically many years. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Thu Feb 17 01:42:13 2011 From: sparge at gmail.com (Dave Sill) Date: Wed, 16 Feb 2011 20:42:13 -0500 Subject: [ExI] Acceptance Into Math Program In-Reply-To: References: Message-ID: 2011/2/16 Darren Greer > > I thought you all might like to know this, since it was you as a group who > helped turn an arts guy into (kind of) a science guy. This group does have > tremendous value. Certainly it has had a profound effect on my life. So > thanks. And when I get stuck next year, I'll be calling on you. > Congrats and good luck. This list has had a profound effect on my life, too. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From x at extropica.org Thu Feb 17 03:04:46 2011 From: x at extropica.org (x at extropica.org) Date: Wed, 16 Feb 2011 19:04:46 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <4D59FA1D.5000902@satx.rr.com> Message-ID: On Tue, Feb 15, 2011 at 6:16 PM, wrote: > On Mon, Feb 14, 2011 at 8:09 PM, wrote: >> On Mon, Feb 14, 2011 at 7:59 PM, Damien Broderick wrote: >>> On 2/14/2011 9:28 PM, spike wrote: >>> >>>> I don't have commercial TV, and can't find live streaming. >>> >>> I don't have TV, period. Anyone have a link? >> >> >> > and Day 2: > > > and Day 3: From spike66 at att.net Thu Feb 17 03:00:11 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 19:00:11 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: <4D5AD78C.80805@mac.com> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> Message-ID: <005801cbce4e$cba88120$62f98360$@att.net> >... On Behalf Of Samantha Atkins ... >> After seeing the amount of progress we have made in nanotechnology in the quarter century since the K.Eric published Engines of Creation, I have concluded that replicating nanobots are a technology that is out of reach of human capability. >Not so. Just a good three decades further out. - samantha Ja. I just don't know when those three good decades will start. I could be overly pessimistic. Samantha, do you remember about the mid to late 90s, when we were all going great guns on this, investment dollars were flying every which direction, local nanotech miniconferences, the K.Eric was going around giving lectures in the area, and even some universities were starting up nanotech disciplines? One could go to the University of North Carolina and major in nanotechnology. How cool is that! I don't see that any of it gave us much of anything that was true nanotech. The research produced some really excellent technologies, none of which were true bottom up nanotech. In a way, I see that as similar to the debate we have had here the last few days on Watson. It isn't AI, any more than developing submicron scale transistors is nanotechnology, but it has its own advantages. Like the university nanotech major, it attracts young talent, it pays the bills, it definitely fires the imagination. If anyone wanted to argue that these represent indirect paths to nanotech and AGI, well, I wouldn't argue with them.
spike From spike66 at att.net Thu Feb 17 03:13:16 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 19:13:16 -0800 Subject: [ExI] Watson on NOVA References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> Message-ID: <005f01cbce50$9f4e85a0$ddeb90e0$@att.net> >>> ...I have concluded that replicating nanobots are a technology that is out of reach of human capability. >>Not so. Just a good three decades further out. - samantha >Ja. I just don't know when those three good decades will start...spike Check out the graph at the bottom of this article: http://www.electroiq.com/index/display/nanotech-article-display/6417811327/articles/small-times/nanotechmems/research-and_development/2010/august/ranking-the_nations.html spike From darren.greer3 at gmail.com Thu Feb 17 03:44:11 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 16 Feb 2011 23:44:11 -0400 Subject: [ExI] Watson on NOVA In-Reply-To: <005801cbce4e$cba88120$62f98360$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <4D5AD78C.80805@mac.com> <005801cbce4e$cba88120$62f98360$@att.net> Message-ID: > If anyone wanted to argue that these represent indirect paths to nanotech and AGI, well, I wouldn't argue with them. < I wouldn't either. But if human intellectual history has shown us anything, it is that the path to discovery and achievement often *is* indirect. At least the indirect stuff can spawn new insights and applications of technology that just might lead to where we're trying to get to. That probably holds more true now than it ever has, since the days of individual discovery are numbered, as we become more unified in our quests and the individualistic dynamics that have fueled history to this point are replaced by more cooperative, socialized ones. d. On Wed, Feb 16, 2011 at 11:00 PM, spike wrote: > > > >... On Behalf Of Samantha Atkins > ... > > >> After seeing the amount of progress we have made in nanotechnology in > the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach > of > human capability. > > >Not so. Just a good three decades further out. - samantha > > Ja. I just don't know when those three good decades will start. > > I could be overly pessimistic. Samantha, do you remember about the mid to > late 90s, when we were all going great guns on this, investment dollars > were flying every which direction, local nanotech miniconferences, the > K.Eric was going around giving lectures in the area, and even some > universities were starting up nanotech disciplines? One could go to the > University of North Carolina and major in nanotechnology. How cool is > that! > I don't see that any of it gave us much of anything that was true nanotech. > The research produced some really excellent technologies, none of which > were > true bottom up nanotech. > > In a way, I see that as similar to the debate we have had here the last few > days on Watson. It isn't AI, any more than developing submicron scale > transistors is nanotechnology, but it has its own advantages. Like the > university nanotech major, it attracts young talent, it pays the bills, it > definitely fires the imagination.
If anyone wanted to argue that these > represent indirect paths to nanotech and AGI, well, I wouldn't argue with > them. > > spike > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 05:32:38 2011 From: spike66 at att.net (spike) Date: Wed, 16 Feb 2011 21:32:38 -0800 Subject: [ExI] watson on jeopardy Message-ID: <002d01cbce64$16fc3530$44f49f90$@att.net> Woohoo! Watson wins! http://www.cnn.com/2011/TECH/innovation/02/16/jeopardy.watson/index.html?hpt=T1 Jeopardy isn't over however. It is only a matter of time before a competing team wants to play machine against machine, or even a three-way all machine matchup. Note that there are today about a couple dozen top chess computers cheerfully pummeling each other, with the results being broadcast for all the world's people with far too much time on their hands to watch in pointless fascination. Those games are in some ways more interesting to watch than human-human or human-machine games, because they tend to be so technically clean and positional, so theoretical. I can imagine there are already teams working to whoop Watson's butt. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Thu Feb 17 07:24:22 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:24:22 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C7657.6070405@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: On Wed, Feb 16, 2011 at 6:13 PM, Richard Loosemore wrote: > Kelly Anderson wrote: >> Show me the beef! > > So demanding, some people. ;-) I wouldn't be so demanding if you acknowledged the good work of others, even if it is just a "parlor trick". > If you have read McClelland and Rumelhart's two-volume "Parallel Distributed > Processing", I have read volume 1 (a long time ago), but not volume 2. > and if you have then read my papers, and if you are still so > much in the dark that the only thing you can say is "I haven't seen anything > in your papers that rise to the level of computer science" then, well... Your papers talk the talk, but they don't walk the walk as far as I can tell. There is not a single instance where you say, "And using this technique we can distinguish pictures of cats from pictures of dogs" or "This method leads to differentiating between the works of Bach and Mozart." Or even the ability to answer the question "What do grasshoppers eat?" > (And, in any case, my answer to John Clark was as facetious as his question > was silly.) Sidebar: I have found that humor and facetiousness don't work well on mailing lists. > At this stage, what you can get is a general picture of the background > theory. That is readily obtainable if you have a good knowledge of (a) > computer science, Check. > (b) cognitive psychology Eh, so so. > and (c) complex systems. Like the space shuttle?
> It also helps, as I say, to be familiar with what was going on in those PDP books. Like I said, I read the first volume of that book a long time ago (I think I have a copy downstairs); nevertheless, I have a decent grasp of neural networks, relaxation, simulated annealing, pattern recognition, multidimensional search spaces, statistical and Bayesian approaches, computer vision, character recognition (published), search trees in traditional AI and massively parallel architectures. I'm not entirely unaware of various theories of philosophy and religion. I am weak in natural language processing, traditional databases, and sound processing. > Do you have a fairly detailed knowledge of all three of these areas? Fair to middling, although my knowledge is a little outdated. I'm not tremendously worried about that since I used a textbook written in the late 1950s when I took pattern recognition in 1986, and you refer to a book published in the late 1980s... I kind of get the idea that progress is fairly slow in these areas except that now we have better hardware on which to run the old algorithms. > Do you understand where McClelland and Rumelhart were coming from when they > talked about the relaxation of weak constraints, and about how a lot of > cognition seemed to make more sense when couched in those terms? Yes, this makes a lot of sense. I don't see how it relates directly to your work. I actually like what you have to say about short- vs. long-term memory, I think that's a useful way of looking at things. The short-term or "working" memory that uses symbols vs. the long-term memory that works in a more subconscious way is very interesting stuff to ponder. > Do you > also follow the line of reasoning that interprets M & R's subsequent pursuit > of non-complex models as a mistake? Afraid you lose me here. > And the implication that there is a > class of systems that are as yet unexplored, doing what they did but using a > complex approach? Still lost, but willing to listen. > Put all these pieces together and we have the basis for a dialog. > > But ... demanding a finished AGI as an essential precondition for behaving > in a mature way toward the work I have already published...? I don't think > so. :-) If I have treated you in an immature way, I apologize. I just think that classifying four years of work and millions of dollars' worth of research as "trivial", when there are 10,000,000 lines of actually working code, is not a strong position to come from. I am an Agilista. I value working code over big ideas. So while I acknowledge that you have some interesting big ideas, it escapes me how you are going to bridge the gap to achieve a notable result. Maybe it is clear to you, but if it is, you should publish something a little more concrete, IMHO. -Kelly From kellycoinguy at gmail.com Thu Feb 17 07:31:34 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:31:34 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <4D5C5F9A.2020204@mac.com> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: > My theory is that almost no evolved intelligent species meets the challenge > of overcoming its evolved limitations fast enough to cope successfully with > accelerating technological change. Almost all either wipe themselves out > or ding themselves sufficiently hard to miss their window of opportunity.
> It can be argued that it is very, very rare that a technological species > survives the period we are entering and emerges more capable on the other > side of singularity. Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint. Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form... -Kelly From kellycoinguy at gmail.com Thu Feb 17 07:43:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 00:43:18 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <008c01cbcd31$8805bc80$98113580$@att.net> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: > Ja, but when you say "research" in reference to AI, keep in mind the actual > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > kill us. Why is that the goal? As extropians, isn't the idea to reduce entropy? Humans may be more prone to entropy than some higher life form. In that case, shouldn't we strive to evolve to that higher form and let go of our physical natures? If our cognitive patterns are preserved, and enhanced, we have achieved a level of immortality, and perhaps become AGIs ourselves. That MIGHT be a good thing. Then again, it might not be a good thing. I just don't see your above statement as being self-evident upon further reflection. > After seeing the amount of progress we have made in nanotechnology in the > quarter century since the K.Eric published Engines of Creation, I have > concluded that replicating nanobots are a technology that is out of reach of > human capability. We need AI to master that difficult technology. But if humans can create the AI that creates the replicating nanobots, then in a sense it isn't out of human reach. > Without > replicating assemblers, we probably will never be able to read and simulate > frozen or vitrified brains. So without AI, we are without nanotech, and > consequently we are all doomed, along with our children and their children > forever. > > On the other hand, if we are successful at doing AI wrong, we are all doomed > right now. It will decide it doesn't need us, or just sees no reason why we > are useful for anything. And that is a bad thing exactly how? > When I was young, male and single (actually I am still male now) but when I > was young and single, I would have reasoned that it is perfectly fine to > risk future generations on that bet: build AI now and hope it likes us, > because all future generations are doomed to a century or less of life > anyway, so there's no reasonable objection to betting that against > eternity. > > Now that I am middle aged, male and married, with a child, I would do that > calculus differently. I am willing to risk that a future AI can upload a > living being but not a frozen one, so that people of my son's generation > have a shot at forever even if it means that we do not. There is a chance > that a future AI could master nanotech, which gives me hope as a corpsicle > that it could read and upload me. But I am reluctant to risk my children's > and grandchildren's 100 years of meat world existence on just getting AI > going as quickly as possible.
Honestly, I don't think we have much of a choice about when AI gets going. We can all make choices as individuals, but I see it as kind of inevitable. Ray K seems to have this mindset as well, so I feel like I'm in pretty good company on this one. > In that sense, having AI researchers wander off into making toys (such as > chess software and Watson) is perfectly OK, and possibly desirable. > >>...Give me a hundred smart, receptive minds right now, and three years to > train 'em up, and there could be a hundred people who could build an AGI > (and probably better than I could)... > > Sure, but do you fully trust every one of those students? Computer science > students are disproportionately young and male. > >>...So, just to say, don't interpret the previous comment to be too much of > a mad scientist comment ;-) Richard Loosemore > > Ja, I understand the reasoning behind those who are focused on the goal of > creating AI, and I agree the idea is not crazed or unreasonable. I just > disagree with the notion that we need to be in a desperate hurry to make an > AI. We as a species can take our time and think about this carefully, and I > hope we do, even if it means you and I will be lost forever. > > Nuclear bombs preceded nuclear power plants. Yes, and many of the most interesting AI applications are no doubt military in nature. -Kelly From eugen at leitl.org Thu Feb 17 07:58:45 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 08:58:45 +0100 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <20110217075845.GJ23560@leitl.org> On Thu, Feb 17, 2011 at 12:43:18AM -0700, Kelly Anderson wrote: > On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: > > Ja, but when you say "research" in reference to AI, keep in mind the actual > > goal isn't the creation of AGI, but rather the creation of AGI that doesn't > > kill us. > > Why is that the goal? As extropians, isn't the idea to reduce entropy? Right, that would be a great friendliness metric. > Humans may be more prone to entropy than some higher life form. In Right, let's do away with lower life forms. Minimize entropy. > that case, shouldn't we strive to evolve to that higher form and let Why evolve? Exterminate lower life forms. Minimize entropy. Much more efficient. > go of our physical natures? If our cognitive patterns are preserved, Cognitive patterns irrelevant. Maximize extropy. Exterminate humans. > and enhanced, we have achieved a level of immortality, and perhaps > become AGIs ourselves. That MIGHT be a good thing. Then again, it > might not be a good thing. I just don't see your above statement as > being self-evident upon further reflection. Reflection irrelevant. You will be exterminated.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From kellycoinguy at gmail.com Thu Feb 17 08:09:17 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 01:09:17 -0700 Subject: [ExI] Watson on NOVA In-Reply-To: <20110217075845.GJ23560@leitl.org> References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> <20110217075845.GJ23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 12:58 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 12:43:18AM -0700, Kelly Anderson wrote: >> On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: >> > Ja, but when you say "research" in reference to AI, keep in mind the actual >> > goal isn't the creation of AGI, but rather the creation of AGI that doesn't >> > kill us. >> >> Why is that the goal? As extropians, isn't the idea to reduce entropy? > > Right, that would be a great friendliness metric. Not so much. >> Humans may be more prone to entropy than some higher life form. In > > Right, let's do away with lower life forms. Minimize entropy. In all seriousness, we are in the middle of a mass extinction that is driven by just that. Cows are taking over the world, and buffalo are suffering. If we get caught up in the same mass extinction event, I don't think we should be too terribly surprised. PERSONALLY, I think this would be a bad thing. I rather like being me. But I've learned that what I want is only weakly connected with what actually ends up happening. >> that case, shouldn't we strive to evolve to that higher form and let > > Why evolve? Exterminate lower life forms. Minimize entropy. Much more > efficient. Not sure if you are joking here... hard to respond because there are so many ways to parse this... :-) >> go of our physical natures? If our cognitive patterns are preserved, > > Cognitive patterns irrelevant. Maximize extropy. Exterminate humans. > >> and enhanced, we have achieved a level of immortality, and perhaps >> become AGIs ourselves. That MIGHT be a good thing. Then again, it >> might not be a good thing. I just don't see your above statement as >> being self-evident upon further reflection. > > Reflection irrelevant. You will be exterminated. It is at least as likely as not. The thing is, I'm actually somewhat OK with that if it leads to significantly better things. I'm sure most Homo erectus would be pretty ticked with how things worked out for their species. -Kelly From kellycoinguy at gmail.com Thu Feb 17 09:01:50 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Thu, 17 Feb 2011 02:01:50 -0700 Subject: [ExI] How Watson works, a guess Message-ID: There have been a number of guesses on list as to how Watson works. I have spent a fair amount of time looking at everything I can find on the topic, and here is my guess as to how it works, based on what I've heard and read, weighted somewhat by how I would approach the problem. If I were going to try and duplicate Watson, this is more or less how I would proceed. To avoid confusion, I won't reverse question/answer like Jeopardy. First, keywords from the question are used to find a large number of potentially good answers. This is what is meant when people say "a search engine is part of Watson".
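To make that first step concrete, here is a toy sketch of the kind of candidate-generation pass I am imagining: an inverted index that scores documents by how many clue keywords they contain and how tightly those keywords cluster. This is purely my guess at the pattern -- the names and numbers are invented, not anything IBM has published.

from collections import defaultdict

class ToyIndex:
    def __init__(self):
        # term -> list of (doc_id, position) postings
        self.postings = defaultdict(list)

    def add(self, doc_id, text):
        for pos, term in enumerate(text.lower().split()):
            self.postings[term].append((doc_id, pos))

    def candidates(self, clue, window=10):
        # Collect the positions of clue terms within each document.
        hits = defaultdict(list)
        for term in clue.lower().split():
            for doc_id, pos in self.postings.get(term, []):
                hits[doc_id].append(pos)
        # More matched terms and a tighter span both raise the score.
        scores = {}
        for doc_id, positions in hits.items():
            span = max(positions) - min(positions) + 1
            scores[doc_id] = len(positions) / max(span / window, 1.0)
        return sorted(scores, key=scores.get, reverse=True)

Sharded across a few thousand cores and the reported 200,000,000 documents, something of this flavor could plausibly return a few hundred candidate passages in well under a second.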
That retrieval pass is likely based on proximity, as Richard pointed out, and it is clearly just the first step. There are probably some really interesting indexing techniques used that are a bit different from Google's. I was fascinated by the report on this list that Watson had more RAM than hard drive space. Can someone verify that this is the case? It seems counter-intuitive. What happens if you turn the power off? Do you have to connect Watson to a network to reload all that RAM? Watson's database consists of a reported 200,000,000 documents including Wikipedia, IMDb, other encyclopedias, etc. Second, a very large set of heuristic algorithms (undoubtedly, these are the majority of the reported 1,000,000 lines of code) analyze the question, the category, and/or each potential answer in combination, and come up with a "score" indicating whether by this heuristic measure the answer is a good one. I would suspect that each heuristic also generates a "confidence" measurement. Third, a learning algorithm generates "weights" to apply to each heuristic result, and perhaps a different weight for each confidence measurement. This may be the "tuning" that is specific to Jeopardy. Another part of tuning is adding more heuristic tests. For example, on the NOVA show, two of the programmers talk about the unfinished "gender" module that comes up after Watson misses. There is also a module referred to as the "geographical" element. One could assume it tries to determine, by a variety of algorithms, whether what is being proposed as an answer makes spatial sense. The heuristic algorithms no doubt include elements of natural language processing, statistical analysis, hard-coded things that were noted by some programmer or other based on a failed answer during testing, etc. The reason the reports of how Watson works seem so complex and contradictory is, IMO, that each source talks about a particular heuristic, and that makes that heuristic seem a bit more important than the overall architecture. The combination of all the weighted scores probably follows some kind of statistical (probably Bayesian) approach, which is quite amenable to learning feedback. An open source project, SpamAssassin, takes a similar approach to separating spam from good email. Hundreds of heuristic tests are run on each email, and the results are combined to form a confidence about an email being spam or not. A cutoff point is determined, and anything above the cutoff is considered spam. It can "learn" to distinguish new spam by changing the weights used for each heuristic test. It is also an extensible plug-in architecture, in that new heuristics can be added, and the weights can be tweaked over time as the nature of various spams changes. I would not be surprised if Watson takes a similar approach, based on what people have said. I suspect that each potential answer is evaluated by these heuristic algorithms on the 2800 processors, and that good answers from multiple sources (the multiple-sources thing could be part of the heuristics) are given credence. This is why questions about terrorism lead to incorrect answers about 9/11. All the results are put together, all the confidences are combined, and the winning answer is chosen. If the confidence in the best answer is not above a threshold, Watson does not push the button. If it is, Watson pushes the button, and quickly. In fact, one of Watson's advantages may be in its ability to push the button very quickly.
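If I had to code that combination step, it would look something like the sketch below -- again just my guess at the pattern, with invented heuristic modules, weights and threshold, not anything from IBM:

def gender_check(clue, answer):
    # Stand-in for one of the hundreds of heuristic modules;
    # returns a score in 0..1 for this clue/answer pair.
    return 0.5

def geography_check(clue, answer):
    # Another stand-in: does the proposed answer make spatial sense?
    return 0.5

HEURISTICS = [gender_check, geography_check]
WEIGHTS = [0.3, 0.7]        # learned from old games, SpamAssassin-style
BUZZ_THRESHOLD = 0.8        # invented number

def confidence(clue, answer):
    total = sum(w * h(clue, answer) for w, h in zip(WEIGHTS, HEURISTICS))
    return total / sum(WEIGHTS)   # normalize to 0..1

def decide(clue, candidates):
    best = max(candidates, key=lambda a: confidence(clue, a))
    if confidence(clue, best) >= BUZZ_THRESHOLD:
        return best               # buzz in, fast
    return None                   # stay quiet

The learning feedback would then just be nudging WEIGHTS after each right or wrong answer, which is exactly what makes this kind of architecture so tunable.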
I haven't done an analysis, but it seemed that there weren't many times that Watson had a high-confidence answer and he didn't get the first chance at answering the question. This is an area where a computer has a serious (and somewhat unfair) advantage over humans. I understand from a long-ago interview that Ken Jennings basically tries to push the button as fast as he can if he thinks he might know the answer, even if he hasn't yet fished up the whole answer. He wasn't the first to press the button a lot of the time in this tournament. I bet that both the carbon-based guys knew a lot of answers that they didn't get a chance to answer because Watson is a fast button pusher. There seems to be another subsystem that determines what to bet. The non-round numbers are funny, but I would bet that's one of the more solid elements of Watson's game. I don't think there is any AI in this part. Again, this is all just a wild semi-educated guess. If you have gotten this far, which do you think is more intelligent, Google or Watson? Why? Which leverages human intelligence better? -Kelly From darren.greer3 at gmail.com Thu Feb 17 10:19:21 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:19:21 -0400 Subject: [ExI] Kurzweil On Watson Message-ID: http://news.yahoo.com/s/zd/20110120/tc_zd/259558 John will like this. Kurzweil says in his opening salvo some of what he's been saying in the Watson threads. "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I predicted that a computer would defeat the world chess champion by 1998. My estimate was based on the predictable exponential growth of computing power (an example of what I now call the "law of accelerating returns") and my estimate of what level of computing was needed to achieve a chess rating of just under 2800 (sufficient to defeat any human, although lately the best human chess scores have inched above 2800). I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess." d. -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 10:26:22 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 11:26:22 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5C604D.3030201@mac.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <20110217102622.GN23560@leitl.org> On Wed, Feb 16, 2011 at 03:39:57PM -0800, Samantha Atkins wrote: > Not the same problem domain or even all that close. Can you turn it > into a really good chatbot? Maybe, maybe not depending on your standard > of "good". But that wouldn't be very exciting. Very expensive way to > keep folks in the nursing home entertained. Think of it like a NL layer for Google, like Wolfram Alpha, but for trivia and fact knowledge, updated in real time as new publications come in. Great tool for researchers and analysts. It can be pretty shallow reasoning. It's a tool, not a person.
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 10:33:07 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 11:33:07 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: Message-ID: <20110217103307.GO23560@leitl.org> On Thu, Feb 17, 2011 at 06:19:21AM -0400, Darren Greer wrote: > http://news.yahoo.com/s/zd/20110120/tc_zd/259558 > > John will like this. Kurzweil says in his opening salvo some of what he's > been saying in the Watson threads. > > "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I > predicted that a computer would defeat the world chess > champion by > 1998. My estimate was based on the predictable exponential growth of > computing power (an example of what I now call the "law of accelerating > returns") and my estimate of what level of computing was needed to achieve a > chess rating of just under 2800 (sufficient to defeat any human, although > lately the best human chess scores have inched above 2800). I also predicted > that when that happened we would either think better of computer > intelligence, worse of human thinking, or worse of chess, and that if > history was a guide, we would downgrade chess." http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions/ -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Thu Feb 17 10:34:15 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:34:15 -0400 Subject: [ExI] watson on jeopardy In-Reply-To: <002d01cbce64$16fc3530$44f49f90$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: >Note that there are today about a couple dozen top chess computers cheerfully pummeling each other, with the results being broadcast for all the world's people with far too much time on their hands to watch in pointless fascination.< What interests me about the difference between chess and word/knowledge games is the idea of game tree complexity. Chess has a game tree complexity of 10^123, but how do you measure GTC for something like Jeopardy? For each question there is only one right answer, and therefore only one right move, and the next question has no relation or dependence upon the question before it. So comparing Jeopardy and chess seems like apples and oranges to me, no? I just read the Kurzweil article and he points out that Watson is much closer to being able to pass the Turing test than a chess-playing computer, as it is dealing with human language. And so based on that criterion, it is a step forward no matter how you slice it. d. 2011/2/17 spike > Woohoo! Watson wins! > > > > > http://www.cnn.com/2011/TECH/innovation/02/16/jeopardy.watson/index.html?hpt=T1 > > > > Jeopardy isn't over, however. It is only a matter of time before a > competing team wants to play machine against machine, or even a three-way > all machine matchup.
Note that there are today about a couple dozen top > chess computers cheerfully pummeling each other, with the results being > broadcast for all the world's people with far too much time on their hands to > watch in pointless fascination. Those games are in some ways more > interesting to watch than human-human or human-machine games, because they > tend to be so technically clean and positional, so theoretical. > > I can imagine there are already teams working to whoop Watson's butt. > > spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 10:47:53 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:47:53 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <20110217103307.GO23560@leitl.org> References: <20110217103307.GO23560@leitl.org> Message-ID: > http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions < Well, his success rate is better than Nostradamus. But then again, he's not relying on a fickle daemon for results. I think what he has to say about Watson and the Turing Test is valid, and rather simply put, regardless of the predictions. Dealing with the complexity of language is a better indicator of intelligence and closer to passing the test than dealing with the purely mathematical game tree complexity of chess. d. On Thu, Feb 17, 2011 at 6:33 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 06:19:21AM -0400, Darren Greer wrote: > > http://news.yahoo.com/s/zd/20110120/tc_zd/259558 > > > > John will like this. Kurzweil says in his opening salvo some of what he's > > been saying in the Watson threads. > > > > "In *The Age of Intelligent Machines*, which I wrote in the mid 1980s, I > > predicted that a computer would defeat the world chess > > champion by > > 1998. My estimate was based on the predictable exponential growth of > > computing power (an example of what I now call the "law of accelerating > > returns") and my estimate of what level of computing was needed to > achieve a > > chess rating of just under 2800 (sufficient to defeat any human, although > > lately the best human chess scores have inched above 2800). I also > predicted > > that when that happened we would either think better of computer > > intelligence, worse of human thinking, or worse of chess, and that if > > history was a guide, we would downgrade chess." > > > http://singularityhub.com/2011/01/04/kurzweil-defends-his-predictions-again-was-he-86-correct/ > > > http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions/ > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed...
URL: From darren.greer3 at gmail.com Thu Feb 17 10:59:53 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 06:59:53 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: >Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint. Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form...< I had never considered this scenario until I came to Exi and it was postulated for me. It is the most hopeful compared to the other polar opposite scenarios--self-destruction or mature Zen state (with a no poaching policy) of technological superiority. Alas, self-destruction seems to me to be the most likely, given the bloody and tragic arc of our history at least. D. On Thu, Feb 17, 2011 at 3:31 AM, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: > > My theory is that almost no evolved intelligent species meets the > challenge > > of overcoming its evolved limitations fast enough to cope successfully > with > > accelerating technological change. Almost all either wipe themselves > out > > or ding themselves sufficiently hard to miss their window of opportunity. > > It can be argued that it is very very rare that a technological species > > survives the period we are entering and emerges more capable on the other > > side of singularity. > > Another possibility is that advanced civilizations naturally trend > towards virtual reality, and thus end up leaving a very small > externally detectable footprint. Exploring the endless possibilities > of virtual reality seems potentially a lot more interesting than > crossing tens of thousands of light years of space to try and visit > some lower life form... > > -Kelly > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 11:11:24 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 12:11:24 +0100 Subject: [ExI] ibm takes on the commies In-Reply-To: <4D5C6389.4050504@mac.com> References: <000001cbcda9$1c8386e0$558a94a0$@att.net> <20110216075255.GM23560@leitl.org> <4D5C6389.4050504@mac.com> Message-ID: <20110217111124.GP23560@leitl.org> On Wed, Feb 16, 2011 at 03:53:45PM -0800, Samantha Atkins wrote: >> A common gamer's graphics card can easily have a thousand or a couple >> thousand cores (mostly VLIW) and memory bandwidth from hell. Total node >> count could run into tens to hundreds thousands, so we're talking >> multiple megacores. > > As you are probably aware those are not general purpose cores. They > cannot run arbitrary algorithms efficiently. 3d graphics accelerators started as a specific type of physical simulation accelerator, which implies massive parallelism -- our physical reality is made that way, so it's not a coincidence. With each generation the architecture became more and more all-purpose, currently culminating in CPUs factoring in GPUs (AMD Fusion) or GPUs factoring in CPUs (nVidia Project Denver). 
You see progress in this paradigm by tracking CUDA (which hides hardware poorly) or the advent of OpenCL (where CPU and GPU are considered as a unity, which is convenient). In many cases extracting maximum performance from GPGPU amounts to optimizing memory accesses. This is due to the fact that the memory is still external (not embedded), nor yet even stacked with through-silicon vias (TSV) atop your cores (but soon). There's the problem of algorithms. People currently are great fans of intricate, complex designs. Which are sequential in principle (though multiple branches can be evaluated concurrently), and map to memory accesses and hardware poorly. The reason we're doing this is because we're monkeys, and are biased that way. Which is ironic, because we *are* an emergent process, made from billions of individual units. In short, complex algorithms are a problem, not a solution. The processes occurring in neural tissue are not complicated. The complexity emerges from state, not transformations upon the state. We have been converging towards the optimal substrate, and we will continue to do so. This is not surprising, because there's just one (or a couple) ways to do it right. Economy and efficiency cannot ignore reality. Not for long. >>> couldn't check one Mersenne prime per second with it or anything, ja? It >>> would be the equivalent of 10 petaflops assuming we have a process that is >>> compatible with massive parallelism? The article doesn't say how many >> Fortunately, every physical process (including cognition) is compatible >> with massive parallelism. Just parcel the problem over a 3d lattice/torus, >> exchange information where adjacent volumes interface through the high-speed >> interconnect. > There is no general parallelization strategy. If there was then taking Yes, there is. In a relativistic universe the quickest way to know what happens next to you is to send signals. Which are limited to c. This is not programming, this is physics. Programming is constrained by physics. The difference between programming and hardware design shrinks. It will be one thing some day, just as biology doesn't make a distinction between the hardware and the software layer. It's all one thing. > advantage of multiple cores maximally would be a solved problem. It is Multiple cores do not work. They fail to scale because shared memory does not exist -- because we're living in a relativistic universe. When it's read-only you can do broadcasting, but when you also write you need to factor in light cones of individual systems, never mind gate delays on top of that. Coherence is an expensive illusion. Which is why threading is a fad, and will be superseded by explicit message passing over shared-nothing asynchronous systems. Yes, people can't deal with billions of asynchronous objects, which is why human design won't produce real intelligence. You have to let the system figure out how to make it work. It is complicated enough, but still feasible for us mere monkeys. > anything but. >> Anyone who has written numerics for MPI recognizes the basic design >> pattern. >> > > Not everything is reducible in ways that lead to those techniques being > generally sufficient. How does your CPU access memory? By sending messages. How is the illusion of cache coherency maintained? By sending messages. How does the Internet work? By sending messages. Don't blame me, I didn't do it.
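If the lattice pattern is unclear, here is a toy version of it -- plain Python standing in for MPI so it runs anywhere; only the shape of the computation matters (shared-nothing chunks, explicit boundary messages), not performance:

# Toy halo exchange on a 1-d ring of shared-nothing workers.
# A stand-in for the MPI design pattern; each worker sees only its own
# chunk plus the boundary cells its neighbours send it.

def local_step(chunk, left_halo, right_halo):
    # Simple three-point diffusion update; neighbours couple only
    # through the halo cells.
    padded = [left_halo] + chunk + [right_halo]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

def simulate(lattice, nworkers=4, steps=10):
    # Assumes len(lattice) is divisible by nworkers; torus topology.
    size = len(lattice) // nworkers
    chunks = [lattice[r * size:(r + 1) * size] for r in range(nworkers)]
    for _ in range(steps):
        # The "messages": each worker receives one cell from each neighbour.
        left = [chunks[(r - 1) % nworkers][-1] for r in range(nworkers)]
        right = [chunks[(r + 1) % nworkers][0] for r in range(nworkers)]
        chunks = [local_step(chunks[r], left[r], right[r])
                  for r in range(nworkers)]
    return [x for c in chunks for x in c]

Scale the ring up to a 3d torus and swap the Python lists for DMA over a fast interconnect, and you have the basic design pattern of every MPI numerics code.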
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 11:50:41 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 12:50:41 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <20110217115041.GQ23560@leitl.org> On Thu, Feb 17, 2011 at 06:59:53AM -0400, Darren Greer wrote: > >Another possibility is that advanced civilizations naturally trend > towards virtual reality, and thus end up leaving a very small > externally detectable footprint. Exploring the endless possibilities Look, what is your energy footprint? 1 kW, more or less? Negligible. Now multiply that by 7 gigamonkeys. Problem? Infinitesimally small energy budgets multiplied by very large numbers are turning stars into FIR blackbodies. And whole galaxies, and clusters, and superclusters. You think that would be easy to miss? > of virtual reality seems potentially a lot more interesting than > crossing tens of thousands of light years of space to try and visit > some lower life form...< > > I had never considered this scenario until I came to Exi and it was > postulated for me. It is the most hopeful compared to the other polar When something is postulated to you it's usually bunk. Novelty and too small a group for peer review pretty much see to that. > opposite scenarios--self-destruction or mature Zen state (with a no poaching > policy) of technological superiority. Alas, self-destruction seems to me to > be the most likely, given the bloody and tragic arc of our history at least. It's less bloody and tragic than bloody stupid. Our collective intelligence seems to approach that of an overnight culture. http://www.fungionline.org.uk/5kinetics/2batch.html -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Thu Feb 17 12:04:51 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 13:04:51 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <20110217103307.GO23560@leitl.org> Message-ID: <20110217120451.GR23560@leitl.org> On Thu, Feb 17, 2011 at 06:47:53AM -0400, Darren Greer wrote: > Well, his success rate is better than Nostradamus. But then again, he's not Trying to play Nostradamus in futurism is a fool's game. You can only lose. > relying on a fickle daemon for results. I think what he has to say about > Watson and the Turing Test is valid, and rather simply put, regardless of You see novelty in what Kurzweil says, yes? > the predictions. Dealing with the complexity of language is a better > indicator of intelligence and closer to passing the test than dealing with The best Turing test is unemployment. When everybody is unemployed you know full human equivalence has been reached. Just define something like LD50 (half unemployment reached) for each individual profession as an arbitrary yardstick for approximate equivalence. Integrating over individual professions will be more difficult, since flooding will put some underwater faster than others. > the purely mathematical game tree complexity of chess.
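Since that yardstick keeps coming up, a toy version of the arithmetic -- every number below is invented purely for illustration:

professions = {
    # profession: (share of workforce, fraction displaced by machines)
    "plumber":   (0.10, 0.05),
    "analyst":   (0.15, 0.55),
    "professor": (0.05, 0.20),
    "artist":    (0.05, 0.10),
    "clerk":     (0.65, 0.60),
}

# Professions past "LD50": machine displacement has crossed one half.
past_ld50 = sorted(p for p, (_, d) in professions.items() if d >= 0.5)

# Integrating over professions: workforce-weighted displacement.
overall = sum(share * d for share, d in professions.values())

print("past LD50:", past_ld50)                 # ['analyst', 'clerk']
print("overall displacement: %.2f" % overall)  # 0.49

The uneven flooding is just the spread in that second column.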
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Thu Feb 17 12:46:15 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 07:46:15 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: <4D5D1897.4030906@lightlink.com> Okay, first: although I understand your position as an Agilista, and your earnest desire to hear about concrete code rather than theory ("I value working code over big ideas"), you must surely acknowledge that in some areas of scientific research and technological development, it is important to work out the theory, or the design, before rushing ahead to the code-writing stage. That is not to say that I don't write code (I spent several years as a software developer, and I continue to write code), but that I believe the problem of building an AGI is, at this point in time, a matter of getting the theory right. We have had over fifty years of AI people rushing into programs without seriously and comprehensively addressing the underlying issues. Perhaps you feel that there are really not that many underlying issues to be dealt with, but after having worked in this field, on and off, for thirty years, it is my position that we need deep understanding above all. Maxwell's equations, remember, were dismissed as useless for anything -- just idle theorizing -- for quite a few years after Maxwell came up with them. Not everything that is of value *must* be accompanied by immediate code that solves a problem. Now, with regard to the papers that I have written, I should explain that they are driven by the very specific approach described in the complex systems paper. That described a methodological imperative: if intelligent systems are complex (in the "complex systems" sense, which is not the "complicated systems", aka space-shuttle-like systems, sense), then we are in a peculiar situation that (I claim) has to be confronted in a very particular way. If it is not confronted in that particular way, we will likely run around in circles getting nowhere -- and it is alarming that the precise way in which this running around in circles would happen bears a remarkable resemblance to what has been happening in AI for fifty years. So, if my reasoning in that paper is correct then the only sensible way to build an AGI is to do some very serious theoretical and tool-building work first. And part of that theoretical work involves a detailed understanding of cognitive psychology AND computer science. Not just a superficial acquaintance with a few psychology ideas, which many people have, but an appreciation for the enormous complexity of cog psych, and an understanding of how people in that field go about their research (because their protocols are very different from those of AI or computer science), and a pretty good grasp of the history of psychology (because there have been many different schools of thought, and some of them, like Behaviorism, contain extremely valuable and subtle lessons). 
With regard to the specific comments I made below about McClelland and Rumelhart, what is going on there is that these guys (and several others) got to a point where the theories in cognitive psychology were making no sense, and so they started thinking in a new way, to try to solve the problem. I can summarize it as "weak constraint satisfaction" or "neurally inspired" but, alas, these things can be interpreted in shallow ways that omit the background context ... and it is the background context that is the most important part of it. In a nutshell, a lot of cognitive psychology makes a lot more sense if it can be re-cast in "constraint" terms. The problem, though, is that the folks who started the PDP (aka connectionist, neural net) revolution in the 1980s could only express this new set of ideas in neural terms. They made some progress, but then just as the train appeared to be gathering momentum it ran out of steam. There were some problems with their approach that could not be solved in a principled way. They had hoped, at the beginning, that they were building a new foundation for cognitive psychology, but something went wrong. What I have done is to think hard about why that collapse occurred, and to come to an understanding about how to get around it. The answer has to do with building two distinct classes of constraint systems: either non-complex, or complex (side note: I will have to refer you to other texts to get the gist of what I mean by that... see my 2007 paper on the subject). The whole PDP/connectionist revolution was predicated on a non-complex approach. I have, in essence, diagnosed that as the problem. Fixing that problem is hard, but that is what I am working on. Unfortunately for you -- wanting to know what is going on with this project -- I have been studiously unprolific about publishing papers. So at this stage of the game all I can do is send you to the papers I have written and ask you to fill in the gaps from your knowledge of cognitive psychology, AI and complex systems. Finally, bear in mind that none of this is relevant to the question of whether other systems, like Watson, are a real advance or just a symptom of a malaise. John Clark has been ranting at me (and others) for more than five years now, so when he pulls the old bait-and-switch trick ("Well, if you think XYZ is flawed, let's see YOUR stinkin' AI then!!") I just smile and tell him to go read my papers. So we only got into this discussion because of that: it has nothing to do with delivering critiques of other systems, whether they contain a million lines of code or not. :-) Watson still is a sleight of hand, IMO, whether my theory sucks or not. ;-) Richard Loosemore Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 6:13 PM, Richard Loosemore wrote: >> Kelly Anderson wrote: >>> Show me the beef! >> So demanding, some people. ;-) > > I wouldn't be so demanding if you acknowledged the good work of > others, even if it is just a "parlor trick". > >> If you have read McClelland and Rumelhart's two-volume "Parallel Distributed >> Processing", > > I have read volume 1 (a long time ago), but not volume 2. > >> and if you have then read my papers, and if you are still so >> much in the dark that the only thing you can say is "I haven't seen anything >> in your papers that rise to the level of computer science" then, well... > > Your papers talk the talk, but they don't walk the walk as far as I > can tell.
There is not a single instance where you say, "And using > this technique we can distinguish pictures of cats from pictures of > dogs" or "This method leads to differentiating between the works of > Bach and Mozart." Or even the ability to answer the question "What do > grasshoppers eat?" > >> (And, in any case, my answer to John Clark was as facetious as his question >> was silly.) > > Sidebar: I have found that humor and facetiousness don't work well on > mailing lists. > >> At this stage, what you can get is a general picture of the background >> theory. That is readily obtainable if you have a good knowledge of (a) >> computer science, > > Check. > >> (b) cognitive psychology > > Eh, so so. > >> and (c) complex systems. > > Like the space shuttle? > >> It also >> helps, as I say, to be familiar with what was going on in those PDP books. > > Like I said, I read the first volume of that book a long time ago (I > think I have a copy downstairs), nevertheless, I have a decent grasp > of neural networks, relaxation, simulated annealing, pattern > recognition, multidimensional search spaces, statistical and Bayesian > approaches, computer vision, character recognition (published), search > trees in traditional AI and massively parallel architectures. I'm not > entirely unaware of various theories of philosophy and religion. I am > weak in natural language processing, traditional databases, and sound > processing. > >> Do you have a fairly detailed knowledge of all three of these areas? > > Fair to middling, although my knowledge is a little outdated. I'm not > tremendously worried about that since I used a text book written in > the late 1950s when I took pattern recognition in 1986 and you refer > to a book published in the late 1980s... I kind of get the idea that > progress is fairly slow in these areas except that now we have better > hardware on which to run the old algorithms. > >> Do you understand where McClelland and Rumelhart were coming from when they >> talked about the relaxation of weak constraints, and about how a lot of >> cognition seemed to make more sense when couched in those terms? > > Yes, this makes a lot of sense. I don't see how it relates directly to > your work. I actually like what you have to say about short vs. long > term memory, I think that's a useful way of looking at things. The > short term or "working" memory that uses symbols vs the long term > memory that work in a more subconscious way is very interesting stuff > to ponder. > >> Do you >> also follow the line of reasoning that interprets M & R's subsequent pursuit >> of non-complex models as a mistake? > > Afraid you lose me here. > >> And the implication that there is a >> class of systems that are as yet unexplored, doing what they did but using a >> complex approach? > > Still lost, but willing to listen. > >> Put all these pieces together and we have the basis for a dialog. >> >> But ... demanding a finished AGI as an essential precondition for behaving >> in a mature way toward the work I have already published...? I don't think >> so. :-) > > If I have treated you in an immature way, I apologize. I just think > arguing that four years of work and millions of dollars worth of > research being classified as "trivial" when 10,000,000 lines of > actually working code is not a strong position to come from. > > I am an Agilista. I value working code over big ideas. 
So while I > acknowledge that you have some interesting big ideas, it escapes me > how you are going to bridge the gap to achieve a notable result. Maybe > it is clear to you, but if it is, you should publish something a > little more concrete, IMHO. From pharos at gmail.com Thu Feb 17 12:38:33 2011 From: pharos at gmail.com (BillK) Date: Thu, 17 Feb 2011 12:38:33 +0000 Subject: [ExI] Kurzweil On Watson In-Reply-To: <20110217120451.GR23560@leitl.org> References: <20110217103307.GO23560@leitl.org> <20110217120451.GR23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 12:04 PM, Eugen Leitl wrote: > The best Turing test is unemployment. When everybody is unemployed > you know full human equivalence has been reached. Just define > something like LD50 (half unemployment reached) for each individual > profession as an arbitrary yardstick for approximate equivalence. > Integrating over individual professions > will be more difficult, since flooding will put some underwater > faster than others. > I think we need a better test than unemployment. The US has got to ~25% unemployment just by moving most of the wealth to the top 1% and using slave labour in China. Robots won't be used until they are cheaper than slave labour and humans can produce a lot of slave labour units. BillK From jonkc at bellsouth.net Thu Feb 17 14:23:48 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 09:23:48 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C2DA9.9050804@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> Message-ID: <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> On Feb 16, 2011, at 3:03 PM, Richard Loosemore wrote: >> So I repeat my previous request, please tell us all about the wonderful AI program that you have written that does things even more intelligently than Watson. > > Done: read my papers. I'm not asking for more endless philosophy, I'm asking for programs. I'm asking you to tell us what you have taught a computer to do that caused it to behave anywhere near as intelligently as Watson; a program you claim to have contempt for as well as for its creators. But to be honest I can't help but wonder if contempt is the right word and if there might be a better one. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eugen at leitl.org Thu Feb 17 14:41:09 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 15:41:09 +0100 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <20110217103307.GO23560@leitl.org> <20110217120451.GR23560@leitl.org> Message-ID: <20110217144109.GV23560@leitl.org> On Thu, Feb 17, 2011 at 12:38:33PM +0000, BillK wrote: > I think we need a better test than unemployment. It's not easy to find a better benchmark than what people are willing to pay other people for. > The US has got to ~25% unemployment just by moving most of the wealth > to the top 1% and using slave labour in China. What makes you think I was talking about just US, or China? Have to integrate over the entire planet, over all professions. > Robots won't be used until they are cheaper than slave labour and > humans can produce a lot of slave labour units. I meant the entire envelope of human professions. Artist, professor, CEO, analyst, plumber. 
It's clear that some niches can be more easily filled than others. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Thu Feb 17 15:18:12 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 10:18:12 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> Message-ID: <4D5D3C34.7080305@lightlink.com> John Clark wrote: > On Feb 16, 2011, at 3:03 PM, Richard Loosemore wrote: > >>> So I repeat my previous request, please tell us all about the >>> wonderful AI program that you have written that does things even more >>> intelligently than Watson. >> >> Done: read my papers. > > I'm not asking for more endless philosophy, I'm asking for programs. I'm > asking you to tell us what you have taught a computer to do that caused > it to behave anywhere near as intelligently as Watson; a program you > claim to have contempt for as well as for its creators. But to be honest > I can't help but wonder if contempt is the right word and if there might > be a better one. Read parallel post addressed to Kelly Anderson. Richard Loosemore From jonkc at bellsouth.net Thu Feb 17 15:42:36 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 10:42:36 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5D3C34.7080305@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: On Feb 17, 2011, at 10:18 AM, Richard Loosemore wrote: > > Read parallel post addressed to Kelly Anderson. Why? Did the parallel post addressed to Kelly Anderson teach a computer to behave anywhere near as intelligently as Watson? If so I am delighted but I really don't see why I need to read it, I didn't need to read Watson's source code to be enormously impressed by it. The truth is I have read the source code of very few human beings, but I still think some of them are intelligent. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spike66 at att.net Thu Feb 17 16:06:31 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:06:31 -0800 Subject: [ExI] Watson on NOVA In-Reply-To: References: <008301cbcb0b$1c3dc4c0$54b94e40$@att.net> <4D58093D.9070306@lightlink.com> <4D592D10.6010404@lightlink.com> <4D5A8710.2030403@lightlink.com> <008c01cbcd31$8805bc80$98113580$@att.net> Message-ID: <004e01cbcebc$a46f4440$ed4dccc0$@att.net> bounces at lists.extropy.org] On Behalf Of Kelly Anderson Subject: Re: [ExI] Watson on NOVA On Tue, Feb 15, 2011 at 9:58 AM, spike wrote: >> Ja, but when you say "research" in reference to AI, keep in mind the > actual goal isn't the creation of AGI, but rather the creation of AGI > that doesn't kill us. >Why is that the goal? As extropians, isn't the idea to reduce entropy? We need AGI to figure out how to do nanotech to figure out how to upload by mapping the physical configuration of our brains. If they can do it while we are alive, that would be great. If the brain needs to be frozen, well, that's better than the alternative. >But if humans can create the AI that creates the replicating nanobots, then in a sense it isn't out of human reach... Ja. I think AGI is the best and possibly only path to replicating nanotech. ...> >On the other hand, if we are successful at doing AI wrong, we are all > doomed right now. It will decide it doesn't need us, or just sees no > reason why we are useful for anything. >And that is a bad thing exactly how? If we do AGI wrong, and it has no empathy with humans, it may decide to convert *all* the available metals in the solar system and use all of it to play chess or search for Mersenne primes. I love both those things, but if every atom of the solar system is set to doing that, it would be a bad thing. >> ... But I am reluctant to risk my children's and grandchildren's 100 years of meat world existence on just getting AI going as quickly as possible. >Honestly, I don't think we have much of a choice about when AI gets going. We can all make choices as individuals, but I see it as kind of inevitable. Ray K seems to have this mindset as well, so I feel like I'm in pretty good company on this one. No sir, I disagree with even Ray K. A fatalistic attitude is dangerous in this context. We must do whatever we can to see to it we do have a choice about when AI gets going. >> ...Nuclear bombs preceded nuclear power plants. >Yes, and many of the most interesting AI applications are no doubt military in nature. -Kelly If true AGI is used militarily, then all humanity is finished, for eventually the weaponized AGI will find friend and foe indistinguishable. spike From spike66 at att.net Thu Feb 17 15:52:46 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 07:52:46 -0800 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> Message-ID: <004d01cbceba$b8f11f80$2ad35e80$@att.net> ... Behalf Of Kelly Anderson Subject: Re: [ExI] Lethal future was Watson on NOVA On Wed, Feb 16, 2011 at 4:36 PM, Samantha Atkins wrote: >> My theory is that almost no evolved intelligent species meets the >> challenge of overcoming its evolved limitations fast enough to cope >> successfully with accelerating technological change... >Another possibility is that advanced civilizations naturally trend towards virtual reality, and thus end up leaving a very small externally detectable footprint.
Exploring the endless possibilities of virtual reality seems potentially a lot more interesting than crossing tens of thousands of light years of space to try and visit some lower life form...-Kelly This is my favorite theory. Technological civilizations figure out AGI, then nanotech, then they put all the metal in their solar system into computronium, at which time *they don't care* what happens at other stars, because the information takes too long to get there; the latency is insurmountably high. It is analogous to why we don't go searching Outer Elbonia to try to understand whatever technology they have developed to twang arrows at caribou; we don't care how they do that. Anything they have, we can do better. spike From rpwl at lightlink.com Thu Feb 17 16:24:11 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 11:24:11 -0500 Subject: [ExI] A different question about Watson In-Reply-To: <4D5D3C34.7080305@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: <4D5D4BAB.4070102@lightlink.com> I am a little puzzled about one thing: did Watson get its questions from doing speech recognition, or did someone type the questions in ahead of time, and press a button to send the text to Watson at the same time that Alex spoke it? Only reason I ask is that Ben Goertzel, in his H+ essay on the subject: http://hplusmagazine.com/2011/02/17/watson-supercharged-search-engine-or-prototype-robot-overlord/ gives some examples of Jeopardy questions: > "Whinese" is a language they use on long car trips > > The motto of this 1904-1914 engineering project was "The land > divided, the world united" > > Built at a cost of more than $200 million, it stretches from > Victoria, B.C. to St. John's, Newfoundland > > Jay Leno on July 8, 2010: The "nominations were announced today... there's no 'me' in" this award ... and these questions contain some interestingly useful structure in their written form. I am thinking mostly of the very helpful quotation marks. I suspect that there was no speech recognition, and that Watson got direct text, but perhaps someone who actually saw the shows can tell if this is the case? Richard Loosemore From eugen at leitl.org Thu Feb 17 16:26:57 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:26:57 +0100 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5C7657.6070405@lightlink.com> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> Message-ID: <20110217162657.GB23560@leitl.org> On Wed, Feb 16, 2011 at 08:13:59PM -0500, Richard Loosemore wrote: > So demanding, some people. ;-) > > If you have read McClelland and Rumelhart's two-volume "Parallel I've skimmed PDP when it was new. I have not read your publications because I've asked for a list, here, twice, nicely, and no reply was forthcoming. I presume http://richardloosemore.com/papers are yours?
> Distributed Processing", and if you have then read my papers, and if you > are still so much in the dark that the only thing you can say is "I > haven't seen anything in your papers that rise to the level of computer > science" then, well... You know, I could rattle off a list of books (far more relevant) you have no clue of. It's a pretty stupid game, so let's not play it. > (And, in any case, my answer to John Clark was as facetious as his > question was silly.) > > At this stage, what you can get is a general picture of the background > theory. That is readily obtainable if you have a good knowledge of (a) > computer science, (b) cognitive psychology and (c) complex systems. It I don't see how cognitive psychology is relevant. It's good that complex systems makes your list. > also helps, as I say, to be familiar with what was going on in those PDP > books. > > Do you have a fairly detailed knowledge of all three of these areas? Are you always an arrogant blowhard, Richard? > Do you understand where McClelland and Rumelhart were coming from when > they talked about the relaxation of weak constraints, and about how a > lot of cognition seemed to make more sense when couched in those terms? > Do you also follow the line of reasoning that interprets M & R's > subsequent pursuit of non-complex models as a mistake? And the > implication that there is a class of systems that are as yet unexplored, > doing what they did but using a complex approach? > > Put all these pieces together and we have the basis for a dialog. > > But ... demanding a finished AGI as an essential precondition for > behaving in a mature way toward the work I have already published...? I > don't think so. :-) I think two things apply: you haven't built a lot of systems that produce impressive results, and you spend a lot of time on this list, which means you don't have a lot of quality time for work, whatever it is. I've just skimmed your papers at maximum speed, and the preliminary impression is not good. I'll reserve my opinion until I can read them. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From lubkin at unreasonable.com Thu Feb 17 16:26:49 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Thu, 17 Feb 2011 11:26:49 -0500 Subject: [ExI] watson on jeopardy In-Reply-To: References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: <201102171627.p1HGRKKS000848@andromeda.ziaspace.com> Darren wrote: >Jeopardy? For each question there is only one >right answer, and therefore only one right move Aside, at least once a game there's a question with more than one valid answer. (Sometimes it didn't come up but I spotted it anyway.) Contestants must proceed based on Trebek's initial ruling. If the research team confirms that the answer given was actually correct, scores are adjusted. I have seen a contestant brought back another day when they could have plausibly won the game had their answer been deemed correct. There's a similar flaw in many kinds of test-taking, e.g., the Miller Analogies Test. A is to B as C is to ___, (1) D (2) E (3) F (4) G. E is the only answer accepted as correct. But smart-you sees an interpretation whereby it's G. What should be done is provide space with each question to optionally provide a rationale. If the expected answer is given, accept it.
If a different answer is given, see if there's a rationale and it makes sense. (Still won't help if the test scorer is a dolt who doesn't understand your rationale, but it's an improvement.) Otherwise both Jeopardy and the tests become guessing games. Not what's the right answer, but what's the one they would have thought of. -- David. Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut From eugen at leitl.org Thu Feb 17 16:32:32 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:32:32 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <20110217163232.GC23560@leitl.org> On Wed, Feb 16, 2011 at 06:21:26PM -0700, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: > > On 02/16/2011 10:15 AM, spike wrote: > > Not the same problem domain or even all that close. Can you turn it into a > > really good chatbot? Maybe, maybe not depending on your standard of "good". > > But that wouldn't be very exciting. Very expensive way to keep folks in > > the nursing home entertained. > > Samantha, are you familiar with Moore's law? Let's assume for purposes Kelly, do you think 3d integration will be just-ready when CMOS runs into a wall? Kelly, do you think that Moore is equivalent to system performance? You sure about that? > of discussion that you are 30, that you will be in the nursing home > when you're 70. That means Watson level functionality will cost around > $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get > it in a box of cracker jacks. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From spike66 at att.net Thu Feb 17 16:22:18 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:22:18 -0800 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: Message-ID: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> On Behalf Of Darren Greer >... I also predicted that when that happened we would either think better of computer intelligence, worse of human thinking, or worse of chess, and that if history was a guide, we would downgrade chess." d. That sounds like a good prediction, but it hasn't worked that way really. Computers are better than all humans now, even the commercial versions that run on laptop computers. Human vs human chess is still played, the prize funds are higher than ever, the highest rated human (Carlsen) is dating a supermodel and has been hired to sell clothing for G-Star. This may be a special case however, for Carlsen may be the first male chess grandmaster in history who is not an ugly geek. Odd, for it seems about 80% of the top female chess players are knockout gorgeous, but we lads at that level are 80% radioactive ugly. Actually Darren, you are a valuable one to judge this contest. Scroll all the way down in this link and compare: http://www.chessbase.com/newsdetail.asp?newsid=7014 spike -------------- next part -------------- An HTML attachment was scrubbed... URL:
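David Lubkin's rationale protocol a few messages above is simple enough to state as code. A minimal sketch in Python, with a hypothetical plausible() standing in for the human re-grading step (every name and question here is invented for illustration, not taken from any real testing system):

    # Toy grader for the rationale protocol: accept the keyed answer outright,
    # otherwise fall back to judging the test-taker's optional rationale.
    def grade(question, answer, rationale=None):
        if answer == question["expected"]:
            return True                   # the keyed answer needs no defense
        if rationale is None:
            return False                  # unkeyed answer, no argument offered
        return plausible(question, answer, rationale)

    def plausible(question, answer, rationale):
        # Stand-in for the human re-grader; here we only demand that the
        # rationale actually mention the answer it is defending.
        return answer.lower() in rationale.lower()

    q = {"prompt": "A is to B as C is to ___", "expected": "E"}
    print(grade(q, "E"))                             # True
    print(grade(q, "G"))                             # False: no rationale given
    print(grade(q, "G", "read as ratios, G fits"))   # True: rationale considered

The point of the sketch is only that the fallback path exists at all; as Lubkin notes, it still fails if the re-grader is a dolt who doesn't follow the rationale.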
From eugen at leitl.org Thu Feb 17 16:43:42 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 17:43:42 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <004d01cbceba$b8f11f80$2ad35e80$@att.net> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> Message-ID: <20110217164342.GD23560@leitl.org> On Thu, Feb 17, 2011 at 07:52:46AM -0800, spike wrote: > >Another possibility is that advanced civilizations naturally trend towards > virtual reality, and thus end up leaving a very small externally detectable > footprint. Exploring the endless possibilities of virtual reality seems > potentially a lot more interesting than crossing tens of thousands of light > years of space to try and visit some lower life form...-Kelly > > > > This is my favorite theory. Technological civilizations figure out AGI, > then nanotech, then they put all the metal in their solar system into > computronium, at which time *they don't care* what happens at other stars, Just as we never cared what was on the other continents. America was never colonized. Spike is as mythical as a unicorn. Wait, the first pre-life form never made it out of the first puddle, or hot smoker, or wherever it was. > because the information takes too long to get there; the latency is > insurmountably high. It is analogous to why we don't go searching Outer > Elbonia to try to understand whatever technology they have developed to So why was the land of mud and misogyny ever settled? > twang arrows at caribou; we don't care how they do that. Anything they > have, we can do better. Why do people have children? Do children forever remain in their home? Why was America colonized? Why do we have 500 volunteers for a one-way mission to Mars? What are pioneer species and what is ecological succession? -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From jonkc at bellsouth.net Thu Feb 17 16:36:59 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 11:36:59 -0500 Subject: [ExI] A different question about Watson In-Reply-To: <4D5D4BAB.4070102@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <4D5D4BAB.4070102@lightlink.com> Message-ID: <0292DCA7-C886-4608-A4F8-027F5722D0E8@bellsouth.net> On Feb 17, 2011, at 11:24 AM, Richard Loosemore wrote: > I suspect that there was no speech recognition, and that Watson got direct text, They said on the first show that he did. How is that important? > perhaps someone who actually saw the shows can tell if this is the case I would humbly suggest that it might be wise to see what Watson actually did before you proclaim it trivial. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
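Richard's observation about the written clues is easy to demonstrate mechanically: quoted spans can be pulled out of clue text with a few lines, structure that a speech transcript would not hand you for free. A toy sketch (the clue strings come from the Goertzel examples quoted earlier; nothing here claims to be how Watson's actual parsers work):

    import re

    # Extract quoted spans from written Jeopardy clues -- cheap structure
    # that is present in text but absent from raw speech.
    clues = [
        '"Whinese" is a language they use on long car trips',
        'The motto of this 1904-1914 engineering project was '
        '"The land divided, the world united"',
    ]
    for clue in clues:
        print(re.findall(r'"([^"]*)"', clue))
    # ['Whinese']
    # ['The land divided, the world united']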
URL: From spike66 at att.net Thu Feb 17 16:26:11 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 08:26:11 -0800 Subject: [ExI] watson on jeopardy In-Reply-To: References: <002d01cbce64$16fc3530$44f49f90$@att.net> Message-ID: <005501cbcebf$63c855a0$2b5900e0$@att.net> . On Behalf Of Darren Greer . I just read the Kurzweil article and he points out that Watson is much closer to being able to pass the Turing test than a chess playing computer as it is dealing with human language. And so based on that criteria, it is a step forward no matter how you slice it. d. Chess programs have already passed the Turing test in chess, a long time ago. So Rybka wins the Turing test at chess, Watson passes or is getting close in Jeopardy, neither can pass at general language. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Thu Feb 17 16:57:17 2011 From: pharos at gmail.com (BillK) Date: Thu, 17 Feb 2011 16:57:17 +0000 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110217164342.GD23560@leitl.org> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> <20110217164342.GD23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 4:43 PM, Eugen Leitl wrote: > Why do people have children? Do children forever remain in > their home? > > Why was America colonized? Why do we have 500 volunteers for > a one-way mission to Mars? > > What are pioneer species and what is ecological succession? > > Or, alternatively, why don't people have children? Viz. the collapse in first world birth rates. You are comparing people with miserable, short lifespans to very long-lived people with every wish fulfilled by nano-Santa in virtual reality. Apples and Oranges. BillK From eugen at leitl.org Thu Feb 17 17:03:44 2011 From: eugen at leitl.org (Eugen Leitl) Date: Thu, 17 Feb 2011 18:03:44 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <004d01cbceba$b8f11f80$2ad35e80$@att.net> <20110217164342.GD23560@leitl.org> Message-ID: <20110217170344.GE23560@leitl.org> On Thu, Feb 17, 2011 at 04:57:17PM +0000, BillK wrote: > Or, alternatively, why don't people have children? Why people? Take all the species into account. > Viz. the collapse in first world birth rates. The atheists, you mean. The faithful are breeding like rabbits. Fulfilling the will of the Lord. > You are comparing people with miserable, short lifespans to very > long-lived people with every wish fulfilled by nano-Santa in virtual Extremely short-lived information patterns, some of the complexity of viroids. > reality. > > Apples and Oranges. You're so right, it ain't even funny. 
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From alfio.puglisi at gmail.com Thu Feb 17 17:52:13 2011 From: alfio.puglisi at gmail.com (Alfio Puglisi) Date: Thu, 17 Feb 2011 18:52:13 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: On Wed, Feb 16, 2011 at 6:08 PM, Keith Henson wrote: > On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > > > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > > > >> I'm still pissed at Sagan for his hubris in sending a message to the > >> stars without asking the rest of us first, in blithe certainty that "of > >> course" any recipient would have evolved beyond aggression and > >> xenophobia. > > > > The real reasons if that they would be there you'd be dead, Jim. > > In fact, if any alien picks up the transmission (chance: very close > > to zero) they'd better be farther advanced than us, and on a > > faster track. I hope it for them. > > I have been mulling this over for decades. > > We look out into the Universe and don't (so far) see or hear any > evidence of technophilic civilization. > > I see only two possibilities: > > 1) Technophilics are so rare that there are no others in our light cone. > > 2) Or if they are relatively common something wipes them *all* out, > or, if not wiped out, they don't do anything which indicates their > presence. > There are a couple of solutions that basically deny that the rest of the Universe is real: 3) the simulation argument 4) you're a Boltzmann brain Alfio -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Feb 17 18:08:22 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 13:08:22 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <20110217162657.GB23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <20110217162657.GB23560@leitl.org> Message-ID: <4D5D6416.90300@lightlink.com> Eugen Leitl wrote: > I have not read your publications > because I've asked for a list, here, twice, nicely, and no reply was > forthcoming. > > I presume http://richardloosemore.com/papers are yours? Indeed, those are mine. I must have missed your request for a list: did I not direct you to the web page? My mistake, I'm sure. Richard Loosemore From spike66 at att.net Thu Feb 17 18:11:19 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 10:11:19 -0800 Subject: [ExI] watson on jeopardy In-Reply-To: <005501cbcebf$63c855a0$2b5900e0$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> <005501cbcebf$63c855a0$2b5900e0$@att.net> Message-ID: <009c01cbcece$13cc6be0$3b6543a0$@att.net> I have been watching the traffic on the topic of Watson. The information is mostly relevant to transhumanism, interesting, mostly intelligently written, and the participants are treating each other with respect for the most part. I propose we extend the open season on that topic for a few more days. 
Papal decree: until about Sunday midnight US west coast time, if your comment specifically has to do with Watson, the Jeopardy challenge, fresh AGI material or some direct spinoff of that topic that can legitimately be subject lined "Watson on [*]" then go ahead and post away on that topic, and don't worry about counting it toward the voluntary ~five post per day limit. This has been fun to read this stuff. Play ball! {8-] spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Feb 17 18:50:17 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 17 Feb 2011 13:50:17 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <20110217162657.GB23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <20110217162657.GB23560@leitl.org> Message-ID: <4D5D6DE9.2060406@lightlink.com> Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 08:13:59PM -0500, Richard Loosemore wrote: >> Distributed Processing", and if you have then read my papers, and if you >> are still so much in the dark that the only thing you can say is "I >> haven't seen anything in your papers that rise to the level of computer >> science" then, well... > > You know, I could rattle off a list of books (far more relevant) > you have no clue of. It's a pretty stupid game, so let's not play it. If you actually read the thread you will see that nobody was playing that "pretty stupid game", before you started to do so in the above sentence. ;-) You have drastically, utterly failed to understand or read the context. As I will explain.... I was addressing an implicit question from Kelly Anderson about how anyone could make sense of my *own* papers, and I pointed to those two books because I am claiming that they represent a critical hinge point in the history of cognitive science and AI, and my work is best understood as a path-not-taken from that hinge point. If you think you know my own research better than I do, and can "rattle off a list of far more relevant books" that would help someone understand the context that my work comes from, by all means do so. Granted, there is a problem there. Quite a few computer science people read those McClelland and Rumelhart books looking only at the NN algorithms, but without knowing the cognitive psychology history that came before the PDP books. The problem is that my work springs not from the superficial NN stuff but from that much deeper history. That fact may cause some misunderstanding. In order to gauge the appropriate level at which to respond to Kelly's concerns, therefore, it mattered a good deal whether he was a cognitive psychologist or an AI person, and to that end I went on to explain that and ask some questions..... >> At this stage, what you can get is a general picture of the background >> theory. That is readily obtainable if you have a good knowledge of (a) >> computer science, (b) cognitive psychology and (c) complex systems. It > > I don't see how cognitive psychology is relevant. It's good that > complex systems makes your list. Again, I mentioned cognitive psychology only because I was responding to Kelly's comment about the fact that he read my papers but could not see in them the things I had hoped he would. 
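For readers meeting "relaxation of weak constraints" for the first time: the idea can be shown with a toy settling network in which each weighted link is a soft constraint, and units flip state whenever that lowers the total violation. This is a generic Hopfield-style sketch with invented weights and labels, not Richard's system and not McClelland and Rumelhart's actual models:

    # Four binary hypotheses; positive weight = "tend to agree",
    # negative = "tend to disagree". No single constraint is decisive;
    # the network settles into the state that violates the least weight.
    units = ["line", "edge", "letter-A", "letter-H"]
    w = {(0, 1): 1.0, (1, 2): 0.8, (1, 3): 0.8, (2, 3): -1.5}  # A, H compete

    def energy(state):
        # Lower energy = less total violated constraint weight.
        return -sum(wij * state[i] * state[j] for (i, j), wij in w.items())

    state = [1, 1, 1, 1]            # start with everything asserted
    changed = True
    while changed:                  # asynchronous relaxation until stable
        changed = False
        for i in range(len(state)):
            trial = state[:]
            trial[i] = -state[i]
            if energy(trial) < energy(state):
                state, changed = trial, True
    print(dict(zip(units, state)))  # the letter-H hypothesis gets suppressed

No unit "decides" anything on its own; the answer emerges from many weak pressures settling jointly, which is the flavor of computation the PDP books were after.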
I was in the process of explaining the background to my own work. You seem to have interpreted my reference to those areas as something else entirely. Cognitive psychology is critical to an understanding of my approach to AGI. Without an understanding of that field, it might be hard to see why the papers I wrote are outlining a theory of AGI. >> also helps, as I say, to be familiar with what was going on in those PDP >> books. >> >> Do you have a fairly detailed knowledge of all three of these areas? > > Are you always an arrogant blowhard, Richard? Do you always make comments like these without having read the messages that came just before the one you are responding to? To repeat, I was asking the question of Kelly because it was directly relevant to his own comments about my papers. I needed to get a context. Kelly responded politely and factually. You, on the other hand, are an onlooker who the question was not directed at, but you feel inclined to step in, misinterpret the context, and start using comments like "arrogant blowhard". (... the kind of language that, I might point out, has been used as grounds for putting people on moderation! ;-) ). >> Do you understand where McClelland and Rumelhart were coming from when >> they talked about the relaxation of weak constraints, and about how a >> lot of cognition seemed to make more sense when couched in those terms? >> Do you also follow the line of reasoning that interprets M & R's >> subsequent pursuit of non-complex models as a mistake? And the >> implication that there is a class of systems that are as yet unexplored, >> doing what they did but using a complex approach? >> >> Put all these pieces together and we have the basis for a dialog. >> >> But ... demanding a finished AGI as an essential precondition for >> behaving in a mature way toward the work I have already published...? I >> don't think so. :-) > > I think two things apply: you haven't built a lot of systems that > produce impressive results, and you spend a lot of time on this list, > which means you don't have a lot of quality time for work, > whatever it is. > > I've just skimmed your papers at maximum speed, and the preliminary impression > is not good. I'll reserve my opinion until I can read them. Sadly, I can tell you in advance that your opinion will be of no value. :-( Richard Loosemore From sjatkins at mac.com Thu Feb 17 19:29:40 2011 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 17 Feb 2011 11:29:40 -0800 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> Message-ID: <4D5D7724.8060502@mac.com> On 02/16/2011 05:21 PM, Kelly Anderson wrote: > On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: >> On 02/16/2011 10:15 AM, spike wrote: >> Not the same problem domain or even all that close. Can you turn it into a >> really good chatbot? Maybe, maybe not depending on your standard of "good". >> But that wouldn't be very exciting. Very expensive way to keep folks in >> the nursing home entertained. > Samantha, are you familiar with Moore's law? No, gosh, never heard of it before. :P > Let's assume for purposes > of discussion that you are 30, that you will be in the nursing home > when you're 70.
That means Watson level functionality will cost around > $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get > it in a box of cracker jacks. Moore's Law is not enough. You need much better algorithmic approaches and in some cases any workable algorithm at all. There are algorithms that have improved enough that running the modern version on a 1980 PC outperforms running the 1980 version on a supercomputer today. Moore's Law is about hardware. Software has notoriously failed to keep pace. For many tasks we don't have vetted algorithms at all yet or a clear idea of how to achieve the desired results. - samantha From msd001 at gmail.com Thu Feb 17 20:16:35 2011 From: msd001 at gmail.com (Mike Dougherty) Date: Thu, 17 Feb 2011 15:16:35 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: 2011/2/17 John Clark : > source code to be enormously impressed by it. The truth is I have read > the source code of very few human beings, but I still think some of them are > intelligent. Bullshit. John Clark has given evidence of the belief that only John Clark is intelligent. :) From jonkc at bellsouth.net Thu Feb 17 20:56:44 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 15:56:44 -0500 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: The cover story of the current issue of Time magazine is entitled "2045: The Year Man Becomes Immortal", it's about Ray Kurzweil and the singularity: http://www.time.com/time/health/article/0,8599,2048138,00.html John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 21:08:16 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 17:08:16 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> Message-ID: I kinda like Zhigalko but then I like the thin, intense type. Carlsen is way too basketball A-team for me (memories of getting beat up in high school), but yes, I can see how the novelty of a good-looking jock/chess master would turn on G.Q. and super models. d. 2011/2/17 spike > > On Behalf Of Darren Greer > > >... I also predicted that when that happened we would either think better > of computer intelligence, worse of human thinking, or worse of chess, and > that if history was a guide, we would downgrade chess." d. > > That sounds like a good prediction, but it hasn't worked that way really. > Computers are better than all humans now, even the commercial versions that > run on laptop computers.
Human vs human chess is still played, the prize > funds are higher than ever, the highest rated human (Carlsen) is dating a > supermodel and has been hired to sell clothing for G-Star. This may be a special case however, for Carlsen may be the first male chess > grandmaster in history who is not an ugly geek. Odd, for it seems about 80% > of the top female chess players are knockout gorgeous, but we lads at that > level are 80% radioactive ugly. > > Actually Darren, you are a valuable one to judge this contest. > > Scroll all the way down in this link and compare: > > http://www.chessbase.com/newsdetail.asp?newsid=7014 > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Feb 17 20:43:20 2011 From: jonkc at bellsouth.net (John Clark) Date: Thu, 17 Feb 2011 15:43:20 -0500 Subject: [ExI] Watson Jeopardy battle on the net In-Reply-To: <4D5D7724.8060502@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <4D5D7724.8060502@mac.com> Message-ID: <08C7DC57-6358-4C59-B5E6-7233F0E061DA@bellsouth.net> As far as I know the entire 90 minute Watson Jeopardy battle is not on the net yet, but there is an interview with the two defeated human ex champions: http://abcnews.go.com/Technology/video/jeopardy-champs-battling-watson-discuss-challenge-12931204 John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 21:29:39 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 17:29:39 -0400 Subject: [ExI] watson on jeopardy In-Reply-To: <005501cbcebf$63c855a0$2b5900e0$@att.net> References: <002d01cbce64$16fc3530$44f49f90$@att.net> <005501cbcebf$63c855a0$2b5900e0$@att.net> Message-ID: >Watson passes or is getting close in Jeopardy, neither can pass at general language.< Yes, I meant, and I guess Kurzweil meant, that Watson is one step closer to passing the general language test, though he and computers in general may still be very far away. Because it isn't just a matter of the computer finding the right answer to a direct question -- he has to place the words in context first and then hunt for the answer. The example given by one of the programmers was if someone 'runs' down the street and someone else 'runs' for president, Watson has to be able to sort out which meaning is intended before he can begin to word associate. And this ability moves Watson closer to passing the general language test than a computer has ever been before, does it not? I think one of the more interesting aspects of this Watson discussion is that we as a group are very focussed on where we want to be as opposed to where we actually are. I think it also is interesting that many of the things Watson does we do as thinking machines as well. We don't understand the programming, or the platform. And we have only the basest understanding of the hardware.
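Darren's 'runs' example is, in miniature, the word-sense disambiguation problem. The oldest trick is to score each candidate sense by its word overlap with the surrounding sentence, roughly in the spirit of the Lesk algorithm; the two-sense inventory below is invented for illustration, and this is emphatically not a description of how Watson does it:

    # Pick the sense whose signature words best overlap the sentence.
    SENSES = {
        "move-quickly": {"street", "fast", "race", "legs", "down"},
        "seek-office": {"president", "election", "campaign", "votes"},
    }

    def disambiguate(sentence):
        context = set(sentence.lower().replace(".", "").split())
        return max(SENSES, key=lambda s: len(SENSES[s] & context))

    print(disambiguate("He runs down the street"))  # move-quickly
    print(disambiguate("She runs for president"))   # seek-office

Real systems use vastly richer evidence than a hand-built signature list, but the shape of the computation -- let context votes pick the sense before anything else happens -- is the same.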
But I know when presented with a question I put the words in context and my brain begins to associate. Other than thinking in image as well as language and numbers, I don't understand how Watson, in this one area at least, is radically different from me. Except he has a far greater knowledge base of raw trivia stored in accessible 'cells.' d. 2011/2/17 spike > > On Behalf Of Darren Greer > > >... I just read the Kurzweil article and he points out that Watson is much > closer to being able to pass the Turing test than a chess playing computer > as it is dealing with human language. And so based on that criteria, it is a > step forward no matter how you slice it. > > d. > > Chess programs have already passed the Turing test in chess, a long time > ago. So Rybka wins the Turing test at chess, Watson passes or is getting > close in Jeopardy, neither can pass at general language. > > spike > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Thu Feb 17 22:21:42 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 14:21:42 -0800 Subject: [ExI] Kurzweil On Watson In-Reply-To: References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> Message-ID: <004301cbcef1$0e98b480$2bca1d80$@att.net> On Behalf Of Darren Greer Subject: Re: [ExI] Kurzweil On Watson >...I kinda like Zhigalko but then I like the thin, intense type. Carlsen is way too basketball A-team for me (memories of getting beat up in high school), but yes, I can see how the novelty of a good-looking jock/chess master would turn on G.Q. and super models. d. 2011/2/17 spike Well sure, but in any case, my point is we male chess players as a rule are hurting ugly. But that Anna Sharevich, oh my goodness. That stunning creature is enough to make a gay man straight. She is enough to make a straight woman gay. http://www.chessbase.com/newsdetail.asp?newsid=7014 And if that isn't enough of a chess babe, check this! http://en.wikipedia.org/wiki/Alexandra_Kosteniuk And this! http://en.wikipedia.org/wiki/Tatiana_Kosintseva OK this is sufficiently non-Watson I will count that against my total for today. {8^D spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Feb 17 22:28:32 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 17 Feb 2011 18:28:32 -0400 Subject: [ExI] Kurzweil On Watson In-Reply-To: <004301cbcef1$0e98b480$2bca1d80$@att.net> References: <005001cbcebe$d8e3ffc0$8aabff40$@att.net> <004301cbcef1$0e98b480$2bca1d80$@att.net> Message-ID: >Well sure, but in any case, my point is we male chess players as a rule are hurting ugly< Was trying to be tactful and circumspect, but yes, now that you mention it -- that particular group of men could scare the labels off Campbell's soup cans. Darren 2011/2/17 spike >
> > > > 2011/2/17 spike > > > > Well sure, but in any case, my point is we male chess players as a rule are > hurting ugly. But that Anna Sharevich, oh my goodness. That stunning > creature is enough to make a gay man straight. She is enough to make a > straight woman gay. > > http://www.chessbase.com/newsdetail.asp?newsid=7014 > > And if that isn?t enough of a chess babe, check this! > > http://en.wikipedia.org/wiki/Alexandra_Kosteniuk > > And this! > > http://en.wikipedia.org/wiki/Tatiana_Kosintseva > > OK this is sufficiently non-Watson I will count that against my total for > today. > > {8^D > > spike > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 18 06:17:34 2011 From: spike66 at att.net (spike) Date: Thu, 17 Feb 2011 22:17:34 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> Message-ID: <009301cbcf33$88a24a10$99e6de30$@att.net> Even after all the singularity talk that we have had here for years, it was a jolt to see all that in something as mainstream as Time magazine. It will be interesting to see the letters to the editor on this one. spike From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Clark Sent: Thursday, February 17, 2011 12:57 PM To: ExI chat list Subject: [ExI] Time magazine cover story on the singularity The cover story of the current issue of Time magazine is entitled "2045: The Year Man Becomes Immortal", its about Ray Kurzweil and the singularity: http://www.time.com/time/health/article/0,8599,2048138,00.html John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 18 07:10:41 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:10:41 -0700 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110217115041.GQ23560@leitl.org> References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com> <20110217115041.GQ23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 4:50 AM, Eugen Leitl wrote: > On Thu, Feb 17, 2011 at 06:59:53AM -0400, Darren Greer wrote: >> >Another possibility is that advanced civilizations naturally trend >> towards virtual reality, and thus end up leaving a very small >> externally detectable footprint. Exploring the endless possibilities > > Look, what is your energetical footprint? 1 kW, more or less? > Negligible. In a super efficient system, my footprint might be nanowatts. I believe there are theoretical computing models that use zero net electricity. > Now multiply that by 7 gigamonkeys. Problem? > > Infinitesimally small energy budgets multiplied by very large > numbers are turning stars into FIR blackbodies. And whole galaxies, > and clusters, and superclusters. > > You think that would be easy to miss? Yes. 
Seeing the LACK of something is very difficult astronomy. Heck, how long did it take astronomers to figure out that the majority of the universe is dark matter? I agree with you that an advanced civilization would eventually create a ring world, and finally a sphere that collected all available solar energy. But that could support an enormous computational structure; simulating every mind in a 10,000-year civilization might take only a few watts and a few seconds. -Kelly > >> of virtual reality seems potentially a lot more interesting than >> crossing tens of thousands of light years of space to try and visit >> some lower life form...< >> >> I had never considered this scenario until I came to Exi and it was >> postulated for me. It is the most hopeful compared to the other polar > > When something is postulated to you it's usually bunk. Novelty > and too small group for peer review pretty much see to that. When I look at teenagers lost in iPods, it doesn't seem like bunk to think that they could positively be swallowed alive by an interesting virtual reality. I have relatives whose addiction to WoW makes a heroin addict look like a weekend social drinker. >> opposite scenarios--self-destruction or mature Zen state (with a no poaching >> policy) of technological superiority. Alas, self-destruction seems to me to >> be the most likely, given the bloody and tragic arc of our history at least. > > It's less bloody and tragic than bloody stupid. Our collective > intelligence seems to approach that of an overnight culture. > > http://www.fungionline.org.uk/5kinetics/2batch.html Competition for limited resources and a recognition that exponential growth cannot continue forever indicates that there will be Darwinian processes for choosing which AGIs get the eventually limited power, and which do not. This leads one inevitably to the conclusion that the surviving AGIs will be the "fittest" in a survival and reproduction sense. It will be a very competitive world for unenhanced human beings to compete in, to say the least. -Kelly From kellycoinguy at gmail.com Fri Feb 18 07:16:33 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:16:33 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <20110217163232.GC23560@leitl.org> References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> Message-ID: On Thu, Feb 17, 2011 at 9:32 AM, Eugen Leitl wrote: > On Wed, Feb 16, 2011 at 06:21:26PM -0700, Kelly Anderson wrote: >> On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: >> > On 02/16/2011 10:15 AM, spike wrote: >> > Not the same problem domain or even all that close. Can you turn it into a >> > really good chatbot? Maybe, maybe not depending on your standard of "good". >> > But that wouldn't be very exciting. Very expensive way to keep folks in >> > the nursing home entertained. >> >> Samantha, are you familiar with Moore's law? Let's assume for purposes > > Kelly, do you think 3d integration will be just-ready when > CMOS runs into a wall? Perhaps, perhaps not. But I think ONE out of the several dozen competing paradigms will be ready to pick up more or less where the last one left off. > Kelly, do you think that Moore is equivalent to system > performance? You sure about that? No.
Software improves as well, so system performance should go up faster than Moore's law alone would indicate. :-) -Kelly From kellycoinguy at gmail.com Fri Feb 18 07:25:18 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 00:25:18 -0700 Subject: [ExI] Watson On Jeopardy In-Reply-To: <4D5D7724.8060502@mac.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <4D5D7724.8060502@mac.com> Message-ID: On Thu, Feb 17, 2011 at 12:29 PM, Samantha Atkins wrote: > On 02/16/2011 05:21 PM, Kelly Anderson wrote: >> On Wed, Feb 16, 2011 at 4:39 PM, Samantha Atkins wrote: >>> On 02/16/2011 10:15 AM, spike wrote: >>> Not the same problem domain or even all that close. Can you turn it into >>> a >>> really good chatbot? Maybe, maybe not depending on your standard of >>> "good". >>> But that wouldn't be very exciting. Very expensive way to keep folks >>> in >>> the nursing home entertained. >> >> Samantha, are you familiar with Moore's law? > > No, gosh, never heard of it before. :P Just as I suspected... ;-) >> Let's assume for purposes >> of discussion that you are 30, that you will be in the nursing home >> when you're 70. That means Watson level functionality will cost around >> $0.15 in 2011 dollars by the time you need a chatbot... ;-) You'll get >> it in a box of cracker jacks. > > Moore's Law is not enough. You need much better algorithmic approaches and > in some cases any workable algorithm at all. There are algorithms that have > improved enough that running the modern version on a 1980 PC outperforms > running the 1980 version on a supercomputer today. Moore's Law is about > hardware. Software has notoriously failed to keep pace. For many tasks we > don't have vetted algorithms at all yet or a clear idea of how to achieve > the desired results. You forget the context here. I was talking about what would be required to run a Watson-like system. That algorithm and software clearly exist today. How did we cross wires here? -Kelly From kellycoinguy at gmail.com Fri Feb 18 08:39:26 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 01:39:26 -0700 Subject: [ExI] Watson On Jeopardy. In-Reply-To: <4D5D1897.4030906@lightlink.com> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote: > Okay, first: although I understand your position as an Agilista, and your > earnest desire to hear about concrete code rather than theory ("I value > working code over big ideas"), you must surely acknowledge that in some > areas of scientific research and technological development, it is important > to work out the theory, or the design, before rushing ahead to the > code-writing stage. This is the scientist vs. engineer battle. As an engineering type of scientist, I prefer to perform experiments along the way to determine if my theory is correct. Newton performed experiments to verify his theories, and this influenced his next theory. Without the experiments it would not be the scientific method, but rather closer to philosophy.
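The cracker-jack figure quoted above is just compound halving, and easy to sanity-check. A back-of-envelope sketch, where both inputs are assumptions rather than data: say Watson-class hardware costs about $3 million in 2011, and cost per unit of computing halves every 18 months for forty straight years:

    # Naive extrapolation of the price of fixed Watson-class capability.
    price_2011 = 3_000_000.0        # assumed 2011 hardware cost, dollars
    halvings = 40 / 1.5             # ~26.7 halvings in forty years
    print(round(price_2011 * 0.5 ** halvings, 2))   # ~0.03, i.e. three cents

So the arithmetic itself holds up; whether the halving actually continues for four decades is exactly the point Eugen disputes below.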
Newton performed experiments to verify his theories, and this influenced his next theory. Without the experiments it would not be the scientific method, but rather closer to philosophy. I'll let "real" scientists figure out how the organelles of the brain function. I'll pay attention as I can to their findings. I like the idea of being influenced by the designs of nature. I really like the wall climbing robots that copy the techniques of the gecko. Really interesting stuff that. I was reading papers about how the retina of cats worked in computer vision classes twenty years ago. I'll let cognitive scientists and doctors try and unravel the brain using black box techniques, and I'll pay attention as I can to their results. These are interesting from the point of view of devising tests to see if what you have designed is similar to the human brain. Things like optical illusions are very interesting in terms of figuring out how we do it. As an Agilista with an entrepreneurial bent, I have little patience for a self-described scientist working on theories that may not have applications for twenty years. I respect that the mathematics for the CAT scanner were developed in the 1920's, but the guy who developed those techniques got very little out of the exercise. Aside from that, if you can't reduce your theories to practice pretty soon, the practitioners of "parlor tricks" will beat you to your goal. > That is not to say that I don't write code (I spent several years as a > software developer, and I continue to write code), but that I believe the > problem of building an AGI is, at this point in time, a matter of getting > the theory right. ?We have had over fifty years of AI people rushing into > programs without seriously and comprehensively addressing the underlying > issues. ?Perhaps you feel that there are really not that many underlying > issues to be dealt with, but after having worked in this field, on and off, > for thirty years, it is my position that we need deep understanding above > all. ?Maxwell's equations, remember, were dismissed as useless for anything > -- just idle theorizing -- for quite a few years after Maxwell came up with > them. ?Not everything that is of value *must* be accompanied by immediate > code that solves a problem. I believe that many interesting problems are solved by throwing more computational cycles at them. Then, once you have something that works, you can optimize later. Watson is a system that works largely because of the huge number of computational cycles being thrown at the problem. As far as AGI research being off the tracks, the only way you're going to convince anyone is with some kind of intermediate result. Even flawed results would be better than nothing. > Now, with regard to the papers that I have written, I should explain that > they are driven by the very specific approach described in the complex > systems paper. ?That described a methodological imperative: ?if intelligent > systems are complex (in the "complex systems" sense, which is not the > "complicated systems", aka space-shuttle-like systems, sense), then we are > in a peculiar situation that (I claim) has to be confronted in a very > particular way. ?If it is not confronted in that particular way, we will > likely run around in circles getting nowhere -- and it is alarming that the > precise way in which this running around in circles would happen bears a > remarkable resemblance to what has been happening in AI for fifty years. 
> ?So, if my reasoning in that paper is correct then the only sensible way to > build an AGI is to do some very serious theoretical and tool-building work > first. See, I don't think Watson is "getting nowhere"... It is useful today. Let me give you an analogy. I can see that when we can create nanotech robots small enough to get into the human body and work at the cellular level, then all forms of cancer are reduced to sending in those nanobots with a simple program. First, detect cancer cells. How hard can that be? Second, cut a hole in the wall of each cancer cell you encounter. With enough nanobots, cancer, of all kinds, is cured. Of course, we don't have nanotech robots today, but that doesn't matter. I have cured cancer, and I deserve a Nobel prize in medicine!!! On the other hand, there are doctors with living patients today, and they practice all manner of barbarous medicine in the attempt to kill cancer cells without killing patients. The techniques are crude and often unsuccessful causing their patients lots of pain. Nevertheless, these doctors do occasionally succeed in getting a patient into remission. You are the nanotech doctor. I prefer to be the doctor with living patients needing help today. Watson is the second kind. Sure, the first cure to cancer is more general, easier, more effective, easier on the patient, but is simply not available today, even if you can see it as an almost inevitable eventuality. > And part of that theoretical work involves a detailed understanding of > cognitive psychology AND computer science. ?Not just a superficial > acquaintance with a few psychology ideas, which many people have, but an > appreciation for the enormous complexity of cog psych, and an understanding > of how people in that field go about their research (because their protocols > are very different from those of AI or computer science), and a pretty good > grasp of the history of psychology (because there have been many different > schools of thought, and some of them, like Behaviorism, contain extremely > valuable and subtle lessons). Ok, so you care about cognitive psychology. That's great. Are you writing a program that simulates a human psychology? Even on a primitive basis? Or is your real work so secretive that you can't share your ideas? In other words, how SPECIFICALLY does your deep understanding of cognitive psychology contribute to a working program (even if it only solves a simple problem)? > With regard to the specific comments I made below about McClelland and > Rumelhart, what is going on there is that these guys (and several others) > got to a point where the theories in cognitive psychology were making no > sense, and so they started thinking in a new way, to try to solve the > problem. ?I can summarize it as "weak constrain satisfaction" or "neurally > inspired" but, alas, these things can be interpreted in shallow ways that > omit the background context ... and it is the background context that is the > most important part of it. ?In a nutshell, a lot cognitive psychology makes > a lot more sense if it can be re-cast in "constraint" terms. Ok, that starts to make some sense. I have always considered context to be the most important aspect of artificial intelligence, and one of the more ignored. I think Watson does a lot in the area of addressing context. Certainly not perfectly, but well enough to be quite useful. I'd rather have an idiot savant to help me today than a nice theory that might some day result in something truly elegant. 
> The problem, though, is that the folks who started the PDP (aka > connectionist, neural net) revolution in the 1980s could only express this > new set of ideas in neural terms. ?The made some progress, but then just as > the train appeared to be gathering momentum it ran out of steam. There were > some problems with their approach that could not be solved in a principled > way. ?They had hoped, at the beginning, that they were building a new > foundation for cognitive psychology, but something went wrong. They lacked a proper understanding of the system they were simulating. They kept making simplifying assumptions/guesses because they didn't have a full picture of the brain. I agree that neural networks as practiced in the 80s ran out of steam... whether it was because of a lack of hardware to run the algorithms fast enough, or whether the algorithms were flawed at their core is an interesting argument. If the brain is simulated accurately enough, then we should be able to get an AGI machine by that methodology. That will take some time of course. Your approach apparently will also. Which is the shortest path to AGI? Time will tell, I suppose. > What I have done is to think hard about why that collapse occurred, and to > come to an understanding about how to get around it. ?The answer has to do > with building two distinct classes of constraint systems: ?either > non-complex, or complex (side note: ?I will have to refer you to other texts > to get the gist of what I mean by that... see my 2007 paper on the subject). > ?The whole PDP/connectionist revolution was predicated on a non-complex > approach. ?I have, in essence, diagnosed that as the problem. ?Fixing that > problem is hard, but that is what I am working on. > > Unfortunately for you -- wanting to know what is going on with this project > -- I have been studiously unprolific about publishing papers. So at this > stage of the game all I can do is send you to the papers I have written and > ask you to fill in the gaps from your knowledge of cognitive psychology, AI > and complex systems. This kind of sounds like you want me to do your homework for you... :-) You have published a number of papers. The problem from my point of view is that the way you approach your papers is philisophical, not scientific. Interesting, but not immediately useful. > Finally, bear in mind that none of this is relevant to the question of > whether other systems, like Watson, are a real advance or just a symptom of > a malaise. ?John Clark has been ranting at me (and others) for more than > five years now, so when he pulls the old bait-and-switch trick ("Well, if > you think XYZ is flawed, let's see YOUR stinkin' AI then!!") I just smile > and tell him to go read my papers. ?So we only got into this discussion > because of that: ?it has nothing to do with delivering critiques of other > systems, whether they contain a million lines of code or not. ?:-) ? Watson > still is a sleight of hand, IMO, whether my theory sucks or not. ?;-) The problem from my point of view is that you have not revealed enough of your theory to tell whether it sucks or not. I have no personal axe to grind. I'm just curious because you say, "I can solve the problems of the world", and when I ask what those are, you say "read my papers"... I go and read the papers. I think I understand what you are saying, more or less in those papers, and I still don't know how to go about creating an AGI using your model. 
All I know at this point is that I need to separate the working brain from
the storage brain. Congratulations, you have recast the brain as a Von
Neumann architecture... :-)

-Kelly

From eugen at leitl.org  Fri Feb 18 12:51:46 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Fri, 18 Feb 2011 13:51:46 +0100
Subject: [ExI] Lethal future was Watson on NOVA
In-Reply-To:
References: <4D5C0C2C.9030306@lightlink.com> <4D5C5F9A.2020204@mac.com>
	<20110217115041.GQ23560@leitl.org>
Message-ID: <20110218125146.GJ23560@leitl.org>

On Fri, Feb 18, 2011 at 12:10:41AM -0700, Kelly Anderson wrote:

> > Look, what is your energy footprint? 1 kW, more or less?
>
> Negligible.
>
> In a super efficient system, my footprint might be nanowatts. I

Not even for a human equivalent, never mind at 10^6 to 10^9 speedup.
I don't think you can go below 1-10 W for a human realtime equivalent.

> believe there are theoretical computing models that use zero net
> electricity.

Reversible logic is slow, and it's not perfectly reversible. And it's
still immaterial, because if you use 100 times less energy there will be
100 times the individuals competing for it. Adaptively.

> > Now multiply that by 7 gigamonkeys. Problem?
> >
> > Infinitesimally small energy budgets multiplied by very large
> > numbers are turning stars into FIR blackbodies. And whole galaxies,
> > and clusters, and superclusters.
> >
> > You think that would be easy to miss?
>
> Yes. Seeing the LACK of something is very difficult astronomy. Heck

Giant (up to GLYr) spherical voids only emitting in FIR?

> how long did it take astronomers to figure out that the majority of
> the universe is dark matter? I agree with you that an advanced

There was a dedicated search for Dyson FIR emitters. Result: density too
low to care.

> civilization would eventually create a ring world, and finally a

Not ring, optically dense node cloud.

> sphere that collected all available solar energy. But that could
> support an enormous computational structure, capable of simulating

Enormous to some, trivial to others.

> every mind in a 10,000 year civilization might take only a few watts
> and a few seconds.

The numbers don't check out. Occam's razor sez: we're not in anyone's
smart lightcone.

> > When something is postulated to you it's usually bunk. Novelty
> > and too small group for peer review pretty much see to that.
>
> When I look at teenagers lost in iPods, it doesn't seem like bunk to
> think that they could positively be swallowed alive by an interesting
> virtual reality. I have relatives who have an addiction to WoW that
> makes a heroin addict look like a weekend social drinker.

Have you seen the birth rate and retention rate of the Amish?

> > It's less bloody and tragic than bloody stupid. Our collective
> > intelligence seems to approach that of an overnight culture.
> >
> > http://www.fungionline.org.uk/5kinetics/2batch.html
>
> Competition for limited resources and a recognition that exponential

I was referring to the 7 gigamonkeys in the above graph, actually.

> growth cannot continue forever indicates that there will be Darwinian
> processes for choosing which AGIs get the eventually limited power,

You're getting it.

> and which do not. This leads one inevitably to the conclusion that the
> surviving AGIs will be the "fittest" in a survival and reproduction
> sense. It will be a very competitive world for unenhanced human beings
> to compete in, to say the least.

Exactly.
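(A back-of-envelope footnote on the watts-per-mind numbers above, using
Landauer's bound; the operation rate is an assumed round figure for a
realtime human equivalent, not a measured one:

    import math

    k_B = 1.380649e-23                  # Boltzmann constant, J/K
    T = 300.0                           # room temperature, K
    erase_cost = k_B * T * math.log(2)  # min. energy to erase one bit,
                                        # ~2.9e-21 J

    ops = 1e16    # assumed irreversible bit ops/s for one realtime mind
    print(erase_cost * ops)             # ~3e-5 W

The thermodynamic floor is microwatts, so a practical 1-10 W figure is
engineering overhead above the Landauer limit -- and a 10^6 to 10^9
speedup multiplies whatever budget you settle on by the same factor.)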
-- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From eugen at leitl.org Fri Feb 18 13:03:21 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Feb 2011 14:03:21 +0100 Subject: [ExI] Watson On Jeopardy In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <005b01cbce05$8101d390$83057ab0$@att.net> <4D5C604D.3030201@mac.com> <20110217163232.GC23560@leitl.org> Message-ID: <20110218130321.GK23560@leitl.org> On Fri, Feb 18, 2011 at 12:16:33AM -0700, Kelly Anderson wrote: > > Kelly, do you think 3d integration will be just-ready when > > CMOS runs into a wall? > > Perhaps, perhaps not. But I think ONE out of the several dozen > competing paradigms will be ready to pick up more or less where the > last one left off. *Which* competing platforms? Technologies don't come out of the blue fully formed, they're incubated for decades in R&D pipeline. Everything is photolitho based so far, self-assembly isn't yet even in the crib. TSM is just 2d piled higher and deeper. > > Kelly, do you think that Moore is equivalent to system > > performance? You sure about that? > > No. Software improves as well, so system performance should go up Software degrades, actually. Software bloat about matches the advances in hardware. In terms of advanced concepts, why is the second-oldest high level language still unmatched? Why are newer environments inferior to already historic ones? > faster than would be indicated by Moore's law alone would indicate. > :-) -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From rpwl at lightlink.com Fri Feb 18 13:33:16 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 18 Feb 2011 08:33:16 -0500 Subject: [ExI] Watson On Jeopardy. In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: <4D5E751C.2060008@lightlink.com> Kelly Anderson wrote: > On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote: >> Okay, first: although I understand your position as an Agilista, and your >> earnest desire to hear about concrete code rather than theory ("I value >> working code over big ideas"), you must surely acknowledge that in some >> areas of scientific research and technological development, it is important >> to work out the theory, or the design, before rushing ahead to the >> code-writing stage. > > This is the scientist vs. engineer battle. As an engineering type of > scientist, I prefer to perform experiments along the way to determine > if my theory is correct. Newton performed experiments to verify his > theories, and this influenced his next theory. Without the experiments > it would not be the scientific method, but rather closer to > philosophy. > > I'll let "real" scientists figure out how the organelles of the brain > function. 
> I'll pay attention as I can to their findings. I like the idea of being
> influenced by the designs of nature. I really like the wall climbing
> robots that copy the techniques of the gecko. Really interesting stuff
> that. I was reading papers about how the retina of cats worked in
> computer vision classes twenty years ago.
>
> I'll let cognitive scientists and doctors try and unravel the brain
> using black box techniques, and I'll pay attention as I can to their
> results. These are interesting from the point of view of devising tests
> to see if what you have designed is similar to the human brain. Things
> like optical illusions are very interesting in terms of figuring out how
> we do it.
>
> As an Agilista with an entrepreneurial bent, I have little patience for
> a self-described scientist working on theories that may not have
> applications for twenty years. I respect that the mathematics for the
> CAT scanner were developed in the 1920s, but the guy who developed those
> techniques got very little out of the exercise. Aside from that, if you
> can't reduce your theories to practice pretty soon, the practitioners of
> "parlor tricks" will beat you to your goal.

You've misunderstood so very much of what is really going on here.

There are strong theoretical reasons to believe that this approach is the
only one that will work, and that the practitioners of "parlor tricks"
will never actually be able to succeed. This isn't just opinion or
speculation, it is the result of a real theoretical analysis.

Also, why do you say "self-described scientist"? I don't understand if
this is supposed to be me or someone else or scientists in general.

And why do you assume that I am not doing experiments?! I am certainly
doing that, and doing massive numbers of such experiments is at the core
of everything I do.

I don't quite understand how these confusions arose, but you've ended up
getting quite the opposite idea about what is going on.

I have little time today, so may not be able to address your other
points.

Richard Loosemore

From hkeithhenson at gmail.com  Fri Feb 18 16:13:44 2011
From: hkeithhenson at gmail.com (Keith Henson)
Date: Fri, 18 Feb 2011 09:13:44 -0700
Subject: [ExI] Lethal future was Watson on NOVA
Message-ID:

On Fri, Feb 18, 2011 at 12:28 AM, Kelly Anderson wrote:

snip

> Competition for limited resources and a recognition that exponential
> growth cannot continue forever indicates that there will be Darwinian
> processes for choosing which AGIs get the eventually limited power,
> and which do not. This leads one inevitably to the conclusion that the
> surviving AGIs will be the "fittest" in a survival and reproduction
> sense. It will be a very competitive world for unenhanced human beings
> to compete in, to say the least.

The fact that we don't see massive scale manipulation of matter and
energy indicates that this has not yet happened in our light cone.

That doesn't mean it could not happen here.

The human population growth falling below replacement in some places is
an indication that reproduction isn't as strong a drive as we thought.

Still, to get the observed universe, we have to be wrong on something.

Perhaps there is a relatively simple way to escape from the universe.
Keith From eugen at leitl.org Fri Feb 18 16:55:41 2011 From: eugen at leitl.org (Eugen Leitl) Date: Fri, 18 Feb 2011 17:55:41 +0100 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: <20110218165541.GR23560@leitl.org> On Fri, Feb 18, 2011 at 09:13:44AM -0700, Keith Henson wrote: > The fact that we don't see massive scale manipulation of matter and > energy indicates that this has not yet happened in our light cone. We're not in their light cone. Origin being the time they started expanding visibly. > That doesn't mean it could not happen here. > > The human population growth falling below replacement in some places I don't think this will last. Subpopulations still grow exponentially. This is being masked for time being for select location, but the question is for how long. > is an indication that reproduction isn't as strong a drive as we > thought. > > Still, to get the observed universe, we have to be wrong on something. > > Perhaps there is a relatively simple way to escape from the universe. Not every time. Not one which can recall those already on the way. In general, I wonder about the need for the obvious explanation: yes, we're rare, and we're the first about to start expanding (assuming we won't fall flat on our face, and can't get up). -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From sparge at gmail.com Fri Feb 18 17:11:51 2011 From: sparge at gmail.com (Dave Sill) Date: Fri, 18 Feb 2011 12:11:51 -0500 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: Maybe we're just first in our neighborhood. -Dave On Feb 16, 2011 12:32 PM, "Keith Henson" wrote: On Wed, Feb 16, 2011 at 12:38 AM, Eugen Leitl wrote: > On Tue, Feb 15, 2011 at 03:13:18PM -0500, David Lubkin wrote: > >> I'm still pissed at Sagan for his hubris in sending a message to the >> stars without asking the rest of us first, in blithe certainty that "of >> course" any recipient would have evolved beyond aggression and >> xenophobia. > > The real reasons if that they would be there you'd be dead, Jim. > In fact, if any alien picks up the transmission (chance: very close > to zero) they'd better be farther advanced than us, and on a > faster track. I hope it for them. I have been mulling this over for decades. We look out into the Universe and don't (so far) see or hear any evidence of technophilic civilization. I see only two possibilities: 1) Technophilics are so rare that there are no others in our light cone. 2) Or if they are relatively common something wipes them *all* out, or, if not wiped out, they don't do anything which indicates their presence. If 1, then the future is unknown. If 2, it's probably related to local singularities. If that's the case, most of the people reading this list will live to see it. Keith PS. If anyone can suggest something that is not essentially the same two situations, please speak up. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kellycoinguy at gmail.com Fri Feb 18 17:17:28 2011 From: kellycoinguy at gmail.com (Kelly Anderson) Date: Fri, 18 Feb 2011 10:17:28 -0700 Subject: [ExI] Watson On Jeopardy. 
In-Reply-To: <4D5E751C.2060008@lightlink.com>
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com>
	<201102151955.p1FJto5v017690@andromeda.ziaspace.com>
	<4D5BEB27.7020204@lightlink.com>
	<38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net>
	<4D5C2DA9.9050804@lightlink.com>
	<4D5C7657.6070405@lightlink.com>
	<4D5D1897.4030906@lightlink.com>
	<4D5E751C.2060008@lightlink.com>
Message-ID:

On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore wrote:
> Kelly Anderson wrote:
>> On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote
> You've misunderstood so very much of what is really going on here.

It wouldn't be the first time. I'm here to learn. If you have something
to teach, I am your humble student. I am quite sincere in this. No
kidding.

> There are strong theoretical reasons to believe that this approach is
> the only one that will work, and that the practitioners of "parlor
> tricks" will never actually be able to succeed. This isn't just opinion
> or speculation, it is the result of a real theoretical analysis.

Risking Clintonese... I suppose, Richard, that this depends upon your
definition of 'success'. I would guess that most people would declare
that Watson already succeeded. You dismiss it as "trivial" and a "parlor
trick", while 99% of everyone else thinks it is a great success already.
If there is derision, I think it is because of your dismissive attitude
about what is clearly a great milestone in computation, even if it turns
out not to be on the path to some "true" AGI. I, for one, think that with
another ten years or so of work, the Watson approach might pass some
version of the Turing test.

If you wrote a paper entitled "Why Watson is an Evolutionary Dead End",
and you were convincing to your peers, I think you would get it published
and it would be helpful to the AI community.

> Also, why do you say "self-described scientist"? I don't understand if
> this is supposed to be me or someone else or scientists in general.

Carl Sagan, a real scientist, said frequently, "Extraordinary claims
require extraordinary evidence." (even though he may have borrowed the
phrase from Marcello Truzzi.) I understand that you are claiming to
follow the scientific method, and that you do not think of yourself as a
philosopher. If you claim to be a philosopher, stand up and be proud of
that. Some of the most interesting people are philosophers, and there is
nothing wrong with that.

> And why do you assume that I am not doing experiments?! I am certainly
> doing that, and doing massive numbers of such experiments is at the
> core of everything I do.

Good to hear. Your papers did not reflect that. Can you point me to some
of your experimental results?

> I don't quite understand how these confusions arose, but you've ended
> up getting quite the opposite idea about what is going on.

All I had to go on was your papers. If what you are saying now is
correct, your papers don't effectively reflect that.

> I have little time today, so may not be able to address your other
> points.

Understandable.
-Kelly From darren.greer3 at gmail.com Fri Feb 18 17:20:46 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 13:20:46 -0400 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: <20110218165541.GR23560@leitl.org> References: <20110218165541.GR23560@leitl.org> Message-ID: >is an indication that reproduction isn't as strong a drive as we thought.< Either it's not as strong or that human beings can extract themselves from evolutionary mandated behavior. Some individuals and groups of individuals seem to be able to do it with aggression and dominance and tribal mentalities, and others don't. The question I have is this: do these individuals and groups adapt out of these behaviors by selection pressures over generational periods based on location (like living in cities for example where xenophobia makes life more difficult and not less so.) Or can you consciously remove yourself from evolutionary imperatives by force of will, or education, or both? I would think, by looking at the Internet and knowing the people that I do, that the drive to have sex may be as strong as ever. But the need in certain populations to have progeny result from it is reduced. Once again, technology, and the relaxation in certain cultures of tribal laws and strictures limiting sexual behavior, have influenced the biological result, but perhaps have not influenced the drive at all. d. On Fri, Feb 18, 2011 at 12:55 PM, Eugen Leitl wrote: > On Fri, Feb 18, 2011 at 09:13:44AM -0700, Keith Henson wrote: > > > The fact that we don't see massive scale manipulation of matter and > > energy indicates that this has not yet happened in our light cone. > > We're not in their light cone. Origin being the time they started > expanding visibly. > > > That doesn't mean it could not happen here. > > > > The human population growth falling below replacement in some places > > I don't think this will last. Subpopulations still grow exponentially. > This is being masked for time being for select location, but the > question is for how long. > > > is an indication that reproduction isn't as strong a drive as we > > thought. > > > > Still, to get the observed universe, we have to be wrong on something. > > > > Perhaps there is a relatively simple way to escape from the universe. > > Not every time. Not one which can recall those already on the way. > > In general, I wonder about the need for the obvious explanation: yes, > we're rare, and we're the first about to start expanding (assuming we > won't fall flat on our face, and can't get up). > > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From darren.greer3 at gmail.com Fri Feb 18 17:33:25 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 13:33:25 -0400 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <009301cbcf33$88a24a10$99e6de30$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: >it was a jolt to see all that in something as mainstream as Time magazine. < This may be terribly cynical of me, but I worry when any idea goes mainstream. I always think the media, the moguls and the Wal-marts will try and spin it enough to make a buck off it. Although how you would make money off the singularity I don't know. How about a T-Shirt that says "The Singularity is coming. Get implants!" d. 2011/2/18 spike > Even after all the singularity talk that we have had here for years, it was > a jolt to see all that in something as mainstream as Time magazine. It will > be interesting to see the letters to the editor on this one. > > > > spike > > > > *From:* extropy-chat-bounces at lists.extropy.org [mailto: > extropy-chat-bounces at lists.extropy.org] *On Behalf Of *John Clark > *Sent:* Thursday, February 17, 2011 12:57 PM > *To:* ExI chat list > *Subject:* [ExI] Time magazine cover story on the singularity > > > The cover story of the current issue of Time magazine is entitled "2045: > The Year Man Becomes Immortal", its about Ray Kurzweil and the > singularity: > > http://www.time.com/time/health/article/0,8599,2048138,00.html > > > > John K Clark > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Feb 18 17:48:47 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 18 Feb 2011 12:48:47 -0500 Subject: [ExI] Complex AGI [WAS Watson On Jeopardy] In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <4D5C7657.6070405@lightlink.com> <4D5D1897.4030906@lightlink.com> Message-ID: <4D5EB0FF.7000007@lightlink.com> Kelly Anderson wrote: > On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore > wrote: >> Okay, first: although I understand your position as an Agilista, >> and your earnest desire to hear about concrete code rather than >> theory ("I value working code over big ideas"), you must surely >> acknowledge that in some areas of scientific research and >> technological development, it is important to work out the theory, >> or the design, before rushing ahead to the code-writing stage. > > This is the scientist vs. engineer battle. As an engineering type of > scientist, I prefer to perform experiments along the way to determine > if my theory is correct. 
Newton performed experiments to verify his > theories, and this influenced his next theory. Without the > experiments it would not be the scientific method, but rather closer > to philosophy. > > I'll let "real" scientists figure out how the organelles of the brain > function. I'll pay attention as I can to their findings. I like the > idea of being influenced by the designs of nature. I really like the > wall climbing robots that copy the techniques of the gecko. Really > interesting stuff that. I was reading papers about how the retina of > cats worked in computer vision classes twenty years ago. > > I'll let cognitive scientists and doctors try and unravel the brain > using black box techniques, and I'll pay attention as I can to their > results. These are interesting from the point of view of devising > tests to see if what you have designed is similar to the human brain. > Things like optical illusions are very interesting in terms of > figuring out how we do it. > > As an Agilista with an entrepreneurial bent, I have little patience > for a self-described scientist working on theories that may not have > applications for twenty years. I respect that the mathematics for the > CAT scanner were developed in the 1920's, but the guy who developed > those techniques got very little out of the exercise. Aside from > that, if you can't reduce your theories to practice pretty soon, the > practitioners of "parlor tricks" will beat you to your goal. > >> That is not to say that I don't write code (I spent several years >> as a software developer, and I continue to write code), but that I >> believe the problem of building an AGI is, at this point in time, >> a matter of getting the theory right. We have had over fifty years >> of AI people rushing into programs without seriously and >> comprehensively addressing the underlying issues. Perhaps you feel >> that there are really not that many underlying issues to be dealt >> with, but after having worked in this field, on and off, for >> thirty years, it is my position that we need deep understanding >> above all. Maxwell's equations, remember, were dismissed as useless >> for anything -- just idle theorizing -- for quite a few years after >> Maxwell came up with them. Not everything that is of value *must* >> be accompanied by immediate code that solves a problem. > > > I believe that many interesting problems are solved by throwing more > computational cycles at them. Then, once you have something that > works, you can optimize later. Watson is a system that works largely > because of the huge number of computational cycles being thrown at > the problem. As far as AGI research being off the tracks, the only > way you're going to convince anyone is with some kind of intermediate > result. Even flawed results would be better than nothing. > >> Now, with regard to the papers that I have written, I should >> explain that they are driven by the very specific approach >> described in the complex systems paper. That described a >> methodological imperative: if intelligent systems are complex (in >> the "complex systems" sense, which is not the "complicated >> systems", aka space-shuttle-like systems, sense), then we are in a >> peculiar situation that (I claim) has to be confronted in a very >> particular way. 
If it is not confronted in that particular way, we >> will likely run around in circles getting nowhere -- and it is >> alarming that the precise way in which this running around in >> circles would happen bears a remarkable resemblance to what has >> been happening in AI for fifty years. So, if my reasoning in that >> paper is correct then the only sensible way to build an AGI is to >> do some very serious theoretical and tool-building work first. > > See, I don't think Watson is "getting nowhere"... It is useful today. > > > > > Let me give you an analogy. I can see that when we can create > nanotech robots small enough to get into the human body and work at > the cellular level, then all forms of cancer are reduced to sending > in those nanobots with a simple program. First, detect cancer cells. > How hard can that be? Second, cut a hole in the wall of each cancer > cell you encounter. With enough nanobots, cancer, of all kinds, is > cured. Of course, we don't have nanotech robots today, but that > doesn't matter. I have cured cancer, and I deserve a Nobel prize in > medicine!!! > > On the other hand, there are doctors with living patients today, and > they practice all manner of barbarous medicine in the attempt to kill > cancer cells without killing patients. The techniques are crude and > often unsuccessful causing their patients lots of pain. Nevertheless, > these doctors do occasionally succeed in getting a patient into > remission. > > You are the nanotech doctor. I prefer to be the doctor with living > patients needing help today. Watson is the second kind. Sure, the > first cure to cancer is more general, easier, more effective, easier > on the patient, but is simply not available today, even if you can > see it as an almost inevitable eventuality. > > >> And part of that theoretical work involves a detailed understanding >> of cognitive psychology AND computer science. Not just a >> superficial acquaintance with a few psychology ideas, which many >> people have, but an appreciation for the enormous complexity of cog >> psych, and an understanding of how people in that field go about >> their research (because their protocols are very different from >> those of AI or computer science), and a pretty good grasp of the >> history of psychology (because there have been many different >> schools of thought, and some of them, like Behaviorism, contain >> extremely valuable and subtle lessons). > > Ok, so you care about cognitive psychology. That's great. Are you > writing a program that simulates a human psychology? Even on a > primitive basis? Or is your real work so secretive that you can't > share your ideas? In other words, how SPECIFICALLY does your deep > understanding of cognitive psychology contribute to a working program > (even if it only solves a simple problem)? > >> With regard to the specific comments I made below about McClelland >> and Rumelhart, what is going on there is that these guys (and >> several others) got to a point where the theories in cognitive >> psychology were making no sense, and so they started thinking in a >> new way, to try to solve the problem. I can summarize it as "weak >> constrain satisfaction" or "neurally inspired" but, alas, these >> things can be interpreted in shallow ways that omit the background >> context ... and it is the background context that is the most >> important part of it. In a nutshell, a lot cognitive psychology >> makes a lot more sense if it can be re-cast in "constraint" terms. > > Ok, that starts to make some sense. 
I have always considered context > to be the most important aspect of artificial intelligence, and one > of the more ignored. I think Watson does a lot in the area of > addressing context. Certainly not perfectly, but well enough to be > quite useful. I'd rather have an idiot savant to help me today than a > nice theory that might some day result in something truly elegant. > >> The problem, though, is that the folks who started the PDP (aka >> connectionist, neural net) revolution in the 1980s could only >> express this new set of ideas in neural terms. The made some >> progress, but then just as the train appeared to be gathering >> momentum it ran out of steam. There were some problems with their >> approach that could not be solved in a principled way. They had >> hoped, at the beginning, that they were building a new foundation >> for cognitive psychology, but something went wrong. > > They lacked a proper understanding of the system they were > simulating. They kept making simplifying assumptions/guesses because > they didn't have a full picture of the brain. I agree that neural > networks as practiced in the 80s ran out of steam... whether it was > because of a lack of hardware to run the algorithms fast enough, or > whether the algorithms were flawed at their core is an interesting > argument. > > If the brain is simulated accurately enough, then we should be able > to get an AGI machine by that methodology. That will take some time > of course. Your approach apparently will also. Which is the shortest > path to AGI? Time will tell, I suppose. > >> What I have done is to think hard about why that collapse occurred, >> and to come to an understanding about how to get around it. The >> answer has to do with building two distinct classes of constraint >> systems: either non-complex, or complex (side note: I will have >> to refer you to other texts to get the gist of what I mean by >> that... see my 2007 paper on the subject). The whole >> PDP/connectionist revolution was predicated on a non-complex >> approach. I have, in essence, diagnosed that as the problem. >> Fixing that problem is hard, but that is what I am working on. >> >> Unfortunately for you -- wanting to know what is going on with this >> project -- I have been studiously unprolific about publishing >> papers. So at this stage of the game all I can do is send you to >> the papers I have written and ask you to fill in the gaps from your >> knowledge of cognitive psychology, AI and complex systems. > > This kind of sounds like you want me to do your homework for you... > :-) > > You have published a number of papers. The problem from my point of > view is that the way you approach your papers is philisophical, not > scientific. Interesting, but not immediately useful. > >> Finally, bear in mind that none of this is relevant to the question >> of whether other systems, like Watson, are a real advance or just >> a symptom of a malaise. John Clark has been ranting at me (and >> others) for more than five years now, so when he pulls the old >> bait-and-switch trick ("Well, if you think XYZ is flawed, let's see >> YOUR stinkin' AI then!!") I just smile and tell him to go read my >> papers. So we only got into this discussion because of that: it >> has nothing to do with delivering critiques of other systems, >> whether they contain a million lines of code or not. :-) Watson >> still is a sleight of hand, IMO, whether my theory sucks or not. 
>> ;-)
>
> The problem from my point of view is that you have not revealed enough
> of your theory to tell whether it sucks or not.
>
> I have no personal axe to grind. I'm just curious because you say, "I
> can solve the problems of the world", and when I ask what those are,
> you say "read my papers"... I go and read the papers. I think I
> understand what you are saying, more or less in those papers, and I
> still don't know how to go about creating an AGI using your model.
> All I know at this point is that I need to separate the working brain
> from the storage brain. Congratulations, you have recast the brain as
> a Von Neumann architecture... :-)
>
> -Kelly

Kelly,

Well, I am struggling to find positive things to say, because you're
tending to make very sweeping statements (e.g. "this is just philosophy"
and "this is not science") that some people might interpret as quite
insulting. And at the same time, some of the things that other people
(e.g. John Clark) have said are starting to come back as if *I* was the
one who said them! ;-)

We need to be clear, first, that what we are discussing now has nothing
to do with Watson. John Clark made a silly equation between my work and
Watson, and you and I somehow ended up discussing my work. But I will not
discuss the two as if they are connected, if you don't mind, because they
are not. They are orthogonal.

You have also started to imply that certain statements or claims have
come from me .... so I need to be absolutely clear about what I have said
or claimed, and what I have not.

I have not said "I can solve the problems of the world". I am sure you
weren't being serious, but even so... ;-)

Most importantly I have NOT claimed that I have written down a complete
theory of AGI, nor do I claim that I have built a functioning AGI.

When John Clark said to me:

> So I repeat my previous request, please tell us all about the
> wonderful AI program that you have written that does things even more
> intelligently than Watson.

... I assumed that anyone who actually read this patently silly demand
would understand immediately that I was not being serious when I
responded:

> Done: read my papers.
>
> Questions? Just ask!

John Clark ALWAYS changes the subject, in every debate in which he
attacks me, by asking that same idiotic, rude question! :-) I have long
ago stopped being bothered by it, and these days I either ignore him or
tell him to read my papers if he wants to know about my work. I really
don't know how anyone could read that exchange and think that I was
quietly agreeing that I really did claim that I had built a "wonderful AI
program ... that does things even more intelligently than Watson".

So what have I actually claimed? What have I been defending?

Well, what I do say is that IMPLICIT in the papers I have written, there
is indeed an approach to AGI (a framework, and a specific model within
that framework). There is no way that I have described an AGI design
explicitly, in enough detail for it to be evaluated, and I have never
claimed that. Nor have I claimed to have built one yet.

But when pressed by people who want to know more, I do point out that if
they understand cognitive psychology in enough detail they will easily be
able to add up all the pieces and connect all the dots and see where I am
going with the work I am doing.

The problem is that, after saying that you read my papers already, you
were quite prepared to dismiss all of it as "philosophizing" and "not
science".
I tried to explain to you that if you understood the cognitive science
and AI and complex systems background from which the work comes, you
would be able to see what I meant by there being a theory of AGI implicit
in it, and I did try to explain in a little more detail how my work
connects to that larger background. I pointed out the thread that
stretches from the cog psych of the 1980s, through McClelland and
Rumelhart, through the complex systems movement, to the particular (and
rather unusual) approach that I have adopted.

I even pointed out the very, very important fact that my complex systems
paper was all about the need for a radically different AGI methodology.
Now, I might well be wrong about my statement that we need to do things
in this radically different way, but you could at least realize that I
have declared myself to be following that alternate methodology, and
therefore understand what I have said about the priority of theory and a
particular kind of experiment, over hacking out programs. It is all
there, in the complex systems paper.

But even after me pointing out that this stuff has a large context that
you might not be familiar with, instead of acknowledging that fact, you
are still making sweeping condemnations! This is pretty bad.

More generally: I get two types of responses to my work. One (less
common) type of response is from people who understand what I am trying
to say well enough that they ask specific, focussed questions about
things that are unclear or things they want to challenge. Those people
clearly understand that there is a "there" there .... if the papers I
wrote were empty philosophising, those people would never be ABLE to send
coherent challenges or questions in my direction. Papers that really are
just empty philosophising CANNOT generate that kind of detailed response,
because there is nothing coherent enough in the paper for anyone to get a
handle on.

Then there is the second kind of response. From these people I get
nothing specific, just handwaving or sweeping condemnations. Nothing that
indicates that they really understood what I was trying to say. They
reflect back my arguments in a weird, horribly distorted form -- so
distorted that it has no relationship whatsoever to what I actually said
-- and when I try to clarify their misunderstandings they just make more
and more distorted statements, often wandering far from the point. And,
above all, this type of response usually involves statements like "Yes, I
read it, but you didn't say anything meaningful, so I dismissed it all as
empty philosophising".

I always try to explain and respond. I have put many hours into
responding to people who ask questions, and I try very hard to help
reduce confusions. I waste a lot of time that way. And very often, I do
this even as the person at the other end continues to deliver mildly
derogatory comments like "this isn't science, this is just speculation"
alongside their other questions.

If you want to know why this stuff comes out of cognitive psychology, by
all means read the complex systems paper again, and let me know if you
find the argument presented there, for why it HAS to come out of
cognitive psychology. It is there -- it is the crux of the argument. If
you believe it is incorrect, I would be happy to debate the rationale for
it.

But, please, don't read several papers and just say afterward "All I know
at this point is that I need to separate the working brain from the
storage brain.
Congratulations, you have recast the brain as a Von Neumann
architecture". It looks more like I should be saying, if I were less
polite, "Congratulations, you just understood the first page of a
700-page cognitive psychology context that was assumed in those papers".
But I won't ;-).

Richard Loosemore

From rpwl at lightlink.com  Fri Feb 18 18:01:53 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Fri, 18 Feb 2011 13:01:53 -0500
Subject: [ExI] Watson On Jeopardy.
In-Reply-To:
References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net>
	<201102151610.p1FGA1Xh020528@andromeda.ziaspace.com>
	<4D5AADA7.8060209@lightlink.com>
	<201102151955.p1FJto5v017690@andromeda.ziaspace.com>
	<4D5BEB27.7020204@lightlink.com>
	<38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net>
	<4D5C2DA9.9050804@lightlink.com>
	<4D5C7657.6070405@lightlink.com>
	<4D5D1897.4030906@lightlink.com>
	<4D5E751C.2060008@lightlink.com>
Message-ID: <4D5EB411.9090400@lightlink.com>

Kelly Anderson wrote:
> On Fri, Feb 18, 2011 at 6:33 AM, Richard Loosemore wrote:
>> Kelly Anderson wrote:
>>> On Thu, Feb 17, 2011 at 5:46 AM, Richard Loosemore wrote
>> You've misunderstood so very much of what is really going on here.
>
> It wouldn't be the first time. I'm here to learn. If you have
> something to teach, I am your humble student. I am quite sincere in
> this. No kidding.

This is good. I am happy to try. Don't interpret the post I just wrote
as being too annoyed (just a *little* frustrated is all). ;-)

>> There are strong theoretical reasons to believe that this approach is
>> the only one that will work, and that the practitioners of "parlor
>> tricks" will never actually be able to succeed. This isn't just
>> opinion or speculation, it is the result of a real theoretical
>> analysis.
>
> Risking Clintonese... I suppose, Richard, that this depends upon your
> definition of 'success'. I would guess that most people would declare
> that Watson already succeeded. You dismiss it as "trivial" and a
> "parlor trick", while 99% of everyone else thinks it is a great
> success already. If there is derision, I think it is because of your
> dismissive attitude about what is clearly a great milestone in
> computation, even if it turns out not to be on the path to some "true"
> AGI. I, for one, think that with another ten years or so of work, the
> Watson approach might pass some version of the Turing test.
>
> If you wrote a paper entitled "Why Watson is an Evolutionary Dead
> End", and you were convincing to your peers, I think you would get it
> published and it would be helpful to the AI community.

Well, can I point out that the numbers are not 99% in favor? Ben
Goertzel just published an essay in H+ magazine saying very much the
same things that I said here. Ben is very widely respected in the AGI
community, so perhaps you would consider comparing and contrasting my
remarks with his.

I don't want to write about Watson, because I have seen so many examples
of that kind of dead end and I have already analyzed them as a *class*
of systems. That is very important. They cannot be fought individually.
I am pointing to the pattern.

>> Also, why do you say "self-described scientist"? I don't understand
>> if this is supposed to be me or someone else or scientists in
>> general.
>
> Carl Sagan, a real scientist, said frequently, "Extraordinary claims
> require extraordinary evidence." (even though he may have borrowed the
> phrase from Marcello Truzzi.)
I understand that you are claiming to > follow the scientific method, and that you do not think of yourself as > a philosopher. If you claim to be a philosopher, stand up and be proud > of that. Some of the most interesting people are philosophers, and > there is nothing wrong with that. :-) Well, you may be confused by the fact that I wrote ONE philosophy paper. But have a look through the very small set of publications on my website. One experimental archaeology, several experimental and computational cognitive science papers. One cognitive neuroscience paper..... I was trained as a physicist and mathematician. I just finished teaching a class in electromagnetic theory this morning. I have written all those cognitive science papers. I was once on a team that ported CorelDraw from the PC to the Mac. I am up to my eyeballs in writing a software tool in OS X that is designed to facilitate the construction and experimental investigation of a class of AGI systems that have never been built before..... Isn't it a bit of a stretch to ask me to be proud to be a philosopher? :-) :-) >> And why do you assume that I am not doing experiments?! I am certainly doing that, and >> doing masive numbers of such experiments is at the core of everything I do. > > Good to hear. Your papers did not reflect that. Can you point me to > some of your experimental results? No, but I did not say that they did. It is too early to ask. Context. Physicists back in the 1980s who wanted to work on the frontiers of particle physics had to spend decades just building one tool - the large hadron collider - to answer their theoretical questions with empirical data. I am in a comparable situation, but with one billionth the funding that they had. Do I get cut a *little* slack? :-( More when I can. Richard Loosemore From spike66 at att.net Fri Feb 18 18:21:41 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 10:21:41 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: <014a01cbcf98$b1455240$13cff6c0$@att.net> .On Behalf Of Darren Greer Subject: Re: [ExI] Time magazine cover story on the singularity >>it was a jolt to see all that in something as mainstream as Time magazine. < >This may be terribly cynical of me, but I worry when any idea goes mainstream. Ja. For mainstream media, this particular Time article wasn't half bad. > I always think the media, the moguls and the Wal-marts will try and spin it enough to make a buck off it. Hmmm, so what's the bad news? I didn't even realize there was a way to make a buck off of the singularity. Kewalll. >.Although how you would make money off the singularity I don't know. If you think of one, do share. As soon as it gets the profit motive behind it, the singularity REALLY IS coming. > How about a T-Shirt that says "The Singularity is coming. Get implants!" d. Eeeexcellent Smithers. Other ideas? Darren has hit it. Commercialization is a driving force like nothing else, a quantity which has a quality all its own. Commercialization is our friend. Look what it did for Christmas. 
spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Fri Feb 18 18:40:30 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 18 Feb 2011 14:40:30 -0400 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <014a01cbcf98$b1455240$13cff6c0$@att.net> References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <014a01cbcf98$b1455240$13cff6c0$@att.net> Message-ID: >Commercialization is our friend. Look what it did for Christmas.< If you can sell Santa Claus and Justin Beiber, you can sell anything. d. 2011/2/18 spike > *?On Behalf Of *Darren Greer > *Subject:* Re: [ExI] Time magazine cover story on the singularity > > > > >>it was a jolt to see all that in something as mainstream as Time > magazine. < > > > > >This may be terribly cynical of me, but I worry when any idea goes > mainstream? > > > > Ja. For mainstream media, this particular Time article wasn?t half bad. > > > > > I always think the media, the moguls and the Wal-marts will try and spin > it enough to make a buck off it? > > > > Hmmm, so what?s the bad news? I didn?t even realize there was a way to > make a buck off of the singularity. Kewalll? > > > > >?Although how you would make money off the singularity I don't know? > > > > If you think of one, do share. As soon as it gets the profit motive behind > it, the singularity REALLY IS coming. > > > > > How about a T-Shirt that says "The Singularity is coming. Get implants!" > d. > > > > Eeeexcellent Smithers. > > > > Other ideas? Darren has hit it. Commercialization is a driving force like > nothing else, a quantity which has a quality all its own. Commercialization > is our friend. Look what it did for Christmas. > > > > spike > > > > > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Fri Feb 18 19:16:51 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 18 Feb 2011 12:16:51 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Fri, Feb 18, 2011 at 11:40 AM, Darren Greer wrote: (Keith) >>is an indication that reproduction isn't as strong a drive as we > thought.< snip > I would think, by looking at the Internet and knowing the people that I do, > that the drive to have sex may be as strong as ever. But the need in certain > populations to have progeny result from it is reduced. Once again, > technology, and the relaxation in certain cultures of tribal laws and > strictures limiting sexual behavior, have influenced the biological result, > but perhaps have not influenced the drive at all. Evolution had good reason to build in a strong drive to have sex. And in the pre birth control era that resulted in reproduction. It's also fairly clear to me that there is a drive directly for reproduction, especially in women. 
You only need to consider what one member who used to be on this group did to have an example. But it's far from clear to me that this direct drive is enough to sustain the population. It probably doesn't matter anyway. Keith From pharos at gmail.com Fri Feb 18 19:49:45 2011 From: pharos at gmail.com (BillK) Date: Fri, 18 Feb 2011 19:49:45 +0000 Subject: [ExI] Lethal future was Watson on NOVA In-Reply-To: References: Message-ID: On Fri, Feb 18, 2011 at 7:16 PM, Keith Henson wrote: > Evolution had good reason to build in a strong drive to have sex. ?And > in the pre birth control era that resulted in reproduction. > > It's also fairly clear to me that there is a drive directly for > reproduction, especially in women. ?You only need to consider what one > member who used to be on this group did to have an example. > > No. There isn't. If you look at the groups who have falling birth rates they correlate *very* strongly with women's rights and the empowerment of women. As soon as women get the power to choose they stop having children. Some might have one child, but this is below the rate required to sustain the population. You can also correlate falling birth rates with first world countries, or 'civilization'. Which also correlates with women's rights. I agree with Eugene's claim that there are sub-groups and third world nations that to-date still have high birth rates and growing populations. But it is to be expected that these high birth rates will only continue while their women remain subjugated under male domination. How long that will last is questionable. That is why I disagree strongly that advanced civilizations will be breeding like rabbits. The 'advanced' part means low reproduction by definition. If a civilization is busy breeding furiously and fighting for survival with other breeders, they have no spare capacity to get 'advanced'. Too many mouths to feed. BillK From spike66 at att.net Fri Feb 18 20:45:20 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 12:45:20 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <014a01cbcf98$b1455240$13 cff6c0$@att.net> Message-ID: <018401cbcfac$c2eb5bc0$48c21340$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer Sent: Friday, February 18, 2011 10:41 AM To: ExI chat list Subject: Re: [ExI] Time magazine cover story on the singularity >Commercialization is our friend. Look what it did for Christmas.< If you can sell Santa Claus and Justin Beiber, you can sell anything. d. I didn't sell Santa Clause and Justin Beiber, but Santa sold him to me. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Fri Feb 18 21:06:10 2011 From: jonkc at bellsouth.net (John Clark) Date: Fri, 18 Feb 2011 16:06:10 -0500 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: On Feb 18, 2011, at 12:33 PM, Darren Greer wrote: > how you would make money off the singularity I don't know. I know how to make money off the singularity, sell everything you own and borrow every nickel you can and then use the money to short bonds. But you will have to wait until we get to the point where even Mr. Joe Average expects the singularity to happen in his lifetime. When that happens we can expect a HUGE increase in interest rates, because after the singularity one of 2 things is certain to happen: 1) Paying off that huge debt will be easy with Mr. Joe Average being the master of Nanotechnology. 2) The singularity will kill Mr. Joe Average. Either way money in the future will be worth far less than money in the present to Mr. Joe Average, so the logical thing to do is cheerfully take on a crushing debt. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Fri Feb 18 21:48:13 2011 From: spike66 at att.net (spike) Date: Fri, 18 Feb 2011 13:48:13 -0800 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> Message-ID: <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> . On Behalf Of John Clark Subject: Re: [ExI] Time magazine cover story on the singularity On Feb 18, 2011, at 12:33 PM, Darren Greer wrote: how you would make money off the singularity I don't know. >.I know how to make money off the singularity, sell everything you own and borrow every nickel you can and then use the money to short bonds.Either way money in the future will be worth far less than money in the present to Mr. Joe Average, so the logical thing to do is cheerfully take on a crushing debt. John K Clark John the US is doing exactly that. When anyone points out the craziness of this, we respond with a collective "It doesn't matter, the singularity is coming." spike -------------- next part -------------- An HTML attachment was scrubbed... 
From darren.greer3 at gmail.com Fri Feb 18 23:27:36 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Fri, 18 Feb 2011 19:27:36 -0400
Subject: [ExI] Time magazine cover story on the singularity
In-Reply-To: References: <001b01cbccc0$7a50bd90$6ef238b0$@att.net> <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net>
Message-ID:

> cheerfully take on a crushing debt.<

Being a visionary, as always, I'm way ahead of you.

d.

2011/2/18 John Clark

> On Feb 18, 2011, at 12:33 PM, Darren Greer wrote:
>
> how you would make money off the singularity I don't know.
>
> I know how to make money off the singularity, sell everything you own and
> borrow every nickel you can and then use the money to short bonds. But you
> will have to wait until we get to the point where even Mr. Joe Average
> expects the singularity to happen in his lifetime. When that happens we can
> expect a HUGE increase in interest rates, because after the singularity one
> of 2 things is certain to happen:
>
> 1) Paying off that huge debt will be easy with Mr. Joe Average being the
> master of Nanotechnology.
>
> 2) The singularity will kill Mr. Joe Average.
>
> Either way money in the future will be worth far less than money in the
> present to Mr. Joe Average, so the logical thing to do is cheerfully take on
> a crushing debt.
>
> John K Clark
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

--
*There is no history, only biography.*

*-Ralph Waldo Emerson*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From darren.greer3 at gmail.com Sat Feb 19 00:00:29 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Fri, 18 Feb 2011 20:00:29 -0400
Subject: [ExI] Call To Libertarians
Message-ID:

I understand there are some libertarians in this group.

I am currently embroiled in an e-mail discussion where I find myself in a rather unique (for me) position of defending free markets and smaller government. I am a Canadian, and a proponent of socialized democracy. However, I'm not naive enough to think that full-stop socialization is a good idea. We tried that once, in the Soviet Union, and it didn't work so well. I recognize the need for competition to drive development and promote innovation.

So, being a fan of balance, I'm trying to come up with some arguments that a libertarian might give while explaining why that system could benefit mankind, especially in relation to the development of technology and the philosophies of transhumanism.

Problem is, I'm not very good at it. Anyone wanna give me their opinions on this? I will not plagiarize you. I've already stated in this discussion that I will ask some people and get back to them. It's not necessary that I win the argument, but I do think that my beliefs and preferences are simply points of view, and no better (nor worse) than those of others. This may be the point that I'm trying to make -- that libertarians are not by definition inarticulate right wingers or rabid anarchists, which seems to be the point of view of this group I'm talking with.
Darren

--
*There is no history, only biography.*

*-Ralph Waldo Emerson*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From algaenymph at gmail.com Fri Feb 18 23:36:10 2011
From: algaenymph at gmail.com (AlgaeNymph)
Date: Fri, 18 Feb 2011 15:36:10 -0800
Subject: [ExI] Time magazine cover story on the singularity
In-Reply-To: References: <201102151610.p1FGA1Xh020528@andromeda.ziaspace.com> <4D5AADA7.8060209@lightlink.com> <201102151955.p1FJto5v017690@andromeda.ziaspace.com> <4D5BEB27.7020204@lightlink.com> <38EFBD25-EB09-4B9E-8CD9-327582D57021@bellsouth.net> <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net>
Message-ID: <4D5F026A.9010106@gmail.com>

On 2/18/11 1:06 PM, John Clark wrote:
> Either way money in the future will be worth far less than money in
> the present to Mr. Joe Average, so the logical thing to do is
> cheerfully take on a crushing debt.

That's the sort of thing that got me bitched at for advocating a "passive religion" of blind faith as opposed to an "active religion" of thoughtful questioning. Note that his argument consisted of ZOMYGAWD TEH EVULRICH PEOPLE!!1! And my "friends" just sat aside and watched.

From olga.bourlin at gmail.com Sat Feb 19 02:19:21 2011
From: olga.bourlin at gmail.com (Olga Bourlin)
Date: Fri, 18 Feb 2011 18:19:21 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID:

Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)

2011/2/18 Darren Greer :
> I understand there are some libertarians in this group.
> I am currently embroiled in an e-mail discussion where I find myself in a
> rather unique (for me) position of defending free markets and smaller
> government. I am a Canadian, and a proponent of socialized democracy.
> However, I'm not naive enough to think that full-stop socialization is a
> good idea. We tried that once, in the Soviet Union, and it didn't work so
> well. I recognize the need for competition to drive development and promote
> innovation.
> So, being a fan of balance, I'm trying to come up with some arguments that a
> libertarian might give while explaining why that system could benefit
> mankind, especially in relation to the development of technology and the
> philosophies of transhumanism.
> Problem is, I'm not very good at it. Anyone wanna give me their opinions on
> this? I will not plagiarize you. I've already stated in this discussion that
> I will ask some people and get back to them. It's not necessary that I win
> the argument, but I do think that my beliefs and preferences are simply
> points of view, and no better (nor worse) than those of others. This may be
> the point that I'm trying to make -- that libertarians are not by definition
> inarticulate right wingers or rabid anarchists, which seems to be the point
> of view of this group I'm talking with.
> Darren
>
> --
> There is no history, only biography.
> -Ralph Waldo Emerson
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

From spike66 at att.net Sat Feb 19 02:56:34 2011
From: spike66 at att.net (spike)
Date: Fri, 18 Feb 2011 18:56:34 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
... On Behalf Of Olga Bourlin
Subject: Re: [ExI] Call To Libertarians

Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)

Somalia is an example of anarchy, Olga, not libertarian. Two very different things.

spike

From moulton at moulton.com Sat Feb 19 05:34:57 2011
From: moulton at moulton.com (F. C. Moulton)
Date: Fri, 18 Feb 2011 21:34:57 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
Message-ID: <4D5F5681.3040000@moulton.com>

Not exactly. First Somalia is not an anarchy in the most strict sense of the word. There is a recognized government but it only controls a very small part of the country. The rest of the country suffers from a civil war, a civil war which has gone on for about two decades. What you have is not an anarchy (ie no government); rather what you have is more than one group fighting it out to become the sole government in Somalia. To refer to Somalia as the Libertarian Paradise makes about as much sense as referring to Cambodia under the Khmer Rouge as a government paradise.

Fred

spike wrote:
> ... On Behalf Of Olga Bourlin
> Subject: Re: [ExI] Call To Libertarians
>
> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)
>
> Somalia is an example of anarchy, Olga, not libertarian. Two very different
> things. spike
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

From eugen at leitl.org Sat Feb 19 06:18:32 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Sat, 19 Feb 2011 07:18:32 +0100
Subject: [ExI] Lethal future was Watson on NOVA
In-Reply-To: References: Message-ID: <20110219061831.GT23560@leitl.org>

On Fri, Feb 18, 2011 at 07:49:45PM +0000, BillK wrote:

> I agree with Eugen's claim that there are sub-groups and third world
> nations that to date still have high birth rates and growing
> populations. But it is to be expected that these high birth rates will
> only continue while their women remain subjugated under male
> domination. How long that will last is questionable.

I am sorry, the trend is unfortunately that the responsible, self-limiting folks are eventually self-selecting into invisibility

http://www.scientificamerican.com/blog/post.cfm?id=gods-little-rabbits-religious-peopl-2010-12-22

> That is why I disagree strongly that advanced civilizations will be
> breeding like rabbits. The 'advanced' part means low reproduction by
> definition.

This is why you never meet the 'advanced'. Only the other kind, who doesn't care about your orderly world view. (The Indians sure got a nasty surprise). The US was colonized by people wielding diseases, guns and religions. The Amish have no issues using photovoltaics they don't make.

I know it's a hard concept to grasp, but evolution doesn't have a built-in direction. If sentience is holding you back, you will lose it over time and space. It's not a being sitting in a spaceship, it's one single beastie. It's only as smart as it needs to be.

> If a civilization is busy breeding furiously and fighting for survival
> with other breeders, they have no spare capacity to get 'advanced'.
> Too many mouths to feed.

It must be hard to live in a http://www.youtube.com/watch?v=Ur3CQE8xB3c world.

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
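[Editor's aside: the "self-selecting into invisibility" point above is compound growth applied to fertility differences. A toy projection, with assumed fertility rates rather than figures from the linked article:]

    # Two subpopulations start at equal size. One averages 3.0 children per
    # woman, the other 1.5; the per-generation growth factor is roughly
    # children-per-woman / 2. Small differences compound quickly.
    high, low = 1.0, 1.0
    for generation in range(1, 9):
        high *= 3.0 / 2
        low *= 1.5 / 2
        share = high / (high + low)
        print(f"generation {generation}: high-fertility share = {share:.1%}")

On these assumptions the low-fertility group falls to a fraction of a percent of the total within eight generations.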
From eugen at leitl.org Sat Feb 19 06:28:33 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Sat, 19 Feb 2011 07:28:33 +0100
Subject: [ExI] Time magazine cover story on the singularity
In-Reply-To: <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net>
References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net>
Message-ID: <20110219062833.GU23560@leitl.org>

On Fri, Feb 18, 2011 at 01:48:13PM -0800, spike wrote:

> John, the US is doing exactly that. When anyone points out the craziness of
> this, we respond with a collective "It doesn't matter, the singularity is
> coming."

The last stock market Singularity .bombed quite nicely, as you'll recall. A lot of people, particularly on this list, really thought that was it, and bought in overproportionally.

The problem with exponential growth in a limited resource world is that it frequently looks like

http://en.wikipedia.org/wiki/Bacterial_growth

We've just left exponential phase and are slowly entering stationary phase. The challenge for this particular culture is to break open this particular Petri dish while they're still able.
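[Editor's aside: the curve behind the bacterial-growth link is the logistic curve, and a minimal sketch shows the shape being described: exponential while the dish is mostly empty, flat once it is full. K, r, and the starting population are made-up parameters:]

    # Discrete logistic growth: n' = n + r*n*(1 - n/K). Early steps are
    # near-exponential; growth stalls as n approaches carrying capacity K.
    K = 1e9    # carrying capacity of the "Petri dish" (assumed)
    r = 0.5    # per-step growth rate (assumed)
    n = 1e3    # starting population (assumed)
    for step in range(1, 61):
        n += r * n * (1 - n / K)
        if step % 10 == 0:
            print(f"step {step:2d}: population {n:.3e}")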
From eugen at leitl.org Sat Feb 19 06:39:51 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Sat, 19 Feb 2011 07:39:51 +0100
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID: <20110219063951.GW23560@leitl.org>

On Fri, Feb 18, 2011 at 08:00:29PM -0400, Darren Greer wrote:

> the point that I'm trying to make -- that libertarians are not by definition
> inarticulate right wingers or rabid anarchists, which seems to be the point

I wish I could help you, but as a rabid anarchist I unfortunately can't.

> of view of this group I'm talking with.

From spike66 at att.net Sat Feb 19 06:46:05 2011
From: spike66 at att.net (spike)
Date: Fri, 18 Feb 2011 22:46:05 -0800
Subject: [ExI] Time magazine cover story on the singularity
In-Reply-To: <20110219062833.GU23560@leitl.org>
References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org>
Message-ID: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net>

... On Behalf Of Eugen Leitl
Subject: Re: [ExI] Time magazine cover story on the singularity

On Fri, Feb 18, 2011 at 01:48:13PM -0800, spike wrote:

>> John, the US is doing exactly that. When anyone points out the
>> craziness of this, we respond with a collective "It doesn't matter,
>> the singularity is coming."

>The last stock market Singularity .bombed quite nicely, as you'll recall.

How well I recall. I am just getting back to where I was back in those heady days.

>A lot of people, particularly on this list, really thought that was it, and bought in overproportionally...

We thought it was the technocalypse. I did anyway. Then the stock market crashed. It wasn't until 9/11/01 that many of us realized we still have yet another world war to fight, and this one may be worse than the three we had in the 20th century.

>We've just left exponential phase and are slowly entering stationary phase.
>The challenge for this particular culture is to break open this particular Petri dish while they're still able.

We are able. The question is will we break out while we are still willing.

spike

From moulton at moulton.com Sat Feb 19 07:23:52 2011
From: moulton at moulton.com (F. C. Moulton)
Date: Fri, 18 Feb 2011 23:23:52 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID: <4D5F7008.7020406@moulton.com>

Darren Greer wrote:
> I understand there are some libertarians in this group.

There are some people who are libertarians and then there are some who use the term with little or no comprehension of libertarian history and philosophy. I have at least a modest understanding of libertarian history and philosophy so I will attempt to provide a few comments before I get so tired that I fall asleep.

> I am currently embroiled in an e-mail discussion where I find myself
> in a rather unique (for me) position of defending free markets and
> smaller government. I am a Canadian, and a proponent of socialized
> democracy. However, I'm not naive enough to think that full-stop
> socialization is a good idea. We tried that once, in the Soviet Union,
> and it didn't work so well. I recognize the need for competition to
> drive development and promote innovation.

I will note that libertarian philosophy covers more than just economic systems and that economics is not the "starting point" of the libertarian philosophy. In my comments below I will attempt to show how this develops.

> So, being a fan of balance, I'm trying to come up with some arguments
> that a libertarian might give while explaining why that system
> could benefit mankind, especially in relation to the development of
> technology and the philosophies of transhumanism.

First a couple of high level points. There are those who hold the position that libertarianism properly understood is anarchism. There are those who hold the position that libertarianism can be either anarchism or a very limited government sometimes called a "night watchman" government. I personally have never seen a convincing argument that the limited government position is intellectually defensible. However I will attempt to provide some insight into it as best I can. As I mentioned above it should be noted that libertarian thought has often historically been divided into the "moralist" derived branch and the "consequentialist" derived branch. Usually on any particular question these two are in agreement but not always. There is not enough space to go into the details here; just be aware of it.

Now to get to your specific question about development of technology. One important aspect is the allocation of resources and the knowledge gained from markets. This is the point that Hayek and others have made over the years. One major difficulty arises when government control of part or all of an economic system distorts the feedback loops and can contribute to unwanted outcomes. Thus if a regulator keeps interest rates artificially low that might make the economy rev up just like consuming too much coffee and doughnuts can get a person revved up. But when the caffeine and sugar wear off then that is when the headache arrives. This is not to say that the government is responsible for every economic problem; certainly many people do foolish things on their own. There is no utopia. However I think that a strong argument can be made that we are better off without the distortions inherent in government regulation.
And also note that the recent financial mess did not occur in a regulatory vacuum; there were many government institutions around supposedly watching things; everyone from the SEC to the FDIC to the FED. People went to the SEC on more than one occasion and told them that Madoff was not proper but the SEC did their investigation and said there was nothing wrong. Now to be honest it should be noted that the regulatory failure we saw in the past few years is not in and of itself a conclusive argument against all regulation; it can be used at most as an argument against regulation which is not done adequately. Thus the foolish (and in some cases criminal) actions of some business on the one hand and the problems of poor regulation on the other hand are not in and of themselves sufficient to serve as a complete argument for either increased regulation or a complete free market. It is all much more complicated and nuanced but hopefully I have at least given a flavor of some of the issues.

On the topic of knowledge let me give a small example. Consider some business which is protected by various tariffs that keep out competition and the workers in the business have regulations which keep their wages high. The owner is happy because there is not much competition and the workers are enjoying the good life. But consider the knowledge problem. The children of the workers see how much their parents make and might decide to skip more education or training to go "work the assembly line with the parents" and get a really nice house because the wages are high and they can afford the mortgage. Then there is a WTO ruling that the tariff must be dropped. The owner finds out that the business was not as efficient as previously thought and the workers soon realize that on the world market their labor is not worth what they had believed. Knowledge about the relative value of labor and when and how to allocate resources are some of the things which arise out of market activity. Of course this knowledge is not perfect. Many people might miss an opportunity until one person or group figures it out. That is the nature of human activity.

When discussing 'free markets' it is important to be on guard when someone points to a non-free market and refers to it as if it was. Too often persons who advocate government crony capitalism or mercantilism fraudulently use the term 'free market'. I think that the philosopher Roderick Long (see link below) is developing an interesting way of discussing this with his terminology of left-conflationism and right-conflationism.

> Problem is, I'm not very good at it. Anyone wanna give me their
> opinions on this? I will not plagiarize you. I've already stated in
> this discussion that I will ask some people and get back to them. It's
> not necessary that I win the argument, but I do think that my beliefs
> and preferences are simply points of view, and no better (nor worse)
> than those of others. This may be the point that I'm trying to make --
> that libertarians are not by definition inarticulate right wingers or
> rabid anarchists, which seems to be the point of view of this group
> I'm talking with.
>

There are no simple answers on this however let me point you to some additional sources of information. First I suggest avoiding stuff published by the Libertarian Party; occasionally they might put out something worthwhile but unless you are well versed you can be misled.
I do not agree in total with any of the following but I can nitpick almost anything:

As a general source of ideas on libertarianism and economics I find that David Friedman usually has an interesting take on things:
http://daviddfriedman.com/

Roderick Long is a philosophy professor who has some interesting ideas and links to many others
http://aaeblog.com/about-2/
The left-conflationism and right-conflationism discussion is in the following
http://aaeblog.com/2010/12/26/how-to-do-things-with-words/

For libertarian history I recommend the podcast series (also available as transcripts) by my friend Jeff Riggenbach. Jeff covers some very interesting topics and they are easy to listen to when out for a walk:
http://mises.org/media.aspx?action=category&ID=208

And while it is not totally libertarian I find that EconTalk is an interesting set of podcasts on economics as well as occasional discussions of biology and other areas:
http://www.econtalk.org/
In particular this podcast might answer some of your questions
http://www.econtalk.org/archives/2010/10/ridley_on_trade.html
Also this is an interesting discussion of the recent financial mess
http://www.econtalk.org/archives/2010/05/roberts_on_the_2.html

And there is the always interesting Marginal Revolution
http://www.marginalrevolution.com/

I hope this info is helpful.

Fred

> Darren
>
> --
> /There is no history, only biography./
> /
> /
> /-Ralph Waldo Emerson
> /
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

From jonkc at bellsouth.net Sat Feb 19 07:38:45 2011
From: jonkc at bellsouth.net (John Clark)
Date: Sat, 19 Feb 2011 02:38:45 -0500
Subject: [ExI] Call To Libertarians
In-Reply-To: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
Message-ID: <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net>

On Feb 18, 2011, at 9:56 PM, spike wrote:

> Somalia is an example of anarchy, Olga, not libertarian. Two very different things.

Somalia is an example of chaos, anarchy just means lack of government. Chaos necessarily implies anarchy but anarchy does not necessarily imply chaos.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From giulio at gmail.com Sat Feb 19 07:47:28 2011
From: giulio at gmail.com (Giulio Prisco)
Date: Sat, 19 Feb 2011 08:47:28 +0100
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID:

Hi Darren, I am a big sympathizer of many libertarian ideas but I don't usually call myself a libertarian, and when I do I call myself a left libertarian, in the sense that I want the government out of my living room but I see nothing wrong if the government builds hospitals and highways. I am definitely _not_ a right winger.

I can consider myself as a (_non_ rabid) anarchist who sees a small government (in the sense of a small management committee and not in the sense of a big dictatorship) as a necessary evil and a practical necessity in today's world.

I guess the history of the development of the Internet shows the advantages of this approach. Public funding has been used at the beginning, but then there has been an exponential acceleration due to the absence of regulations and low entry barriers, which have permitted individuals and small teams to participate in the development.
The creativity of small spontaneous teams is always orders of magnitude higher than that of 9-to-5 workers in large companies.

2011/2/19 Darren Greer :
> I understand there are some libertarians in this group.
> I am currently embroiled in an e-mail discussion where I find myself in a
> rather unique (for me) position of defending free markets and smaller
> government. I am a Canadian, and a proponent of socialized democracy.
> However, I'm not naive enough to think that full-stop socialization is a
> good idea. We tried that once, in the Soviet Union, and it didn't work so
> well. I recognize the need for competition to drive development and promote
> innovation.
> So, being a fan of balance, I'm trying to come up with some arguments that a
> libertarian might give while explaining why that system could benefit
> mankind, especially in relation to the development of technology and the
> philosophies of transhumanism.
> Problem is, I'm not very good at it. Anyone wanna give me their opinions on
> this? I will not plagiarize you. I've already stated in this discussion that
> I will ask some people and get back to them. It's not necessary that I win
> the argument, but I do think that my beliefs and preferences are simply
> points of view, and no better (nor worse) than those of others. This may be
> the point that I'm trying to make -- that libertarians are not by definition
> inarticulate right wingers or rabid anarchists, which seems to be the point
> of view of this group I'm talking with.
> Darren
>
> --
> There is no history, only biography.
> -Ralph Waldo Emerson
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

From moulton at moulton.com Sat Feb 19 08:08:30 2011
From: moulton at moulton.com (F. C. Moulton)
Date: Sat, 19 Feb 2011 00:08:30 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <0E3C74A1-9879-453E-AEFC-5D0A8F4C1280@bellsouth.net>
Message-ID: <4D5F7A7E.2080406@moulton.com>

John Clark wrote:
> On Feb 18, 2011, at 9:56 PM, spike wrote:
>> Somalia is an example of anarchy, Olga, not libertarian. Two very
>> different things.
> Somalia is an example of chaos, anarchy just means lack of government.
> Chaos necessarily implies anarchy but anarchy does not necessarily
> imply chaos.

Actually Chaos does not necessarily imply the lack of government (ie anarchy) since chaos can exist alongside a government. And occasionally governments are the source of the chaos.

Fred

From pharos at gmail.com Sat Feb 19 07:57:47 2011
From: pharos at gmail.com (BillK)
Date: Sat, 19 Feb 2011 07:57:47 +0000
Subject: [ExI] Time magazine cover story on the singularity
In-Reply-To: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net>
References: <4D5C2DA9.9050804@lightlink.com> <3102D580-58B6-465C-AEA8-47F172299050@bellsouth.net> <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org> <002c01cbd000$aeae0a50$0c0a1ef0$@att.net>
Message-ID:

On Sat, Feb 19, 2011 at 6:46 AM, spike wrote:
> How well I recall. I am just getting back to where I was back in those
> heady days.
> We thought it was the technocalypse. I did anyway. Then the stock market
> crashed.
> It wasn't until 9/11/01 that many of us realized we still have yet
> another world war to fight, and this one may be worse than the three we had
> in the 20th century.

But it wasn't an accident, Spike. It was deliberate. And they are doing it again. The transfer of the nation's wealth into a very few hands is progressing as planned. Make sure you cash in this time before the next collapse. Wall Street makes money on the way up and on the way down. Mere mortals have much less choice. (As well as getting told to fight wars to protect the wealth of the rich).

BillK

From darren.greer3 at gmail.com Sat Feb 19 11:37:03 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Sat, 19 Feb 2011 07:37:03 -0400
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID:

Thanks for your responses. Special thanks to Fred for the run-down and the links. I will read them carefully.

The Somalia remark is exactly the type of over-simplification that I've been dealing with in the other discussion. One guy said libertarians were people who read Ayn Rand as a teenager and grew up to be self-centered jerks. But even a quick survey of it on the 'net revealed to me that it is a diverse, coherent and extensive set of beliefs, philosophies and principles that cannot easily be dismissed with a simple one-liner. The older I get the less likely I am to denigrate something because I disagree with it. First I'll try to understand it, and then maybe I'll come up with a one-liner. :)

Darren

On Fri, Feb 18, 2011 at 8:00 PM, Darren Greer wrote:

> I understand there are some libertarians in this group.
>
> I am currently embroiled in an e-mail discussion where I find myself in a
> rather unique (for me) position of defending free markets and smaller
> government. I am a Canadian, and a proponent of socialized democracy.
> However, I'm not naive enough to think that full-stop socialization is a
> good idea. We tried that once, in the Soviet Union, and it didn't work so
> well. I recognize the need for competition to drive development and promote
> innovation.
>
> So, being a fan of balance, I'm trying to come up with some arguments that
> a libertarian might give while explaining why that system could benefit
> mankind, especially in relation to the development of technology and the
> philosophies of transhumanism.
>
> Problem is, I'm not very good at it. Anyone wanna give me their opinions on
> this? I will not plagiarize you. I've already stated in this discussion that
> I will ask some people and get back to them. It's not necessary that I win
> the argument, but I do think that my beliefs and preferences are simply
> points of view, and no better (nor worse) than those of others. This may be
> the point that I'm trying to make -- that libertarians are not by definition
> inarticulate right wingers or rabid anarchists, which seems to be the point
> of view of this group I'm talking with.
>
> Darren
>
> --
> *There is no history, only biography.*
> *
> *
> *-Ralph Waldo Emerson
> *
>
>

--
*There is no history, only biography.*

*-Ralph Waldo Emerson*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rpwl at lightlink.com Sat Feb 19 14:10:32 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 19 Feb 2011 09:10:32 -0500
Subject: [ExI] Call To Libertarians
In-Reply-To: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net>
Message-ID: <4D5FCF58.4020407@lightlink.com>

spike wrote:
> ... On Behalf Of Olga Bourlin
> Subject: Re: [ExI] Call To Libertarians
>
> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)
>
> Somalia is an example of anarchy, Olga, not libertarian. Two very different
> things. spike

Only different to those who cannot understand the inevitable end-point of libertarianism. :-)

Excellent example, Olga!

Richard Loosemore

[ducks beneath parapet to get out of the way of incomings]

From darren.greer3 at gmail.com Sat Feb 19 15:17:19 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Sat, 19 Feb 2011 11:17:19 -0400
Subject: [ExI] Call To Libertarians
In-Reply-To: <4D5FCF58.4020407@lightlink.com>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com>
Message-ID:

>Only different to those who cannot understand the inevitable end-point of libertarianism.<

Just as the end-point of democracy is a stagnant bureaucratic state? The end-point of capitalism is fascism and plutocracy? The end-point of socialism is military dictatorship?

The end-point of any system is a situation of extremes and therefore not desirable. When I asked the question I made the assumption that was understood. I was looking for a bit of a nuanced interpretation, much like the one Fred gave. I understand that political discourse tends to evoke passionate responses, but I should have made myself clearer: I was looking for an intellectual response, not a politicized, emotive one. My error.

Darren

On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore wrote:

> spike wrote:
>
>> ... On Behalf Of Olga Bourlin
>> Subject: Re: [ExI] Call To Libertarians
>>
>> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)
>>
>> Somalia is an example of anarchy, Olga, not libertarian. Two very
>> different
>> things. spike
>>
>
> Only different to those who cannot understand the inevitable end-point of
> libertarianism. :-)
>
> Excellent example, Olga!
>
>
> Richard Loosemore
>
> [ducks beneath parapet to get out of the way of incomings]
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>

--
*There is no history, only biography.*

*-Ralph Waldo Emerson*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From darren.greer3 at gmail.com Sat Feb 19 15:26:01 2011
From: darren.greer3 at gmail.com (Darren Greer)
Date: Sat, 19 Feb 2011 11:26:01 -0400
Subject: [ExI] Call To Libertarians
In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com>
Message-ID:

Sorry. That last post sounded a bit harsh and I'm not usually so confrontational. What I'm saying I guess is that, if Somalia is considered by some to be libertarian, why is that so? Which brand of libertarian is it? What is the movement's history in that country? Why did the writer think it was so? What arguments for? What arguments against? Asking a lot I know, and I'm doing my own research. But I don't find much on the libertarian view regarding technology on-line and I thought some of you might have some interesting things to say about that.

Recall: I am most certainly not a libertarian. But I am interested in political systems, especially as they relate to transhumanism. I noticed when I first came here that economic issues were especially important to this group, because progress depends upon them. And economics is difficult to discuss without bringing in at least some politics.

There. I've back-pedaled enough.
:)

d.

On Sat, Feb 19, 2011 at 11:17 AM, Darren Greer wrote:

> >Only different to those who cannot understand the inevitable end-point of
> libertarianism.<
>
> Just as the end-point of democracy is a stagnant bureaucratic state? The
> end-point of capitalism is fascism and plutocracy? The end-point of
> socialism is military dictatorship?
>
> The end-point of any system is a situation of extremes and therefore not
> desirable. When I asked the question I made the assumption that was
> understood. I was looking for a bit of a nuanced interpretation, much like
> the one Fred gave. I understand that political discourse tends to evoke
> passionate responses, but I should have made myself clearer: I was looking
> for an intellectual response, not a politicized, emotive one. My error.
>
>
> Darren
>
> On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore wrote:
>
>> spike wrote:
>>
>>> ... On Behalf Of Olga Bourlin
>>> Subject: Re: [ExI] Call To Libertarians
>>>
>>> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)
>>>
>>> Somalia is an example of anarchy, Olga, not libertarian. Two very
>>> different
>>> things. spike
>>>
>>
>> Only different to those who cannot understand the inevitable end-point of
>> libertarianism. :-)
>>
>> Excellent example, Olga!
>>
>>
>> Richard Loosemore
>>
>> [ducks beneath parapet to get out of the way of incomings]
>>
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>

--
*There is no history, only biography.*

*-Ralph Waldo Emerson*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From spike66 at att.net Sat Feb 19 15:34:48 2011
From: spike66 at att.net (spike)
Date: Sat, 19 Feb 2011 07:34:48 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: <4D5FCF58.4020407@lightlink.com>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com>
Message-ID: <005901cbd04a$8ad986a0$a08c93e0$@att.net>

... On Behalf Of Richard Loosemore
...
Subject: Re: [ExI] Call To Libertarians
...
>
>> Somalia is an example of anarchy, Olga, not libertarian. Two very different things. spike

>Only different to those who cannot understand the inevitable end-point of libertarianism. :-)

>Richard Loosemore

The description of complex systems cannot be reduced to a bumper sticker. But this is one rare example of a case where the refutation can *almost* be bumper-sticker-ized:

Chaos is the endpoint not of libertarianism but rather the endpoint of its opposite, totalitarianism.

spike

From giulio at gmail.com Sat Feb 19 15:30:38 2011
From: giulio at gmail.com (Giulio Prisco)
Date: Sat, 19 Feb 2011 16:30:38 +0100
Subject: [ExI] Call To Libertarians
In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com>
Message-ID:

Very well said Darren. I usually distrust "pure" political ideologies because they tend to degenerate into fundamentalist extremes. I think there is no magic bullet, one-size-fits-all theoretical solution, and I am much more interested in pragmatic, workable and flexible solutions to actual problems.

2011/2/19 Darren Greer :
>>Only different to those who cannot understand the inevitable end-point of
>> libertarianism.<
> Just as the end-point of democracy is a stagnant bureaucratic state? The
> end-point of capitalism is fascism and plutocracy?
> The end-point of socialism is military dictatorship?
>
> The end-point of any system is a situation of extremes and therefore not
> desirable. When I asked the question I made the assumption that was
> understood. I was looking for a bit of a nuanced interpretation, much like
> the one Fred gave. I understand that political discourse tends to evoke
> passionate responses, but I should have made myself clearer: I was looking
> for an intellectual response, not a politicized, emotive one. My error.
>
> Darren
>
> On Sat, Feb 19, 2011 at 10:10 AM, Richard Loosemore
> wrote:
>>
>> spike wrote:
>>>
>>> ... On Behalf Of Olga Bourlin
>>> Subject: Re: [ExI] Call To Libertarians
>>>
>>> Darren, tell them to visit the Libertarian Paradise: SOMALIA. ;)
>>>
>>> Somalia is an example of anarchy, Olga, not libertarian. Two very
>>> different
>>> things. spike
>>
>> Only different to those who cannot understand the inevitable end-point of
>> libertarianism. :-)
>>
>> Excellent example, Olga!
>>
>>
>> Richard Loosemore
>>
>> [ducks beneath parapet to get out of the way of incomings]
>> _______________________________________________
>> extropy-chat mailing list
>> extropy-chat at lists.extropy.org
>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>
>
> --
> There is no history, only biography.
> -Ralph Waldo Emerson
>
>
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
>

From rpwl at lightlink.com Sat Feb 19 16:51:48 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 19 Feb 2011 11:51:48 -0500
Subject: [ExI] Call To Libertarians
In-Reply-To: References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com>
Message-ID: <4D5FF524.7030103@lightlink.com>

Darren Greer wrote:
>> Only different to those who cannot understand the inevitable end-point
> of libertarianism.<
>
> Just as the end-point of democracy is a stagnant bureaucratic state? The
> end-point of capitalism is fascism and plutocracy? The end-point of
> socialism is military dictatorship?
>
> The end-point of any system is a situation of extremes and therefore not
> desirable. When I asked the question I made the assumption that was
> understood. I was looking for a bit of a nuanced interpretation, much
> like the one Fred gave. I understand that political discourse tends to
> evoke passionate responses, but I should have made myself clearer: I was
> looking for an intellectual response, not a politicized, emotive one. My
> error.

I think you mistake the seriousness behind my reply (and Olga's).

Systems settle down into a balance of exchanges -- a state in which all the players locally are trying to get what they want in various ways, so that a situation emerges in which those players more or less accept a set of exchanges that satisfy them.

Looking at the list of political systems you give above -- democracy, capitalism, socialism etc. -- we can OBJECTIVELY ask questions about how those kinds of systems will settle down, given enough time. We cannot find perfectly good answers to our questions (or we would all be Hari Seldons), but we can do some "sanity checks" on the basic ideas in those systems.
One sanity check (according to people like myself and, perhaps Olga (though I make no pretence to speak for her)) yields one glaring, massive difference between the fundamental philosophy held by most libertarians and the philosophies held by those who cheer for the other political philosophies that you list.

Libertarianism contains a glaring contradiction within it, which makes it clear that it could never actually work in practice, but would instead lead to Somalia-like anarchy and chaos. In what follows I will try to explain what I mean by this.

Libertarianism cherishes the idea that "government" should be reduced to the smallest possible size, and that individuals should take full responsibility for paying for -- or cheating others out of -- the things they need. But at the same time Libertarians also want the advantages of civilization. The problem is, that the things that they want to cut or drastically reduce are the "commons" aspects of modern civilisation .... all those aspects that have to do with people coming together and realizing that it is in everyone's best interest if the community is forced to pool their resources to pay for things like roads and theaters and bridges and schools and police forces.

The core of the contradiction is that what the Libertarian wants to do is LOCALLY sensible, but globally crazy. From the point of view of the individual libertarian, nothing but good can come from getting the government out of their wallet. Every libertarian on the planet would see an immediate increase in their well-being if that happened. But that increase in their well being is predicated on the assumption that nothing else changes in the society around them: that all the balances and exchanges now established continue to operate as before. If society continues to operate as normal, the local well-being of every libertarian is immensely increased, without a shadow of a doubt, but that is only true if everything else continues to run as it always has done.

The mistake -- the glaring contradiction -- is this assumption that everything else will stay just as it is while all the libertarians are counting the new money in their pocket, and setting up their own private arrangements to pay for healthcare, to pay road tolls on every street, to hire private police forces to look after them, to pay for their kids to go to school, to pay for a snow plow to come visit their street in the winter, and so on. Why is this assumption wrong? Because the entire edifice of modern civilisation is built on that assumption about taxation and pooling of resources for the common good. Taxation and government and redistribution of wealth are what separate us from the dark ages. The concept of taxation + government + redistribution of wealth was the INCREDIBLE INVENTION that allowed human societies in at least one corner of this planet to emerge from feudal societies where everyone looked after themselves and the devil took the hindmost.

This fact about libertarianism is so easy to model, that the conclusion about "SOMALIA == the Libertarian Paradise" is almost a no-brainer. What I mean by "easy to model" is that when we try to understand the end point of other political philosophies it really is pretty hard to see exactly where they will go. But in the case of libertarianism, it only takes a few questions to start revealing that terrifying, inevitable slide toward feudalism.
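[Editor's aside: the "easy to model" claim above can be made concrete with the public-goods game from experimental economics -- the editor's choice of model, not one the author names. Voluntary contributions to a commons decay when players imitate each other while shading toward free-riding, which is the standard argument for compulsory pooling:]

    # Public-goods game: N players with endowment 10 each. Contributions
    # are multiplied by 1.6 and shared equally, so full cooperation is
    # collectively best, but each individual gains by contributing less.
    N, endowment, multiplier = 20, 10.0, 1.6
    contribution = endowment      # round-1 average contribution
    for rnd in range(1, 11):
        share = contribution * multiplier   # each player's cut of the pot
        payoff = endowment - contribution + share
        print(f"round {rnd:2d}: avg contribution {contribution:5.2f}, "
              f"avg payoff {payoff:5.2f}")
        contribution *= 0.8       # imitate the average, shaded 20% selfishly

Average payoff falls from 16 back toward the no-pooling baseline of 10 as contributions collapse; a binding contribution rule (a tax) would hold the system at the cooperative outcome.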
The questions we would ask are questions about what exactly would happen when all the libertarians set up accounts to pay for their toll-roads, healthcare, schools, snow plows etc. etc., but the vast underbelly of modern society cannot do the same because they do not have the resources. Questions about what directions the private police forces would go when they have a client base that they must make happy, rather than a hierarchy that goes up to the nation-state level. And so on. We can model those local changes quite easily because we have plenty of examples of what happens when those circumstances are set up.

So in the case of libertarianism, the answers to those questions are really REALLY easy to come up with, and they all point toward anarchy and feudalism. There are simply no good answers to those questions (i.e. no answers that clearly demonstrate that there is a way to push the system toward a stable state). This is the reason why the world has had, over the years, plenty of "democracies", "stagnant bureaucratic states", "capitalist states", "fascist states", "plutocracies", "socialist states" and "military dictatorships" ...... but not one "libertarian state". Or rather, according to the analysis of those who have thought about it in an objective way, the world HAS had many libertarian states: they were all the rage in the dark ages, and they are now springing up like wild mushrooms in a bog, in places like Somalia.

So, those were really not just shallow comments that I made, and that Olga made, for all that they were delivered with a wry smile. There is a difference between the searches for an end-point of all the various political philosophies: libertarianism is a glaringly obvious "locally-smart + globally dumb" philosophy, whereas the others are all much much harder to call.

Richard Loosemore

From lubkin at unreasonable.com Sat Feb 19 17:10:25 2011
From: lubkin at unreasonable.com (David Lubkin)
Date: Sat, 19 Feb 2011 12:10:25 -0500
Subject: [ExI] Call To Libertarians
In-Reply-To: References: Message-ID: <201102191710.p1JHADKn004123@andromeda.ziaspace.com>

Darren wrote:

>I understand there are some libertarians in this group.

It's surreal to read this. I was one of the earliest of subscribers to the original extropian list, twenty or so years ago. I was delighted to join and help build a community that shared so many of my (even then) long-standing interests. One of the ideas was that it was a place where we didn't have to defend or explain the fundamentals. And the dominant sentiment was that anarcho-capitalist libertarianism was one of them.

I recognize the drift from that here over the years, and the reasons for it, but your posting still feels weird. Like someone saying "I understand there are some Jews in Israel."

I guess the paleo-extropian label is appropriate; it's easy to feel like a living fossil.

-- David.

Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut

From rpwl at lightlink.com Sat Feb 19 17:10:22 2011
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sat, 19 Feb 2011 12:10:22 -0500
Subject: [ExI] Call To Libertarians
In-Reply-To: <005901cbd04a$8ad986a0$a08c93e0$@att.net>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net>
Message-ID: <4D5FF97E.2080006@lightlink.com>

spike wrote:
> ... On Behalf Of Richard Loosemore
> ...
> Subject: Re: [ExI] Call To Libertarians
> ...
>>> Somalia is an example of anarchy, Olga, not libertarian. Two very
>>> different things. spike
>
>> Only different to those who cannot understand the inevitable end-point of
>> libertarianism. :-)
>
>> Richard Loosemore
>
> The description of complex systems cannot be reduced to a bumper sticker.
> But this is one rare example of a case where the refutation can *almost* be
> bumper-sticker-ized:
>
> Chaos is the endpoint not of libertarianism but rather the endpoint of its
> opposite, totalitarianism.

Factually inaccurate, I would say:

Example 1: Soviet Union (totalitarian) -> Boris Yeltsin (short interregnum) -> Russia Under Putin (totalitarianism again).

Example 2: Iran under Shah (totalitarian) -> Revolution (short interregnum) -> Iran under the Mullahs (totalitarianism again).

Example 3: Iraq under Saddam Hussein (totalitarian) -> US Invasion Period (short interregnum) -> Iraq under Corrupt Shia Government with Rigged Elections (totalitarianism again, or heading fast in that direction).

Example 4: Germany under Hitler (totalitarian) -> 2nd World War (long interregnum during which GDR was totalitarian and West Germany was democratic) -> Eventually United Germany (Democracy).

This is really not looking good for your bumper sticker.

Richard Loosemore

From spike66 at att.net Sat Feb 19 17:45:34 2011
From: spike66 at att.net (spike)
Date: Sat, 19 Feb 2011 09:45:34 -0800
Subject: [ExI] Call To Libertarians
In-Reply-To: <4D5FF524.7030103@lightlink.com>
References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com>
Message-ID: <000001cbd05c$d0092520$701b6f60$@att.net>

>... On Behalf Of Richard Loosemore

>... people coming together and realizing that it is in everyone's best interest if the community is forced to pool their resources to pay for things like roads and theaters and bridges and schools and police forces...

Indeed? The critical difference in my thinking and yours is found in this one sentence. People coming together for roads, bridges, schools and police, yes. Theatres? No. That is exclusively the domain of private industry, and the root of the tension between libertarian and statist. It is not in everyone's best interest to pool resources to build theatres.

>... the conclusion about "SOMALIA == the Libertarian Paradise" is almost a no-brainer... Richard Loosemore

You said it, not me. Somalia is the criminal's paradise, not the libertarian's.

spike

From eugen at leitl.org Sat Feb 19 18:20:52 2011
From: eugen at leitl.org (Eugen Leitl)
Date: Sat, 19 Feb 2011 19:20:52 +0100
Subject: [ExI] Call To Libertarians
In-Reply-To: <201102191710.p1JHADKn004123@andromeda.ziaspace.com>
References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com>
Message-ID: <20110219182052.GD23560@leitl.org>

On Sat, Feb 19, 2011 at 12:10:25PM -0500, David Lubkin wrote:
> Darren wrote:
>
>> I understand there are some libertarians in this group.
>
> It's surreal to read this. I was one of the earliest of subscribers to
> the original extropian list, twenty or so years ago. I was delighted to

Does the list go back to 1990, or was there a dialup BBS before?

It's too bad we cannot read the early archives, but I understand why.

> join and help build a community that shared so many of my (even then)
> long-standing interests. One of the ideas was that it was a place where
> we didn't have to defend or explain the fundamentals. And the dominant
> sentiment was that anarcho-capitalist libertarianism was one of them.
>
> I recognize the drift from that here over the years, and the reasons for
> it, but your posting still feels weird.
Like someone saying "I > understand there are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; it's easy to feel like > a living fossil. It's nice to be a part of one of the longer-lived Internet communities. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From darren.greer3 at gmail.com Sat Feb 19 18:33:29 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:33:29 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219182052.GD23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: One of the ideas was that it was a place where > we didn't have to defend or explain the fundamentals. And the dominant > sentiment was that anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, I'm a newcomer to the group, David. Only a year, and like most, I came by drawing my own conclusions based on experience and observation and so by the time I got here I knew some of the fundamentals. The rest I learned quickly. The subtleties and incidentals, however, eluded me for some time and often still do. Politics--and economics--are two elusive issues that are often obliquely referenced here that I still haven't got a handle on. One of the first threads I became interested in was on patent and intellectual property rights, and though no one informed me this group used to have a libertarian bent, I could certainly sense the tendency in some of those early discussions. I also understand that political discussions were for a time here verboten because of some messiness that had occurred in the past. I'm glad that's not the case now. I believe politics, and particularly the economic outlooks that come with them, could not be more relevant to the transhumanist schema, if we can be said to have one (or two, or three.) I'm glad the list has come to a place where we can discuss these things without acrimony or prejudice. For my part, I'm just trying to understand. d. On Sat, Feb 19, 2011 at 2:20 PM, Eugen Leitl wrote: > On Sat, Feb 19, 2011 at 12:10:25PM -0500, David Lubkin wrote: > > Darren wrote: > > > >> I understand there are some libertarians in this group. > > > > It's surreal to read this. I was one of the earliest of subscribers to > > the original extropian list, twenty or so years ago. I was delighted to > > Does the list go back to 1990, or was there a dialup BBS before? > > It's too bad we cannot read the early archives, but I understand > why. > > > join and help build a community that shared so many of my (even then) > > long-standing interests. One of the ideas was that it was a place where > > we didn't have to defend or explain the fundamentals. And the dominant > > sentiment was that anarcho-capitalist libertarianism was one of them. > > > > I recognize the drift from that here over the years, and the reasons for > > it, but your posting still feels weird. Like someone saying "I > > understand there are some Jews in Israel." > > > > I guess the paleo-extropian label is appropriate; it's easy to feel like > > a living fossil. > > It's nice to be a part of one of the longer-lived Internet communities. 
> > -- > Eugen* Leitl leitl http://leitl.org > ______________________________________________________________ > ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org > 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Feb 19 18:33:52 2011 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 19 Feb 2011 13:33:52 -0500 Subject: [ExI] Call To Libertarians In-Reply-To: <000001cbd05c$d0092520$701b6f60$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> Message-ID: <4D600D10.2090008@lightlink.com> spike wrote: >> ... On Behalf Of Richard Loosemore > >> ... people coming together and realizing that it is in everyone's best > interest if the community is forced to pool their resources to pay for > things like roads and theaters and bridges and schools and police forces... > > Indeed? The critical difference in my thinking and yours is found in this > one sentence. People coming together for roads, bridges, schools and > police, yes. Theatres? No. That is exclusively the domain of private > industry, and the root of the tension between libertarian and statist. It > is not in everyone's best interest to pool resources to build theatres. The inclusion of "theaters" was strictly optional: not essential to my argument. A throwaway. So let me see if I understand: you are saying that without the word "theater" in my description, what I said bore no resemblance to the philosophy of libertarianism? Would it be more accurate, then, to say that Libertarianism is about SUPPORTING the government funding of: Roads, Bridges, Police, Firefighters, Prisons, Schools, Public transport in places where universal use of cars would bring cities to a standstill, or where poor people would otherwise be unable to escape from ghettos, The armed forces, Universities, and publicly funded scholarships for poor students, National research laboratories like the Centers for Disease Control and Prevention, Snow plows, Public libraries, Emergency and disaster assistance, Legal protection for those too poor to fight against the exploitative power of corporations, Government agencies to scrutinize corrupt practices by corporations and wealthy individuals, Basic healthcare for old people who worked all their lives for corporations who paid them so little in salary that they could not save for retirement without starving to death before they reached retirement, And sundry other programs that keep the very poor just above the subsistence level, so we do not have to step over their dead bodies on the street all the time, and so they do not wander around in feral packs, looking for middle-class people that they can kill and eat... .... but it is about NOT supporting the government funding of theaters? In that case I misunderstood, and all western democracies are more or less libertarian already, give or take the 0.0001 percent of their funding that goes toward things like theaters and opera houses. 
Richard Loosemore From darren.greer3 at gmail.com Sat Feb 19 18:41:39 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:41:39 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D5FF524.7030103@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> Message-ID: Thanks Richard. I wasn't really dismissing the comments. Only the lack of explanation behind them. I don't think it's assuming too much to ask for an explanation of a wry comment to an earnest question. So thank you for providing that. Much food for thought. I turned on the TV shortly after this discussion got cooking and the first words I heard were "Somalian pirates." Thought that was coincidental and amusing. Darren On Sat, Feb 19, 2011 at 12:51 PM, Richard Loosemore wrote: > Darren Greer wrote: > >> Only different to those who cannot understand the inevitable end-point >>> >> of libertarianism.< >> >> Just as the end-point of democracy is a stagnant bureaucratic state? The >> end-point of capitalism is fascism and plutocracy? The end-point of >> socialism is military dictatorship? >> The end-point of any system is a situation of extremes and therefore not >> desirable. When I asked the question I made the assumption that was >> understood. I was looking for a bit of a nuanced interpretation, much like >> the one Fred gave. I understand that political discourse tends to evoke >> passionate responses, but I should have made myself clearer: I was looking >> for an intellectual response, not a politicized, emotive one. My error. >> > > I think you mistake the seriousness behind my reply (and Olga's). > > Systems settle down into a balance of exchanges -- a state in which all the > players locally are trying to get what they want in various ways, so that a > situation emerges in which those players more or less accept a set of > exchanges that satisfy them. > > Looking at the list of political systems you give above -- democracy, > capitalism, socialism etc. -- we can OBJECTIVELY ask questions about how > those kinds of systems will settle down, given enough time. We cannot find > perfectly good answers to our questions (or we would all be Hari Seldons), > but we can do some "sanity checks" on the basic ideas in those systems. > > One sanity check (according to people like myself and, perhaps Olga (though > I make no pretence to speak for her)) yields one glaring, massive difference > between the fundamental philosophy held by most libertarians and the > philosophies held by those who cheer for the other political philosophies > that you list. > > Libertarianism contains a glaring contradiction within it, which makes it > clear that it could never actually work in practice, but would instead lead > to Somalia-like anarchy and chaos. In what follows I will try to explain > what I mean by this. > > Libertarianism cherishes the idea that "government" should be reduced to > the smallest possible size, and that individuals should take full > responsibility for paying for -- or cheating others out of -- the things > they need. But at the same time Libertarians also want the advantages of > civilization. The problem is that the things that they want to cut or > drastically reduce are the "commons" aspects of modern civilisation ....
all > those aspects that have to do with people coming together and realizing that > it is in everyone's best interest if the community is forced to pool their > resources to pay for things like roads and theaters and bridges and schools > and police forces. > > The core of the contradiction is that what the Libertarian wants to do is > LOCALLY sensible, but globally crazy. From the point of view of the > individual libertarian, nothing but good can come from getting the > government out of their wallet. Every libertarian on the planet would see > an immediate increase in their well-being if that happened. But that > increase in their well-being is predicated on the assumption that nothing > else changes in the society around them: that all the balances and exchanges > now established continue to operate as before. If society continues to > operate as normal, the local well-being of every libertarian is immensely > increased, without a shadow of a doubt, but that is only true if everything > else continues to run as it always has done. > > The mistake -- the glaring contradiction -- is this assumption that > everything else will stay just as it is while all the libertarians are > counting the new money in their pocket, and setting up their own private > arrangements to pay for healthcare, to pay road tolls on every street, to > hire private police forces to look after them, to pay for their kids to go > to school, to pay for a snow plow to come visit their street in the winter, > and so on. Why is this assumption wrong? Because the entire edifice of > modern civilisation is built on that assumption about taxation and pooling > of resources for the common good. Taxation and government and > redistribution of wealth are what separate us from the dark ages. The > concept of taxation + government + redistribution of wealth was the > INCREDIBLE INVENTION that allowed human societies in at least one corner of > this planet to emerge from feudal societies where everyone looked after > themselves and the devil took the hindmost. > > This fact about libertarianism is so easy to model that the conclusion > about "SOMALIA == the Libertarian Paradise" is almost a no-brainer. What I > mean by "easy to model" is that when we try to understand the end point of > other political philosophies it really is pretty hard to see exactly where > they will go. But in the case of libertarianism, it only takes a few > questions to start revealing that terrifying, inevitable slide toward > feudalism. The questions we would ask are questions about what exactly > would happen when all the libertarians set up accounts to pay for their > toll-roads, healthcare, schools, snow plows etc. etc., but the vast > underbelly of modern society cannot do the same because they do not have the > resources. Questions about what directions the private police forces would > go when they have a client base that they must make happy, rather than a > hierarchy that goes up to the nation-state level. And so on. We can model > those local changes quite easily because we have plenty of examples of what > happens when those circumstances are set up. > > So in the case of libertarianism, the answers to those questions are really > REALLY easy to come up with, and they all point toward anarchy and > feudalism. There are simply no good answers to those questions (i.e. no > answers that clearly demonstrate that there is a way to push the system > toward a stable state).
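The dynamic being described here is, in essence, the public-goods game from evolutionary game theory, and it is small enough to simulate. A minimal sketch in Python follows; every number in it -- the population size, the 1.6 multiplier on pooled contributions, the copy-a-richer-agent rule -- is an invented assumption for illustration, not something claimed anywhere in this thread:

    # Toy public-goods game: a sketch of the "LOCALLY sensible, but globally
    # crazy" dynamic described above. All parameters are invented for
    # illustration; nothing is calibrated to any real economy.
    import random

    N, ROUNDS, R = 100, 200, 1.6   # agents, rounds, public-goods multiplier (> 1)
    COST = 1.0                     # what each contributor pays per round

    def voluntary(initial_cooperators):
        """Contribution is optional, and each round every agent copies the
        strategy of one randomly chosen agent that is doing better -- the
        locally rational move, since defectors get the shared benefit free."""
        coop = [True] * initial_cooperators + [False] * (N - initial_cooperators)
        random.shuffle(coop)
        wealth = [0.0] * N
        for _ in range(ROUNDS):
            pot = sum(COST for c in coop if c) * R
            share = pot / N                    # non-excludable: everyone benefits
            for i in range(N):
                wealth[i] += share - (COST if coop[i] else 0.0)
            for i in range(N):                 # sequential imitation step
                j = random.randrange(N)
                if wealth[j] > wealth[i]:
                    coop[i] = coop[j]
        return sum(coop), sum(wealth) / N

    random.seed(1)
    left, mean_payoff = voluntary(initial_cooperators=95)
    print(f"voluntary: {left}/{N} still contribute, mean payoff {mean_payoff:.1f}")
    # enforced contribution ("taxation"): everyone pays, everyone benefits
    print(f"enforced : {N}/{N} contribute, mean payoff {ROUNDS * (R - 1) * COST:.1f}")

Since every agent receives the same share, wealth differences depend only on how often each agent has contributed, so strategies flow from low contributors to high contributors: contribution decays toward zero, and the mean payoff ends far below the enforced case. Each defection is locally rational and the aggregate is poorer for it, which is exactly the shape of the argument above.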
> > This is the reason why the world has had, over the years, plenty of > "democracies", "stagnant bureaucratic states", "capitalist states", "fascist > states", "plutocracies", "socialist states" and "military dictatorships" > ...... but not one "libertarian state". > > Or rather, according to the analysis of those who have thought about it in > an objective way, the world HAS had many libertarian states: they were all > the rage in the dark ages, and they are now springing up like wild mushrooms > in a bog, in places like Somalia. > > So, those were really not just shallow comments that I made, and that Olga > made, for all that they were delivered with a wry smile. There is a > difference between the searches for an end-point of all the various > political philosophies: libertarianism is a glaringly obvious > "locally-smart + globally dumb" philosophy, whereas the others are all much > much harder to call. > > > > Richard Loosemore > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Feb 19 18:46:42 2011 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 19 Feb 2011 14:46:42 -0400 Subject: [ExI] Call To Libertarians In-Reply-To: <4D600D10.2090008@lightlink.com> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> <4D600D10.2090008@lightlink.com> Message-ID: >Theatres? No. That is exclusively the domain of private industry, and the root of the tension between libertarian and statist.< The U.S. government did fund theatre during the '30s, through something called the Federal Theatre Project. It turned out to be too socialist for them, so they canned it. Canada has the Canada Council, which provides grants to professional artists for project creation. They don't care what you write as long as it's good. They've saved my butt a bunch of times. d. On Sat, Feb 19, 2011 at 2:33 PM, Richard Loosemore wrote: > spike wrote: > >> ... On Behalf Of Richard Loosemore >>> >> >> ... people coming together and realizing that it is in everyone's best >>> >> interest if the community is forced to pool their resources to pay for >> things like roads and theaters and bridges and schools and police >> forces... >> >> Indeed? The critical difference in my thinking and yours is found in this >> one sentence. People coming together for roads, bridges, schools and >> police, yes. Theatres? No. That is exclusively the domain of private >> industry, and the root of the tension between libertarian and statist. It >> is not in everyone's best interest to pool resources to build theatres. >> > > The inclusion of "theaters" was strictly optional: not essential to my > argument. A throwaway. > > So let me see if I understand: you are saying that without the word > "theater" in my description, what I said bore no resemblance to the > philosophy of libertarianism?
> > Would it be more accurate, then, to say that Libertarianism is about > SUPPORTING the government funding of: > >
> Roads,
> Bridges,
> Police,
> Firefighters,
> Prisons,
> Schools,
> Public transport in places where universal use of cars would bring cities to a standstill, or where poor people would otherwise be unable to escape from ghettos,
> The armed forces,
> Universities, and publicly funded scholarships for poor students,
> National research laboratories like the Centers for Disease Control and Prevention,
> Snow plows,
> Public libraries,
> Emergency and disaster assistance,
> Legal protection for those too poor to fight against the exploitative power of corporations,
> Government agencies to scrutinize corrupt practices by corporations and wealthy individuals,
> Basic healthcare for old people who worked all their lives for corporations who paid them so little in salary that they could not save for retirement without starving to death before they reached retirement,
> And sundry other programs that keep the very poor just above the subsistence level, so we do not have to step over their dead bodies on the street all the time, and so they do not wander around in feral packs, looking for middle-class people that they can kill and eat...
> > > .... but it is about NOT supporting the government funding of theaters? > > > In that case I misunderstood, and all western democracies are more or less > libertarian already, give or take the 0.0001 percent of their funding that > goes toward things like theaters and opera houses. > > > > > Richard Loosemore > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- *There is no history, only biography.* * * *-Ralph Waldo Emerson * -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Feb 19 18:59:32 2011 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 19 Feb 2011 19:59:32 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> Message-ID: I think the list has become more inclusive of soft libertarians, or left libertarians, who accept some degree of government and welfare (like me), but I like to think that personal freedom and self-ownership are still considered as fundamental values by most posters. On Sat, Feb 19, 2011 at 6:10 PM, David Lubkin wrote: > Darren wrote: > >> I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the earliest of subscribers to the > original extropian list, twenty or so years ago. I was delighted to join and > help build a community that shared so many of my (even then) long-standing > interests. One of the ideas was that it was a place where we didn't have to > defend or explain the fundamentals. And the dominant sentiment was that > anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the years, and the reasons for it, > but your posting still feels weird. Like someone saying "I understand there > are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; it's easy to feel like a > living fossil. > > > -- David. > > Easy to find on: LinkedIn · Facebook · Twitter · Quora ·
Orkut > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From hkeithhenson at gmail.com Sat Feb 19 19:05:59 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Feb 2011 12:05:59 -0700 Subject: [ExI] Lethal future was Watson on NOVA Message-ID: On Sat, Feb 19, 2011 at 1:27 AM, BillK wrote: > > On Fri, Feb 18, 2011 at 7:16 PM, Keith Henson wrote: >> Evolution had good reason to build in a strong drive to have sex. And >> in the pre-birth-control era that resulted in reproduction. >> >> It's also fairly clear to me that there is a drive directly for >> reproduction, especially in women. You only need to consider what one >> member who used to be on this group did to have an example. > > No. There isn't. You know who I am talking about? > If you look at the groups who have falling birth rates they correlate > *very* strongly with women's rights and the empowerment of women. As > soon as women get the power to choose they stop having children. Some > might have one child, but this is below the rate required to sustain > the population. Agree on the points of course. But if there was *no* direct drive for reproduction, they would have none. > You can also correlate falling birth rates with first world countries, > or 'civilization'. > Which also correlates with women's rights. > > I agree with Eugene's claim that there are sub-groups and third world > nations that to-date still have high birth rates and growing > populations. But it is to be expected that these high birth rates will > only continue while their women remain subjugated under male > domination. How long that will last is questionable. > > That is why I disagree strongly that advanced civilizations will be > breeding like rabbits. The 'advanced' part means low reproduction by > definition. > > If a civilization is busy breeding furiously and fighting for survival > with other breeders, they have no spare capacity to get 'advanced'. > Too many mouths to feed. > > BillK I don't think you are considering the future angles here. Cloning and gene editing for example, not to mention outright duplication. And if we have vastly longer lives, a low reproductive rate is a good idea. Keith From moulton at moulton.com Sat Feb 19 19:14:48 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:14:48 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219182052.GD23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: <4D6016A8.4090402@moulton.com> Eugen Leitl wrote: > Does the list go back to 1990, or was there a dialup BBS before? > > It's too bad we cannot read the early archives, but I understand > why. > > There have been various discussions about recovering the early archives. Some early posts were really great explorations of ideas. I think the Extropy Institute might have a complete (or near complete) archive; however, I understand that everyone is busy and the project never gets done. Just like the idea of scanning and posting all of the back issues of Extropy magazine.
Fred From eugen at leitl.org Sat Feb 19 19:30:30 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 20:30:30 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <4D6016A8.4090402@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> Message-ID: <20110219193030.GJ23560@leitl.org> On Sat, Feb 19, 2011 at 11:14:48AM -0800, F. C. Moulton wrote: > There have been various discussions about recovering the early archives. Some people have complete early archives, or nearly-complete early archives. The problem is that the list was closed, for very good reasons, and it would be impossible to obtain retrograde consent from all the early participants, assuming they're even still around. And there will be definitely members objecting, for abovementioned good reasons. We just have to live with that, I guess. > Some early posts were really great explorations of ideas. I think the > Extropy Institute might have a complete (or near complete) archive; > however, I understand that everyone is busy and the project never gets done. > > Just like the idea of scanning and posting all of the back issues of > Extropy magazine. I'm helping with publishing historical cryonics documents, so if I can help with that (sadly, I have only a couple dead tree copies of Extropy magazine, never having been a regular member), I'd be happy to. -- Eugen* Leitl leitl http://leitl.org ______________________________________________________________ ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org 8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE From moulton at moulton.com Sat Feb 19 19:36:42 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:36:42 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <000001cbd05c$d0092520$701b6f60$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <4D5FF524.7030103@lightlink.com> <000001cbd05c$d0092520$701b6f60$@att.net> Message-ID: <4D601BCA.6020300@moulton.com> Here I have to disagree with both Spike and Loosemore. spike wrote: >> ... On Behalf Of Richard Loosemore >> ... people coming together and realizing that it is in everyone's best >> interest if the community is forced to pool their resources to pay for >> things like roads and theaters and bridges and schools and police forces... I was going to write a long response but Spike quoted the key passage from the post. Consider the phrase "community is forced" to do things. The libertarian approach is "individuals and groups voluntarily" do things. If anyone wants an oversimplified bumper-sticker summary of the libertarian approach, it is "Anything that is peaceful". > People coming together for roads, bridges, schools and > police, yes. Theatres? No. Spike, I think you are fundamentally mistaken. There is no reason why roads, bridges, schools or police cannot be created by non-governmental means. Fred From moulton at moulton.com Sat Feb 19 19:41:14 2011 From: moulton at moulton.com (F. C. Moulton) Date: Sat, 19 Feb 2011 11:41:14 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: <20110219193030.GJ23560@leitl.org> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> Message-ID: <4D601CDA.90803@moulton.com> Eugen Leitl wrote: > And there will be definitely members objecting, for abovementioned > good reasons.
We just have to live with that, I guess. > I was under the impression that the number of early members who did not want their posts made public was relatively low, but my impression was based on casual observation, not on a rigorous survey. It would be interesting to at least make available the early posts of those who agreed. Fred From lubkin at unreasonable.com Sat Feb 19 20:15:14 2011 From: lubkin at unreasonable.com (David Lubkin) Date: Sat, 19 Feb 2011 15:15:14 -0500 Subject: [ExI] Original list Was: Re: Call To Libertarians In-Reply-To: <4D601CDA.90803@moulton.com> References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> <4D6016A8.4090402@moulton.com> <20110219193030.GJ23560@leitl.org> <4D601CDA.90803@moulton.com> Message-ID: <201102192014.p1JKEeST027600@andromeda.ziaspace.com> The terms under which the original list functioned require permission of a posting's author before dissemination beyond that list's membership. It would, however, be legitimate to share one's archives with someone else who'd been on the list at the time of a posting, and I think to someone who joined that list after the date of the posting. Anything beyond that means finding folks and getting permissions. (One of the messy questions to deal with is what if Keith was replying to and quotes something Perry said. Keith gives permission; Perry doesn't.) I am now building systems for other communities I'm part of that have similar problems. I think what I'm doing will be readily adaptable to the original list archive issue. -- David. Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut From eugen at leitl.org Sat Feb 19 20:53:08 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 21:53:08 +0100 Subject: [ExI] Call To Libertarians In-Reply-To: <005901cbd04a$8ad986a0$a08c93e0$@att.net> References: <001101cbcfe0$9e4e59f0$daeb0dd0$@att.net> <4D5FCF58.4020407@lightlink.com> <005901cbd04a$8ad986a0$a08c93e0$@att.net> Message-ID: <20110219205308.GP23560@leitl.org> On Sat, Feb 19, 2011 at 07:34:48AM -0800, spike wrote: > Chaos is the endpoint not of libertarianism but rather the endpoint of its > opposite, totalitarianism. We want neither chaos nor crystalline order; we want the boundary in-between. The edge of chaos. http://www.necsi.edu/projects/baranger/cce.pdf etc. http://www.google.com/search?hl=en&q=%22edge+of+chaos%22+entropy From eugen at leitl.org Sat Feb 19 21:32:17 2011 From: eugen at leitl.org (Eugen Leitl) Date: Sat, 19 Feb 2011 22:32:17 +0100 Subject: [ExI] Time magazine cover story on the singularity In-Reply-To: <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> References: <4D5D3C34.7080305@lightlink.com> <009301cbcf33$88a24a10$99e6de30$@att.net> <01af01cbcfb5$8b6cbaa0$a2462fe0$@att.net> <20110219062833.GU23560@leitl.org> <002c01cbd000$aeae0a50$0c0a1ef0$@att.net> Message-ID: <20110219213217.GT23560@leitl.org> On Fri, Feb 18, 2011 at 10:46:05PM -0800, spike wrote: > >The last stock market Singularity .bombed quite nicely, as you'll recall. > > How well I recall. I am just getting back to where I was back in those > heady days. Alas, the next Big One is round the corner. Or, rather, we're still in it. There's no timing the market, but if you're still in it at the time I hope you can afford writing it all off as gambling losses. > We thought it was the technocalypse. I did anyway. Then the stock market > crashed.
It wasn't until 9/11/01 that many of us realized we still have yet > another world war to fight, and this one may be worse than the three we had > in the 20th century. "We've always been at war with Eastasia"? > The challenge for this particular culture is to break open this particular > Petri dish while they're still able. > > We are able. The question is, will we break out while we are still willing. We're definitely able. Still. However, the launch window is slowly (or quickly) closing. The people look less and less up to the skies, unfortunately. I genuinely hope the private sector and new players in the developing world will take up the slack. Because, if we don't make it sometime soon, we're not going to make it at all. Not that we care, but our children will, definitely. From spike66 at att.net Sat Feb 19 21:50:13 2011 From: spike66 at att.net (spike) Date: Sat, 19 Feb 2011 13:50:13 -0800 Subject: [ExI] Call To Libertarians In-Reply-To: References: <201102191710.p1JHADKn004123@andromeda.ziaspace.com> <20110219182052.GD23560@leitl.org> Message-ID: <002401cbd07e$fce824c0$f6b86e40$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Darren Greer ... I also understand that political discussions were for a time here verboten because of some messiness that had occurred in the past. I'm glad that's not the case now. I believe politics, and particularly the economic outlooks that come with them, could not be more relevant to the transhumanist schema. d. We haven't really had a libertarian discussion here for a good while. In light of Darren's comments above, I propose a temporary open season on the specific topic of transhumanism and libertarianism. Free number of posts on all that for five days, and please don't let me down: post stuff that is well-reasoned, humor and even sarcasm allowed, but do keep it respectful and free of personal attacks on those with differing or opposing political points of view. As the open season on Watson draws to a close, post away for a few days on "Call to Libertarians." I think we can handle this like transhumanists, in a way in which we may take pride. Play ball! spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Sat Feb 19 22:20:00 2011 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 19 Feb 2011 15:20:00 -0700 Subject: [ExI] Call To Libertarians Message-ID: On Sat, Feb 19, 2011 at 11:46 AM, David Lubkin wrote: > Darren wrote: > >>I understand there are some libertarians in this group. > > It's surreal to read this. I was one of the > earliest of subscribers to the original extropian > list, twenty or so years ago. I was delighted to > join and help build a community that shared so > many of my (even then) long-standing interests. > One of the ideas was that it was a place where we > didn't have to defend or explain the > fundamentals. And the dominant sentiment was that > anarcho-capitalist libertarianism was one of them. > > I recognize the drift from that here over the > years, and the reasons for it, but your posting > still feels weird. Like someone saying "I > understand there are some Jews in Israel." > > I guess the paleo-extropian label is appropriate; > it's easy to feel like a living fossil. Welcome to the club. :-) As I recall, anarcho-capitalist libertarianism was just an underlying assumption. Libertarians come in a lot of flavors; personally, I best fit the Space Cadet (Heinlein) variation.
But as I recall, there was either relatively little discussion on the topic, or I just skipped the posts about it. Of course, early-days L5 Society members were something like 20% libertarian, and perhaps as high as 50% of the early cryonics members. Keith > -- David. > > Easy to find on: LinkedIn · Facebook · Twitter · Quora · Orkut > > > > > ------------------------------ > > Message: 8 > Date: Sat, 19 Feb 2011 12:10:22 -0500 > From: Richard Loosemore > To: ExI chat list > Subject: Re: [ExI] Call To Libertarians > Message-ID: <4D5FF97E.2080006 at lightlink.com> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > spike wrote: >> ... On Behalf Of Richard Loosemore >> ... >> Subject: Re: [ExI] Call To Libertarians >> ... >>>> Somalia is an example of anarchy, Olga, not libertarian. Two very >> different things. spike >> >>> Only different to those who cannot understand the inevitable end-point of >> libertarianism. :-) >> >>> Richard Loosemore >> >> The description of complex systems cannot be reduced to a bumper sticker. >> But this is one rare example of a case where the refutation can *almost* be >> bumper-sticker-ized: >> >> Chaos is the endpoint not of libertarianism but rather the endpoint of its >> opposite, totalitarianism. > > Factually inaccurate, I would say: > > Example 1: Soviet Union (totalitarian) -> Boris Yeltsin (short > interregnum) -> Russia Under Putin (totalitarianism again). > > Example 2: Iran under Shah (totalitarian) -> Revolution (short > interregnum) -> Iran under the Mullahs (totalitarianism again). > > Example 3: Iraq under Saddam Hussein (totalitarian) -> US Invasion > Period (short interregnum) -> Iraq under Corrupt Shia Government with > Rigged Elections (totalitarianism again, or heading fast in that direction). > > Example 4: Germany under Hitler (totalitarian) -> 2nd World War (long > interregnum during which GDR was totalitarian and West Germany was > democratic) -> Eventually United Germany (Democracy). > > This is really not looking good for your bumper sticker. > > > > Richard Loosemore