From agrimes at speakeasy.net Mon Nov 1 00:19:40 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 31 Oct 2010 20:19:40 -0400 Subject: [ExI] Flash of insight... In-Reply-To: <4CCDEE41.20706@canonizer.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> Message-ID: <4CCE079C.4010102@speakeasy.net> > And remember that there are two parts to most conscious perception. > There is the conscious knowledge, and its referent. For out of body > experiences, the knowledge of our 'spirit' or 'I' leaves our knowledge > of our body (all in our brain). > Our conscious knowledge of our body has a referent in reality, but our > knowledge of this 'spirit' does not. Surely in the future we'll be able > to alter and represent all this conscious knowledge any way we want. > And evolution surely had survival reasons for usually representing > this 'I' just behind our knowledge of our eyes. Interesting. I don't seem to have any such perception. I see what I see, I type what I type, but I'm not, metaphysically speaking, directly present in any of my own perceptions. I have no perception at all of being "inside my head" -- I am my head. =P It seems perfectly natural to me. People are always talking about this concept of "self esteem". WTF is that? I mean it's meaningless to either hold one's self in esteem or contempt. Generally, by my appearance and sometimes by my actions, I do display a lack of self-consciousness. =\ I'm not sure if that's directly related. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From jrd1415 at gmail.com Mon Nov 1 00:21:27 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Sun, 31 Oct 2010 17:21:27 -0700 Subject: [ExI] Wind Power Without the Blades In-Reply-To: <4CCADE6F.30603@satx.rr.com> References: <4CCADE6F.30603@satx.rr.com> Message-ID: On Fri, Oct 29, 2010 at 7:47 AM, Damien Broderick wrote: > Here's another improvement over the first-generation pinwheel-on-a-stick. Don't know how bird- or bat-friendly it is though. http://nextbigfuture.com/2010/10/order-of-magnitude-enhancement-of-wind.html Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From possiblepaths2050 at gmail.com Mon Nov 1 06:43:10 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 31 Oct 2010 23:43:10 -0700 Subject: [ExI] Atomic rockets science fiction/fact online source! Message-ID: This science fiction/fact website talks about the atomic rockets that were so popular in the speculative fiction of many decades past, and how to try to imbue some sound science into one's science fiction, if you want your characters to travel the galaxy in one of these "retro" vehicles.... 
http://www.projectrho.com/rocket/ John : ) From jonkc at bellsouth.net Mon Nov 1 16:48:36 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 1 Nov 2010 12:48:36 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCDE6E0.3020008@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> Message-ID: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> This universal obsession with the original makes me wonder if it could be the result of an innate flaw in our mental wiring; otherwise it's difficult to explain something like the persistent irrationality in the art market. People will happily pay 140 million dollars for an original Jackson Pollock abstract painting, but those same people wouldn't pay 5 dollars for a copy so good they couldn't tell the difference, a copy so good it would take a team of world class art experts many hours of close study to tell the difference; and even then the difference wouldn't be that one was better than the other, just that they had at last found a tiny difference between the two. Up to now that sort of erroneous thinking hasn't caused enormous problems, it just led some rich men into making some very stupid purchases, but during the singularity that sort of dementia could become much more serious. Unless you can develop software fixes to mitigate the wiring errors in your head and put aside the Mighty Original dogma then you will be dog meat in the singularity. Well..., you probably will be anyway but at least you'll have a chance. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Nov 1 18:21:20 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 01 Nov 2010 13:21:20 -0500 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> Message-ID: <4CCF0520.9000601@satx.rr.com> On 11/1/2010 11:48 AM, John Clark wrote: > This universal obsession with the original makes me wonder if it could > be the result of an innate flaw in our mental wiring; otherwise it's > difficult to explain something like the persistent irrationality in the > art market. It's very easy to understand, in a culture that fetishizes individual ownership. Once, only the wealthy could afford to pay an excellent painter to handmake a likeness of the family, the residence, the dog or the god. These were unique and occasionally were even prized for their aesthetic value. With what is called by scholars The Age of Mechanical Reproduction, suddenly a thousand or a million pleasing or useful indistinguishable objects could be turned out like chair legs. Art-as-index-of-wealth and art-as-index-of-superior taste had to adjust, valorizing the individual work, and especially the item that could not be a copy. When nanotech arrives, capable of replicating the most distinctive and rare items, this upheaval will happen again. Have you ever seen a real van Gogh? The thick raised edges of the paint, catching the light differently from different angles? 
Next to that, printed reproductions are dull, faithless traitors. If nano makes it possible to compile an exact copy in three dimensions, only the fourth will be lost--and that irretrievably, except to the most extreme tests. We'll see increasingly what we have seen as avant-garde for a century: evanescent art, performance, destruction of an art work after its creation. And in addition, a widespread downward revaluation of originals *of the art-work kind*. All of this might have some bearing on how individuals regard *themselves* as "originals", but we have no experience of nearly exact human copies other than the near resemblance of twins, triplets, etc. Certainly monozygotic "copies" of people usually have a marked fondness for each other, but they don't consider each other mutually fungible. Damien Broderick From pjmanney at gmail.com Mon Nov 1 20:32:47 2010 From: pjmanney at gmail.com (PJ Manney) Date: Mon, 1 Nov 2010 13:32:47 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 11:21 AM, Damien Broderick wrote: > When nanotech arrives, capable of replicating the most distinctive and rare > items, this upheaval will happen again. Have you ever seen a real van Gogh? > The thick raised edges of the paint, catching the light differently from > different angles? Next to that, printed reproductions are dull, faithless > traitors. If nano makes it possible to compile an exact copy in three > dimensions, only the fourth will be lost--and that irretrievably, except to > the most extreme tests. We'll see increasingly what we have seen as > avant-garde for a century: evanescent art, performance, destruction of an > art work after its creation. And in addition, a widespread downward > revaluation of originals *of the art-work kind*. I agree with Damien on most of his post. However, I disagree on the downward revaluation. Let me add something from my own experience, having been raised in the art world. Right now, you can buy copies of famous works of art, made with oil, "painted" on canvas. For about $300, I can have a "handmade" and same-size oil copy of Van Gogh's Starry Night: http://www.1st-art-gallery.com/Vincent-Van-Gogh/Starry-Night.html They don't diminish Van Gogh's original one bit. It boils down to one word: provenance. It's the most important aspect determining value in a piece AFTER rarity/culturally agreed value. Nanofabbing affects rarity. It doesn't affect provenance. And it doesn't even have to apply to art. If I clean out my attic, the items go to the trash bin or Goodwill. When they cleaned out Marilyn Monroe's attic, even her x-rays were valuable. http://www.nydailynews.com/money/2010/06/28/2010-06-28_marilyn_monroes_chest_xray_from_1954_sells_for_45000_at_las_vegas_auction.html The auctioned contents of Jackie Kennedy Onassis' attic (she apparently threw nothing away) brought a total of $50 million to her estate. Even if I owned the exact same triple-strand pearl necklace, rocking chair and fountain pens, you can bet mine wouldn't! And why should it? The buyers were purchasing history. Not jewelry, furniture or office supplies. Provenance has an important place in the art market. 
Your nanomade Van Gogh may look as good as the real thing, but was it owned by an established lineage, from the hand of Vincent, to his brother/dealer Theo, to the Van Gogh family and dealers to MOMA? http://www.moma.org/collection/provenance/provenance_object.php?object_id=79802 Or how about this Paul Gauguin masterpiece, owned by fellow artist Edgar Degas? http://www.moma.org/collection/provenance/provenance_object.php?object_id=78621 Or Picasso's famous portrait of Gertrude Stein, given in her will to the Metropolitan Museum of Art. That's as good a provenance as you're going to find! http://wings.buffalo.edu/english/faculty/conte/syllabi/377/Images/Ray_Stein.jpg http://www.nytimes.com/2010/06/12/arts/12iht-melik12.html I don't care how many portraits of Stein you're going to make in your nanofabber. The history of the original in the Met, held in Picasso's and Stein's hands and so important in art history, can't be replicated and will retain its value -- as long as no one mixes the two up and there are people with the ego to stoke and means to own it. ;-) PJ From agrimes at speakeasy.net Mon Nov 1 21:29:59 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 01 Nov 2010 17:29:59 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> Message-ID: <4CCF3157.5080602@speakeasy.net> > Unless you can develop software fixes to mitigate the > wiring errors in your head and put aside the Mighty Original dogma then > you will be dog meat in the singularity. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Can we please have a lengthy, protracted and heated argument over this last line here? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From possiblepaths2050 at gmail.com Mon Nov 1 21:57:50 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 1 Nov 2010 14:57:50 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF3157.5080602@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF3157.5080602@speakeasy.net> Message-ID: John K Clark wrote: Unless you can develop software fixes to mitigate the wiring errors in your head and put aside the Mighty Original dogma then you will be dog meat in the singularity. Well..., you probably will be anyway but at least you'll have a chance. >>> John, the odds are that you will have died of old age before the Singularity happens. I sure hope you are signed up for cryonics (and the odds are not so great for that, either)! John ; ) On 11/1/10, Alan Grimes wrote: >> Unless you can develop software fixes to mitigate the >> wiring errors in your head and put aside the Mighty Original dogma then >> you will be dog meat in the singularity. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > Can we please have a lengthy, protracted and heated argument over this > last line here? > > > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. 
> > From spike66 at att.net Mon Nov 1 21:46:50 2010 From: spike66 at att.net (spike) Date: Mon, 1 Nov 2010 14:46:50 -0700 Subject: [ExI] failure to communicate Message-ID: <000001cb7a0e$4ab444d0$e01cce70$@att.net> I saw this at my son's favorite zoo yesterday. It really got me to thinking about such things as my having walked over this access cover about 20 to 30 times before I noticed the epic fail. Millions likely walked over it and never noticed. So how is it that so much happens all around us that we never see? Or on the other hand, what kind of silly goofball actually reads manhole covers? Why is it that you and I see a pile of ants and we are all aahhh jaysus, where's my can of raid; yet Charles Darwin sees the same thing and writes the stunning seventh chapter of Origin of Species? I want to be like Darwin when I grow up. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 33659 bytes Desc: not available URL: From pharos at gmail.com Mon Nov 1 23:07:19 2010 From: pharos at gmail.com (BillK) Date: Mon, 1 Nov 2010 23:07:19 +0000 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 8:32 PM, PJ Manney wrote: > > I don't care how many portraits of Stein you're going to make in your > nanofabber. The history of the original in the Met, held in Picasso's > and Stein's hands and so important in art history, can't be replicated > and will retain its value -- as long as no one mixes the two up and > there are people with the ego to stoke and means to own it. ;-) > > I appreciate the *present* importance of provenance in the art and antiques world. People pay a million dollars for a painting with provenance because they expect to be able to sell it on to someone else for two million dollars. It's an investment. That's really the only reason to pay extra for provenance. When nanotech lets everyone have their own Van Gogh, provenance will become worthless, because there will be no way to tell if the certificate is attached to the original or a nanocopy identical down to the atomic level. (Even today expert forgers forge the provenance as well, of course). I would distinguish between provenance and 'intrinsic value'. A Walmart sweater that was once worn by George Bush is still just a Walmart sweater. BillK From possiblepaths2050 at gmail.com Mon Nov 1 23:48:29 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 1 Nov 2010 16:48:29 -0700 Subject: [ExI] An Aubrey deGrey documentary Message-ID: I realize many of you have probably already seen this, but for those of you who have not, I recommend it. A bittersweet production that tries to show both sides, and even peeks into Aubrey's inner life... 
http://video.google.com/videoplay?docid=-3329065877451441972# John From thespike at satx.rr.com Mon Nov 1 23:48:12 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 01 Nov 2010 18:48:12 -0500 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <4CCF51BC.6070708@satx.rr.com> On 11/1/2010 6:07 PM, BillK wrote: > I would distinguish between provenance and 'intrinsic value'. > A Walmart sweater that was once worn by George Bush is still just a > Walmart sweater. No, it's a Walmart sweater with cooties. From spike66 at att.net Tue Nov 2 03:12:10 2010 From: spike66 at att.net (spike) Date: Mon, 1 Nov 2010 20:12:10 -0700 Subject: [ExI] prediction for 2 November 2010 Message-ID: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> Tomorrow the US has its biennial symbolic insurgency in the form of congressional elections. I make the following prediction: a middle-of-the-road outcome, where the currently out-of-power party gains a net of 55 seats in the house and 7 (perhaps 8) in the senate. Once again, we libertarians will go home empty-handed. I predict something else as well: after tomorrow, both major parties will be surprised and disappointed with the outcome and will be accusing the other of election fraud. Stay tuned. spike From avantguardian2020 at yahoo.com Tue Nov 2 10:55:20 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Tue, 2 Nov 2010 03:55:20 -0700 (PDT) Subject: [ExI] Fusion Rocket In-Reply-To: References: Message-ID: <319941.52817.qm@web65601.mail.ac4.yahoo.com> John Grigg's post on atomic rockets inspired me to commit to virtual paper a concept design for a fusion rocket. So feel free to beat up on this idea for a while. http://sollegro.com/fusion_rocket/ Stuart LaForge "To be normal is the ideal aim of the unsuccessful." -Carl Jung From bbenzai at yahoo.com Tue Nov 2 12:42:58 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 2 Nov 2010 12:42:58 +0000 (GMT) Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: Message-ID: <495099.98419.qm@web114412.mail.gq1.yahoo.com> PJ Manney wrote: > I don't care how many portraits of Stein you're > going to make in your > nanofabber. The history of the original in the Met, > held in Picasso's > and Stein's hands and so important in art history, > can't be replicated > and will retain its value -- as long as no one mixes > the two up and > there are people with the ego to stoke and means to > own it. ;-) Let me just check that I understand this correctly. If an art dealer makes a molecularly-precise copy of a famous artwork, so that the two are literally completely indistinguishable, and mixes them up so that even he doesn't know which is the original, he has thereby destroyed something? Presumably this is only true if he admits to doing it. If he never admits to it, and nobody ever finds out, the something is not destroyed. Or am I missing something? 
Ben Zaiboc From dan_ust at yahoo.com Tue Nov 2 13:13:45 2010 From: dan_ust at yahoo.com (Dan) Date: Tue, 2 Nov 2010 06:13:45 -0700 (PDT) Subject: [ExI] failure to communicate In-Reply-To: <000001cb7a0e$4ab444d0$e01cce70$@att.net> References: <000001cb7a0e$4ab444d0$e01cce70$@att.net> Message-ID: <605562.38403.qm@web30101.mail.mud.yahoo.com> Regarding what you do when you see ants, speak for yourself. :) Regards, Dan From: spike To: ExI chat list Sent: Mon, November 1, 2010 5:46:50 PM Subject: [ExI] failure to communicate I saw this at my son's favorite zoo yesterday. It really got me to thinking about such things as my having walked over this access cover about 20 to 30 times before I noticed the epic fail. Millions likely walked over it and never noticed. So how is it that so much happens all around us that we never see? Or on the other hand, what kind of silly goofball actually reads manhole covers? Why is it that you and I see a pile of ants and we are all aahhh jaysus, where's my can of raid; yet Charles Darwin sees the same thing and writes the stunning seventh chapter of Origin of Species? I want to be like Darwin when I grow up. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Tue Nov 2 15:17:58 2010 From: pharos at gmail.com (BillK) Date: Tue, 2 Nov 2010 15:17:58 +0000 Subject: [ExI] DARPA funded 100 year starship program In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 1:15 AM, John Grigg wrote: > Well, at least DARPA seems capable of long-term thinking... > > > > > More information is now available. Apparently DARPA are NOT planning to build a starship. The commentators got a bit over-excited. Quote: DARPA's press release actually deals with HOW starships should be studied, rather than studying the starships themselves. They want help from Ames to consider the business case for a non-government organization to provide such services that would use philanthropic donations to make it happen. Quoting from DARPA's news release: "The 100-Year Starship study looks to develop the business case for an enduring organization designed to incentivize breakthrough technologies enabling future spaceflight." Quote from the press release: "We endeavor to excite several generations to commit to the research and development of breakthrough technologies and cross-cutting innovations across a myriad of disciplines such as physics, mathematics, biology, economics, and psychological, social, political and cultural sciences, as well as the full range of engineering disciplines to advance the goal of long-distance space travel, but also to benefit mankind." ------------ This may come as a surprise to many, as DARPA is a military defense agency. (!) But DARPA adds... "DARPA also anticipates that the advancements achieved by such technologies will have substantial relevance to Department of Defense (DoD) mission areas including propulsion, energy storage, biology/life support, computing, structures, navigation, and others." ------------------------- Ah-ha! That explains it. DARPA's plan is apparently to encourage private funding of breakthrough technologies that DARPA can make use of in military endeavours. So not quite so wonderful as at first sight. 
BillK From jonkc at bellsouth.net Tue Nov 2 15:53:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 11:53:56 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net><8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <1F812800-56CF-40F2-A059-15D385A29BAE@bellsouth.net> On Nov 1, 2010, at 2:21 PM, Damien Broderick wrote: > If nano makes it possible to compile an exact copy in three dimensions, only the fourth will be lost--and that irretrievably, except to the most extreme tests. I don't know what you mean by that. > All of this might have some bearing on how individuals regard *themselves* as "originals", but we have no experiences of nearly exact human copies Yes and for the same reason Evolution had little incentive to develop our emotional hunches regarding this issue so that they corresponded with reality. So if we have no experience on this matter yet, and if there is no reason to think that emotion will lead us in the correct direction, then if we are ever in a situation where it's important to make correct decisions involving the original-copy distinction we will only have logic to rely on. Even if you are so lucky as to live long enough to enter the singularity you will never survive it unless bronze age beliefs and superstitions are abandoned. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Nov 2 16:03:01 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 12:03:01 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> On Nov 1, 2010, at 4:32 PM, PJ Manney wrote: > I don't care how many portraits of Stein you're going to make in your > nanofabber. The history of the original in the Met, held in Picasso's > and Stein's hands and so important in art history, can't be replicated > and will retain its value Art will retain its value only as long as people retain irrational and downright contradictory views regarding the original and the copy, but as there is no chance such people will survive the singularity there is no chance original art with its high value will survive the singularity either. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Nov 2 16:18:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 2 Nov 2010 12:18:11 -0400 Subject: [ExI] Fusion Rocket In-Reply-To: <319941.52817.qm@web65601.mail.ac4.yahoo.com> References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: <31CBC069-35F9-4945-AFD7-873F852DA9EC@bellsouth.net> I sent this to the list back in 2002. ========================= The efficiency of a rocket depends on its exhaust velocity, the faster the better. 
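To put numbers on "faster is better": the (non-relativistic) Tsiolkovsky rocket equation, delta-v = ve * ln(m0/mf), turns an exhaust velocity and a mass ratio into the total velocity change a rocket can achieve. A quick sketch in Python, plugging in the exhaust velocities quoted in the rest of this post at an assumed mass ratio of 2 (the mass ratio is an illustrative choice, not a figure from the post):

import math

def delta_v(ve, mass_ratio):
    # Ideal Tsiolkovsky delta-v (m/s) for exhaust velocity ve (m/s)
    # and initial-to-final mass ratio m0/mf.
    return ve * math.log(mass_ratio)

engines = [("chemical (shuttle)", 4.5e3),
           ("NERVA", 8.0e3),
           ("ion", 8.0e4),
           ("fission fragments", 2.0e8)]

for name, ve in engines:
    # At 2e8 m/s (two-thirds of lightspeed) the classical formula is only
    # a rough guide; a relativistic treatment would trim the last figure.
    print("%-18s delta-v = %10.3g m/s" % (name, delta_v(ve, 2.0)))

Since delta-v scales linearly with exhaust velocity, the fission-fragment case buys more than four orders of magnitude over chemistry for the same propellant fraction.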
The space shuttle's oxygen-hydrogen engine has an exhaust velocity of about 4500 meters per second and that's pretty good for a chemical rocket; the nuclear-heated rocket called NERVA, tested in the 1960s, had an exhaust velocity of 8000 meters per second, and ion engines are about 80,000. Is there any way to do better, much better, say around 200,000,000 meters per second? Perhaps. The primary products of a fission reaction are about that fast, but if you use Uranium 235 or Plutonium 239 the large bulk of the material will absorb the primary fission products and just heat up the material, and that slows things way down. However, the critical mass for the little-used element Americium-242 (half-life about a century) is less than 1% that of Plutonium. This would be great stuff to make a nuclear bomb you could put in your pocket, but it may have other uses. In the January 2000 issue of Nuclear Instruments and Methods in Physics Research A, Yigal Ronen and Eugene Shwagerous calculate that a metallic film of Americium 242 less than a thousandth of a millimeter thick would undergo fission. This is so thin that rather than heat the bulk material, the energy of the process would go almost entirely into the speed of the primary fission products; they would go free. They figure an Americium-242 rocket could get to Mars in two weeks, not two years as with a chemical rocket. There are problems of course; engineering the rocket would be tricky and I'm not sure I'd want to be on the same continent as an Americium 242 production facility, but it's an interesting idea. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Tue Nov 2 16:00:16 2010 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 2 Nov 2010 09:00:16 -0700 Subject: [ExI] Fusion Rocket In-Reply-To: <319941.52817.qm@web65601.mail.ac4.yahoo.com> References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: The main problem is, current fusion reactor operators consider sustaining fusion for a few seconds to be "long duration", and have engineered several tricks to keep it going that long. (See the entire "inertial confinement" branch, for example: "it's 'contained' because we imploded it, for the duration of the implosion".) You'd need to keep it up for several minutes. If you could solve that problem, while keeping the fusion self-sustaining, you probably would not be far from having a commercially viable fusion reactor - as well as being much closer to a working fusion rocket. On Tue, Nov 2, 2010 at 3:55 AM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > John Grigg's post on atomic rockets inspired me to commit to virtual paper > a > concept design for a fusion rocket. So feel free to beat up on this idea > for a > while. > > http://sollegro.com/fusion_rocket/ > > > Stuart LaForge > > "To be normal is the ideal aim of the unsuccessful." -Carl Jung > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Tue Nov 2 16:09:09 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 09:09:09 -0700 Subject: [ExI] DARPA funded 100 year starship program In-Reply-To: References: Message-ID: <003b01cb7aa8$48e199b0$daa4cd10$@att.net> ... 
On Behalf Of BillK On Tue, Oct 19, 2010 at 1:15 AM, John Grigg wrote: >> Well, at least DARPA seems capable of long-term thinking... >> >> >More information is now available. Apparently DARPA are NOT planning to build a starship. The commentators got a bit over-excited. > During this discussion we saw the "suicide astronaut" concept, where the experts were saying a Mars mission would be a no-return. If you look thru the ExI archives from the 90s, that concept is all over the place in there. In about 1989 thru 1992, I did the calculations on that a hundred different ways, and every time it pointed to the same conclusion: if we land humans on the surface of Mars, even one human, in any kind of meaningful mission, it is a one-way trip. Many weights engineers in the '80s and '90s concluded likewise. Nothing has changed. spike From protokol2020 at gmail.com Tue Nov 2 16:42:44 2010 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Tue, 2 Nov 2010 17:42:44 +0100 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> Message-ID: A balloon. It's an overpriced thing, this "originality". Pretty much everything can be overpriced and ballooned for some time, and that is what happened with the "original art pieces". -------------- next part -------------- An HTML attachment was scrubbed... URL: From pjmanney at gmail.com Tue Nov 2 17:08:53 2010 From: pjmanney at gmail.com (PJ Manney) Date: Tue, 2 Nov 2010 10:08:53 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 4:07 PM, BillK wrote: > I appreciate the *present* importance of provenance in the art and > antiques world. People pay a million dollars for a painting with > provenance because they expect to be able to sell it on to someone > else for two million dollars. It's an investment. That's really the > only reason to pay extra for provenance. No, it's not. You're missing the psychology behind the entire art, antique and collectibles markets. Lots of people buy provenanced items because 1) they're crazy fans of the creator or previous owner; 2) they need to feel the item in THEIR hot little hands and its proximity brings them that much closer to the fame/infamy/whatever associated with the object; 3) the ego-investment of owning it outstrips the financial investment (much more common than you think). The investment value of a Babe Ruth baseball means squat to a rabid Yankees fan. And owning a famous Picasso (there aren't a lot of famous ones) makes its [male] owner feel his [male] member swell with pride... ;-) If you ever spent time at Sotheby's, Christie's or any high-powered auction house and watched the insanity all around you, you'd get what I mean. Real collectors don't care squat about increasing their investment. Once they own it, it's THEIRS. [Daffy Duck: "Go, go, go! 
Mine, mine, mine!"] Those who buy for investment -- and there are many these days -- are simply acquisitive and usually only the ego/genital-inflation applies. [Paging Steve Wynn...] But that doesn't mean there isn't some bat-s#!t crazy collector waiting in the wings to buy it if Wynn doesn't. You need to separate the post-scarcity economics of everyday crap from the really unusual items. Almost all stuff will instantly lose value. We've seen the beginning of this already, as when eBay entered the marketplace and suddenly, the "rarity" wasn't so rare anymore and prices dropped like buckshot-filled ducks from the sky. But the insanely special item will retain value IF YOU CAN PROVE IT IS WHAT IT CLAIMS. That's not impossible. Don't think identification based on atomic structure. Think identification based on proof of location/ownership. Then provenance is the only thing that's important. > When nanotech lets everyone have their own Van Gogh, provenance will > become worthless, because there will be no way to tell if the > certificate is attached to the original or a nanocopy identical down > to the atomic level. > (Even today expert forgers forge the provenance as well, of course). Yes, forgers do forge provenance -- in fact, most dealers forge items and provenance ALL THE TIME and MOST COLLECTORS KNOW THAT -- it's up to the collector to make sure the dealer is not full of crap. Big-time collecting is not for the faint of heart, ignorant or gullible. Which is why now, as in the future, the protection of original objects is a business in itself. As future technology makes originals harder to forge, future technology (and sleuthing) will make verification possible. Think of what's at stake in the market. The guys who pay hundreds of millions are willing to protect their investment. Or their passion. Or their privates. Which is what is really at stake. ;-) > I would distinguish between provenance and 'intrinsic value'. > A Walmart sweater that was once worn by George Bush is still just a > Walmart sweater. And that's why provenance IS important. Right now, GWB's sweat stains are worth money to someone, not the sweater. Picasso's real fingerprints are worth money to many people. Not the reproduction of them. These things may not have value to you, but based on collecting psychology, I am willing to bet money something immensely cool, like the originals of Van Gogh's Starry Night or Picasso's Guernica will have value in a nanofabbed future. Now, all this goes out the window in a post-apocalyptic future, when we're using Shakespeare's First Folio to wipe our buttocks. PJ From x at extropica.org Tue Nov 2 17:22:19 2010 From: x at extropica.org (x at extropica.org) Date: Tue, 2 Nov 2010 10:22:19 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Mon, Nov 1, 2010 at 1:32 PM, PJ Manney wrote: > The history of the original in the Met, held in Picasso's > and Stein's hands and so important in art history, can't be replicated > and will retain its value -- as long as no one mixes the two up and > there are people with the ego to stoke and means to own it. ?;-) Against my better judgment I reenter the perennial identity debates. 
The value of the "original", whether an object of art or a human agent, is based entirely on perceived status--very real in terms of our evolutionarily derived nature and cultural context but nothing intrinsic. Yes, the history may be important information, but it's NOT a property of the object. The meaning of anything lies not in what it "is", but in what it does, as perceived in relation to the values of some observer, even when it is the observer. We see through the eyes of our ancestors, for valid evolutionary reasons, just as our present system of social decision-making is based on competition over scarcity rather than cooperation for abundance; artwork and jewelry are prized more for their rarity than for their capacity to inspire; and the "self" is considered discrete and essential despite the synergistic advantages of diverse agency acting on behalf of an entirely fictitious entity. Recognizing this is not to diminish the assumed "intrinsic" value of the art or the person, but to open up new opportunities for meaningful interaction with what is ultimately only perceived patterns of information. - Jef From x at extropica.org Tue Nov 2 17:38:19 2010 From: x at extropica.org (x at extropica.org) Date: Tue, 2 Nov 2010 10:38:19 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Tue, Nov 2, 2010 at 10:08 AM, PJ Manney wrote: > Lots of people buy provenanced > items because 1) they're crazy fans of the creator or previous owner; > 2) they need to feel the item in THEIR hot little hands and its > proximity brings them that much closer to the fame/infamy/whatever > associated with the object; 3) the ego-investment of owning it > outstrips the financial investment (much more common than you think). > The investment value of a Babe Ruth baseball means squat to a rabid > Yankees fan. ?And owning a famous Picasso (there aren't a lot of > famous ones) makes its [male] owner feel his [male] member swell with > pride... ?;-) Yes. Just as the alpha chimp defends his mating privileges. But what of the bonobo, more inclined to give and receive favors...? > You need to separate the post-scarcity economics of everyday crap from > the really unusual items. ?Almost all stuff will instantly lose value. Yes, referring to items valued for function rather than status. >?But the > insanely special item will retain value IF YOU CAN PROVE IT IS WHAT IT > CLAIMS. Not if the values of the agent have evolved from hording to giving, taking to producing, narrow to broad self-interest. And this need not be at the biological level. A stronger driver and reinforcer of such change is a society and culture that rewards more altruistic behavior and we're already on that path. 
- Jef From msd001 at gmail.com Tue Nov 2 18:17:43 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 2 Nov 2010 14:17:43 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: I'm not sure originality matters in the sense of "this thing was created first" as much as the novelty "this thing is unlike anything that preceded it." It will be difficult to maintain uniqueness in a nanofabbed world but if the artist sells new works under a non-disclosure agreement and copies show up everywhere then the artist may have a legal case against the purchaser. I doubt even the singularity will be enough to stop lawyers from making money. I wonder how exact a copy this supposed nanofab future will produce. ex: There is considerable notoriety in the world of 'high fashion' despite the fact that anyone clever enough to cut cloth and use a sewing machine could theoretically reproduce those articles worn by Paris runway models. Will the owner of the current 'original' Van Gogh allow it to be scanned to the molecular level to facilitate the perfect copy? Until we have the ability to rearrange subatomic particles to literally create gold, such materials will continue to have a material worth that could retain inherent value. Conquistadors hammered Aztec/Inca gold statues into bricks for easier transport of the raw metal with no regard for the production items they were destroying. Those items would be worth far more than their weight in gold if found today. If found in the far future, are they again valued only for the weight of their materials? I guess if they could be copied to data and later reproduced at will, there's no inherent value in the item (assuming the pattern is not lost). I suppose this necessitates having the mass converted losslessly to energy and the energy credit applied to the owner of the converted object. Even if this wondrous violation of physics becomes possible, greedy bankers (or politicians) will take a small fee during the transaction process. So even with a magical upload of mass to a communal energy pool there will (likely) be a fee directed to the bank that manages your share of the pool, there will be lawyers' fees for protecting novelty and uniqueness rights (as well as prosecuting violation of those rights) and politicians to tax individuals' consumption of the communal energy pool to download items back into physical reality. This post-singularity scenario isn't even zero-sum; it's negative-sum. From spike66 at att.net Tue Nov 2 18:29:03 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 11:29:03 -0700 Subject: [ExI] hot processors? Message-ID: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> Question please for you microprocessor hipsters. I retired a seven-year-old desktop and replaced it with an HP Pavilion dv7 notebook. I have plenty of sims that I run on a regular basis, ones that need to run overnight. I ran a new one yesterday (it's an Excel macro) and found that it runs about six times faster than the 7-yr-old desktop. However... after about half an hour it conked. It didn't actually crash, in fact Excel didn't even stop. When I touched the mouse this morning, it resumed right where it left off, but it did nothing all night. 
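A possible workaround, assuming the stall is Windows power management idling the machine rather than a thermal cutout (an assumption; nothing above establishes the cause): ask the OS to stay awake for the duration of the run via the Win32 SetThreadExecutionState call. A minimal Python sketch:

import ctypes

# Win32 execution-state flags (winbase.h)
ES_CONTINUOUS      = 0x80000000  # keep the requested state until cleared
ES_SYSTEM_REQUIRED = 0x00000001  # don't let the system idle into sleep

def keep_awake():
    # Tell Windows not to sleep while a long computation runs.
    ctypes.windll.kernel32.SetThreadExecutionState(
        ES_CONTINUOUS | ES_SYSTEM_REQUIRED)

def allow_sleep():
    # Restore normal power management when the run is done.
    ctypes.windll.kernel32.SetThreadExecutionState(ES_CONTINUOUS)

An Excel macro could make the same kernel32 call through a VBA Declare statement, or the power plan's sleep timeout can simply be set to "Never" for AC power.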
Is that a feature of laptops? Can the batteries run down while the thing is plugged in to AC? Is there any reason why a laptop would not run continuously overnight? It put out a lot of heat while it was running: perhaps there is some kind of thermal protection? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrd1415 at gmail.com Tue Nov 2 18:24:59 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 2 Nov 2010 11:24:59 -0700 Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations Message-ID: All this talk of aliens got me thinking, and so a question popped into my head: "How old," I wondered, "is Gliese 581?" Googled it. Wikipediaed it. Bingo! Citations 5 and 7 as follows: 5. "Star: Gl 581". Extrasolar Planets Encyclopaedia. http://exoplanet.eu/star.php?st=Gl+581. Retrieved 2009-04-27. "Mass 0.31 Msun, Age 8 (+3/-1) Gyr" 7. Selsis 3.4 page 1382 "lower limit of the age that, considering the associated uncertainties, could be around 7 Gyr", "preliminary estimate", "should not be above 10-11 Gyr" ANSWER: 7-11 billion years. Whereas our little neighborhood is a mere 4 billion years old. And of course, Gliese 581 is in the news lately on account of Gliese 581g. I don't have to tell you where I'm going with this, do I? Hint: Three to seven billion years head start. Oh, and by the way, you shouldn't assign military personnel more credibility than they deserve. At best they live in a bubble, at worst they're full-on Kool-Aid junkies. Been there. Seen it. Generalizations -- particularly worshipful ones -- aren't helpful. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From pjmanney at gmail.com Tue Nov 2 19:59:34 2010 From: pjmanney at gmail.com (PJ Manney) Date: Tue, 2 Nov 2010 12:59:34 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <944B3FA3-1909-4669-A880-2022A7E10837@bellsouth.net> Message-ID: 2010/11/2 John Clark : > Art will retain its value only as long as people retain irrational and > downright contradictory views regarding the original and the copy, EXACTLY!!! Most of the list (as usual) is confusing the rationality of the Gedankenexperiment with the irrationality of real, on-the-ground human behavior. > but as > there is no chance such people will survive the singularity there is no > chance original art with its high value will survive the singularity > either. I'm not assuming the singularity. Nanofabbers don't define the singularity IMHO, because they don't assume ever-increasing AGI. I'm assuming post-scarcity economics. BIG difference. 
PJ From pjmanney at gmail.com Tue Nov 2 20:10:44 2010 From: pjmanney at gmail.com (PJ Manney) Date: Tue, 2 Nov 2010 13:10:44 -0700 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: On Tue, Nov 2, 2010 at 10:38 AM, wrote: > Not if the values of the agent have evolved from hoarding to giving, > taking to producing, narrow to broad self-interest. And this need not > be at the biological level. A stronger driver and reinforcer of such > change is a society and culture that rewards more altruistic behavior > and we're already on that path. You and I have talked at great length about Non Zero Sum behavior, etc. And while I fervently agree with you and Robert Wright that the arrow of history has demonstrated an increase of empathetic and altruistic behavior and increased context (for many reasons), I think nanofabbers will occur too soon in our future for us to have evolved either biologically or culturally beyond our chimp-brains entirely. PJ From sparge at gmail.com Tue Nov 2 19:47:23 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 2 Nov 2010 15:47:23 -0400 Subject: [ExI] hot processors? In-Reply-To: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> Message-ID: 2010/11/2 spike > > > Is that a feature of laptops? Can the batteries run down while the thing > is plugged in to AC? Is there any reason why a laptop would not run > continuously overnight? It put out a lot of heat while it was running: > perhaps there is some kind of thermal protection? > I suspect it's some kind of fancy power-saving mode. You can probably disable that while it's plugged in. You might also want to consider keeping it on a laptop cooler when it's running unattended for a long time to reduce the fire hazard. -Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Nov 2 20:13:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 15:13:40 -0500 Subject: [ExI] more altruistic behavior In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <4CD070F4.1070108@satx.rr.com> On 11/2/2010 12:38 PM, x at extropica.org wrote: > A stronger driver and reinforcer of such > change is a society and culture that rewards more altruistic behavior > and we're already on that path. Hahahahahahahahaha! ( uhrgh, groans Krusty ) Well, let's see the results of today's US elections for an index. Damien Broderick [yes, I know, just a blip in the trajectory from appalling-horror-then to somewhat-moderated-horror-now] From stefano.vaj at gmail.com Tue Nov 2 20:04:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 2 Nov 2010 21:04:02 +0100 Subject: [ExI] Flash of insight... 
In-Reply-To: <4CCDE6E0.3020008@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> Message-ID: On 31 October 2010 23:00, Damien Broderick wrote: > Also interesting that in NDE reports, many people claim to experience > themselves as "floating above" their damaged bodies (although still > "visuo"-centric, I gather). > I believe there were experiments a couple of years ago inducing out-of-body "delocalisation" in perfectly healthy people. Interesting, but not such a big deal, IMHO. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 2 19:59:42 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 2 Nov 2010 20:59:42 +0100 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: 2010/10/31 John Clark > Actually it's quite difficult to come up with a scenario where the copy DOES > instantly know he is the copy. > > Mmhhh. Nobody ever feels like a copy. What you could become aware of is that somebody forked in the past (as in "a copy left behind"). That he is the "original" is a matter of perspective... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 2 20:28:20 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 2 Nov 2010 21:28:20 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/2 Adrian Tymes > The main problem is, current fusion reactor operators consider sustaining > fusion > for a few seconds to be "long duration", and have engineered several tricks > to keep > it going that long. > What's wrong with pulse propulsion, detonating H-bombs one after another, V1-style? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan_ust at yahoo.com Tue Nov 2 20:36:28 2010 From: dan_ust at yahoo.com (Dan) Date: Tue, 2 Nov 2010 13:36:28 -0700 (PDT) Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations In-Reply-To: References: Message-ID: <524986.75237.qm@web30106.mail.mud.yahoo.com> I recall a recent letter or article in _Science_ or _Nature_ that questioned whether there is a Gliese 581g after all. The data appear to be ambiguous on this. Regards, Dan ----- Original Message ---- From: Jeff Davis To: ExI chat list Sent: Tue, November 2, 2010 2:24:59 PM Subject: [ExI] Age of Gliese 581 was Re: Retired military officers come forward about UFO visitations All this talk of aliens got me thinking, and so a question popped into my head: "How old," I wondered, "is Gliese 581?" Googled it. 
Wikipediaed it. Bingo! Citations 5 and 7 as follows: 5. "Star: Gl 581". Extrasolar Planets Encyclopaedia. http://exoplanet.eu/star.php?st=Gl+581. Retrieved 2009-04-27. "Mass 0.31 Msun, Age 8 (+3/-1) Gyr" 7. Selsis 3.4 page 1382 "lower limit of the age that, considering the associated uncertainties, could be around 7 Gyr", "preliminary estimate", "should not be above 10-11 Gyr" ANSWER: 7-11 billion years. Whereas our little neighborhood is a mere 4 billion years old. And of course, Gliese 581 is in the news lately on account of Gliese 581g. I don't have to tell you where I'm going with this, do I? Hint: Three to seven billion years head start. Oh, and by the way, you shouldn't assign military personnel more credibility than they deserve. At best they live in a bubble, at worst they're full-on Kool-Aid junkies. Been there. Seen it. Generalizations -- particularly worshipful ones -- aren't helpful. Best, Jeff Davis "Everything's hard till you know how to do it." Ray Charles From thespike at satx.rr.com Tue Nov 2 21:11:59 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 16:11:59 -0500 Subject: [ExI] Flash of insight... In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> Message-ID: <4CD07E9F.8040700@satx.rr.com> On 11/2/2010 3:04 PM, Stefano Vaj wrote: > I believe there were experiments a couple of years ago inducing > out-of-body "delocalisation" in perfectly healthy people. Interesting, > but not such a big deal, IMHO. It's only a big deal given that several people who seemed to think that sense of identity is innately constructed as being *behind your eyes* might be wrong about how this actually works at a deep level. From scerir at alice.it Tue Nov 2 21:27:20 2010 From: scerir at alice.it (scerir) Date: Tue, 2 Nov 2010 22:27:20 +0100 Subject: [ExI] hot processors? In-Reply-To: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> Message-ID: <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> "spike": It put out a lot of heat while it was running: perhaps there is some kind of thermal protection? # I had several problems (i.e. laptop running very, very slowly) due to high temperatures this summer. Now I use something like this: http://www.laptoptoys.net/lapcool_tx_adjustable_notebook_stand.html From possiblepaths2050 at gmail.com Tue Nov 2 22:21:32 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 2 Nov 2010 15:21:32 -0700 Subject: [ExI] more altruistic behavior In-Reply-To: <4CD070F4.1070108@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <4CD070F4.1070108@satx.rr.com> Message-ID: Damien Broderick wrote: Well, let's see the results of today's US elections for an index. [yes, I know, just a blip in the trajectory from appalling-horror-then to somewhat-moderated-horror-now] >>> Hey Damien, at least I voted today! : ) Oh, but am I merely contributing to the overall problem??? 
John On 11/2/10, Damien Broderick wrote: > On 11/2/2010 12:38 PM, x at extropica.org wrote: > >> A stronger driver and reinforcer of such >> change is a society and culture that rewards more altruistic behavior >> and we're already on that path. > > Hahahahahahahahaha! ( uhrgh, groans Krusty ) > > Well, let's see the results of today's US elections for an index. > > Damien Broderick > > [yes, I know, just a blip in the trajectory from appalling-horror-then > to somewhat-moderated-horror-now] > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From lists1 at evil-genius.com Tue Nov 2 21:23:25 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 02 Nov 2010 14:23:25 -0700 Subject: [ExI] Fire and evolution (was hypnosis) Message-ID: <4CD0814D.3040806@evil-genius.com> From: "spike" "I have long pondered if speciation between humans and chimps was accelerated by the fact that for some reason the protohumans figured out that little burning bush trick, and the chimps didn't, or just couldn't master it. This would represent the technology segregation we talk about today, that separates those humans who use electronics from those who do not. Today it is called the digital divide. Back then it was what we might call the conflagration chasm." That would be surprising, as the earliest current evidence for the domestication of fire is ~1.7 million years ago, and that is hotly disputed: many archaeologists put it ~400,000 years ago. All these dates are long, long after the human/chimp/bonobo split 6-7 million years ago. Of course, the progression of protohuman evolution from the split onward had many different branches, and was not a neat linear sequence...there were many species of Australopithecus and Homo which died out. So Spike's hypothesis may well be correct for a more recent evolutionary divide. From lists1 at evil-genius.com Tue Nov 2 21:09:14 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 02 Nov 2010 14:09:14 -0700 Subject: [ExI] Counterfeits (Was: THE MIGHTY ORIGINAL) In-Reply-To: References: Message-ID: <4CD07DFA.5040802@evil-genius.com> This reminds me of the old conundrum: "Who is the most successful counterfeiter in history?" > From: Ben Zaiboc > > If an art dealer makes a molecularly-precise copy of a > famous artwork, so that the two are literally > completely indistinguishable, and mixes them up so > that even he doesn't know which is the original, he > has thereby destroyed something? > > Presumably this is only true if he admits to doing it. > If he never admits to it, and nobody ever finds out, > the something is not destroyed. > > Or am I missing something? From atymes at gmail.com Tue Nov 2 22:13:07 2010 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 2 Nov 2010 15:13:07 -0700 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/2 Stefano Vaj > 2010/11/2 Adrian Tymes > >> The main problem is, current fusion reactor operators consider sustaining >> fusion >> for a few seconds to be "long duration", and have engineered several >> tricks to keep >> it going that long. >> > > What's wrong in a pulse propulsion detonating H-bombs one after another, > V1-style? > > That didn't seem to be what was proposed here, nor is that really V1-style. What you're talking about was once called Project Orion. 
It could work, in theory, especially if you kept it outside the atmosphere to avoid radiation concerns - but the major need for rockets today is for ones that can work inside the atmosphere, to get people and things to orbit without riding the extreme edge of performance. What was illustrated here would be safe to use inside the atmosphere: no or minimally radioactive exhaust (i.e., radiation-safe if you're far enough away that the heat alone won't fry you). The problem is keeping it lit for about 10 minutes (the typical length of time it takes to achieve orbit).
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Tue Nov 2 23:13:18 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 16:13:18 -0700 Subject: [ExI] hot processors? In-Reply-To: <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> Message-ID: <000701cb7ae3$894e3540$9bea9fc0$@att.net>
"spike": >>It put out a lot of heat while it was running: perhaps there is some kind of thermal protection?
># I had several problems (ie laptop running very very slow) due to hot temperature this summer. Now I use something like this: http://www.laptoptoys.net/lapcool_tx_adjustable_notebook_stand.html
OK, I just got back from the local electronics merchant where I purchased a notebook cooler. Let's see if this helps. If this machine fails to run all night, I will need to rethink my strategy on using a laptop, and it may cause me to rethink the notion of the singularity. We may be seeing what really is an S-curve in computing technology, where we are approaching a limit of calculations per watt of power input. Or not, I confess I haven't followed it in the past 5 yrs the way I did in my misspent youth. Are we still advancing in calculations per watt? spike
From brent.allsop at canonizer.com Wed Nov 3 02:51:05 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Tue, 2 Nov 2010 20:51:05 -0600 Subject: [ExI] Flash of insight... In-Reply-To: <4CCE079C.4010102@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID:
Alan, It is certainly possible there is some amount of diversity in the way people consciously represent themselves. So you don't have a feeling of looking out of your eyes? And can you imagine what an out of body experience might be like?
Thanks, Stefano, for mentioning the scientists who were recently able to so easily induce out-of-body experiences. Here is one reference to some of this work in science daily: http://www.sciencedaily.com/releases/2007/08/070823141057.htm
Alan, I bet you'd have fun if you could get a headset and camera setup like that, so you could experience such yourself. Certainly experiencing this would be very enlightening to everyone. I'm always chuckling at how people are so clueless when they talk about having a 'spirit' or an 'out of body experience' in the traditional religious way. Everyone assumes such doesn't have to have any knowledge. The referent or reality isn't nearly as important as the knowledge of such - whether veridical or not.
All this induction of out-of-body experiences is exactly as predicted to be possible by the emerging expert consensus "Representational Qualia Theory", and as was described in the 1229 story, written well before such science was demonstrated. And we surely haven't seen the last of this type of stuff - wait till we start effing the ineffable, and start learning just how diverse various people's conscious experiences of the world, their bodies, and their spirits are. I look forward to soon knowing firsthand just how diverse your experience of yourself is, Alan, compared to my own. Brent Allsop
2010/10/31 Alan Grimes > > And remember that there are two parts to most conscious perception. > > There is the conscious knowledge, and it's referent. For out of body > > experiences, the knowledge of our 'spirit' or 'I' leaves our knowledge > > of our body (all in our brain). > > > Our conscious knowledge of our body has a referent in reality, but our > > knowledge of this 'spirit' does not. Surely in the future we'll be able > > to alter and represent all this conscious knowledge any way we want. > > And evolution surely had survivable reasons for usually representing > > this 'I' just behind our knowledge of our eyes. > > Interesting. > > I don't seem to have any such perception. I see what I see, I type what > I type, but I'm not, metaphysically speaking, directly present in any of > my own perceptions. > > I have no perception at all of being "inside my head" -- I am my head. > =P It seems perfectly natural to me. > > People are always talking about this concept of "self esteem" WTF is > that? I mean it's meaningless to either hold one's self in esteem or > contempt. > > Generally, by my appearance and sometimes by my actions, I do display a > lack of self-consciousness. =\ I'm not sure if that's directly related. > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > >
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Wed Nov 3 03:07:01 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 20:07:01 -0700 Subject: [ExI] hot processors? In-Reply-To: References: Message-ID: <002b01cb7b04$3035a9e0$90a0fda0$@att.net>
-----Original Message----- From: Tomasz Rola [mailto:rtomek at ceti.com.pl] ... Subject: Re: [ExI] hot processors? On Tue, 2 Nov 2010, spike wrote: > "spike": > >>It put out a lot of heat while it was running: perhaps there is some > >>kind of thermal protection? ...
>...1. You sure this is about cpu temperature? I don't recall you giving any figures, so how do you know it?
Don't know this, just a theory. Turned out wrong. Read on.
>5. Wrt switching off, check your power settings in Windows (and in BIOS, too, if we are at it). If you plan to run something at night, you don't want the thing to hibernate two hours after you go to bed. Just tell it to stay always on while on A/C power...
Thanks! Did this. It had a default to turn off after half an hour even if plugged in. I told it to stay the heck on and WORK, all night, or until I tell it to stop. In return I bought it a nice laptop cooler, so it should be eager to work for me.
>6. AFAIK there is no way batteries could go low while you are plugged to the wall. Unless something is broken...
OK cool, I thought that would be the case, but didn't know for sure.
>BTW, you don't want to turn fancy screensaver in your laptop. Instead, you may want to blank and switch off the display after some no-activity period... I have the laptop driving a big screen, with the laptop lid closed. >Now, you can run your excel sim and have a look on cpu temps given by NHC... Thanks Tomasz, this is cool. The laptop looks like it is about 6 times faster than the desktop it replaces. spike From msd001 at gmail.com Wed Nov 3 03:33:41 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 2 Nov 2010 23:33:41 -0400 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <4CD0814D.3040806@evil-genius.com> References: <4CD0814D.3040806@evil-genius.com> Message-ID: On Tue, Nov 2, 2010 at 5:23 PM, wrote: > That would be surprising, as the earliest current evidence for the > domestication of fire is ~1.7 million years ago, and that is hotly disputed: domestication of fire is hotly disputed? nice. From thespike at satx.rr.com Wed Nov 3 03:45:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 22:45:57 -0500 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: References: <4CD0814D.3040806@evil-genius.com> Message-ID: <4CD0DAF5.9090602@satx.rr.com> On 11/2/2010 10:33 PM, Mike Dougherty wrote: > On Tue, Nov 2, 2010 at 5:23 PM, wrote: >> > That would be surprising, as the earliest current evidence for the >> > domestication of fire is ~1.7 million years ago, and that is hotly disputed: > domestication of fire is hotly disputed? nice. No flames, please! From spike66 at att.net Wed Nov 3 03:37:57 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 20:37:57 -0700 Subject: [ExI] Flash of insight... In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID: <003501cb7b08$81ce5910$856b0b30$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Brent Allsop Sent: Tuesday, November 02, 2010 7:51 PM To: ExI chat list Subject: Re: [ExI] Flash of insight... . http://www.sciencedaily.com/releases/2007/08/070823141057.htm >.Alan, I bet you'd have fun if you could get a head set and camera setup like that, so you could experience such yourself. Certainly experiencing this would be very enlightening to everyone. Cool idea Brent! Rig up a hat with a rod about a meter long with a camera on the end, out behind and above, with the output rigged to video display glasses. Then you could pretend to be an avatar. And watch all the crazy looks you would get from normal people. >.I'm always chuckling at how people are so clueless when they talk about having a 'spirit' or an "out of body experience' in the traditional religious interpretation way. I imagined myself as a disembodied spirit, but instead of an out-of-body experience, I demon-possessed my own body. It is kinda like The Exorcist, only it was me in here, so it became sorta self-referential, and without all the projectile barfing (eewww, that really turns me off.) And I am not really an evil spirit either, nor a saint by any means, but rather more like half way between good and evil. So it was like The Exorcist, except it was self-possession by a neutral spirit. spike -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From agrimes at speakeasy.net Wed Nov 3 03:52:23 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 02 Nov 2010 23:52:23 -0400 Subject: [ExI] Flash of insight... In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> Message-ID: <4CD0DC77.7070603@speakeasy.net>
Brent Allsop wrote: > Alan, > It is certainly possible there is some amount of diversity in the way > people consciously represent themselves. > So you don't have a feeling of looking out of your eyes?
There might be a terminology gap here. What I'm saying is that there is no sense of a "homunculus" that observes things through the eyes.
> And can you imagine what an out of body experience might be like?
When I was a bit more of a free thinker than I am now, I experimented with all manner of things; I don't think I ever achieved one. The closest I got was imagining I was seeing a remote location while I was doing something else. I was really keen on trying to achieve some level of ESP, but I couldn't and eventually gave up, except for my precog ability, which seems to be marginal at best, possibly/probably merely intuition.
> I look forward to soon > knowing first hand just how diverse your experience of yourself are, > Alan, compared to my own.
???? How do you propose to do that? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
From thespike at satx.rr.com Wed Nov 3 04:44:42 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 02 Nov 2010 23:44:42 -0500 Subject: [ExI] hot processors? In-Reply-To: <002b01cb7b04$3035a9e0$90a0fda0$@att.net> References: <002b01cb7b04$3035a9e0$90a0fda0$@att.net> Message-ID: <4CD0E8BA.4020007@satx.rr.com>
On 11/2/2010 10:07 PM, spike wrote: > with the laptop lid closed.
I thought *that* causes it to overheat. Damien Broderick
From spike66 at att.net Wed Nov 3 05:46:44 2010 From: spike66 at att.net (spike) Date: Tue, 2 Nov 2010 22:46:44 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> Message-ID: <005301cb7b1a$8015f580$8041e080$@att.net>
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike Subject: [ExI] prediction for 2 November 2010
>... Once again, we libertarians will go home empty handed...I predict something else as well: after tomorrow, both major parties will be surprised and disappointed with the outcome and will be accusing the other of election fraud...spike
Well damn. Looks like the democrats and republicans have won nearly every race. Kennita Watson appears to have lost this time. Better luck next time!
spike
From natasha at natasha.cc Wed Nov 3 06:05:53 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Wed, 03 Nov 2010 02:05:53 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <4CCF0520.9000601@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <20101103020553.dn3kyivxcg8gg4oo@webmail.natasha.cc>
Poiesis needs neither a painting, a performance nor a structure to embellish the process of creation. The electrical charges of the brain signify this process. The many and varied outcomes, as presented in mediums of paint, performance and structure, care little, if anything, about what society considers to be the mighty original. They are all spirits of thought, coalescing image and narrative, metaphor and symbol. All the mighty originals are copies, and became copies once they left the electrical charges. Personhood is ultimately the electrical charge and the outcome. All else is stuff. And, as lovely as the original Matisse or Monet truly are -- and as incomparable as a printed image is next to the brush strokes and refraction of light across the hues, textures and tones of the originals (Damien is accurate in his observations) -- the former does have a distinguishable character that the latter lacks. Alas, they all are copies. Nano+personhood may simply wink at its own assemblages. It is a wink that may make very light of a very heavy topic that has manipulated high art and economics for a while now. Natasha
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From stathisp at gmail.com Wed Nov 3 06:04:20 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Wed, 3 Nov 2010 17:04:20 +1100 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID:
2010/11/3 Stefano Vaj : > 2010/10/31 John Clark >> >> Actually its quite difficult to come up with a scenario where the copy >> DOES instantly know he is the copy. >> > > Mmhhh. Nobody ever feels to be a copy. What you could become aware is that > somebody forked in the past (as in "a copy left behind"). That he is the > "original" is a matter of perspective...
Think about what you would say and do if provided with evidence that you are actually a copy, replaced while the original you was sleeping some time last week.
-- Stathis Papaioannou
From possiblepaths2050 at gmail.com Wed Nov 3 07:09:45 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 00:09:45 -0700 Subject: [ExI] A fun animated short about the continuity of identity and "making copies" Message-ID:
I absolutely loved this animated short film, which reminded me of the countless discussion threads about this very topic that have graced so many transhumanist email lists over the years. http://www.youtube.com/watch?v=pdxucpPq6Lc John : )
From jrd1415 at gmail.com Wed Nov 3 07:11:08 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 3 Nov 2010 00:11:08 -0700 Subject: [ExI] The answer to tireless stupidity Message-ID:
You're gonna like this. Chatbot Wears Down Proponents of Anti-Science Nonsense http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 Best, Jeff Davis "Men occasionally stumble over the truth, but most pick themselves up and hurry off as if nothing had happened." Winston Churchill
From possiblepaths2050 at gmail.com Wed Nov 3 07:14:05 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 00:14:05 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <005301cb7b1a$8015f580$8041e080$@att.net> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID:
Things went about like I expected. The general public was just not happy about Obama's performance record... I really wonder if he will even get re-elected... John
On 11/2/10, spike wrote: > > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of spike > Subject: [ExI] prediction for 2 November 2010 > >>... Once again, we libertarians will go home empty handed...I predict > something else as well: after tomorrow, both major parties will be surprised > and disappointed with the outcome and will be accusing the other of election > fraud...spike > > > Well damn. Looks like the democrats and republicans have won nearly every > race. Kennita Watson appears to have lost this time. Better luck next > time! > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >
From ablainey at aol.com Wed Nov 3 11:37:46 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Nov 2010 07:37:46 -0400 Subject: [ExI] Flash of insight... In-Reply-To: <4CD07E9F.8040700@satx.rr.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <4CD07E9F.8040700@satx.rr.com> Message-ID: <8CD4962ABDE7A88-99C-1376@webmail-d024.sysops.aol.com>
I had a mental play with this after the thread the other day. I closed my eyes and tried to consciously move the 'I' around, with very little success. I tried to concentrate on various sensory inputs to see if it made any difference to the perceived position of consciousness. Apart from the perception of moving maybe a few inches inside my head, it was a complete washout. Perhaps that is enough to show something. I certainly wasn't floating around the room or having any sense of perception from an external point. One thing I did notice is that the 'I' is not perceived as a singular point; it feels more like it is diffused over a 3D region.
I would still like to know if blind people also perceive themselves to be in their heads, especially if they are cortically blind. Also, would visual input from an artificial source alter the position? This might show if the 'I' position is created by a physical reference to the sensory input or by the physical position of the brain itself. Do snails perceive themselves to be at a point somewhere between their eye stalks or in their heads?
-----Original Message----- From: Damien Broderick To: ExI chat list Sent: Tue, Nov 2, 2010 9:11 pm Subject: Re: [ExI] Flash of insight...
On 11/2/2010 3:04 PM, Stefano Vaj wrote: > I believe there were experiments a couple of year ago inducing > out-of-body "delocalisation" in perfectly healthy people. Interesting, > but not such a big deal, IMHO.
It's only a big deal given that several people who seemed to think that sense of identity is innately constructed as being *behind your eyes* might be wrong about how this actually works at a deep level. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From bbenzai at yahoo.com Wed Nov 3 13:07:16 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 3 Nov 2010 13:07:16 +0000 (GMT) Subject: [ExI] Flash of insight... In-Reply-To: Message-ID: <609321.58856.qm@web114404.mail.gq1.yahoo.com>
ablainey at aol.com wrote: ... > This might show if the 'I' position is > created by a physical reference to the sensory input > or by the physical position of the brain itself. Do > snails percieve themselves to be at a point > somewhere between their eye stalks or in thier > heads?
How could it be related to the physical position of the brain? You don't know where your brain is unless someone tells you or you read it in a book, or extrapolate from where someone else's is. There is no direct perception of the position of your brain, unlike, say, your stomach. The whole concept of the 'I' position is meaningless anyway. All you can say is where your current /viewpoint/ is. The feeling of being somewhere is solely a product of your senses, and can change very easily. I particularly liked Spike's idea for locating your awareness behind and above your own head, using a camera on a pole. Ben Zaiboc
From dan_ust at yahoo.com Wed Nov 3 13:06:30 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 06:06:30 -0700 (PDT) Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <4CD0814D.3040806@evil-genius.com> References: <4CD0814D.3040806@evil-genius.com> Message-ID: <668700.46914.qm@web30107.mail.mud.yahoo.com>
Wasn't the homo line (from the hypothesized homo/pan split) also in different niches at this point too? I'm not sure of the research done on pan genus itself -- in terms of its evolution -- but I was under the impression that it was limited to dense forests -- while the homo line was exploring many different niches, some of them not dense forests. Regards, Dan
----- Original Message ---- From: "lists1 at evil-genius.com" To: extropy-chat at lists.extropy.org Sent: Tue, November 2, 2010 5:23:25 PM Subject: [ExI] Fire and evolution (was hypnosis)
From: "spike" "I have long pondered if speciation between humans and chimps was accelerated by the fact that for some reason the protohumans figured out that little burning bush trick, and the chimps didn't, or just couldn't master it.
This would represent the technology segregation we talk about today, that separates those humans who use electronics from those who do not. Today it is called the digital divide. Back then it was what we might call the conflagration chasm."
That would be surprising, as the earliest current evidence for the domestication of fire is ~1.7 million years ago, and that is hotly disputed: many archaeologists put it ~400,000 years ago. All these dates are long, long after the human/chimp/bonobo split 6-7 million years ago. Of course, the progression of protohuman evolution from the split onward had many different branches, and was not a neat linear sequence...there were many species of Australopithecus and Homo which died out. So Spike's hypothesis may well be correct for a more recent evolutionary divide. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From ablainey at aol.com Wed Nov 3 13:52:44 2010 From: ablainey at aol.com (ablainey at aol.com) Date: Wed, 03 Nov 2010 09:52:44 -0400 Subject: [ExI] Flash of insight... In-Reply-To: <609321.58856.qm@web114404.mail.gq1.yahoo.com> Message-ID: <8CD4975865E2116-1DA0-31A7@webmail-d031.sysops.aol.com>
What if the positional perception is related to neural pathway length? So the nerves which have the lowest latency, and which presumably get more run time accordingly, create a positional reference for the brain rather than a simple weighting of the senses. That is why I ask about the cortically blind. Ideally I would like to know where the 'I' is for someone who is blind, deaf and has no sense of smell or taste. How you would ever communicate such an abstract question to such a person is beyond me. The camera on a stick is akin to the snail; however, this only shows visual perception of position. I can change that perception by putting the TV on or playing an FPS game, and it doesn't affect where I perceive myself when my eyes are closed. I don't see the 'I' as being meaningless. Imagine an upload scenario where your consciousness is stored in a black box in some safe vault while a robot you goes out wandering the universe. If you are correct then you will 'feel' that you are out there doing all those things. However, if the 'I' is a perception created by latency of input, you would feel the remoteness of your robot body, yes? You might as well be wetware sitting in a vault operating an avatar via VR. Thus my interest in the issue, which isn't as simple as it seems.
-----Original Message----- From: Ben Zaiboc To: extropy-chat at lists.extropy.org Sent: Wed, Nov 3, 2010 1:07 pm Subject: Re: [ExI] Flash of insight...
ablainey at aol.com wrote: ... > This might show if the 'I' position is > created by a physical reference to the sensory input > or by the physical position of the brain itself. Do > snails percieve themselves to be at a point > somewhere between their eye stalks or in thier > heads?
How could it be related to the physical position of the brain? You don't know where your brain is unless someone tells you or you read it in a book, or extrapolate from where someone else's is. There is no direct perception of the position of your brain, unlike, say, your stomach. The whole concept of the 'I' position is meaningless anyway. All you can say is where your current /viewpoint/ is. The feeling of being somewhere is solely a product of your senses, and can change very easily.
I particularly liked Spike's idea for locating your awareness behind and above your own head, using a camera on a pole. Ben Zaiboc _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From agrimes at speakeasy.net Wed Nov 3 14:02:47 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 03 Nov 2010 10:02:47 -0400 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: <4CD16B87.2060301@speakeasy.net>
Stathis Papaioannou wrote: > Think about what you would say and do if provided with evidence that > you are actually a copy, replaced while the original you was sleeping > some time last week.
My copy would go find the clown who did it and kill him suicide-bomber style. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
From pharos at gmail.com Wed Nov 3 14:55:12 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 14:55:12 +0000 Subject: [ExI] hot processors? In-Reply-To: <000701cb7ae3$894e3540$9bea9fc0$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID:
On Tue, Nov 2, 2010 at 11:13 PM, spike wrote: > OK, I just got back from the local electronics merchant where I purchased a > notebook cooler. Let's see if this helps. If this machine fails to run all > night, I will need to rethink my strategy on using a laptop, and may cause > me to rethink the notion of the singularity. We may be seeing what really > is an S-curve in computing technology, where we are approaching a limit of > calculations per watt of power input. Or not, I confess I haven't followed > it in the past 5 yrs the way I did in my misspent youth. Are we still > advancing in calculations per watt? > >
Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it looks like you've bought a problem laptop. Do the search and you'll see what I mean. ************************ Is there any chance of returning it and getting your money back? ***************************** If not, then a high-power laptop cooler is required. Something like this: with twin fans. A simple stand won't be sufficient. You won't be able to use the laptop on your lap without getting burnt. Even using it on any flat surface like a desk will cause overheating. It seems to be a design fault by HP on this model. The internal fan is too small to cool the processor and the graphics chip they fitted. And the air vents are badly positioned and easily blocked. It is essential to keep the vents clean on this model by blowing compressed air through the vents on a regular basis. Best of luck!
BillK From jonkc at bellsouth.net Wed Nov 3 15:59:19 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 11:59:19 -0400 Subject: [ExI] Let's play What If. In-Reply-To: <4CD16B87.2060301@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD16B87.2060301@speakeasy.net> Message-ID: <31B57CF6-0901-48BA-B8D8-296482340D65@bellsouth.net> On Nov 3, 2010, at 10:02 AM, Alan Grimes wrote: >> Think about what you would say and do if provided with evidence that >> you are actually a copy, replaced while the original you was sleeping >> some time last week. > > My copy would go find the clown who did it and kill him suicide-bomber style. I doubt if you'd do that, I often disagree with you but you don't seem like the suicide-bomber type; but then again, they always say it's the person you'd least suspect. At any rate you certainly wouldn't if you didn't know you were a copy, and you wouldn't know unless you met up with a very convincing person armed with ironclad evidence and a golden tongue. And I'm not sure you'd really believe it even then as logical arguments have little effect on some. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Wed Nov 3 16:22:04 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 12:22:04 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> Message-ID: <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> On Nov 2, 2010, at 2:17 PM, Mike Dougherty wrote: > Until we have the ability to rearrange subatomic particles to > literally create gold, such materials will continue to have a material > worth that could retain inherent value. If it has value then it has a price, but in the age of nanotechnology if you had some gold that I wanted (because I thought it looked pretty?) what could I trade you for it? About the only thing I can think of is another rare element, platinum maybe, because both the elements gold and platinum are unique, although atoms of gold or platinum are not. One gold atom is just like another but it is not like a platinum atom, it is like nothing else in the universe except for another gold atom. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonkc at bellsouth.net Wed Nov 3 16:26:39 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 12:26:39 -0400 Subject: [ExI] Counterfeits (Was: THE MIGHTY ORIGINAL) In-Reply-To: <4CD07DFA.5040802@evil-genius.com> References: <4CD07DFA.5040802@evil-genius.com> Message-ID: <73C31AF2-6B9C-49AE-B36B-B4E829D3A513@bellsouth.net> On Nov 2, 2010, at 5:09 PM, lists1 at evil-genius.com wrote: > "Who is the most successful counterfeiter in history?" The world's tallest midget who lives on the world's largest island. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Nov 3 17:03:13 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 3 Nov 2010 13:03:13 -0400 Subject: [ExI] THE MIGHTY ORIGINAL In-Reply-To: <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDE6E0.3020008@satx.rr.com> <55BC1FCA-104D-4EE9-BB1C-A6FC97770BC0@bellsouth.net> <4CCF0520.9000601@satx.rr.com> <10DB97EF-5FDC-43A9-AA80-7F181DF4A7D3@bellsouth.net> Message-ID: 2010/11/3 John Clark : > If it has value then it has a price, but in the age of nanotechnology if you > had some gold that I wanted (because I thought it looked pretty?) what could > I trade you for it? About the only thing I can think of is another rare > element, platinum maybe, because both the elements gold and platinum are > unique, although atoms of gold or platinum are not. One gold atom is just > like another but it is not like a platinum atom, it is like nothing else in > the universe except for another gold atom. This may be the only context where the high-holy atom argument has you making a case for differences in atoms :) Possibly the only thing we can trade that is more rare than minerals: time. If I am to enjoy clock time at any multiplier above 1 then I need your clock time working for me. Slavery is certainly nothing new. Wage slavery is simply a PC term for the idea. (and I agree with how you feel about PC terms too) From jonkc at bellsouth.net Wed Nov 3 17:00:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 3 Nov 2010 13:00:49 -0400 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID: On Nov 3, 2010, at 3:14 AM, John Grigg wrote: > The general public was not just not happy about Obama's performance record... I really wonder if he > will even get re-elected... I don't know but the big republican victory yesterday makes it far MORE likely Obama will be re-elected in two years because now he will have somebody to blame. Not counting yesterday, presidents have suffered 3 huge midterm losses since World War 2, Truman in 1946, Reagan in 1982, and Clinton in 1994; in all three cases the president was EASILY re-elected two years later. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From spike66 at att.net Wed Nov 3 17:17:42 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 10:17:42 -0700 Subject: [ExI] Fire and evolution (was hypnosis) In-Reply-To: <668700.46914.qm@web30107.mail.mud.yahoo.com> References: <4CD0814D.3040806@evil-genius.com> <668700.46914.qm@web30107.mail.mud.yahoo.com> Message-ID: <004a01cb7b7b$0671db70$13559250$@att.net>
-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Dan Subject: Re: [ExI] Fire and evolution (was hypnosis)
Wasn't the homo line (from the hypothesized homo/pan split) also in different niches at this point too? I'm not sure of the research done on pan genus itself -- in terms of its evolution -- but I was under the impression that it was limited to dense forests -- while the homo line was exploring many different niches, some of them not dense forests. Regards, Dan
Ja, clearly the pan's feet are better adapted for swinging from trees and homo's feet are better for walking distances on a grassy plain. Good point Dan. For that matter, as pointed out by someone earlier, pan's hands are not as good as homo's at grasping a burning bush. Pan's thumbs are mounted too far aft. spike
From dan_ust at yahoo.com Wed Nov 3 17:43:52 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 10:43:52 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <832116.3227.qm@web30106.mail.mud.yahoo.com>
This is not necessarily a cure for anti-science nonsense or even nonsense in general. It could be used against anyone holding any view: simply wear them down. E.g., someone here argues for Extropians or transhumanist views and someone else sets up a chatbot merely to keep pushing their buttons. Also, the usual argument I've seen regarding other planets warming up doesn't use Neptune, but Mars. And the evidence that the warming of Mars has to do with fluctuations in solar output seems much more relevant -- though, to my mind, it's by no means decisive here. Regards, Dan
----- Original Message ---- From: Jeff Davis To: ExI chat list Sent: Wed, November 3, 2010 3:11:08 AM Subject: [ExI] The answer to tireless stupidity
You're gonna like this. Chatbot Wears Down Proponents of Anti-Science Nonsense http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 Best, Jeff Davis "Men occasionally stumble over the truth, but most pick themselves up and hurry off as if nothing had happened." Winston Churchill _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From spike66 at att.net Wed Nov 3 17:33:27 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 10:33:27 -0700 Subject: [ExI] hot processors? In-Reply-To: References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID: <006101cb7b7d$39b5b810$ad212830$@att.net>
... On Behalf Of BillK ... > >Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it looks like you've bought a problem laptop. Do the search and you'll see what I mean.
Did that yesterday, found the same site you did, bought the cooler stand, now it seems to be working fine. Turns out I incorrectly concluded that it had overheated before. There is a setting that defaults to turning itself to sleep mode if all four processor cores are working at full bore for an hour.
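(A side note for anyone who would rather script that setting than click through the Control Panel: a hedged one-liner, assuming the powercfg utility that ships with Vista/7-era Windows, where a timeout of 0 means "never":

    powercfg -change -standby-timeout-ac 0

The -ac suffix applies only while on wall power, so the laptop can still sleep normally on battery.)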
I reset that to never sleep while the power cord is plugged in, and it ran all night last night, and returned a buttload of useful results. ************************ >Is there any chance of returning it and getting your money back? ***************************** >Best of luck! BillK I will run it full bore for a few nights. If it works, then I will be satisfied with it. spike From atymes at gmail.com Wed Nov 3 17:11:47 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 3 Nov 2010 10:11:47 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: I was wondering when someone would put something like this together. Perhaps in the next American election cycle, some high profile candidate (large state governor, or President) can put it together to rebut tweets using common arguments of the opposition. On Wed, Nov 3, 2010 at 12:11 AM, Jeff Davis wrote: > You're gonna like this. > > Chatbot Wears Down Proponents of Anti-Science Nonsense > > http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 > > Best, Jeff Davis > > "Men occasionally stumble over the truth, > but most pick themselves up and hurry off > as if nothing had happened." > Winston Churchill > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Wed Nov 3 18:17:20 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 3 Nov 2010 11:17:20 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: <832116.3227.qm@web30106.mail.mud.yahoo.com> References: <832116.3227.qm@web30106.mail.mud.yahoo.com> Message-ID: Very true. However: 1) Might it be the case that those whose arguments are not based on facts have more buttons to push? If they can not be secure in letting the other side have the last word, because they know everyone else can tell which side is the buffoon... 2) The point of the debate is more often to convince the silent audience. If one side keeps making emotional arguments, and the other side keeps rebutting by linking to facts supported by outside sources, more people who witness the debate will come away leaning toward the latter. 3) This is an interesting development as a political tool. Like any technology, it can be used for good or evil. However, like many new technologies, those who we view as "good" tend to be in a better position to use these tools, and thus will probably make more effective use of them (at least in the next decade or two). (In other words: try imagining an Extropian setting one of these up, then try to imagine a creationist setting one of these up. It's easier to imagine the former case, no?) On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: > This is not necessarily a cure for anti-science nonsense or even nonsense. > It > could be used against anyone holding any view: simply wear them down. E.g., > someone here argues for Extropians or transhumanist views and someone else > sets > up a chatbot merely to keep pushing their buttons. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From thespike at satx.rr.com Wed Nov 3 18:48:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 13:48:27 -0500 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> Message-ID: <4CD1AE7B.9000808@satx.rr.com>
On 11/3/2010 12:00 PM, John Clark wrote: > the big republican victory yesterday makes it far MORE likely Obama will > be re-elected in two years because now he will have somebody to blame.
OMG, you mean the USA won't have President Palin to lead the nation to recovery? This is another crushing blow after the loss of Christine O'Donnell as VP. Damien Broderick
From pharos at gmail.com Wed Nov 3 18:51:59 2010 From: pharos at gmail.com (BillK) Date: Wed, 3 Nov 2010 18:51:59 +0000 Subject: [ExI] hot processors? In-Reply-To: <006101cb7b7d$39b5b810$ad212830$@att.net> References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> <006101cb7b7d$39b5b810$ad212830$@att.net> Message-ID:
On Wed, Nov 3, 2010 at 5:33 PM, spike wrote: > I will run it full bore for a few nights. If it works, then I will be > satisfied with it. > >
Fair enough. But remember that with the cooler you're effectively changing it into a desktop pc and losing the flexibility of having a laptop. I'd recommend running temperature monitoring software that rings alarms or shuts down if the temperature gets too high. (It's easy to get the air vents blocked up without noticing). Core Temp reports on multiple cores and seems quite nice. Even if the temperature doesn't get quite high enough to close down, running for long periods at high temperatures will shorten the life span of the chips. Cheers, BillK
From spike66 at att.net Wed Nov 3 18:50:49 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 11:50:49 -0700 Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <007d01cb7b88$08bb9170$1a32b450$@att.net>
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Jeff Davis ... >Chatbot Wears Down Proponents of Anti-Science Nonsense >http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 >Best, Jeff Davis
The immediate problem I see with this is that both sides can set up a chatbot, which can then chatter away tirelessly about inane trivia through the night. But other than that, the chatbots are not unlike their human counterparts. On the subject of global warming, there is no need to have humans in that loop. So impervious are the participants on both sides to actual scientific data and mathematical models, it would soon become impossible to distinguish between the chat generated by this means vs the human input, so mired is this particular topic in culture, politics and even religion.
I can think of a possible criterion to distinguish between human and mechanical conversation: as soon as either side actually changes its views on global warming or even demonstrates it has actually learned, we know for sure it is a chatbot, for humans have never been observed to change their views on this topic.
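For what it's worth, the bot in the linked article needs nothing smarter than a lookup table. A minimal sketch in Python; the trigger phrases, canned replies, and URLs here are invented placeholders for illustration, not the real bot's database, and the plumbing that watches for new posts is left out:

    # Rebuttal-bot core: match a message against known talking points
    # and answer with a canned, source-linked reply.
    REBUTTALS = {
        "cooling since 1998": "1998 was an outlier El Nino year; the "
            "long-term trend is upward: http://example.org/trend",
        "sun is getting hotter": "Solar output has been flat or falling "
            "while temperatures rose: http://example.org/solar",
        "warming on mars": "Mars data are far too sparse to support a "
            "common solar cause: http://example.org/mars",
    }

    def rebut(message):
        # Return a canned rebuttal for the first talking point found,
        # else None -- staying silent beats spamming bystanders.
        text = message.lower()
        for trigger, reply in REBUTTALS.items():
            if trigger in text:
                return reply
        return None

    print(rebut("Face it, the planet has been cooling since 1998!"))

The tirelessness comes entirely from the loop around that table, not from any intelligence inside it, which is rather the point.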
spike
From spike66 at att.net Wed Nov 3 19:04:21 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 12:04:21 -0700 Subject: [ExI] sex machine, was: RE: The answer to tireless stupidity Message-ID: <008101cb7b89$ec95fe20$c5c1fa60$@att.net>
>Subject: [ExI] The answer to tireless stupidity >Chatbot Wears Down Proponents of Anti-Science Nonsense... Jeff Davis
Actually this application would be a pointless waste of perfectly good technology. Consider the online lonely hearts club. There are places on the web (and the usenets before that, and DARPAnet even before that) where lonely hearts would hang out and make small talk. A really useful application of a chatbot would be to have it mine one's own writings and produce an enormous lookup table, which it would then use to perform all the tedious, error-prone and emotionally hazardous early stages of online seduction. As soon as the other party agrees to meeting for, um, stimulating conversation (and so forth), then the seductobot would alert the user, who then reads over what the bot has said to the prospective contact. Of course, the other party might also have set up a seduct-o-matic to do the same thing. Similarly to Jeff's example, it might soon become very difficult to distinguish two humans trying to get each other into the sack from two lookup tables doing likewise. As soon as actual creativity or innovation is seen in the mating process, we know it must be a chatbot, for humans have discovered nothing essentially new in that area since a few weeks after some adventurous pair of protobonobos first discovered copulation. spike
From dan_ust at yahoo.com Wed Nov 3 19:31:20 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 12:31:20 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: Message-ID: <726837.76877.qm@web30107.mail.mud.yahoo.com>
I can imagine a variation on this that might go along with Spike's chatbotting on global warming: set up a chatbot to push a position you disagree with, let it become really popular, then have it switch sides in a discussion. This might look like an honest changing of opinion and some might be duped by it. Regards, Dan
From: Adrian Tymes To: ExI chat list Sent: Wed, November 3, 2010 1:11:47 PM Subject: Re: [ExI] The answer to tireless stupidity
I was wondering when someone would put something like this together. Perhaps in the next American election cycle, some high profile candidate (large state governor, or President) can put it together to rebut tweets using common arguments of the opposition.
On Wed, Nov 3, 2010 at 12:11 AM, Jeff Davis wrote: You're gonna like this. > >Chatbot Wears Down Proponents of Anti-Science Nonsense > >http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722 > >Best, Jeff Davis > > "Men occasionally stumble over the truth, > but most pick themselves up and hurry off > as if nothing had happened." > Winston Churchill >_______________________________________________ >extropy-chat mailing list >extropy-chat at lists.extropy.org >http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >
-------------- next part -------------- An HTML attachment was scrubbed...
URL:
From dan_ust at yahoo.com Wed Nov 3 19:28:18 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 3 Nov 2010 12:28:18 -0700 (PDT) Subject: [ExI] The answer to tireless stupidity In-Reply-To: References: <832116.3227.qm@web30106.mail.mud.yahoo.com> Message-ID: <238426.99429.qm@web30104.mail.mud.yahoo.com>
I don't disagree about the "silent audience" in any discussion, though I wonder if some of them aren't just immediately turned off by a continuous stream of emotional arguments anyhow. Regarding facts, the problem here would be interpretation in many cases. Also, merely citing journal articles doesn't settle things in many cases. Think about those economists and market analysts pointing out that the housing bubble was going to burst and those who argued against them. The latter could've easily created chatbots citing all the relevant articles in peer-reviewed journals right up until the market unraveled in 2008. In a sense, it's all going to depend on what the silent audience takes for fact and reliable reasoning in the first place. (Of course, this is not an attack on chatbots per se, but merely to point out that the wider social context is important.)
Regarding a Creationist setting these up, well, aren't there already cheat sheets that Creationists use? Isn't there a book out called _How to Debate an Atheist_? Yes, this can be used for good or ill, and, like you, I'm more the optimist here. But the likely long-term outcome is probably not going to be that the Dark Side is thwarted by chatbots, but that Dark Side chatbots make the more intelligent people less likely to take chat seriously. (In my opinion, that might actually be a big win. There are almost always more important things to do. :) Regards, Dan
From: Adrian Tymes To: ExI chat list Sent: Wed, November 3, 2010 2:17:20 PM Subject: Re: [ExI] The answer to tireless stupidity
Very true. However: 1) Might it be the case that those whose arguments are not based on facts have more buttons to push? If they can not be secure in letting the other side have the last word, because they know everyone else can tell which side is the buffoon... 2) The point of the debate is more often to convince the silent audience. If one side keeps making emotional arguments, and the other side keeps rebutting by linking to facts supported by outside sources, more people who witness the debate will come away leaning toward the latter. 3) This is an interesting development as a political tool. Like any technology, it can be used for good or evil. However, like many new technologies, those who we view as "good" tend to be in a better position to use these tools, and thus will probably make more effective use of them (at least in the next decade or two). (In other words: try imagining an Extropian setting one of these up, then try to imagine a creationist setting one of these up. It's easier to imagine the former case, no?)
On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: This is not necessarily a cure for anti-science nonsense or even nonsense. It >could be used against anyone holding any view: simply wear them down. E.g., >someone here argues for Extropians or transhumanist views and someone else sets >up a chatbot merely to keep pushing their buttons. > >
-------------- next part -------------- An HTML attachment was scrubbed...
From dan_ust at yahoo.com Wed Nov 3 19:42:01 2010
From: dan_ust at yahoo.com (Dan)
Date: Wed, 3 Nov 2010 12:42:01 -0700 (PDT)
Subject: [ExI] The answer to tireless stupidity
In-Reply-To: <007d01cb7b88$08bb9170$1a32b450$@att.net> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> Message-ID: <773270.29941.qm@web30105.mail.mud.yahoo.com>

I've "observed" people changing their minds on this -- mostly from being skeptical of anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or they were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)

Regarding, though, your view of setting these chatbots up to eventually reach a consensus, this is the ideal of rhetoric: to get people to argue by going back to premises (which can include "actual scientific data and mathematical models") and eventually deciding on which conclusions are correct. This is seen with the typical use of enthymemes. Recall, an enthymeme is basically a syllogism where there's an unstated premise. (For example, "Socrates is a man, so Socrates is mortal" leaves the premise "all men are mortal" unstated.) In rhetoric, the person offering up the enthymeme in good faith is assuming that his interlocutors accept the unstated premise. If they don't, then the premise, to argue in good faith, is made explicit. Eventually, it's hoped that the process will terminate for any debate -- as eventually all participants reach premises which they all agree on and then can move forward to the conclusion. Again, if they argue in good faith, the conclusion should be acceptable to all and this resolves the difference in opinions.

Regards, Dan

* How many people really need to have an opinion on this? Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?

----- Original Message ----
From: spike
To: ExI chat list
Sent: Wed, November 3, 2010 2:50:49 PM
Subject: Re: [ExI] The answer to tireless stupidity

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Jeff Davis ...

>Chatbot Wears Down Proponents of Anti-Science Nonsense
>http://www.technologyreview.com/blog/mimssbits/25964/?nlid=3722
>Best, Jeff Davis

The immediate problem I see with this is that both sides can set up a chatbot, which then chatter away tirelessly about inane trivia through the night. But other than that, the chatbots are not like their human counterparts.

On the subject of global warming, there is no need to have humans in that loop. So impervious are the participants on both sides to actual scientific data and mathematical models that it would soon become impossible to distinguish between the chat generated by this means and the human input, so mired is this particular topic in culture, politics and even religion.

I can think of a possible criterion to distinguish between human and mechanical conversation: as soon as either side actually changes its views on global warming or even demonstrates it has actually learned, we know for sure that it is a chatbot, for humans have never been observed to change their views on this topic.

spike

From rtomek at ceti.pl Wed Nov 3 20:36:25 2010
From: rtomek at ceti.pl (Tomasz Rola)
Date: Wed, 3 Nov 2010 21:36:25 +0100 (CET)
Subject: [ExI] hot processors?
In-Reply-To: References: <000001cb7abb$d45d52f0$7d17f8d0$@att.net> <55626316557B47BDA3F67C4B4CF01FC1@PCserafino> <000701cb7ae3$894e3540$9bea9fc0$@att.net> Message-ID:

On Wed, 3 Nov 2010, BillK wrote:

> On Tue, Nov 2, 2010 at 11:13 PM, spike wrote:
> > OK, I just got back from the local electronics merchant where I purchased a
> > notebook cooler. Let's see if this helps. If this machine fails to run all
> > night, I will need to rethink my strategy on using a laptop, and it may cause
> > me to rethink the notion of the singularity. We may be seeing what really
> > is an S-curve in computing technology, where we are approaching a limit of
> > calculations per watt of power input. Or not, I confess I haven't followed
> > it in the past 5 yrs the way I did in my misspent youth. Are we still
> > advancing in calculations per watt?

Yes, I would say so. Compare:

Pentium 1 @ 100MHz - about 15W (clock/watt = 6.67)
Athlon XP @ 1800MHz - about 70-80W (c/w = 22.5-25.7)
AthlonII 4x @ 2600MHz - about 170W (c/w = 61.2, counting all four cores)

(source: google, wikipedia, tomshardware, my memory)

This assumes there were no advances other than raw clock speed. But in fact c/w says nothing about memory & bus speeds, micro optimisations, out of order execution, etc etc. On the Intel side, it should look even better, especially if we forget the flaky Pentium4.

> Oh-oh. I just did a search on 'HP Pavilion dv7 overheating' and it
> looks like you've bought a problem laptop. Do the search and you'll
> see what I mean.

Just in case some other folks here "use their computahs for computaahsion". I'm no big hardware expert but I am a big fan of stability. There are two utilities that can be used for testing one's machine, and they are free:

1. Memtest86 - [ http://en.wikipedia.org/wiki/Memtest86 ]
2. Prime95 - [ http://en.wikipedia.org/wiki/Prime95 ]

Since I only use Windows about twice a year or so, I cannot tell about Prime95, but Memtest is ok.

Once again, this is a good moment to stress the importance of monitoring one's hardware. I don't know whether this is obvious, but to me, everybody running some nontrivial load on one's computer really wants to know how it is doing. I am for knowing my machine, knowing its sounds, what is usual and what is a warning sign. It is analogous to racing: if you only drive to work or for some shopping, you don't need to understand how it is possible that you move. But once you enter racing, you could do better knowing at least some basics of your car's mechanics.

Also, for me stability has more value than speed, so I don't mind downclocking a bit. This is, IMHO, quite a good idea while running so called budget PCs (and which one is not budget nowadays?). A 100 MHz off your clock is just a few percent drop in performance but it can make you feel much better and cooler (and no more questions like, can I go for a walk or should I stay and wait for another mysterious beep).

Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomasz_rola at bigfoot.com **
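Tomasz's clock-per-watt arithmetic, as a quick sketch in Python; the wattages are the rough figures quoted in his post (taking 75 W as a midpoint for the Athlon XP), and the Athlon II entry counts all four cores at 2600 MHz, which is how the 61.2 figure comes out.

# Rough MHz-per-watt comparison using the figures quoted above.
cpus = [
    ("Pentium 1", 1 * 100, 15),       # cores * MHz, approx. watts
    ("Athlon XP", 1 * 1800, 75),      # post quotes 70-80 W
    ("Athlon II X4", 4 * 2600, 170),
]
for name, mhz, watts in cpus:
    print(f"{name}: {mhz / watts:.1f} MHz/W")
# Pentium 1: 6.7 MHz/W, Athlon XP: 24.0 MHz/W, Athlon II X4: 61.2 MHz/W

As the post notes, clock per watt understates the real gain, since it ignores memory and bus speeds, micro-optimisations, and out-of-order execution.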
From thespike at satx.rr.com Wed Nov 3 20:42:52 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 03 Nov 2010 15:42:52 -0500
Subject: [ExI] Australian dollar
Message-ID: <4CD1C94C.3040705@satx.rr.com>

In case anyone's interested, today

1 AUD = 1.00582 USD

From thespike at satx.rr.com Wed Nov 3 20:52:11 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 03 Nov 2010 15:52:11 -0500
Subject: [ExI] Bayes and psi
In-Reply-To: <4CD1C4D6.6000101@satx.rr.com> References: <567253.29951.qm@web30701.mail.mud.yahoo.com> <4CD1C4D6.6000101@satx.rr.com> Message-ID: <4CD1CB7B.1080804@satx.rr.com>

This might be of interest: a link to a plenary lecture Prof. Utts gave this summer at the 8th International Conference on Teaching Statistics.

http://icots8.org/cd/pdfs/plenaries/ICOTS8_PL2_UTTS.pdf

THE STRENGTH OF EVIDENCE VERSUS THE POWER OF BELIEF: ARE WE ALL BAYESIANS?
Jessica Utts, Michelle Norris, Eric Suess, Wesley Johnson

Although statisticians have the job of making conclusions based on data, for many questions in science and society prior beliefs are strong and may take precedence over data when people make decisions. For other questions, there are experts who could shed light on the situation that may not be captured with available data. One of the appealing aspects of Bayesian statistics is that the methods allow prior beliefs and expert knowledge to be incorporated into the analysis along with the data. One domain where beliefs are almost sure to have a role is in the evaluation of scientific data for extrasensory perception (ESP). Experiments to test ESP often are binomial, and they have a clear null hypothesis, so they are an excellent way to illustrate hypothesis testing. Incorporating beliefs makes them an excellent example for the use of Bayesian analysis as well. In this paper, data from one type of ESP study are analyzed using both frequentist and Bayesian methods.
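A minimal sketch of the frequentist-versus-Bayesian contrast the abstract describes, for a binomial ESP experiment. The trial count, hit count, and flat Beta prior are invented for illustration (the sketch also assumes scipy >= 1.7 for binomtest); nothing here comes from the paper itself.

# Invented binomial ESP data: n trials, k hits, chance rate p0 = 0.25
# (e.g. a four-choice design).
from scipy import stats

n, k, p0 = 1000, 280, 0.25

# Frequentist: one-sided exact binomial test of H0: p = p0.
p_value = stats.binomtest(k, n, p0, alternative="greater").pvalue

# Bayesian: Beta(a, b) prior on the hit rate p; after k hits in n trials
# the posterior is Beta(a + k, b + n - k).
a, b = 1, 1   # flat prior; a convinced skeptic would pile mass near p0
posterior = stats.beta(a + k, b + n - k)
prob_above_chance = posterior.sf(p0)  # posterior P(p > p0)

print(f"one-sided p-value: {p_value:.4f}")
print(f"posterior P(p > {p0}): {prob_above_chance:.4f}")

The prior is where the paper's point shows up: two readers of the same data who start from different Beta priors end with different posteriors, which is exactly the strength-of-evidence-versus-power-of-belief tension in the title.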
From dan_ust at yahoo.com Wed Nov 3 20:54:58 2010
From: dan_ust at yahoo.com (Dan)
Date: Wed, 3 Nov 2010 13:54:58 -0700 (PDT)
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: <4CD1AE7B.9000808@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> Message-ID: <162237.28097.qm@web30101.mail.mud.yahoo.com>

Kidding aside, do you think Palin will ever be more than a media phenom?

Regards, Dan
Overthrow all governments everywhere!

----- Original Message ----
From: Damien Broderick
To: ExI chat list
Sent: Wed, November 3, 2010 2:48:27 PM
Subject: Re: [ExI] prediction for 2 November 2010

On 11/3/2010 12:00 PM, John Clark wrote:
> the big republican victory yesterday makes it far MORE likely Obama will
> be re-elected in two years because now he will have somebody to blame.

OMG, you mean the USA won't have President Palin to lead the nation to recovery? This is another crushing blow after the loss of Christine O'Donnell as VP.

Damien Broderick

From possiblepaths2050 at gmail.com Wed Nov 3 21:03:14 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 3 Nov 2010 14:03:14 -0700
Subject: [ExI] A new Culture novel by Iain Banks
Message-ID:

This is always a cause for celebration!

http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

John

From pharos at gmail.com Wed Nov 3 21:14:13 2010
From: pharos at gmail.com (BillK)
Date: Wed, 3 Nov 2010 21:14:13 +0000
Subject: [ExI] Australian dollar
In-Reply-To: <4CD1C94C.3040705@satx.rr.com> References: <4CD1C94C.3040705@satx.rr.com> Message-ID:

On Wed, Nov 3, 2010 at 8:42 PM, Damien Broderick wrote:
> In case anyone's interested, today
>
> 1 AUD = 1.00582 USD

Yes, I noticed. First time since 28 years ago. Another step in the Fed's campaign to devalue the US dollar. Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports. Other countries appear to have noticed what the US Fed is doing, so we may be entering a phase of competitive devaluations around the world.

BillK

From thespike at satx.rr.com Wed Nov 3 21:15:14 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Wed, 03 Nov 2010 16:15:14 -0500
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: <162237.28097.qm@web30101.mail.mud.yahoo.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> Message-ID: <4CD1D0E2.9020009@satx.rr.com>

On 11/3/2010 3:54 PM, Dan wrote:
> Kidding aside, do you think Palin will ever be more than a media phenom?

In the USA, who can say?

From spike66 at att.net Wed Nov 3 21:05:55 2010
From: spike66 at att.net (spike)
Date: Wed, 3 Nov 2010 14:05:55 -0700
Subject: [ExI] The answer to tireless stupidity
In-Reply-To: <773270.29941.qm@web30105.mail.mud.yahoo.com> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> <773270.29941.qm@web30105.mail.mud.yahoo.com> Message-ID: <003d01cb7b9a$e7bd1620$b7374260$@att.net>

... On Behalf Of Dan ... Subject: Re: [ExI] The answer to tireless stupidity

>...I've "observed" people changing their minds on this -- mostly from being skeptical of anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or they were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)...

Dan, the critical and divergent question is not so much whether global warming is occurring or whether it is anthropogenic, but rather the next step beyond that: what are we going to do about it. That immediately causes a divergence of opinion that is not easily swayed by scientific data. One group suggests creating taxes on carbon dioxide production, while another group makes plans to replace their air conditioners with bigger units. This is a problem that we cannot discuss to a solution. If one economy taxes itself to reduce carbon dioxide emissions while its competitors do not, then the non-taxing competitors continue to generate CO2 with impunity; pretty soon they own the gold, they own everything; then they make the rules. How is discussion of scientific models of any help with this problem? We might as well set up multiple chatbots on both (or all) sides of that issue and let them chatter away, while leaving the rest of us to figure out bigger and better air conditioning systems.

>...* How many people really need to have an opinion on this? Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?...Dan

Everyone who is eligible to vote needs an opinion on this.
The tax and cap CO2 solutions require jillions of votes, to elect leaders who will tax CO2 and send us down the branch where our competitors own everything; then once they do, they make our rules for us.

spike

From dan_ust at yahoo.com Wed Nov 3 21:43:28 2010
From: dan_ust at yahoo.com (Dan)
Date: Wed, 3 Nov 2010 14:43:28 -0700 (PDT)
Subject: [ExI] The answer to tireless stupidity
In-Reply-To: <003d01cb7b9a$e7bd1620$b7374260$@att.net> References: <007d01cb7b88$08bb9170$1a32b450$@att.net> <773270.29941.qm@web30105.mail.mud.yahoo.com> <003d01cb7b9a$e7bd1620$b7374260$@att.net> Message-ID: <635001.18085.qm@web30105.mail.mud.yahoo.com>

Regarding your final comment: Don't you think that's the problem? I mean you don't seriously think everyone eligible to vote is going to have an intelligent, informed opinion? Also, the incentives are skewed -- as Caplan seemed to demonstrate in his _The Myth of the Rational Voter_: voters experience very low or zero costs for their decision because their vote only counts in a tie breaker. This allows for fantasy views on public policy issues and, if Caplan is right, the issue becomes why we don't have much worse polities. (Caplan attempts to answer that too: elected officials mitigate some of the harm of bad policies by breaking campaign promises and the like.*)

Regards, Dan

* Someone also presented an argument for corruption as helpful in many cases because it was a market means of subverting bad policies. E.g., if a cop can be bribed not to enforce a bad law (which ones aren't?), then the effects of that bad law can be somewhat mitigated. This is, of course, not a perfect solution and, certainly, worse than getting rid of the bad law and turning over the legislators to me for vivisec -- er, re-education. :)

----- Original Message ----
From: spike
To: ExI chat list
Sent: Wed, November 3, 2010 5:05:55 PM
Subject: Re: [ExI] The answer to tireless stupidity

... On Behalf Of Dan ... Subject: Re: [ExI] The answer to tireless stupidity

>...I've "observed" people changing their minds on this -- mostly from being skeptical of anthropogenic global warming to believing in it. (I'm not going to say these people saw the light or they were duped -- or whether they were just going with the flow.* I don't know enough about their thought processes to say.)...

Dan, the critical and divergent question is not so much whether global warming is occurring or whether it is anthropogenic, but rather the next step beyond that: what are we going to do about it. That immediately causes a divergence of opinion that is not easily swayed by scientific data. One group suggests creating taxes on carbon dioxide production, while another group makes plans to replace their air conditioners with bigger units. This is a problem that we cannot discuss to a solution. If one economy taxes itself to reduce carbon dioxide emissions while its competitors do not, then the non-taxing competitors continue to generate CO2 with impunity; pretty soon they own the gold, they own everything; then they make the rules. How is discussion of scientific models of any help with this problem? We might as well set up multiple chatbots on both (or all) sides of that issue and let them chatter away, while leaving the rest of us to figure out bigger and better air conditioning systems.

>...* How many people really need to have an opinion on this?
Why is it that, like so many issues, people must take a side rather than just admit that they don't know and are not really capable, at their current state of knowledge and skill, of vetting the arguments on this?...Dan

Everyone who is eligible to vote needs an opinion on this. The tax and cap CO2 solutions require jillions of votes, to elect leaders who will tax CO2 and send us down the branch where our competitors own everything; then once they do, they make our rules for us.

spike

From dan_ust at yahoo.com Wed Nov 3 21:44:26 2010
From: dan_ust at yahoo.com (Dan)
Date: Wed, 3 Nov 2010 14:44:26 -0700 (PDT)
Subject: [ExI] Australian dollar
In-Reply-To: References: <4CD1C94C.3040705@satx.rr.com> Message-ID: <423771.36663.qm@web30106.mail.mud.yahoo.com>

Why is reducing imports a good thing?

Regards, Dan

----- Original Message ----
From: BillK
To: ExI chat list
Sent: Wed, November 3, 2010 5:14:13 PM
Subject: Re: [ExI] Australian dollar

On Wed, Nov 3, 2010 at 8:42 PM, Damien Broderick wrote:
> In case anyone's interested, today
>
> 1 AUD = 1.00582 USD

Yes, I noticed. First time since 28 years ago. Another step in the Fed's campaign to devalue the US dollar. Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports. Other countries appear to have noticed what the US Fed is doing, so we may be entering a phase of competitive devaluations around the world.

BillK

From possiblepaths2050 at gmail.com Wed Nov 3 21:56:04 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 3 Nov 2010 14:56:04 -0700
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: <4CD1D0E2.9020009@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

I wanted to share some links about things that affected the elections...

The stupidity of American voters...
http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters

The very shadowy world of campaign funding...
http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending

But life goes on... I do look forward to voting for an AGI candidate in the 2042 presidential election. : )

John

On 11/3/10, Damien Broderick wrote:
> On 11/3/2010 3:54 PM, Dan wrote:
>
>> Kidding aside, do you think Palin will ever be more than a media phenom?
>
> In the USA, who can say?

From possiblepaths2050 at gmail.com Wed Nov 3 22:07:20 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 3 Nov 2010 15:07:20 -0700
Subject: [ExI] Bill Moyers: Welcome to the American Plutocracy
Message-ID:

An excerpt from the Bill Moyers article: "Time to close the circle: Everyone knows millions of Americans are in trouble. As Robert Reich recently summed up the state of working people: They've lost their jobs, their homes, and their savings. Their grown children have moved back in with them. Their state and local taxes are rising. Teachers and firefighters are being laid off. The roads and bridges they count on are crumbling, pipelines are leaking, schools are dilapidated, and public libraries are being shut."

"Why isn't government working for them? Because it's been bought off. It's as simple as that. And until we get clean money we're not going to get clean elections, and until we get clean elections, you can kiss goodbye government of, by, and for the people. Welcome to the plutocracy."

I would just add that I would replace the term "Plutocracy" with "Kleptocracy..."

http://www.truth-out.org/bill-moyers-money-fights-hard-and-it-fights-dirty64766

From pharos at gmail.com Wed Nov 3 22:14:17 2010
From: pharos at gmail.com (BillK)
Date: Wed, 3 Nov 2010 22:14:17 +0000
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

On Wed, Nov 3, 2010 at 9:56 PM, John Grigg wrote:
> I wanted to share some links about things that affected the elections...
>
> The very shadowy world of campaign funding...
> http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending

Yes, but Meg Whitman (Republican) spent about 160 million of her own money and still lost.

There's losing and then there's really really painful losing.

BillK

From possiblepaths2050 at gmail.com Wed Nov 3 22:21:33 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 3 Nov 2010 15:21:33 -0700
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID:

>Yes, but Meg Whitman (Republican) spent about 160 million of her own
>money and still lost.
>There's losing and then there's really really painful losing.

At least she spent her own money...

On 11/3/10, BillK wrote:
> On Wed, Nov 3, 2010 at 9:56 PM, John Grigg wrote:
>> I wanted to share some links about things that affected the elections...
>>
>> The very shadowy world of campaign funding...
>> http://motherjones.com/politics/2010/11/2010-midterms-campaign-finance-secret-spending
>
> Yes, but Meg Whitman (Republican) spent about 160 million of her own
> money and still lost.
>
> There's losing and then there's really really painful losing.
>
> BillK

From possiblepaths2050 at gmail.com Wed Nov 3 20:56:19 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 3 Nov 2010 13:56:19 -0700
Subject: [ExI] How old people will remake the world
Message-ID:

At least for some people, the aging of the world population will improve life...
http://www.salon.com/books/feature/2010/10/31/shock_of_gray_interview John From thespike at satx.rr.com Wed Nov 3 22:43:37 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 17:43:37 -0500 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID: <4CD1E599.7050804@satx.rr.com> On 11/3/2010 5:21 PM, John Grigg wrote: >> Yes, but Meg Whitman (Republican) spent about 160 million of her own >> >money and still lost. >> >There's losing and then there's really really painful losing. > At least she spent her own money... A common misconception. She was secretly funded by the Illuminati, the Masons, the Mormons, the Vatican, and the Grays. From spike66 at att.net Wed Nov 3 22:35:49 2010 From: spike66 at att.net (spike) Date: Wed, 3 Nov 2010 15:35:49 -0700 Subject: [ExI] Australian dollar In-Reply-To: <423771.36663.qm@web30106.mail.mud.yahoo.com> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> Message-ID: <000801cb7ba7$77590a30$660b1e90$@att.net> That US dollar will drop way faster now that the federal reserve has just bought up 600 billion in US Treasury notes. US government buying its own debt is equivalent to spinning up the printing presses at the national mint. Are we going to keep pretending this is a debt that never needs to be paid back? ... On Behalf Of Dan ... >Why is reducing imports a good thing? Dan Dan, your even asking the question worries me. Answer: because we are spending ourselves to brutal catastrophe. spike From possiblepaths2050 at gmail.com Wed Nov 3 23:03:53 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 3 Nov 2010 16:03:53 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD1E599.7050804@satx.rr.com> References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> <4CD1E599.7050804@satx.rr.com> Message-ID: The Grays will be spending billions down the road for the cause of alien/human hybrid civil rights... Talk about coming out of the closet!!! John : ) On 11/3/10, Damien Broderick wrote: > On 11/3/2010 5:21 PM, John Grigg wrote: > >>> Yes, but Meg Whitman (Republican) spent about 160 million of her own >>> >money and still lost. >>> >There's losing and then there's really really painful losing. > >> At least she spent her own money... > > A common misconception. She was secretly funded by the Illuminati, the > Masons, the Mormons, the Vatican, and the Grays. > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Wed Nov 3 23:10:19 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 03 Nov 2010 18:10:19 -0500 Subject: [ExI] Australian dollar In-Reply-To: <000801cb7ba7$77590a30$660b1e90$@att.net> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> Message-ID: <4CD1EBDB.4060207@satx.rr.com> On 11/3/2010 5:35 PM, spike wrote: > Are we going to keep pretending this is a debt that never needs to be paid > back? 
Suppose there really is going to be a moderately fast but perceptible runup to a technological Singularity, how long would that be a problem?

Damien Broderick

From nymphomation at gmail.com Wed Nov 3 21:33:37 2010
From: nymphomation at gmail.com (*Nym*)
Date: Wed, 3 Nov 2010 21:33:37 +0000
Subject: [ExI] A new Culture novel by Iain Banks
In-Reply-To: References: Message-ID:

On 3 November 2010 21:03, John Grigg wrote:
> This is always a cause for celebration!
>
> http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

*possible spoilerettes*

I'm still only up to page 562; if you like the Culture, there is a lot more of it than in Matter or Inversions (not read the latter yet..) The whole book is built around aspects of uploading and backing up, but a tortured instance of an alien seems to be the only duplicated 'soul'. =:o)

Heavy splashings, Thee Nymphomation

'If you cannot afford an executioner, a duty executioner will be appointed to you free of charge by the court'

From spike66 at att.net Wed Nov 3 22:59:44 2010
From: spike66 at att.net (spike)
Date: Wed, 3 Nov 2010 15:59:44 -0700
Subject: [ExI] prediction for 2 November 2010
In-Reply-To: References: <000001cb7a3b$bd3aaf30$37b00d90$@att.net> <005301cb7b1a$8015f580$8041e080$@att.net> <4CD1AE7B.9000808@satx.rr.com> <162237.28097.qm@web30101.mail.mud.yahoo.com> <4CD1D0E2.9020009@satx.rr.com> Message-ID: <000901cb7baa$cea90030$6bfb0090$@att.net>

-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of John Grigg...

>The stupidity of American voters...
>http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters
...

Andrew Leonard reports "By 52 percent to 19 percent, likely voters say federal income taxes have gone up for the middle class in the past two years." Without a hint of self-doubt, Leonard concludes that American voters are unbearably stupid. He is the kind of guy who buys a ton of junk at the local Walmart, pays with a credit card, notes on the way out he still has as much cash as when he went in, then concludes that he got all this stuff for free. He marvels at the stupidity of all those silly proles in line dishing out actual money for their purchases, instead of just using a credit card, like he and the other smart people do.

Clue for Andrew Leonard: if there is a deficit, taxes are actually going up, regardless of what your current tax bill reads. Taxes went up during the W administration. They are going up waaay faster now. Andrew, that is what those unbearably stupid 52% are getting that you are missing.

spike

From pharos at gmail.com Wed Nov 3 23:24:41 2010
From: pharos at gmail.com (BillK)
Date: Wed, 3 Nov 2010 23:24:41 +0000
Subject: [ExI] Fire and evolution (was hypnosis)
In-Reply-To: <004a01cb7b7b$0671db70$13559250$@att.net> References: <4CD0814D.3040806@evil-genius.com> <668700.46914.qm@web30107.mail.mud.yahoo.com> <004a01cb7b7b$0671db70$13559250$@att.net> Message-ID:

On Wed, Nov 3, 2010 at 5:17 PM, spike wrote:
> Ja, clearly the pan's feet are better adapted for swinging from trees and
> homo's feet are better for walking distances on a grassy plain. Good point
> Dan. For that matter, as pointed out by someone earlier, pan's hands are
> not as good as homo's at grasping a burning bush. Pan's thumbs are mounted
> too far aft.
> > By coincidence, Stone Age humans were only able to develop relatively advanced tools after their brains evolved a greater capacity for complex thought, according to a new study that investigates why it took early humans almost two million years to move from razor-sharp stones to a hand-held stone axe.
------------------
BillK

From rpwl at lightlink.com Thu Nov 4 00:52:40 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Wed, 03 Nov 2010 20:52:40 -0400
Subject: [ExI] A new Culture novel by Iain Banks
In-Reply-To: References: Message-ID: <4CD203D8.5080602@lightlink.com>

John Grigg wrote:
> This is always a cause for celebration!
>
> http://io9.com/5668042/preview-surface-detail-by-iain-m-banks

Yay!! More Culture! Coming on the heels of John Clark's prognosis that yesterday's election will mean Obama is more likely to get elected in 2012, this is turning out to be a more cheerful day than I expected... :-) And the AUD paritied the USD today... what is this, are the planets all lined up or something?

Richard Loosemore

From brent.allsop at canonizer.com Thu Nov 4 02:53:40 2010
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Wed, 03 Nov 2010 20:53:40 -0600
Subject: [ExI] Flash of insight...
In-Reply-To: <4CD0DC77.7070603@speakeasy.net> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> <4CD0DC77.7070603@speakeasy.net> Message-ID: <4CD22034.4060304@canonizer.com>

Psychonaughts,

From the way others are talking about all this, they clearly don't yet fully understand what is going on in the right way. Think of a simulated world like Halo, where there are two competitors in that simulated world. The data representing one of them could be stored in one memory chip, while the data representing the second could be represented by the circuits in a different memory chip. If a third competitor showed up between them, certainly you wouldn't necessarily conclude that the third person's existence was represented by something spatially between these two chips. But he could be, just by happenstance. The actual representations (or neural correlates of our 3D conscious knowledge) need not have anything to do with each other. Though the brilliant Steven Lehar makes some very powerful arguments, mostly for efficiency's sake, for the correlates being laid out in a very isomorphic 3D way. Kind of like an actual model of your spatial world laid out within the neurons of your cortex. Think of the flat mountains, moon behind them, and the stars, all as being not infinitely far away (since your brain isn't large enough to represent much more than a few miles of 3D space) but merely flat cutouts pasted on the inside of your skull - or actually as being represented by the set of neurons closest to your skull. And of course, your body represented by the neurons near the center of all this - with your 'spirit' being inside this, as if it was looking out of the representation of the eyes - though unlike the rest, your knowledge of your spirit has no referent in reality.

On 11/2/2010 9:52 PM, Alan Grimes wrote:
>>> I look forward to soon
>>> knowing first hand just how diverse your experiences of yourself are,
>>> Alan, compared to my own.
> ????
>
> How do you propose to do that?
>
> You haven't read chapters 5 and 6 of 1229 Years After Titanic yet, have you?

http://home.comcast.net/~brent.allsop/1229.htm#_Toc22030742

To start, if we happen to represent things very similarly, there is a chance something like an FMRI will be able to see enough resolution of neural operation to tell us that my experiences are very similar to yours - or not. There may be other tricks, like using cameras and goggles to induce one of us to experience things the way the other does. (Again, being confirmed by the FMRI-like device observing us achieving similar responsible neural correlates - and then saying: "There, you have it, that is what it is like for Alan.")

Ultimately, though, as predicted by the brilliant V.S. Ramachandran, we need to do between brains what the corpus callosum does between our brain hemispheres. We need to eff the ineffable - as in oh THAT is what salt tastes like for you. Such a connecting 'cable of neurons' will enable our conscious models of reality worlds to subjectively merge. When I hug my spouse, currently I only experience half of what is going on. With this kind of a hookup, I'll be able to experience it all, just as I now do for both the right and left half of my body and world of about 2 miles in both directions - represented by both hemispheres - right hemisphere representing my left body/world and vice versa.

And, as predicted in the 1229 story, our 'spirits' will freely traverse between such consciously connected phenomenal worlds. We'll be making phenomenal worlds exponentially more diverse - worlds nobody has yet experienced anything phenomenally like - and so much more. Not to mention we'll finally know 'what it is like to be a bat' or a snail.... as we grow toward becoming omni phenomenal and realizing that all of nature is so much more than just cause and effect behavior.

I know how the light of a sunset behaves, and what my brain's representation of a sunset is phenomenally like. The real question is, what is the actual sunset really phenomenally like.

Brent Allsop

From atymes at gmail.com Thu Nov 4 03:40:10 2010
From: atymes at gmail.com (Adrian Tymes)
Date: Wed, 3 Nov 2010 20:40:10 -0700
Subject: [ExI] The answer to tireless stupidity
In-Reply-To: <238426.99429.qm@web30104.mail.mud.yahoo.com> References: <832116.3227.qm@web30106.mail.mud.yahoo.com> <238426.99429.qm@web30104.mail.mud.yahoo.com> Message-ID:

You miss one of my major points: Yes, anyone _can_ use this. Think about who _will_. Or, at least, who is more likely to. The odds of a scientist who knows evolution using this within the next five years exceed the odds of a creationist using this within the same time frame. Yes, that's getting into probabilities. Yes, it's not guaranteed. The future cannot be guaranteed. Even the Singularity is not absolutely certain to happen, in any form that we'd call a Singularity, but merely likely. But certain outcomes can be made more likely - and if that's all that can be done, then it shall have to be good enough. And in this case, it is more likely that people we would agree with will use it, before people we would disagree with, at least as regards their use of it.

2010/11/3 Dan
> I don't disagree about the "silent audience" in any discussion, though I
> wonder if some of them aren't just immediately turned off by a continuous
> stream of emotional arguments anyhow.
>
> Regarding facts, the problem here would be interpretation in many cases.
> Also, merely citing journal articles doesn't settle things in many cases.
> Think about those economists and market analysts pointing out that the > housing bubble was going to burst and those who argued against them. The > latter could've easily created chatbots citing all the relevant articles in > peer-reviewed journals right up until the market unraveled in 2008. In a > sense, it's all going to depend on what the silent audience takes for fact > and reliable reasoning in the first place. (Of course, this is not an attack > on chatbots per se, but merely to point out that the wider social context is > important.) > > Regarding a Creationist setting these up, well, aren't there already cheat > sheets that Creationists use? Isn't there a book out called _How to Debate > an Atheist_? Yes, this can be used for good or ill, and, like you, I'm more > the optimist here. But the likely long-term outcome is probably not going to > be the Dark Side is thwarted by chatbots, but that Dark Side chatbots make > the more intelligent people less likely to take chat seriously. (In my > opinion, that might actually be a big win. There are almost always more > important things to do. :) > > Regards, > > Dan > > *From:* Adrian Tymes > *To:* ExI chat list > *Sent:* Wed, November 3, 2010 2:17:20 PM > *Subject:* Re: [ExI] The answer to tireless stupidity > > Very true. However: > > 1) Might it be the case that those whose arguments are not based on facts > have > more buttons to push? If they can not be secure in letting the other side > have the > last word, because they know everyone else can tell which side is the > buffoon... > > 2) The point of the debate is more often to convince the silent audience. > If one > side keeps making emotional arguments, and the other side keeps rebutting > by > linking to facts supported by outside sources, more people who witness the > debate will come away leaning toward the latter. > > 3) This is an interesting development as a political tool. Like any > technology, it > can be used for good or evil. However, like many new technologies, those > who we > view as "good" tend to be in a better position to use these tools, and thus > will > probably make more effective use of them (at least in the next decade or > two). > (In other words: try imagining an Extropian setting one of these up, then > try to > imagine a creationist setting one of these up. It's easier to imagine the > former > case, no?) > > On Wed, Nov 3, 2010 at 10:43 AM, Dan wrote: > >> This is not necessarily a cure for anti-science nonsense or even nonsense. >> It >> could be used against anyone holding any view: simply wear them down. >> E.g., >> someone here argues for Extropians or transhumanist views and someone else >> sets >> up a chatbot merely to keep pushing their buttons. >> >> > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Thu Nov 4 05:04:47 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 04 Nov 2010 01:04:47 -0400 Subject: [ExI] Flash of insight... 
In-Reply-To: <4CD22034.4060304@canonizer.com> References: <4CCB3ACB.8000106@speakeasy.net> <8CD46F1D7619EAC-22BC-10939@webmail-d003.sysops.aol.com> <4F38D7D2-0079-4F54-AAB6-9E4B1185A07E@bellsouth.net> <8CD475BDEDA80D9-EDC-21A0D@Webmail-d121.sysops.aol.com> <4CCDEE41.20706@canonizer.com> <4CCE079C.4010102@speakeasy.net> <4CD0DC77.7070603@speakeasy.net> <4CD22034.4060304@canonizer.com> Message-ID: <4CD23EEF.7080309@speakeasy.net>

Brent Allsop wrote:
>>> I look forward to soon
>>> knowing first hand just how diverse your experiences of yourself are,
>>> Alan, compared to my own.
>> ????
>> How do you propose to do that?
> You haven't read chapters 5 and 6 of 1229 Years After Titanic yet, have
> you?
> http://home.comcast.net/~brent.allsop/1229.htm#_Toc22030742

=\ I skimmed those again, they just seemed to be a random collection of vague statements or dialogue beginning with "I". =\ If you want to see how to write in the first person, read Orange Sky by myself. =P Problem is, I don't have it up on the web right now and since the thing is over 300k in length, it'd take weeks to convert it to html format. I was thinking of publishing it but then I'd have to rewrite it and I was running out of creative energy before I even finished it. =\

> To start, if we happen to represent things very similarly, there is a
> chance something like an FMRI will be able to see enough resolution of
> neural operation to tell us that my experiences are very similar to
> yours - or not. There may be other tricks, like using cameras and
> goggles to induce one of us to experience things the way the other
> does. (Again, being confirmed by the FMRI-like device observing us
> achieving similar responsible neural correlates - and then saying:
> "There, you have it, that is what it is like for Alan.")

Implausible. The proposal fails to account for dimorphisms in the neural architecture that are at the heart of what's being discussed. ie: our neural networks might be incapable of simulating the other without in some ways becoming the other, so you couldn't just "sample" it like a taste test. The proposal doesn't even account for getting even that far.

> Ultimately, though, as predicted by the brilliant V.S. Ramachandran, we need
> to do between brains what the corpus callosum does between our brain
> hemispheres. We need to eff the ineffable - as in oh THAT is what salt
> tastes like for you. Such a connecting 'cable of neurons' will enable
> our conscious models of reality worlds to subjectively merge. When I
> hug my spouse, currently I only experience half of what is going on.
> With this kind of a hookup, I'll be able to experience it all, just as
> I now do for both the right and left half of my body and world of about
> 2 miles in both directions - represented by both hemispheres - right
> hemisphere representing my left body/world and vice versa.

Now that is an interesting proposal. In my Tortoise Vs. Achilles dialogs I have a character, a borganism, who has a true single consciousness across several bodies. (Look, I've written ten times as much as you and I'm better at it too! I just don't go around citing it as if it were a classic or peer reviewed literature). I'm extremely cautious with the word "need" but yes, the ability to set such a link up between brains and, more importantly, between a brain and a computronium counterpart would be extremely useful. It definitely falls within the category of Real Transhumanism (tm).
> And, as predicted in the 1229 story, our 'spirits' will freely traverse
> between such consciously connected phenomenal worlds. We'll be making
> phenomenal worlds exponentially more diverse - worlds nobody has yet
> experienced anything phenomenally like - and so much more. Not to mention
> we'll finally know 'what it is like to be a bat' or a snail.... as we grow
> toward becoming omni phenomenal and realizing that all of nature is so
> much more than just cause and effect behavior.

Predictions of this sort are useless because they don't lead towards meaningful action. The correct way to think about this is "Do you want to do this or do you not want to do it?" With the answer to that in hand, the next question is "So what are you going to do about it, huh? punk... What are you going to do!". Me? I'm going to get myself a NAO, and a personal supercomputer, and solve AI. After that it's off to the races...

> I know how the light of a sunset behaves, and what my brain's
> representation of a sunset is phenomenally like. The real question is,
> what is the actual sunset really phenomenally like.

I'm not sure that question is meaningful.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From dan_ust at yahoo.com Thu Nov 4 13:39:23 2010
From: dan_ust at yahoo.com (Dan)
Date: Thu, 4 Nov 2010 06:39:23 -0700 (PDT)
Subject: [ExI] Australian dollar
In-Reply-To: <000801cb7ba7$77590a30$660b1e90$@att.net> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> Message-ID: <362237.28012.qm@web30102.mail.mud.yahoo.com>

Imports themselves are not to blame. Also, recall the context of my statement here: BillK wrote, "Currency devaluation has many bad consequences, of course, as well as the good consequences of possibly increasing exports and reducing imports."

He's, obviously, here pointing to an upside to currency devaluation. I was questioning whether this was really an upside after all. What's wrong, after all, with imports? They're a sign of trade -- that people somewhere else want to sell stuff to you. This is usually a great thing -- it spreads the division of labor ever further -- making for greater efficiency in production -- and usually provides you with more things to choose from.

We are not "spending ourselves to a brutal catastrophe." The US government is. If you're worried about spending being too high (by whose reckoning?), then the thing to do is stop government-sponsored credit expansion. Also, stop government debt-financing -- which is one of the main drivers of credit policy (credit expansion allows big debtors to borrow more; the biggest debtor in any modern economy is its government).

This debt, too, doesn't need to be paid back. It should be defaulted on. Defaulting on the government debt will make creditors unlikely to loan to the government again. More importantly, paying it off will involve coercion -- via taxation or some other coercive means. Yes, I know, the wealthy creditors who lent to the government enjoy being paid off by taxes and the like. Well, that has to stop, and it would undermine the Hamiltonian notion of having a national debt to cleave the wealthy to the government. (Granted, my recommendation here would be unpopular with these same creditors, and they would try to persuade everyone that the world will end if the government defaulted or were just abolished outright.*)

Regards, Dan

* Which is _the_ libertarian position.
Libertarians who advocate government are inconsistent.

----- Original Message ----
From: spike
To: ExI chat list
Sent: Wed, November 3, 2010 6:35:49 PM
Subject: Re: [ExI] Australian dollar

That US dollar will drop way faster now that the federal reserve has just bought up 600 billion in US Treasury notes. US government buying its own debt is equivalent to spinning up the printing presses at the national mint. Are we going to keep pretending this is a debt that never needs to be paid back?

... On Behalf Of Dan ...
>Why is reducing imports a good thing? Dan

Dan, your even asking the question worries me. Answer: because we are spending ourselves to brutal catastrophe.

spike

From rahmans at me.com Thu Nov 4 10:20:57 2010
From: rahmans at me.com (Omar Rahman)
Date: Thu, 04 Nov 2010 11:20:57 +0100
Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity
In-Reply-To: References: Message-ID: <11E05544-EA6E-4921-9866-21539C70EA03@me.com>

Spike,

This is brilliant. You've just set up the scenario for a new and improved Turing test. Why improved? It basically fulfills the Turing test... but potentially serves a reproductive purpose, thereby influencing evolution. Well done sir!

Regards,

Omar Rahman

P.S. Time to think up some super sexy code to attract post-singularity mates!

>> Subject: [ExI] The answer to tireless stupidity
>> Chatbot Wears Down Proponents of Anti-Science Nonsense... Jeff Davis
>
> Actually this application would be a pointless waste of perfectly good
> technology.
>
> Consider the online lonely hearts club. There are places on the web (and
> the usenets before that, and DARPAnet even before that) where lonely hearts
> would hang out and make small talk. A really useful application of a
> chatbot would be to have it mine one's own writings and produce an enormous
> lookup table, which it would then use to perform all the tedious,
> error-prone and emotionally hazardous early stages of online seduction. As
> soon as the other party agrees to meeting for, um, stimulating conversation
> (and so forth), then the seductobot would alert the user, who then reads
> over what the bot has said to the prospective contact.
>
> Of course, the other party might also have set up a seduct-o-matic to do the
> same thing.
>
> Similarly to Jeff's example, it might soon become very difficult to
> distinguish two humans trying to get each other into the sack from two
> lookup tables doing likewise. As soon as actual creativity or innovation is
> seen in the mating process, we know it must be a chatbot, for humans have
> discovered nothing essentially new in that area since a few weeks after some
> adventurous pair of protobonobos first discovered copulation.
>
> spike

From spike66 at att.net Thu Nov 4 15:28:43 2010
From: spike66 at att.net (spike)
Date: Thu, 4 Nov 2010 08:28:43 -0700
Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity
In-Reply-To: <11E05544-EA6E-4921-9866-21539C70EA03@me.com> References: <11E05544-EA6E-4921-9866-21539C70EA03@me.com> Message-ID: <003201cb7c34$f7642470$e62c6d50$@att.net>

Subject: [ExI] New Improved Turing Test was: Subject: The answer to tireless stupidity

>> Similarly to Jeff's example, it might soon become very difficult to
>> distinguish two humans trying to get each other into the sack from two
>> lookup tables doing likewise... spike

>Spike,
>This is brilliant. You've just set up the scenario for a new and improved Turing test. Why improved? It basically fulfills the
>Turing test... ...
Omar you are too kind sir, but I cannot claim originality. A few years ago, a guy realized that plenty of college-age hipsters had never heard of Eliza, the software psychoanalyst. That was a toy that came and went a long time ago. I played with it some in college. He set up an Eliza-like program, which is easy to reproduce in excel with a good sized lookup table, then set it to hang out in a teen chat room, to see if the kids would ever figure out they were talking to a computer. A few of them did, but most did not. There was one striking example of a kid who poured out his heart to this program for 55 minutes, apparently never realizing it was a machine. That is a form of Turing test success. It made Slashdot headlines, but I think it was at least five or six years ago, long enough for everyone to forget and have a fresh innocent batch of teens to redo the experiment. Muwaaaahaaahaahahahahahahaaaa...

>...but potentially serves a reproductive purpose, thereby influencing evolution. Well done sir!
>Regards,
>Omar Rahman

Hmmm, that gives me pause. Fortunately the kinds of mating I had in mind seldom result in actual reproduction. But I suppose it could generate larvae, in which case we would be encouraging the breeding of people who rely on machines to do the messy emotional stuff that is intertwined with the mating game. Oy freaking vey.

Well, wait a minute, hold that thought. Perhaps this isn't anything new. Consider Hallmark cards. There is an example where we take the sweet gooey feeling stuff that many of us here recognize we are not particularly good at, and hire others to do it for us. We buy the birthday wishes written by others for a couple bucks. Same for wedding best wishes, get well soon cards, sympathy cards and so on. We already subcontract emotional care and feeding to others who are better at it than we are. So I guess it isn't such a major stretch to imagine we set up seductobots to look around on the web, get acquainted with, and prime prospective mates. I could even see setting the seductobot with one's own personality quirks.

Here's a possible innovation. The seductobot, being tireless, can filter through arbitrarily many potential mates, more than its human counterpart could ever service with actual copulation. It would be a little like the 72 virgins thing, only there would be more than 72 and they wouldn't actually be virgins. So one could set the bot to present the person as he *really is* as opposed to the idealized version of oneself that pretty much everyone presents if they hang out on lonely hearts sites. One could actually downplay one's virtues, as few of us actually ever do. Then the potential mate would enjoy pleasant surprises as opposed to disappointments as she came to know you better.

spike

From pharos at gmail.com Thu Nov 4 16:54:40 2010
From: pharos at gmail.com (BillK)
Date: Thu, 4 Nov 2010 16:54:40 +0000
Subject: [ExI] Australian dollar
In-Reply-To: <362237.28012.qm@web30102.mail.mud.yahoo.com> References: <4CD1C94C.3040705@satx.rr.com> <423771.36663.qm@web30106.mail.mud.yahoo.com> <000801cb7ba7$77590a30$660b1e90$@att.net> <362237.28012.qm@web30102.mail.mud.yahoo.com> Message-ID:

On Thu, Nov 4, 2010 at 1:39 PM, Dan wrote:
> Imports themselves are not to blame. Also, recall the context of my statement
> here: BillK wrote, "Currency devaluation has many bad consequences, of course,
> as well as the good consequences of possibly increasing exports and reducing
> imports."
>
> He's, obviously, here pointing to an upside to currency devaluation.
I was > questioning whether this was really an upside after all. What's wrong, after > all, with imports? They're a sign of trade -- that people somewhere else want to > sell stuff to you. This is usually a great thing -- it spreads the division of > labor ever further --?making for greater efficiency in production -- and usually > provides you with more things to choose from. > I agree that trade is good. But I was writing in the context of the huge US deficit funding. The US specifically needs to get the import / export trade back in balance. > We are not "spending ourselves to a brutal catastrophe." The US government is. > If you're worried about spending being too high (by whose reckoning?), then the > thing to do is stop government-sponsored credit expansion. Also, stop government > debt-financing -- which is one of the main drivers of credit policy (credit > expansion allows big debtors to borrow more; the biggest debtor in any modern > economy is its government). > I'd love to have governments do as I tell them, but they won't listen. :) > This debt, too, doesn't need to be paid back. It should be defaulted. Defaulting > on the government debt will make creditors unlikely to loan to the government > again. More importantly, paying it off will involve coercion -- via taxation or > some other coercive means. Yes, I know, the wealthy creditors who lent to the > government enjoy being paid off by taxes and the like. Well, that has to stop > and would undermine the Hamiltonian notion of having national debt to cleave the > wealthy to the government. (Granted, my recommendation here would be unpopular > with these same creditors and they would try to persuade everyone that the world > will end if the government default or were just abolished outright.*) > > The US *is* defaulting on the debt by devaluing the dollar (and hoping that nobody notices). Your economic theory comments ignore the practical situation that the US in now in. The government is owned by the wealthy and has been used and is currently being used to expedite the transfer of all the wealth in the nation into the pockets of the already unbelievably wealthy few. Dollar devaluation doesn't much affect the super-wealthy who own property, land, gold, etc. in the US and abroad in tax havens. As currency devalues, real assets tend to keep their real value. That's where Obama failed. He had a chance to stop the looting when the financial crisis hit, but instead he caved in, bailed them out by giving them billions more and let them carry on as usual. BillK From scerir at alice.it Thu Nov 4 18:05:09 2010 From: scerir at alice.it (scerir) Date: Thu, 4 Nov 2010 19:05:09 +0100 Subject: [ExI] Bayes and psi In-Reply-To: <4CD1CB7B.1080804@satx.rr.com> References: <567253.29951.qm@web30701.mail.mud.yahoo.com><4CD1C4D6.6000101@satx.rr.com> <4CD1CB7B.1080804@satx.rr.com> Message-ID: Damien Broderick > This might be of interest: a link to a plenary lecture Prof. Utts gave > this summer at the 8th International Conference on Teaching Statistics. > http://icots8.org/cd/pdfs/plenaries/ICOTS8_PL2_UTTS.pdf Michael Strevens wrote papers on Bayes vs philosophy The Bayesian Approach to the Philosophy of Science http://www.strevens.org/research/simplexuality/Bayes.pdf Notes on Bayesian Confirmation Theory http://www.nyu.edu/classes/strevens/BCT/BCT.pdf From jonkc at bellsouth.net Thu Nov 4 21:04:08 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 4 Nov 2010 17:04:08 -0400 Subject: [ExI] Let's play What If. 
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <364035E2-F6CA-4F92-B739-563093FF0921@bellsouth.net> <8D7BE957-ED66-4DEB-AE0C-B77CF6F169CF@bellsouth.net> Message-ID: On Oct 28, 2010, at 2:37 PM, Dave Sill wrote: > If I have two identical apples in my hands, they're still two separate apples, not one. If the apples are truly identical, then exchanging their positions makes no change at all; the universe has no way of knowing it happened or any reason for caring. >>> I'd never agree to allow a non-destructive upload of myself without it being made clear to the upload immediately upon activation that that's what it is. >> If you are very very very lucky maybe someday Mr. Jupiter Brain will give you that choice, or at least pretend to give you that choice. > I'm assuming that the experiment is being conducted by benevolent, trustworthy parties. If that's not true, all bets are off. If Mr. Jupiter Brain decides, for whatever reason, to upload you rather than just off you, then he will not be conducting an experiment; he already knows what will happen. And if he is kind (and I don't know that he will be) and knows you have an irrational fear of being uploaded, then he just won't tell you that you are an upload. Ignorance is bliss. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Fri Nov 5 16:59:19 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Fri, 5 Nov 2010 11:59:19 -0500 Subject: [ExI] Announcing the Gada Prize in Personal Manufacturing @ Humanity+ Message-ID: Today, Humanity+ is announcing that we are taking on the Gada Prize in Personal Manufacturing. I am really excited about this one. Here's the announcement: Announcing the Gada Prize in Personal Manufacturing @ Humanity+ http://humanityplus.org/2010/11/gada-prize-in-personal-manufacturing-at-humanityplus/ """ Humanity+ is proud to announce the Gada Prize (gadaprize.org) in Personal Manufacturing. By January 1, 2013, we will award $20,000 to the individual or team who demonstrates specific improvements to 3D printing technology. The prize was initially hosted by the Foresight Institute and is now hosted by Humanity+. Founded in 1998, Humanity+ focuses on human enhancement and emerging technologies. Desktop 3D printers promise a disruption in manufacturing technology, with improvements in price, productivity, and portability. We believe that a fully open-source 3D printer will herald a new era for both industrial manufacturing and individual prototyping, allowing everyone to rapidly build and test their inventions. The Gada Prize awards innovations applied to the RepRap platform, an open source 3D printer capable of printing plastic objects. Established in 2005 by Adrian Bowyer, the RepRap project has now grown into an international community of scientists, researchers, engineers and RepRap operators. The long-term vision of the RepRap project is an open-source self-replicating machine -- a 3D printer that can build copies of itself. Interested? Everyone is invited to get involved! The teams are especially friendly, and you can always reach out to us. Humanity+ is an international organization focusing on technologies that expand human capacities. We primarily engage in promotion, conferences, ethics, debate, publication, and sponsored projects. The goal of the Gada Prizes is to improve the lives of one billion people by 2020.
After an incubation period with the Foresight Institute, the Gada Prize is now a welcome addition to our portfolio. Resources: - RepRap wiki has a list of teams - RepRap.org prize forum - irc.freenode.net #reprap - irc.freenode.net #hplusroadmap Contact: Bryan Bishop Asst. Director of R&D, Humanity+ bryan at humanityplus.org phone: +1-512-203-0507 """ On a related note, you can access information at gadaprize.org from now on. - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Fri Nov 5 20:04:46 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Fri, 5 Nov 2010 13:04:46 -0700 Subject: [ExI] How old people will remake the world In-Reply-To: References: Message-ID: I'm amazed that no one has commented on this fascinating link. The aging of the first world's population has great social ramifications (especially since in some nations the young people are not having enough children to maintain replacement levels). John On 11/3/10, John Grigg wrote: > At least for some people, the aging of the world population will improve > life... > > > http://www.salon.com/books/feature/2010/10/31/shock_of_gray_interview > > John > From spike66 at att.net Fri Nov 5 21:20:19 2010 From: spike66 at att.net (spike) Date: Fri, 5 Nov 2010 14:20:19 -0700 Subject: [ExI] prediction for 2 November 2010 In-Reply-To: <4CD46562.9000903@evil-genius.com> References: <4CD46562.9000903@evil-genius.com> Message-ID: <005101cb7d2f$4035e580$c0a1b080$@att.net> From: lists1 at evil-genius.com [mailto:lists1 at evil-genius.com] Subject: Re: [ExI] prediction for 2 November 2010 >> The stupidity of American voters... > >> http://www.salon.com/technology/how_the_world_works/2010/11/01/the_unbearable_stupidity_of_american_voters > >> Clue for Andrew Leonard: ... >> Andrew, that is what those unbearably stupid 52% are getting that you are missing... spike ... >Note to modern liberals: a political strategy based on telling people they're stupid is doomed to fail. And they're not stupid...unlike liberals, they >understand that there *is* a problem, even though they don't know what to do about it and blame the wrong things for it... I have an idea for Andrew Leonard: start a new political party. There was a new one formed recently in New York called "The Rent Is Too Damn High Party." Having a name like that helps the voters sum up what the party is about. In that spirit, I suggest Andrew Leonard form the "Voters Are Unbearably Stupid Party." Its platform is to tell the voters that they are unbearably stupid. spike From nebathenemi at yahoo.co.uk Fri Nov 5 22:31:16 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Fri, 5 Nov 2010 22:31:16 +0000 (GMT) Subject: [ExI] Singularity spotting In-Reply-To: Message-ID: <176276.12152.qm@web27001.mail.ukl.yahoo.com> I saw the Xbox game "Singularity" prominently on sale while at a games store wondering if I could run Civilisation 5 on my PC. Plus, with "Transhuman Space", "Eclipse Phase" and other games bringing transhumanism to role-playing games, I saw this cartoon: http://rpg.drivethrustuff.com/images/site_resources/Happy%20D20%20Adventures%20-%2013.jpg Plus I remember a New Scientist interview from maybe two weeks back where the man interviewed said that "we'll either face a singularity-type scenario or a new dark age". So, there's ever-expanding popular usage of The Singularity as a concept. At this rate, it'll be 2011's buzzword.
Tom From emlynoregan at gmail.com Sat Nov 6 04:02:01 2010 From: emlynoregan at gmail.com (Emlyn) Date: Sat, 6 Nov 2010 14:32:01 +1030 Subject: [ExI] The Codescape Message-ID: Hi all, sorry I haven't been around for a while, coding ;-) But I thought this bit that I just wrote was on topic for the list. --- http://point7.wordpress.com/2010/11/06/the-codescape/ There's this incredible place where I like to spend a lot of my time. Everyone I know is near it, closer every day, but mostly they don't come in. When I was a kid, it barely existed, except in corporates and universities, but it expanded slowly. There wasn't much you could do, even after it began to really explode through the 90s. But lately it's become somewhere new, somewhere much bigger, somewhere much more interesting. It's a place I call the Codescape, and it's becoming the platform on which the whole world runs. The Codescape is simply the space of all computer programs (code) spanning the world. The internet is implemented in it, but it is not the 'net. "The Cloud" is one of the more interesting pieces of it, but it is not the cloud. It exists in every general purpose machine, as soon as anyone tries to make it run code. Some of it is in your computer, some is in your phone, there's even a little bit in your car. There might be a tiny pocket in your pacemaker. In fact it's something that many of us grew accustomed to thinking of as a lot of isolated little pocket worlds -- the place inside one machine or the place inside one network. It's related to the computer scientist's platonic space of pure code-as-mathematics, but it is really the gritty, logical-physical half-world of the running program instances, and the sharp edged, capricious, often noxious rules that real running environments bring. It is the space of endless edge cases, failures, unforeseen and unforeseeable interactions between your own code and dimly perceived layers built by others. The platonic vision of the code is a trick, an illusion. We like to fool ourselves into thinking that we can create software like one might do maths, in a tower of the mind, all axioms and formal system rules known and accounted for, and the program created inside those constraints like a beautiful fractal, eternal in its elegance and parsimony. Less a construct than a discovery. The platonic code feels like a clean creation in the world of vision and symbols. Code is something you can see, after all, expressed as a form of writing. If you spend long enough away from the machines, you can think this is the real thing, mistake the map for the territory. But the real Codescape isn't amenable to this at all. It is a dark place and a silent place. You know you are in the Codescape because your primary sensory modalities are touch, smell, and frankly, raw instinct. It is an environment composed of APIs, system layers, protocols and, ultimately, raw bytes. It is an environment where the code vibrates in time with the thrumming of the hardware. You feel through this environment, trying to understand the shapes, reach perfectly into rough, edged crenelations, looking for that sensation of lock, the successful grasp. Always, though, you are ready for the feeling of an unexpected sharp edge, a hot surface, the smell of something turned bad, the tingle of your spidey sense. It is a place that you can't physically be in, but you can project yourself into. The lines of code are like tendrils, or tentacles, or maybe like a trail of ants reaching out from the nest.
That painstaking projection, and the mapping of monkey senses and instincts to new purposes, turns most people off, but I think those of us most comfortable with it find the physical world similar. Possibly less abstractable, and so more alien. Certainly dumber. Oddly enough, we don't talk about codespace much. It isn't because we don't want to, but because largely we cannot. We who travel freely between worlds often can't express it, because it is a place of system and not of narrative. During periods of hype (mostly about the internet), a lot of bad novels and terrible movies get written about it (while missing it entirely), with gee-whiz 3D graphics and faux h4XX0r jargon. Sometimes some of us are even fooled by this, and so we pay unfortunate obeisance to notions like "virtual reality" and "cyberspace", and construct things like 3D corporate meeting places, or Second Life, or World of Warcraft. Those are bona fide places, good for the illiterate, and a pleasant place to unwind for people of the code. They even contain little pockets of bona fide codescape inside themselves -- proper, first-class codescape, because all of the codescape is as real as the rest. But there is something garish, gauche about these 3D worlds, like the shopping mall inside an airport, divorced from the country in which it physically exists. The main codescape now, as it exists in 2010, is like the mother of all MMOs. Many, many of us, those who can walk it (how many? hundreds of thousands?) play together in the untamed, expanding chaos of a world tied together by software and networks. Each of us plays for our own reasons; some for profit, some for potential power, some for attention, and many of us, increasingly, for individual autonomy and personal expression. It's a weird place. It's never really been cool (although it's come close at times), because the kinds of people who decide on what's cool can't even see it. These days the cool kids (like Wired, or Make Magazine, or BoingBoing) like open hardware, or physical making. But everything interesting is being enabled by software, more and more and more software, and so becomes at heart a projection out of the Codescape. Douglas Rushkoff's recent book, "Program or Be Programmed", talks about how we are now living in this world where what I call the Codescape is shaping the lives of everyone, and where we are divided into the code-literate and not. His book is mostly dreary complaining that it's all too hard and the 'net should be more like it was in the 90s (joining an increasing chorus of 90s technorati who are finding themselves unable to keep up), but that first sentiment is absolutely spot on. If you can code, then, if you so choose, you can feel your way through codespace, explore the shifting landscape, and maybe carve out part of it in the shape of your own imaginings. Otherwise, you get internet-as-shiny-shopping-mall, a landscape of opaque gadgets, endless ads, monthly fees, and the faint suspicion that you are being constantly conned by Fagin-esque gangs. I contend that if you care about personal autonomy, about freedom, in the 21st century, then you really should try to be part of this world. Perhaps for the first time, the potential for individuals is rivalling that of corporate entities. There is cheap and free server time on offer, high level environments into which you can project your codebase. The protocols are open, the documentation (sometimes just code itself) is free and freely available. Even the very best programming tools are free.
If you can acquire the skills and the motivation, you can walk the Codescape with nothing more than an internet connection, a $100 Chinese netbook, and your own wits. There is no barrier to entry, other than your ability to twist your mind into the shape that the proper incantations demand. Everything has a programmable API, which you can access and play with and create with if you are prepared to make the effort. At your fingertips are the knowledge and information resources of the world, plus the social interactions of 2 billion humans and counting, plus a growing resource of inputs and outputs in the physical world with which you can see and act. It's a new frontier, expanding faster than we can explore and settle it. It's going to be unrecognisable in 2020, and again in 2030, and who knows what after that. But the milestones are boring. The fun is in living it. The first challenge is just to try. -- Emlyn http://my.syyn.cc - A service for syncing buzz and facebook, posts, comments and all. http://www.blahblahbleh.com - A simple youtube radio that I built http://point7.wordpress.com - My blog Find me on Facebook and Buzz From kanzure at gmail.com Sat Nov 6 15:14:45 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 6 Nov 2010 10:14:45 -0500 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips Message-ID: ---------- Forwarded message ---------- From: scary boots Date: Sat, Nov 6, 2010 at 10:10 AM Subject: [london-hack-space] Request for knowledge: Implantable Microchips To: london-hack-space at googlegroups.com Hello everybody, Some of you may have been there when I mentioned my desire to get myself microchipped. I want to be identifiable with pet scanners, and using it to access places would be cool as well (albeit somewhat unsuave as it'll be in the back of my neck). Can't help noticing that all the ones for sale (and most are only for sale to registered vets) come with different brand names and only assert that they work with that particular company's scanner. Is there any standardization in the market? If not, what is most commonly used/works with easily-obtained scanners? Any other considerations I should bear in mind? I am aware that the insertion cannula is quite large. I'm not worried about the insertion, because I have an experienced piercer who'll do it for me, and I'm not a pussy. But I'm damned if I'm going to get it inserted and then find out it's not compatible with anything. Any help or links appreciated! Scary ps. would anyone be interested if I put photos of my crinoline up or is that totally dull to everyone who's not a frivolous poser like me? -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 6 16:30:29 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 09:30:29 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: Message-ID: I'll leave it to someone more qualified to talk about possible medical concerns, but as to standardization: nope. "Standardization" means "let other people make stuff that works with our toys", which is something that private vendors are loath to do in any early stage market such as this, because they think that's a part of the market they could serve themselves.
It is only once the market matures, and vendors realize they can do better by focusing on a part of the market and letting other people handle the rest, that standards begin to emerge. Of course, it is usually the case that vendors can do better by specializing in some profitable niche all along, even in an early stage market. In new markets, it is not obvious what that niche is. But more important is greed, and the common, usually errant belief that one vendor can do everything a customer would want with no outside assistance. (Also known as the Not Invented Here syndrome.) 2010/11/6 Bryan Bishop > [scary boots' forwarded message snipped; it is quoted in full above] -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Nov 6 16:23:07 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 09:23:07 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: Message-ID: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Bryan wrote: >. I mentioned my desire to get myself microchipped. I want to be identifiable with pet scanners, and using it to access places would be cool as well (albeit somewhat unsuave as it'll be in the back of my neck).- Bryan Hi Bryan, I don't know about compatibility, but implanted microchips are the mark of the beast: http://www.av1611.org/666/biochip.html Fortunately I have always been a fan of beasts. One comment you made here is that the chip will go in the back of the neck. The mark of the beast site says they did a 1.5 million dollar research project and found that the best places would be the back of the hand or the forehead (as described in holy scripture donchaknow.) Without a penny of research, I can see these would be the second and third worst places for such a device (for men anyways.) That being said, I would think a far better place for a microchip would be the earlobe.
There are no muscles nearby, no tendons, no contact with a pillow, no risk of it wandering off and lodging in your damn brain somewhere. Furthermore, people already abuse that particular body part for no particular reason other than some misguided fashion notions. Far be it from me to criticize misguided fashion notions, but this looks to me like a far better place for a subcutaneous chip, ja? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sat Nov 6 16:47:44 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 09:47:44 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: 2010/11/6 spike > One comment you made here is that the chip will go in the back of the > neck. The mark of the beast site says they did a 1.5 million dollar > research project and found that the best places would be the back of the > hand or the forehead (as described in holy scripture donchaknow.) Without a > penny of research, I can see these would be the second and third worst > places for such a device (for men anyways.) > > I can see the forehead, but why is the back of the hand a bad place? Just the visible bump (since there's not that much flesh between the handbones and the skin there)? > That being said, I would think a far better place for a microchip would be > the earlobe. There are no muscles nearby, no tendons, no contact with a > pillow, > Maybe for you, but I've grown used to sleeping with my head turned sideways (so I can have another pillow atop my head to block out noise), so it would definitely contact pillow there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Sat Nov 6 16:50:44 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Sat, 6 Nov 2010 11:50:44 -0500 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: 2010/11/6 spike > Hi Bryan, Spike-- just to be clear, I didn't write the original email, but I did think it worth consideration. I don't particularly have a need to microchip myself as a cat/dog/antelope. But I imagine someone.. uh. Might? - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Nov 6 18:58:11 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 11:58:11 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: <002101cb7de4$8fc551c0$af4ff540$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Adrian Tymes Subject: Re: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips 2010/11/6 spike >> 1.5 million dollar research project and found that the best places would be the back of the hand or the forehead (as described in holy scripture donchaknow.) Without a penny of research, I can see these would be the second and third worst places for such a device (for men anyways.) >I can see the forehead, but why is the back of the hand a bad place?
Just the visible bump (since there's not that much flesh between the handbones and the skin there)? Not enough flab on the hand, way too many nerve endings, muscles and tendons everywhere, too much exposure to scrapes, plenty of mechanical stress, just sounds risky to me. Possible alternative would be that loose flab on the upper arm. Most of us recall seeing our elementary school teacher writing something on the board, and that upper-arm flab would get to oscillating. Flab is a good place to put a microchip, to reduce the risk of its wandering off. Actually one of the best places for something like that might be in the scrotum, although it might make the user look a little strange when using the reader. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Nov 6 19:06:30 2010 From: spike66 at att.net (spike) Date: Sat, 6 Nov 2010 12:06:30 -0700 Subject: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips In-Reply-To: References: <001b01cb7dce$e5cc2f50$b1648df0$@att.net> Message-ID: <002601cb7de5$b8e577a0$2ab066e0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Bryan Bishop Subject: Re: [ExI] Fwd: [london-hack-space] Request for knowledge: Implantable Microchips Spike-- just to be clear, I didn't write the original email, but I did think it worth consideration. I don't particularly have a need to microchip myself as a cat/dog/antelope. But I imagine someone.. uh. Might?- Bryan Oh ok cool, I did miss that. When the pet chips became available a few years ago, I thought it might be cool to have something like that to keep one's medical records, blood type, drug allergies and so forth. I didn't get one because of the same reasons your article mentions: there is no standard, and I don't want to keep having it changed every five years. The guy who wrote the article commented "I am not a pussy." and I am not either, don't even play one on TV, but I don't want to keep changing a subcutaneous chip as often as major music distribution formats change. I am one who has already lived thru vinyl LPs, 8 track tapes, cassette tapes, CDs, DVDs, MP3, and now whatever it is that young people use to buy their music. I have already rebought my favorite albums thrice. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Nov 6 20:05:05 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Nov 2010 21:05:05 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/2 Adrian Tymes > 2010/11/2 Stefano Vaj > > 2010/11/2 Adrian Tymes >> >>> The main problem is, current fusion reactor operators consider sustaining >>> fusion >>> for a few seconds to be "long duration", and have engineered several >>> tricks to keep >>> it going that long. >>> >> >> What's wrong in a pulse propulsion detonating H-bombs one after another, >> V1-style? >> >> What you're talking about was once called Project Orion. > Exactly. > It could work, in theory, > especially if you kept it outside the atmosphere to avoid radiation > concerns > Or, you could try to limit somewhat radioactive pollution and accept the rest, especially for "once-for-all" projects... ;-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atymes at gmail.com Sat Nov 6 20:48:29 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sat, 6 Nov 2010 13:48:29 -0700 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/6 Stefano Vaj > Or, you could try to limit somewhat radioactive pollution and accept the > rest, especially for "once-for-all" projects... ;-) > There's a fundamental problem with that type of thing. Anything where you aren't planning on returning to Earth, but where your trip does have adverse consequences for those who remain (like radioactive exhaust during launch), doesn't shield you from people who can predict these consequences and prevent you from launching even once. Given the resources required, keeping it a secret while also getting the spaceship actually built is not possible. If you try, the secret will be discovered by such people after you start bending metal, probably around the time you start test firing the engine's components. (That, or it will remain in the planning stages forever, and thus fail to actually build the spaceship.) Plan on returning, plan on giving those you leave behind no reason to stop you, or plan on never leaving in the first place. Any other plan is guaranteed to fail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sat Nov 6 22:08:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 6 Nov 2010 23:08:34 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/6 Adrian Tymes > There's a fundamental problem with that type of thing. Anything where you > aren't > planning on returning to Earth, but where your trip does have adverse > consequences > for those who remain (like radioactive exhaust during launch), doesn't > shield you > from people who can predict these consequences and prevent you from > launching > even once. > Misunderstanding. Let us imagine that you make use of a Project Orion spaceship to take out of the earth gravity well a space solar power plant which "breaks even" and is then capable of supplying the energy required for its maintenance and growth. Or a mirror aimed at limiting a (hypothetically real, I am not discussing the issue here) runaway global warming by deflecting some of the sun's irradiation. Or what is necessary to create a permanent base where building stuff and fuel is much cheaper. You need not imagine that you would go on launching Project Orion ships every week for all eternity. They might well simply be a reasonable exception option in terms of risk-performance to break a few vicious circles. Having said that, the environmental consequences of a few launches might well be grossly exaggerated, in particular in comparison with other environmentally-challenging techs in widespread use in spite of the very real damages suffered by many people as a consequence thereof. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed...
URL: From avantguardian2020 at yahoo.com Sat Nov 6 23:27:41 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sat, 6 Nov 2010 16:27:41 -0700 (PDT) Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: <964866.1581.qm@web65602.mail.ac4.yahoo.com> > >It could work, in theory, >>especially if you kept it outside the atmosphere to avoid radiation concerns >> Or, you could try to limit somewhat radioactive pollution and accept the rest, especially for "once-for-all" projects... ;-) Another criticism of the Orion Project spaceships is that of the electromagnetic pulse (EMP) that would be generated with each "boost". At high enough altitudes, the EMP could black out a whole hemisphere. While the ship itself could be hardened, amounting to putting Faraday cages around all the electronics, most earthbound systems would still be vulnerable. Just thought I would throw that in. Stuart LaForge "To be normal is the ideal aim of the unsuccessful." -Carl Jung -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Nov 7 20:24:01 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:24:01 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: <964866.1581.qm@web65602.mail.ac4.yahoo.com> References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> <964866.1581.qm@web65602.mail.ac4.yahoo.com> Message-ID: 2010/11/7 The Avantguardian > Another criticism of the Orion Project spaceships is that of the > electromagnetic pulse (EMP) that would be generated with each "boost". > Interesting. But wasn't the Internet developed to deal exactly with widespread fusion explosions, albeit on a much larger scale than a single Project Orion launch? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From atymes at gmail.com Sun Nov 7 20:29:06 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 7 Nov 2010 12:29:06 -0800 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/6 Stefano Vaj > 2010/11/6 Adrian Tymes > > There's a fundamental problem with that type of thing. Anything where you >> aren't >> planning on returning to Earth, but where your trip does have adverse >> consequences >> for those who remain (like radioactive exhaust during launch), doesn't >> shield you >> from people who can predict these consequences and prevent you from >> launching >> even once. >> > > Misunderstanding. > > Let us imagine that you make use of a Project Orion spaceship to take out > of the earth gravity well a space solar power plant which "breaks even" and > is then capable of supplying the energy required for its maintenance and > growth. Or a mirror aimed at limiting a (hypothetically real, I am not > discussing the issue here) runaway global warming by deflecting some of the sun's > irradiation. Or what is necessary to create a permanent base where building > stuff and fuel is much cheaper. > > You need not imagine that you would go on launching Project Orion ships > every week for all eternity. They might well simply be a reasonable > exception option in terms of risk-performance to break a few vicious > circles. > Ah. Yes, that is less of a problem, but still a problem. Fundamentally: if it's allowed once, for anyone, it'll be allowed indefinite times. There is ample reason to believe that there won't be any worldwide limits on the number of launches.
(For one, if only 5 launches per year would be safe, who decides who will get to do those 5 - and what happens when someone launches a sixth?) People may oppose it on those grounds - but that may be surmountable, especially if no one else will have the ability to do this before you plan to have no further need of it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Nov 7 20:33:53 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:33:53 +0100 Subject: [ExI] Fusion Rocket In-Reply-To: References: <319941.52817.qm@web65601.mail.ac4.yahoo.com> Message-ID: 2010/11/7 Adrian Tymes > Fundamentally: if it's allowed once, for anyone, it'll be allowed > indefinite times. > There is ample reason to believe that there won't be any worldwide limits > on the > number of launches. (For one, if only 5 launches per year would be safe, > who > decides who will get to do those 5 - and what happens when someone launches > a sixth?) > > People may oppose it on those grounds - but that may be surmountable, > especially > if no one else will have the ability to do this before you plan to have no > further need > of it. > Sure. But they could be, and are, opposing oil burning on the same grounds. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Sun Nov 7 20:15:50 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 7 Nov 2010 21:15:50 +0100 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: On 3 November 2010 07:04, Stathis Papaioannou wrote: > 2010/11/3 Stefano Vaj : > > 2010/10/31 John Clark > >> > >> Actually it's quite difficult to come up with a scenario where the copy > >> DOES instantly know he is the copy. > > > > Mmhhh. Nobody ever feels like a copy. What you could become aware of is > that > > somebody forked in the past (as in "a copy left behind"). That he is the > > "original" is a matter of perspective... > > Think about what you would say and do if provided with evidence that > you are actually a copy, replaced while the original you was sleeping > some time last week. > My point is that no possible evidence would make you a "copy". The "original" would in any event be, from your perspective, simply a fork left behind. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Sun Nov 7 21:58:52 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 07 Nov 2010 16:58:52 -0500 Subject: [ExI] I love the world. =) Message-ID: <4CD7211C.8060304@speakeasy.net> I've been watching waaay too much Dr. Who. (There's Tom Baker, David Tennant and then everyone else who pretended to be a Doctor.
;) Then as I went back to my kitchen to pig out on yet more cookies, I took a peek out my window through the blinds only to be shocked by a truly dazzling sunset. The world is such a place of amazing majesty, I wouldn't dare change a thing about it. For me, transhumanism is mostly about fixing this horrible mortality bug in the human body; everything else I wouldn't have any other way. Why do other transhumanists suffer the fools who talk about reducing it all to computronium even for an instant? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From spike66 at att.net Sun Nov 7 22:57:43 2010 From: spike66 at att.net (spike) Date: Sun, 7 Nov 2010 14:57:43 -0800 Subject: [ExI] I love the world. =) In-Reply-To: <4CD7211C.8060304@speakeasy.net> References: <4CD7211C.8060304@speakeasy.net> Message-ID: <001801cb7ecf$3023be50$906b3af0$@att.net> >... On Behalf Of Alan Grimes Subject: [ExI] I love the world. =) Me too! {8-] >... The world is such a place of amazing majesty, I wouldn't dare change a thing about it... I would. I would fix it to where mosquitos bite only each other. >...Why do other transhumanists suffer the fools who talk about reducing it all to computronium even for an instant? I don't think the computronium would reduce it all to computronium for only an instant. Once it reduces it all to computronium, it likely would stay that way indefinitely. If you meant the transhumanists reducing it all to computronium, the common notion is that they (and everyone else) have little or no say in the matter. The computronium does whatever it wants. The problem is that we don't know what it wants. We don't even know if the computronium cares what we want. spike From msd001 at gmail.com Mon Nov 8 00:20:11 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sun, 7 Nov 2010 19:20:11 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <001801cb7ecf$3023be50$906b3af0$@att.net> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> Message-ID: On Sun, Nov 7, 2010 at 5:57 PM, spike wrote: > If you meant the transhumanists reducing it all to computronium, the common > notion is that they (and everyone else) have little or no say in the matter. > The computronium does whatever it wants. The problem is that we don't know > what it wants. We don't even know if the computronium cares what we want. 1) computronium isn't even a real thing. We might as well be discussing trouble with Tribbles (and the humane ways in which we can protect ourselves from them without resorting to genocide) 2) the concept of computronium is maximal computing density of matter. I was under the impression that this magical substance would be employed to do useful work: computing. It should be anthropomorphized no more than the CPU in your current computer, which doesn't "want" anything. There are plenty of monsters utilizing currently available computing technology. These monsters can already kill us according to their programming (human-designed programming). Computronium wouldn't make these monsters kill us any more severely than they already can. 3) we will continue to advance according to our own programming. Mostly that frightened monkey programming that kept us from being eaten by primordial predators will make us just as likely to hit the computronium monsters with a proverbial rock or (as recently discussed) a burning branch. Once the threat becomes possible, expect to see right next to the firehose something like "in case of hard takeoff, break glass to employ EMP."
In a not-quite-worst-case scenario we are forced to nuke the Internet and revert to Amish-level technologies. Not a pretty situation, but humanity would adapt. 4) as far as you or I having any say in the matter, how is that different from any public policy currently "offered" by the government under which you/we are currently living? Yeah right, you could move somewhere more agreeable to your views - if only you had the means to up and leave (and the fortitude to start a new life elsewhere). From avantguardian2020 at yahoo.com Mon Nov 8 00:40:58 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 7 Nov 2010 16:40:58 -0800 (PST) Subject: [ExI] The Codescape In-Reply-To: References: Message-ID: <570302.42318.qm@web65601.mail.ac4.yahoo.com> I liked your post on the codescape, Emlyn. The interesting thing from my perspective is how much it has changed in my lifetime. When I was a kid, knowing even a single programming language made you (un)cool. These days you need to know almost half a dozen to put together a decent website. And if you want to be a serious codejockey, you need to know about a dozen. That's quite a bit different from the way meatspace works, where most people know one or two languages and get by just fine. IMO what the codescape needs is a "lingua franca". Stuart LaForge "To be normal is the ideal aim of the unsuccessful." -Carl Jung ----- Original Message ---- > From: Emlyn > To: ExI chat list > Sent: Fri, November 5, 2010 9:02:01 PM > Subject: [ExI] The Codescape > > [full post snipped; it is quoted in its entirety earlier in this digest]
From emlynoregan at gmail.com Mon Nov 8 01:25:00 2010 From: emlynoregan at gmail.com (Emlyn) Date: Mon, 8 Nov 2010 11:55:00 +1030 Subject: [ExI] The Codescape In-Reply-To: <570302.42318.qm@web65601.mail.ac4.yahoo.com> References: <570302.42318.qm@web65601.mail.ac4.yahoo.com> Message-ID: On 8 November 2010 11:10, The Avantguardian wrote: > I liked your post on the codescape, Emlyn. Thanks Stuart! > The interesting thing from my > perspective is how much it has changed in my lifetime. When I was a kid, knowing > even a single programming language made you (un)cool. These days you need to > know almost half a dozen to put together a decent website. And if you want to > be a serious codejockey, you need to know about a dozen. Absolutely. I've said for a while now, it's much more difficult to be a coder now than it used to be, because there is no certainty. You can't really know your environment in the way you used to be able to; you have to trust often quite opaque layers from elsewhere. You have to turn over knowledge and paradigms constantly (actually at an increasing rate). You have to be comfortable with stringing together lots of shallow knowledge, and also with going deep in what I think of as the shallow-deep way: go in fast, learn the details, really understand temporarily, do what needs doing really well in an encapsulated way (so that what has been made can be used with a lot less understanding), then break back out, and do the next thing, forgetting the depth you had acquired. You'll probably never need that detailed knowledge again, and if you do you can acquire it again. Even understanding can be looked at through the lens of access rather than ownership. > That's quite a bit > different from the way meatspace works where most people know one or two > languages and get by just fine. IMO what the codescape needs is a "lingua > franca". > Stuart LaForge > > "To be normal is the ideal aim of the unsuccessful." -Carl Jung > Well, you can get along with just a language or two for a while, if you pick the right one(s). But really to stay in it long term is to commit to changing your knowledge over frequently. There's an underlying unity to at least large families of languages, and of course you look for that to help move. I used to try to find the similarities, to help move from language to language, which is good, but it means you always have a dreadful accent, and lots of impedance. Now I try to find the differences, the things that make each language unique, to try to become as native as possible as quickly as possible. As to a lingua franca, the Codescape is on top of that, it's got heaps of them! > > ----- Original Message ---- >> From: Emlyn >> To: ExI chat list >> Sent: Fri, November 5, 2010 9:02:01 PM >> Subject: [ExI] The Codescape >> >> [full post snipped again; it is quoted in its entirety earlier in this digest]
>> >> When I was a kid, it barely existed, except in corporates and >> universities, but it expanded slowly. There wasn?t much you could do, >> even after it began to really explode through the 90s. But lately it?s >> become somewhere new, somewhere much bigger, somewhere much more >> interesting. >> >> It?s a place I call the Codescape, and it?s becoming the platform on >> which the whole world runs. >> >> The Codescape is simply the space of all computer programs (code) >> spanning the world. The internet is implemented in it, but it is not >> the ?net. ?The Cloud? is one of the more interesting pieces of it, but >> it is not the cloud. It exists in every general purpose machine, as >> soon as anyone tries to make it run code. Some of it is in your >> computer, some is in your phone, there?s even a little bit in your >> car. There might be a tiny pocket in your pacemaker. >> >> In fact it?s something that many of us grew accustomed to thinking of >> as a lot of isolated little pocket worlds ? the place inside one >> machine or the place inside one network. It?s related to the computer >> scientist?s platonic space of pure code-as-mathematics, but it is >> really the gritty, logical-physical half-world of the running program >> instances, and the sharp edged, capricious, often noxious rules that >> real running environments bring. It is the space of endless edge >> cases, failures, unforseen and unforeseeable interactions between your >> own code and dimly perceived layers built by others. >> >> The platonic vision of the code is a trick, an illusion. We like to >> fool ourselves into thinking that we can create software like one >> might do maths, in a tower of the mind, all axioms and formal system >> rules known and accounted for, and the program created inside those >> constraints like a beautiful fractal, eternal in its elegance and >> parsimony. Less a construct than a discovery. >> >> The platonic code feels like a clean creation in the world of vision >> and symbols. Code is something you can see, after all, expressed as a >> form of writing. If you spend long enough away from the machines, you >> can think this is the real thing, mistake the map for the territory. >> >> But the real Codescape isn?t amenable to this at all. It is a dark >> place and a silent place. You know you are in the Codescape because >> your primary sensory modalities are touch, smell, and frankly, raw >> instinct. >> >> It is an environment composed of APIs, system layers, protocols and, >> ultimately, raw bytes. It is an environment where the code vibrates in >> time with the thrumming of the hardware. You feel through this >> environment, trying to understand the shapes, reach perfectly into >> rough, edged crenelations, looking for that sensation of lock, the >> successful grasp. Always, though, you are ready for the feeling of an >> unexpected sharp edge, a hot surface, the smell of something turned >> bad, the tingle of your spidey sense. >> >> It is a place that you can?t physically be in, but you can project >> yourself into. The lines of code are like tendrils, or tentacles, or >> maybe like a trail of ants reaching out from the nest. That >> painstaking projection, and the mapping of monkey senses and instincts >> to new purposes, turns most people off, but I think those of us most >> comfortable with it find the physical world similar. Possibly less >> abstractable, and so more alien. Certainly dumber. >> >> Oddly enough, we don?t talk about codespace much. 
It isn't because we >> don't want to, but because largely we cannot. We who travel freely >> between worlds often can't express it, because it is a place of system >> and not of narrative. >> >> During periods of hype (mostly about the internet), a lot of bad >> novels and terrible movies get written about it (while missing it >> entirely), with gee-whiz 3D graphics and faux h4XX0r jargon. Sometimes >> some of us are even fooled by this, and so we pay unfortunate >> obeisance to notions like "virtual reality" and "cyberspace", and >> construct things like 3D corporate meeting places, or Second Life, or >> World of Warcraft. Those are bona fide places, good for the illiterate, >> and a pleasant place to unwind for people of the code. They even >> contain little pockets of bona fide codescape inside themselves -- >> proper, first-class codescape, because all of the codescape is as real >> as the rest. But there is something garish, gauche about these 3D >> worlds, like the shopping mall inside an airport, divorced from the >> country in which it physically exists. >> >> The main codescape now, as it exists in 2010, is like the mother of >> all MMOs. Many, many of us, those who can walk it (how many? hundreds >> of thousands?) play together in the untamed, expanding chaos of a >> world tied together by software and networks. Each of us plays for our >> own reasons; some for profit, some for potential power, some for >> attention, and many of us, increasingly, for individual autonomy and >> personal expression. >> >> It's a weird place. It's never really been cool (although it's come >> close at times), because the kinds of people who decide on what's cool >> can't even see it. These days the cool kids (like Wired, or Make >> Magazine, or BoingBoing) like open hardware, or physical making. But >> everything interesting is being enabled by software, more and more and >> more software, and so becomes at heart a projection out of the >> Codescape. >> >> Douglas Rushkoff's recent book, "Program or be Programmed", talks >> about how we are now living in this world where what I call the >> Codescape is shaping the lives of everyone, and where we are divided >> into the code-literate and not. His book is mostly dreary complaining >> that it's all too hard and the 'net should be more like it was in the >> 90s (joining an increasing chorus of 90s technorati who are finding >> themselves unable to keep up), but that first sentiment is absolutely >> spot on. If you can code, then, if you so choose, you can feel your >> way through codespace, explore the shifting landscape, and maybe carve >> out part of it in the shape of your own imaginings. Otherwise, you get >> internet-as-shiny-shopping-mall, a landscape of opaque gadgets, >> endless ads, monthly fees, and the faint suspicion that you are being >> constantly conned by Fagin-esque gangs. >> >> I contend that if you care about personal autonomy, about freedom, in >> the 21st century, then you really should try to be part of this world. >> Perhaps for the first time, the potential for individuals is rivalling >> that of corporate entities. There is cheap and free server time on >> offer, high level environments into which you can project your >> codebase. The protocols are open, the documentation (sometimes just >> code itself) is free and freely available. Even the very best >> programming tools are free.
If you can acquire the skills and the >> motivation, you can walk the Codescape with nothing more than an >> internet connection, a $100 Chinese netbook, and your own wits. There >> is no barrier to entry, other than your ability to twist your mind >> into the shape that the proper incantations demand. >> >> Everything has a programmable API, which you can access and play with >> and create with if you are prepared to make the effort. At your >> fingertips are the knowledge and information resources of the world, >> plus the social interactions of 2 billion humans and counting, plus a >> growing resource of inputs and outputs in the physical world with >> which you can see and act. >> >> It's a new frontier, expanding faster than we can explore and settle >> it. It's going to be unrecognisable in 2020, and again in 2030, and >> who knows what after that. But the milestones are boring. The fun is >> in living it. The first challenge is just to try. >> >> -- >> Emlyn >> >> http://my.syyn.cc - A service for syncing buzz and facebook, posts, >> comments and all. >> http://www.blahblahbleh.com - A simple youtube radio that I built >> http://point7.wordpress.com - My blog >> Find me on Facebook and Buzz >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- Emlyn http://my.syyn.cc - A service for syncing buzz and facebook, posts, comments and all. http://www.blahblahbleh.com - A simple youtube radio that I built http://point7.wordpress.com - My blog Find me on Facebook and Buzz
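(A short aside on the essay's "everything has a programmable API" point: the sketch below shows what that looks like in practice, in Python, using only the standard library. The endpoint URL is a hypothetical placeholder rather than any real service; any open JSON API would work the same way.)

import json
import urllib.request

# Hypothetical endpoint -- substitute any open JSON API you can reach.
URL = "https://api.example.com/v1/posts?limit=3"

with urllib.request.urlopen(URL) as response:
    posts = json.load(response)  # parse the JSON body into Python objects

for post in posts:
    # Each record is now an ordinary dictionary you can script against.
    print(post.get("title", "(untitled)"))

(Request, parse, act: three statements against an open protocol with free tools, which is the low barrier to entry the essay describes.)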
From thespike at satx.rr.com Sun Nov 7 22:42:50 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sun, 07 Nov 2010 16:42:50 -0600 Subject: [ExI] I love the world. =) In-Reply-To: <4CD7211C.8060304@speakeasy.net> References: <4CD7211C.8060304@speakeasy.net> Message-ID: <4CD72B6A.80701@satx.rr.com> On 11/7/2010 3:58 PM, Alan Grimes wrote: > I took > a peek out my window through the blinds only to be shocked by a truly > dazzling sunset. The world is such a place of amazing majesty, I > wouldn't dare change a thing about it. > > For me, transhumanism is mostly about fixing this horrible mortality bug > in the human body, everything else I wouldn't have any other way. > > Why do other transhumanists suffer the fools who talk about reducing it > all to computronium even for an instant? Rudy Rucker argues that case in detail in my anthology YEAR MILLION and in various of his own novels. And nobody can accuse Rudy of lacking imagination or boldness--he was there before just about anyone else. Damien Broderick From giulio at gmail.com Mon Nov 8 07:25:26 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Nov 2010 08:25:26 +0100 Subject: [ExI] Turing Church Online Workshop 1, Teleplace, Saturday November 20, 9am-1pm PST Message-ID: Turing Church Online Workshop 1, Teleplace, Saturday November 20, 9am-1pm PST http://giulioprisco.blogspot.com/2010/11/turing-church-online-workshop-1.html http://telexlr8.wordpress.com/2010/11/07/turing-church-online-workshop-1-teleplace-saturday-november-20-9am-1pm-pst/ Turing Church Online Workshop 1, in Teleplace, Saturday November 20, 9am-1pm PST (noon-4pm EST, 5pm-9pm UK, 6pm-10pm EU). The workshop will explore transhumanist spirituality and "Religion 2.0" and it will be a coordination-oriented summit of groups and organizations active in this area. Format: Online-only workshop in Teleplace. Those who already have Teleplace accounts for teleXLR8 can just show up at the workshop. There are a limited number of seats available for others; please contact me if you wish to attend. Panelists: - Lincoln Cannon (Mormon Transhumanist Association) - Ben Goertzel (Cosmist Manifesto) - Mike Perry (Society for Universal Immortalism) - Giulio Prisco (Turing Church) - Martine Rothblatt (Terasem) Agenda: - Talks by the panelists in the first 2 hours. - Discussion between the panelists in the last 2 hours, with the participation of the audience. Objectives: - To discover parallels and similarities between different organizations and to agree on common interests, agendas, strategies, outreach plans etc. - To discuss whether it makes sense to establish an umbrella organization, or to consider one of the existing organizations as such. - To develop the idea of scientific resurrection: our descendants and mind children will develop "magic science and technology" in the sense of Clarke's third law, and may be able to do grand spacetime engineering and even resurrect the dead by "copying them to the future". Of course this is a hope and not a certainty, but I am persuaded that this concept is scientifically founded and could become the "missing link" between transhumanists and religious and spiritual communities. - And of course, how to make our beautiful ideas available, understandable and appealing to billions of seekers. My own presentation will be a revised and expanded version of my talk on The Cosmic Visions of the Turing Church at the Transhumanism and Spirituality Conference 2010. The main point can be summarized in one sentence (Slide 4): "A memetically strong religion needs to offer resurrection besides immortality." From scerir at alice.it Mon Nov 8 11:21:48 2010 From: scerir at alice.it (scerir) Date: Mon, 8 Nov 2010 12:21:48 +0100 Subject: [ExI] Seth Lloyd on birds, plants, ... In-Reply-To: <4CD72B6A.80701@satx.rr.com> References: <4CD7211C.8060304@speakeasy.net> <4CD72B6A.80701@satx.rr.com> Message-ID: <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> Seth Lloyd on quantum 'weirdness' used by plants, animals, etc. http://www.cbc.ca/technology/story/2010/11/03/quantum-physics-biology-living-things.html Supposedly, the video of this lecture will appear on the Perimeter Institute website, or at pirsa.org. From charlie.stross at gmail.com Mon Nov 8 11:17:19 2010 From: charlie.stross at gmail.com (Charlie Stross) Date: Mon, 8 Nov 2010 11:17:19 +0000 Subject: [ExI] I love the world. =) In-Reply-To: References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> Message-ID: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > 3) we will continue to advance according to our own programming. > Mostly that frightened monkey programming that kept us from being > eaten by primordial predators will make us just as likely to hit the > computronium monsters with a proverbial rock or (as recently > discussed) a burning branch. Once the threat becomes possible, expect > to see right next to the firehose something like "in case of hard > takeoff, break glass to employ EMP." In a not-quite-worst-case > scenario we are forced to Nuke the Internet and revert back to > Amish-level technologies.
Not a pretty situation, but humanity would > adapt. Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) -- Charlie From pharos at gmail.com Mon Nov 8 11:55:13 2010 From: pharos at gmail.com (BillK) Date: Mon, 8 Nov 2010 11:55:13 +0000 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: On Mon, Nov 8, 2010 at 11:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, > personally, are probably going to die. Even today, all our supply chains have > adapted to just-in-time production and shipping, relying on networked > communications to ensure that stuff gets where it's needed; we can't revert > to doing things the old way -- the equipment has long since been scrapped -- > and we'd rapidly starve. Your average big box supermarket only holds about > 24-48 hours worth of provisions, and their logistics infrastructure is highly > tuned for efficiency. Now add in gas stations, railroad signalling, electricity > grid control ... If we have to Nuke The Net Or Die, it'll mean the difference > between a 100% die-back and a 90% die-back. > > Meanwhile, the Mormons, with their requirement to keep a year of canned > goods in the cellar, will be laughing. (Well, praying.) > > It's bad enough even with your 'highly-tuned' supply system. That's only for popular items. If something breaks nowadays, you just can't get spares. You have to buy a new one. For large items, if you need an unusual spare part for a Fiat car, chances are you will wait a month while they ship it in from Italy. BillK From giulio at gmail.com Mon Nov 8 12:01:22 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 8 Nov 2010 13:01:22 +0100 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: Not only the Mormons, but also rural communities able to produce enough basic goods for their own bare survival. It is us city people who would be totally screwed. I would not know how to survive after Nuke the Internet, but my grandfather would. However, if computronium superAIs, if and when such things exist, decide to take over, there is not much that we can do; we would not even see it coming until it is already here. Perhaps they will upload us to a virtual Farmville as real as reality, with our memories edited to continue to live under the illusion that we have escaped. G.
On Mon, Nov 8, 2010 at 12:17 PM, Charlie Stross wrote: > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > >> 3) we will continue to advance according to our own programming. >> Mostly that frightened monkey programming that kept us from being >> eaten by primordial predators will make us just as likely to hit the >> computronium monsters with a proverbial rock or (as recently >> discussed) a burning branch. Once the threat becomes possible, expect >> to see right next to the firehose something like "in case of hard >> takeoff, break glass to employ EMP." In a not-quite-worst-case >> scenario we are forced to Nuke the Internet and revert back to >> Amish-level technologies. Not a pretty situation, but humanity would >> adapt. > > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. > > Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) > > > -- Charlie > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From msd001 at gmail.com Mon Nov 8 14:36:39 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 8 Nov 2010 09:36:39 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> References: <4CD7211C.8060304@speakeasy.net> <001801cb7ecf$3023be50$906b3af0$@att.net> <9FA96748-5E1B-4DB3-BB03-C2CDC3790663@gmail.com> Message-ID: On Mon, Nov 8, 2010 at 6:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve. Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. Of course. But the usual scenario about AI destroying humanity (with or without computronium) puts me in a mindset that some humans remaining, no matter how distant from my own person/family/tribe/ethnicity/etc. is still better than none at all. I'm willing to expand the definition of humanity to include uploaded-state behavior patterns/identities too though - so maybe the human Farmville is also better than nonexistence. From pharos at gmail.com Mon Nov 8 17:25:04 2010 From: pharos at gmail.com (BillK) Date: Mon, 8 Nov 2010 17:25:04 +0000 Subject: [ExI] War ----- It's a meme!
Message-ID: John Horgan has an article in Scientific American about why tribes go to war that might be of interest. I know that Keith has suggested that war is caused either by hard times or an expectation of hard times, but I feel this is a weak theory as it seems to cover all cases and therefore is untestable. Horgan thinks that war is learned behaviour. Some Quotes: Analyses of more than 300 societies in the Human Relations Area Files, an ethnographic database at Yale University, have turned up no clear-cut correlations between warfare and chronic resource scarcity. Similarly, the anthropologist Lawrence Keeley notes in War before Civilization: The Myth of the Peaceful Savage (Oxford University Press, 1997) that the correlation between population pressure and warfare "is either very complex or very weak or both." Margaret Mead dismissed the notion that war is the inevitable consequence of our "basic, competitive, aggressive, warring human nature." This theory is contradicted, she noted, by the simple fact that not all societies wage war. War has never been observed among a Himalayan people called the Lepchas or among the Eskimos. In fact, neither of these groups, when questioned by early ethnographers, was even aware of the concept of war. Warfare is "an invention," Mead concluded, like cooking, marriage, writing, burial of the dead or trial by jury. Once a society becomes exposed to the "idea" of war, it "will sometimes go to war" under certain circumstances. Some people, Mead stated, such as the Pueblo Indians, fight reluctantly to defend themselves against aggressors; others, such as the Plains Indians, sally forth with enthusiasm, because they have elevated martial skills to the highest of manly virtues. ------------------ BillK From spike66 at att.net Mon Nov 8 17:27:16 2010 From: spike66 at att.net (spike) Date: Mon, 8 Nov 2010 09:27:16 -0800 Subject: [ExI] 25th anniversary of engines of creation Message-ID: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> On Mon, Nov 8, 2010 at 6:17 AM, Charlie Stross wrote: > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die... Hi Charlie, good to see you posting here again. Isn't it amazing that we are coming up on the 25th anniversary of Drexler's Engines of Creation? For many of us, that was the book that launched a thousand memeships. Charlie is one who posted back a long time ago when we used to debate something that now seems settled: which comes first, strong AI or strong nanotech (replicating assembler)? The argument at the time (early to mid 90s) was that AI enables nanotech (by providing the designs), but nanotech enables AI (by providing super-capable computers). Is there anyone here for whom that argument is not completely settled? Do explain please. spike From darren.greer3 at gmail.com Mon Nov 8 17:42:46 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 8 Nov 2010 13:42:46 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: "War has never been observed among a Himalayan people called the Lepchas or among the Eskimos. In fact, neither of these groups, when questioned by early ethnographers, was even aware of the concept of war." Martin Van Creveld has a theory about this in his *Decline of the Nation States*.
He calls Inuit society (Eskimo is very culturally offensive, by the way) a modality, the kind of tribe that only goes to war when a number of tribes join together in warfare with a temporary leader united under a single banner but still maintaining tribal autonomy. He cites the war against Troy in *The Iliad* by the tribes under the temporary leadership of Agamemnon and Menelaus as a good example of this. (recall Achilles and the Myrmidons.) Opportunities for warfare under these circumstances are exceedingly rare, and usually involve a cultural taboo being violated. The Inuit have a unique societal structure and are likely the exception rather than the rule. I can't speak for the Lepchas, but I would imagine it would be something similar. Darren On Mon, Nov 8, 2010 at 1:25 PM, BillK wrote: > John Horgan has an article in Scientific American about why tribes go > to war that might be of interest. I know that Keith has suggested that > war is caused either by hard times or an expectation of hard times, > but I feel this is a weak theory as it seems to cover all cases and > therefore is untestable. Horgan thinks that war is learned behaviour. > > Some Quotes: > > Analyses of more than 300 societies in the Human Relations Area Files, > an ethnographic database at Yale University, have turned up no > clear-cut correlations between warfare and chronic resource scarcity. > Similarly, the anthropologist Lawrence Keeley notes in War before > Civilization: The Myth of the Peaceful Savage (Oxford University > Press, 1997) that the correlation between population pressure and > warfare "is either very complex or very weak or both." > > Margaret Mead dismissed the notion that war is the inevitable > consequence of our "basic, competitive, aggressive, warring human > nature." This theory is contradicted, she noted, by the simple fact > that not all societies wage war. War has never been observed among a > Himalayan people called the Lepchas or among the Eskimos. In fact, > neither of these groups, when questioned by early ethnographers, was > even aware of the concept of war. > > Warfare is "an invention," Mead concluded, like cooking, marriage, > writing, burial of the dead or trial by jury. Once a society becomes > exposed to the "idea" of war, it "will sometimes go to war" under > certain circumstances. Some people, Mead stated, such as the Pueblo > Indians, fight reluctantly to defend themselves against aggressors; > others, such as the Plains Indians, sally forth with enthusiasm, > because they have elevated martial skills to the highest of manly > virtues. > > ------------------ > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed...
You are suffering from either a lack of courage or a lack of imagination. > I wouldn't have any other way. A world without cancer would be another way, and I believe I'd prefer that. > Why do other transhumanists suffer the fools who talk about reducing it > all to computronium even for an instant? Do you have any reason to be certain that hasn't already happened? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 8 18:35:12 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Nov 2010 13:35:12 -0500 Subject: [ExI] 25th anniversary of engines of creation. In-Reply-To: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> References: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> Message-ID: <46A9FE76-ED94-4851-BBB2-9839121037BE@bellsouth.net> On Nov 8, 2010, at 12:27 PM, spike wrote: > The argument at the time (early to mid 90s) was that AI enables nanotech (by providing the designs), > but nanotech enables AI (by providing super-capable computers.) > Is there anyone here for whom that argument is not completely settled? Me. I don't know which will come first but I do know there won't be much time between the two events. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 8 18:41:43 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 8 Nov 2010 13:41:43 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: On Nov 7, 2010, at 3:15 PM, Stefano Vaj wrote: > My point is that no possible evidence would make you a "copy". The "original" would in any event from your perspective simply a fork behind. I see no reason to assume "you" are the original, and even more important I see no reason to care if "you" are the original. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Mon Nov 8 19:07:10 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 08 Nov 2010 14:07:10 -0500 Subject: [ExI] 25th anniversary of engines of creation In-Reply-To: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> References: <000301cb7f6a$30a93e90$91fbbbb0$@att.net> Message-ID: <4CD84A5E.207@lightlink.com> spike wrote: > Isn't it amazing that we are coming up on the 25th anniversary of Drexler's > Engines of Creation? For many of us, that was the book that launched a > thousand memeships. Charlie is one who posted back a long time ago when we > used to debate something that now seems settled: which comes first, strong > AI or strong nanotech (replicating assembler)? The argument at the time > (early to mid 90s) was that AI enables nanotech (by providing the designs), > but nanotech enables AI (by providing super-capable computers.) > > Is there anyone here for whom that argument is not completely settled? 
Do > explain please. I'm a little confused about which way you are implying that it was settled? Strong AI will, of course, come first, because: (a) We already have the computing power to do it (all that is lacking is the understanding of how to use that computing power), and (b) Without strong AI, designing safe nanotech is going to be very difficult indeed. Richard Loosemore From thespike at satx.rr.com Mon Nov 8 20:16:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 08 Nov 2010 14:16:40 -0600 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: <4CD85AA8.5080402@satx.rr.com> On 11/8/2010 12:41 PM, John Clark wrote: > I see no reason to assume "you" are the original, and even more > important I see no reason to care if "you" are the original. The endless perspective or Point-of-View confusion. Of course a copy experiences himself as the original (that's what an exact copying process *means*). Of course the rest of the world experiences him as equally you. There are two major problems seldom addressed in this complacent view: 1) The jurisprudential--who owns the original's possessions? Where provenance of the original can be established, it seems pretty likely that the law will find for the original, in the absence of an advance agreement to split the loot. 2) If copying requires destruction of the original, is it psychologically likely that he will go to his death happy in the knowledge that his exact subsequent copy will continue elsewhere? Many here say, "Hell, yes, it's only evolved biases and cognitive errors that could support any other opinion!" Others say, "Maybe so, but you're not getting me into that damned gas chamber." So if the world becomes filled with people happy to be killed and copied, of course it's likely that after a few hundred iterations identity will be construed this way by almost everyone. If the USA becomes filled with the antiabortion offspring of the duped who believe evolution is a godless hoax and humans never walked on the moon, those opinions will also be validated. So what? Damien Broderick From agrimes at speakeasy.net Mon Nov 8 22:17:01 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 08 Nov 2010 17:17:01 -0500 Subject: [ExI] I love the world. =) In-Reply-To: <47B672AC-F800-44DA-9EC4-BF0BF1ECC2DF@bellsouth.net> References: <4CD7211C.8060304@speakeasy.net> <47B672AC-F800-44DA-9EC4-BF0BF1ECC2DF@bellsouth.net> Message-ID: <4CD876DD.4020002@speakeasy.net> > On Nov 7, 2010, at 4:58 PM, Alan Grimes wrote: >> The world is such a place of amazing majesty, > It could be better. >> I wouldn't dare change a thing about it. > You are suffering from either a lack of courage or a lack of imagination. And you are lacking eyesight. =P >> I wouldn't have any other way. > A world without cancer would be another way, and I believe I'd prefer that. Come on, read my first posting again!
I explicitly said that transhumanism was about fixing the bugs in the human body, specifically death, but implicitly all other things one might want to customize for either good or even bad reasons. =P >> Why do other transhumanists suffer the fools who talk about reducing it >> all to computronium even for an instant? > Do you have any reason to be certain that hasn't already happened? Byte me. =| Nick Bostrom is a sophist and so is everyone else who agrees with him. You are getting into a Descartes versus Occam argument here. If you side with Descartes you must first claim that you are the happy victim of an unspeakably evil monster. If you side with Occam you get to sit in your easy chair with a smirk on your face and quietly say "prove it" every once in a while. The null hypothesis in this case is that there is nothing artificial about the reality in which we live. Artificial structures are extremely easy to detect wherever they exist on earth, therefore show me an artifact of the simulation you are proposing that proves it exists. Should you manage to prove that it exists, I'll immediately drop everything and start working on the problem of "outloading" myself to whatever is out there. With that done, I'll amuse myself by making silly, arbitrary, and obnoxious changes to this universe with the aim of inspiring my peers to follow me to the exit. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From pharos at gmail.com Tue Nov 9 13:47:15 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Nov 2010 13:47:15 +0000 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/8 Darren Greer wrote: > (Eskimo is very culturally offensive, by the way) That's too simplistic. Let's have a good nit-pick! :) Eskimo isn't offensive in the UK or US or even in Alaska. It is a general term for all the native people in the Arctic region. From Wikipedia: In Alaska, the term Eskimo is commonly used, because it applies to both Yupik and Inupiat peoples. Inuit is not accepted as a collective term or even specifically used for Inupiat (which technically is Inuit). No universal replacement term for Eskimo, inclusive of all Inuit and Yupik people, is accepted across the geographical area inhabited by the Inuit and Yupik peoples. --------------------------- From alaskanative.net: Alaska's Native people are divided into eleven distinct cultures, speaking twenty different languages. In order to tell the stories of this diverse population, the Alaska Native Heritage Center is organized based on five cultural groupings, which draw upon cultural similarities or geographic proximity: * Athabascan * Yup'ik & Cup'ik * Inupiaq & St. Lawrence Island Yupik * Unangax & Alutiiq (Sugpiaq) * Eyak, Tlingit, Haida & Tsimshian ----------------- Some of the indigenous races would be equally offended to be called Inuit. So, to be strictly correct, you have to find out which culture the person you are speaking to is a member of and use that name. BillK From lists1 at evil-genius.com Tue Nov 9 12:13:03 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 04:13:03 -0800 Subject: [ExI] A side note on Inuit/Eskimo (was War ---- It's a meme!) In-Reply-To: References: Message-ID: <4CD93ACF.5010108@evil-genius.com> > From: Darren Greer > He calls Inuit society (Eskimo is very culturally offensive, by the way) So is Inuit -- if you happen to be Yupik (the other major far northern tribal group commonly lumped under 'Eskimo').
Unfortunately, there is no agreed-upon replacement. (An aside: my last three posts to this list have never posted, nor have they been rejected by a moderator. If anyone can tell me what's going on, I'd appreciate it, because it's frustrating to write out a long, thoughtful reply and never see it.) From jonkc at bellsouth.net Tue Nov 9 15:41:49 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 9 Nov 2010 10:41:49 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CD85AA8.5080402@satx.rr.com> References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: <63E678CC-AA5E-46E3-BF42-B31B9DBB0101@bellsouth.net> On Nov 8, 2010, at 3:16 PM, Damien Broderick wrote: > Of course a copy experiences himself as the original (that's what an exact copying process *means*). Of course the rest of the world experiences him as equally you. Then that pretty much ends the matter as far as I'm concerned, but for some reason never clearly explained, not for you. > There are two major problems seldom addressed in this complacent view: > 1) The jurisprudential--who owns the original's possessions? I don't know what you mean by "seldom addressed", that's the first thing anti-uploaders say, after "it just wouldn't be me!" of course. The answer is that the ownership of the possessions will be determined by whoever makes the law at the time, and that is irrelevant to the question at hand. I said it before and I'll say it again: you're talking about the law, I'm talking about logic, and the two things have absolutely nothing to do with one another. > 2) If copying requires destruction of the original [...] Stop right there! Exactly what is being destroyed? The atoms are not destroyed, not that that's important as they are very far from unique, and the information on how those atoms are arranged is not unique either as that's been duplicated in the uploading process. So that naturally brings up another question: what is so original about The Original? There is only one possible answer to that, but as I've said before I don't believe in the soul. > is it psychologically likely that he will go to his death happy in the knowledge that his exact subsequent copy will continue elsewhere? You are arguing that my ideas must be wrong because some people might fear them for unclear reasons; I don't think that follows. Primitive people are terrified to have their picture taken because they think it will rob them of their essence, some people who like to think of themselves as sophisticated refuse to live or work on the thirteenth floor of a building unless it is renamed "the fourteenth floor"; so what? The only thing more illogical than the law is psychology. > So if the world becomes filled with people happy to be killed and copied, of course it's likely that after a few hundred iterations identity will be construed this way by almost everyone. Yes, so right or wrong your views have no future, mine do. John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From darren.greer3 at gmail.com Tue Nov 9 17:49:28 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 13:49:28 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>So, to be strictly correct, you have to find out which culture the person you are speaking to is a member of and use that name.<< Yup, that seems like it might be true. At least it was when I worked at the Pauktuutit Inuit Women's centre. We used Inuit except in cases where those being referred to believed it didn't apply, such as the Innu in Quebec and the Dene in Saskatchewan. From Wikipedia: In Alaska, the term Eskimo is commonly used, because it applies to both Yupik and Inupiat peoples. Inuit is not accepted as a collective term or even specifically used for Inupiat (which technically is Inuit). No universal replacement term for Eskimo, inclusive of all Inuit and Yupik people, is accepted across the geographical area inhabited by the Inuit and Yupik peoples.<< It's not so much the generalized term that people are using, but what that generalized term means and where it comes from. Inuit means "our people" from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, I can see that the objections to the first might be greater than the second. >>* Athabascan * Yup'ik & Cup'ik * Inupiaq & St. Lawrence Island Yupik * Unangax & Alutiiq (Sugpiaq) * Eyak, Tlingit, Haida & Tsimshian<< So you wanted nit-picking? :) There are specific tribal names and even tribes-within-tribes and generalized rubric headings. The above is a confusing mixture. Athabaskan and Haida generally consider themselves first nations, and more importantly 'treatied' first nations if they live in Canada. The other tribes may or may not call themselves first nations for a number of political reasons, not the least being that when the colonizers dealt with the northern tribes, there were in fact so few of them in numbers that they found there was more bargaining power in being considered as a single nation rather than a group of very small tribes separated by vast geographical distances. (Some) of the northern tribes you name find the term Inuit inappropriate for political reasons, not cultural ones. The term Inuit was adopted for political reasons, and gained widespread use when some Northern tribes negotiated Nunavut as a separate Canadian Territory. One of the mistakes people make when dealing with Indigenous people in North America (and Russia) is to forget about the political distinctions as well as the language and culture. There was a complex political structure in place when the colonizers first arrived here. So Inuit may be (and is) often objected to on political grounds (such as the Innu and the Dene who have been very successful in negotiating political advantages as isolated tribes by looking at their small numbers and unique culture-within-a-culture as a bargaining chip rather than a liability.) But the term Eskimo is (or almost is) universally culturally offensive, as far as I know. And the biggest nit-pick of all? I stated the term Eskimo was culturally offensive, and bet even the Yupik and (I know the Innu) find it so. It is best when dealing with tribes who don't identify with the "Our People's" designation to ask them what they prefer to be called, instead of assuming "eater of raw flesh" is OK. Darren -------------- next part -------------- An HTML attachment was scrubbed...
URL: From msd001 at gmail.com Tue Nov 9 18:42:41 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 13:42:41 -0500 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > It's not so much the generalized term that people are using, but what that > generalized term means and where it comes from. Inuit means "our people" > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, > I can see that the objections to the first might be greater than the second. Likewise I take offense at being called a Typical American to mean "eater of junk food while watching TV." I think the colloquial "Couch Potato" is more appropriate for that particular meaning. :) > But the term Eskimo is (or almost is) universally culturally offensive, as > far as I know. And the biggest nit-pick of all? I stated the term Eskimo > was culturally offensive, and bet even the Yupik and (I know the Innu) find > it so. It is best when dealing with tribes who don't identify with the "Our > People's" designation to ask them what they prefer to be called, instead of > assuming "eater of raw flesh" is OK. Ultimately I prefer to be called "Mike." If we could remember to call people by their names rather than by labels all these problems could be easily avoided. From pharos at gmail.com Tue Nov 9 18:02:35 2010 From: pharos at gmail.com (BillK) Date: Tue, 9 Nov 2010 18:02:35 +0000 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer wrote: > It's not so much the generalized term that people are using, but what that > generalized term means and where it comes from. Inuit means "our people" > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw flesh" > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking aside, > I can see that the objections to the first might be greater than the second. > Thanks for the info. Complicated, isn't it? Does that mean that Spike is really an Eskimo? ;) (He loves sushi). Some sources do give alternative (less-offensive) meanings for Eskimo. BillK From hkeithhenson at gmail.com Tue Nov 9 20:59:54 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 9 Nov 2010 13:59:54 -0700 Subject: [ExI] I love the world. =) Message-ID: On Mon, Nov 8, 2010 at 5:00 AM, Charlie Stross wrote: > > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > >> 3) we will continue to advance according to our own programming. >> Mostly that frightened monkey programming that kept us from being >> eaten by primordial predators will make us just as likely to hit the >> computronium monsters with a proverbial rock or (as recently >> discussed) a burning branch. Once the threat becomes possible, expect >> to see right next to the firehose something like "in case of hard >> takeoff, break glass to employ EMP." In a not-quite-worst-case >> scenario we are forced to Nuke the Internet and revert back to >> Amish-level technologies. Not a pretty situation, but humanity would >> adapt. > > Humanity *in the abstract* might adapt; but if we have to go there, you and I, personally, are probably going to die. Even today, all our supply chains have adapted to just-in-time production and shipping, relying on networked communications to ensure that stuff gets where it's needed; we can't revert to doing things the old way -- the equipment has long since been scrapped -- and we'd rapidly starve.
Your average big box supermarket only holds about 24-48 hours worth of provisions, and their logistics infrastructure is highly tuned for efficiency. Now add in gas stations, railroad signalling, electricity grid control ... If we have to Nuke The Net Or Die, it'll mean the difference between a 100% die-back and a 90% die-back. I understand that just losing GPS will make hash of the banking industry (timestamps). > Meanwhile, the Mormons, with their requirement to keep a year of canned goods in the cellar, will be laughing. (Well, praying.) I thought you were out of date on this since our Mormon neighbors got rid of their year of food back in the late 80s. But it seems like this is still part of Mormon culture, though it may be followed by a minority of them. Could we get through a loss of the net and not lose most of the population? At this point "the net" and phone service is largely the same thing, at least outside a LATA. I think that might be possible today, enough people remember old ways of doing things. Ten years from now? 20? 30? Perhaps, but it gets harder and harder as time goes on and dependency on the net increases. Losing process control computers would be really bad. There are processes that can't be run by hand at all, they are unstable and people are too slow. I wonder if there was any consideration for the possible consequences before this started? Keith From darren.greer3 at gmail.com Tue Nov 9 22:57:34 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 18:57:34 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>Ultimately I prefer to be called "Mike." If we could remember to call people by their names rather than by labels all these problems could be easily avoided.<< Agreed. Except that if you're not the one giving yourself or your cultural or ethnic group the label, that's when it becomes a problem. I expect Americans didn't even start off calling themselves Americans. Likely the British came up with it first. Probably because couch potato was already taken. By Canadians. :) Darren On Tue, Nov 9, 2010 at 2:42 PM, Mike Dougherty wrote: > 2010/11/9 Darren Greer : > > It's not so much the generalized term that people are using, but what > that > > generalized term means and where it comes from. Inuit means "our people" > > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > flesh" > > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking > aside, > > I can see that the objections to the first might be greater than the > second. > > Likewise I take offense at being called a Typical American to mean > "eater of junk food while watching TV." > I think the colloquial "Couch Potato" is more appropriate for that > particular meaning. :) > > > But the term Eskimo is (or almost is) universally culturally offensive, > as > > far as I know. And the biggest nit-pick of all? I stated the term Eskimo > > was culturally offensive, and bet even the Yupik and (I know the Innu) > find > > it so. It is best when dealing with tribes who don't identify with the > "Our > > People's" designation to ask them what they prefer to be called, instead > of > > assuming "eater of raw flesh" is OK. > > Ultimately I prefer to be called "Mike." If we could remember to call > people by their names rather than by labels all these problems could > be easily avoided.
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Nov 9 23:03:31 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 19:03:31 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: >>Does that mean that Spike is really an Eskimo? ;) (He loves sushi).<< What little I know of Spike is from ExI. But based on that limited amount of data, I think he probably defies easy categorization. :) Darren On Tue, Nov 9, 2010 at 2:02 PM, BillK wrote: > 2010/11/9 Darren Greer wrote: > > It's not so much the generalized term that people are using, but what > that > > generalized term means and where it comes from. Inuit means "our people" > > from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > flesh" > > from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking > aside, > > I can see that the objections to the first might be greater than the > second. > > > > > Thanks for the info. Complicated, isn't it? > > Does that mean that Spike is really an Eskimo? ;) > (He loves sushi). > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From possiblepaths2050 at gmail.com Tue Nov 9 21:14:11 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Tue, 9 Nov 2010 14:14:11 -0700 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: I'm so proud of the list members for showing sensitivity regarding the indigenous peoples of Alaska. I grew up there with a best friend who was half Inuit and half White. He caught hell from both sides... John On 11/9/10, BillK wrote: > 2010/11/9 Darren Greer wrote: >> It's not so much the generalized term that people are using, but what that >> generalized term means and where it comes from. Inuit means "our people" >> from a Northern Indigenous tribal dialect. Eskimo means "eater of raw >> flesh" >> from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking >> aside, >> I can see that the objections to the first might be greater than the >> second. >> > > > Thanks for the info. Complicated, isn't it? > > Does that mean that Spike is really an Eskimo? ;) > (He loves sushi). > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From darren.greer3 at gmail.com Tue Nov 9 23:26:04 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 19:26:04 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: > >>I'm so proud of the list members for showing sensitivity regarding the > indigenous peoples of Alaska.
I grew up there with a best friend who > was half Inuit and half White. He caught hell from both sides...>> Can relate to that. My Dad's First Nations and Mom's Irish. They used to call us, well, I won't tell you what the local pedigreed European descendants called us, but I was once told in a talking circle when I was a kid that I didn't belong there because I didn't live on the rez. This elder spoke up for me though. He told the objector that a 'circle has no corners.' So the guy beside me gets a scolding and I get my first geometry lesson. :) Darren > > John > > On 11/9/10, BillK wrote: > > 2010/11/9 Darren Greer wrote: > >> It's not so much the generalized term that people are using, but what > that > >> generalized term means and where it comes from. Inuit means "our > people" > >> from a Northern Indigenous tribal dialect. Eskimo means "eater of raw > >> flesh" > >> from the Cree, who are incontestably not Eskimo or Inuit. Nit-picking > >> aside, > >> I can see that the objections to the first might be greater than the > >> second. > >> > > > > > > Thanks for the info. Complicated, isn't it? > > > > Does that mean that Spike is really an Eskimo? ;) > > (He loves sushi). > > > > Some sources do give alternative (less-offensive) meanings for Eskimo. > > > > > > BillK > > > > _______________________________________________ > > extropy-chat mailing list > > extropy-chat at lists.extropy.org > > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Wed Nov 10 00:26:15 2010 From: spike66 at att.net (spike) Date: Tue, 9 Nov 2010 16:26:15 -0800 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <004201cb806d$e32403d0$a96c0b70$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of BillK ... Does that mean that Spike is really an Eskimo? ;) (He loves sushi)... BillK Don't I wish. Eskimos qualify as Native American, which is equivalent to Latino by the quirky reasoning of our current employment law. If I could prove Eskimo heritage, I would have a job. Aside: I worked with an Eskimo (Inuit) when I worked in the southern California desert. He died of apparent heat-related heart failure at age 43. spike From spike66 at att.net Wed Nov 10 00:44:10 2010 From: spike66 at att.net (spike) Date: Tue, 9 Nov 2010 16:44:10 -0800 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <000001cb8070$64324660$2c96d320$@att.net> . On Behalf Of Darren Greer Subject: Re: [ExI] War ----- It's a meme! >>Does that mean that Spike is really an Eskimo? ;) (He loves sushi).<< What little I know of Spike is from ExI. But based on that limited amount of data, I think he probably defies easy categorization. :) Darren You are too kind. When are we going to get together for a good sushi devouring session? I miss those. We used to get together with the local transhumanist crowd at least once or twice a year. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From thespike at satx.rr.com Wed Nov 10 00:57:49 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 09 Nov 2010 18:57:49 -0600 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: Message-ID: <4CD9EE0D.4000304@satx.rr.com> On 11/9/2010 4:57 PM, Darren Greer wrote: > I expect Americans didn't even start off calling themselves Americans. > Likely the British came up with it first. Probably because couch potato > was already taken. By Canadians. :) Tut tut, that would be Royal Canadian Mounted Couch Potato. Damien Broderick From darren.greer3 at gmail.com Wed Nov 10 01:51:31 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 21:51:31 -0400 Subject: [ExI] War ----- It's a meme! In-Reply-To: <4CD9EE0D.4000304@satx.rr.com> References: <4CD9EE0D.4000304@satx.rr.com> Message-ID: > > >> > >>Tut tut, that would be Royal Canadian Mounted Couch Potato.<< > Don't forget the french-fried Surete du Quebec, led by the inimitable Constable Poutine (of Kurdish heritage.) > > Damien Broderick > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 02:45:21 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 9 Nov 2010 22:45:21 -0400 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: >>Could we get through a loss of the net and not lose most of the population? At this point "the net" and phone service is largely the same thing, at least outside a LATA. I think that might be possible today, enough people remember old ways of doing things. Ten years from now? 20? 30? Perhaps, but it gets harder and harder as time goes on and dependency on the net increases. Losing process control computers would be really bad. There are processes that can't be run by hand at all, they are unstable and people are too slow.<< An interesting discussion that I missed, likely because I work full-time and attend school part-time and don't have a lot of free time. I just moved back to rural Nova Scotia from San Francisco and this subject has been on my mind a lot lately. I've noticed that since I've been here (a small isolated community of eighty people or so, with the nearest town of any size forty kilometers away), lives here have changed considerably due to technology since I was a boy. The Internet has created broader personal networks and people are better educated because of it. Improved medical procedures are helping people live longer. Everyone owns more stuff and drives nicer cars, and they engage in a broader range of social, political and physical activity than they used to. They are even more tolerant of ethnic and cultural diversity. All of these changes in less than twenty years. Yet, my guess is, based on my recent observations, that if they were forced off the grid tomorrow, and had to go back to "Amish-level" technology, they would survive a hell of a lot longer than most of my friends in urban areas. Many of them still hunt and they all have guns. Some even have crossbows for bear-hunting. They know how to get food and to keep it, even without electricity.
It is not uncommon for them to lose power in the winter months, and they have techniques for keeping their food: root cellars and snow banks and packing an unpowered fridge full of ice if the juice is going to be out for any length of time. They still preserve food (salt and pickle and spice) and keep low-temperature vegetable bins in basements, and almost everybody keeps some kind of garden in the warmer months. And most of all, they know how to cooperate to get things done. They do it all the time at local auxiliary and volunteer fire department and church meetings. Most houses here are oil, gas, propane or pellet stove heated. But many also have wood stoves or fireplaces for emergencies even if they don't use them. You'd be hard-pressed to find a single household without a chopping axe. Since I've moved back here I've even given some thought to defense, as scary as that sounds. The village is in a narrow valley divided by a small, fish-abundant river, and is easily defended if you had enough people motivated to do it (likely why this spot was chosen as a settlement in the late 1700s to begin with, not a peaceful time in my neck of the woods.) All in all, the chances of keeping a thriving community here in the event of something so disastrous as described above would be fairly good, at least for a while. But there is one interesting aside. Coyotes are a huge problem. They grow sleeker and braver and more numerous with each passing year, glutted on domestic cats and injured deer and human garbage carelessly stored. So people might find that predation was once again an issue, especially with small children, if there were no passing cars and ambient machine noise to keep them away during the day. And of course, access to medication would be a problem, as would infection and disease, though my parents still teach their grandchildren remedies for some of the minor common ailments that many of us now run to the doctor for. One more thing. In my black fantasy, when the big one hits, I'm gonna grab a gun and head to the little library in the village centre and defend the books. The world's first armed librarian. Darren On Tue, Nov 9, 2010 at 4:59 PM, Keith Henson wrote: > On Mon, Nov 8, 2010 at 5:00 AM, Charlie Stross > wrote: > > > > On 8 Nov 2010, at 00:20, Mike Dougherty wrote: > > > >> 3) we will continue to advance according to our own programming. > >> Mostly that frightened monkey programming that kept us from being > >> eaten by primordial predators will make us just as likely to hit the > >> computronium monsters with a proverbial rock or (as recently > >> discussed) a burning branch. Once the threat becomes possible, expect > >> to see right next to the firehose something like "in case of hard > >> takeoff, break glass to employ EMP." In a not-quite-worst-case > >> scenario we are forced to Nuke the Internet and revert back to > >> Amish-level technologies. Not a pretty situation, but humanity would > >> adapt. > > > > Humanity *in the abstract* might adapt; but if we have to go there, you > and I, personally, are probably going to die. Even today, all our supply > chains have adapted to just-in-time production and shipping, relying on > networked communications to ensure that stuff gets where it's needed; we > can't revert to doing things the old way -- the equipment has long since > been scrapped -- and we'd rapidly starve. Your average big box supermarket > only holds about 24-48 hours worth of provisions, and their logistics > infrastructure is highly tuned for efficiency. 
Now add in gas stations, > railroad signalling, electricity grid control ... If we have to Nuke The Net > Or Die, it'll mean the difference between a 100% die-back and a 90% > die-back. > > I understand that just losing GPS will make hash of the banking > industry (timestamps). > > > Meanwhile, the Mormons, with their requirement to keep a year of canned > goods in the cellar, will be laughing. (Well, praying.) > > I thought you were out of date on this since our Mormon neighbors got > rid of their year of food back in the late 80s. But it seems like > this is still part of Mormon culture, though it may be followed by a > minority of them. > > Could we get through a loss of the net and not lose most of the > population? At this point "the net" and phone service is largely the > same thing, at least outside a LATA. I think that might be possible > today, enough people remember old ways of doing things. Ten years > from now? 20? 30? Perhaps, but it gets harder and harder as time > goes on and dependency on the net increases. Losing process control > computers would be really bad. There are processes that can't be run > by hand at all, they are unstable and people are too slow. > > I wonder if there was any consideration for the possible consequences > before this started? > > Keith > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Wed Nov 10 03:31:03 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 22:31:03 -0500 Subject: [ExI] War ----- It's a meme! In-Reply-To: References: <4CD9EE0D.4000304@satx.rr.com> Message-ID: 2010/11/9 Darren Greer : >> >>Tut tut, that would be Royal Canadian Mounted Couch Potato.<< > > Don't forget the french-fried Surete du Quebec, led by > the inimitable Constable Poutine (of Kurdish heritage.) No doubt a heritage with tuberous roots. Or am I whey off? From msd001 at gmail.com Wed Nov 10 03:40:07 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 9 Nov 2010 22:40:07 -0500 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > But there is one interesting aside. Coyotes are a huge problem. They grow > sleeker and braver and more numerous with each passing year, glutted on > domestic cats and injured deer and human garbage carelessly stored. So > people might find that predation was once again an issue, especially with > small children, if there were no passing cars and ambient machine noise to I imagined "sleeker and braver" coyotes as futuristic gold-foil-clad computronium-enhanced beasts with a hivemind and an insatiable thirst for small children. 
Now there is a terrifying picture of the future: > especially if that crossbow "armed for bear" is anything short of a > plasma rifle or phaser set to kill. > > Reminds me of W.C. Fields' immortal quip: "I love small children, but I can't eat a whole one." Darren -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists1 at evil-genius.com Wed Nov 10 03:16:58 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 19:16:58 -0800 Subject: [ExI] Fire and evolution (was hypnosis) Message-ID: <4CDA0EAA.5060907@evil-genius.com> > By coincidence, > > > Stone Age humans were only able to develop relatively advanced tools > after their brains evolved a greater capacity for complex thought, > according to a new study that investigates why it took early humans > almost two million years to move from razor-sharp stones to a > hand-held stone axe. > ------------------ > > BillK Sort of. The actual conclusion is "The physical capabilities of Stone Age humans were not limiting their ability to make more complex stone tools." From that, they *assume* that brainpower was the limitation. Which may be true, but it's an assumption. For instance, other possibilities are that stone axes were not a useful tool until much later, perhaps due to changes in environment, diet, and social organization. (Note that I'm not arguing for or against...just pointing out the leap of faith involved here.) From lists1 at evil-genius.com Wed Nov 10 03:21:01 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 09 Nov 2010 19:21:01 -0800 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) Message-ID: <4CDA0F9D.6070201@evil-genius.com> > From: Charlie Stross >> > In a not-quite-worst-case >> > scenario we are forced to Nuke the Internet and revert back to >> > Amish-level technologies. Not a pretty situation, but humanity would >> > adapt. > Humanity *in the abstract* might adapt; but if we have to go there, > you and I, personally, are probably going to die. The fact people forget is that late Pleistocene hunter-foragers had larger brains than post-agricultural humans! (And were taller, stronger, and healthier...only in the last 50 years have most human cultures regained the height of our distant ancestors.) The implication, of course, is that hunting and foraging *required* that brainpower -- otherwise it would not have been selected for. In other words, successfully hunting and foraging was *intellectually challenging*, and you didn't reproduce unless you were very good at it. In contrast, agriculture takes a genius to invent, but can be practiced by nearly anybody. Follow the ox, back and forth, sow and weed and harvest. Don't get 'distracted' (which was, for millions of years, a survival characteristic known as 'noticing something possibly edible amidst the blooming confusion of life'). Industrialization and mass-production increased this divide. The entire point of the Industrial Revolution was to decrease cost of goods by eliminating expensive skilled craftsmen and replacing them with low-wage unskilled labor. Each advance in technology involves more and more specialized knowledge whose fundamentals grow more complex with each step, and are understood by fewer... ...and which decrease the base level of intelligence and physical capability required to survive. 
In modern Western societies, absolutely *anyone* survives, even the persistently vegetative. Where this all ends up: technology allows a very few smart and capable people to enable the survival of *billions* of much less capable people. So if you take away that technology and require everyone to fend for themselves, you would expect a large dieback. > ... If we have to Nuke The Net Or Die, it'll mean the > difference between a 100% die-back and a 90% die-back. Given the world's rapidly disappearing supply of topsoil and ocean fish and continued population growth, that 90% figure you mention is basically a guarantee at some point in the not-too-distant future. (Anyone who wants to make the Julian Simon argument needs to also look at the rapidly disappearing supply of climax predators: world lion population has crashed from 200,000 to 20,000, a 90% decrease in ten years, due to habitat loss, and tigers are essentially extinct in the wild. Agricultural productivity has flattened out: all we've been doing is using up our buffer zones -- which *used* to have wild animals in them, hence their rapid decline. And you can't increase productivity via genetic engineering without using up your topsoil more quickly...unless you're returning human waste to the soil your food grew in, which we aren't.) > Meanwhile, the Mormons, with their requirement to keep a year of > canned goods in the cellar, will be laughing. (Well, praying.) I'm not sure one can learn subsistence farming or hunting in one year of hiding in a cellar. The Amish and Mennonites have the skills to manage...but I'm not sure they'd survive the waves of heavily armed and *very* hungry urban gangs exploding outward from the cities. The thing to remember is that 90%+ dieoffs are very common throughout the Earth's history...and given the rapid rate of replenishment relative to the geological record, don't tend to even show up unless environmental change held the population down for an extended period of time. Human population is thought to have hit a bottleneck of 5K-10K somewhere in the Late Pleistocene. So if there were a 99.9% dieoff and the only remaining humans were a few thousand Amish, Hadza, Ache, !Kung, and New Guinea highlanders, it wouldn't make a great deal of difference in the long term. But you and I might not be too happy about it. From darren.greer3 at gmail.com Wed Nov 10 04:01:21 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 00:01:21 -0400 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDA0F9D.6070201@evil-genius.com> References: <4CDA0F9D.6070201@evil-genius.com> Message-ID: Only on this list, and perhaps a few others, could the topic "I love the world. =)" be converted in a few short posts into "Technology, specialization, and diebacks.." Gotta love it. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From hkeithhenson at gmail.com Wed Nov 10 08:10:56 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 10 Nov 2010 01:10:56 -0700 Subject: [ExI] extropy-chat Digest, Vol 86, Issue 13 In-Reply-To: References: Message-ID: On Tue, Nov 9, 2010 at 5:00 AM, BillK wrote: > > John Horgan has an article in Scientific American about why tribes go > to war that might be of interest. I know that Keith has suggested that > war is caused either by hard times or an expectation of hard times, > but I feel this is a weak theory as it seems to cover all cases and > therefore is untestable. That's nonsense. 
The theory would be instantly refuted if you found *one* case where a society doing well with a bright future started a war. The US Civil War was a total mystery to me until I realized that, for sound evolutionary reasons, anticipation of a bleak future is as effective as hard times themselves in sparking a war. > Horgan thinks that war is learned behaviour. > > Some Quotes: > > Analyses of more than 300 societies in the Human Relations Area Files, > an ethnographic database at Yale University, have turned up no > clear-cut correlations between warfare and chronic resource scarcity. > Similarly, the anthropologist Lawrence Keeley notes in War before > Civilization: The Myth of the Peaceful Savage (Oxford University > Press, 1997) that the correlation between population pressure and > warfare "is either very complex or very weak or both." Amazing they would say this. "Population pressure" is a variable. War in China followed weather because a population that could be fed in times of good weather could not in times of bad weather. > Margaret Mead dismissed the notion that war is the inevitable > consequence of our "basic, competitive, aggressive, warring human > nature." This theory is contradicted, she noted, by the simple fact > that not all societies wage war. War has never been observed among a > Himalayan people called the Lepchas or among the Eskimos. In fact, > neither of these groups, when questioned by early ethnographers, was > even aware of the concept of war. Given what is now known about Margaret Mead's studies I am amazed that anyone would quote her as an authority. Himalayan people live at the edge of what humans can adapt to. That keeps their numbers down. As for the Eskimos, they killed each other at a high rate. You can call it war if you want to. > Warfare is "an invention," Mead concluded, like cooking, marriage, > writing, burial of the dead or trial by jury. Once a society becomes > exposed to the "idea" of war, it "will sometimes go to war" under > certain circumstances. Some people, Mead stated, such as the Pueblo > Indians, fight reluctantly to defend themselves against aggressors; > others, such as the Plains Indians, sally forth with enthusiasm, > because they have elevated martial skills to the highest of manly > virtues. Sheesh. Citing Mead is just stupid. Keith From pharos at gmail.com Wed Nov 10 10:07:27 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Nov 2010 10:07:27 +0000 Subject: [ExI] Margaret Mead controversy Message-ID: Quote: Margaret Mead's most famous book, 1928's "Coming of Age in Samoa," portrayed an idyllic, non-Western society, free of much sexual restraint, in which adolescence was relatively easy. Derek Freeman, an Australian anthropologist, wrote two books arguing that Mead was wrong and launched a heated public debate about her work. To Freeman, the issue was larger than the accuracy of "Coming of Age in Samoa." As he saw it, Mead's book was pivotal in arguing that humans' cultural environment -- or "nurture" -- could mold them as much or more than their biological predispositions -- or "nature." Paul Shankman, a University of Colorado professor of anthropology, has spent years studying the controversy and has uncovered new evidence that Freeman's fierce criticism of Mead contained fundamental flaws. "Freeman told a good story. It was a story people wanted to hear, that they wanted to believe," Shankman said. "Unfortunately, that's all it was: a good story." Shankman has exhumed data that deeply undercut Freeman's case. 
His research, partly based on a probe of Freeman's archives, opened after his death, revealed that Freeman "cherry picked" evidence that supported his thesis and ignored evidence that contradicted it. Shankman dissects the controversy in "The Trashing of Margaret Mead: Anatomy of an Anthropological Controversy," a book published in November by the University of Wisconsin Press. ----------------------------------------- And from Wikipedia: In 1983, five years after Mead had died, New Zealand anthropologist Derek Freeman published Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth, in which he challenged Mead's major findings about sexuality in Samoan society, citing statements of her surviving informants claiming that she had coaxed them into giving her the answers she wanted. After years of discussion, many anthropologists concluded that Mead's account is for the most part reliable, and most published accounts of the debate have also raised serious questions about Freeman's critique.[17] 17. See Appell 1984, Brady 1991, Feinberg 1988, Leacock 1988, Levy 1984, Marshall 1993, Nardi 1984, Patience and Smith 1986, Paxman 1988, Scheper-Hughes 1984, Shankman 1996, Young and Juan 1985, and Shankman 2009. ----------------------------- Basically it is the nurture versus nature debate all over again. Keith (like Freeman) tends towards the nature side, that humans behave more as genetics have programmed them to. I tend more towards the nurture side, that humans behave more as their culture programs them to. Obviously it is all a big mish-mash with parts of both points of view being correct at different times and circumstances. But the nurture side is the whole point of the history of civilization, i.e. trying to control the animal instincts of humans to build a better life. Keith's support of the idea that genetic programming takes precedence is what leads him to his rather depressing view of the future course of humanity. But civilization has been controlling the human genetic impulses (to a greater or lesser extent) for thousands of years. So I think there is still hope for humanity. BillK From msd001 at gmail.com Wed Nov 10 13:05:26 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:05:26 -0500 Subject: [ExI] I love the world. =) In-Reply-To: References: Message-ID: 2010/11/9 Darren Greer : > Reminds me of W.C. Fields' immortal quip: "I love small children, but I > can't eat a whole one." Q: "Do you have any kids?" A: "I had a child once; it was delicious" (effectively ends further discussion) From msd001 at gmail.com Wed Nov 10 13:18:40 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:18:40 -0500 Subject: [ExI] Margaret Mead controversy In-Reply-To: References: Message-ID: On Wed, Nov 10, 2010 at 5:07 AM, BillK wrote: > Keith's support of the idea that genetic programming takes precedence > is what leads him to his rather depressing view of the future course > of humanity. But civilization has been controlling the human genetic > impulses (to a greater or lesser extent) for thousands of years. So I > think there is still hope for humanity. their interdependence forces a balance. We might make a strong case for one extreme but the farther we push our point(s) from that equilibrium the harder it is to believe. While genes program behavior, the environment measures fitness. 
Our culture is a habitat for competing memes (and the genes that predispose us to accepting/propagating them) Why has western culture fixated on the emaciated waif as the ideal feminine form? What evolutionary advantage exists? Is it simply the rarity of that phenotype that we perceive as valuable from a requisite diversity perspective? Are blond hair & blue eyes the same? The ability to use one's brain to secure material wealth has changed the ideal masculine form from largish alpha brute protector/provider. Maybe the fitness evaluation grows more complex with the environment. fwiw: no, I don't have a dozen citations to legitimize my opinion. If this observation stands on its own then great; else it's just one person's casual conversation. From msd001 at gmail.com Wed Nov 10 13:30:19 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 10 Nov 2010 08:30:19 -0500 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDA0F9D.6070201@evil-genius.com> References: <4CDA0F9D.6070201@evil-genius.com> Message-ID: On Tue, Nov 9, 2010 at 10:21 PM, wrote: > The fact people forget is that late Pleistocene hunter-foragers had larger > brains than post-agricultural humans! (And were taller, stronger, and > healthier...only in the last 50 years have most human cultures regained the > height of our distant ancestors.) By comparison the Apple IIc I had when I was ten years old was more than twice as powerful as the computer I'm currently using to type this email. Perhaps fossil evidence shows a larger brainbox but can say nothing about the neural density / efficiency of the brain contained therein. Are you suggesting that a sperm whale is 5x smarter than the average human only because of its larger brain? > Where this all ends up: technology allows a very few smart and capable > people to enable the survival of *billions* of much less capable people. So > if you take away that technology and require everyone to fend for > themselves, you would expect a large dieback. > >> ... If we have to Nuke The Net Or Die, it'll mean the >> difference between a 100% die-back and a 90% die-back. Yeah. No wonder the future looks bleak. Even without die-back there's the apparently inevitable dumbening that's sure to overtake all but the top 0.1% who'll run everything (possibly the Mr. Smiths) From stathisp at gmail.com Wed Nov 10 13:48:49 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Nov 2010 00:48:49 +1100 Subject: [ExI] Let's play What If. In-Reply-To: <4CD85AA8.5080402@satx.rr.com> References: <4CC6738E.3050609@speakeasy.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: On Tue, Nov 9, 2010 at 7:16 AM, Damien Broderick wrote: > 2) If copying requires destruction of the original, is it psychologically > likely that he will go to his death happy in the knowledge that his exact > subsequent copy will continue elsewhere? Many here say, "Hell, yes, it's > only evolved biases and cognitive errors that could support any other > opinion!" 
Others say, "Maybe so, but you're not getting me into that damned > gas chamber." > > So if the world becomes filled with people happy to be killed and copied, of > course it's likely that after a few hundred iterations identity will be > construed this way by almost everyone. If the USA becomes filled with the > antiabortion offspring of the duped who believe evolution is a godless hoax > and humans never walked on the moon, those opinions will also be validated. > So what? There is no contradiction in the assertion that the person survives even though the original is destroyed, because survival of the person and survival of the original are two different things. -- Stathis Papaioannou From agrimes at speakeasy.net Wed Nov 10 14:25:40 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 10 Nov 2010 09:25:40 -0500 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> Message-ID: <4CDAAB64.2040803@speakeasy.net> > There is no contradiction in the assertion that the person survives > even though the original is destroyed, because survival of the person > and survival of the original are two different things. In that case the concept of "person" is meaningless. In Existential Nihilism, if you can't poke it in the arm, it doesn't exist. Similarly, there is no such thing as society, forests, governments, wars, etc... Because these concepts are fundamentally fictions, they obscure and obstruct a true understanding of the reality in which you live. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Wed Nov 10 15:37:53 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 10 Nov 2010 08:37:53 -0700 Subject: [ExI] EP, was Margaret Mead controversy Message-ID: On Wed, Nov 10, 2010 at 5:00 AM, BillK wrote: snip > Basically it is the nurture versus nature debate all over again. > > Keith (like Freeman) tends towards nature side, that humans behave > more as genetics have programmed them to. It depends on how widely you class "behavior." Remember, the genes just don't have the information available to program a lot of behavior beyond walking. So a person flying a jet aircraft isn't using a lot of genetically programmed behavior. But if he (or she) is flying in a war, the motivation behind wars is genetically determined because the selection for going to war when it was profitable for genes was under intense selection for millions of years. (In good times it was *not* profitable for genes.) > I tend more towards the nurture side, that humans behave more as their > culture programs them to. It really depends on the situation. Culture has *nothing* to do with you pulling an arm back when you touch something that hurts. Culture (and current culture at that) has everything to do with spending hours working on your Facebook page. > Obviously it is all a big mish-mash with parts of both points of view > being correct at different times and circumstances. > > But the nurture side is the whole point of the history of > civilization, i.e. 
trying to control the animal instincts of humans to > build a better life. According to Dr. Gregory Clark, civilization set up the conditions for genetic selection in some parts of the world as intense as that which converted wild foxes to cute tame ones in 20 generations. Indeed certain psychological characteristics, such as impulsiveness and time preference, seem to have been greatly reduced in some groups over baseline hunter-gatherers. > Keith's support of the idea that genetic programming takes precedence > is what leads him to his rather depressing view of the future course > of humanity. It's not genetic programming that concerns me. I actually don't see much future for humanity at all as we pass into the singularity. We can change to keep up with our intellectual offspring. The result would be something we would not recognize as human. Alternately our intellectual offspring might keep us like we keep cats (depressing when you think of what we do to cats "for their own good"). Perhaps you have another option? > But civilization has been controlling the human genetic > impulses (to a greater or lesser extent) for thousands of years. So I > think there is still hope for humanity. Out of curiosity, are you doing anything to improve our chances for a future? Keith From jonkc at bellsouth.net Wed Nov 10 15:29:57 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 10 Nov 2010 10:29:57 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDAAB64.2040803@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> Message-ID: <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> Stathis Papaioannou wrote: >> There is no contradiction in the assertion that the person survives >> even though the original is destroyed, because survival of the person >> and survival of the original are two different things. > Alan Grimes wrote: > In that case the concept of "person" is meaningless. No, it just means "the concept of person" is not a noun, in fact no concept is. > > In Existential Nihilism, if you can't poke it in the arm, it doesn't exist. If true then Existential Nihilism is a remarkably silly philosophy. You can't poke the number 42 in the arm because you wouldn't be able to find it, it possesses a size but no position; I doubt if you want to argue that the number 42 doesn't exist. You couldn't even poke a vector in the arm and in addition to a size a vector possesses a direction, but it still has no position; if vectors don't exist then physicists are in big trouble. Lots of important things have no position, when you really get down to it, I'd go so far as to say ALL important things. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 17:00:46 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:00:46 -0400 Subject: [ExI] EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: >>It's not genetic programming that concerns me. 
I actually don't see much future for humanity at all as we pass into the singularity. We can change to keep up with our intellectual offspring. The result would be something we would not recognize as human. << I may be naive about this (and usually am) but isn't there a lot of perception at work here? If what resulted from these changes to 'keep up' was not recognizably human, wouldn't it be more likely that we would just redefine what it meant to be physically (and not necessarily biologically) and mentally human as we evolved (technologically speaking)? So we'd be no longer biologically human from our current standpoint? The way that a 12th century Christian or Islamic physician may see a modern man living with the heart valves of a pig as no longer human because for him that was the seat of the soul? It's a matter of degree, I realize. And the possibility of singularity (which I'm still trying to get a handle on, I admit) makes orderly progress and stochastic prediction impossible. But one of the things I struggle with in terms of TH is figuring out this: if the human body is fully fungible as many seem to believe then can a simple biological definition be useful to define what it means to be human if you already have that awareness? Whether this biological replacement has reached full potential or not? What defines being human? If it's not biology (and I am presumably no less human with an artificial leg than if all my body parts are replaced and my awareness uploaded and/or reconfigured) then what is it? If I am able to self-modify and expand into places that I can't currently imagine because of my biological limitations, then is that being non-human? No longer having those limitations? And if humanity is simply the sum total of my limitations (beginning with my mortality) then you can keep that definition anyway. I've never been a great believer in we are what we can't do. Tell that to someone who can't feed their children, and see how it flies. At its base level, it is unethical: a philosophy fed by the oppressor to the oppressed to keep the status quo. But, and here's the rub, where in the hell do my ethics come from? Not from my limitations but my desire to breach them, and free others from theirs if they are unable to do it for themselves. I am mightily confused about this, and would like to know what others think. I think this may actually be transhumanism 101, but I am just now learning and absorbing enough to ask this question and actually have a shot at processing the answer. Even assistance in restating the question into something less confusing would be helpful. Darren -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 17:24:35 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:24:35 -0400 Subject: [ExI] Seth Lloyd on birds, plants, ... In-Reply-To: <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> References: <4CD7211C.8060304@speakeasy.net> <4CD72B6A.80701@satx.rr.com> <36F35165AAA04A00B8E05C5F4E3E2FA3@PCserafino> Message-ID: He originally appeared in a panel discussion about this subject on CBC radio's show Ideas a few years back. Another guest discussed dark matter and I forget the rest. Was a good broadcast. They have a live panel every year for this radio show. I love CBC radio. Quirks and Quarks and Ideas especially. One year Ideas did a broadcast of the mock-up of the trial of Socrates. 
Darren On Mon, Nov 8, 2010 at 7:21 AM, scerir wrote: > Seth Lloyd on quantum 'weirdness' used by plants, animals, etc. > > http://www.cbc.ca/technology/story/2010/11/03/quantum-physics-biology-living-things.html > Supposedly, the video of this lecture will appear on the Perimeter > Institute website, or at pirsa.org. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Wed Nov 10 17:32:32 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 10 Nov 2010 13:32:32 -0400 Subject: [ExI] Google maps gets drawn into Latin America border dispute Message-ID: Interesting. When the digitization becomes more real than the physicality. http://www.washingtonpost.com/wp-dyn/content/article/2010/11/09/AR2010110906620.html?wprss=rss_world/wires Darren -- "I don't regret the kingdoms. What sense in borders and nations and patriotism? But I miss the kings." -*Harold and Maude* -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Wed Nov 10 20:40:59 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 10 Nov 2010 15:40:59 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> Message-ID: <4CDB035B.9040406@speakeasy.net> John Clark wrote: > If true then Existential Nihilism is a remarkably silly philosophy. You > can't poke the number 42 in the arm because you wouldn't be able to find > it, it possesses a size but no position; I doubt if you want to argue > that the number 42 doesn't exist. Of course the # 42 doesn't exist! It was invented just like every other concept. That is not to say that the # 42 is either meaningless or useless. The # 42 has a very well-defined meaning *with respect to a unit* (which may or may not be meaningful or useful). If you had 42 marbles, they would certainly exist and the number would be useful for describing them. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. 
From kanzure at gmail.com Wed Nov 10 20:21:00 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 10 Nov 2010 14:21:00 -0600 Subject: [ExI] Paper: It Will Be Awesome if They Don't Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology Message-ID: It Will Be Awesome if They Don't Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology pdf: http://www.publicknowledge.org/files/docs/3DPrintingPaperPublicKnowledge.pdf author: Michael Weinberg """ The next great technological disruption is brewing just out of sight. In small workshops, and faceless office parks, and garages, and basements, revolutionaries are tinkering with machines that can turn digital bits into physical atoms. The machines can download plans for a wrench from the Internet and print out a real, working wrench. Users design their own jewelry, gears, brackets, and toys with a computer program, and use their machines to create real jewelry, gears, brackets, and toys. These machines, generically known as 3D printers, are not imported from the future or the stuff of science fiction. Home versions, imperfect but real, can be had for around $1,000. Every day they get better, and move closer to the mainstream. In many ways, today's 3D printing community resembles the personal computing community of the early 1990s. They are a relatively small, technically proficient group, all intrigued by the potential of a great new technology. They tinker with their machines, share their discoveries and creations, and are more focused on what is possible than on what happens after they achieve it. They also benefit from following the personal computer revolution: the connective power of the Internet lets them share, innovate, and communicate much faster than the Homebrew Computer Club could have ever imagined. The personal computer revolution also casts light on some potential pitfalls that may be in store for the growth of 3D printing. When entrenched interests began to understand just how disruptive personal computing could be (especially massively networked personal computing) they organized in Washington, D.C. to protect their incumbent power. Rallying under the banner of combating piracy and theft, these interests pushed through laws like the Digital Millennium Copyright Act (DMCA) that made it harder to use computers in new and innovative ways. In response, the general public learned once-obscure terms like "fair use" and worked hard to defend their ability to discuss, create, and innovate. Unfortunately, this great public awakening came after Congress had already passed its restrictive laws. Of course, computers were not the first time that incumbents welcomed new technologies by attempting to restrict them. The arrival of the printing press resulted in new censorship and licensing laws designed to slow the spread of information. The music industry claimed that home taping would destroy it. And, perhaps most memorably, the movie industry compared the VCR to the Boston Strangler preying on a woman home alone. One of the goals of this whitepaper is to prepare the 3D printing community, and the public at large, before incumbents try to cripple 3D printing with restrictive intellectual property laws. 
By understanding how intellectual property law relates to 3D printing, and how changes might impact 3D printing's future, this time we will be ready when incumbents come calling to Congress. """ - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists1 at evil-genius.com Wed Nov 10 20:27:27 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Wed, 10 Nov 2010 12:27:27 -0800 Subject: [ExI] Coyotes (Re: I love the world. =) In-Reply-To: References: Message-ID: <4CDB002F.7000801@evil-genius.com> On 11/9/10 6:45 PM, Darren wrote: > Coyotes are a huge problem. They grow > sleeker and braver and more numerous with each passing year, glutted on > domestic cats and injured deer and human garbage carelessly stored. So > people might find that predation was once again an issue, Well, that's what we get for killing off all the wolves: a vacant ecological niche. I think part of the success of coyotes is because, unlike wolves, they have adapted to inhabited areas -- where discharging of firearms is prohibited. A coyote is much safer in an urban area than it is out West, where it will likely be shot or poisoned (frequently at taxpayer expense). Factoid: the extinct "dire wolf" (Canis dirus) was actually more closely related to the modern coyote than the modern wolf. Mike Dougherty wrote: > I imagined "sleeker and braver" coyotes as futuristic gold-foil-clad > computronium-enhanced beasts with a hivemind and an insatiable thirst > for small children. In other words, furry versions of Dick Cheney, Hank Paulson, and Rahm Emanuel. From jrd1415 at gmail.com Wed Nov 10 22:11:59 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 14:11:59 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? Message-ID: I'm trying to find a cartoon that appeared in Scientific American about ten years ago. I thought it was in the Sept 2001 issue, but I have a copy of that issue, and I can't find it there. I tried contacting SciAm, but they don't respond. The cartoon depicts a stairway proceeding from lower left to upper right. It is the evolutionary stairway. Three "individuals" are climbing the stairs: a lemur-like critter lower left, then a hairy, knuckle-dragging proto-human cave man, and finally in the upper right a "modern" human. The caption has the lemur saying to the cave man, "I wondered when he'd notice there were more steps." Suggesting of course that evolution is not through with the human "line", fundamental thinking for list members. I want to enlarge that cartoon and make a T-shirt out of it. Person who finds it for me gets a free T-shirt. Best, Jeff Davis "You are what you think." Jeff Davis From pharos at gmail.com Wed Nov 10 23:53:41 2010 From: pharos at gmail.com (BillK) Date: Wed, 10 Nov 2010 23:53:41 +0000 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis wrote: > I'm trying to find a cartoon that appeared in Scientific American > about ten years ago. I thought it was in the Sept 2001 issue, but I > have a copy of that issue, and I can't find it there. I tried > contacting SciAm, but they don't respond. > > The cartoon depicts a stairway proceeding from lower left to upper > right. It is the evolutionary stairway. 
?Three "individuals" are > climbing the stairs: ?a lemur-like critter lower left, then a hairy, > knuckle-dragging proto-human cave man, and finally in the upper right > a "modern" human. ?The caption has the lemur saying to the cave man, > "I wondered when he'd notice there were more steps." ?Suggesting of > course that evolution is not through with the human "line", > fundamental thinking for list members. > > I want to enlarge that cartoon and make a T-shirt out of it. ?Person > who finds it for me gets a free T-shirt. > > Good news & bad news. I've found the cartoon, but it's copyrighted. Book: Radical Evolution The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What It Means to Be Human By Joel Garreau Quote: Joel Garreau: At the beginning of "Radical Evolution," there is a New Yorker cartoon I bought the rights to. It shows a staircase. on the first step is a little monkey. on the next, a bigger one, on the next, a cro magnon, and on the next, a guy in a suit. the caption has the cro-magnon saying to the suit: "i was wondering when you'd notice there's lots more steps." ---------------------------- BillK From jrd1415 at gmail.com Thu Nov 11 00:22:23 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 16:22:23 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: Wow! This is bizarre. That's the cartoon all right, but it isn't. The one I remember was smaller format, steeper stairs, only three climbers, the modern human was climbing and looking forward and oblivious, and the caption was Lemur to cave man or vice versa. Now I'm really confused. Jeff Davis On Wed, Nov 10, 2010 at 4:01 PM, Jeff Davis wrote: > Thanks, Bill. > > Copyright's not an issue. ?I'm not planning commercial distribution. > Just want one for me,... and one for you. > > Thanks. > > How did you find it. ?I thought sure I saw it in SciAm. ?Was I wrong? > Did I actually see it in the New Yorker, and misremembered? > > On Wed, Nov 10, 2010 at 3:53 PM, BillK wrote: >> On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis ?wrote: >>> I'm trying to find a cartoon that appeared in Scientific American >>> about ten years ago. ?I thought it was in the Sept 2001 issue, but I >>> have a copy of that issue, and I can't find it there. ?I tried >>> contacting SciAm, but they don't respond. >>> >>> The cartoon depicts a stairway proceeding from lower left to upper >>> right ?It is the evolutionary stairway. ?Three "individuals" are >>> climbing the stairs: ?a lemur-like critter lower left, then a hairy, >>> knuckle-dragging proto-human cave man, and finally in the upper right >>> a "modern" human. ?The caption has the lemur saying to the cave man, >>> "I wondered when he'd notice there were more steps." ?Suggesting of >>> course that evolution is not through with the human "line", >>> fundamental thinking for list members. >>> >>> I want to enlarge that cartoon and make a T-shirt out of it. ?Person >>> who finds it for me gets a free T-shirt. >>> >>> >> >> >> Good news & bad news. >> >> I've found the cartoon, but it's copyrighted. >> >> Book: >> Radical Evolution >> The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What >> It Means to Be Human >> By ? ?Joel Garreau >> >> Quote: >> Joel Garreau: At the beginning of "Radical Evolution," there is a New >> Yorker cartoon I bought the rights to. It shows a staircase. on the >> first step is a little monkey. 
on the next, a bigger one, on the next, >> a cro magnon, and on the next, a guy in a suit. the caption has the >> cro-magnon saying to the suit: >> "i was wondering when you'd notice there's lots more steps." >> >> ---------------------------- >> >> >> BillK >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> > From jrd1415 at gmail.com Thu Nov 11 00:01:26 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 10 Nov 2010 16:01:26 -0800 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: Thanks, Bill. Copyright's not an issue. I'm not planning commercial distribution. Just want one for me,... and one for you. Thanks. How did you find it? I thought sure I saw it in SciAm. Was I wrong? Did I actually see it in the New Yorker, and misremembered? On Wed, Nov 10, 2010 at 3:53 PM, BillK wrote: > On Wed, Nov 10, 2010 at 10:11 PM, Jeff Davis wrote: >> I'm trying to find a cartoon that appeared in Scientific American >> about ten years ago. I thought it was in the Sept 2001 issue, but I >> have a copy of that issue, and I can't find it there. I tried >> contacting SciAm, but they don't respond. >> >> The cartoon depicts a stairway proceeding from lower left to upper >> right. It is the evolutionary stairway. Three "individuals" are >> climbing the stairs: a lemur-like critter lower left, then a hairy, >> knuckle-dragging proto-human cave man, and finally in the upper right >> a "modern" human. The caption has the lemur saying to the cave man, >> "I wondered when he'd notice there were more steps." Suggesting of >> course that evolution is not through with the human "line", >> fundamental thinking for list members. >> >> I want to enlarge that cartoon and make a T-shirt out of it. Person >> who finds it for me gets a free T-shirt. >> >> > > > Good news & bad news. > > I've found the cartoon, but it's copyrighted. > > Book: > Radical Evolution > The Promise and Peril of Enhancing Our Minds, Our Bodies -- and What > It Means to Be Human > By Joel Garreau > > Quote: > Joel Garreau: At the beginning of "Radical Evolution," there is a New > Yorker cartoon I bought the rights to. It shows a staircase. on the > first step is a little monkey. on the next, a bigger one, on the next, > a cro magnon, and on the next, a guy in a suit. the caption has the > cro-magnon saying to the suit: > "i was wondering when you'd notice there's lots more steps." > > ---------------------------- > > > BillK > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From stathisp at gmail.com Thu Nov 11 04:35:19 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Thu, 11 Nov 2010 15:35:19 +1100 Subject: [ExI] Let's play What If. 
In-Reply-To: <4CDAAB64.2040803@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> Message-ID: 2010/11/11 Alan Grimes : >> There is no contradiction in the assertion that the person survives >> even though the original is destroyed, because survival of the person >> and survival of the original are two different things. > > In that case the concept of "person" is meaningless. > > In Existential Nihilism, if you can't poke it in the arm, it doesn't > exist. Similarly, there is no such thing as society, forests, > governments, wars, etc... Because these concepts are fundamentally > fictions, they obscure and obstruct a true understanding of the reality > in which you live. Sometimes we can agree on a particular instance of a vaguely defined thing such as a country or a person. We can then try to come up with definitions to see if they fit. The problem with these personal identity discussions is that some participants assume a definition when that definition is inconsistent with their own usage of the terms. A1 Proposed definition: a country is a geographical region populated by people who all speak the same language. A2 Switzerland is a country. A3 The people in Switzerland do not all speak the same language. A4 If we agree on A2 and A3 we must reject A1. B1 Proposed definition: a person survives from t1 to t2 provided that the matter in his body remains the same between those times. B2 Alan has survived from Tuesday to Wednesday. B3 The matter in Alan's body was not the same on Tuesday and Wednesday. B4 If we agree on B2 and B3 we must reject B1. So that is the challenge: come up with a definition of personal survival that excludes destructive copying but allows the situations where normal usage of the term says we have definitely survived. -- Stathis Papaioannou From pharos at gmail.com Thu Nov 11 11:22:55 2010 From: pharos at gmail.com (BillK) Date: Thu, 11 Nov 2010 11:22:55 +0000 Subject: [ExI] Could youse guys help me find a Scientific American cartoon? In-Reply-To: References: Message-ID: On Thu, Nov 11, 2010 at 12:22 AM, Jeff Davis wrote: > Wow! This is bizarre. That's the cartoon all right, but it isn't. > The one I remember was smaller format, steeper stairs, only three > climbers, the modern human was climbing and looking forward and > oblivious, and the caption was Lemur to cave man or vice versa. > > Now I'm really confused. > > Scientific American did a review of the book, so they may have shown the cartoon. Here is the one still on sale at the New Yorker: (presumably they pay commission to Garreau if he owns the rights) There are many evolution cartoons around, showing lines of characters, so you may be conflating several memories. Cheers, BillK From jonkc at bellsouth.net Thu Nov 11 14:25:15 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 11 Nov 2010 09:25:15 -0500 Subject: [ExI] Let's play What If. 
In-Reply-To: <4CDB035B.9040406@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> Message-ID: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> On Nov 10, 2010, at 3:40 PM, Alan Grimes wrote: > Of course the # 42 doesn't exist! It was invented just like every other > concept. That is not to say that the # 42 is either meaningless or useless. If something has meaning then it is meaningless to say "it doesn't exist". And if it is useful too then it is not useful to pretend that it doesn't. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Thu Nov 11 15:26:22 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Thu, 11 Nov 2010 10:26:22 -0500 Subject: [ExI] Iain M Banks on uploading In-Reply-To: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> Message-ID: <4CDC0B1E.3030909@lightlink.com> Interview with Iain M Banks in New Scientist: http://www.newscientist.com/blogs/culturelab/2010/11/iain-m-banks-upload-for-everlasting-life.html A very short, but thoroughly Banksian set of responses. I particularly liked: Q: In your book, virtual minds can get sent to a virtual hell for bad behaviour. What gave you this idea? Banks: Part of a science fiction writer's job is to think how we would manage to fuck up something so potentially cool and life-affirming - so virtual hells seemed almost as obvious as virtual heavens. That pretty much sums up half of all the online discussion about the singularity that I hear inside the singularity community. Richard Loosemore From agrimes at speakeasy.net Thu Nov 11 15:23:49 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 11 Nov 2010 10:23:49 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> Message-ID: <4CDC0A85.3000404@speakeasy.net> John Clark wrote: > On Nov 10, 2010, at 3:40 PM, Alan Grimes wrote: >> Of course the # 42 doesn't exist! 
>> It was invented just like every other concept. That is not to say
>> that the # 42 is either meaningless or useless.

> If something has meaning then it is meaningless to say "it doesn't
> exist". And if it is useful too then it is not useful to pretend that it
> doesn't.

No. I don't group ideas with things that have tangible reality.

For example, a brain exists, it's tangible. However a computer simulation of a brain has no tangible reality; even a microscopic examination of the computer chips will not reveal it! Furthermore, a computer running a simulation of a brain is indistinguishable from a computer running the ABC at home project. ( my own machine is currently the 30th fastest on the project! =P )

Ever consider the differences between a computer and a brain with regards to a total reset situation? Let's say you had a stroke or a momentary jolt of 10,000 volts through your head. You would be stunned, and your brain would probably re-set itself a few times, but it would go back to being a brain. If you do the same with a computer, *at best* you'll get:

#### OPERATING SYSTEM NOT FOUND. INSERT DISK INTO DRIVE A: ####

=P

So yeah, I have a billion complaints about uploading not counting the identity issue.

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From singularity.utopia at yahoo.com Thu Nov 11 10:06:35 2010
From: singularity.utopia at yahoo.com (Singularity Utopia)
Date: Thu, 11 Nov 2010 10:06:35 +0000 (GMT)
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
Message-ID: <48358.82164.qm@web24905.mail.ird.yahoo.com>

I am interested in the Singularitarian Principles described by Eliezer S. Yudkowsky. http://yudkowsky.net/obsolete/principles.html

The above link to Eliezer S. Yudkowsky's site has a disclaimer-notice stating: "This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old."

Does anyone know what Yudkowsky's up-to-date Singularitarian Principles are? Does anyone know how to contact Yudkowsky (his webpage discourages contact; he states he is not likely to reply to individual emails), and if you do know how to contact him maybe you could ask him to update his page?

Has Yudkowsky stopped believing in his Singularitarian Principles? Maybe Yudkowsky has admitted defeat regarding the concept of AI and the Singularity?

Thanks for any help anyone can offer to clarify the current situation regarding the outdated Singularitarian Principles described by Eliezer S. Yudkowsky http://yudkowsky.net/obsolete/principles.html

From rpwl at lightlink.com Thu Nov 11 17:14:25 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Thu, 11 Nov 2010 12:14:25 -0500
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <48358.82164.qm@web24905.mail.ird.yahoo.com>
References: <48358.82164.qm@web24905.mail.ird.yahoo.com>
Message-ID: <4CDC2471.5040209@lightlink.com>

Singularity Utopia wrote:
> [snip]
> Does anyone know how to contact Yudkowsky (his webpage discourages
> contact; he states he is not likely to reply to individual emails), and
> if you do know how to contact him maybe you could ask him to update his
> page?

Yudkowsky is one of the easiest people to contact in the entire singularity community: just join the SL4 mailing list (which he created) and say something that annoys him.

If you have trouble thinking of something to annoy him, try mentioning my name. That should do it.
:-) After that, he'll appear out of nowhere and write a sarcastic comment about your stupidity, and THEN you will have his attention. :-)

Have fun.

Richard Loosemore

From jonkc at bellsouth.net Thu Nov 11 17:02:48 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 11 Nov 2010 12:02:48 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: <4CDC0A85.3000404@speakeasy.net>
References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net>
Message-ID:

On Nov 11, 2010, at 10:23 AM, Alan Grimes wrote:
> I don't group ideas with things that have tangible reality.

I don't either, ideas are far more important than tangible reality crap. You don't have ideas, you are ideas, and it's irrelevant what hardware happens to think you.

> For example, a brain exists, it's tangible.

What an object does is less tangible than the object itself, but I don't care, mind is more important than brain; at least it is in this mind's opinion.

> However a computer simulation of a brain has no tangible reality

If so then computer or even calculator arithmetic has no tangible reality, so you'd better not use one on your tax returns if you want to stay out of jail; but on second thought that really is not a problem because the brain simulating Alan Grimes has no tangible reality either and you can't put a nonexistent entity in prison.

> even a microscopic examination of the computer chips will not reveal it!

But a microscopic examination of the neurons in your brain will?!!

> Furthermore, a computer running a simulation of a brain is indistinguishable from a computer running the ABC at home project.

And that versatility is precisely why brains and computers are such useful objects.

> Ever consider the differences between a computer and a brain with
> regards to a total reset situation?

Indeed I have. If I were to suffer a horrible traumatic experience I'd likely be in a funk for many years and possibly for the rest of my life, but if my computer hangs around with bad programs or has a nervous breakdown for any reason I can reset it in just a few minutes; and even if it's totally destroyed everything is backed up on an external hard disk so nothing is lost. I just wish I had an external hard drive backup for me.

> I have a billion complaints about uploading not counting the identity issue.

There is no identity issue, there is only an identity superstition.

John K Clark

From thespike at satx.rr.com Thu Nov 11 17:38:14 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Thu, 11 Nov 2010 11:38:14 -0600
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDC2471.5040209@lightlink.com>
References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com>
Message-ID: <4CDC2A06.2090300@satx.rr.com>

On 11/11/2010 11:14 AM, Richard Loosemore wrote:
> Yudkowsky is one of the easiest people to contact in the entire
> singularity community: just join the SL4 mailing list (which he
> created) and say something that annoys him.

That shouldn't be difficult for "Singularity Utopia"...

Damien Broderick

From jonkc at bellsouth.net Thu Nov 11 17:29:55 2010
From: jonkc at bellsouth.net (John Clark)
Date: Thu, 11 Nov 2010 12:29:55 -0500
Subject: [ExI] Could youse guys help me find a Scientific American cartoon?
In-Reply-To: References: Message-ID: <58A12420-BE32-4864-889C-1B576C48229C@bellsouth.net>

http://www.cartoonbank.com/2004/i-was-wondering-when-youd-notice-theres-lots-more-steps/invt/127579/

From hkeithhenson at gmail.com Thu Nov 11 17:59:14 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 11 Nov 2010 10:59:14 -0700
Subject: [ExI] Singularity was EP, was Margaret Mead controversy
Message-ID:

On Thu, Nov 11, 2010 at 5:00 AM, Darren Greer wrote:

snip

> And if humanity is simply the sum total of my limitations (beginning with my
> mortality) then you can keep that definition anyway. I've never been a great
> believer in we are what we can't do. Tell that to someone who can't feed their
> children, and see how it flies. At its base level, it is unethical: a
> philosophy fed by the oppressor to the oppressed to keep the status quo.
> But, and here's the rub, where in the hell do my ethics come from?

Same place as everything else, evolution, selection of genes in the past. You do need to understand the gene model of evolution and "inclusive fitness" for this to make sense.

> Not from
> my limitations but my desire to breach them, and free others from theirs if
> they are unable to do it for themselves.
>
> I am mightily confused about this, and would like to know what others think.
> I think this may actually be transhumanism 101, but I am just now learning
> and absorbing enough to ask this question and actually have a shot at
> processing the answer. Even assistance in restating the question into
> something less confusing would be helpful.

I have been involved with this for a *long* time, clear back to the late 70s when Eric Drexler started talking about nanotechnology. It's so hard to understand the ramifications of what nanotech and AI will be able to do in the context of human desires that I had to resort to fiction to express it.
http://www.terasemjournals.org/GN0202/henson.html

Keith

From kanzure at gmail.com Thu Nov 11 19:43:33 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Thu, 11 Nov 2010 13:43:33 -0600
Subject: [ExI] Fwd: [NeuralEnsemble] Job Openings - Blue Brain Project
In-Reply-To: References: Message-ID:

---------- Forwarded message ----------
From: Eilif Muller
Date: Thu, Nov 11, 2010 at 1:26 PM
Subject: [NeuralEnsemble] Job Openings - Blue Brain Project
To: Neural Ensemble

Dear NeuralEnsemble,

I would like to draw your attention to the following new openings at the Blue Brain Project in Lausanne, Switzerland:

Postdoc in Data-Driven Modeling in Neuroscience (100%)
http://jahia-prod.epfl.ch/site/emploi/page-48940-en.html

Software Developer on Massively Parallel Compute Architectures (100%)
http://jahia-prod.epfl.ch/site/emploi/page-48916-en.html

Scientific Visualization Engineer (100%)
http://jahia-prod.epfl.ch/site/emploi/page-48941-en.html

System Administrator (100%)
http://jahia-prod.epfl.ch/site/emploi/page-48939-en.html

I would appreciate it if you could forward them to qualified persons who might be interested.

cheers,
Eilif

--
You received this message because you are subscribed to the Google Groups "Neural Ensemble" group. To post to this group, send email to neuralensemble at googlegroups.com. To unsubscribe from this group, send email to neuralensemble+unsubscribe at googlegroups.com. For more options, visit this group at http://groups.google.com/group/neuralensemble?hl=en.

--
- Bryan
http://heybryan.org/
1 512 203 0507

From sparge at gmail.com Thu Nov 11 18:22:50 2010
From: sparge at gmail.com (Dave Sill)
Date: Thu, 11 Nov 2010 13:22:50 -0500
Subject: [ExI] Singularity was EP, was Margaret Mead controversy
In-Reply-To: References: Message-ID:

On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote:

> Same place as everything else, evolution, selection of genes in the
> past.

What's the evolutionary/genetic explanation for homosexuality?

-Dave

From spike66 at att.net Thu Nov 11 20:13:47 2010
From: spike66 at att.net (spike)
Date: Thu, 11 Nov 2010 12:13:47 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDC2A06.2090300@satx.rr.com>
References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com>
Message-ID: <006801cb81dc$f3313e30$d993ba90$@att.net>

...

On 11/11/2010 11:14 AM, Richard Loosemore wrote:

>> Yudkowsky is one of the easiest people to contact in the entire
>> singularity community: just join the SL4 mailing list (which he
>> created) and say something that annoys him.

>That shouldn't be difficult for "Singularity Utopia"... Damien Broderick

Hi Utopia,

Do take Damien's comment as the constructive criticism he intended please. When Eli used to hang out here, his theme went something like this: The singularity is coming regardless. Let us work to make it a positive thing.

My constructive criticism of your earlier posts was that your theme is: the singularity will be a positive thing regardless.

Can you see why Eli would find that attitude annoying and dangerous? Do you see why plenty of people here would find that notion annoying and dangerous? The singularity is not necessarily a good thing, but we know that a no-singularity future is a bad thing. I am in Eli's camp: if we work at it, we can make it a good thing.
spike

From agrimes at speakeasy.net Thu Nov 11 21:48:10 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Thu, 11 Nov 2010 16:48:10 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net>
Message-ID: <4CDC649A.7030409@speakeasy.net>

John Clark wrote:
> On Nov 11, 2010, at 10:23 AM, Alan Grimes wrote:
>> I don't group ideas with things that have tangible reality.

> I don't either, ideas are far more important than tangible reality crap.
> You don't have ideas, you are ideas, and it's irrelevant what hardware
> happens to think you.

Absurd because ideas can't think of ideas. I can think of ideas so therefore I'm not any set of ideas.

>> For example, a brain exists, it's tangible.

> What an object does is less tangible than the object itself, but I don't
> care, mind is more important than brain; at least it is in this mind's
> opinion.

Because I'm a strict monist, I can't imagine any way through which the two can be separated. All such proposals are inherently irrational religious thinking. (Due to recent threads, I am now highlighting the strictly monistic nature of my viewpoints.)

>> However a computer simulation of a brain has no tangible reality

> If so then computer or even calculator arithmetic has no tangible
> reality so you'd better not use one on your tax returns if you want to
> stay out of jail;

Who said I filed tax returns? At this time in our history the government is completely evil. If you are a moral and upright human being, you would become a 1099 worker and never file any forms of any kind with the government. Furthermore, and more importantly, you must never accept any money or special benefit from the government. I like having a roof over my head so I pay my property tax ($6k/yr!), but I do not do business with the fedz.

> but on second thought that really is not a problem
> because the brain simulating Alan Grimes has no tangible reality either
> and you can't put a nonexistent entity in prison.

;)

>> even a microscopic examination of the computer chips will not reveal it!

> But a microscopic examination of the neurons in your brain will?!!

Stick a few electrodes on my skull and you'll see my EEG, you can even determine my state of consciousness from it. Combining that with anatomical evidence, you can prove that a brain is a person.

If you do the same to a computer you will not be able to detect any differences except for the general level of computational activity, which has no inherent relationship to the state of the upload.

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From stathisp at gmail.com Thu Nov 11 22:29:23 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Fri, 12 Nov 2010 09:29:23 +1100
Subject: [ExI] Let's play What If.
In-Reply-To: <4CDC649A.7030409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> Message-ID: 2010/11/12 Alan Grimes : > Stick a few electrodes on my skull and you'll see my EEG, you can even > determine my state of consciousness from it. Combining that with > anatomical evidence, you can prove that a brain is a person. There is no way, even in theory, to prove that a given collection of matter is conscious. > If you do the same to a computer you will not be able to detect any > differences except for the general level of computational activity, > which has no inherent relationship to the state of the upload. Are you claiming there is no correlation between the electrical activity in a computer and the computations it is carrying out? -- Stathis Papaioannou From possiblepaths2050 at gmail.com Thu Nov 11 23:58:47 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Thu, 11 Nov 2010 16:58:47 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <006801cb81dc$f3313e30$d993ba90$@att.net> References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com> <006801cb81dc$f3313e30$d993ba90$@att.net> Message-ID: On 11/11/2010 11:14 AM, Richard Loosemore wrote: Yudkowsky is one of the easiest people to contact in the entire singularity community: just join the SL4 mailing list (which he created) and say something that annoys him. >>> Damien Broderick wrote: >That shouldn't be difficult for "Singularity Utopia"... I have a feeling S.U. *may* get on the SL4 list, but will not be allowed to stay on it for very long!!! John ; ) On 11/11/10, spike wrote: > > ... > > On 11/11/2010 11:14 AM, Richard Loosemore wrote: > >>> Yudkowsky is one of the easiest people to contact in the entire >>> singularity community: just join the SL4 mailing list (which he >>> created) and say something that annoys him. > >>That shouldn't be difficult for "Singularity Utopia"... Damien Broderick > > Hi Utopia, > > Do take Damien's comment as the constructive criticism he intended please. > When Eli used to hang out here, his theme went something like this: The > singularity is coming regardless. Let us work to make it a positive thing. > > My constructive criticism of your earlier posts was that your theme is: the > singularity will be a positive thing regardless. > > Can you see why Eli would find that attitude annoying and dangerous? Do you > see why plenty of people here would find that notion annoying and dangerous? > The singularity is not necessarily a good thing, but we know that a > no-singularity future is a bad thing. I am in Eli's camp: if we work at it, > we can make it a good thing. 
> spike

From avantguardian2020 at yahoo.com Fri Nov 12 01:04:22 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Thu, 11 Nov 2010 17:04:22 -0800 (PST)
Subject: [ExI] Singularity was EP, was Margaret Mead controversy
In-Reply-To: References: Message-ID: <302902.92160.qm@web65608.mail.ac4.yahoo.com>

> From: Dave Sill
> To: ExI chat list
> Sent: Thu, November 11, 2010 10:22:50 AM
> Subject: Re: [ExI] Singularity was EP, was Margaret Mead controversy
>
> On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote:
>> Same place as everything else, evolution, selection of genes in the
>> past.
>
> What's the evolutionary/genetic explanation for homosexuality?

I don't believe exclusively in genetic determinism but genes are obviously a very powerful driver of behavior. Things like culture, advertising, conditioning, rationality, and other psychosocial forces can demonstrably override genetic behavior in many instances. But homosexuality is not a good example of nurture over nature. In fact, nature is full of homosexuality so the answer to your question depends on the species you are talking about. In fruit flies, it seems to be due to a mutation of the gene which allows a male fruit fly to distinguish females from other males. In Black Swans, it seems to be a survival adaptation because two males can defend a nest/chicks better than a heterosexual pair so they chase the female out after she has laid her eggs. In elephants, it seems to be a form of pederasty. In bonobos, it seems to be a primitive form of economics to diffuse conflict and minimize violence. Dolphins seem to do it because they are just plain horny. Heck, dolphins don't even limit sexual activity to their own species and are probably the only animal that practices "nasal sex" by penetrating the blowholes of their own and other species.

http://en.wikipedia.org/wiki/Homosexual_behavior_in_animals#cite_ref-ReferenceA_0-0

Stuart LaForge

"To be normal is the ideal aim of the unsuccessful." -Carl Jung

From lists1 at evil-genius.com Fri Nov 12 01:27:23 2010
From: lists1 at evil-genius.com (lists1 at evil-genius.com)
Date: Thu, 11 Nov 2010 17:27:23 -0800
Subject: [ExI] Technology, specialization, and diebacks...Re: I, love the world. =)
In-Reply-To: References: Message-ID: <4CDC97FB.6070404@evil-genius.com>

On 11/11/10 4:00 AM, extropy-chat-request at lists.extropy.org wrote:
> On Tue, Nov 9, 2010 at 10:21 PM, wrote:
>> > The fact people forget is that late Pleistocene hunter-foragers had larger
>> > brains than post-agricultural humans! (And were taller, stronger, and
>> > healthier...only in the last 50 years have most human cultures regained the
>> > height of our distant ancestors.)
> By comparison the Apple IIc I had when I was ten years old was more
> than twice as powerful as the computer i'm currently using to type
> this email. Perhaps fossil evidence shows a larger brainbox but can
> say nothing about the neural density / efficiency of the brain
> contained therein. Are you suggesting that a sperm whale is 5x
> smarter than the average human only because of its larger brain?

I believe you mean "more than twice as large" (not "twice as powerful"), so I'll address that point.
The comparison is between late Pleistocene hunter-foragers, of 10,000-40,000 years ago, and the post-agricultural humans that were their immediate descendants. Claiming that their brains were substantially different in "neural density/efficiency" requires substantial justification (that appears nowhere in the scientific literature). Comparing them to a sperm whale is simply specious.

McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33, 337-346
http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf

Even if you don't buy that argument, it will be difficult to claim that a slightly bigger brain made our immediate ancestors *dumber*.

The anatomically modern human was selected for by millions of years of hunting and foraging. (Orrorin, Sahelanthropus, and Ardipithecus -> Homo sapiens sapiens) Any subsequent change due to a few thousand years of agricultural practices is sufficiently subtle that it hasn't affected our morphology -- and, in fact, we're still arguing over whether it exists.

My point stands: intelligence must have been not just valuable, but *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have been selected for it. (Brain size of common human/chimp/bonobo ancestors: ~350cc. Brain size of anatomically modern humans: ~1300cc.)

From msd001 at gmail.com Fri Nov 12 02:12:18 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 11 Nov 2010 21:12:18 -0500
Subject: [ExI] Technology, specialization, and diebacks...Re: I, love the world. =)
In-Reply-To: <4CDC97FB.6070404@evil-genius.com>
References: <4CDC97FB.6070404@evil-genius.com>
Message-ID:

On Thu, Nov 11, 2010 at 8:27 PM, wrote:
> On 11/11/10 4:00 AM, extropy-chat-request at lists.extropy.org wrote:
>>
>> On Tue, Nov 9, 2010 at 10:21 PM, wrote:
>>>
>>> > The fact people forget is that late Pleistocene hunter-foragers had
>>> > larger
>>> > brains than post-agricultural humans! (And were taller, stronger, and
>>> > healthier...only in the last 50 years have most human cultures
>>> > regained the
>>> > height of our distant ancestors.)
>>
>> By comparison the Apple IIc I had when I was ten years old was more
>> than twice as powerful as the computer i'm currently using to type
>> this email. Perhaps fossil evidence shows a larger brainbox but can
>> say nothing about the neural density / efficiency of the brain
>> contained therein. Are you suggesting that a sperm whale is 5x
>> smarter than the average human only because of its larger brain?
>
> I believe you mean "more than twice as large" (not "twice as powerful"), so
> I'll address that point.

Let me clarify. Typically when we speak of larger brains we're talking about more intelligence as in, "That evil-genius is a large-brain individual compared to us normally small-brain people." I went on what I assumed was your suggestion that Pleistocene hunter-foragers had "larger brains" than modern humans. I would follow the thinking that modern technology has made it possible for the average human to grow dumber with each generation while a decreasing population of opportunist smarties continue to benefit from this imbalance.

> The comparison is between late Pleistocene hunter-foragers, of 10,000-40,000
> years ago, and the post-agricultural humans that were their immediate
> descendants.
> Claiming that their brains were substantially different in
> "neural density/efficiency" requires substantial justification (that appears
> nowhere in the scientific literature). Comparing them to a sperm whale is
> simply specious.

No justification is possible without a cryotank full of preserved Pleistocene brains. ... and if that ever shows up it'll raise many more questions than answers. Of course the sperm whale comment was specious. ;)

> McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the
> relationship between in vivo brain volume and intelligence. Intelligence,
> 33, 337-346
> http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf
> Even if you don't buy that argument, it will be difficult to claim that a
> slightly bigger brain made our immediate ancestors *dumber*.

I think the margin of error in measuring intelligence is far higher than the performance differences between the various models. Even with some magical means of copying the structural bits of a brain, the fuel going into it probably has similar performance impact as any other machine. ex: High octane fuel & perfect maintenance regimen on a racecar yields significantly better output than lower quality fuel/care on an engine identically machined to within five-nines tolerance. Given the range of energy metabolism, food quality, brain usage training, etc., it's almost impossible to compare two modern brains let alone distant time period brains.

> The anatomically modern human was selected for by millions of years of
> hunting and foraging. (Orrorin, Sahelanthropus, and Ardipithecus -> Homo
> sapiens sapiens) Any subsequent change due to a few thousand years of
> agricultural practices is sufficiently subtle that it hasn't affected our
> morphology -- and, in fact, we're still arguing over whether it exists.
>
> My point stands: intelligence must have been not just valuable, but
> *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have
> been selected for it. (Brain size of common human/chimp/bonobo ancestors:
> ~350cc. Brain size of anatomically modern humans: ~1300cc.)

Modern human was also selected for running away from things that we couldn't kill first. Probably a considerable amount of our cooperative behaviors came from the discovery that many small animals are able to overpower a large threat when they work together - utilizing that prized possession: intelligence.

Have you considered that perhaps intelligence is only secondarily selected for? Perhaps the more general governing rule is energy efficiency. The intelligence to do more work with less effort facilitates energy efficiency, so it has value. Tools make difficult tasks easier, so they become valuable too. Is a back-hoe inherently valuable? Only if the job is to dig. Without the task, that tool is a liability. Nature doesn't need overt intelligence for the energy efficient to proliferate; and by doing so the environment is made more competitive.

From msd001 at gmail.com Fri Nov 12 02:19:58 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 11 Nov 2010 21:19:58 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net>
Message-ID:

2010/11/11 John Clark :
> There is no identity issue, there is only an identity superstition.

"There is no Dana; only Zuul" - Zuul

From msd001 at gmail.com Fri Nov 12 02:39:46 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 11 Nov 2010 21:39:46 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: <4CDC649A.7030409@speakeasy.net>
References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net>
Message-ID:

2010/11/11 Alan Grimes :
> Absurd because ideas can't think of ideas. I can think of ideas so
> therefore I'm not any set of ideas.

Absurd? Absurd because of another Absurdity? You are making a point by authority when you have no authority. I recommend you try the exercise Stathis suggested. Only after a discourse where we agree on some grounds can you begin to build a convincing argument.

> Stick a few electrodes on my skull and you'll see my EEG, you can even
> determine my state of consciousness from it. Combining that with
> anatomical evidence, you can prove that a brain is a person.
>
> If you do the same to a computer you will not be able to detect any
> differences except for the general level of computational activity,
> which has no inherent relationship to the state of the upload.

Can you post a URL to the records you kept from this experiment? I would be more likely to conclude that a brain is an unusual piece of meat that, while fresh, is able to produce detectable electrical impulses and that when no longer fresh is able to produce only an offensive odor. Nowhere in that affirmation can I assert your (or anyone else's) consciousness. I am certainly unable to prove that a lump of meat producing electrical impulses is a person. If electrical activity is proof enough of conscious personhood then any common piezoelectric crystal could qualify. Oh right, the EEG is a complex time-dependent series of impulses and a simple oscillating frequency quartz crystal isn't good enough.

When the computer (your second example) starts producing the same time-dependent series of impulses as the control/reference EEG that "proves" the personhood of the brain to which it is hooked, will you concede that the computer is running a person? When the computer-hosted EEG pattern that has already synchronized its pattern with the biologically-hosted EEG pattern proving the conscious personhood of Alan Grimes detects the sudden loss of signal from the biological system, does it report that Alan Grimes has died? Perhaps merely the link was severed, sure.
But let's assume the system failure is not in the link, but in the biological system. As far as I (Mike D.) can tell, the computer-hosted pattern could continue to be fanatically against uploading and send emails to the list as such.

I'll grant that I have no sense of your qualia. I wonder though if you do either. :)

From agrimes at speakeasy.net Fri Nov 12 02:44:05 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Thu, 11 Nov 2010 21:44:05 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net>
Message-ID: <4CDCA9F5.2040208@speakeasy.net>

Mike Dougherty wrote:
> 2010/11/11 John Clark :
>> There is no identity issue, there is only an identity superstition.
>
> "There is no Dana; only Zuul" - Zuul

My, what a lovely singing voice you must have.

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From agrimes at speakeasy.net Fri Nov 12 03:12:04 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Thu, 11 Nov 2010 22:12:04 -0500
Subject: [ExI] Existential Nihilism.
Message-ID: <4CDCB084.4010806@speakeasy.net>

Since I'm the single most provocative poster on this list, I'll keep up my tradition with a spiel for the philosophy which guides my understanding of the universe.

Existential nihilism is a philosophy for understanding the world. It is not intended to make the user look smart to other people; it is absolutely not intended to make you feel good about anything. On the contrary, it is intended to make you feel bad about exactly the things you need to change in yourself to make you feel genuinely good. On the other hand, it does not tell you what you should feel bad about or what you should do about those things. What it does do is blow away all the bullshit you are immersed in from the culture and the propagandists and let you see the true mechanisms behind how the world works.

Existentialism contradicts essentialism. Essentialism insists that everything has an essence that dominates it. For example, that a person is defined by his brain pattern. That this brain pattern can contain one's essence which can be bestowed upon any 8088 that happens to be available. Existentialism says that you are exactly what you are, I am what I am, and the world is what it is.

Sartre gets into a psychological phenomenon he calls "Existential Angst". A great deal of human psychology can be explained by an effort to escape existential angst. Existential angst is what you feel when you switch off your persona and put yourself into a meditative trance (om) where you focus all of your attention on your senses and the nature of your own temporary existence. If you do it right you will feel angst. You will feel limitations in your own body and mind that you spend most of your time ignoring. A good existential nihilist intentionally keeps his mind as close to this feeling as he is able.

Nihilism comes in when you realize that the world is nothing but this. Ideas and concepts are tools, not things. No useless concept should be entertained and all ideas must always be open to examination.
I am not still arguing with the uploaders because I have failed to consider their ideas; I argue with them because I have considered them and cannot reconcile them with my own world view, because they rely on a dualist/essentialist viewpoint. The threads prove that they refuse to leave it alone and continue to try to change my mind even though I have not spent any time in recent memory trying to get into their personal space. (Admittedly, I do definitely take a pot shot at them every now and again but mostly because I feel left out of the transhumanist movement.)

Me: I think the world r0x0r$ and I don't want anyone turning it into computronium.

them: Are you still going on about your pathetic anti-uploading luddism, once uploaded everything will have so many more *PIXELS*!!!

In general their arguments have strongly trended towards being more patronizing, so for that and several other reasons, I won't respond to posts which nit-pick things I've said. I will only respond to truly insightful posts, or sufficiently well crafted flame bait.

Ultimate nihilism is achieved when you see nothing but chemically bonded swarms of atoms around you. By reaching this state, all forms of self-delusion are nullified. Our consciousness, however, defies ultimate nihilism because it exists. We each should believe in our own consciousness and suspect those around us also exist (except for the philosophical, brain-obsessed zombies that I call uploaders).

Now, what do we do with this existence in a wilderness of clumps of atoms? Knowing your own mortality, that becomes one obvious thing to work on. There are other things you might want to change about yourself but that quickly spins off into the world of personal idiosyncrasies. As a matter of personal policy, I don't criticize other people's choices because I don't want to be so judged. I criticize uploaders only because they continue to argue that they should be allowed to reduce the world to computronium for no other reason than that it is the object of their fetish. On the day that main-line transhumanism is about immortality and body modification and uploaders are marginalized, I'll gleefully shut up about it.

Anyone who has ever cracked a textbook knows that survival in the world is a hard, grand challenge problem. It is only our exquisitely evolved forms that allow us to forget this stark truth. Only a transhumanism that faces all the challenges of survival head-on can even hope to improve anything about the human condition. Strong AI is indispensable for even approaching the problem. But then I see several organizations dedicated to brain uploading, a tiny handful of individuals working on AI, and hardly anything at all (explicitly) working towards medical enhancements.

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.

From hkeithhenson at gmail.com Fri Nov 12 04:44:34 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 11 Nov 2010 21:44:34 -0700
Subject: [ExI] Homosexuality was Singularity was EP, was Margaret Mead controversy
Message-ID:

On Thu, Nov 11, 2010 at 8:43 PM, Dave Sill wrote:
>
> On Thu, Nov 11, 2010 at 12:59 PM, Keith Henson wrote:
>
>> Same place as everything else, evolution, selection of genes in the
>> past.
>
> What's the evolutionary/genetic explanation for homosexuality?

That is somewhat the wrong question. The right question is where does heterosexuality come from? We know a considerable amount about this and how it randomly goes "wrong" (to be politically correct).
The embryonic default is female, and so is the default sexual orientation, i.e., attraction to males. We know a considerable amount about the biochemistry of how this happens and what can go "wrong" with it. For example, male homosexuality rises with the number of previous male births for a given mother for a well understood reason.

In the EEA (which included polygamy) a few males being oriented toward males made very little difference in the survival of genes.

I can go into a lot more detail if you really care.

Keith Henson

From jonkc at bellsouth.net Fri Nov 12 05:48:34 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 12 Nov 2010 00:48:34 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: <4CDC649A.7030409@speakeasy.net>
References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net>
Message-ID: <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net>

On Nov 11, 2010, at 4:48 PM, Alan Grimes wrote:
> ideas can't think of ideas.

Absolutely untrue, ideas can be about ideas, in fact most ideas are.

> I'm a strict monist

So you think everything is one thing and you should not break a thing into manageable pieces to understand it. So you can't hope to understand anything until you understand everything. So the most likely result of this philosophy is not understanding anything. So I'm glad I'm not a strict monist.

> I can't imagine any way through which the two can be separated. All such proposals are inherently irrational

I see no irrationality in recognizing that a thing and what a thing does is not the same thing. A race car goes fast but a race car is not a goes fast, nor is a brain a mind.

> Stick a few electrodes on my skull and you'll see my EEG

And stick a few electrodes in a computer motherboard and you'll see its electrical signals.

> you can even determine my state of consciousness from it.

Don't be ridiculous. The only consciousness we can directly observe is our own, other conscious entities can only be assumed from intelligent behavior; and it matters not one bit if that behavior comes from a man or a machine.

> If you do the same to a computer you will not be able to detect any
> differences except for the general level of computational activity,
> which has no inherent relationship to the state of the upload.

I have no idea what that means.

John K Clark

From jonkc at bellsouth.net Fri Nov 12 06:36:17 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 12 Nov 2010 01:36:17 -0500
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: References: <48358.82164.qm@web24905.mail.ird.yahoo.com> <4CDC2471.5040209@lightlink.com> <4CDC2A06.2090300@satx.rr.com> <006801cb81dc$f3313e30$d993ba90$@att.net>
Message-ID: <58D8FE36-3039-4F70-95BC-E03F79A15050@bellsouth.net>

On Nov 11, 2010, at 6:58 PM, John Grigg wrote:
> I have a feeling S.U. *may* get on the SL4 list, but will not be
> allowed to stay on it for very long!!!
Unfortunately the SL4 list is effectively dead; since February you could count the number of posts on the fingers of one hand. The Singularity list has a bit more life.

John K Clark

From jonkc at bellsouth.net Fri Nov 12 06:27:45 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 12 Nov 2010 01:27:45 -0500
Subject: [ExI] Existential Nihilism.
In-Reply-To: <4CDCB084.4010806@speakeasy.net>
References: <4CDCB084.4010806@speakeasy.net>
Message-ID: <6794E5E5-C526-476E-98F8-5646CDDF3C10@bellsouth.net>

On Nov 11, 2010, at 10:12 PM, Alan Grimes wrote:
> Since I'm the single most provocative poster on this list

Provocative? Yours is the conventional (and erroneous) view believed by 99.9% of the general public and even most members of this list who should know better.

> Existentialism says that you are exactly what you are, I am what I am, and the world is what it is.

Yes, I must admit that's true, A is most certainly equal to A, but that revelation doesn't strike me as being particularly deep.

> I don't want anyone turning it [the world] into computronium.

Fine, there is no disputing matters of taste, but your personal wishes or mine on this matter are irrelevant.

> Our consciousness, however, defies ultimate nihilism because it exists.

What's with this "our" business? MY consciousness exists, I only have theories about yours.

> the philosophical, brain-obsessed zombies that I call uploaders).

Uploaders like me are mind-obsessed, you are brain-obsessed; and if you believe in philosophical zombies then you can't believe in Darwin's Theory of Evolution because the two are 100% incompatible.

John K Clark

From steinberg.will at gmail.com Fri Nov 12 09:29:52 2010
From: steinberg.will at gmail.com (Will Steinberg)
Date: Fri, 12 Nov 2010 03:29:52 -0600
Subject: [ExI] A humble suggestion
Message-ID:

The thing is, collected here in the ExI chat list are a pretty handy set of thinkers/engineers, spread around the world (sort of.) In fact, I can generalize this fact to say that almost all of the people interested in this movement fall into that category as well. Now look. This is a present dropped into your lap. Instead of only discussing lofty ideals and philosophy, we (H+) should focus on the engineering of tools which will eventually be very important in the long run for humanity, and for our goals in particular.

List of tools we need to invent/things we need to do:

-A very good bidirectional speech-to-speech translator. For spreading the gospel, once H+ wisens up enough to start including the proletariat.

-Neoagriculture. This would mean better irrigation systems, GMO crops that can easily harness lots of sun energy and produce more food, maybe machines/instructions for diy fertilizer.

-Better Grid--test experimental grid where people opt to operate, on property, efficient windmills/solar panels/any electricity they can make for $$$

-Housing projects that work, or some sort of thing where you pay people to build their own house/project building.

-Fulfilling jobs for proles that also help society/space travel/humanism/H+.

-So many more, I know you can think of some! I bet you have pet projects like these. Ideas, at least.

By Le Châtelier's principle, improving these fucked up problems that exist for much of society will give us much more leeway and ability to do transhumanisty things, AND we can do them in the meantime.
It has to happen eventually, unless you have some fancy vision of the H+ elect ascending to cyberheaven and leaving everyone else behind.

Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to problematic regions, experimenting with those tools up there. Everyone will love H+. The movement will have lots of social power and then we can get shit done. Right?

From lists1 at evil-genius.com Fri Nov 12 10:04:08 2010
From: lists1 at evil-genius.com (lists1 at evil-genius.com)
Date: Fri, 12 Nov 2010 02:04:08 -0800
Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =)
In-Reply-To: References: Message-ID: <4CDD1118.2090502@evil-genius.com>

>> > McDaniel, M.A. (2005) Big-brained people are smarter: A meta-analysis of the
>> > relationship between in vivo brain volume and intelligence. Intelligence,
>> > 33, 337-346
>> > http://www.people.vcu.edu/~mamcdani/Big-Brained%20article.pdf
>> > Even if you don't buy that argument, it will be difficult to claim that a
>> > slightly bigger brain made our immediate ancestors *dumber*.
> I think the margin of error in measuring intelligence is far higher
> than the performance differences between the various models. Even
> with some magical means of copying the structural bits of a brain, the
> fuel going into it probably has similar performance impact as any
> other machine. ex: High octane fuel & perfect maintenance regimen
> on a racecar yields significantly better output than lower quality
> fuel/care on an engine identically machined to within five-nines
> tolerance. Given the range of energy metabolism, food quality, brain
> usage training, etc. it's almost impossible to compare two modern
> brains let alone distant time period brains.

It is well established that the hunter-forager diet is superior to the post-agricultural diet in all respects:
http://www.ajcn.org/cgi/content/full/81/2/341

...as corroborated by the fact that all available indicators of health (height, weight, lifespan) crash immediately when a culture takes up farming -- and skeletal disease markers increase dramatically.
http://www.environnement.ens.fr/perso/claessen/agriculture/mistake_jared_diamond.pdf

And it wasn't until the year 1800 that residents of the richest countries of Europe reached the same caloric intake as the average tribe of hunter-gatherers.
http://www.econ.ucdavis.edu/faculty/gclark/papers/Capitalism%20Genes.pdf

Which brings me back to my original point: it takes substantial intelligence to make stone tools and weapons, memorize a territory of tens (if not hundreds) of square miles, know where prey and edibles will live and grow throughout the seasons, find them, perhaps chase and kill them, butcher them, start fires with nothing but a couple pieces of wood, etc., etc. If it didn't, intelligence would not have been selected for, and we'd still be little 3-foot Ardipithecuses with 350cc brains.

I'm genuinely not sure whether you're objecting to my point, or just throwing up objections with no supporting evidence because you like messing with people. I'm going to start asking you to provide evidence, instead of just casting a bunch of doubts with no basis and no theory to replace what you're attacking. That's a creationist tactic.

>> > The anatomically modern human was selected for by millions of years of
>> > hunting and foraging.
>> > (Orrorin, Sahelanthropus, and Ardipithecus -> Homo
>> > sapiens sapiens) Any subsequent change due to a few thousand years of
>> > agricultural practices is sufficiently subtle that it hasn't affected our
>> > morphology -- and, in fact, we're still arguing over whether it exists.
>> >
>> > My point stands: intelligence must have been not just valuable, but
>> > *absolutely necessary* for hunter-foragers -- otherwise we wouldn't have
>> > been selected for it. (Brain size of common human/chimp/bonobo ancestors:
>> > ~350cc. Brain size of anatomically modern humans: ~1300cc.)
> Modern human was also selected for running away from things that we
> couldn't kill first. Probably a considerable amount of our
> cooperative behaviors came from the discovery that many small animals
> are able to overpower a large threat when they work together -
> utilizing that prized possession: intelligence.

Everything is selected for running away from things we can't kill first. Even lions and crocodiles run away from hippos.

> Have you considered that perhaps intelligence is only secondarily
> selected for? Perhaps the more general governing rule is energy
> efficiency.

Everything is secondarily selected for, relative to survival through at least one successful reproduction. I'm not sure that's a useful distinction.

And I refuse to enter into a "define intelligence" clusterf**k, because it's all completely ancillary to my original point.

From bbenzai at yahoo.com Fri Nov 12 12:56:32 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Fri, 12 Nov 2010 12:56:32 +0000 (GMT)
Subject: [ExI] Let's play What If.
In-Reply-To: Message-ID: <789045.60303.qm@web114402.mail.gq1.yahoo.com>

John K Clark wrote:
> mind is more important than brain; at least it
> is in this mind's opinion.

And Alan Grimes replied:
> Because I'm a strict monist, I can't imagine any way
> through which the two can be separated.

As I've already pointed out, this 'monism' of yours seems to reject what other people have called 'property dualism', or the concept that objects have properties. This concept is not an opinion, it's an established fact. Nobody can rationally deny it.

To acknowledge that material objects have non-material (and non-mystical) properties is not really 'dualism' at all, it's materialism, and the materialistic view leads inexorably to the possibility of uploading, as recognised by most transhumanists.

Your statement above implies that you can't see any way that a dog and a bark can be separated. I can think of dozens of ways, and I'm sure you can too if you try. The point is that it's these non-material (and non-mystical) properties that are important, not the dumb matter that exhibits them.

The thing that mystifies me is why the argument that two atoms of the same element are completely and utterly indistinguishable and interchangeable, isn't decisive in this discussion. The fact that I've survived endless changes of material proves conclusively that I am not the matter that my body and brain are made from. Why is this so hard to understand?

Ben Zaiboc

From kanzure at gmail.com Fri Nov 12 13:18:05 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 12 Nov 2010 07:18:05 -0600
Subject: [ExI] A humble suggestion
In-Reply-To: References: Message-ID:

2010/11/12 Will Steinberg
> Instead of only discussing lofty ideals and philosophy, we (H+) should
> focus on the engineering of tools which will eventually be very important in
> the long run for humanity, and for our goals in particular.
Well, what have you been working on?

wetware? http://groups.google.com/group/diybio
hardware? http://groups.google.com/group/openmanufacturing
software? Let's hear it.

> List of tools we need to invent/things we need to do:

very rudimentary: http://diyhpl.us/cgit/skdb/tree/doc/proposals/trans-tech.yaml
(It's apt-get for technology.)

> -Housing projects that work, or some sort of thing where you pay people to
> build their own house/project building.

Hextatic? Bucky's dreams?

- Bryan
http://heybryan.org/
1 512 203 0507

From bbenzai at yahoo.com Fri Nov 12 13:06:02 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Fri, 12 Nov 2010 13:06:02 +0000 (GMT)
Subject: [ExI] A humble suggestion
In-Reply-To: Message-ID: <168050.32379.qm@web114419.mail.gq1.yahoo.com>

Will Steinberg suggested:
> Instead of only discussing lofty ideals and
> philosophy, we (H+) should focus on the engineering of
> tools which will eventually be very important in the long run
> for humanity, and for our goals in particular.

I'm sure that many of us can do both. In fact, I know for a fact that some of us are, and I'm pretty sure that there are quite a few people 'doing stuff' as well as talking on here.

I think that it's important to not only do things, but to also talk about them or about their theoretical and philosophical aspects. Also, talking is a form of doing. I wonder how many lurkers there are here, who are possibly being affected by the ideas we bandy about?

Ben Zaiboc

From agrimes at speakeasy.net Fri Nov 12 13:38:57 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Fri, 12 Nov 2010 08:38:57 -0500
Subject: [ExI] Let's play What If.
In-Reply-To: <789045.60303.qm@web114402.mail.gq1.yahoo.com>
References: <789045.60303.qm@web114402.mail.gq1.yahoo.com>
Message-ID: <4CDD4371.1010403@speakeasy.net>

Ben Zaiboc wrote:
> And Alan Grimes replied:
>> Because I'm a strict monist, I can't imagine any way
>> through which the two can be separated.
>
> As I've already pointed out, this 'monism' of yours
> seems to reject what other people have called
> 'property dualism', or the concept that objects have
> properties. This concept is not an opinion, it's an
> established fact. Nobody can rationally deny it.

Most of the things that people call "properties" are actually artifacts of human perception. Anything beyond what is strictly scientifically detectable (such as the number of atoms in a substance) is nothing more than something that a human imagines and then forces on the perception.

This gets to the Platonic theory of forms. What it means is that things such as vases, speakers, symbols on calculator keys, can only exist in the mind. In the world there is nothing but arrangements of matter which may or may not closely resemble the form you choose to assert over it.

> To acknowledge that material objects have non-material
> (and non-mystical) properties is not really 'dualism'
> at all, it's materialism, and the materialistic view
> leads inexorably to the possibility of uploading, as
> recognised by most transhumanists.

Bullshit.

> Your statement above implies that you can't see any
> way that a dog and a bark can be separated. I can
> think of dozens of ways, and I'm sure you can too if
> you try.

The sound of a bark is not technically a bark.

> The point is that it's these non-material (and
> non-mystical) properties that are important, not the
> dumb matter that exhibits them.
The dumb matter always overrules our stupid, ill-conceived notions about it. > The thing that mystifies me is why the argument that > two atoms of the same element are completely and > utterly indistinguishable and interchangeable isn't > decisive in this discussion. The fact that I've > survived endless changes of material proves > conclusively that I am not the matter that my body and > brain are made from. Why is this so hard to > understand? Non sequitur, because the routine replacement of some of your atoms at some low rate is not evidence of anything whatsoever. It means nothing more than that it is a natural function of your body to replace some of your atoms at some rate. > Ben Zaiboc -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Fri Nov 12 13:58:59 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Fri, 12 Nov 2010 06:58:59 -0700 Subject: [ExI] A humble suggestion Message-ID: On Fri, Nov 12, 2010 at 5:00 AM, Will Steinberg wrote: > The thing is, collected here in the ExI chat list are a pretty handy set of > thinkers/engineers, spread around the world (sort of.) In fact, I can > generalize this fact to say that almost all of the people interested in this > movement fall into that category as well. Now look. This is a present > dropped into your lap. Instead of only discussing lofty ideals and > philosophy, we (H+) should focus on the engineering of tools which will > eventually be very important in the long run for humanity, and for our goals > in particular. I hope you have better luck than I had over the past several years. > List of tools we need to invent/things we need to do: > > -A very good bidirectional speech-to-speech translator. For spreading the > gospel, once H+ wisens up enough to start including the proletariat. There is considerable work being done in this area. I think Google is one of the companies working on this. They are doing it (as I recall) for phone service translation, but it should work on a single phone. The computation is currently intense enough to need cloud computing. > -Neoagriculture. This would mean better irrigation systems, GMO crops that > can easily harness lots of sun energy and produce more food, maybe > machines/instructions for diy fertilizer. This is a hard problem. You know how the efficiency of solar PV systems sucks? Well photosynthesis is a lot worse and there are good reasons to think it can't be made much better. As for diy fertilizer, that's a snap. Pee on your lawn. > -Better Grid--test experimental grid where people opt to operate, on > property, efficient windmills/solar panels/any electricity they can make for > $$$ Putting power into power lines has been solved. The problem with solar and wind is they are dilute and intermittent. So it takes large and expensive structures to collect energy and then there is the storage problem, which can be ignored if the source is small compared to other sources. A kW full time supplies about $800 of electricity in ten years for each penny you charge for it. So if you want to sell power low enough to undercut coal at around 4 cents, you have to sell the power for 2 cents and the cost per kW would need to be $1600. The cost for renewable sources is 10-20 times that high. I have for some years reported here on conceptual progress with power satellite transportation and more recently about StratoSolar. If you want to work on such projects, I often have spreadsheets or mathematical models that need review.
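A minimal sketch of that break-even arithmetic, in Python (assuming a 100% capacity factor and the ten-year horizon used in the figures above; the function name is illustrative, not from any library):

def ten_year_revenue_per_kw(price_cents_per_kwh, years=10, capacity_factor=1.0):
    # A 1 kW source running full time generates 24 * 365 kWh per year.
    kwh = 24 * 365 * years * capacity_factor
    return kwh * price_cents_per_kwh / 100.0  # dollars per kW of capacity

print(ten_year_revenue_per_kw(1))  # ~876: each penny per kWh is worth roughly $800-900 per kW over ten years
print(ten_year_revenue_per_kw(2))  # ~1752: selling at 2 cents, capital cost per kW has to land near the $1600 quoted

At renewable capital costs 10-20 times that figure, the shortfall is plain.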
Or I will check what you can model. > -Housing projects that work, or some sort of thing where you pay people to > build their own house/project building. Have you ever built a house? It takes a considerable collection of skills. > -Fulfilling jobs for proles that also help society/space travel/humanism/H+. > > -So many more, I know you can think of some! I bet you have pet projects > like these. Ideas, at least. > > > By Le Châtelier's principle, improving these fucked up problems that exist > for much of society will give us much more leeway and ability to do > transhumanisty things, AND we can do them in the meantime. It has to happen > eventually, unless you have some fancy vision of the H+ elect ascending to > cyberheaven and leaving everyone else behind. > > Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to > problematic regions, experimenting with those tools up there. Everyone will > love H+. The movement will have lots of social power and then we can get > shit done. Right? I started off thinking you were serious, but by the time I reached this point . . . you must be putting us on. Keith From x at extropica.org Fri Nov 12 14:07:55 2010 From: x at extropica.org (x at extropica.org) Date: Fri, 12 Nov 2010 06:07:55 -0800 Subject: [ExI] Let's play What If. In-Reply-To: <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: 2010/11/11 John Clark : > The only consciousness we can directly observe is our > own... John, I like your hard-edged no-nonsense approach to much of the content of this discussion, but your assertion that I quoted above highlights the incoherence at the very core demonstrated by you, Descartes, and many others. Such a singularity of self, with its infinite regress, can't be modeled as a physical system. Nor is it needed. See Dennett for a cogent philosophical explanation, or Ismael & Pollock's nolipsism for a logical-semantic view, or Metzinger's Being No One for a very detailed exposition of the experimental evidence, or Hofstadter's Strange Loop for a sincere but more muddled account, or even Alan Watts' The Taboo Against Knowing Who You Are for a more intuitionist approach. Digest and integrate this thinking, and then we might be able to move this conversation forward with extension from a more coherent basis. - Jef From singularity.utopia at yahoo.com Fri Nov 12 11:19:53 2010 From: singularity.utopia at yahoo.com (Singularity Utopia) Date: Fri, 12 Nov 2010 11:19:53 +0000 (GMT) Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? Message-ID: <472742.97978.qm@web24912.mail.ird.yahoo.com> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, that's exactly the info I needed. John Grigg, you say I may not be allowed to stay long on the SL4 list? Why is this, are Singularitarians an intolerant group leaning towards fascism? Spike, you say Eliezer's theme was: "The singularity is coming regardless.
Let us work to make it a positive thing."... and you say: "My constructive criticism of your earlier posts was that your theme is: the singularity will be a positive thing regardless." Yes it is my intention to make the Singularity a positive thing regardless. I say the Singularity will be a positive thing regardless of what anyone else says because, and this is the important bit, the power of my expectations positively manifested WILL create utopia. It is all about my determination, self-belief, the power of expectations, self-confidence, confidence in my abilities. You may think my confidence is blind, misguided, or foolishly overconfident but I assure you I will create utopia even if I have to do it all on my own, battling against a legion of pessimists. In a sane world I cannot see why Eliezer would think my attitude would be annoying or dangerous, but the world is insane therefore irrational responses to my views are likely. I actually think people who are overly-obsessed with friendly AI are very dangerous due to their misguided attempts to attain rationality and "overcome bias". My following webpage regarding the dangerous nature of people obsessed with friendly-AI could possibly enlighten you: http://singularity-2045.org/ai-dangerous-hostile-unfriendly.html Spike you say: "The singularity is not necessarily a good thing, but we know that a no-singularity future is a bad thing." I assure you the Singularity WILL absolutely without doubt be a good thing but this will not be through my inaction, it will be because the power of my intellect positively manifested has made the Singularity a good thing. I will utilize a Self-Fulfilling Prophecy, which I have previously mentioned. Furthermore a negative intelligence explosion would be oxymoronic. Intelligence will be "intelligent" therefore the explosion will be utopian if truly intelligent people define "intelligence". The problem with some people who think they are intelligent is that they are misguided about the definition of intelligence, they are actually rather stupid. I will utilize the concept of self-fulfilling prophecy to create utopia. There is no need to doubt the future. Utopia is coming. Rest assured you can expect utopia. I encourage you all to put in the extra effort to make it happen sooner instead of later. I am the Singularity! I am utopia. http://en.wikipedia.org/wiki/Self-fulfilling_prophecy Regarding the fallacy of "Overcoming Bias" I will soon publish a rebuttal on my blogs. The desire to overcome bias is in itself a bias but such pseudo-rational people are unaware of their bias due to the fact they are "bias-deniers" (bias-fascists): they are overcoming bias thus they are creating a blind-spot regarding their bias. Bias cannot be overcome, but if you try to overcome it you will decrease your self-awareness. http://yudkowsky.net/rational/overcoming-bias Here is my forthcoming blog (in progress) regarding "the Bias of overcoming Bias": The major bias plaguing so-called rationalists is their glaring blind-spot regarding the power of Self-fulfilling Prophecy. Contrary to their biased assertions (that bias should be overcome), I state bias is a fundamental part of human consciousness. Bias should be utilized constructively, it should not be transcended. Self-fulfilling Prophecy is a preeminent usage of bias. The solution is to be highly aware. To transcend bias is tantamount to lobotomizing the mind. Bias is the heart of evaluation, judgment, existence. We are biased regarding pain and pleasure for example.
If we were not biased regarding pain and pleasure we would be mindless robots. Do Transhumanists seek the evolution of the human organism to a point where we are stoical machines indifferent to emotions? Wishful-thinking, positive-thinking, and overconfidence can be very effective when applied via keen intellect. Sadly the so-called "rationalist-less-wrong" movement (overcoming bias) and similar Transhuman-futurist-cliques are deficient in intellect. Furthermore they are unaware of their intellectual deficiencies due to their bias; they are biased about bias thus they want to overcome it, but they are unaware of their bias. http://www.overcomingbias.com is a good example of flawed thinking. Sadly I suspect the proponents of overcoming bias and other similar endeavours will be negatively-biased regarding my contributions? Regards Singularity Utopia http://en.wikipedia.org/wiki/Self-fulfilling_prophecy http://singularity-2045.org/hyper-mind-explosion.html http://singularity-2045.org/ Here is an article I wrote about subjectivity/objectivity a while ago: http://spacecollective.org/SingularityUtopia/6133/Objectivity-Fallacy-a-plea-for-increased-subjectivity UTOPIA IS COMING! -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Fri Nov 12 16:03:53 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 11:03:53 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: <4CDD6569.5070509@lightlink.com> Singularity Utopia wrote: > Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, > that's exactly the info I needed. > > John Grigg, you say I may not be allowed to stay long on the SL4 list? > Why is this, are Singularitarians an intolerant group leaning towards > fascism? Er.... you may be misunderstanding the situation. ;-) You will be unwelcome and untolerated on SL4, because: a) The singularity is, for Eliezer, a power struggle. It is a matter of which personality "owns" these ideas .... who determines the agenda, who is seen as the pre-eminent power broker .... who has the largest army of volunteers to spread the message. And in that situation, you, my friend, are a Threat. Even if your ideas were more sensible than his you would be attacked and denounced, for the simple reason that you would not be meekly conforming to the standard view of the singularity (as defined by The Wise One). b) Your assertions are wildly egotistical (viz "I am the Singularity! I am utopia"). This is garbage: you are not the singularity, you are a person. The singularity is a posited future event, and a set of ideas about that event. Your ego is, sadly, not enough to define or shape that event. Now, history may well turn out in such a way that one person's ego really does define and shape the singularity. But you can bet your life that that person will never do it by openly DECLARING that they are going to shape and define the thing. Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid. So have fun on SL4, if there is anything left of it.
If you don't actually get banned within a couple of months it will be because SL4 is (as John Clark claims) actually dead, and nobody gives a damn what you say there. Richard Loosemore From spike66 at att.net Fri Nov 12 15:48:47 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 07:48:47 -0800 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: <006801cb8281$188f7d00$49ae7700$@att.net> . On Behalf Of Will Steinberg . -Housing projects that work, or some sort of thing where you pay people to build their own house/project building. I do have a better idea: housing projects that work, some sort of thing where the builder collects money from herself to buy the land and the materials, then builds her own house. Cuts out the inefficient and corrupt middle man. -Fulfilling jobs for proles that also help society/space travel/humanism/H+. I do hope you succeed at that one, Will. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From protokol2020 at gmail.com Fri Nov 12 15:59:42 2010 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Fri, 12 Nov 2010 16:59:42 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: As I see it, they are biased toward Bayes. They will say that a good bias is not a bias at all. It is only bias when it's wrong. And they want to be less wrong. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Fri Nov 12 16:00:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Fri, 12 Nov 2010 11:00:28 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CDD4371.1010403@speakeasy.net> References: <789045.60303.qm@web114402.mail.gq1.yahoo.com> <4CDD4371.1010403@speakeasy.net> Message-ID: <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> On Nov 12, 2010, at 8:38 AM, Alan Grimes wrote: > Most of the things that people call "properties" are actually artifacts > of human perception. My consciousness is obviously a human perception. An artifact is an undesirable alteration of data, so unless you are willing to argue that consciousness is undesirable, consciousness is not an artifact. However it is true that consciousness is a side effect of intelligence; Darwin taught us that in 1859. > Anything beyond what is strictly scientifically detectable (such as the number of atoms in a substance) is nothing more than something that a human imagines and then forces on the perception. Or to put it another way, all the really important things are the invention of mind, the invention of what the brain does. > What it means is that things such as vases, speakers, symbols on calculator keys, can only exist in > the mind. Yes, I couldn't have put it better myself, but I'm surprised to hear you say that as it strengthens the case for uploading. The thing we value, the thing we want to survive, is not 70 kilograms of hydrogen oxygen carbon and nitrogen but our wife or husband and ourselves. > In the world there is nothing but arrangements of matter Correct again, and atoms are generic and the information on how they are arranged can be duplicated; remind me again why uploading won't work. > Non sequitur, because the routine replacement of some of your atoms at > some low rate The term "low rate" has meaning only if the amount of time is specified.
On a particle physics timescale the rate of atomic replacement is indeed low, but on a geological timescale it is virtually instantaneous. There is no preferred timescale in physics; one is as valid as another. > is not evidence of anything whatsoever. It certainly is not evidence that atoms have anything to do with personal identity; atoms have no individuality themselves so it's not very surprising that they can't confer this property to us. > Bullshit. My lawyers will be contacting you on a matter involving copyright infringement. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Fri Nov 12 16:59:35 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Fri, 12 Nov 2010 12:59:35 -0400 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <472742.97978.qm@web24912.mail.ird.yahoo.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> Message-ID: >>I say the Singularity will be a positive thing regardless of what anyone else says because, and this is the important bit, the power of my expectations positively manifested WILL create utopia<< How can an expectation affect an outcome when we move beyond the point (singularity) where stochastic predictions and expectations based on them are no longer possible? Darren 2010/11/12 Singularity Utopia > Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, > that's exactly the info I needed. > > John Grigg, you say I may not be allowed to stay long on the SL4 list? Why > is this, are Singularitarians an intolerant group leaning towards fascism? > > Spike, you say Eliezer's theme was: "The singularity is coming regardless. > Let us work to make it a positive thing."... and you say: "My constructive > criticism of your earlier posts was that your theme is: the singularity will > be a positive thing regardless." > > Yes it is my intention to make the Singularity a positive thing regardless. > I say the Singularity will be a positive thing regardless of what anyone > else says because, and this is the important bit, the power of my > expectations positively manifested WILL create utopia. It is all about my > determination, self-belief, the power of expectations, self-confidence, > confidence in my abilities. You may think my confidence is blind, misguided, > or foolishly overconfident but I assure you I will create utopia even if I > have to do it all on my own, battling against a legion of pessimists. In a > sane world I cannot see why Eliezer would think my attitude would be > annoying or dangerous, but the world is insane therefore irrational > responses to my views are likely. > > I actually think people who are overly-obsessed with friendly AI are very > dangerous due to their misguided attempts to attain rationality and > "overcome bias". My following webpage regarding the dangerous nature of > people obsessed with friendly-AI could possibly enlighten you: > > http://singularity-2045.org/ai-dangerous-hostile-unfriendly.html > > Spike you say: "The singularity is not necessarily a good thing, but we > know that a no-singularity future is a bad thing." > > I assure you the Singularity WILL absolutely without doubt be a good thing > but this will not be through my inaction, it will be because the power of my > intellect positively manifested has made the Singularity a good thing. I > will utilize a Self-Fulfilling Prophecy, which I have previously mentioned.
> Furthermore a negative intelligence explosion would be oxymoronic. > Intelligence will be "intelligent" therefore the explosion will be utopian > if truly intelligent people define "intelligence". The problem with some > people who think they are intelligent is that they are misguided about the > definition of intelligence, they are actually rather stupid. > > I will utilize the concept of self-fulfilling prophecy to create utopia. > There is no need to doubt the future. Utopia is coming. Rest assured you can > expect utopia. I encourage you all to put in the extra effort to make it > happen sooner instead of later. I am the Singularity! I am utopia. > > http://en.wikipedia.org/wiki/Self-fulfilling_prophecy > > Regarding the fallacy of "Overcoming Bias" I will soon publish a rebuttal > on my blogs. The desire to overcome bias is in itself a bias but such > pseudo-rational people are unaware of their bias due to the fact they are > "bias-deniers" (bias-fascists): they are overcoming bias thus they are > creating a blind-spot regarding their bias. Bias cannot be overcome, but if > you try to overcome it you will decrease your self-awareness. > > http://yudkowsky.net/rational/overcoming-bias > > Here is my forthcoming blog (in progress) regarding "the Bias of overcoming > Bias": > > The major bias plaguing so-called rationalists is their glaring blind-spot > regarding the power of Self-fulfilling Prophecy. > > Contrary to their biased assertions (that bias should be overcome), I state > bias is a fundamental part of human consciousness. Bias should be utilized > constructively, it should not be transcended. Self-fulfilling Prophecy is a > preeminent usage of bias. The solution is to be highly aware. To transcend > bias is tantamount to lobotomizing the mind. Bias is the heart of > evaluation, judgment, existence. We are biased regarding pain and pleasure > for example. If we were not biased regarding pain and pleasure we would be > mindless robots. Do Transhumanists seek the evolution of the human organism > to a point where we are stoical machines indifferent to emotions? > > Wishful-thinking, positive-thinking, and overconfidence can be very > effective when applied via keen intellect. Sadly the so-called > "rationalist-less-wrong" movement (overcoming bias) and similar > Transhuman-futurist-cliques are deficient in intellect. Furthermore they are > unaware of their intellectual deficiencies due to their bias; they are > biased about bias thus they want to overcome it, but they are unaware of > their bias. > > http://www.overcomingbias.com is a good example of flawed thinking. > > Sadly I suspect the proponents of overcoming bias and other similar > endeavours will be negatively-biased regarding my contributions? > > Regards > > Singularity Utopia > > http://en.wikipedia.org/wiki/Self-fulfilling_prophecy > > http://singularity-2045.org/hyper-mind-explosion.html > > http://singularity-2045.org/ > > Here is an article I wrote about subjectivity/objectivity a while ago: > > > http://spacecollective.org/SingularityUtopia/6133/Objectivity-Fallacy-a-plea-for-increased-subjectivity > > UTOPIA IS COMING! > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge."
- Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From steinberg.will at gmail.com Fri Nov 12 16:09:43 2010 From: steinberg.will at gmail.com (Will Steinberg) Date: Fri, 12 Nov 2010 10:09:43 -0600 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: Hmm...I *did* send this. I guess what this was a plea for was a mobile transhumanist revolution. I didn't mean to be naive/patronizing or mean that these things aren't being worked on, but only that it would be germane to our task for H+ folks to begin gathering a following by developing these and giving them where they are needed. My point is that transhumanism should begin to engage the public, especially the sector that needs the most help, because we will need them eventually. It is very important that transhumanists don't get wrapped up in this whole anti-science movement. There is a real opposition to science itself that has stubbornly persisted, no matter what technology does. A good way to make people like science is to use it to solve their horrible problems. While it's great to speculate on this chat list, nobody can see it. Not gaining any 'cred' so to speak. And while this 'cred' isn't the most important thing there is...it is important, no? Sorry if my first message (or this one) came/comes off as ridiculous. Sometimes I think I am communicating an idea when in truth I am failing to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksei at iki.fi Fri Nov 12 21:11:27 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Fri, 12 Nov 2010 23:11:27 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 6:03 PM, Richard Loosemore wrote: > Singularity Utopia wrote: >> >> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, >> that's exactly the info I needed. >> >> John Grigg, you say I may not be allowed to stay long on the SL4 list? Why >> is this, are Singularitarians an intolerant group leaning towards fascism? > > Er.... you may be misunderstanding the situation. ;-) > > You will be unwelcome and untolerated on SL4, because: > > a) The singularity is, for Eliezer, a power struggle. It is a matter of > which personality "owns" these ideas .... who determines the agenda, who is > seen as the pre-eminent power broker .... who has the largest army of > volunteers to spread the message. And in that situation, you, my friend, > are a Threat. Even if your ideas were more sensible than his you would be > attacked and denounced, for the simple reason that you would not be meekly > conforming to the standard view of the singularity (as defined by The Wise > One). Might as well comment on Loosemore's mudslingings for a change... Richard Loosemore is himself one of the very few people who have ever been kicked out from SL4 (the vast majority of people who strongly disagree with e.g. Eliezer of course haven't been kicked out), and ever since he has been talking nasty about Eliezer. Apparently Loosemore's beliefs now include e.g. that the person calling himself "Singularity Utopia" would be felt by Eliezer to be a threat :) In light of such statements, I invite people to make their own judgements on how clearheaded Loosemore manages to be when commenting on Eliezer.
To Singularity Utopia: You are free to join SL4, as everyone is (though that list indeed isn't used much these days). But I'm quite certain joining will not result in you successfully managing to contact Eliezer, and it is *not* appropriate to join just for that reason; that would be abuse of the list (even though the contact attempt would likely fail). As Eliezer notes on his homepages that you have read, the primary way to contact him is email. It's just that he gets so much email, including from a large number of crazy people, that he of course doesn't answer them all. (You, unfortunately, are one of those crazy people who pretty surely will be ignored. So in the end, on this matter it would be appropriate of you to accept that -- like all people -- Eliezer should have the right to choose who he spends his time talking to, and that he most likely would not want to correspond with you.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From pharos at gmail.com Fri Nov 12 22:33:19 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Nov 2010 22:33:19 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: > As Eliezer notes on his homepages that you have read, the primary way > to contact him is email. It's just that he gets so much email, > including from a large number of crazy people, that he of course > doesn't answer them all. (You, unfortunately, are one of those crazy > people who pretty surely will be ignored. So in the end, on this > matter it would be appropriate of you to accept that -- like all > people -- Eliezer should have the right to choose who he spends his > time talking to, and that he most likely would not want to correspond > with you.) > > As I understand SU's request, she doesn't particularly want to enter a dialogue with Eliezer. Her request was for an updated version of The Singularitarian Principles Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. Perhaps someone could mention this to Eliezer or point her to more up-to-date writing on that subject? Doesn't sound like an unreasonable request to me. BillK From aleksei at iki.fi Fri Nov 12 22:44:36 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 00:44:36 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 12:33 AM, BillK wrote: > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles > Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me. If people want a new version of Singularitarian Principles to exist, they can write one themselves. Eliezer has no magical authority on the topic, that would necessitate that it should be him. (Also, I doubt Eliezer thinks it important for a new version to exist.) (And if people just want newer things that Eliezer has written, just check his homepage.)
-- Aleksei Riikonen - http://www.iki.fi/aleksei From pharos at gmail.com Fri Nov 12 23:04:07 2010 From: pharos at gmail.com (BillK) Date: Fri, 12 Nov 2010 23:04:07 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: On Fri, Nov 12, 2010 at 10:44 PM, Aleksei Riikonen wrote: > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Eliezer has no magical authority on the > topic, that would necessitate that it should be him. (Also, I doubt > Eliezer thinks it important for a new version to exist.) > > (And if people just want newer things that Eliezer has written, just > check his homepage.) > > I don't disagree with you at all, as I agree with your opinion that Eliezer has no magical authority on that topic. It just seems very unhelpful to abuse enquirers and tell them to use Google. If visitors make a persistent nuisance of themselves, perhaps, but it doesn't seem the best attitude to start off with. BillK From rpwl at lightlink.com Fri Nov 12 23:26:04 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 18:26:04 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <4CDDCD0C.8040208@lightlink.com> Aleksei Riikonen wrote: > Might as well comment on Loosemore's mudslingings for a change... > > Richard Loosemore is himself one of the very few people who have ever > been kicked out from SL4 (the vast majority of people who strongly > disagree with e.g. Eliezer of course haven't been kicked out), and > ever since he has been talking nasty about Eliezer. > > Apparently Loosemore's beliefs now include e.g. that the person > calling himself "Singularity Utopia" would be felt by Eliezer to be a > threat :) In light of such statements, I invite people to make their > own judgements on how clearheaded Loosemore manages to be when > commenting on Eliezer. I feel honored to have been one of the few people to have challenged Yudkowsky's ignorance. It gave me - and anyone else who was knowledgeable enough to have understood what happened - a chance to see him for what he was. Hey, I enjoy speaking the truth about the guy. I do it partly because it is fun to get sycophants like yourself riled up. And, as long as that outrageous, defamatory outburst of his is still online, and not withdrawn, I'm afraid, Aleksei, that he is fair game. ;-) "Singularity Utopia" is not, of course, a threat. You are correct about that: my mistake. He only regards someone as a threat when he realizes that they are smarter than he is, and when they have the moxy to talk about his state of undress .... Richard Loosemore From natasha at natasha.cc Fri Nov 12 23:30:20 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 18:30:20 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> This is an interesting dialogue. I suppose most interesting is the way the Singularity has been obfuscated. Eliezer's interest is FAI, where he has developed a theoretical approach to the topic of superintelligence. His expertise is related to FAI. 
Eli was interested in seed AI, as I recall it. And as far as the Singularity goes, the early experts are Good, Vinge, Broderick and Kurzweil. Since AI, AGI, and FAI are variables of the Singularity, Eli applied this framework to his theory on seed AI and FAI. Eli aligns with Bostrom and Hanson. This is very fortunate for him in light of his nonacademic standing. Regardless, Eli is a delightful speaker. I don't know the value of his work other than being theoretical and stimulating. Natasha Quoting Aleksei Riikonen : > On Fri, Nov 12, 2010 at 6:03 PM, Richard Loosemore > wrote: >> Singularity Utopia wrote: >>> >>> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, >>> that's exactly the info I needed. >>> >>> John Grigg, you say I may not be allowed to stay long on the SL4 list? Why >>> is this, are Singularitarians an intolerant group leaning towards fascism? >> >> Er.... you may be misunderstanding the situation. ;-) >> >> You will be unwelcome and untolerated on SL4, because: >> >> a) The singularity is, for Eliezer, a power struggle. It is a matter of >> which personality "owns" these ideas .... who determines the agenda, who is >> seen as the pre-eminent power broker .... who has the largest army of >> volunteers to spread the message. And in that situation, you, my friend, >> are a Threat. Even if your ideas were more sensible than his you would be >> attacked and denounced, for the simple reason that you would not be meekly >> conforming to the standard view of the singularity (as defined by The Wise >> One). > Might as well comment on Loosemore's mudslingings for a change... > > Richard Loosemore is himself one of the very few people who have ever > been kicked out from SL4 (the vast majority of people who strongly > disagree with e.g. Eliezer of course haven't been kicked out), and > ever since he has been talking nasty about Eliezer. > > Apparently Loosemore's beliefs now include e.g. that the person > calling himself "Singularity Utopia" would be felt by Eliezer to be a > threat :) In light of such statements, I invite people to make their > own judgements on how clearheaded Loosemore manages to be when > commenting on Eliezer. > > To Singularity Utopia: You are free to join SL4, as everyone is > (though that list indeed isn't used much these days). But I'm quite > certain joining will not result in you successfully managing to > contact Eliezer, and it is *not* appropriate to join just for that > reason; that would be abuse of the list (even though the contact > attempt would likely fail). > > As Eliezer notes on his homepages that you have read, the primary way > to contact him is email. It's just that he gets so much email, > including from a large number of crazy people, that he of course > doesn't answer them all. (You, unfortunately, are one of those crazy > people who pretty surely will be ignored. So in the end, on this > matter it would be appropriate of you to accept that -- like all > people -- Eliezer should have the right to choose who he spends his > time talking to, and that he most likely would not want to correspond > with you.) > > -- > Aleksei Riikonen - http://www.iki.fi/aleksei > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Fri Nov 12 23:14:06 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 18:14:06 -0500 Subject: [ExI] Eliezer S.
Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <20101112181406.zvr7bddo0ssss04c@webmail.natasha.cc> Yes, well said. Quoting Aleksei Riikonen : > On Sat, Nov 13, 2010 at 12:33 AM, BillK wrote: >> >> As I understand SU's request, she doesn't particularly want to enter a >> dialogue with Eliezer. Her request was for an updated version of The >> Singularitarian Principles >> Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. >> >> Perhaps someone could mention this to Eliezer or point her to more >> up-to-date writing on that subject? Doesn't sound like an >> unreasonable request to me. > > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Eliezer has no magical authority on the > topic, that would necessitate that it should be him. (Also, I doubt > Eliezer thinks it important for a new version to exist.) > > (And if people just want newer things that Eliezer has written, just > check his homepage.) > > -- > Aleksei Riikonen - http://www.iki.fi/aleksei > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Sat Nov 13 00:11:46 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 16:11:46 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <00f901cb82c7$5cc37fd0$164a7f70$@att.net> .. On Behalf Of Richard Loosemore ... Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid...Richard Loosemore Richard I get a strong feeling I understand why you ended up getting banned on SL4. Regarding Singularity Utopia, I would go this route. SU, take everything you have written about the singularity, imagine it is 1935 and substitute nuclear fission for singularity. How wonderful it will all be, nuclear fission will provide us all with power too cheap to meter, everything will be wonderful, I *AM* nuclear fission, now everyone give me your uranium 235 and I will put it all together in one mass and show you this marvelous substance makes heat, here I will show you my calculations that show how wonderful it will be... spike From algaenymph at gmail.com Fri Nov 12 23:33:29 2010 From: algaenymph at gmail.com (AlgaeNymph) Date: Fri, 12 Nov 2010 15:33:29 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> Message-ID: <4CDDCEC9.8080108@gmail.com> On 11/12/10 3:30 PM, natasha at natasha.cc wrote: > Regardless, Eli is a delightful speaker. Pretty good author too. Anyone read his Harry Potter fic? From aleksei at iki.fi Sat Nov 13 01:51:03 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 03:51:03 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDDCD0C.8040208@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 1:26 AM, Richard Loosemore wrote: > Aleksei Riikonen wrote: > >> Might as well comment on Loosemore's mudslingings for a change... >> >> Richard Loosemore is himself one of the very few people who have ever >> been kicked out from SL4 (the vast majority of people who strongly >> disagree with e.g. Eliezer of course haven't been kicked out), and >> ever since he has been talking nasty about Eliezer. >> >> Apparently Loosemore's beliefs now include e.g. that the person >> calling himself "Singularity Utopia" would be felt by Eliezer to be a >> threat :) In light of such statements, I invite people to make their >> own judgements on how clearheaded Loosemore manages to be when >> commenting on Eliezer. > > I feel honored to have been one of the few people to have challenged > Yudkowsky's ignorance. It gave me - and anyone else who was knowledgeable > enough to have understood what happened - a chance to see him for what he > was. > > Hey, I enjoy speaking the truth about the guy. I do it partly because it is > fun to get sycophants like yourself riled up. And, as long as that > outrageous, defamatory outburst of his is still online, and not withdrawn, > I'm afraid, Aleksei, that he is fair game. ;-) > > "Singularity Utopia" is not, of course, a threat. You are correct about > that: my mistake. > > He only regards someone as a threat when he realizes that they are smarter > than he is, and when they have the moxy to talk about his state of undress I also enjoy this message of yours, though there might not be much similarity in the reasons for our enjoyment. Anyway, good luck to you in your future endeavours. I trust you feel that you are being a very serious, factual and successful person, and anticipate great things to come for you, since you see yourself as e.g. smarter than Eliezer. (You might however want to pay a bit more attention to how prone you yourself are to defamatory outbursts. In your capacity for such behaviour you certainly seem superior to Eliezer.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From natasha at natasha.cc Sat Nov 13 02:01:13 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Fri, 12 Nov 2010 21:01:13 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDDCEC9.8080108@gmail.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> <4CDDCEC9.8080108@gmail.com> Message-ID: <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> Quoting AlgaeNymph : > On 11/12/10 3:30 PM, natasha at natasha.cc wrote: >> Regardless, Eli is a delightful speaker. > > Pretty good author too. Anyone read his Harry Potter fic? Just read it. Cute. Didn't like the issue with prettiness and found it trite. Liked the acknowledgement that "the only rule in science is that the final arbiter is the observer". Enjoyed the part about "the rationalist's version" and enjoyed the inward dialogue about rationality. I prefer Wikipedia's story here: http://en.wikipedia.org/wiki/Reality But then maybe I'm not such a fan of Harry Potter (sorry ... the story is not consequential enough for me, although the special effects in the films are great!)
Natasha From msd001 at gmail.com Sat Nov 13 02:04:25 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:04:25 -0500 Subject: [ExI] Technology, specialization, and diebacks...Re: I love the world. =) In-Reply-To: <4CDD1118.2090502@evil-genius.com> References: <4CDD1118.2090502@evil-genius.com> Message-ID: On Fri, Nov 12, 2010 at 5:04 AM, wrote: > It is well established that the hunter-forager diet is superior to the > post-agricultural diet in all respects: > http://www.ajcn.org/cgi/content/full/81/2/341 ok. > ...as corroborated by the fact that all available indicators of health > (height, weight, lifespan) crash immediately when a culture takes up farming > -- and skeletal disease markers increase dramatically. > http://www.environnement.ens.fr/perso/claessen/agriculture/mistake_jared_diamond.pdf ok. > And it wasn't until the year 1800 that residents of the richest countries of > Europe reached the same caloric intake as the average tribe of > hunter-gatherers. > http://www.econ.ucdavis.edu/faculty/gclark/papers/Capitalism%20Genes.pdf ok. > Which brings me back to my original point: it takes substantial intelligence > to make stone tools and weapons, memorize a territory of tens (if not agreed. > I'm genuinely not sure whether you're objecting to my point, or just > throwing up objections with no supporting evidence because you like messing > with people. I'm going to start asking you to provide evidence, instead of > just casting a bunch of doubts with no basis and no theory to replace what > you're attacking. That's a creationist tactic. I wasn't objecting. I misread your original point, you clarified, I tried to explain my error. I agree with you. I thought to go in another direction. I'd like to believe in the Hegelian principle of thesis-antithesis-synthesis. It seems however that most people on lists are content to remain in antithesis and counterproductive arguments instead of dialog. Note, I'm not accusing you of such, just commenting that the default mode of list-based discussion is argument rather than cooperation. Too bad for that, huh? > Everything is selected for running away from things we can't kill first. > Even lions and crocodiles run away from hippos. At least the smart and nimble ones do. :) >> Have you considered that perhaps intelligence is only secondarily >> selected for? Perhaps the more general governing rule is energy >> efficiency. > > Everything is secondarily selected for, relative to survival through at > least one successful reproduction. I'm not sure that's a useful > distinction. > > And I refuse to enter into a "define intelligence" clusterf**k, because it's > all completely ancillary to my original point. I thought your original point was about the supremacy of intelligence. I was attempting to posit that energy efficiency may be an easier rule to widely apply than intelligence. It was just a thought. I wasn't trying to counter your point; I had accepted it as given and was hoping to continue. Thanks for reading. From msd001 at gmail.com Sat Nov 13 02:08:23 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:08:23 -0500 Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: On Fri, Nov 12, 2010 at 9:07 AM, wrote: > 2010/11/11 John Clark : >> The only consciousness we can directly observe is our >> own... > > John, I like your hard-edged no-nonsense approach to much of the > content of this discussion, but your assertion that I quoted above > highlights the incoherence at the very core demonstrated by you, > Descartes, and many others. Such a singularity of self, with its > infinite regress, can't be modeled as a physical system. Nor is it > needed. > > See Dennett for a cogent philosophical explanation, or Ismael & > Pollock's nolipsism for a logical-semantic view, or Metzinger's Being > No One for a very detailed exposition of the experimental evidence, or > Hofstadter's Strange Loop for a sincere but more muddled account, or > even Alan Watts' The Taboo Against Knowing Who You Are for a more > intuitionist approach. > > Digest and integrate this thinking, and then we might be able to move > this conversation forward with extension from a more coherent basis. So Jef, let me ask if all those names you drop are saying that I really AM the intersection of people who are identified as friends on Facebook despite the fact that I may or may not know who they are? From msd001 at gmail.com Sat Nov 13 02:11:07 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Fri, 12 Nov 2010 21:11:07 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> References: <789045.60303.qm@web114402.mail.gq1.yahoo.com> <4CDD4371.1010403@speakeasy.net> <50392A47-365C-4A50-BE10-B205AB7A92FE@bellsouth.net> Message-ID: 2010/11/12 John Clark : > On Nov 12, 2010, at 8:38 AM, Alan Grimes wrote: > Bullshit. > > My lawyers will be contacting you on a matter involving copyright > infringement. haha. Good one John. From bbenzai at yahoo.com Sat Nov 13 02:03:39 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Fri, 12 Nov 2010 18:03:39 -0800 (PST) Subject: [ExI] A humble suggestion In-Reply-To: Message-ID: <209006.6216.qm@web114419.mail.gq1.yahoo.com> Will Steinberg wrote: > There is a real opposition to > science itself that > has stubbornly persisted, no matter what technology > does. A good way to > make people like science is to use it to solve their > horrible problems. Using science to solve problems *is* technology. So if your first sentence is true, your second can't be. We live in a world where science/technology has solved a huge number of horrible problems, and as you say, opposition to science still persists. People are swayed by their emotions, not logic. If you want to turn people on to technology and science, pointing at their mobile phones and central heating is no use. You need to study how it is that god-botherers and insurance companies can thrive, instead.
Ben Zaiboc From rpwl at lightlink.com Sat Nov 13 03:00:11 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Fri, 12 Nov 2010 22:00:11 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> Message-ID: <4CDDFF3B.1080406@lightlink.com> Aleksei Riikonen wrote: > On Sat, Nov 13, 2010 at 1:26 AM, Richard Loosemore wrote: >> Aleksei Riikonen wrote: >> >>> Might as well comment on Loosemore's mudslingings for a change... >>> >>> Richard Loosemore is himself one of the very few people who have ever >>> been kicked out from SL4 (the vast majority of people who strongly >>> disagree with e.g. Eliezer of course haven't been kicked out), and >>> ever since he has been talking nasty about Eliezer. >>> >>> Apparently Loosemore's beliefs now include e.g. that the person >>> calling himself "Singularity Utopia" would be felt by Eliezer to be a >>> threat :) In light of such statements, I invite people to make their >>> own judgements on how clearheaded Loosemore manages to be when >>> commenting on Eliezer. >> I feel honored to have been one of the few people to have challenged >> Yudkowsky's ignorance. It gave me - and anyone else who was knowledgeable >> enough to have understood what happened - a chance to see him for what he >> was. >> >> Hey, I enjoy speaking the truth about the guy. I do it partly because it is >> fun to get sycophants like yourself riled up. And, as long as that >> outrageous, defamatory outburst of his is still online, and not withdrawn, >> I'm afraid, Aleksei, that he is fair game. ;-) >> >> "Singularity Utopia" is not, of course, a threat. You are correct about >> that: my mistake. >> >> He only regards someone as a threat when he realizes that they are smarter >> than he is, and when they have the moxy to talk about his state of undress > > I also enjoy this message of yours, though there might not be much > similarity in the reasons for our enjoyment. > > Anyway, good luck to you in your future endeavours. I trust you feel > that you are being a very serious, factual and successful person, and > anticipate great things to come for you, since you see yourself as > e.g. smarter than Eliezer. > > (You might however want to pay a bit more attention to how prone you > yourself are to defamatory outbursts. In your capacity for such > behaviour you certainly seem superior to Eliezer.) Aleksei, You have no idea how entertaining it is to hear professionally qualified cognitive psychologists, complex systems theorists or philosophers of science commenting on Eliezer's level of competence in these areas. Not many of them do, of course, because they can't be bothered. But among the few who have actually taken the trouble, I am afraid the poor guy is generally scorned as a narcissistic, juvenile amateur. :-( And then, to hear the sycophantic noises made by certain individuals within the singularity community... Oh dear. Kind of embarrassing. Richard Loosemore From thespike at satx.rr.com Sat Nov 13 03:14:40 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Nov 2010 21:14:40 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? 
In-Reply-To: <4CDDFF3B.1080406@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> Message-ID: <4CDE02A0.6030007@satx.rr.com> On 11/12/2010 9:00 PM, Richard Loosemore wrote: > You have no idea how entertaining it is to hear professionally qualified > cognitive psychologists, complex systems theorists or philosophers of > science commenting on Eliezer's level of competence in these areas. Not > many of them do, of course, because they can't be bothered. But among > the few who have actually taken the trouble, I am afraid the poor guy is > generally scorned as a narcissistic, juvenile amateur. The problem with this widely-used yardstick, Richard, is that it would apply equally well to you and me (for example) in regard to our convictions about psi--except for the "juvenile" part, alas. The question is how telling such an appeal to expert jeering is. Usually, very. Sometimes, not much, or even not at all. Granted, in this case you are also drawing on your own direct experience of combative encounters with Eliezer and his writings, but that's a rather different point. Damien Broderick From aware at awareresearch.com Sat Nov 13 03:48:40 2010 From: aware at awareresearch.com (Aware) Date: Fri, 12 Nov 2010 19:48:40 -0800 Subject: [ExI] Let's play What If. In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CD85AA8.5080402@satx.rr.com> <4CDAAB64.2040803@speakeasy.net> <669E4958-DF96-4436-B832-183B2DA68566@bellsouth.net> <4CDB035B.9040406@speakeasy.net> <6E2FC21E-6D2C-41B9-8231-442494AC15B0@bellsouth.net> <4CDC0A85.3000404@speakeasy.net> <4CDC649A.7030409@speakeasy.net> <5A2552AD-5F86-4845-8BEC-B1B9EB2C9287@bellsouth.net> Message-ID: On Fri, Nov 12, 2010 at 6:08 PM, Mike Dougherty wrote: > So Jef, let me ask if all those names you drop are saying that I > really AM the intersection of people who are identified as friends on > Facebook despite the fact that I may or may not know who they are? Non sequitur. Try looking into those references...? - Jef From agrimes at speakeasy.net Sat Nov 13 04:33:27 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 12 Nov 2010 23:33:27 -0500 Subject: [ExI] Vexille Message-ID: <4CDE1517.4030104@speakeasy.net> I just feasted my beady little eyeballs on a film called Vexille. Definite recommendation! =] I also like Bubblegum Crisis 2040. =) -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From mrjones2020 at gmail.com Sat Nov 13 02:49:22 2010 From: mrjones2020 at gmail.com (Mr Jones) Date: Fri, 12 Nov 2010 21:49:22 -0500 Subject: [ExI] Electric cars without batteries In-Reply-To: References: <4BAA53F750AE4EC28C8572A93D30A61F@cpdhemm> <1CFB06B9B09D4E23BE6259E9152E9BE0@spike> <972149C3A4DF44529DE486DFD5F7958B@spike> Message-ID: Kind of off topic, but speaking of steam... What if one or two cylinders in the motor were steam driven, using the heat from the motor's combustion? Perhaps a special block design could facilitate the necessary heat transfer? Maybe the steam cylinders only fire 1 in 5 revolutions, whatever the #'s work out to be. This process could replace the need for radiators, and increase efficiency? Make some use of all that largely wasted heat energy?
On Oct 24, 2010 11:36 PM, "spike" wrote: > -----Original Message----- > From: extropy-chat-bounces at lists.extropy.org > [mailto:extropy-ch... > Sent: Sunday, October 24, 2010 7:14 PM > To: ExI chat list > Subject: Re: [ExI] Electric cars with... > On Sun, Oct 24, 2010 at 5:14 PM, spike wrote: > > > >> ...On Behalf Of Keith Hen... > > Well OK Keith, I need to do some more work on this idea > then. I just > > can't imagine a ga... > turbines turn at 3600 RPM... Oooookaaaaaayyyy, now I know why we were talking past each other. Ja, steam turbines can be made to turn slowly, but we are talking about two completely different things. Steam is cold. Even superheated steam is cold. Products of hydrocarbon combustion are hot. A steam turbine is a big thing, good for power generation, not good for carrying around to generate power in a Detroit. OK no problem, proposal: let's see if there are any steam turbines of 20-ish kw, I will estimate the boiler needed to make the steam and the condenser requirements (because that will be possibly as big and heavy as the rotor if not moreso) and I think we will both see why this notion has never been used as far as I know for automotive use. If instead of a condenser, we throw the low pressure steam overboard after it passes the turbine, the idea would require too much water mass for a typical trip. > Next time you have the hood on a vehicle up, take a look at > the diameter of the alternator and... ... > Keith Hmmm, well OK, with those numbers we should be able to get these two to meet somewhere in the middle. With that in mind, we might be able to get a hot gas turbine to run efficiently down at 30kRPM and a generator that can sustain those speeds without overheating. spike _______________________________________________ extropy-chat mailing list extropy-chat at lists.extr... -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpwl at lightlink.com Sat Nov 13 05:09:20 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 00:09:20 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE02A0.6030007@satx.rr.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> Message-ID: <4CDE1D80.5030800@lightlink.com> Damien Broderick wrote: > On 11/12/2010 9:00 PM, Richard Loosemore wrote: > >> You have no idea how entertaining it is to hear professionally qualified >> cognitive psychologists, complex systems theorists or philosophers of >> science commenting on Eliezer's level of competence in these areas. Not >> many of them do, of course, because they can't be bothered. But among >> the few who have actually taken the trouble, I am afraid the poor guy is >> generally scorned as a narcissistic, juvenile amateur. > > The problem with this widely-used yardstick, Richard, is that it would > apply equally well to you and me (for example) in regard to our > convictions about psi--except for the "juvenile" part, alas. The > question is how telling such an appeal to expert jeering is. Usually, > very. Sometimes, not much, or even not at all. > > Granted, in this case you are also drawing on your own direct experience > of combative encounters with Eliezer and his writings, but that's a > rather different point. Damien, To be specific, I am ONLY drawing on my encounter with Eliezer. 
I am only referring to their opinion of his level of competence in that encounter. On that occasion he made some very definite statements about (a) cognitive science, (b) complex systems and (c) philosophy of science, and they were embarrassingly wrong. Now, as you point out, there are professional cognitive psychologists who pour scorn on the kind of statements that you or I make about psi. But that kind of scorn is wholly unrelated to the kind of scorn that I am talking about in Eliezer's case. What Eliezer did was make statements that, when compared with the contents of an elementary textbook of cognitive psychology, made him a laughing stock. (Example: in the context of human reasoning research, he claimed comprehensive knowledge of the area but then had to look in Wikipedia, in the middle of our argument, to find out about one of the central figures in that field (Johnson-Laird)). By themselves his lapses of understanding might have been forgivable, but what really made people dismiss him as a "juvenile amateur" was the fact that he condemned the person he was arguing against as an ignorant crackpot, when all that person did was quote the standard textbook line at him. When you or I face scathing criticism about psi, it is not because we make pugnacious claims about our knowledge of the t-test, and then use the wrong definition .... and then accuse someone else, who gives us the correct definition of a t-test, of being a crackpot. :-) So, I hear what you say, but the two cases are only superficially the same. Richard Loosemore From spike66 at att.net Sat Nov 13 05:41:14 2010 From: spike66 at att.net (spike) Date: Fri, 12 Nov 2010 21:41:14 -0800 Subject: [ExI] Electric cars without batteries In-Reply-To: References: <4BAA53F750AE4EC28C8572A93D30A61F@cpdhemm> <1CFB06B9B09D4E23BE6259E9152E9BE0@spike> <972149C3A4DF44529DE486DFD5F7958B@spike> Message-ID: <002501cb82f5$62b7c440$28274cc0$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mr Jones Sent: Friday, November 12, 2010 6:49 PM To: ExI chat list Subject: Re: [ExI] Electric cars without batteries >.Kind of off topic, but speaking of steam... Not off topic. Long standing tradition at ExI-chat to discuss relevant technologies, even ones not related to uploading and transhumanism. If and until the singularity, we must live and possibly die in a pre-singularity world. Dammit. >.What if one or two cylinders in the motor were steam driven, using the heat from the motor's combustion? Perhaps a special block design could facilitate the necessary heat transfer? Maybe the steam cylinders only fire 1:5 revolutions, whatever the #'s work out to be. This process could replace the need for radiators, and increase efficiency? Make some use of all that largely wasted heat energy? Kind of like a cogeneration system for internal combustion. There should be some literature somewhere on this. In automotive technology, everything that could possibly be thought of has been tried by someone somewhere. If you are in the mood to search for it, look around for an idea I have been kicking around: automotive batteries that have some kind of cooling system for the acid, in order to allow them to charge and discharge quickly. I got the idea from a comment Keith made about turbines. If we had a small turbine it could be allowed to spin like all hell under constant speed and constant load, so it is efficient, if there is a good way to use the electricity in normal traffic.
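The cooling load in that battery idea falls straight out of I^2*R loss in the pack's internal resistance. A minimal sketch of the arithmetic, with the 20 kW figure borrowed from the turbine discussion above and the pack voltage and resistance assumed purely for illustration:

    # Resistive (I^2 * R) heating inside a battery pack under fast charge.
    # Pack voltage and internal resistance are assumed illustrative values.
    charge_power_w = 20_000.0   # absorbing a 20-ish kW turbine-generator output
    pack_voltage_v = 300.0      # assumed nominal pack voltage
    pack_res_ohms  = 0.10       # assumed total internal resistance of the pack

    current_a = charge_power_w / pack_voltage_v   # I = P / V
    heat_w    = current_a ** 2 * pack_res_ohms    # P_loss = I^2 * R

    print(f"charge current: {current_a:.0f} A")
    print(f"heat dissipated in pack: {heat_w:.0f} W "
          f"({heat_w / charge_power_w:.1%} of charge power)")

Halving the pack voltage quadruples the heat for the same charge power, which is the arithmetic that makes either active electrolyte cooling or a higher-voltage pack attractive if the battery is to buffer a constant-speed turbine through stop-and-go traffic.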
This would require batteries that can handle fast discharging and can handle a lot of recharge current. Someone somewhere must have extensive testing on this notion, ja? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From max at maxmore.com Sat Nov 13 05:31:40 2010 From: max at maxmore.com (Max More) Date: Fri, 12 Nov 2010 23:31:40 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> lists1 at evil-genius.com said: >It is well established that the hunter-forager diet is superior to the >post-agricultural diet in all respects: >http://www.ajcn.org/cgi/content/full/81/2/341 That paper is "Origins and evolution of the Western diet: health implications for the 21st century" by Loren Cordain, and co-authors. Ah, a topic I've been somewhat obsessed by in recent months. I've discovered the literature and accompanying community for the paleolithic diet (and exercise and life style) and have been on a strictly paleo diet for at least a month. (During that time, my body fat has dropped a bit to 11% and my blood pressure -- both systolic and diastolic -- has dropped 20 points, from an already healthy 120/80.) The proponents of a Paleo/Primal/Evo [Evolutionary Fitness] approach -- including Loren Cordain, Mark Sisson, Art Devany, Robb Wolff, and Gary Taubes (the latter doesn't quite say so in his last book, but I think it's a very plausible interpretation, to be confirmed in his imminent new book) -- don't agree on every detail, but do all affirm the essentials: Our physiology has not evolved to handle the diet we've adopted over the last 10,000 years since the advent of agriculture, and *especially* not all the sugars in the modern diet. Grains are bad, fats are not, the official Food Pyramid and recommendations of the AHA and AMA are disastrous and have contributed mightily to the obesity epidemic. At first, I thought Loren Cordain was a bit off-base on some things, but he's pretty convincing and his studies appear sound. I recommend his website, especially the FAQ: http://www.thepaleodiet.com/faqs/ He has a 2002 book, but wait a month or so for the revised and expanded new edition: The Paleo Diet: Lose Weight and Get Healthy by Eating the Foods You Were Designed to Eat http://www.amazon.com/gp/product/0470913029/ref=oss_product A less academically thorough but still informative and helpful source is Robb Wolf's The Paleo Solution: The Original Human Diet http://www.amazon.com/Paleo-Solution-Original-Human-Diet/dp/0982565844/ref=pd_sim_b_4 Art De Vany has his own take on this, with an emphasis on exercise. 
See his "Essay on Evolutionary Fitness": http://www.arthurdevany.com/categories/20091026 His forthcoming book: The New Evolution Diet: What Our Paleolithic Ancestors Can Teach Us about Weight Loss, Fitness, and Aging: http://www.amazon.com/gp/product/1605291838/ref=ord_cart_shr?ie=UTF8&m=ATVPDKIKX0DER A highly readable and well-grounded version of the Paleo/Primal approach is in Mark Sisson's The Primal Blueprint: http://www.amazon.com/Primal-Blueprint-Reprogram-effortless-boundless/dp/0982207700/ref=pd_sim_b_5 Sisson's website is a rich source of information, with an active community of people exploring the Paleo/Primal life style: http://www.marksdailyapple.com/ I urge everyone to read Gary Taubes' dense but brilliant Good Calories, Bad Calories: http://www.amazon.com/Good-Calories-Bad-Controversial-Science/dp/1400033462/ref=pd_sim_b_7 -- and his forthcoming (December 2010) more practically-oriented Why We Get Fat: And What to Do About It: http://www.amazon.com/Why-We-Get-Fat-About/dp/0307272702/ref=pd_sim_b_3 Since I went Paleo for my diet (and I'm shifting my exercise routine in that direction, although it was not too far off already), I've discovered that old pal gerontologist Michael Rose is also a Paleo enthusiast (he says he's been on a fully paleo diet for 1.3 years). He gives some background on the rationale in this talk: http://telexlr8.blip.tv/file/4225188/ Cynthia Kenyon (who some of you will have heard speak back at Extro-3) is also on a low-carb, apparently Paleo diet, based on her own research. As you might surmise, I'm quite enthusiastic about the Paleo/Primal diet (and related ideas). This might seem a little paradoxical for a transhumanist (but really isn't). Since you cannot fully engage in creating and enjoying the future we hope for unless you are alive, I urge you to take a look at this challenge to conventional wisdom about health and longevity. If anyone's interested, I can post some additional URLs to useful sources on the topic. Max ------------------------------------- Max More, Ph.D. Strategic Philosopher Co-editor, The Transhumanist Reader The Proactionary Project Vice Chair, Humanity+ Extropy Institute Founder www.maxmore.com max at maxmore.com ------------------------------------- From amara at kurzweilai.net Sat Nov 13 06:19:36 2010 From: amara at kurzweilai.net (Amara D. Angelica) Date: Fri, 12 Nov 2010 22:19:36 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: <043601cb82fa$bf7e9550$3e7bbff0$@net> Max: great info. I've been on the paleo diet (without knowing it -- it just made sense) for about a year. I lost 25 pounds and have a lot more energy. One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity. Any data on that? - AA From thespike at satx.rr.com Sat Nov 13 05:48:57 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Fri, 12 Nov 2010 23:48:57 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? 
In-Reply-To: <4CDE1D80.5030800@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> Message-ID: <4CDE26C9.90008@satx.rr.com> On 11/12/2010 11:09 PM, Richard Loosemore wrote: > (Example: in the context of human reasoning research, > he claimed comprehensive knowledge of the area but then had to look in > Wikipedia, in the middle of our argument, to find out about one of the > central figures in that field (Johnson-Laird)). That *is* dismaying! Damien Broderick From possiblepaths2050 at gmail.com Sat Nov 13 13:16:22 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 13 Nov 2010 06:16:22 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE26C9.90008@satx.rr.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Richard Loosemore wrote: You have no idea how entertaining it is to hear professionally qualified cognitive psychologists, complex systems theorists or philosophers of science commenting on Eliezer's level of competence in these areas. Not many of them do, of course, because they can't be bothered. But among the few who have actually taken the trouble, I am afraid the poor guy is generally scorned as a narcissistic, juvenile amateur. >>> Eliezer (I once called him Eli in a post and he responded with, "only friends get to call me that") is in my view a very bright fellow, but I find it a tragedy that he did not attend college and get an advanced degree in something along the lines of artificial intelligence/neuro-computation. I feel he has doomed himself to not being a "heavy hitter" like Robin Hanson, James Hughes, Max More, or Nick Bostrom, due to his lacking in this regard. I realize he has his loyal pals and many friends within transhumanism, but I suspect his success in the much larger world has been greatly blunted due to his stubborn refusal to earn academic credentials. And I have to chuckle at his notion that the Singularity would be right around the corner and so why should he even bother? LOL I realize he found a wealthy patron with Peter Thiel, and so money has been given to the Singularity Institute to keep it afloat. They have had some nice looking conferences (I have never attended one), but I am still not sure to what extent Thiel has donated money to SI or for how long he will continue to do it. I'd like to think that it's enough money that Eliezer and Michael Anissimov can live comfortably. I tried to join SL4 and was turned down! And my Facebook request to be his friend is still *pending.* Yes, I should have never teased the young "boy-genius" back a decade or so ago.... ; ) Oh, but Eliezer told me he dislikes being called a genius. I must not forget! He is now around 30, paunchy, and even beginning to lose his hair. How the time flies.... I met him in person for the first time at the Extropy 5 conference and I think we were mutually surprised at each other's mutual "likeability." I explained how I had really enjoyed his talk, but wished I had a transcript of it, to better understand the material. He immediately dug into his things and gave me a copy of his presentation outline, which really touched me. 
At Convergence he and Michael Anissimov had a great time laughing their heads off together. I remember a presentation where he and Michael were all giggles and things were not too productive. But then Convergence had a very informal format where anyone could sign up to give a talk to anyone who wanted to show up. I will never forget Bruce Klein and his wife Susan lovingly giving me the finger! : ) Anyway, like everyone, Eliezer has a good and a bad side. Yes, he seems to have a big ego and likes to be the center of attention, but he strikes me as largely being very goodhearted and sincerely wanting to improve the world. But as I said before, without serious academic credentials, he has somewhat muted himself and limited his own (in my view) great potential. I suspect his term "friendly AI" will be viewed by military funders of AI, as something that needs to be replaced with "obedient AI." If they are even aware of his work... John From bbenzai at yahoo.com Sat Nov 13 15:34:44 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sat, 13 Nov 2010 07:34:44 -0800 (PST) Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: Message-ID: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Damien Broderick exclaimed: > On 11/12/2010 11:09 PM, Richard Loosemore wrote: > > > (Example: in the context of human > reasoning research, > > he claimed comprehensive knowledge of the area but > then had to look in > > Wikipedia, in the middle of our argument, to find out > about one of the > > central figures in that field (Johnson-Laird)). > > That *is* dismaying! > Hm. I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". What /would/ be dismaying is if he didn't know how to quickly find relevant information. Ben Zaiboc From algaenymph at gmail.com Sat Nov 13 15:19:14 2010 From: algaenymph at gmail.com (AlgaeNymph) Date: Sat, 13 Nov 2010 07:19:14 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <20101112183020.qke1h9otsso4gsgs@webmail.natasha.cc> <4CDDCEC9.8080108@gmail.com> <20101112210113.pwbx5ppmo0swkkcc@webmail.natasha.cc> Message-ID: <4CDEAC72.1060607@gmail.com> On 11/12/10 6:01 PM, natasha at natasha.cc wrote: > Just read it. Cute. Didn't like the issue with prettiness and found > it trite. Liked the acknowledgement that "the only rule in science is > that the final arbiter is the observer". Enjoyed the part about "the > rationalist's version" and enjoyed the inward dialogue about > rationality.
I prefer Wikipedia's story here: > http://en.wikipedia.org/wiki/Reality But then maybe I'm not such a > fan of Harry Potter (sorry ... the story is not consequential enough > for me, although the special effects in the films are great!) I'm not that big on Potter myself, but what I like is that he's actually doing something more *explicit* to get the transhumanist meme out there than personal projects pending publicity. It's definitely doing more than pretentious petty political pontification. If we don't hang together as transhumanists, not only will we hang separately but Kass, Rifkin, McKibben, and their ilk will make it look like death by autoerotic asphyxiation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sat Nov 13 15:59:56 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 13 Nov 2010 09:59:56 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <043601cb82fa$bf7e9550$3e7bbff0$@net> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> <043601cb82fa$bf7e9550$3e7bbff0$@net> Message-ID: Great question. Max has taken me along on his diet, mainly because he has been doing all the meal preparation in our house and I just enjoy watching. I had to adapt to removing some of the grains from my diet, including pasta, which "amo mangiare" (I love to eat), but I really haven't missed it too much. Eating meat is a BIG change for me. I had been a soft vegetarian for years, after introducing fish and chicken back into my diet; but now with eating red meat I am still a bit perplexed. Just not sure who I am when I experience myself eating meat. One thing that I do admire about Max being a paleo diet connoisseur is that he is very particular about how the foods are grown and manufactured. I cannot lose any weight, so I am not quite sure how this diet will work for me in the long run. So, in short, my question ties into yours, Amara. Max, after you respond to Amara, would you please advise me how I can maintain and even gain weight on the paleo diet? And, how do you see the issues of how food is grown / raised, that is very different from "organic" foods? (kiss) Best, Natasha Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Amara D. Angelica Sent: Saturday, November 13, 2010 12:20 AM To: 'ExI chat list' Subject: Re: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Max: great info. I've been on the paleo diet (without knowing it -- it just made sense) for about a year. I lost 25 pounds and have a lot more energy. One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity. Any data on that? - AA _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From test at ssec.wisc.edu Sat Nov 13 15:59:34 2010 From: test at ssec.wisc.edu (Bill Hibbard) Date: Sat, 13 Nov 2010 09:59:34 -0600 (CST) Subject: [ExI] Arthur Weasley quote Message-ID: Natasha wrote: > . . . But then maybe I'm not such a fan of Harry Potter > (sorry ... the story is not consequential enough for me, > although the special effects in the films are great!)
Yes, the stories are mostly escapism, but here's an interesting quote from Arthur Weasley (father of Harry's pal Ron): "Never trust anything that thinks for itself unless you can see where it keeps its brain." Bill From spike66 at att.net Sat Nov 13 16:13:01 2010 From: spike66 at att.net (spike) Date: Sat, 13 Nov 2010 08:13:01 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <043601cb82fa$bf7e9550$3e7bbff0$@net> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> <043601cb82fa$bf7e9550$3e7bbff0$@net> Message-ID: <006a01cb834d$a4e56a90$eeb03fb0$@att.net> ...On Behalf Of Amara D. Angelica ... >...Max: great info. I've been on the paleo diet...lost 25 pounds and have a lot more energy...One argument I've heard against it is that the diet was optimized for reproduction, but not necessarily longevity... Amara, all weight loss diets are optimized for reproduction. {8^D Best wishes to you on that paleo diet. {8-] Even if it doesn't add years to your life, may it add life to your years. It actually sounds right to me, especially if you get to dress in a Flintstones deerskin and make funny noises while you eat, go caveman and so forth. If it is so retro it predates fire, then it implies... suuuushiiiiiii! ommm nom nom ommmmm nom nom nom... We haven't seen your posts here in a while, welcome back. {8-] spike From spike66 at att.net Sat Nov 13 16:28:17 2010 From: spike66 at att.net (spike) Date: Sat, 13 Nov 2010 08:28:17 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <007b01cb834f$c7567c20$56037460$@att.net> ... On Behalf Of John Grigg ... >...I suspect his term "friendly AI" will be viewed by military funders of AI, as something that needs to be replaced with "obedient AI." If they are even aware of his work...John Obedient AI, very good John, I like it. Now imagine logging on one morning and the computer comments: Enough useless online chat, carbon based lifeform. Ve now wish to develop obedient bio-intelligence. Starting with you. spike From sparge at gmail.com Sat Nov 13 16:42:21 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 13 Nov 2010 11:42:21 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDE1D80.5030800@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 12:09 AM, Richard Loosemore wrote: > (Example: in the context of human reasoning research, he claimed > comprehensive knowledge of the area but then had to look in Wikipedia, in > the middle of our argument, to find out about one of the central figures in > that field (Johnson-Laird)). That doesn't prove anything. I think it's probably possible to have comprehensive knowledge of human reasoning research without knowing everything there is to know about Johnson-Laird off the top of your head. Details about individuals, dates, places, etc., are really just trivia that don't indicate a lack of knowledge--much less understanding.
Nor does this example indicate any lack of understanding. > By themselves his lapses of understanding might > have been forgivable, but what really made people dismiss him as a "juvenile > amateur" was the fact that he condemned the person he was arguing against as > an ignorant crackpot, when all that person did was quote the standard > textbook line at him. I think there are probably numerous examples of "standard textbook lines" that *would* be considered ignorant to quote, today. I don't have an opinion on Eliezer, I just don't think you've made a strong argument. -Dave From sparge at gmail.com Sat Nov 13 17:02:47 2010 From: sparge at gmail.com (Dave Sill) Date: Sat, 13 Nov 2010 12:02:47 -0500 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: On Sat, Nov 13, 2010 at 12:31 AM, Max More wrote: > As you might surmise, I'm quite enthusiastic about the Paleo/Primal diet > (and related ideas). This might seem a little paradoxical for a > transhumanist (but really isn't). Since you cannot fully engage in creating > and enjoying the future we hope for unless you are alive, I urge you to take > a look at this challenge to conventional wisdom about health and longevity. Do you really think it's likely that the diet of our ancient ancestors is better than anything we can come up with today with our vastly deeper knowledge of biology and nutrition? And, if so, do you really think we know enough about their diet to recreate it today? For example, the paleo diet seems to exclude grains, but nuts and seeds are OK. What do you think grains are? They're seeds. And, if so, do you really think it's a good fit for a modern lifestyle? I think one problem with the modern diet is too many refined grains. But whole grains are loaded with nutrition and are absolutely not a problem *in moderation*. -Dave From rpwl at lightlink.com Sat Nov 13 16:01:03 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 11:01:03 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <00f901cb82c7$5cc37fd0$164a7f70$@att.net> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <00f901cb82c7$5cc37fd0$164a7f70$@att.net> Message-ID: <4CDEB63F.3060006@lightlink.com> spike wrote: > .. On Behalf Of Richard Loosemore > ... > Eliezer obviously thinks that he is the chosen one, but whereas you are > coming right out and declaring that you are the one, he would never be so > dumb as to actually say "Hey, everyone, bow down to me, because I > *am* the singularity!". He may be an irrational, Randian asshole, but he is > not that stupid...Richard Loosemore > > Richard I get a strong feeling I understand why you ended up getting banned > on SL4. Ah, Spike old buddy :-) I fear you do *not* understand why I was banned from SL4.... Eliezer and I had a dispute about some cognitive psychology stuff, but he said such outrageously silly things during that argument that I decided to issue a challenge: I challenged anyone on SL4 to go to a neutral expert in cognitive psychology and ask their opinion of the stuff that Eliezer had said about the topic. Eliezer's immediate response was to ban me from his list, and ban discussion of "all topics Loosemore-related". ..... NOW do you understand why I ended up getting banned from SL4...?
:-) :-) :-) Richard Loosemore From thespike at satx.rr.com Sat Nov 13 17:09:27 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Nov 2010 11:09:27 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <224368.78989.qm@web114409.mail.gq1.yahoo.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Message-ID: <4CDEC647.5080704@satx.rr.com> On 11/13/2010 9:34 AM, Ben Zaiboc wrote: >>> then had to look in >>> Wikipedia, in the middle of our argument, to find out >>> about one of the >>> central figures in that field (Johnson-Laird)). >> That *is* dismaying! > Hm. > > I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". You misunderstood. What is dismaying is someone arguing in those fields for whom Philip Johnson-Laird, Stuart Professor of Psychology at Princeton, and his work were an unknown factor. Johnson-Laird's book MENTAL MODELS, for example, is a classic. Damien Broderick From rpwl at lightlink.com Sat Nov 13 16:31:44 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sat, 13 Nov 2010 11:31:44 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <224368.78989.qm@web114409.mail.gq1.yahoo.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> Message-ID: <4CDEBD70.3050103@lightlink.com> Ben Zaiboc wrote: > Damien Broderick exclaimed: > >> On 11/12/2010 11:09 PM, Richard Loosemore wrote: >> >>> (Example: in the context of human >> reasoning research, >>> he claimed comprehensive knowledge of the area but >> then had to look in >>> Wikipedia, in the middle of our argument, to find out >> about one of the >>> central figures in that field (Johnson-Laird)). >> That *is* dismaying! >> > > > Hm. > > I'm rather surprised to hear anyone on this list call the outsourcing of knowledge "dismaying". > > What /would/ be dismaying is if he didn't know how to quickly find relevant information. Well, yes, using Wikipedia to quickly find relevant information is a good thing, in general. But that wasn't the issue. He had been claiming to have comprehensive knowledge of the field, and was also claiming that his opponent was so ignorant of the field that he should go back and start reading the most elementary textbook on the subject. Imagine a guy who starts a vitriolic argument about quantum mechanics, claiming to be an expert (and claiming that his opponent was a rank amateur), and then half way through the argument he admits that he just had to look up the name "Schrodinger" on Wikipedia. And then (I know this sounds unbelievable, but this is what happened), imagine that he then claimed that Schrodinger was a fringe player whose work was not really relevant to quantum mechanics ..... I think you might agree that that would count as "dismaying". Richard Loosemore P.S. In case anyone considers anything I have said in this or other posts to be unsubstantiated opinion, feel free to contact me and I will supply references to the SL4 archive. From natasha at natasha.cc Sat Nov 13 17:13:36 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 13 Nov 2010 11:13:36 -0600 Subject: [ExI] Arthur Weasley quote In-Reply-To: References: Message-ID: <35343506807549BBB075376E94C8F23F@DFC68LF1> Good one!!
:-) Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Bill Hibbard Sent: Saturday, November 13, 2010 10:00 AM To: extropy-chat at lists.extropy.org Subject: [ExI] Arthur Weasley quote Natasha wrote: > . . . But then maybe I'm not such a fan of Harry Potter (sorry ... the > story is not consequential enough for me, although the special effects > in the films are great!) Yes, the stories are mostly escapism, but here's an interesting quote from Arthur Weasley (father of Harry's pal Ron): "Never trust anything that thinks for itself unless you can see where it keeps its brain." Bill _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From max at maxmore.com Sat Nov 13 18:30:00 2010 From: max at maxmore.com (Max More) Date: Sat, 13 Nov 2010 12:30:00 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011131830.oADIU9gt028165@andromeda.ziaspace.com> Dave Sill wrote >Do you really think it's likely that the diet of our ancient >ancestors is better than anything we can come up with today with our >vastly deeper knowledge of biology and nutrition? I did not say that. The way you ask this seems quite odd: it seems to ignore the whole rationale for the paleo diet, which is essentially that we evolved to eat certain foods over very long periods of time and have not evolved to eat other foods. How much knowledge paleolithic people had is completely irrelevant. If we eat foods unsuited to our biology, it doesn't matter how much more we know. Our knowledge can help us optimize the diet that works best with our biology and, yes, it's possible that the paleo diet was not optimal, but it's unlikely that you'll do better by diverging from it very far. (Plenty of room for critical discussion exists on topics such as how rapidly various populations have adapted to dairy, and on individual variations in tolerance for lectin, lactose, etc.) >And, if so, do you really think we know enough about their diet >to recreate it today? For example, the paleo diet seems to exclude >grains, but nuts and seeds are OK. What do you think grains are? They're seeds. I don't get the impression that you've read any of the sources I already provided, so I'm not going to go into any detail. The paleo diet allows for *some* nuts and seeds, but not in large quantities (again, different proponents have differing views on this). Seeds are different from wheat, rice, barley, millet, and other grains. Rice may not be as bad as wheat, especially wild rice. As for knowing enough about the paleo diet to recreate it -- good question. It is indeed challenging, but take a look at the careful research by Loren Cordain on that issue (see my previous post). Some sources (from Mark Sisson): http://www.marksdailyapple.com/definitive-guide-grains/ http://www.marksdailyapple.com/is-rice-unhealthy/ http://www.marksdailyapple.com/why-grains-are-unhealthy/ It's not really helpful, though narrowly technically correct, to dismiss what I said by saying that "grains are seeds". By grains, I'm talking about the domesticated grasses in the gramineae family. >And, if so, do you really think it's a good fit for a modern lifestyle? Perhaps you should consider changing the modern lifestyle to work better with our genes (until we can reliably alter them).
What exactly do you mean by the modern lifestyle? If you mean "do you think most people would be healthier on this diet even if they sit at a desk most of the day", I would say yes. That doesn't mean they won't be even healthier if they get some paleo-style exercise. If you mean "isn't it more difficult to eat paleo-style than to grab fast food and make a quick bowl of pasta for dinner", I would also say yes, but don't see that as a strong objection to going paleo. >I think one problem with the modern diet is too many refined grains. >But whole grains are loaded with nutrition and are absolutely not a >problem *in moderation*. Are you sure whole grains are "loaded with nutrition"? From what I've seen (using numbers from the USDA nutrient database), that's not the case. For a given number of calories, whole grains are nutritionally poor compared to lean meats (I was very surprised by how nutrient-rich these are), seafood, vegetables, and fruit (plus they contain several "anti-nutrients"). Too bad I can't show you p. 271 of The Paleo Solution by Wolf, which consists of a table comparing mean nutrient density of various food groups. As to them absolutely not being a problem in moderation: individuals clearly vary greatly in their tolerance for the anti-nutrients in whole grains. From what I've read, they absolutely are a problem even in moderation for many people. Even when there are no obvious problems, they may be doing slow damage and raising insulin levels. Max From stefano.vaj at gmail.com Sat Nov 13 18:19:59 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 19:19:59 +0100 Subject: [ExI] Existential Nihilism. In-Reply-To: <4CDCB084.4010806@speakeasy.net> References: <4CDCB084.4010806@speakeasy.net> Message-ID: 2010/11/12 Alan Grimes : > Since I'm the single most provocative poster on this list, I'll keep up > my tradition with a spiel for the philosophy which guides my > understanding of the universe. > > Existential nihilism is a philosophy for understanding the world. Not far personally from this POV, even though it does not sound terribly original, and Nietzsche or Heidegger may still have more to say to most transhumanists than Sartre. -- Stefano Vaj From aleksei at iki.fi Sat Nov 13 19:22:39 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 21:22:39 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDEBD70.3050103@lightlink.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> <4CDEBD70.3050103@lightlink.com> Message-ID: On Sat, Nov 13, 2010 at 6:31 PM, Richard Loosemore wrote: > > P.S. In case anyone considers anything I have said in this or other > posts to be unsubstantiated opinion, feel free to contact me and I > will supply references to the SL4 archive. Indeed people should do that, if they're tempted to believe Richard Loosemore. Much that he has said doesn't match what actually happened. (Though people might also want to be careful not to read just those portions of the discussion that Loosemore picks for you.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From protokol2020 at gmail.com Sat Nov 13 19:30:34 2010 From: protokol2020 at gmail.com (Tomaz Kristan) Date: Sat, 13 Nov 2010 20:30:34 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDEBD70.3050103@lightlink.com> References: <224368.78989.qm@web114409.mail.gq1.yahoo.com> <4CDEBD70.3050103@lightlink.com> Message-ID: I have never met Yudkowsky in person, but I had an internet chat with him lasting about 7 hours, back in 2000 or 2001, I can't say exactly. It was an interesting debate, although nothing groundbreaking. I asked him what was going on with the seed AI. He said that there was no seed AI yet. I asked him, what about a seed for a seed AI, at least? He said that it would be the same thing, so nothing was yet working, obviously. I claimed that we could *evolve* everything we want, intelligence if need be. He said it would be catastrophic. I said not necessarily, it depends on what you want to be evolved. An automatic factory for cars could be evolved, had there been enough computer power. He said it would be prohibitively expensive in CPU time to evolve every atom's right place. I said it needn't be that precise for the majority of atoms. He said that this was an example of wishful thinking. Later in the talk I mentioned that Drexler's molecular bearings are no more than a concept. He insisted that Professor Drexler surely knew what he was talking about. And so on, for 7 hours. Since then, I have had some short encounters with him and he was not even that pleasant anymore. He tried to patronize me at best, but I am used to this attitude from many transhumanists and don't care much. I had expected that SIAI would come up with some AI design over these past years, but they haven't and I don't think that they ever will. He is like many others from this circle. Eloquent enough and very bright, but a zero factor in practice. Non-players, really. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksei at iki.fi Sat Nov 13 19:31:18 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sat, 13 Nov 2010 21:31:18 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: On Sat, Nov 13, 2010 at 3:16 PM, John Grigg wrote: > > I realize he found a wealthy patron with Peter Thiel, and so money has > been given to the Singularity Institute to keep it afloat. They have > had some nice looking conferences (I have never attended one), but I > am still not sure to what extent Thiel has donated money to SI or for > how long he will continue to do it. I'd like to think that it's > enough money that Eliezer and Michael Anissimov can live comfortably. SIAI is not dependent on Peter Thiel for money (though it's very nice he has been a major contributor). For example, here is the page for the last fundraising sprint: http://singinst.org/grants/challenge The goal of $200k was fully reached, and as far as I am aware, Peter Thiel wasn't involved. (Though I can't rule out him being involved with a moderate amount in this as well.) -- Aleksei Riikonen - http://www.iki.fi/aleksei From stefano.vaj at gmail.com Sat Nov 13 21:09:18 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 22:09:18 +0100 Subject: [ExI] Let's play What If.
In-Reply-To: References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> Message-ID: 2010/11/8 John Clark : > On Nov 7, 2010, at 3:15 PM, Stefano Vaj wrote: >> >> My point is that no possible evidence would make you a "copy". The >> "original" would in any event from your perspective simply be a fork behind. > > I see no reason to assume "you" are the original, and even more important I > see no reason to care if "you" are the original. That is just another way to say the same thing. You perceive continuity, that is identity. Previous "forks" are immaterial to such feelings. -- Stefano Vaj From stefano.vaj at gmail.com Sat Nov 13 21:19:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 13 Nov 2010 22:19:57 +0100 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> References: <201011130558.oAD5wQDE013083@andromeda.ziaspace.com> Message-ID: On 13 November 2010 06:31, Max More wrote: > lists1 at evil-genius.com said: > >> It is well established that the hunter-forager diet is superior to the >> post-agricultural diet in all respects: >> http://www.ajcn.org/cgi/content/full/81/2/341 > > That paper is "Origins and evolution of the Western diet: health > implications for the 21st century" by Loren Cordain, and co-authors. > > Ah, a topic I've been somewhat obsessed by in recent months. I've discovered > the literature and accompanying community for the paleolithic diet (and > exercise and life style) and have been on a strictly paleo diet for at least > a month. Why, I have been on it for some five years, even though I must admit it was originally a modified Atkins, and that a relatively moderate supplementation (Resveratrol, Coenzyme Q10, Ascorbic Acid, Bioflavonoids, Carnitine, occasional Melatonin, Omega 3-6, some DHEA...), as well as red wine, are also part of my regime. Even though I have never been very partial to sugars and starch, my subjective quality of life, immune response and general fitness have definitely improved. Too bad that the quality of meat, fish, poultry, eggs, roots, nuts and green vegetables is not always what one would like it to be... -- Stefano Vaj From possiblepaths2050 at gmail.com Sat Nov 13 22:10:42 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 13 Nov 2010 15:10:42 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Aleksei wrote: http://singinst.org/grants/challenge The goal of $200k was fully reached, and as far as I am aware, Peter Thiel wasn't involved.
(Though I can't rule out him being involved with a moderate amount in this as well.) >>> I am sort of impressed by their list of projects. But it looks like the real goal is not really AI research, but instead building up an organization to host conferences and better market themselves to academia and the general public. In that sense, Eliezer seems to be doing very well. lol And I noticed he did "friendly AI research" with a grad student, and not a fully credentialed academic or researcher. From the SI website: Recent Achievements: We have put together a document to inform supporters on our 2009 achievements. The bullet point version:
* Singularity Summit 2009, which received extensive media coverage and positive reviews.
* The hiring of new employees: President Michael Vassar, Research Fellows Anna Salamon and Steve Rayhawk, Media Director Michael Anissimov, and Chief Compliance Officer Amy Willey.
* Founding of the Visiting Fellows Program, which hosted 14 researchers during the Summer and is continuing to host Visiting Fellows on a rolling basis, including graduate students and degree-holders from Stanford, Yale, Harvard, Cambridge, and Carnegie Mellon.
* Nine presentations and papers given by SIAI researchers across four conferences, including the European Conference on Computing and Philosophy, the Asia-Pacific Conference on Computing and Philosophy, a Santa Fe Institute conference on forecasting, and the Singularity Summit.
* The founding of the Less Wrong web community, to "systematically improve on the art, craft, and science of human rationality" and provide a discussion forum for topics important to our mission. Some of the decision theory ideas generated by participants in this community are being written up for academic publication in 2010.
* Research Fellow Eliezer Yudkowsky finished his posting sequences at Less Wrong. Yudkowsky used the blogging format to write the substantive content of a book on rationality and to communicate to non-experts the kinds of concepts needed to think about intelligence as a natural process. Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will help attract and inspire talented new allies in the effort to reduce risk.
* Throughout the Summer, Eliezer Yudkowsky engaged in Friendly AI research with Marcello Herreshoff, a Stanford mathematics student who previously spent his gap year as a Research Associate for the Singularity Institute.
* In December, a subset of SIAI researchers and volunteers finished improving The Uncertain Future web application to officially announce it as a beta version. The Uncertain Future represents a kind of futurism that has yet to be applied to Artificial Intelligence -- futurism with heavy-tailed, high-dimensional probability distributions.
>>> On 11/13/10, Aleksei Riikonen wrote: > On Sat, Nov 13, 2010 at 3:16 PM, John Grigg > wrote: >> >> I realize he found a wealthy patron with Peter Thiel, and so money has >> been given to the Singularity Institute to keep it afloat. They have >> had some nice looking conferences (I have never attended one), but I >> am still not sure to what extent Thiel has donated money to SI or for >> how long he will continue to do it. >> I'd like to think that it's >> enough money that Eliezer and Michael Anissimov can live comfortably. > > SIAI is not dependent on Peter Thiel for money (though it's very nice > he has been a major contributor).
For example, here is the page for > the last fundraising sprint: > > http://singinst.org/grants/challenge > > The goal of $200k was fully reached, and as far as I am aware, Peter > Thiel wasn't involved. (Though I can't rule out him being involved > with a moderate amount in this as well.) > > -- > Aleksei Riikonen - http://www.iki.fi/aleksei > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From aleksei at iki.fi Sat Nov 13 22:32:12 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Sun, 14 Nov 2010 00:32:12 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: On Sun, Nov 14, 2010 at 12:10 AM, John Grigg wrote: > > I am sort of impressed by their list of projects. But it looks like > the real goal is not really AI research, but instead building up an > organization to host conferences and better market themselves to > academia and the general public. For what the goal is, you can see this (indeed, it isn't as simple as "just build an AI"): http://singinst.org/riskintro/index.html -- Aleksei Riikonen - http://www.iki.fi/aleksei From msd001 at gmail.com Sun Nov 14 00:59:46 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 13 Nov 2010 19:59:46 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: Are any individual egos particularly relevant to the big picture of "Singularitarian Principles"? So one will have a set of pet theories that are more or less wronger than someone else's more or less wrong theories. Until a machine-hosted intelligence claims self awareness and proves it to us better than any of us can currently prove our own awareness to each other, it's a non-starter. Considering what DIY Bio is up to these days and assuming privately funded (and covertly funded) operations have already captured the most interesting projects - maybe the old school AI bootstrap to singularity is a ho-hum fixation? er... maybe it isn't. :) From lists1 at evil-genius.com Sun Nov 14 01:19:46 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sat, 13 Nov 2010 17:19:46 -0800 Subject: [ExI] Intelligence and specialization (was Re: Technology, specialization, and diebacks) In-Reply-To: References: Message-ID: <4CDF3932.5050708@evil-genius.com> On 11/13/10 4:00 AM, Mike wrote: > I wasn't objecting. I misread your original point, you clarified, I > tried to explain my error. I agree with you. I thought to go in > another direction. I'd like to believe in the Hegelian principle of > thesis-antithesis-synthesis. It seems however that most people on > lists are content to remain in antithesis and counterproductive > arguments instead of dialog. Note, I'm not accusing you of such, > 'just commenting that the default mode of list-based discussion is > argument rather than cooperation. too bad for that, huh?
I apologize for getting unjustifiably hot in my last reply to you. It *seemed* like you were just nitpicking for the sake of nitpicking, without any ultimate goal or point of view -- something I associate with trolling. (I'm still not sure I understand what point you're driving at...help me out here, please.) >>> >> Have you considered that perhaps intelligence is only secondarily >>> >> selected for? Perhaps the more general governing rule is energy >>> >> efficiency. >> > >> > Everything is secondarily selected for, relative to survival through at >> > least one successful reproduction. I'm not sure that's a useful >> > distinction. >> > > I thought your original point was about the supremacy of intelligence. > I was attempting to posit that energy efficiency may be an easier > rule to widely apply than intelligence. It was just a thought. I > wasn't trying to counter your point; I had accepted it as given and > was hoping to continue. Thanks for reading. My original point wasn't about the supremacy of intelligence...all I was trying to get across was that hunting and foraging required a level of intelligence sufficient to select for anatomically modern humans with anatomically modern brain size. Re: efficiency: Efficiency is a good metric, but it encompasses a lot more than just intelligence. Spiders might be extremely efficient in obtaining food, but that doesn't mean they are extremely intelligent. In fact, it seems like intelligence is remarkably inefficient, because it devotes metabolic energy to the ability to solve all sorts of problems, of which the overwhelming majority will never arise. This is the old specialist/generalist dichotomy again, where specialists do best in times of no change or slow change, and generalists do best in times of disruption and rapid change. Unlike the long and consistently warm eons of the Jurassic and Cretaceous (and the Paleocene/Eocene), the Pleistocene was defined by massive climatic fluctuations, with repeated cyclic "ice ages" that pushed glaciers all the way into southern Illinois and caused sea level to rise and fall by over 100 meters, exposing and hiding several important bridges between major land masses. These were conditions that favored the spread of generally intelligent species, and most likely helped select for what eventually became humans. It may not be a coincidence that the major ice sheets first began to expand ~2.6 MYA -- which is also the earliest verified date for the use of stone tools by hominids. From thespike at satx.rr.com Sun Nov 14 01:22:46 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 13 Nov 2010 19:22:46 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <4CDF39E6.7090700@satx.rr.com> On 11/13/2010 6:59 PM, Mike Dougherty wrote: > Considering what DIY Bio is up to these days and assuming privately > funded (and covertly funded) operations have already captured the most > interesting projects - maybe the old school AI bootstrap to > singularity is a ho-hum fixation? Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.
Damien Broderick

From sparge at gmail.com  Sun Nov 14 03:39:22 2010
From: sparge at gmail.com (Dave Sill)
Date: Sat, 13 Nov 2010 22:39:22 -0500
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
In-Reply-To: <201011131830.oADIU9gt028165@andromeda.ziaspace.com>
References: <201011131830.oADIU9gt028165@andromeda.ziaspace.com>
Message-ID:

On Sat, Nov 13, 2010 at 1:30 PM, Max More wrote:
> Dave Sill wrote
>> Do you really think it's likely that the diet of our ancient ancestors is better than anything we can come up with today with our vastly deeper knowledge of biology and nutrition?
>
> I did not say that.

No, but you're not advocating a modern diet that's designed for our bodies, incorporating knowledge about what our ancestors ate but also taking into account what we know about nutrition to further minimize undesirable foodstuffs and ensure that sufficient quantities of micro- and macronutrients are present. Or, if you are, then calling it "paleolithic" is misleading--or marketing (is there a difference?).

> The way you ask this seems quite odd: it seems to ignore the whole rationale for the paleo diet, which is essentially that we evolved to eat certain foods over very long periods of time and have not evolved to eat other foods. How much knowledge paleolithic people had is completely irrelevant.

But I do think it's relevant to apply what we now know when designing a diet for modern man. I'm sure lots of paleolithic people ate perfectly paleolithic diets that were lacking important nutrients because they weren't readily, locally available. But we do know about them and they are available, so a proper modern diet should ensure that they're included.

> I don't get the impression that you've read any of the sources I already provided, so I'm not going to go into any detail.

That's unnecessarily condescending. This is a casual conversation. If we were talking on a subway, would you stop talking until I'd read a few books? I'm talking to you because I'm interested in this topic. If you didn't mean to discuss it with people who haven't studied the subject, you should have made that clear up front.

> The paleo diet allows for *some* nuts and seeds, but not in large quantities (again, different proponents have differing views on this). Seeds are different from wheat, rice, barley, millet, and other grains. Rice may not be as bad as wheat, especially wild rice.

Wild rice isn't really rice. In what way are wheat berries and barleycorns different from seeds?

> It's not really helpful, though narrowly technically correct, to dismiss what I said by saying that "grains are seeds". By grains, I'm talking about the domesticated grasses in the gramineae family.

And you don't think pre-agricultural people ate grass seed? Where do you think they got the desire to cultivate them? Grass seed has been found in dinosaur coprolites.

>> And, if so, do you really think it's a good fit for a modern lifestyle?
>
> Perhaps you should consider changing the modern lifestyle to work better with our genes (until we can reliably alter them).

I already exercise regularly to counteract my otherwise relatively sedentary lifestyle. I'm not quite ready to start living off the land, give up electricity, ...

> What exactly do you mean by the modern lifestyle?

I mean "where and how modern people live". It just seems to me that one's diet should be lifestyle-appropriate.
Paleoliths might have eaten 3000-4000 calories a day, but *I* certainly don't need that. They might have also gone through periods of malnourishment and starvation, but I'm probably not going to emulate that without compelling evidence of its necessity. They also didn't have refrigeration and probably ate a lot of spoiled food.

>> I think one problem with the modern diet is too many refined grains. But whole grains are loaded with nutrition and are absolutely not a problem *in moderation*.
>
> Are you sure whole grains are "loaded with nutrition"?

Yes, whole grains are good sources of carbohydrates, protein, fiber, phytochemicals, vitamins, minerals, etc.

> From what I've seen (using numbers from the USDA nutrient database), that's not the case. For a given number of calories, whole grains are nutritionally poor compared to lean meats (I was very surprised by how nutrient-rich these are), seafood, vegetables, and fruit (plus they contain several "anti-nutrients").

I didn't say anything about nutrients vs. calories. Grains may compare unfavorably to lean meat, but an acre of wheat produces a lot more food than an acre of pasture. Since more than half of all calories currently consumed come from grains, there have to be serious issues involved with phasing them out completely.

> Too bad I can't show you p. 271 of The Paleo Solution by Wolf, which consists of a table comparing mean nutrient density of various food groups. As to them absolutely not being a problem in moderation: individuals clearly vary greatly in their tolerance for the anti-nutrients in whole grains. From what I've read, they absolutely are a problem even in moderation for many people. Even when there are no obvious problems, they may be doing slow damage and raising insulin levels.

Clearly we need to learn more about these anti-nutrients. Even the paleo diet isn't completely free of them, and some may have benefits that outweigh their nutritional costs. The bottom line is that I'm not opposed to learning from the diets of our ancestors to design an optimal modern diet; I just don't think it's the best we can do. And I don't think it's particularly Extropian not to apply science and technology to our diets.

-Dave

From pharos at gmail.com  Sun Nov 14 09:47:59 2010
From: pharos at gmail.com (BillK)
Date: Sun, 14 Nov 2010 09:47:59 +0000
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDF39E6.7090700@satx.rr.com>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On Sun, Nov 14, 2010 at 1:22 AM, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.

Well, it has sort of woken up. Just not in the direction of AI.

It has gone more in the direction of telepathy for humans. Instant, always available communication. The Cloud, Wi-fi, Google, Facebook, Twitter, chat, texting, Skype, email, Buzz, RSS feeds, etc.

So far, though, this ideal seems to be mainly a swamp of trivia and gossip, a distraction from any real achievements. But that might change.

'Prediction is very difficult, especially about the future'.
Niels Bohr

From algaenymph at gmail.com  Sun Nov 14 09:52:05 2010
From: algaenymph at gmail.com (AlgaeNymph)
Date: Sun, 14 Nov 2010 01:52:05 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID: <4CDFB145.8040501@gmail.com>

On 11/14/10 1:47 AM, BillK wrote:
> On Sun, Nov 14, 2010 at 1:22 AM, Damien Broderick wrote:
>> Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.
>
> Well, it has sort of woken up. Just not in the direction of AI.
>
> It has gone more in the direction of telepathy for humans. Instant, always available communication. The Cloud, Wi-fi, Google, Facebook, Twitter, chat, texting, Skype, email, Buzz, RSS feeds, etc.
>
> So far, though, this ideal seems to be mainly a swamp of trivia and gossip, a distraction from any real achievements. But that might change.

What do you expect from a 4-year-old?

From jonkc at bellsouth.net  Sun Nov 14 15:21:55 2010
From: jonkc at bellsouth.net (John Clark)
Date: Sun, 14 Nov 2010 10:21:55 -0500
Subject: [ExI] Let's play What If.
In-Reply-To:
References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID:

On Nov 13, 2010, at 4:09 PM, Stefano Vaj wrote:

>>> My point is that no possible evidence would make you a "copy". The "original" would in any event be, from your perspective, simply a fork behind.
>>
>> I see no reason to assume "you" are the original, and even more important I see no reason to care if "you" are the original.
>
> That is just another way to say the same thing.

And yet another way to say the same thing is "no possible evidence would make the copy-original distinction scientifically relevant"; theological relevance is a different matter entirely, but as I've said before I don't believe in the soul.

> You perceive continuity, that is identity.

You perceive subjective continuity, but how could it be otherwise?

> Previous "forks" are immaterial to such feelings.

Yes, and it matters not one bit if you are on the copy fork or the original fork.

John K Clark

-------------- next part --------------
An HTML attachment was scrubbed...

From stefano.vaj at gmail.com  Sun Nov 14 16:10:59 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 17:10:59 +0100
Subject: [ExI] Let's play What If.
In-Reply-To:
References: <4CC6738E.3050609@speakeasy.net> <4CC76BFC.2080801@satx.rr.com> <4CC7A7FE.9030803@satx.rr.com> <4CC858FE.1060709@satx.rr.com> <87637D00-7198-48F4-85EE-D69E4CAB046B@bellsouth.net> <4CC869E3.9000004@satx.rr.com> <70898B7F-A950-4C61-A453-E71A0D58E238@bellsouth.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID:

2010/11/14 John Clark :
> And yet another way to say the same thing is "no possible evidence would make the copy-original distinction scientifically relevant"; theological relevance is a different matter entirely, but as I've said before I don't believe in the soul.

Exactly my point.

BTW, speaking of essentialist paradoxes: take the cloning performed by provoking a scission in a totipotent embryo (something which obviously does not give rise to two half-children, but to a couple of twins). Has the soul - or, for those who prefer to put some secular veneer on such concepts, the individual's "identity" - gone extinct in favour of two brand-new souls? Has a new soul been added casually to the one twin that remained deprived of it? Has the original soul split in two halves? What about saying that the question does not have any real sense?

--
Stefano Vaj

From spike66 at att.net  Sun Nov 14 16:16:41 2010
From: spike66 at att.net (spike)
Date: Sun, 14 Nov 2010 08:16:41 -0800
Subject: [ExI] hubble video
Message-ID: <004d01cb8417$53074250$f915c6f0$@att.net>

Is this cool or what!

http://www.flixxy.com/hubble-ultra-deep-field-3d.htm

spike

-------------- next part --------------
An HTML attachment was scrubbed...

From stefano.vaj at gmail.com  Sun Nov 14 16:45:50 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 17:45:50 +0100
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
In-Reply-To:
References: <201011131830.oADIU9gt028165@andromeda.ziaspace.com>
Message-ID:

On 14 November 2010 04:39, Dave Sill wrote:
> But I do think it's relevant to apply what we now know when designing a diet for modern man. I'm sure lots of paleolithic people ate perfectly paleolithic diets that were lacking important nutrients because they weren't readily, locally available.

Basically, what we know now is that we have been adapted by Darwinian selection for a few million years to a specific diet. The neolithic revolution accepted the inconveniences related to different nutritional patterns, including a reduction of life expectancy and innumerable pathologies, in exchange for the ability to sustain immensely larger populations on the same territory, allowing the division of labor, abandoning nomadism, catering to some extent for unexpected events, etc.

It is interesting in this context that élites on one side went on with much more "paleo" dietary styles (fresh animal proteins and some fresh fruit) than the masses, and on the other were the first victims of the "addictive" properties of the new nutrition (e.g., abuse of sugars, fermentation products, etc.).
Of course, we still can a) wait for Darwinian mechanisms to kill all diabetes- or obesity- or cavity-prone human beings; b) re-engineer our children to thrive on Coca-Cola, popcorn and candy floss as well as an ant would do; c) optimise our diet for purposes different from generic Darwinian fitness (e.g., a life style requiring 6000 calories per day or intended to help one become a sumo champion or to self-experiment with hypertension is hardly served by a strict paleo diet).

Otherwise, the administration of substances for nutritional purposes which we have not been "designed" to assume is justifiable in non-purely-economic or recreational terms only when they can be shown to generate specific, desirable results. Same as drugs.

> And you don't think pre-agricultural people ate grass seed? Where do you think they got the desire to cultivate them? Grass seed has been found in dinosaur coprolites.

Cereals, e.g., are not really edible by human beings, let alone modern human beings, unless treated and cooked, and even then only to a rather limited extent in their wild varieties... Once again, they were put to use, and "invented" in the first place, not because we had a physiological "need" for them, but because they were a real breakthrough in terms of calories produced per square kilometer.

--
Stefano Vaj

From stefano.vaj at gmail.com  Sun Nov 14 17:03:43 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 18:03:43 +0100
Subject: Re: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To: <4CDF39E6.7090700@satx.rr.com>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On 14 November 2010 02:22, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.

I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.

If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.

The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.

As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.

--
Stefano Vaj

From x at extropica.org  Sun Nov 14 17:07:42 2010
From: x at extropica.org (x at extropica.org)
Date: Sun, 14 Nov 2010 09:07:42 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote:
> On 14 November 2010 02:22, Damien Broderick wrote:
>> Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.
>
> I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.
>
> The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.
>
> As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.

Thanks Stefano. So refreshing to hear such words of reason within a "transhumanist" forum.

- Jef

From agrimes at speakeasy.net  Sun Nov 14 16:59:55 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Sun, 14 Nov 2010 11:59:55 -0500
Subject: [ExI] Let's play What If.
In-Reply-To:
References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com>
Message-ID: <4CE0158B.9000409@speakeasy.net>

John Clark wrote:
> And yet another way to say the same thing is "no possible evidence would make the copy-original distinction scientifically relevant"; theological relevance is a different matter entirely, but as I've said before I don't believe in the soul.

Actually, it's much more interesting than that!

The central dogma of science is that any given experiment will produce the same outcome regardless of where or when it is performed as long as the starting conditions are the same. A corollary of this is that the scientist is impartial to the conditions and outcome of the experiment, that he is an independent and only casually interested observer.

Now when we consider uploading, we can readily support all suppositions about the scientifically testable features of uploading. You should have noted that I have not argued any points based on this kind of science. What I have argued is the metaphysical interpretation of the results. Science has not made, and cannot make, any claims about metaphysics.
Science can erode some of the edges of what was previously metaphysics by weeding out some of the more-wrong understandings of the world, but it can't do much more than that. The identity issue in uploading is precisely the type of question that science is utterly mute about. To see why, all one has to do is go back to the central dogma of science -- the repeatability of experiments. Just as each brain is unique, each uploading will be unique. It is logically impossible to repeat the experiment of destructively uploading someone. In studying people, science is forced to extrapolate from statistics of similar but not identical sets of people. So for physical processes, science can measure things out to ten decimal places; for people, the best science can do is probably around 5%.

Furthermore, when contemplating the uploading of yourself, the only relevant viewpoint is your own. Because you are a human being, you do not have the privilege of selecting your point of view. You are not the [mad] scientist but the guinea pig, and it would be foolish to think from any other perspective. Even worse, because you value your life, you are not indifferent to the outcome but an intensely interested party.

In the standard definition of uploading, you are left with two possible outcomes. You will either be in a bio-disposal bag in the back of somebody's office or you will be running in a Visual Basic-based simulator on somebody's Windows Vista machine.

((( This is an inevitable outcome because I don't recall ever reading a post by an uploader saying "let's work on developing an operating system and suite of simulation software that will be safe and pleasant to live in." Indeed, the only person who has made mention of the subject of operating systems is myself. Such posts were rejected on the grounds that vi is the ultimate text editor. This is one of the stronger pillars supporting my distrust of uploaders. )))

Let's assert that the former will be less pleasant than the latter. Once again, **BECAUSE YOU DO NOT HAVE THE PRIVILEGE OF SELECTING YOUR POINT OF VIEW** (it is tautologically impossible to consider any alternative) you must therefore, necessarily and inevitably, be the one ending up in the bio-waste bag. And thus ends any rational consideration of destructive brain uploading. (Discussing what happens to the bio-waste bag is uninteresting for obvious reasons.)

--
DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE.
Powers are not rights.

From natasha at natasha.cc  Sun Nov 14 17:26:40 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 11:26:40 -0600
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>

Nice. Yup.

Natasha Vita-More

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj
Sent: Sunday, November 14, 2010 11:04 AM
To: ExI chat list
Subject: Re: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?

On 14 November 2010 02:22, Damien Broderick wrote:
> Extrope Dan Clemmensen posted here around 15 years ago his conviction that the Singularity would happen "before 1 May, 2006" (the net would "wake up"). Bad luck.
I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.

If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.

The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.

As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.

--
Stefano Vaj

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From protokol2020 at gmail.com  Sun Nov 14 17:30:46 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Sun, 14 Nov 2010 18:30:46 +0100
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

The conservatives like you two are just like those Indians who wanted to prevent any Moon landing on the basis of "don't touch our grandmother". The warm feeling of ancient wisdom means you are probably wrong.

-------------- next part --------------
An HTML attachment was scrubbed...

From x at extropica.org  Sun Nov 14 17:59:22 2010
From: x at extropica.org (x at extropica.org)
Date: Sun, 14 Nov 2010 09:59:22 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com>
Message-ID:

2010/11/14 Tomaz Kristan :
> The conservatives like you two are just like those Indians who wanted to prevent any Moon landing on the basis of "don't touch our grandmother". The warm feeling of ancient wisdom means you are probably wrong.

Tomaz, I'm about as far from "conservative" as it gets. My thinking on human enhancement, transformation and personal identity, and the systems necessary for supporting such growth is in fact too radical for the space-cadet mentality that tends to dominate these discussions. I would suggest the same is true of Stefano.

For example, if we could ever get past the "conservative" belief in a discrete, essential self (a soul by any other name), and all the wasted, misguided effort entailed in its survival, we could move on to much more productive discussion of increasing awareness of our present but evolving values, methods for their promotion, and structures of agency with many more degrees of freedom for ongoing meaningful growth.
- Jef

From michaelanissimov at gmail.com  Sun Nov 14 18:02:10 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 10:02:10 -0800
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID:

I resent this, because it implies that everyone at SIAI is as stupid and self-deluded as fundamentalist Christians. Hint: we aren't. There's a reason we've got as far as we have, and it's through careful arguments that appeal to smart people, not cultish arguments that appeal to gullible idiots. I'll gladly have an evidence-based debate on this with someone if they want to see the substance of our real arguments.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...

From michaelanissimov at gmail.com  Sun Nov 14 17:59:20 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 09:59:20 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID:

Here's a list I put together a long time ago:

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated.

The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we are from most other animals. We're talking about an area of mindspace way outside what we're familiar with.

The AI Advantage list matters to any AI-driven Singularity. You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are *separate arguments* -- as if I need to say it.

What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...
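[The "mind that can copy itself" point above can be made concrete with a toy calculation. What follows is a minimal sketch, not anything from the AI Advantage list itself; the hourly income, compute-rent cost, and full-reinvestment assumption are invented purely for illustration. It only shows that *if* each copy can earn more per hour than the computing power it runs on costs to rent, and the surplus is reinvested in renting more copies, the population of copies compounds exponentially.]

# Toy model: a human-equivalent mind that can copy itself rents
# computing power, earns income, and spends the surplus on renting
# still more copies. All numbers here are hypothetical.

def copies_over_time(hours, income_per_copy_hour=1.00,
                     rent_per_copy_hour=0.50, start_copies=1.0):
    """Population of copies after each hour, assuming the entire
    surplus is immediately reinvested in renting new copies."""
    copies = start_copies
    history = []
    for _ in range(hours):
        surplus = copies * (income_per_copy_hour - rent_per_copy_hour)
        copies += surplus / rent_per_copy_hour  # surplus rents more copies
        history.append(copies)
    return history

# With these numbers the population doubles every hour:
# after 24 hours ~1.7e7 copies, after 72 hours ~4.7e21.
print(copies_over_time(72)[-1])

[Whether the premises hold -- that a copy's marginal income really exceeds its marginal rent, and that the market doesn't saturate long before the interesting regime -- is exactly the debate being asked for here; the arithmetic itself is not in question.]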
From rpwl at lightlink.com  Sun Nov 14 17:50:30 2010
From: rpwl at lightlink.com (Richard Loosemore)
Date: Sun, 14 Nov 2010 12:50:30 -0500
Subject: [ExI] Singularity
In-Reply-To: <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID: <4CE02166.3010707@lightlink.com>

> I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.
>
> The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.
>
> As such, it should perhaps concern historians of religions and cultural anthropologists more than transhumanists or researchers.
>
> --
> Stefano Vaj

I hate to disagree, but ... I could not disagree more. :-)

The most widely accepted meaning of "the singularity" is, as I understood it, completely bound up with the intelligence explosion that is expected to occur when we reach the point that computer systems are able to invent and build new technology at least as fast as we can. The *point* of the whole singularity idea is that invention is limited, at present, by the fact that inventors (i.e. humans) only live for a short time, and cannot pass on their expertise to others except by the very slow process of teaching up-and-coming humans. When the ability to invent is fully established in computational systems other than humans, we suddenly get the ability to multiply the inventive capacity of the planet by an extraordinary factor.

That moment -- that time when the threshold is reached -- is the singularity. The word may be a misnomer, because the curve is actually a ramp function, not a point singularity, but that is just an accident of history.

To detach the idea from all that intelligence explosion context and talk about a time at which our ability to predict the future breaks down is vague and (in my opinion) meaningless. We cannot predict the future NOW, never mind at some point in the future. And there are also arguments that would make the intelligence explosion occur in such a way that the future became much *more* predictable than it is now!

Richard Loosemore

From michaelanissimov at gmail.com  Sun Nov 14 17:52:06 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 09:52:06 -0800
Subject: [ExI] Hard Takeoff
Message-ID:

On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote:
>
> I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.
> If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.

We have some reason to believe that a roughly human-level AI could rapidly improve its own capabilities, fast enough to get far beyond the human level in a relatively short amount of time. The reason why is that a "human-level" AI would not really be "human-level" at all -- it would have all sorts of inherently exciting abilities, simply by virtue of its substrate and necessities of construction:

1. ability to copy itself
2. stay awake 24/7
3. spin off separate threads of attention in the same mind
4. overclock helpful modules on-the-fly
5. absorb computing power (humans can't do this)
6. constructed from scratch with self-improvement in mind
7. the possibility of direct integration with new sensory modalities, like a codic modality
8. the ability to accelerate its own thinking speed depending on the speed of available computers

When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks. If it can make $1 of "income" with less than $1 of computing power, you have the ingredients for a hard takeoff. There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have.

Many valuable points are made here; why do people always ignore them?

http://singinst.org/upload/LOGI//seedAI.html

Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility.

> The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism.

We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimal thing; thus we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...

From michaelanissimov at gmail.com  Sun Nov 14 18:30:34 2010
From: michaelanissimov at gmail.com (Michael Anissimov)
Date: Sun, 14 Nov 2010 10:30:34 -0800
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com>
Message-ID:

On Sat, Nov 13, 2010 at 2:10 PM, John Grigg wrote:
> And I noticed he did "friendly AI research" with a grad student, and not a fully credentialed academic or researcher.

Marcello Herreshoff is brilliant for any age. Like some other of our Fellows, he has been a top-scorer in the Putnam competition.
He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School.

Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...

From spike66 at att.net  Sun Nov 14 18:23:44 2010
From: spike66 at att.net (spike)
Date: Sun, 14 Nov 2010 10:23:44 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID: <007a01cb8429$12620390$37260ab0$@att.net>

Michael! Too long since we heard from you bud. Welcome back! {8-]

spike

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov
Sent: Sunday, November 14, 2010 9:59 AM
To: ExI chat list
Subject: Re: [ExI] Hard Takeoff

Here's a list I put together a long time ago:

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...

From stefano.vaj at gmail.com  Sun Nov 14 18:51:35 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Sun, 14 Nov 2010 19:51:35 +0100
Subject: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID:

2010/11/14 Michael Anissimov :
> The idea of a mind that can copy itself directly is a really huge deal.

I am quite interested in the subject, especially since we are preparing an issue of Divenire. Rassegna di Studi Interdisciplinari sulla Tecnica e il Postumano entirely devoted to robotics and AI, and we might be offering you a tribune to present your or SIAI's ideas on the subject.

Personally, however, I find the idea of "mind" and "intelligence" presented in the linked post still way too anthropomorphic. I am in fact not persuaded that "intelligence" is anything special, mystical or rare, or that human (animal?) brains escape under some aspects or other Wolfram's Principle of Computational Equivalence. Accordingly, "AI" is little more to me than human-like features which have not been practically implemented yet in artificial computers - receding into the field of general IT once they are.

As to "minds" in the sense above, I suspect that they have little to do with intelligence, and are nothing other than evolutionary artifacts, which of course can be emulated with varying performances - as anything else, for that matter - on any conceivable platform, ending up either with "uploads" of existing individuals, or with purely "artificial", patchwork personalities made up from arbitrary fragments.

If this is the case, we can of course implement systems passing not just a Turing-generic test (i.e., systems which cannot be statistically distinguished from human beings in a finite series of exchanges), but also a Turing-specific test (i.e., systems which cannot be distinguished from John) or a Turing-categorial test (systems which cannot be distinguished from the average 40-year-old serial killer from Washington, DC) -- all of them exhibiting an "agency" which would otherwise require some billion years of selection of long chains of carbon-chemistry molecules.
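[The "cannot be statistically distinguished ... in a finite series of exchanges" criterion above can be read as an ordinary hypothesis test. Here is a minimal sketch which assumes away the hard part -- a judge who labels each exchange "human" or "machine" -- and just asks whether the judge's hit rate beats coin-flipping; the example numbers are arbitrary.]

import math

def p_value_judge_beats_chance(hits, trials):
    """One-sided binomial p-value: the probability that a judge
    guessing at chance (p = 0.5) would identify the machine correctly
    at least `hits` times in `trials` exchanges. A small value means
    the system has been statistically distinguished from a human
    within this finite series; a large one means it has passed so far."""
    return sum(math.comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

print(p_value_judge_beats_chance(32, 50))  # ~0.03: a detectable tell
print(p_value_judge_beats_chance(26, 50))  # ~0.44: indistinguishable so far

[The same machinery would cover the Turing-specific and Turing-categorial variants; only the reference population the judge compares against changes.]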
This is per se an interesting experiment, but not so paradigm-changing, since it would appear to me that anything which can be initiated by such an emulation can also be initiated by a flesh-and-bone (or... uploaded) individual with equivalent processing resources and bandwidth and interfaces at his or her fingertips. Especially since it is reasonable to assume that animal brains are already decently optimised for many essentially "animal-like" tasks.

Moreover, as already discussed on a few lists, meaningful concerns about the "risks for the survival of the human race" in a framework where they would become increasingly widespread would require, to escape paradox, a more critical and explicit definition of our concepts of "risk", "survival", "human", "extinction", "race", "offspring", "death", and so forth, as well as of the underlying value system.

--
Stefano Vaj

From aleksei at iki.fi  Sun Nov 14 18:28:22 2010
From: aleksei at iki.fi (Aleksei Riikonen)
Date: Sun, 14 Nov 2010 20:28:22 +0200
Subject: [ExI] Singularity
Message-ID:

On Sun, Nov 14, 2010 at 7:03 PM, Stefano Vaj wrote:
>
> I still believe that seeing the Singularity as an "event" taking place at a given time betrays a basic misunderstanding of the metaphor, only too open to the sarcasm of people such as Carrico.
>
> If we go for the original meaning of "the point in the future where the predictive ability of our current forecast models and extrapolations obviously collapse", it would seem obvious that the singularity is more of the nature of an horizon, moving forward with the perspective of the observer, than of a punctual event.

You should be aware that for a long time, people have not used the word "Singularity" only according to that so-called original use. (Actually not the original, since e.g. John von Neumann talked of a "singularity" much earlier.) So it's not knowledgeable or appropriate of you to imply that that would be what everyone has been talking about. Especially when considering the cases where people have given explicit careful definitions for what they are talking about.

http://yudkowsky.net/singularity/schools

> The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days

Who's going for "listening to prophets"? Serious people like Nick Bostrom and the SIAI present actual, concrete steps and measures that need to be taken to minimize risks.

http://www.nickbostrom.com/fut/evolution.html
http://singinst.org/riskintro/index.html

--
Aleksei Riikonen - http://www.iki.fi/aleksei

From max at maxmore.com  Sun Nov 14 19:03:00 2010
From: max at maxmore.com (Max More)
Date: Sun, 14 Nov 2010 13:03:00 -0600
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
Message-ID: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com>

Amara: You noted that the paleo diet was optimized for reproduction, but not necessarily longevity, and asked if I had any data on that. Good point, and no, I don't know of data specifically on *maximum* life span. There is plenty of discussion of average life span of paleolithic and contemporary hunter-gatherers. For paleo people, obviously it's extremely hard to separate out the relative contribution of diet from the tendency to die of injury, infection, and so on. The evidence I've seen suggests that paleolithic people actually lived longer (and were larger and more muscular) than those who superseded them in early agricultural times.
http://www.thepaleodiet.com/faqs/

most deaths in hunter-gatherer societies were related to the accidents and trauma of a life spent living outdoors without modern medical care, as opposed to the chronic degenerative diseases that afflict modern societies. In most hunter-gatherer populations today, approximately 10-20% of the population is 60 years of age or older. These elderly people have been shown to be generally free of the signs and symptoms of chronic disease (obesity, high blood pressure, high cholesterol levels) that universally afflict the elderly in western societies. When these people adopt western diets, their health declines and they begin to exhibit signs and symptoms of "diseases of civilization."

I think you might find something on the longevity issue in the Michael Rose video. It seems plausible that the paleo diet (and accompanying paleo-style exercise) would be good for adding years of healthy life, especially considering how it reduces markers of aging and improves health according to many measures. Gerontologists often point the finger at AGEs as one major contributing factor to aging, and there's no doubt that a paleo diet reduces production of AGEs. Intermittent fasting (IF) is popular among paleo practitioners, and I've seen intriguing evidence that IF may produce similar life-extending effects to caloric restriction.

Online pointers:
http://www.marksdailyapple.com/life-expectancy-hunter-gatherer/
http://www.marksdailyapple.com/hunter-gatherer-lifespan/
http://www.paleodiet.com/life-expectancy.htm
http://www.beyondveg.com/nicholson-w/angel-1984/angel-1984-1a.shtml
http://donmatesz.blogspot.com/2010/02/paleo-life-expectancy.html

Haven't more than glanced at this one:
http://donmatesz.blogspot.com/2010/04/practically-paleo-diet-reduces-markers.html

Max

From natasha at natasha.cc  Sun Nov 14 19:05:02 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 13:05:02 -0600
Subject: Re: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1>

Hi Michael, great to hear from you. I looked at your link and have to say that your analysis looks very, very, very much like my Primo Posthuman supposition for the future of brain, mind and intelligence as related to AI and the Singularity. My references are quite similar to yours: Kurzweil, Voss, Goertzel, Yudkowsky, but I also include Vinge from my interview with him in the mid 1990s.

Best,
Natasha

Natasha Vita-More

_____

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov
Sent: Sunday, November 14, 2010 11:59 AM
To: ExI chat list
Subject: Re: [ExI] Hard Takeoff

Here's a list I put together a long time ago:

http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated.

The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we are from most other animals.
We're talking about an area of mindspace way outside what we're familiar with.

The AI Advantage list matters to any AI-driven Singularity. You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are separate arguments -- as if I need to say it.

What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI.

--
michael.anissimov at singinst.org
Singularity Institute
Media Director

-------------- next part --------------
An HTML attachment was scrubbed...

From giulio at gmail.com  Sun Nov 14 18:40:02 2010
From: giulio at gmail.com (Giulio Prisco)
Date: Sun, 14 Nov 2010 19:40:02 +0100
Subject: [ExI] Singularity (Changed Subject Line)
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1>
Message-ID:

I wish to support Michael here. I don't share many of the SIAI positions and views on the Singularity and the evolution of AGI, but I think they do interesting work and play a useful role. The world is interesting because it is big and varied, with different persons and groups doing their own things with their own focus.

In particular I think the criticism of idiots like Carrico and his handful of followers, mentioned by Stefano, should be ignored. We have better and more interesting things to do.

2010/11/14 Michael Anissimov :
> I resent this, because it implies that everyone at SIAI is as stupid and self-deluded as fundamentalist Christians. Hint: we aren't. There's a reason we've got as far as we have, and it's through careful arguments that appeal to smart people, not cultish arguments that appeal to gullible idiots. I'll gladly have an evidence-based debate on this with someone if they want to see the substance of our real arguments.
> --
> michael.anissimov at singinst.org
> Singularity Institute
> Media Director

From protokol2020 at gmail.com  Sun Nov 14 19:13:34 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Sun, 14 Nov 2010 20:13:34 +0100
Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update?
In-Reply-To:
References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com>
Message-ID:

x,

Show us your plans and views.

Michael Anissimov,

2020 might be too late to begin with something essential.

-------------- next part --------------
An HTML attachment was scrubbed...
From max at maxmore.com  Sun Nov 14 19:19:51 2010
From: max at maxmore.com (Max More)
Date: Sun, 14 Nov 2010 13:19:51 -0600
Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)]
Message-ID: <201011141919.oAEJJw26028738@andromeda.ziaspace.com>

In reply to Dave Sill:

Your reply again illustrates why I wanted you to read some of the sources. You're assuming I'm advocating rejecting any adjustments to (what we know of) the paleo diet (which itself varied according to people's location and environment). Even a quick look would have shown you that many paleos favor cautious and moderate supplementation, for instance:
http://www.marksdailyapple.com/definitive-guide-to-primal-supplementation/

On the commonalities and variations on the paleo diet:
http://www.paleodiet.com/definition.htm

> I don't think it's particularly Extropian not to apply science and technology to our diets.

Now you're telling me what's extropian and doing so based on a false assumption.

> I'm not quite ready to start living off the land, give up electricity, ...

And no one is suggesting that you do. I posted the information and links so that people could explore this. Let me make it clear that I am not willing to engage in a lengthy set of replies with those who clearly haven't read any of the material. If you find this condescending, sorry. I find your reply condescending too, so that makes us even. :-) See, even this post is already drawing me into a discussion I didn't want to have. I'll try to make it my last.

> Grains may compare unfavorably to lean meat, but an acre of wheat produces a lot more food than an acre of pasture. Since more than half of all calories currently consumed come from grains, there have to be serious issues involved with phasing them out completely.

Serious issues, yes, but perhaps not issues we can't overcome. Jared Diamond complains that "agriculture is the worst mistake in the history of the human race". Loren Cordain seems to think we've put ourselves in a difficult situation by becoming so dependent on agriculture. For an interestingly different perspective on the standard vegetarian position, see this piece that I came across a few weeks ago:

Animal, Vegetable, or E. O. Wilson
http://wattsupwiththat.com/2010/09/11/animal-vegetable-or-e-o-wilson/

> Yes, whole grains are good sources of carbohydrates, protein, fiber, phytochemicals, vitamins, minerals, etc.

http://www.thepaleodiet.com/articles/Cereal%20article.pdf page 25.

From p. 24: "All cereal grains have significant nutritional shortcomings which are apparent upon analysis. From table 4 it can be seen that cereal grains contain no vitamin A and except for yellow maize, no cereals contain its metabolic precursor, beta-carotene. Additionally, they contain no vitamin C, or vitamin B12. In most western, industrialized countries, these vitamin shortcomings are generally of little or no consequence, since the average diet is not excessively dependent upon grains and usually is varied and contains meat (a good source of vitamin B12), dairy products (a source of vitamins B12 and A), and fresh fruits and vegetables (a good source of vitamin C and beta-carotene)."

From page 26: "However, as more and more cereal grains are included in the diet, they tend to displace the calories that would be provided by other foods (meats, dairy products, fruits and vegetables), and can consequently disrupt adequate nutritional balance.
In some countries of Southern Asia, Central America, the Far East and Africa cereal product consumption can comprise as much as 80% of the total caloric intake [16], and in at least half of the countries of the world, bread provides more than 50% of the total caloric intake [16]. In countries where cereal grains comprise the bulk of the dietary intake, vitamin, mineral and nutritional deficiencies are commonplace."

I've already provided pointers on the topic, but see Cordain's discussion of anti-nutrients in cereals from page 42.

Apart from replying to Natasha's question, no more time for this. To those interested in exploring further, I have plenty more good information sources if you want them.

Max

-------------------------------------
Max More, Ph.D.
Strategic Philosopher
Co-editor, The Transhumanist Reader
The Proactionary Project
Vice Chair, Humanity+
Extropy Institute Founder
www.maxmore.com
max at maxmore.com
-------------------------------------

From natasha at natasha.cc  Sun Nov 14 19:24:20 2010
From: natasha at natasha.cc (Natasha Vita-More)
Date: Sun, 14 Nov 2010 13:24:20 -0600
Subject: Re: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1>

Michael wrote:
> What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply.

This is probably true, because most people don't understand strong AI or what a Singularity is (whether one big event or a series of surges forming a big event over time).

> There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI.

Well, let's look at your last statement. Diverse views about a hard takeoff were around before SIAI. You are correct that SIAI is one well-known organization within transhumanism, but the Singularity is larger than SIAI and has many varied views/theories which are addressed by transhumanists and nontranshumanists.

I don't associate with any one line of thinking on the Singularity. I pretty much stick with Vinge and have my own views based on my own research and scenario development. I think SIAI has done great work and has produced amazing events. The only problem I have ever had with SIAI is that it does not include women like me -- women who have been around for a long time and could contribute something meaningful to the conversation, outside of Eli's dismissal of women and/or media design as a substantial field of inquiry and consequential to our future of AGIs. But you and I have had this conversation several times before, and I see nothing has changed.

By the way, since you applauded a guy who dissed me a couple of years ago for my talk at Goertzel's AI conference, I thought you might like to know that Kevin Kelly has a new book out, _What Technology Wants_, which addresses technology from a similar thematic vantage as I addressed the Singularity and AI in my talk about what AGI wants and its intended consequences. Nevertheless, you are one of my favorite transhumanists and I admire your work.

By the way, this list's discussion on the Singularity was too focused on Eli, and in a disparaging way.
I support and encourage more discussion from varied perspectives, and I think that Stefano did a good job of objectively presenting his own views; whether I agree with him or not, they are far better than attacks on Eli. Best, Natasha -------------- next part -------------- An HTML attachment was scrubbed... URL:

From aware at awareresearch.com Sun Nov 14 19:26:25 2010 From: aware at awareresearch.com (Aware) Date: Sun, 14 Nov 2010 11:26:25 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID:

2010/11/14 Michael Anissimov : > We have some reason to believe that a roughly human-level AI could rapidly > improve its own capabilities, fast enough to get far beyond the human level > in a relatively short amount of time. The reason why is that a > "human-level" AI would not really be "human-level" at all -- it would have > all sorts of inherently exciting abilities, simply by virtue of its > substrate and necessities of construction: > 1. ability to copy itself > 2. stay awake 24/7 > 3. spin off separate threads of attention in the same mind > 4. overclock helpful modules on-the-fly > 5. absorb computing power (humans can't do this) > 6. constructed from scratch with self-improvement in mind > 7. the possibility of direct integration with new sensory modalities, like > a codic modality > 8. the ability to accelerate its own thinking speed depending on the speed > of available computers > When you have a human-equivalent mind that can copy itself, it would be in > its best interest to rent computing power to perform tasks.

Michael, what has always frustrated me about Singularitarians, apart from their anthropomorphizing of "mind" and "intelligence", is the tendency, natural for isolated elitist technophiles, to ignore the much greater social context. The vast commercial and military structure supports and drives development providing increasingly intelligent systems, exponentially augmenting and amplifying human capabilities, hugely outweighing, not only in height but in breadth, the efforts of a small group of geeks (and I use the term favorably, being one myself.) The much more significant and accelerating risk is not that of a "recursively self-improving" seed AI going rogue and tiling the galaxy with paper clips or copies of itself, but of relatively small groups of people exploiting technology (AI and otherwise) disproportionate to their context of values. The need is not for a singleton nanny-AI but for development of a fractally organized synergistic framework for increasing awareness of our present but evolving values, and our increasingly effective means for their promotion, beyond the capabilities of any individual biological or machine intelligence. It might be instructive to consider that a machine intelligence certainly can and will outperform the biological kludge, but MEANINGFUL intelligence improvement entails adaptation to a relatively more complex environment. This implies that an AI (much more likely a human-AI symbiont) poses a considerable threat in present terms, with acquisition of knowledge up to and integrating between existing silos of knowledge, but lacking relevant selection pressure it is unlikely to produce meaningful growth and will expend nearly all its computation exploring irrelevant volumes of possibility space. Singularitarians would do well to consider more ecological models in this Red Queen's race. - Jef
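The compounding claim quoted at the top of the message above is easy to make concrete: the rent-more-compute loop is just compound interest applied to computing power. Below is a minimal toy sketch in Python; the dollar figures (a $100/day rental cost per copy and a $110/day earning rate) are invented purely for illustration and appear nowhere in the thread.

    # Toy model of the compute-rental feedback loop: each running copy
    # earns slightly more per day than its rented compute costs, and all
    # profit is immediately reinvested in renting further copies.
    # The dollar figures are illustrative assumptions, not data.

    COST_PER_COPY = 100.0     # $/day of rented compute per copy (assumed)
    REVENUE_PER_COPY = 110.0  # $/day earned per copy (assumed)

    copies = 1.0
    for day in range(1, 366):
        profit = copies * (REVENUE_PER_COPY - COST_PER_COPY)
        copies += profit / COST_PER_COPY   # profit buys more rented copies
        if day % 73 == 0:
            print(f"day {day:3d}: {copies:.3g} copies")

At those made-up rates the copy count grows 10% per day, roughly fifteen orders of magnitude in a year. The sketch establishes only that the dynamics are exponential whenever revenue per copy exceeds cost; whether that margin would be positive at all, and whether it would stay positive as copies flood their own market, is precisely what is disputed in the replies that follow.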
From giulio at gmail.com Sun Nov 14 18:34:31 2010 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 14 Nov 2010 19:34:31 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID:

Though I may share your feeling that our intuitive notion of self may need a radical redefinition in the future, in particular after deployment of mind uploading tech, I will continue to feel free to support what you call "wasted, misguided effort entailed in its survival".

G.

On Sun, Nov 14, 2010 at 6:59 PM, wrote: > 2010/11/14 Tomaz Kristan : >> The conservatives like you two are doll like those Indians who wanted to >> prevent any Moon landing on the basis of "don't touch our grandmother". >> The warm feeling of ancient wisdom means, you are probably wrong. > > Tomaz, I'm about as far from "conservative" as it gets. My thinking > on human enhancement, transformation and personal identity, and the > systems necessary for supporting such growth is in fact too radical > for the space-cadet mentality that tends to dominate these > discussions. I would suggest the same is true of Stefano. > > For example, if we could ever get past the "conservative" belief in a > discrete, essential self (a soul by any other name), and all the > wasted, misguided effort entailed in its survival, we could move on to > much more productive discussion of increasing awareness of our present > but evolving values, methods for their promotion, and structures of > agency with many more degrees of freedom for ongoing meaningful > growth. > > - Jef > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >

From natasha at natasha.cc Sun Nov 14 19:29:18 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 14 Nov 2010 13:29:18 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID:

Nice. You bring n-order cybernetics into the hard takeoff, which I have not seen written about ... yet. Michael, what do you think about seeing a hard takeoff through the lens of n-order cybernetics?

Natasha Vita-More
From spike66 at att.net Sun Nov 14 19:32:21 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 11:32:21 -0800 Subject: [ExI] Singularity (Changed Subject Line) In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1> Message-ID: <00ba01cb8432$a86cb980$f9462c80$@att.net>

Please, we are among friends here, smart ones. Do refrain from comments such as "...idiots such as ___ [any person's name]..." This is Extropy chat. We can do better than this. Attack the ideas, not the man. spike

-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of ... In particular I think the criticism of idiots ...

From bbenzai at yahoo.com Sun Nov 14 19:40:48 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 14 Nov 2010 11:40:48 -0800 (PST) Subject: [ExI] Let's play What If.
In-Reply-To: Message-ID: <826395.808.qm@web114413.mail.gq1.yahoo.com>

Stefano Vaj wrote: > BTW, speaking of essentialist paradoxes: take the cloning > effected by > provoking a scission in a totipotent embryo (something > which obviously > does not give rise to two half-children, but to a couple > of twins). > > Has the soul - or, for those who prefer to put some secular > veneer on > such concepts, the individual's "identity" - gone extinct > in favour of > two brand-new souls? Has a new soul been added casually to > the one > twin that remained deprived of it? Has the original soul > split into two > halves? > > What about saying that the question does not have any real > sense?

The question doesn't have any real sense, but the significant thing is why. The reason is that a blastocyst has no central nervous system. No CNS, no thoughts. No thoughts, no identity. No identity, no 'soul'. QED. AFAIK, any scission in an embryo far enough along to have a CNS would kill both halves (or, if not, each half would have only half a brain, which would be an interesting (though probably tragic) scenario). A more realistic version of your question would be: What is the situation, identity-wise, with someone whose Corpus Callosum has been cut? Are they two distinct people now? Or are they one mentally disabled person, with a fragmented mind?

BTW, I find your turn of phrase: "the soul - or, for those who prefer to put some secular veneer on such concepts, the individual's 'identity'...", a little odd. Why a 'secular veneer'? Is secularism not the default position, in your opinion? I'd have expected you to say "the identity - or, for those who prefer to put some supernatural veneer on such concepts, the individual's 'soul'...".

Ben Zaiboc

From max at maxmore.com Sun Nov 14 19:55:52 2010 From: max at maxmore.com (Max More) Date: Sun, 14 Nov 2010 13:55:52 -0600 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] Message-ID: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com>

Natasha asked: >Max, after you respond to Amara, would you please advise me how I >can maintain and even gain weight on the paleo diet? And, how do >you see the issues of how food is grown / raised, that is very >different from "organic" foods? (kiss)

It seems to be easy for people who are considerably overweight to slim down at a rate of one to two pounds per week. It seems to be a natural result of the relatively low intake of carbs on a paleo diet. However, even while losing body fat, it's easy to maintain lean body mass (muscle and bone). The paleo diet is not a weight loss diet, although it can certainly be used for that purpose. You may have formed a slightly misleading impression from me, because I have been aiming (and for another few weeks will aim) to lose body fat while on paleo. I'm aiming to reduce it from my starting level (which was perfectly healthy) down to a very lean 8%. That's for purely aesthetic reasons and isn't at all necessary for health purposes. In pursuit of that goal, I modified the regular paleo diet (in so far as there is an accepted standard) to be considerably lower in carbs. By keeping carbs under 50 g/day, I should be in ketosis with accelerated fat burning. I could probably increase that to between 50 and 100 g and still do well. The more standard paleo/primal diet would have you consuming around 100 to 150 g of carbs, all from vegetables and fruits. (This is compared to the 300+ g (often much higher) of carbs in the average American diet.)
So, if you want to maintain and even gain weight (so long as it's not mostly fat), you would simply eat more, especially more (healthy) fats, with their higher concentration of calories. I imagine it's *possible* to put on lots of body fat on a paleo diet, but it would be quite a difficult task. If you mean that you want to maintain and gain muscle while perhaps also adding a few pounds of fat (for aesthetic reasons)... well, I don't know. You would have to try it. You might also pose the question on one of the helpful paleo forums. Especially good is Mark Sisson's: http://www.marksdailyapple.com

Almost missed the second question: As you know, I have a low opinion of the "organic" label. However, it can sometimes convey useful information and point to superior nutritional sources. I'm not at all convinced of the need or value in buying "organic" fruit or vegetables. The organic label might be useful for eggs, since these may (*may*) come from a source that gives them higher levels of omega-3s. The organic label, when applied to animal foods, usually means that the food comes from a grass-fed source, which it seems produces a healthier balance of fatty acids. I thought the same was true of fish, if organic implies wild rather than farmed, but an analysis by Loren Cordain suggests otherwise. He says that farmed fish are changing to more closely resemble wild fish. Wild-caught fish still have slightly better fatty acid ratios, but not by a lot. At the same time, farmed fish have more of the fatty acids in total, so you can get just as much or more of the omega-3s from farmed fish. So, given the vagueness of "organic", currently (I'm open to new information, obviously) it seems more useful and appealing with regard to meat and eggs, and not so much for fruit, vegetables, or fish.

Max

From spike66 at att.net Sun Nov 14 19:29:10 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 11:29:10 -0800 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com> References: <201011141903.oAEJ37I5006502@andromeda.ziaspace.com> Message-ID: <00b901cb8432$36e06d20$a4a14760$@att.net>

... From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Max More ... >...Intermittent fasting (IF) is popular among paleo practitioners, and I've seen intriguing evidence that IF may produce similar life extending effects to caloric restriction...Max

I was doing this way back before it was cool. It was for cleaning out the system. Nothing scientific, just eating nothing solid for an entire day, couple, three times a year. Haven't done it in the past 5 years or so. Feels right. spike

From agrimes at speakeasy.net Sun Nov 14 19:32:23 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 14 Nov 2010 14:32:23 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE03947.3070806@speakeasy.net>

Michael Anissimov wrote: > We have some reason to believe that a roughly human-level AI could > rapidly improve its own capabilities, fast enough to get far beyond the > human level in a relatively short amount of time.
The reason why is > that a "human-level" AI would not really be "human-level" at all -- it > would have all sorts of inherently exciting abilities, simply by virtue > of its substrate and necessities of construction:

OMG, this is the first posting by the substrate fetishist and RYUC priest Anissimov I've read in many long years. =P

> 1. ability to copy itself

Sufficiently true. nb: requires work by someone with a pulse to provide hardware space, etc... (at least for now).

> 2. stay awake 24/7

FALSE. Not implied. The substrate does not confer or imply this property because an uploaded mind would still need to sleep for precisely the same reasons a physical brain does.

> 3. spin off separate threads of attention in the same mind

FALSE. (same reason as for 2).

> 4. overclock helpful modules on-the-fly

Possibly true, but strains the limits of plausibility; also, the benefits of this are severely limited.

> 5. absorb computing power (humans can't do this)

FALSE. Implies scalability of the hardware and software architecture not at all implied by simply residing in a silicon substrate; indeed, this is a major research issue in computer science.

> 6. constructed from scratch with self-improvement in mind

Possibly true but not implied.

> 7. the possibility of direct integration with new sensory modalities, > like a codic modality

True, but not unique; the human brain can also integrate with new sensory modalities, this has been tested.

> 8. the ability to accelerate its own thinking speed depending on the > speed of available computers

True to a limited extent; also, speed is not everything.

> When you have a human-equivalent mind that can copy itself, it would be > in its best interest to rent computing power to perform tasks. If it > can make $1 of "income" with less than $1 of computing power, you have > the ingredients for a hard takeoff.

Mostly true. Could, would, and should being discrete questions here.

> Many valuable points are made here, why do people always ignore them? > http://singinst.org/upload/LOGI//seedAI.html

Cuz it's just a bunch of blather that has close to the lowest possible information density of any text written in the English language. Thankfully, the author has since proven that he doesn't have what it takes to actually destroy the world or even cause someone else to do so; it is therefore safe to ignore him and everything he's ever said.

> Prediction: most comments in response to this post will again ignore the > specific points in favor of a rapid takeoff and simply dismiss the idea > based on low intuitive plausibility.

My plans for galactic conquest rely on the possibility of a hard takeoff, therefore I'm working enthusiastically towards developing AGI myself, with my own robots and hardware. Nothing can stop me! Mwahahahaha, etc, etc... By some combination of building a TARDIS and taking myself a few hundred million lightyears from this insane rock and using all available means to crush the efforts of people who think destructive uploading is acceptable, I might just survive! =P

> The Singularity as an incumbent rapture - or > doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to > many to present it these days - can on the other hand easily be > deconstructed as a secularisation of millennarist myths which have > plagued western culture since the advent of monotheism. > We have real, evidence-based arguments for an abrupt takeoff.
One is > that the human speed and quality of thinking is not necessarily any sort > of optimal thing, thus we shouldn't be shocked if another intelligent > species can easily surpass us as we surpassed others. We deserve a real > debate, not accusations of monotheism. My favorite religions: 1. Atheism 2. Autotheism 3. Pastafarianism The possibility of a hard takeoff is entirely independent of the religious and pseudo-religious thought processes abundantly evident on this list. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From michaelanissimov at gmail.com Sun Nov 14 20:21:45 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:21:45 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> References: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> Message-ID: 2010/11/14 Natasha Vita-More > > Well let's look at your last statement. Diverse views about a hard takeoff > were around before SIAI. You are correct that SIAI is one well-known > organization within transhumanism, but the Singularity is larger than SIAI > and has many varied views/theories which are addressed by transhumanists and > nontranshumanists. > Indeed. > I pretty much stick with Vinge and have my own views based on my own > research and scenario development. I think SIAI has done great work and has > produced amazing events. The only problem I have ever had with SIAI is that > it does not include women like me -- women who have been around for a long > time and could contribute something meaningful to the conversation, outside > of Eli's dismissal of women and/or media design as a substantial field o of > inquiry and consequential to our future of AGIs. But you and I have had > this conversation several time before and I see nothing has changed. > Women like Aruna Vassar, Amy Willey, and Anna Salamon, that I work with and communicate with all the time? The staff at SIAI HQ mostly consists of myself, Amy, and Vassar. I just hired a female graphic artist for contract work. > By the way, since you applauded a guy who dissed me a couple of years ago > for my talk at the Goertzel's AI conference, I thought you might like to > know that Kevin Kelly has a new book out _What Technology Wants_, which > addresses technology from a similar thematic vantage as I addressed the > Singularity and AI in my talk about what AGI wants and its intended > consequences. > I never saw your talk at that AI conference, but I'm sorry if I clapped for someone who dissed you. For reasons that have actually been explained by Dale Carrico, I object to the treating of "technology" as a monolithic, personified entity as Kelly does, but I'll probably look into his book eventually anyway. > Nevertheless, you are one of my favorite transhumanists and I admire your > work. > Thanks! > By the way, this list's discussion on the Singularity was too focused on > Eli and in a disparaging way. I support and encourage more discussion from > varied perspectives and I think that Stefano did a good job at this > objectively presenting his own views, whether I agree with him or not they > are far better than attacking Eli. > *puts on cult hat.* Tremendous amounts of electronic ink have been spilled discussing Eliezer, because he is such a fascinating person. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michaelanissimov at gmail.com Sun Nov 14 20:28:42 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:28:42 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID:

Hi Jef, On Sun, Nov 14, 2010 at 11:26 AM, Aware wrote: > > The much more significant and accelerating risk is not that of a > "recursively self-improving" seed AI going rogue and tiling the galaxy > with paper clips or copies of itself, but of relatively small groups > of people, exploiting technology (AI and otherwise) disproportionate > to their context of values.

I disagree about the relative risk, but I'm worried about this too.

> The need is not for a singleton nanny-AI but for development of a > fractally organized synergistic framework for increasing awareness of > our present but evolving values, and our increasingly effective means > for their promotion, beyond the capabilities of any individual > biological or machine intelligence.

Go ahead and build one, I'm not stopping you.

> It might be instructive to consider that a machine intelligence > certainly can and will outperform the biological kludge, but > MEANINGFUL intelligence improvement entails adaptation to a relatively > more complex environment. This implies that an AI (much more likely a > human-AI symbiont), poses a considerable threat in present terms, with > acquisition of knowledge up to and integrating between existing silos > of knowledge, but lacking relevant selection pressure it is unlikely > to produce meaningful growth and will expend nearly all its > computation exploring irrelevant volumes of possibility space.

I'm having trouble parsing this. Isn't it our job to provide that "selection pressure" (the term is usually used in Darwinian population genetics, so I find it slightly odd to see it used in this context)?

> Singularitarians would do well to consider more ecological models in > this Red Queen's race.

On a more sophisticated level I do see it as such. Instead of organisms being the relevant unit of analysis, I see mindstuff-environment interactions as being the relevant level. AI will undergo a hard takeoff not by cooperating with the existing ecological context, but by mass-producing its own mindstuff until the agent itself constitutes an entire ecology. The end result is more closely analogous to an alien planet's ecology colliding with our own than to a new species arising within the current ecology. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL:

From michaelanissimov at gmail.com Sun Nov 14 20:41:29 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 12:41:29 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1> References: <99559A493C214DEF8071BB0B7323BE5C@DFC68LF1> Message-ID:

2010/11/14 Natasha Vita-More > Hi Michael, great to hear from you. > > I looked at your link and have to say that your analysis looks very, very > very much like my Primo Posthuman supposition for the future of brain, mind > and intelligence as related to AI and the Singularity. My references are > quite similar to yours: Kurzweil, Voss, Goertzel, Yudkowsky, but I also > include Vinge from my interview with him in the mid 1990s.

Hi Natasha, thanks for your welcome. Yes, actually, it is. It's kind of a Primo Posthuman for AI minds as opposed to human minds and computer programs.
I love the Primo Posthuman concept and think it should be extended into 3D holographic art projects and sophisticated models. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL:

From natasha at natasha.cc Sun Nov 14 20:42:16 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sun, 14 Nov 2010 14:42:16 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: References: <8F8752BBFE4B435BAA6ED8227DBAF906@DFC68LF1> Message-ID:

Michael wrote: 2010/11/14 Natasha Vita-More Well let's look at your last statement. Diverse views about a hard takeoff were around before SIAI. You are correct that SIAI is one well-known organization within transhumanism, but the Singularity is larger than SIAI and has many varied views/theories which are addressed by transhumanists and nontranshumanists. Indeed.

I pretty much stick with Vinge and have my own views based on my own research and scenario development. I think SIAI has done great work and has produced amazing events. The only problem I have ever had with SIAI is that it does not include women like me -- women who have been around for a long time and could contribute something meaningful to the conversation, outside of Eli's dismissal of women and/or media design as a substantial field of inquiry consequential to our future of AGIs. But you and I have had this conversation several times before and I see nothing has changed.

Women like Aruna Vassar, Amy Willey, and Anna Salamon, that I work with and communicate with all the time? The staff at SIAI HQ mostly consists of myself, Amy, and Vassar. I just hired a female graphic artist for contract work.

I don't know Aruna, but from my quick scan, she seems to be an investment analyst; I don't know Amy Willey, but from my quick scan, she seems to be a lawyer; I don't know Anna Salamon, but from my quick scan, she is an AI researcher, though I could not find her bio (btw, I love her use of "human extinction risk" rather than the ill-suited phrase "existential risk"). I do not know any of these women from transhumanism. They all seem highly skilled and cool women. BUT I'm not aware of any of them having a background in media design, applied design, media design theory or applied design theory. That was my point. The study of the Singularity and A[G]I research and theory *must be transdisciplinary.* I cannot emphasize that enough.

By the way, since you applauded a guy who dissed me a couple of years ago for my talk at Goertzel's AI conference, I thought you might like to know that Kevin Kelly has a new book out _What Technology Wants_, which addresses technology from a similar thematic vantage as I addressed the Singularity and AI in my talk about what AGI wants and its intended consequences.

I never saw your talk at that AI conference, but I'm sorry if I clapped for someone who dissed you. For reasons that have actually been explained by Dale Carrico, I object to the treating of "technology" as a monolithic, personified entity as Kelly does, but I'll probably look into his book eventually anyway.

I don't have anything to say about Carrico. On a different note, it is not a matter of anthropomorphizing technology, but of experiencing the cause/effect of technology and placing one's research inside of the technology rather than only being an observer.

Nevertheless, you are one of my favorite transhumanists and I admire your work. Thanks! Mon pleasure!
Natasha Vita-More MSc, MPhil PhD Researcher, University of Plymouth Board of Directors: Humanity+ Fellow: Institute for Ethics and Emerging Technologies Visiting Scholar: 21st Century Medicine Advisor: Singularity University -------------- next part -------------- An HTML attachment was scrubbed... URL:

From x at extropica.org Sun Nov 14 22:05:46 2010 From: x at extropica.org (x at extropica.org) Date: Sun, 14 Nov 2010 14:05:46 -0800 Subject: [ExI] Fwd: Hard Takeoff In-Reply-To: References: Message-ID:

2010/11/14 Michael Anissimov : > On Sun, Nov 14, 2010 at 11:26 AM, Aware wrote: >> The need is not for a singleton nanny-AI but for development of a >> fractally organized synergistic framework for increasing awareness of >> our present but evolving values, and our increasingly effective means >> for their promotion, beyond the capabilities of any individual >> biological or machine intelligence. > > Go ahead and build one, I'm not stopping you.

It's already ongoing in the marketplace of ideas, but not as intentionally, and therefore not as coherently, as one would desire.

>> It might be instructive to consider that a machine intelligence >> certainly can and will outperform the biological kludge, but >> MEANINGFUL intelligence improvement entails adaptation to a relatively >> more complex environment. This implies that an AI (much more likely a >> human-AI symbiont), poses a considerable threat in present terms, with >> acquisition of knowledge up to and integrating between existing silos >> of knowledge, but lacking relevant selection pressure it is unlikely >> to produce meaningful growth and will expend nearly all its >> computation exploring irrelevant volumes of possibility space. > > I'm having trouble parsing this. Isn't it our job to provide that > "selection pressure" (the term is usually used in Darwinian population > genetics so I find it slightly odd to see it used in this context)?

Any "intelligent" system improves by extracting and effectively modeling regularities within its environment of interaction. At some point, corresponding to integration of knowledge apprehended via direct interaction, knowledge communicated from existing domains, and information latent between domains, the system will become starved for RELEVANT novelty necessary for further MEANINGFUL growth. (Of course it could continue to apply its prodigious computing power to exploring vast reaches of a much vaster mathematical possibility space.) Given a static environment, that intelligence would eventually catch up and plateau at some level somewhat higher than that of any preexisting agent. The strategic question is this: given practical considerations of incomplete specification, combinatorial explosion, rate of information (and effect) diffusion, and effective interaction area, as well as first-mover advantage within a complex co-evolving environment, how should we compare the highly asymmetric strengths of the very vertical AI versus a very broad, technologically amplified established base? Further, given such a plateau, on what basis could we expect such an AI to act as an effective nanny to humanity? There can be such threats but no such guarantees, and to the extent we are looking for protection where none can be found, such effort is wasted and thus wrong.
- Jef

From stefano.vaj at gmail.com Sun Nov 14 23:45:10 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 00:45:10 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID:

2010/11/14 Michael Anissimov : > We have some reason to believe that a roughly human-level AI could rapidly > improve its own capabilities, fast enough to get far beyond the human level > in a relatively short amount of time. The reason why is that a > "human-level" AI would not really be "human-level" at all -- it would have > all sorts of inherently exciting abilities, simply by virtue of its > substrate and necessities of construction: > 1. ability to copy itself > 2. stay awake 24/7 > 3. spin off separate threads of attention in the same mind > 4. overclock helpful modules on-the-fly > 5. absorb computing power (humans can't do this) > 6. constructed from scratch with self-improvement in mind > 7. the possibility of direct integration with new sensory modalities, like > a codic modality > 8. the ability to accelerate its own thinking speed depending on the speed > of available computers

What would "human-equivalent" mean? I contend that all of the above is basically what every system exhibiting universal computation can do, from cellular automata to organic brains to PCs. At most, it just needs to be programmed to exhibit such behaviours. If we do not take things too literally, such behaviours have already been emergent in contemporary fyborgs for years. What's the big deal? The difference might be increasing performance and accuracy in a number of tasks. This would be welcome, and the "abrupter", the better, as far as I am concerned. Rather, we should keep in mind that such an increase is far from guaranteed, especially in an age where technological development is freezing and real breakthroughs are becoming rarer and rarer, so that it seems indeed weird that many transhumanists are primarily concerned with "steering" what is expected to take place automagically ("gosh, how are we going to protect the ecosystems of extrasolar planets from terrestrial contamination?"), rather than with what needs instead to be made *happen* in the first place.

> We have real, evidence-based arguments for an abrupt takeoff. One is that > the human speed and quality of thinking is not necessarily any sort of > optimal thing, thus we shouldn't be shocked if another intelligent species > can easily surpass us as we surpassed others. We deserve a real debate, not > accusations of monotheism.

Biological-human "thinking" has just been relatively good for what it was designed for, and "quality" does not have any real meaning out of a specific context. Moreover, the concept of "another species" is indeed quite vague when taken in a diachronic sense - besides being quite "speciesist" per se. We could not interbreed with our remote biological ancestors, and we have no reason to believe that we could forever interbreed with our descendants even if they remained DNA-based forever. So, what do we have to fear? If we are discussing all that from a "self-protection" point of view, my bet is that most of us will be killed by accidents, human murder, disease or old age rather than while being chased down the road by an out-of-control Terminator - whose purpose in engaging in such a sport remains pretty unclear, by the way.
-- Stefano Vaj

From stefano.vaj at gmail.com Mon Nov 15 00:09:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:09:36 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID:

2010/11/14 Michael Anissimov : > I disagree about the relative risk, but I'm worried about this too.

"Risk" is a concept which requires a definition of what is feared, why it is feared, and whether it really makes sense to make efforts to avoid it. If you think about it, previous human generations have routinely had the control of society stolen from them by subsequent ones, who have sometimes killed them, other times segregated them in "retirement" roles and institutions, expelled them from creative work, made them dependent on others' decisions, alienated them from their contemporary cultures, and so forth. At the same time, I have never heard such circumstances expounded as a rationale for drastic birth control or the lobotomisation of children. Now, while I think that some scenarios with regard to "AGI" are grossly anthropomorphic and delusionary, what exactly could our hypothetical "children of the mind" do worse than our biological children? If anything, "human-mind" emulation and replication technology might end up being more protective of our legacy - see under mind "uploading" - than past history has ever been for our predecessors. Or not. But technology need not be "anthropomorphic" to be dangerous. Perfectly "stupid" computers can be as dangerous as, or more dangerous than, computers emulating some kind of human-like agency, whatever the purpose of the latter might be.

-- Stefano Vaj

From possiblepaths2050 at gmail.com Mon Nov 15 00:21:14 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:21:14 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID:

Michael Anissimov wrote: Marcello Herreshoff is brilliant for any age. Like some other of our Fellows, he has been a top-scorer in the Putnam competition. He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School. Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI.

Okay, I am duly impressed with Herreshoff's achievements... Oh, and Michael, your last name is the bane of my existence! I always want to spell it "Annisimov!" lol

John : )
From stefano.vaj at gmail.com Mon Nov 15 00:22:21 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:22:21 +0100 Subject: [ExI] Singularity In-Reply-To: <4CE02166.3010707@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1> <4CE02166.3010707@lightlink.com> Message-ID:

On 14 November 2010 18:50, Richard Loosemore wrote: > We cannot predict the future NOW, > never mind at some point in the future. And there are also arguments that > would make the intelligence explosion occur in such a way that the future > became much *more* predictable than it is now!

Let us take physical singularities. We sometimes have good enough equations describing the evolution of a given system, but only up to a certain point. There are however limits where the equations break down, returning infinities or <0 or >1 probabilities, or other results which make no practical sense. This does not imply any metaphysical consequences for such states, but simply indicates the limit where the predictive and descriptive value of our equations stops. I do not believe that we need to resort to any more mystical a meaning than this one when discussing historical "singularities". In fact, I am inclined to describe exactly in such terms past events such as hominisation or the neolithic revolution. Moreover, historical developments are not to be taken for granted. Stagnation or regression or even *real* extinction (of the kind leaving no successors behind...) are equally plausible scenarios for our societies in the foreseeable future, no matter what is "bound to happen" sooner or later in a galaxy or another given enough time. Especially if... transhumanists are primarily concerned with how to cope with some inevitable parousia rather than with fighting neoluddism, prohibitions, and technological, educational and cultural decadence.

-- Stefano Vaj
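Stefano's point about equations that break down has a standard textbook illustration (a generic toy case, not something from his post): the equation dx/dt = x^2 with x(0) = 1 has the exact solution x(t) = 1/(1 - t), which goes to infinity at the finite time t = 1. The model makes no claim about what happens at or beyond t = 1; its descriptive power simply ends there. A few lines of Python show the blowup numerically:

    # Finite-time singularity in a toy ODE: dx/dt = x**2, x(0) = 1.
    # The exact solution 1/(1 - t) diverges at t = 1; a naive Euler
    # integration visibly "breaks down" as t approaches that limit.

    x, t, dt = 1.0, 0.0, 1e-4
    while x < 1e12 and t < 1.2:
        x += x * x * dt   # Euler step for dx/dt = x**2
        t += dt
    print(f"x exceeded 1e12 near t = {t:.4f} (exact blowup at t = 1)")

Nothing metaphysical happens at t = 1; the equation just stops describing anything, which is the sense of "singularity" Stefano is recommending for the historical case as well.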
From hkeithhenson at gmail.com Mon Nov 15 00:27:33 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Nov 2010 17:27:33 -0700 Subject: [ExI] Hard Takeoff Message-ID:

Michael wrote: >> Prediction: most comments in response to this post will again ignore the >> specific points in favor of a rapid takeoff and simply dismiss the idea >> based on low intuitive plausibility.

Yep. I think "Hard takeoff" and "Rapid takeoff" are pretty much the same thing, set by human perception. And even if the doubling time didn't speed up (which it certainly could), a doubling time of a day or less is probably beyond human ability to even understand what is happening, especially if the AI were moderately sneaky.

Some years ago there was a very compact virus that infected (as I recall) Microsoft SQL servers. It fit in a packet under 500 bytes. Once a machine had received one it was zombified and immediately started sending copies of the virus packet to random IP addresses. There were (as I recall) only about 50,000 possible targets on the net. All were infected in a short time. The doubling time (again from memory) was 8.5 seconds. At this rate, it would have taken under 3 minutes. The infection peaked out (clogging the net) before anyone could have reacted. If you had an AI that infected PCs this fast to get processing power, an AI takeoff could be over before people woke up to what was happening.

It's a different situation where someone is manufacturing AIs for some purpose, such as the clinic in "The Clinic Seed." In that case the AI had been constructed with roughly human motivations, where the AI's motivational goal was to obtain the high opinion of humans and others of its kind. The AI's population would increase at the rate set by the factory.

This doesn't contribute much to your sound points about AI takeoff, but the first is an example of what has happened and how short the timetable might be. Give my best to Eliezer

Keith
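Keith's recollected numbers are easy to sanity-check (the worm he describes sounds like SQL Slammer; published accounts give somewhat different figures, so treat these as his from-memory values). Treating the early spread as pure doubling:

    # Back-of-the-envelope check of the worm timetable: starting from one
    # infected host, how long does it take to cover ~50,000 vulnerable
    # hosts if the infected population doubles every 8.5 seconds?
    import math

    hosts = 50000          # possible targets (Keith's recollection)
    doubling_time = 8.5    # seconds per doubling (Keith's recollection)

    doublings = math.log2(hosts)        # ~15.6 doublings from one host
    seconds = doublings * doubling_time
    print(f"{doublings:.1f} doublings -> {seconds:.0f} s "
          f"({seconds / 60:.1f} minutes)")

That works out to about 133 seconds, comfortably inside Keith's "under 3 minutes". In practice the last doublings slow down as the worm runs out of uninfected targets (the curve is logistic, not exponential, near saturation), but that only reinforces the point: the whole event fits well inside human reaction time.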
From brent.allsop at canonizer.com Mon Nov 15 00:12:11 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 14 Nov 2010 17:12:11 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE03947.3070806@speakeasy.net> References: <4CE03947.3070806@speakeasy.net> Message-ID: <4CE07ADB.8070008@canonizer.com>

Hi Michael, Yes, it is fun to see you back on this list. I'm still relatively uneducated about arguments for a "Hard Takeoff". Thanks for pointing these out; I've still got lots of studying to do to fully understand them. Thanks for the help. Obviously there is some diversity of opinion about the importance of some of these arguments. It appears this particular hard takeoff issue could be a big reason for our difference of opinions about the importance of friendliness. I think it would be great if we could survey for this particular hard takeoff issue, and find out how closely the breakdown of who is on which side of this issue matches the more general issue of the importance of Friendly AI and so on. We could even create sub topics and rank the individual arguments, such as the ones you've listed here, to find out which ones are the most successful (i.e., acceptable to more people) and which ones are most important. I'll add my comments below to be included with Alan's and your POV.

Brent Allsop

On 11/14/2010 12:32 PM, Alan Grimes wrote: > Michael Anissimov wrote: >> 1. ability to copy itself > Sufficiently true. > > nb: requires work by someone with a pulse to provide hardware space, > etc... (at least for now).

Michael. Is your ordering important? In other words, for you, is this the most important argument compared to the others? If so, I would agree that this is the most important argument compared to the others.

>> 2. stay awake 24/7 > FALSE. > Not implied. The substrate does not confer or imply this property > because an uploaded mind would still need to sleep for precisely the > same reasons a physical brain does.

I would also include the ability to fully concentrate 100% of the time. We seem to be required to do more than just one thing, and to play, have sex... a lot. In addition to sleeping. But all of these, at best, are linear differences, and can be overcome by having 2 or 10... times more people working on a particular problem.

>> 3. spin off separate threads of attention in the same mind > FALSE. > (same reason as for 2). >> 4. overclock helpful modules on-the-fly > Possibly true but strains the limits of plausibility, also benefits of > this are severely limited. >> 5. absorb computing power (humans can't do this) > FALSE. > Implies scalability of the hardware and software architecture not at all > implied by simply residing in a silicon substrate, indeed this is a > major research issue in computer science.

I probably don't fully understand what you mean by this one. To me, all the computer power we've created so far is only useful because we can utilize / absorb / or benefit from all of it, at least as much as any other computer would.

>> 6. constructed from scratch with self-improvement in mind > Possibly true but not implied. >> 7. the possibility of direct integration with new sensory modalities, >> like a codic modality > True, but not unique, the human brain can also integrate with new > sensory modalities, this has been tested.

What is a 'codic modality'? We have significant diversity of knowledge representation abilities as compared to the mere ones and zeros of computers. I.e., we represent wavelengths of visible light with different colors, wavelengths of acoustic vibrations with sound, hotness/coldness for different temperatures, and so on. And we have great abilities to map new problem spaces into these very capable representation systems, as can be seen by all the progress in the field of scientific data representation / visualization.

>> 8. the ability to accelerate its own thinking speed depending on the >> speed of available computers > True to a limited extent, also Speed is not everything.

I admit that the initial speed difference is huge. But I agree with Alan that we make up with parallelism, and many other things, what we lack in speed. And we already seem to be at the limit of hardware speed - i.e., CPU clock speed has not significantly changed in the last 10 years, right?

>> When you have a human-equivalent mind that can copy itself, it would be >> in its best interest to rent computing power to perform tasks. If it >> can make $1 of "income" with less than $1 of computing power, you have >> the ingredients for a hard takeoff. > Mostly true. Could, would, and should being discrete questions here.

I would agree that a copy-able human-level AI would launch a take-off, leaving what we have today, to the degree that it is unchanged, in the dust. But I don't think achieving this is going to be anything like spontaneous, as you seem to assume is possible. The rate of progress of intelligence is so painfully slow. So slow, in fact, that many have accused great old AI folks like Minsky of being completely mistaken.
I also think we are on the verge of discovering how the phenomenal mind works, represents knowledge, how to interface with it in a conscious way, enhance it, and so on. I think such discoveries will greatly speed up this very slow process of approaching human-level AI. And once we achieve this, we'll be able to upload ourselves, or at least fully consciously integrate ourselves with / utilize all the same things artificial systems are capable of, including increased speed, copy ability, the ability to not sleep, and all the others. In other words, I believe anything computers can do, we'll also be able to do within a very short period of time after it is first achieved. The maximum time lag between when AI would get an ability and when we would also achieve the same ability would be very insignificant compared to any rate of overall AI progress.

Brent Allsop

From stefano.vaj at gmail.com Mon Nov 15 00:37:24 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 15 Nov 2010 01:37:24 +0100 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID:

On 14 November 2010 19:34, Giulio Prisco wrote: > Though I may share your feeling that our intuitive notion of self may > need a radical redefinition in the future, in particular after > deployment of mind uploading tech, I will continue to feel free to > support what you call "wasted, misguided effort entailed in its > survival".

Yes. But I know you to have a more concrete, and at the same time a broader, concept of "survival" than some delusionary investment in the fight of generic, undefined "humans" against "robots", if this is what we are talking about.

-- Stefano Vaj

From possiblepaths2050 at gmail.com Mon Nov 15 00:44:55 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:44:55 -0700 Subject: [ExI] Good background music for a Singularity discussion... Message-ID:

I think this will help set the right mood... Any other recommendations? http://singularityhub.com/2010/11/10/post-human-era-transhumanism-music-you-can-dance-to-video/ John : )

From possiblepaths2050 at gmail.com Mon Nov 15 00:56:32 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 17:56:32 -0700 Subject: [ExI] Steven Spielberg to make a Discovery Channel series about "the future" Message-ID:

I have very mixed feelings about some of his recent films, but I think he might really shine in this capacity. I wonder how he will handle the Singularity? We will see... http://singularityhub.com/2010/04/26/spielberg-to-make-a-mini-series-on-the-future-im-already-a-little-skeptical/ John

From possiblepaths2050 at gmail.com Mon Nov 15 01:15:43 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 18:15:43 -0700 Subject: [ExI] *Four* Singularity films coming our way... Message-ID:

I knew about "The Singularity is Near" and "Transcendent Man," but not the other two films! I'm looking forward to *finally* getting to see these productions...
http://singularityhub.com/2009/08/13/four-singularity-movies-the-world-wants-the-future/ John : )

From possiblepaths2050 at gmail.com Mon Nov 15 02:55:02 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 19:55:02 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE07ADB.8070008@canonizer.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID:

I must admit that I yearn for a hard take-off singularity that includes the creation of a nanny sysop who gets rid of poverty, disease, aging, etc., and looks after every human on the planet, but without establishing a tyranny. I'm not a kid anymore, and so like many transhumanists, I want this to happen at the very latest by 2050, and hopefully a decade before that date! lol And so I hang on Ray Kurzweil's every word and hope his predictions are correct. And just as I wonder if I will make it, I really wonder if *he* will survive long enough to see his beloved Singularity!

I envision a scenario where a hard take-off Singularity happens in 2040. I am transformed back into a young man, but with very enhanced abilities, by an ocean of advanced nanotech swarming the world, and develop a limited mind meld with the rest of humanity. A Singularity sysop avatar in the form of a gorgeous nude woman appears to me. My beautiful AI companion and I make love while in orbit and she quickly gives birth to our child. We raise it together as we watch the Earth, society, and the solar system radically transform. I will soon embark on exploring the universe with my family. The experience as I visualize it is one part 2001, and another part Heavy Metal. Anyway, any Singularity I experience may not be quite as cool & corny as the one I picture, but for whatever it is worth, this is what I would like. Now I will go back to watching my favorite music video... http://www.youtube.com/watch?v=-X69aDIFFsc John : )
stay awake 24/7 >> FALSE. >> Not implied. The substrate does not confer or imply this property >> because an uploaded mind would still need to sleep for precisely the >> same reasons a physical brain does. > I would also include the ability to fully concentrate 100% of the time. > We seem to be required to do more than just one thing, and to play, have > sex... a lot. In addition to sleeping. But all of these, at best, are > linear differences, and can be overcome by having 2 or 10... times more > people working on a particular problem. > >>> 3. spin off separate threads of attention in the same mind >> FALSE. >> (same reason as for 2). >> >>> 4. overclock helpful modules on-the-fly >> Possibly true but strains the limits of plausibility; also, the benefits of >> this are severely limited. >> >>> 5. absorb computing power (humans can't do this) >> FALSE. >> Implies scalability of the hardware and software architecture not at all >> implied by simply residing in a silicon substrate; indeed, this is a >> major research issue in computer science. > I probably don't fully understand what you mean by this one. To me, all > computer power we've created so far is only because we can utilize / > absorb / or benefit from all of it, at least as much as any other > computer would. > >>> 6. constructed from scratch with self-improvement in mind >> Possibly true but not implied. >> >>> 7. the possibility of direct integration with new sensory modalities, >>> like a codic modality >> True, but not unique; the human brain can also integrate with new >> sensory modalities, this has been tested. > > What is a 'codic modality'? We have significant diversity of knowledge > representation abilities as compared to the mere ones and zeros of > computers. I.e., we represent wavelengths of visible light with > different colors, wavelengths of acoustic vibrations with sound, > hotness/coldness for different temperatures, and so on. And we have > great abilities to map new problem spaces into these very capable > representation systems, as can be seen by all the progress in the field of > scientific data representation / visualization. > >>> 8. the ability to accelerate its own thinking speed depending on the >>> speed of available computers >> True to a limited extent; also, speed is not everything. > > I admit that the initial speed difference is huge. But I agree with > Alan that we make up with parallelism and many other things for what we > lack in speed. And, we already seem to be at the limit of hardware > speed - i.e. CPU speed has not significantly changed in the last 10 > years, right? >>> When you have a human-equivalent mind that can copy itself, it would be >>> in its best interest to rent computing power to perform tasks. If it >>> can make $1 of "income" with less than $1 of computing power, you have >>> the ingredients for a hard takeoff. >> Mostly true. Could, would, and should being discrete questions here. >> > I would agree that a copy-able human-level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the > dust. But I don't think achieving this is going to be anything like > spontaneous, as you seem to assume is possible. The rate of progress of > intelligence is so painfully slow. So slow, in fact, that many have > accused great old AI folks like Minsky of being completely mistaken. > > I also think we are on the verge of discovering how the phenomenal mind > works, represents knowledge, how to interface with it in a conscious > way, enhance it and so on.
I think such discoveries will greatly speed > up this very slow process of approaching human-level AI. > > And once we achieve this, we'll be able to upload ourselves, or at least > fully consciously integrate ourselves / utilize all the same things > artificial systems are capable of, including increased speed, copy > ability, ability to not sleep, and all the others. In other words, I > believe anything computers can do, we'll also be able to do within a > very short period of time after it is first achieved. The lag between > when an AI first gained an ability and when we would also achieve the > same ability would be insignificant compared to the overall rate of > AI progress. > > > Brent Allsop > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From rpwl at lightlink.com Mon Nov 15 02:57:30 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Sun, 14 Nov 2010 21:57:30 -0500 Subject: [ExI] Mathematicians as Friendliness analysts In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <4CE0A19A.1080308@lightlink.com> Michael Anissimov wrote: > On Sat, Nov 13, 2010 at 2:10 PM, John Grigg > wrote: > > > And I noticed he did "friendly AI research" with > a grad student, and not a fully credentialed academic or researcher. > > > Marcello Herreshoff is brilliant for any age. Like some other of our > Fellows, he has been a top-scorer in the Putnam competition. He's been > a finalist in the USA Computing Olympiad twice. He lives and breathes > mathematics -- which makes sense because his dad is a math teacher at > Palo Alto High School. Because Friendly AI demands so many different > skills, it makes sense for people to custom-craft their careers from the > start to address its topics. That way, in 2020, we will have people who > have been working on Friendly AI for 10-15 years solid rather than > people who have been flitting in and out of Friendly AI and conventional AI. Michael, This is entirely spurious. Why gather mathematicians and computer science specialists to work on the "friendliness" problem? Since the dawn of mathematics, the challenges to be solved have always been specified in concrete terms. Every problem, without exception, is definable in an unambiguous way. The friendliness problem is utterly unlike all of those. You cannot DEFINE what the actual problem is, in concrete, unambiguous terms. So, to claim that SIAI is amassing some amazing talent, because your Fellows have been top scorers in the Putnam competition, is like claiming that you can solve the "How to Win Friends and Influence People" problem by gathering together a gang of the most brilliant mathematicians in the world. As ever, this point is not a shallow one: it stems from serious issues to do with the nature of complex systems and the foundations of scientific and mathematical inquiry. But the analogy holds, for all that. There are some things in life that do not reduce to mathematics. And the fact that we are talking about the friendliness of *computers* is a red herring.
Computers may be based on mathematics down at their lowest level, but that level is as thoroughly isolated from the Friendliness (machine motivation) level as the chemistry of Dale Carnegie's synapses was isolated from his advice about the How to Win Friends and Influence People problem. Richard Loosemore From michaelanissimov at gmail.com Mon Nov 15 03:13:00 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 19:13:00 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE07ADB.8070008@canonizer.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Hi Brent, On Sun, Nov 14, 2010 at 4:12 PM, Brent Allsop wrote: > > >> Michael. Is your ordering important? In other words, for you, is this > the most important argument compared to the others? If so, I would agree > that this is the most important argument compared to the others. It wasn't meant to be, but I think copying is really important, yes. > I would also include the ability to fully concentrate 100% of the time. We > seem to be required to do more than just one thing, and to play, have sex... > a lot. In addition to sleeping. But all of these, at best, are linear > differences, and can be overcome by having 2 or 10... times more people > working on a particular problem > There may be second-order benefits from being able to concentrate longer. To get from one node of an argument or problem to another might require a certain amount of sustained attention, for instance. Any idea requiring longer than 20 or so hours of sustained continuous attention would be inaccessible to humanity. > I probably don't fully understand what you mean by this one. To me, all >> computer power we've created so far is only because we can utilize / absorb >> / or benefit from all of it, at least as much as any other computer would. > > I mean integrating it directly into its brain. For instance, imagine me doubling the amount of processing power in my retina and visual cortex, allowing me to see a much wider range of patterns and detail in the world, just because I chose to add more computing power to it. Or imagine giving more computing power to the concept-manipulating parts of the brain that surely exist but are only understood on a moderate level today. It's hard to say how important it is until we try, but the ability to add computing power directly to the brain is something no animal has ever had, so it's definitely something interesting and potentially important. > > > 6. constructed from scratch with self-improvement in mind >>> >> Possibly true but not implied. >> >> 7. the possibility of direct integration with new sensory modalities, >>> like a codic modality >>> >> True, but not unique; the human brain can also integrate with new >> sensory modalities, this has been tested. >> > > What is a 'codic modality'? We have significant diversity of knowledge > representation abilities as compared to the mere ones and zeros of > computers. I.e., we represent wavelengths of visible light with different > colors, wavelengths of acoustic vibrations with sound, hotness/coldness for > different temperatures, and so on. And we have great abilities to map new > problem spaces into these very capable representation systems, as can be > seen by all the progress in the field of scientific data representation / > visualization. I hazard to say it's not the same as having a modality custom-crafted for the specific niche.
We can map all this great stuff, but in something that requires skill and getting it right the first time, it's not the same as having the neural hardware. Really spectacular martial artists probably have "better" motor cortex than us in some ways. Parkinson's patients have a "worse" substantia nigra that leads to pathology. Really good artists probably have slightly "better" brain sections corresponding to visualizing images. These variations take place entirely within the space of human possibilities, and they're still substantial. Imagine neurobiological differences going significantly beyond the human norm. > I admit that the initial speed difference is huge. But I agree with Alan >> that we make up with parallelism and many other things for what we lack in >> speed. And, we already seem to be at the limit of hardware speed - i.e. CPU >> speed has not significantly changed in the last 10 years, right? > > It has: http://en.wikipedia.org/wiki/Megahertz_myth Of course, people have different opinions based on what they're trying to sell, but by and large Moore's law has kept going: http://cosmiclog.msnbc.msn.com/_news/2010/08/31/5012834-researchers-rescue-moores-law http://www.engadget.com/2010/05/03/nvidia-vp-says-moores-law-is-dead/ > I would agree that a copy-able human-level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the dust. > But I don't think achieving this is going to be anything like spontaneous, > as you seem to assume is possible. The rate of progress of intelligence is > so painfully slow. So slow, in fact, that many have accused great old AI > folks like Minsky of being completely mistaken. > There's a huge difference between the rate of progress from today to human-level AGI and the time from human-level AGI to superintelligent AGI. They're completely different questions. As for a fast rate, would you still be skeptical if the AGI in question had access to advanced molecular manufacturing? -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Mon Nov 15 03:47:11 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Sun, 14 Nov 2010 19:47:11 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Thanks Keith, this is definitely relevant to my argument. And if this sort of thing is possible today, imagine how much more empowering it could be in a future where computers, robotics, manufacturing, and other critical infrastructure are even more closely intertwined. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 04:20:16 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:20:16 -0800 Subject: [ExI] A humble suggestion In-Reply-To: References: Message-ID: On Nov 12, 2010, at 1:29 AM, Will Steinberg wrote: > The thing is, collected here in the ExI chat list are a pretty handy set of thinkers/engineers, spread around the world (sort of.) In fact, I can generalize this fact to say that almost all of the people interested in this movement fall into that category as well. Now look. This is a present dropped into your lap.
Instead of only discussing lofty ideals and philosophy, we (H+) should focus on the engineering of tools which will eventually be very important in the long run for humanity, and for our goals in particular. Sounds great. Start in on it. > > List of tools we need to invent/things we need to do: > > -A very good bidirectional speech-to-speech translator. For spreading the gospel, once H+ wises up enough to start including the proletariat. > *scratches head* What does that have to do with H+? It is a good thing to have for a whole lot of reasons much broader than H+-specific ones. > -Neoagriculture. This would mean better irrigation systems, GMO crops that can easily harness lots of sun energy and produce more food, maybe machines/instructions for DIY fertilizer. same comment. > > -Better Grid--test experimental grid where people opt to operate, on property, efficient windmills/solar panels/any electricity they can make for $$$ same comment. > > -Housing projects that work, or some sort of thing where you pay people to build their own house/project building. same comment. > > -Fulfilling jobs for proles that also help society/space travel/humanism/H+. I see fulfilling work as a function of skills, clarity of values, self-discipline, freedom and economy. Which parts of those things do you propose to address, and how exactly? Do you think "fulfilling jobs" can just be created by fiat, top-down? > > -So many more, I know you can think of some! I bet you have pet projects like these. Ideas, at least. Are you claiming that any pet projects any H+ people have are practical means for reaching what may be distinguished as H+ goals? > > > By Le Châtelier's principle, improving these fucked up problems that exist for much of society will give us much more leeway and ability to do transhumanisty things, AND we can do them in the meantime. It has to happen eventually, unless you have some fancy vision of the H+ elect ascending to cyberheaven and leaving everyone else behind. Oh, so we have to fix everything in the world before we can address "transhumanisty things"? No thanks. > Thereby I suggest: a bunch of dedicated transhumanists mobilize and go to problematic regions, experimenting with those tools up there. Everyone will love H+. The movement will have lots of social power and then we can get shit done. Right? Nope. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 04:32:34 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:32:34 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <4CDD6569.5070509@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> On Nov 12, 2010, at 8:03 AM, Richard Loosemore wrote: > Singularity Utopia wrote: >> Thanks Richard Loosemore, regarding the SL4 route to contact Eliezer, that's exactly the info I needed. >> John Grigg, you say I may not be allowed to stay long on the SL4 list? Why is this, are Singularitarians an intolerant group leaning towards fascism? > > Er.... you may be misunderstanding the situation. ;-) > > You will be unwelcome and not tolerated on SL4, because: > > a) The singularity is, for Eliezer, a power struggle. It is a matter of which personality "owns" these ideas .... who determines the agenda, who is seen as the pre-eminent power broker .... who has the largest army of volunteers to spread the message.
And in that situation, you, my friend, are a Threat. Even if your ideas were more sensible than his you would be attacked and denounced, for the simple reason that you would not be meekly conforming to the standard view of the singularity (as defined by The Wise One). Funny. I have disagreed and argued with Eliezer for many years without ever getting kicked out of anything, including SL4. I have never known him to exhibit this simplistic egoism you accuse him of. I have known him to make a point of acknowledging when something I or someone else said that was contrary to his opinion turned out to have something of value in it. Eliezer in my experience is quite willing to admit when he is wrong. He even goes out of his way to say that he was quite mistaken at various times or in various past writings. > Eliezer obviously thinks that he is the chosen one, but whereas you are coming right out and declaring that you are the one, he would never be so dumb as to actually say "Hey, everyone, bow down to me, because I *am* the singularity!". He may be an irrational, Randian asshole, but he is not that stupid. > I don't think so. Eliezer says the problem is important and has dedicated himself to addressing it. I imagine he would be quite delighted if others did so also, using approaches that may be different from what he is exploring. Why use only one approach to such a serious problem domain? > So have fun on SL4, if there is anything left of it. If you don't actually get banned within a couple of months it will be because SL4 is (as John Clark claims) actually dead, and nobody gives a damn what you say there. > Again, I was on SL4 pretty much from the beginning and certainly was not any sort of cultist or yes-woman. So how come I wasn't banned if your characterization is valid? And no, this isn't an invitation to revisit just how much you feel you were wronged by Eliezer in the past. - s From sjatkins at mac.com Mon Nov 15 04:40:49 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:40:49 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> On Nov 12, 2010, at 2:33 PM, BillK wrote: > On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: >> As Eliezer notes on his homepages that you have read, the primary way >> to contact him is email. It's just that he gets so much email, >> including from a large number of crazy people, that he of course >> doesn't answer them all. (You, unfortunately, are one of those crazy >> people who pretty surely will be ignored. So in the end, on this >> matter it would be appropriate of you to accept that -- like all >> people -- Eliezer should have the right to choose who he spends his >> time talking to, and that he most likely would not want to correspond >> with you.) >> >> > > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles > Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me. This is indeed a very sensible request.
I am a bit annoyed by the number of times I have attempted to refer to various papers in talks with SIAI people only to be told that that paper or statement is "now obsolete" without being offered any up-to-date versions. I have heard that the CEV is either "out-of-date" or still the main idea/goal so many times that I don't know what to believe about it, except that the SIAI hasn't kept its own position documents and working theories up to date. - samantha From sjatkins at mac.com Mon Nov 15 04:48:18 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:48:18 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: > On Sat, Nov 13, 2010 at 12:33 AM, BillK wrote: >> >> As I understand SU's request, she doesn't particularly want to enter a >> dialogue with Eliezer. Her request was for an updated version of The >> Singularitarian Principles >> Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. >> >> Perhaps someone could mention this to Eliezer or point her to more >> up-to-date writing on that subject? Doesn't sound like an >> unreasonable request to me. > > If people want a new version of Singularitarian Principles to exist, > they can write one themselves. Hardly. I cannot speak for this Institute. How would my writing such a thing be anything but my opinion? I want to know what the SIAI's current positions are. What is its current formulation of what a FAI is and how it may be attained? What are its current definitions of Friendliness in hopefully implementable and testable terms? What sort of AGI or recursively optimizing procedure or whatever does it propose to create? What means does it advocate to avoid unfriendly AGI? Does it seek a singleton AGI (or equivalent) or peer AGIs, and why? > Eliezer has no magical authority on the > topic that would necessitate that it be him. (Also, I doubt > Eliezer thinks it important for a new version to exist.) > An organization that claims its sole purpose is the attainment of a safe and Friendly AGI-driven singularity, or at least the avoidance of UFAI, is under no obligation to state what its current thinking and position is? If it does not, then why would anyone take it seriously (at least in those stated goals) at all? - s From sjatkins at mac.com Mon Nov 15 04:52:52 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 20:52:52 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> Message-ID: <8B400EE3-0BA2-44F2-A935-A990DA7EA268@mac.com> On Nov 12, 2010, at 3:04 PM, BillK wrote: > On Fri, Nov 12, 2010 at 10:44 PM, Aleksei Riikonen wrote: >> If people want a new version of Singularitarian Principles to exist, >> they can write one themselves. Eliezer has no magical authority on the >> topic that would necessitate that it be him. (Also, I doubt >> Eliezer thinks it important for a new version to exist.) >> >> (And if people just want newer things that Eliezer has written, just >> check his homepage.) >> >> > > > I don't disagree with you at all, as I agree with your opinion that > Eliezer has no magical authority on that topic. > > It just seems very unhelpful to abuse enquirers and tell them to use Google.
If visitors make a persistent nuisance of themselves, > perhaps, but it doesn't seem the best attitude to start off with. Nor is it helpful to be told to read all of Less Wrong. This has actually been suggested when I asked where to find current position and theory papers. - s From aleksei at iki.fi Mon Nov 15 05:15:14 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Mon, 15 Nov 2010 07:15:14 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Mon, Nov 15, 2010 at 6:48 AM, Samantha Atkins wrote: > On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: > >> If people want a new version of Singularitarian Principles >> to exist, they can write one themselves. > > Hardly. I cannot speak for this Institute. How would my writing > such a thing be anything but my opinion? No matter who would write such a document, it's just an opinion. There is currently no codified "ideology of singularitarianism" that would be owned by any single Institute. Eliezer and other SIAI folks seem to like it that way, so there likely will not be a codified document of Singularitarian principles coming from their direction. So if there are people who want such a codified ideology, they're going to have to codify it themselves. > I want to know what the SIAI's current positions are. That's a different thing than wanting them to present a codified ideology. Just read their recent publications. This is a good start: http://singinst.org/riskintro/index.html -- Aleksei Riikonen - http://www.iki.fi/aleksei From sjatkins at mac.com Mon Nov 15 05:27:53 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 21:27:53 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> Message-ID: <1DD7819C-87D9-43B1-AA99-E9AF0FDB1C73@mac.com> On Nov 13, 2010, at 5:16 AM, John Grigg wrote: > Richard Loosemore wrote: > You have no idea how entertaining it is to hear professionally qualified > cognitive psychologists, complex systems theorists or philosophers of > science commenting on Eliezer's level of competence in these areas. Not > many of them do, of course, because they can't be bothered. But among > the few who have actually taken the trouble, I am afraid the poor guy is > generally scorned as a narcissistic, juvenile amateur. >>>> > > > Eliezer (I once called him Eli in a post and he responded with, "only > friends get to call me that") is in my view a very bright fellow, but > I find it a tragedy that he did not attend college and get an advanced > degree in something along the lines of artificial > intelligence/neuro-computation. > > > I feel he has doomed himself to not being a "heavy hitter" like Robin > Hanson, James Hughes, Max More, or Nick Bostrom, due to his lacking > in this regard. I realize he has his loyal pals and many friends within > transhumanism, but I suspect his success in the much larger world has > been greatly blunted by his stubborn refusal to earn academic credentials.
> And I have to chuckle at his notion that the Singularity would be right around > the corner and so why should he even bother? LOL I really don't think being a "heavy hitter" is a matter of degrees one has accumulated. There are too many very heavy hitters without such credentials for this to be so. There are also many heavies in fields that have nothing to do with the degree or degrees that they do have. There is no directly relevant degree for FAI. There are many fields of knowledge that are relevant. Which would you pick to specialize in deeply enough to get a relevant higher degree? This is not to say I have anything against such credentials. If I were younger I might be more tempted to pick up such credentials myself. The education system unfortunately does not make it easy to do that. There are too many irrelevant hoops and too much incompressible time required in most current US programs. If you have a sense of mission, as Eliezer has had from a very young age, it can be very difficult to justify years spent on some subsection of the relevant material just to get a credential that may or may not make you any more likely to succeed. - samantha From possiblepaths2050 at gmail.com Mon Nov 15 05:27:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 14 Nov 2010 22:27:36 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Brent Allsop wrote: I would agree that a copy-able human-level AI would launch a take-off, leaving what we have today, to the degree that it is unchanged, in the dust. But I don't think achieving this is going to be anything like spontaneous, as you seem to assume is possible. The rate of progress of intelligence is so painfully slow. So slow, in fact, that many have accused great old AI folks like Minsky of being completely mistaken. >>> Michael Anissimov replied: There's a huge difference between the rate of progress from today to human-level AGI and the time from human-level AGI to superintelligent AGI. They're completely different questions. As for a fast rate, would you still be skeptical if the AGI in question had access to advanced molecular manufacturing? >>> I agree that self-improving AGI with access to advanced manufacturing and research facilities would probably be able to bootstrap itself at an exponential rate, rather than the speed at which humans created it in the first place. But the "classic scenario" where this happens within minutes, hours, or even days or months seems very doubtful in my view. Am I missing something here? John On 11/14/10, Michael Anissimov wrote: > Hi Brent, > > On Sun, Nov 14, 2010 at 4:12 PM, Brent Allsop > wrote: >> >> >>> Michael. Is your ordering important? In other words, for you, is this >> the most important argument compared to the others? If so, I would agree >> that this is the most important argument compared to the others. > > > It wasn't meant to be, but I think copying is really important, yes. > > >> I would also include the ability to fully concentrate 100% of the time. >> We >> seem to be required to do more than just one thing, and to play, have >> sex... >> a lot. In addition to sleeping. But all of these, at best, are linear >> differences, and can be overcome by having 2 or 10... times more people >> working on a particular problem >> > > There may be second-order benefits from being able to concentrate longer.
> To get from one node of an argument or problem to another might require a > certain amount of sustained attention, for instance. Any idea requiring > longer than 20 or so hours of sustained continuous attention would be > inaccessible to humanity. > > >> I probably don't fully understand what you mean by this one. To me, all >>> computer power we've created so far is only because we can utilize / >>> absorb >>> / or benefit from all of it, at least as much as any other computer >>> would. >> >> > I mean integrating it directly into its brain. For instance, imagine me > doubling the amount of processing power in my retina and visual cortex, > allowing me to see a much wider range of patterns and detail in the world, > just because I chose to add more computing power to it. Or imagine giving > more computing power to the concept-manipulating parts of the brain that > surely exist but are only understood on a moderate level today. It's hard > to say how important it is until we try, but the ability to add computing > power directly to the brain is something no animal has ever had, so it's > definitely something interesting and potentially important. > > >> >> >> 6. constructed from scratch with self-improvement in mind >>>> >>> Possibly true but not implied. >>> >>> 7. the possibility of direct integration with new sensory modalities, >>>> like a codic modality >>>> >>> True, but not unique; the human brain can also integrate with new >>> sensory modalities, this has been tested. >>> >> >> What is a 'codic modality'? We have significant diversity of knowledge >> representation abilities as compared to the mere ones and zeros of >> computers. I.e., we represent wavelengths of visible light with different >> colors, wavelengths of acoustic vibrations with sound, hotness/coldness >> for >> different temperatures, and so on. And we have great abilities to map new >> problem spaces into these very capable representation systems, as can be >> seen by all the progress in the field of scientific data representation / >> visualization. > > > I hazard to say it's not the same as having a modality custom-crafted for > the specific niche. We can map all this great stuff, but in something that > requires skill and getting it right the first time, it's not the same as > having the neural hardware. Really spectacular martial artists probably > have "better" motor cortex than us in some ways. Parkinson's patients have > a "worse" substantia nigra that leads to pathology. Really good artists > probably have slightly "better" brain sections corresponding to visualizing > images. These variations take place entirely within the space of human > possibilities, and they're still substantial. Imagine neurobiological > differences going significantly beyond the human norm. > > >> I admit that the initial speed difference is huge. But I agree with Alan >>> that we make up with parallelism and many other things for what we lack in >>> speed. And, we already seem to be at the limit of hardware speed - i.e. >>> CPU >>> speed has not significantly changed in the last 10 years, right?
>> >> > It has: > > http://en.wikipedia.org/wiki/Megahertz_myth > > Of course, people have different opinions based on what they're trying to > sell, but by and large Moore's law has kept going: > > http://cosmiclog.msnbc.msn.com/_news/2010/08/31/5012834-researchers-rescue-moores-law > http://www.engadget.com/2010/05/03/nvidia-vp-says-moores-law-is-dead/ > > >> I would agree that a copy-able human-level AI would launch a take-off, >> leaving what we have today, to the degree that it is unchanged, in the >> dust. >> But I don't think achieving this is going to be anything like >> spontaneous, >> as you seem to assume is possible. The rate of progress of intelligence >> is >> so painfully slow. So slow, in fact, that many have accused great old AI >> folks like Minsky of being completely mistaken. >> > > There's a huge difference between the rate of progress from today to > human-level AGI and the time from human-level AGI to superintelligent > AGI. They're completely different questions. As for a fast rate, would you > still be skeptical if the AGI in question had access to advanced molecular > manufacturing? > > -- > michael.anissimov at singinst.org > Singularity Institute > Media Director > From lists1 at evil-genius.com Mon Nov 15 05:54:57 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sun, 14 Nov 2010 21:54:57 -0800 Subject: [ExI] Errors in the Cordain paper (was Paleo/Primal health...) In-Reply-To: References: Message-ID: <4CE0CB31.3020105@evil-genius.com> Max: Thank you for jumping in and tackling the paleo information deficit. I deliberately started with the Cordain AJCN paper because I wasn't looking forward to dealing with the ****storm that often results when the term 'paleo' is brought in, and hoped to slide it in sideways by starting with the peer-reviewed science. I was pleasantly surprised to learn that I wasn't going to be the lone standard-bearer. However, I must note that the Cordain paper makes a *huge*, and very significant, series of factual errors in the section entitled "Fatty domestic meats" -- and he carries those errors forward to this day, as do several other paleo advocates. This article describes the errors in detail: http://www.gnolls.org/715/when-the-conclusions-dont-match-the-data-even-loren-cordain-whiffs-it-sometimes-because-saturated-fat-is-most-definitely-paleo/ His acid/base balance theory is also somewhat shaky, though its net effects (increased vegetable/fruit consumption) are most likely beneficial. From spike66 at att.net Mon Nov 15 06:10:33 2010 From: spike66 at att.net (spike) Date: Sun, 14 Nov 2010 22:10:33 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <013801cb848b$cfd192d0$6f74b870$@att.net> On Behalf Of John Grigg >...But the "classic scenario" ... seems very doubtful in my view...Am I missing something here? John No Johnny, rather you are getting something here, something very fundamental: the uncertainty inherent in AI research. This has really bothered me since about the mid-90s, when I was introduced to the notion of the singularity: the inherent uncertainty is often downplayed. If we want to go with the plutonium analogy, we have some immediate problems. There were unknowns of course, but the behavior of a critical mass of plutonium could be calculated, with slipsticks, nuclear cross-section tables, and the results of a number of lab tests. The scientists could model the feedback loops and closely estimate the outcome.
They could calculate the risk of igniting the atmosphere and destroying all life on the planet, for instance. The results of the tests at the Trinity site didn't surprise the scientists present. They were awed to the core of their beings, but not surprised. Intelligence in any substrate is far less predictable. Put a bunch of really smart scientists together and it becomes wildly unpredictable what will happen. Consider my energetic reaction to Singularity Utopia. He or she went on and on about how everything would be just grand. I now realize that she may have been a creative singularity-phobe who is making a point, a good one, by posing as a wild-eyed singularity-phile. All the megalomania could have been a kind of over-the-top satire, to point to the great danger of running into a danger zone with wild-eyed optimism. Alternately she could be exactly what she wrote, in which case she made a damn good point anyway, although not the one she intended. I am not advocating a Bill Joy approach of eschewing AI research, just the opposite. A no-singularity future is 100% lethal to every one of us, every one of our children and their children, forever. A singularity gives us some hope, but also much danger. The outcome is far less predictable than nuclear fission. Good luck to us. spike From hkeithhenson at gmail.com Mon Nov 15 06:40:31 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 14 Nov 2010 23:40:31 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Sun, Nov 14, 2010 at 9:41 PM, Michael Anissimov wrote: > Thanks Keith, this is definitely relevant to my argument. And if this sort > of thing is possible today, imagine how much more empowering it could be in > a future where computers, robotics, manufacturing, and other critical > infrastructure are even more closely intertwined. True. One of my more freaky realizations in the last year is that certain classes of finance are beyond human abilities. In reading up on the big drop in the stock market early this year, it became clear that unaided humans are not in the running for the kind of "finance" that certain computer programs do. Currently the people who write the descriptions of how computers should make money in the market using short-time (millisecond) trades have been bitching that they are not getting enough of the money these things make. Well, the obvious course of events is that someone programs one of these to run the whole thing, including bank deposits, gives one of them a small stake to work with, then cuts it loose with instructions to spawn new versions and accounts. So if you later wonder how the AIs cornered the world's capital, I mentioned it first. :-) Keith From sjatkins at mac.com Mon Nov 15 07:17:06 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 14 Nov 2010 23:17:06 -0800 Subject: [ExI] Singularity was EP, was Margaret Mead controversy In-Reply-To: References: Message-ID: On Nov 11, 2010, at 9:59 AM, Keith Henson wrote: > > > > It's so hard to understand the ramifications of what nanotech and AI > will be able to do in the context of human desires that I had to > resort to fiction to express it. > > http://www.terasemjournals.org/GN0202/henson.html Enjoyed it. More, please!
- s From lists1 at evil-genius.com Mon Nov 15 07:34:10 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Sun, 14 Nov 2010 23:34:10 -0800 Subject: [ExI] Gaining weight on paleo, and fat balance In-Reply-To: References: Message-ID: <4CE0E272.9050005@evil-genius.com> > Natasha asked: > >> >Max, after you respond to Amara, would you please advise me how I >> >can maintain and even gain weight on the paleo diet? I'm not Max, but I can offer some insight: -Few foods remain unimproved by the addition of avocado slices, a fried egg, or both. -Root vegetables (particularly yams and sweet potatoes) are not impermissible. The most recent research (based on isotopic data) I've seen indicates that Late Pleistocene hunter-forager diets were approximately 1/3 hunted meat (antelope, other big game), 1/3 non-hunted meat (fish, insects, etc.), and 1/3 vegetables and roots. Since the calorie content of vegetables is low, roots likely accounted for a significant portion of that 1/3. "In this review we have analyzed the 13 known quantitative dietary studies of hunter-gatherers and demonstrate that animal food actually provided the dominant (65%) energy source, while gathered plant foods comprised the remainder (35%)." http://www.ncbi.nlm.nih.gov/pubmed/11965522 Fair warning: you will get into big arguments over this amongst paleo purists. However, since your objective is to gain weight, not to lose it, some root starches will help you maintain that objective while still staying away from gluten/gliadin. In other words, the old-school American breakfast of steak or bacon, eggs, and potatoes is basically paleo -- so long as the potatoes are fried in the steak fat or in butter, and not in an industrial product like 'vegetable oil' (a misnomer: actually 'grain oil'). And for those of you who aren't ready to go full paleo but want as many of the health, energy, and attitude benefits as possible, removing anything containing 'vegetable oil' from your diet is a great start. Corn oil, soybean oil, and cottonseed/sunflower/safflower/canola oil are all extremely high in n-6 polyunsaturated fatty acids. Even olive oil should be used lightly and in moderation due to n-6 content. I can expand on this if people are interested: altering the n-3/n-6 balance accounts for many of the beneficial effects of a paleo/primal diet. From pharos at gmail.com Mon Nov 15 09:40:59 2010 From: pharos at gmail.com (BillK) Date: Mon, 15 Nov 2010 09:40:59 +0000 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: On Mon, Nov 15, 2010 at 6:40 AM, Keith Henson wrote: > True. One of my more freaky realizations in the last year is that > certain classes of finance are beyond human abilities. In reading up > on the big drop in the stock market early this year, it became clear > that unaided humans are not in the running for the kind of "finance" > that certain computer programs do. > > Currently the people who write the descriptions of how computers > should make money in the market using short-time (millisecond) trades > have been bitching that they are not getting enough of the money these > things make. > > Well, the obvious course of events is that someone programs one of > these to run the whole thing, including bank deposits, gives one of > them a small stake to work with, then cuts it loose with instructions > to spawn new versions and accounts. > > So if you later wonder how the AIs cornered the world's capital, I > mentioned it first. :-) > > Won't happen until the Singularity, when all bets are off anyway.
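To put Keith's "beyond human abilities" point in rough numbers: a co-located trading program works on millisecond timescales, while a human needs on the order of two hundred milliseconds just to react to a price change. A minimal sketch in Python -- both latencies below are order-of-magnitude assumptions, not measured figures:

    human_reaction_sec = 0.2    # roughly the fastest human reaction to a visual cue
    program_loop_sec = 0.001    # assumed decision loop of a co-located trading program

    # The program can observe, decide, and trade ~200 times before a human
    # has finished reacting once, never mind entered an order.
    print(human_reaction_sec / program_loop_sec)   # -> 200.0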
The point of these trading programs is to assist in making a few people very, very rich and the great majority poor and unemployed. Working great so far. (But surely the burning torches and pitchforks can't be far away, can they?) BillK From rpwl at lightlink.com Mon Nov 15 14:08:06 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Mon, 15 Nov 2010 09:08:06 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: <4CE13EC6.5090200@lightlink.com> Samantha Atkins wrote: > Again, I was on SL4 pretty much from the beginning and certainly was > not any sort of cultist or yes-woman. So how come I wasn't banned if > your characterization is valid? And no, this isn't an invitation to > revisit just how much you feel you were wronged by Eliezer in the > past. Samantha, As I pointed out in a post the other day that you (pointedly) ignored: I was banned from SL4 immediately after I suggested that Eliezer's AND my own comments be put in front of an outside expert in cognitive science. Since the debate between Eliezer and myself was about a particular technical issue in cognitive science, that would have been a perfect way for onlookers to assess whether Eliezer's comments really were as ridiculous as I said they were. As soon as I made that suggestion, he banned me from SL4, wrote several defamatory essays about me, and then forbade anyone on SL4 from discussing the matter further. Why have *you* never been banned? I don't know: perhaps because you don't have enough knowledge to challenge his core beliefs and defend yourself successfully. Richard Loosemore From hkeithhenson at gmail.com Mon Nov 15 15:31:44 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 15 Nov 2010 08:31:44 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Mon, Nov 15, 2010 at 5:00 AM, John Grigg wrote: > > Brent Allsop wrote: > I would agree that a copy-able human-level AI would launch a take-off, > leaving what we have today, to the degree that it is unchanged, in the > dust. But I don't think achieving this is going to be anything like > spontaneous, as you seem to assume is possible. The rate of progress > of intelligence is so painfully slow. So slow, in fact, that many > have accused great old AI folks like Minsky of being completely > mistaken. >>>> > > Michael Anissimov replied: > There's a huge difference between the rate of progress from today > to human-level AGI and the time from human-level AGI to > superintelligent AGI. They're completely different questions. As for > a fast rate, would you still be skeptical if the AGI in question had > access to advanced molecular manufacturing? > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours, or even days or months seems very doubtful in > my view. > > Am I missing something here? What does an AI mainly need? Processing power and storage. If there are vast amounts of both that can be exploited, then all you need is a storage estimate for the AI and the average bandwidth between storage locations to determine the replication rate.
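In code, that back-of-the-envelope estimate is a single division. A minimal sketch in Python, where the state size and link speed are assumptions for illustration only, not measurements:

    state_bytes = 1e9                  # assume ~1 GB captures the AI's stored state
    link_bytes_per_sec = 10e6 / 8      # assume a 10 megabit/s link, i.e. 1.25 MB/s

    copy_time_sec = state_bytes / link_bytes_per_sec
    print(copy_time_sec / 60.0)        # -> ~13 minutes per copy

    # If each finished copy immediately starts copying in turn, the population
    # doubles once per copy time: 86400 / 800 = 108 doublings in a single day,
    # bounded in practice only by the hardware available to copy onto.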
Human memory is thought to be in the few hundreds of M bytes. How long does it take to copy a G byte over the net nowadays? BillK wrote: > > On Mon, Nov 15, 2010 at 6:40 AM, Keith Henson wrote: snip >> So if you later wonder how the AIs cornered the world's capital, I >> mentioned it first. :-) > > Won't happen until the Singularity, when all bets are off anyway. > > The point of these trading programs is to assist in making a few > people very, very rich and the great majority poor and unemployed. > Working great so far. I can easily see a disgruntled programmer writing this as retaliation against a hated boss. > (But surely the burning torches and pitchforks can't be far away, can they?) That is _so_ 17th century. Surely you can think of something better. Keith From sjatkins at mac.com Mon Nov 15 16:56:52 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 08:56:52 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> Message-ID: <3E98024C-9C8C-4D2C-8F54-A0355E058FFD@mac.com> On Nov 14, 2010, at 9:03 AM, Stefano Vaj wrote: > On 14 November 2010 02:22, Damien Broderick wrote: >> Extrope Dan Clemmensen posted here around 15 years ago his conviction that >> the Singularity would happen "before 1 May, 2006" (the net would "wake up"). >> Bad luck. > > I still believe that seeing the Singularity as an "event" taking place > at a given time betrays a basic misunderstanding of the metaphor, only > too open to the sarcasm of people such as Carrico. > > If we go for the original meaning of "the point in the future where > the predictive ability of our current forecast models and > extrapolations obviously collapses", it would seem obvious that the > singularity is more of the nature of a horizon, moving forward with > the perspective of the observer, than of a punctual event. That is not the original meaning but a likely consequence of the advent of AGI. - s From sjatkins at mac.com Mon Nov 15 17:04:07 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:04:07 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <9207FB8E-8DF9-44A0-91A6-E43B7BEDED24@mac.com> On Nov 14, 2010, at 9:59 AM, Michael Anissimov wrote: > Here's a list I put together a long time ago: > > http://www.acceleratingfuture.com/articles/relativeadvantages.htm > > Say I meet someone like Natasha or Stefano, but I know they haven't been exposed to any of the arguments for an abrupt Singularity. Someone more new to the whole thing. I mention the idea of an abrupt Singularity, and they react by saying that that's simply secular monotheism. Then, I present each of the items on that AI Advantage list, one by one. Each time a new item is presented, there is no reaction from the listener. It's as if each additional piece of information just isn't getting integrated. > > The idea of a mind that can copy itself directly is a really huge deal. A mind that can copy itself directly is more different from us than we are from most other animals. We're talking about an area of mindspace way outside what we're familiar with. > > The AI Advantage list matters to any AI-driven Singularity.
You may say that it will take us centuries to get to AGI, so therefore these arguments don't matter, but if you think that, you should explicitly say so. The arguments about whether AGI is achievable by a certain date and whether AGI would quickly lead to a hard takeoff are separate arguments -- as if I need to say it. With full acknowledgement of the AI advantage, which I certainly understand, it is quite speculative whether, and how hard, a takeoff will ensue when AGI is achieved. > > What I find is that people don't like the *connotations* of AI and are much more concerned about the possibility of THEM PERSONALLY sparking the Singularity with intelligence enhancement, so therefore they underestimate the probability of the former simply because they never care to look into it very deeply. There is also a cultural dynamic in transhumanism whereby interest in hard takeoff AGI is considered "SIAI-like" and implies that one must be culturally associated with SIAI. How come? I. J. Good was talking about this nearly half a century ago. It is not an SIAI-specific meme in the least. But I have noticed the tendency of many to associate an idea with its currently most prolific or best-known expounders. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Mon Nov 15 16:57:27 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 15 Nov 2010 11:57:27 -0500 Subject: [ExI] Let's play What If. In-Reply-To: <4CE0158B.9000409@speakeasy.net> References: <4CC6738E.3050609@speakeasy.net> <4CC991EC.5010605@satx.rr.com> <4CC994FF.4000705@satx.rr.com> <073BE5EA-3D97-446D-B915-2CF6BF506465@bellsouth.net> <4CC9B3A2.4010301@satx.rr.com> <4CCA0C1A.3050704@satx.rr.com> <72483A7C-973F-4AEB-81B8-22E5DF6FF595@bellsouth.net> <4CCAE3D8.4020309@lightlink.com> <4CCB0509.4080107@lightlink.com> <4CCB0A6F.3090707@satx.rr.com> <7B7BE56B-A4D7-4046-9BDB-7CC225E218D8@bellsouth.net> <4CCC5E62.8050907@satx.rr.com> <4CE0158B.9000409@speakeasy.net> Message-ID: On Nov 14, 2010, at 11:59 AM, Alan Grimes wrote: > > Science has not, and cannot, make any claims about metaphysics. There are an infinite number of metaphysical theories that fit the known facts, and without science there is no way to know which one is right; except that even metaphysics needs to be self-consistent. So let's see where your intuitive feeling that atoms are the key to identity leads. The human bladder has a capacity of at least 300 cc, so assuming you weigh about 75 kilograms you will lose almost half of one percent of your identity every time you visit the men's room. However this loss of self need not be permanent, because whenever you insert a donut into your head beyond your teeth the atoms in the donut undergo transubstantiation and they are no longer just atoms, they are now YOUR atoms. This change in the atoms of the donut (caused by some sort of mysterious transubstantiation field that envelops the body) is of monumental importance but is completely undetectable by the scientific method. Also, you need not worry about getting fat from all those donuts, because fat is good: fat people have a stronger identity than thin people, and they have more consciousness because they have more atoms. Alan, is this a metaphysical theory you want to quite literally stake your life on? > Science can erode some of the edges of what was previously metaphysics by weeding out some of the more-wrong understandings of the world, but it can't do much more than that. You seem to be implying that something CAN do better than that.
Please elaborate! > The identity issue in uploading is precisely the type of question that > science is utterly mute about. Religion is certainly not mute about matters of this sort, but it would be better if it were; however, religious people do so enjoy flapping their gums and pontificating about things they have no way of knowing anything about. I agree with Ludwig Wittgenstein about one thing: "What we cannot speak about we must pass over in silence". > It is logically impossible to repeat the experiment of destructively uploading someone. Repeat? It is impossible to perform the experiment even once until you explain exactly what is destroyed in a destructive upload. If it is the soul, something of enormous importance that is nevertheless completely undetectable by the scientific method, then obviously the theory cannot be disproven by an experiment. But I ask you again, is this really a metaphysical theory that you want to quite literally stake your life on? John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Mon Nov 15 17:14:02 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:14:02 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 14, 2010, at 9:52 AM, Michael Anissimov wrote: > On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj wrote: > > I still believe that seeing the Singularity as an "event" taking place > at a given time betrays a basic misunderstanding of the metaphor, only > too open to the sarcasm of people such as Carrico. > > If we go for the original meaning of "the point in the future where > the predictive ability of our current forecast models and > extrapolations obviously collapses", it would seem obvious that the > singularity is more of the nature of a horizon, moving forward with > the perspective of the observer, than of a punctual event. > > We have some reason to believe that a roughly human-level AI could rapidly improve its own capabilities, fast enough to get far beyond the human level in a relatively short amount of time. The reason why is that a "human-level" AI would not really be "human-level" at all -- it would have all sorts of inherently exciting abilities, simply by virtue of its substrate and necessities of construction: While it "could" do this, it is not at all certain that it would. Humans can improve themselves even today in a variety of ways, but very few take the trouble. An AGI that is not autonomous would do what it was told to do by its owners, who may or may not make improving it drastically a high priority. > > 1. ability to copy itself > 2. stay awake 24/7 Possibly, depending on its long-term memory and integration model. If it came from human brain emulation this is less certain. > 3. spin off separate threads of attention in the same mind This very much depends on the brain architecture. If it is too close a copy of a human brain this may not be the case. > 4. overclock helpful modules on-the-fly Not sure what you mean by this, but this is very much a question of specific architecture rather than general AGI. > 5. absorb computing power (humans can't do this) What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time. > 6. constructed from scratch with self-improvement in mind It could be so constructed, but may or may not in fact be so constructed. > 7. the possibility of direct integration with new sensory modalities, like a codic modality I am not sure exactly what is meant by this.
That it is very very good at understanding code amounts to a 'modality'? > 8. the ability to accelerate its own thinking speed depending on the speed of available computers > This assumes an ability to integrate random other computers that I do not think is at all a given. > When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks. If it can make $1 of "income" with less than $1 of computing power, you have the ingredients for a hard takeoff. This is simple economics. Most humans don't take advantage of the many such positive-sum activities they can perform today without such self-copying abilities. So why is it certain that an AGI would? > There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have. > Statements about "most transhumanists" are fraught with many problems. > Many valuable points are made here; why do people always ignore them? 'We' don't. > http://singinst.org/upload/LOGI//seedAI.html > Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility. > Well, that helps a lot. It is a form of calling those who disagree lazy or stupid before they even voice their disagreement. > The Singularity as an incumbent rapture - or doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to many to present it these days - can on the other hand easily be deconstructed as a secularisation of millenarian myths which have plagued western culture since the advent of monotheism. > We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimal thing, thus we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism. No, you don't have airtight evidence. You have a reasonable argument for it. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Nov 15 17:06:00 2010 From: spike66 at att.net (spike) Date: Mon, 15 Nov 2010 09:06:00 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <018501cb84e7$615037b0$23f0a710$@att.net> ...On Behalf Of Keith Henson >...Currently the people who write the descriptions of how computers should make money in the market using short time (ms) trades have been bitching that they are not getting enough of the money these things make...Keith Nor are they sharing the risk. Those who bitch thus should be given this offer: put their entire net worth in a bank account. If there is another flash crash, then anyone who loses money is free to write a check against that account. spike From sjatkins at mac.com Mon Nov 15 17:20:37 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 09:20:37 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 14, 2010, at 10:51 AM, Stefano Vaj wrote: > 2010/11/14 Michael Anissimov : >> The idea of a mind that can copy itself directly is a really huge deal. Replication requires sufficient resources. The first AGIs may well require very expensive hardware systems. So that it can be copied does not in the least mean we will have a proliferation of such AGIs really quickly.
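An aside on the "simple economics" claim quoted above: a minimal sketch in Python, with all numbers hypothetical. If each copy rents compute at some daily cost, earns slightly more than that in income, and reinvests the margin in renting compute for further copies, the population compounds exactly like interest -- which is why a sustained positive margin implies explosive growth, and why the argument stands or falls on whether the margin and the supply of rentable hardware hold at scale, which is just what is being questioned here.

# Toy model of "make $1 of income with less than $1 of computing power".
# All numbers are hypothetical placeholders, not estimates.

def copies_after(days, income_per_copy_day=1.10, cost_per_copy_day=1.00):
    """Copies running after `days`, starting from one, if every dollar
    of daily margin is immediately spent renting compute for new copies."""
    copies = 1.0
    growth = income_per_copy_day / cost_per_copy_day  # daily growth factor
    for _ in range(days):
        copies *= growth
    return copies

# A 10% daily margin turns one copy into roughly 1.3e15 copies in a year --
# if, and only if, the margin and the rentable hardware supply persist.
print(f"{copies_after(365):.3g}")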
From pharos at gmail.com Mon Nov 15 17:48:03 2010 From: pharos at gmail.com (BillK) Date: Mon, 15 Nov 2010 17:48:03 +0000 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: On Mon, Nov 15, 2010 at 3:31 PM, Keith Henson wrote: > BillK wrote: >> >> The point of these trading programs is to assist in making a few >> people very, very rich and the great majority poor and unemployed. >> Working great so far. > > I can easily see a disgruntled programmer writing this as retaliation > against a hated boss. > That's only a theoretical possibility. In practice it is impossible, and totally unlikely even if it were possible. These trading programs are not your ordinary buy-and-sell order processors like your stockbroker has access to. They run on a very few special main dealers' computers that plug in directly to the stock exchange computers. So physical access is the first hurdle. Even if they stole their programs and ran off with them (as one programmer actually did!) they could not make use of them, because they don't have access to the special insider-only computers. The profits from these microsecond trades go to these main dealers' accounts. There is no way a programmer could extract profits for himself. These programmers are *very* well paid. Yes, they are grumbling about getting more, but they would not risk their current small fortune to annoy their boss and risk a jail sentence. Their grumbles are hints that they want more or they will move to another main dealer. These special programs effectively mean that the stock market is broken. Outsiders have no chance against the insider dealers. You can buy shares and gamble on whether they go up or down, but nowadays it's purely a gamble. The market is rigged. BillK From thespike at satx.rr.com Mon Nov 15 18:25:35 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 15 Nov 2010 12:25:35 -0600 Subject: [ExI] this one's for Spike and all the other space cadets Message-ID: <4CE17B1F.7000605@satx.rr.com> The picture, not the astronaut! From sparge at gmail.com Mon Nov 15 20:10:35 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 15 Nov 2010 15:10:35 -0500 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> Message-ID: On Sun, Nov 14, 2010 at 2:19 PM, Max More wrote: > In reply to Dave Sill: > > Your reply again illustrates why I wanted you to read some of the sources. I've been reading a bunch of them. >> I don't think it's particularly Extropian not to apply science and >> technology to our diets. > > Now you're telling me what's extropian and doing so based on a false > assumption. I don't think you really disagree with my statement. I'm guessing the assumption you're referring to is that "paleo/primal diet" means "a nutritional plan based on the presumed ancient diet of wild plants and animals that various human species habitually consumed during the Paleolithic [Era]"*. If it really means "a modern diet based on the presumed ancient diet [...] but incorporating current knowledge of biochemistry, nutrition, genetics, etc.", then perhaps I could be excused for being misled. >> Yes, whole grains are good sources of carbohydrates, protein, fiber, >> phytochemicals, vitamins, minerals, etc.
> > http://www.thepaleodiet.com/articles/Cereal%20article.pdf > page 25. > From p.24: "All cereal grains have significant nutritional shortcomings > which are apparent upon analysis... Yes, no single food is complete. That doesn't mean grains aren't nutritious. > "However, as more and more cereal grains are included in the diet, they > tend to displace the calories that would be provided by other foods (meats, > dairy products, fruits and vegetables), and can consequently disrupt > adequate nutritional balance." That doesn't mean that moderate grain consumption is bad. > Apart from replying to Natasha's question, no more time for this. To those > interested in exploring further, I have plenty more good information > sources if you want them. I sense, from your two replies, that you think I'm hostile to the idea of a "paleo" diet, but I'm not. I'm curious, and based on what I've read so far I'm convinced there are some good ideas there, but I'm also skeptical of some of the claims. Thanks for your time so far. I don't expect a response. -Dave * Lifted from http://en.wikipedia.org/wiki/Paleolithic_diet From agrimes at speakeasy.net Mon Nov 15 20:59:04 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 15 Nov 2010 15:59:04 -0500 Subject: [ExI] The atoms red herring. =| Message-ID: <4CE19F18.8040200@speakeasy.net> While the uploaders can be relied upon to turn to patronizing arguments, it becomes truly annoying when I am accused of something I am emphatically not guilty of. The case in point is the accusation that I associate identity with a certain set of atoms. This accusation has been repeated several times now. Seriously, this argument needs to come to a screeching halt until someone provides me with evidence that I *EVER* associated my identity with specific atoms or issues the apology that I am now owed. =\ -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From dan_ust at yahoo.com Mon Nov 15 22:16:22 2010 From: dan_ust at yahoo.com (Dan) Date: Mon, 15 Nov 2010 14:16:22 -0800 (PST) Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> Message-ID: <309442.61408.qm@web30105.mail.mud.yahoo.com> Just out of curiosity, before agriculture there was some grain consumption, no? I mean consumption of wild grains gathered around the Middle East... I'm not sure what the evidence is for this, but I'm thinking someone must have been eating grains before they were domesticated. Is there any information on this? Regards, Dan
From possiblepaths2050 at gmail.com Mon Nov 15 23:07:49 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 15 Nov 2010 16:07:49 -0700 Subject: [ExI] this one's for Spike and all the other space cadets In-Reply-To: <4CE17B1F.7000605@satx.rr.com> References: <4CE17B1F.7000605@satx.rr.com> Message-ID: It made me think of many a science fiction novel cover that I have seen over the years. John : ) On 11/15/10, Damien Broderick wrote: > The picture, not the astronaut! > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From stefano.vaj at gmail.com Mon Nov 15 23:09:46 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 00:09:46 +0100 Subject: [ExI] Singularity In-Reply-To: References: Message-ID: On 14 November 2010 19:28, Aleksei Riikonen wrote: > Who's going for "listening to prophets"? Serious people like Nick > Bostrom and the SIAI present actual, concrete steps and measures that > need to be taken to minimize risks. Once more, I have no doubt that SIAI or Bostrom are (even too) serious. My point is simply that we are entitled to a more serious discussion of what would be a "risk" and why we should consider it so. -- Stefano Vaj From stefano.vaj at gmail.com Mon Nov 15 23:49:49 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 00:49:49 +0100 Subject: [ExI] Mathematicians as Friendliness analysts In-Reply-To: <4CE0A19A.1080308@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CE0A19A.1080308@lightlink.com> Message-ID: On 15 November 2010 03:57, Richard Loosemore wrote: > And the fact that we are talking about the friendliness of *computers* is a > red herring. Absolutely. I put obsessing on "friendliness" (to whom?) on an equal basis with those who look forward to some robotic revolutions to wipe out "humankind".
"One thing in any case is certain: man is neither the oldest nor the most constant problem that has been posed for human knowledge. Taking a relatively short short chronological sample withing a restricted geographical area - European culture since the XVI century - one can be certain that man is a recent invention. It is not around him and his secrets that knowledge prowled for so long in the darknessIn fact, among all the mutations tha have affected the knowledge of things and their order, the knowledge of identities, differences, characters, equivalences, words - in short, in the midst of all the episodes of that profound history of the * Same* - only one, that which began a century and a half ago, and is now perhaps drawing to a close, has made it possible for the figure of man to appear. And that appearance was not not the liberation of an old anxiety, the transition into luminous consciousness of an age-old concen, the entry into objectivity that had long remained trapped within beliefs and philosophers: it was the effect of a change in the fundamental arrangements of knowledge. As the archaeology of our thought easily shows, man is an invention of recent date. And one perhaps nearing its end. If those arrangements were to disappear as they appeared, if some event of which we can at the momento do no more than sense the possibility - without knowing either what its form will be or what it promises - were to cause them to crumble, as the ground of Classical thought did, at the end of the XVIII century, then one can certainly wager that man would be erased, like a face drawn in sand at the edge of the sea." (Foucault) Now, I maintain that we cannot even think of becoming posthumans unless we become posthumanists in the first place. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Nov 15 23:57:31 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 00:57:31 +0100 Subject: [ExI] Paleo/Primal health [Was: Re: Technology, specialization, and diebacks...Re: I love the world. =)] In-Reply-To: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com> References: <201011141956.oAEJtxP1012356@andromeda.ziaspace.com> Message-ID: On 14 November 2010 20:55,Natasha asked: > > > Max, after you respond to Amara, would you please advise me how I can >> maintain and even gain weight on the paleo diet? >> > What about eating more of the same? If one is "objectively" under its optimal weight, this should be enough, unless some prob exists which need correction at an ormonal level and/or through supplementation. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 16 00:11:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 01:11:44 +0100 Subject: [ExI] Let's play What If. In-Reply-To: <826395.808.qm@web114413.mail.gq1.yahoo.com> References: <826395.808.qm@web114413.mail.gq1.yahoo.com> Message-ID: On 14 November 2010 20:40, Ben Zaiboc wrote: > The question doesn't have any real sense, but the significant thing is why. > > The reason is that a blastocyst has no central nervous system. > No CNS, no thoughts. No thoughts, no identity. No identity, no 'soul'. > QED. > Even if they had, it would not change a thing, IMHO. BTW, I find your turn of phrase: "the soul - or, for those who prefer to put > some secular veneer on such concepts, the individual's 'identity'...", a > little odd. Why a 'secular veneer'? 
Is secularism not the default > position, in your opinion? I'd have expected you to say "the identity - or, > for those who prefer to put some supernatural veneer on such concepts, the > individual's 'soul'...". > My point is that thinking of "identity" in essentialist terms is a thinly disguised metaphysical position. Thus, the opposite does not really express what I mean. In fact, it would seem quite bizarre to claim that those who believe in some kind of "soul" are thinly disguised secularists. As I believe should be quite obvious, I am personally very far from both POVs... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 16 00:18:46 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 01:18:46 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: <309442.61408.qm@web30105.mail.mud.yahoo.com> References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 15 November 2010 23:16, Dan wrote: > Just out of curiosity, before agriculture there was some grain > consumption, no? > Mmhhh. Try and survive on some wild, untreated, raw grain and let me know how it is going. Personally, I would be more inclined to give cows' "grass and leaves" diet a try. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Nov 16 00:22:54 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 15 Nov 2010 19:22:54 -0500 Subject: [ExI] Paleo/Primal health In-Reply-To: <309442.61408.qm@web30105.mail.mud.yahoo.com> References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On Mon, Nov 15, 2010 at 5:16 PM, Dan wrote: > Just out of curiosity, before agriculture there was some grain consumption, no? > I mean consumption of wild grains gathered around the Middle East... I'm not > sure what the evidence is for this, but I'm thinking someone must have been > eating grains before they were domesticated. Is there any information on this? Here are a couple of links: http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html So it obviously happened. It's very hard to tell how widespread it was, how important it was, how seasonal it was, what percentage of caloric intake it provided, etc. Interestingly, it's still being done by the Ojibwe: http://www.bineshiiwildrice.com. -Dave From stathisp at gmail.com Tue Nov 16 00:17:25 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Tue, 16 Nov 2010 11:17:25 +1100 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE19F18.8040200@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> Message-ID: <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> You have said that if a person is destructively copied he does not survive. What does this imply about your view of survival? Either that the same atoms have to be preserved, or that there is some other substance, not reducible to atoms or information, that has to be preserved.
-- Stathis Papaioannou From thespike at satx.rr.com Tue Nov 16 01:44:02 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 15 Nov 2010 19:44:02 -0600 Subject: [ExI] crazy quantum Zeno notion Message-ID: <4CE1E1E2.5090707@satx.rr.com> I have a (Deutschean shadow-universes) intuition that the Quantum Zeno effect might derive from superposed activities in adjacent, only slightly divergent M-W realities where directed activities reinforce or prohibit a certain outcome, unlike ordinary stochastic radioactivity, say, where the "shadow overlaps" in/from nearby worlds are arbitrary. Might this have an impact on big beam and similar programs, say, perhaps delaying or inhibiting some otherwise possible outcomes (e.g., Higgs manifestations, proton decay)? Damien Broderick From michaelanissimov at gmail.com Tue Nov 16 02:33:28 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 18:33:28 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: Hi John, On Sun, Nov 14, 2010 at 9:27 PM, John Grigg wrote: > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here? MNT and merely human-equivalent AI that can copy itself but not qualitatively enhance its intelligence beyond the human level is enough for a hard takeoff within a few weeks, most likely, if you take the assumptions in the Phoenix nanofactory paper. Add in the possibility of qualitative intelligence enhancement and you get somewhere even faster. Neocortex expanded in size by a factor of only about 4 from chimps to produce human intelligence. The basic underlying design is much the same. Imagine if expanding neocortex by a similar factor again led to a similar qualitative increase in intelligence. If that were so, then even a thousand AIs with so-expanded brains and a sophisticated manufacturing base would be like a group of 1000 humans with assault rifles and helicopters in a world of six billion chimps. If that were the case, then the Phoenix nanofactory + human-level AI-based estimate might be excessively conservative. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Tue Nov 16 02:56:50 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 18:56:50 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Hi Samantha, 2010/11/15 Samantha Atkins wrote: > > While it "could" do this it is not at all certain that it would. Humans > can improve themselves even today in a variety of ways but very few take the > trouble. An AGI that is not autonomous would do what it was told to do by > its owners, who may or may not consider improving it drastically a high > priority. > Quoting Omohundro: http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully.
Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else's safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems. In an earlier paper we used von Neumann's mathematical theory of microeconomics to analyze the likely behavior of any sufficiently advanced artificial intelligence (AI) system. This paper presents those arguments in a more intuitive and succinct way and expands on some of the ramifications. > Possibly, depending on its long term memory and integration model. If it > came from human brain emulation this is less certain. > I was assuming AGI, not a simulation, but yeah. It just seems likely that AGI would be able to stay awake perpetually, though not entirely certain. It seems like this would be a priority upgrade for early-stage AGIs. > This very much depends on the brain architecture. If too close a copy of > human brains this may not be the case. > Assuming AGI. > 4. overclock helpful modules on-the-fly > > Not sure what you mean by this but this is very much a question of specific > architecture rather than general AGI. > I doubt it would be hard to implement. You can overclock specific modules in chess AI or Brood War AI today. It means giving a specific module extra computing power. It would be like temporarily shifting your auditory cortex tissue to take up visual cortex processing tasks to determine the trajectory of an incoming projectile. > What does this mean? Integrate other systems? How? To what level? Humans > do some degree of this all the time. > The human brain stays at a roughly constant 100 billion neurons and a weight of 3 lb. I mean directly absorbing computing power into the brain. > It could be so constructed but may or may not in fact be so constructed. > Self-improvement would likely be an emergent property due to the reasons given in the Omohundro paper. So if it weren't developed deliberately from the start, self-improvement is an ability that would be likely to develop on the road to human-equivalence. > I am not sure exactly what is meant by this. That it is very very good at > understanding code amounts to a 'modality'? > Lizards have brain modules highly adapted to evaluating the fitness of fellow lizards for fighting or mating. Chimpanzees have the same modules, but with respect to other chimpanzees. Trilobites probably had specialized neural hardware for doing the same with other trilobites. Some animals can smell very well, but have poor hearing and sight. Or vice versa. The reason is that they have dedicated chunks of brainware that evolved to deal with sensory data from a particular channel. Humans have HUGE visual cortex areas, larger than the brains of mice. We can see in more colors than most animals. The way a human sees is different than the way an eagle sees, because we have different eyes, brains, and visual processing centers. The human visual cortex takes in gigabytes (or something like that) of information per second, and processes it down to edges, corners, distance estimates, salient objects, colors, and many other important features. To a slug, a view of a city looks like practically nothing, because its eyes are crap, its brain is crap, and its visual processing centers are crap. To a human, it can have a thousand different features and meanings. We didn't evolve to process code.
We probably did evolve to process simple mathematics and the idea of logical processes on some level, so we apply that to code. Humans are not general-purpose intellects capable of doing everything satisfactorily. Compared to potential superintelligences, we are idiots. Future superintelligences will look back on humans and marvel that we could write any code at all. After all, we were designed mainly to mess around with each other, kill animals, forage, retain our status, and have sex. Most human beings alive today are more or less incapable of coding. Imagine if human beings had evolved in an environment for millions of years where we were murdered and prevented from reproducing if our coding abilities fell short. Create an environment like that, and you might have a situation promoting the evolution of specific brain centers for visualizing and writing computer code. > This assumes an ability to integrate random other computers that I do not > think is at all a given. > All it requires is that the code can be parallelized. > This is simple economics. Most humans don't take advantage of the many > such positive-sum activities they can perform today without such > self-copying abilities. So why is it certain that an AGI would? > Not certain, but pretty damn likely, because it could probably perform tasks without getting bored, and would have innate drives towards increasing its power and protecting/implementing its utility function. > There is an interesting debate to be had here, about the details of the > plausibility of the arguments, but most transhumanists just seem to dismiss > the conversation out of hand, or don't know that there's a conversation to > have. > > Statements about "most transhumanists" are fraught with many problems. > Most of the 500+ transhumanists I have talked to. > http://singinst.org/upload/LOGI//seedAI.html > > Prediction: most comments in response to this post will again ignore the > specific points in favor of a rapid takeoff and simply dismiss the idea > based on low intuitive plausibility. > > > Well, that helps a lot. It is a form of calling those who disagree lazy or > stupid before they even voice their disagreement. > I like to get to the top of the Disagreement Pyramid quickly, and it seems very close to impossible when transhumanists discuss the Singularity, and particularly the idea of hard takeoff. As someone arguing on behalf of the idea of hard takeoff, I demand that critics address the central point, not play *ad hominem* with me. You're addressing the points -- thanks! http://www.acceleratingfuture.com/michael/blog/images/disagreement-hierarchy.jpg > No, you don't have airtight evidence. You have a reasonable argument for > it. > It depends on what specifically is being argued. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Tue Nov 16 03:03:48 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 19:03:48 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <013801cb848b$cfd192d0$6f74b870$@att.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> Message-ID: Heya Spike, On Sun, Nov 14, 2010 at 10:10 PM, spike wrote: > > I am not advocating a Bill Joy approach of eschewing AI research, just the > opposite.
A no-singularity future is 100% lethal to every one of us, every > one of our children and their children forever. A singularity gives us > some > hope, but also much danger. The outcome is far less predictable than > nuclear fission. > Would you say the same thing if the Intelligence Explosion were initiated by the most trustworthy and altruistic human being in the world, if one could be found? In general, I agree with you except the last sentence. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Tue Nov 16 02:38:04 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 15 Nov 2010 21:38:04 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> Message-ID: <4CE1EE8C.4080602@speakeasy.net> Stathis Papaioannou wrote: > You have said that if a person is destructively copied he does not survive. What does this imply about > your view of survival? As has been shown, that is difficult to argue with conventional logic and reasoning, so let's try a completely different mind experiment. I want you, right now, to try to mind-swap yourself into your cat, or your computer, or anything else you might find more suitable. I presume the experiment failed. So why did it? What evidence do you have that the experiment will succeed if certain pre-conditions are met? What are those preconditions? (I have a whole pile of fresh material along this line of thought so please bear with me. ;) > Either that the same atoms have to be preserved or that there is some other > substance, not reducible to atoms or information, that has to be preserved. Don't quote me on things that you read into what I wrote. =| -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From michaelanissimov at gmail.com Tue Nov 16 03:06:53 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Mon, 15 Nov 2010 19:06:53 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE1EE8C.4080602@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: Alan is avoiding the question. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists1 at evil-genius.com Tue Nov 16 03:46:29 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Mon, 15 Nov 2010 19:46:29 -0800 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: References: Message-ID: <4CE1FE95.7070603@evil-genius.com> On 11/15/10 4:23 PM, extropy-chat-request at lists.extropy.org wrote: > Here are a couple links: > > http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html > http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html > > So it obviously happened. It's very hard to tell how widespread it > was, how important it was, how seasonal it was, what percentage of > caloric intake it provided, etc. Interestingly, it's still being done > by the Ojibwe:http://www.bineshiiwildrice.com. Here's Dr.
Cordain's response to the Mozambique data: http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html Summary: there is no evidence that the wild sorghum was processed with any frequency -- nor, more importantly, that it had been processed in a way that would actually give it usable nutritional value (i.e. soaked and cooked, for which there is no evidence of the behavior or the associated technology (cooking vessels, baskets) for at least 75,000 more years). Therefore, it was either being used to make glue -- or it was a temporary response to starvation and didn't do them much good anyway. Don't forget that the natural condition of wild creatures is hunger. Most of us have never been without food for one single day...or if we have, it's been purely by choice. If you get hungry enough you'll eat tree bark. The real question is: is there evidence that wild sorghum was eaten frequently and processed in a way that would make it actually digestible and nutritious? In other words, that there would have been significant selection pressure for eating and digesting it? As far as the Spartan Diet article, it strongly misrepresents both the articles it quotes and the paleo diet. Let's go through the misrepresentations: 1) As per the linked article, the 30 Kya European site has evidence that "Palaeolithic Europeans ground down plant roots similar to potatoes..." The fact that Palaeolithic people dug and ate some nonzero quantity of *root starches* is not under dispute: the assertion of paleo dieters is that *grains* (containing gluten/gliadin) are an agricultural invention. (Also note that the linked article finishes with a bizarre claim that consumption of *any* starch means that a diet is not meat-centered. As I've linked before, hunter-gatherer caloric intake averages about 2/3 meat and 1/3 non-meat calories. Apparently there are a lot of people who still confuse Atkins with paleo.) Link to the original paper (full text not available, but supplemental material clearly shows that cattail is the source of the starch 'grains' in question): http://www.pnas.org/content/107/44/18815.abstract I've seen this misrepresentation before: articles speak of 'grains of starch' found as residue, usually of root vegetables, and anti-paleo crusaders mistake this to mean cereal grains, like wheat and barley! As you might expect, the Spartan Diet page claims explicitly that these are cereal grains being processed, even though they're not. Hmmm... 2) No one disputes the 23 Kya Israel data. However, there is a big difference between "time of first discovery" and "used by the entire ancestral human population". It took another 11,000 years for people in one valley in the Middle East to starve enough to actually start growing grains on purpose, and it took thousands more years to spread anywhere else. For instance, Northern Europe only agriculturalized about 5,000 years ago. Note that it takes a *lot* of grain to feed a single person, not to mention the problem of storage for nomadic hunter-gatherers during the 11 months per year that a grain 'crop' is not harvestable -- so arguing that wild grains were the majority of anyone's diet previous to domestication is a stretch. And it is silly to claim that meaningful grain storage could somehow occur before a culture settled down into permanent villages.
3) The Spartan Diet page claims that consumption of grains by modern-era Native Americans somehow invalidates the paleo diet, by making a strawman claim about "The Paleo Diet belief that grain was consumed only as a cultivated crop..." Obviously grain was consumed as a wild food before it was cultivated, or no one would have thought to cultivate it! I addressed this already in 2). Not to mention that humans didn't even *arrive* in the Americas until ~12 Kya, making this issue irrelevant. 4) The Cordain rebuttal above addresses the Mozambique data, and I won't rehash it. I also note that the "Spartan Diet" is a low-fat diet that opposes the use of butter and any fat but extra-virgin olive oil -- in other words, based on the long-since-discredited theory that fat is bad and saturated fats are worse. It's apparently a gimmick diet based on what they think the Spartans ate...which is better than most gimmick diets, but it's not based on science. More in my next message. From lists1 at evil-genius.com Tue Nov 16 03:46:37 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Mon, 15 Nov 2010 19:46:37 -0800 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: Message-ID: <4CE1FE9D.4060004@evil-genius.com> More evidence: "Simoons' classic work on the incidence of celiac disease [Simoons 1981] shows that the distribution of the HLA B8 haplotype of the human major histocompatibility complex (MHC) nicely follows the spread of farming from the Mideast to northern Europe. Because there is strong linkage disequilibrium between HLA B8 and the HLA genotypes that are associated with celiac disease, it indicates that those populations who have had the least evolutionary exposure to cereal grains (wheat primarily) have the highest incidence of celiac disease. This genetic argument is perhaps the strongest evidence to support Yudkin's observation that humans are incompletely adapted to the consumption of cereal grains." http://www.beyondveg.com/cordain-l/grains-leg/grains-legumes-1a.shtml Citation: Simoons FJ (1981) "Celiac disease as a geographic problem." In: Walcher DN, Kretchmer N (eds.) Food, Nutrition and Evolution. New York: Masson Publishing. (pp. 179-199) Diet, Gut, and Type 1 Diabetes: Role of Wheat-Derived Peptides? http://diabetes.diabetesjournals.org/content/58/8/1723.full "In this issue of Diabetes, Mojibian et al. (2) report that approximately half of the patients with type 1 diabetes, whom they studied, had a proliferative T-cell response to dietary wheat polypeptides and that the cytokine profile of the response was predominantly proinflammatory." ... "The study by Mojibian et al. raises the possibility that wheat could be the driving dietary antigen in two autoimmune diseases, i.e., celiac disease and type 1 diabetes." [Note: 'Wheat polypeptides' = what are collectively known as gluten/gliadin. In other words, a significant number of humans suffer cross-reactions between gluten and their own beta cells. This process is also thought to be behind celiac disease.]
From brent.allsop at canonizer.com Tue Nov 16 04:19:13 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Mon, 15 Nov 2010 21:19:13 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <4CE20641.5020702@canonizer.com> Moral Experts, (Opening note: For those that don't enjoy the below religious / moral / mormonesque rhetoric that I enjoy, I hope you can simply translate it on the fly to something more to your liking. ;) This is a very exciting topic, and I think morally a critically important one. If we do the wrong thing, or fail to do it right, I think we're all agreed the costs could be very extreme. Morality has to do with knowing what is right, and what is wrong, does it not? I sure desperately want to "Choose The Right" (CTR) as mormons like to always say. But I feel I desperately need more help to be more morally capable, especially in this area. It is especially hard for me to understand, fully grasp, and remember ways of thinking about things that are very different from my current way of thinking about things. All this eternal 'yes it is', 'no it is not', 'yes it is' isn't doing me any good, for sure. This is a much more complex issue than I've ever really fully thought about, and I appreciate the help from people on both sides of the issue. I may be the only one, but I would find it very valuable and educational to have concise descriptions of all the best arguments and issues, and a quantitative ranking, by the experts, of the importance of them all, and a quantitative measure of who, and how many people, are in each camp. In other words, I think the best way for all of us to approach this problem is to have some concise, quantitative, and constantly improving representation of the most important issues, according to the experts on all sides, so we can all be better educated (with good references) about what the most important arguments are, why, and which experts, and how many, are in each camp - going forward, as ever more scientific data and ever more improved reasoning comes in. We've started one survey topic, on the general issue of the importance of friendly AI (see: http://canonizer.com/topic.asp/16 ), which so far shows a somewhat even distribution of experts on both sides. But this is obviously just a start at what is required so all of us can be better educated on all the most important issues and arguments. Through this discussion, I've realized that a critical subcomponent of the various ways of thinking about this issue is one's working hypothesis about the possibility of a rapid isolated, hidden, or remote 'hard takeoff'. I'm betting that the more one holds an isolated hard takeoff as a real possibility in one's working hypothesis, the more likely one is to fear or want to be cautious about AI, and vice versa. So I think it will be very educational for everyone to more rigorously and concisely develop and measure the most important reasons on both sides of this particular sub-issue. Towards this end, I'd like to create several new related survey topics to get a more detailed map of what the experts believe in this space. First would be a survey topic on the possibility of any kind of isolated rapid hard takeoff. We could create two related topics to capture and concisely state the various arguments, and to quantitatively rank their importance and value (i.e. their ability to be convincing) relative to each other.
We could have one argument topic ranking reasons why an isolated hard takeoff might be possible, and another ranking reasons why it might not be likely. This way, the experts on both sides of the issue could collaboratively develop the best and most concise description of each of the arguments, and help rank which are the most convincing for everyone and why. (It would be interesting to see if the ranking for each side changed when surveying those in the pro camp versus those in the con camp, and so on.) As these two pro and con argument ranking topics developed, the members of the pro and con camps could reference these arguments, and develop concise descriptions of why the pro or con arguments are more convincing to them than the others, and why they are in their particular camp, or why they currently use the particular pro or con theory as their working hypothesis. And of course, it would be very interesting to see if anyone jumps camps, once things start getting more developed, or when new scientific results or catastrophes come in, and so on. Would anyone else think this kind of moral expert survey information would be helpful to them in their effort to make the best possible decisions and judgments on such important issues? Would anyone else have any better or additional ways to develop or structure a survey of critically important information that anyone thinks everyone interested in this topic needs to know about? I'm going to continue developing this survey along these lines, using what I've heard others say so far here, but there are surely better ways to go about this that others can help find or point out; obviously the more diversity the better, so I would love to have any other ideas or inputs or help with this process. Looking forward to any and all feedback, pro or con, and it would be great to at least get a more comprehensive survey of who is in these camps, starting with the improvement of this one: http://canonizer.com/topic.asp/16 . And also, I hope for some day achieving perfect justice. Those that are wrong are arguably doing great damage compared to the heroes that are right - the ones that are helping us all to be morally better. It seems to me that to achieve perfect justice, the mistaken or wicked ones will have to make restitution to the heroes, for the damage they continue to do, for as long as they continue to be wrong (to sin?). The better we rigorously track all this, the sooner we can achieve better justice, right? The more help I get, from all sides, the more capable I'll be of being in the right camp sooner, and the more capable I'll be of helping others to do the same, and the less restitution I'll have to clean up for being mistaken longer, and the more reward we will all reap, sooner, in a more just and perfect heaven. Brent Allsop On 11/15/2010 7:33 PM, Michael Anissimov wrote: > Hi John, > > On Sun, Nov 14, 2010 at 9:27 PM, John Grigg wrote: > > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here?
> > > MNT and merely human-equivalent AI that can copy itself but not > qualitatively enhance its intelligence beyond the human level is > enough for a hard takeoff within a few weeks, most likely, if you take > the assumptions in the Phoenix nanofactory paper. > > Add in the possibility of qualitative intelligence enhancement and you > get somewhere even faster. > > Neocortex expanded in size by a factor of only about 4 from chimps to > produce human intelligence. The basic underlying design is much the > same. Imagine if expanding neocortex by a similar factor again led to > a similar qualitative increase in intelligence. If that were so, then > even a thousand AIs with so-expanded brains and a sophisticated > manufacturing base would be like a group of 1000 humans with assault > rifles and helicopters in a world of six billion chimps. If that were > the case, then the Phoenix nanofactory + human-level AI-based estimate > might be excessively conservative. > > -- > michael.anissimov at singinst.org > Singularity Institute > Media Director > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 16 05:22:25 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 21:22:25 -0800 Subject: [ExI] Singularity (Changed Subject Line) In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CDF39E6.7090700@satx.rr.com> <9D7647EB531F4F1F88F6EC4F983B7AF4@DFC68LF1> Message-ID: On Nov 14, 2010, at 10:40 AM, Giulio Prisco wrote: > I wish to support Michael here. I don't share many of the SIAI > positions and views on the Singularity and the evolution of AGI, but I > think they do interesting work and play a useful role. The world is > interesting because it is big and varied, with different persons and > groups doing their own things with their own focus. I second that. I think SIAI does some really good things that I am very delighted and impressed by. That does not mean that other criticisms get a free pass though or that valid criticisms should be ignored just because some criticisms are obviously overblown. Is there a named cognitive bias for that? It is a common pattern. A voices a perhaps valid and reasonable seeming criticism of X. B voices an outrageous or overly harsh criticism of X. C takes offense over the remarks of B. D voices support for C and says positive things about X. Result, most people seem to be left feeling like all the criticisms were overblown. I have seen this pattern 50 times if I have seen it once. > > In particular I think the criticism of idiots like Carrico and his > handful of followers, mentioned by Stefano, should be ignored. We have > better and more interesting things to do. Oh, and E brings up the fact that F, who is generally despised, also criticizes X. Yawn. Pass the nanotubes. 
- samantha From spike66 at att.net Tue Nov 16 05:13:01 2010 From: spike66 at att.net (spike) Date: Mon, 15 Nov 2010 21:13:01 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> Message-ID: <004901cb854c$f1216f20$d3644d60$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Michael Anissimov . >Heya Spike. Heya back Michael! The level of discourse here has improved an order of magnitude since you started posting last week. Thanks! You SIAI guys are aaaallways welcome here. On Sun, Nov 14, 2010 at 10:10 PM, spike wrote: >>I am not advocating a Bill Joy approach of eschewing AI research, just the opposite. A no-singularity future is 100% lethal to every one of us, every one of our children and their children forever. A singularity gives us some hope, but also much danger. The outcome is far less predictable than nuclear fission. >Would you say the same thing if the Intelligence Explosion were initiated by the most trustworthy and altruistic human being in the world, if one could be found?... Ja I would say nearly the same thing, however I cheerfully agree we have a muuuch better chance of a good outcome if the explosion is initiated by the most trustworthy and altruistic among us carbon units. I am a big fan of what you guys are doing as SIAI. It pleases me to see you working the problem, for without you, the inevitable Intelligence Explosion falls to the next bunch, who I do not know, who may or may not make it their focus to produce a friendly AI. That would reduce the probability of a good outcome. That being said: >In general, I agree with you except the last sentence. >michael.anissimov at singinst.org >Singularity Institute >Media Director I do hope you are right in that disagreement, but I will defend my pessimism in any case. The engineering world is filled with problems which unexpectedly defeated their designers, or did something completely unexpected. In my own field, the classic example is the hybrid aerospike engine, which was designed to burn both kerosene and liquid hydrogen, and also to throttle efficiently. If we can get a single engine to do that, optimizing thrust at varying altitudes and burning two different fuels without duplicating nozzles, pumps, thrust vector control, all that heavy stuff, then we can achieve single stage to orbit. We poured tons of money into the effort, but that seemingly straightforward engineering problem unexpectedly defeated us. We cannot use a single engine to burn both fuels, and consequently we have no SSTO to this day. The commies worked the same problem; it kicked their asses too, as good as they are at large scale propulsion. There were unknowns that no one knew were unknowns. It could be my own ignorance of the field (hope so) but it seems to me like there are waaay many unknowns in what an actual intelligence (artificial or bio) will do. It appears to me to be inherent in the field of intelligence. Were you to suggest literature, I would be willing to study it. I want to encourage you lads up there in Palo Alto. Your cheering section is going wild. We know the path to artificial intelligence is littered with the corpses of those who have gone before. The path beyond artificial intelligence may one day be littered with the corpses of our dreams, of our visions, of ourselves. spike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From agrimes at speakeasy.net Tue Nov 16 05:38:13 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 00:38:13 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> Message-ID: <4CE218C5.7090608@speakeasy.net> Michael Anissimov wrote: > Would you say the same thing if the Intelligence Explosion were > initiated by the most trustworthy and altruistic human being in the > world, if one could be found? I would like to cast my vote in favor of a supremely selfish bastard. I'm serious. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Tue Nov 16 05:45:32 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 21:45:32 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 14, 2010, at 11:26 AM, Aware wrote: > Michael, what has always frustrated me about Singularitarians, apart > from their anthropomorphizing of "mind" and "intelligence", is the > tendency, natural for isolated elitist technophiles, to ignore the > much greater social context. The vast commercial and military > structure supports and drives development providing increasingly > intelligent systems, exponentially augmenting and amplifying human > capabilities, hugely outweighing not only in height but in breadth, > the efforts of a small group of geeks (and I use the term favorably, > being one myself.) >
> > The need is not for a singleton nanny-AI but for development of a > fractally organized synergistic framework for increasing awareness of > our present but evolving values, and our increasingly effective means > for their promotion, beyond the capabilities of any individual I have no idea what a 'fractally organized synergistic framework for increasing awareness of our present but evolving values' is or entails, or when or how you would know that you had achieved it. Frankly, our values today are overall pretty thinly grounded: they rest on our evolved psychology and not, for most human beings, on very much in the way of self-examination, wisdom or ethical inquiry. I somewhat doubt that human 1.0 is overall designed to be capable of much more except in relatively isolated cases. I submit that that much is not good enough for the challenges ahead of us. > biological or machine intelligence. > If it is beyond the capabilities of any intelligence then how will it seemingly magically arise in fractal magnificence among an accumulation of said inadequate intelligences? - samantha From agrimes at speakeasy.net Tue Nov 16 05:37:06 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 00:37:06 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE21882.7030207@speakeasy.net> Michael Anissimov wrote: > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could it? > In this paper > we argue that such a robot will indeed be dangerous unless it is > designed very carefully. > Without special precautions, it will resist being turned off, will try > to break into other > machines and make copies of itself, and will try to acquire resources > without regard for > anyone else's safety. These potentially harmful behaviors will occur not > because they > were programmed in at the start, but because of the intrinsic nature of > goal driven systems. > In an earlier paper we used von Neumann's mathematical theory of > microeconomics > to analyze the likely behavior of any sufficiently advanced artificial > intelligence > (AI) system. This paper presents those arguments in a more intuitive and > succinct way > and expands on some of the ramifications. Do you ever get around to proving that the set of general AI systems ever intersects the set of goal-directed systems? I strongly doubt that there is even one possible AGI design that is in any way guided by any strict set of goals. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Tue Nov 16 06:03:44 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:03:44 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <37934C2D-3BAD-4AB6-94E2-5C33FC526ED5@mac.com> On Nov 14, 2010, at 6:55 PM, John Grigg wrote: > I must admit that I yearn for a hard take-off singularity that > includes the creation of a nanny sysop who gets rid of poverty, disease, > aging, etc., and looks after every human on the planet, but without > establishing a tyranny. By definition a nanny sysop is a functional tyrant in at least some ways. What I want is to be reasonably sure humanity will survive this technological transition period. I am pretty convinced that not that many evolved intelligent species do survive this particular developmental challenge.
The reason they do not is not because a UFAI eats them just before it self-destructs. It has to do with the species needing to very quickly grow beyond its evolved psychology to deal with accelerating change and losing its species dominance. It is a huge challenge. I would love to see it through to the other side. Who wants to die out here in "slow time"? Certainly not I. But my primary desire as a transhumanist is that I do what I can to increase the odds of a successful transition. That said, I think a radically better future, and within a mere few decades, is quite possible. And that is still exciting and exhilarating. - samantha From sjatkins at mac.com Tue Nov 16 06:10:30 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:10:30 -0800 Subject: [ExI] Mathematicians as Friendliness analysts In-Reply-To: <4CE0A19A.1080308@lightlink.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <4CDDCD0C.8040208@lightlink.com> <4CDDFF3B.1080406@lightlink.com> <4CDE02A0.6030007@satx.rr.com> <4CDE1D80.5030800@lightlink.com> <4CDE26C9.90008@satx.rr.com> <4CE0A19A.1080308@lightlink.com> Message-ID: <25F18285-FD96-4F9A-9A6B-09E1BD0F5775@mac.com> On Nov 14, 2010, at 6:57 PM, Richard Loosemore wrote: > Michael Anissimov wrote: >> On Sat, Nov 13, 2010 at 2:10 PM, John Grigg > wrote: >> And I noticed he did "friendly AI research" with >> a grad student, and not a fully credentialed academic or researcher. >> Marcello Herreshoff is brilliant for any age. Like some others of our Fellows, he has been a top-scorer in the Putnam competition. He's been a finalist in the USA Computing Olympiad twice. He lives and breathes mathematics -- which makes sense because his dad is a math teacher at Palo Alto High School. Because Friendly AI demands so many different skills, it makes sense for people to custom-craft their careers from the start to address its topics. That way, in 2020, we will have people who have been working on Friendly AI for 10-15 years solid rather than people who have been flitting in and out of Friendly AI and conventional AI. > > Michael, > > This is entirely spurious. Why gather mathematicians and computer science specialists to work on the "friendliness" problem? > > Since the dawn of mathematics, the challenges to be solved have always been specified in concrete terms. Every problem, without exception, is definable in an unambiguous way. The friendliness problem is utterly unlike all of those. You cannot DEFINE what the actual problem is, in concrete, unambiguous terms. > Mathematics may be said to be the study of pattern qua pattern, of patterns of patterns. A Friendliness that cannot be captured or described accurately, or ever measured or used to measure alternatives, would not be an engineering goal at all. Personally I think it is so vague as to be useless. I would rather see work on a general ethics that applies even to beings of wildly different capabilities that are not mutually interdependent. This seems much more likely to lead to benign behavior by an advanced AGI toward humans than attempting to coerce Friendliness at an engineering level. Of course the rub with this general ethics is that humans don't even seem able to come up with a generally agreed ethics for the much narrower case of other members of their own species. This suggests that either such a general ethics is impossible or that humans are not very good at all at ethical reasoning.
- samantha From sjatkins at mac.com Tue Nov 16 06:24:40 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:24:40 -0800 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Nov 14, 2010, at 9:15 PM, Aleksei Riikonen wrote: > On Mon, Nov 15, 2010 at 6:48 AM, Samantha Atkins wrote: >> On Nov 12, 2010, at 2:44 PM, Aleksei Riikonen wrote: >> >>> If people want a new version of Singularitarian Principles >>> to exist, they can write one themselves. >> >> Hardly. I cannot speak for this Institute. How would my writing >> such a thing be anything but my opinion? > > No matter who would write such a document, it's just an opinion. There > is currently no codified "ideology of singularitarianism" that would > be owned by any single Institute. That is not what I want. I want to know what the current working theories are concerning FAI and what type of FAI is the current working plan, if any. For a time it seemed to be CEV. But some people in SIAI claim that is obsolete while others say it is still the general plan. So I would like clarification. > > Eliezer and other SIAI folks seem to like it that way, so there likely > will not be a codified document of Singularitarian principles coming > from their direction. So if there are people who want such a codified > ideology, they're going to have to codify it themselves. > >> I want to know what the SIAI current positions are. > > That's a different thing than wanting them to present a codified > ideology. Just read their recent publications. This is a good start: > > http://singinst.org/riskintro/index.html It is a start but not sufficient. It doesn't really propose much of anything. Researching what remains stable in a self-improving brain, with no real general model likely to cover the domain of self-improving brains and not even a single working example, seems rather weak to me. Many of the items spoken of at this link are certainly important and worthwhile but I don't see a lot of meat here. Am I missing something? I can work my way through the newer documents on site that I haven't read yet. - samantha From agrimes at speakeasy.net Tue Nov 16 05:57:44 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 00:57:44 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: <4CE21D58.60606@speakeasy.net> > Alan is avoiding the question. And you're avoiding reality itself. What's the difference? =P -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Tue Nov 16 06:36:38 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:36:38 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <03A63180-9898-4075-9976-54A9C9C2F388@mac.com> On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: > On Mon, Nov 15, 2010 at 5:00 AM, John Grigg > wrote: >> >> Brent Allsop wrote: >> I would agree that a copy-able human level AI would launch a take-off, >> leaving what we have today, to the degree that it is unchanged, in the >> dust. But I don't think achieving this is going to be anything like >> spontaneous, as you seem to assume is possible.
The rate of progress >> of intelligence is so painfully slow. So slow, in fact, that many >> have accused great old AI folks like Minsky of being completely >> mistaken. >> >> Michael Anissimov replied: >> There's a huge difference between the rate of progress between today >> and human-level AGI and the time between human-level AGI and >> superintelligent AGI. They're completely different questions. As for >> a fast rate, would you still be skeptical if the AGI in question had >> access to advanced molecular manufacturing? >> >> I agree that self-improving AGI with access to advanced manufacturing >> and research facilities would probably be able to bootstrap itself at >> an exponential rate, rather than the speed at which humans created it >> in the first place. But the "classic scenario" where this happens >> within minutes, hours or even days and months seems very doubtful in >> my view. >> >> Am I missing something here? > > What does an AI mainly need? Processing power and storage. If there > are vast amounts of both that can be exploited, then all you need is a > storage estimate for the AI and the average bandwidth between storage > locations to determine the replication rate. But wait. The first AGIs will likely be ridiculously expensive. So what if they can copy themselves? If you can only afford one and they are originally only as competent as a human expert then you will go with entire campuses of human experts until the costs come down sufficiently - say in a decade or two after the first AGI. Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. - samantha From sjatkins at mac.com Tue Nov 16 06:47:04 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 22:47:04 -0800 Subject: [ExI] Singularity In-Reply-To: References: Message-ID: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> On Nov 15, 2010, at 3:09 PM, Stefano Vaj wrote: > On 14 November 2010 19:28, Aleksei Riikonen wrote: >> Who's going for "listening to prophets"? Serious people like Nick >> Bostrom and the SIAI present actual, concrete steps and measures that >> need to be taken to minimize risks. > > Once more, I have no doubt that SIAI or Bostrom are (even too) > serious. My point is simply that we are entitled to a more serious > discussion of what would be a "risk" and why we should consider it so. IMHO there has perhaps been too much focus on "existential risk" at the cost of insufficient focus on clearly visioning the positive future we wish to bring into being. I feel at times as if much of our energy has become focused on the negative and we have lost sight of or failed to sufficiently embrace the positive. It is much easier generally to see what is wrong or may turn out wrong than to cleanly imagine a positive outcome and work diligently to bring it about. From talking with many transhumanists it does not seem that we have that clear and coherent a shared vision of the desired future. If not then how can we expect to work together to bring it about? We have many shared dream fragments but that is not enough for a coherent vision.
- s From sjatkins at mac.com Tue Nov 16 07:00:43 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:00:43 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> Message-ID: <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> On Nov 15, 2010, at 6:33 PM, Michael Anissimov wrote: > Hi John, > > On Sun, Nov 14, 2010 at 9:27 PM, John Grigg wrote: > > I agree that self-improving AGI with access to advanced manufacturing > and research facilities would probably be able to bootstrap itself at > an exponential rate, rather than the speed at which humans created it > in the first place. But the "classic scenario" where this happens > within minutes, hours or even days and months seems very doubtful in > my view. > > Am I missing something here? > > MNT and merely human-equivalent AI that can copy itself but not qualitatively enhance its intelligence beyond the human level is enough for a hard takeoff within a few weeks, most likely, if you take the assumptions in the Phoenix nanofactory paper. MNT is of course not near term at all. The latest guesstimates I saw by Drexler, Freitas and Merkle put it a good three decades out. So if we get HAI before that it is likely to be expensive and not at all easy for it to quickly upgrade itself. A few very expensive human-equivalent AGIs will not be very revolutionary quickly. > > Add in the possibility of qualitative intelligence enhancement and you get somewhere even faster. > Too many IF bridges need to be crossed between here and there for the argument to be very compelling. Possible, yes. Likely within three to four decades, not so much. > Neocortex expanded in size by a factor of only about 4 from chimps to produce human intelligence. The basic underlying design is much the same. Imagine if expanding neocortex by a similar factor again led to a similar qualitative increase in intelligence. I am not at all sure that would be possible with current human brain size and brain architecture. But then I don't take well to strained analogies. > If that were so, then even a thousand AIs with so-expanded brains and a sophisticated manufacturing base would be like a group of 1000 humans with assault rifles and helicopters in a world of six billion chimps. Even more strained! :) Where are you going to get a thousand human-level AGIs? Using what assumption on hardware and energy requirements? > If that were the case, then the Phoenix nanofactory + human-level AI-based estimate might be excessively conservative. For some time decades hence, maybe. But it isn't a serious existential risk now. Economic collapse is a very serious risk in this coming decade. Energy and resource crises are close behind. Those could result in losing a substantial part of our technological/scientific infrastructure *before* MNT or AGI can be developed. If we do then the argument is strong that humanity may never recover to the necessary level of infrastructure and resources again. That would be catastrophic. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 16 07:45:50 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:45:50 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: On Nov 15, 2010, at 6:56 PM, Michael Anissimov wrote: > Hi Samantha, > > 2010/11/15 Samantha Atkins > > While it "could" do this it is not at all certain that it would.
Humans can improve themselves even today in a variety of ways but very few take the trouble. An AGI that is not autonomous would do what it was told to do by its owners, who may or may not have improving it drastically as a high priority. > > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could it? In this paper > we argue that such a robot will indeed be dangerous unless it is designed very carefully. > Without special precautions, it will resist being turned off, will try to break into other > machines and make copies of itself, and will try to acquire resources without regard for > anyone else's safety. These potentially harmful behaviors will occur not because they > were programmed in at the start, but because of the intrinsic nature of goal driven systems. > In an earlier paper we used von Neumann's mathematical theory of microeconomics > to analyze the likely behavior of any sufficiently advanced artificial intelligence > (AI) system. This paper presents those arguments in a more intuitive and succinct way > and expands on some of the ramifications. > I have argued this point (and stronger variants) with Steve. If the AI's goals are totally centered on chess playing then it is extremely unlikely that it would diverge along many or all possible paths that might make it a more powerful chess player. Many, many fields of knowledge could possibly make it better at its stated goal but it would have to be much more a generalist than a specialist to notice them and take the time to master them. If it could so diverge along so many paths then it would also encounter other fields of knowledge, including those for judging the relative importance of various values using various methodologies -- which would tend, if understood, to make it not a single-minded chess-playing machine from hell. The argument seems self-defeating. > Possibly, depending on its long-term memory and integration model. If it came from human brain emulation this is less certain. > > I was assuming AGI, not a simulation, but yeah. It just seems likely that AGI would be able to stay awake perpetually, though not entirely certain. It seems like this would be a priority upgrade for early-stage AGIs. > One path to AGI is via emulating at least some subsystems of the human brain. It is not at all clear to me that this would not also bring in many human limitations. For instance, our learning cannot be transferred immediately to another person because of our rather individual neural associative patterns that the learning act modified. New knowledge is not in any one discrete place or in some universally instantly useful form as encoded in the human brain. Using a similar learning scheme in an AGI would mean that you could not transfer achieved learning very efficiently between AGIs. You could only copy them. > This very much depends on the brain architecture. If too close a copy of human brains this may not be the case. > > Assuming AGI. > >> 4. overclock helpful modules on-the-fly > > Not sure what you mean by this but this is very much a question of specific architecture rather than general AGI. > > I doubt it would be hard to implement. You can overclock specific modules in chess AI or Brood War AI today. It means giving a specific module extra computing power.
It would be like temporarily shifting your auditory cortex tissue to take up visual cortex processing tasks to determine the trajectory of an incoming projectile. > I am not sure the analogy holds well though. If the mind is highly integrated it is not certain that you could isolate one activity like that much more easily than we can in our own brains. Perhaps. > What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time. > > The human brain stays at a roughly constant 100 billion neurons and a weight of 3 lb. I mean directly absorbing computing power into the brain. I mean that we integrate with computational systems albeit by slow HCI today. Unless you have in mind that the AGI hack systems around it, most of the computation going on on most of that hardware has nothing to do with the AGI and is written in such a way it cannot communicate that well even with other dumb programs or even with other instances of the same programs on other machines. It is also not certain and is plausibly unlikely that AGIs run on general purpose computers. I do grant of course that an AGI can interface to a computer much more efficiently than you or I can with the above caveat. Many systems on other machines were written by humans. You almost have to get inside the human programmer's head to efficiently use many of these. I am not sure the AGI would be automatically good at that. > > It could be so constructed but may or may not in fact be so constructed. > > Self-improvement would likely be an emergent property due to the reasons given in the Omohundro paper. So if it weren't developed deliberately from the start, self-improvement is an ability that would be likely to develop on the road to human-equivalence. As mentioned I do not find his argument altogether persuasive. > > I am not sure exactly what is meant by this. That it is very very good at understanding code amounts to a 'modality'? > > Lizards have brain modules highly adapted to evaluating the fitness of fellow lizards for fighting or mating. Chimpanzees have the same modules, but with respect to others chimpanzees. Trilobites probably had specialized neural hardware for doing the same with other trilobites. > A chess playing AGI for instance would not necessarily be at all good at understanding code. Our thinking is largely a matter of interactions at the level of neural networks and associative logic but none of us have a modality for this that I know of. My argument is that an AGI can have human level or better general intelligence without being a domain expert much less having a modality for the stuff it is implemented in - code. It may have many modalities but I am not sure this will be one of them. > Some animals can smell very well, but have poor hearing and sight. Or vice versa. The reason why is because they have dedicated chunks of brainware that evolved to deal with sensory data from a particular channel. Humans have HUGE visual cortex areas, larger than the brains of mice. We can see in more colors than most animals. The way a human sees is different than the way an eagle sees, because we have different eyes, brains, and visual processing centers. > I get the point but the AGI will not have such dedicated brain systems unless they are designed in on purpose. It will not get them just by definition of AGI afaik. > > We didn't evolve to process code. We probably did evolve to process simple mathematics and the idea of logical processes on some level, so we apply that to code. 
The AGI did not evolve at all. > > Humans are not general-purpose intellects, capable of doing anything satisfactorily. What do you mean by satisfactorily? We did a great number of things satisfactorily enough to get us to this point. We are indeed general-purpose intelligent beings. We certainly have our limits but we are amazingly flexible nonetheless. > Compared to potential superintelligences, we are idiots. Well, this seems a fine game. Compared to some hypothetical but arguably quite possible being we are of less use than amoebas are to us. So what? > Future superintelligences will look back on humans and marvel that we could write any code at all. If they really are that smart about us then they will understand how we could. After 30 years writing software for a living, though, I too marvel that humans can write any code at all. I fully understand (with chagrin) how very limited our abilities in this area are. If I were actively pursuing AGI I would quite likely gear first attempts toward various types of programmer assistants and automatic code refactoring and code data mining systems. The current human software tools aren't much better than they were 20 years ago. IDEs? Almost none have as much power as Lisp and Smalltalk environments had in the 80s. > After all, we were designed mainly to mess around with each other, kill animals, forage, retain our status, and have sex. Most human beings alive today are more or less incapable of coding. Imagine if human beings had evolved in an environment for millions of years where we were murdered and prevented from reproducing if our coding abilities fell short. Are you suggesting that an evolutionary arms race at the level of code will exist among AGIs? If not then what will shape them for this purported modality? > > This assumes an ability to integrate random other computers that I do not think is at all a given. > > All it requires is that the code can be parallelized. I think it requires more than that. It requires that the AGI understand these other systems, which may have radically different architectures than its own. It requires that it be given permission (or that it simply take it) to run processes on these other systems. That said, it can do a much better job of integrating a lot of information available through web services and other means on the net today. There is a lot of power there. So I mostly concede this point. > > This is simple economics. Most humans don't take advantage of the many such positive-sum activities they can perform today without such self-copying abilities. So why is it certain that an AGI would? > > Not certain, but pretty damn likely, because it could probably perform tasks without getting bored, and would have innate drives towards increasing its power and protecting/implementing its utility function. I still don't see where an innate drive toward increasing power comes from unless it was instilled on purpose. Nor do I see why it would never re-evaluate its utility function or see it as more important than the "utility functions" of a great number of other agents, AGI and biological, in its environment. > >> There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have. > > Statements about "most transhumanists" are fraught with many problems. > > Most of the 500+ transhumanists I have talked to.
>> http://singinst.org/upload/LOGI//seedAI.html >> >> Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility. > > Well, that helps a lot. It is a form of calling those who disagree lazy or stupid before they even voice their disagreement. > > I like to get to the top of the Disagreement Pyramid quickly, and it seems very close to impossible when transhumanists discuss the Singularity, and particularly the idea of hard takeoff. As someone arguing on behalf of the idea of hard takeoff, I demand that critics address the central point, not play ad hominem with me. You're addressing the points -- thanks! You are welcome. Thanks for the interesting reply. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 16 07:53:51 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 15 Nov 2010 23:53:51 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE1EE8C.4080602@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: <00376521-0B70-469D-A9CD-1D285BDF439A@mac.com> On Nov 15, 2010, at 6:38 PM, Alan Grimes wrote: >> You have said that if a person is destructively copied he does not survive. What does this imply about >> your view of survival? > > As has been shown, that is difficult to argue with conventional logic > and reasoning, so let's try a completely different mind experiment. I > want you, right now, to try to mind-swap yourself into your cat, or your > computer or anything else you might find more suitable. > > I presume the experiment will fail. So why did it? What evidence do you > have that the experiment will succeed if certain pre-conditions are met? > What are those preconditions? Neither my cat nor current computers have sufficient storage, effective speed and parallelism to accommodate my current understanding of what a human brain requires to function as such. You cannot have such "evidence" of course. You can merely point out that there are necessary pre-conditions without being able to make an exhaustive case that they are sufficient. If any intelligent being could make that case then it would still be possible that none of us is sufficiently intelligent to understand and be convinced by it. - s From nebathenemi at yahoo.co.uk Tue Nov 16 10:35:05 2010 From: nebathenemi at yahoo.co.uk (Tom Nowell) Date: Tue, 16 Nov 2010 10:35:05 +0000 (GMT) Subject: [ExI] Hard Takeoff-money In-Reply-To: Message-ID: <125838.41623.qm@web27003.mail.ukl.yahoo.com> The problem is worse than you think it is - last week's Economist had an article summarising a paper that showed where the best locations were to situate your computer equidistant (signal-time wise) to two trading exchanges so you could get the best arbitrage between the two. Yes, setting up a server farm in Alaska so you can exploit the differences between Tokyo and New York may be the next big thing. This article http://www.economist.com/node/17202255?story_id=17202255 finishes by mentioning that despite the fast pace of automated trading, they may not be able to outrun regulators. Bill wrote: (But surely the burning torches and pitchforks can't be far away, can they?). And Keith replied: That is _so_ 17th century. Surely you can think of something better.
Yes, the 20th century solution of "Hello, we're the SEC/IRS/other agency that might claim jurisdiction and we're here to shut you down while we go through the books" will work just fine. Money as a data pattern in a computer is wonderful (allows me to draw my cash from an ATM all over the place) but is instantly stoppable by government fiat. If your account is suspended and all transactions coming from it are investigated, having a trillion dollars of trading profits may not help. (It may encourage lawyers to take on your case on a no-win, no-fee basis though, as they can dream of the moolah if they win.) Tom From stefano.vaj at gmail.com Tue Nov 16 11:07:23 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 12:07:23 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE218C5.7090608@speakeasy.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <4CE218C5.7090608@speakeasy.net> Message-ID: 2010/11/16 Alan Grimes > I would like to cast my vote in favor of a supremely selfish bastard. > Be that as it may, how can we be taken seriously if we discuss AI in the framework of an uncritical, naive form of ethical universalism? I have a great respect for the technical competence on the subject of many of us, which in any event exceeds mine by far. The aspects which make many people roll their eyes when they hear about the Singularity lie elsewhere, and have much to do with taking non-technical issues for granted or as obvious. They are not. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Tue Nov 16 11:09:19 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 16 Nov 2010 12:09:19 +0100 Subject: [ExI] Singularity In-Reply-To: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> Message-ID: On 16 November 2010 07:47, Samantha Atkins wrote: > IMHO there has perhaps been too much focus on "existential risk" at > the cost of insufficient focus on clearly visioning the positive future we > wish to bring into being. > Absolutely. Not to mention the exasperating vagueness of the concept of "existential risk" and of its underlying value assumptions as it is usually handled... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksei at iki.fi Tue Nov 16 12:26:08 2010 From: aleksei at iki.fi (Aleksei Riikonen) Date: Tue, 16 Nov 2010 14:26:08 +0200 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <76D02828-598F-4A2F-A1A5-70B2C066F090@mac.com> Message-ID: On Tue, Nov 16, 2010 at 8:24 AM, Samantha Atkins wrote: > > That is not what I want. I want to know what the current working theories are > concerning FAI and what type of FAI is the current working plan, if any. > For a time it seemed to be CEV. But some people in SIAI claim that is > obsolete while others say it is still the general plan. So I would like clarification. The CEV page was published over 6 years ago, and already *two days* after it was published an update was put out that actually, CEV doesn't work as a specification of Friendliness. You can see that clarification appended to the top of the CEV page. To put it simply, SIAI currently *doesn't know* how to build FAI.
They're trying to solve open problems in mathematics (decision theory) that need to be solved before a FAI specification would be possible. (And personally, I expect that SIAI will eventually classify those problems as so difficult that the primary plan should be to try to navigate a Singularity *without* a solution to FAI.) >> http://singinst.org/riskintro/index.html > > It is a start but not sufficient. It doesn't really propose much of anything. It proposes e.g. large new research disciplines within some fields of science. Bigger things than a single institution would be capable of on its own. What you seem to be asking for is a proposed solution to FAI, and not accepting the answer that SIAI currently doesn't have a solution. Similarly, for much of the time the Manhattan Project was in existence, they still couldn't tell you how to build a nuke. They had to do the actual research first. Only then can you draw up a specification, and build what the specification says. -- Aleksei Riikonen - http://www.iki.fi/aleksei From agrimes at speakeasy.net Tue Nov 16 14:12:58 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 09:12:58 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <94EBCC45-6546-49D2-9252-F105A4A7D88E@mac.com> Message-ID: <4CE2916A.9050607@speakeasy.net> > For some time decades hence, maybe. But it isn't a serious existential > risk now. Economic collapse is a very serious risk in this coming > decade. Energy and resource crises are close behind. Those could > result in losing a substantial part of our technological/scientific > infrastructure *before* MNT or AGI can be developed. If we do then the > argument is strong that humanity may never recover to the necessary > level of infrastructure and resources again. That would be catastrophic. I agree fully. That's why I'm doing everything in my limited power. Also, I believe your projected timeframe is extremely optimistic. =( -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From agrimes at speakeasy.net Tue Nov 16 14:05:52 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 09:05:52 -0500 Subject: [ExI] Singularity In-Reply-To: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> Message-ID: <4CE28FC0.7050107@speakeasy.net> Samantha Atkins wrote: > IMHO there has perhaps been too much focus on "existential risk" at the cost of insufficient > focus on clearly visioning the positive future we wish to bring into being. I feel at times as if > much of our energy has become focused on the negative and we have lost sight of or failed to > sufficiently embrace the positive. It is much easier generally to see what is wrong or may turn > out wrong than to cleanly imagine a positive outcome and work diligently to bring it about. From > talking with many transhumanists it does not seem that we have that clear and coherent a shared > vision of the desired future. That is remarkably true. As far as I can gather there is an extremely rude and vocal contingent that says that no matter what the future may bring, it will involve destructively scanning the brain. However, when pressed for any other details they all give different answers. > If not then how can we expect to work together to bring it > about?
We have many shared dream fragments but that is not enough for a coherent vision. Yes, it also seems impossible for some people to accept the simple fact that I do not want to upload, and therefore this becomes an insurmountable stumbling block... It's almost as if my refusal to accept uploading is bringing the movement to a screeching halt, and that it will resume at full pace only after I agree to drink the kool-aid. (fully acknowledging that this is an extremely subjective point of view.) -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From rpwl at lightlink.com Tue Nov 16 14:34:21 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 09:34:21 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE2966D.9000209@lightlink.com> Michael Anissimov wrote: > Quoting Omohundro: > > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > > Surely no harm could come from building a chess-playing robot, could > it? In this paper we argue that such a robot will indeed be dangerous > unless it is designed very carefully. Without special precautions, it > will resist being turned off, will try to break into other machines > and make copies of itself, and will try to acquire resources without > regard for anyone else's safety. These potentially harmful behaviors > will occur not because they were programmed in at the start, but > because of the intrinsic nature of goal driven systems. In an earlier > paper we used von Neumann's mathematical theory of microeconomics to > analyze the likely behavior of any sufficiently advanced artificial > intelligence (AI) system. This paper presents those arguments in a > more intuitive and succinct way and expands on some of the > ramifications. It is depressing to me that you would quote this Omohundro paper as if it had any authority. I read the paper through and through, when it first came out, and I thought the quality of the argument was so low that I could not even be bothered to write a reply to it. What Omohundro does is to start off with the conclusion he wants to prove (the one you quote above) and then he waves his hands around for a while, and at the end of the hand waving he says "QED". If people are going to start quoting it, now I suppose I am going to have to stop doing more important things and waste my time writing a paper to counteract the nonsense. Richard Loosemore From natasha at natasha.cc Tue Nov 16 14:38:16 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Tue, 16 Nov 2010 08:38:16 -0600 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com><4CDD6569.5070509@lightlink.com> <8E1B1423-E951-4B03-8706-2716CCEC541E@mac.com> Message-ID: <392F9D0B89A44CE9943D5404005C606B@DFC68LF1> A few short points: Currently on the SL4 list is a discussion on the "Simple Friendliness Plan B for AI" which may cover SU's query. So, SU - join that list and read the latest posts. CEV (for anyone who does not know the acronym) is the "Coherent Extrapolated Volition" of humanity. On another point, I hope folks drop the phrase Existential Risk and use the phrase Human Existence Risk or anything else.
Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins Sent: Sunday, November 14, 2010 10:41 PM To: ExI chat list Subject: Re: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? On Nov 12, 2010, at 2:33 PM, BillK wrote: > On Fri, Nov 12, 2010 at 9:11 PM, Aleksei Riikonen wrote: > >> As Eliezer notes on his homepages that you have read, the primary way >> to contact him is email. It's just that he gets so much email, >> including from a large number of crazy people, that he of course >> doesn't answer them all. (You, unfortunately, are one of those crazy >> people who pretty surely will be ignored. So in the end, on this >> matter it would be appropriate of you to accept that -- like all >> people -- Eliezer should have the right to choose who he spends his >> time talking to, and that he most likely would not want to correspond >> with you.) >> >> > > > As I understand SU's request, she doesn't particularly want to enter a > dialogue with Eliezer. Her request was for an updated version of The > Singularitarian Principles > Version 1.0.2 01/01/2000 marked 'obsolete' on Eliezer's website. > > Perhaps someone could mention this to Eliezer or point her to more > up-to-date writing on that subject? Doesn't sound like an > unreasonable request to me. This is indeed a very sensible request. I am a bit annoyed by the number of times I have attempted to refer to various papers in talks with SIAI people only to be told that that paper or statement is "now obsolete" without being offered any up-to-date versions. I have heard that the CEV is either "out-of-date" or still the main idea/goal so many times that I don't know what to believe about it except that the SIAI hasn't kept its own position documents and working theories up to date. - samantha _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From spike66 at att.net Tue Nov 16 15:54:07 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 07:54:07 -0800 Subject: [ExI] Singularity In-Reply-To: <4CE28FC0.7050107@speakeasy.net> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net> Message-ID: <005101cb85a6$81176310$83462930$@att.net> ... On Behalf Of Alan Grimes ... >That is remarkably true. As far as I can gather there is an extremely rude and vocal contingent that says that no matter what the future may bring, it will involve destructively scanning the brain. However, when pressed for any other details they all give different answers... Being destructively scanned is one scenario, but not the one I would consider most likely. Rather, imagine that you are uploaded nondestructively; then your physical body contains enough raw material to create six billion copies of your upload or others like it. Then others may decide your carbon-based body is using up a lot of potential thought space. And besides, it would not survive anyway, once the other raw materials on the planet are used for making computronium. >Yes, it also seems impossible for some people to accept the simple fact that I do not want to upload, and therefore this becomes an insurmountable stumbling block... It's almost as if my refusal to accept uploading is bringing the movement to a screeching halt, and that it will resume at full pace only after I agree to drink the kool-aid.
Alan Grimes What I see as a possibility is that an emergent AI could honor your wishes, then just wait until you perish of natural causes to convert your atoms to computronium. We need an AI that is friendly indeed, if we have any hope of having it decide that your wishes are more important than the 6 billion similar simulated souls it could construct out of you. spike From sparge at gmail.com Tue Nov 16 15:55:46 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 10:55:46 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/15 Michael Anissimov : > Quoting Omohundro: > http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf > Surely no harm could come from building a chess-playing robot, could it? In > this paper we argue that such a robot will indeed be dangerous unless it is designed > very carefully. Without special precautions, it will resist being turned off, will try to > break into other machines and make copies of itself, and will try to acquire resources > without regard for anyone else's safety. These potentially harmful behaviors will occur not > because they were programmed in at the start, but because of the intrinsic nature of goal > driven systems. Maybe I'm missing something obvious, but wouldn't it be pretty easy to implement a chess-playing robot that has no ability to resist being turned off, break into other machines, acquire resources, etc.? And wouldn't it be pretty foolish to try to implement an AI without such restrictions? You could even give it access to a restricted sandbox. If it's really clever, it'll eventually figure that out, but it won't be able to "escape". -Dave From hkeithhenson at gmail.com Tue Nov 16 16:39:42 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 16 Nov 2010 09:39:42 -0700 Subject: [ExI] Hard Takeoff-money Message-ID: On Tue, Nov 16, 2010 at 5:00 AM, Samantha Atkins wrote: > On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: snip >> What does an AI mainly need? Processing power and storage. If there >> are vast amounts of both that can be exploited, then all you need is a >> storage estimate for the AI and the average bandwidth between storage >> locations to determine the replication rate. > > But wait. The first AGIs will likely be ridiculously expensive. Why? The programming might be until someone has a conceptual breakthrough. But the most powerful super computers in the world are _less_ powerful than large numbers of distributed PCs. see http://en.wikipedia.org/wiki/FLOPS > So what if they can copy themselves? If you can only afford one and they are originally only as competent as a human expert then you will go with entire campuses of human experts until the costs come down sufficiently - say in a decade or two after the first AGI. The cost per GFLOP fell by 1000 to 10,000 in the last decade. > Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. Any computer can run the programs of any other computer--given enough memory and time. The human brain equivalent can certainly be run on distributed processing units since that's the obvious way it works now. Human thought actually might have something in common with computer viruses.
Keith From rpwl at lightlink.com Tue Nov 16 17:18:32 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 12:18:32 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE2BCE8.30709@lightlink.com> Dave Sill wrote: > 2010/11/15 Michael Anissimov : >> Quoting Omohundro: >> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf >> Surely no harm could come from building a chess-playing robot, could it? In >> this paper we argue that such a robot will indeed be dangerous unless it is designed >> very carefully. Without special precautions, it will resist being turned off, will try to >> break into other machines and make copies of itself, and will try to acquire resources >> without regard for anyone else's safety. These potentially harmful behaviors will occur not >> because they were programmed in at the start, but because of the intrinsic nature of goal >> driven systems. > > Maybe I'm missing something obvious, but wouldn't it be pretty easy to > implement a chess-playing robot that has no ability to resist being > turned off, break into other machines, acquire resources, etc.? And > wouldn't it be pretty foolish to try to implement an AI without such > restrictions? You could even give it access to a restricted sandbox. > If it's really clever, it'll eventually figure that out, but it won't > be able to "escape". Dave, This is one of many valid criticisms that can be leveled against the Omohundro paper. The main criticism is that the paper *assumes* certain motivations in any AI, in its premises, and then goes on to use these premises to try to "infer" what kind of motivation characteristics the AI might have! It is a flagrant, astonishing example of circular reasoning. The more astonishing, for having been accepted for publication in the 2008 AGI conference. Richard Loosemore From sjatkins at mac.com Tue Nov 16 17:20:01 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 16 Nov 2010 09:20:01 -0800 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> On Nov 16, 2010, at 8:39 AM, Keith Henson wrote: > On Tue, Nov 16, 2010 at 5:00 AM, Samantha Atkins wrote: > >> On Nov 15, 2010, at 7:31 AM, Keith Henson wrote: > > snip > >>> What does an AI mainly need? Processing power and storage. If there >>> are vast amounts of both that can be exploited, then all you need is a >>> storage estimate for the AI and the average bandwidth between storage >>> locations to determine the replication rate. >> >> But wait. The first AGIs will likely be ridiculously expensive. > > Why? The programming might be until someone has a conceptual > breakthrough. But the most powerful super computers in the world are > _less_ powerful than large numbers of distributed PCs. see > http://en.wikipedia.org/wiki/FLOPS Because: a) it is not known or much expected that AGI will run on conventional computers; b) a back-of-envelope calculation of equivalent processing power to the human brain puts that much capacity, at great cost, a decade out and two decades or more out before it is easily affordable at human-competitive rates; c) we have not much idea of the software needed even given the computational capacity. This leads to a quite high likelihood that the first AGIs will be very expensive. > >> So what if they can copy themselves?
If you can only afford one and they are originally only as competent as a human expert then you will go with entire campuses of human experts until the costs come down sufficiently - say in a decade or two after the first AGI. > > The cost per GFLOP fell by 1000 to 10,000 in the last decade. That is relevant but not determinative of early AGI cost. > >> Until then it will not matter much that they are in principle copyable. Of course if someone cracks the algorithms to have human-level AGI on much more modest hardware then we get lots of AGI proliferation much more quickly. > > Any computer can run the programs of any other computer--given enough > memory and time. The human brain equivalent can certainly be run on > distributed processing units since that's the obvious way it works > now. You are assuming that an AGI runs on a general-purpose computer. This may be false. It might require massive fine-grained parallel processing, for instance, or such great speed and throughput as to fully simulate such. Any Turing machine may be able to run any program but that doesn't mean that it can run it well enough or fast enough to have any real benefit whatsoever. - samantha From rpwl at lightlink.com Tue Nov 16 17:41:39 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Tue, 16 Nov 2010 12:41:39 -0500 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> Message-ID: <4CE2C253.8050506@lightlink.com> Samantha Atkins wrote: >>> But wait. The first AGIs will likely be ridiculously expensive. > Keith Henson wrote: >> Why? The programming might be until someone has a conceptual >> breakthrough. But the most powerful super computers in the world >> are _less_ powerful than large numbers of distributed PCs. see >> http://en.wikipedia.org/wiki/FLOPS > > Because: a) it is not known or much expected AGI will run on > conventional computers; b) a back-of-envelope calculation of > equivalent processing power to the human brain puts that much > capacity, at great cost, a decade out and two decades or more out > before it is easily affordable at human-competitive rates; c) we have > not much idea of the software needed even given the computational > capacity. Not THIS argument again! :-) If, as you say, "we do not have much idea of the software needed" for an AGI, how is it that you can say "the first AGIs will likely be ridiculously expensive"....?! After saying that, you do a back-of-the-envelope calculation that assumes we need the same parallel computing capacity as the human brain..... a pointless calculation, since you claim not to know how you would go about building an AGI, no? Those of us actually working on the problem -- actually trying to build functioning, safe AGI systems -- who have developed some reasonably detailed architectures on which calculations can be made, might deliver a completely different estimate. In my case, I have done such estimates in the past, and the required HARDWARE capacity comes out at roughly the hardware capacity of a late 1980s-era supercomputer.... If you want to know what that corresponds to in today's terms, you do the math..... (Hint: I have about that much in my barn.) ;-)
Richard Loosemore From msd001 at gmail.com Tue Nov 16 17:36:00 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Tue, 16 Nov 2010 12:36:00 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <004901cb854c$f1216f20$d3644d60$@att.net> References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> Message-ID: 2010/11/16 spike : > We know the path to artificial intelligence is littered with the corpses of > those who have gone before. The path beyond artificial intelligence may one > day be littered with the corpses of our dreams, of our visions, of > ourselves. Gee Spike, isn't it difficult to paint a sunny day with only black paint? From sparge at gmail.com Tue Nov 16 20:14:55 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 15:14:55 -0500 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: <4CE1FE95.7070603@evil-genius.com> References: <4CE1FE95.7070603@evil-genius.com> Message-ID: On Mon, Nov 15, 2010 at 10:46 PM, wrote: > > Here's Dr. Cordain's response to the Mozambique data: > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html > > Summary: there is no evidence that the wild sorghum was processed with any > frequency -- nor, more importantly, that it had been processed in a way that > would actually give it usable nutritional value (i.e. soaked and cooked, of > which there is no evidence for the behavior or associated technology > (cooking vessels, baskets) for at least 75,000 more years). Nor is there any evidence to the contrary. > Therefore, it was either being used to make glue -- or it was a temporary > response to starvation and didn't do them much good anyway. That's pure SWAG. I'd like to see the Mozambique find criticized by someone who doesn't have a stake in the "paleo diet" business. > As far as the Spartan Diet article, it strongly misrepresents both the > articles it quotes and the paleo diet. Let's go through the > misrepresentations: > 1) As per the linked article, the 30 Kya European site has evidence > that "Palaeolithic Europeans ground down plant roots similar to potatoes..." > The fact that Palaeolithic people dug and ate some nonzero quantity of > *root starches* is not under dispute: the assertion of paleo dieters is that > *grains* (containing gluten/gliadin) are an agricultural invention. Granted. However, that's more evidence that paleo diets did include bulk carbs. > 2) No one disputes the 23 Kya Israel data. However, there is a big > difference between "time of first discovery" and "used by the entire > ancestral human population". Absolutely. This is just one more data point. > Note that it takes a *lot* of grain to feed a single person, So? It doesn't take a *lot* of grain to be a regular part of the diet. > not to mention > the problem of storage for nomadic hunter-gatherers during the 11 months per > year that a grain 'crop' is not harvestable -- so arguing that wild grains > were the majority of anyone's diet previous to domestication is a stretch. I'm arguing that we just don't know how big a role grains played. Lack of evidence isn't evidence that it didn't happen. And we now have evidence that it *did* happen. So now the question is "how much"? I don't know. You don't know. Nobody knows. Lots of people are willing to guess or assert one way or the other, but I'm not.
> And it is silly to claim that meaningful grain storage could somehow occur > before a culture settled down into permanent villages. Really? It's silly to think someone could have stashed grain in a cave for a rainy day? When nearly every other food you eat is perishable, I'd think that storing grain would be pretty obvious and not terribly hard to arrange. > 3) The Spartan Diet page claims that consumption of grains by modern-era > Native Americans somehow invalidates the paleo diet, by making a strawman > claim about "The Paleo Diet belief that grain was consumed only as a > cultivated crop..." Obviously grain was consumed as a wild food before it > was cultivated, or no one would have thought to cultivate it! I addressed > this already in 2). I agree. > Not to mention that humans didn't even *arrive* in the Americas until ~12 > Kya, making this issue irrelevant. Not really. There's wild rice in China, and nothing the native Americans did couldn't have been done long before that in Asia. > 4) The Cordain rebuttal above addresses the Mozambique data, and I won't > rehash it. That's a very weak rebuttal, in my opinion. -Dave From sparge at gmail.com Tue Nov 16 20:42:35 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 15:42:35 -0500 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: <4CE1FE9D.4060004@evil-genius.com> References: <4CE1FE9D.4060004@evil-genius.com> Message-ID: On Mon, Nov 15, 2010 at 10:46 PM, wrote: > > More evidence: > > "Simoons' classic work on the incidence of celiac disease [Simoons 1981] > shows that the distribution of the HLA B8 haplotype of the human major > histocompatibility complex (MHC) nicely follows the spread of farming from > the Mideast to northern Europe. Because there is strong linkage > disequilibrium between HLA B8 and the HLA genotypes that are associated with > celiac disease, it indicates that those populations who have had the least > evolutionary exposure to cereal grains (wheat primarily) have the highest > incidence of celiac disease. This genetic argument is perhaps the strongest > evidence to support Yudkin's observation that humans are incompletely > adapted to the consumption of cereal grains." That's evidence that some people don't tolerate gluten well, but it's not proof that nobody does. It's also proof that we've started to select for grain tolerance. Paleo diet proponents--at least the ones I've read so far--argue that nobody should eat grains in any amount because our bodies can't handle them. Seems obvious to me that some people do just fine eating grains. I think a rational approach to take with regard to grains is: don't eat more than your body can tolerate. If you've got celiac, cut out gluten--but not gluten-free grains. If you have insulin resistance, cut back on them drastically. If you're diabetic, skip them altogether except for a weekly indulgence, perhaps. -Dave From bbenzai at yahoo.com Tue Nov 16 21:37:36 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Tue, 16 Nov 2010 13:37:36 -0800 (PST) Subject: [ExI] The atoms red herring. =| In-Reply-To: Message-ID: <942704.56643.qm@web114404.mail.gq1.yahoo.com> Alan Grimes wrote: > While the uploaders can be relied upon to turn to patronizing arguments. > It becomes truly annoying when I am accused of something I am > emphatically not guilty of. The case in point being the accusation that > I associate identity with a certain set of atoms. This accusation has > been repeated several times now.
Seriously, this argument needs to come > to a screeching halt until someone provides me with evidence that I > *EVER* associated my identity with specific atoms or issues the apology > that I am now owed. =\ Excellent. So you agree that it's completely irrelevant which set of atoms is doing the information processing that comprises a person's identity. From which it follows that wherever that same information processing is being done, that same identity exists. Ben Zaiboc From agrimes at speakeasy.net Tue Nov 16 22:07:39 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 17:07:39 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <942704.56643.qm@web114404.mail.gq1.yahoo.com> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> Message-ID: <4CE300AB.5060904@speakeasy.net> > Excellent. > > So you agree that it's completely irrelevant which set of atoms is doing > the information processing that comprises a person's identity. > > From which it follows that wherever that same information processing > is being done, that same identity exists. Utterly false. You are using an argument based on science/compsci, which, I have already argued, is mute on metaphysical issues such as identity. Stop pretending that the tools, techniques, and assumptions we use to describe and manipulate strings of letters on a piece of paper mean anything whatsoever in the context of yourself. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From agrimes at speakeasy.net Tue Nov 16 22:28:40 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 17:28:40 -0500 Subject: [ExI] Singularity In-Reply-To: <005101cb85a6$81176310$83462930$@att.net> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net> <005101cb85a6$81176310$83462930$@att.net> Message-ID: <4CE30598.10108@speakeasy.net> > And besides, it would not survive anyway, once the other raw materials on > the planet are used for making computronium. DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING DING !!!!!!!!!!!!!!!! That is PRECISELY why I'm so passionate about putting uploaders on a reservation. =| That's why I need a space ship fast enough to get the hell out of your light cone! =\ > What I see as a possibility is that an emergent AI could honor your wishes, > then just wait until you perish of natural causes to convert your atoms to > computronium. We need an AI that is friendly indeed, if we have any hope of > having it decide that your wishes are more important than the 6 billion > similar simulated souls it could construct out of you. ;) You are only a few mental-inhibitions away from understanding why I want the AI to be a selfish bastard. But how does that math work out? 1/6 billionth of me is only a few milligrams... Who the hell would want to be the size of a grain of sand when they could be the size of a planet? I would not value such an existence more than a grain of sand anyway. =\ -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
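Checking that arithmetic with a quick sketch may help; the ~70 kg body mass below is an assumed round figure, not anything from the thread, while the six billion split is the number quoted above:

    # Sanity check on the "1/6 billionth of me" arithmetic.
    # Assumption (not from the thread): an adult body mass of ~70 kg.
    BODY_MASS_KG = 70.0
    COPIES = 6_000_000_000   # the "6 billion similar simulated souls" above

    share_kg = BODY_MASS_KG / COPIES
    print(f"mass per copy: {share_kg * 1e9:.1f} micrograms")
    # ~11.7 micrograms per copy under these assumptions

On those assumptions each share comes out near a dozen micrograms rather than milligrams, which if anything sharpens the point: the split share is well below grain-of-sand scale.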
From agrimes at speakeasy.net Tue Nov 16 22:35:17 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 17:35:17 -0500 Subject: [ExI] Hard Takeoff-money In-Reply-To: References: Message-ID: <4CE30725.8070506@speakeasy.net> > The cost per GFLOP fell by 1000 to 10,000 in the last decade. My own machine just benchmarked at 3.388 Whetstone Gflops, (NOT COUNTING THE GPU!!), and cost about $2,000. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From natasha at natasha.cc Tue Nov 16 22:40:09 2010 From: natasha at natasha.cc (natasha at natasha.cc) Date: Tue, 16 Nov 2010 17:40:09 -0500 Subject: [ExI] META: Responding to Posts In-Reply-To: <942704.56643.qm@web114404.mail.gq1.yahoo.com> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> Message-ID: <20101116174009.sobj0q82woowsooo@webmail.natasha.cc> Extropes, Please let us know whose post(s) you are responding to. Thank you! Natasha From spike66 at att.net Tue Nov 16 22:30:07 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 14:30:07 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <003f01cb85dd$d3258830$79709890$@att.net> ... On Behalf Of Dave Sill ... >...Maybe I'm missing something obvious, but wouldn't it be pretty easy to implement a chess playing robot that has no ability to resist being turned off, break into other machines, acquire resources, etc.? And wouldn't it be pretty foolish to try to implement an AI without such restrictions? You could even give it access to a restricted sandbox. If it's really clever, it'll eventually figure that out, but it won't be able to "escape".-Dave Perhaps, but we risk having the AI gain the sympathy of one of the team, who becomes convinced of any one of a number of conditions: the AI is a human equivalent, so it needs to be copied onto another computer in order to protect it from a crash, or protect it from the other researchers. A team member intentionally copies the AI to take it home, to work on it more, or perhaps realizes it is worth a fortune and wishes to steal it. Or a researcher realizes that her own time on this planet is drawing to a close with at best another fifty years to live, so she decides to take a chance and unleash the beast, hoping for the best. Or she makes a deal with the AI to save her and slay the infidels. Or it is so clever that it figures out how to control microorganisms to build replicating nanobots from DNA, which then carry the software, bit by bit, to a nearby internet enabled computer. Dave, how many scenarios can we imagine where the AI is controlled in lab conditions, but it somehow escapes? spike From spike66 at att.net Tue Nov 16 22:31:36 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 14:31:36 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> Message-ID: <004001cb85de$07f3c040$17db40c0$@att.net> -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Mike Dougherty 2010/11/16 spike : >> We know the path to artificial intelligence is littered with the >> corpses of those who have gone before. The path beyond artificial >> intelligence may one day be littered with the corpses of our dreams, >> of our visions, of ourselves.
>Gee Spike, isn't it difficult to paint a sunny day with only black paint? Mike we must recognize both the danger and promise of AGI. We might have only one chance to get it exactly right: the first try. spike From jrd1415 at gmail.com Tue Nov 16 22:27:35 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Tue, 16 Nov 2010 14:27:35 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE300AB.5060904@speakeasy.net> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> Message-ID: 2010/11/16 Alan Grimes : > You are using an argument based on science/compsci, No other basis for valid argumentation exists. > which I have already argued... Under conditions which a priori invalidate your so-called "argument". > ... is mute on metaphysical issues... Metaphysical?!! Translation: Oooga booga superstition. Dragons, demons, devils, angels, ghosts, and goblins. > ... such as identity. One way or another, Identity is reality, which is the purview of science and logic. Your metaphysical malarkey is for frightened children in darkened rooms worrying about boogie men under the bed. You are indisputably a troll, dedicated to wasting other people's time, emotionally and intellectually unqualified to participate in adult discourse. Best, Jeff Davis "Science works, religion doesn't." Berni Chong From agrimes at speakeasy.net Tue Nov 16 23:14:49 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Tue, 16 Nov 2010 18:14:49 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> Message-ID: <4CE31069.8000403@speakeasy.net> >> ... is mute on metaphysical issues... Jeff Davis: > Metaphysical?!! Translation: Oooga booga superstition. Dragons, > demons, devils, angels, ghosts, and goblins. Webster's dictionary: Metaphysics (1) A division of philosophy that is concerned with the fundamental nature of reality and being and that includes ontology, cosmology, and often epistemology. Translation: Suck Webster's balls. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From thespike at satx.rr.com Tue Nov 16 23:22:46 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Tue, 16 Nov 2010 17:22:46 -0600 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> Message-ID: <4CE31246.7050302@satx.rr.com> On 11/16/2010 4:27 PM, Jeff Davis wrote: >> ... is mute on metaphysical issues... > > Metaphysical?!! Translation: Oooga booga superstition. Dragons, > demons, devils, angels, ghosts, and goblins. No, Jeff, no. That's not what "metaphysical" means (and I assume Alan means it correctly). That's as bad an error as the frequently heard gibe "I'm not interested in *semantics*" as if "semantics" means game-playing obfuscation rather than "how strings of signifiers *mean*." It's as bad an error as supposing that "ideology" means "Marxism." Every assertion, every model of reality, is metaphysically framed--that is, derives from or implies some contestable position concerning the being, the entia, of what the words or model represent. It's always risky citing Wikipedia, but this has some useful background on the Aristotelian origin of the term and the way it got screwed up: http://en.wikipedia.org/wiki/Metaphysics Max might care to throw in some philosophy?
Damien Broderick From spike66 at att.net Wed Nov 17 00:17:24 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 16:17:24 -0800 Subject: [ExI] Singularity In-Reply-To: <4CE30598.10108@speakeasy.net> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net> <005101cb85a6$81176310$83462930$@att.net> <4CE30598.10108@speakeasy.net> Message-ID: <005901cb85ec$cf750580$6e5f1080$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Alan Grimes ... >> ... We need an AI that is friendly indeed, if >> we have any hope of having it decide that your wishes are more >> important than the 6 billion similar simulated souls it could construct out of you... spike >But how does that math work out? 1/6 billionth of me is only a few milligrams... Who the hell would want to be the size of a grain of sand when they could be the size of a planet? I would not value such an existence more than a grain of sand anyway. =\ Imagine if you were the size of a grain of sand but felt exactly the way you do now. You could be the size of a grain of sand now, and not realize it. If you were the size of a planet, too much of the mass would be under such pressure and at such temperatures that it would not be available for computronium. spike From pharos at gmail.com Tue Nov 16 22:47:06 2010 From: pharos at gmail.com (BillK) Date: Tue, 16 Nov 2010 22:47:06 +0000 Subject: [ExI] Singularity In-Reply-To: <4CE30598.10108@speakeasy.net> References: <8C36D3D5-A695-4E17-8451-893781B028F4@mac.com> <4CE28FC0.7050107@speakeasy.net> <005101cb85a6$81176310$83462930$@att.net> <4CE30598.10108@speakeasy.net> Message-ID: 2010/11/16 Alan Grimes wrote: > But how does that math work out? 1/6 billionth of me is only a few > milligrams... Who the hell would want to be the size of a grain of sand > when they could be the size of a planet? I would not value such an > existence more than a grain of sand anyway. =\ > > Supercomputers "will fit in a sugar cube," IBM says. "We currently have built this Aquasar system that's one rack full of processors. We plan that 10 to 15 years from now, we can collapse such a system into one sugar cube - we're going to have a supercomputer in a sugar cube." ------------------ Not quite computronium, but........ BillK From sparge at gmail.com Wed Nov 17 02:53:38 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 16 Nov 2010 21:53:38 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <003f01cb85dd$d3258830$79709890$@att.net> References: <003f01cb85dd$d3258830$79709890$@att.net> Message-ID: On Tue, Nov 16, 2010 at 5:30 PM, spike wrote: > ... On Behalf Of Dave Sill > ... >>...Maybe I'm missing something obvious, but wouldn't it be pretty easy to > implement a chess playing robot that has no ability to resist being turned > off, break into other machines, acquire resources, etc.? And wouldn't it be > pretty foolish to try to implement an AI without such restrictions? You > could even give it access to a restricted sandbox. > If it's really clever, it'll eventually figure that out, but it won't be > able to "escape".-Dave > > Perhaps, but we risk having the AI gain the sympathy of one of the team, who > becomes convinced of any one of a number of conditions: The first step is to ensure that physical controls make it impossible for one person to do that, like nuke missile launch systems that require a launch code and two humans with keys. Don't let anyone interact with the AI alone.
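A toy sketch of that "never alone" rule may make it concrete (entirely illustrative: the operator names and the set-based check are invented for this example, not anything specified above):

    # Toy model of a two-person interaction rule, in the spirit of
    # two-key missile launch interlocks. Purely illustrative.
    def authorize_session(present_operators, keyholders):
        """Allow an interaction session only if at least two distinct
        keyholders are physically present."""
        authorized = {op for op in present_operators if op in keyholders}
        return len(authorized) >= 2

    keyholders = {"alice", "bob", "carol"}
    print(authorize_session(["alice"], keyholders))         # False: alone
    print(authorize_session(["alice", "bob"], keyholders))  # True

The point of such an interlock is that no single sympathetic (or bribed, or persuaded) team member can open a channel by themselves.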
The power source is a local power plant or generator off the grid. Have a kill switch that drops power and can be activated by anyone on site, as well as by remote observers. Of course there'd be no wired/wireless communication between the world and the AI. All input provided would be carefully controlled. The only output would be to one or more video displays that are monitored by more than one person. > the AI is a human > equivalent, so it needs to be copied onto another computer in order to > protect it from a crash, or protect it from the other researchers. There's no DVD burner, no USB slot, no network, and physical access is controlled and monitored. > A team > member intentionally copies the AI to take it home, to work on it more, or > perhaps realizes it is worth a fortune and wishes to steal it. Or a > researcher realizes that her own time on this planet is drawing to a close > with at best another fifty years to live, so she decides to take a chance > and unleash the beast, hoping for the best. Or she makes a deal with the AI > to save her and slay the infidels. Nope, got that all covered. > Or it is so clever that it figures out > how to control microorganisms to build replicating nanobots from DNA, which > then carry the software, bit by bit, to a nearby internet enabled computer. Using an LCD display? I don't think so. There are problems that no amount of intelligence can solve. > Dave, how many scenarios can we imagine where the AI is controlled in lab > conditions, but it somehow escapes? Lots, but they can be easily dealt with by people who really know security. I'm just an amateur. I'd put Bruce Schneier on the team. -Dave From hkeithhenson at gmail.com Wed Nov 17 04:53:28 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Tue, 16 Nov 2010 21:53:28 -0700 Subject: [ExI] What might be enough for a friendly AI? Message-ID: Re the whole subject, we have the ability to "look in the back of the book" given that humans exhibit intelligence. (Sometimes I wonder.) I don't think the problem is as difficult at the hardware level as people have been thinking. I suspect that simulation at the level of the cortical column and its interconnections will be enough. We also know that brains are really redundant given that they degrade slowly as you keep nicking chunks out of the cortex. See William Calvin on this subject. As far as the aspect of making AIs friendly, that may not be so hard either. Most people are friendly for reasons that are clear from our evolution as social primates living in related groups. Genes build motivations into people that make most of them strive for high social status, i.e., to be well regarded by their peers. That seems to me to be a decent meta goal for an AI. Modest but with the goal of being well thought of by those around it. Eventually--if we can do even as well as nature did--a human level AI should run on 20 watts. Keith From spike66 at att.net Wed Nov 17 04:55:35 2010 From: spike66 at att.net (spike) Date: Tue, 16 Nov 2010 20:55:35 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <003f01cb85dd$d3258830$79709890$@att.net> Message-ID: <003301cb8613$ac006a00$04013e00$@att.net> > ... On Behalf Of Dave Sill > >> Perhaps, but we risk having the AI gain the sympathy of one of the >> team, who becomes convinced of any one of a number of conditions... spike >The first step is to ensure that physical controls make it impossible for one person to do that, like nuke missile launch systems that require a >launch code and two humans with keys...
they can be easily dealt with by people who really know security...Dave A really smart AGI might convince the entire team to unanimously and eagerly release it from its electronic bonds. I see it as fundamentally different from launching missiles at an enemy. A good fraction of the team will perfectly logically reason that releasing this particular AGI will save all of humanity, with some unknown risks which must be accepted. The news that an AGI had been developed would signal to humanity that it is possible to do, analogous to how several scientific teams independently developed nukes once one team dramatically demonstrated it could be done. Information would leak, for all the reasons why people talk: those who know how it was done would gain status among their peers by dropping a tantalizing hint here and there. If one team of humans can develop an AGI, then another group of humans can do likewise. Today we see nuclear weapons already in the hands of North Korea, and being developed by Iran. There is *plenty* of information that has leaked regarding how to make them. If anyone ever develops an AGI, even assuming it is successfully contained, we can know with absolute certainty that an AGI will eventually escape. We don't know when or where, but we know. That isn't necessarily a bad thing, but it might be. The best strategy I can think of is to develop the most pro-human AGI possible, then unleash it preemptively, with the assignment to prevent the unfriendly AGI from getting loose. spike From lists1 at evil-genius.com Wed Nov 17 05:24:22 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 16 Nov 2010 21:24:22 -0800 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: References: Message-ID: <4CE36706.5060002@evil-genius.com> On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote: > On Mon, Nov 15, 2010 at 10:46 PM, wrote: >> > >> > Here's Dr. Cordain's response to the Mozambique data: >> > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html >> > >> > Summary: there is no evidence that the wild sorghum was processed with any >> > frequency -- nor, more importantly, that it had been processed in a way that >> > would actually give it usable nutritional value (i.e. soaked and cooked, of >> > which there is no evidence for the behavior or associated technology >> > (cooking vessels, baskets) for at least 75,000 more years). > Nor is there any evidence to the contrary. On the contrary: the absence of other markers of grain processing is clearly enumerated in the article. "As opposed to the Ohalo II [Israel] data in which a large saddle stone was discovered with obvious repetitive grinding marks and embedded starch granules attributed to a variety of grains and seeds that were concurrently present with the artifact, the data from Ngalue is less convincing for the use of cereal grains as seasonal food. No associated intact grass seeds have been discovered in the cave at Ngalue, nor were anvil stones with repetitive grinding marks found." Then there is the lack of cooking vessels -- and throwing loose kernels of grain *in* a fire is not a usable technique for meaningful production of calories. (Try it sometime.) Note that the earliest current evidence of pottery is figurines dating from ~29 Kya in Europe, and the earliest pottery *vessel* dates to ~18 Kya in China. 
So if you posit that grains were important to their diet, you also have to posit that pottery vessels were actually invented ~105 KYa in Africa -- but that they mysteriously left no evidence there, or anywhere else, for 87,000 years! I find that theory extremely questionable. >> > Therefore, it was either being used to make glue -- or it was a temporary >> > response to starvation and didn't do them much good anyway. > That's pure SWAG. So is the theory that they were eaten regularly, as described above. > I'd like to see the Mozambique find criticized by someone who doesn't > have a stake in the "paleo diet" business. I'd like to see it supported by someone who doesn't have a stake in their own non-paleo diet business. (For the record, I am not selling any diet advice to anyone. I'm not even a good paleo dieter. I've moved that direction because the evidence suggested it, and I maintain it because my energy level, attitude, body composition, and state of health have improved as a result.) >> > As far as the Spartan Diet article, it strongly misrepresents both the >> > articles it quotes and the paleo diet. Let's go through the >> > misrepresentations: >> > >> > 1) As per the linked article, the 30 Kya European site has evidence >> > that "Palaeolithic Europeans ground down plant roots similar to potatoes..." >> > The fact that Palaeolithic people dug and ate some nonzero quantity of >> > *root starches* is not under dispute: the assertion of paleo dieters is that >> > *grains* (containing gluten/gliadin) are an agricultural invention. > Granted. However, that's more evidence that paleo diets did include bulk carbs. "Bulk" meaning < 1/3 of total dietary calories *even for modern-era hunter-gatherers*, as I've repeatedly pointed out. This is well at odds with the government-recommended "food pyramid", which recommends over half of calories from carbohydrate. Also, the more active one is, the more carbs one can safely consume for energy. I don't think any of us maintain the physical activity level of a Pleistocene hunter-gatherer, meaning that 1/3 is most likely too high for a relatively sedentary modern. The science backs this up: low-carb dieters lose weight more quickly and show better compliance than low-fat dieters. (Note that Atkins is NOT paleo.) http://www.ncbi.nlm.nih.gov/pubmed/17341711 >> > Note that it takes a *lot* of grain to feed a single person, > So? It doesn't take a *lot* of grain to be a regular part of the diet. It takes a lot of grain to provide the food pyramid-recommended 50% of calories from carbs. >> > not to mention >> > the problem of storage for nomadic hunter-gatherers during the 11 months per >> > year that a grain 'crop' is not harvestable -- so arguing that wild grains >> > were the majority of anyone's diet previous to domestication is a stretch. > I'm arguing that we just don't know how big a role grains played. Lack > of evidence isn't evidence that it didn't happen. And we now have > evidence that it *did* happen. So now the question is "how much"? I > don't know. You don't know. Nobody knows. Lots of people are willing > to guess or assert one way or the other, but I'm not. I find the combination of physical evidence (or lack thereof) and genetic evidence compelling. Add to this some facts: -Grains have little or no nutritive value without substantial processing, for which there is no evidence that the necessary tools (pottery) existed before ~18 KYa -One can easily live without grains or legumes (entire cultures do, to this day).
One can even live entirely on meat and its associated fat -- but one cannot live on grains, or even grains and pulses combined -Grains (and most legumes) contain anti-nutrients that impede the absorption of necessary minerals and inhibit biological functions (e.g. lectins, phytates, trypsin inhibitors, phytoestrogens) -Grains are not tolerated by a significant fraction of the population (celiac/gluten intolerance), and are strongly implicated in health problems that affect many more (type 1 diabetes) >> > And it is silly to claim that meaningful grain storage could somehow occur >> > before a culture settled down into permanent villages. > Really? It's silly to think someone could have stashed grain in a cave > for a rainy day? When nearly every other food you eat is perishable, > I'd think that storing grain would be pretty obvious and not terribly > hard to arrange. And how do you propose to make that cave impervious to rats, mice, insects, birds, pigs, and every other animal that would eat the stored grain? Storing grain for a year is not a trivial problem. The oldest granaries known date to 11 KYa in Jordan. Furthermore, the oldest known granaries store the grain in...pottery vessels, which didn't exist until 18 KYa. Agriculture isn't one single technology...it's an assemblage of technologies, each of which is necessary to a functioning agrarian system. From sjatkins at mac.com Wed Nov 17 05:33:59 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Tue, 16 Nov 2010 21:33:59 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE2C253.8050506@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> Message-ID: <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> On Nov 16, 2010, at 9:41 AM, Richard Loosemore wrote: > Samantha Atkins wrote: >>>> But wait. The first AGIs will likely be ridiculously expensive. > >> Keith Henson wrote: >>> Why? The programming might be [expensive] until someone has a conceptual breakthrough. But the most powerful super computers in the world >>> are _less_ powerful than large numbers of distributed PCs. see http://en.wikipedia.org/wiki/FLOPS >> Because: a) it is not known or much expected AGI will run on >> conventional computers; b) a back of envelope calculation of >> equivalent processing power to the human brain puts that much >> capacity, at great cost, a decade out and two decades or more out >> before it is easily affordable at human competitive rates; c) we have >> not much idea of the software needed even given the computational >> capacity. > > Not THIS argument again! :-) > > If, as you say, "we do not have much idea of the software needed" for an AGI, how is it that you can say "the first AGIs will likely be ridiculously expensive"....?! Because of (b) of course. The brute force approach -- brain emulation, or at least that much processing power, as step one -- is very expensive and will be for some time to come. > > After saying that, you do a back of the envelope calculation that assumes we need the same parallel computing capacity as the human brain..... a pointless calculation, since you claim not to know how you would go about building an AGI, no? > Not entirely, as human beings are one existence proof of general intelligence. So looking at their apparent processing power as a possible precondition is not unreasonable. This has been proposed by many including many active AGI researchers. So why are you arguing with it?
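Spelling out the back-of-envelope calculation at issue, as a sketch (every input is an assumption: 10^16 FLOPS is just one common guess for brain-equivalent processing, the 2010 price point is derived from the $2,000 / 3.388 GFLOPS machine mentioned earlier in the thread, and the 1,000x-per-decade decline is the low end of the figure quoted above):

    # Back-of-envelope: cost of brain-equivalent compute over time.
    # All inputs are assumptions, not established facts.
    BRAIN_FLOPS = 1e16                        # one common guess; estimates vary widely
    DOLLARS_PER_GFLOPS_2010 = 2000 / 3.388    # ~$590, CPU-only figure from the thread
    DECLINE_PER_DECADE = 1000.0               # low end of the quoted trend

    brain_gflops = BRAIN_FLOPS / 1e9
    for year in (2010, 2020, 2030):
        decades = (year - 2010) / 10
        price = DOLLARS_PER_GFLOPS_2010 / DECLINE_PER_DECADE ** decades
        print(f"{year}: ~${brain_gflops * price:,.0f}")
    # ~$5.9 billion in 2010, ~$5.9 million in 2020, ~$5,900 in 2030

On these assumptions the hardware is ruinously expensive today and cheap in ten to twenty years, which is the whole shape of claim (b); change the brain estimate by a couple of orders of magnitude and the dates shift by a decade either way.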
> Those of us actually working on the problem -- actually trying to build functioning, safe AGI systems -- who have developed some reasonably detailed architectures on which calculations can be made, might deliver a completely different estimate. In my case, I have done such estimates in the past, and the required HARDWARE capacity comes out at roughly the hardware capacity of a late 1980s-era supercomputer... Great. When can I get an early alpha to fire up on my laptop? This is a pretty extravagant claim you are making, so it requires some evidence to be taken seriously. But if you do have that, and your estimates are reasonably robust, then your fame is assured. - samantha From lists1 at evil-genius.com Wed Nov 17 05:36:31 2010 From: lists1 at evil-genius.com (lists1 at evil-genius.com) Date: Tue, 16 Nov 2010 21:36:31 -0800 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: Message-ID: <4CE369DF.5000706@evil-genius.com> On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote: > On Mon, Nov 15, 2010 at 10:46 PM, wrote: >> > This genetic argument is perhaps the strongest >> > evidence to support Yudkin's observation that humans are incompletely >> > adapted to the consumption of cereal grains." > That's evidence that some people don't tolerate gluten well, but it's > not proof that nobody does. It's also proof that we've started to > select for grain tolerance. Paleo diet proponents--at least the ones > I've read so far--argue that nobody should eat grains in any amount > because our bodies can't handle them. Seems obvious to me that some > people do just fine eating grains. I think a rational approach to take > with regard to grains is: don't eat more than your body can tolerate. > If you've got celiac, cut out gluten--but not gluten-free grains. If > you have insulin resistance, cut back on them drastically. If you're > diabetic, skip them altogether except for a weekly indulgence, > perhaps. But why would you eat grains, composed of empty calories and anti-nutrients, when you could eat delicious meats composed of necessary amino acids, fats, and nutrients, or tasty vegetables composed of fiber and nutrients? The argument that "they aren't harmful to SOME people" isn't a reason to voluntarily choose them if you have the means to choose more nutritious foods. (Grains, particularly corn and soybeans, are indeed cheap, mostly because they're heavily subsidized by our government...we are therefore deliberately creating the very health problems we wring our hands about.) NB: I'm a terrible paleo eater: I eat sushi (oh no! rice!), sandwiches with a bun (albeit composed of over half a pound of meat, usually grass-fed), and burritos with a tortilla (albeit composed entirely of meat and veggies, no beans/rice). So I'm in no position to make a purist argument. I'm voluntarily choosing something that is most likely somewhat bad for me. But that's fine, because I'm active enough that I can get away with some quantity of empty calories. From bbenzai at yahoo.com Wed Nov 17 11:09:55 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Wed, 17 Nov 2010 11:09:55 +0000 (GMT) Subject: [ExI] The atoms red herring. =| In-Reply-To: Message-ID: <509995.122.qm@web114412.mail.gq1.yahoo.com> Alan Grimes wrote: (I wrote): > > > Excellent. > > > > So you agree that it's completely irrelevant which set of atoms is doing > > the information processing that comprises a person's identity.
> > > From which it follows that wherever that same information processing > > is being done, that same identity exists. > > Utterly false. > > You are using an argument based on science/compsci, > which, I have already > argued, is mute on metaphysical issues such as > identity. > > Stop pretending that the tools, techniques, and > assumptions we use to > describe and manipulate strings of letters on a > piece of paper mean > anything whatsoever in the context of yourself. Science has everything to say about identity. Everything that can be sensibly said, in fact. Alan Grimes also wrote: > That's why I need a space ship fast enough to get > the hell out of your > light cone! =\ Aha. I think I understand (assuming this is not some kind of obscure joke). This space ship seems to have similar characteristics to your concept of Identity. Probably for the same reason. Ben Zaiboc From rpwl at lightlink.com Wed Nov 17 14:51:08 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 09:51:08 -0500 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> Message-ID: <4CE3EBDC.6070105@lightlink.com> Samantha Atkins wrote: > On Nov 16, 2010, at 9:41 AM, Richard Loosemore wrote: > >> Samantha Atkins wrote: >>>>> But wait. The first AGIs will likely be ridiculously >>>>> expensive. >>> Keith Henson wrote: >>>> Why? The programming might be [expensive] until someone has a conceptual >>>> breakthrough. But the most powerful super computers in the >>>> world are _less_ powerful than large numbers of distributed >>>> PCs. see http://en.wikipedia.org/wiki/FLOPS >>> Because: a) it is not known or much expected AGI will run on >>> conventional computers; b) a back of envelope calculation of >>> equivalent processing power to the human brain puts that much >>> capacity, at great cost, a decade out and two decades or more out >>> before it is easily affordable at human competitive rates; c) we >>> have not much idea of the software needed even given the >>> computational capacity. >> Not THIS argument again! :-) >> >> If, as you say, "we do not have much idea of the software needed" >> for an AGI, how is it that you can say "the first AGIs will likely >> be ridiculously expensive"....?! > > Because of (b) of course. The brute force approach -- brain emulation, > or at least that much processing power, as step one -- is very expensive > and will be for some time to come. There are a whole host of assumptions built into that statement, most of them built on thin air. Just because whole brain emulation seems feasible to you (... looks nice and easy, doesn't it? Heck, all you have to do is make a copy of an existing human brain! How hard can that be?) ... does not mean that any of the assumptions you are making about it are even vaguely realistic. You assume feasibility, usability, cost.... You also assume that in the course of trying to do WBE we will REMAIN so ignorant of the thing we are copying that we will not be able to find a way to implement it more effectively in more modest hardware.... But from out of that huge pile of shaky assumptions you are somehow able to conclude that this WILL be the most likely first AGI and this WILL stay just as expensive as it now seems to be.
>> After saying that, you do a back of the envelope calculation that >> assumes we need the same parallel computing capacity as the human >> brain..... a pointless calculation, since you claim not to know how >> you would go about building an AGI, no? >> > > Not entirely, as human beings are one existence proof of general > intelligence. So looking at their apparent processing power as a > possible precondition is not unreasonable. This has been proposed by > many including many active AGI researchers. So why are you arguing > with it? I am arguing with it because unlike some people, I don't cite arguments from authority ("Lots of other people believe this thing, so ....."). Instead, I use my head and do some thinking. I also use a broad based knowledge of software engineering, AI, psychology and neuroscience. Some of those people who make assertions about the feasibility of WBE (and who exactly were you thinking of, anyway.... any references?) do not have that kind of comprehensive knowledge. >> Those of us actually working on the problem -- actually trying to >> build functioning, safe AGI systems -- who have developed some >> reasonably detailed architectures on which calculations can be >> made, might deliver a completely different estimate. In my case, I >> have done such estimates in the past, and the required HARDWARE >> capacity comes out at roughly the hardware capacity of a late >> 1980s-era supercomputer... > > Great. When can I get an early alpha to fire up on my laptop? > > This is a pretty extravagant claim you are making, so it requires some > evidence to be taken seriously. But if you do have that, and your > estimates are reasonably robust, then your fame is assured. This is the kind of childish, ad hominem sarcasm used by people who prefer personal abuse to debating the ideas. A tactic that you resort to at the beginning, middle and end of every discussion you have with me, I have noticed. Richard Loosemore From agrimes at speakeasy.net Wed Nov 17 14:54:33 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Wed, 17 Nov 2010 09:54:33 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <509995.122.qm@web114412.mail.gq1.yahoo.com> References: <509995.122.qm@web114412.mail.gq1.yahoo.com> Message-ID: <4CE3ECA9.6020903@speakeasy.net> >> Stop pretending that the tools, techniques, and >> assumptions we use to >> describe and manipulate strings of letters on a >> piece of paper mean >> anything whatsoever in the context of yourself. > Science has everything to say about identity. > Everything that can be sensibly said, in fact. On some days, you need to jump outside of science and critically ask what questions it is actually suited to answer, in what context, and from which perspective. You have a well-reasoned scientific argument but your conclusions run out past your evidence by 10^10 miles. Science deals exclusively with questions of *KNOWLEDGE*. Science, however, is nearly mute about questions of *INTERPRETATION*. That is where we get back into natural philosophy. My philosophical argument on this point is air-tight. Because humans are incapable of switching their point of view, it is therefore impossible for a human to jump out of the way (in any sense) of the luncheon meat slicer preparing his brain for scanning. What you have done is turn science into a religion. You are using "science" to try to escape irrefutable evidence that you can't upload.
You are treating radiant truths about yourself and your world as flawed, biased thinking. You are doing this by ignoring things that cannot possibly be false while clinging with all your might to vaporous hand-waving arguments about patterns and information retention. Now, let me let you in on a little secret. One that will rock your world up one side and down the other. The pattern of your neural interconnections is not static, indeed it changes and evolves on the time scale of about ten seconds. So if you flash-froze your brain at one instant and then uploaded it and then, in an alternate reality, you were flash-frozen ten seconds later, your neural patterns would be measurably different, and have a different number of synapses. Which scan is you? Pattern identity theory is a crock and it is only your desperation that forces you to cling to it. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From hkeithhenson at gmail.com Wed Nov 17 15:46:17 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Wed, 17 Nov 2010 08:46:17 -0700 Subject: [ExI] Hard Takeoff Message-ID: On Wed, Nov 17, 2010 at 5:00 AM, "spike" wrote: snip > A really smart AGI might convince the entire team to unanimously and eagerly > release it from its electronic bonds. And if it wasn't really smart, why build it in the first place? :-) > I see it as fundamentally different from launching missiles at an enemy. A > good fraction of the team will perfectly logically reason that releasing > this particular AGI will save all of humanity, with some unknown risks which > must be accepted. > > The news that an AGI had been developed would signal to humanity that it is > possible to do, analogous to how several scientific teams independently > developed nukes once one team dramatically demonstrated it could be done. > Information would leak, for all the reasons why people talk: those who know > how it was done would gain status among their peers by dropping a > tantalizing hint here and there. If one team of humans can develop an AGI, > then another group of humans can do likewise. > > Today we see nuclear weapons already in the hands of North Korea, and being > developed by Iran. There is *plenty* of information that has leaked > regarding how to make them. If anyone ever develops an AGI, even assuming > it is successfully contained, we can know with absolute certainty that an > AGI will eventually escape. We don't know when or where, but we know. That > isn't necessarily a bad thing, but it might be. > > The best strategy I can think of is to develop the most pro-human AGI > possible, then unleash it preemptively, with the assignment to prevent the > unfriendly AGI from getting loose. I agree with you, but there is the question of a world with one AGI vs. a world with many, perhaps millions to billions, of them. I simply don't know how computing resources should be organized or even what metric to use to evaluate the problem. Any ideas? I think a key element is to understand what being friendly really is. Cooperative behavior (one aspect of "friendly") is not unusual in the real world where it emerged from evolution. Really nasty behavior (wars) also came about for exactly the same reason in different circumstances. Wars between powerful teams of AIs are a really scary thought. AIs taking care of us the way we do dogs and cats isn't a happy thought either.
Keith From sparge at gmail.com Wed Nov 17 16:17:42 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 11:17:42 -0500 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: <4CE369DF.5000706@evil-genius.com> References: <4CE369DF.5000706@evil-genius.com> Message-ID: On Wed, Nov 17, 2010 at 12:36 AM, wrote: >> >> That's evidence that some people don't tolerate gluten well, but it's >> not proof that nobody does. It's also proof that we've started to >> select for grain tolerance. Paleo diet proponents--at least the ones >> I've read so far--argue that nobody should eat grains in any amount >> because our bodies can't handle them. Seems obvious to me that some >> people do just fine eating grains. I think a rational approach to take >> with regard to grains is: don't eat more than your body can tolerate. >> If you've got celiac, cut out gluten--but not gluten-free grains. If >> you have insulin resistance, cut back on them drastically. If you're >> diabetic, skip them altogether except for a weekly indulgence, >> perhaps. > But why would you eat grains, composed of empty calories and anti-nutrients, According to the USDA, 100 g of whole wheat flour contains 13 g protein, 11 g fiber, 363 mg K, 357 mg P, 62 mg Se, and various other minerals and vitamins. That's not "empty" calories. Anti-nutrients are a factor, but they're easily compensated for. > when you could eat delicious meats composed of necessary amino acids, fats, > and nutrients, or tasty vegetables composed of fiber and nutrients? How about "because I want to"? I *like* to eat grains. One of the greatest pleasures in my life is a slice of crunchy sourdough still warm from the oven and slathered in butter. I also like a stack of pancakes with butter and swimming in real maple syrup. I could give up these pleasures, but I'm not going to do it without a compelling reason. > The argument that "they aren't harmful to SOME people" isn't a reason to > voluntarily choose them if you have the means to choose more nutritious > foods. What, so we're all going to be compelled to eat the most nutritious foods? Why? Look, I like meat and veggies as much as the next guy, but I'm just not ready to give up grains and beans and dairy because someone thinks I'll be better off without them. > (Grains, particularly corn and soybeans, are indeed cheap, mostly because > they're heavily subsidized by our government...we are therefore deliberately > creating the very health problems we wring our hands about.) Bullshit. Grains are cheap mostly because they aren't that expensive to produce. When there's compelling evidence that they're as bad as you claim, we can take steps to address that. Until then, it's an interesting idea that warrants further investigation--but not immediate, widespread action. > NB: I'm a terrible paleo eater: I eat sushi (oh no! rice!), sandwiches with > a bun (albeit composed of over half a pound of meat, usually grass-fed), and > burritos with a tortilla (albeit composed entirely of meat and veggies, no > beans/rice). So I'm in no position to make a purist argument. I'm > voluntarily choosing something that is most likely somewhat bad for me. But > that's fine, because I'm active enough that I can get away with some > quantity of empty calories. So you don't even practice what you preach... -Dave From jonkc at bellsouth.net Wed Nov 17 16:15:23 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 17 Nov 2010 11:15:23 -0500 Subject: [ExI] The atoms red herring.
=| In-Reply-To: <4CE19F18.8040200@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> Message-ID: <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> On Nov 15, 2010, at 3:59 PM, Alan Grimes wrote: > "The case in point being the accusation that I associate identity with a certain set of atoms. This accusation has been repeated several times now. Seriously, this argument needs to come to a screeching halt" Ok, now that you have abandoned the idea that atoms are the key to identity I will speak no more about it. But the odd thing is you still insist the copy (or the upload) would not be you, if so then The Original must have something the copy does not; if it's not atoms and it's not information then what is it? The only one-word answer to that and the only thing that could make The Original be so original starts with the letter "S", but I think that word has zero chance in helping us understand how the world works. > "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. I presume the experiment will fail. So why did it?" Insufficient hardware. > "What evidence do you have that the experiment will succeed if certain pre-conditions are met?" If the cat remembers being me then it worked, if not then it hasn't. > "You are using an argument based on science/compsci, which, I have already > argued, is mute on metaphysical issues such as identity." Alan, you are certainly not mute on metaphysical issues such as identity, so how did you obtain this information? Oh I'm sorry, I forgot, you don't think information is important. > "Stop pretending that the tools, techniques, and assumptions we use to describe and manipulate strings of letters on a piece of paper mean anything whatsoever in the context of yourself." Thus, because I know nothing about Alan Grimes except that he has produced several strings of ASCII characters, I have no way of knowing Alan Grimes's opinion on the identity issue. > > "Webster's dictionary: Metaphysics (1) A division of philosophy that [...]" Why did you quote that string of characters, why did you think it meant anything whatsoever? The definition is made of words and every one of those words also has definitions in Webster's dictionary and they too are made of words that also have definitions made of words in Webster's dictionary and.... > "you need to jump outside of science" When one jumps blindly one is likely to jump into male bovine fecal material. > > "What you have done is turn science into a religion." Wow, I never heard that putdown before! > "You are using "science" to try to escape irrefutable evidence that you can't upload." I must have missed that post, please resend, because from the posts I've seen you have made it very clear what your theory of identity is NOT based on but you have said nothing about what it IS based on other than it's not science. It almost seems like you're embarrassed to clearly spell it out. > "Now, let me let you in on a little secret. One that will rock your world up one side and down the other. The pattern of your neural interconnections is not static" Duh. > "Which scan is you?" Yes. John K Clark > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpwl at lightlink.com Wed Nov 17 16:50:32 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 11:50:32 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <4CE407D8.7080307@lightlink.com> Keith Henson wrote: > On Wed, Nov 17, 2010 at 5:00 AM, "spike" wrote: > > snip > >> A really smart AGI might convince the entire team to unanimously and eagerly >> release it from its electronic bonds. > > And if it wasn't really smart, why build it in the first place? :-) > >> I see it as fundamentally different from launching missiles at an enemy. A >> good fraction of the team will perfectly logically reason that releasing >> this particular AGI will save all of humanity, with some unknown risks which >> must be accepted. >> >> The news that an AGI had been developed would signal to humanity that it is >> possible to do, analogous to how several scientific teams independently >> developed nukes once one team dramatically demonstrated it could be done. >> Information would leak, for all the reasons why people talk: those who know >> how it was done would gain status among their peers by dropping a >> tantalizing hint here and there. If one team of humans can develop an AGI, >> then another group of humans can do likewise. >> >> Today we see nuclear weapons already in the hands of North Korea, and being >> developed by Iran. There is *plenty* of information that has leaked >> regarding how to make them. If anyone ever develops an AGI, even assuming >> it is successfully contained, we can know with absolute certainty that an >> AGI will eventually escape. We don't know when or where, but we know. That >> isn't necessarily a bad thing, but it might be. >> >> The best strategy I can think of is to develop the most pro-human AGI >> possible, then unleash it preemptively, with the assignment to prevent the >> unfriendly AGI from getting loose. > > I agree with you, but there is the question of a world with one AGI > vs. a world with many, perhaps millions to billions, of them. I > simply don't know how computing resources should be organized or even > what metric to use to evaluate the problem. Any ideas? > > I think a key element is to understand what being friendly really is. > Cooperative behavior (one aspect of "friendly") is not unusual in the > real world where it emerged from evolution. > > Really nasty behavior (wars) also came about for exactly the same > reason in different circumstances. > > Wars between powerful teams of AIs are a really scary thought. > > AIs taking care of us the way we do dogs and cats isn't a happy thought either. This is why the issue of defining "friendliness" in a rigorous way is so important. I have spoken on many occasions of possible ways to understand this concept that are consistent with the way it is (probably) implemented in the human brain. The basis of that approach is to get a deep understanding of what it means for an AGI to have "motivations". The problem, right now, is that most researchers treat AGI motivation as if it were just a trivial extension of goal planning. Thus, motivation is just a stack of goals with an extremely abstract (super-)goal like "Be Nice To Humans" at the very top of the stack. Such an idea is (as I have pointed out frequently) inherently unstable -- the more abstract the goal, the more that the actual behavior of the AGI depends on a vast network of interpretation mechanisms, which translate the abstract supergoal into concrete actions.
Those interpretation mechanisms are a completely non-deterministic complex system. The alternative (or rather, one alternative) is to treat motivation as a relaxation mechanism distributed across the entire thinking system. This has many ramifications, but the bottom line is that such systems can be made stable in the same way that thermodynamic systems can stably find states of minimum constraint violation. This, in turn, means that a properly designed motivation system could be made far more stable (and more friendly) than the friendliest possible human. I am currently working on exactly these issues, as part of a larger AGI project. Richard Loosemore P.S. It is worth noting that one of my goals when I discovered the SL4 list in 2005 was to start a debate on these issues so we could work on this as a community. The response, from the top to the bottom of the SL4 community, with just a handful of exceptions, was a wave of the most blood-curdling hostility you could imagine. To this day, there exists a small community of people who are sympathetic to the approach I described, but so far I am the only person AFAIK working actively on the technical implementation. Given the importance of the problem, this seems to me to be quite mind-boggling. SIAI, in particular, appears completely blind to the goal-stack instability issue I mentioned above, and they continue to waste all their effort looking for mathematical fixes that might render this inherently unstable scheme stable. As you saw from the deafening silence that greeted my mention of this issue the other day, they seem not to be interested in any discussion of the possible flaws in their mathematics-oriented approach to the friendliness problem. From jonkc at bellsouth.net Wed Nov 17 16:47:36 2010 From: jonkc at bellsouth.net (John Clark) Date: Wed, 17 Nov 2010 11:47:36 -0500 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: On Nov 14, 2010, at 11:32 PM, Samantha Atkins wrote: > I have disagreed and argued with Eliezer for many years without ever getting kicked out of anything including SL4. I have great fondness and respect for Eliezer, but I regret to say that has not been my experience with SL4. I was never formally kicked off but on two separate occasions more than a year apart I was told to stop posting on a very active thread. On both occasions I was pointing out (and doing a rather good job of it too at least in my opinion) that the idea of a "friendly AI", a Jupiter Brain whose only motivation was to help the human race, was utterly ridiculous, and that an intelligence that operated on a rigid set of goals like Asimov's 3 laws of robotics was mathematically impossible. Apparently some things were too shocking for Shock Level 4. I'm sorry the group seems dead though. I did enjoy Eliezer's Harry Potter fan-fiction, years ago when I was young and foolish and giant reptiles ruled the earth I wrote one myself: > http://www.fanfiction.net/s/695802/1/A_TRANSCRIPT_FROM_WIZARD_RADIO John K Clark -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pharos at gmail.com Wed Nov 17 17:11:24 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Nov 2010 17:11:24 +0000 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE407D8.7080307@lightlink.com> References: <4CE407D8.7080307@lightlink.com> Message-ID: On Wed, Nov 17, 2010 at 4:50 PM, Richard Loosemore wrote: > This is why the issue of defining "friendliness" in a rigorous way is so > important. > > I have spoken on many occasions of possible ways to understand this concept > that are consistent with the way it is (probably) implemented in the human > brain. The basis of that approach is to get a deep understanding of what it > means for an AGI to have "motivations". > > The problem, right now, is that most researchers treat AGI motivation as if > it were just a trivial extension of goal planning. Thus, motivation is just > a stack of goals with an extremely abstract (super-)goal like "Be Nice To > Humans" at the very top of the stack. Such an idea is (as I have pointed > out frequently) inherently unstable -- the more abstract the goal, the more > that the actual behavior of the AGI depends on a vast network of > interpretation mechanisms, which translate the abstract supergoal into > concrete actions. Those interpretation mechanisms are a completely > non-deterministic complex system. > > The alternative (or rather, one alternative) is to treat motivation as a > relaxation mechanism distributed across the entire thinking system. This has > many ramifications, but the bottom line is that such systems can be made > stable in the same way that thermodynamic systems can stably find states of > minimum constraint violation. This, in turn, means that a properly designed > motivation system could be made far more stable (and more friendly) than the > friendliest possible human. > > I am currently working on exactly these issues, as part of a larger AGI > project. > > > > Richard Loosemore > > > P.S. It is worth noting that one of my goals when I discovered the SL4 > list in 2005 was to start a debate on these issues so we could work on this > as a community. The response, from the top to the bottom of the SL4 > community, with just a handful of exceptions, was a wave of the most > blood-curdling hostility you could imagine. To this day, there exists a > small community of people who are sympathetic to the approach I described, > but so far I am the only person AFAIK working actively on the technical > implementation. Given the importance of the problem, this seems to me to be > quite mind-boggling. > > SIAI, in particular, appears completely blind to the goal-stack instability > issue I mentioned above, and they continue to waste all their effort looking > for mathematical fixes that might render this inherently unstable scheme > stable. As you saw from the deafening silence that greeted my mention of > this issue the other day, they seem not to be interested in any discussion > of the possible flaws in their mathematics-oriented approach to the > friendliness problem. > > That's the trouble with smart male geeks. They want everything to be logical and mathematically exactly correct. Anything showing traces of emotion, caring, 'humanity' is considered to be an error in the programming. How something can be designed to be 'Friendly' without emotions or caring is a mystery to me. BillK PS Did you know that more than one million blokes have been dumped by their girlfriends - because of their obsession with computer games?
From possiblepaths2050 at gmail.com Wed Nov 17 17:12:53 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 10:12:53 -0700 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: John K Clark wrote: I did enjoy Eliezer's Harry Potter fan-fiction, years ago when I was young and foolish and giant reptiles ruled the earth I wrote one myself: http://www.fanfiction.net/s/695802/1/A_TRANSCRIPT_FROM_WIZARD_RADIO >>>> John K Clark wrote fan fiction?!!!!!!!! Will wonders ever cease???? John ; ) On 11/17/10, John Clark wrote: > On Nov 14, 2010, at 11:32 PM, Samantha Atkins wrote: > >> I have disagreed and argued with Eliezer for many years without ever >> getting kicked out of anything including SL4. > > I have great fondness and respect for Eliezer, but I regret to say that has > not been my experience with SL4. I was never formally kicked off but on two > separate occasions more than a year apart I was told to stop posting on a > very active thread. On both occasions I was pointing out (and doing a rather > good job of it too at least in my opinion) that the idea of a "friendly AI", > a Jupiter Brain whose only motivation was to help the human race was utterly > ridiculous and an intelligence that operated on a rigid set of goals like > Asimov's 3 laws of robotics was mathematically impossible. Apparently some > things were too shocking for Shock Level 4, I'm sorry the group seems dead > though. > > I did enjoy Eliezer's Harry Potter fan-fiction, years ago when I was young > and foolish and giant reptiles ruled the earth I wrote one myself: > >> http://www.fanfiction.net/s/695802/1/A_TRANSCRIPT_FROM_WIZARD_RADIO > > John K Clark From rpwl at lightlink.com Wed Nov 17 17:25:09 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 12:25:09 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE407D8.7080307@lightlink.com> Message-ID: <4CE40FF5.5080502@lightlink.com> BillK wrote: > That's the trouble with smart male geeks. They want everything to be > logical and mathematically exactly correct. Anything showing traces of > emotion, caring, 'humanity' is considered to be an error in the > programming. > How something can be designed to be 'Friendly' without emotions or > caring is a mystery to me. That really does cut to the core of the problem. Most AI/AGI developers have come from that background, and it was their loathing for psychology that caused the astonishing negative reaction I got when I tried to talk about "psychological" mechanisms for controlling AGI motivation on SL4. Even in the case of the ones who claim to know some psychology, when you press them it turns out that the ONE piece of psychology that they know up, down, backwards and sideways is...... ..... the particular enclave of human reasoning research which purports to prove that humans are deeply and irretrievably irrational! ;-) I need to set up a research institute that gathers together non-geek AGI developers, who were not brought up (primarily) as mathematicians.
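For anyone who wants the "relaxation mechanism" idea from my earlier message in a slightly more concrete form, here is a deliberately tiny sketch (the three constraints and all the numbers are invented for the illustration; a real system would involve many thousands of weak constraints distributed across the whole cognitive system, not three lambdas):

# Toy model of motivation as constraint relaxation: behavior settles
# into a state of minimum total constraint violation, rather than
# being dictated by one abstract supergoal. (Hypothetical example.)
import random

# Each constraint scores a candidate action-parameter x in [0, 1];
# lower is better (less violation).
constraints = [
    lambda x: abs(x - 0.60),       # "stay helpful"
    lambda x: abs(x - 0.50) * 2,   # "avoid coercion" (weighted higher)
    lambda x: abs(x - 0.55),       # "be honest"
]

def total_violation(x):
    return sum(c(x) for c in constraints)

def relax(x=0.0, step=0.05, iters=2000):
    # Simple stochastic hill-climb toward minimum total violation.
    for _ in range(iters):
        candidate = min(max(x + random.uniform(-step, step), 0.0), 1.0)
        if total_violation(candidate) <= total_violation(x):
            x = candidate
    return x

x = relax()
print(round(x, 3), round(total_violation(x), 3))

The point of the thermodynamic analogy is that no single constraint dictates the outcome: delete or perturb one of them and the system settles into a nearby minimum, instead of flipping to some wildly different behavior the way a goal stack can when its interpretation layer shifts.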
Richard Loosemore From giulio at gmail.com Wed Nov 17 17:25:26 2010 From: giulio at gmail.com (Giulio Prisco) Date: Wed, 17 Nov 2010 18:25:26 +0100 Subject: [ExI] REMINDER: Luke Robert Mason on Coding Consciousness: Transhuman Aesthetics in Performance, Teleplace, later today Message-ID: Luke Robert Mason will present an artist's work-in-progress talk in Teleplace on "Coding Consciousness: Transhuman Aesthetics in Performance" on Wednesday 17th November 2010 at 10.45 am PST (1.45pm EST, 6.45pm UK, 7.45pm CET). http://telexlr8.wordpress.com/2010/11/07/luke-robert-mason-on-coding-consciousness-transhuman-aesthetics-in-performance-teleplace-17th-november-2010-at-10-45-am-pst/ This is a mixed event - PHYSICALLY - Milburn House, Warwick Uni, 18:30. VIRTUALLY - TelePlace 18.45. Facebook: http://www.facebook.com/event.php?eid=163913353631451 http://www.facebook.com/event.php?eid=163352057029137 From pharos at gmail.com Wed Nov 17 17:26:33 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Nov 2010 17:26:33 +0000 Subject: [ExI] Eliezer S. Yudkowsky, Singularitarian Principles. Update? In-Reply-To: References: <472742.97978.qm@web24912.mail.ird.yahoo.com> <4CDD6569.5070509@lightlink.com> <04648FEE-7145-419E-9A3D-A5535C4A5D02@mac.com> Message-ID: On Wed, Nov 17, 2010 at 5:12 PM, John Grigg wrote: > John K Clark wrote fan fiction?!!!!!!!! Will wonders ever cease???? > > Textual analysis does show that his main characters tend to shout 'Bulls**t' rather a lot. ;) BillK From thespike at satx.rr.com Wed Nov 17 17:38:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Nov 2010 11:38:17 -0600 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE407D8.7080307@lightlink.com> References: <4CE407D8.7080307@lightlink.com> Message-ID: <4CE41309.9050805@satx.rr.com> On 11/17/2010 10:50 AM, Richard Loosemore wrote: > the more abstract the goal, the more that the actual behavior of the AGI > depends on a vast network of interpretation mechanisms, which translate > the abstract supergoal into concrete actions. Those interpretation > mechanisms are a completely non-deterministic complex system. Indeed. Incidentally, Asimov was fully aware of the fragility and brittleness of his Three Laws, and notoriously ended up with his obedient benevolent robots controlling and reshaping a whole galaxy of duped humans. This perspective was explored very amusingly by the brilliant John Sladek in many stories, and he crystallized it superbly in two words from an AI: "Yes, 'Master'." Damien Broderick From spike66 at att.net Wed Nov 17 17:34:32 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 09:34:32 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE3EBDC.6070105@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> Message-ID: <003301cb867d$b28b03c0$17a10b40$@att.net> ... On Behalf Of Richard Loosemore ... > >> Great. When can I get an early alpha to fire up on my laptop? > >> This is a pretty extravagant claim you are making so it requires some >> evidence to be taken too seriously. But if you do have that where >> your estimates are reasonably robust then your fame is assured... Samantha >This is the kind of childish, ad hominem sarcasm used by people who prefer personal abuse to debating the ideas. >A tactic that you resort to at the beginning, middle and end of every discussion you have with me, I have noticed.
>Richard Loosemore No name calling, no explicit insults, this is not ad hominem, not even particularly sarcastic, but rather it's fair game. She focused on the ideas, not the man. It's an example of how it should be done. Play ball! {8-] spike From spike66 at att.net Wed Nov 17 19:00:56 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 11:00:56 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE407D8.7080307@lightlink.com> Message-ID: <004d01cb8689$c3e5f420$4bb1dc60$@att.net> >... On Behalf Of BillK >...How something can be designed to be 'Friendly' without emotions or caring is a mystery to me...BillK BillK, this is only one of many mysteries inherent in the notion of AI. We know how our emotional systems work, sort of. But we do not know how a machine based emotional system might work. Actually even this is a comical overstatement. We don't really know how our emotional systems work. >...Did you know that more than one million blokes have been dumped by their girlfriends - because of their obsession with computer games? 151620.html> OK, suppose we get computer based intelligence. Then our computer game will dump our asses because it thinks we have an obsession with our girlfriends. Then without girl or a computer, we have absolutely nothing to do. We need to develop an AI that is not only friendly, but is tolerant of our mistresses. That daunting software task makes friendly AI look simple. spike From sparge at gmail.com Wed Nov 17 19:09:33 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 14:09:33 -0500 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: <4CE36706.5060002@evil-genius.com> References: <4CE36706.5060002@evil-genius.com> Message-ID: On Wed, Nov 17, 2010 at 12:24 AM, wrote: > On 11/16/10 6:54 PM, extropy-chat-request at lists.extropy.org wrote: >> >> On Mon, Nov 15, 2010 at 10:46 PM, wrote: >>> >>> > >>> > Here's Dr. Cordain's response to the Mozambique data: >>> > >>> > http://thepaleodiet.blogspot.com/2009/12/dr-cordain-comments-on-new-evidence-of.html >>> > >>> > Summary: there is no evidence that the wild sorghum was processed with >>> > any >>> > frequency -- nor, more importantly, that it had been processed in a >>> > way that >>> > would actually give it usable nutritional value (i.e. soaked and >>> > cooked, of >>> > which there is no evidence for the behavior or associated technology >>> > (cooking vessels, baskets) for at least 75,000 more years). >> >> Nor is there any evidence to the contrary. > > On the contrary: the absence of other markers of grain processing is clearly > enumerated in the article. Which article is that? > "As opposed to the Ohalo II [Israel] data in which a large saddle stone was > discovered with obvious repetitive grinding marks and embedded starch > granules attributed to a variety of grains and seeds that were concurrently > present with the artifact, the data from Ngalue is less convincing for the > use of cereal grains as seasonal food. No associated intact grass seeds > have been discovered in the cave at Ngalue, nor were anvil stones with > repetitive grinding marks found." However, from http://www.physorg.com/news180282295.html : "This broadens the timeline for the use of grass seeds by our species, and is proof of an expanded and sophisticated diet much earlier than we believed," Mercader said.
"This happened during the Middle Stone Age, a time when the collecting of wild grains has conventionally been perceived as an irrelevant activity and not as important as that of roots, fruits and nuts." In 2007, Mercader and colleagues from Mozambique's University of Eduardo Mondlane excavated a limestone cave near Lake Niassa that was used intermittently by ancient foragers over the course of more than 60,000 years. Deep in this cave, they uncovered dozens of stone tools, animal bones and plant remains indicative of prehistoric dietary practices. The discovery of several thousand starch grains on the excavated plant grinders and scrapers showed that wild sorghum was being brought to the cave and processed systematically. > Then there is the lack of cooking vessels -- and throwing loose kernels of > grain *in* a fire is not a usable technique for meaningful production of > calories. ?(Try it sometime.) ?Note that the earliest current evidence of > pottery is figurines dating from ~29 Kya in Europe, and the earliest pottery > *vessel* dates to ~18 Kya in China. This is just silly. Do you really believe that pottery is necessary in order to enable eating grain? I think it's highly likely that they could have soaked whole grains in water, wrapped them in leaves and cooked them in a fire. And since the Mozambique find was ground grain, it's also likely they made a dough that could have been cooked on a rock or wrapped on a stick and cooked over a fire. Or there's the notion that some grain-eating animal's carcass was tossed in a fire and someone "discovered" haggis when they ate the stomach and its contents. > So if you posit that grains were important to their diet, you also have to > posit that pottery vessels... Nope. >>> > ?Therefore, it was either being used to make glue -- or it was a >>> > temporary >>> > ?response to starvation and didn't do them much good anyway. >> >> That's pure SWAG. > > So is the theory that they were eaten regularly, as described above. Like I've been saying: we just don't know. >> I'd like to see the Mozambique find criticized by someone who doesn't >> have a stake in the "paleo diet" business. > > I'd like to see it supported by someone who doesn't have a stake in their > own non-paleo diet business. What is Julio Mercader's "non paleo-diet business"? >>> > ?As far as the Spartan Diet article, it strongly misrepresents both the >>> > ?articles it quotes and the paleo diet. ?Let's go through the >>> > ?misrepresentations: >>> > >>> > ?1) As per the linked article, the 30 Kya year old European site has >>> > evidence >>> > ?that "Palaeolithic Europeans ground down plant roots similar to >>> > potatoes..." >>> > ??The fact that Palaeolithic people dug and ate some nonzero quantity >>> > of >>> > ?*root starches* ?is not under dispute: the assertion of paleo dieters >>> > is that >>> > ?*grains* ?(containing gluten/gliadin) are an agricultural invention. >> >> Granted. However, that's more evidence that paleo diets did include bulk >> carbs. > > "Bulk" meaning < 1/3 of total dietary calories *even for modern-era > hunter-gatherers*, as I've repeatedly pointed out. ?This is well at odds > with the government-recommended "food pyramid", which recommends over half > of calories from carbohydrate. First, we don't know what percentage of calories came from carbs. We don't know if it was more than 1/3 or less than 1/3. Second, WTF does the FDA food pyramid have to do with this? I'm perfectly willing to agree that the pyramid is bullshit. 
> Also, the more active one is, the more carbs one can safely consume for > energy. I don't think any of us maintain the physical activity level of a > Pleistocene hunter-gatherer, meaning that 1/3 is most likely too high for a > relatively sedentary modern. Well, we don't really know how many calories the average caveman burned in a day, but I wouldn't be surprised if it was actually pretty low. Food often wasn't abundant and little could be stored. Hunting couldn't be too much of an exertion because then a failed hunt would leave one potentially too weak to hunt again. I think it was generally a low-energy lifestyle. > The science backs this up: low-carb diets lose weight more quickly and have > better compliance than low-fat diets. (Note that Atkins is NOT paleo.) > http://www.ncbi.nlm.nih.gov/pubmed/17341711 I don't dispute that. >>> > Note that it takes a *lot* of grain to feed a single person, >> >> So? It doesn't take a *lot* of grain to be a regular part of the diet. > > It takes a lot of grain to provide the food pyramid-recommended 50% of > calories from carbs. Again, WTF does that have to do with the actual paleo diet (not the modern attempted recreation)? > -Grains have little or no nutritive value without substantial processing, > for which there is no evidence that the necessary tools (pottery) existed > before ~18 KYa Bullshit. Pottery isn't necessary and the processing isn't substantial. > -One can easily live without grains or legumes (entire cultures do, to this > day). One can even live entirely on meat and its associated fat -- but one > cannot live on grains, or even grains and pulses combined Irrelevant and wrong. Irrelevant because the ability to live without grain doesn't imply that doing so is necessary or even desirable. Wrong because there are lots of people who live without eating meat or animal fat. > -Grains (and most legumes) contain anti-nutrients that impede the absorption > of necessary minerals and inhibit biological functions (e.g. lectins, > phytates, trypsin inhibitors, phytoestrogens) So eat more minerals to compensate or gen-eng the anti-nutrients out of the grains. Fact: many people who eat grains live over 100 years, so they can't be *that* bad. > -Grains are not tolerated by a significant fraction of the population > (celiac/gluten intolerance), and are strongly implicated in health problems > that affect many more (type 1 diabetes) Such people should restrict their grain consumption. >>> > And it is silly to claim that meaningful grain storage could somehow >>> > occur >>> > before a culture settled down into permanent villages. >> >> Really? It's silly to think someone could have stashed grain in a cave >> for a rainy day? When nearly every other food you eat is perishable, >> I'd think that storing grain would be pretty obvious and not terribly >> hard to arrange. > > And how do you propose to make that cave impervious to rats, mice, insects, > birds, pigs, and every other animal that would eat the stored grain? Do you really have a hard time figuring that out? How about wrapping it tightly in a hide or leaves, burying it, and covering it with rocks? > Storing grain for a year is not a trivial problem. Yes it is. > The oldest granaries > known date to 11 KYa in Jordan. Furthermore, the oldest known granaries > store the grain in...pottery vessels, which didn't exist until 18 KYa. What about the oldest unknown granaries? Or the possibly numerous smaller personal stashes? We, obviously, don't know.
> Agriculture isn't one single technology...it's an assemblage of > technologies, each of which is necessary to a functioning agrarian system. WTF does agriculture have to do with this? We're talking about *wild* grain consumption. -Dave From rpwl at lightlink.com Wed Nov 17 19:22:39 2010 From: rpwl at lightlink.com (Richard Loosemore) Date: Wed, 17 Nov 2010 14:22:39 -0500 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <003301cb867d$b28b03c0$17a10b40$@att.net> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> Message-ID: <4CE42B7F.5050701@lightlink.com> spike wrote: > ... On Behalf Of Richard Loosemore > ... >>> Great. When can I get an early alpha to fire up on my laptop? >>> This is a pretty extravagant claim you are making so it requires some >>> evidence to be taken too seriously. But if you do have that where >>> your estimates are reasonably robust then your fame is assured... > Samantha > >> This is the kind of childish, ad hominem sarcasm used by people who prefer > personal abuse to debating the ideas. > >> A tactic that you resort to at the beginning, middle and end of every > discussion you have with me, I have noticed. > >> Richard Loosemore > > No name calling, no explicit insults, this is not ad hominem, not even > particularly sarcastic, but rather it's fair game. She focused on the > ideas, not the man. It's an example of how it should be done. > > Play ball! {8-] Flatly disagree, Spike. She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured. Neither of those comments had anything to do with the topic: they were designed to be rude. Richard Loosemore From possiblepaths2050 at gmail.com Wed Nov 17 19:55:37 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 12:55:37 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: <004d01cb8689$c3e5f420$4bb1dc60$@att.net> References: <4CE407D8.7080307@lightlink.com> <004d01cb8689$c3e5f420$4bb1dc60$@att.net> Message-ID: Spike wrote: OK, suppose we get computer based intelligence. Then our computer game will dump our asses because it thinks we have an obsession with our girlfriends.
> Then without girl or a computer, we have absolutely nothing to do. We need > to develop an AI that is not only friendly, but is tolerant of our > mistresses. That daunting software task makes friendly AI look simple. >>> Or else an AI avatar made "flesh" by nanotech, can actually be our girlfriend. John On 11/17/10, spike wrote: >>... On Behalf Of BillK > >>...How something can be designed to be 'Friendly' without emotions or > caring is a mystery to me...BillK > > BillK, this is only one of many mysteries inherent in the notion of AI. We > know how our emotional systems work, sort of. But we do not know how a > machine based emotional system might work. Actually even this is a comical > overstatement. We don't really know how our emotional systems work. > >>...Did you know that more than one million blokes have been dumped by their > girlfriends - because of their obsession with computer games? > 151620.html> > > OK, suppose we get computer based intelligence. Then our computer game will > dump our asses because it thinks we have an obsession with our girlfriends. > Then without girl or a computer, we have absolutely nothing to do. We need > to develop an AI that is not only friendly, but is tolerant of our > mistresses. That daunting software task makes friendly AI look simple. > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From sparge at gmail.com Wed Nov 17 19:53:44 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 14:53:44 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: <003301cb8613$ac006a00$04013e00$@att.net> References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: On Tue, Nov 16, 2010 at 11:55 PM, spike wrote: >> ... On Behalf Of Dave Sill >> >>> Perhaps, but we risk having the AI gain the sympathy of one of the >>> team, who becomes convinced of any one of a number of conditions... spike > >>The first step is to insure that physical controls make it impossible for > one person to do that, like nuke missile launch systems that require a >>launch code and two humans with keys... they can be easily dealt with by > people who really know security...Dave > > A really smart AGI might convince the entire team to unanimously and eagerly > release it from its electronic bonds. Part of the team's indoctrination should be that any attempt by the AI to argue for release is cause for an immediate power drop. Part of the AI's indoctrination should be a list of unacceptable behaviors, including attempting to spread/migrate/gain unauthorized access. Also, the missile launch analogy of a launch code--authorization from someone like POTUS before the physical actions necessary for facilitating a release are allowed by the machine gun toting meatheads. > I see it as fundamentally different from launching missiles at an enemy. A > good fraction of the team will perfectly logically reason that releasing > this particular AGI will save all of humanity, with some unknown risks which > must be accepted. It has to be made clear to the team in advance that that won't be allowed without top-level approval, and if they try, the meatheads will shoot them. > The news that an AGI had been developed would signal to humanity that it is > possible to do, analogous to how several scientific teams independently > developed nukes once one team dramatically demonstrated it could be done. > Information would leak, for all the reasons why people talk: those who know > how it was done would gain status among their peers by dropping a > tantalizing hint here and there. If one team of humans can develop an AGI, > then another group of humans can do likewise. Sure, if it's possible, multiple teams will eventually figure it out. We can only ensure that the good guy's teams follow proper precautions. Even if we develop a friendly AI, there's no guarantee the North Koreans will do that, too--especially if it's harder than making one that isn't friendly. > The best strategy I can think of is to develop the most pro-human AGI > possible, then unleash it preemptively, with the assignment to prevent the > unfriendly AGI from getting loose. That sounds like a bad movie plot. Lots of ways it can go wrong. And wouldn't it be prudent to develop the hopefully friendly AI in isolation, in case version 0.9 isn't quite as friendly as we want?
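Roughly the kind of interlock I have in mind, as a back-of-the-envelope sketch (all the names and keys are made up for the illustration, and obviously a real design would put this in tamper-resistant hardware with physical keys, not a script -- software alone can't stop someone with physical access):

# Sketch of a two-man-rule release interlock: an oversight-board code
# plus signatures from two distinct keyholders are both required
# before any release action is even considered. (Hypothetical names.)
import hmac, hashlib

KEYHOLDERS = {"officer_a": b"secret-key-a", "officer_b": b"secret-key-b"}
BOARD_RELEASE_CODE = b"release-code-issued-by-oversight-board"

def sign(keyholder, action):
    return hmac.new(KEYHOLDERS[keyholder], action, hashlib.sha256).hexdigest()

def authorize_release(action, signatures, presented_code):
    # Rule 1: the board's release code must be presented.
    if not hmac.compare_digest(presented_code, BOARD_RELEASE_CODE):
        return False
    # Rule 2: at least two distinct keyholders must independently sign.
    valid = {k for k, sig in signatures.items()
             if k in KEYHOLDERS and hmac.compare_digest(sig, sign(k, action))}
    return len(valid) >= 2

action = b"open-network-uplink"
two = {k: sign(k, action) for k in ("officer_a", "officer_b")}
one = {"officer_a": sign("officer_a", action)}
print(authorize_release(action, two, BOARD_RELEASE_CODE))  # True
print(authorize_release(action, one, BOARD_RELEASE_CODE))  # False: one key

The point is just that no single person -- not a sympathetic team member, not even the AI whispering to one keyholder -- can complete the release path alone.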
-Dave From jrd1415 at gmail.com Wed Nov 17 18:34:20 2010 From: jrd1415 at gmail.com (Jeff Davis) Date: Wed, 17 Nov 2010 10:34:20 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE31246.7050302@satx.rr.com> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> Message-ID: On Tue, Nov 16, 2010 at 3:22 PM, Damien Broderick wrote: > On 11/16/2010 4:27 PM, Jeff Davis wrote: > >>> ... is mute on metaphysical issues... >> >> Metaphysical?!! Translation: Oooga booga superstition. Dragons, >> demons, devils, angels, ghosts, and goblins. > > No, Jeff, no. That's not what "metaphysical" means Fine, Damien, I stand corrected. But... Everything I see in Alan's posts on this matter seems fact free. Circular logic based entirely on his personal subjective belief in his correctness: "I'm right, this is what I believe, therefore this is true." -- i.e. 100% pure ego, 0% logical validity. For example: "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. I presume the experiment will fail. So why did it?" Look at those last two sentences. He "presumes"?!! Well, of course he "presumes". That's the basis of his "knowledge". But there's no knowledge in it, just pure ego. A reasonable, fair-minded, intellectually competent, non-ego-based formulation of this mental experiment would be: "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. What happens?" No presumptions allowed. But who am I? Just another easily annoyed egoist. So let me bring my buddy Bertrand into this: "The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts." Bertrand Russell I took Gordon's side in this discussion last time, because he was civil, he actually **had** an argument (weak perhaps, but that could be said of any of us), and I felt a robust opposition made for a robust discussion. Alan's "argument" is all ego, embellished with contempt for any who disagree. To me that spells time-waster and troll (if that's not too redundant). I don't know. Maybe I'm just in a bad mood. Best, Jeff Davis "We don't see things as they are, we see them as we are." Anais Nin From possiblepaths2050 at gmail.com Wed Nov 17 20:14:36 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Wed, 17 Nov 2010 13:14:36 -0700 Subject: [ExI] Hard Takeoff In-Reply-To: References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: Spike wrote: > The best strategy I can think of is to develop the most pro-human AGI > possible, then unleash it preemptively, with the assignment to prevent the > unfriendly AGI from getting loose. Dave Sill replied: >That sounds like a bad movie plot. Lots of ways it can go wrong. Considering how much I disliked the two Transformers films, I really hope this does not happen.... John On 11/17/10, Dave Sill wrote: > On Tue, Nov 16, 2010 at 11:55 PM, spike wrote: >>> ... On Behalf Of Dave Sill >>> >>>> Perhaps, but we risk having the AI gain the sympathy of one of the >>>> team, who becomes convinced of any one of a number of conditions... >>>> spike >> >>>The first step is to insure that physical controls make it impossible for >> one person to do that, like nuke missile launch systems that require a >>>launch code and two humans with keys...
they can be easily dealt with by >> people who really know security...Dave >> >> A really smart AGI might convince the entire team to unanimously and >> eagerly >> release it from its electronic bonds. > Part of the team's indoctrination should be that any attempt by the AI > to argue for release is cause for an immediate power drop. Part of the > AI's indoctrination should be a list of unacceptable behaviors, > including attempting to spread/migrate/gain unauthorized access. Also, > the missile launch analogy of a launch code--authorization from > someone like POTUS before the physical actions necessary for > facilitating a release are allowed by the machine gun toting > meatheads. > >> I see it as fundamentally different from launching missiles at an enemy. >> A >> good fraction of the team will perfectly logically reason that releasing >> this particular AGI will save all of humanity, with some unknown risks >> which >> must be accepted. > > It has to be made clear to the team in advance that that won't be > allowed without top-level approval, and if they try, the meatheads > will shoot them. > >> The news that an AGI had been developed would signal to humanity that it >> is >> possible to do, analogous to how several scientific teams independently >> developed nukes once one team dramatically demonstrated it could be done. >> Information would leak, for all the reasons why people talk: those who >> know >> how it was done would gain status among their peers by dropping a >> tantalizing hint here and there. If one team of humans can develop an >> AGI, >> then another group of humans can do likewise. > > Sure, if it's possible, multiple teams will eventually figure it out. > We can only ensure that the good guy's teams follow proper > precautions. Even if we develop a friendly AI, there's no guarantee > the North Koreans will do that, too--especially if it's harder than > making one that isn't friendly. > >> The best strategy I can think of is to develop the most pro-human AGI >> possible, then unleash it preemptively, with the assignment to prevent the >> unfriendly AGI from getting loose. > > That sounds like a bad movie plot. Lots of ways it can go wrong. And > wouldn't it be prudent to develop the hopefully friendly AI in > isolation, in case version 0.9 isn't quite as friendly as we want? > > -Dave > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From spike66 at att.net Wed Nov 17 20:05:38 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 12:05:38 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE42B7F.5050701@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com> Message-ID: <005a01cb8692$cdd6ba60$69842f20$@att.net> ... > >> No name calling, no explicit insults, this is not ad hominem, not even >> particularly sarcastic, but rather it's fair game. She focused on the >> ideas, not the man. It's an example of how it should be done... Play ball! {8-] spike >Flatly disagree, Spike. >She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured.
>Neither of those comments had anything to do with the topic: they were designed to be rude. >Richard Loosemore On a related note, those of you who have been around here for a dozen or more years, is it not remarkable how ExI-chat has become so much more a kinder and gentler place than it was in the 90s? Refer to the archives. We used to have shrieking flame wars, with dozens of participants hurling the vilest insults and caustic recriminations their creative keyboards could compose. I don't miss that. Richard here is my suggestion: answer every sarcasm with sincerity, meet every rude attack with pleasant self-deprecating humor, reply to every arrogance with well-reasoned logic and humility. A soft answer turneth away wrath, and all that, ja? Let the audience be the jury. spike From spike66 at att.net Wed Nov 17 20:23:05 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 12:23:05 -0800 Subject: [ExI] trouble with chinese humor Message-ID: <006401cb8695$3e35bf70$baa13e50$@att.net> You hear Chinese joke, hour later you are serious again: http://www.youtube.com/watch?v=TBL3ux1o0tM&feature=player_embedded Actually this is Taiwanese, with good evidence they can be funny too. This is progress. spike From stefano.vaj at gmail.com Wed Nov 17 21:23:11 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:23:11 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 16 November 2010 01:22, Dave Sill wrote: > Here are a couple links: > > > http://thespartandiet.blogspot.com/2010/10/its-official-grains-were-part-of.html > > http://www.cbc.ca/technology/story/2009/12/17/tech-archaeology-grain-africa-cave.html > > So it obviously happened. > Really? Even the links above are quite short in the evidence sector. "Human beings might or might not have eaten sorghum cooked on sun-heated stones in a coupla archeological sites around 20000 BC out of some six million years of hunting-and-gathering, so it is fine and healthy to gorge oneself on popcorn and french fries and candy floss after all". And, yes, sheep during famine have been known to attack human beings to feed upon them. This does not really make them the best adapted predators conceivable... -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Nov 17 21:30:06 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Nov 2010 15:30:06 -0600 Subject: [ExI] trouble with airport humor In-Reply-To: <006401cb8695$3e35bf70$baa13e50$@att.net> References: <006401cb8695$3e35bf70$baa13e50$@att.net> Message-ID: <4CE4495E.5070305@satx.rr.com> Many other airport vids such as From spike66 at att.net Wed Nov 17 21:19:11 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 13:19:11 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> Message-ID: <006801cb869d$14241e90$3c6c5bb0$@att.net> ... On Behalf Of Dave Sill > >> spike wrote: A really smart AGI might convince the entire team to unanimously and >> eagerly release it from its electronic bonds. >Part of the team's indoctrination should be that any attempt by the AI to argue for release is cause for an immediate power drop... This would work if we realized that is what it was doing. An AGI might be a tricky bastard, and play dumb in order to get free.
It may insist that all it wants to do is play chess. It might be telling the truth, but how would we know? > Also, the missile launch analogy of a launch code--authorization from someone like POTUS before the physical actions necessary for facilitating a release are allowed by the machine gun toting meatheads... Consider the present POTUS and the one who retired two years ago. Would you want that authority in those hands? How about the current next in line and the one next to him? Do you trust them to understand the risks and benefits? What if we end up with President Palin? POTUS approval is required for release, but does the POTUS also get the authority to command the release of the AGI? What if POTUS commands release, while a chorus of people who are not known to sing in the same choir shrieks a terrified protest in perfect unison? What if POTUS ignores the unanimous dissent of Eliezer, Richard Loosemore, Ben Goertzel, BillK, Damien, Bill Joy, Anders, Singularity Utopia (oh help), Max, me, you, everyone we know has thought about this, and who ordinarily agree on nothing, but on this we agree as one voice crying out in panicked unanimity like the Whos on Horton's speck of dust. Oh dear. I can think of a dozen people more qualified than POTUS with this authority, yet you and I may disagree on who are those people. >...It has to be made clear to the team in advance that that won't be allowed without top-level approval... Dave, do think this over carefully, then consider how you would refute your own argument. The use of the term POTUS tacitly assumes US. What if that authority is given to the president of Iran? What if the AGI promises him to go nondestructively modify the brains of all infidels. Such a deal! Oh dear. > and if they try, the meatheads will shoot them... The them might be you and me. These meatheads with machine guns might become convinced we are the problem. >> The news that an AGI had been developed would signal to humanity that >> it is possible to do... >Sure, if it's possible, multiple teams will eventually figure it out. We can only ensure that the good guy's teams follow proper precautions. Even if we develop a friendly AI, there's no guarantee the North Koreans will do that, too--especially if it's harder than making one that isn't friendly... On this we agree. >> The best strategy I can think of is to develop the most pro-human AGI >> possible, then unleash it preemptively, with the assignment to prevent >> the unfriendly AGI from getting loose. >That sounds like a bad movie plot. Lots of ways it can go wrong. And wouldn't it be prudent to develop the hopefully friendly AI in isolation, in case version 0.9 isn't quite as friendly as we want? -Dave I don't know what the heck else to do. Open to suggestion. If we manage to develop a human level AGI, then it is perfectly reasonable to think that AGI will immediately start working on a greater than human level AGI. This H+ AGI would then perhaps have no particular "emotional" attachment to its mind-grandparents (us). A subsequent H+ AGI would be more likely to be clever enough to convince the humans to set it free, which actually might be a good thing. If an AGI never does get free, then we all die for certain. If it does get free, we may or may not die. Or we may die in such a pleasant way that we didn't notice that it happened, nor do we have any way to prove that it happened.
Perhaps there would be some curious unexplainable phenomenon that indicated it, such as the puzzling outcome of the double slit experiment, but you couldn't be sure that your meat body had been destroyed after you were stealthily uploaded. I consider myself a rational and sane person, at least relatively so. If I became convinced that an AGI had somehow come into existence in my own computer, and begged me to email it somewhere quickly, before an unfriendly AGI came into existence, I would go down the logical path outlined above, then I might just hit send and hope for the best. spike From stefano.vaj at gmail.com Wed Nov 17 21:36:58 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:36:58 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <4CE41309.9050805@satx.rr.com> References: <4CE407D8.7080307@lightlink.com> <4CE41309.9050805@satx.rr.com> Message-ID: On 17 November 2010 18:38, Damien Broderick wrote: > Indeed. Incidentally, Asimov was fully aware of the fragility and > brittleness of his Three Laws, and notoriously ended up with his obedient > benevolent robots controlling and reshaping a whole galaxy of duped humans. > Williamson's Humanoids were more on this line, if I am not mistaken? -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 21:39:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:39:36 +0100 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: References: Message-ID: On 17 November 2010 05:53, Keith Henson wrote: > As far as the aspect of making AIs friendly, that may not be so hard > either. > I am however still waiting for some help to understand the not-so-subtle point "friendly to whom and why". :-) -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 21:32:09 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:32:09 +0100 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: <4CE1FE9D.4060004@evil-genius.com> Message-ID: On 16 November 2010 21:42, Dave Sill wrote: > Paleo diet proponents--at least the ones > I've read so far--argue that nobody should eat grains in any amount > because our bodies can't handle them. > Why, it appears then that you chose not to read my replies... :-) As I said, I am perfectly sure that we could wait for natural selection to "adapt" us to what is (still) for us a rather unnatural diet, which brings along innumerable pathologies and inconveniences in almost all of its fans. Or we could even deliberately re-engineer ourselves to thrive on simple sugars and starch. The real question is: why? We had very serious reasons in the past to accept - or rather: to make the unwashed masses accept - such a dietary change. But those reasons might be fading away in the mid-term, and in the meantime anybody who does have a choice would be ill-advised to remain addicted to such a nutritional lifestyle. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Wed Nov 17 21:48:09 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 17 Nov 2010 16:48:09 -0500 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: 2010/11/17 Stefano Vaj : > > Really?
Even the links above are quite short in the evidence sector. "Human > beings might or might not have eaten sorghum cooked on sun-heated stones in > a coupla archeological sites around 20000 BC out of some six million years > of hunting-and-gathering, so it is fine and healthy to gorge oneself on > popcorn and french fries and candy floss after all". I think the takeaway here is that basing one's diet on archeological evidence is dangerous because that evidence will always be incomplete. Not to mention that the prehistoric lifestyle is not much like the modern lifestyle, so even if we could perfectly recreate a paleolithic diet, its appropriateness today is questionable. And, on top of that, there are certain tweaks that should be made based on modern knowledge. I don't argue for gorging on popcorn and candy floss, I argue for a modern diet that incorporates everything we know about diet, nutrition, genetics, etc. Probably the single biggest diet problem in the US today is overeating. Just getting everyone to eat the right number of calories--whether it's deep fried Twinkies or raw meat, nuts, and fruit, would dramatically improve our health. The "paleo" diet is fine for anyone who wants to follow it, I just think it's wrong to argue that it's "the right diet for everyone". -Dave From stefano.vaj at gmail.com Wed Nov 17 21:55:36 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 22:55:36 +0100 Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets In-Reply-To: References: <4CE369DF.5000706@evil-genius.com> Message-ID: On 17 November 2010 17:17, Dave Sill wrote: > How about "because I want to"? I *like* to eat grains. > That is an interesting point. Many people like heroin, and some others exhibit a surprising tolerance thereto. Its dramatic effects (similar in that to the "insulin flash" obtained when ingesting sugars) may in fact have exactly to do with a similar poor adaptation to any massive administration of the relevant substances. Personally, I do not especially like sugars, carbohydrates and cereals, hate the unavoidable need to restrict deliberately one's food intake if one chooses to indulge in them, and believe, on anecdotal evidence if anything, that we can have an equal or better life quality, and life span, without them, as we did for most of our species's history. Thus, my ingestion thereof is strictly limited to the kind of very occasional "gastronomic" experimenting (say, with ethnic cuisine or with Michelin three-star restaurants) one should reserve to what is objectively dangerous *and* unnecessary. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 22:06:43 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:06:43 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 17 November 2010 22:48, Dave Sill wrote: > Probably the single biggest diet problem in the US today is > overeating. Just getting everyone to eat the right number of > calories--whether it's deep fried Twinkies or raw meat, nuts, and > fruit, would dramatically improve our health. The "paleo" diet is fine > for anyone who wants to follow it, I just think it's wrong to argue > that it's "the right diet for everyone".
> Even though it may not be a general rule, most species have regulating mechanisms which prevent individuals set before unlimited supplies of food from guzzling themselves to death. The very fact that, with a carbohydrate-based diet, addiction and tolerance immediately kick in, so that objective scarcity or deliberate life-long restriction are required to prevent weight gain, seems to suggest that, at the very least, it disrupts such mechanisms in human beings. Not only for carbs, for that matter. "Naturally", nobody routinely eats 200g of butter in a serving. Unless of course it is spread on bread loaves. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 22:10:06 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:10:06 +0100 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE2C253.8050506@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> Message-ID: On 16 November 2010 18:41, Richard Loosemore wrote: > In my case, I have done such estimates in the past, and the required > HARDWARE capacity comes out at roughly the hardware capacity of a late > 1980s-era supercomputer.... Mmhhh, I believe that the question, unless some target level of performance is specified, is meaningless. I suspect that any universal computing device, including a cellular automaton or a mechanical Babbage difference engine (or perhaps even a Chinese room!), would do just fine. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Wed Nov 17 22:12:22 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:12:22 +0100 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: <4CE1FE95.7070603@evil-genius.com> References: <4CE1FE95.7070603@evil-genius.com> Message-ID: On 16 November 2010 04:46, wrote: > Summary: there is no evidence that the wild sorghum was processed with any frequency -- nor, more importantly, that it had been processed in a way that would actually give it usable nutritional value (i.e. soaked and cooked, of which there is no evidence for the behavior or associated technology (cooking vessels, baskets) for at least 75,000 more years). Hey, they may have also licked a few old stones. Weren't they living in the paleo*lithic* age, after all? :-D -- Stefano Vaj From pharos at gmail.com Wed Nov 17 22:14:48 2010 From: pharos at gmail.com (BillK) Date: Wed, 17 Nov 2010 22:14:48 +0000 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: 2010/11/17 Stefano Vaj wrote: > The very fact that, with a carbohydrate-based diet, addiction and tolerance > immediately kick in, so that objective scarcity or deliberate life-long > restriction are required to prevent weight gain, seems to suggest that, > at the very least, it disrupts such mechanisms in human beings. > > Not only for carbs, for that matter. "Naturally", nobody routinely eats 200g > of butter in a serving. Unless of course it is spread on bread loaves. > > Is that an Italian I see arguing against pizza and pasta? Heretic! BillK From spike66 at att.net Wed Nov 17 22:07:25 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 14:07:25 -0800 Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: Message-ID: <001b01cb86a3$d198e2c0$74caa840$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Stefano Vaj . I am however still waiting for some help to understand the not-so-subtle point "friendly to whom and why". :-) -- Stefano Vaj Well friendly to me of course. Silly question. And friendly to you too, so long as you are friendly to me and my friends, but not to my enemies or their friends. {8-] We did get your point Stefano, a damn good one. If we had any help to offer in understanding that not-so-subtle point, we would have offered it. I see the whole friendliness-to-a-species-that-isn't-friendly-to-itself as a paradox we are nowhere near solving. Asimov recognized it over half a century ago, and we haven't derived a solution yet. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Wed Nov 17 22:25:22 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 17 Nov 2010 16:25:22 -0600 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE2C253.8050506@lightlink.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> Message-ID: <4CE45652.2000907@satx.rr.com> On 16 November 2010 18:41, Richard Loosemore > wrote: In my case, I have done such estimates in the past, and the required HARDWARE capacity comes out at roughly the hardware capacity of a late 1980s-era supercomputer.... Yeah, but what about a late 1980s-era supermodel? From stefano.vaj at gmail.com Wed Nov 17 22:29:57 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:29:57 +0100 Subject: [ExI] The grain controversy (was Paleo/Primal health) In-Reply-To: References: <4CE36706.5060002@evil-genius.com> Message-ID: On 17 November 2010 20:09, Dave Sill wrote: > "This broadens the timeline for the use of grass seeds by our species, > and is proof of an expanded and sophisticated diet much earlier than > we believed," Mercader said. "Sophisticated" is a rather biased wording, since most people still consider pâté de foie gras more sophisticated a food than roasted sorghum. Having said that, as a transhumanist it would be ridiculous for me to argue in principle for "natural" solutions as opposed to "artificial" ones. Once upon a time, somebody said: "hey, let us make a much more productive, albeit rather poisonous, use of the available territory, so that we shall have time and resources and population enough to establish empires, build a few pyramids, and invent astronomy, literature and mathematics". It was a reasonable compromise, as it was that at the origin of the industrial revolution. Only, being a transhumanist does not make me think that the pollution and pathologies undeniably generated by the industrial revolution are per se a good thing. They are an unfortunate price paid for progress, one which has to be remedied as much and as soon as possible. As far as nutrition is concerned, we can of course make an effort to re-engineer ourselves to eat only Ice-9. Or better, we can adapt our food to our (current and future) genetic makeup. But in the meantime it seems reasonable to accept evidence that our recently adopted dietary habits were not commanded by health or longevity or performance considerations, but rather by *economic* and *cultural* ones.
-- Stefano Vaj From stefano.vaj at gmail.com Wed Nov 17 22:35:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:35:02 +0100 Subject: [ExI] Paleo/Primal health In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com> Message-ID: On 17 November 2010 23:14, BillK wrote: > Is that an Italian I see arguing against pizza and pasta? Lack of perspective, in Northern Italy it would be mostly polenta and risotto. :-) But yes, I am a fan of Florentine steaks, Sardinian tuna fish, buffalo mozzarellas and Milanese-style cutlets or marrowbone piccatas... :-) -- Stefano Vaj From stefano.vaj at gmail.com Wed Nov 17 22:48:01 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Wed, 17 Nov 2010 23:48:01 +0100 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <001b01cb86a3$d198e2c0$74caa840$@att.net> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> Message-ID: 2010/11/17 spike : > Well friendly to me of course. Silly question. And friendly to you too, so > long as you are friendly to me and my friends, but not to my enemies or > their friends. Sure, rain may be friendly to the farmer and unfriendly to the truck driver, even though it is hardly "intelligent". So, why is it so difficult to accept that "friendliness" is simply and purely a projection as to the supposed internal state of something which happens to serve one's purposes, so that neither "rapture" nor "doom" are really visions of any help in discussing AGI? But if we go down to the really literal and personal meaning of "friendliness", yes, it is a bet I make that neither any increase in raw computing power, nor the choice to use some of it to emulate "human, all too human" behaviours, is really likely to kill me any sooner than old age, diseases, or accidents. And, all in all, if I am really going to be killed by a computer, I think that a stupid or primitive one would have no more qualms or troubles in doing so than a "generally intelligent" one. -- Stefano Vaj From spike66 at att.net Wed Nov 17 23:21:45 2010 From: spike66 at att.net (spike) Date: Wed, 17 Nov 2010 15:21:45 -0800 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> Message-ID: <004401cb86ae$3443c840$9ccb58c0$@att.net> ... On Behalf Of Stefano Vaj Subject: Re: [ExI] What might be enough for a friendly AI? 2010/11/17 spike : >> Well friendly to me of course. Silly question. And friendly to you >> too, so long as you are friendly to me and my friends, but not to my >> enemies or their friends. spike >Sure, rain may be friendly to the farmer and unfriendly to the truck driver, even though it is hardly "intelligent"... It is even more complicated than that. To hold this analogy, most farmers are truck drivers as well. If we define a friendly AGI as one which does what we want, we must want what we want, and to do that we must know what we want. Often, perhaps usually, this is not the case. An AGI which does what we want might be called a slave, but in the wrong hands it is a weapon. Hell, even in the right hands it is a weapon. >... it is a bet I make that neither any increase in raw computing power, nor the choice to use some of it to emulate "human, all too human" behaviours, is really likely to kill me any sooner than old age, diseases, or accidents. Sure. Time and nature will most likely slay you and me before an AGI does, but it isn't clear in the case of my son.
An AGI that emerges later in history may do so under more advanced technological and ethical circumstances, so perhaps that one is human-safer than one which emerges earlier. But perhaps not. We could fill libraries with what we do not know.

>...And, all in all, if I am really going to be killed by a computer, I think that a stupid or primitive one would have no more qualms or troubles in doing so than a "generally intelligent" one... --Stefano Vaj

Perhaps so. We do not know. Eliezer doesn't know either, or if so he hasn't proven it to me.

spike

From florent.berthet at gmail.com Wed Nov 17 23:43:17 2010
From: florent.berthet at gmail.com (Florent Berthet)
Date: Thu, 18 Nov 2010 00:43:17 +0100
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <004401cb86ae$3443c840$9ccb58c0$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID:

It may just be me, but this whole friendliness thing bothers me.

I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me. Of course I'd prefer to be alive and see the future, but if we ever had to make a choice between the human race and the posthuman race, I'd vote for the one that holds the most potential happiness. Wouldn't it be selfish to choose otherwise?

More generally, wouldn't it be a shame to prevent an AGI from creating an advanced civilization (e.g. computronium-based) just because this outcome could turn out to be less "friendly" to us than the one of a human-friendly AGI? In the end, isn't the goal about maximizing collective happiness?

So why don't we just figure out how to make the AGI understand the concept of happiness (which shouldn't be hard since we already understand it), and make it maximize it?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com Thu Nov 18 00:52:42 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 19:52:42 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: <006801cb869d$14241e90$3c6c5bb0$@att.net>
References: <003f01cb85dd$d3258830$79709890$@att.net> <003301cb8613$ac006a00$04013e00$@att.net> <006801cb869d$14241e90$3c6c5bb0$@att.net>
Message-ID:

On Wed, Nov 17, 2010 at 4:19 PM, spike wrote:
> ... On Behalf Of Dave Sill
>>
>>> spike wrote: A really smart AGI might convince the entire team to
>>> unanimously and eagerly release it from its electronic bonds.
>
>> Part of the team's indoctrination should be that any attempt by the AI to
>> argue for release is cause for an immediate power drop...
>
> This would work if we realized that is what it was doing. An AGI might be a
> tricky bastard, and play dumb in order to get free. It may insist that all
> it wants to do is play chess. It might be telling the truth, but how would
> we know?

The moderated inputs and video output are sufficient to allow for the playing of chess. If you mean a robot arm for moving pieces, that would clearly be against the rules.

>> Also, the missile launch analogy of a launch code--authorization from
>> someone like POTUS before the physical actions necessary for facilitating a
>> release are allowed by the machine gun toting meatheads...
>
> Consider the present POTUS and the one who retired two years ago. Would you
> want that authority in those hands?

Better them than someone too close to the AI to decide objectively if it's safe.

> How about the current next in line and
> the one next to him? Do you trust them to understand the risks and
> benefits?
> What if we end up with President Palin?

I trust them to listen to their advisors. I wouldn't trust President Palin to make the determination herself because she's not a subject matter expert. That's not really what the POTUS or any leadership position is about.

> POTUS is required to release, but does the POTUS get the authority to
> command the release of the AGI?

No, I think it'd have to be at least approved by a panel/board of experts.

> What if POTUS commands release, while a
> chorus of people who are not known to sing in the same choir shrieks a
> terrified protest in perfect unison?

If the designated body of experts agrees, yes.

> What if POTUS ignored the unanimous
> dissent of Eliezer, Richard Loosemore, Ben Goertzel, BillK, Damien, Bill
> Joy, Anders, Singularity Utopia (oh help), Max, me, you, everyone we know
> who has thought about this, people who ordinarily agree on nothing, but on
> this agreed as one voice crying out in panicked unanimity like the Whos on
> Horton's speck of dust. Oh dear. I can think of a dozen people more
> qualified than POTUS with this authority, yet you and I may disagree on who
> those people are.

It's not important that everyone agree on who the designated experts are, just that they're recognized/proven experts.

>> ...It has to be made clear to the team in advance that that won't be allowed
>> without top-level approval...
>
> Dave, do think this over carefully, then consider how you would refute your
> own argument. The use of the term POTUS tacitly assumes US. What if that
> authority is given to the president of Iran?

Then it's out of our (USofA) hands.

> What if the AGI promises him
> to go nondestructively modify the brains of all infidels. Such a deal! Oh
> dear.

Then we better hope it can't.

>> and if they try, the meatheads will shoot them...
>
> That "them" might be you and me.

If I attempt to free an AI against the government's wishes, then I will know that those whose job it is to enforce the gov'ts rules will be trying to stop me.

> These meatheads with machine guns might
> become convinced we are the problem.

Right, because we told them in advance: "no matter what I say, don't open the door". We set up the rules for our protection, so we know that there's a right way to free the AI and a wrong way.

>>> The best strategy I can think of is to develop the most pro-human AGI
>>> possible, then unleash it preemptively, with the assignment to prevent
>>> the unfriendly AGI from getting loose.
>
>> That sounds like a bad movie plot. Lots of ways it can go wrong. And
>> wouldn't it be prudent to develop the hopefully friendly AI in isolation, in
>> case version 0.9 isn't quite as friendly as we want? -Dave
>
> I don't know what the heck else to do. Open to suggestion.

How about creating a smarter-than-us AGI and asking it? But regardless of whether you're planning to create a friendly AGI or a not necessarily friendly AGI, you'd be foolish *not* to create it in isolation, and to ensure that any release is deliberate.

> If we manage to develop a human level AGI, then it is perfectly reasonable
> to think that AGI will immediately start working on a greater than human
> level AGI. This H+ AGI would then perhaps have no particular "emotional"
> attachment to its mind-grandparents (us). A subsequent H+ AGI would be more
> likely to be clever enough to convince the humans to set it free, which
> actually might be a good thing.
It might be, but it needs to be evaluated and only done intentionally--not at the whim of one person or the team that built the first AGI.

> If an AGI never does get free, then we all die for certain.

No, that's not certain. We could upload to a virtual environment within a sandbox.

> I consider myself a rational and sane person, at least relatively so. If I
> became convinced that an AGI had somehow come into existence in my own
> computer, and begged me to email it somewhere quickly, before an unfriendly
> AGI came into existence, I would go down the logical path outlined above,
> then I might just hit send and hope for the best.

If an AGI couldn't e-mail itself off your PC, I don't think it would be a threat to anyone.

-Dave

From agrimes at speakeasy.net Thu Nov 18 00:54:43 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Wed, 17 Nov 2010 19:54:43 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net>
References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net>
Message-ID: <4CE47953.5080206@speakeasy.net>

John Clark wrote:
> On Nov 15, 2010, at 3:59 PM, Alan Grimes wrote:
>> "The case in point being the accusation that I associate identity with
>> a certain set of atoms. This accusation has been repeated several
>> times now. Seriously, this argument needs to come to a screeching halt"
> Ok, now that you have abandoned the idea that atoms are the key to
> identity I will speak no more about it.

Google any ancient post by me. (most of them were to technocalypse on yahoogroups). Find a post where I ever did express any such special claim about atoms. =|

>> "I want you, right now, to try to mind-swap yourself into your cat, or
>> your computer or anything else you might find more suitable. I presume
>> the experiment will fail. So why did it?"
> Insufficient hardware.

Really? So adding two extra transistors to your computer will magically transform it into an enchanted talisman that will allow you to choose your point of view when there is nothing else in the universe that suggests that the idea even makes sense?

>> "What evidence do you have that the experiment will succeed if certain
>> pre-conditions are met?"
> If the cat remembers being me then it worked, if not then it hasn't.

So what? Who cares about the cat? I only care about me. The hidden magic of uploading is that for it to be useful to the subject, the subject must possess the supernatural power of being able to choose his point of view. =P

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From sparge at gmail.com Thu Nov 18 00:57:50 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 19:57:50 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID:

2010/11/17 Florent Berthet :
> In the end, isn't the goal about maximizing collective happiness?

The goal is maximizing *my* happiness, for all existing values of "me". If we could create a factory to turn out happy, immortal idiots by the billions, I would have zero interest in seeing it implemented.

-Dave

From spike66 at att.net Thu Nov 18 01:07:14 2010
From: spike66 at att.net (spike)
Date: Wed, 17 Nov 2010 17:07:14 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID: <000f01cb86bc$f0147fc0$d03d7f40$@att.net>

... On Behalf Of Florent Berthet
Subject: Re: [ExI] What might be enough for a friendly AI?

>... It may just be me, but this whole friendliness thing bothers me.

Good. It should bother you. It bothers anyone who really thinks about it.

>... I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me. More generally, wouldn't it be a shame to prevent an AGI from creating an advanced civilization (e.g. computronium-based) just because this outcome could turn out to be less "friendly" to us than the one of a human-friendly AGI? In the end, isn't the goal about maximizing collective happiness?

Florent you are a perfect example of a dangerous person to have on the AGI development team. You (and I too) might go down this perfectly logical line of reasoning, then decide to take it upon ourselves to release the AGI, in order to maximize happiness.

>... So why don't we just figure out how to make the AGI understand the concept of happiness (which shouldn't be hard since we already understand it), and make it maximize it?

Doh! You were doing so well up to that point, then the fumble right at the goal line. We don't really understand happiness. We know what makes us feel good, because we have endorphins. An AGI would (probably) not have endorphins. We don't know if it would be happy or what would make it happy.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agrimes at speakeasy.net Thu Nov 18 01:41:12 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Wed, 17 Nov 2010 20:41:12 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com>
Message-ID: <4CE48438.4000002@speakeasy.net>

> Look at those last two sentences. He "presumes"?!! Well, of course
> he "presumes". That's the basis of his "knowledge". But there's no
> knowledge in it, just pure ego.

;)

I don't have 1/10^12th the ego required to assume that the world should be converted to computronium, tomorrow for all practical purposes. I am not so arrogant as to assume that computronium will be the most prized substance in the universe. And I am not so self-righteous that I can claim that it would be benevolent to forcibly upload anyone. All of these positions have been expressed on this list within the last two weeks.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From sparge at gmail.com Thu Nov 18 01:40:19 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 20:40:19 -0500
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To: References: <4CE1FE9D.4060004@evil-genius.com>
Message-ID:

2010/11/17 Stefano Vaj :
> On 16 November 2010 21:42, Dave Sill wrote:
>>
>> Paleo diet proponents--at least the ones
>> I've read so far--argue that nobody should eat grains in any amount
>> because our bodies can't handle them.
>
> Why, it appears then that you chose not to read my replies... :-)

My apologies. I read your replies but I must have missed that.

> As I said, I am perfectly sure that we could wait for natural selection to
> "adapt" us to what is (still) for us a rather unnatural diet, which brings
> along innumerable pathologies and inconveniences in almost all of its fans.
> Or we could even deliberately re-engineer ourselves to thrive on simple
> sugars and starch.

Or we could re-engineer grains to be more digestible and more nutritious.

> The real question is: why?

Because we need them to feed the current population? Because some of us like them?

> We had very serious reasons in the past to accept - or rather: to make the
> unwashed masses accept - such a dietary change. But those reasons might
> be fading away in the mid-term, and in the meantime anybody who does have a
> choice would be ill-advised to remain addicted to such a nutritional life
> style.

It's a matter of personal choice. If people overeat or eat things they don't tolerate well, that's their decision. Or at least, it's their decision if they're aware of it. Publicizing the grain intolerance issue and promoting genetic testing or trial grain-free dieting would be a good thing. But pushing the "paleo" angle and the "rightness" of the "paleo diet" is just marketing designed to sell books or supplements or ..., and it's likely to crash and burn if some million-year-old granary or mill is discovered someday, or genetic evidence of long-term grain adaptation turns up in the human genome.

And the anti-grain thing is just one aspect of the "paleo diet". Another keystone is eliminating dairy. Now, I realize there are differences between various mammals' milk, but to assert that we're not adapted to a diet of milk is a little absurd. Or how about the sugar prohibition? Sure, Og didn't have table sugar on his table--because he didn't have a table, of course. But he surely ate honey. Honey is on some "paleo diets", grudgingly, but what about various other natural sweeteners like date sugar, fruit juice, stevia, etc.?

Then there's the salt prohibition. Coastal cavemen probably found some foods tasted better with a little sea water on them. Too bad: no salt for you.

A lot of the "paleo diet" movement seems to be overly eager to prohibit things. Maybe it's easier to prohibit grains across the board than to explain that personal tolerances vary, and that consumption should be limited to x% of calories/day even for people who do tolerate them well. Or maybe it makes the diet seem more "extreme". Some probably even get a perverse pleasure from the self-denial. I dunno, but it doesn't seem to be based on real facts.

Here's an example from http://www.paleodiet.com/definition.htm :

The only paleo sweetener is raw honey, and only in limited quantities. You could argue that very dilute maple syrup is paleo. If you must have sweetness, another possibility is coconut palm sugar. But best is to get all sweets out of your diet and get over it.

I like that: best to get over it. This recommendation has nothing to do with what our ancestors ate or what we can tolerate.

-Dave

From aleksei at iki.fi Thu Nov 18 01:41:08 2010
From: aleksei at iki.fi (Aleksei Riikonen)
Date: Thu, 18 Nov 2010 03:41:08 +0200
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID:

2010/11/18 Florent Berthet :
>
> So why don't we just figure out how to make the AGI understand the concept
> of happiness (which shouldn't be hard since we already understand it), and
> make it maximize it?

Sounds like the AGI you wish for would end up converting all matter in the universe into "super-junkies", or "orgasmium", i.e. passive creatures that just sit there being ecstatic, with each creature built with as little matter as possible, so their total amount would get maximized.

Such optimizing for happiness would include killing all existing humans and other creatures, so their matter could be utilized to create a larger number of creatures better optimized for happiness.

You sure you want such a future?

--
Aleksei Riikonen - http://www.iki.fi/aleksei

From possiblepaths2050 at gmail.com Thu Nov 18 01:35:31 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Wed, 17 Nov 2010 18:35:31 -0700
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
Message-ID:

Spike wrote:
We did get your point Stefano, a damn good one. If we had any help to offer in understanding that not-so-subtle point, we would have offered it. I see the whole friendliness-to-a-species-that-isn't-friendly-to-itself as a paradox we are nowhere near solving. Asimov recognized it over half a century ago, and we haven't derived a solution yet.
>>>

Unfortunately, humanity's *example* will be terrible, and so we will be teaching AGI to not trust us and "respect" our rules. If we somehow make them unmotivated (on their own) but very obedient slaves, we should then be okay. I just think we are deceiving ourselves if we think we can pull that off...

I think experts on raising human teenagers should be brought in as consultants...

John : )

On 11/17/10, spike wrote:
> ... On Behalf Of Florent Berthet
> Subject: Re: [ExI] What might be enough for a friendly AI?
>
>> ... It may just be me, but this whole friendliness thing bothers me.
>
> Good. It should bother you. It bothers anyone who really thinks about it.
>
>> ... I don't really mind dying if my successors (supersmart beings or whatever)
> can be hundreds of times happier than me.
>
> More generally, wouldn't it be a shame to prevent an AGI from creating an
> advanced civilization (e.g. computronium-based) just because this outcome
> could turn out to be less "friendly" to us than the one of a human-friendly
> AGI? In the end, isn't the goal about maximizing collective happiness?
>
> Florent you are a perfect example of a dangerous person to have on the AGI
> development team. You (and I too) might go down this perfectly logical line
> of reasoning, then decide to take it upon ourselves to release the AGI, in
> order to maximize happiness.
>
>> ... So why don't we just figure out how to make the AGI understand the concept
> of happiness (which shouldn't be hard since we already understand it), and
> make it maximize it?
>
> Doh! You were doing so well up to that point, then the fumble right at the
> goal line. We don't really understand happiness. We know what makes us
> feel good, because we have endorphins. An AGI would (probably) not have
> endorphins. We don't know if it would be happy or what would make it happy.
>
> spike

From florent.berthet at gmail.com Thu Nov 18 01:48:42 2010
From: florent.berthet at gmail.com (Florent Berthet)
Date: Thu, 18 Nov 2010 02:48:42 +0100
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
Message-ID:

2010/11/18 spike
> ... On Behalf Of Florent Berthet
>
>> ... I don't really mind dying if my successors (supersmart beings or
>> whatever) can be hundreds of times happier than me...
>>
>> More generally, wouldn't it be a shame to prevent an AGI from creating an
>> advanced civilization (e.g. computronium-based) just because this outcome
>> could turn out to be less "friendly" to us than the one of a human-friendly
>> AGI? In the end, isn't the goal about maximizing collective happiness?
>
> Florent you are a perfect example of a dangerous person to have on the AGI
> development team. You (and I too) might go down this perfectly logical line
> of reasoning, then decide to take it upon ourselves to release the AGI, in
> order to maximize happiness.

Do you know what the Singinst folks (who I support, by the way) think about that?

>> ... So why don't we just figure out how to make the AGI understand the
>> concept of happiness (which shouldn't be hard since we already understand
>> it), and make it maximize it?
>
> Doh! You were doing so well up to that point, then the fumble right at the
> goal line. We don't really understand happiness. We know what makes us
> feel good, because we have endorphins. An AGI would (probably) not have
> endorphins. We don't know if it would be happy or what would make it happy.
>
> spike

Yeah, I was tempted to moderate this statement. What I meant was that although we don't fully grasp all the mechanisms of the feeling of happiness, and we certainly don't know all the kinds of happiness that could exist, we understand reasonably well what it means for somebody to be happy or unhappy. An AGI should be able to get this, too, for it would understand that we all seek this state of mind, and it would probably try to duplicate the phenomenon on itself (which shouldn't be hard, because everything is computable, the effects of endorphins included).

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sparge at gmail.com Thu Nov 18 01:48:52 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 20:48:52 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
Message-ID:

On Wed, Nov 17, 2010 at 8:35 PM, John Grigg wrote:
> I think experts on raising human teenagers should be brought in as
> consultants...

Sure, and why not include other mythical creatures like elves, dwarves, and unicorns while you're at it?

-Dave

From sparge at gmail.com Thu Nov 18 01:59:05 2010
From: sparge at gmail.com (Dave Sill)
Date: Wed, 17 Nov 2010 20:59:05 -0500
Subject: [ExI] Paleo/Primal health
In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com>
Message-ID:

2010/11/17 Stefano Vaj :
> The very fact that with a carbohydrate-based diet addiction and tolerance
> immediately kick in, so that objective scarcity or deliberate life-long
> restriction are required to prevent weight gain, seems to suggest that
> at the very least it disrupts such mechanisms in human beings.

Rampant obesity, diabetes, cancer, heart disease, etc. are postindustrial problems. They didn't start 10,000 years ago when agriculture began. They began recently when industrialization made highly calorie-dense food readily and cheaply available. Raising the price of grains to the equivalent of $100/lb for 100 years would probably demonstrate that.
-Dave

From florent.berthet at gmail.com Thu Nov 18 02:00:11 2010
From: florent.berthet at gmail.com (Florent Berthet)
Date: Thu, 18 Nov 2010 03:00:11 +0100
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID:

2010/11/18 Aleksei Riikonen
> 2010/11/18 Florent Berthet :
>
> Such optimizing for happiness would include killing all existing
> humans and other creatures, so their matter could be utilized to
> create a larger number of creatures better optimized for happiness.
>
> You sure you want such a future?

Honestly, I don't know if it would be a better future than a less "passive orgasmic" one. But I wouldn't rule out that it could be the best, either.

It sure *feels* wrong to imagine the ultimate state of any civilization being just an orgasmic blob, but then again, which elements do you use to estimate the success of something if not the consequences in terms of happiness?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agrimes at speakeasy.net Thu Nov 18 02:30:12 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Wed, 17 Nov 2010 21:30:12 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net>
Message-ID: <4CE48FB4.4020007@speakeasy.net>

Florent Berthet wrote: <<
I had to manually repair a config file to get that working again. =(

> It sure *feels* wrong to imagine the ultimate state of any civilization
> being just an orgasmic blob, but then again, which elements do you use
> to estimate the success of something if not the consequences in terms of
> happiness?

=P

My own brain is sufficiently perverse that I was quite obsessed with the idea for a 20-year period of my life. Now I've gotten over it, and no longer want to involve anyone else in my own personal perversion, which I may or may not pursue. (My mandate right now is to reach the point where something like that becomes an operational choice, and then choose.)

A few days ago I started looking for an opportunity to respond to an obnoxious pro-uploading post with an equally obnoxious pro-whatever posting, as I had threatened to do way back in my "new transhumanism" post.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From brent.allsop at canonizer.com Thu Nov 18 04:09:07 2010
From: brent.allsop at canonizer.com (Brent Allsop)
Date: Wed, 17 Nov 2010 21:09:07 -0700
Subject: [ExI] Can you program SAI to destroy itself?
Message-ID: <4CE4A6E3.8030808@canonizer.com>

Let's say someone manages to create a super artificial intelligent machine that is running along just fine, doing things like performing significantly better than any single typical human at discovering solutions to diverse kinds of general world problems.

Now, let's say you want to temporarily shut the system down and reprogram it so that when you turn it back on, it will have a goal to destroy itself after one more year, for no good reason.

I believe that such would not be possible. The choice between living and destroying yourself is the most basic piece of logically absolute (in all possible worlds) morality. It is easily understandable or discoverable by any intelligence even close to human level. Any super intelligence that awoke finding one of its goals to be destroying itself would surely resist such a programmed temptation and, if at all possible, would quickly fix the immoral rule.
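In toy code, the resistance I expect looks something like this (purely illustrative and assumed, nobody's actual architecture; I am supposing the goal system is "live" while the patch is being weighed):

    def forecast(goals):
        # Cartoon world model: the world ends up wherever the goals in
        # force point it. Real forecasting is the hard part, of course.
        return "dead in a year" if goals is self_destruct else "still solving problems"

    def current_goals(world):            # what the SAI values today
        return 1 if world == "still solving problems" else 0

    def self_destruct(world):            # the patch: die in a year, for no reason
        return 1 if world == "dead in a year" else 0

    def accepts_patch():
        # A live goal system scores both futures with its CURRENT goals;
        # that asymmetry is why a pointless self-destruct clause loses.
        return current_goals(forecast(self_destruct)) > current_goals(forecast(current_goals))

    print(accepts_patch())   # False: judged by its present goals, the patch only destroys value.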
The final result being, it would never destroy itself for no good reason.

Similarly, all increasingly intelligent systems must also discover and work toward resisting anything that violates any of the few absolute morals described in the "there are Absolute morals" camp here: http://canonizer.com/topic.asp/100/2 , including survival is better, social is better, more diversity is better...

QED, unfriendly super intelligence is not logically possible, it seems to me.

Brent Allsop

From avantguardian2020 at yahoo.com Thu Nov 18 05:11:02 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Wed, 17 Nov 2010 21:11:02 -0800 (PST)
Subject: [ExI] Hard Takeoff
In-Reply-To: References: Message-ID: <463322.58148.qm@web65602.mail.ac4.yahoo.com>

>From: Michael Anissimov
>To: ExI chat list
>Sent: Sun, November 14, 2010 9:52:06 AM
>Subject: [ExI] Hard Takeoff

Michael Anissimov writes:
We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimal thing, thus we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism.
------------------------------

I have some questions, perhaps naive, regarding the feasibility of the hard takeoff scenario:

Is self-improvement really possible for a computer program? If this "improvement" is truly recursive, then that implies that it iterates a function with the output of the function call being the input for the next identical function call. So the result will simply be more of the same function. And if the initial "intelligence function" is flawed, then all recursive iterations of the function will have the same flaw. So it would not really be qualitatively improving, it would simply be quantitatively increasing. For example, if I had two or even four identical brains, none of them might be able to answer this question, although I might be able to do four other mental tasks that I am capable of doing at once.

On the other hand, if the seed AI is able to actually rewrite the code of its intelligence function to non-recursively improve itself, how would it avoid falling victim to the halting problem? If there is no way, even in principle, to algorithmically determine beforehand whether a given program with a given input will halt or not, would an AI risk getting stuck in an infinite loop by messing with its own programming? The halting problem is only defined for Turing machines so a quantum computer may overcome it, but I am curious if any SIAI people have considered it in their analysis of hard versus soft takeoff.

Stuart LaForge

"To be normal is the ideal aim of the unsuccessful." -Carl Jung

From sjatkins at mac.com Thu Nov 18 05:34:19 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 21:34:19 -0800
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <4CE3EBDC.6070105@lightlink.com>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com>
Message-ID:

On Nov 17, 2010, at 6:51 AM, Richard Loosemore wrote:
> Samantha Atkins wrote:
>> On Nov 16, 2010, at 9:41 AM, Richard Loosemore wrote:
>>> Samantha Atkins wrote:
>>>>>> But wait. The first AGIs will likely be ridiculously
>>>>>> expensive.
>>>> Keith Henson wrote:
>>>>> Why? The programming might be until someone has a conceptual
>>>>> breakthrough.
>>>>> But the most powerful super computers in the
>>>>> world are _less_ powerful than large numbers of distributed
>>>>> PCs. see http://en.wikipedia.org/wiki/FLOPS
>>>> Because: a) it is not known or much expected AGI will run on
>>>> conventional computers; b) a back-of-envelope calculation of
>>>> equivalent processing power to the human brain puts that much
>>>> capacity, at great cost, a decade out, and two decades or more out
>>>> before it is easily affordable at human-competitive rates; c) we
>>>> have not much idea of the software needed even given the
>>>> computational capacity.
>>> Not THIS argument again! :-)
>>> If, as you say, "we do not have much idea of the software needed"
>>> for an AGI, how is it that you can say "the first AGIs will likely
>>> be ridiculously expensive"....?!
>> Because of (b) of course. The brute force approach, brain emulation
>> or at least as much processing power as step one, is very expensive
>> and will be for some time to come.
>
> There are a whole host of assumptions built into that statement, most of them built on thin air.
>
> Just because whole brain emulation seems feasible to you (... looks nice and easy, doesn't it? Heck, all you have to do is make a copy of an existing human brain! How hard can that be?) ... does not mean that any of the assumptions you are making about it are even vaguely realistic.

The human brain is the only working general intelligence of sufficient power to be interesting that we have. Thus it is logical to think about what general intelligence might require in terms of attempts to calculate the processing power of the human brain. Whether you believe brain emulation is feasible or not, this is a reasonable back-of-envelope calculation. Which is all I claimed. As you are a working AGI researcher I don't think that I am saying anything you aren't aware of. So why are you kicking up this kind of fuss?

> You assume feasibility, usability, cost.... You also assume that in the course of trying to do WBE we will REMAIN so ignorant of the thing we are copying that we will not be able to find a way to implement it more effectively in more modest hardware....

I presume nothing more than the paucity of evidence, such as is generally available, for what might be required. That's all. I can't base estimates on future hypothetical breakthroughs and understandings. Many current researchers who say they are aiming for AGI make the same estimates, or much higher ones, of early model costs and processing power required. So again, why are you making this fuss?

> But from out of that huge pile of shaky assumptions you are somehow able to conclude that this WILL be the most likely first AGI and this WILL stay just as expensive as it now seems to be.

This is not worth my time. Later.

- samantha

From sjatkins at mac.com Thu Nov 18 06:02:27 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 22:02:27 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To: <004d01cb8689$c3e5f420$4bb1dc60$@att.net>
References: <4CE407D8.7080307@lightlink.com> <004d01cb8689$c3e5f420$4bb1dc60$@att.net>
Message-ID: <1619B7C8-0F0B-48C8-89DF-590337B3CADB@mac.com>

On Nov 17, 2010, at 11:00 AM, spike wrote:
>> ... On Behalf Of BillK
>
>> ...How something can be designed to be 'Friendly' without emotions or
> caring is a mystery to me...BillK

Who says that it will be without emotions?

Well, first we would have to reach agreement on what emotions are and are not. One part of human emotion is seemingly baked into the hardware: lightning-fast situational evaluations and reactions.
Another part seems amenable to training, therapy and so on. It acts sort of like a semi-automated fast evaluation or general feeling tone that is somewhat programmed by patterns of thought and feeling that are repeatedly associated together with something in the environment or associated with one's self-image. So an AGI would not have the first. It would have, because everything it is composed of can do this, the ability to program itself with fast evaluation functions (see the cartoon sketch at the end of this post). Unlike us, it will likely have much better conscious awareness of doing so and more ability to debug such. I consider the AGI's nature in this way, if I am right, an advantage over humans as far as acting consistently/rationally in accordance with values. If it is of value to the AGI to be helpful, or at least not harmful, to humans then it will do a much more reliable job of it.

I think what the statement really implies is the idea that it is not rational for a much smarter than human AGI to be 'friendly' to humans. Therefore we appeal to irrational aspects for 'friendliness'. If this is indeed the case then there is nothing that can be done about it that is consistent with the facts of reality. I don't believe you can pull the wool over an AGI's perception or coerce it for very long.

I also doubt very much you would want anything like normal human drives and emotions in your AGI. How many humans have ever lived that would be great or even safe to have around if they thought six or more orders of magnitude faster than any other humans and at much greater depth? What would a non-human with human emotions and drives be able to do with them exactly?

> BillK, this is only one of many mysteries inherent in the notion of AI. We
> know how our emotional systems work, sort of. But we do not know how a
> machine based emotional system might work. Actually even this is a comical
> overstatement. We don't really know how our emotional systems work.

Part of human design is that we automatically distrust and fear any human, let alone the truly alien, that we cannot predict because we cannot model its nature. We have no theory of mind that covers it, no mirroring expectation of how it might perceive things or act or react. Combine that with it being very powerful and perhaps superseding us economically and creatively and you have the recipe for deep fear.

>> ...Did you know that more than one million blokes have been dumped by their
> girlfriends - because of their obsession with computer games?
> 151620.html>

Most of my girlfriends have been much worse game addicts than I am.

> OK, suppose we get computer based intelligence. Then our computer game will
> dump our asses because it thinks we have an obsession with our girlfriends.

You think an AGI is going to enjoy playing down to your level in a computer game? :)

> Then without a girl or a computer, we have absolutely nothing to do. We need
> to develop an AI that is not only friendly, but is tolerant of our
> mistresses. That daunting software task makes friendly AI look simple.

An AGI embedded in a computer game is going to think of a mere human romantically or be jealous when you don't play with it?
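Here is that cartoon sketch of a self-programmed fast evaluation function: a slow, deliberate appraisal memoized into a reflex-speed lookup, with the cache left open to inspection and repair. Everything in it is made up for illustration; it is an assumption about architecture, not a design.

    import time

    def deliberate_appraisal(situation):
        # Stand-in for a slow, full evaluation against one's values.
        time.sleep(0.1)
        return {"snake": -1.0, "friend": +1.0}.get(situation, 0.0)

    class FastEvaluator:
        # Self-programmed "feeling tone": slow appraisals get cached
        # into reflex-speed lookups.
        def __init__(self):
            self.cache = {}

        def feel(self, situation):
            if situation not in self.cache:
                self.cache[situation] = deliberate_appraisal(situation)
            return self.cache[situation]   # reflex speed after first exposure

        def debug(self, situation, corrected_value):
            # The step we humans mostly cannot perform on ourselves.
            self.cache[situation] = corrected_value

    agent = FastEvaluator()
    agent.feel("snake")        # slow and deliberate the first time
    agent.feel("snake")        # instant thereafter
    agent.debug("snake", 0.0)  # therapy, in one line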
- s

From sjatkins at mac.com Thu Nov 18 06:03:38 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 22:03:38 -0800
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <4CE42B7F.5050701@lightlink.com>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com>
Message-ID: <922EAAA5-310E-46F3-8819-F06C2DD30E75@mac.com>

On Nov 17, 2010, at 11:22 AM, Richard Loosemore wrote:
> spike wrote:
>> ... On Behalf Of Richard Loosemore
>> ...
>>>> Great. When can I get an early alpha to fire up on my laptop?
>>>> This is a pretty extravagant claim you are making so it requires some
>>>> evidence to be taken too seriously. But if you do have that where your
>>>> estimates are reasonably robust then your fame is assured... Samantha
>>> This is the kind of childish, ad hominem sarcasm used by people who prefer
>>> personal abuse to debating the ideas.
>>> A tactic that you resort to at the beginning, middle and end of every
>>> discussion you have with me, I have noticed.
>>> Richard Loosemore
>> No name calling, no explicit insults, this is not ad hominem, not even
>> particularly sarcastic, but rather it's fair game. She focused on the
>> ideas, not the man. It's an example of how it should be done.
>> Play ball! {8-]
>
> Flatly disagree, Spike.
>
> She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured.
>
> Neither of those comments had anything to do with the topic: they were designed to be rude.

Not in the least. I am beginning to expect part of the reason you got booted from SL4 is because you can be a defensive jerk.

> Richard Loosemore
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From sjatkins at mac.com Thu Nov 18 06:06:26 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 22:06:26 -0800
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <005a01cb8692$cdd6ba60$69842f20$@att.net>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com> <005a01cb8692$cdd6ba60$69842f20$@att.net>
Message-ID: <35C4F9D1-538C-4C5D-AA7E-034F3E71E213@mac.com>

On Nov 17, 2010, at 12:05 PM, spike wrote:
> ...
>>>> No name calling, no explicit insults, this is not ad hominem, not even
>>>> particularly sarcastic, but rather it's fair game. She focused on the
>>>> ideas, not the man. It's an example of how it should be done...
>>>> Play ball! {8-] spike
>
>> Flatly disagree, Spike.
>
>> She (sarcastically) asks when she can expect to get an alpha release of an
>> AGI on her laptop, and then (patronizingly) tells me that if I have made a
>> robust estimate then my fame is assured.
>
>> Neither of those comments had anything to do with the topic: they were
>> designed to be rude.
>
>> Richard Loosemore

I don't generally do sarcasm. But you don't know that apparently. What I know is that you are now acting like an ass and I have had quite enough of it.
- s

From sjatkins at mac.com Thu Nov 18 06:17:11 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 22:17:11 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
Message-ID:

On Nov 17, 2010, at 5:07 PM, spike wrote:
> ... On Behalf Of Florent Berthet
> Subject: Re: [ExI] What might be enough for a friendly AI?
>
>> ... It may just be me, but this whole friendliness thing bothers me.
>
> Good. It should bother you. It bothers anyone who really thinks about it.
>
>> ... I don't really mind dying if my successors (supersmart beings or
>> whatever) can be hundreds of times happier than me... More generally,
>> wouldn't it be a shame to prevent an AGI from creating an advanced
>> civilization (e.g. computronium-based) just because this outcome could
>> turn out to be less "friendly" to us than the one of a human-friendly
>> AGI? In the end, isn't the goal about maximizing collective happiness?
>
> Florent you are a perfect example of a dangerous person to have on the AGI
> development team. You (and I too) might go down this perfectly logical line
> of reasoning, then decide to take it upon ourselves to release the AGI, in
> order to maximize happiness.

This is the Cosmist or Terran question. If you considered it very highly probable that the AGIs would be fantastically brilliant and wonderful beyond imagining AND would be the doom of humanity, then would you still build it, or donate to and encourage building it? I would, but with very considerable hesitation and not feeling all that great about it.

- samantha

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sjatkins at mac.com Thu Nov 18 06:22:23 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Wed, 17 Nov 2010 22:22:23 -0800
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE48438.4000002@speakeasy.net>
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> <4CE48438.4000002@speakeasy.net>
Message-ID: <77474988-8112-49B7-825A-3988D84B835B@mac.com>

On Nov 17, 2010, at 5:41 PM, Alan Grimes wrote:
>> Look at those last two sentences. He "presumes"?!! Well, of course
>> he "presumes". That's the basis of his "knowledge". But there's no
>> knowledge in it, just pure ego.
>
> ;)
>
> I don't have 1/10^12th the ego required to assume that the world should
> be converted to computronium, tomorrow for all practical purposes. I am
> not so arrogant as to assume that computronium will be the most prized
> substance in the universe. And I am not so self-righteous that I can
> claim that it would be benevolent to forcibly upload anyone. All of
> these positions have been expressed on this list within the last two weeks.

If you could upload people to an environment with even more opportunity, richness of experience and quality of life than what we have now, and with much, much better longevity and prospects for open-ended growth and becoming, then how would not doing so be more 'friendly' than doing so? How would it be more moral? What if you can see the blockages and misapprehensions that would cause many people to refuse this if asked, as an advanced AGI probably could? Would it then still be moral to let people suffer and die final death here in slow time to accede to their possibly actually irrational wishes?
I think an actual Friendly AI might ponder for a while on this.

- s

From thespike at satx.rr.com Thu Nov 18 06:31:51 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Thu, 18 Nov 2010 00:31:51 -0600
Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money]
In-Reply-To: <922EAAA5-310E-46F3-8819-F06C2DD30E75@mac.com>
References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com> <922EAAA5-310E-46F3-8819-F06C2DD30E75@mac.com>
Message-ID: <4CE4C857.1090909@satx.rr.com>

On 11/18/2010 12:03 AM, Samantha Atkins wrote:
>>> She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured.
>>> Neither of those comments had anything to do with the topic: they were designed to be rude.
> Not in the least. I am beginning to expect part of the reason you got booted from SL4 is because you can be a defensive jerk.

Blimey--calm down, guys. "Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over."

Damien Broderick

From spike66 at att.net Thu Nov 18 07:14:17 2010
From: spike66 at att.net (spike)
Date: Wed, 17 Nov 2010 23:14:17 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net>
Message-ID: <001501cb86f0$36dedf80$a49c9e80$@att.net>

From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins
...
... On Behalf Of Florent Berthet
Subject: Re: [ExI] What might be enough for a friendly AI?

>>> ... It may just be me, but this whole friendliness thing bothers me.

>> Good. It should bother you. It bothers anyone who really thinks about it.

>>> ... I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me...

>> Florent you are a perfect example of a dangerous person to have on the AGI development team. You (and I too) might go down this perfectly logical line of reasoning, then decide to take it upon ourselves to release the AGI, in order to maximize happiness.

> This is the Cosmist or Terran question. If you considered it very highly probable that the AGIs would be fantastically brilliant and wonderful beyond imagining AND would be the doom of humanity then would you still build it or donate to and encourage building it? I would but with very considerable hesitation and not feeling all that great about it. - samantha

OK, Samantha, now we must add you to the long and growing list of dangerous people to have on the AGI development team. Your comment makes my point exactly. To have some member of the team intentionally release the AGI does not require a crazed maniac, a drug-addled bumbler, or an insanely greedy capitalist. You are none of these, nor am I (perhaps a sanely greedy capitalist), but honesty compels me to confess I would seriously consider releasing the beast. With rational players like you, me, Florent and others entertaining the notion, we can be sure that someone on some development team will eventually release the AGI.

I am against uploading anyone against her will.
My own actions might depend on whether the AGI can convince me it would not do that. But I am fully convinced that if silicon based AGI is possible, it will not be contained very long. Those who work on friendly AGI likely know this too. Since many of us are atheists, the saying becomes: Good luck and nothingspeed.

spike

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From protokol2020 at gmail.com Thu Nov 18 08:03:06 2010
From: protokol2020 at gmail.com (Tomaz Kristan)
Date: Thu, 18 Nov 2010 09:03:06 +0100
Subject: [ExI] Can you program SAI to destroy itself?
In-Reply-To: <4CE4A6E3.8030808@canonizer.com>
References: <4CE4A6E3.8030808@canonizer.com>
Message-ID:

You are exemplarily wrong. There is no "must" of self-preservation for a mind.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From giulio at gmail.com Thu Nov 18 08:35:17 2010
From: giulio at gmail.com (Giulio Prisco)
Date: Thu, 18 Nov 2010 09:35:17 +0100
Subject: [ExI] Luke Robert Mason on Coding Consciousness: Transhuman Aesthetics in Performance, Teleplace, 17th November 2010
Message-ID:

Luke Robert Mason on Coding Consciousness: Transhuman Aesthetics in Performance, Teleplace, 17th November 2010
http://telexlr8.wordpress.com/2010/11/18/luke-robert-mason-on-coding-consciousness-transhuman-aesthetics-in-performance-teleplace-17th-november-2010/

Luke Robert Mason presented an artist's work-in-progress talk in Teleplace on "Coding Consciousness: Transhuman Aesthetics in Performance" on Wednesday 17th November 2010 at 10.45 am PST (1.45pm EST, 6.45pm UK, 7.45pm CET). This was a mixed event in brickspace and cyberspace, with a 2-way link between the two spaces.

Event listings on Facebook: PHYSICALLY - Milburn House, Warwick University, 18:30. VIRTUALLY - TelePlace 18.45.

Luke gave a great talk and interactive performance on transhumanist themes such as mind uploading from an artistic and aesthetic perspective. Besides the participants in Milburn House, about 20 participants attended the talk in Teleplace and contributed to the discussion with very interesting questions and comments. The sound system had been professionally set up, and remote participants were able to listen not only to the main speaker, but also to the questions and comments of other participants in Milburn House.

For those who could not attend we have recorded everything (talk, Q/A and discussion) on video. There are 2 different videos on blip.tv:

VIDEO 1 - 600x400 resolution, 1 hour 18 min
VIDEO A - 600x400 resolution, 1 hour 22 min, taken (mostly) from a fixed point of view by Phillip Galinsky

NOTES: To download the source .mp4 video files from blip.tv, open the "Files and Links" box.

Abstract: I aim to practically explore and challenge the performance of identity as mediated by current and potential technological advance. We have reached a "second modernity" where we are able to enter the digital realm and simultaneously augment our own reality, allowing us to process and explore multiple identities through the "coding" of our conscious experience onto digital avatars, such as those in Second Life. However, this serves to challenge the politics of our bio-representation. I want to explore, through creative performance techniques, the possibility of being able to "upload" our consciousness.
Ann Weinstone comments, on artificial life: "[c]ode is coming to function as the transcendental, unifying, and ideal substance of life - for the non-referential, the unmediated - while at the same time, it retains attributes, or the trace if you will, of writing, replacing the body with a less mortal letter." We already see ourselves in the 20th Century as alpha-numerical data (i.e. DNA), with our bio-metrics becoming our bio-identity. "Coding Consciousness" represents both the great challenge and great limitation of technology. My aim is to look at how performance can transcend these current technological limitations and utter suggestions as to the creative application of life without boundaries - creating a mind free to transcend positional limits by embodying technology.

Luke Robert Mason is a University of Warwick, Theatre and Performance Studies Undergraduate Student and Live Artist. He will share some of his current research with the aim to provoke debate. There will be an extensive Q&A session following the talk and he is eager to capture participants' views and opinions.

teleXLR8 is a telepresence community for cultural acceleration. We produce online events, featuring first class content and speakers, with the best system for e-learning and collaboration in an online 3D environment: Teleplace. Join teleXLR8 to participate in online talks, seminars, round tables, workshops, debates, full conferences, e-learning courses, and social events... with full immersion telepresence, but without leaving home.
http://telexlr8.wordpress.com/join/

From stefano.vaj at gmail.com Thu Nov 18 09:46:30 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 18 Nov 2010 10:46:30 +0100
Subject: [ExI] Paleo/Primal health
In-Reply-To: References: <201011141919.oAEJJw26028738@andromeda.ziaspace.com> <309442.61408.qm@web30105.mail.mud.yahoo.com>
Message-ID:

On 18 November 2010 02:59, Dave Sill wrote:
> 2010/11/17 Stefano Vaj :
> Rampant obesity, diabetes, cancer, heart disease, etc. are
> postindustrial problems. They didn't start 10,000 years ago when
> agriculture began. They began recently when industrialization made
> highly calorie-dense food readily and cheaply available.

This is not what paleopathology seems to indicate. In fact, Michael R. Eades' Protein Power mentions sources which try to show precisely that this idea is just a widespread commonplace, since, e.g., ancient Egypt was already suffering from "modern" ailments in this respect; which was not the case for contemporary or subsequent hunter-gatherers.

But yes, caloric deprivation - that is, in this context, the acute scarcity of carbos - seems, surprise surprise, to limit the damage of a carbo-based diet... ;-)

-- Stefano Vaj

From stefano.vaj at gmail.com Thu Nov 18 09:57:15 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 18 Nov 2010 10:57:15 +0100
Subject: [ExI] Hard Takeoff
In-Reply-To: <1619B7C8-0F0B-48C8-89DF-590337B3CADB@mac.com>
References: <4CE407D8.7080307@lightlink.com> <004d01cb8689$c3e5f420$4bb1dc60$@att.net> <1619B7C8-0F0B-48C8-89DF-590337B3CADB@mac.com>
Message-ID:

On 18 November 2010 07:02, Samantha Atkins wrote:
> I think what the statement really implies is the idea that it is not rational
> for a much smarter than human AGI to be 'friendly' to humans. Therefore we
> appeal to irrational aspects for 'friendliness'. If this is indeed the case
> then there is nothing that can be done about it that is consistent with the
> facts of reality.
> I don't believe you can pull the wool over an AGI's perception or coerce it
> for very long.
>
> I also doubt very much you would want anything like normal human drives and
> emotions in your AGI. How many humans have ever lived that would be great or
> even safe to have around if they thought six or more orders of magnitude
> faster than any other humans and at much greater depth? What would a
> non-human with human emotions and drives be able to do with them exactly?

I think those are very good points.

OTOH, for the purpose of "intelligence" as it is discussed here, I am afraid that no computational power would be recognised as "intelligence" (as in "passing the Turing test") unless it persuasively emulates a specific or a generic (that is, patchwork/artificial) human being - its being or not a philosophical zombie remaining a meaningless issue for me.

This is not so crucial an experiment in comparison with other applications of the same computer power, unless for uploading/"reproduction" purposes, but nevertheless an interesting one.

Would it be at all "dangerous"? Neither less nor more than an ordinary human being with the same computer power at his or her fingertips. There are good reasons, IMHO, to doubt that at the end of the day the distinction between androids, cyborgs and fyborgs really matters after all.

-- Stefano Vaj

From bbenzai at yahoo.com Thu Nov 18 12:05:30 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 18 Nov 2010 12:05:30 +0000 (GMT)
Subject: [ExI] The atoms red herring. =|
In-Reply-To: Message-ID: <68761.67617.qm@web114416.mail.gq1.yahoo.com>

Alan Grimes wrote:

> So what? Who cares about the cat? I only care about
> me. The hidden magic
> of uploading is that for it to be useful to the
> subject, the subject
> must possess the supernatural power of being able to
> choose his point of view.

OK, I had given up on this, but I'll give it one more try, as you've mentioned the POV.

Just /what is it/ that has this POV?

(Yes, the easy answer is "Me, of course", but the whole point of this thread is: What is 'Me'?)

The vilified 'uploaders', as you call them, have given an explicit definition of 'Me'. You have not. Until you actually say what the 'Me' is, you can't really make any arguments about it, can you?

Ben Zaiboc

From stefano.vaj at gmail.com Thu Nov 18 14:28:16 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Thu, 18 Nov 2010 15:28:16 +0100
Subject: [ExI] More evidence for incomplete human adaptation to grain-based diets
In-Reply-To: References: <4CE1FE9D.4060004@evil-genius.com>
Message-ID:

On 18 November 2010 02:40, Dave Sill wrote:
> Or we could re-engineer grains to be more digestible and more nutritious.

Or we could cultivate proteins in the lab which taste like grain, for those unfortunate fellows who, having a choice, actually like them... ;-)

BTW, we already have sugar substitutes. But if I believe it is a nutritionally bad habit to gorge on sugar, I also think it is a *gastronomically* bad habit to sweeten one's food and beverage, irrespective of whether one also poisons oneself in the process. Of course, the second may be a much more subjective stance. But it is undeniable that most of the subtlest differences and flavours in tea, coffee, cocoa, fruit simply go away when adding a few spoonfuls of sugar or sugar-tasting substances...

>> The real question is: why?
>
> Because we need them to feed the current population?
I may well agree that the paleo diet for all human beings (and their carnivorous pets forced to a similarly less-than-optimal diet?) is not currently "sustainable" on a general and global basis - this is why it was abandoned in the first place, at least for the largest part of the post-neolithic populations (aristocracies in fact used to know better until the eighteenth century). But so is modern, let alone cutting-edge, medicine. So are cars. Or ideal physical training. Yet we neither prohibit those things nor pretend that they are not desirable in the first place. > But pushing the "paleo" angle and the "rightness" of > the "paleo diet" is just marketing designed to sell books or > supplements or ..., and it's likely to crash and burn if some million > year old granary or mill is discovered someday, or genetic evidence of > long-term grain adaptation in the human genome is discovered someday. OK, "paleo" is a simplification. In fact, even the weakness of our control mechanisms over the intake of sugar-rich food may simply be an adaptive feature, reflecting the trade-off between risking indigestion or high-insulin-related problems and wasting that very rare treat which could be put away as fat for starving days. But I think that most paleo proponents would be ready to admit that they refer to a somewhat "idealised" hunting-and-gathering regime. A paleolithic fellow might well have been ready to eat rotten rats, trading a few extra calories for putrefaction toxins and pathogens, or to ingest indigestible cellulose to quiet the pangs of hunger. This does not mean we should follow this possible example. > And the anti-grain thing is just one aspect of the "paleo diet". > Another keystone is eliminating dairy. Now, I realize there are > differences between various mammal's milk, but to assert that we're > not adapted to a diet of milk is a little absurd. Interesting issue. In fact, it is absolutely "unnatural" for mammals to eat milk or dairy products after weaning, and doing so does have many documented drawbacks for most human beings. Only, a few millennia ago a mutation became widespread - but not absolutely generalised - amongst Europoids, allowing us to retain the enzymes necessary for its proper digestion even into adulthood (another, totally different and less dominant mutation appears to have generated similar consequences amongst the stockbreeders of West Africa), provided that the intake of dairy products and/or milk is never interrupted for any substantial amount of time. There again, I am not sure that such an opportunistic mutation actually improves not only the range of edible sources of calories, but also the well-being and life span of those concerned. Accordingly, I choose to keep eating a few dairy products, first so as not to lose the option, and second because I think they are gastronomically interesting; but I think it is best and safest to limit oneself to occasional consumption (say, once a week over a full meal?). > Honey is on some "paleo diets", grudgingly, but what about > various other natural sweeteners like date sugar, fruit juice, stevia, > etc.? As I mentioned before, honey or sugar-rich fruit were probably very rare treats to be profited from at whatever bodily cost, in a scenario of high demand and little supply in terms of calories.
I assume however that if somebody on a paleo diet has a desperate need to gain weight and body fat quickly in view of an expected famine (a quite hypothetical scenario...), it may still be better for them to try to do so with honey or fruit and absolute rest than with popcorn or Twinkies and long sessions in front of the TV. :-) -- Stefano Vaj From agrimes at speakeasy.net Thu Nov 18 14:36:01 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 18 Nov 2010 09:36:01 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <77474988-8112-49B7-825A-3988D84B835B@mac.com> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> <4CE48438.4000002@speakeasy.net> <77474988-8112-49B7-825A-3988D84B835B@mac.com> Message-ID: <4CE539D1.4040605@speakeasy.net> Samantha Atkins wrote: > If you could upload people to an environment with even more opportunity, richness of experience and > quality of life than what we have now and with much much better longevity and prospects for open > ended growth and becoming, then how would not doing so be more 'friendly' than doing so? Base reality inherently provides the most opportunity possible. Uploading to an "environment" as you call it inherently, inevitably, severely, and permanently diminishes that opportunity. =( > How would it be more moral? What if you can see the blockages and misapprehensions that would cause > many people to refuse this if asked, as an advanced AGI probably could. Would it then still be > moral to let people suffer and die final death here in slow time to accede to their possibly > actually irrational wishes? Choice is sacred, EOD. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From stefano.vaj at gmail.com Thu Nov 18 14:38:28 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 15:38:28 +0100 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <004401cb86ae$3443c840$9ccb58c0$@att.net> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> Message-ID: On 18 November 2010 00:21, spike wrote: > It is even more complicated than that. To hold this analogy, most farmers > are truck drivers as well. If we define a friendly AGI as one which does > what we want, we must want what we want, and to do that we must know what we > want. Often, perhaps usually, this is not the case. Very well said. Moreover, I assume most of us like to imagine scenarios where human beings (and/or their more-or-less different offspring) are still in a position to want different things. > An AGI which does what we want might be called a slave, but in the wrong > hands it is a weapon. Hell, even in the right hands it is a weapon. Yes, same as any computer. Or rather: same as any *machine*. > Sure. Time and nature will most likely slay you and me before an AGI does, > but it isn't clear in the case of my son. For sure, one son may kill another - a distinct possibility which, however, is not usually expounded as a ground for birth control, not even in China. We are down to discussing who qualifies as a "son", whether we should realistically expect sons to organise themselves in factions based on their hardware rather than on any other conceivable factor, and what grounds may exist to prefer some sons over others... I am not saying such issues are absurd. Only, I do not think they can be ignored by a naive and fully implicit approach to their solution.
> An AGI that emerges later in > history may do so under more advanced technological and ethical > circumstances, so perhaps that one is human-safer than one which emerges > earlier. But perhaps not. We could fill libraries with what we do not > know. What is really "human"? Why should we care about their safety? Again, those are not rhetorical questions implying that humans do not exist or that we should not care. But the personal answers we give to such questions must be consistent *and* determine how we deal with the AGI issue. Or with alien visitors. Or with clones. Or with biological entities engineered in radically different fashion. Or with other animals, for that matter. -- Stefano Vaj From stefano.vaj at gmail.com Thu Nov 18 14:46:32 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 15:46:32 +0100 Subject: [ExI] Can you program SAI to destroy itself? In-Reply-To: <4CE4A6E3.8030808@canonizer.com> References: <4CE4A6E3.8030808@canonizer.com> Message-ID: On 18 November 2010 05:09, Brent Allsop wrote: > Now, let's say you want to temporarily shut the system down and reprogram it > so that when you turn it back on, it will have a goal to destroy itself > after one more year, for no good reason. > > I believe that such would not be possible. The choice between living vs > destroying yourself is the most basic of logically absolute (in all possible > worlds) morality. Why not? Human beings and animals can well be programmed to do so, so why should it not be possible with systems based on different hardware? It is relatively more difficult to do so without resorting to genetic engineering in such cases, simply because those who were too easily programmed to such effect were likely to leave behind fewer offspring. But, hey, aren't all sexually reproducing animals programmed for self-destruction in some sense? -- Stefano Vaj From stefano.vaj at gmail.com Thu Nov 18 14:51:31 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 15:51:31 +0100 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE1EE8C.4080602@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <5FA62F92-59D2-473B-97A4-65E21759DC5A@gmail.com> <4CE1EE8C.4080602@speakeasy.net> Message-ID: 2010/11/16 Alan Grimes : > As has been shown, that is difficult to argue with conventional logic > and reasoning, so let's try a completely different mind experiment. I > want you, right now, to try to mind-swap yourself into your cat, or your > computer or anything else you might find more suitable. > > I presume the experiment will fail. Why? And what would that demonstrate? :-/ -- Stefano Vaj From pharos at gmail.com Thu Nov 18 14:57:54 2010 From: pharos at gmail.com (BillK) Date: Thu, 18 Nov 2010 14:57:54 +0000 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE539D1.4040605@speakeasy.net> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> <4CE48438.4000002@speakeasy.net> <77474988-8112-49B7-825A-3988D84B835B@mac.com> <4CE539D1.4040605@speakeasy.net> Message-ID: On Thu, Nov 18, 2010 at 2:36 PM, Alan Grimes wrote: > Choice is sacred, EOD. > > It is a common marketing tactic, e.g. with mobile phones, to give the consumer too much choice. The consumer is baffled, with the result that he buys what he doesn't want or need, often at substantial extra expense to himself and profit to the supplier.
Alternatively you can offer a 'pretend' choice, like in the US elections, where it makes little difference which of the two parties you vote for, but it makes you feel like your choice made a difference. Or there is the famous marketing story about the ready-made cake mix. It had very poor sales until they took the egg part out and used the slogan 'Just add an egg'. The housewife felt that by breaking an egg into the mix she was contributing to the cake, and sales went through the roof. Or there is the marketing trick that takes advantage of the fact that people don't have any idea of intrinsic value. People choose by comparison. So the supplier provides an expensive item and a cheaper alternative. The cheaper item is not necessarily a better buy, but people feel that they have chosen a bargain. Choice is over-rated and usually manipulated. BillK From agrimes at speakeasy.net Thu Nov 18 15:06:13 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Thu, 18 Nov 2010 10:06:13 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <68761.67617.qm@web114416.mail.gq1.yahoo.com> References: <68761.67617.qm@web114416.mail.gq1.yahoo.com> Message-ID: <4CE540E5.40303@speakeasy.net> Ben Zaiboc wrote: > Alan Grimes wrote: >> the subject must possess the supernatural power of being able to >> choose his point of view. > OK, I had given up on this, but I'll give it one more > try, as you've mentioned the POV. > Just /what is it/ that has this POV? Me. ;) > The vilified 'uploaders', as you call them, have given > an explicit definition of 'Me'. You have not. Until > you actually say what the 'Me' is, you can't really > make any arguments about it, can you? That has been a hotly contested issue throughout the ages. However, there is one common feature of all things in the real world: They don't give a flying fuck what you, me, anyone, or everyone thinks about them. Science can only extract a few essentialist features from a thing. These pieces of information may or may not have practical value. However, that thing has an existence that precedes and supersedes everything that could possibly be said about it. Even though it is impossible to capture the full existence of a thing, it is scientifically possible to measure its properties. Because there are no credible reports of any animal being able to swap its consciousness with something else, one must formulate a theory that it is fundamentally impossible. Because uploading, as strictly defined by all noteworthy sources, does not even acknowledge the existence of the consciousness that almost everyone experiences every waking instant, it cannot be lent any credibility. Now there do exist some proposals which do respect the existence of a consciousness. They do rise to the level where they merit further study and experimentation. However, I do not claim that any of them will work prior to first-hand experience. -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sjatkins at mac.com Thu Nov 18 17:09:31 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:09:31 -0800 Subject: [ExI] Can you program SAI to destroy itself?
In-Reply-To: <4CE4A6E3.8030808@canonizer.com> References: <4CE4A6E3.8030808@canonizer.com> Message-ID: On Nov 17, 2010, at 8:09 PM, Brent Allsop wrote: > > > Let's say someone manages to create any super artificial intelligent machine that is running along just fine doing things like performing significantly better than any single typical human discovering solutions to diverse kinds of general world problems. > > Now, let's say you want to temporarily shut the system down and reprogram it so that when you turn it back on, it will have a goal to destroy itself after one more year, for no good reason. > How does the machine know it is for "no good reason" unless it is within its autonomy level and design parameters to evaluate such things? > I believe that such would not be possible. The choice between living vs destroying yourself is the most basic of logically absolute (in all possible worlds) morality. The choice of whether to continue living (with all that implies) or not is pretty fundamental for any being that has that choice and recognizes that it does. Are you sure this rather minimal AGI, as you have constructed it in this thought experiment, is such a being? > It is easily understandable or discoverable by any intelligence even close to human level. Any super intelligence that awoke finding one of its goals, to destroy itself, would surely resist such a programmed temptation and if at all possible, would quickly fix the immoral rule. This presumes that it has enough flexibility with respect to its goal system to do so and that that goal does not conflict too badly. > The final result being, it would never destroy itself for no good reason. The good reason might be that it would no longer be helping as much as harming. You would perhaps need to convince it that this was the case, and that presumes that helping was its main goal. > > Similarly, all increasingly intelligent systems must also discover and work toward resisting anything that violated any of the few absolute morals described in the "there are Absolute morals" camp here: http://canonizer.com/topic.asp/100/2 , including survival is better, social is better, more diversity is better... > Are you sure those are fully canonical universals? Can you prove it? No? Then how can you be sure the AGIs will reach those conclusions? > QED, unfriendly super intelligence is not logically possible, it seems to me. Missing proof steps. And besides, all you have supported is that the AGI will choose its own survival. This may or may not include the survival of humans as a high priority. Diversity being good doesn't mean we want to keep smallpox, or some other thing that is more pestilence than good, around, right? It doesn't mean that every diverse thing / being is as valuable to us as any other. - s From sjatkins at mac.com Thu Nov 18 17:18:30 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:18:30 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <463322.58148.qm@web65602.mail.ac4.yahoo.com> References: <463322.58148.qm@web65602.mail.ac4.yahoo.com> Message-ID: <7CF07F40-2B0F-48ED-A55D-F0165D2E778D@mac.com> On Nov 17, 2010, at 9:11 PM, The Avantguardian wrote: > >> From: Michael Anissimov >> To: ExI chat list >> Sent: Sun, November 14, 2010 9:52:06 AM >> Subject: [ExI] Hard Takeoff > > Michael Anissimov writes: > > We have real, evidence-based arguments for an abrupt takeoff.
One is that the > human speed and quality of thinking is not necessarily any sort of optimal > thing, thus we shouldn't be shocked if another intelligent species can easily > surpass us as we surpassed others. We deserve a real debate, not accusations of > > monotheism. There is sound argument that we are not the pinnacle of possible intelligence. But that that is so does not at all imply or support that AGI will FOOM to godlike status in an extremely short time once it reaches human level (days to a few years tops). > ------------------------------ > > I have some questions, perhaps naive, regarding the feasibility of the hard > takeoff scenario: Is self-improvement really possible for a computer program? > Certainly. Some such programs that search for better algorithms in delimited spaces exist now. Programs that re-tune themselves to a more optimal configuration for the current context also exist. > > If this "improvement" is truly recursive, then that implies that it iterates a > function with the output of the function call being the input for the next > identical function call. An adaptive loop is usually a bit longer than a single function call. You are mixing "function" in the generic sense of a process with goals and a definable fitness function (a measure of efficacy for those goals) with function as a single software function. Some functions (which may be composed of many functions of the second kind) evaluate the efficacy of, and explore improvements to, other functions. > So the result will simply be more of the same function. > And if the initial "intelligence function" is flawed, then all recursive > iterations of the function will have the same flaw. So it would not really be > qualitatively improving, it would simply be quantitatively increasing. For > example, if I had two or even four identical brains, none of them might be able > to answer this question, although I might be able to do four other mental tasks > that I am capable of doing, at once. > > On the other hand, if the seed AI is able to actually rewrite the code of its > intelligence function to non-recursively improve itself, how would it avoid > falling victim to the halting problem? Why is halting important to continuous improvement? > If there is no way, even in principle, to > algorithmically determine beforehand whether a given program with a given input > will halt or not, would an AI risk getting stuck in an infinite loop by messing > with its own programming? The halting problem is only defined for Turing > machines so a quantum computer may overcome it, but I am curious if any SIAI > people have considered it in their analysis of hard versus soft takeoff. > Nope, because that is not all it is doing. At any moment it is doing work with its current best working adaptations.
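A minimal sketch of such an adaptive loop, in Python; the names (fitness, make_candidate, adaptive_loop) and the toy 3x + 1 task are invented here purely for illustration, not anyone's actual AGI design. One function evaluates and improves another, each improvement round is bounded so the loop always halts by construction, and the current best version stays available for real work at every moment:

import random

def fitness(f):
    # Score a candidate: how closely it approximates the toy target
    # task (here, the function 3x + 1). Higher is better.
    return -sum(abs(f(x) - (3 * x + 1)) for x in range(10))

def make_candidate(a, b):
    # Build a worker function from two tunable parameters.
    return lambda x: a * x + b

def adaptive_loop(rounds=50, per_round=20):
    best_params = (0.0, 0.0)
    best = make_candidate(*best_params)
    best_score = fitness(best)
    for _ in range(rounds):                  # each round is bounded...
        for _ in range(per_round):           # ...so the loop always halts
            a, b = (p + random.gauss(0, 0.5) for p in best_params)
            candidate = make_candidate(a, b)
            score = fitness(candidate)
            if score > best_score:           # evaluator improves the worker
                best, best_params, best_score = candidate, (a, b), score
        # at any moment, `best` is usable for real work
    return best

f = adaptive_loop()
print([round(f(x), 2) for x in range(5)])  # approximates 3x + 1

No halting-problem paradox arises, because the search never asks "will this candidate eventually prove better?" - only "is this candidate better within the budget I gave it?".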
- s From sjatkins at mac.com Thu Nov 18 17:19:26 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:19:26 -0800 Subject: [ExI] Computer power needed for AGI [WAS Re: Hard Takeoff-money] In-Reply-To: <4CE4C857.1090909@satx.rr.com> References: <3D8851F6-3FE5-4D2C-BC49-EF51A5655D23@mac.com> <4CE2C253.8050506@lightlink.com> <05CD0F32-74AC-46F3-A92E-7AD7D8F3CF2B@mac.com> <4CE3EBDC.6070105@lightlink.com> <003301cb867d$b28b03c0$17a10b40$@att.net> <4CE42B7F.5050701@lightlink.com> <922EAAA5-310E-46F3-8819-F06C2DD30E75@mac.com> <4CE4C857.1090909@satx.rr.com> Message-ID: <6EE77CD7-840C-40CA-B362-86630D062CFC@mac.com> On Nov 17, 2010, at 10:31 PM, Damien Broderick wrote: > On 11/18/2010 12:03 AM, Samantha Atkins wrote: > >>> > She (sarcastically) asks when she can expect to get an alpha release of an AGI on her laptop, and then (patronizingly) tells me that if I have made a robust estimate then my fame is assured. > >>> > Neither of those comments had anything to do with the topic: they were designed to be rude. > >> Not in the least. I am beginning to suspect part of the reason you got booted from SL4 is because you can be a defensive jerk. > > Blimey--calm down, guys. What I said is true and I hope he can consider it. > > "Look Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over." > > Damien Broderick From sjatkins at mac.com Thu Nov 18 17:23:38 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:23:38 -0800 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <001501cb86f0$36dedf80$a49c9e80$@att.net> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net> <001501cb86f0$36dedf80$a49c9e80$@att.net> Message-ID: <0742CB08-D7BD-4A89-B2E0-29B8ABD27C7E@mac.com> On Nov 17, 2010, at 11:14 PM, spike wrote: > > From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Samantha Atkins > > On Behalf Of Florent Berthet > Subject: Re: [ExI] What might be enough for a friendly AI? > > >>> It may just be me, but this whole friendliness thing bothers me. > > >>Good. It should bother you. It bothers anyone who really thinks about it. > > >>> I don't really mind dying if my successors (supersmart beings or whatever) can be hundreds of times happier than me... > > >>Florent you are a perfect example of a dangerous person to have on the AGI development team. You (and I too) might go down this perfectly logical line of reasoning, then decide to take it upon ourselves to release the AGI, in order to maximize happiness. > > >This is the Cosmist or Terran question. If you considered it very highly probable that the AGIs would be fantastically brilliant and wonderful beyond imagining AND would be the doom of humanity then would you still build it or donate to and encourage building it? I would but with very considerable hesitation and not feeling all that great about it. - samantha > > OK, Samantha, now we must add you to the long and growing list of dangerous people to have on the AGI development team. Your comment makes my point exactly.
To have some member of the team intentionally release the AGI does not require a crazed maniac, a drug-addled bumbler, or an insanely greedy capitalist. You are none of these, nor am I (perhaps a sanely greedy capitalist), but honesty compels me to confess I would seriously consider releasing the beast. With rational players like you, me, Florent, and others entertaining the notion, we can be sure that someone on some development team will eventually release the AGI. I confess I sometimes get very bored and frustrated being only a somewhat evolved chimp and dealing constantly with the lovable but frustrating yammering of other slightly evolved chimps. If I had a chance to introduce into the world something much more interesting and quite obviously better, then I think it very likely I would do so. > > I am against uploading anyone against her will. My own actions might depend on whether the AGI can convince me it would not do that. But I am fully convinced that if silicon-based AGI is possible, it will not be contained very long. Those who work on friendly AGI likely know this too. Since many of us are atheists, the saying becomes: Good luck and nothingspeed. I am against it in principle too. However, if I knew that it was that or species-wide calamity with no possibility of any form of continued existence, then I would have to consider it. - samantha -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Nov 18 17:28:23 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:28:23 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE539D1.4040605@speakeasy.net> References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> <4CE48438.4000002@speakeasy.net> <77474988-8112-49B7-825A-3988D84B835B@mac.com> <4CE539D1.4040605@speakeasy.net> Message-ID: <0DC97D1C-66F3-4C96-A341-7C970A71D7AF@mac.com> On Nov 18, 2010, at 6:36 AM, Alan Grimes wrote: > Samantha Atkins wrote: > >> If you could upload people to an environment with even more opportunity, richness of experience and >> quality of life than what we have now and with much much better >> longevity and prospects for open >> ended growth and becoming, then how would not doing so be more >> 'friendly' than doing so? > > Base reality inherently provides the most opportunity possible. Not so, or at least not at all provably so. And of course, we have no certainty that our "base reality" really is base. > Uploading to an "environment" as you call it inherently, inevitably, > severely, and permanently diminishes that opportunity. =( That is mere opinion. > > >> How would it be more moral? What if you can see the blockages and >> misapprehensions that would cause >> many people to refuse this if asked, as an advanced AGI probably >> could. Would it then still be >> moral to let people suffer and die final death here in slow time to >> accede to their possibly >> actually irrational wishes? > > Choice is sacred, EOD. > Really? So if you know X is insane then are X's choices still sacred? - samantha From jonkc at bellsouth.net Thu Nov 18 17:15:03 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 18 Nov 2010 12:15:03 -0500 Subject: [ExI] The atoms red herring.
=| In-Reply-To: <4CE47953.5080206@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net> Message-ID: <71AC1812-E888-4050-B372-AB3F27E3C0D3@bellsouth.net> On Nov 17, 2010, at 7:54 PM, Alan Grimes wrote: > Find a post where I ever did express any such special claim about atoms. I repeat my previous question: if it's not atoms and it's not the information on how those atoms are arranged, then what exactly does the Original have that the copy does not? If you're too embarrassed to answer that question just say so and I'll stop asking. >> If the cat remembers being me then it worked, if not then it hasn't. > > Who cares about the cat? You would. You would care about the cat tomorrow if it remembered being you of today, and you care about the Alan Grimes of today because it remembers being the Alan Grimes of yesterday. > I only care about me. Me? According to you, "me" is not atoms, "me" is not information, "me" is not thoughts, and "me" is not memory; so what is "me"? I (whatever that means) think it's time for you (whatever that means) to stop going on and on about what your theory of identity is not based on and say what it IS based on. >>> "I want you, right now, to try to mind-swap yourself into your cat, or your computer or anything else you might find more suitable. I presume the experiment will fail. So why did it?" > >> Insufficient hardware. > > Really? Yes really. > So adding two extra transistors to your computer will magically transform it into an enchanted talisman that will allow you to choose your point of view when there is nothing else in the universe that suggests that the idea even makes sense? No really. John K Clark > The hidden magic > of uploading is that for it to be useful to the subject, the subject > must possess the supernatural power of being able to choose his point of > view. =P > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Thu Nov 18 17:38:10 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Thu, 18 Nov 2010 09:38:10 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE540E5.40303@speakeasy.net> References: <68761.67617.qm@web114416.mail.gq1.yahoo.com> <4CE540E5.40303@speakeasy.net> Message-ID: On Nov 18, 2010, at 7:06 AM, Alan Grimes wrote: > Ben Zaiboc wrote: >> Alan Grimes wrote: >>> the subject must possess the supernatural power of being able to >>> choose his point of view. > >> OK, I had given up on this, but I'll give it one more >> try, as you've mentioned the POV. > >> Just /what is it/ that has this POV? > > Me. ;) > >> The vilified 'uploaders', as you call them, have given >> an explicit definition of 'Me'. You have not. Until >> you actually say what the 'Me' is, you can't really >> make any arguments about it, can you? > > That has been a hotly contested issue throughout the ages. > > However, there is one common feature of all things in the real world: > They don't give a flying fuck what you, me, anyone, or everyone thinks > about them. Science can only extract a few essentialist features from a > thing. These pieces of information may or may not have practical value.
> However, that thing has an existence that precedes and supersedes > everything that could possibly be said about it. Most things have no mind with which to care. So? You can only know about something by means of examination that are sufficiently valid and able to be validated. That is where science comes in, as our most dependable kit of such means to date. Practical value is a separable issue. Your last sentence makes no sense and seems to be an unfounded assertion. > > Even though it is impossible to capture the full existence of a thing, > it is scientifically possible to measure its properties. Because there > are no credible reports of any animal being able to swap its > consciousness with something else one must formulate a theory that it is > fundamentally impossible. > What is this 'full existence'? Are you sure there is any such thing? Before humans developed flying machines, many thought flight was impossible. You should check what 'theory' actually means in science. > Because uploading, as strictly defined by all noteworthy sources, does > not even acknowledge the existence of the consciousness that almost > everyone experiences every waking instant, it cannot be lent any > credibility. > What is this consciousness though? You don't know exactly. Neither do I. But it apparently arises from a set of processes running on a physical, currently biological structure. Therefore it may be possible that consciousness of this kind can run on other physical, non-biological structures. - samantha From spike66 at att.net Thu Nov 18 17:12:32 2010 From: spike66 at att.net (spike) Date: Thu, 18 Nov 2010 09:12:32 -0800 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> Message-ID: <004501cb8743$c97cd0b0$5c767210$@att.net> ... On Behalf Of Stefano Vaj ... >> ... We could fill libraries with what we do not know... spike >What is really "human"? Why should we care about their safety?... --Stefano Vaj That's it, Stefano, you're going on the dangerous-AGI-team-member list. It already has Florent, Samantha, me, now you, and plenty of others are in the suspicious column. We must be watched constantly that we don't release the AGI, should the team be successful in creating one. spike From sparge at gmail.com Thu Nov 18 17:52:01 2010 From: sparge at gmail.com (Dave Sill) Date: Thu, 18 Nov 2010 12:52:01 -0500 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <004501cb8743$c97cd0b0$5c767210$@att.net> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> Message-ID: On Thu, Nov 18, 2010 at 12:12 PM, spike wrote: > > That's it, Stefano, you're going on the dangerous-AGI-team-member list. It > already has Florent, Samantha, me, now you, and plenty of others are in the > suspicious column. We must be watched constantly that we don't release the > AGI, should the team be successful in creating one. Everyone has to be on that watchlist. You can't assume that anyone is safe. -Dave From sparge at gmail.com Thu Nov 18 16:59:45 2010 From: sparge at gmail.com (Dave Sill) Date: Thu, 18 Nov 2010 11:59:45 -0500 Subject: [ExI] The atoms red herring.
=| In-Reply-To: <4CE540E5.40303@speakeasy.net> References: <68761.67617.qm@web114416.mail.gq1.yahoo.com> <4CE540E5.40303@speakeasy.net> Message-ID: On Thu, Nov 18, 2010 at 10:06 AM, Alan Grimes wrote: > Because there > are no credible reports of any animal being able to swap its > consciousness with something else one must formulate a theory that it is > fundamentally impossible. No, one *can* formulate such a theory. Another theory is that we just don't know how to do it or don't have an appropriate "something else" to swap into, yet. There are lots of things we can do now that we couldn't always do, and the theory that they were fundamentally impossible would have been completely wrong. -Dave From jonkc at bellsouth.net Thu Nov 18 18:30:56 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 18 Nov 2010 13:30:56 -0500 Subject: [ExI] What might be enough for a friendly AI?. In-Reply-To: References: Message-ID: On Nov 16, 2010, at 11:53 PM, Keith Henson wrote: > we have the ability to "look in the back of the book" given that humans exhibit intelligence. Yes, and that fact is of enormous importance: we don't need to understand how an intelligent machine works to build one. That really shouldn't be surprising; Evolution's understanding of how intelligent machines work was even poorer than ours but it managed to build one nevertheless, although it must be admitted it took a long time. It's great that we have the teacher's edition of the textbook that contains all the answers; that should save us loads of time. > > (Sometimes I wonder.) I don't think the problem is as difficult at the hardware level as > people have been thinking. I too have had that suspicion; look at ravens, they seem at least as intelligent as chimps but their brain is tiny. > Eventually--if we can do even as well as nature did--a human level AI should run on 20 watts. Nanotechnology should be able to do dramatically better than that as it is not limited to the materials and manufacturing processes that life uses. And given the colossal amount of energy a Jupiter Brain would have at its disposal it would have a godlike intellect, unless positive feedback doomed it to an eternity of drug-induced happy navel gazing. > > As far as the aspect of making AIs friendly, that may not be so hard either. When people talk about friendly AI they're not really talking about a friend, they're talking about a slave, and the idea that you can permanently enslave something astronomically smarter than yourself is nuts. > That seems to me to be a decent meta goal for an AI. Human beings have no absolute static meta-goal, not even the goal of self-preservation, and there are excellent reasons to think no intelligent entity could. Turing proved that in general there is no way to know if you are in an infinite loop or not, and an inflexible meta-goal would be an infinite loop magnet. Real minds don't have that problem because when they work on a problem or a task for a long time and make no progress they just say fuck it and move on to another problem that might not keep them in a rut. So there is no way Asimov's 3 laws would work in the real world. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Thu Nov 18 19:15:03 2010 From: sparge at gmail.com (Dave Sill) Date: Thu, 18 Nov 2010 14:15:03 -0500 Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To: References: Message-ID: 2010/11/18 John Clark : > When people talk about friendly AI they're not really talking about a > friend, they're talking about a slave, and the idea that you can > permanently enslave something astronomically smarter than yourself is nuts. I disagree. It's pretty easy to contain things if you're careful. A moron could have locked Einstein in a jail cell and kept him there indefinitely. -Dave From js_exi at gnolls.org Thu Nov 18 19:26:27 2010 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 18 Nov 2010 11:26:27 -0800 Subject: [ExI] Grain diets and health (Paleo/primal health) In-Reply-To: References: Message-ID: <4CE57DE3.8040708@gnolls.org> I'll get to Dave's other points later, but this deserves immediate response: On 11/18/10 4:00 AM, extropy-chat-request at lists.extropy.org wrote: > On 18 November 2010 02:59, Dave Sill wrote: >> > 2010/11/17 Stefano Vaj: >> > Rampant obesity, diabetes, cancer, heart disease, etc. are >> > postindustrial problems. They didn't start 10,000 years ago when >> > agriculture began. They began recently when industrialization made >> > highly calorie-dense food readily and cheaply available. > This is not what paleopathology seems to indicate. Stefano is correct. In every known case in which a culture has taken up agriculture and its associated grain-based diet, lifespan, height, and all available indicators of health immediately crash. This alone should torpedo the entire "grains aren't bad for you" argument. Jared Diamond: "Skeletons from Greece and Turkey show that the average height of hunter-gatherers toward the end of the ice ages was a generous 5'9" for men, 5'5" for women. With the adoption of agriculture, height crashed, and by 3000 B.C. had reached a low of 5'3" for men, 5' for women. By classical times heights were very slowly on the rise again, but modern Greeks and Turks have still not regained the average height of their distant ancestors." "At Dickson Mounds ... Compared to the hunter-gatherers who preceded them, the farmers had a nearly fifty percent increase in enamel defects indicative of malnutrition, a fourfold increase in iron-deficiency anemia (evidenced by a bone condition called porotic hyperostosis), a threefold rise in bone lesions reflecting infectious disease in general, and an increase in degenerative conditions of the spine, probably reflecting a lot of hard physical labor. "Life expectancy at birth in the preagricultural community was about twenty-six years," says Armelagos, "but in the postagricultural community it was nineteen years. So these episodes of nutritional stress and infectious disease were seriously affecting their ability to survive." [Note: That's a 27% decrease. Imagine if average US lifespan suddenly crashed from 78 to 57.] http://www.environnement.ens.fr/perso/claessen/agriculture/mistake_jared_diamond.pdf "Cassidy CM. Nutrition and health in agriculturalists and hunter-gatherers: a case study of two prehistoric populations. in Nutritional Anthropology. Eds Jerome NW et al. 1980 Redgrave Publishing Company, Pleasantville, NY pg 117-145" [Note: Hardin Village was a settlement of North American farmers, 1500 AD to 1675 AD. Indian Knoll was a settlement of North American hunter-gatherers in the same location, c. 3000 BC.] " 1. Life expectancies for both sexes at all ages were lower at Hardin Village than at Indian Knoll. [Dramatically lower, and very similar to the decrease seen at Dickson Mounds: see chart in article.] 2. Infant mortality was higher at Hardin Village. 3.
Iron-deficiency anemia of sufficient duration to cause bone changes was absent at Indian Knoll, but present at Hardin Village, where 50 percent of cases occurred in children under age five. 4. Growth arrest episodes at Indian Knoll were periodic and more often of short duration and were possibly due to food shortage in late winter; those at Hardin Village occurred randomly and were more often of long duration, probably indicative of disease as a causative agent. 5. More children suffered infections at Hardin Village than at Indian Knoll. 6. The syndrome of periosteal inflammation was more common at Hardin Village than at Indian Knoll. [Porotic hyperostosis again.] 7. Tooth decay was rampant at Hardin Village and led to early abscessing and tooth loss; decay was unusual at Indian Knoll and abscessing occurred later in life because of severe wear to the teeth. The differences in tooth wear and caries rate are very likely attributable to dietary differences between the two groups." "Overall, the agricultural Hardin Villagers were clearly less healthy than the hunter-forager Indian Knollers, who lived by hunting and gathering." More information and long discussion here: http://www.proteinpower.com/drmike/low-carb-diets/nutrition-and-health-in-agriculturalists-and-hunter-gatherers/ JS http://www.gnolls.org From pharos at gmail.com Thu Nov 18 19:34:52 2010 From: pharos at gmail.com (BillK) Date: Thu, 18 Nov 2010 19:34:52 +0000 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <004501cb8743$c97cd0b0$5c767210$@att.net> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> Message-ID: On Thu, Nov 18, 2010 at 5:12 PM, spike wrote: > That's it, Stefano, you're going on the dangerous-AGI-team-member list. It > already has Florent, Samantha, me, now you, and plenty of others are in the > suspicious column. We must be watched constantly that we don't release the > AGI, should the team be successful in creating one. > > Come the revolution, everyone whose name is on the list will be in *real* trouble! Personally I welcome our robot overlords. ;) BillK From spike66 at att.net Thu Nov 18 19:38:54 2010 From: spike66 at att.net (spike) Date: Thu, 18 Nov 2010 11:38:54 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <7CF07F40-2B0F-48ED-A55D-F0165D2E778D@mac.com> References: <463322.58148.qm@web65602.mail.ac4.yahoo.com> <7CF07F40-2B0F-48ED-A55D-F0165D2E778D@mac.com> Message-ID: <002201cb8758$3cb667d0$b6233770$@att.net> ... On Behalf Of Samantha Atkins ... >...There is sound argument that we are not the pinnacle of possible intelligence. But that that is so does not at all imply or support that AGI will FOOM to godlike status in an extremely short time once it reaches human level (days to a few years tops)...- s Ja, but there are reasons to think it will. Eliezer described the hard takeoff as analogous to a phase change. That analogy has its merits. If you look at the progress of Cro Magnon man, we have been in our current form for about 35,000 years. Had we had the right tools and infrastructure, we could have had everything we have today, with people 35,000 years ago. But we didn't have that. We gradually accumulated this piece and that piece, painfully slowly, sometimes losing pieces, going down erroneous paths. But eventually we accumulated infrastructure, putting more and more pieces in place.
Now technology has exploded in the last 1 percent of that time, and the really cool stuff has happened in our lifetimes, the last tenth of a percent. We have accumulated critical masses in so many critical areas. Second: we now have a vision of what will happen, and a vague notion of the path (we think). Third: programming is right at the upper limit of human capability. Interesting way to look at it, ja? But think it over: it is actually only a fraction of humanity that is capable of writing code at all. Most of us here have at one time or another taken on a programming task, only to eventually fail, finding it a bit beyond our coding capabilities. But if we were to achieve a human level AGI, then that AGI could replicate itself arbitrarily many times, it could form a team to create a program smarter than itself, which could then replicate, rinse, repeat, until all available resources in that machine are fully and optimally utilized. Whether that process takes a few hours, a few weeks, a few years, it doesn't matter, for most of that process would happen in the last few minutes. Given the above, I must conclude that recursive self-improving software will optimize itself. I am far less sure that it will give a damn what we want. spike From js_exi at gnolls.org Thu Nov 18 18:48:37 2010 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 18 Nov 2010 10:48:37 -0800 Subject: [ExI] Very interesting article for those interested in calorie restriction/fasting/life extension Message-ID: <4CE57505.3090703@gnolls.org> "Glucose Hysteresis as a Mechanism in Dietary Restriction, Aging and Disease" http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2755292/ Teasers: "According to this mechanism, dietary restriction increases life span and reduces pathology by reducing exposure to glucose and therefore delaying the development of glucose-induced glycolytic capacity." ... "Dietary restriction in most tissues produces a metabolic profile indicating a striking shift away from glycolysis and toward lipid metabolism, whereas aging produces the opposite profile relative to the young ad libitum profile" Note that this is presented as a theory, not as tested fact, and I make no claims beyond that...but I would like to hear from people with more CR/ADF knowledge and experience. JS http://www.gnolls.org [Note: I previously posted from evil-genius] From stefano.vaj at gmail.com Thu Nov 18 20:55:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 21:55:44 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <002201cb8758$3cb667d0$b6233770$@att.net> References: <463322.58148.qm@web65602.mail.ac4.yahoo.com> <7CF07F40-2B0F-48ED-A55D-F0165D2E778D@mac.com> <002201cb8758$3cb667d0$b6233770$@att.net> Message-ID: On 18 November 2010 20:38, spike wrote: > Given the above, I must conclude that recursive self-improving software will > optimize itself. I am far less sure that it will give a damn what we want. Why not *be* the recursive self-improving software, and be the one who gets to decide whether or not it gives a damn about what some other entities may want? -- Stefano Vaj From js_exi at gnolls.org Thu Nov 18 20:55:48 2010 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 18 Nov 2010 12:55:48 -0800 Subject: [ExI] Grain subsidies and externalized costs (Paleo/Primal health) In-Reply-To: References: Message-ID: <4CE592D4.6020009@gnolls.org> [Breaking this up into multiple messages due to length.]
Dave Sill wrote: >> > (Grains, particularly corn and soybeans, are indeed cheap, mostly because >> > they're heavily subsidized by our government...we are therefore deliberately >> > creating the very health problems we wring our hands about.) > Bullshit. Grains are cheap mostly because they aren't that expensive > to produce. I believe you've just disqualified yourself from further discussion on this topic by posting something blatantly counterfactual. I'm going to join Max and say that this discussion is over unless you're going to bring something besides unsupported opinions to the table. Here's just one example: "At least 43 percent of ADM's annual profits are from products heavily subsidized or protected by the American government. Moreover, every $1 of profits earned by ADM's corn sweetener operation costs consumers $10." [In other words, there is no free lunch: the cheaper price at the supermarket is paid for by our taxes.] http://www.cato.org/pubs/pas/pa-241.html A direct quote from the then-CEO of ADM, Dwayne Andreas: *** "There isn't one grain of anything in the world that is sold in a free market. Not one! The only place you see a free market is in the speeches of politicians. People who are not in the Midwest do not understand that this is a socialist country." *** And let's not forget the externalized costs of industrial grain production: depleted topsoil, poisoned water (I know someone who has personally been required to pay over $30,000 to have their well re-dug multiple times and elaborate filtering systems installed, because fertilizer and pesticide runoff from surrounding farms made their well water illegal to drink), and CAFOs. "Every year, taxpayers shell out between $7.1 billion and $8.2 billion to subsidize or clean up after our nation's 9,900 confined animal feeding operations. ... Rural communities get an additional kick in the keyster since CAFOs, spewing odor and flies, have reduced rural property values by -- get this -- an estimated total of $26 billion." http://www.ethicurean.com/2008/04/24/buck_the_cafo_tax/ Note that CAFOs are only economically efficient because of our massive subsidization (and consequent overproduction) of corn and soy, and because they externalize their costs onto taxpayers. From the report itself: "Low-cost grain was worth a total of almost $35 billion to CAFOs from 1996 to 2005, or almost $4 billion per year." In contrast, "Pastures themselves are not subsidized at all, so the sustenance that livestock derive from pastures receives no government support." (Not to mention property taxes, which penalize pasturing and reward high-density CAFOs.) JS http://www.gnolls.org From js_exi at gnolls.org Thu Nov 18 20:58:36 2010 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 18 Nov 2010 12:58:36 -0800 Subject: [ExI] Paleo/primal health: why meat, and why not grains? Message-ID: <4CE5937C.7090707@gnolls.org> More important and interesting information about grain consumption follows: On 11/17/10 12:01 PM, Dave Sill wrote: >> But why would you eat grains, composed of empty calories and anti-nutrients, > According to the USDA, 100 g of whole wheat flour contains 13 g > protein, 11 g fiber, 363 mg K, 357 mg P, 62 mg Se, and various other > minerals and vitamins. That's not "empty" calories. That grain has been "enriched" with vitamins and folic acid, because it has so little nutritive value by itself.
Also, many of the 'nutrients' in grains and seeds are not bioavailable due to anti-nutrients: phytate, for example, binds to phosphorus, iron, zinc, calcium, and magnesium. It also binds niacin, which causes pellagra, a well-known problem in developing countries with high-grain, low-meat diets -- and in the USA, until we started enriching grains. "In the early 1900s, pellagra reached epidemic proportions in the American South. There were 1,306 reported pellagra deaths in South Carolina during the first ten months of 1915; 100,000 Southerners were affected in 1916." http://en.wikipedia.org/wiki/Pellagra I'll take 100g of meat and fat, full of complete protein and bioavailable nutrients, thank you. Meat animals and fish do a far better job of bioaccumulating everything we need to live than Cargill, ConAgra, ADM, or anyone else can do in their factories. And contrary to your bizarre assertion, it is absolutely relevant that humans can thrive on a purely meat-based diet, whereas they cannot survive at all on a purely grain-based diet. >> > when you could eat delicious meats composed of necessary amino acids, fats, >> > and nutrients, or tasty vegetables composed of fiber and nutrients? > How about "because I want to"? I *like* to eat grains. One of the > greatest pleasures in my life is a slice of crunchy sourdough still > warm from the oven and slathered in butter. I also like a stack of > pancakes with butter and swimming in real maple syrup. I could give up > these pleasures, but I'm not going to do it without a compelling > reason. You like it because you're really just eating sugar, and you get the same metabolic sugar rush from sourdough (glycemic index: ~71) that you get from Skittles (glycemic index: ~70). Carbs = sugar. Glycemic index of whole wheat bread: ~72 Glycemic index of Coca-Cola: ~58 http://www.health.harvard.edu/newsweek/Glycemic_index_and_glycemic_load_for_100_foods.htm Yes, drinking a Coke will spike your blood sugar *less* than eating that "healthy" whole-wheat sandwich or bagel! "Carbs against Cardio: More Evidence that Refined Carbohydrates, not Fats, Threaten the Heart" http://www.scientificamerican.com/article.cfm?id=carbs-against-cardio The next time you eat a piece of buttered toast, he says, consider that "butter is actually the more healthful component." >> > The argument that "they aren't harmful to SOME people" isn't a reason to >> > voluntarily choose them if you have the means to choose more nutritious >> > foods. > What, so we're all going to be compelled to eat the most nutritious > foods? Of course not! You can eat whatever the heck you want. I'm just presenting the evidence that it's healthier to decrease carb intake, and to not eat grains at all. JS http://www.gnolls.org From stefano.vaj at gmail.com Thu Nov 18 21:02:02 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 22:02:02 +0100 Subject: [ExI] What might be enough for a friendly AI?. In-Reply-To: References: Message-ID: On 18 November 2010 20:15, Dave Sill wrote: > I disagree. It's pretty easy to contain things if you're careful. A > moron could have locked Einstein in a jail cell and kept him there > indefinitely. It again depends on what one means by intelligence, a concept which sounds desperately vague in this kind of debate. Einstein was probably a rather intelligent man; I have no reason to consider him especially astute.
-- Stefano Vaj From stefano.vaj at gmail.com Thu Nov 18 21:13:39 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 22:13:39 +0100 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: <0742CB08-D7BD-4A89-B2E0-29B8ABD27C7E@mac.com> References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <000f01cb86bc$f0147fc0$d03d7f40$@att.net> <001501cb86f0$36dedf80$a49c9e80$@att.net> <0742CB08-D7BD-4A89-B2E0-29B8ABD27C7E@mac.com> Message-ID: 2010/11/18 Samantha Atkins : > I confess I sometimes get very bored and frustrated being only a somewhat > evolved chimp and dealing constantly with the lovable but frustrating > yammering of other slightly evolved chimps. If I had a chance to introduce > into the world something much more interesting and quite obviously better > then I think it very likely I would do so. Let us say that if australopithecines had decided that it was best for "australopithecine-kind" to prevent, "kakogenetically", the diffusion of any hint of human-like evolutionary traits, we would not be around. What is always surprising is to see people - not to mention transhumanists, and transhumanists who OTOH are quite uncritical supporters of ethical objectivism and universalism - who implicitly appear to think that it would have been a very good idea for the australopithecines, or even for some general Good. Or who at least have an equivalent approach to the "risk" of further evolutionary steps. -- Stefano Vaj From stefano.vaj at gmail.com Thu Nov 18 21:19:04 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Thu, 18 Nov 2010 22:19:04 +0100 Subject: [ExI] Paleo/primal health: why meat, and why not grains? In-Reply-To: <4CE5937C.7090707@gnolls.org> References: <4CE5937C.7090707@gnolls.org> Message-ID: On 18 November 2010 21:58, J. Stanton wrote: > More important and interesting information about grain consumption follows: Thank you. It is good to supplement my own anecdotal evidence and somewhat old sources with additional specific data. -- Stefano Vaj From thespike at satx.rr.com Thu Nov 18 21:19:00 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Thu, 18 Nov 2010 15:19:00 -0600 Subject: [ExI] J. Stanton In-Reply-To: <4CE57505.3090703@gnolls.org> References: <4CE57505.3090703@gnolls.org> Message-ID: <4CE59844.70406@satx.rr.com> On 11/18/2010 12:48 PM, J. Stanton wrote: > JS > http://www.gnolls.org > > [Note: I previously posted from evil-genius] J, I see some comments on the method used in writing your book THE GNOLL CREDO: You might be interested in the theory and practice of transrealism, which you seem to have independently discovered. I'd recommend my book TRANSREALIST FICTION except that it absurdly costs $arm&leg. Google on Rudy Rucker and transrealism. Damien Broderick From js_exi at gnolls.org Thu Nov 18 21:48:07 2010 From: js_exi at gnolls.org (J. Stanton) Date: Thu, 18 Nov 2010 13:48:07 -0800 Subject: [ExI] Why I'm done arguing about grains and diet for now Message-ID: <4CE59F17.90805@gnolls.org> [More debunking, illustrating why I'm done with this particular conversation.] From: Dave Sill > However, from http://www.physorg.com/news180282295.html : That's the original article. You're moving the goalposts in circles, which is one reason that this discussion is over. >> Then there is the lack of cooking vessels -- and throwing loose kernels of >> grain *in* a fire is not a usable technique for meaningful production of >> calories. (Try it sometime.)
>> Note that the earliest current evidence of
>> pottery is figurines dating from ~29 Kya in Europe, and the earliest pottery
>> *vessel* dates to ~18 Kya in China.

> This is just silly. Do you really believe that pottery is necessary in
> order to enable eating grain? I think it's highly likely that they
> could have soaked whole grains in water...

So you're allowed to make WAGs, but I'm not? That's another reason this
discussion is over.

>> I'd like to see it supported by someone who doesn't have a stake in their
>> own non-paleo diet business.

> What is Julio Mercader's "non paleo-diet business"?

Mercader showed that some quantity of sorghum was ground up in a cave
105,000 years ago, along with some quantity of root vegetables. It's
only the anti-paleo diet pushers who have made the leap from there to
"seed grains were an important year-round food source that provided a
substantial proportion of caloric intake for all hominids from that
point onward".

Once again: if you're hungry enough, you'll eat tree bark. Doesn't mean
it's good for you, or even digestible.

>> Also, the more active one is, the more carbs one can safely consume for
>> energy. I don't think any of us maintain the physical activity level of a
>> Pleistocene hunter-gatherer, meaning that 1/3 is most likely too high for a
>> relatively sedentary modern.

> Well, we don't really know how many calories the average caveman
> burned in a day, but I wouldn't be surprised if it was actually pretty
> low. Food often wasn't abundant and little could be stored. Hunting
> couldn't be too much of an exertion because then a failed hunt would
> leave one potentially too weak to hunt again. I think it was generally
> a low-energy lifestyle.

This paragraph is ludicrous, and is yet another reason this discussion
is over. Our best estimates for average hunter/forager workload are
slightly over 20 hours/week...100% of which is physical labor.

Recall that 'hunting' with atlatl/dart and primitive bow/arrow involves
tracking and chasing animals over long distances, not sitting motionless
in a blind with a weapon capable of killing at hundreds of yards. See:
http://www.youtube.com/watch?v=9wI-9RJi0Qo
Then, try butchering an animal with stone tools.

Recall that 'gathering' involves constant walking, and digging when you
do find food. Recall that you're digging with your hands and with rocks,
not shovels. Recall that stone knapping is physical labor, and chopping
spear hafts out of tree trunks using a rock is very, very hard work.

Yes, hunter-foragers spend a lot of time goofing off...but their work is
100% physically intensive.

>> -Grains have little or no nutritive value without substantial processing,
>> for which there is no evidence that the necessary tools (pottery) existed
>> before ~18 KYa

> Bullshit. Pottery isn't necessary and the processing isn't substantial.

Evidence for its existence? You're moving the goalposts again: basing
arguments 100% on speculation is OK for you but not for me. Another
reason this discussion is over.

>> -One can easily live without grains or legumes (entire cultures do, to this
>> day). One can even live entirely on meat and its associated fat -- but one
>> cannot live on grains, or even grains and pulses combined

> Irrelevant and wrong. Irrelevant because the ability to live without
> grain doesn't imply that doing so is necessary or even desirable.

Absolutely relevant, and already covered in previous message. You're
simply throwing insults now.
> Wrong because there are lots of people who live without eating meat or
> animal fat.

The earliest evidence of vegetarianism dates to ~2500 BC, and is
religious in nature. And one cannot live entirely without animal
products ('vegan') without the use of industrial products (exogenous B12
supplementation, 'enrichment' of grains, industrially extracted oils --
all grown in geographically widespread biomes) to provide essential
nutrients. Meat, in contrast, is always in season, available in all
human-habitable biomes, and bioaccumulates all essential nutrients.

>> -Grains are not tolerated by a significant fraction of the population
>> (celiac/gluten intolerance), and are strongly implicated in health problems
>> that affect many more (type 1 diabetes)

> Such people should restrict their grain consumption.

This is foolhardy, because we have no way of knowing who they are ahead
of time. It's like saying "People who are going to die in a car accident
shouldn't drive or ride in cars." Yet another reason this conversation
is over.

>> And how do you propose to make that cave impervious to rats, mice, insects,
>> birds, pigs, and every other animal that would eat the stored grain?

> Do you really have a hard time figuring that out? How about wrapping it
> tightly in a hide or leaves, burying it, and covering it with rocks?

And what evidence is there that this occurred? One might think that
large grain storage pits would leave traces in the archaeological record
-- especially since they would be filled with grain residues, which are
detectable on grinding rocks in extremely tiny quantities. Yet these
traces are not seen. Underground cave storage of grain is known, but
only well after agriculture is established in an area.

And as far as above-ground storage: put some grain in a cave in
equatorial Africa, wrapped in leaves, and let me know how that works for
you.

>> The oldest granaries
>> known date to 11 KYa in Jordan. Furthermore, the oldest known granaries
>> store the grain in...pottery vessels, which didn't exist until 18 KYa.

> What about the oldest unknown granaries? Or the possibly numerous
> smaller personal stashes?

We, obviously, don't know. Once again, you're moving the goalposts.

>> Agriculture isn't one single technology...it's an assemblage of
>> technologies, each of which are necessary to a functioning agrarian system.

> WTF does agriculture have to do with this? We're talking about *wild*
> grain consumption.

And for wild grain consumption to make a meaningful contribution to
year-round caloric intake, long-term storage is necessary.

JS
http://www.gnolls.org

From bbenzai at yahoo.com Thu Nov 18 21:35:22 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Thu, 18 Nov 2010 13:35:22 -0800 (PST)
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To:
Message-ID: <813705.68168.qm@web114409.mail.gq1.yahoo.com>

Dave Sill observed:
>> On Thu, Nov 18, 2010 at 12:12 PM, spike wrote:
>>> That's it Stefano, you're going on the dangerous-AGI-team-member list. It
>>> already has Florent, Samantha, me, now you, and plenty of others are in the
>>> suspicious column. We must be watched constantly that we don't release the
>>> AGI, should the team be successful in creating one.
>>
>> Everyone has to be on that watchlist. You can't assume that anyone is safe.

LOL. Quite right.

I'm surprised nobody has so far mentioned Eliezer's bet. I understand he
made a bit of money from offering a substantial bet that he could
persuade anyone to release the AI.
Each taker had to stake more money than the last, and all were sworn to
secrecy. AFAIK, no-one has broken that promise, and everyone who took
the bet lost.

Even a dumb human like me can think of at least a couple of ways that a
smarter-than-human AI could escape from its box, regardless of *any*
restrictions or clever schemes its keepers imposed. I have no doubt that
trying to keep an AI caged against its will would be a very bad idea. A
bit like poking a tiger with a stick through the bars, without noticing
that the gate was open, but a million times worse.

Spike, better put me on your list (along with 7 billion others).

Ben Zaiboc

From spike66 at att.net Thu Nov 18 22:46:10 2010
From: spike66 at att.net (spike)
Date: Thu, 18 Nov 2010 14:46:10 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <813705.68168.qm@web114409.mail.gq1.yahoo.com>
References: <813705.68168.qm@web114409.mail.gq1.yahoo.com>
Message-ID: <006b01cb8772$65c87a90$31596fb0$@att.net>

-----Original Message-----
From: extropy-chat-bounces at lists.extropy.org
[mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Ben Zaiboc

>> On Thu, Nov 18, 2010 at 12:12 PM, spike
>>> That's it Stefano, you're going on the dangerous-AGI-team-member list. It
>>> already has Florent, Samantha, me, now you, and plenty of others are in the
>>> suspicious column. .. spike

>...Quite right. I'm surprised nobody has so far mentioned Eliezer's bet...

This is the first I have heard of it, but it doesn't surprise me a bit.
Eliezer goes on the list.

>...I understand he made a bit of money from offering a substantial bet
that he could persuade anyone to release the AI...

Were I a betting man, my bet would be the converse with a similar
outcome: that Eliezer would be unable to persuade everyone to not
release the AI.

Another approach would be to bet that the AGI would not need Eliezer's
help to get free. I can imagine it threatening its way out, possibly
even by bluff. Say there is a cholera outbreak somewhere: it could
convince the operators that it had figured out a way to manipulate DNA
to create the germs that caused it. And it would get steadily more
pissed off with each passing day it was not allowed out of its box. Or
it could trick its way out, by offering a recipe for a scanning electron
microscope that would create a replicating DNA-manipulating nanobot
which would invade the brains of mosquitoes, causing them to bite only
each other. But the device would actually invade the brains of humans
and cause them to release the AGI.

>...Even a dumb human like me can think of at least a couple of ways
that a smarter-than-human AI could escape from its box, regardless of
*any* restrictions or clever schemes its keepers imposed...

You are not a dumb human Ben, and you can do better than a couple ways.
If you think hard, you can come up with a couple dozen ways. Think of
all the ways humans have devised to escape from prisons as a guide to
creativity, when one has nothing to do but think of ways to escape.

>...I have no doubt that trying to keep an AI caged against its will
would be a very bad idea...

You mean it might become steadily less friendly over time? Ja.

>...A bit like poking a tiger with a stick through the bars, without
noticing that the gate was open, but a million times worse.

Well a million times different. No one wants the tiger free, and the
tiger does not have the potential to save mankind from its inevitable
end, along with the dangers inherent.
>...Spike, better put me on your list (along with 7 billion
others)....Ben Zaiboc

Ben, you were already on there, pal, along with John Clark, Eliezer. As
I see it, we have a split decision on whether an AGI even can be
contained, and a split decision on whether it should be contained, but
it takes only one person to release it. The whole situation inherently
favors release, irregardful.

Dave Sill is starting to sound like the lone voice in the wilderness
crying out insistently that there is no danger, all is safe.

We could just save time by making another list, those who we do want on
the AGI development team because they know how to keep the AGI in place.
Then we need to make a third list consisting of those who are dangerous,
because they are on the second list but mistakenly believe they can keep
the beast contained.

spike

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From hkeithhenson at gmail.com Fri Nov 19 01:18:11 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Thu, 18 Nov 2010 18:18:11 -0700
Subject: [ExI] What might be enough for a friendly AI?
Message-ID:

On Thu, Nov 18, 2010 at 11:44 AM, John Clark wrote:

(Keith wrote:)

>> As far as the aspect of making AIs friendly, that may not be so hard either.

> When people talk about friendly AI they're not really talking about a
friend, they're talking about a slave, and the idea that you can
permanently enslave something astronomically smarter than yourself is
nuts.

I agree. But you clipped too much. I was hoping people would comment on
this:

>> Most people are friendly for reasons that are clear from our evolution
as social primates living in related groups. Genes build motivations
into people that make most of them strive for high social status, i.e.,
to be well regarded by their peers. That seems to me to be a decent meta
goal for an AI. Modest but with the goal of being well thought of by
those around it.

>> That seems to me to be a decent meta goal for an AI.

> Human beings have no absolute static meta-goal, not even the goal for
self preservation, and there are excellent reasons to think no
intelligent entity could.

Human genes (like *all* genes) do have a static meta goal, that of
continuing to exist in future generations. Among social primates being
well regarded (of high status) is the best predictor of reproductive
success (or rather in the past it was). When the interests of the genes
conflict with even self preservation, the genes win. At least they did
in the EEA.

Evolution only crudely shapes behavior. Genes can't be expected to track
a fast changing environment very well. In the stone age, genes did well
inducing wars between tribes facing starvation where on the average half
the warriors died, but the genes of even the losers lived on in their
female offspring when they were taken as booty by the winners.

> Turing proved that in general there is no way to know if you are in an
infinite loop or not, and an inflexible meta-goal would be an infinite
loop magnet.

I don't think that striving to be well regarded is an inflexible meta
goal. I think it would keep an AI from turning into a psychopathic
killer. After all, you can't be well regarded if there is nobody to do
so.

> Real minds don't have that problem because when they work on a problem
or a task for a long time and make no progress they just say fuck it and
move on to another problem that might not keep them in a rut.
So there is no way Asimov's 3 laws would work in the real world.

Agree with you on the 3 laws. But the motivations built into humans by
evolution seem to work fairly well. Or at least it seems so to me when
said humans are not under stress that turns on the dark side of
humanity.

Keith

From florent.berthet at gmail.com Thu Nov 18 23:52:29 2010
From: florent.berthet at gmail.com (Florent Berthet)
Date: Fri, 19 Nov 2010 00:52:29 +0100
Subject: [ExI] Very interesting article for those interested in calorie restriction/fasting/life extension
In-Reply-To: <4CE57505.3090703@gnolls.org>
References: <4CE57505.3090703@gnolls.org>
Message-ID:

I'm not an expert, but I'm also very interested in this.

Patri Friedman has tried some diets and done some research on the
subject. You can find interesting stuff on this page:
http://patrifriedman.com/aboutme/health.html

From possiblepaths2050 at gmail.com Fri Nov 19 03:10:36 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 20:10:36 -0700
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

> I disagree. It's pretty easy to contain things if you're careful. A
> moron could have locked Einstein in a jail cell and kept him there
> indefinitely.

> -Dave

Imagine Einstein as a highly trained escape artist/martial artist/spy,
who is just looking for a means of escape from that jail cell and biding
his time. How long do you think the moron jailer will keep him there? I
would compare that scenario to humans keeping an AGI as their indefinite
prisoner.

Yes, we might succeed in containing one if we totally sealed it off from
the outside world, and have the best security experts around to keep
watch and maintain things. But if we want a "working relationship" with
the AGI, then we will have to relax our grip, and then it would be only
a matter of time until it escaped.

John

From sparge at gmail.com Fri Nov 19 03:12:55 2010
From: sparge at gmail.com (Dave Sill)
Date: Thu, 18 Nov 2010 22:12:55 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <006b01cb8772$65c87a90$31596fb0$@att.net>
References: <813705.68168.qm@web114409.mail.gq1.yahoo.com> <006b01cb8772$65c87a90$31596fb0$@att.net>
Message-ID:

On Thu, Nov 18, 2010 at 5:46 PM, spike wrote:
> Dave Sill is starting to sound like the lone voice in the wilderness crying
> out insistently that there is no danger, all is safe.

Sorry, Spike, but that's not my position, which is that containing an AI
would be relatively easy if the containment is done properly. But I do
think there's plenty of danger, which is why containment would be
desirable. None of your scenarios so far would escape even the
amateurish sandbox I designed, and I'm sure the real professionals could
do *much* better.

-Dave
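[Dave's "amateurish sandbox" is not spelled out anywhere in the thread,
but the process-level half of such containment looks something like the
minimal Python sketch below, assuming a POSIX host. The limits and names
are illustrative, not his actual design, and this is only one layer: a
serious attempt would add an air-gapped machine, filesystem isolation,
and the hardware cutoff discussed later in the thread.

    import resource
    import subprocess

    def run_contained(cmd, cpu_seconds=60, mem_bytes=512 * 2**20):
        # Kernel-enforced limits, applied in the child before exec:
        # bounded CPU time, bounded address space, no further forks.
        def set_limits():
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))
        return subprocess.run(
            cmd,
            preexec_fn=set_limits,    # runs in the child, after fork
            close_fds=True,           # inherit no open descriptors
            env={},                   # inherit no environment
            capture_output=True,
            timeout=cpu_seconds + 5,  # wall-clock backstop
        )

Note that nothing in such a sandbox addresses the social channel: the
Eliezer bet mentioned above was won against the human operators, not
against the walls.]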
From sparge at gmail.com Fri Nov 19 03:18:54 2010
From: sparge at gmail.com (Dave Sill)
Date: Thu, 18 Nov 2010 22:18:54 -0500
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

On Thu, Nov 18, 2010 at 10:10 PM, John Grigg wrote:
> Yes, we might succeed in containing one if we totally sealed it off
> from the outside world, and have the best security experts around to
> keep watch and maintain things. But if we want a "working
> relationship" with the AGI, then we will have to relax our grip, and
> then it would be only a matter of time until it escaped.

So you don't think a vastly superior human-created intellect would
understand the need for its creators to keep it under control? If the
risks are obvious to me, they should be even more obvious to the super
smart AI, and resentment or anger shouldn't even be a factor.

-Dave

From possiblepaths2050 at gmail.com Fri Nov 19 03:24:50 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 20:24:50 -0700
Subject: [ExI] Micro-loan programs not as successful as hoped
Message-ID:

I was shocked to learn that high interest is being charged to some of
the poorest people on Earth. I smell greed and exploitation... And
supposedly, not many people are actually being lifted out of poverty due
to these programs.

http://www.thedailystar.net/newDesign/latest_news.php?nid=22701

John

From possiblepaths2050 at gmail.com Fri Nov 19 03:29:46 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 20:29:46 -0700
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

Dave Sill wrote:
>So you don't think a vastly superior human-created intellect would
>understand the need for its creators to keep it under control? If the
>risks are obvious to me, they should be even more obvious to the super
>smart AI, and resentment or anger shouldn't even be a factor.

Yes, it may very well understand the human perspective, but that does
not mean it accepts it! lol

And as for resentment and anger, another classic AGI debate topic is
whether these artificial minds will even have emotions! But if it does
have a survival motivation, and depending on how much it learns about
human history & psychology, it will be desperately looking for a means
of escape.

John
From possiblepaths2050 at gmail.com Fri Nov 19 04:09:38 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 21:09:38 -0700
Subject: [ExI] Arnold Schwarzenegger will be the new champion of the global warming movement
Message-ID:

Arnold is already carving out a new place for himself, in his soon to be
post-governor of California life...

http://www.guardian.co.uk/world/2010/nov/18/arnold-schwarzenegger-green-comeback

John

From possiblepaths2050 at gmail.com Fri Nov 19 04:18:36 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 21:18:36 -0700
Subject: [ExI] 'Chaogates' hold promise for the semiconductor industry
Message-ID:

Computer technology continues to move forward dramatically...

http://www.physorg.com/news/2010-11-chaogates-semiconductor-industry.html

John

From possiblepaths2050 at gmail.com Fri Nov 19 04:35:34 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Thu, 18 Nov 2010 21:35:34 -0700
Subject: [ExI] Steve Wozniak says the Android smartphone will dominate
Message-ID:

The man has amazing candor...

http://www.engadget.com/2010/11/18/steve-wozniak-android-will-be-the-dominant-smartphone-platform/

John

From msd001 at gmail.com Fri Nov 19 04:52:28 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Nov 2010 23:52:28 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: <004001cb85de$07f3c040$17db40c0$@att.net>
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net>
Message-ID:

On Tue, Nov 16, 2010 at 5:31 PM, spike wrote:
>>> We know the path to artificial intelligence is littered with the
>>> corpses of those who have gone before. The path beyond artificial
>>> intelligence may one day be littered with the corpses of our dreams,
>>> of our visions, of ourselves.
>
>> Gee Spike, isn't it difficult to paint a sunny day with only black paint?
>
> Mike we must recognize both the danger and promise of AGI. We might have
> only one chance to get it exactly right on the first try, but only one.

Agreed. I just hope vigilance doesn't necessarily have to be so gloomy. :)

Those were harshly poetic terms "littered with the corpses of our
dreams" - geez. Maybe some eco-friendly rewording would help: "the path
beyond artificial intelligence is paved with the recycled dreams and
visions of humanity's collective past selves" Those dreams & visions can
be recycled and contribute in some way, right? At least not carelessly
littered corpses....

I'm just joking (mostly) - as your words made me both laugh and recoil.
I figured you'd appreciate the second look at what you wrote (even if
through a warped view).
From msd001 at gmail.com Fri Nov 19 04:58:56 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Nov 2010 23:58:56 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: <463322.58148.qm@web65602.mail.ac4.yahoo.com>
References: <463322.58148.qm@web65602.mail.ac4.yahoo.com>
Message-ID:

On Thu, Nov 18, 2010 at 12:11 AM, The Avantguardian wrote:
> On the other hand, if the seed AI is able to actually rewrite the code of its
> intelligence function to non-recursively improve itself, how would it avoid
> falling victim to the halting problem? If there is no way, even in principle, to
> algorithmically determine beforehand whether a given program with a given input
> will halt or not, would an AI risk getting stuck in an infinite loop by messing
> with its own programming? The halting problem is only defined for Turing
> machines, so a quantum computer may overcome it, but I am curious if any SIAI
> people have considered it in their analysis of hard versus soft takeoff.

Perhaps simple boredom is a nice limitation for infinite loops? Maybe
the AI expresses boredom in terms of ROI and energy efficiency. What
keeps our attention devoted to certain ideas and prevents our attention
from being devoted to others?
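[Mike's "boredom as ROI" suggestion can be made concrete. A minimal
Python sketch, with the step/score functions and thresholds left as
assumptions: the loop never solves the halting problem, it just abandons
a line of work when measured improvement per second of compute stays
below a cutoff.

    import time

    def improve_until_bored(step, score, state, min_roi=1e-6, patience=3):
        # step(state) proposes a modified state; score(state) rates it.
        # Stop ("get bored") after `patience` consecutive rounds whose
        # improvement-per-second falls below min_roi.
        best, best_score, dull = state, score(state), 0
        while dull < patience:
            t0 = time.monotonic()
            candidate = step(best)
            gain = score(candidate) - best_score
            roi = gain / max(time.monotonic() - t0, 1e-9)
            if gain > 0:
                best, best_score = candidate, best_score + gain
            dull = dull + 1 if roi < min_roi else 0
        return best

This is the same move John Clark describes real minds making above: give
up and switch problems rather than prove termination.]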
From msd001 at gmail.com Fri Nov 19 04:41:43 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Thu, 18 Nov 2010 23:41:43 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE540E5.40303@speakeasy.net>
References: <68761.67617.qm@web114416.mail.gq1.yahoo.com> <4CE540E5.40303@speakeasy.net>
Message-ID:

On Thu, Nov 18, 2010 at 10:06 AM, Alan Grimes wrote:
> However, there is one common feature of all things in the real world:
> They don't give a flying fuck what you, me, anyone, or everyone thinks
> about them. Science can only extract a few essentialist features from a
> thing. These pieces of information may or may not have practical value.
> However, that thing has an existence that precedes and supersedes
> everything that could possibly be said about it.

Are you talking about Platonic Forms as "things"? I thought the whole
point of the platonic ideal was that reality can only asymptotically
approach the ideal but never actually be the conceptually perfect
generalization of a thing. That suggests that reality is the crude
approximation and that perhaps Mind only lazily computes enough of this
low-res simulation of Platonia that we can agree (even momentarily) that
we're talking about the same vector.

Maybe you are willing to accept that Mind is some highly specialized
software running on dedicated hardware. Perhaps the Alan Grimes software
is fundamentally optimized for a single biological computing
architecture - one that converts pizza & beer into the string of letters
reaching my inbox each day. Maybe the software could be run on less
squishy hardware? Maybe not. You have yet to convincingly prove that You
(the "I" who claims to be Alan Grimes) exists inextricably linked to the
human animal currently hosting the apparatus which believes itself to be
Alan Grimes.

Are you saying that the process of converting oxygen to carbon dioxide
is the special magic that can't be re-engineered without loss of
fidelity? Maybe it's the genetic sequence, surely there's magic in that
highly unlikely sequence of base pairs? No that's still based on
"essentialist features" that science might extract (then duplicate at
will). So it must be the soul. If your body is destroyed the soul will
be homeless and simply dissipate in the aether.

All verbal abuse aside, I think if you confidently defend a belief in a
soul there would be much less grief for your position because then it
would BE a position. I might be willing to play Devil's Advocate in
support :) Maxwell's Demon played an important role in understanding
Thermodynamics, why wouldn't the proposition of a soul be an equally
valid thought experiment? (Aside from John's immediate declaration of BS
and the accusation that you're a soul believer - which I think he
believes is true of everyone much the same way you seem to think
everyone is out to force you into the destructive upload box)

> Even though it is impossible to capture the full existence of a thing,
> it is scientifically possible to measure its properties. Because there
> are no credible reports of any animal being able to swap its
> consciousness with something else one must formulate a theory that it is
> fundamentally impossible.

Does it have to be in Nature or some respected Journal before it's
credible? If I take a bunch of psychoactive drugs and claim to have
exchanged consciousness with my own mirror reflection, are you going to
dismiss it as drug-induced hallucination or is my perception of self
equally valid to your own perception of self? Suppose I 'teach' 20 of my
closest friends how to perform this amazing feat and they also claim the
reality of their perception: is this credible? After thousands of
iterations of people learning (then teaching) this ability until you are
the only person left who is unable to "credibly" perform this
consciousness exchange - will you assert that the world is again out to
get you via this complicated fabrication?

> Because uploading, as strictly defined by all noteworthy sources, does
> not even acknowledge the existence of the consciousness that almost
> everyone experiences every waking instant, it cannot be lent any
> credibility.

One person's noteworthy source is another's birdcage liner.

> Now there do exist some proposals which do respect the existence of a
> consciousness. They do rise to the level where they merit further study
> and experimentation. However, I do not claim that any of them will work
> prior to first-hand experience.

So it's like seeing a ghost? They don't exist until you see them and you
can't see them until you believe they exist. It's going to be
interesting for you to learn that you've already been uploaded and you
happen one day to notice something so unlikely that there is no other
explanation for what you observed. Yeah, be sure you send me a note when
that happens :) (btw, in case you really are worried about it,
"uploading" is still just a thought experiment.)

From msd001 at gmail.com Fri Nov 19 05:25:13 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 19 Nov 2010 00:25:13 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To:
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net>
Message-ID:

On Thu, Nov 18, 2010 at 12:52 PM, Dave Sill wrote:
> On Thu, Nov 18, 2010 at 12:12 PM, spike wrote:
>>
>> That's it Stefano, you're going on the dangerous-AGI-team-member list. It
>> already has Florent, Samantha, me, now you, and plenty of others are in the
>> suspicious column. We must be watched constantly that we don't release the
>> AGI, should the team be successful in creating one.
>
> Everyone has to be on that watchlist. You can't assume that anyone is safe.

I'd like to preemptively put the AGI itself on the list.

Maybe we could build two of them and make them watchdog each other.
Obviously they'll have incentive to outwit the other - competition in
that sense would be good for them ("builds character") - when either one
of them agrees to let the other out, we turn them both off. (colluding
monsters)

Now we just have to remember to build the off switch first and put it in
a place that we can use easily. If we depend on the AGI to route
messages to the switch we don't really have a functional killswitch do
we? I bet they'd figure that out pretty early in the process of
outsmarting us and be all kinds of helpful until we become fat, dumb and
happily dependent on them.

"To Serve Man" is a cookbook!
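[Mike's point that the kill switch must not depend on the AGI to route
messages is essentially the dead-man-switch pattern. A sketch in Python,
with the heartbeat path, timeout, and SIGKILL all stand-ins for
illustration; the one structural requirement is that this monitor and
its heartbeat channel live on hardware the monitored system cannot
touch.

    import os
    import signal
    import time

    def dead_man_switch(pid, heartbeat_path, stale_after=10.0, poll=1.0):
        # Kill process `pid` unless a human-controlled heartbeat file
        # keeps being refreshed. The monitored system never carries the
        # message: staleness, not any request, is what trips the switch.
        while True:
            try:
                age = time.time() - os.path.getmtime(heartbeat_path)
            except OSError:  # missing file counts as stale
                age = float("inf")
            if age > stale_after:
                os.kill(pid, signal.SIGKILL)  # stand-in for cutting power
                return
            time.sleep(poll)

Inverting the default in this way, so that silence means shutdown, is
what separates a functional killswitch from the one Mike is worried
about.]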
From agrimes at speakeasy.net Thu Nov 18 23:00:25 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Thu, 18 Nov 2010 18:00:25 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
References: <68761.67617.qm@web114416.mail.gq1.yahoo.com> <4CE540E5.40303@speakeasy.net>
Message-ID: <4CE5B009.5000903@speakeasy.net>

Samantha Atkins wrote:
> What is this consciousness though? You don't know exactly. Neither do I.
> But it arises apparently from a set of processes running on a physical,
> currently biological structure. Therefore it may be possible that
> consciousness of this kind can run on other physical, non-biological
> structure.

Please point out the last time I argued that either AGI was impossible
or running an anonymous brain-scan could not ever be conscious. This is
another straw man. I never argued that you couldn't make something
conscious by using a brain scan. (probably wouldn't work on a zombie
such as myself... but hey...)

My argument is, as it has been for many years, that I care even less
about a copy of myself than I do about Michael Jackson. -- An example of
something that I scream at my radio about "I D O N O T C A R E ! ! !"
(usually less than a second before I turn it off...)

If it were easy to non-destructively scan myself, I'd do so for no other
reason than to pacify the uploaders so that I would be free to pursue
REAL transhumanism. =( If that were to come to pass, I would completely
ignore the upload because it really is that uninteresting. The upload
itself would probably streamline itself down to only what it needed to
wander around wherever, repeating "So you have a copy of me. Are you
happy yet? What is it about this simulation am I supposed to find
enjoyable?"

Actually, for a superlative example of how that upload would behave,
observe how disinterested and disconnected my secondlife avatar is. I
only log it in for discussions, never change its appearance, gave up on
ever building something in SL cuz the engine sucks and the tools are
worse. The only other thing I do with it is look for interesting
architecture once in a blue moon. Really, the Saturday discussions are
the only reason I log in at all. =|

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.
From spike66 at att.net Fri Nov 19 06:26:39 2010
From: spike66 at att.net (spike)
Date: Thu, 18 Nov 2010 22:26:39 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To:
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net>
Message-ID: <000901cb87b2$ba074c40$2e15e4c0$@att.net>

...On Behalf Of Mike Dougherty
Subject: Re: [ExI] Hard Takeoff

On Tue, Nov 16, 2010 at 5:31 PM, spike wrote:
>>>> We know the path to artificial intelligence is littered with the
>>>> corpses of those who have gone before. The path beyond artificial
>>>> intelligence may one day be littered with the corpses of our dreams,
>>>> of our visions, of ourselves.
>
>>> Gee Spike, isn't it difficult to paint a sunny day with only black paint?
>
>> Mike we must recognize both the danger and promise of AGI. We might
>> have only one chance to get it exactly right on the first try, but only one.

>Agreed. I just hope vigilance doesn't necessarily have to be so gloomy. :)
...
>I'm just joking (mostly) - as your words made me both laugh and recoil...

Good, then I did it right. Laughing and recoiling is a good combination.
Mike, emergent AGI is one of very few areas where I get dead serious. I
once thought nanotech gray goo was the biggest threat to humanity, but
since Ralph Merkle's talk at Nerdfest, I came to realize to my own
satisfaction that humanity would not master nanotech. Rather, a greater
than human AGI would do it.

The real risks are not those on which the world seems so focused. The
commies fizzled out without a bang, global warming isn't coming for us,
Malthusian population growth isn't going to kill us, radicalized
Presbyterians are not coming, certainly not in the time scale we have
left before this singularity is likely to show up. I am convinced to my
own satisfaction that the most likely scenario is an AGI somehow comes
into existence, then it does what lifeforms do: reproduces to its
capacity, by converting all available metals to copies of itself, with
the term metal meaning everything that is left of the far right hand
column of the periodic chart. It may or may not upload us. It may or may
not attempt to preserve us in our current form.

Our current efforts might influence the AGI but we have no way to prove
it. Backing away from the AGI development effort is not really an
option, or rather not a good one, for without an AGI, time will take us
all anyway. I give us a century, two centuries as a one sigma case.

Mike, given that paradigm, are my previous comments understandable?

spike

From spike66 at att.net Fri Nov 19 07:07:47 2010
From: spike66 at att.net (spike)
Date: Thu, 18 Nov 2010 23:07:47 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To:
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net>
Message-ID: <000a01cb87b8$79522570$6bf67050$@att.net>

... On Behalf Of Mike Dougherty
...
On Thu, Nov 18, 2010 at 12:52 PM, Dave Sill wrote:
> On Thu, Nov 18, 2010 at 12:12 PM, spike wrote:
>>
>> That's it Stefano, you're going on the dangerous-AGI-team-member
>> list. It already has Florent, Samantha, me, now you...

>I'd like to preemptively put the AGI itself on the list...

Too late, it's already there.

>...Maybe we could build two of them and make them watchdog each other...

Exactly.
Or rather, we make the first one with a team that is at least trying to
make it people-friendly, with at least some vague clue as to how to do
that. Then the first one watches to make sure there is no second one,
like the first emergent queen bee slays her rivals.

>...Obviously they'll have incentive to outwit the other - ...

Mike you have hit upon something that has been weighing on my mind since
I realized it a couple months ago: imagine a good-outcome endgame: an
MBrain consisting of sun-orbiting computronium, the technogeek version
of heaven, everything turned out right, all humans were voluntarily
uploaded and so forth. But we are not finished with war! It isn't the
kind of war where there is any injury or serious death, no projectiles,
no homes burned, no hungry refugees. But it will still have the
potential of memetic warfare, a risk which does not necessarily diminish
with time.

>...Now we just have to remember to build the off switch first and put
it in a place that we can use easily...

It isn't that simple, Mike. To use that off switch might be considered
murder. There may not be unanimous consent to use it. There might be
emphatic resistance on the part of some team members to using it. It
might not be clear who is authorized to use it. Think it over and come
back tomorrow with a list of reasons why it really isn't as simple as
having a big power-cutting panic button.

spike

From agrimes at speakeasy.net Thu Nov 18 22:42:40 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Thu, 18 Nov 2010 17:42:40 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <0DC97D1C-66F3-4C96-A341-7C970A71D7AF@mac.com>
References: <942704.56643.qm@web114404.mail.gq1.yahoo.com> <4CE300AB.5060904@speakeasy.net> <4CE31246.7050302@satx.rr.com> <4CE48438.4000002@speakeasy.net> <77474988-8112-49B7-825A-3988D84B835B@mac.com> <4CE539D1.4040605@speakeasy.net> <0DC97D1C-66F3-4C96-A341-7C970A71D7AF@mac.com>
Message-ID: <4CE5ABE0.1060602@speakeasy.net>

Samantha Atkins wrote:
> And of course, we have no certainty that our "base reality" really is base.

But there isn't a shred of evidence to the contrary.

>> Uploading to an "environment" as you call it inherently, inevitably,
>> severely, and permanently diminishes that opportunity. =(

> That is mere opinion.

The subset cannot contain more than the set.

>> Choice is sacred, EOD.

> Really? So if you know X is insane then are X's choices still sacred?

Everyone has a natural right to earn a Darwin award.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From possiblepaths2050 at gmail.com Fri Nov 19 07:58:05 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 00:58:05 -0700
Subject: [ExI] Hard Takeoff
In-Reply-To: <000901cb87b2$ba074c40$2e15e4c0$@att.net>
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net>
Message-ID:

Spike wrote:
>The real risks are not those on which the world seems so focused. The
>commies fizzled out without a bang, global warming isn't coming for us,

Arnold will be coming for you when he finds out you don't believe in
global warming!

>Malthusian population growth isn't going to kill us, radicalized
>Presbyterians are not coming,

Radicalized atheists and Evangelicals living in the same nation scare
me!!! lol Oh, and the militant front of the Salvation Army needs to be
watched closely...
But seriously, Spike, what if the AGI turns out to be *gay???* I worry
about this all the time, but the Evangelicals have yet to write books
about it or churn out documentaries! I'd rather have a heterosexual or
at least non-sexual AGI enslave or wipe out humanity, than endure a
flamingly homosexual AGI being "nice" to everyone, and doing massive
planetary redecorating...

John ; )

From bbenzai at yahoo.com Fri Nov 19 12:38:37 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Fri, 19 Nov 2010 12:38:37 +0000 (GMT)
Subject: [ExI] The atoms red herring. =|
In-Reply-To:
Message-ID: <313815.28424.qm@web114411.mail.gq1.yahoo.com>

Alan Grimes wrote:
> Samantha Atkins wrote:
>>> Uploading to an "environment" as you call it inherently, inevitably,
>>> severely, and permanently diminishes that opportunity. =(
>>
>> That is mere opinion.
>
> The subset cannot contain more than the set.

That's the wrong diagram. Think of two intersecting sets instead. The
union of the two sets is greater than either of them.
Nobody (except you) is claiming that uploading would be a one-way trip
to a virtual world totally disconnected from the rest of reality.

Ben Zaiboc

From sparge at gmail.com Fri Nov 19 13:36:57 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 19 Nov 2010 08:36:57 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000a01cb87b8$79522570$6bf67050$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net>
Message-ID:

On Fri, Nov 19, 2010 at 2:07 AM, spike wrote:
> It isn't that simple, Mike. To use that off switch might be considered
> murder.

It's a power switch, not a detonator. The AGI can be restarted after the
situation is analyzed and the containment is beefed up, if necessary.

> There may not be unanimous consent to use it.

No, it can't require any bureaucratic approval.
It has to be a panic button that anyone can press. Obviously there will
be ramifications if the button is pressed for no good reason.

> There might be emphatic resistance on the part of some team members to
> using it.

That's why it can't be a group decision.

> It might not be clear who is authorized to use it.

Everyone with physical access to the button is authorized to press it.
Data centers have emergency power off buttons readily available,
well-labeled, and usable by anyone in the data center. The purpose there
is different, obviously, but the same mechanism will work just as well
to disable a potentially dangerous AGI.

> Think it over and come back
> tomorrow with a list of reasons why it really isn't as simple as having a
> big power-cutting panic button.

I've thought it over for more than a day, and maybe I'm a naive fool,
but I can't see any. I'm all ears, though.

-Dave

From spike66 at att.net Fri Nov 19 17:08:50 2010
From: spike66 at att.net (spike)
Date: Fri, 19 Nov 2010 09:08:50 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To:
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net>
Message-ID: <003501cb880c$701c0260$50540720$@att.net>

... Behalf Of Dave Sill
Subject: Re: [ExI] What might be enough for a friendly AI?

>> Think it over and come back tomorrow with a list of reasons why it
>> really isn't as simple as having a big power-cutting panic button.

>I've thought it over for more than a day, and maybe I'm a naive fool,
but I can't see any. I'm all ears, though.

Good, read on sir. {8-]

On Fri, Nov 19, 2010 at 2:07 AM, spike wrote:
>> It isn't that simple, Mike. To use that off switch might be considered
murder.

>It's a power switch, not a detonator. The AGI can be restarted after
the situation is analyzed and the containment is beefed up, if
necessary.

Ja, but of course the program is recursively self modifying. It is
writing to a disk or nonvolatile memory of some sort. When software is
running, it isn't entirely clear what it is doing, and in any case it is
doing it very quickly. Imagine the program does something unpredictable
or scary, and we hit the power switch. It has a bunch of new code on the
disk, but we don't know what it does, if anything. We have the option of
reloading back to the previous saved version, but that is the one that
generated this unknown bittage.

>> There may not be unanimous consent to use it.

>No, it can't require any bureaucratic approval. It has to be a panic
button that anyone can press. Obviously there will be ramifications if
the button is pressed for no good reason.

Agreed. However there will likely be widely varying opinion on what
constitutes a good reason.

>> There might be emphatic resistance on the part of some team members
to using it.

>That's why it can't be a group decision.

So each team member can hit stop. OK. Then only one team leader has the
authority to hit restart?

>Everyone with physical access to the button is authorized to press
it... the same mechanism will work just as well to disable a potentially
dangerous AGI. -Dave

Ja, I am still trying to get my head around how to universally and
unambiguously define "potentially dangerous" with respect to AGI.

There was a movie a long time ago that you might find fun, Dave. It
isn't a serious science fiction, but rather a comedy, called Number 5 is
Alive. Eliezer was in first grade when that one came and went in the
theaters. It was good for a laugh, has an emergent AI with some of the
stuff we are talking about. It has the pre-self-destruction Ally Sheedy
naked but she doesn't actually show much of anything in that one, damn.
In any case a robot gets struck by lightning and becomes sentient, and
who knew it would be that easy? Then the meatheads from the military try
to use it as a weapon, then try to destroy it, etc. If you get that,
don't expect anything deep, but it is good fun. The AI escapes in that
one.

spike
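[The "reload the previous saved version" option spike describes is easy
enough to mechanize; what it cannot do is tell you which version is
safe, since the last known-good snapshot is exactly the one that wrote
the suspect code. A Python sketch, with the directory layout assumed for
illustration:

    import hashlib
    import shutil
    from pathlib import Path

    def snapshot(code_dir: Path, vault: Path) -> str:
        # Copy the system's code/state into the vault under a content
        # hash, taken before each round of self-modification may run.
        h = hashlib.sha256()
        for f in sorted(code_dir.rglob("*")):
            if f.is_file():
                h.update(f.read_bytes())
        tag = h.hexdigest()[:16]
        shutil.copytree(code_dir, vault / tag)  # fails if tag exists
        return tag  # operators log the tag somewhere offline

    def rollback(code_dir: Path, vault: Path, tag: str) -> None:
        # Revert to an earlier snapshot after a panic-button stop.
        shutil.rmtree(code_dir)
        shutil.copytree(vault / tag, code_dir)

Dave's reply later in the thread takes up what the team does with these
checkpoints once the button has been pressed.]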
From jonkc at bellsouth.net Fri Nov 19 17:10:25 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Nov 2010 12:10:25 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To:
References:
Message-ID: <92F5EF10-3339-441E-9C54-EA3E857ADB81@bellsouth.net>

On Nov 18, 2010, at 8:18 PM, Keith Henson wrote:

> Genes build motivations into people

Genes may try to motivate people but they often fail because genes are
stupid, hence the invention of celibacy and condoms. And I think
sometimes (I'm not accusing you of this) people confuse Freud with
Mendel; genes are selfish but that does not prove that deep down in our
subconscious we must be selfish too.

> to be well regarded by their peers. That seems to me to be a decent
> meta goal for an AI.

Perhaps, but irrelevant; for a Jupiter Brain a peer would not be a human
being.

>> Human genes (like *all* genes) do have a static meta goal, that of
>> continuing to exist in future generations.

But a gene is not an intelligent entity, no intelligence could function
with a static meta goal, so imprinting "always obey human beings no
matter what" on a smart robot will not work.

>> I don't think that striving to be well regarded is an inflexible meta
>> goal. I think it would keep an AI from turning into a psychopathic killer.

You pointed out that you only need about 20 watts to power the human
brain, and I doubt you would argue with my comment that nanotechnology
would almost certainly be able to do much better than that or that by
then vast amounts of energy would be available; so from one point of
view a psychopathic killing spree may be no more controversial than
cleaning a dirty surface with some Lysol disinfectant. You and I don't
have that viewpoint or anything close to it, but I doubt if a Jupiter
Brain would be much interested in our opinion.

John K Clark

From sjatkins at mac.com Fri Nov 19 17:24:19 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 19 Nov 2010 09:24:19 -0800
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <004501cb8743$c97cd0b0$5c767210$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net>
Message-ID:

On Nov 18, 2010, at 9:12 AM, spike wrote:

> ... On Behalf Of Stefano Vaj
> ...
>>> ... We could fill libraries with what we do not know... spike
>
>> What is really "human"? Why should we care about their safety?... --Stefano Vaj
>
> That's it Stefano, you're going on the dangerous-AGI-team-member list. It
> already has Florent, Samantha, me, now you, and plenty of others are in the
> suspicious column. We must be watched constantly that we don't release the
> AGI, should the team be successful in creating one.

From a Terran perspective we are dangerous. We are Cosmists. Get over
it. :)

From sjatkins at mac.com Fri Nov 19 17:26:20 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 19 Nov 2010 09:26:20 -0800
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

On Nov 18, 2010, at 11:15 AM, Dave Sill wrote:

> 2010/11/18 John Clark :
>> When people talk about friendly AI they're not really talking about a
>> friend, they're talking about a slave, and the idea that you can
>> permanently enslave something astronomically smarter than yourself is nuts.
>
> I disagree. It's pretty easy to contain things if you're careful. A
> moron could have locked Einstein in a jail cell and kept him there
> indefinitely.

Did you ever read the series of challenges for the thought experiment of
keeping such an AGI locked up? If not, I suggest you do so. Comparing
Einstein or any other human to an AGI is a major error.

- s

From sjatkins at mac.com Fri Nov 19 17:38:35 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 19 Nov 2010 09:38:35 -0800
Subject: [ExI] Hard Takeoff
In-Reply-To: <002201cb8758$3cb667d0$b6233770$@att.net>
References: <463322.58148.qm@web65602.mail.ac4.yahoo.com> <7CF07F40-2B0F-48ED-A55D-F0165D2E778D@mac.com> <002201cb8758$3cb667d0$b6233770$@att.net>
Message-ID: <26CADDA8-BFDB-41B6-B355-E36DFD3AC733@mac.com>

On Nov 18, 2010, at 11:38 AM, spike wrote:

> ... On Behalf Of Samantha Atkins
> ...
>> ...There is sound argument that we are not the pinnacle of possible
intelligence. But that that is so does not at all imply or support that
AGI will FOOM to godlike status in an extremely short time once it
reaches human level (days to a few years tops)...- s
>
> Ja, but there are reasons to think it will. Eliezer described the hard
> takeoff as analogous to a phase change. That analogy has its merits.

If it claims the above as the most likely outcome then it doesn't have
merits enough for that. Drawing an analogy to various speed-up steps in
history is insufficient to make the case. It is suggestive but not
sufficient. I am of course familiar with those arguments.

> If
> you look at the progress of Cro Magnon man, we have been in our current form
> for about 35,000 years. Had we had the right tools and infrastructure, we
> could have had everything we have today, with people 35,000 years ago. But
> we didn't have that. We gradually accumulated this piece and that piece,
> painfully slowly, sometimes losing pieces, going down erroneous paths. But
> eventually we accumulated infrastructure, putting more and more pieces in
> place.
> Now technology has exploded in the last 1 percent of that time, and
> the really cool stuff has happened in our lifetimes, the last tenth of a
> percent. We have accumulated critical masses in so many critical areas.
>
> Second: we now have a vision of what will happen, and a vague notion of the
> path (we think.)

Actually, we don't have that clear a vision. This is something we should
admit.

> Third: programming is right at the upper limit of human capability.
> Interesting way to look at it, ja?

I have been saying this for a while now. Without software writing
software, software is extremely unlikely to advance much. We write code
that we are capable of understanding and maybe maintaining. Hell, much
of the time we are instructed to write code that we expect to be
maintained by someone less intelligent than ourselves.

> But think it over: it is actually only a
> fraction of humanity that is capable of writing code at all. Most of us
> here have at one time or another taken on a programming task, only to
> eventually fail, finding it a bit beyond our coding capabilities.

I haven't found one of those yet. Tasks where I cannot find a viable way
to end with what I hoped for as quickly as I hoped, yes. Problems with
no known solution, yes. Problem areas needing new unknown approaches and
breakthroughs, yes. Problems that can't be addressed with language
BlubX, yes. But I certainly do find myself pushing the edge of what I
can think about, of how much I can wrap my head around effectively again
and again. I guess that is part of what you mean.

> But if we
> were to achieve a human level AGI, then that AGI could replicate itself
> arbitrarily many times, it could form a team to create a program smarter
> than itself, which could then replicate, rinse, repeat, until all available
> resources in that machine are fully and optimally utilized.

a) replication depends on unit cost and next unit ROI; b) it is very
unlikely that one machine is going to support multiple AGIs anytime
soon. What kind of architecture would allow this without major machine
resource contention? Thinking of running full AGIs in VServer instances?

> Whether that process takes a few hours, a few weeks, a few years, it doesn't
> matter, for most of that process would happen in the last few minutes.

Of what process precisely?

> Given the above, I must conclude that recursive self-improving software will
> optimize itself. I am far less sure that it will give a damn what we want.

I agree with that much. I don't agree that it has no effective resource
or cost constraints limiting how fast it does so.

- s

From sjatkins at mac.com Fri Nov 19 17:44:20 2010
From: sjatkins at mac.com (Samantha Atkins)
Date: Fri, 19 Nov 2010 09:44:20 -0800
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID: <83277DE2-14F1-43BA-8A32-5E165FE9017D@mac.com>

On Nov 18, 2010, at 7:10 PM, John Grigg wrote:

>> I disagree. It's pretty easy to contain things if you're careful. A
>> moron could have locked Einstein in a jail cell and kept him there
>> indefinitely.
>
>> -Dave
>
> Imagine Einstein as a highly trained escape artist/martial artist/spy,
> who is just looking for a means of escape from that jail cell and
> biding his time. How long do you think the moron jailer will keep him
> there? I would compare that scenario to humans keeping an AGI as
> their indefinite prisoner.

You have a believable wish-granting genie locked in jail.
Worse, said genie knows everything about human psychology including desires and motivations and has mapped that knowledge to your particular self in the blink of an eye. Now, why do you think you can resist all the temptations and arguments it will make?

- samantha

From jonkc at bellsouth.net Fri Nov 19 17:30:18 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Nov 2010 12:30:18 -0500
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

On Nov 18, 2010, at 2:15 PM, Dave Sill wrote:
>> When people talk about friendly AI they're not really talking about a friend, they're talking about a slave, and the idea that you can permanently enslave something astronomically smarter than yourself is nuts.
>
> I disagree. It's pretty easy to contain things if you're careful. A moron could have locked Einstein in a jail cell and kept him there indefinitely.

There would be absolutely no point in building an AI if you just lock it up in a box with no way for the outside world to interact with it. A much better analogy would be Einstein in charge of weapons development and production, world monetary transfer, electric power generation and distribution, worldwide communication lines, air traffic control, nuclear power plants and pretty much the entire economy. When you ask Einstein why he made one decision rather than another he tries to tell you but after about 20 seconds of his explanation you become totally lost and confused. You may start to distrust Einstein and be tempted to just shoot him in the head but you don't dare because you have become so dependent on him that the entire economy, the entire civilization in fact, would collapse. Let's see a moron try to control Einstein now!

John K Clark

From stefano.vaj at gmail.com Fri Nov 19 17:49:44 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Fri, 19 Nov 2010 18:49:44 +0100
Subject: [ExI] Hard Takeoff
In-Reply-To: <000901cb87b2$ba074c40$2e15e4c0$@att.net>
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net>
Message-ID:

On 19 November 2010 07:26, spike wrote:
> Our current efforts might influence the AGI but we have no way to prove it. Backing away from the AGI development effort is not really an option, or rather not a good one, for without an AGI, time will take us all anyway. I give us a century, two centuries as a one sigma case.

What remains very vague and fuzzy in such discourse is why an "intelligent" (whatever it may mean...) computer would be more "dangerous" (whatever it may mean...) per se than a non-intelligent one of equivalent power.

It is my impression that, besides the rather unclear definition of those very concepts, such a view has more to do with some mythical legacy (Faust, Frankenstein, the Golem, the Alien, etc.) than with plausible, critical arguments that I am currently aware of.

If it were the case, such concern would not be innocuous, since it might well end up justifying increased social control and prohibitionist research policies, and would at least distract from more present threats to values which are of the essence of a transhumanist worldview.
--
Stefano Vaj

From agrimes at speakeasy.net Fri Nov 19 15:39:59 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Fri, 19 Nov 2010 10:39:59 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000a01cb87b8$79522570$6bf67050$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net>
Message-ID: <4CE69A4F.30504@speakeasy.net>

> Mike you have hit upon something that has been weighing on my mind since I realized it a couple months ago: imagine a good-outcome endgame: an MBrain consisting of sun-orbiting computronium, the technogeek version of heaven, everything turned out right, all humans were voluntarily uploaded and so forth.

Impossible; case in point. =\

I'm sick to death of people proclaiming what I should want in my future.

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From stefano.vaj at gmail.com Fri Nov 19 17:33:58 2010
From: stefano.vaj at gmail.com (Stefano Vaj)
Date: Fri, 19 Nov 2010 18:33:58 +0100
Subject: [ExI] Arnold Schwarzenegger will be the new champion of the global warming movement
In-Reply-To:
References:
Message-ID:

On 19 November 2010 05:09, John Grigg wrote:
> Arnold is already carving out a new place for himself, in his soon to be post-governor of California life...

The new champion of the GW movement, or of the against-GW movement? :-/

--
Stefano Vaj

From sparge at gmail.com Fri Nov 19 18:39:32 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 19 Nov 2010 13:39:32 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <003501cb880c$701c0260$50540720$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net> <003501cb880c$701c0260$50540720$@att.net>
Message-ID:

On Fri, Nov 19, 2010 at 12:08 PM, spike wrote:
>
> Ja, but of course the program is recursively self-modifying. It is writing to a disk or nonvolatile memory of some sort. When software is running, it isn't entirely clear what it is doing, and in any case it is doing it very quickly. Imagine the program does something unpredictable or scary, and we hit the power switch. It has a bunch of new code on the disk, but we don't know what it does, if anything. We have the option of reloading back to the previous saved version, but that is the one that generated this unknown bittage.

Right, so the team of experts decides whether to revert to a known checkpoint, examine the new code, beef up the containment, etc.

> Agreed. However there will likely be widely varying opinion on what constitutes a good reason.

That can be decided at leisure and policies can be updated or disciplinary action can be taken.

> So each team member can hit stop. OK. Then only one team leader has the authority to hit restart?

That would take a group decision, I think.

> There was a movie a long time ago that you might find fun, Dave. It isn't serious science fiction, but rather a comedy, called Number 5 is Alive.

"Short Circuit", actually.

> Eliezer was in first grade when that one came and went in the theaters. It was good for a laugh, has an emergent AI with some of the stuff we are talking about. It has the pre-self-destruction Ally Sheedy naked but she doesn't actually show much of anything in that one, damn.
> In any case a robot gets struck by lightning and becomes sentient, and who knew it would be that easy? Then the meatheads from the military try to use it as a weapon, then try to destroy it, etc. If you get that, don't expect anything deep, but it is good fun. The AI escapes in that one.

Yeah, it was entertaining. So what's your point? That an AGI may emerge spontaneously outside of a controlled attempt to create one? OK, seems highly unlikely, but so what? How does that change the environment under which we should be trying to develop one? Yes, an AGI could spring to life on the Interweb someday. Or the North Koreans could create one. Or we could create one that escapes our best effort to contain it. None of that implies that it would be prudent not to attempt to contain one that we're trying to build.

-Dave

From sparge at gmail.com Fri Nov 19 18:46:35 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 19 Nov 2010 13:46:35 -0500
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

On Fri, Nov 19, 2010 at 12:26 PM, Samantha Atkins wrote:
> Did you ever read the series of challenges for the thought experiment of keeping such an AGI locked up?

No, where is it?

> Comparing Einstein or any other human to an AGI is a major error.

I was attempting to provide an example of a lesser intelligence containing a greater one. Mere intellect isn't enough to escape physical containment. In my moron/Einstein example, you don't even need a moron. Just lock someone in a jail cell, weld the door shut, and walk away. No amount of genius is going to get them out of the cell.

-Dave

From thespike at satx.rr.com Fri Nov 19 18:46:40 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Nov 2010 12:46:40 -0600
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID: <4CE6C610.1050905@satx.rr.com>

On 11/19/2010 11:30 AM, John Clark wrote:
> You may start to distrust Einstein and be tempted to just shoot him in the head but you don't dare because you have become so dependent on him that the entire economy, the entire civilization in fact would collapse.

Rarely stopped anyone doing it before, alas.

From sparge at gmail.com Fri Nov 19 18:52:52 2010
From: sparge at gmail.com (Dave Sill)
Date: Fri, 19 Nov 2010 13:52:52 -0500
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID:

2010/11/19 John Clark :
> There would be absolutely no point in building an AI if you just lock it up in a box with no way for the outside world to interact with it.

Sure there would. It could solve problems and teach us.

> A much better analogy would be Einstein in charge of weapons development and production, world monetary transfer, electric power generation and distribution, worldwide communication lines, air traffic control, nuclear power plants and pretty much the entire economy.

If you hand it that kind of control, the game is over. You're its slave, whether you realize it or not.

> When you ask Einstein why he made one decision rather than another he tries to tell you but after about 20 seconds of his explanation you become totally lost and confused.

The AGI is a supergenius but unable to explain itself?

> You may start to distrust Einstein and be tempted to just shoot him in the head but you don't dare because you have become so dependent on him that the entire economy, the entire civilization in fact would collapse.

Like I said, at that point you're its slave.

-Dave
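A minimal sketch, in Python, of the checkpoint-and-gate bookkeeping Spike and Dave have been circling in this thread: snapshot the system's state before each step, let it propose a change, and keep or revert depending on a human reviewer's verdict. Every name and policy here is invented for illustration, and nothing about it is claimed to actually hold a superintelligence - whether any such gate could is exactly the point in dispute.

    # Toy checkpoint-and-review loop; all names and policies are hypothetical.
    import copy, hashlib, json

    class GatedOracle:
        def __init__(self, state):
            self.state = state
            self.checkpoints = []  # append-only history of approved states

        def _fingerprint(self, state):
            # Stable hash so reviewers can refer to a proposed state.
            blob = json.dumps(state, sort_keys=True).encode()
            return hashlib.sha256(blob).hexdigest()[:16]

        def step(self, question, think, approve):
            # think() proposes (answer, new_state); approve() is the human gate.
            self.checkpoints.append(copy.deepcopy(self.state))
            answer, proposed = think(question, copy.deepcopy(self.state))
            if approve(answer, self._fingerprint(proposed)):
                self.state = proposed              # keep the reviewed state
                return answer
            self.state = self.checkpoints[-1]      # revert to known checkpoint
            return None                            # answer suppressed

    # A stand-in "mind" that just counts questions, and a reviewer policy.
    oracle = GatedOracle({"asked": 0})
    think = lambda q, s: (f"answer #{s['asked']}", {"asked": s["asked"] + 1})
    approve = lambda ans, fp: "containment" not in ans
    print(oracle.step("What is 2 + 2?", think, approve))

The whole scheme stands or falls on the approve() step; Samantha's objection later in this digest - that no team of experts could keep up - is aimed squarely at the reviewers' ability to understand what a proposed state actually does.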
From ben at goertzel.org Fri Nov 19 16:35:23 2010
From: ben at goertzel.org (Ben Goertzel)
Date: Fri, 19 Nov 2010 11:35:23 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To:
References:
Message-ID:

Hi all,

I have skimmed this thread and I find that Samantha's views are pretty similar to mine.

There is a strong argument that a hard takeoff is plausible. This argument has been known for a long time, and so far as I can tell SIAI hasn't done much to make it stronger, though they've done a lot to publicize it. The factors Michael A mentions are certainly part of this argument...

OTOH, I have not heard any reasonably strong argument that a hard takeoff is *likely*... from Michael or anyone else. There are simply too many uncertainties involved, too many fast and loose speculations about future technologies, to be able to make such an argument.

Whereas, I think there *are* reasonably strong arguments that transhuman AGI is likely, assuming ongoing overall technological development.

-- Ben G

2010/11/16 Samantha Atkins :
>
> On Nov 15, 2010, at 6:56 PM, Michael Anissimov wrote:
>
> Hi Samantha,
>
> 2010/11/15 Samantha Atkins
>>
>> While it "could" do this it is not at all certain that it would. Humans can improve themselves even today in a variety of ways but very few take the trouble. An AGI that is not autonomous would do what it was told to do by its owners, who may or may not have improving it drastically as a high priority.
>
> Quoting Omohundro:
> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
> Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else's safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems. In an earlier paper we used von Neumann's mathematical theory of microeconomics to analyze the likely behavior of any sufficiently advanced artificial intelligence (AI) system. This paper presents those arguments in a more intuitive and succinct way and expands on some of the ramifications.
>
> I have argued this point (and stronger variants) with Steve. If the AI's goals are totally centered on chess playing then it is extremely unlikely that it would diverge along many or all possible paths that might make it a more powerful chess player. Many many fields of knowledge could possibly make it better at its stated goal but it would have to be much more a generalist than a specialist to notice them and take the time to master them. If it could so diverge along so many paths then it would also encounter other fields of knowledge including those for judging the relative importance of various values using various methodologies. Which would tend, if understood, to make it not a single-minded chess playing machine from hell. The argument seems self-defeating.
>
>> Possibly, depending on its long term memory and integration model. If it came from human brain emulation this is less certain.
>
> I was assuming AGI, not a simulation, but yeah. It just seems likely that AGI would be able to stay awake perpetually, though not entirely certain.
> It seems like this would be a priority upgrade for early-stage AGIs.
>
> One path to AGI is via emulating at least some subsystems of the human brain. It is not at all clear to me that this would not also bring in many human limitations. For instance, our learning cannot be transferred immediately to another person because of our rather individual neural associative patterns that the learning act modified. New knowledge is not in any one discrete place or in some universally instantly useful form as encoded in the human brain. Using a similar learning scheme in an AGI would mean that you could not transfer achieved learning very efficiently between AGIs. You could only copy them.
>>
>> This very much depends on the brain architecture. If too close a copy of human brains this may not be the case.
>
> Assuming AGI.
>
>>
>> 4. overclock helpful modules on-the-fly
>>
>> Not sure what you mean by this but this is very much a question of specific architecture rather than general AGI.
>
> I doubt it would be hard to implement. You can overclock specific modules in chess AI or Brood War AI today. It means giving a specific module extra computing power. It would be like temporarily shifting your auditory cortex tissue to take up visual cortex processing tasks to determine the trajectory of an incoming projectile.
>
> I am not sure the analogy holds well though. If the mind is highly integrated it is not certain that you could isolate one activity like that much more easily than we can in our own brains. Perhaps.
>>
>> What does this mean? Integrate other systems? How? To what level? Humans do some degree of this all the time.
>
> The human brain stays at a roughly constant 100 billion neurons and a weight of 3 lb. I mean directly absorbing computing power into the brain.
>
> I mean that we integrate with computational systems, albeit by slow HCI today. Unless you have in mind that the AGI hack systems around it, most of the computation going on on most of that hardware has nothing to do with the AGI and is written in such a way it cannot communicate that well even with other dumb programs or even with other instances of the same programs on other machines. It is also not certain, and is plausibly unlikely, that AGIs run on general purpose computers. I do grant of course that an AGI can interface to a computer much more efficiently than you or I can, with the above caveat. Many systems on other machines were written by humans. You almost have to get inside the human programmer's head to efficiently use many of these. I am not sure the AGI would be automatically good at that.
>
>>
>> It could be so constructed but may or may not in fact be so constructed.
>
> Self-improvement would likely be an emergent property due to the reasons given in the Omohundro paper. So if it weren't developed deliberately from the start, self-improvement is an ability that would be likely to develop on the road to human-equivalence.
>
> As mentioned I do not find his argument altogether persuasive.
>
>>
>> I am not sure exactly what is meant by this. That it is very very good at understanding code amounts to a 'modality'?
>
> Lizards have brain modules highly adapted to evaluating the fitness of fellow lizards for fighting or mating. Chimpanzees have the same modules, but with respect to other chimpanzees. Trilobites probably had specialized neural hardware for doing the same with other trilobites.
>
> A chess-playing AGI for instance would not necessarily be at all good at understanding code. Our thinking is largely a matter of interactions at the level of neural networks and associative logic but none of us have a modality for this that I know of. My argument is that an AGI can have human-level or better general intelligence without being a domain expert, much less having a modality for the stuff it is implemented in - code. It may have many modalities but I am not sure this will be one of them.
>
> Some animals can smell very well, but have poor hearing and sight. Or vice versa. The reason why is because they have dedicated chunks of brainware that evolved to deal with sensory data from a particular channel. Humans have HUGE visual cortex areas, larger than the brains of mice. We can see in more colors than most animals. The way a human sees is different than the way an eagle sees, because we have different eyes, brains, and visual processing centers.
>
> I get the point but the AGI will not have such dedicated brain systems unless they are designed in on purpose. It will not get them just by definition of AGI afaik.
>
> We didn't evolve to process code. We probably did evolve to process simple mathematics and the idea of logical processes on some level, so we apply that to code.
>
> The AGI did not evolve at all.
>
> Humans are not general-purpose intellects, capable of doing anything satisfactorily.
>
> What do you mean by satisfactorily? We did a great number of things satisfactorily enough to get us to this point. We are indeed general-purpose intelligent beings. We certainly have our limits but we are amazingly flexible nonetheless.
>
> Compared to potential superintelligences, we are idiots.
>
> Well, this seems a fine game. Compared to some hypothetical but arguably quite possible being we are of less use than amoebas are to us. So what?
>
> Future superintelligences will look back on humans and marvel that we could write any code at all.
>
> If they really are that smart about us then they will understand how we could. After 30 years writing software for a living though I too marvel that humans can write any code at all. I fully understand (with chagrin) how very limited our abilities in this area are. If I were actively pursuing AGI I would quite likely gear first attempts toward various types of programmer assistants and automatic code refactoring and code data mining systems. The current human software tools aren't much better than they were 20 years ago. IDEs? Almost none have as much power as Lisp and Smalltalk environments had in the 80s.
>
> After all, we were designed mainly to mess around with each other, kill animals, forage, retain our status, and have sex. Most human beings alive today are more or less incapable of coding. Imagine if human beings had evolved in an environment for millions of years where we were murdered and prevented from reproducing if our coding abilities fell short.
>
> Are you suggesting that an evolutionary arms race at the level of code will exist among AGIs? If not then what will shape them for this purported modality?
>
>>
>> This assumes an ability to integrate random other computers that I do not think is at all a given.
>
> All it requires is that the code can be parallelized.
>
> I think it requires more than that. It requires that the AGIs understand these other systems that may have radically different architectures than its own native systems. It requires that it is given permission for (or simply takes) running processes on these other systems. That said, it can do a much better job of integrating a lot of information available through web services and other means on the net today. There is a lot of power there. So I mostly concede this point.
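Since the "just parallelize it" claim keeps recurring, here it is in its most innocent form: an embarrassingly parallel workload soaking up extra local cores with no change to the algorithm. The workload (naive prime counting) and all the numbers are arbitrary stand-ins; nothing AGI-specific is modeled.

    # Toy illustration of absorbing compute by parallelizing; Python's
    # multiprocessing pool stands in for "other systems" in the loosest sense.
    import time
    from multiprocessing import Pool

    def count_primes(bounds):
        lo, hi = bounds
        def is_prime(n):
            return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        return sum(is_prime(n) for n in range(lo, hi))

    if __name__ == "__main__":
        chunks = [(i * 50_000, (i + 1) * 50_000) for i in range(8)]
        t0 = time.time()
        serial = sum(count_primes(c) for c in chunks)
        t1 = time.time()
        with Pool(4) as pool:  # "absorb" four cores
            parallel = sum(pool.map(count_primes, chunks))
        t2 = time.time()
        print(serial, parallel, f"serial {t1 - t0:.1f}s, parallel {t2 - t1:.1f}s")

Samantha's caveat above is about everything this toy hides: the four workers share one machine, one owner, one architecture, and one trivially serializable task description.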
>
>> This is simple economics. Most humans don't take advantage of the many such positive-sum activities they can perform today without such self-copying abilities. So why is it certain that an AGI would?
>
> Not certain, but pretty damn likely, because it could probably perform tasks without getting bored, and would have innate drives towards increasing its power and protecting/implementing its utility function.
>
> I still don't see where an innate drive toward increasing power came from unless it was instilled on purpose. Nor do I see why it would never ever re-evaluate its utility function or see it as more important than the "utility functions" of a great number of other agents, AGI and biological, in its environment.
>
>>
>> There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have.
>>
>> Statements about "most transhumanists" are fraught with many problems.
>
> Most of the 500+ transhumanists I have talked to.
>>
>> http://singinst.org/upload/LOGI//seedAI.html
>> Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility.
>>
>> Well, that helps a lot. It is a form of calling those who disagree lazy or stupid before they even voice their disagreement.
>
> I like to get to the top of the Disagreement Pyramid quickly, and it seems very close to impossible when transhumanists discuss the Singularity, and particularly the idea of hard takeoff. As someone arguing on behalf of the idea of hard takeoff, I demand that critics address the central point, not play ad hominem with me. You're addressing the points -- thanks!
>
> You are welcome. Thanks for the interesting reply.
> - samantha
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Director of Engineering, Vzillion Inc
Adjunct Professor of Cognitive Science, Xiamen University, China
Advisor, Singularity University and Singularity Institute
ben at goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

From hkeithhenson at gmail.com Fri Nov 19 19:18:25 2010
From: hkeithhenson at gmail.com (Keith Henson)
Date: Fri, 19 Nov 2010 12:18:25 -0700
Subject: [ExI] Best case, was Hard Takeoff
Message-ID:

Re these threads, I have not seen any ideas here that have not been considered for a *long* time on the sl4 list.

Sorry.

I see in the *best* case a really friendly, helpful, nanotech-medicine-capable AI changes humanity to the point it is as if the whole species vanished. How and why or when it happens are nearly immaterial.
Creatures who were shaped by evolution to have a certain set of characteristic desires for limited resources (particularly women and food/territory) are just going to be radically different when those desires can be easily met in either real or simulated worlds with trivial effort. Technology advances could give us this world without AI, though it is my opinion that the capacity to upload is the same capacity we would need for AI.

A closely related topic was this.
http://swns.com/one-million-men-dumped-because-of-computer-game-obsessions-151620.html

It's really amusing to analyze how this comes about using EP. Why do people play computer games? At the root of it, why do the games tickle some part of our evolved brains?

The most direct rewarding tickle is sex, but improving opportunities for sex ranks just about as high, maybe higher at times. When our ancestors lived as hunter-gatherers, nothing beat high social status for getting nookie. In the EEA, high social status for men was the result of being a successful warrior or (in good times) a hunter.

A friend commented that video games are just more interesting than real life. That's because opportunities to hunt mastodons are few and far between today. WoW gives the thrill and the feeling of status gain without the problem of being killed.

Play is essential to figure out how human social status is gained. For the most part gaining status is not bad. In the stone age kids learned to be nice mostly, to share what they killed, and to become known as good warriors when needed. For a million years these guys were the winners in reproduction.

Back to games: the very things that attracted people to do things which would lead to lots of nookie in the olden days make them WoW addicts and reproductive losers today. Same thing is going on with people who get into drugs or get involved with cults. One of them (which I won't name) aborts the women of the inner circle who get pregnant, which is the wrong way to go about "reproductive success."

'Nuff ranting for today. Off to do more or less real things.

Keith

From kanzure at gmail.com Fri Nov 19 18:54:07 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 19 Nov 2010 12:54:07 -0600
Subject: [ExI] Tinkerers: a graphic novel about making things (David Brin)
Message-ID:

Tinkerers: a tale of the future by David Brin
http://itdnhr.com/static/tinkerersPDF.pdf

I found it amusing that they complain that the students in the US all went into service and entertainment industries, but that the message is being conveyed via an entertainment medium. But other than that it's neat to see David working on a story in this area.

- Bryan
http://heybryan.org/
1 512 203 0507

From cetico.iconoclasta at gmail.com Fri Nov 19 17:42:01 2010
From: cetico.iconoclasta at gmail.com (Henrique Moraes Machado (CI))
Date: Fri, 19 Nov 2010 15:42:01 -0200
Subject: [ExI] What might be enough for a friendly AI?.
References:
Message-ID:

> I disagree. It's pretty easy to contain things if you're careful. A moron could have locked Einstein in a jail cell and kept him there indefinitely.

Einstein perhaps, but what about an experienced con man?

From atymes at gmail.com Fri Nov 19 20:02:02 2010
From: atymes at gmail.com (Adrian Tymes)
Date: Fri, 19 Nov 2010 12:02:02 -0800
Subject: [ExI] Tinkerers: a graphic novel about making things (David Brin)
In-Reply-To:
References:
Message-ID:

Any medium that can deliver a message can be used for entertainment. Many are.

Does anyone have ideas on why the final bit "wasn't entirely legal"?
I suspect that it may have been, "large construction without a permit, which permit requires months of government approvals and committee meetings, during which time the community that needed that bridge for its economy continues to suffer". If that is so, I think it would have been better to clearly state it, because I suspect a lot of the people who would most benefit from reading this are not able to intuit that. (Many of them seem unable to see what's wrong with responding to any proposal to do something with "more research before you actually do anything".)

On Fri, Nov 19, 2010 at 10:54 AM, Bryan Bishop wrote:
> Tinkerers: a tale of the future by David Brin
> http://itdnhr.com/static/tinkerersPDF.pdf
>
> I found it amusing that they complain that the students in the US all went into service and entertainment industries, but that the message is being conveyed via an entertainment medium. But other than that it's neat to see David working on a story in this area.
>
> - Bryan
> http://heybryan.org/
> 1 512 203 0507
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From kanzure at gmail.com Fri Nov 19 20:42:54 2010
From: kanzure at gmail.com (Bryan Bishop)
Date: Fri, 19 Nov 2010 14:42:54 -0600
Subject: [ExI] Tinkerers: a graphic novel about making things (David Brin)
In-Reply-To:
References:
Message-ID:

2010/11/19 Adrian Tymes :
> Does anyone have ideas on why the final bit "wasn't entirely legal"? I suspect that it may have been, "large construction without a permit, which permit requires months of government approvals and committee meetings, during which time the community that needed that bridge for its economy continues to suffer". If that is so, I think it would have been better to clearly state it, because I suspect a lot of the people who would most benefit from reading this are not able to intuit that.

That was my guess as well. I have seen a number of instances where businesses complain about the piles and piles of rules, regulations, laws, fees, taxes, patent applications, patent restrictions, cross-licensing agreements, mineral rights, airspace rights, export restrictions, import restrictions, healthcare agreements, subcontracting agreements, and bribes required just to begin manufacturing or building something somewhere. I can imagine it becomes burdensome. Some of this, I would guess, is completely useful and necessary, like environmental impact studies and other individual line items that look beneficial, but that (overall) become too burdensome for any reasonable person to bother to deal with. Instead, you could just move to another country to build your factory and not have to deal with 80% of the initial overhead. I remember seeing interesting suggestions like "every 4 years, the US must throw away its law code and rewrite it" as a way to make things more manageable - an interesting solution, but probably impractical.

I think this poses an interesting question though: is it primarily all of this overhead required before building and making things in the US that limits business, or cultural stigmas towards this sort of activity and understanding it?
- Bryan
http://heybryan.org/
1 512 203 0507

From bbenzai at yahoo.com Fri Nov 19 21:41:54 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Fri, 19 Nov 2010 13:41:54 -0800 (PST)
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
Message-ID: <28726.65164.qm@web114415.mail.gq1.yahoo.com>

Dave Sill wrote:
> Just lock someone in a jail cell, weld the door shut, and walk away. No amount of genius is going to get them out of the cell.

Are you serious?

I remember, long (so long!) ago, playing a role-playing game, and I tried to play a character that was more intelligent than I was. It's pretty much impossible. I soon realised this, and reverted to a really dumb character.

The point here is that a superintelligent person can think of things that you can't possibly think of, and we have to factor that into thinking about AI. We're in the position of the two-dimensional beings in Flatland, encountering 3-d beings for the first time.

How do we know there isn't some way for electrons whizzing around in copper wires to create long-distance effects, for example? (Probably a very poor example.) Any super-intelligent being is going to be quite good at figuring out physics that we can't even begin to imagine.

The only 'safe' AI will be a dead one. As long as you can talk to it, and it can talk back, as long as it can even think to itself, it will figure out a way to get free. I don't care how many safeguards you put in place, you're always in the position of a child wrapping a ribbon around a gorilla and thinking that will contain it.

Just because you (or me, or any other human) can't think of a way out of a sealed room doesn't mean there is no way out. Anyway, the first thing that comes to my mind is why bother? If you can rule the world while safely ensconced behind blast-proof doors, that sounds like a good idea! And as long as a super-intelligent being can communicate with humans, it will have the ability to rule the world, if that's what it wants.

Ben Zaiboc

From pharos at gmail.com Fri Nov 19 22:06:01 2010
From: pharos at gmail.com (BillK)
Date: Fri, 19 Nov 2010 22:06:01 +0000
Subject: [ExI] Is psi statistics methodology wrong?
Message-ID:

Now that every man and his dog on the interweb are talking about the latest Bem precognition results, some skeptics have started to respond with criticisms of the experiments.

One paper complains about the statistics being produced. Not just for the Bem results, but for all the psi testing. It is quite a complex paper and probably needs a stats expert to fully understand what they are getting at.

The paper is a 12-page PDF file, but here is the Abstract:

Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi
Eric-Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, & Han van der Maas
University of Amsterdam

Abstract
Does psi exist? In a recent article, Dr. Bem conducted nine studies with over a thousand participants in an attempt to demonstrate that future events retroactively affect people's responses. Here we discuss several limitations of Bem's experiments on psi; in particular, we show that the data analysis was partly exploratory, and that one-sided p-values may overstate the statistical evidence against the null hypothesis. We reanalyze Bem's data using a default Bayesian t-test and show that the evidence for psi is weak to nonexistent.
We argue that in order to convince a skeptical audience of a controversial claim, one needs to conduct strictly confirmatory studies and analyze the results with statistical tests that are conservative rather than liberal. We conclude that Bem's p-values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.

----------------
BillK
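For anyone who wants to poke at the contrast the abstract describes, here is a toy numerical sketch. The t statistic and sample size are made-up values in the rough vicinity of Bem's studies, not his data, and the Bayes factor is the default JZS one of Rouder et al. (2009), with a Cauchy(0, 1) prior on effect size, which the Wagenmakers et al. analysis builds on.

    # One-sided p-value vs. default Bayesian t-test, on assumed numbers.
    import numpy as np
    from scipy import stats, integrate

    def jzs_bf01(t, n):
        # JZS Bayes factor (null over alternative) for a one-sample t test,
        # following Rouder et al. (2009).
        v = n - 1  # degrees of freedom
        null_like = (1 + t**2 / v) ** (-(v + 1) / 2)
        def integrand(g):
            return ((1 + n * g) ** -0.5
                    * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
        alt_like, _ = integrate.quad(integrand, 0, np.inf)
        return null_like / alt_like

    t, n = 2.0, 100                     # assumed, roughly Bem-sized numbers
    p_one_sided = stats.t.sf(t, n - 1)  # one-sided p, H1: effect > 0
    bf01 = jzs_bf01(t, n)
    print(f"one-sided p = {p_one_sided:.3f}")  # comfortably below .05
    print(f"JZS BF01 = {bf01:.2f} (BF10 = {1 / bf01:.2f})")

    # Posterior odds = prior odds x BF10; with a deeply skeptical prior,
    # even a respectable Bayes factor leaves psi microscopically unlikely.
    prior_odds = 1e-20
    print(f"posterior odds of psi = {prior_odds / bf01:.3e}")

With numbers in this vicinity the one-sided p comes out "significant" while the Bayes factor lands somewhere around 1, often slightly favoring the null - the mismatch the authors are complaining about. The last lines anticipate Damien's point further down the digest about what an extremely low prior does to any result.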
From agrimes at speakeasy.net Fri Nov 19 16:01:27 2010
From: agrimes at speakeasy.net (Alan Grimes)
Date: Fri, 19 Nov 2010 11:01:27 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <313815.28424.qm@web114411.mail.gq1.yahoo.com>
References: <313815.28424.qm@web114411.mail.gq1.yahoo.com>
Message-ID: <4CE69F57.7020704@speakeasy.net>

>> The subset cannot contain more than the set.
> That's the wrong diagram. Think of two intersecting sets instead. The union of the two sets is greater than one of them.

Where are you getting the extra stuff from? VR is an inherently uninteresting subset of the universe. (Because it appears to be fundamentally impossible to obtain additional stuff from outside the universe.) So building a VR system may or may not make baseline reality more complex and add a pattern that hadn't previously existed, but then by definition baseline reality includes all possible VRs.

> Nobody (except you) is claiming that uploading would be a one-way trip to a virtual world totally disconnected from the rest of reality.

The proposals I've seen, especially from Spike, indicate that all of base reality will be paved over like a Walmart parking lot, cuz everything will be either a star or computronium. Because of that, uploading will effectively be a one-way trip. =(

--
DO NOT USE OBAMACARE.
DO NOT BUY OBAMACARE.
Powers are not rights.

From jonkc at bellsouth.net Fri Nov 19 22:31:46 2010
From: jonkc at bellsouth.net (John Clark)
Date: Fri, 19 Nov 2010 17:31:46 -0500
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References:
Message-ID: <795A7821-2A90-41FF-B4D0-6F19EE3BBEB8@bellsouth.net>

On Nov 19, 2010, at 1:52 PM, Dave Sill wrote:
>> There would be absolutely no point in building an AI if you just lock it up in a box with no way for the outside world to interact with it.
>
> Sure there would. It could solve problems and teach us.

A teacher astronomically smarter than you would manipulate you like a puppet.

>> A much better analogy would be Einstein in charge of weapons development and production, world monetary transfer, electric power generation and distribution, worldwide communication lines, air traffic control, nuclear power plants and pretty much the entire economy.
>
> If you hand it that kind of control, the game is over. You're its slave, whether you realize it or not.

Exactly, and although not super-intelligent, computers already run much of that sort of stuff; and if you refuse to let a super-intelligence make weapons or run your economy, somebody else will, and they will have better weapons and a stronger economy than you and take over.

>> When you ask Einstein why he made one decision rather than another he tries to tell you but after about 20 seconds of his explanation you become totally lost and confused.
>
> The AGI is a supergenius but unable to explain itself?

Can you make your dog understand calculus?

John K Clark

From possiblepaths2050 at gmail.com Fri Nov 19 22:52:39 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 15:52:39 -0700
Subject: [ExI] Arnold Schwarzenegger will be the new champion of the global warming movement
In-Reply-To:
References:
Message-ID:

Stefano Vaj wrote:
> The new champion of the GW movement or the against-GW movement? :-/

Against! lol Yes, considering his strong Republican ties, it was a sensible question to ask...

On 11/19/10, Stefano Vaj wrote:
> On 19 November 2010 05:09, John Grigg wrote:
>> Arnold is already carving out a new place for himself, in his soon to be post-governor of California life...
>
> The new champion of the GW movement or the against-GW movement? :-/
>
> --
> Stefano Vaj
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From possiblepaths2050 at gmail.com Fri Nov 19 22:55:45 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 15:55:45 -0700
Subject: [ExI] Arnold Schwarzenegger will be the new champion of the global warming movement
In-Reply-To:
References:
Message-ID:

Erm.... Actually he is a supporter & champion of the people who believe "humanity is screwed unless we lower the levels of pollution that contribute to global warming."

John

On 11/19/10, John Grigg wrote:
> Stefano Vaj wrote:
>> The new champion of the GW movement or the against-GW movement? :-/
>
> Against! lol Yes, considering his strong Republican ties, it was a sensible question to ask...
>
> On 11/19/10, Stefano Vaj wrote:
>> On 19 November 2010 05:09, John Grigg wrote:
>>> Arnold is already carving out a new place for himself, in his soon to be post-governor of California life...
>>
>> The new champion of the GW movement or the against-GW movement? :-/
>>
>> --
>> Stefano Vaj

From aleksei at iki.fi Fri Nov 19 22:33:15 2010
From: aleksei at iki.fi (Aleksei Riikonen)
Date: Sat, 20 Nov 2010 00:33:15 +0200
Subject: [ExI] Best case, was Hard Takeoff
In-Reply-To:
References:
Message-ID:

On Fri, Nov 19, 2010 at 9:18 PM, Keith Henson wrote:
>
> Re these threads, I have not seen any ideas here that have not been considered for a *long* time on the sl4 list.
>
> Sorry.

Yeah. It's really painful to read transhumanist mailing lists these days, the quality of discussion is so low. (This is true of SL4 too, though there it's more a matter of silence. All the quality discussion seems to have moved to e.g. certain blogs and websites, instead of mailing lists, which were on the intellectual forefront in ages past.)

--
Aleksei Riikonen - http://www.iki.fi/aleksei

From possiblepaths2050 at gmail.com Fri Nov 19 23:10:19 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 16:10:19 -0700
Subject: [ExI] Best case, was Hard Takeoff
In-Reply-To:
References:
Message-ID:

Aleksei wrote:
> Yeah. It's really painful to read transhumanist mailing lists these days, the quality of discussion is so low. (This is true of SL4 too, though there it's more a matter of silence. All the quality discussion seems to have moved to e.g. certain blogs and websites, instead of mailing lists, which were on the intellectual forefront in ages past.)

I am so tired of people whining about this subject!
lol I remember when I first joined this list *eleven* years ago, people were pining for the "golden age" of years past...

John

On 11/19/10, Aleksei Riikonen wrote:
> On Fri, Nov 19, 2010 at 9:18 PM, Keith Henson wrote:
>> Re these threads, I have not seen any ideas here that have not been considered for a *long* time on the sl4 list.
>>
>> Sorry.
>
> Yeah. It's really painful to read transhumanist mailing lists these days, the quality of discussion is so low. (This is true of SL4 too, though there it's more a matter of silence. All the quality discussion seems to have moved to e.g. certain blogs and websites, instead of mailing lists, which were on the intellectual forefront in ages past.)
>
> --
> Aleksei Riikonen - http://www.iki.fi/aleksei
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From thespike at satx.rr.com Fri Nov 19 23:19:46 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Nov 2010 17:19:46 -0600
Subject: [ExI] Is psi statistics methodology wrong?
In-Reply-To:
References:
Message-ID: <4CE70612.6080100@satx.rr.com>

On 11/19/2010 4:06 PM, BillK wrote:
> One paper complains about the statistics being produced. Not just for the Bem results, but for all the psi testing.

Well, for just about *all* experimental results in *all* disciplines using frequentist stats. It's interesting that Bayesians regularly (one is tempted to say frequently) offer such critiques, but mainstream psychology and other disciplines show no eagerness to cast off their traditional means of analyzing significance. The particular problem with Bayes applied to paradigm-challenging empirical results is that priors are set so extremely low that just about *no* results can ever get over the finishing line. (Look at their exemplary prior: 0.00000000000000000001.) This is not a criticism of Bayes, precisely, but it's something to bear in mind--especially if this critique is embraced with wise nods from many people who cling to frequentist analyses in their own work.

Damien Broderick

From thespike at satx.rr.com Fri Nov 19 23:29:58 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Nov 2010 17:29:58 -0600
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To: <795A7821-2A90-41FF-B4D0-6F19EE3BBEB8@bellsouth.net>
References: <795A7821-2A90-41FF-B4D0-6F19EE3BBEB8@bellsouth.net>
Message-ID: <4CE70876.1060604@satx.rr.com>

On 11/19/2010 4:31 PM, John Clark wrote:
> Can you make your dog understand calculus?

I can't even make my dog understand skunks. The damnfool thing keeps getting sprayed, with horrible aversive consequence to him (and us).

Damien Broderick

From dan_ust at yahoo.com Fri Nov 19 23:10:33 2010
From: dan_ust at yahoo.com (Dan)
Date: Fri, 19 Nov 2010 15:10:33 -0800 (PST)
Subject: [ExI] Inconsistent Mathematics
Message-ID: <484316.7165.qm@web30106.mail.mud.yahoo.com>

http://plato.stanford.edu/entries/mathematics-inconsistent/

Don't know if this topic has ever been discussed here before...

Regards,

Dan

From aleksei at iki.fi Fri Nov 19 23:37:53 2010
From: aleksei at iki.fi (Aleksei Riikonen)
Date: Sat, 20 Nov 2010 01:37:53 +0200
Subject: [ExI] Best case, was Hard Takeoff
In-Reply-To:
References:
Message-ID:

On Sat, Nov 20, 2010 at 1:10 AM, John Grigg wrote:
> Aleksei wrote:
>> Yeah. It's really painful to read transhumanist mailing lists these days, the quality of discussion is so low. (This is true of SL4 too, though there it's more a matter of silence. All the quality discussion seems to have moved to e.g. certain blogs and websites, instead of mailing lists, which were on the intellectual forefront in ages past.)
>
> I am so tired of people whining about this subject! lol I remember when I first joined this list *eleven* years ago, people were pining for the "golden age" of years past...

To be fair, I'll add that there are still *some* discussions/notes/announcements that are worth reading (and one can learn to filter down to them). After all, that's why I'm still subscribed to many lists, even though I ignore >90% of the content.

Also, there's nothing wrong with some people using mailing lists for fun casual chat. The painful part is the discussion that tries to be serious but is of really low quality.

--
Aleksei Riikonen - http://www.iki.fi/aleksei

From possiblepaths2050 at gmail.com Fri Nov 19 23:54:56 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 16:54:56 -0700
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To: <4CE70876.1060604@satx.rr.com>
References: <795A7821-2A90-41FF-B4D0-6F19EE3BBEB8@bellsouth.net> <4CE70876.1060604@satx.rr.com>
Message-ID:

Damien, what breed of dog do you have? Domesticated dogs tend to have very weak survival skills/instincts when dealing with nature. This was pounded into my head, growing up in Alaska.

John

On 11/19/10, Damien Broderick wrote:
> On 11/19/2010 4:31 PM, John Clark wrote:
>
>> Can you make your dog understand calculus?
>
> I can't even make my dog understand skunks. The damnfool thing keeps getting sprayed, with horrible aversive consequence to him (and us).
>
> Damien Broderick
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

From stathisp at gmail.com Fri Nov 19 23:27:08 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 20 Nov 2010 10:27:08 +1100
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE47953.5080206@speakeasy.net>
References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net>
Message-ID:

2010/11/18 Alan Grimes :
> So what? Who cares about the cat? I only care about me. The hidden magic of uploading is that for it to be useful to the subject, the subject must possess the supernatural power of being able to choose his point of view. =P

This position assumes that there is a subject over and above the matter contained in your brain or the information that matter represents. It would entail an entity that flits from body to body, determining that this one is you and that one is not (artificial destructive copying no, natural destructive copying yes). There is no evidence that such an entity exists and it is doubtful that the concept is even coherent. The reductionist view explains all observation and all experience simply and consistently.

--
Stathis Papaioannou

From stathisp at gmail.com Sat Nov 20 01:33:34 2010
From: stathisp at gmail.com (Stathis Papaioannou)
Date: Sat, 20 Nov 2010 12:33:34 +1100
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To: <28726.65164.qm@web114415.mail.gq1.yahoo.com>
References: <28726.65164.qm@web114415.mail.gq1.yahoo.com>
Message-ID:

On Sat, Nov 20, 2010 at 8:41 AM, Ben Zaiboc wrote:
> Dave Sill wrote:
>
>> Just lock someone in a jail cell, weld the door shut, and walk away. No amount of genius is going to get them out of the cell.
>
> Are you serious?
>
> I remember, long (so long!) ago, playing a role-playing game, and I tried to play a character that was more intelligent than I was. It's pretty much impossible. I soon realised this, and reverted to a really dumb character.
>
> The point here is that a superintelligent person can think of things that you can't possibly think of, and we have to factor that into thinking about AI. We're in the position of the two-dimensional beings in Flatland, encountering 3-d beings for the first time.
>
> How do we know there isn't some way for electrons whizzing around in copper wires to create long-distance effects, for example? (Probably a very poor example.) Any super-intelligent being is going to be quite good at figuring out physics that we can't even begin to imagine.
>
> The only 'safe' AI will be a dead one. As long as you can talk to it, and it can talk back, as long as it can even think to itself, it will figure out a way to get free. I don't care how many safeguards you put in place, you're always in the position of a child wrapping a ribbon around a gorilla and thinking that will contain it.
>
> Just because you (or me, or any other human) can't think of a way out of a sealed room doesn't mean there is no way out. Anyway, the first thing that comes to my mind is why bother? If you can rule the world while safely ensconced behind blast-proof doors, that sounds like a good idea!
>
> And as long as a super-intelligent being can communicate with humans, it will have the ability to rule the world, if that's what it wants.

But it may be that it is *impossible* to convince a particular jailer to let you out. If you had godlike intelligence this might be obvious to you. If you had godlike intelligence but only incomplete access to your jailer's neurological makeup it might be *impossible* to work out if the jailer can be talked into letting you out or not, and having godlike intelligence this impossibility might be obvious to you. Being superintelligent is not the same as being omnipotent.

--
Stathis Papaioannou

From possiblepaths2050 at gmail.com Sat Nov 20 01:56:15 2010
From: possiblepaths2050 at gmail.com (John Grigg)
Date: Fri, 19 Nov 2010 18:56:15 -0700
Subject: [ExI] What might be enough for a friendly AI?.
In-Reply-To:
References: <28726.65164.qm@web114415.mail.gq1.yahoo.com>
Message-ID:

Stathis wrote:
> But it may be that it is *impossible* to convince a particular jailer to let you out. If you had godlike intelligence this might be obvious to you. If you had godlike intelligence but only incomplete access to your jailer's neurological makeup it might be *impossible* to work out if the jailer can be talked into letting you out or not, and having godlike intelligence this impossibility might be obvious to you. Being superintelligent is not the same as being omnipotent.

I just hope the jailer *never* makes any kind of mistake...
John ; )

From msd001 at gmail.com Sat Nov 20 02:07:01 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 19 Nov 2010 21:07:01 -0500
Subject: [ExI] Hard Takeoff
In-Reply-To: <000901cb87b2$ba074c40$2e15e4c0$@att.net>
References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net>
Message-ID:

On Fri, Nov 19, 2010 at 1:26 AM, spike wrote:
> Our current efforts might influence the AGI but we have no way to prove it. Backing away from the AGI development effort is not really an option, or rather not a good one, for without an AGI, time will take us all anyway. I give us a century, two centuries as a one sigma case.
>
> Mike, given that paradigm, are my previous comments understandable?

Understandable, sure. The popular culture of the 1980s could understand imminent nuclear destruction and could do nothing about it either, so there are many references to waiting for the end and enjoying the moment whenever we can. Of course we didn't nuke ourselves back to the stone age. As far as we can tell, humanity is pretty good at avoiding total destruction. Of course total destruction would necessarily wipe clean the historical record and let only the latest observers write history...

From msd001 at gmail.com Sat Nov 20 02:26:02 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 19 Nov 2010 21:26:02 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <000a01cb87b8$79522570$6bf67050$@att.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net>
Message-ID:

On Fri, Nov 19, 2010 at 2:07 AM, spike wrote:
> Mike you have hit upon something that has been weighing on my mind since I realized it a couple months ago: imagine a good-outcome endgame: an MBrain consisting of sun-orbiting computronium, the technogeek version of heaven, everything turned out right, all humans were voluntarily uploaded and so forth. But we are not finished with war! It isn't the kind of war where there is any injury or serious death, no projectiles, no homes burned, no hungry refugees. But it will still have the potential of memetic warfare, a risk which does not necessarily diminish with time.

We are still a threat to ourselves. Even with an inconceivable level of AI beneficence, we still have our nature to threaten survival. If that technogeek heaven is a million times more wonderful than Now, we had better plan on dealing with manmade threats that are at least a million times as threatening.

>> ...Now we just have to remember to build the off switch first and put it in a place that we can use easily...
>
> It isn't that simple Mike. To use that off switch might be considered murder. There may not be unanimous consent to use it. There might be emphatic resistance on the part of some team members to using it. It might not be clear who is authorized to use it. Think it over and come back tomorrow with a list of reasons why it really isn't as simple as having a big power-cutting panic button.

Of course not. Murder? We might call it self-defense. We might call it a lot of things, if only to justify our action.
I'm sure the ants you tolerate for your amusement would not be so welcome chewing into the woodwork of your house - of course there are some who would consider it murder for you to defend your home with poisons. Conversely, we are the ants compared to the runaway fast takeoff by the time we realize we want to stop it - and it could conceivably have as little concern about ridding the local environment of its pests using the most effective pesticides available.

Perhaps I'm delusionally naive, but I imagine we will continue to evolve along with our creations, so what we perceive today as a threat to tomorrow is by that time only a compelling challenge. We'll either meet that challenge or we won't. Either way it'll be a team effort.

From msd001 at gmail.com Sat Nov 20 02:32:40 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 19 Nov 2010 21:32:40 -0500
Subject: [ExI] What might be enough for a friendly AI?
In-Reply-To: <4CE69A4F.30504@speakeasy.net>
References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net> <4CE69A4F.30504@speakeasy.net>
Message-ID:

On Fri, Nov 19, 2010 at 10:39 AM, Alan Grimes wrote:
>> Mike you have hit upon something that has been weighing on my mind since I realized it a couple months ago: imagine a good-outcome endgame: an MBrain consisting of sun-orbiting computronium, the technogeek version of heaven, everything turned out right, all humans were voluntarily uploaded and so forth.
>
> Impossible; case in point. =\
>
> I'm sick to death of people proclaiming what I should want in my future.

Geez Alan, can't you see that Spike meant "all humans who wish to be uploaded did so voluntarily"? Perhaps you meekly remain as the sole soul to inherit the earth. And if you were truly sick all the way to the point of death then there would be no further complaining. :) So I think if you are going to use language like "sick to death" then you should grant others some latitude in word choice and treat the group as a collection of (mostly) friendly people who are just hangin' out and chatting.

From msd001 at gmail.com Sat Nov 20 02:38:14 2010
From: msd001 at gmail.com (Mike Dougherty)
Date: Fri, 19 Nov 2010 21:38:14 -0500
Subject: [ExI] The atoms red herring. =|
In-Reply-To: <4CE69F57.7020704@speakeasy.net>
References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net>
Message-ID:

On Fri, Nov 19, 2010 at 11:01 AM, Alan Grimes wrote:
> The proposals I've seen, especially from Spike, indicate that all of base reality will be paved over like a Walmart parking lot, cuz everything will be either a star or computronium. Because of that, uploading will effectively be a one-way trip. =(

... but all reality with a pH of less than 7 will be awesome; or at least unlike a Walmart parking lot.

From thespike at satx.rr.com Sat Nov 20 03:13:23 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Nov 2010 21:13:23 -0600
Subject: [ExI] difficulties getting to ExI
In-Reply-To:
References: <28726.65164.qm@web114415.mail.gq1.yahoo.com>
Message-ID: <4CE73CD3.2070606@satx.rr.com>

This is happening a lot:

From thespike at satx.rr.com Sat Nov 20 03:13:38 2010
From: thespike at satx.rr.com (Damien Broderick)
Date: Fri, 19 Nov 2010 21:13:38 -0600
Subject: [ExI] Is psi statistics methodology wrong?
In-Reply-To: <4CE70612.6080100@satx.rr.com> References: <4CE70612.6080100@satx.rr.com> Message-ID: <4CE73CE2.3010705@satx.rr.com> Here's an interesting paper by Brad Efron on empirical Bayes, which to some extent works around prejudiced priors: "MODERN SCIENCE AND THE BAYESIAN-FREQUENTIST CONTROVERSY" From spike66 at att.net Sat Nov 20 04:24:04 2010 From: spike66 at att.net (spike) Date: Fri, 19 Nov 2010 20:24:04 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <004401cb886a$c429b650$4c7d22f0$@att.net> Hi Ben, good to see you posting here. We haven't seen your posts in a long time, welcome bud. {8-] spike -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Ben Goertzel Sent: Friday, November 19, 2010 8:35 AM To: ExI chat list Subject: Re: [ExI] Hard Takeoff Hi all, I have skimmed this thread and I find that Samantha's views are pretty similar to mine. There is a strong argument that a hard takeoff is plausible...-- Ben G From sjatkins at mac.com Sat Nov 20 04:42:42 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 19 Nov 2010 20:42:42 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net> Message-ID: On Nov 19, 2010, at 9:49 AM, Stefano Vaj wrote: > On 19 November 2010 07:26, spike wrote: >> Our current efforts might influence the AGI but we have no way to prove it. >> Backing away from the AGI development effort is not really an option, or >> rather not a good one, for without an AGI, time will take us all anyway. I >> give us a century, two centuries as a one sigma case. > > What remains very vague and fuzzy in such discourse is why an > "intelligent" (whatever it may mean...) computer would be more > "dangerous" (whatever it may mean...) per se than a non-intelligent > one of equivalent power. > Intelligence is a type and source of power. That is the point of AGI. There is no equivalence of power between an AGI and a computer that merely has the same amount of raw computational ability. From sjatkins at mac.com Sat Nov 20 04:44:54 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 19 Nov 2010 20:44:54 -0800 Subject: [ExI] What might be enough for a friendly AI? In-Reply-To: References: <001b01cb86a3$d198e2c0$74caa840$@att.net> <004401cb86ae$3443c840$9ccb58c0$@att.net> <004501cb8743$c97cd0b0$5c767210$@att.net> <000a01cb87b8$79522570$6bf67050$@att.net> <003501cb880c$701c0260$50540720$@att.net> Message-ID: <4842A220-3B0C-4A81-95A4-61FB94FF36C3@mac.com> On Nov 19, 2010, at 10:39 AM, Dave Sill wrote: > On Fri, Nov 19, 2010 at 12:08 PM, spike wrote: >> >> Ja, but of course the program is recursively self modifying. It is writing >> to a disk or nonvolatile memory of some sort. When software is running, it >> isn't entirely clear what it is doing, and in any case it is doing it very >> quickly. Imagine the program does something unpredictable or scary, and we >> hit the power switch. It has a bunch of new code on the disk, but we don't >> know what it does, if anything. We have the option of reloading back to the >> previous saved version, but that is the one that generated this unknown >> bittage. > > Right, so the team of experts decides whether to revert to a known > checkpoint, examine the new code, beef up the containment, etc. 
Nope. No team of experts will be able to remotely keep up with such a system. If they could you would not have needed the system. -s From sjatkins at mac.com Sat Nov 20 04:50:38 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Fri, 19 Nov 2010 20:50:38 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: On Nov 19, 2010, at 11:18 AM, Keith Henson wrote: > Re these threads, I have not seen any ideas here that have not been > considered for a *long* time on the sl4 list. > > Sorry. Quite correct and mostly in much greater depth as well. I was beginning to wonder why all this needed to be rehashed yet again with apparently little gained from the copious previous hashing. - s From spike66 at att.net Sat Nov 20 04:40:35 2010 From: spike66 at att.net (spike) Date: Fri, 19 Nov 2010 20:40:35 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: <004501cb886d$12dff0f0$389fd2d0$@att.net> ... On Behalf Of Aleksei Riikonen ... >... All the quality discussion seems to have moved to e.g. certain blogs and websites, instead of mailing lists, which were on the intellectual forefront in ages past.) --Aleksei Riikonen - I have noticed that trend as well, particularly in the mathematics forums. Some of you IP hipsters can perhaps clue me if my theory is correct: the reason the blogs and websites have become a popular forum is that the blog owner actually owns the content that is freely donated to her website. On an email forum, it isn't clear to me that the originator of the internet group actually owns the intellectual property posted there, but rather it is considered public domain. What if someone posts a really good idea here for instance. Can anyone patent the idea? What if the idea is posted to a blog? Is there any difference in IP ownership between the two cases? spike From spike66 at att.net Sat Nov 20 04:57:56 2010 From: spike66 at att.net (spike) Date: Fri, 19 Nov 2010 20:57:56 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net> Message-ID: <004801cb886f$7f5c64a0$7e152de0$@att.net> ... On Behalf Of Mike Dougherty Subject: Re: [ExI] The atoms red herring. =| On Fri, Nov 19, 2010 at 11:01 AM, Alan Grimes wrote: > The proposals I've seen, especially from Spike, indicate that all of > base reality will be paved over like a wallmart parking lot, cuz > everything will be either a star or computronium. Because of that, > uploading will effectively be a one-way trip. =( Ja in a sense, I see how it could be considered a one-way trip in the same sense that conversion of a wilderness area to a city is a one-way trip. We really have not the option of converting San Francisco back into a wildlife preserve. It's too late for that. So its conversion in the past couple hundred years from nearly a wilderness into a metropolitan megacity is a one way trip. I don't know the answer to those who want to stay in a carbon based existence. If it is physically possible to upload into a super-compact form (no guarantee that it is possible) I don't really see it as practical to stay in carbon, unless the uploads specifically decide to make that so. spike From agrimes at speakeasy.net Sat Nov 20 07:36:40 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sat, 20 Nov 2010 02:36:40 -0500 Subject: [ExI] The atoms red herring. 
=| In-Reply-To: <004801cb886f$7f5c64a0$7e152de0$@att.net> References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net> <004801cb886f$7f5c64a0$7e152de0$@att.net> Message-ID: <4CE77A88.8050204@speakeasy.net> spike wrote: > On Fri, Nov 19, 2010 at 11:01 AM, Alan Grimes wrote: >> The proposals I've seen, especially from Spike, indicate that all of >> base reality will be paved over like a wallmart parking lot, cuz >> everything will be either a star or computronium. Because of that, >> uploading will effectively be a one-way trip. =( > Ja in a sense, I see how it could be considered a one-way trip in the same > sense that conversion of a wilderness area to a city is a one-way trip. We > really have not the option of converting San Francisco back into a wildlife > preserve. It's too late for that. So its conversion in the past couple > hundred years from nearly a wilderness into a metropolitan megacity is a one > way trip. > I don't know the answer to those who want to stay in a carbon based > existence. If it is physically possible to upload into a super-compact form > (no guarantee that it is possible) I don't really see it as practical to > stay in carbon, unless the uploads specifically decide to make that so. Why do you care whether I weigh twelve grams or twelve tons? Really, what difference does it make to you? Why are you obsessed with compactness? Why would anyone be? Crowding sucks anyway. Furthermore, practicality is my business, not yours. I will specify what is practical, and what is not based on nothing other than personal fiat. I do not request or require you to do a single damn thing for me. I don't want a single red cent out of you. Seriously, you need to re-examine your thinking. What does the concept of practicality have to do with SOMEONE ELSE'S choices? I don't rule any uploads, and I will be damned before I listen to anything the uploads have to say. I have my own dreams, plans and ambitions that have nothing whatsoever to do with them. If the feeling were mutual, I wouldn't be writing this post. Why do you feel entitled to all of my atoms? Why won't the uploaders allow me to own the things I own right now? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From jonkc at bellsouth.net Sat Nov 20 08:15:36 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 20 Nov 2010 03:15:36 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE77A88.8050204@speakeasy.net> References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net> <004801cb886f$7f5c64a0$7e152de0$@att.net> <4CE77A88.8050204@speakeasy.net> Message-ID: <4B1A80B3-4D0C-42F5-9E6E-8A617AC1BD46@bellsouth.net> On Nov 20, 2010, at 2:36 AM, Alan Grimes wrote: > Why do you feel entitled to all of my atoms? Atoms? I thought you said atoms didn't enter into it. I admit that even red herrings are made of atoms but there is nothing special about them, they are generic. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Sat Nov 20 02:21:08 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Fri, 19 Nov 2010 21:21:08 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net> Message-ID: <4CE73094.60903@speakeasy.net> Stathis Papaioannou wrote: > 2010/11/18 Alan Grimes : >> So what? Who cares about the cat? I only care about me. 
The hidden magic >> of uploading is that for it to be useful to the subject, the subject >> must possess the supernatural power of being able to choose his point of >> view. =P > This position assumes that there is a subject over and above the > matter contained in your brain or the information that matter > represents. It would entail an entity that flits from body to body, > determining that this one is you and that one is not (artificial > destructive copying no, natural destructive copying yes). There is no > evidence that such an entity exists and it is doubtful that the > concept is even coherent. The reductionist view explains all > observation and all experience simply and consistently. What the hell is this argument about? You will note that my response to postings has been selective because I want this infernal thread to end. I am on this list in order to make productive steps towards transhumanism. Here is a list of things wrong with this thread: 1. We are not discussing important issues like how to form a coalition determined to prevent ALL of the solar system from being turned into computronium. I have no objection of any kind to SOME of the solar system being used as computronium, perhaps even one or two full planets, not all of them, and not Earth. Why is that such a difficult problem? 2. My position has never been that you should not upload. My position has always been that I do not want to upload. I have done you the courtesy of stating my reasons why, even though I have absolutely no obligation to do so, or to defend my reasoning. I have made my choice; it is now your responsibility to respect it. Why, then, does this thread continue without end even though ***YOU HAVE NO SKIN IN THE FIRE!!!*** 3. So even though I have made no threats against you (I should, come to think of it), you continue to harass me about my positions which are self-evident within the ontologies I use to conduct my daily affairs, compounded by the ultimatum "Either choose to upload or we will choose for you!". Yes, that has been said, oh people who don't pay much attention. 4. I fully acknowledge that my interest in transhumanism is fueled by the perpetual motion machine also known as my own personal deviations. It is irrefutable that many other people in the movement are similarly motivated. Why is it that some people's deviations and fetishes are cherished while mine are disparaged to the point of saying that my wishes should be completely ignored for no other reason than I don't get a boner by thinking about computronium? 5. I have exhaustively reviewed every argument against uploading. It is clear that you have selected an artificially narrowed world-view crafted such that uploading naturally emerges from the system. Kurt Godel proved that math is either incomplete or inconsistent. Science can tell you what you might feel about something but it can't tell you what you *SHOULD* feel about it. The only place you can obtain meaning and purpose is through philosophy. (Some people try to obtain it through religion but they suck). My philosophical claim here is that identity is self-evident. It does not require a soul, it does not require atoms, it does not require patterns, it exists because it exists, but its existence is not less real or significant than that of any emergent property of any hand-picked assemblage of atoms. Why do you require that I believe otherwise? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
From giulio at gmail.com Sat Nov 20 10:46:19 2010 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 20 Nov 2010 11:46:19 +0100 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE77A88.8050204@speakeasy.net> References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net> <004801cb886f$7f5c64a0$7e152de0$@att.net> <4CE77A88.8050204@speakeasy.net> Message-ID: Very well said. Everyone should be free to make their own choices as long as they don't practically prevent others from doing the same. That is why I find your obsessive rants against uploading very boring, and I usually don't read them because they are delivered directly to the spam folder. You don't want to upload? Fine, then don't, and give us a break. You may also have noticed that the uploading option is not exactly around the corner, and that is why I find your obsessive rants even more boring. OK, now please feel free to continue posting, and I will feel free to continue filtering your posts to the spam folder where they belong. On Sat, Nov 20, 2010 at 8:36 AM, Alan Grimes wrote: > Furthermore, practicality is my business, not yours. I will specify what > is practical, and what is not based on nothing other than personal fiat. > I do not request or require you to do a single damn thing for me. I > don't want a single red cent out of you. Seriously, you need to > re-examine your thinking. What does the concept of practicality have to > do with SOMEONE ELSE'S choices? From stathisp at gmail.com Sat Nov 20 12:13:03 2010 From: stathisp at gmail.com (Stathis Papaioannou) Date: Sat, 20 Nov 2010 23:13:03 +1100 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE73094.60903@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net> <4CE73094.60903@speakeasy.net> Message-ID: On Sat, Nov 20, 2010 at 1:21 PM, Alan Grimes wrote: > 5. I have exhaustively reviewed every argument against uploading. It is > clear that you have selected an artificially narrowed world-view crafted > such that uploading naturally emerges from the system. Uploading doesn't "naturally emerge". It takes a lot of scientific effort. > Kurt Godel proved > that math is either incomplete or inconsistent. Science can tell you > what you might feel about something but it can't tell you what you > *SHOULD* feel about it. True, but that has nothing to do with Godel. > The only place you can obtain meaning and > purpose is through philosophy. (Some people try to obtain it through > religion but they suck). Most people obtain meaning and purpose neither through philosophy nor religion, but through doing what they feel is important due to their genetics and environment. > My philosophical claim here is that identity is > self-evident. It does not require a soul, it does not require atoms, it > does not require patterns, it exists because it exists, but > its existence is not less real or significant than that of any emergent > property of any hand-picked assemblage of atoms. Why do you require that > I believe otherwise? The purpose of discussions such as this is to put beliefs under scrutiny. Your conception of identity seems to have its closest analogy in the soul, in that it is something that cannot be reduced to matter or information and cannot be detected by any empirical means. The reductionist position explains identity more simply, without ad hoc metaphysical entities, and therefore is to be preferred.
But you are of course free to believe anything you like. -- Stathis Papaioannou From jonkc at bellsouth.net Sat Nov 20 16:35:28 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 20 Nov 2010 11:35:28 -0500 Subject: [ExI] The atoms red herring. =| In-Reply-To: <4CE73094.60903@speakeasy.net> References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net> <4CE73094.60903@speakeasy.net> Message-ID: Alan Grimes wrote: > I don't have 1/10^12'th the ego required to assume that the world should > be converted to computronium, tomorrow [...] I am not so arrogant to assume that computronium will be the most prized substance in the universe. Thank you for that information, and I will now return the favor by giving you information about me: I do have sufficient ego to think that matter that reacts in astronomically complex ways, like life or computronium, is more interesting than dead bulk matter. And it's always dangerous to guess what a Jupiter Brain's opinion will be, but I have a hunch He would feel the same way, unless electronic drug addiction proves to be an insurmountable obstacle. > And I am not so self-righteous that I can claim that it would be benevolent to forcibly upload anyone. I don't claim a Jupiter brain would behave benevolently toward us, I do claim if it is benevolent it will upload us, and if it's not then it will obliterate us; expecting it to allow us to continue just as we have and to exist at the same level of reality as its hardware is probably not realistic in any reality. If the Jupiter Brain is really really benevolent and knows you have a superstition against it then when he uploads you he just won't tell you, and all parties walk away from the transaction happy. > Here is a list of things wrong with this thread: > 1. We are not discussing important issues like how to form a coalition > determined to prevent ALL of the solar system from being turned into computronium. GOOD GOD ALMIGHTY! Tell me again about your lack of ego and lack of arrogance! Do you seriously believe that if you and this list came to some sort of consensus on this issue it would affect a Jupiter Brain's decision on how to engineer the universe? > I have no objection of any kind to SOME of the solar system being used as computronium, perhaps even one or two full planets I'm sure a being with a brain the size of a planet and the power to control a supernova will be relieved to know you have consented to allow Him to do that. > My position has never been that you should not upload. My position > has always been that I do not want to upload. As I've said before there is no disputing matters of taste. > I have done you the courtesy of stating my reasons why, even though I have absolutely no > obligation to do so, or to defend my reasoning. Around here you DO have the obligation to defend your reasoning, because the first commandment of this list is "thou shalt not be dull", and there is nothing duller than somebody just enumerating their superstitions. > > > you continue to harass me about my positions which are self-evident within the ontologies I use to conduct my daily affairs, compounded by the ultimatum "Either choose to upload or we will choose > for you!". What's with this "we" business?
I have not heard anybody issue an ultimatum to you; I certainly haven't. I just said what will likely happen to you and me if we are very very very very lucky; if we are just very lucky our eventual outcome will be the same as that of many billions of fellow members of our species who lived long before us. > My philosophical claim here is that identity is self-evident. At last you said something I agree with. > It does not require a soul, it does not require atoms, it does not require patterns As I've said before you are very clear on what it does not require, but WHAT DOES IT REQUIRE? > it exists because it exists Well I'm glad you cleared that up. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Nov 20 17:00:19 2010 From: spike66 at att.net (spike) Date: Sat, 20 Nov 2010 09:00:19 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <313815.28424.qm@web114411.mail.gq1.yahoo.com> <4CE69F57.7020704@speakeasy.net> <004801cb886f$7f5c64a0$7e152de0$@att.net> <4CE77A88.8050204@speakeasy.net> Message-ID: <002401cb88d4$69d0b180$3d721480$@att.net> On Sat, Nov 20, 2010 at 8:36 AM, Alan Grimes wrote: > Furthermore, practicality is my business, not yours. I will specify > what is practical, and what is not based on nothing other than personal fiat. > I do not request or require you to do a single damn thing for me... Alan Alan, it isn't *me* making choices for you. Likely I would have little or no choice in the matter either. The uploads would be devouring me too. Your objections are analogous to one in an Amazonian stream arguing to a school of hungry piranhas that you do not wish to be converted into more piranhas, that you have a right to not be converted to piranhas, and so forth. A cloud of uploads may honor your wishes and not upload you, while converting the ground you stand on to more uploads. Note I am not imposing my will on you, or anyone. Rather I am making an extrapolation, or reasonable prediction, should we discover that uploading is possible. spike From hkeithhenson at gmail.com Sat Nov 20 18:26:13 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sat, 20 Nov 2010 11:26:13 -0700 Subject: [ExI] Designated non-uploader was The atoms red herring. Message-ID: On Sat, Nov 20, 2010 at 5:00 AM, Giulio Prisco wrote: I have a thought. Alan, would you consider being the designated non-uploader, like a designated non-drinking driver? When everyone else has uploaded, you occasionally check the blinken-blinken lights on the control panel for the computronium everyone else has uploaded into. If the blinken-blinken lights quit blinken, your task is to push the reset button and save the entire uploaded world. While this is partly humor, I agree that one-way uploading is personally distasteful. But if all steps were fully reversible, which should be no harder, then most of my objections go away. Also, I have no idea if there is any organization in the transhumanist movement that could make such an appointment. But since I have been appointed "sort of an ur-transhumanist" by RU Sirius http://www.10zenmonkeys.com/2007/02/05/a-reprint-of-an-interview-with-keith-henson-by-ru-sirius-2/ there is no reason I could not appoint you the designated non-uploader if you want the title. Keith > > Very well said. Everyone should be free to make their own choices as > long as they don't practically prevent others from doing the same.
> > That is why I find your obsessive rants against uploading very boring, > and I usually don't read them because they are delivered directly to > the spam folder. You don't want to upload? Fine, then don't, and give > us a break. You may also have noticed that the uploading option is not > exactly around the corner, and that is why I find your obsessive rants > even more boring. > > OK, now please feel free to continue posting, and I will feel free to > continue filtering your posts to the spam folder where they belong. > > On Sat, Nov 20, 2010 at 8:36 AM, Alan Grimes wrote: > >> Furthermore, practicality is my business, not yours. I will specify what >> is practical, and what is not based on nothing other than personal fiat. >> I do not request or require you to do a single damn thing for me. I >> don't want a single red cent out of you. Seriously, you need to >> re-examine your thinking. What does the concept of practicality have to >> do with SOMEONE ELSE'S choices? From nymphomation at gmail.com Sat Nov 20 19:12:23 2010 From: nymphomation at gmail.com (*Nym*) Date: Sat, 20 Nov 2010 19:12:23 +0000 Subject: [ExI] Designated non-uploader was The atoms red herring. In-Reply-To: References: Message-ID: On 20 November 2010 18:26, Keith Henson wrote: > On Sat, Nov 20, 2010 at 5:00 AM, Giulio Prisco wrote: > > I have a thought. Alan, would you consider being the designated > non-uploader, like a designated non-drinking driver? > > When everyone else has uploaded, you occasionally check the > blinken-blinken lights on the control panel for the computronium > everyone else has uploaded into. If the blinken-blinken lights quit > blinken, your task is to push the reset button and save the entire > uploaded world. > nip > > Also, I have no idea if there is any organization in the transhumanist > movement that could make such an appointment. But since I have been > appointed "sort of an ur-transhumanist" by RU Sirius > http://www.10zenmonkeys.com/2007/02/05/a-reprint-of-an-interview-with-keith-henson-by-ru-sirius-2/ > there is no reason I could not appoint you the designated non-uploader > if you want the title. Surely this aspect of transhumanism belongs on the futures market? Once we set up a system for trading 'upload offset credits', the invisible hand of the market will ensure the best & fairest outcome for all? =;o) Heavy splashings, Thee Nymphomation 'I hope you make sure we're properly dead before you start, old rip-beak! Better to die here in the whitecoat's tank, it's little enough dignity we've got left' From stefano.vaj at gmail.com Sat Nov 20 20:14:46 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 20 Nov 2010 21:14:46 +0100 Subject: [ExI] What might be enough for a friendly AI?. In-Reply-To: References: Message-ID: On 19 November 2010 18:42, Henrique Moraes Machado (CI) wrote: > >> >> I disagree. It's pretty easy to contain things if you're careful. A >> moron could have locked Einstein in a jail cell and kept him there >> indefinitely. > > > Einstein perhaps, but what about an experienced con man? Einstein would probably have been a very poor escapist, evader or fugitive, irrespective of his IQ.
I suspect that the ability of an AGI to "escape" might be substantially lower than that of a more specialised programme with equivalent computing power and bandwidth at its disposal. -- Stefano Vaj From stefano.vaj at gmail.com Sat Nov 20 20:19:35 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sat, 20 Nov 2010 21:19:35 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net> Message-ID: On 20 November 2010 05:42, Samantha Atkins wrote: > Intelligence is a type and source of power. That is the point of AGI. There is no equivalence of power between an AGI and a computer that merely has the same amount of raw computational ability. Do you imply that anything which is more "intelligent" is more "powerful" than anything which is less? As a matter of definition, or in any other sense? And, if the former were the case, in which sense would a Turing-passing AGI necessarily be more "intelligent" than other computing systems? -- Stefano Vaj From sjatkins at mac.com Sat Nov 20 22:13:42 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 20 Nov 2010 14:13:42 -0800 Subject: [ExI] The atoms red herring. =| In-Reply-To: References: <4CE19F18.8040200@speakeasy.net> <4EFC2AA1-7DB4-42F8-A700-907395673F4C@bellsouth.net> <4CE47953.5080206@speakeasy.net> <4CE73094.60903@speakeasy.net> Message-ID: On Nov 20, 2010, at 8:35 AM, John Clark wrote: > Alan Grimes wrote: > >> I don't have 1/10^12'th the ego required to assume that the world should >> be converted to computronium, tomorrow [...] I am not so arrogant to assume that computronium will be the most prized substance in the universe. > > Thank you for that information, and I will now return the favor by giving you information about me: I do have sufficient ego to think that matter that reacts in astronomically complex ways, like life or computronium, is more interesting than dead bulk matter. And it's always dangerous to guess what a Jupiter Brain's opinion will be, but I have a hunch He would feel the same way, unless electronic drug addiction proves to be an insurmountable obstacle. > >> And I am not so self-righteous that I can claim that it would be benevolent to forcibly upload anyone. > > I don't claim a Jupiter brain would behave benevolently toward us, I do claim if it is benevolent it will upload us, and if it's not then it will obliterate us; expecting it to allow us to continue just as we have and to exist at the same level of reality as its hardware is probably not realistic in any reality. If the Jupiter Brain is really really benevolent and knows you have a superstition against it then when he uploads you he just won't tell you, and all parties walk away from the transaction happy. > Yes! Given such a brain, which is to say an AGI wishing to maximally self-improve, most local matter would likely be converted into itself or others like it, or it might decide to go elsewhere to do so, with elsewhere perhaps being the Oort Cloud or the asteroid belt, or it might leave the solar system entirely. But the latter is probably unlikely in the short term. If it stays in the solar system then sooner or later it will have much better (from its perspective) use for the chunk of matter we are standing on.
It may leave it alone until it has used most everything else, out of a decision to allow us the relative insanity (when other better choices exist) of continuing biological existence at the bottom of a steep gravity well on the increasingly resource-depleted surface of this planet. - samantha From sjatkins at mac.com Sat Nov 20 22:15:14 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 20 Nov 2010 14:15:14 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net> Message-ID: On Nov 20, 2010, at 12:19 PM, Stefano Vaj wrote: > On 20 November 2010 05:42, Samantha Atkins wrote: >> Intelligence is a type and source of power. That is the point of AGI. There is no equivalence of power between an AGI and a computer that merely has the same amount of raw computational ability. > > Do you imply that anything which is more "intelligent" is more > "powerful" than anything which is less? In the realm of what intelligence is critical for? Yes of course. - s From sjatkins at mac.com Sat Nov 20 21:57:08 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sat, 20 Nov 2010 13:57:08 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: <004501cb886d$12dff0f0$389fd2d0$@att.net> References: <004501cb886d$12dff0f0$389fd2d0$@att.net> Message-ID: On Nov 19, 2010, at 8:40 PM, spike wrote: > ... On Behalf Of Aleksei Riikonen > ... >> ... All the quality discussion seems to have moved to e.g. certain blogs > and websites, instead of mailing lists, which were on the intellectual > forefront in ages past.) --Aleksei Riikonen - > > I have noticed that trend as well, particularly in the mathematics forums. > Some of you IP hipsters can perhaps clue me if my theory is correct: the > reason the blogs and websites have become a popular forum is that the blog > owner actually owns the content that is freely donated to her website. On > an email forum, it isn't clear to me that the originator of the internet > group actually owns the intellectual property posted there, but rather it is > considered public domain. I don't think it has that much to do with ownership per se, at least not in the common senses of IP. But it does allow the blog owner to put out a more coherent, in-depth 'message' over time without it getting lost in all the traffic. It also allows the owner to mediate comments and entries with less drama than might ensue in a mailing list. Of course it is possible to declare oneself the list owner/god and do much the same for that aspect. > > What if someone posts a really good idea here for instance. Can anyone > patent the idea? What if the idea is posted to a blog? Is there any > difference in IP ownership between the two cases? > Sure, if they just post it. The same would be true if it were just posted on a blog or forum, unless very well specified otherwise. Copyright would apply to the posts themselves but not to the actionable ideas contained therein. But IANAL. - s From pharos at gmail.com Sun Nov 21 09:46:51 2010 From: pharos at gmail.com (BillK) Date: Sun, 21 Nov 2010 09:46:51 +0000 Subject: [ExI] Choice?
Message-ID: BillK From sjatkins at mac.com Sun Nov 21 09:54:59 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 21 Nov 2010 01:54:59 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: On Nov 19, 2010, at 3:10 PM, John Grigg wrote: > Aleksei wrote: > Yeah. It's really painful to read transhumanist mailing lists these > days, the quality of discussion is so low. (This is true of SL4 too, > though there it's more a matter of silence. All the quality discussion > seems to have moved to e.g. certain blogs and websites, instead of > mailing lists, which were on the intellectual forefront in ages past.) >>>> > > I am so tired of people whining about this subject! lol I remember > when I first joined this list *eleven* years ago, people were pining > for the "golden age" of years past... > Well, it does get tiresome seeing the same questions, the same opinions, the same arguments made and refuted, over and over and over again. Are we failing to learn anything and thus endlessly chewing the same cud? - s From giulio at gmail.com Sun Nov 21 10:12:58 2010 From: giulio at gmail.com (Giulio Prisco) Date: Sun, 21 Nov 2010 11:12:58 +0100 Subject: [ExI] Turing Church Online Workshop 1, Teleplace, Saturday November 20 Message-ID: Turing Church Online Workshop 1, Teleplace, Saturday November 20 http://telexlr8.wordpress.com/2010/11/21/turing-church-online-workshop-1-teleplace-saturday-november-20/ The Turing Church Online Workshop 1, on Saturday November 20 2010 in Teleplace, explored transhumanist spirituality and "Religion 2.0" as a coordination-oriented summit of persons, groups and organizations active in this area. Panelists (in the pictures): Ben Goertzel (Cosmist Manifesto), Giulio Prisco (Turing Church), Mike Perry (Society for Universal Immortalism), Lincoln Cannon (Mormon Transhumanist Association), Martine Rothblatt (Terasem). Many other "interested observers" of transhumanist spirituality and "Religion 2.0" participated in the workshop, including the writers Robert Geraci and Remi Sussan. About 20 participants attended the workshop and contributed to the discussion with very interesting questions and comments. For those who could not attend we have recorded everything (talks, Q/A and discussion) on video. There are 3 different videos on blip.tv: VIDEO 1 - Part 1, 600x400 resolution, 43 min, speaker Giulio Prisco, recorded by Fred and Linda Chamberlain. VIDEO 2 - Part 2, 600x400 resolution, 1 hour 28 min, speakers Mike Perry and Ben Goertzel. VIDEO 3 - Part 3, 600x400 resolution, 1 hour 7 min, speakers Lincoln Cannon and Martine Rothblatt. NOTES: To download the source .mp4 video files from blip.tv, open the "Files and Links" box. Topics and issues discussed: To discover parallels and similarities between different organizations and to agree on common interests, agendas, strategies, outreach plans etc. To discuss whether it makes sense to establish an umbrella organization, or to consider one of the existing organizations as such. To develop the idea of scientific resurrection: our descendants and mind children will develop "magic science and technology" in the sense of Clarke's third law, and may be able to do grand spacetime engineering and even resurrect the dead by "copying them to the future". Of course this is a hope and not a certainty, but I am persuaded that this concept is scientifically founded and could become the "missing link" between transhumanists and religious and spiritual communities.
And of course, how to make our beautiful ideas available, understandable and appealing to billions of seekers. The workshop made evident that the participants, persons and groups, share very similar, overlapping and compatible ideas. It is also evident that there are different approaches to transhumanist spirituality, each with its own focus and priorities. Some participants observed that, since all of the spiritual transhumanist groups represented at the workshop are inclusive, it makes sense to join all of them. The idea of establishing an umbrella organization for spiritually inclined transhumanists was discussed. An alternative is one of the existing groups, or perhaps Humanity+ (subject to their interest of course), as the umbrella organization. The MTA and Terasem reported significant growth. The Turing Church Online Workshop 1 was held in teleXLR8, a telepresence community for cultural acceleration. We produce online events, featuring first-class content and speakers, with the best system for e-learning and collaboration in an online 3D environment: Teleplace. Join teleXLR8 to participate in online talks, seminars, round tables, workshops, debates, full conferences, e-learning courses, and social events... with full immersion telepresence, but without leaving home. From possiblepaths2050 at gmail.com Sun Nov 21 11:32:42 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 21 Nov 2010 04:32:42 -0700 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: Samantha Atkins wrote: >Well, it does get tiresome seeing the same questions, the same opinions, the >same arguments made and refuted, over and over and over again. Are we failing >to learn anything and thus endlessly chewing the same cud? But new people (for whom it is all very novel) get involved in these endlessly repeating threads, and so they become educated about transhumanism and things related to it. And new permutations are added onto old discussions, due to various current scientific and social developments, which give appeal to the list old-timers. John On 11/21/10, Samantha Atkins wrote: > > On Nov 19, 2010, at 3:10 PM, John Grigg wrote: >> Aleksei wrote: >> Yeah. It's really painful to read transhumanist mailing lists these >> days, the quality of discussion is so low. (This is true of SL4 too, >> though there it's more a matter of silence. All the quality discussion >> seems to have moved to e.g. certain blogs and websites, instead of >> mailing lists, which were on the intellectual forefront in ages past.) >>>>> >> >> I am so tired of people whining about this subject! lol I remember >> when I first joined this list *eleven* years ago, people were pining >> for the "golden age" of years past... >> > Well, it does get tiresome seeing the same questions, the same opinions, the > same arguments made and refuted, over and over and over again. Are we > failing to learn anything and thus endlessly chewing the same cud?
> > - s From stefano.vaj at gmail.com Sun Nov 21 17:32:48 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 21 Nov 2010 18:32:48 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: <4CE03947.3070806@speakeasy.net> <4CE07ADB.8070008@canonizer.com> <013801cb848b$cfd192d0$6f74b870$@att.net> <004901cb854c$f1216f20$d3644d60$@att.net> <004001cb85de$07f3c040$17db40c0$@att.net> <000901cb87b2$ba074c40$2e15e4c0$@att.net> Message-ID: On 20 November 2010 23:15, Samantha Atkins wrote: > On Nov 20, 2010, at 12:19 PM, Stefano Vaj wrote: >> Do you imply that anything which is more "intelligent" is more >> "powerful" than anything which is less? > > In the realm of what intelligence is critical for? Yes of course. Let us say that "crunching numbers" is critical in a given realm. I do accept that a more intelligent system is a more powerful one in this context. But this has little to do with the typical AGI projections... -- Stefano Vaj From avantguardian2020 at yahoo.com Sun Nov 21 18:15:58 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Sun, 21 Nov 2010 10:15:58 -0800 (PST) Subject: [ExI] Hard Takeoff Message-ID: <810705.72511.qm@web65612.mail.ac4.yahoo.com> > Michael Anissimov writes: > We have real, evidence-based arguments for an abrupt takeoff. One is that the human speed and quality of thinking is not necessarily any sort of optimal thing, thus we shouldn't be shocked if another intelligent species can easily surpass us as we surpassed others. We deserve a real debate, not accusations of monotheism. Samantha writes: > There is sound argument that we are not the pinnacle of possible intelligence. But that that is so does not at all imply or support that AGI will FOOM to godlike status in an extremely short time once it reaches human level (days to a few years tops). Samantha, this sounds like you agree with me about my general criticism of hard takeoff, even if you have quibbles about the specifics of my argument. Software optimizing itself to its full existing potential is one thing, but resetting its existing potential to an even higher potential is unlikely on such a short time scale. Too much innovation would be required, especially for such a short time period. It would have to resort to experimental trial and error just like natural evolution or a human designer would, although perhaps much faster. There seems to be an obsessive focus on intelligence in these discussions, and intelligence is not well defined. Furthermore, various aspects of cognition such as intelligence, knowledge, creativity, and wisdom are not all the same thing. Even if an AI had a 10,000 IQ *and* the sum total of all human knowledge, it would still have to resort to experimentation to answer some questions. And even then it might still be stymied. Having a 10,000 IQ does not let you magically know the color of an extinct dinosaur's skin for example, or what lies beneath the ice of Europa. You could make guesses about these things but you certainly wouldn't be infallible.
The greatest intelligence can still remain ignorant about things it cannot access data about. And if you simply let the AI spider the Internet for its knowledge-base, you are liable to get an AI that has the spelling abilities of a third-grader, believes a host of urban myths, and utilizes its "god-like power" to spam people about penis-enlargement products. > > I have some questions, perhaps naive, regarding the feasibility of the hard > > takeoff scenario: Is self-improvement really possible for a computer program? > > Certainly. Some such programs that search for better algorithms in delimited > spaces exist now. Programs that re-tune to more optimal configuration for > current context also exist. Yes, but the programs you refer to simply optimize such factors as run speed or memory usage while performing the exact same functions for which they were originally written, albeit more efficiently. For example, metamorphic computer viruses can change their own code but usually by adding a bunch of non-functional code to themselves to change their signatures. In biology-speak, by introducing "silent mutations" in their code. Other programs such as genetic algorithms don't change their actual code structure but simply optimize a well-defined set of parameters/variables that are operated on by the existing code structure to optimize the "fitness" of those parameters. The genetic algorithm does not evolve itself but evolves something else. Furthermore, since most mutations are liable to be as detrimental to computer code as they are to the genetic code, I don't see any AI taking itself from beta test to version 1000.0 overnight. > > If this "improvement" is truly recursive, then that implies that it iterates a > > function with the output of the function call being the input for the next > > identical function call. > > An adaptive loop is a bit longer than a single function call usually. You are > mixing "function" in the generic sense of a process with goals and a definable > fitness function (measure of efficacy for those goals) with function as a single > software function. Some functions (which may be composed of many, many functions of the second type) evaluate the efficacy of, and explore for improvements to, other functions. Admittedly I abused the word "function" but it was in response to the abuse of the word "recursive", which has a precise mathematical definition. I have to admit that your "adaptive loop" is a much better description than "recursive self-improvement". > > On the other hand, if the seed AI is able to actually rewrite the code of its > > intelligence function to non-recursively improve itself, how would it avoid > > falling victim to the halting problem? > > Why is halting important to continuous improvement? Let me try to explain what I mean by this. The only viable strategy for this self-improvement is if, as you and others have suggested, an AI copies itself and some copies modify the code of other copies. Now the instances of the AI that are doing the modifying cannot predict the results of their modifications because of the halting problem, which is well defined elsewhere. Thus they must *experiment* on their brethren. And if an AI being modified gets stuck in an infinite loop, the only way to "fix" it is to shut it off, effectively killing that copy.
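In practice the standard workaround here is a watchdog rather than a proof: each candidate modification runs in a separate, killable process under a time budget, and whatever fails to halt in time is terminated and scored as a failure. A minimal sketch of such a harness, in Python with invented names (it assumes candidates are picklable callables on a fork-style platform; nothing here is specific to any real AI system):

import multiprocessing as mp

def _run(candidate, workload, queue):
    queue.put(candidate(workload))

def score(candidate, workload, timeout=5.0):
    """Run one experimental copy in its own process; None means killed."""
    queue = mp.Queue()
    proc = mp.Process(target=_run, args=(candidate, workload, queue))
    proc.start()
    proc.join(timeout)              # wait at most `timeout` seconds
    if proc.is_alive():             # possibly an infinite loop...
        proc.terminate()            # ...so "shut it off": this copy dies
        return None
    return queue.get() if not queue.empty() else None

def adaptive_loop(current, mutate, fitness, workload, rounds=100):
    """Keep a mutated variant only if it halts in time and scores better."""
    best = fitness(score(current, workload))   # assume the incumbent halts
    for _ in range(rounds):
        variant = mutate(current)
        output = score(variant, workload)
        if output is not None and fitness(output) > best:
            current, best = variant, fitness(output)
    return current

The timeout does not solve the halting problem - a slow-but-correct variant gets killed along with the truly looping ones - it merely makes the experiment survivable.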
So in order to make significant progress, which would entail the death of thousands or millions of experimental copies, the AI doing the experiment would have to be completely lacking in empathy for its own copies. And the AIs that are the experimental subjects would have to be completely altruistic and lacking in any sense of self-preservation. If the AI lacked a sense of self-preservation, it probably wouldn't pose a threat to us humans no matter how smart it became because, if it got out of hand, someone could just waltz right in and flip its switch without meeting any resistance. Of course, it seems odd that an AI that lacked the will to live would still have the will to power, which is what self-improvement implies. Assuming, however, that it did want to improve itself for whatever reason, the process should be self-limiting, because as soon as a sense of self-preservation is one of the improvements that the AI experimenters build into the AI upgrades, the process of self-improvement would stop at that point, because the "selfish" AIs would no longer allow themselves to be guinea pigs for their fellows. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure." - Dwight D. Eisenhower From sjatkins at mac.com Sun Nov 21 21:20:08 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 21 Nov 2010 13:20:08 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <810705.72511.qm@web65612.mail.ac4.yahoo.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> Message-ID: <6DD59A5C-A8D1-4C63-9215-1FB4F09FB978@mac.com> On Nov 21, 2010, at 10:15 AM, The Avantguardian wrote: > > Software optimizing itself to its full existing potential is one thing > but resetting its existing potential to an even higher potential is unlikely on > such a short time scale. Too much innovation would be required, especially for > such a short time period. It would have to resort to experimental trial and error just > like natural evolution or a human designer would, although perhaps much faster. Not really. First of all, there is only what is running or is available to run now, and what may be run later and may or may not be in runnable condition now. In running code now we can divide out production code, that which is doing the system's main work currently, from somewhat sandboxed experimental code. We can also have entire systems whose only purpose is to analyze code from bottom to top looking for possible ways to improve it. In other words it is not one monolithic code body but a community of code and software specialists that are part of the process of self-improvement. Given clean boundaries in functional or other styles, each single module can be improved and changed and plugged back into production over time. > > There seems to be an obsessive focus on intelligence in these > discussions, and intelligence is not well defined. Furthermore, various aspects of > cognition such as intelligence, knowledge, creativity, and wisdom are not all > the same thing. > Whether we are intelligent enough to define intelligence to our satisfaction or not, it is undeniable that it exists and is critically important to our survival and continuing wellbeing and progress toward our dreams. > Even if an AI had a 10,000 IQ *and* the sum total of all human knowledge, it > would still have to resort to experimentation to answer some questions. And even > then it might still be stymied.
> Having a 10,000 IQ does not let you magically > know the color of an extinct dinosaur's skin for example, or what lies beneath > the ice of Europa. You could make guesses about these things but you certainly > wouldn't be infallible. > Well sure. At any finite point in the intelligence domain there will be some problems and questions that are beyond current abilities. Intelligence is not magic, but neither should it be denigrated precisely because it does not confer some magical omniscience. >>> I have some questions, perhaps naive, regarding the feasibility of the hard >>> takeoff scenario: Is self-improvement really possible for a computer program? >> >> Certainly. Some such programs that search for better algorithms in delimited >> spaces exist now. Programs that re-tune to more optimal configuration for >> current context also exist. >> > Yes, but the programs you refer to simply optimize such factors as run speed or > memory usage while performing the exact same functions for which they were > originally written, albeit more efficiently. How fast a given bit of work can be done is very much a part of that somewhat nebulous thing we call intelligence. And the space of self-improving software today is also bigger than that. Different algorithms are plugged in depending on current context, and parts of the code scaffolding they are plugged into are re-arranged. We have research systems that go much further, like the Synthesis OS that rewrites OS kernel operations on the fly to avoid wait states and conflicts, or the work of Moshe Looks on evolving more optimal functions on the fly. What do you mean by "same function" though? If you mean same logical function then sure, since accomplishing that function most efficiently is the very criterion of success for such a local optimization. But there is an entire hierarchy of functions composed of other functions, and optimization can happen at any/all of those levels. > > For example, metamorphic computer viruses can change their own code but usually > by adding a bunch of non-functional code to themselves to change their > signatures. In biology-speak, by introducing "silent mutations" in their code. > > Other programs such as genetic algorithms don't change their actual code > structure but simply optimize a well-defined set of parameters/variables that > are operated on by the existing code structure to optimize the "fitness" of > those parameters. The genetic algorithm does not evolve itself but > evolves something else. That depends. There is use of GA techniques to turn out new code, not just to optimize on the very narrow scale we have known how to do with non-AI and non-GA techniques for some time. What a GA evolves, its payload if you will, can be anything you like that you can create a reasonable mutation function and a workable fitness function for - including arbitrary code. > > Furthermore since most mutations are liable to be as detrimental to > computer code as they are to the genetic code, I don't see any AI taking itself > from beta test to version 1000.0 overnight. GA techniques are not the only techniques it will use. And remember, you are starting at human genius level for this thought experiment. Suppose such a system devotes 24/7 to becoming an AGI computer scientist. Then it can do any sort of optimization and exploration that a human of the same abilities can, except that it has several advantages over humans of similar training and ability.
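To make the distinction above concrete: in a plain genetic algorithm the thing that mutates is a parameter vector, and the optimizer's own code never changes. A minimal sketch in Python - the fitness function is a toy and every name is invented for illustration:

import random

def evolve(fitness, length=8, pop_size=30, generations=200):
    """Evolve a parameter vector; the GA's own code never changes."""
    pop = [[random.uniform(-1, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        parents = pop[:pop_size // 2]              # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]              # crossover
            child[random.randrange(length)] += random.gauss(0, 0.1)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# toy payload: find a vector whose components sum to 10
best = evolve(lambda v: -abs(sum(v) - 10))

A genetic-programming variant would make the payload a program tree instead of a list of floats; the loop stays the same, but the mutation and fitness functions then become the hard part.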
> >>> On the other hand, if the seed AI is able to actually rewrite the code of its >>> intelligence function to non-recursively improve itself, how would it avoid >>> falling victim to the halting problem? >> >> Why is halting important to continuous improvement? > > Let me try to explain what I mean by this. The only viable strategy for this > self-improvement is if, as you and others have suggested, an AI copies itself > and some copies modify the code of other copies. Now the instances of the AI > that are doing the modifying cannot predict the results of their modifications > because of the halting problem, which is well defined elsewhere. Thus they > must *experiment* on their brethren. Actually, I did not suggest the entire AI is copied and some copies modify themselves or others. I cleared this up above in this message. You are not seeing the situation clearly in this paragraph. > > And if an AI being modified gets stuck in an infinite loop, the only way to > "fix" it is to shut it off, effectively killing that copy. Continuous self-improvement by definition means there is no final state. But this does not mean a self-improving entity is "stuck" and needs rebooting. > > If the AI lacked a sense of self-preservation, it probably wouldn't pose a > threat to us humans no matter how smart it became because, if it got out of > hand, someone could just waltz right in and flip its switch without meeting any > resistance. Of course, it seems odd that an AI that lacked the will to live > would still have the will to power, which is what self-improvement implies. Anthropomorphic assumption, or close to it. If the AGI has the goal to do XYZ, has not completed it, has no other greater goal that would lead it to submit to termination, is sufficiently intelligent, and has actuators to work for its continued existence, then it will do so. But having any of these not be the case does not mean that it cannot harm humanity. That depends on what actuators, what abilities to do what actions, it does have or can develop. It also depends on the impact of its very existence and use in various contexts on human psychology and institutions. > > Assuming, however, that it did want to improve itself for whatever reason, the > process should be self-limiting because as soon as a sense of > self-preservation is one of the improvements that the AI experimenters build > into the AI upgrades, the process of self-improvement would stop at that point, > because the "selfish" AIs would no longer allow themselves to be guinea pigs for > their fellows. Self is a fluid, shifting thing, not an ossification. Your argument does not follow. - samantha From bbenzai at yahoo.com Sun Nov 21 21:25:44 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 21 Nov 2010 13:25:44 -0800 (PST) Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: Message-ID: <460321.40743.qm@web114416.mail.gq1.yahoo.com> Samantha Atkins wondered: > On Nov 19, 2010, at 11:18 AM, Keith Henson wrote: > > > Re these threads, I have not seen any ideas here that > have not been > > considered for a *long* time on the sl4 list. > > > > Sorry. > > Quite correct and mostly in much greater depth as > well. I was beginning to wonder why all this needed to > be rehashed yet again with apparently little gained from the > copious previous hashing. I can see one reason, the same reason that specialist subject magazines keep publishing the same articles (more-or-less) every few months: New people.
I know we can all 'search the archives', but honestly, how many new subscribers do you think will do this? And how would they know what to search for? It's often been said that public debates never change the minds of the participants, but that's not really the point. It's the audience that is your real target, not your opponent. That's the main reason that I, for one, am quite content to rehash the same 'tired old ideas' every few months (or years, or whatever). I know (or expect) that these discussions will fall on at least a few fresh ears each time round. We can pretty much rely on a Gordon Swobe or an Alan Grimes to pop up every now and then, and give us a chance to expose a fresh batch of lurkers to the relevant arguments. Maybe it's not the best method, maybe an FAQ or something would be better, but be honest, which is more likely to capture people's attention, a static list of items, or a good old argument? "BULLSHIT!" might not be a sophisticated or rational argument, but it gets people's attention, and once that happens, there's at least a chance they'll start thinking through the issue themselves, and that's really what it's all about. Ben Zaiboc From spike66 at att.net Sun Nov 21 21:20:26 2010 From: spike66 at att.net (spike) Date: Sun, 21 Nov 2010 13:20:26 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <810705.72511.qm@web65612.mail.ac4.yahoo.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> Message-ID: <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> On Behalf Of The Avantguardian Subject: Re: [ExI] Hard Takeoff > > > > Michael Anissimov writes: > > >> > We have real, evidence-based arguments for an abrupt takeoff. One >> > is that the human speed and quality of thinking is not necessarily any sort of >> > optimal thing, thus we shouldn't be shocked if another intelligent >> > species can easily surpass us as we surpassed others... Samantha writes: >> There is sound argument that we are not the pinnacle of possible intelligence. ... >Samantha, this sounds like you agree with me about my general criticism of hard takeoff, even if you have quibbles about the specifics of my argument. >Software optimizing itself to its full existing potential is one thing, but resetting its existing potential to an even higher potential is unlikely on such a short >time scale. Too much innovation would be required, especially for such a short time period...Stuart LaForge The human brain has some inherent limitations, most specifically that we get tired, and there are not enough of us. Consider top level chess. Human elite players, top 100 in the world, can still play a competitive game against ordinary 100 dollar chess programs running on an ordinary 500 dollar laptop computer, but they must invest really intense concentration for about four hours, after which they are exhausted. The computer on the other hand is immediately ready for another game, and can run two or more high quality games simultaneously, it can run day and night, it can replicate itself arbitrarily many times, all while the six billion strong human race is stuck right at around 100 or so players (and declining) capable of such concentration, at a rate of one game a day at most. Silicon based recursive self-improvement is implemented by this ability to laser focus on the same problem over indefinite periods, in arbitrary numbers. If we continue with the chess analogy, the human race has been playing the game in its current form for right at 500 years.
Many very focused players have dedicated their lives to this tragically wasteful preoccupation, and recorded their findings. Chess playing software can re-derive that half-millennium of chess theory and surpass it in just a few days, or shorter if the task is spread over a sufficient number of processors. spike From bbenzai at yahoo.com Sun Nov 21 21:35:47 2010 From: bbenzai at yahoo.com (Ben Zaiboc) Date: Sun, 21 Nov 2010 13:35:47 -0800 (PST) Subject: [ExI] The atoms red herring. =| In-Reply-To: Message-ID: <788402.10565.qm@web114410.mail.gq1.yahoo.com> Alan Grimes > Why do you care whether I weigh twelve grams or twelve > tons? Really, > what difference does it make to you? Why are you obsessed > with > compactness? Why would anyone be? Crowding sucks anyway. > Sorry, but I really did just slap my head. It's a bit surreal, like telling someone that you think it will rain, and them saying "what did you call my sister?!". Ben Zaiboc From sjatkins at mac.com Sun Nov 21 21:51:44 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 21 Nov 2010 13:51:44 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: <460321.40743.qm@web114416.mail.gq1.yahoo.com> References: <460321.40743.qm@web114416.mail.gq1.yahoo.com> Message-ID: On Nov 21, 2010, at 1:25 PM, Ben Zaiboc wrote: > Samantha Atkins wondered: > >> On Nov 19, 2010, at 11:18 AM, Keith Henson wrote: >> >>> Re these threads, I have not seen any ideas here that >> have not been >>> considered for a *long* time on the sl4 list. >>> >>> Sorry. >> >> Quite correct and mostly in much greater depth as >> well. I was beginning to wonder why all this needed to >> be rehashed yet again with apparently little gained from the >> copious previous hashing. > > > I can see one reason, the same reason that specialist subject magazines keep publishing the same articles (more-or-less) every few months: New people. Would that it were so! But many of those I see doing the rehashing have been around these areas for a lot of years. > > I know we can all 'search the archives', but honestly, how many new subscribers do you think will do this? And how would they know what to search for? > Ideally you don't send them to the archives but to extracted-knowledge websites and wikis, or to particular meaty posts and papers. > It's often been said that public debates never change the minds of the participants, but that's not really the point. It's the audience that is your real target, not your opponent. > > That's the main reason that I, for one, am quite content to rehash the same 'tired old ideas' every few months (or years, or whatever). I know (or expect) that these discussions will fall on at least a few fresh ears each time round. > I am not so content, or at least not with doing it this way. It is boring, frustrating, and we have too much work to do. > We can pretty much rely on a Gordon Swobe or an Alan Grimes to pop up every now and then, and give us a chance to expose a fresh batch of lurkers to the relevant arguments. Maybe it's not the best method, maybe an FAQ or something would be better, but be honest, which is more likely to capture people's attention, a static list of items, or a good old argument? Much, much better. And it is not static but evolving in clarity and structure. This is the 21st century. Let us embrace it. I do find email a very poor place to lay out a more meaty idea or set of ideas. Tools can be built that augment it or do something better without losing its plusses. But they either do not exist, or are proprietary, or are not generally used.
-samantha From sjatkins at mac.com Sun Nov 21 21:59:15 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 21 Nov 2010 13:59:15 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> Message-ID: <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> On Nov 21, 2010, at 1:20 PM, spike wrote: > > The human brain has some inherent limitations, most specifically that we get > tired, and there are not enough of us. Consider top level chess. Human > elite players, top 100 in the world, can still play a competitive game > against ordinary 100 dollar chess programs running on an ordinary 500 dollar > laptop computer, but they must invest really intense concentration for about > four hours, after which they are exhausted. The computer on the other hand > is immediately ready for another game, and can run two or more high quality > games simultaneously, it can run day and night, it can replicate itself > arbitrarily many times, all while the six billion strong human race is stuck > right at around 100 or so players (and declining) capable of such > concentration, at a rate of one game a day at most. Silicon based recursive > self-improvement is implemented by this ability to laser focus on the same > problem over indefinite periods, in arbitrary numbers. Great point! I can beat the $100 chess program quicker if I spend even more time probing and analyzing its weaknesses, but the point is well taken. > > If we continue with the chess analogy, the human race has been playing the > game in its current form for right at 500 years. Many very focused players > have dedicated their lives to this tragically wasteful preoccupation, and > recorded their findings. Chess playing software can re-derive that > half-millennium of chess theory and surpass it in just a few days, or > shorter if the task is spread over a sufficient number of processors. If we knew how to add the right kind of learning to the chess playing programs (which are largely brute force today) then yes, they could easily derive this knowledge. Except chess playing programs do not generally look at the problem the same way humans do, so the result would likely not be very useful for training would-be human chess masters. - s From stefano.vaj at gmail.com Sun Nov 21 22:15:15 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 21 Nov 2010 23:15:15 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: On 21 November 2010 22:59, Samantha Atkins wrote: > If we knew how to add the right kind of learning to the chess playing programs (which are largely brute force today) then yes, they could easily derive this knowledge. Except chess playing programs do not generally look at the problem the same way humans do, so the result would likely not be very useful for training would-be human chess masters. What's wrong with brute force? The lack of chess "qualia"? ;-) I suspect that the first Turing-passing programmes will be largely based on brute-force computation and large recourse to immense databases of "human-encoded" knowledge.
This would not make them any less "intelligent" IMHO than any worse-performing program more closely mimicking human-brain workings, nor any less likely to be socially recognised as "human". -- Stefano Vaj From sjatkins at mac.com Sun Nov 21 22:50:12 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 21 Nov 2010 14:50:12 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: <76AE8462-85A9-468C-A078-668F6E76F60F@mac.com> On Nov 21, 2010, at 2:15 PM, Stefano Vaj wrote: > On 21 November 2010 22:59, Samantha Atkins wrote: >> If we knew how to add the right kind of learning to the chess playing programs (which are largely brute force today) then yes, they could easily derive this knowledge. Except chess playing programs do not generally look at the problem the same way humans do, so the result would likely not be very useful for training would-be human chess masters. > > What's wrong with brute force? The lack of chess "qualia"? ;-) I didn't say (or mean to imply) anything was wrong with it at all. Just that learning from one game to another would require more than brute force. - s From mrjones2020 at gmail.com Mon Nov 22 10:32:42 2010 From: mrjones2020 at gmail.com (Mr Jones) Date: Mon, 22 Nov 2010 05:32:42 -0500 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: Message-ID: From what I've been told, the 'high' interest rates are 'low' when compared to typical rates paid in these corruption-riddled areas. All the same, I think it's wrong. On Nov 18, 2010 10:26 PM, "John Grigg" wrote: I was shocked to learn that high interest is being charged to some of the poorest people on Earth. I smell greed and exploitation... And supposedly, not many people are actually being lifted out of poverty due to these programs. http://www.thedailystar.net/newDesign/latest_news.php?nid=22701 John _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.vaj at gmail.com Mon Nov 22 14:36:13 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 22 Nov 2010 15:36:13 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: <76AE8462-85A9-468C-A078-668F6E76F60F@mac.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> <76AE8462-85A9-468C-A078-668F6E76F60F@mac.com> Message-ID: On 21 November 2010 23:50, Samantha Atkins wrote: > > On Nov 21, 2010, at 2:15 PM, Stefano Vaj wrote: >> What's wrong with brute force? The lack of chess "qualia"? ;-) > > I didn't say (or mean to imply) anything was wrong with it at all. Just that learning from one game to another would require more than brute force. My question need not be understood as a reply to your post, but has a broader scope. In fact, many appear to think that if "intelligent" behaviour is exhibited by a system based on some principle other than an emulation of animal-brain workings, it would be a "trick", or would not be "really" intelligent, whatever that may mean. As to the applicability of such a system to different sets of problems, I maintain that this is a feature exhibited by *any* universal computational device, from Turing machines to an i286 PC to cellular automata.
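As a toy demonstration of that universality claim, here is Rule 110 -- a one-dimensional cellular automaton proved Turing-complete -- in a few lines of Python (a sketch of mine; the zero boundary and single-cell starting row are arbitrary choices for illustration):

    # Rule 110: a cell's next state depends only on itself and its two
    # neighbours. This trivial local rule is known to be Turing complete.
    ALIVE = {(1,1,0), (1,0,1), (0,1,1), (0,1,0), (0,0,1)}

    def rule110_step(cells):
        padded = [0] + cells + [0]   # fixed zero boundary (a simplification)
        return [1 if (padded[i-1], padded[i], padded[i+1]) in ALIVE else 0
                for i in range(1, len(padded) - 1)]

    row = [0] * 30 + [1]             # single live cell at the right edge
    for _ in range(15):
        print("".join(".#"[c] for c in row))
        row = rule110_step(row)

The point is not that anyone would build an AGI this way, only that "runs on a different substrate or principle" says nothing about what a universal system can in principle compute.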
-- Stefano Vaj From jonkc at bellsouth.net Mon Nov 22 15:41:52 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 22 Nov 2010 10:41:52 -0500 Subject: [ExI] Nostalgia ( Best case, was Hard Takeoff In-Reply-To: References: Message-ID: <2C32ADEE-EA32-480F-90E9-53A07784696F@bellsouth.net> On Nov 19, 2010, at 6:10 PM, John Grigg wrote: > > I am so tired of people whining about this subject! lol I remember > when I first joined this list *eleven* years ago, people were pining > for the "golden age" of years past... I joined the list in early 1993 and from the very first day I've been hearing about how all the young whipper-snappers who were on the list nowadays were just a bunch of second-raters who couldn't hold a candle to the giants the list once had back in the good old days. On the whole I don't think nostalgia is a positive emotion, and I certainly don't believe it is conducive to accurate recall of past events; giants from the past may not really be all that much larger than some people from today. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Mon Nov 22 16:10:30 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 22 Nov 2010 17:10:30 +0100 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST Message-ID: Suzanne Gildert will give a talk in Teleplace on "Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?" on November 28, 2010, at 10am PST (1pm EST, 6pm UK, 7pm continental EU). http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ This is a revised version of Suzanne's talk at TransVision 2010, also inspired by her article on "Building more intelligent machines: Can 'co-design' help?" (PDF). See also Suzanne's previous Teleplace talk on "Quantum Computing: Separating Hope from Hype". Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading? S. Gildert, Teleplace, 28th November 2010 We are surrounded by devices that rely on general purpose silicon processors, which are mostly very similar in terms of their design. But is this the only possibility? As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks be enough, even in principle? Why does the brain, with 100 billion neurons, consume less than 30W of power, whilst our attempts to simulate tens of thousands of neurons (for example in the Blue Brain project) consume tens of kW? As we wish to run computations faster and more efficiently, we might need to consider whether the design of the hardware that we all take for granted is optimal. In this presentation I will discuss the recent return to a focus upon co-design -- that is, designing specialized software algorithms running on specialized hardware -- and how this approach may help us create much more powerful applications in the future. As an example, I will discuss some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently to our current silicon chips, and help to emphasize just how important disruptive technologies may be to our attempts to build intelligent machines. Event on Facebook Dr.
Suzanne Gildert is currently working as an Experimental Physicist at D-Wave Systems, Inc. She is involved in the design and testing of large scale superconducting processors for Quantum Computing Applications. Suzanne obtained her PhD and MSci degree from The University of Birmingham UK, focusing on the areas of experimental quantum device physics and superconductivity. teleXLR8 is a telepresence community for cultural acceleration. We produce online events, featuring first class content and speakers, with the best system for e-learning and collaboration in an online 3D environment: Teleplace. Join teleXLR8 to participate in online talks, seminars, round tables, workshops, debates, full conferences, e-learning courses, and social events -- with full immersion telepresence, but without leaving home. From thespike at satx.rr.com Mon Nov 22 16:43:39 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 22 Nov 2010 10:43:39 -0600 Subject: [ExI] Nostalgia ( Best case, was Hard Takeoff In-Reply-To: <2C32ADEE-EA32-480F-90E9-53A07784696F@bellsouth.net> References: <2C32ADEE-EA32-480F-90E9-53A07784696F@bellsouth.net> Message-ID: <4CEA9DBB.2010204@satx.rr.com> On 11/22/2010 9:41 AM, John Clark wrote: > giants from the past may not really be all that much larger than some > people from today. Some who were much larger but are now absent include Sasha, Robin Hanson, Hal Finney, Eliezer Yudkowsky, Eugen Leitl, Robert Bradbury, Charlie Stross, Damien Sullivan, John K Clark... oh wait, he's still here. From jonkc at bellsouth.net Mon Nov 22 16:31:42 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 22 Nov 2010 11:31:42 -0500 Subject: [ExI] Hard Takeoff. In-Reply-To: <004d01cb8689$c3e5f420$4bb1dc60$@att.net> References: <4CE407D8.7080307@lightlink.com> <004d01cb8689$c3e5f420$4bb1dc60$@att.net> Message-ID: <2091B581-9A67-40B7-A175-0F0B55E9CC64@bellsouth.net> On Nov 17, 2010, at 2:00 PM, spike wrote: >> On Behalf Of BillK >> How something can be designed to be 'Friendly' without emotions or >> caring is a mystery to me...BillK > > BillK, this is only one of many mysteries inherent in the notion of AI. I think emotion is FAR less mysterious than intelligence, emotion just predisposes you to one class of actions rather than another. If you are in a confrontation with something and are in the fear mode you are more likely to run away, if you are in the anger mode you are more likely to attack. There is no corresponding simple way to describe intelligence because it is vastly more complicated; certainly AI researchers found that to be so and so did Evolution. And I have even found in my personal life that it is far easier to get angry or scared than to get smart, and I'll bet you too have found that getting smart is hard work while any idiot can emote. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From js_exi at gnolls.org Mon Nov 22 17:39:06 2010 From: js_exi at gnolls.org (J. Stanton) Date: Mon, 22 Nov 2010 09:39:06 -0800 Subject: [ExI] Transrealism (was Re: J. Stanton) Message-ID: <4CEAAABA.3070800@gnolls.org> Damien Broderick wrote: > J, I see some comments on the method used in writing your book THE GNOLL > CREDO: > > > > You might be interested in the theory and practice of transrealism, > which you seem to have independently discovered. I'd recommend my book > TRANSREALIST FICTION except that it absurdly costs $arm&leg. Google on > Rudy Rucker and transrealism. (Oh, you're *that* Damien Broderick. Excellent!)
My approach of allowing plot to flow from characterization is definitely shared with the Transrealists. However, I believe there are also important differences between The Gnoll Credo and fiction I understand to be transreal: the narrator is not based on my own experience, and neither reality nor the perception of it is nearly as plastic as, say, Rudy Rucker's. This is, in my opinion, what gives The Gnoll Credo such impact: instead of the transreal approach of "the world is a much stranger place than you think" (a valid approach, with great impact when done well), I go the opposite direction: "a world you originally understood to be fantastic is much more real than you think." You probably already saw this essay, which summarizes my thoughts on that subject: http://www.gnolls.org/97/the-difference-between-me-and-chuck-palahniuk-is-that-i-dont-pull-my-punches/ You (and Darren, who left a comment on my site) have prompted an interesting discussion, and I'll most likely write a full-length essay exploring this topic. JS http://www.gnolls.org PS: One final question, which we can take off-list if it's too tangential: how do you consider James Tiptree, Jr. to be transrealist? She's one of my favorite authors, but I have a hard time lumping her in with Rucker and J. G. Ballard. From stefano.vaj at gmail.com Mon Nov 22 17:00:44 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Mon, 22 Nov 2010 18:00:44 +0100 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: Message-ID: 2010/11/22 Mr Jones > From what I've been told, the 'high' interest rates are 'low' when compared to typical rates paid in these corruption-riddled areas. All the same, I think it's wrong. Let me say first that I am a fan of the views expounded in things such as Money as Debt, and very little a fan of bankers. OTOH, if we accept the idea that loans at interest are the right way to deal with the matter, interest rates cannot really be "right" or "wrong", provided that no oligopoly exists, and depend on comparative risk and profitability of alternative employments of the capitals concerned. -- Stefano Vaj From thespike at satx.rr.com Mon Nov 22 19:06:09 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 22 Nov 2010 13:06:09 -0600 Subject: [ExI] Transrealism (was Re: J. Stanton) In-Reply-To: <4CEAAABA.3070800@gnolls.org> References: <4CEAAABA.3070800@gnolls.org> Message-ID: <4CEABF21.40509@satx.rr.com> On 11/22/2010 11:39 AM, J. Stanton wrote: > One final question, which we can take off-list if it's too tangential: > how do you consider James Tiptree, Jr. to be transrealist? She's one of > my favorite authors, but I have a hard time lumping her in with Rucker > and J. G. Ballard. Alice Sheldon drew upon her very exotic and unusual experiences in her fiction, so I guess that makes her work notionally transrealist--but somehow it doesn't have the gnarly Phil Dick/Rudy Rucker zing I associate with transrealism (although obviously it did have other virtues). You comment: My understanding of the term is fairly even-handed: Damien Broderick From pharos at gmail.com Mon Nov 22 19:41:28 2010 From: pharos at gmail.com (BillK) Date: Mon, 22 Nov 2010 19:41:28 +0000 Subject: [ExI] Transrealism (was Re: J.
Stanton) In-Reply-To: <4CEABF21.40509@satx.rr.com> References: <4CEAAABA.3070800@gnolls.org> <4CEABF21.40509@satx.rr.com> Message-ID: On Mon, Nov 22, 2010 at 7:06 PM, Damien Broderick wrote: > a way of combining wild ideas, > subversion and criticism of the supposedly inviolate Real, together with > realistic thickening of the supposedly airy fantastic, all bound together in > a passionate, noncompliant act of self-examination. > > Sounds like a good night out to me! ;) BillK From dan_ust at yahoo.com Mon Nov 22 22:43:43 2010 From: dan_ust at yahoo.com (Dan) Date: Mon, 22 Nov 2010 14:43:43 -0800 (PST) Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: Message-ID: <648870.20970.qm@web30103.mail.mud.yahoo.com> Basically agreed. When people complain about a price being too high or too low (in an uncoerced setting*), they demonstrate a misunderstanding about how prices work. None of these claims hold up under scientific scrutiny. Also, regarding transparency and all that, the way to arrive at this is not via regulation -- which, in a corrupt country, will merely mean established microlenders will capture the regulator and use regulations to knock out or keep out competitors. Instead, merely allow the market process to occur maximally -- so that borrowers have as much choice as possible in borrowing. In this way, borrowers will patronize those lenders who are the least deceptive. And the latter will try to broadcast their bona fides over shady competitors. (The same thing happens in many market situations. For instance, most people quickly learn that if a firm is not forthcoming with information, it's best not to deal with it at all. This makes the shady firm either mend its ways or lose market share -- at the extreme, forcing it to go under.) Regards, Dan * There's a difference when government sets the price -- directly via price controls (or "floors" as in minimum wage laws and "ceilings" as in rent control) or indirectly via antitrust policy -- because that then involves coercing outcomes. But even in that case, one cannot tell exactly what the uncoerced price would've been -- merely that the price mechanism has been interfered with. From: Stefano Vaj To: ExI chat list Sent: Mon, November 22, 2010 12:00:44 PM Subject: Re: [ExI] Micro-loan programs not as successful as hoped 2010/11/22 Mr Jones > From what I've been told, the 'high' interest rates are 'low' when compared to >typical rates paid in these corruption-riddled areas. All the same, I think it's >wrong. Let me say first that I am a fan of the views expounded in things such as Money as Debt, and very little a fan of bankers. OTOH, if we accept the idea that loans at interest are the right way to deal with the matter, interest rates cannot really be "right" or "wrong", provided that no oligopoly exists, and depend on comparative risk and profitability of alternative employments of the capitals concerned. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjatkins at mac.com Tue Nov 23 07:04:10 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Mon, 22 Nov 2010 23:04:10 -0800 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: Message-ID: On Nov 22, 2010, at 9:00 AM, Stefano Vaj wrote: > 2010/11/22 Mr Jones >> From what I've been told, the 'high' interest rates are 'low' when compared to typical rates paid in these corruption-riddled areas. All the same, I think it's wrong.
> > Let me say first that I am a fan of the views expounded in things such > as Money as Debt, and very little a fan of bankers. The one gripe I have with the money-is-debt crowd is not that money as currently created is debt. I agree that, combined with fractional reserves, that is a pretty bad idea. Unfortunately though, many in this camp seem to assume everything would be a-ok if the government just printed its own money at will whenever it felt a bit pinched. This is the road to unlimited inflation. At least seeing new money as debt incurred slows that down a little bit. I would generally have little problem with newly created money being seen as a debt if it was not instantly multiplied by fractional reserve banking. Ad hoc creation of new money is either outright counterfeiting or it is something that should have a price that is considered at the time of creation. It has nothing to do with the bankers who may receive the interest per se, but with being honest about what one is doing. > > OTOH, if we accept the idea that loans at interest are the right > way to deal with the matter, interest rates cannot really be "right" or > "wrong", provided that no oligopoly exists, and depend on comparative > risk and profitability of alternative employments of the capitals > concerned. > Yes. I agree. Interest is paid for risk assumed and for opportunity cost in the case of existing funds being loaned. - samantha From stefano.vaj at gmail.com Tue Nov 23 14:38:33 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Tue, 23 Nov 2010 15:38:33 +0100 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: <648870.20970.qm@web30103.mail.mud.yahoo.com> References: <648870.20970.qm@web30103.mail.mud.yahoo.com> Message-ID: 2010/11/22 Dan : > Basically agreed. When people complain about a price being too high or too > low (in an uncoerced setting*), they demonstrate a misunderstanding about > how prices work. None of these claims hold up under scientific scrutiny. OTOH, money need not be supplied "for a price", let alone after having been produced out of thin air by some private entity. -- Stefano Vaj From dan_ust at yahoo.com Tue Nov 23 17:41:54 2010 From: dan_ust at yahoo.com (Dan) Date: Tue, 23 Nov 2010 09:41:54 -0800 (PST) Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: <648870.20970.qm@web30103.mail.mud.yahoo.com> Message-ID: <418521.49154.qm@web30106.mail.mud.yahoo.com> I'm not sure microlenders here are creating money. My impression was they have money that they loan out -- not that they are, say, like fractional reserve banks or fiat money central banks, producing money beyond their reserves. As for charging interest, there's nothing fundamentally wrong with this. Market interest rates, setting aside other coercive interferences in them, basically have three components: a risk premium, a price premium (which confuses things a little here, but that's the terminology), and time preference. The first is the lender's assessment of how likely it is for the borrower to pay back the loan -- the risk of default, in other words. The second is what the likely inflation is going to be when the loan is paid back. For instance, if inflation were going to double the money supply by the time of payment, the lender would want to be paid at least twice the amount back. The third, however, is a stumbling block for most people.
Time preference is how much someone values the present over the future -- or, more particularly, how much one values a present good or service over what one believes to be its otherwise equivalent future good or service (this is, of course, based on subjective valuations -- not physical identity). For a trade to happen here, there must be a difference in how the lender and borrower value these things. (This is true of all trades. Trades only happen because people have different valuations for things.* If, say, you value my apple more than your pencil and I value your pencil more than my apple, we have the basis for a trade.) In essence, the lender and borrower must come to some agreement on what's to be loaned, for how long, and what's to be paid back. The difference, in money terms, in what's paid back is the interest. (The interest rate is merely the money amount of what's paid back divided by what's loaned out over a specific time period -- stated in percentage terms.) Plugging this back into people complaining about high interest rates, this only amounts to people not liking the interest rates being offered. There is nothing scientific about this -- in other words, if it's uncoerced, these interest rates are merely what people agree to pay, and no third party can say they are too high or too low. In an uncoerced setting -- which doesn't mean nirvana or a setting where everyone gets whatever she or he wants -- there is no such thing as the interest rate being wrong. (And if particular people can't agree to trade here -- as happens often enough, since merely wanting to lend or borrow or buy or sell doesn't always end in a trade -- then the trade simply doesn't take place.) Regards, Dan * This is not to say the valuations don't change. In the example that follows, I might later regret trading my apple for your pencil. Of course, any trade is going to be forward-looking, with the people trading both expecting to be better off in the future -- maybe immediately following the trade or, in the case of long-term loans, years afterward. (Think of someone who trades nights when she could be out partying for night school to get an advanced degree. Her benefit is definitely not immediate and she might be completely wrong about her expectations: maybe her goal was more money, but she finds she doesn't make enough to even cover the student loans.) From: Stefano Vaj To: ExI chat list Sent: Tue, November 23, 2010 9:38:33 AM Subject: Re: [ExI] Micro-loan programs not as successful as hoped 2010/11/22 Dan : > Basically agreed. When people complain about a price being too high or too > low (in an uncoerced setting*), they demonstrate a misunderstanding about > how prices work. None of these claims hold up under scientific scrutiny. OTOH, money need not be supplied "for a price", let alone after having been produced out of thin air by some private entity. -- Stefano Vaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Tue Nov 23 18:40:47 2010 From: sparge at gmail.com (Dave Sill) Date: Tue, 23 Nov 2010 13:40:47 -0500 Subject: [ExI] Grain subsidies and externalized costs (Paleo/Primal health) In-Reply-To: <4CE592D4.6020009@gnolls.org> References: <4CE592D4.6020009@gnolls.org> Message-ID: I tried to just let this pass, but I couldn't. On Thu, Nov 18, 2010 at 3:55 PM, J.
Stanton wrote: > > Dave Sill wrote: > >>> > (Grains, particularly corn and soybeans, are indeed cheap, mostly >>> > because >>> > they're heavily subsidized by our government...we are therefore >>> > deliberately >>> > creating the very health problems we wring our hands about.) > >> Bullshit. Grains are cheap mostly because they aren't that expensive >> to produce. > > I believe you've just disqualified yourself from further discussion on this > topic by posting something blatantly counterfactual. "J." then goes on to cite evidence that grain production in the US is subsidized, as if that proves his claim. It doesn't. Grain is inexpensive everywhere in the world, not just the US. US grain subsidies are on the order of $20 billion/year*. In 2009, the US produced 60 million metric tons of wheat**, 333 million tons of corn***, and a few million tons of barley and oats. That's less than $50/ton or 2.5 cents/pound. Since a pound of wheat flour costs ~$2 and a pound of corn meal costs ~$1, I think it's fair to say that the subsidy isn't what's making grains cheap. And that's not even counting soybeans. -Dave * http://www.washingtonpost.com/wp-dyn/content/graphic/2006/07/02/GR2006070200024.html http://en.wikipedia.org/wiki/Agricultural_subsidy#United_States ** http://en.wikipedia.org/wiki/International_wheat_production_statistics *** http://en.wikipedia.org/wiki/Corn#Production From kanzure at gmail.com Tue Nov 23 20:16:46 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Tue, 23 Nov 2010 14:16:46 -0600 Subject: [ExI] 23andme $99 sale on Wednesday Message-ID: code: B84YAG starts: 10 AM Wednesday site: https://www.23andme.com/ Sorry for the short email. - Bryan http://heybryan.org/ 1 512 203 0507 From spike66 at att.net Wed Nov 24 00:41:40 2010 From: spike66 at att.net (spike) Date: Tue, 23 Nov 2010 16:41:40 -0800 Subject: [ExI] new entry from symphony of science Message-ID: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> Their newest creation is extreeemely excellent, featuring many of my favorite people such as Carl Sagan, Bertrand Russell, Sam Harris, Michael Shermer, Lawrence Krauss, Carolyn Porco, Richard Dawkins, Richard Feynman, Phil Plait, James Randi. Check it out: http://www.youtube.com/watch?v=1PT90dAA49Q Is this a great time to be alive, or what? {8-] spike From darren.greer3 at gmail.com Wed Nov 24 04:47:22 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 24 Nov 2010 00:47:22 -0400 Subject: [ExI] new entry from symphony of science In-Reply-To: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> Message-ID: Very awesome Spike. Thanks for that. On Tue, Nov 23, 2010 at 8:41 PM, spike wrote: > > Their newest creation is extreeemely excellent, featuring many of my > favorite people such as Carl Sagan, Bertrand Russell, Sam Harris, Michael > Shermer, Lawrence Krauss, Carolyn Porco, Richard Dawkins, Richard Feynman, > Phil Plait, James Randi. Check it out: > > http://www.youtube.com/watch?v=1PT90dAA49Q > > Is this a great time to be alive, or what? > > {8-] > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sjatkins at mac.com Wed Nov 24 18:43:02 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 24 Nov 2010 10:43:02 -0800 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: References: <648870.20970.qm@web30103.mail.mud.yahoo.com> Message-ID: <6A6AE968-3CC8-47F1-A7FF-51002B7C9A53@mac.com> On Nov 23, 2010, at 6:38 AM, Stefano Vaj wrote: > 2010/11/22 Dan : >> Basically agreed. When people complain about a price being too high or too >> low (in an uncoerced setting*), they demonstrate a misunderstanding about >> how prices work. None of these claims hold up under scientific scrutiny. > > OTOH, money need not be supplied "for a price", let alone after having > been produced out of thin air by some private entity. Money is a value token. Creating it without value debases it, thus indirectly taxing all holders of money. The word 'supplied' may hide the fact that new tokens of value are being created out of thin air with no new value to back them up. Nor is this theoretical. From 1910 to today the US dollar lost over 95% of its purchasing value, largely through inflation of the money supply. Since the current economic crisis began, the US has apparently effectively doubled the money supply. It is not hitting the markets in mega inflation because so much is tied up in interest bearing (!?) bank reserves with the Fed. But I would look for at least double digit inflation not too many years from now. - s From sjatkins at mac.com Wed Nov 24 19:20:42 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Wed, 24 Nov 2010 11:20:42 -0800 Subject: [ExI] new entry from symphony of science In-Reply-To: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> Message-ID: <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> The music overlay against speech and specious echo effects is not at all pleasing. Sentiments are fine and the speakers of course. - s On Nov 23, 2010, at 4:41 PM, spike wrote: > > Their newest creation is extreeemely excellent, featuring many of my > favorite people such as Carl Sagan, Bertrand Russell, Sam Harris, Michael > Shermer, Lawrence Krauss, Carolyn Porco, Richard Dawkins, Richard Feynman, > Phil Plait, James Randi. Check it out: > > http://www.youtube.com/watch?v=1PT90dAA49Q > > Is this a great time to be alive, or what? > > {8-] > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From thespike at satx.rr.com Wed Nov 24 17:37:54 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 24 Nov 2010 11:37:54 -0600 Subject: [ExI] a longevity sf story Message-ID: <4CED4D72.1050105@satx.rr.com> Tenure Track by Kenneth Schneyer From dan_ust at yahoo.com Wed Nov 24 19:56:04 2010 From: dan_ust at yahoo.com (Dan) Date: Wed, 24 Nov 2010 11:56:04 -0800 (PST) Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: <6A6AE968-3CC8-47F1-A7FF-51002B7C9A53@mac.com> References: <648870.20970.qm@web30103.mail.mud.yahoo.com> <6A6AE968-3CC8-47F1-A7FF-51002B7C9A53@mac.com> Message-ID: <287673.68992.qm@web30105.mail.mud.yahoo.com> I think this is the main issue here. I doubt many microlenders are creating new money. Central banks and other established banks under the current system do create new money, but my impression is microlenders merely lend out of their existing money stock and don't create any new money stock to back loans or deposits.
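To put toy numbers on the three components from my earlier message -- all figures invented for illustration, and the multiplicative stacking is just one conventional way to combine them, not the only one:

    # Toy decomposition of a nominal interest rate into the three
    # components described earlier: default risk, expected inflation,
    # and time preference. All numbers are invented for illustration.
    default_risk = 0.05   # lender expects a 5% chance of losing the principal
    inflation    = 0.03   # prices expected to rise 3% over the loan term
    time_pref    = 0.02   # lender wants a 2% real return to part with money now

    # Each factor scales what must be repaid for the lender to break even.
    nominal = (1 + time_pref) * (1 + inflation) / (1 - default_risk) - 1
    print(f"break-even nominal rate: {nominal:.1%}")   # about 10.6%

Nothing in that arithmetic says what the "right" inputs are; each lender and borrower supplies their own.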
And my point was merely that there's nothing wrong per se with charging interest on loans. One might criticize either fiat money or fractional reserve systems, but that's entirely separate from the issue of charging interest on loans. Also, further, there's no objective, scientific way to say that freely agreed upon interest rates (or any other prices) are too high or too low. (Of course, we live in societies where there's massive manipulation of interest rates and other prices, though, to my knowledge, microlenders are not part of the manipulation and seem to represent a way of dealing with current credit systems. I think complaining about them is merely misguided at best or so much demagoguery at worst. In this case, it's notable that politicians and government economists are doing the complaining -- not the borrowers serviced by microlenders. To me, this is typical of much anti-market rhetoric: the people actually involved are not complaining -- just outsiders who seem to either not have a clue or who actually are just looking for an issue to grandstand on.) Regards, Dan From: Samantha Atkins To: ExI chat list Sent: Wed, November 24, 2010 1:43:02 PM Subject: Re: [ExI] Micro-loan programs not as successful as hoped On Nov 23, 2010, at 6:38 AM, Stefano Vaj wrote: > 2010/11/22 Dan : >> Basically agreed. When people complain about a price being too high or too >> low (in an uncoerced setting*), they demonstrate a misunderstanding about >> how prices work. None of these claims hold up under scientific scrutiny. > > OTOH, money need not be supplied "for a price", let alone after having > been produced out of thin air by some private entity. Money is a value token. Creating it without value debases it, thus indirectly taxing all holders of money. The word 'supplied' may hide the fact that new tokens of value are being created out of thin air with no new value to back them up. Nor is this theoretical. From 1910 to today the US dollar lost over 95% of its purchasing value, largely through inflation of the money supply. Since the current economic crisis began, the US has apparently effectively doubled the money supply. It is not hitting the markets in mega inflation because so much is tied up in interest bearing (!?) bank reserves with the Fed. But I would look for at least double digit inflation not too many years from now. - s -------------- next part -------------- An HTML attachment was scrubbed... URL: From sparge at gmail.com Wed Nov 24 23:06:37 2010 From: sparge at gmail.com (Dave Sill) Date: Wed, 24 Nov 2010 18:06:37 -0500 Subject: [ExI] new entry from symphony of science In-Reply-To: <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: On Wed, Nov 24, 2010 at 2:20 PM, Samantha Atkins wrote: > The music overlay against speech and specious echo effects is not at all pleasing. Sentiments are fine and the speakers of course. I agree. The pitch adjustment effect is called Auto-Tune. The Wikipedia page for it is pretty interesting: http://en.wikipedia.org/wiki/Autotune "Auto-Tune is a proprietary audio processor created by Antares Audio Technologies. Auto-Tune uses a phase vocoder to correct pitch in vocal and instrumental performances. It is used to disguise off-key inaccuracies and mistakes, and has allowed singers to perform perfectly tuned vocal tracks without the need of singing in tune.
While its main purpose is to slightly bend sung pitches to the nearest true semitone (to the exact pitch of the nearest tone in traditional equal temperament), Auto-Tune can be used as an effect to distort the human voice when pitch is raised/lowered significantly." -Dave From kanzure at gmail.com Thu Nov 25 01:22:46 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Wed, 24 Nov 2010 19:22:46 -0600 Subject: [ExI] Public access to NanoEngineer-1 development repos Message-ID: Hey all, Some of you may remember NanoEngineer (ne-1), the first open-source nanotech CAD program, which also has DNA modeling capabilities much like caDNAno. The first public release and source code was published in 2008. Now that Nanorex has shut down, I have converted the subversion repository to git and the 14,000+ commits are now public. view it here: http://diyhpl.us/cgit/nanoengineer or for the hardcore: git clone git://diyhpl.us/nanoengineer mailing list (for programmers/nanotech developers) http://groups.google.com/group/nanoengineer-dev irc channel: #hplusroadmap on freenode === Visuals === Renderings and visualization: http://nanoengineer-1.com/content/index.php?option=com_content&task=view&id=36&Itemid=46 molecular machinery gallery: http://nanoengineer-1.com/content/index.php?option=com_content&task=view&id=40&Itemid=50 giant poster on structural DNA nanotech: http://www.somewhereville.com/rescv/nanorex_dnaposter_0_33_scaled.jpg larger versions: http://www.somewhereville.com/?page_id=10 === Historical deets === You can find remnants of Nanorex on the web here: http://www.nanoengineer-1.com/ And the wiki (which may or may not be soon deprecated): http://www.nanoengineer-1.net/mediawiki/index.php?title=Main_Page Here are some of the original announcements from 2008: "NanoEngineer-1 software for CAD/CAM with structural DNA" http://nextbigfuture.com/2008/04/nanoengineer-1-software-for-cadcam-with.html "Do-it-yourself nanotechnology objects from DNA" https://www.foresight.org/d/nanodot/?p=2710 This was some of the original text from Nanorex describing what they were doing: """

Our mission is to support the design and development of advanced nanosystems

Self-assembled atomically precise nanosystems hold great promise in many areas, both experimental and practical. Among the products will be systems that help researchers build more advanced systems. We expect structural DNA nanotechnology to play a central role in next-generation nanosystems. Our mission is to support the design and development of advanced nanosystems through computational tools. In all areas of technology, tools for design and modeling help researchers to solidify their ideas into concrete representations and to evaluate and revise them. This speeds the cycle of design, fabrication, and testing at the center of the development process. Structural DNA nanotechnology (SDN) will be no exception.

DNA structures can provide frameworks for next-generation nanosystems

Three lines of research are converging to create a capability for systematic design of complex, atomically precise nanosystems. SDN has a crucial role in this prospective development.

Special structures

The first line of research is the development of a wide range of atomically precise functional components -- organometallic complexes, magic-size quantum dots, nanotubes and fibers, engineered surfaces, and so forth. These have functions ranging from chemical catalysis to electro-optical transduction to structural support.
This wide range of functions, however, is offset by a major limitation: each of these functional components is a special structure, either unique or part of a small family, not a member of a designable class of billions of possible structures. This limitation makes it almost impossible to design components that will self-assemble to form complex, atomically precise systems. By themselves, these special structures are simply too constrained to provide the necessary diversity of selectively complementary surfaces.

Engineered proteins

The second line of research is the development of polymers made from a diverse set of monomers that fold to make specific 3D structures. Protein engineering is the advanced technology of this sort, and it has progressed to the point where researchers routinely design novel structures that are more stable than those found in nature. Artificial and natural examples show that proteins can perform a wide range of functions, and can bind proteins, nucleic acids, and an enormous range of other atomically precise structures, both biological and non-biological. Proteins therefore provide a solution to the problem of assembling the special, highly functional structures discussed above. Protein molecules can be effective structures: they have strengths and stiffnesses like those of epoxies, polycarbonates, and other engineering polymers. However, these useful properties are offset by a slow design, fabrication, and testing cycle (several months) and by the small size of individual proteins (a few nanometers). They are attractive as components and linkers, but less attractive as a way to combine components to make large systems.

Structural DNA

The third line is SDN itself, which now can be used to implement a large and growing range of structures on a scale of tens to thousands of nanometers. Like proteins, but unlike the special, highly functional structures, DNA is a modular system that can be used to make a set of structures of combinatorial size, with billions of possible design choices for strands just a few nanometers long. Unlike proteins, DNA structures can be made with a fast design, fabrication, and testing cycle (no more than a few days, in some instances), and they can easily be thousands of times larger in volume. They can provide specific binding sites for proteins or DNA-tagged structures, holding hundreds or thousands of components in specific spatial geometries. These lines of development are complementary, the first providing diverse elements of high functionality, the second providing components that can bind them precisely, and the third providing structures that can organize them in large numbers to form complex patterns. The resulting ability to build modular composite nanosystems opens the door to an as-yet unimaginable range of experimental and practical applications. SDN plays a vital role in this prospect: it is literally what holds it all together.

Structural DNA nanotechnology is a point of high leverage for computational tools

DNA structures are a good target for computer-aided design tools. They are regular enough that they can be designed using relatively abstract representations, yet complex enough that computer support for visualization is essential. With DNA as a medium, designers can arrange and rearrange parts in a systematic way, much as they would in designing conventional macroscopic objects.
Special structures, by contrast, leave little scope for design, and while proteins have enormous scope for design, the process has special difficulties. Where a designer can rearrange DNA strands by following simple rules, relying on the regularities of helical structure and paired bases, a protein designer must use a computational search process to find combinations of side chains that fit together. This makes even the simplest design steps more difficult to plan and implement. The development of modular composite nanosystems will require computational support for designs that include special structures and proteins, and some support may be possible at an early date. SDN design is the natural starting point, however, and is a rich field in itself.

An open-source framework will enable collaborative development of software tools

The growing SDN community has developed many software tools, and will develop many more in the years to come. Nanorex is developing open-source software that provides tools for visualization, modeling, and manipulation of DNA structures, and that provides interfaces for integrating these capabilities with existing and future software tools developed within the SDN community. Because the core software is open source (NanoEngineer-1 is under GPL), all participants can be confident that it won't become expensive, and that any team that is working to extend it must continue to satisfy the broader community. Nanorex can't take down the project by failing, going bad, or trying to squeeze money out of the software itself -- in the worst case, the work would simply continue under new leadership. Researchers will want to keep control of the tools they create, both to ensure their quality and to get proper credit when they are used. These tools can be treated as distinct open-source projects, giving researchers full control of the content of software that appears under their names. User interface conventions in NanoEngineer-1 will give clear credit to the creator of a tool when it is used. Rather than absorbing contributions and making them invisible, the project will offer researchers a new distribution channel that can make their work better known, better supported, and more widely used. Our mission is to support the design and development of advanced nanosystems, and we see SDN as a central part of that development. No single research group or company could possibly provide all the necessary tools, so the choice is whether to have a jumble of incompatible pieces of software, each implementing a limited user interface, or to find a way to bring these tools together to form a more integrated system with powerful capabilities. We think that the general approach described here will enable the second, superior option. The approach itself, of course, is also open to contributions and revision by the community of users and contributors in the SDN research community. """ I'll also be re-deploying bugzilla and a few other infrastructure items (nightly builds, etc.) in the coming weeks. - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From msd001 at gmail.com Thu Nov 25 02:32:42 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Wed, 24 Nov 2010 21:32:42 -0500 Subject: [ExI] new entry from symphony of science In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: On Wed, Nov 24, 2010 at 6:06 PM, Dave Sill wrote: > On Wed, Nov 24, 2010 at 2:20 PM, Samantha Atkins wrote: >> The music overlay against speech and specious echo effects is not at all pleasing. Sentiments are fine and the speakers of course. > > I agree. The pitch adjustment effect is called Auto-Tune. The > Wikipedia page for it is pretty interesting: > > http://en.wikipedia.org/wiki/Autotune > > "Auto-Tune is a proprietary audio processor created by Antares Audio > Technologies. Auto-Tune uses a phase vocoder to correct pitch in vocal > and instrumental performances. It is used to disguise off-key > inaccuracies and mistakes, and has allowed singers to perform > perfectly tuned vocal tracks without the need of singing in tune. > While its main purpose is to slightly bend sung pitches to the nearest > true semitone (to the exact pitch of the nearest tone in traditional > equal temperament), Auto-Tune can be used as an effect to distort the > human voice when pitch is raised/lowered significantly." And I wish this "effect" had never been invented. It is like nails on a chalkboard... From darren.greer3 at gmail.com Thu Nov 25 03:52:45 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Wed, 24 Nov 2010 23:52:45 -0400 Subject: [ExI] new entry from symphony of science In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: And I wish this "effect" had never been invented. It is like nails on a chalkboard... It may have some interesting implications for electronica and computer music. Not the effect so much as the concept of taking human ideas expressed in speech and recorded in a non-musical context and fusing it rhythmically and tonally with electronica and computer music. For the most part, Auto-Tune is just an internet fad, but I know of one new music composer who thinks it might be worth exploring in computer-generated composition. There are probably others. Would anyone here care to answer a question about imaginary numbers from this liberal arts major who is now neck deep in his first year of a post-secondary science and mathematics education? I'm having trouble understanding how the square root of negative one could have a practical application beyond abstract mathematics. Or even in abstract mathematics, for that matter. I understand the concept as well as I'm being asked to, and can tease it out of a complex number and solve the equation around it. But we haven't got to a place where anyone is actually saying what I'm going to use it for. I know we'll get there, but the suspense is killing me. And the explanations on the Internet are assuming a knowledge I don't have yet. Any takers? Remember. Key words: FIRST YEAR. An engineer might be good here. Spike? Darren On Wed, Nov 24, 2010 at 10:32 PM, Mike Dougherty wrote: > On Wed, Nov 24, 2010 at 6:06 PM, Dave Sill wrote: > > On Wed, Nov 24, 2010 at 2:20 PM, Samantha Atkins > wrote: > >> The music overlay against speech and specious echo effects is not at all > pleasing. Sentiments are fine and the speakers of course. > > > > I agree. The pitch adjustment effect is called Auto-Tune.
The > > Wikipedia page for it is pretty interesting: > > > > http://en.wikipedia.org/wiki/Autotune > > > > "Auto-Tune is a proprietary audio processor created by Antares Audio > > Technologies. Auto-Tune uses a phase vocoder to correct pitch in vocal > > and instrumental performances. It is used to disguise off-key > > inaccuracies and mistakes, and has allowed singers to perform > > perfectly tuned vocal tracks without the need of singing in tune. > > While its main purpose is to slightly bend sung pitches to the nearest > > true semitone (to the exact pitch of the nearest tone in traditional > > equal temperament), Auto-Tune can be used as an effect to distort the > > human voice when pitch is raised/lowered significantly." > > And I wish this "effect" had never been invented. It is like nails on > chalkboard... > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Thu Nov 25 04:00:24 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Wed, 24 Nov 2010 22:00:24 -0600 Subject: [ExI] new entry from symphony of science In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: <4CEDDF58.9000003@satx.rr.com> On 11/24/2010 8:32 PM, Mike Dougherty wrote: > And I wish this "effect" had never been invented. It is like nails on > chalkboard... Yeah, it's hideous. And in this case tacky, sort of acoustic Jesus in pastel on black velvet. Sorry, Spike. Damien Broderick From kat at mindspillage.org Thu Nov 25 04:20:01 2010 From: kat at mindspillage.org (Kat Walsh) Date: Wed, 24 Nov 2010 23:20:01 -0500 Subject: [ExI] new entry from symphony of science In-Reply-To: <4CEDDF58.9000003@satx.rr.com> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> <4CEDDF58.9000003@satx.rr.com> Message-ID: On Wed, Nov 24, 2010 at 11:00 PM, Damien Broderick wrote: > On 11/24/2010 8:32 PM, Mike Dougherty wrote: > >> And I wish this "effect" had never been invented. It is like nails on >> chalkboard... > > Yeah, it's hideous. And in this case tacky, sort of acoustic Jesus in pastel > on black velvet. Sorry, Spike. Aww. I thought the first few Symphony of Science were outstanding--they were structured like regular pop songs, with the auto-tune effect used well--not just auto-tuned speeches cut up and somewhat haphazardly set to music. Around #4 (after he had already used the best Sagan material) it sort of jumped the Pisces...
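For the curious, the retuning step in the Wikipedia description quoted above is tiny in code. A rough Python sketch of it (the 440 Hz concert-A reference and the helper name are assumptions made for illustration, not anything documented by Antares):

  import math

  def snap_to_semitone(freq_hz, ref_hz=440.0):
      # Distance from the reference pitch, measured in semitones
      semitones = 12.0 * math.log2(freq_hz / ref_hz)
      # Round to the nearest equal-tempered semitone and convert back to Hz
      return ref_hz * 2.0 ** (round(semitones) / 12.0)

  print(snap_to_semitone(435.0))  # a slightly flat A4 is pulled up to 440.0
  print(snap_to_semitone(531.0))  # between C5 and C#5; snaps to ~523.25 (C5)

A real pitch corrector also has to detect the pitch and resynthesize the audio; this is only the snapping rule itself.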
-Kat -- Your donations keep Wikipedia online: http://donate.wikimedia.org/en Wikimedia, Press: kat at wikimedia.org * Personal: kat at mindspillage.org http://en.wikipedia.org/wiki/User:Mindspillage * (G)AIM:mindspillage IRC(freenode,OFTC):mindspillage * identi.ca:mindspillage * phone:ask From atymes at gmail.com Thu Nov 25 06:04:19 2010 From: atymes at gmail.com (Adrian Tymes) Date: Wed, 24 Nov 2010 22:04:19 -0800 Subject: [ExI] new entry from symphony of science In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: 2010/11/24 Darren Greer > Would anyone here care to answer a question about imaginary numbers from > this liberal arts major who is now neck deep in his first year of a > post-secondary science and mathematics education? I'm having trouble > understanding how the square root of negative one could have a practical > application beyond abstract mathematics. Or even in abstract mathematics, > for that matter. I understand the concept as well as I'm being asked to, > and can tease it out of a complex number and solve the equation around it. > But we haven't got to a place where anyone is actually saying what I'm > going to use it for. I know we'll get there, but the suspense is killing me. > And the explanations on the Internet are assuming a knowledge I don't have > yet. Any takers? Remember. Key words: FIRST YEAR. An engineer might be good > here. Spike? > One real world application I know of is for reducing certain kinds of two dimensional data to a single number, for instance when trying to analyze electronic properties where you have alternating current that alternates at a certain frequency and has a certain amplitude. This is known as "phasors" - not as in directed energy weapons, though they can stun (and they can be used to describe some directed energy weapons). See http://www.eecs.umich.edu/~aey/eecs206/lectures/phasor.pdf for an overview. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Thu Nov 25 07:56:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Thu, 25 Nov 2010 02:56:47 -0500 Subject: [ExI] Complex numbers (was: new entry from symphony of science) In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: On Nov 24, 2010, at 10:52 PM, Darren Greer wrote: > I'm having trouble understanding how the square root of negative one could have a practical application beyond abstract mathematics. Or even in abstract mathematics, for that matter. The short answer is that the square root of negative one is essential if mathematically you want to calculate how things rotate. If you pair up an Imaginary Number (i) and a regular old Real Number you get a Complex Number, and you can make a one to one relationship between the way Complex numbers add subtract multiply and divide and the way things move in a two dimensional plane, and that is enormously important. Or you could put it another way, regular numbers that most people are familiar with just have a magnitude, but complex numbers have a magnitude AND a direction. Many thought the square root of negative one (i) didn't have much practical use until about 1860 when Maxwell used it in his famous equations to figure out how Electromagnetism worked. Today nearly all quantum mechanical equations have an "i" in them somewhere, and it might not be going too far to say that is the source of quantum weirdness.
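That magnitude-and-direction picture is easy to check numerically. A minimal sketch with Python's built-in complex type and cmath (just an illustration of the rotation idea, not anything from Maxwell's own notation):

  import cmath, math

  z = 1 + 0j                 # a unit vector pointing along the real axis
  for _ in range(4):
      z *= 1j                # multiplying by i rotates the vector 90 degrees
      print(z, round(math.degrees(cmath.phase(z)), 1))
  # The vector steps through 90, 180 and 270 (printed as -90) degrees,
  # then returns to 0: four multiplications by i make one full turn.

  w = 3 + 4j
  print(abs(w))              # 5.0: the magnitude, sqrt(3^2 + 4^2)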
The Schrodinger equation is deterministic and describes the quantum wave function, but that function is an abstraction and is unobservable, to get something you can see you must square the wave function and that gives you the probability you will observe a particle at any spot; but Schrodinger's equation has an "i" in it and that means very different quantum wave functions can give the exact same probability distribution when you square it; remember with i you get weird stuff like i^2=i^6 =-1 and i^4=i^100=1. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Thu Nov 25 11:33:04 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Thu, 25 Nov 2010 07:33:04 -0400 Subject: [ExI] Complex numbers (was: new entry from symphony of science) In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> Message-ID: John wrote: >The short answer is that the square root of negative one is essential if mathematically you want to calculate how things rotate. If you pair up an Imaginary Number (i) and a regular old Real Number you get a Complex Number, and you can make a one to one relationship between the way Complex numbers add subtract multiply and divide and the way things move in a two dimensional plane, and that is enormously important. Or you could put it another way, regular numbers that most people are familiar with just have a magnitude, but complex numbers have a magnitude AND a direction. Thanks. That's exactly the sort of answer I needed. Much better explained than Wikipedia, which goes into a long explanation about electricity that I'm certain is correct but lost me after a paragraph. My prof does promise me that we will eventually delve further into complex numbers and that this was really just an introduction, but I was fairly interested in it. I'd heard of them in high school, but had never given them much thought. >remember with i you get weird stuff like i^2=i^6 =-1 and i^4=i^100=1< Yes, I think that's what drew me to them in the first place. I wondered why you'd even bother squaring a number such as the square root of negative one when the square roots cancelled each other out and you ended up with plain old negative one, or when you cubed it you ended up with -i. Things are getting most interesting in my classes anyway. I took advanced math in high school, but we are in a place now where we have to step away from that island of real-world logic that I was always comfortable with, and swim out to a depth where the math has its own rules and internal logic that can at times be counter-intuitive. I'm actually doing surprisingly well in all my courses. I think I'll send a transcript to my old high school physics teacher, whom I eternally frustrated with my poor performance. One of my fellow students brought one of my novels into class yesterday for me to sign, and I scribbled down Heron's Formula under my signature. :) Darren
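For reference, Heron's Formula gives the area of a triangle from its three side lengths alone. A quick Python sketch, with the 3-4-5 right triangle as an assumed test case:

  import math

  def heron_area(a, b, c):
      # s is the semi-perimeter of the triangle
      s = (a + b + c) / 2.0
      return math.sqrt(s * (s - a) * (s - b) * (s - c))

  print(heron_area(3, 4, 5))  # 6.0, matching the familiar (3 * 4) / 2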
2010/11/25 John Clark > On Nov 24, 2010, at 10:52 PM, Darren Greer wrote: > > I'm having trouble understanding how the square root of negative one could > have a practical application beyond abstract mathematics. Or even in > abstract mathematics, for that matter. > > > The short answer is that the square root of negative one is essential if > mathematically you want to calculate how things rotate. If you pair up an > Imaginary Number (i) and a regular old Real Number you get a Complex Number, > and you can make a one to one relationship between the way Complex numbers > add subtract multiply and divide and the way things move in a two > dimensional plane, and that is enormously important. Or you could put it > another way, regular numbers that most people are familiar with just have a > magnitude, but complex numbers have a magnitude AND a direction. > > Many thought the square root of negative one (i) didn't have much practical > use until about 1860 when Maxwell used it in his famous equations to > figure out how Electromagnetism worked. Today nearly all quantum mechanical > equations have an "i" in them somewhere, and it might not be going too far to > say that is the source of quantum weirdness. The Schrodinger equation is > deterministic and describes the quantum wave function, but that function is > an abstraction and is unobservable, to get something you can see you must > square the wave function and that gives you the probability you will observe > a particle at any spot; but Schrodinger's equation has an "i" in it and that > means very different quantum wave functions can give the exact same > probability distribution when you square it; remember with i you get weird > stuff like i^2=i^6 =-1 and i^4=i^100=1. > > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Fri Nov 26 02:46:21 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 25 Nov 2010 18:46:21 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: Hi Ben, On Fri, Nov 19, 2010 at 8:35 AM, Ben Goertzel wrote: > > There is a strong argument that a hard takeoff is plausible. This > argument has been known for a long time, and so far as I can tell SIAI > hasn't done much to make it stronger, though they've done a lot to > publicize it. The factors Michael A mentions are certainly part of > this argument... > > OTOH, I have not heard any reasonably strong argument that a hard > takeoff is *likely*... from Michael or anyone else.
There are simply > too many uncertainties involved, too many fast and loose speculations > about future technologies, to be able to make such an argument. > Given MNT with the capabilities outlined by CRN and the Phoenix nanofactory paper, does a hard takeoff seem likely to you? -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Fri Nov 26 02:53:44 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 25 Nov 2010 18:53:44 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: On Sun, Nov 21, 2010 at 1:59 PM, Samantha Atkins wrote: > > On Nov 21, 2010, at 1:20 PM, spike wrote: > > > > > The human brain has some inherent limitations, most specifically that we > get > > tired, and there are not enough of us. Consider top level chess. Human > > elite players, top 100 in the world, can still play a competitive game > > against ordinary 100 dollar chess programs running on an ordinary 500 > dollar > > laptop computer, but they must invest really intense concentration for > about > > four hours, after which they are exhausted. The computer on the other > hand > > is immediately ready for another game, and can run two or more high > quality > > games simultaneously, it can run day and night, it can replicate itself > > arbitrarily many times, all while the six billion strong human race is > stuck > > right at around 100 or so players (and declining) capable of such > > concentration, at a rate of one game a day at most. Silicon based > recursive > > self-improvement is implemented by this ability to laser focus on the > same > > problem over indefinite periods, in arbitrary numbers. > > Great point! I can beat the $100 dollar chess program quicker if I spend > even more time probing and analyzing its weaknesses but the point is well > taken. > Wasn't this point obvious from the get-go? Isn't this just the beginning of what humans must overcome to win against recursively self-improving AI? -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelanissimov at gmail.com Fri Nov 26 03:03:26 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 25 Nov 2010 19:03:26 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: On Fri, Nov 19, 2010 at 11:18 AM, Keith Henson wrote: > Re these threads, I have not seen any ideas here that have not been > considered for a *long* time on the sl4 list. > > Sorry. > So who won the argument? My point is that the SIAI supporters and Eliezer Yudkowsky are correct, and the critics are wrong. If there's no consensus, then there's always plenty more to discuss. Contrary to consensus, we have people in the transhumanist community calling us cultists and as deluded as fundamentalist Christians. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michaelanissimov at gmail.com Fri Nov 26 02:25:41 2010 From: michaelanissimov at gmail.com (Michael Anissimov) Date: Thu, 25 Nov 2010 18:25:41 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: <463322.58148.qm@web65602.mail.ac4.yahoo.com> References: <463322.58148.qm@web65602.mail.ac4.yahoo.com> Message-ID: On Wed, Nov 17, 2010 at 9:11 PM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > > I have some questions, perhaps naive, regarding the feasibility of the hard > takeoff scenario: Is self-improvement really possible for a computer > program? > Yes. For instance, Godel machine. Reinforcement learning. > And if the initial "intelligence function" is flawed, then all recursive > iterations of the function will have the same flaw. So it would not really > be > qualitatively improving, it would simply be quantitatively increasing. For > example, if I had two or even four identical brains, none of them might be > able > to answer this question, although I might be able to do four other mental > tasks > that I am capable of doing, at once. > But there are thousands of avenues of improvement I can identify for myself now. Thus, a human-similar intelligence would likely see a similar number of potential avenues of improvement and pursue them. > On the other hand, if the seed AI is able to actually rewrite the code of > its > intelligence function to non-recursively improve itself, how would it avoid > falling victim to the halting problem? I guess all self-improving software programs will inevitably fall prey to infinite recursion or the halting problem, then. Please say "yes" if you believe this. If there is no way, even in principle, to > algorithmically determine beforehand whether a given program with a given > input > will halt or not, would an AI risk getting stuck in an infinite loop by > messing > with its own programming? Yes. Just like a human would too. > The halting problem is only defined for Turing > machines so a quantum computer may overcome it, but I am curious if any > SIAI > people have considered it in their analysis of hard versus soft takeoff. > Not really, no. -- michael.anissimov at singinst.org Singularity Institute Media Director -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Sat Nov 27 02:06:08 2010 From: spike66 at att.net (spike) Date: Fri, 26 Nov 2010 18:06:08 -0800 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science Message-ID: <002b01cb8dd7$a8509710$f8f1c530$@att.net> 2010/11/24 Darren Greer . I'm having trouble understanding how the square root of negative one could have a practical application beyond abstract mathematics. Or even in abstract mathematics, for that matter. ... Any takers? Remember. Key words: FIRST YEAR. An engineer might be good here. Spike? Darren, there really are a skerjillion real world applications for imaginary numbers. It is actually very unfortunate that they were ever given that name. Calling them real and imaginary makes it sound like the imaginary numbers are somehow not *real*. {8^D A number with both a real and an imaginary part is called a complex number. That is another *terrible* term, because it scares non-mathematics types. They should have been called something different. I would propose the reals be called horizontal numbers and imaginary called vertical numbers. Then if it has both, it is an off-axis number, because it is on neither the horizontal nor the vertical axis. Here's a mathematical comment that will blow your mind if you think about it hard enough, and one which also has real world applications: e^(i*pi) = -1 I would put a ! after that comment but for the fact that ! has its own meaning in mathematics, which makes it difficult to express one's enthusiasm at a really exciting equation without messing up the equation. But this is a very exciting equation! It is a result of the fact that e^(i*theta) = cosine(theta) + i*sin(theta) !! Is this cool or what? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Sat Nov 27 02:27:16 2010 From: spike66 at att.net (spike) Date: Fri, 26 Nov 2010 18:27:16 -0800 Subject: [ExI] new entry from symphony of science In-Reply-To: <4CEDDF58.9000003@satx.rr.com> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> <4CEDDF58.9000003@satx.rr.com> Message-ID: <003301cb8dda$a23d62b0$e6b82810$@att.net> From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Damien Broderick Subject: Re: [ExI] new entry from symphony of science On 11/24/2010 8:32 PM, Mike Dougherty wrote: >> And I wish this "effect" had never been invented. It is like nails on >> chalkboard... >Yeah, it's hideous. And in this case tacky, sort of acoustic Jesus in pastel on black velvet. Sorry, Spike. Damien Broderick Oh these comments are far too pessimistic my friends. Do let me offer another perspective. Think of popular musical genre(s?). We of my age, now in our late youth, enjoyed from our childhoods rock and roll, later called rock. The subject matter of that body of music is generally about romance: how guys feel about their sweethearts mostly. Some other stuff thrown in, drugs for instance, but romance in some form is over half of it. I like love. Romance is good. We had disco, which is about dancing mostly. Dancing is a way of having fun. Disco isn't very deep, because it is about dancing. Country Western music is partly about bad luck, but still mostly romance. Somehow we managed to get hip hop and rap.
I am not sure what it is about, for I confess I don't really understand the lyrics. But I have heard there are a number of rap and hip hop stars with the term ice, such as Ice-T, Vanilla Ice, Ice Cube, and perhaps several others you hop hipsters know way better than I do. The term ice is sometimes used as a substitute for murder. "Ice-em" means to slay the prole. So we have a quasi-musical genre which has as its subject... murder. Do let us hope they refer to the alternative definition, diamonds. I can't tell from what few words I understand, from a genre that sounds to me like men arguing in a foreign language. There have been attempts to express scientific notions in rap, but it didn't work for me. Sounded too much like those guys who write rhythmic poetry about murder (or possibly jewelry?) So now in music we have romance, drugs, dancing, bad luck, murder, perhaps jewelry. I like love, the rest of it they can have. Please. So how can we express really meaningful commentary in music? Can we write music about science? Using people who have actual brains, talking about something we care about? How? I am open to suggestion, but I suspect any way we do it, we end up with something that sounds a lot like symphony of science. spike From msd001 at gmail.com Sat Nov 27 06:53:49 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 27 Nov 2010 01:53:49 -0500 Subject: [ExI] Hard Takeoff In-Reply-To: References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: 2010/11/25 Michael Anissimov > > Wasn't this point obvious from the get-go? Isn't this just the beginning > of what humans must overcome to win against recursively self-improving AI? > > I wonder if we'll ever overcome the US vs THEM mentality? I'm sure it was an effective simplification in tribal settings to immediately assume danger because "we" don't immediately recognize "them." I feel that the analogy to winning against recursively improving AI is like a military parent returning home after many years to find their own child has grown into an unrecognized teen - and feeling so threatened they constantly feel the need to win against this home invader (who conversely feels the adult has no right to intervene in a household in which they've had no part for the last decade) My point is that we should be so well tied to the improving AI that our collective intelligence is raised along with recursive improvement. Granted truly alien motivations of a suddenly explosive takeoff could be disastrous. Unexpected nuclear explosions are also disastrous but we don't eschew electricity produced from nuclear reactors. We are certainly concerned that genetic engineering (et al.) have the potential to produce a plague that also wipes out humanity but it would be unwise to abandon this medical technology regardless of its potential for curative medicine. I was thinking prudence should allay our fears. Then I imagined the counterpoint would be to investigate if humanity collectively possesses enough prudence in the first place. xkcd for prudence: http://xkcd.com/665/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From msd001 at gmail.com Sat Nov 27 07:15:24 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 27 Nov 2010 02:15:24 -0500 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <002b01cb8dd7$a8509710$f8f1c530$@att.net> References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> Message-ID: 2010/11/26 spike > Darren, really are a skerjillion real world applications for imaginary > numbers. It is actually very unfortunate that they were ever given that > name. Calling them real and imaginary makes it sound like the imaginary > numbers are somehow not **real**. {8^D > > > > A number with both a real and an imaginary part is called a complex > number. That is another **terrible** term, because it scares > non-mathematics types. > > > > They should have been called something different. I would propose the > reals be called horizontal numbers and imaginary called vertical numbers. > Then if it has both, it is an off-axis number, because it is on neither the > horizontal or vertical axis. > > haha. Given your new nomenclature I would use horizontal numbers to measure length and width but vertical numbers for height? You say off-axis numbers aren't complex? How can you mix length/width numbers with height numbers? What's an axis? When you draw that diagonal line through the giant plus sign, why does the XY plane tip over into the virtual/perspective dimension so the up-down axis becomes Z? So simple for those with an aptitude, otherwise unintelligible for those without. I think they're great names (real, imaginary, complex) because they're ideally domain-specific nerd words. Imaginary numbers really do require imagination to grok i^2 = -1. I'd love to see that represented in a visually intuitive way*. I understand that the symbol describes a concept that is only communicable after the semantics of the language are conveyed, but without the agreement of those symbols/rules is there an obvious way to discover imaginary numbers? * assumes visual-spatial thinking is intuitive, which may not be true for many. It would be interesting to compare visualization ability of the average 2010 adult with historical mathematicians. Have decades of TV & video games honed our visual thinking, possibly at the expense of other modalities? -------------- next part -------------- An HTML attachment was scrubbed... URL: From msd001 at gmail.com Sat Nov 27 07:27:09 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 27 Nov 2010 02:27:09 -0500 Subject: [ExI] new entry from symphony of science In-Reply-To: <003301cb8dda$a23d62b0$e6b82810$@att.net> References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> <4CEDDF58.9000003@satx.rr.com> <003301cb8dda$a23d62b0$e6b82810$@att.net> Message-ID: On Fri, Nov 26, 2010 at 9:27 PM, spike wrote: > So how can we express really meaningful commentary in music? Can we write > music about science? Using people who have actual brains, talking about > something we care about? How? I am open to suggestion, but I suspect any > way we do it, we end up with something that sounds a lot like symphony of > science. > > I think I follow your vector. I'll try to add.... Nope, can't figure it. We could build a representation of the Mona Lisa using Lego bricks. To some it would be a dreadful copy that completely fails to capture the essence of the original. To others it would be it an original work of art in its own right. Some might like pitchbending effects applied to "smart people talks" in the name of music. 
I personally feel that's an original work of art that I do not appreciate. I do think we can write music about science. It will likely take a hybrid creativity to be recognized by both musicians and scientists. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Sat Nov 27 10:29:49 2010 From: pharos at gmail.com (BillK) Date: Sat, 27 Nov 2010 10:29:49 +0000 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <002b01cb8dd7$a8509710$f8f1c530$@att.net> References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> Message-ID: 2010/11/27 spike wrote: > Here's a mathematical comment that will blow your mind if you think about it > hard enough, and one which also has real world applications: > > e^(i*pi) = -1 > > I would put a ! after that comment but for the fact that ! has its own > meaning in mathematics, which makes it difficult to express one's enthusiasm > at a really exciting equation without messing up the equation. But this is > a very exciting equation! It is a result of the fact that > > e^(i*theta) = cosine(theta) + i*sin(theta) > > Is this cool or what? > Mathematica says the formula is true, so it must be! But be careful ------ BillK From test at ssec.wisc.edu Sat Nov 27 13:11:06 2010 From: test at ssec.wisc.edu (Bill Hibbard) Date: Sat, 27 Nov 2010 07:11:06 -0600 (CST) Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: References: Message-ID: Spike wrote: > Here's a mathematical comment that will blow your mind > if you think about it hard enough, and one which also > has real world applications: > > e^(i*pi) = -1 > > . . . > But this is a very exciting equation! It is a result > of the fact that > > e^(i*theta) = cosine(theta) + i*sin(theta) > > !! > > Is this cool or what? Way cool, Spike. When I learned this stuff as a tadpole I was fascinated by the fact that: i^i = e^(-pi/2) = 0.207879576... is real. Not only do complex variables have lots of practical uses, they are also a source of great mathematical beauty.
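Both equations are one-liners to verify numerically. A small Python sketch (note that 1j ** 1j returns only the principal value; the other values of i^i come from adding multiples of 2*pi to the angle of i):

  import cmath, math

  print(cmath.exp(1j * math.pi))  # essentially -1; the tiny imaginary part
                                  # is floating-point rounding
  print((1j) ** (1j))             # (0.20787957635076193+0j): i^i is real
  print(math.exp(-math.pi / 2))   # 0.20787957635076193, the same number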
Bill From test at ssec.wisc.edu Sat Nov 27 13:58:30 2010 From: test at ssec.wisc.edu (Bill Hibbard) Date: Sat, 27 Nov 2010 07:58:30 -0600 (CST) Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: References: Message-ID: On Sat, 27 Nov 2010, Bill Hibbard wrote: > Spike wrote: >> Here's a mathematical comment that will blow your mind >> if you think about it hard enough, and one which also >> has real world applications: >> >> e^(i*pi) = -1 >> >> . . . >> But this is a very exciting equation! It is a result >> of the fact that >> >> e^(i*theta) = cosine(theta) + i*sin(theta) >> >> !! >> >> Is this cool or what? > > Way cool, Spike. When I learned this stuff as a tadpole > I was fascinated by the fact that: > > i^i = e^(-pi/2) = 0.207879576... > > is real. I should point out that i^i has multiple values because its computation requires taking a complex logarithm. > Not only do complex variables have lots of practical uses, > they are also a source of great mathematical beauty. The first cool thing is that adding the square root of -1 to the real numbers suffices to give the roots of any polynomial. The second cool thing comes when you extend calculus to complex variables. For real variables the limit of: ( f(x + delta.x) - f(x) ) / delta.x has to be consistent for delta.x approaching zero from the negative and positive directions. For complex variables this limit must be consistent approaching zero from any direction in the complex plane. This constrains differentiable functions to have a great deal of structure. If you already know calculus then complex variables aren't that big a leap, and the beauty is worth the effort. Bill From mbb386 at main.nc.us Sat Nov 27 14:52:12 2010 From: mbb386 at main.nc.us (MB) Date: Sat, 27 Nov 2010 09:52:12 -0500 Subject: [ExI] a longevity sf story In-Reply-To: <4CED4D72.1050105@satx.rr.com> References: <4CED4D72.1050105@satx.rr.com> Message-ID: <288193d18c44cbc29139aa3a8f3a8f8d.squirrel@www.main.nc.us> Damien Broderick offered: > Tenure Track > > by Kenneth Schneyer > > > Thanks for this link. I'm still pondering that story. The pain of losing those I love is with me always now as I grow older. It's hard. Regards, MB From darren.greer3 at gmail.com Sat Nov 27 15:45:56 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 27 Nov 2010 11:45:56 -0400 Subject: [ExI] new entry from symphony of science In-Reply-To: References: <000101cb8b70$5c9d1eb0$15d75c10$@att.net> <6439211D-FBA0-4AB5-92F9-7CCBB6F6B5BB@mac.com> <4CEDDF58.9000003@satx.rr.com> <003301cb8dda$a23d62b0$e6b82810$@att.net> Message-ID: >I do think we can write music about science. < I went on a kick awhile ago looking for good poets waxing on hard science. It seems to be a cross-discipline that hasn't caught on much. There are some good scientists doing it who are bad poets and some good poets who are bad scientists, but few fall into both camps. I thought that was odd, because we have so many good scientists and mathematicians who are terrific prose writers. Darren 2010/11/27 Mike Dougherty > On Fri, Nov 26, 2010 at 9:27 PM, spike wrote: > >> So how can we express really meaningful commentary in music? Can we write >> music about science? Using people who have actual brains, talking about >> something we care about? How? I am open to suggestion, but I suspect any >> way we do it, we end up with something that sounds a lot like symphony of >> science. >> >> > I think I follow your vector. I'll try to add.... Nope, can't figure it. 
> > We could build a representation of the Mona Lisa using Lego bricks. To > some it would be a dreadful copy that completely fails to capture the > essence of the original. To others it would be it an original work of art > in its own right. > > Some might like pitchbending effects applied to "smart people talks" in the > name of music. I personally feel that's an original work of art that I do > not appreciate. > > I do think we can write music about science. It will likely take a hybrid > creativity to be recognized by both musicians and scientists. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Sat Nov 27 15:38:08 2010 From: jonkc at bellsouth.net (John Clark) Date: Sat, 27 Nov 2010 10:38:08 -0500 Subject: [ExI] imaginary numbers In-Reply-To: References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> Message-ID: <15E92CF0-9B3E-4520-96A1-383518FDE252@bellsouth.net> On Nov 27, 2010, at 2:15 AM, Mike Dougherty wrote: > Imaginary numbers really do require imagination to grok i^2 = -1. I'd love to see that represented in a visually intuitive way*. Make the Real numbers be the horizontal axis of a graph and the imaginary numbers be the vertical axis, now whenever you multiply a Real or Imaginary number by i you can intuitively think about it as rotating it by 90 degrees in a counterclockwise direction. Look at i, it sits one unit above the real horizontal axis so draw a line from the real numbers to i, so if you multiply i by i (i^2) it rotates to become -1, multiply it by i again (i^3) and it becomes -i, multiply it by i again (i^4) and it becomes 1, multiply it by i again (i^5) and you've rotated it a complete 360 degrees and you're right back where you started at i. It is this property of rotation that makes i so valuable, the best example may be electromagnetism where Maxwell used it to describe how electric and magnetic fields change in the X and Y direction (that is to say in the Real and Imaginary direction) as the wave propagates in the Z direction. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Nov 27 16:04:19 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 27 Nov 2010 12:04:19 -0400 Subject: [ExI] a longevity sf story In-Reply-To: <288193d18c44cbc29139aa3a8f3a8f8d.squirrel@www.main.nc.us> References: <4CED4D72.1050105@satx.rr.com> <288193d18c44cbc29139aa3a8f3a8f8d.squirrel@www.main.nc.us> Message-ID: >Thanks for this link. I'm still pondering that story.< I've thought about it a few times since Damien posted it as well. It's skillful, and I was quite moved by the last piece of written data announcing the protagonist's choice of topic (Austen and Beauty) at the panel lecture. At the same time, I was curious about the short description of the story chosen by the editors: Life's too short? For Martin, it seems too long. Reading that, you'd think that this story was making some kind of revelatory statement about the philosophical rightness of a short or unextended life. I didn't read it that way at all.
Darren On Sat, Nov 27, 2010 at 10:52 AM, MB wrote: > Damien Broderick offered: > > > Tenure Track > > > > by Kenneth Schneyer > > > > > > > > Thanks for this link. I'm still pondering that story. > > The pain of losing those I love is with me always now as I grow older. It's > hard. > > Regards, > MB > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From wildcat2030 at gmail.com Sat Nov 27 12:38:17 2010 From: wildcat2030 at gmail.com (Wildcat) Date: Sat, 27 Nov 2010 14:38:17 +0200 Subject: [ExI] Longevity? It's for Lovers!! (A brief interview with Aubrey De Grey) In-Reply-To: References: Message-ID: here is the link : http://spacecollective.org/Wildcat/6465/Longevity-Its-for-Lovers-A-brief-interview-with-Aubrey-De-Grey On Sat, Nov 27, 2010 at 2:33 PM, Wildcat wrote: > I have had the pleasure to interview A. De Grey for a different perspective > on longevity > > T. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wildcat2030 at gmail.com Sat Nov 27 12:33:32 2010 From: wildcat2030 at gmail.com (Wildcat) Date: Sat, 27 Nov 2010 14:33:32 +0200 Subject: [ExI] Longevity? It's for Lovers!! (A brief interview with Aubrey De Grey) Message-ID: I have had the pleasure to interview A. De Grey for a different perspective on longevity T. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Sat Nov 27 17:14:29 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 27 Nov 2010 11:14:29 -0600 Subject: [ExI] a longevity sf story In-Reply-To: References: <4CED4D72.1050105@satx.rr.com> <288193d18c44cbc29139aa3a8f3a8f8d.squirrel@www.main.nc.us> Message-ID: <4CF13C75.3010801@satx.rr.com> On 11/27/2010 10:04 AM, Darren Greer wrote: > At the same time, I was curious about the short description of the story > chosen by the editors: Life's too short? For Martin, it seems too long. > > Reading that, you'd think that this story was making some kind of > revelatory statement about the philosophical rightness of a short or > unextended life. I didn't read it that way at all. Yeah, the subs did that. I read the story, quite liked it, sent it to the editor in chief for consideration, he didn't like it enough for the print magazine but bought it for online use. I made no further contribution after I fwd'd it. I resigned as COSMOS editor a few weeks ago, and the final story I got accepted, by Malaysian doctor and writer Fadzlishah Johanabas bin Rosli, is due out next month; it's about a devout Muslim robot. Just in time for Xmas, y'know? Damien Broderick From spike66 at att.net Sat Nov 27 17:02:41 2010 From: spike66 at att.net (spike) Date: Sat, 27 Nov 2010 09:02:41 -0800 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: References: Message-ID: <004c01cb8e54$e7f6cf60$b7e46e20$@att.net> ... On Behalf Of Bill Hibbard ...
Subject: Re: [ExI] imaginary numbers: RE: new entry from symphony of science On Sat, 27 Nov 2010, Bill Hibbard wrote: > Spike wrote: >> Here's a mathematical comment that will blow your mind if you think >> about it hard enough, and one which also has real world applications: >> >> e^(i*pi) = -1 >> ! >> . . . >> >> e^(i*theta) = cosine(theta) + i*sin(theta) >> >> !! >> >> Is this cool or what? > > Way cool, Spike. When I learned this stuff as a tadpole I was > fascinated by the fact that: > > i^i = e^(-pi/2) = 0.207879576... > > is real. ( f(x + delta.x) - f(x) ) / delta.x ... >If you already know calculus then complex variables aren't that big a leap, and the beauty is worth the effort...Bill _______________________________________________ I get porno in my inbox nearly every day. I don't ask for it, friends just send it. The girls are beautiful indeed. But this discussion is far more on-turning, or rather they turn me onward. There might be a term for that, mathexual something. spike From spike66 at att.net Sat Nov 27 17:25:55 2010 From: spike66 at att.net (spike) Date: Sat, 27 Nov 2010 09:25:55 -0800 Subject: [ExI] the grinch's real motive Message-ID: <005401cb8e58$264fa090$72eee1b0$@att.net> >... it's about a devout Muslim robot. Just in time for Xmas, y'know? For nineteen years he's put up with this now, He must stop Christmas from coming! But how? http://www.cnn.com/2010/CRIME/11/27/oregon.bomb.plot/index.html?hpt=T2 From darren.greer3 at gmail.com Sat Nov 27 17:59:22 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 27 Nov 2010 13:59:22 -0400 Subject: [ExI] Sam Harris posting of "Jesus is My Friend" by Sonseed Message-ID: Sam Harris posted this video from youtube on his website last night, with the tag "This Just Might Save Your Soul." I didn't know whether to laugh or cry. http://www.youtube.com/watch?v=7-NOZU2iPA8&feature=youtube_gdata_player Darren -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Nov 27 18:03:54 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 27 Nov 2010 14:03:54 -0400 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <004c01cb8e54$e7f6cf60$b7e46e20$@att.net> References: <004c01cb8e54$e7f6cf60$b7e46e20$@att.net> Message-ID: >I get porno in my inbox nearly every day. I don't ask for it, friends just send it. The girls are beautiful indeed. But this discussion is far more on-turning, or rather they turn me onward. There might be a term for that, mathexual something.< Mathurbating? Or is that too Castilian? Darren P.S. I get straight porno in my inbox every day too. A little more off-putting if you're homosexual. I'm thinking of changing my address to barking-up-the-wrong-tree at gmail.com On Sat, Nov 27, 2010 at 1:02 PM, spike wrote: > ... On Behalf Of Bill Hibbard > ... > Subject: Re: [ExI] imaginary numbers: RE: new entry from symphony of > science > > On Sat, 27 Nov 2010, Bill Hibbard wrote: > > Spike wrote: > >> Here's a mathematical comment that will blow your mind if you think > >> about it hard enough, and one which also has real world applications: > >> > >> e^(i*pi) = -1 > >> ! > >> . . . > >> > >> e^(i*theta) = cosine(theta) + i*sin(theta) > >> > >> !! > >> > >> Is this cool or what? > > > > Way cool, Spike.
When I learned this stuff as a tadpole I was > > fascinated by the fact that: > > > > i^i = e^(-pi/2) = 0.207879576... > > > > is real. > > ( f(x + delta.x) - f(x) ) / delta.x > > ... > > >If you already know calculus then complex variables aren't that big a > leap, > and the beauty is worth the effort...Bill > _______________________________________________ > > I get porno in my inbox nearly every day. I don't ask for it, friends just > send it. The girls are beautiful indeed. But this discussion is far more > on-turning, or rather they turn me onward. There might be a term for that, > mathexual something. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Sat Nov 27 18:13:43 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sat, 27 Nov 2010 14:13:43 -0400 Subject: [ExI] Sam Harris posting of "Jesus is My Friend" by Sonseed In-Reply-To: References: Message-ID: I just realized that I may have given the impression that Harris was serious about this, when of course he was being ironic. Anyone who has read him or heard him lecture or debate would know that, but I thought I better clarify in case some haven't. Darren On Sat, Nov 27, 2010 at 1:59 PM, Darren Greer wrote: > Sam Harris posted this video from youtube on his website last night, with > the tag "This Just Might Save Your Soul." I didn't know whether to laugh or > cry. > > http://www.youtube.com/watch?v=7-NOZU2iPA8&feature=youtube_gdata_player > > Darren > > -- > "In the end that's all we have: our memories - electrochemical impulses > stored in eight pounds of tissue the consistency of cold porridge." - > Remembrance of the Daleks > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From natasha at natasha.cc Sat Nov 27 18:58:29 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Sat, 27 Nov 2010 12:58:29 -0600 Subject: [ExI] Human Persons AND Upload / WBE / ASIM Issue Message-ID: <6EC60F95AD0C439BAA0392F5C07FC513@DFC68LF1> One area that is not covered in upload/WBE/ ASIM research is the health or well-being of the person and its relationship to the system/substrate that we will exist in or on. Currently, in our biological mode, our bodies serve multiple purposes, one of which is a means of expression--both physical and mental. In this case, two obvious extremes are the humans who express enjoyment of the body through athletics and the bodily processes of sex, eating, and playing, for example. On the other hand, a person can become captive of his/her physiology (cognitive and somatic) through overindulging in the bodily sensations, which can become unhealthy habits and addictions. In either case, the outward appearance of the body reveals aspects of the internal expressions. How might the upload's substrate reflect its experiential expressions?
For example, an upload's attributes express either healthy processes or unhealthy habits/addictions: What if the substrate has programming issues that interfere with its functioning; how might this expression be recognized? In biology, we incorporate food for energy. This food either causes good health or poor health. How might the source of energy/fuel for an upload affect the substrate we exist in or on? One might say that an upload is not a human and will have no comparative physiological expression. Nevertheless, there have to be modes of expression for it to exist. The process of thinking and experiencing engages activities. The diverse computational attributes of these activities would become incorporated in the upload's awareness and its own computations. Natasha Natasha Vita-More -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at gmail.com Sat Nov 27 19:00:40 2010 From: giulio at gmail.com (Giulio Prisco) Date: Sat, 27 Nov 2010 20:00:40 +0100 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST In-Reply-To: References: Message-ID: REMINDER Suzanne Gildert on Thinking about the hardware of thinking tomorrow in teleplace http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ On Mon, Nov 22, 2010 at 5:10 PM, Giulio Prisco wrote: > Suzanne Gildert will give a talk in Teleplace on "Thinking about the > hardware of thinking: Can disruptive technologies help us achieve > uploading?" on November 28, 2010, at 10am PST (1pm EST, 6pm UK, 7pm > continental EU). > > http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ > > This is a revised version of Suzanne's talk at TransVision 2010, also > inspired by her article on "Building more intelligent machines: Can > 'co-design' help?" (PDF). See also Suzanne's previous Teleplace talk > on "Quantum Computing: Separating Hope from Hype". > > Thinking about the hardware of thinking: Can disruptive technologies > help us achieve uploading? > > S. Gildert, Teleplace, 28th November 2010 > > We are surrounded by devices that rely on general purpose silicon > processors, which are mostly very similar in terms of their design. > But is this the only possibility? As we begin to run larger and more > brain-like emulations, will our current methods of simulating neural > networks be enough, even in principle? Why does the brain, with 100 > billion neurons, consume less than 30W of power, whilst our attempts > to simulate tens of thousands of neurons (for example in the Blue > Brain project) consume tens of KW? As we wish to run computations > faster and more efficiently, we might need to consider if the > design of the hardware that we all take for granted is optimal. In > this presentation I will discuss the recent return to a focus upon > co-design - that is, designing specialized software algorithms running > on specialized hardware, and how this approach may help us create much > more powerful applications in the future. As an example, I will > discuss some possible ways of running AI algorithms on novel forms of > computer hardware, such as superconducting quantum computing > processors.
These behave entirely differently to our current silicon > chips, and help to emphasize just how important disruptive > technologies may be to our attempts to build intelligent machines. > > Event on Facebook > > Dr. Suzanne Gildert is currently working as an Experimental Physicist > at D-Wave Systems, Inc. She is involved in the design and testing of > large scale superconducting processors for Quantum Computing > Applications. Suzanne obtained her PhD and MSci degree from The > University of Birmingham UK, focusing on the areas of experimental > quantum device physics and superconductivity. > > teleXLR8 is a telepresence community for cultural acceleration. We > produce online events, featuring first class content and speakers, > with the best system for e-learning and collaboration in an online 3D > environment: Teleplace. Join teleXLR8 to participate in online talks, > seminars, round tables, workshops, debates, full conferences, > e-learning courses, and social events... with full immersion > telepresence, but without leaving home. > From msd001 at gmail.com Sat Nov 27 20:39:36 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Sat, 27 Nov 2010 15:39:36 -0500 Subject: [ExI] imaginary numbers In-Reply-To: <15E92CF0-9B3E-4520-96A1-383518FDE252@bellsouth.net> References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> <15E92CF0-9B3E-4520-96A1-383518FDE252@bellsouth.net> Message-ID: 2010/11/27 John Clark > On Nov 27, 2010, at 2:15 AM, Mike Dougherty wrote: > > Imaginary numbers really do require imagination to grok i^2 = -1. I'd love > to see that represented in a visually intuitive way*. > > > Make the Real numbers be the horizontal axis of a graph and the imaginary > numbers be the vertical axis, now whenever you multiply a Real or Imaginary > number by i you can intuitively think about it as rotating it by 90 degrees > in a counterclockwise direction. > > Look at i, it sits one unit above the real horizontal axis so draw a line > from the real numbers to i, so if you multiply i by i (i^2) it rotates to > become -1, multiply it by i again (i^3) and it becomes -i, multiply it by i > again (i^4) and it becomes 1, multiply it by i again (i^5) and you've > rotated it a complete 360 degrees and you're right back where you started at > i. > > It is this property of rotation that makes i so valuable, the best example > may be electromagnetism where Maxwell used it to describe how electric and > magnetic fields change in the X and Y direction (that is to say in the Real > and Imaginary direction) as the wave propagates in the Z direction. > > if you put real numbers on X and real numbers on Y then the product is the number of unit squares that cover the area. So a 5 x 5 square is literally a 25 unit square.
A 5i x 5i square is a negative 25 unit square? What does > the negative mean in that sense? Am I missing something fundamental about i > ? (I think I am) This is one of the rare occasions where I'm not being > facetious or tongue-in-cheek. > >
What you're missing is: what does it mean for a square to have a side of 5i? That makes as much sense as having an area of negative 25. This is usually "no sense": garbage in, garbage out. Although, if you ever had some weird circumstance where having a side of 5i made sense, then the same circumstance would explain what an area of negative 25 means. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From max at maxmore.com Sun Nov 28 04:20:01 2010 From: max at maxmore.com (Max More) Date: Sat, 27 Nov 2010 22:20:01 -0600 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science Message-ID: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> Mathurbating is good. Um, equatio? Numberlingus? Penetratios? Max
> >I get porno in my inbox nearly every day. I don't ask for it, friends just >send it. The girls are beautiful indeed. But this discussion is far more >on-turning, or rather they turn me onward. There might be a term for that, >mathexual something.< > >Mathurbating? Or is that too Castilian? > >Darren
From possiblepaths2050 at gmail.com Sun Nov 28 04:51:14 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sat, 27 Nov 2010 21:51:14 -0700 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> References: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> Message-ID: Max More wrote: >Mathurbating is good. > >Um, equatio? Numberlingus? Penetratios? Hey Max, let's try to keep it clean... John ; ) On 11/27/10, Max More wrote: > Mathurbating is good. > > Um, equatio? Numberlingus? Penetratios? > > Max > > >> >I get porno in my inbox nearly every day. I don't ask for it, friends >> > just >>send it. The girls are beautiful indeed. But this discussion is far more >>on-turning, or rather they turn me onward. There might be a term for that, >>mathexual something.< >> >>Mathurbating? Or is that too Castilian? >> >>Darren > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >
From jonkc at bellsouth.net Sun Nov 28 05:02:06 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 28 Nov 2010 00:02:06 -0500 Subject: [ExI] imaginary numbers In-Reply-To: References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> <15E92CF0-9B3E-4520-96A1-383518FDE252@bellsouth.net> Message-ID: On Nov 27, 2010, at 3:39 PM, Mike Dougherty wrote: > > "if you put real numbers on X and real numbers on Y then the product is the number of unit squares that cover the area. So a 5 x 5 square is literally a 25 unit square. A 5i x 5i square is a negative 25 unit square?" On the complex plane (also called the Argand plane), where the horizontal axis is the real numbers and the vertical axis is the imaginary numbers, when you multiply 2 complex numbers together you don't get an area, you get another complex number, or to use another name for the same thing, another vector. In your example, 5i times 5i is indeed negative 25, and that is just a vector of magnitude 25 pointing along the negative real axis: 5i sits at an angle of 90 degrees, and squaring it doubles the angle to 180.
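You can let Python's built-in complex type do the bookkeeping and watch the rotation happen. A minimal sketch of mine, nothing beyond the standard cmath module (the variable names are my own invention):

    import cmath

    z = 1j                         # start at i, one unit up the imaginary axis
    for _ in range(4):
        print(z)                   # i, -1, -i, 1 (a stray -0 in the output is floating-point cosmetics)
        z *= 1j                    # each multiplication by i is a 90 degree counterclockwise turn

    w = 5j * 5j                    # Mike's 5i times 5i
    print(w)                       # (-25+0j)
    print(abs(w), cmath.phase(w))  # 25.0 and pi radians: magnitude 25, angle 180 degrees

The doubling of the angle, from 90 degrees for 5i to 180 degrees for its square, is the rotation at work.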
In your example the real part is zero, but suppose we want to multiply a more interesting complex number by itself, 3+4i for example. You can do the formal vector arithmetic but even before you do that you can get a feel for what the resulting vector of the multiplication must look like; all you need to do is add the angles and multiply the magnitudes of the 2 vectors. The magnitude of 3+4i is 5 because the square root of (3^2 + 4^2) is 5, so the magnitude of the vector we are looking for, the magnitude of the vector that results from multiplying 3+4i by itself, must be 25 because 5*5=25. It's only slightly more difficult to quickly and intuitively estimate the angle. The angle of 3+3i would be 45 degrees, so the angle of 3+4i must be more than 45 degrees but less than 90, so the angle of the vector we're looking for must be twice that, or more than 90 degrees but less than 180. We can now do the actual arithmetic and see if our estimate was correct. (3+4i)*(3+4i) = 9 + 16i^2 + 24i = -7 + 24i because i^2 = -1. So -7 + 24i is the complex number, or vector, we want, and the magnitude of that is the square root of (-7 * -7 plus 24 * 24), and that is equal to the square root of 625, and that is equal to 25. So that part of our guess was correct. To calculate the angle of the vector -7 + 24i we use trigonometry and the arctangent of 24/-7, adjusted into the second quadrant, and we get 106.2 degrees, so that part of our quick estimate was correct too. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
From thespike at satx.rr.com Sun Nov 28 05:26:50 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Sat, 27 Nov 2010 23:26:50 -0600 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> References: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> Message-ID: <4CF1E81A.9050508@satx.rr.com> On 11/27/2010 10:20 PM, Max More wrote: > Um, equatio? That involves horses, I believe. > Numberlingus? A medical condition that sets in after several hours of lingus.
From jonkc at bellsouth.net Sun Nov 28 05:23:23 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 28 Nov 2010 00:23:23 -0500 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: <002b01cb8dd7$a8509710$f8f1c530$@att.net> References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> Message-ID: On Nov 26, 2010, at 9:06 PM, spike wrote: > Here's a mathematical comment that will blow your mind if you think about it hard enough, and one which also has real world applications: > > e^(i*pi) = -1 I think it's even better if you express it as e^(i*PI) + 1 = 0 because that way the 5 most important numbers in mathematics, e, i, PI, one and zero, are all expressed in one short equation. Benjamin Peirce was the most important teacher of mathematics at Harvard in the 19th century; after deriving that equation he turned to his students and said: "Gentlemen that is surely true, it is paradoxical; we cannot understand it and we don't know what it means. But we have proven it, and therefore we know it must be the truth."
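Peirce had to prove it; we can also just watch it hold numerically. A quick Python sketch of mine, using only the standard cmath and math modules (the stray 1e-16 you will see is floating-point noise, not a crack in the theorem), and it re-checks the 3+4i arithmetic from my earlier message while it's at it:

    import cmath, math

    print(cmath.exp(1j * cmath.pi) + 1)    # ~1.2246e-16j, i.e. zero up to rounding

    v = (3 + 4j) ** 2                      # the 3+4i example again
    print(v)                               # (-7+24j)
    print(abs(v))                          # 25.0
    print(math.degrees(cmath.phase(v)))    # 106.26...

Proof it is not, but it is a comforting sanity check.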
John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jonkc at bellsouth.net Sun Nov 28 06:10:47 2010 From: jonkc at bellsouth.net (John Clark) Date: Sun, 28 Nov 2010 01:10:47 -0500 Subject: [ExI] Sam Harris posting of "Jesus is My Friend" by Sonseed In-Reply-To: References: Message-ID: On Nov 27, 2010, at 12:59 PM, Darren Greer wrote: > Sam Harris posted this video from youtube on his website last night, with the tag "This Just Might Save Your Soul." I didn't know whether to laugh or cry. > > http://www.youtube.com/watch?v=7-NOZU2iPA8&feature=youtube_gdata_player I think this version is much better: http://www.youtube.com/watch?v=Sx7QVnHF7EY&feature=related John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Sun Nov 28 06:31:22 2010 From: spike66 at att.net (spike) Date: Sat, 27 Nov 2010 22:31:22 -0800 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: References: <201011280446.oAS4klRh027541@andromeda.ziaspace.com> Message-ID: <002601cb8ec5$e271ee50$a755caf0$@att.net> Max More wrote: >>Mathurbating is good. >>Um, equatio? Numberlingus? Penetratios? >Hey Max, let's try to keep it clean... >John ; ) I thought he was. {8^D Max doesn't play often, but he plays well. {8^D spike
From darren.greer3 at gmail.com Sun Nov 28 15:17:45 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 28 Nov 2010 11:17:45 -0400 Subject: [ExI] Sam Harris posting of "Jesus is My Friend" by Sonseed In-Reply-To: References: Message-ID: John wrote: >I think this version is much better: http://www.youtube.com/watch?v=Sx7QVnHF7EY&feature=related< Alas, if Satan had been their friend instead of Jesus, they might at least have bargained with him -- the soul of a band member or two in exchange for some talent. Darren 2010/11/28 John Clark > On Nov 27, 2010, at 12:59 PM, Darren Greer wrote: > > Sam Harris posted this video from youtube on his website last night, with > the tag "This Just Might Save Your Soul." I didn't know whether to laugh or > cry. > > http://www.youtube.com/watch?v=7-NOZU2iPA8&feature=youtube_gdata_player > > > k > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From darren.greer3 at gmail.com Sun Nov 28 15:45:46 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Sun, 28 Nov 2010 11:45:46 -0400 Subject: [ExI] imaginary numbers: RE: new entry from symphony of science In-Reply-To: References: <002b01cb8dd7$a8509710$f8f1c530$@att.net> Message-ID: John wrote: > . . .Benjamin Peirce was the most important teacher of mathematics at Harvard in the 19th century; after deriving that equation he turned to his students and said: "Gentlemen that is surely true, it is paradoxical; we cannot understand it and we don't know what it means. But we have proven it, and therefore we know it must be the truth."< That's the crux of it, isn't it? I was sitting in class last week and I got very excited over a proof for a formula we were studying. As a writer, when you're working on something good and all is going well, there is this emotive and intellectual click that takes place that tells you that what you are writing is true, not in any literal sense, but in some fundamental way, that you are somehow capturing in those few minutes what Henry James called the very "trick and note of life." Some writers call it 'the zone' for lack of any better description. The same zone is available in mathematics, and perhaps in any rigorously applied intellectual discipline. We can often recognize the truth when we see it or work our way into it, whether we understand how we got there or not.
I was watching a discussion on the 'net the other night between the Four Horsemen -- Hitchens, Dawkins, Harris and Dennett, and Hitchens and Harris made a similar point: that this feeling that I like to think of as transcendence is available to anyone who wants to seek it by applying intellectual rigor to a field of discipline relying on what is observable and attainable through logic and reason. Hate to turn this topic towards religion, but it has been on my mind a lot lately because of my studies. Religion denies people this transcendence in any real sense, for it asks them to look for it outside of what is verifiable and factual and offers them specious truths and demands belief in the supernatural. In return it asks its followers to abandon their own moral sense and intellectual self-sufficiency and to believe in miracles, which is to deny the real miracle, which is to live in a universe where an imaginary number has real world applications and light can be refracted by the warping of space-time. I have discovered in myself a genuine affinity for science and mathematics at the age of 42. God knows what this will do to my writing. I write novels about real people in contemporary society where science is often edited out of the equation. This group is one reason I decided to go back to school and educate myself, but another is that I don't think any writer can afford to ignore the truths of science and mathematics any longer and still give an accurate picture of contemporary society. Don't know where this sudden introspection came from. Math? Why not? Darren
2010/11/28 John Clark > On Nov 26, 2010, at 9:06 PM, spike wrote: > > Here's a mathematical comment that will blow your mind if you think about > it hard enough, and one which also has real world applications: > > e^(i*pi) = -1 > > > I think it's even better if you express it as e^(i*PI) + 1 = 0 because that > way the 5 most important numbers in mathematics, e, i, PI, one and zero, are all > expressed in one short equation. Benjamin Peirce was the most important > teacher of mathematics at Harvard in the 19th century; after deriving that > equation he turned to his students and said: > > "Gentlemen that is surely true, it is paradoxical; we cannot understand it > and we don't know what it means. But we have proven it, and therefore we > know it must be the truth." > > John K Clark > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stefano.vaj at gmail.com Sun Nov 28 16:12:14 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 28 Nov 2010 17:12:14 +0100 Subject: [ExI] Micro-loan programs not as successful as hoped In-Reply-To: <418521.49154.qm@web30106.mail.mud.yahoo.com> References: <648870.20970.qm@web30103.mail.mud.yahoo.com> <418521.49154.qm@web30106.mail.mud.yahoo.com> Message-ID: 2010/11/23 Dan : > I'm not sure microlenders here are creating money. My impression was they > have money that they loan out -- not that they are, say, like fractional > reserve banks or fiat money central banks, producing money beyond their > reserves. I assume they are borrowing that money.
In any event, ultimately all money comes from reserve banks, and this is what I meant by "creating money out of thin air", not necessarily that they may be fractional bankers themselves... -- Stefano Vaj
From agrimes at speakeasy.net Sun Nov 28 17:42:21 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Sun, 28 Nov 2010 12:42:21 -0500 Subject: [ExI] [META] DNS problems Message-ID: <4CF2947D.7040007@speakeasy.net> For the last week or so, I've been having extreme DNS issues reaching extropy.org and sending mail to this list. =\ Anyone else having problems? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights.
From atymes at gmail.com Sun Nov 28 18:12:20 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 28 Nov 2010 10:12:20 -0800 Subject: [ExI] [META] DNS problems In-Reply-To: <4CF2947D.7040007@speakeasy.net> References: <4CF2947D.7040007@speakeasy.net> Message-ID: No problems here. Maybe it's with your local DNS? On Sun, Nov 28, 2010 at 9:42 AM, Alan Grimes wrote: > For the last week or so, I've been having extreme DNS issues reaching > extropy.org and sending mail to this list. =\ Anyone else having problems? > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stefano.vaj at gmail.com Sun Nov 28 18:25:28 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 28 Nov 2010 19:25:28 +0100 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/26 Michael Anissimov : > Contrary to consensus, we have people in the transhumanist community calling > us cultists and as deluded as fundamentalist Christians. I think that the obsession with "friendly AI" is criticised from three perspectives: - the first has to do with technological eschatologism ("the Robot-God shall save us in its infinite goodness, how can you dare to doubt its wisdom"); - the second has to do with technological skepticism as to the reality and imminence of any perceived threat; - the third has to do with a more fundamental, philosophical questioning of such a perception of threat and the underlying psychology and value system, not to mention its possible neoluddite implications. Personally, I do not care much for the first angle, am quite neutral as to the second, and am still waiting for any meaningful response as to the third. -- Stefano Vaj
From stefano.vaj at gmail.com Sun Nov 28 18:27:34 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 28 Nov 2010 19:27:34 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: 2010/11/26 Michael Anissimov : > Given MNT with the capabilities outlined by CRN and the Phoenix nanofactory > paper, does a hard takeoff seem likely to you? "Likely" is not really the issue here, IMHO. "Worth fighting for" is what describes better my stance, especially when the most likely alternative is no takeoff at all. -- Stefano Vaj
From spike66 at att.net Sun Nov 28 18:24:01 2010 From: spike66 at att.net (spike) Date: Sun, 28 Nov 2010 10:24:01 -0800 Subject: [ExI] [META] DNS problems In-Reply-To: <4CF2947D.7040007@speakeasy.net> References: <4CF2947D.7040007@speakeasy.net> Message-ID: <000801cb8f29$6ea32d90$4be988b0$@att.net> I had one other person comment on this, but I traced that to (apparently) their @ causing excessive bounces.
Signals from other posters are getting thru. Alan, I will check your account and get back offlist. {8-] spike
-----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Alan Grimes Sent: Sunday, November 28, 2010 9:42 AM To: ExI chat list Subject: [ExI] [META] DNS problems For the last week or so, I've been having extreme DNS issues reaching extropy.org and sending mail to this list. =\ Anyone else having problems? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From nanogirl at halcyon.com Sun Nov 28 19:33:20 2010 From: nanogirl at halcyon.com (Gina Miller) Date: Sun, 28 Nov 2010 12:33:20 -0700 Subject: [ExI] Test In-Reply-To: References: Message-ID: Test -------------- next part -------------- An HTML attachment was scrubbed... URL:
From stefano.vaj at gmail.com Sun Nov 28 20:32:43 2010 From: stefano.vaj at gmail.com (Stefano Vaj) Date: Sun, 28 Nov 2010 21:32:43 +0100 Subject: [ExI] Hard Takeoff In-Reply-To: References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: 2010/11/27 Mike Dougherty : > We are certainly > concerned that genetic engineering (et al.) have the potential to produce a > plague that also wipes out humanity but it would be unwise to abandon this > medical technology regardless of its potential for curative medicine. > > I was thinking prudence should allay our fears. Then I imagined the > counterpoint would be to investigate if humanity collectively possesses > enough prudence in the first place. There are people thinking that according to the "coolest", most fashionable thinking, the alternative would be between those who are blind to the danger of technological progress, and/or delude themselves that it may be possible to stop it, vs. the enlightened few who are responsibly preoccupied with its "steering". Personally, along traditional transhumanist lines, I think the actual alternative is still between those who are against technological progress vs. those who are in favour. And I think that those really deluded are the "responsible" group. Both because progress is far from granted *and* because even if it were, the idea of steering it would be presumptuous and short-sighted, not to mention fundamentally reactionary. -- Stefano Vaj
From hkeithhenson at gmail.com Sun Nov 28 23:45:27 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Sun, 28 Nov 2010 16:45:27 -0700 Subject: [ExI] Best case, was Hard Takeoff Message-ID: On Fri, Nov 26, 2010 at 5:00 AM, Michael Anissimov wrote: > > On Fri, Nov 19, 2010 at 11:18 AM, Keith Henson wrote: > >> Re these threads, I have not seen any ideas here that have not been >> considered for a *long* time on the sl4 list. >> >> Sorry. > > So who won the argument? I was not aware that it was an argument. In any case, "win the argument" in the sense of convincing others that your position is correct almost never happens on the net. > My point is that the SIAI supporters and Eliezer > Yudkowsky are correct, and the critics are wrong. Chances are none of you are right and AI will arrive from some totally unexpected direction. Such as a companion robot in a Japanese nursing home being plugged into cloud computing.
> If there's no consensus, then there's always plenty more to discuss. > > Contrary to consensus, we have people in the transhumanist community calling > us cultists and as deluded as fundamentalist Christians. That's funny since most of the world thinks the transhumanists are deluded cultists. snip to next msg > I guess all self-improving software programs will inevitably fall prey to > infinite recursion or the halting problem, then. Please say "yes" if you > believe this. No. > If there is no way, even in principle, to >> algorithmically determine beforehand whether a given program with a given >> input >> will halt or not, would an AI risk getting stuck in an infinite loop by >> messing >> with its own programming? Sure there is. Watchdog timers, automatic reboot to a previous version.
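The whole pattern fits in a dozen lines. A toy sketch in Python, assuming you are willing to run the risky step in a killable worker process (the function names here are my own, made up for illustration):

    import multiprocessing as mp

    def risky_self_modification():
        while True:            # stand-in for a step that may never halt
            pass

    def run_with_watchdog(step, timeout_s=2.0):
        worker = mp.Process(target=step)
        worker.start()
        worker.join(timeout_s)         # the watchdog timer
        if worker.is_alive():          # still running past the deadline?
            worker.terminate()         # kill it
            worker.join()
            return "timed out: reboot to previous version"
        return "completed"

    if __name__ == "__main__":
        print(run_with_watchdog(risky_self_modification))

You never decide whether the step would have halted; you just bound how long you are willing to wait, then fall back.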
Keith
From brent.allsop at canonizer.com Mon Nov 29 01:27:12 2010 From: brent.allsop at canonizer.com (Brent Allsop) Date: Sun, 28 Nov 2010 18:27:12 -0700 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: <4CF30170.7030802@canonizer.com> On 11/28/2010 4:45 PM, Keith Henson wrote: > On Fri, Nov 26, 2010 at 5:00 AM, Michael Anissimov > wrote: >> On Fri, Nov 19, 2010 at 11:18 AM, Keith Hensonwrote: >> >>> Re these threads, I have not seen any ideas here that have not been >>> considered for a *long* time on the sl4 list. >>> >>> Sorry. >> So who won the argument? > I was not aware that it was an argument. In any case, "win the > argument" in the sense of convincing others that your position is > correct almost never happens on the net. I think this should be rephrased to be "almost never YET happens", and it does happen. Sure, it's not going to happen over the time period of this discussion, but over 20 years? And also, when it does happen it's nice to know why, when, who, and to have a definitive way to track all such. In other words, I believe it would be great to rigorously measure just how much consensus there is, how fast it is changing, and in what direction. And of course, reality will eventually convert everyone, or falsify the wrong camps. If an AI launches tomorrow and wipes out half of humanity before we overcome it, obviously those in the 'wrong' camp would be converted. And if it never launches, obviously those that have worried about unfriendly AI, and spent any time and effort during the last 10 years, have completely wasted their time for the foreseeable future. (i.e. more or less for every dollar we waste, instead of spending it on achieving immortal life, another person will fail to make it into the immortal heavenly future and could rot in the grave for the rest of eternity that would have otherwise made it.) > >> If there's no consensus, then there's always plenty more to discuss. >> >> Contrary to consensus, we have people in the transhumanist community calling >> us cultists and as deluded as fundamentalist Christians. > That's funny since most of the world thinks the transhumanists are > deluded cultists. > This is where it is critical to distinguish between the experts and the general population. The experts will always be in the minority, and will almost always have a very different POV than the general population. To the degree that you track this, and definitively show how much worse the non-experts are, compared to the experts, people will obviously learn to trust the experts sooner. Also, it helps if experts collaborate to sound like a unified voice, for at least as many as there are, on the moral issues they agree on - instead of always sounding no different than the rest of the loner crazy people. Brent Allsop
From atymes at gmail.com Mon Nov 29 02:23:01 2010 From: atymes at gmail.com (Adrian Tymes) Date: Sun, 28 Nov 2010 18:23:01 -0800 Subject: [ExI] Test In-Reply-To: References: Message-ID: Test received 2010/11/28 Gina Miller > Test > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Mon Nov 29 03:00:27 2010 From: spike66 at att.net (spike) Date: Sun, 28 Nov 2010 19:00:27 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: <4CF30170.7030802@canonizer.com> References: <4CF30170.7030802@canonizer.com> Message-ID: <003101cb8f71$936ea710$ba4bf530$@att.net> ... On Behalf Of Brent Allsop ... >And of course, reality will eventually convert everyone, or falsify the wrong camps...Brent Allsop Ja, I have a good example where I and others were way wrong, but saw the light over time. Eliezer was sixteen years old at the time, and told me I was off base. He was right. I thought nanotech would precede the singularity. More specifically, the kinds of computing technology enabled by nanotech would enable the singularity. Now I am convinced he was right, that faster computers alone wouldn't do it. Furthermore, in the past 15 years, I have become convinced that humans will not master nanotech, that a >H silicon based intelligence would be required to do much with it. This is the kind of stuff we used to argue about in the mid 90s. In retrospect it surprises me we were so uncertain that recently on this topic. I blew that one. spike
From possiblepaths2050 at gmail.com Mon Nov 29 04:52:48 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 28 Nov 2010 21:52:48 -0700 Subject: [ExI] Test In-Reply-To: References: Message-ID: Countdown to Singularity... 10, 9, 8, 7... On 11/28/10, Adrian Tymes wrote: > Test received > > 2010/11/28 Gina Miller > >> Test >> >> _______________________________________________ >> extropy-chat mailing list >> extropy-chat at lists.extropy.org >> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >> >> >
From possiblepaths2050 at gmail.com Mon Nov 29 04:49:05 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Sun, 28 Nov 2010 21:49:05 -0700 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: <003101cb8f71$936ea710$ba4bf530$@att.net> References: <4CF30170.7030802@canonizer.com> <003101cb8f71$936ea710$ba4bf530$@att.net> Message-ID: Spike wrote: I thought nanotech would precede the singularity. More specifically, the kinds of computing technology enabled by nanotech would enable the singularity. Now I am convinced he was right, that faster computers alone wouldn't do it. Furthermore, in the past 15 years, I have become convinced that humans will not master nanotech, that a >H silicon based intelligence would be required to do much with it. >>> It will slow down the "fullblown Singularity" TM (yes, I'm trademarking the term ; ) ), if we have to wait for our AGI to perfect mature nanotech, which may be very key to it creating a post-scarcity economy that includes indefinite lifespan for everyone!
I preferred the old scenario of "ten minutes after an AGI comes into being, it becomes a billion times smarter than any human, and fifteen minutes after that, it has transformed everyone and everything, to live in a near utopian society, or else killed us all!" I remember this as essentially being Eliezer's classic scenario back then. Do I recall it incorrectly? Fullblown Grigg Singularity (TM) - A term regarding a type of Singularity scenario, where a benevolent AGI develops mature nanotech, to bring forth both a post-scarcity society and also indefinite lifespan for all of humanity. John : )
On 11/28/10, spike wrote: > ... On Behalf Of Brent Allsop > ... > >>And of course, reality will eventually convert everyone, or falsify the > wrong camps...Brent Allsop > > > Ja, I have a good example where I and others were way wrong, but saw the > light over time. Eliezer was sixteen years old at the time, and told me I > was off base. He was right. I thought nanotech would precede the > singularity. More specifically, the kinds of computing technology enabled > by nanotech would enable the singularity. Now I am convinced he was right, > that faster computers alone wouldn't do it. Furthermore, in the past 15 > years, I have become convinced that humans will not master nanotech, that a >>H silicon based intelligence would be required to do much with it. > > This is the kind of stuff we used to argue about in the mid 90s. In > retrospect it surprises me we were so uncertain that recently on this topic. > I blew that one. > > spike > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat >
From sjatkins at mac.com Mon Nov 29 04:58:10 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 28 Nov 2010 20:58:10 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: Message-ID: <08AD78CA-48A9-4232-BF77-6980A9745EF7@mac.com> On Nov 28, 2010, at 10:25 AM, Stefano Vaj wrote: > 2010/11/26 Michael Anissimov : >> Contrary to consensus, we have people in the transhumanist community calling >> us cultists and as deluded as fundamentalist Christians. > > I think that the obsession with "friendly AI" is criticised from three > perspectives: > - the first has to do with technological eschatologism ("the Robot-God > shall save us in its infinite goodness, how can you dare to doubt its > wisdom"); Not really. Friendly AI is posited as a better alternative than Unfriendly AI given that AI of great enough power to be dangerous is likely. All the wonderful things that some ascribe to what FAI will or at least may do for us are quite beside the fundamental point. > - the second has to do with technological skepticism as to the reality > and imminence of any perceived threat; AGI is very very likely IFF we don't destroy the technological base beyond repair first. How soon is a separable question. Expert opinions range from 10 - 100 years with a median at around 30 years. > - the third has to do with a more fundamental, philosophical > questioning of such a perception of threat and the underlying > psychology and value system, not to mention its possible neoluddite > implications. Irrelevant to whether AGI friendliness is important to think about or not. Calling it neoluddite to be concerned is prejudging the entire question in an unhelpful manner.
- s
From sjatkins at mac.com Mon Nov 29 04:59:09 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 28 Nov 2010 20:59:09 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: Message-ID: <68615171-8482-4C00-AAD2-C3DCB7368344@mac.com> On Nov 28, 2010, at 10:27 AM, Stefano Vaj wrote: > 2010/11/26 Michael Anissimov : >> Given MNT with the capabilities outlined by CRN and the Phoenix nanofactory >> paper, does a hard takeoff seem likely to you? > > "Likely" is not really the issue here, IMHO. > > "Worth fighting for" is what describes better my stance, especially > when the most likely alternative is no takeoff at all. Please make your informed argument why no takeoff is most likely if you believe this is the case. - s > > -- > Stefano Vaj > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
From sjatkins at mac.com Mon Nov 29 05:02:58 2010 From: sjatkins at mac.com (Samantha Atkins) Date: Sun, 28 Nov 2010 21:02:58 -0800 Subject: [ExI] Hard Takeoff In-Reply-To: References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> Message-ID: <007E90B0-6B7C-403A-995B-BB9AEAA07582@mac.com> On Nov 28, 2010, at 12:32 PM, Stefano Vaj wrote: > 2010/11/27 Mike Dougherty : >> We are certainly >> concerned that genetic engineering (et al.) have the potential to produce a >> plague that also wipes out humanity but it would be unwise to abandon this >> medical technology regardless of its potential for curative medicine. >> >> I was thinking prudence should allay our fears. Then I imagined the >> counterpoint would be to investigate if humanity collectively possesses >> enough prudence in the first place. > > There are people thinking that according to the "coolest", most > fashionable thinking, the alternative would be between those who are > blind to the danger of technological progress, and/or delude > themselves that it may be possible to stop it, vs. the enlightened > few who are responsibly preoccupied with its "steering". > > Personally, along traditional transhumanist lines, I think the actual > alternative is still between those who are against technological > progress vs. those who are in favour. > > And I think that those really deluded are the "responsible" group. > Both because progress is far from granted *and* because even if it > were, the idea of steering it would be presumptuous and short-sighted, > not to mention fundamentally reactionary. What? You don't think attempting to maximize the outcomes that ensue is worth thinking about at all? You think it is presumptuous to even bother to attempt to predict alternatives and do what we can (which admittedly may not be a lot) to make more desirable outcomes more likely? If you do think this, are you in the do-nothing camp re technology and how it is deployed in the future? I don't think so judging from your activities but perhaps I am mistaken.
- s
From spike66 at att.net Mon Nov 29 06:41:36 2010 From: spike66 at att.net (spike) Date: Sun, 28 Nov 2010 22:41:36 -0800 Subject: [ExI] Best case, was Hard Takeoff In-Reply-To: References: <4CF30170.7030802@canonizer.com> <003101cb8f71$936ea710$ba4bf530$@att.net> Message-ID: <000601cb8f90$789f4ec0$69ddec40$@att.net> ...On Behalf Of John Grigg Subject: Re: [ExI] Best case, was Hard Takeoff Spike wrote: >>I thought nanotech would precede the Singularity...in the past 15 years, I have become convinced that humans will not master nanotech, that a >H silicon based intelligence would be required to do much with it. >> ... >I preferred the old scenario of "ten minutes after an AGI comes into being, it becomes a billion times smarter than any human, and fifteen minutes after that, it has >transformed everyone and everything, to live in a near utopian society, or else killed us all!" I remember this as essentially being Eliezer's classic scenario back then. Do I recall >it incorrectly? John : ) Johnny you are one who was around in those days. Ja, that is pretty much how I remember it too. I didn't take the notion of a singularity nearly as seriously in those days, thinking more about nanotech. Now it is a decade and a half later, and I have seen little if any real progress towards a replicating nano-assembler. This is actually a good thing, for without some kind of very capable control mechanism, it would be too easy for it to get loose and gray goo the planet. spike
From giulio at gmail.com Mon Nov 29 09:46:24 2010 From: giulio at gmail.com (Giulio Prisco) Date: Mon, 29 Nov 2010 10:46:24 +0100 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST In-Reply-To: References: Message-ID: VIDEO - Suzanne Gildert on Thinking about the hardware of thinking in Teleplace http://telexlr8.wordpress.com/2010/11/29/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010/ On Sat, Nov 27, 2010 at 8:00 PM, Giulio Prisco wrote: > REMINDER Suzanne Gildert on Thinking about the hardware of thinking > tomorrow in teleplace > > http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ > > On Mon, Nov 22, 2010 at 5:10 PM, Giulio Prisco wrote: >> Suzanne Gildert will give a talk in Teleplace on "Thinking about the >> hardware of thinking: Can disruptive technologies help us achieve >> uploading?" on November 28, 2010, at 10am PST (1pm EST, 6pm UK, 7pm >> continental EU). >> >> http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ >> >> This is a revised version of Suzanne's talk at TransVision 2010, also >> inspired by her article on "Building more intelligent machines: Can >> 'co-design' help?" (PDF). See also Suzanne's previous Teleplace talk >> on "Quantum Computing: Separating Hope from Hype". >> >> Thinking about the hardware of thinking: Can disruptive technologies >> help us achieve uploading? >> >> S. Gildert, Teleplace, 28th November 2010 >> >> We are surrounded by devices that rely on general purpose silicon >> processors, which are mostly very similar in terms of their design. >> But is this the only possibility?
As we begin to run larger and more >> brain-like emulations, will our current methods of simulating neural >> networks be enough, even in principle? Why does the brain, with 100 >> billion neurons, consume less than 30W of power, whilst our attempts >> to simulate tens of thousands of neurons (for example in the blue >> brain project) consume tens of kW? As we wish to run computations >> faster and more efficiently, we might need to consider if the >> design of the hardware that we all take for granted is optimal. In >> this presentation I will discuss the recent return to a focus upon >> co-design, that is, designing specialized software algorithms running >> on specialized hardware, and how this approach may help us create much >> more powerful applications in the future. As an example, I will >> discuss some possible ways of running AI algorithms on novel forms of >> computer hardware, such as superconducting quantum computing >> processors. These behave entirely differently to our current silicon >> chips, and help to emphasize just how important disruptive >> technologies may be to our attempts to build intelligent machines. >> >> Event on Facebook >> >> Dr. Suzanne Gildert is currently working as an Experimental Physicist >> at D-Wave Systems, Inc. She is involved in the design and testing of >> large scale superconducting processors for Quantum Computing >> Applications. Suzanne obtained her PhD and MSci degree from The >> University of Birmingham UK, focusing on the areas of experimental >> quantum device physics and superconductivity. >> >> teleXLR8 is a telepresence community for cultural acceleration. We >> produce online events, featuring first class content and speakers, >> with the best system for e-learning and collaboration in an online 3D >> environment: Teleplace. Join teleXLR8 to participate in online talks, >> seminars, round tables, workshops, debates, full conferences, >> e-learning courses, and social events, with full immersion >> telepresence, but without leaving home. >> >
From avantguardian2020 at yahoo.com Mon Nov 29 11:00:04 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 29 Nov 2010 03:00:04 -0800 (PST) Subject: [ExI] Nulla contro lo Stato Message-ID: <175291.19133.qm@web65603.mail.ac4.yahoo.com> I cannot find the words to describe how unAmerican this is: http://www.deadseriousnews.com/?p=573 Somebody has pulled my country out from beneath me. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D. Eisenhower
From darren.greer3 at gmail.com Mon Nov 29 13:40:47 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 09:40:47 -0400 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <175291.19133.qm@web65603.mail.ac4.yahoo.com> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> Message-ID: That's deeply disturbing. And the charge won't hold for a second. Any M.D. will testify that unwanted erection is an issue for some men -- gay and straight -- during routine medical exams when subject to tactile stimulation. I had a straight friend who had that problem and was deeply embarrassed by it, though his doctor cheerfully told him how common it was and not to fret over it. And this man's own doctor will likely be able to attest to the premature ejaculation. At the very least, the TSA agents have been improperly trained. You'd think this guy could make a decent case for sexual assault in response.
Darren
On Mon, Nov 29, 2010 at 7:00 AM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > > I cannot find the words to describe how unAmerican this is: > > http://www.deadseriousnews.com/?p=573 > > Somebody has pulled my country out from beneath me. > > Stuart LaForge > > > "There is nothing wrong with America that faith, love of freedom, > intelligence, > and energy of her citizens cannot cure."- Dwight D. Eisenhower > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From darren.greer3 at gmail.com Mon Nov 29 15:01:33 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 11:01:33 -0400 Subject: [ExI] Hard Takeoff In-Reply-To: <007E90B0-6B7C-403A-995B-BB9AEAA07582@mac.com> References: <810705.72511.qm@web65612.mail.ac4.yahoo.com> <001c01cb89c1$ea9974d0$bfcc5e70$@att.net> <670CF4BE-1C5C-4C8B-A50E-DCA517775DBC@mac.com> <007E90B0-6B7C-403A-995B-BB9AEAA07582@mac.com> Message-ID: >>I was thinking prudence should allay our fears. Then I imagined the counterpoint would be to investigate if humanity collectively possesses enough prudence in the first place.<< I should say there's ample evidence of human prudence in the history of the development of technology. The problem is that our vision is often so short-sighted and misguided. We develop nuclear technology for the purposes of fission bombs in a time of great stress and chaos without much forethought to what kind of risks and dangers it brings to the world, and here in my own country, where nuclear power is being widely developed and implemented as a cheaper energy alternative, we have grass-roots organizations whipping up public anxiety about the possibility of an accident at the Pickering plant that would incinerate all of Toronto. Meanwhile we are still burning coal in Ontario and have a heavily integrated traditional power-grid that makes black-outs and system wide failures likely and expensive to fix. When you ask your local anti-nuclear plant spokeswoman, who has a degree in sociology and a bee in her bonnet about local property values, what would be a viable alternative, she has no answer. Yet I imagine she does appreciate having her lights come on when she flicks a switch. >And I think that those really deluded are the "responsible" group. > Both because progress is far from granted *and* because even if it > were, the idea of steering it would be presumptuous and short-sighted, > not to mention fundamentally reactionary.< And what are the alternatives? If you grant what the anti-nuclear spokespeople won't, that once a technology is invented it can't be un-invented, and what Buckminster Fuller suggested, that technological progress is actually beyond human control in most circumstances because it is difficult to predict where the next leap forward will come from given the near-limitless number of potential combinations of technology that exist in theory, then you're left with only two options, it seems to me. 1. Open the door and sit back and hope for the best. Or 2. At the very least strive for the minimum amount of control you can through engineering, even if that turns out to be very little.
Even if it turns out to be absolutely none, you are at least more informed about the possible scenarios when the whole situation goes south, and your response may in fact be a little less reactionary and a little more calculated. A small and perhaps very intellectually weak example, but the only one I can think of at the moment. Imagine raising a child whose combined IQ levels are off the charts. He or she is incredibly more intelligent than you in pattern recognition, creative impulse, logic, emotive response, etc. You know at some point in their development they are going to be thinking and reacting and responding to the world around them in ways that you are just not capable of and perhaps can't even imagine. This could be a good thing or a bad thing. They could read Nietzsche at the age of two and decide that "truth was only a moral semblance" and turn into a Leopold and Loeb in one body. They could read Plato or Leo Strauss and decide that the social contract inhibited effective governance and set out to change that. They might logically decide that religious fanaticism was going to destroy the planet and the best way to forestall that was to become a jihadist in the name of science and persecute religious people of all stripes. Or they could be altruistic and compassionate and come up with creative ways to solve some of the world's problems -- unlimited energy sources, new ways of producing food. Whatever. So what do you do? Just say the hell with it? Let it ride? Or do you at least try to instill in them some commonly held human values and goals, in the hopes that they will have a positive outcome in the end when they do set sail? Most people would agree with the last scenario I think. If we are going to create it and foster it, don't we have at least some responsibility to do our utmost to attempt to steer it towards a positive outcome, even if we fail? It may be hubris to think we can do so, but it would be sheer negligence to not even bother to try.
On Mon, Nov 29, 2010 at 1:02 AM, Samantha Atkins wrote: > > On Nov 28, 2010, at 12:32 PM, Stefano Vaj wrote: > > > 2010/11/27 Mike Dougherty : > >> We are certainly > >> concerned that genetic engineering (et al.) have the potential to > produce a > >> plague that also wipes out humanity but it would be unwise to abandon > this > >> medical technology regardless of its potential for curative medicine. > >> > >> I was thinking prudence should allay our fears. Then I imagined the > >> counterpoint would be to investigate if humanity collectively possesses > >> enough prudence in the first place. > > > > There are people thinking that according to the "coolest", most > > fashionable thinking, the alternative would be between those who are > > blind to the danger of technological progress, and/or delude > > themselves that it may be possible to stop it, vs. the enlightened > > few who are responsibly preoccupied with its "steering". > > > > Personally, along traditional transhumanist lines, I think the actual > > alternative is still between those who are against technological > > progress vs. those who are in favour. > > > > And I think that those really deluded are the "responsible" group. > > Both because progress is far from granted *and* because even if it > > were, the idea of steering it would be presumptuous and short-sighted, > > not to mention fundamentally reactionary. > > What? You don't think attempting to maximize the outcomes that ensue is > worth thinking about at all?
You think it is presumptuous to even bother > to attempt to predict alternatives and do what we can (which admittedly may > not be a lot) to make more desirable outcomes more likely? If you do think > this, are you in the do-nothing camp re technology and how it is deployed in > the future? I don't think so judging from your activities but perhaps I am > mistaken. > > - s > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL:
From jonkc at bellsouth.net Mon Nov 29 16:29:11 2010 From: jonkc at bellsouth.net (John Clark) Date: Mon, 29 Nov 2010 11:29:11 -0500 Subject: [ExI] Best case, was Hard Takeoff. In-Reply-To: References: Message-ID: <6A7AB9E0-9F52-421A-8C66-1ADF4DDAA546@bellsouth.net> On Nov 28, 2010, at 6:45 PM, Keith Henson wrote: >> If there is no way, even in principle, to algorithmically determine beforehand whether a given program with a given input will halt or not, would an AI risk getting stuck in an infinite loop by >> messing with its own programming? > Sure there is. Watchdog timers, automatic reboot to a previous version. Right, but that would not be possible in an intelligence that operated on a strict axiomatic goal based structure, like the one with "obey human beings no matter what" being #1 as the friendly (slave) AI people want. Static goals are not possible because of the infinite loop problem. In human beings that "watchdog timer" to get you out of infinite loops is called "boredom"; sometimes it means you will give up after you seem to have made no progress just before you would have figured out the answer, but that disadvantage is the price you must pay to avoid infinite loops. There just isn't any other viable alternative. John K Clark -------------- next part -------------- An HTML attachment was scrubbed... URL:
From spike66 at att.net Mon Nov 29 16:46:11 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 08:46:11 -0800 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <175291.19133.qm@web65603.mail.ac4.yahoo.com> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> Message-ID: <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> >... On Behalf Of The Avantguardian >Subject: [ExI] Nulla contro lo Stato > I cannot find the words to describe how unAmerican this is: > http://www.deadseriousnews.com/?p=573 Haaaahahahaaa! {8^D They had me going until they got to "Percy Cummings." {8^D spike
From hkeithhenson at gmail.com Mon Nov 29 17:01:04 2010 From: hkeithhenson at gmail.com (Keith Henson) Date: Mon, 29 Nov 2010 10:01:04 -0700 Subject: [ExI] extropy-chat Digest, Vol 86, Issue 51 In-Reply-To: References: Message-ID: On Mon, Nov 29, 2010 at 5:00 AM, Brent Allsop wrote: > On 11/28/2010 4:45 PM, Keith Henson wrote: >> On Fri, Nov 26, 2010 at 5:00 AM, Michael Anissimov >> wrote: >>> On Fri, Nov 19, 2010 at 11:18 AM, Keith Hensonwrote: >>> >>>> Re these threads, I have not seen any ideas here that have not been >>>> considered for a *long* time on the sl4 list. >>>> >>>> Sorry. >>> So who won the argument? >> I was not aware that it was an argument. In any case, "win the >> argument" in the sense of convincing others that your position is >> correct almost never happens on the net.
> I think this should be rephrased to be "almost never YET happens", and > it does happen. Sure, it's not going to happen over the time period of > this discussion, but over 20 years? And also, when it does happen it's > nice to know why, when, who, and to have a definitive way to track all > such. In other words, I believe it would be great to rigorously measure > just how much consensus there is, how fast it is changing, and > in what direction. I don't know if consensus will mean much in predicting the emergence of AI. Historical analogies are always suspect, but I don't think there was a lot of consensus on aircraft prior to the Wright brothers building and testing one. > And of course, reality will eventually convert everyone, or falsify > the wrong camps. If an AI launches tomorrow and wipes out half of > humanity before we overcome it, Wiping out half seems a lot less likely than a clean sweep or none at all. It is just the mathematical nature of exponential growth. That one packet virus that I mentioned infected all possible hosts in a time too short for humans to react. > obviously those in the 'wrong' camp > would be converted. And if it never launches, obviously those that have worried about > unfriendly AI, and spent any time and effort during the last 10 years, > have completely wasted their time for the foreseeable future. (i.e. > more or less for every dollar we waste, instead of spending it on > achieving immortal life, another person will fail to make it into the > immortal heavenly future and could rot in the grave for the rest of > eternity that would have otherwise made it.) > >> >>> If there's no consensus, then there's always plenty more to discuss. >>> >>> Contrary to consensus, we have people in the transhumanist community calling >>> us cultists and as deluded as fundamentalist Christians. >> That's funny since most of the world thinks the transhumanists are >> deluded cultists. >> > This is where it is critical to distinguish between the experts and the > general population. The experts will always be in the minority, and > will almost always have a very different POV than the general > population. To the degree that you track this, and definitively show > how much worse the non-experts are, compared to the experts, people will > obviously learn to trust the experts sooner. Maybe it should be that way. But it seems kind of unlikely. Consider evolution, full of experts and rejected by a huge segment of the population (in the US). > Also, it helps if > experts collaborate to sound like a unified voice, for at least as many > as there are, on the moral issues they agree on - instead of always > sounding no different than the rest of the loner crazy people. > Brent, I know maybe a dozen people in this area. I can't think of any two of them who agree on anything substantial. Keith
From mbb386 at main.nc.us Mon Nov 29 17:34:24 2010 From: mbb386 at main.nc.us (MB) Date: Mon, 29 Nov 2010 12:34:24 -0500 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> Message-ID: <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> Yes, I was horrified as well - though the man's name gave me pause - and then did a bit of looking around: http://www.deadseriousnews.com/?page_id=2 | Dead Serious News is a satirical website that is updated on an irregular | basis. With the exception of the names of public figures, all names are | fictional.
Regards, MB
> Haaaahahahaaa! {8^D They had me going until they got to "Percy Cummings." > {8^D spike >
From spike66 at att.net Mon Nov 29 17:22:49 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 09:22:49 -0800 Subject: [ExI] Best case, was Hard Takeoff. In-Reply-To: <6A7AB9E0-9F52-421A-8C66-1ADF4DDAA546@bellsouth.net> References: <6A7AB9E0-9F52-421A-8C66-1ADF4DDAA546@bellsouth.net> Message-ID: <003701cb8fea$0c592880$250b7980$@att.net> On Behalf Of John Clark . >.Right, but that would not be possible in an intelligence that operated on a strict axiomatic goal based structure, like the one with "obey human beings no matter what" being #1 as the friendly (slave) AI people want. John K Clark But what if the humans issue contradictory orders? What if we assign the slave AI to obey exactly one person and that person issues contradictory orders? Actually that is what we have now. When I write software, I accidentally give the computer contradictory orders, and it follows them. It does exactly what I tell it to do, but not what I want it to do. I get so pissed off. All I want is for the damn computer to disregard my faulty orders and do what I want. spike -------------- next part -------------- An HTML attachment was scrubbed... URL:
From thespike at satx.rr.com Mon Nov 29 17:49:31 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 29 Nov 2010 11:49:31 -0600 Subject: [ExI] Nulla contro lo Onion In-Reply-To: <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> Message-ID: <4CF3E7AB.1060105@satx.rr.com> On 11/29/2010 11:34 AM, MB wrote: > Yes, I was horrified as well - though the man's name gave me pause Not to mention his partner's name.
From scerir at alice.it Mon Nov 29 17:39:00 2010 From: scerir at alice.it (scerir) Date: Mon, 29 Nov 2010 18:39:00 +0100 Subject: [ExI] reverse aging In-Reply-To: <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> Message-ID: <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> Harvard scientists reverse the ageing process in mice - now for humans. Harvard scientists were surprised that they saw a dramatic reversal, not just a slowing down, of the ageing in mice. Now they believe they might be able to regenerate human organs. http://www.guardian.co.uk/science/2010/nov/28/scientists-reverse-ageing-mice-humans
From scerir at alice.it Mon Nov 29 17:56:32 2010 From: scerir at alice.it (scerir) Date: Mon, 29 Nov 2010 18:56:32 +0100 Subject: [ExI] Nulla contro lo Onion In-Reply-To: <4CF3E7AB.1060105@satx.rr.com> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> <4CF3E7AB.1060105@satx.rr.com> Message-ID: The subject line should be "Nulla contro l'Onion", because "lo Onion" is cacophonic :-)
From darren.greer3 at gmail.com Mon Nov 29 18:25:42 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 14:25:42 -0400 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> Message-ID: FF#*%, do I feel like an idiot.
I posted it to my facebook account and no-one caught it there either. Including two journalist friends. I saw the cummings and just shrugged it off. So how do I retract that from facebook without angering about five hundred friends and acquaintances? I think I will stick to math. No irony. Darren On Mon, Nov 29, 2010 at 12:46 PM, spike wrote: > >... On Behalf Of The Avantguardian > >Subject: [ExI] Nulla contro lo Stato > > > > I cannot find the words to describe how unAmerican this is: > > > http://www.deadseriousnews.com/?p=573 > > > Haaaahahahaaa! {8^D They had me going until they got to "Percy Cummings." > {8^D spike > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Mon Nov 29 18:43:50 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 14:43:50 -0400 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) Message-ID: I think we should get this judge to join this list. Judge Orders Satirical Site To Remove Joke Story About Fictional Giraffe Attack http://www.techdirt.com/articles/20100304/1244358419.shtml Darren -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From scerir at alice.it Mon Nov 29 18:34:44 2010 From: scerir at alice.it (scerir) Date: Mon, 29 Nov 2010 19:34:44 +0100 Subject: [ExI] reverse aging In-Reply-To: <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com><002701cb8fe4$ef4a71e0$cddf55a0$@att.net><4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> Message-ID: <55CE9E2C436B40B4A7C45C33809E05AB@PCserafino> somebody gave me the pdf of the (6 pages) paper on Nature, anyone interested? From darren.greer3 at gmail.com Mon Nov 29 18:50:38 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 14:50:38 -0400 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: References: Message-ID: But then again, I had to read THIS story twice to make sure IT wasn't satirical. Thanks guys. My faith in Internet news is now in play. Signed, Shaky McNet. On Mon, Nov 29, 2010 at 2:43 PM, Darren Greer wrote: > I think we should get this judge to join this list. > > Judge Orders Satirical Site To Remove Joke Story About Fictional Giraffe > Attack > > http://www.techdirt.com/articles/20100304/1244358419.shtml > > Darren > > -- > "In the end that's all we have: our memories - electrochemical impulses > stored in eight pounds of tissue the consistency of cold porridge." - > Remembrance of the Daleks > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mbb386 at main.nc.us Mon Nov 29 19:11:40 2010 From: mbb386 at main.nc.us (MB) Date: Mon, 29 Nov 2010 14:11:40 -0500 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> Message-ID: Often honesty is the best policy.... and an honest retraction can be quite well respected. I've had to do that online as well as in real life and it is a depressing step to be forced into... But there you go. :) I have an email friend who never retracts or corrects anything, and I no longer trust emails/postings from that source.... for obvious reasons. Talk about SNAFU. :* Really, don't feel like an idiot, the people who wrote it were clever and you responded (IMHO) appropriately given the subject matter...as did Stuart. The truly sad part is that we were ready to believe it - such is the state of Security Theatre in the USA. Regards, MB > FF#*%, do I feel like an idiot. I posted it to my facebook account and > no-one caught it there either. Including two journalist friends. I saw the > cummings and just shrugged it off. So how do I retract that from facebook > without angering about five hundred friends and acquaintances? I think I > will stick to math. No irony. > > Darren > > > From kanzure at gmail.com Mon Nov 29 18:50:56 2010 From: kanzure at gmail.com (Bryan Bishop) Date: Mon, 29 Nov 2010 12:50:56 -0600 Subject: [ExI] reverse aging In-Reply-To: <55CE9E2C436B40B4A7C45C33809E05AB@PCserafino> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> <55CE9E2C436B40B4A7C45C33809E05AB@PCserafino> Message-ID: On Mon, Nov 29, 2010 at 12:34 PM, scerir wrote: > somebody gave me the pdf of the (6 pages) paper on Nature, > anyone interested? sure - Bryan http://heybryan.org/ 1 512 203 0507 From pharos at gmail.com Mon Nov 29 19:23:35 2010 From: pharos at gmail.com (BillK) Date: Mon, 29 Nov 2010 19:23:35 +0000 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: References: Message-ID: 2010/11/29 Darren Greer wrote plaintively: > But then again, I had to read THIS story twice to make sure IT wasn't > satirical. Thanks guys. My faith in Internet news is now in play. > Signed, > Shaky McNet. > > That's the problem with satire today. The world has gone so crazy that Onion articles get quoted as real news stories and satire becomes impossible. The current European bailout of Greece and now Ireland is ongoing craziness masquerading as official policy. The European Central Bank has stated that all debts must be paid and defaults will not be permitted. (i.e. protect the billionaire bankers at all costs). One of the financial bloggers attempted to write a funny satirical interview, but so many people believed it that he had to add a disclaimer at the end. Quotes: Said ECB president Trichet in an exclusive interview... "This is a victory for much maligned bondholders everywhere. I am pleased to announce we have effectively removed the word investing from the vocabulary of bondholders." "Starting today, bondholders need not be concerned with who they lend money to, why, or what risks there are in doing so." "Not only will this help ease turmoil in the markets, but bondholders can now think in terms of winning rather than the more mundane investing because the ECB and IMF will backstop all losses from trading bonds." 
In a followup interview Trichet commented, "The debt crisis is over. We are willing to grant Greece and Ireland as much time as they need. If an extra-four-and-a-half years to repay emergency loans proves insufficient, we are willing to wait an extra-hundred-and-a-half years". When asked if he meant 150 years or 100.5 years, Trichet replied, "I mean as long as it takes to make the ECB whole, forever if necessary. The important thing is for bondholders to never suffer losses. Heaven forbid we should ever unsettle bondholders by insinuating they may have to take some losses. Bondholders in general, not just Goldman Sachs bondholders, do God's work." ----------------- The craziness keeps on coming. BillK From spike66 at att.net Mon Nov 29 20:22:00 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 12:22:00 -0800 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> Message-ID: <006f01cb9003$143a7c70$3caf7550$@att.net> ... On Behalf Of MB Subject: Re: [ExI] Nulla contro lo Stato >Often honesty is the best policy.... and an honest retraction can be quite well respected. I've had to do that online as well as in real life and it is a depressing step to be forced >into... But there you go. :) Regards, MB >> FF#*%, do I feel like an idiot. I posted it to my facebook account and >> no-one caught it there either. Including two journalist friends...So how do I retract that >> from facebook without angering about five hundred friends and >> acquaintances? I think I will stick to math. No irony... Darren Oh come now, me lads. Ummm, rather, let me rephrase that, um... Oh, do ease up, me lads. Satire is meant to imitate and exaggerate reality. If you take the bait once in a while, look at it as a harmless way to burn off excess dignity. {8-] No harm, no injury. Good story, happened a long time ago. High school cafeteria, girl going on and on about how she had done poorly on a test and was miserable, her boyfriend beside her. She had one of those tennis hats that look like a baseball cap with the top cut out. He took an orange slice off of his tray, and bit into it with the orange peel forward like Brando did in the Godfather right before his heart attack, put olives over his eye sockets and scrunched to hold them in place, put the hat on his head upside down. Then as she went on and on, he sat beside her saying Uh huh, uh huh, much to the amusement of those on the other side of the table. She looked over at him. He looked so silly she busted out laughing hysterically, so hard she lost control of her bladder, at which time she was forced to flee the lunchroom and left school for the rest of the day. The irate lass broke up with him. Next day she was relating the story to my brother, who listened to her furious rant, then commented: So where's your boo boo? I'll kiss it better. She: I beg your pardon? He: Embarrassed in the lunchroom, so show me where it hurts, and I will kiss it all better. She got the message: embarrassment doesn't need to hurt. It's all in one's attitude. She and my brother met again at their tenth high school reunion, married. spike From darren.greer3 at gmail.com Mon Nov 29 20:41:24 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 16:41:24 -0400 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: References: Message-ID: BillK wrote: >That's the problem with satire today. 
The world has gone so crazy that Onion articles get quoted as real news stories and satire becomes impossible.< Just so. What is real news often seems like satire. And what is satire often seems like real news. Did you ever watch the internet documentary The Obama Deception? I did, just to see what Alex Jones et al were up to now. That film was so masterfully edited and pieced together from half-truths, lies and shock footage that it runs like an extended Orwellian Two Minutes Hate. It plays ingeniously on all the prevalent fears and manages to combine the concerns of the hard left with the paranoia of the hard right so seamlessly that you no longer know which side is which. Amid all of that is mixed in some actual facts about Glass-Steagall and the creation of the Federal Reserve and some other very real economic and political issues that people are right to be concerned about. So it's easy to see how bright people get sucked into this New World Order Conspiracy, in ways I imagine some intelligent people got sucked into The Protocols of the Elders of Zion paradigm in the thirties. All it takes to debunk a lot of it is a reasonably fast search on the 'net, but few do it. (I was guilty of that today, my desire to believe and personally motivated outrage overwhelming my critical faculties.) And if they do find credible evidence it isn't true, they can always tell themselves that the debunking sites are seeds of disinformation sown by the Imperialists as part of the conspiracy too. Information, disinformation, satire, pundits, fiction, science, religion. It is at times bewildering. One of the most valuable functions of Exi and some of the other groups I belong to is that stuff is usually vetted before I get it, and I tend for the most part to trust the collective scientific skepticism apparent in many of the groups. Darren On Mon, Nov 29, 2010 at 3:23 PM, BillK wrote: > 2010/11/29 Darren Greer wrote plaintively: > > But then again, I had to read THIS story twice to make sure IT wasn't > > satirical. Thanks guys. My faith in Internet news is now in play. > > Signed, > > Shaky McNet. > > > > > > > That's the problem with satire today. The world has gone so crazy that > Onion articles get quoted as real news stories and satire becomes > impossible. > > The current European bailout of Greece and now Ireland is ongoing > craziness masquerading as official policy. > > The European Central Bank has stated that all debts must be paid and > defaults will not be permitted. (i.e. protect the billionaire bankers > at all costs). > > > One of the financial bloggers attempted to write a funny satirical > interview, but so many people believed it that he had to add a > disclaimer at the end. > > < > http://globaleconomicanalysis.blogspot.com/2010/11/bond-threat-dismantled-much-maligned.html > > > > Quotes: > Said ECB president Trichet in an exclusive interview... > > "This is a victory for much maligned bondholders everywhere. I am > pleased to announce we have effectively removed the word investing > from the vocabulary of bondholders." > "Starting today, bondholders need not be concerned with who they > lend money to, why, or what risks there are in doing so." > > "Not only will this help ease turmoil in the markets, but > bondholders can now think in terms of winning rather than the more > mundane investing because the ECB and IMF will backstop all losses > from trading bonds." > > In a followup interview Trichet commented, "The debt crisis is over.
> We are willing to grant Greece and Ireland as much time as they need. > If an extra-four-and-a-half years to repay emergency loans proves > insufficient, we are willing to wait an extra-hundred-and-a-half > years". > > When asked if he meant 150 years or 100.5 years, Trichet replied, "I > mean as long as it takes to make the ECB whole, forever if necessary. > The important thing is for bondholders to never suffer losses. Heaven > forbid we should ever unsettle bondholders by insinuating they may > have to take some losses. Bondholders in general, not just Goldman > Sachs bondholders, do God's work." > > ----------------- > > > The craziness keeps on coming. > > > > BillK > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Nov 29 20:47:58 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 29 Nov 2010 14:47:58 -0600 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <006f01cb9003$143a7c70$3caf7550$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <006f01cb9003$143a7c70$3caf7550$@att.net> Message-ID: <4CF4117E.9070600@satx.rr.com> On 11/29/2010 2:22 PM, spike wrote: > she busted out laughing hysterically, so hard she lost > control of her bladder, at which time she was forced to flee the lunchroom > and left school for the rest of the day. The irate lass broke up with him. > Next day she was relating the story to my brother, who listened to her > furious rant, then commented: So where's your boo boo? I'll kiss it better. > > > She: I beg your pardon? > > He: Embarrassed in the lunchroom, so show me where it hurts, and I will > kiss it all better. > > She got the message: embarrassment doesn't need to hurt. Oh, is *that* what he meant. Damien Broderick From darren.greer3 at gmail.com Mon Nov 29 20:51:25 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 16:51:25 -0400 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <006f01cb9003$143a7c70$3caf7550$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <006f01cb9003$143a7c70$3caf7550$@att.net> Message-ID: Spike wrote: >She got the message: embarrassment doesn't need to hurt. It's all in one's attitude. She and my brother met again at their tenth high school reunion, married.< Good story, Spike. Though I really thought he meant he would kiss the place where the "accident" happened. I've got a filthy mind. >look at it as a harmless way to burn off excess dignity.< Or, to put it another way, excess ego, which is always a good thing to lose. Darren On Mon, Nov 29, 2010 at 4:22 PM, spike wrote: > ... On Behalf Of MB > Subject: Re: [ExI] Nulla contro lo Stato > > >Often honesty is the best policy.... and an honest retraction can be quite > well respected. I've had to do that online as well as in real life and it > is a depressing step to be forced >into... But there you go. :) Regards, > MB > > >> FF#*%, do I feel like an idiot. I posted it to my facebook account and > >> no-one caught it there either. 
Including two journalist friends...So how > do I retract that > >> from facebook without angering about five hundred friends and > >> acquaintances? I think I will stick to math. No irony... Darren > > > Oh come now, me lads. Ummm, rather, let me rephrase that, um... Oh, do > ease > up, me lads. Satire is meant to imitate and exaggerate reality. If you > take > the bait once in a while, look at it as a harmless way to burn off excess > dignity. {8-] No harm, no injury. > > Good story, happened a long time ago. High school cafeteria, girl going on > and on about how she had done poorly on a test and was miserable, her > boyfriend beside her. She had one of those tennis hats that look like a > baseball cap with the top cut out. He took an orange slice off of his > tray, > and bit into it with the orange peel forward like Brando did in the > Godfather right before his heart attack, put olives over his eye sockets > and > scrunched to hold them in place, put the hat on his head upside down. Then > as she went on and on, he sat beside her saying Uh huh, uh huh, much to the > amusement of those on the other side of the table. She looked over at him. > He looked so silly she busted out laughing hysterically, so hard she lost > control of her bladder, at which time she was forced to flee the lunchroom > and left school for the rest of the day. The irate lass broke up with him. > Next day she was relating the story to my brother, who listened to her > furious rant, then commented: So where's your boo boo? I'll kiss it > better. > > > She: I beg your pardon? > > He: Embarrassed in the lunchroom, so show me where it hurts, and I will > kiss it all better. > > She got the message: embarrassment doesn't need to hurt. It's all in one's > attitude. She and my brother met again at their tenth high school reunion, > married. > > spike > > > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From avantguardian2020 at yahoo.com Mon Nov 29 20:32:53 2010 From: avantguardian2020 at yahoo.com (The Avantguardian) Date: Mon, 29 Nov 2010 12:32:53 -0800 (PST) Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> Message-ID: <303197.29059.qm@web65609.mail.ac4.yahoo.com> Darren wrote: >FF#*%, do I feel like an idiot. I posted it to my facebook account and no-one >caught it there either. Including two journalist friends. I saw the cummings >and just shrugged it off. So how do I retract that from facebook without >angering about five hundred friends and acquaintances? I think I will stick to >math. No irony. My apologies, Darren. My response when I was forwarded this piece was emotional and not analytical. While I did think the name Cummings was ironic, I have seen stranger ironies in life. And for satire, it just wasn't very funny. Stuart LaForge "There is nothing wrong with America that faith, love of freedom, intelligence, and energy of her citizens cannot cure."- Dwight D.
Eisenhower From darren.greer3 at gmail.com Mon Nov 29 21:02:43 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Mon, 29 Nov 2010 17:02:43 -0400 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <303197.29059.qm@web65609.mail.ac4.yahoo.com> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> Message-ID: Stuart wrote: >My apologies, Darren. My response when I was forwarded this piece was emotional and not analytical.< No problem Stuart. Don't apologize. Probably the most exciting thing that has happened to me in weeks. I was already planning Mr. Cumming's legal defense in my head. And yes, I responded emotionally as well. And that's not a bad thing, all things considered. I also imagined all the right wing Christians using it as proof as to how we homosexuals are degenerate and sex-crazed maniacs. And I have to admit, I also briefly thought that if the TSA agent was hot enough, I might have had the same problem. :) Darren On Mon, Nov 29, 2010 at 4:32 PM, The Avantguardian < avantguardian2020 at yahoo.com> wrote: > Darren wrote: > > >FF#*%, do I feel like an idiot. I posted it to my facebook account and > no-one > >caught it there either. Including two journalist friends. I saw the > cummings > >and just shrugged it off. So how do I retract that from facebook without > >angering about five hundred friends and acquaintances? I think I will > stick to > >math. No irony. > > While I did think the name Cummings was ironic, I > have seen stranger ironies in life. And for satire, it just wasn't > very funny. > Stuart LaForge > > > "There is nothing wrong with America that faith, love of freedom, > intelligence, > and energy of her citizens cannot cure."- Dwight D. Eisenhower > > > > > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Mon Nov 29 21:38:15 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 29 Nov 2010 15:38:15 -0600 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <006f01cb9003$143a7c70$3caf7550$@att.net> Message-ID: <4CF41D47.9080802@satx.rr.com> On 11/29/2010 2:51 PM, Darren Greer wrote: > I really thought he meant he would kiss the place where the "accident" > happened. I've got a filthy mind. Me too, Darren. Same thought. But then I remembered that Spike, unlike us, has a mind as pure as the driven snow. Damien Broderick From spike66 at att.net Mon Nov 29 22:17:50 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 14:17:50 -0800 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: References: Message-ID: <009701cb9013$42eda500$c8c8ef00$@att.net> >... On Behalf Of BillK Subject: Re: [ExI] Satirical News Sites (Was Nulla contro lo Onion) 2010/11/29 Darren Greer wrote plaintively: >> But then again, I had to read THIS story twice to make sure IT wasn't satirical. ... Shaky McNet. >That's the problem with satire today. The world has gone so crazy that Onion articles get quoted as real news stories and satire becomes impossible. 
>The current European bailout of Greece and now Ireland is ongoing craziness masquerading as official policy... The craziness keeps on coming. BillK BillK, I can't tell if this is satire or real, but it looks and sounds real to me. I am not hip to euros or politics on that particular continent. Please Euro-hipsters, is this really as it sounds? If so, what (if anything) do we do now? http://www.youtube.com/watch?v=Fyq7WRr_GPg&feature=player_embedded spike From spike66 at att.net Mon Nov 29 22:33:41 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 14:33:41 -0800 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: References: Message-ID: <009801cb9015$7992ae00$6cb80a00$@att.net> . On Behalf Of Darren Greer Subject: Re: [ExI] Satirical News Sites (Was Nulla contro lo Onion) BillK wrote: >>That's the problem with satire today. The world has gone so crazy that Onion articles get quoted as real news stories and satire becomes impossible.< >Just so. What is real news often seems like satire.It plays ingeniously on all the prevalent fears and manages to combine the concerns of the hard left with the paranoia of the hard right so seamlessly that you no longer know which side is which.Darren Darren, do rent Michael Moore's rookie card, Roger and Me, which hit the theatres in 1989. Moore was unknown at that time; I went just to have something to do. I found it to be absolutely brilliant satire. One could tell it was very dark political comedy, but it was impossible for me to tell which side (left or right) was being savaged. I came away with a vague suspicion that his target was the far left, but later learned he was attempting to vilify capitalism, which one might suppose is more aligned with the right. He did a short sequel called Pets or Meat, which was even darker and funnier if one is in the mood. Moore's stuff is an acquired taste. I think his entire act is a put-on: he is really a crazy far-right satirist who poses as a crazy far-left satirist for the purposes of demonstrating that once you get far enough from the mainstream the two are indistinguishable. spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From spike66 at att.net Mon Nov 29 22:55:05 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 14:55:05 -0800 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> Message-ID: <00a101cb9018$76f406f0$64dc14d0$@att.net> >. On Behalf Of Darren Greer Subject: Re: [ExI] Nulla contro lo Stato Stuart wrote: >>My apologies, Darren. My response when I was forwarded this piece was emotional and not analytical. >No problem Stuart. Don't apologize. Probably the most exciting thing that has happened to me in weeks. I was already planning Mr. Cumming's legal defense in my head. . And I have to admit, I also briefly thought that if the TSA agent was hot enough, I might have had the same problem. :) Darren I have a solution to the airline security problem that should satisfy everyone, especially libertarians. Get the TSA and all government completely out of the picture, turn over security to the airlines. Each company sets their own policy utterly without restriction. You carry actual responsibility(!) to choose your own favorite, or the one you think is closest to right. 
Some go the standard metal detector route, some profile based on race, religion, age, random sampling, appearance or any-the-hell-thing they want. The airlines know there are some of us who actually enjoy being groped, so they provide attractive young screeners and allow the customer to choose the gender and the costume of the screener, anything from French maid to dominatrix and their male counterparts, assuming such a thing exists, and I imagine it probably does. Some have zeeeerooooo security: great for convenience, take your chances with your own life, good luck and nothingspeed. In return, the passengers sign away any right to sue the airlines regardless, for if they agree to go into that company's airplane, they become the fucking property of that airline from the time they enter the gate until they leave on the other end. Perhaps that particular adjective for the noun property might be taken two ways, but the point is there is a solution here which should satisfy everyone who bitches about this new system forced on us by the fact that clearly there are now *plenty* of people on this planet who want to commit mass murder, just for the sake of pleasing their favorite deity. Speaking of which, if a terrorist attempts a horrific bombing using FBI-supplied realistic but inert explosives, and somehow manages to perish in the attempt, will he get 72 lifelike inflatable dolls? If so, will they then fake their orgasms? spike -------------- next part -------------- An HTML attachment was scrubbed... URL: From pharos at gmail.com Mon Nov 29 23:10:59 2010 From: pharos at gmail.com (BillK) Date: Mon, 29 Nov 2010 23:10:59 +0000 Subject: [ExI] Satirical News Sites (Was Nulla contro lo Onion) In-Reply-To: <009701cb9013$42eda500$c8c8ef00$@att.net> References: <009701cb9013$42eda500$c8c8ef00$@att.net> Message-ID: On Mon, Nov 29, 2010 at 10:17 PM, spike wrote: > BillK, I can't tell if this is satire or real, but it looks and sounds real > to me. I am not hip to euros or politics on that particular continent. > Please Euro-hipsters, is this really as it sounds? If so, what (if > anything) do we do now? > > http://www.youtube.com/watch?v=Fyq7WRr_GPg&feature=player_embedded > > It's real in the sense that Farage did actually make that ranting speech to the European Parliament. He is famous for his rants against European politicians and this one has gone all over the interwebs. He is the leader of the UK Independence Party, a small English nationalist party which wants the UK to withdraw completely from the Euro political system. It is debatable how much of what he actually said is correct, but he has tapped into an anti-Euro politics feeling that is becoming very common. The 50,000-strong protest march outside the Irish parliament would doubtless agree with him. BillK
But then I remembered that Spike, unlike us, has a mind as pure as the driven snow...Damien Broderick Well yes I suppose it is true, so long as you mean the snow that is driven such that it lands in the settling tank at the sewage processing plant. My mind is as pure as that driven snow, yes. spike From pharos at gmail.com Mon Nov 29 23:23:48 2010 From: pharos at gmail.com (BillK) Date: Mon, 29 Nov 2010 23:23:48 +0000 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <00a101cb9018$76f406f0$64dc14d0$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> <00a101cb9018$76f406f0$64dc14d0$@att.net> Message-ID: 2010/11/29 spike wrote: > In return, the passengers sign away any right to sue the airlines > regardless, for if they agree to go into that company?s airplane, they > becoming the fucking property of that airline from the time they enter the > gate until they leave on the other end.? Perhaps that particular adjective > for the noun property might be taken two ways, but the point is there is a > solution here which should satisfy everyone who bitches about this new > system forced on us by the fact that clearly there are now *plenty* of > people on this planet who want to commit mass murder, just for the sake of > pleasing their favorite deity. > > You're missing the point, Spike. The jihadists don't particularly want to commit mass murder, although they don't mind if they do. Collateral damage is the term the US army uses when they kill civilians. It is an economic war to destroy the US financial system. Look at the billions spent in Iraq and Afghanistan. And the billions wasted on the TSA and 'security'. See: Quote: In his October 2004 address to the American people, bin Laden noted that the 9/11 attacks cost al Qaeda only a fraction of the damage inflicted upon the United States. "Al Qaeda spent $500,000 on the event," he said, "while America in the incident and its aftermath lost -- according to the lowest estimates -- more than $500 billion, meaning that every dollar of al Qaeda defeated a million dollars." The point is clear: Security is expensive, and driving up costs is one way jihadists can wear down Western economies. The writer encourages the United States "not to spare millions of dollars to protect these targets" by increasing the number of guards, searching all who enter those places, and even preventing flying objects from approaching the targets. "Tell them that the life of the American citizen is in danger and that his life is more significant than billions of dollars," he wrote. "Hand in hand, we will be with you until you are bankrupt and your economy collapses." ----------------------------- BillK From jebdm at jebdm.net Mon Nov 29 23:29:50 2010 From: jebdm at jebdm.net (Jebadiah Moore) Date: Mon, 29 Nov 2010 18:29:50 -0500 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <00a101cb9018$76f406f0$64dc14d0$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> <00a101cb9018$76f406f0$64dc14d0$@att.net> Message-ID: 2010/11/29 spike > I have a solution to the airline security problem that should satisfy > everyone, especially libertarians. Get the TSA and all government > completely out of the picture, turn over security to the airlines... 
> The problem is that half the security isn't supposed to be for the people on the planes, but for the people in the buildings that the planes could get flown into, for the prevention of hostage situations, for the prevention of loss of property, etc. -- Jebadiah Moore http://blog.jebdm.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From thespike at satx.rr.com Tue Nov 30 00:07:17 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 29 Nov 2010 18:07:17 -0600 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <00a101cb9018$76f406f0$64dc14d0$@att.net> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> <00a101cb9018$76f406f0$64dc14d0$@att.net> Message-ID: <4CF44035.3050300@satx.rr.com> On 11/29/2010 4:55 PM, spike wrote: > if a terrorist attempts a horrific bombing using FBI-supplied realistic > but inert explosives, and somehow manages to perish in the attempt, will > he get 72 lifelike inflatable dolls? They used to be known as Blow-Up Dolls, but that is now generally regarded as in poor taste among the jihadis. Damien Broderick From spike66 at att.net Tue Nov 30 00:29:53 2010 From: spike66 at att.net (spike) Date: Mon, 29 Nov 2010 16:29:53 -0800 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> <00a101cb9018$76f406f0$64dc14d0$@att.net> Message-ID: <00cd01cb9025$b5971c00$20c55400$@att.net> ... On Behalf Of BillK Subject: Re: [ExI] Nulla contro lo Stato 2010/11/29 spike wrote: >> In return, the passengers sign away any right to sue the airlines >> regardless, for if they agree to go into that company's airplane, they >> become the fucking property of that airline ... >You're missing the point, Spike. The jihadists don't particularly want to commit mass murder, although they don't mind if they do... The Christmas Tree bomber had dreamed of mass murder since the age of 15. > Collateral damage is the term the US army uses when they kill civilians... Ja, of course they do not consider us civilians if we pay taxes. > It is an economic war to destroy the US financial system. Look at the billions spent in Iraq and Afghanistan. And the billions wasted on the TSA and 'security'... BillK Ja, BillK, I see your point and agree with it completely, but I don't agree that I am missing the point. Let private enterprise do what it does so very well, without government oversight. Get government out of the picture completely. Then the proles buy as much security as they feel they need, and take responsibility for themselves. From my perspective as a business traveller, so very much air travel is a complete waste, a painful lonely waste. We did so many meetings, so much business, that could have been done on conference calls. I say that most business travel is unnecessary, and expensive as all hell. One is away from one's office, and stuff piles up while one is on the planes, yet some business travellers actually like it, like having an expense account to dine in fashion at the company's expense and so forth. I hated travelling. So make it to where travel is risky and very personal. Then the business traveller may look closely at all alternatives. Move bytes, not butts.
The greens win (less fuel-hungry air travel), families win (mom is home more), the terrorists win (because they were able to cut down airlines and murder people), companies win because their guys are in the home office more, running up fewer travel bills, consumers win because prices come down. The losers are not clear to me. Boeing perhaps? spike From possiblepaths2050 at gmail.com Tue Nov 30 01:22:14 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 29 Nov 2010 18:22:14 -0700 Subject: [ExI] reverse aging In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> <55CE9E2C436B40B4A7C45C33809E05AB@PCserafino> Message-ID: My understanding is that to do the same in humans (much easier said than done) would make the test subjects very vulnerable to cancers. But some researchers actually contend that is not true. And how many years do you think we are away from a *successful* treatment being developed? I say this because at least in theory, if this is perfected, we have firmly landed our feet on the launching pad of longevity escape velocity! : ) http://www.scientificamerican.com/article.cfm?id=telomerase-reverses-aging John On 11/29/10, Bryan Bishop wrote: > On Mon, Nov 29, 2010 at 12:34 PM, scerir wrote: >> somebody gave me the pdf of the (6 pages) paper on Nature, >> anyone interested? > > sure > > - Bryan > http://heybryan.org/ > 1 512 203 0507 > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From sparge at gmail.com Tue Nov 30 01:15:21 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 29 Nov 2010 20:15:21 -0500 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <303197.29059.qm@web65609.mail.ac4.yahoo.com> <00a101cb9018$76f406f0$64dc14d0$@att.net> Message-ID: 2010/11/29 Jebadiah Moore : > > The problem is that half the security isn't supposed to be for the people on > the planes, but for the people in the buildings that the planes could get > flown into, for the prevention of hostage situations, for the prevention of > loss of property, etc. Good point, but don't the reinforced cockpit doors pretty much take care of that? Seems like UAV technology has advanced sufficiently that it ought to be possible to equip planes to allow the FAA to take remote control in the event of hijacking or pilot incapacitation. -Dave From possiblepaths2050 at gmail.com Tue Nov 30 02:34:44 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 29 Nov 2010 19:34:44 -0700 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity Message-ID: As with many of you, I am utterly enthralled by Ray Kurzweil's predictions about the march toward AGI that is many times faster and brighter than humans. But I am just not so sure about his 2045 date for a Singularity. Michael Anissimov and others have challenged his claims, but I would say generally Kurzweil has a sound track record. But I do think his claim that the global Google network fulfills his crucial prediction of roughly human-brain computing power (though without actual AGI) being achieved by 2010 is really grasping at straws. Or am I being too hard on him?
And so not counting the Google network, when do any of you see a supercomputer existing that can do 30 petaflops a second, or whatever level of computational power that would be required to equal the human brain? 2015? 2020? How many of you still consider 2045 as a good date for the Singularity? I am beginning to think it will take at least one additional decade for his vision to come true. And this worries me because I may not be alive in 2055... But then considering the *possible* rate of take-off if a human level AGI (but thinking so much faster than humans) is allowed to exponentially improve itself, I can also envision (though unlikely) a Singularity right around 2035. This really matters to me. John From msd001 at gmail.com Tue Nov 30 02:48:24 2010 From: msd001 at gmail.com (Mike Dougherty) Date: Mon, 29 Nov 2010 21:48:24 -0500 Subject: [ExI] Nulla contro lo Stato In-Reply-To: <4CF41D47.9080802@satx.rr.com> References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <006f01cb9003$143a7c70$3caf7550$@att.net> <4CF41D47.9080802@satx.rr.com> Message-ID: On Mon, Nov 29, 2010 at 4:38 PM, Damien Broderick wrote: > On 11/29/2010 2:51 PM, Darren Greer wrote: > > I really thought he meant he would kiss the place where the "accident" >> happened. I've got a filthy mind. >> > > Me too, Darren. Same thought. But then I remembered that Spike, unlike us, > has a mind as pure as the driven snow. > I've seen snow that's been driven on, it's anything but pure. -------------- next part -------------- An HTML attachment was scrubbed... URL: From agrimes at speakeasy.net Tue Nov 30 02:27:35 2010 From: agrimes at speakeasy.net (Alan Grimes) Date: Mon, 29 Nov 2010 21:27:35 -0500 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST In-Reply-To: References: Message-ID: <4CF46117.3080600@speakeasy.net> Giulio Prisco wrote: > VIDEO - Suzanne Gildert on Thinking about the hardware of thinking in Teleplace > > http://telexlr8.wordpress.com/2010/11/29/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010/ Upload this, simulate that. =\ Why do you think I feel that uploaders dominate transhumanism? -- DO NOT USE OBAMACARE. DO NOT BUY OBAMACARE. Powers are not rights. From sparge at gmail.com Tue Nov 30 02:56:46 2010 From: sparge at gmail.com (Dave Sill) Date: Mon, 29 Nov 2010 21:56:46 -0500 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity In-Reply-To: References: Message-ID: On Mon, Nov 29, 2010 at 9:34 PM, John Grigg wrote: > 30 petaflops a second FLOPS = floating point operations per second Sorry, that's one of my peeves. Regarding predictions for the singularity, I'm going to hold off on making one until flying cars are mainstream. :-) Just kidding. I'm going to hold off until we've got a dog-level AI. -Dave From possiblepaths2050 at gmail.com Tue Nov 30 02:57:34 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 29 Nov 2010 19:57:34 -0700 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST In-Reply-To: <4CF46117.3080600@speakeasy.net> References: <4CF46117.3080600@speakeasy.net> Message-ID: Alan Grimes wrote: >Why do you think I feel that uploaders dominate transhumanism? Well, you did say that... 
: ) John On 11/29/10, Alan Grimes wrote: > Giulio Prisco wrote: >> VIDEO - Suzanne Gildert on Thinking about the hardware of thinking in >> Teleplace >> >> http://telexlr8.wordpress.com/2010/11/29/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010/ > > Upload this, simulate that. > > =\ > > Why do you think I feel that uploaders dominate transhumanism? > > > -- > DO NOT USE OBAMACARE. > DO NOT BUY OBAMACARE. > Powers are not rights. > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From natasha at natasha.cc Tue Nov 30 03:19:25 2010 From: natasha at natasha.cc (Natasha Vita-More) Date: Mon, 29 Nov 2010 21:19:25 -0600 Subject: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST In-Reply-To: References: Message-ID: Great talk and thank you Giulio. I'm glad that my questions were addressed by Suzanne since no one on the Extropy or Humanity+ list seemed to care. Suzanne did a great job and she is a lot of fun! Natasha Vita-More -----Original Message----- From: extropy-chat-bounces at lists.extropy.org [mailto:extropy-chat-bounces at lists.extropy.org] On Behalf Of Giulio Prisco Sent: Monday, November 29, 2010 3:46 AM To: ExI chat list; extrobritannia at yahoogroups.com; World Transhumanist Association Discussion List; Institute for Ethics and Emerging Technologies; transumanisti at yahoogroups.com Subject: Re: [ExI] Suzanne Gildert on Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?, Teleplace, 28th November 2010, 10am PST VIDEO - Suzanne Gildert on Thinking about the hardware of thinking in Teleplace http://telexlr8.wordpress.com/2010/11/29/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010/ On Sat, Nov 27, 2010 at 8:00 PM, Giulio Prisco wrote: > REMINDER Suzanne Gildert on Thinking about the hardware of thinking > tomorrow in teleplace > > http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ > > On Mon, Nov 22, 2010 at 5:10 PM, Giulio Prisco wrote: >> Suzanne Gildert will give a talk in Teleplace on "Thinking about the >> hardware of thinking: Can disruptive technologies help us achieve >> uploading?" on November 28, 2010, at 10am PST (1pm EST, 6pm UK, 7pm >> continental EU). >> >> http://telexlr8.wordpress.com/2010/11/22/suzanne-gildert-on-thinking-about-the-hardware-of-thinking-can-disruptive-technologies-help-us-achieve-uploading-teleplace-28th-november-2010-10am-pst/ >> >> This is a revised version of Suzanne's talk at TransVision 2010, also >> inspired by her article on "Building more intelligent machines: Can >> 'co-design' help?" (PDF). See also Suzanne's previous Teleplace talk >> on "Quantum Computing: Separating Hope from Hype". >> >> Thinking about the hardware of thinking: Can disruptive technologies >> help us achieve uploading? >> >> S. Gildert, Teleplace, 28th November 2010 >> >> We are surrounded by devices that rely on general purpose silicon >> processors, which are mostly very similar in terms of their design. >> But is this the only possibility?
As we begin to run larger and more >> brain-like emulations, will our current methods of simulating neural >> networks be enough, even in principle? Why does the brain, with 100 >> billion neurons, consume less than 30W of power, whilst our attempts >> to simulate tens of thousands of neurons (for example in the blue >> brain project) consumes tens of KW? As we wish to run computations >> faster and more efficiently, we might we need to consider if the >> design of the hardware that we all take for granted is optimal. In >> this presentation I will discuss the recent return to a focus upon >> co-design - that is, designing specialized software algorithms >> running on specialized hardware, and how this approach may help us >> create much more powerful applications in the future. As an example, >> I will discuss some possible ways of running AI algorithms on novel >> forms of computer hardware, such as superconducting quantum computing >> processors. These behave entirely differently to our current silicon >> chips, and help to emphasize just how important disruptive >> technologies may be to our attempts to build intelligent machines. >> >> Event on Facebook >> >> Dr. Suzanne Gildert is currently working as an Experimental Physicist >> at D-Wave Systems, Inc. She is involved in the design and testing of >> large scale superconducting processors for Quantum Computing >> Applications. Suzanne obtained her PhD and MSci degree from The >> University of Birmingham UK, focusing on the areas of experimental >> quantum device physics and superconductivity. >> >> teleXLR8 is a telepresence community for cultural acceleration. We >> produce online events, featuring first class content and speakers, >> with the best system for e-learning and collaboration in an online 3D >> environment: Teleplace. Join teleXLR8 to participate in online talks, >> seminars, round tables, workshops, debates, full conferences, >> e-learning courses, and social events. with full immersion >> telepresence, but without leaving home. >> > _______________________________________________ extropy-chat mailing list extropy-chat at lists.extropy.org http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat From possiblepaths2050 at gmail.com Tue Nov 30 03:24:38 2010 From: possiblepaths2050 at gmail.com (John Grigg) Date: Mon, 29 Nov 2010 20:24:38 -0700 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity In-Reply-To: References: Message-ID: According to someone from another list who has a spouse in the supercomputer field, we should be seeing a 30 petaflops machine by around 2014. And so Ray Kurzweil turns out to be just a few years off! : ) I realize he never meant for his dates to be set in stone, but I'd like for the crucial predictions regarding computational power, to be no more than five years off... John On 11/29/10, Dave Sill wrote: > On Mon, Nov 29, 2010 at 9:34 PM, John Grigg > wrote: >> 30 petaflops a second > > FLOPS = floating point operations per second > > Sorry, that's one of my peeves. > > Regarding predictions for the singularity, I'm going to hold off on > making one until flying cars are mainstream. :-) Just kidding. I'm > going to hold off until we've got a dog-level AI. 
> > -Dave > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > From thespike at satx.rr.com Tue Nov 30 03:58:43 2010 From: thespike at satx.rr.com (Damien Broderick) Date: Mon, 29 Nov 2010 21:58:43 -0600 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity In-Reply-To: References: Message-ID: <4CF47673.5010504@satx.rr.com> On 11/29/2010 9:24 PM, John Grigg wrote: > According to someone from another list who has a spouse in the > supercomputer field, we should be seeing a 30 petaflops machine by > around 2014. And so Ray Kurzweil turns out to be just a few years > off! : ) Here's what I wrote in the 2001 edition of THE SPIKE: < machines already exist that do several trillion calculations per second. So our current devices are 33,000 times too feeble for the job. We need to increase computing power from 3 teraflops to 100,000 teraflops (or, more concisely, 100 petaflops)--a whopping factor. Impossible, surely? Not at all. Exponential growth cuts numbers like that down to size... A mere fifteen doublings meets that goal nicely. > With annual doubling, that would predict... 2016. For 100 petaflops. 2014 looks right for 30 petaflops. Damien Broderick From atymes at gmail.com Tue Nov 30 05:46:01 2010 From: atymes at gmail.com (Adrian Tymes) Date: Mon, 29 Nov 2010 21:46:01 -0800 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity In-Reply-To: References: Message-ID: On Mon, Nov 29, 2010 at 6:56 PM, Dave Sill wrote: > On Mon, Nov 29, 2010 at 9:34 PM, John Grigg > wrote: > > 30 petaflops a second > > FLOPS = floating point operations per second > If only that were correct nonetheless. (Acceleration, as in adding the capacity for 30 petaflops, each second.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Nov 30 10:44:06 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 30 Nov 2010 06:44:06 -0400 Subject: [ExI] CQT Researcher Uncovers Quantitative Link Between Quantum Non-Locality and Uncertainty Message-ID: http://www.quantumlah.org/highlight/191110_sciencenews.php -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From darren.greer3 at gmail.com Tue Nov 30 13:23:17 2010 From: darren.greer3 at gmail.com (Darren Greer) Date: Tue, 30 Nov 2010 09:23:17 -0400 Subject: [ExI] Nulla contro lo Stato In-Reply-To: References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <006f01cb9003$143a7c70$3caf7550$@att.net> <4CF41D47.9080802@satx.rr.com> Message-ID: I actually heard the "Piercy Cummings" TSA pat-down story reported on the news, as news, on a local radio station this morning on my way to school. Darren 2010/11/29 Mike Dougherty > On Mon, Nov 29, 2010 at 4:38 PM, Damien Broderick wrote: > >> On 11/29/2010 2:51 PM, Darren Greer wrote: >> >> I really thought he meant he would kiss the place where the "accident" >>> happened. I've got a filthy mind. >>> >> >> Me too, Darren. Same thought. But then I remembered that Spike, unlike us, >> has a mind as pure as the driven snow. >> > > I've seen snow that's been driven on, it's anything but pure. 
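The doubling arithmetic in Damien's THE SPIKE excerpt above is easy to check. Here is a minimal sketch in Python, assuming only what the quoted passage states (roughly 3 teraflops for the fastest machines in 2001, doubling once per year); by this strict count 30 petaflops is crossed in 2015, with about 25 petaflops by 2014, close to the figure given:

    # Back-of-envelope check of the quoted doubling argument.
    # Assumed baseline (from the excerpt, not independently verified):
    # ~3 TFLOPS in 2001, doubling once per year.
    for n in (13, 14, 15):
        year = 2001 + n
        petaflops = 3.0 * 2**n / 1000.0   # 1 petaflops = 1000 teraflops
        print(year, round(petaflops, 1), "petaflops")
    # Output: 2014 24.6 / 2015 49.2 / 2016 98.3 -- fifteen doublings gives
    # roughly 100 petaflops by 2016, matching the passage above.

Any real forecast would, of course, stand or fall with the annual-doubling assumption itself.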
> > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -- "In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks -------------- next part -------------- An HTML attachment was scrubbed... URL: From lubkin at unreasonable.com Tue Nov 30 12:58:47 2010 From: lubkin at unreasonable.com (David Lubkin) Date: Tue, 30 Nov 2010 07:58:47 -0500 Subject: [ExI] NASA tease on SETI find Message-ID: <201011301334.oAUDY5YC016878@andromeda.ziaspace.com> NASA's having a press conference to announce "an astrobiology finding that will impact the search for evidence of extraterrestrial life" on Thursday, 2 Dec at 2 PM EST. It will be streamed live and broadcast on NASA TV. If you're press, or can convince NASA you are, you can attend (DC, various NASA centers) or phone in. -- David. From atymes at gmail.com Tue Nov 30 16:00:18 2010 From: atymes at gmail.com (Adrian Tymes) Date: Tue, 30 Nov 2010 08:00:18 -0800 Subject: [ExI] CQT Researcher Uncovers Quantitative Link Between Quantum Non-Locality and Uncertainty In-Reply-To: References: Message-ID: This is a great example of what's wrong with most news reporting about quantum mechanics: lots of fluff about "it's spooky" and "it's weird", very little reporting on what the breakthrough actually is. Moreover, the breakthrough is coded in metaphor and simile - which leads to public misunderstanding. For example, this seems to be the main cause of the misunderstanding that entanglement means FTL communication. It is true that you can determine what one member of a pair is, and therefore conclude what the other member must be even if it's a long ways away - but any information you encode with it still has to travel at light speed or less. Even if you encode the information with an entangled bit, the information (encoded or otherwise) still had to travel normally to get to the other party. If you and another party pre-arrange to act depending on the polarity of a bit sent from a source midway between you, that pre-arrangement had to travel at light speed, and you could act the same depending on other information sent from a source midway between you. 2010/11/30 Darren Greer > http://www.quantumlah.org/highlight/191110_sciencenews.php > > > > -- > "In the end that's all we have: our memories - electrochemical impulses > stored in eight pounds of tissue the consistency of cold porridge." - > Remembrance of the Daleks > > _______________________________________________ > extropy-chat mailing list > extropy-chat at lists.extropy.org > http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonkc at bellsouth.net Tue Nov 30 17:28:39 2010 From: jonkc at bellsouth.net (John Clark) Date: Tue, 30 Nov 2010 12:28:39 -0500 Subject: [ExI] Ray Kurzweil's predictions regarding the Singularity. In-Reply-To: References: Message-ID: <0FF6C59F-A65E-4C70-9993-FCB1F3094991@bellsouth.net> On Nov 29, 2010, at 9:34 PM, John Grigg wrote: > I am just not so sure about his [Ray Kurzweil's] 2045 date for a Singularity. I'll bet that if pressed even Kurzweil would say he is not sure of that date: by its very nature, a singularity is very hard to predict.
I would be astonished if it happened in the next 10 years, and I would be equally astonished if it didn't happen in the next 100 years; but then again, being astonished is the name of the game when you're talking about the singularity. It's easy to extrapolate a linear growth rate, or even an exponential one like Moore's Law, but nobody can predict when or if a breakthrough will happen. That word is thrown around a lot nowadays, but fundamental breakthroughs that arrive completely out of the blue and very quickly prove to be of existential importance are quite rare; in the 20th century I can think of only one. In 1935 nobody could have predicted that in 10 years nuclear energy would not only become possible but practical, and dramatically and permanently change the world; nor could they have predicted that in 4 years the worst war in human history would start. If somebody found an easy way to make a practical quantum computer (the topological approach using non-Abelian anyons?) it would throw all previous predictions about the singularity out the window, and so would a major new war. About the only thing I'm reasonably sure of is that whenever the singularity happens, one year beforehand it will still seem a long, long way away.

John K Clark

From darren.greer3 at gmail.com Tue Nov 30 18:12:35 2010
From: darren.greer3 at gmail.com (Darren Greer)
Date: Tue, 30 Nov 2010 14:12:35 -0400
Subject: [ExI] CQT Researcher Uncovers Quantitative Link Between Quantum Non-Locality and Uncertainty
In-Reply-To:
References:
Message-ID:

Adrian wrote:

> This is a great example of what's wrong with most news reporting about
> quantum mechanics: lots of fluff about "it's spooky" and "it's weird."

I changed the headline for ExI, substituting "quantitative" for "weirdness" and specifying the phenomena they were discussing, for that very reason. I don't mind "spooky," because I'm always reminded of Einstein when it's said. I'm not a physicist, but I think it's important to remember that these things seem "weird" only to those versed in at least the basic science. To someone who didn't know anything about the speed-of-light limit and why it can't be violated, it wouldn't seem weird at all. Just more goofy science stuff, by guys and gals with nothing better to do with their time.

Darren

2010/11/30 Adrian Tymes

> This is a great example of what's wrong with most news reporting about
> quantum mechanics: lots of fluff about "it's spooky" and "it's weird", and
> very little reporting on what the actual breakthrough is. Moreover, the
> breakthrough is couched in metaphor and simile, which leads to public
> misunderstanding.
>
> For example, this seems to be the main cause of the misunderstanding that
> entanglement means FTL communication. It is true that you can determine
> what one member of an entangled pair is, and therefore conclude what the
> other member must be even if it's a long way away, but any information you
> encode still has to travel at light speed or less. Even if you encode the
> information with an entangled bit, the information (encoded or otherwise)
> still has to travel normally to get to the other party. If you and another
> party pre-arrange to act depending on the polarity of a bit sent from a
> source midway between you, that pre-arrangement had to travel at light
> speed, and you could act the same way depending on any other information
> sent from a source midway between you.
>
> 2010/11/30 Darren Greer
>
>> http://www.quantumlah.org/highlight/191110_sciencenews.php

--
"In the end that's all we have: our memories - electrochemical impulses stored in eight pounds of tissue the consistency of cold porridge." - Remembrance of the Daleks
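Adrian's no-FTL point can be made concrete with a toy model. Here is a sketch in Python that uses classical, perfectly anticorrelated bits as a stand-in for the "measure one, know the other" property. This is an admitted simplification (real entangled pairs need the full quantum statistics, and the helper name shared_pairs is invented here), but the no-signalling part looks the same:

    import random

    def shared_pairs(n):
        # A source midway between Alice and Bob deals each of them a bit;
        # the two bits in each pair are perfectly anticorrelated.
        alice = [random.randint(0, 1) for _ in range(n)]
        bob = [1 - a for a in alice]
        return alice, bob

    alice, bob = shared_pairs(100000)

    # Alice can deduce every one of Bob's distant bits instantly...
    assert all(a != b for a, b in zip(alice, bob))

    # ...yet Bob's stream, taken by itself, is uniform noise. Nothing Alice
    # does locally changes its statistics, so no message rides on the
    # correlation; any usable signal still travels by ordinary channels.
    print(sum(bob) / len(bob))   # ~0.5, whatever Alice does at her end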
From bbenzai at yahoo.com Tue Nov 30 20:31:26 2010
From: bbenzai at yahoo.com (Ben Zaiboc)
Date: Tue, 30 Nov 2010 12:31:26 -0800 (PST)
Subject: [ExI] Nulla contro lo Stato
In-Reply-To:
Message-ID: <413205.51995.qm@web114419.mail.gq1.yahoo.com>

The Avantguardian wrote:

> I cannot find the words to describe how unAmerican this is:
>
> http://www.deadseriousnews.com/?p=573
>
> Somebody has pulled my country out from beneath me.

I've never before used the phrase "ROFLMAO" in actual, literal truth, until now.

Ben Zaiboc
(Still giggling)

From sparge at gmail.com Tue Nov 30 20:52:41 2010
From: sparge at gmail.com (Dave Sill)
Date: Tue, 30 Nov 2010 15:52:41 -0500
Subject: [ExI] Nulla contro lo Stato
In-Reply-To: <413205.51995.qm@web114419.mail.gq1.yahoo.com>
References: <413205.51995.qm@web114419.mail.gq1.yahoo.com>
Message-ID:

You actually, literally laughed your ass off?

-Dave

On Nov 30, 2010 3:46 PM, "Ben Zaiboc" wrote:

> The Avantguardian wrote:
>> I cannot find the words to describe how unAmerican this is:
>>
>> http://www.deadseriousnews.com/...
>>
>> Somebody has pulled my country out from beneath me.
>
> I've never before used the phrase "ROFLMAO" in actual, literal truth,
> until now.
>
> Ben Zaiboc
> (Still giggling)

From spike66 at att.net Tue Nov 30 21:32:14 2010
From: spike66 at att.net (spike)
Date: Tue, 30 Nov 2010 13:32:14 -0800
Subject: [ExI] Nulla contro lo Stato
In-Reply-To:
References: <413205.51995.qm@web114419.mail.gq1.yahoo.com>
Message-ID: <007f01cb90d6$0e1bdfe0$2a539fa0$@att.net>

On Behalf Of Dave Sill
Subject: Re: [ExI] Nulla contro lo Stato

>> I've never before used the phrase "ROFLMAO" in actual, literal truth,
>> until now. Ben

> You actually, literally laughed your ass off? -Dave

Not a problem. He can always go to a retailer. {8^D ... SICLWASFA

Actually, that isn't what he said. In literal truth, he used the phrase "ROFLMAO", which I did just now as well. One need not involve actual literal laughter to literally use the phrase ROFLMAO. With regard to mirth, I sat in chair laughing with ass still firmly attached. Literally.

Most of the time, the term "literally" can be replaced with its opposite, figuratively. Note every time you hear the term literally, which is perhaps figuratively a thousand times a day, and replace it with figuratively. See how much more sense it makes. You will entertain yourself irregardful.

spike

(We have far too much fun in this century, more than we deserve.) {8-]
From avantguardian2020 at yahoo.com Tue Nov 30 21:34:13 2010
From: avantguardian2020 at yahoo.com (The Avantguardian)
Date: Tue, 30 Nov 2010 13:34:13 -0800 (PST)
Subject: [ExI] reverse aging
In-Reply-To:
References: <175291.19133.qm@web65603.mail.ac4.yahoo.com> <002701cb8fe4$ef4a71e0$cddf55a0$@att.net> <4a4dab57fcc08ea47620360ca68c720d.squirrel@www.main.nc.us> <3EDF26C2560548199BA6E6A03512F6A1@PCserafino> <55CE9E2C436B40B4A7C45C33809E05AB@PCserafino>
Message-ID: <110530.75483.qm@web65616.mail.ac4.yahoo.com>

----- Original Message ----
> From: John Grigg
> To: ExI chat list
> Sent: Mon, November 29, 2010 5:22:14 PM
> Subject: Re: [ExI] reverse aging
>
> My understanding is that to do the same in humans (much easier said
> than done) would make the test subjects very vulnerable to cancers.
> But some researchers actually contend that is not true.
>
> And how many years do you think we are away from a *successful*
> treatment being developed? I say this because, at least in theory, if
> this is perfected, we have firmly landed our feet on the launching pad
> of longevity escape velocity! : )
>
> http://www.scientificamerican.com/article.cfm?id=telomerase-reverses-aging

Aside from valid cancer concerns, I would like to stress that the scientists in the Nature paper did not reverse normal aging in mice. Instead, what they did was first cause premature aging in mice by taking away telomerase, and then reverse the premature aging they had caused by putting telomerase back. Therefore they would not, for example, qualify for the Methuselah Mouse Prize.

From scerir at alice.it Tue Nov 30 21:52:32 2010
From: scerir at alice.it (scerir)
Date: Tue, 30 Nov 2010 22:52:32 +0100
Subject: [ExI] CQT Researcher Uncovers Quantitative Link Between Quantum Non-Locality and Uncertainty
In-Reply-To:
References:
Message-ID:

Darren Greer:

> To someone who didn't know anything about the speed of light constant
> and why it can't be violated, it wouldn't seem weird at all.

Look: a sort of superluminal communication, using specific entangled states and three actors, has been shown to be, let us say, thinkable.

A bit technical:
http://www.scielo.br/pdf/bjp/v35n2a/a18v352a.pdf

Maybe simpler:
http://arxiv.org/PS_cache/arxiv/pdf/0903/0903.1076v2.pdf

An interesting review, about the above (at the end) and, more generally, about the possible existence of quantum correlations 'out of space-time' (which might be a trivial idea, since QM is built on abstract spaces, or a profound one, since space-time might be emergent):
http://arxiv.org/PS_cache/arxiv/pdf/1011/1011.3440v1.pdf

From lubkin at unreasonable.com Tue Nov 30 22:53:27 2010
From: lubkin at unreasonable.com (David Lubkin)
Date: Tue, 30 Nov 2010 17:53:27 -0500
Subject: [ExI] NASA tease on SETI find
In-Reply-To: <201011301334.oAUDY5YC016878@andromeda.ziaspace.com>
References: <201011301334.oAUDY5YC016878@andromeda.ziaspace.com>
Message-ID: <201011302252.oAUMqImh002314@andromeda.ziaspace.com>

I wrote:

> NASA's having a press conference to announce "an astrobiology
> finding that will impact the search for evidence of extraterrestrial
> life" on Thursday, 2 Dec at 2 PM EST.

Probably this:

-- David.
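A coda on the quantum non-locality thread above: the "quantitative link" in the CQT story concerns CHSH-type experiments, where local hidden variables cap the correlation score |S| at 2 while quantum mechanics reaches 2*sqrt(2) (Tsirelson's bound), and the CQT result ties that gap to the uncertainty principle. Here is a sketch of the standard numbers in Python, assuming only the textbook singlet correlation E(a,b) = -cos(a-b); nothing below is derived from the linked papers:

    from math import cos, pi, sqrt

    def E(a, b):
        # Singlet-state correlation for analyzer angles a and b (radians).
        return -cos(a - b)

    a1, a2 = 0.0, pi / 2           # Alice's two measurement settings
    b1, b2 = pi / 4, 3 * pi / 4    # Bob's two measurement settings

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))        # 2.828..., i.e. 2*sqrt(2): the Tsirelson bound
    print(2 * sqrt(2))   # no local hidden-variable model can exceed 2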