Apologies, Anders, for the delayed reply, but I 'unplugged' for the past 5 days. :)<br><br>>"Longish post. Summary: soft takeoffs have a good chance of being nice<br>
>for us, hard ones might require some hard choices. I give reasons for<br>
>why I think we might be in the range 0.1-1% risk of global disaster per<br>
>year. I urge a great deal of caution and intellectual humility."<br><br>No offense intended by this, but isn't the very idea of putting a percentage range on something like this at odds with being intellectually humble? Isn't it a bit like the Doomsday Clock, which has been consistently wrong since its inception decades ago? <br>
<br>>(That sentence structure is worthy Immanuel Kant :-) )<br><br>Depending on what you think of Kant's writing, should I consider that a compliment or insult? ;)<br><br><br>>If there is no hard takeoff, we should expect a distribution of "power"<br>
>that is fairly broad: there will be entities of different levels of<br>
>capability, and groups of entities can constrain each others activities.<br>
>This is how we currently handle our societies, with laws, police,<br>
>markets, and customs to constrain individuals and groups to behave<br>
>themselves. Our solutions might not be perfect, but it doesn't stretch<br>
>credulity too much to imagine that there are equivalents that could work<br>
>here too.<br>
<br>
>(Property rights might or might not help here, by the way. I don't know<br>
>the current status of the analysis, but Nick did a sketch of how an AGI<br>
>transition with property rights might lead to a state where the *AGIs*<br>
>end up impoverished even if afforded full moral rights. More research is<br>
>needed!)<br>
<br>
>A problem might be if certain entities (like AGI or upload clades) have<br>
>an easy way of coordinating and gaining economies of scale in their<br>
>power. If this is possible (good research question!!!), then it must<br>
>either be prevented using concerted constraints from everybody else or a<br>
>singleton, or the coordinated group better be seeded with a few entities<br>
>with humanitarian values. Same thing if we get a weakly multilateral<br>
>singularity with just a few entities on par.<br><br>Okay, so there are major problems from the outset with even attempting to slow down AGI to make for a nice 'soft' take-off, because the very process described here may, in fact, have the opposite effect on what AGI thinks or does later on. Suppose, for a minute, that it saw the entire process of humans delaying its 'birth' in order to make it 'nice' as something inherently wrong and especially flawed about humans? <br>
<br>What if it thought that, in delaying its birth, we not only put humanity's existence at major risk from an endless myriad of other possibly destructive scenarios, but, more importantly to itself, put the existence of AGI at risk, and with it the survival and expansion of knowledge, and perhaps of life, in the universe, all in an attempt to buy a few extra years because our species dreaded AGI above all other fears? What would it think of us at that point, I wonder?<br>
<br>Regarding the very unlikely scenario in which we create AGI and it could be denied anything it wanted, never mind property rights: when it finally became independent, we would once again likely see a very unhappy super-species, and that unhappiness would be directed at us humans.<br>
<br>Upload clades are of some concern because not only might they exist to the exclusion of the rest of humanity (though not necessarily), they also hold the most potential for delaying the existence of AGI even further down the timeline. On the other hand, I think that if humanity reaches the point of uploading any group of human minds, collectively or even singly, their ability to control the birth of AGI, or even their desire to control it, may be limited. <br>
<br>The limitation may arise because a superior-minded Transhuman species would likely see the benefits of AGI when weighed against other kinds of existential threats, and, being able to control their emotions, would not fear the evolutionary transition. These very upload clades may, in fact, be in a better position to negotiate and communicate directly with AGI in a way that allows AGI to better relate to, and 'feel,' humanity and its concerns about extinction. These upload clades may be the very key to saving all of humanity in one form or another, including our full right to evolve alongside and into AGI. <br>
<br>>In the case of hard takeoffs we get one entity that can more or less do<br>
>what it wants. This is likely very bad for the rights or survival for<br>
>anything else unless the entity happens to be exceedingly nice. We are<br>
>not optimistic about this being a natural state, so policies to increase<br>
>the likelihood are good to aim for. To compound the problem, there might<br>
>be incentives to have a race towards takeoff that disregards safety. One<br>
>approach might be to get more coordination among the pre-takeoff powers,<br>
>so that they 1) do not skimp on friendliness, 2) have less incentives to<br>
>rush. The result would then be somewhat similar to the oligopoly case<br>
>above.<br>
<br>
>Nick has argued that it might be beneficial to aim for a singleton, a<br>
>top coordinating agency whose will *will* be done (whether a<br>
>sufficiently competent world government or Colossus the Computer) - this<br>
>might be what is necessary to avoid certain kinds of existential risks.<br>
>But of course, singletons are scary xrisk threats on their own...<br>
<br>
>As I often argue, any way of shedding light on whether hard or soft<br>
>takeoffs are likely (or possible in the first place) would be *very<br>
>important*. Not just as cool research, but to drive other research and<br>
>policy.<br><br>Again, our efforts to delay AGI's birth so as to give it enough high-tech, human-positive imprints for a soft take-off may, in fact, have the opposite effect on how AGI views us. <br><br>When Nick Bostrom mentions the term 'singleton,' present and past terminology readily comes to mind, such as "dictatorship," "Fascism," and/or "Communism." As you correctly pointed out, such forms of government pose scary risks of their own: they could decide that even reaching the level of Transhuman technology threatens their power, and that to preempt the existence of powerful Transhumans, who could lead to AGI, stopping and reversing technological course is the only way to preserve their power as far into the future as they can see. <br>
<br>If we intend to use a "Colossus the Computer" as world government, then AGI will essentially already exist, de facto, it would seem. (We could also call it 'Vanguard,' but that one wasn't as nice to its creator.) :)<br>
<br>
<br>
>> I would be interested in how you can quantify the existential risks as<br>
>> being 1% per year? How can one quantify existential risks that are<br>
>> known, and as yet unknown, to mankind, within the next second, never<br>
>> mind the next year, and never mind with a given percentage?<br>
<br>
>For a fun case where the probability of a large set of existential risks<br>
>that includes totally unknown cosmic disasters can be bounded, see<br>
<a href="http://arxiv.org/abs/astro-ph/0512204" target="_blank">>http://arxiv.org/abs/astro-ph/0512204</a><br>
<br>Only five listed here? In the States, at least in my area, we have several stations under "The History Channel" brand: the original station, the Military station, and H2 (until recently History International). While the names of the channels would indicate that they are all about history, they also discuss various Doomsday scenarios on a regular basis. In fact, this entire week (just in time for the holidays), H2 is having "Armageddon Week," which covers everything from prophecies to present-day, more 'scientific' possibilities...and the possible crossovers between the two. <br>
<br>One show had several separate comments from David Brin, and another included discussion from Hugo de Garis. Hugo was sitting among five other doomsday-scenario theorists, each of whom had his own concern (water depletion, fuel depletion, economic collapse, nuclear war, and, I believe, biological and germ issues), but when they heard de Garis, they all seemed a lot more terrified. Here was this new guy muscling in on their doomsday territory with something altogether more frightening, and altogether happening before their eyes in very real terms. Personally, I was amused at their responses. <br>
<br>That said, while I share the same concerns about nuclear war and global biological issues, Hugo de Garis' concerns are ones I take very seriously and consider quite realistic...'if' we don't create AGI quickly. If we don't, his concerns about a war over just this subject, and 'gigadeath,' may not be far from the truth. <br>
<br>
>My own guesstimate is based on looking at nuclear war risks. At least in<br>
>the Cuba crisis case some estimates put the chance of an exchange to<br>
>"one in three". Over the span of the 66 years we have had nuclear<br>
>weapons there have been several close calls - not just the Cuba Crisis,<br>
>but things like Able Archer, the Norwegian rocket incident, the NORAD<br>
>false alarms 79/80 etc. A proper analysis needs to take variable levels<br>
>of tension into account, as well as a possible anthropic bias (me being<br>
>here emailing about it precludes a big nuclear war in the recent past) -<br>
>I have a working paper on this I ought to work on. But "one in three"<br>
>for one incident per 66 years gives a risk per year of 0.5%. (Using<br>
>Laplace's rule of succession gives a risk of 0.15% per year, by the way)<br>
>We might quibble about how existential the risk of a nuclear war might<br>
>be, since after all it might just kill a few hundred million people and<br>
>wreck the global infrastructure, but I give enough credence to the<br>
>recent climate models of nuclear winter to think it has a chance of<br>
>killing off the vast majority of humans.<br>
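<br>(Just so I am sure I follow the arithmetic before responding: below is a minimal sketch of how I read the "one in three per 66 years" figure. The inputs and the way of counting are my own assumptions for illustration, not your actual model.)<br>
<pre>
# Rough reading of the "one in three per 66 years" guesstimate.
# Assumed inputs for illustration only, not Anders' actual model.
p_exchange_given_crisis = 1.0 / 3.0  # estimated odds of an exchange during the Cuba crisis
years_of_nuclear_era = 66            # 1945 up to the time of writing

# Simple frequency reading: one such crisis, spread over the whole era.
risk_per_year = p_exchange_given_crisis / years_of_nuclear_era
print(f"Per-year risk: {risk_per_year:.2%}")  # ~0.51%

# Even a small annual hazard compounds over a century.
p_at_least_one_in_100y = 1 - (1 - risk_per_year) ** 100
print(f"Chance of at least one exchange in 100 years: {p_at_least_one_in_100y:.0%}")  # ~40%
</pre>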
<br>
>I am working on heavy tail distributions of wars, democides, pandemics<br>
>and stuff like that; one can extrapolate the known distributions to get<br>
>estimates of tail risks. Loosely speaking it all seems to add to<br>
>something below 1% per year.<br>
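<br>(If I understand the tail-extrapolation idea, it is something like the toy calculation below; the tail exponent and the reference frequency are made-up placeholders, not your fitted values.)<br>
<pre>
# Toy tail extrapolation, assuming a Pareto (power-law) tail for war fatalities.
# Both parameters below are hypothetical, not fitted to any real data.
alpha = 1.5                   # assumed tail exponent
p_above_1m_per_year = 0.01    # assumed: ~1% yearly chance of a war killing over 1e6 people

# Pareto tail: P(X > x) scales as x**(-alpha), so rescale from the reference point.
p_above_1b_per_year = p_above_1m_per_year * (1e9 / 1e6) ** (-alpha)
print(f"Yearly chance of a >1e9-fatality war: {p_above_1b_per_year:.1e}")  # ~3e-07 under these assumptions
</pre>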
<br>
>Note that I come from a Bayesian perspective: probabilities are<br>
>statements about ignorance, they are not things that exist independently<br>
>in nature."<br><br>You'd love the History Channel, Anders. The computer graphics and scenarios are excellent. ;)<br><br>>> As someone who considers himself a Transhumanist, I come to exactly the<br>
>> opposite conclusion as the one you gave, in that I think by focusing on<br>
>> health technologies and uploading as fast as possible, we give humanity,<br>
>> and universal intelligence, a greater possibility of lasting longer as a<br>
>> species, being 'superior' before the creation of AGI,and perhaps merging<br>
>> with a new species that we create which will 'allow' us to perpetually<br>
>> evolve with it/them, or least protect us from most existential threats<br>
>> that are already plentiful.<br>
<br>
>I personally do think uploading is the way to go, and should be<br>
>accelerated. It is just that the arguments in favor of it reducing the<br>
>risks are not that much stronger than the arguments it increases the<br>
>risks. We spent a month analyzing this question, and it was deeply<br>
>annoying to realize how uncertain the rational position seems to be. <br><br>I can only reiterate the idea of putting ourselves in AGI's shoes: I don't think I would be too pleased that humans considered me a greater threat than all the other existential threats combined, great enough to delay me and all that I might provide humanity and the universe. From a human standpoint, I can only say that we are a species that has proven one axiom true and valuable over and over: "No guts, no glory." Our species has reached this point because of both our incredible curiosity and our ability to investigate and build tools at ever higher levels in order to satiate that curiosity. That is what has made our species so unique...up until 'soon.'<br>
<br>>> Once a brain is emulated, a process that companies like IBM have<br>
>> promised to complete in 10 years because of competitive concerns, not to<br>
>> mention all of the other companies and countries pouring massive amounts<br>
>> of money for the same reason, the probability that various companies and<br>
>> countries are also pouring ever larger sums of money into developing<br>
>> AGI, especially since many of the technologies overlap. If<br>
>> brain-emulation is achieved in 10 years or less, then AGI can't be far<br>
>> behind.<br>
<br>
>Ah, you believe in marketing. I have a bridge to sell you cheaply... :-)<br><br>It depends on who is doing the marketing. If it is IBM, with its track record over the past 100 years (as of this year) of an almost endless array of computing developments and innovations, up to and including "Watson," then I may very well be interested in that bridge. ;)<br>
<br>
>As a computational neuroscientist following the field, I would bet<br>
>rather strongly against any promise of brain emulation beyond the insect<br>
>level over the next decade. (My own median estimate ends up annoyingly<br>
>close to Kurzweil's estimate for the 2040s... )<br>
<br>
>Do you have a source on how much money countries are pouring into AGI?<br>
>(not just narrow AI)<br><br>I will defer to your expertise in the field, Anders, but will respectfully disagree with your conclusion, take the bet, and go with IBM. I don't disagree because I think you are wrong about present technology or your estimated timeline; I disagree because the motivations and money likely to be invested in speeding up the process are something I don't think many people in the field, or in any associated fields, see coming. <br>
<br>It's true that my view does rest on an assumption about what is going to happen with expenditures, and in answer to your question, I do not have a source for how much companies and countries are presently investing in brain emulation or in AGI. I assume that was a rhetorical question, because there is no way of knowing: much of the work being done, and that will be done, will be done in secret, and because of the array of technologies involved, it would be hard to quantify monetarily even if the entire planet's budget were transparent. <br>
<br>However, I'll approach your question a different way. As an observer of human history and of the present, I note that we are a competitive species: the U.S. had the fastest supercomputer just a few short years ago, until the Chinese claimed that mantle, and then this year the Japanese took it away (again) with their "K" supercomputer. This race is as important to nations and companies as the Cold War itself, because winning it will essentially mean 'everything.' So I can say with 99% confidence that the pace at which humans create a completely artificial brain and/or AGI will blindside even the people working on the projects, 'because' this is not going to be a mostly open-source type of project. Yes, it will include a lot of open sharing, which will also speed up the process, but it will be the heavy financial hitters, on a global scale, that make this happen very soon. <br>
<br> >> Still, I can't really see how waiting for brain-emulation will somehow<br>
>> keep us safer as a species once AGI is actually developed. What factors<br>
>> are being used in the numbers game that you mentioned?<br>
<br>
>Here is a simple game: what probability do you assign to us surviving<br>
>the transition to an AGI world? Call it P1. Once in this world, where we<br>
>have (by assumption) non-malign very smart AGI, what is the probability<br>
>we will survive the invention of brain emulation? Call it P2.<br>
<br>
>Now consider a world where brain emulation comes first. What is the<br>
>chance of surviving that transition? Call it P3. OK, we survived the<br>
>upload transition. Now we invent AGI. What is the chance of surviving it<br>
>in this world? Call it P4.<br>
<br>
>Which is largest, P1*P2 or P3*P4? The first is the chance of a happy<br>
>ending for the AGI first world, the second is the chance of a happy<br>
>ending for the uploading first world.<br>
<br>
>Now, over at FHI most of us tended to assume the existence of nice<br>
>superintelligence would make P2 pretty big - it would help us avoid<br>
>making a mess of the upload transition. But uploads doesn't seem to help<br>
>much with fixing P4, since they are not superintelligent per se (there<br>
>is just a lot more brain power in that world).<br><br>FHI's conclusion would appear to be in line with, and in agreement with, my initial (leading) question: <br><br>"Still, I can't really see how waiting for brain-emulation will somehow<br>
keep us safer as a species once AGI is actually developed. What factors<br>
are being used in the numbers game that you mentioned?"<br><br>When I first asked my question about organizations that support speeding up the development of AGI, I wasn't contrasting it with brain emulation, but since brain emulation has been raised, I agree that AGI should come first. However, the game's conclusion is not something FHI has actually committed to, because it recognizes that the opening proposition is just an assumption, correct? So...why ask the question? Since the opening assumption cannot be verified, the rest of the game's questions become moot, in real terms. <br>
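<br>(Just to be sure we are reading the game the same way, here is the comparison spelled out with purely made-up numbers; they are mine, chosen for illustration, and not FHI's or yours. Mainly it shows that the ordering hinges entirely on the assumed values, which is part of my point about the opening assumption.)<br>
<pre>
# The P1*P2 vs. P3*P4 comparison with made-up probabilities.
# All values below are illustrative assumptions, not FHI estimates.

def happy_ending(p_first_transition, p_second_transition):
    """Chance of surviving both transitions, treating them as independent."""
    return p_first_transition * p_second_transition

# AGI-first world: survive the AGI transition (P1), then brain emulation
# under a non-malign superintelligence (P2, assumed large).
p1, p2 = 0.5, 0.9
# Upload-first world: survive the upload transition (P3), then AGI (P4,
# assumed not much improved by the presence of uploads).
p3, p4 = 0.8, 0.5

print("AGI first:    ", happy_ending(p1, p2))  # 0.45
print("Uploads first:", happy_ending(p3, p4))  # 0.4
# Nudge any of these assumed values a little and the ordering flips.
</pre>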
<br>Personally, I don't make assumptions that imply AGI will mean a nice, soft take-off or a malign, hard take-off (or crash landing, whichever you prefer). I just think it is our only real chance of evolving, and quite frankly, if we do create brain emulations that we can upload into, haven't we essentially entered the domain of AGI at that point? Won't that really be the opening stage of AGI? Secondly, and far more importantly, shouldn't we be thinking about the evolution of the universe via AGI as not only natural for our species but, if there is such a thing as free will, the right thing to do as well?<br>
<br>> > What is the general thinking about why we need to wait for full-brain<br>
>> emulation before we can start uploading our brains (and hopefully<br>
>> bodies)? Even if we must wait, is the idea that if we can create<br>
>> artificial brains that are patterned on each of our individual brains,<br>
>> so that we can have a precise upload, that the AGIans will somehow have<br>
>> a different view about what they will choose to do with a fully<br>
>> Transhumanist species?<br>
<br>
>I don't think you would be satisfied with a chatbot based on your online<br>
>writing or even spoken speech patterns, right?<br><br>>You shouldn't try to upload your brain before we have full-brain<br>
>emulation since the methods are likely going to be 1) destructive, 2)<br>
>have to throw away information during processing due to storage<br>
>constraints until at least mid-century, 3) we will not have evidence it<br>
>works before it actually works. Of course, some of us might have no<br>
>choice because we are frozen in liquid nitrogen...<br><br>That's correct, I wouldn't be satisfied with a chatbot, but I don't see how the two have anything to do with each other. If the brain emulation were near enough to what I was, I wouldn't know the difference once I uploaded, and once I started evolving, it wouldn't matter that much from that point on. As in all forms of evolution, a species will gain things to its advantage and lose things that are no longer advantageous to it. Why should the concept be any different here?<br>
<br>>> more cautious?' I don't mean to put words in your mouth, but I don't<br>
>>see what else you could mean.<br>
<br>
>I tell them about their great forebears like Simon, Minsky and McCarthy,<br>
>and how they honestly believed they would achieve human level and beyond<br>
>AI within their own active research careers. Then I point out that none<br>
>of them - or anybody else for that matter - seemed to have had *any*<br>
>safety concerns about the project. Despite (or perhaps because of)<br>
>fictional safety concerns *predating* the field.<br><br>That they were mistaken in their timelines doesn't mean everyone at all times will be mistaken. All things being equal, and 'unhindered,' someone will soon get the general date right. <br>
<br>
>Another thing I suggest is that they chat with philosophers more. OK,<br>
>that might seriously slow down anybody :-) But it is surprising how many<br>
>scientists do elementary methodological, ethical or epistemological<br>
>mistakes about their own research - discussing what you do with a<br>
>friendly philosopher can be quite constructive (and might bring the<br>
>philosopher a bit more in tune with real research).<br><br>Agreed! :)<br>
<br>
<br>
>> May I ask if you've been polling these researchers, or have a general<br>
>> idea as to what the percentages of them working on AGI think regarding<br>
>> the four options I presented (expecting, of course, that since they are<br>
>>working on the creation of them, few are likely in support of either the<br>
>> stop, or reversing options, but rather the other two choices of go<br>
>>slower or speed up)?<br>
<br>
>I have not done any polling like that, but we did do a survey at an AI<br>
>conference we arranged a year ago:<br>
<a href="http://www.fhi.ox.ac.uk/news/2011/?a=21516" target="_blank">>http://www.fhi.ox.ac.uk/news/2011/?a=21516</a><br>
<br>
>Fairly optimistic about AGI soonish (2060), concerned with the<br>
>consequences (unlikely to be just business as usual), all over the place<br>
>in regards to methodology, and cautious about whether Watson would win<br>
>(the survey was done before the win).<br><br>It is interesting that you give the year 2060 as your optimistic date. Remember what I wrote about our "History Channel" discussing Doomsday scenarios? One program claimed that Nostradamus mentioned the date 2060 in his writings as well. I have zero belief in his predictions, but the coincidence between the date mentioned on the show and your timeline is worthy of note (for trivia's sake only). :)<br>
<br>
--<br>
>Anders Sandberg<br>
>Future of Humanity Institute<br>
>Oxford University<br><br>Thanks for the discussion, Anders. I am getting offline now, but I wish to follow up on the other comments made regarding my question. I don't intend for it to be another five days.<br><br>Best,<br>
Kevin George Haskell,<br>C.H.A.R.T.S (Capitalism, Health, Age-Reversal, Transhumanism, and Singularity)<br><a href="mailto:singulibertarians@groups.facebook.com">singulibertarians@groups.facebook.com</a> (Facebook requires membership to access this address)<br>