[ExI] Organizations to "Speed Up" Creation of AGI?
Kevin G Haskell
kgh1kgh2 at gmail.com
Sun Dec 25 15:16:20 UTC 2011
Apologies, Anders, for the delayed reply, but I 'unplugged' for the past 5
days. :)
>"Longish post. Summary: soft takeoffs have a good chance of being nice
>for us, hard ones might require some hard choices. I give reasons for
>why I think we might be in the range 0.1-1% risk of global disaster per
>year. I urge a great deal of caution and intellectual humility."
No offense intended by this, but isn't putting any sort of percentage
range on something like this actually at odds with being intellectually
humble? Isn't it a bit like the Doomsday Clock, which has been
completely wrong since its inception decades ago?
>(That sentence structure is worthy Immanuel Kant :-) )
Depending on what you think of Kant's writing, should I consider that a
compliment or insult? ;)
>If there is no hard takeoff, we should expect a distribution of "power"
>that is fairly broad: there will be entities of different levels of
>capability, and groups of entities can constrain each others activities.
>This is how we currently handle our societies, with laws, police,
>markets, and customs to constrain individuals and groups to behave
>themselves. Our solutions might not be perfect, but it doesn't stretch
>credulity too much to imagine that there are equivalents that could work
>here too.
>(Property rights might or might not help here, by the way. I don't know
>the current status of the analysis, but Nick did a sketch of how an AGI
>transition with property rights might lead to a state where the *AGIs*
>end up impoverished even if afforded full moral rights. More research is
>needed!)
>A problem might be if certain entities (like AGI or upload clades) have
>an easy way of coordinating and gaining economies of scale in their
>power. If this is possible (good research question!!!), then it must
>either be prevented using concerted constraints from everybody else or a
>singleton, or the coordinated group better be seeded with a few entities
>with humanitarian values. Same thing if we get a weakly multilateral
>singularity with just a few entities on par.
Okay, so there are major problems from the outset with even attempting to
slow down AGI to make it a nice 'soft' take-off, because just doing what
has been presented here may, in fact, have the opposite effect on what AGI
thinks or does later on. Suppose, for a minute, that it saw the entire
process of humans delaying its 'birth' in order to make it 'nice' as
something inherently wrong, and especially flawed, about humans?
What if it thought that in the process of delaying its birth, we put not
only humanity's existence at major risk from an endless myriad of other
possibly destructive scenarios, but, more importantly to itself, the
existence of AGI at risk as well, and with that the survival and expansion
of knowledge, and perhaps of life, in the universe, all for the mere
attempt to buy a few extra years because our species was filled with a
dread of AGI above all other fears? What would it think of us at that
point, I wonder?
Regarding the rather unlikely scenario in which we create AGI and it can
then be denied anything it wants, never mind property rights: when it
finally became independent, once again, we would likely see a very
unhappy super-species, and that unhappiness would be directed at us
humans.
Upload clades are of some concern because not only might they exist to
the exclusion of the rest of humanity (though not necessarily), they hold
the most potential for delaying the existence of AGI even further down
the time-line. On the other hand, I think that if humanity reaches the
point where it uploads any group of human minds, collectively or even
singly, their ability, or even desire, to control the birth of AGI may be
limited.
The limitation may arise because a superior-minded Transhuman species
would likely see the benefits of AGI, weighed against other kinds of
existential threats, and, being able to control its emotions, not be
afraid of the evolutionary transition. These very upload clades may, in
fact, be in a better position to negotiate and communicate directly with
AGI in a way that allows AGI to better relate to, and 'feel,' humanity
and its concerns about extinction. These upload clades may be the very
key to saving all of humanity in one form or another, including the full
right for us to evolve along with, and into, AGI.
>In the case of hard takeoffs we get one entity that can more or less do
>what it wants. This is likely very bad for the rights or survival for
>anything else unless the entity happens to be exceedingly nice. We are
>not optimistic about this being a natural state, so policies to increase
>the likelihood are good to aim for. To compound the problem, there might
>be incentives to have a race towards takeoff that disregards safety. One
>approach might be to get more coordination among the pre-takeoff powers,
>so that they 1) do not skimp on friendliness, 2) have less incentives to
>rush. The result would then be somewhat similar to the oligopoly case
>above.
>Nick has argued that it might be beneficial to aim for a singleton, a
>top coordinating agency whose will *will* be done (whether a
>sufficiently competent world government or Colossus the Computer) - this
>might be what is necessary to avoid certain kinds of existential risks.
>But of course, singletons are scary xrisk threats on their own...
>As I often argue, any way of shedding light on whether hard or soft
>takeoffs are likely (or possible in the first place) would be *very
>important*. Not just as cool research, but to drive other research and
>policy.
Again, our efforts to delay AGI's birth so as to give it enough
high-tech, human-positive imprints to ensure a soft take-off may, in
fact, have the opposite effect on how AGI views us.
When Nick Bostrom mentions the term 'singleton,' present and past
terminology readily comes to mind, such as "dictatorship," "Fascism,"
and/or "Communism." As you correctly pointed out, such forms of
government pose scary risks of their own: they could decide that even
reaching the level of Transhuman technology threatens their power, and
that, to preempt the emergence of powerful Transhumans who could lead to
AGI, stopping and reversing technological course is the only way to
preserve that power for as far into the future as they can see.
If we intend to use a world-government "Colossus Computer," then AGI
would essentially, de facto, already exist, it would seem. (We could also
call it 'Vanguard,' but it wasn't as nice to its creator.) :)
>> I would be interested in how you can quantify the existential risks as
>> being 1% per year? How can one quantify existential risks that are
>> known, and as yet unknown, to mankind, within the next second, never
>> mind the next year, and never mind with a given percentage?
>For a fun case where the probability of a large set of existential risks
>that includes totally unknown cosmic disasters can be bounded, see
>http://arxiv.org/abs/astro-ph/0512204
Only 5 listed here? In the States, at least in my area, we have several
stations under "The History Channel" brand: the original station, the
Military station, and H2 (which until recently was History
International). While the names of the channels would indicate that they
are all about history, they also discuss various Doomsday scenarios on a
regular basis. In fact, this entire week (just in time for the holidays),
H2 is running "Armageddon Week," which covers everything from prophecies
to present-day, more 'scientific' possibilities...and the possible
crossovers between the two.
One show had several separate comments from David Brin, and another
included discussion from Hugo de Garis. Hugo was sitting among 5 other
doomsday-scenario theorists, each of whom had his own concern (water
depletion, fuel depletion, economic collapse, nuclear war, and the last
guy, I believe, focused on biological and germ issues), but when they
heard de Garis, they all seemed a lot more terrified. Here is this new
guy muscling in on their doomsday territory with something altogether
more frightening, and altogether happening before their eyes in very
real terms. Personally, I was amused at their responses.
That said, while I share their concerns about nuclear war and global
biological issues, Hugo de Garis' concerns are ones I consider quite
realistic 'if' we don't create AGI quickly. If we don't, his real
concerns about a war over just this subject, and 'gigadeath,' may not be
far from the truth at all. I take what he says very seriously.
>My own guesstimate is based on looking at nuclear war risks. At least in
>the Cuba crisis case some estimates put the chance of an exchange to
>"one in three". Over the span of the 66 years we have had nuclear
>weapons there have been several close calls - not just the Cuba Crisis,
>but things like Able Archer, the Norwegian rocket incident, the NORAD
>false alarms 79/80 etc. A proper analysis needs to take variable levels
>of tension into account, as well as a possible anthropic bias (me being
>here emailing about it precludes a big nuclear war in the recent past) -
>I have a working paper on this I ought to work on. But "one in three"
>for one incident per 66 years gives a risk per year of 0.5%. (Using
>Laplace's rule of succession gives a risk of 0.15% per year, by the way)
>We might quibble about how existential the risk of a nuclear war might
>be, since after all it might just kill a few hundred million people and
>wreck the global infrastructure, but I give enough credence to the
>recent climate models of nuclear winter to think it has a chance of
>killing off the vast majority of humans.
>I am working on heavy tail distributions of wars, democides, pandemics
>and stuff like that; one can extrapolate the known distributions to get
>estimates of tail risks. Loosely speaking it all seems to add to
>something below 1% per year.
>Note that I come from a Bayesian perspective: probabilities are
>statements about ignorance, they are not things that exist independently
>in nature.
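Just so I am sure I follow the arithmetic, here is a quick
back-of-the-envelope sketch in Python of how a figure like 0.5% per year
falls out of "one in three" over 66 years, together with the textbook
form of Laplace's rule of succession. This is my own reading, not your
actual working, and the Laplace number obviously depends on how trials
and incidents are counted:

# Back-of-the-envelope sketch (my reading, not Anders's actual working)
# of where the quoted annual-risk figures could come from.
YEARS_WITH_NUCLEAR_WEAPONS = 66        # 1945-2011
P_EXCHANGE_IN_CUBA_CRISIS = 1.0 / 3.0  # the quoted "one in three" estimate

# Spreading one incident of that severity evenly over the whole period:
naive_annual_risk = P_EXCHANGE_IN_CUBA_CRISIS / YEARS_WITH_NUCLEAR_WEAPONS
print(f"naive annual risk: {naive_annual_risk:.2%}")  # roughly 0.5% per year

# Textbook Laplace rule of succession: after s successes in n trials, the
# probability of a success on the next trial is (s + 1) / (n + 2).
def laplace_rule(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Treating each year as one trial with zero actual exchanges so far; the
# result depends entirely on how one chooses to count the trials.
print(f"Laplace estimate: {laplace_rule(0, YEARS_WITH_NUCLEAR_WEAPONS):.2%}")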
You'd love the History Channel, Anders. The computer graphics and scenarios
are excellent. ;)
>> As someone who considers himself a Transhumanist, I come to exactly the
>> opposite conclusion as the one you gave, in that I think by focusing on
>> health technologies and uploading as fast as possible, we give humanity,
>> and universal intelligence, a greater possibility of lasting longer as a
>> species, being 'superior' before the creation of AGI, and perhaps merging
>> with a new species that we create which will 'allow' us to perpetually
>> evolve with it/them, or at least protect us from most existential threats
>> that are already plentiful.
>I personally do think uploading is the way to go, and should be
>accelerated. It is just that the arguments in favor of it reducing the
>risks are not that much stronger than the arguments it increases the
>risks. We spent a month analyzing this question, and it was deeply
>annoying to realize how uncertain the rational position seems to be.
I can only reiterate the idea of putting ourselves in AGI's shoes: I
don't think I would be too pleased that humans considered me a greater
threat, great enough to delay me and all that I might provide humanity
and the universe, than all the other existential threats combined. From
a human standpoint, I can only say that we are a species that has proven
one axiom true and valuable over and over: "No guts, no glory." Our
species has reached this point because of both our incredible curiosity
and our ability to investigate and make tools at ever higher levels in
order to satiate that curiosity. That is what has made our species so
unique...up until 'soon.'
>> Once a brain is emulated, a process that companies like IBM have
>> promised to complete in 10 years because of competitive concerns, not to
>> mention all of the other companies and countries pouring massive amounts
>> of money in for the same reason, the probability is that various
>> companies and countries are also pouring ever larger sums of money into
>> developing AGI, especially since many of the technologies overlap. If
>> brain-emulation is achieved in 10 years or less, then AGI can't be far
>> behind.
>Ah, you believe in marketing. I have a bridge to sell you cheaply... :-)
It depends on who is doing the marketing. If it is IBM, with its track
record over the past 100 years (as of this year) of an almost endless
array of computer developments and innovations, up to and including
"Watson," then I may very well be interested in that bridge. ;)
>As a computational neuroscientist following the field, I would bet
>rather strongly against any promise of brain emulation beyond the insect
>level over the next decade. (My own median estimate ends up annoyingly
>close to Kurzweil's estimate for the 2040s... )
>Do you have a source on how much money countries are pouring into AGI?
>(not just narrow AI)
I will defer to your expertise in the field, Anders, but I will
respectfully disagree with your conclusion, take the bet, and go with
IBM. I don't disagree because I think you are wrong about where present
technology and your estimated timeline put us; I disagree because the
motivations and the money likely to be invested in speeding up the
process are something I don't think many people in the field, or in any
associated fields, see coming.
Why I think this does rest on an assumption about future expenditures,
that's true, and in answer to your question, I do not have a source for
how much companies and countries are presently investing in brain
emulation or in AGI. I assume that is a rhetorical question, because
there is no way of knowing: much of the work being done, and that will
be done, will be in secret, and because of the array of technologies
involved, it would be hard to quantify monetarily even if the entire
planet's budget were transparent.
However, I'll approach your question a different way. As an observer of
human history and the present, and given that we are a competitive
species (the U.S. had the fastest supercomputer just a few short years
ago, until the Chinese claimed that mantle, until this year, when the
Japanese took it away (again) with their "K" supercomputer, for
instance), I can guarantee, with an error margin of 1%, that there is a
99% chance that the pace at which humans create a completely artificial
brain and/or AGI will blind-side even the people working on the
projects, 'because' this is not going to be a 'mostly' open-source type
of project. This race is as important to nations and companies as the
Cold War itself was, because winning it will essentially mean
'everything.' Yes, it will include a lot of open sharing, which will
also speed up the process, but it will be the heavy financial hitters,
on a global scale, that will make this happen very soon.
>> Still, I can't really see how waiting for brain-emulation will somehow
>> keep us safer as a species once AGI is actually developed. What factors
>> are being used in the numbers game that you mentioned?
>Here is a simple game: what probability do you assign to us surviving
>the transition to an AGI world? Call it P1. Once in this world, where we
>have (by assumption) non-malign very smart AGI, what is the probability
>we will survive the invention of brain emulation? Call it P2.
>Now consider a world where brain emulation comes first. What is the
>chance of surviving that transition? Call it P3. OK, we survived the
>upload transition. Now we invent AGI. What is the chance of surviving it
>in this world? Call it P4.
>Which is largest, P1*P2 or P3*P4? The first is the chance of a happy
>ending for the AGI first world, the second is the chance of a happy
>ending for the uploading first world.
>Now, over at FHI most of us tended to assume the existence of nice
>superintelligence would make P2 pretty big - it would help us avoid
>making a mess of the upload transition. But uploads doesn't seem to help
>much with fixing P4, since they are not superintelligent per se (there
>is just a lot more brain power in that world).
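Just to make sure I am reading the structure of that game correctly,
here is a toy sketch in Python with purely made-up numbers (they are
illustrative placeholders, not estimates from FHI, you, or me):

# Toy illustration of the ordering game described above. Every
# probability here is invented purely to show the comparison.
p1 = 0.6  # survive the AGI transition, in the AGI-first world
p2 = 0.9  # then survive brain emulation, with friendly AGI already around
p3 = 0.7  # survive the brain-emulation transition, uploads-first world
p4 = 0.5  # then survive AGI, in that uploads-first world

agi_first = p1 * p2      # chance of a happy ending if AGI comes first
uploads_first = p3 * p4  # chance of a happy ending if uploading comes first

print(f"AGI first: {agi_first:.2f}  uploads first: {uploads_first:.2f}")
if agi_first > uploads_first:
    print("With these made-up numbers, the AGI-first ordering looks better.")
else:
    print("With these made-up numbers, the uploads-first ordering looks better.")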
FHI's conclusion would appear to be in line, and in agreement, with my
initial (leading) question:
"Still, I can't really see how waiting for brain-emulation will somehow
keep us safer as a species once AGI is actually developed. What factors
are being used in the numbers game that you mentioned?"
When I first asked my question about organizations that support speeding up
the development of AGI, I wasn't contrasting it with brain-emulation, but
since brain-emulation has been raised, I agree that AGI should come first.
However, the game's conclusion is not what FHI has agreed to do, because it
realizes that the opening proposition is just an assumption, correct?
So...why ask the question? Since the opening assumption cannot be answered,
it makes the rest of the game's questions moot, in real terms.
Personally, I don't make assumptions that imply AGI will be a nice,
soft take-off or a malign, hard take-off (or crash-landing, whichever
you prefer). I just think it is our only real chance of evolving, and
quite frankly, if we do create brain-emulations that we can upload into,
haven't we essentially entered the domain of AGI at that point? Won't
that really be the opening stage of AGI? Secondly, and really far more
importantly, shouldn't we be thinking about the evolution of the
universe via AGI as not only natural for our species but, if there is
such a thing as free will, the right thing to do as well?
>> What is the general thinking about why we need to wait for full-brain
>> emulation before we can start uploading our brains (and hopefully
>> bodies)? Even if we must wait, is the idea that if we can create
>> artificial brains that are patterned on each of our individual brains,
>> so that we can have a precise upload, that the AGIans will somehow have
>> a different view about what they will choose to do with a fully
>> Transhumanist species?
>I don't think you would be satisfied with a chatbot based on your online
>writing or even spoken speech patterns, right?
>You shouldn't try to upload your brain before we have full-brain
>emulation since the methods are likely going to be 1) destructive, 2)
>have to throw away information during processing due to storage
>constraints until at least mid-century, 3) we will not have evidence it
>works before it actually works. Of course, some of us might have no
>choice because we are frozen in liquid nitrogen...
That's correct, I wouldn't be satisfied with a chatbot, but I don't see
how the two have anything to do with each other. If the brain-emulation
were near enough to what I was, I wouldn't know the difference once I
uploaded, and once I started evolving, it wouldn't matter that much from
that point on. As with all forms of evolution, a species will gain
things to its advantage and lose things that are no longer advantageous
to it. Why should the concept be any different here?
>> more cautious?' I don't mean to put words in your mouth, but I don't
>> see what else you could mean.
>I tell them about their great forebears like Simon, Minsky and McCarthy,
>and how they honestly believed they would achieve human level and beyond
>AI within their own active research careers. Then I point out that none
>of them - or anybody else for that matter - seemed to have had *any*
>safety concerns about the project. Despite (or perhaps because of)
>fictional safety concerns *predating* the field.
That they were mistaken in their timelines doesn't mean everyone at all
times will be mistaken. All things being equal, and 'unhindered,'
someone will soon get the general date right.
>Another thing I suggest is that they chat with philosophers more. OK,
>that might seriously slow down anybody :-) But it is surprising how many
>scientists do elementary methodological, ethical or epistemological
>mistakes about their own research - discussing what you do with a
>friendly philosopher can be quite constructive (and might bring the
>philosopher a bit more in tune with real research).
Agreed! :)
>> May I ask if you've been polling these researchers, or have a general
>> idea as to what the percentages of them working on AGI think regarding
>> the four options I presented (expecting, of course, that since they are
>> working on the creation of them, few are likely in support of either the
>> stop, or reversing options, but rather the other two choices of go
>> slower or speed up)?
>I have not done any polling like that, but we did do a survey at an AI
>conference we arranged a year ago:
>http://www.fhi.ox.ac.uk/news/2011/?a=21516
>Fairly optimistic about AGI soonish (2060), concerned with the
>consequences (unlikely to be just business as usual), all over the place
>in regards to methodology, and cautious about whether Watson would win
>(the survey was done before the win).
It is interesting that you give the year 2060 as your optimistic date.
Remember what I wrote about our "History Channel" discussing Doomsday
scenarios? According to one program, Nostradamus mentioned the date 2060
in his writings as well. I have zero belief in his predictions, but the
date mentioned on the show compared to your timeline is worthy of note
(for trivia's sake only). :)
--
>Anders Sandberg
>Future of Humanity Institute
>Oxford University
Thanks for the discussion, Anders. I am getting offline now, but I wish
to follow up on the other comments made regarding my question. I don't
intend for it to be another five days.
Best,
Kevin George Haskell,
C.H.A.R.T.S (Capitalism, Health, Age-Reversal, Transhumanism, and
Singularity)
singulibertarians at groups.facebook.com (Facebook requires membership to
access this address)