[ExI] Organizations to "Speed Up" Creation of AGI?
Anders Sandberg
anders at aleph.se
Sun Dec 25 23:11:35 UTC 2011
Kevin G Haskell wrote:
>
> >For a fun case where the probability of a large set of existential risks
> >that includes totally unknown cosmic disasters can be bounded, see
> >http://arxiv.org/abs/astro-ph/0512204
>
> Only 5 listed here? In the States, at least in my area, we have
> several stations called "The History Channel." There is the original
> station, the Military station, and H2 (which was until recently
> History International). While the names of the channels would
> indicate that they are all about history, they also discuss various
> Doomsday scenarios on a regular basis.
Yup. It is very much a channel for "history" - I don't think it has
much in the way of scientific validity. A channel that has Giorgio A.
Tsoukalos claiming "aliens did it" doesn't impress me much.
The problem with seriously studying existential risk is that most people
do treat it as entertainment rather than something related to the real
world. This is true even for academics - there are about three times as
many scientific papers about dung-beetle reproduction as about human
extinction. Not good evidence for the rationality of our species, even
though the beetles are lovely.
If I may boast a bit about my institute, we have a pretty good (IMHO)
book about global catastrophic risks and existential risks.
http://www.global-catastrophic-risks.com/
But there should be more.
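For a flavor of the arithmetic behind the paper linked above, here is a
deliberately naive sketch in Python. It treats sterilizing cosmic
disasters as a Poisson process and asks which rates are compatible with
our roughly 4 Gyr of survival; the survival time and confidence level
are illustrative assumptions, not the paper's numbers.

import math

T = 4.0        # Gyr of unbroken survival of life on Earth (rough figure)
alpha = 0.001  # how much of a fluke we allow ourselves to be (99.9%)

# If disasters arrive at rate L per Gyr, P(survive T) = exp(-L*T).
# Rates that would make our survival a <0.1% fluke can be rejected:
L_max = -math.log(alpha) / T
print("naive bound: at most %.2f sterilizing events per Gyr" % L_max)
print("i.e. no more than one expected event per %.2f Gyr" % (1.0 / L_max))

Since observers always find themselves on surviving planets, this naive
rejection is anthropically suspect - which is exactly why the paper's
real contribution is to use planetary formation times instead.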
> I can only reiterate the idea of putting ourselves in the AGI's
> shoes, and I don't think I would be too pleased that they considered
> me a threat great enough to delay me and all that I might provide
> humans and the universe, over all the other existential threats
> combined.
But now you are assuming the AGI thinks like a human. Humans are social
mammals that care a lot about what other humans think of them and about
their ability to form future alliances, and that have built-in
emotional macros for social reactions. An AGI is unlikely to have these
properties unless you manage to build them into it. The AGI might just
note that you made an attempt to secure it and, even though you failed,
treat this merely as information to use in future decisions, nothing to
lash back about.
> From a human standpoint, I can only say that we are a species that
> has proven one axiom true and valuable over and over: "No guts, no
> glory." Our species has reached this point because of both our
> incredible curiosity and our ability to investigate and make tools
> at ever higher levels in order to satiate that curiosity. That is
> what has made our species so unique...up until 'soon.'
Evolutionary strategies that are successful can lead into dead ends.
Apparently several lineages of carnivores during the Triassic grew
larger, gaining advantages as predators - and then became vulnerable to
temporary ecological noise and died out... only to be replaced by the
next lineage. We could be in the same situation - a daring strategy
might on average work well, yet raise the risk over time until it
causes a disaster. It has happened again and again in finance, as Taleb
has pointed out.
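A minimal Monte Carlo sketch of that point (all numbers made up for
illustration): a strategy that earns a small steady gain each period
but blows up completely with small probability has a positive expected
return per period, yet ruin becomes near-certain over enough periods.

import random

GAIN, P_BLOWUP = 0.01, 0.005   # +1% per period, 0.5% chance of wipeout
PERIODS, TRIALS = 1000, 10000

ruined = 0
for _ in range(TRIALS):
    wealth = 1.0
    for _ in range(PERIODS):
        if random.random() < P_BLOWUP:
            wealth = 0.0   # the rare disaster wipes out everything
            break
        wealth *= 1.0 + GAIN
    if wealth == 0.0:
        ruined += 1

# Per-period expectation is positive (0.995 * 1.01 ~ +0.5%), and yet:
print("ruined in %.1f%% of runs" % (100.0 * ruined / TRIALS))
# analytically, P(ruin) = 1 - (1 - P_BLOWUP)**PERIODS ~ 99.3%

The growing predator lineage is in the same position: each step up is
locally advantageous, while exposure to the rare shock compounds.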
>
> >> Once a brain is emulated, a process that companies like IBM have
> >> promised to complete in 10 years because of competitive concerns,
> >> not to mention all of the other companies and countries pouring
> >> massive amounts of money in for the same reason, the probability
> >> is that various companies and countries are also pouring ever
> >> larger sums of money into developing AGI, especially since many
> >> of the technologies overlap. If brain-emulation is achieved in 10
> >> years or less, then AGI can't be far behind.
>
> >Ah, you believe in marketing. I have a bridge to sell you cheaply... :-)
>
> It depends on who is doing the marketing. If you are IBM, based on
> your track record over the past 100 years (as of this year) of an
> almost endless array of computer development and innovations, up to
> and including "Watson," then I may very well be interested in that
> bridge. ;)
IBM's track record at predicting the future is spotty. Remember their
views on the PC market, and how they let themselves be outsmarted by
those kids at Microsoft?
Besides the obvious enthusiasm bias when dealing with one's own field,
what researchers like Henry Markram claim is going to be achieved
"within 10 years" turns out to be much messier when you actually
inspect the claims and the state of the research.
I think that once you get a demonstration of a brain-to-emulation
pipeline, interest will explode, but right now there are no such
projects - the IBM project aims at neuromorphic computing, Markram has
done column simulations and is hoping to scale them up, and so on.
>
> It's true that why I think this 'does' rest on an assumption about
> what is going to happen regarding expenditures, and in answer to
> your question, I do not have a source for how much companies and
> countries are presently investing in brain emulation or in AGI. I
> assume that was a rhetorical question, because there is no way of
> knowing: much of the work being done, and that will be done, will be
> in secret, and given the array of technologies involved, it would be
> hard to quantify monetarily even if the entire planet's budget were
> transparent.
Maybe, but from my knowledge of the AI field there is plenty of funding
for particular applications but very little support for general AI. Even
the intelligence applications are more text mining and decision support
than thinking.
>
> However, I'll approach your question a different way. As an observer
> of human history and the present, given that we are a competitive
> species (the U.S. had the fastest supercomputer just a few short
> years ago, until the Chinese claimed that mantle, up until this
> year, when the Japanese took it away (again) with their "K"
> supercomputer, for instance), I can guarantee, with an error margin
> of 1%, that there is a 99% chance - because this race is as
> important to nations and companies as was the Cold War itself, and
> because winning it will essentially mean 'everything' - that the
> pace at which humans create a completely artificial brain and/or
> AGI will blind-side even people working on the projects, 'because'
> this is not going to be a 'mostly' open-source type of project. Yes,
> it will include a lot of open sharing, which will also speed up the
> process, but it will be the heavy financial hitters, on a global
> scale, that will make this happen very soon.
This makes sense once governments think there is anything to it. A bit
like all major governments agree on the importance of dealing with
climate change, right? Or how the Soviet Union kept pace with the US in
computing?
While rational governments or parts of governments might race for
superintelligence, do not underestimate the lack of foresight and the
incompetence one can achieve too. Looking at the history of great
strategic projects - radar, nuclear weapons, rocketry, etc. - I am
struck by how often they got derailed by stupidity. The German nuclear
project was hopeless because the key scientists had either been
suppressed by the Deutsche Physik movement or (in the case of
Uranverein 1) simply been sent to the front. Key parts of the Soviet
programs were lost in political purges. Different radar projects in the
US military sabotaged each other, and so on.
--
Anders Sandberg,
Future of Humanity Institute
Philosophy Faculty of Oxford University