[ExI] Hard Takeoff

Michael Anissimov michaelanissimov at gmail.com
Tue Nov 16 02:56:50 UTC 2010


Hi Samantha,

2010/11/15 Samantha Atkins <sjatkins at mac.com>

>
> While it "could" do this it is not at all certain that it would.  Humans
> can improve themselves even today in a variety of ways but very few take the
> trouble.  An AGI that is not autonomous would do what it was told to do by
> its owners who may or may not have improving it drastically as a high
> priority.
>

Quoting Omohundro:

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems. In an earlier paper we used von Neumann’s mathematical theory of microeconomics to analyze the likely behavior of any sufficiently advanced artificial intelligence (AI) system. This paper presents those arguments in a more intuitive and succinct way and expands on some of the ramifications.


> Possibly, depending on its long term memory and integration model.  If it
> came from human brain emulation this is less certain.
>

I was assuming AGI, not a simulation, but yeah.  It just seems likely that
AGI would be able to stay awake perpetually, though not entirely certain.
 It seems like this would be a priority upgrade for early-stage AGIs.


> This very much depends on the brain architecture.  If too close a copy of
> human brains this may not be the case.
>

Assuming AGI.


> 4.  overclock helpful modules on-the-fly
>
>
> Not sure what you mean by this but this is very much a question of specific
> architecture rather than general AGI.
>

I doubt it would be hard to implement.  You can overclock specific modules
in chess AI or Brood War AI today.  It means giving a specific module extra
computing power.  It would be like temporarily shifting your auditory cortex
tissue to take up visual cortex processing tasks to determine the trajectory
of an incoming projectile.
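
In code terms, here is a toy sketch of what I mean (my own illustration, not taken from any actual chess or Brood War engine, and the class and function names are made up): "overclocking" a module just means handing one subsystem a larger share of the worker pool when its output temporarily matters most.

# Toy sketch only: "overclocking" a module here means temporarily giving one
# subsystem a larger worker pool, i.e. extra computing power, on the fly.
from concurrent.futures import ThreadPoolExecutor

class SearchModule:
    """Stand-in for one module of a game-playing AI."""

    def __init__(self, workers: int = 2):
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def boost(self, workers: int) -> None:
        # Swap in a bigger pool at runtime; the old one drains in the background.
        old, self.pool = self.pool, ThreadPoolExecutor(max_workers=workers)
        old.shutdown(wait=False)

    def evaluate(self, positions):
        # Fan evaluation work out across however many workers we currently have.
        return list(self.pool.map(self._score, positions))

    @staticmethod
    def _score(position) -> float:
        return float(sum(position))  # placeholder for a real evaluation function

search = SearchModule(workers=2)
scores = search.evaluate([[1, 2, 3], [4, 5, 6]])
search.boost(workers=16)  # critical moment: give this module far more compute

The point is only that shifting compute between modules is an ordinary engineering operation for software, whereas brains cannot reassign cortex on demand.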


> What does this mean?  Integrate other systems?  How? To what level?  Humans
> do some degree of this all the time.
>

The human brain stays at a roughly constant 100 billion neurons and a weight
of 3 lb.  I mean directly absorbing computing power into the brain.


> It could be so constructed but may or may not in fact be so constructed.
>

Self-improvement would likely be an emergent property, for the reasons given
in the Omohundro paper.  So even if it weren't built in deliberately from the
start, self-improvement is an ability that would be likely to emerge on the
road to human-equivalence.


> I am not sure exactly what is meant by this.  That it is very very good at
> understanding code amounts to a 'modality'?
>

Lizards have brain modules highly adapted to evaluating the fitness of
fellow lizards for fighting or mating.  Chimpanzees have the same modules,
but with respect to other chimpanzees.  Trilobites probably had specialized
neural hardware for doing the same with other trilobites.

Some animals can smell very well, but have poor hearing and sight.  Or vice
versa.  The reason is that they have dedicated chunks of brainware
that evolved to deal with sensory data from a particular channel.  Humans
have HUGE visual cortex areas, larger than the brains of mice.  We can see
in more colors than most mammals.  The way a human sees is different from
the way an eagle sees, because we have different eyes, brains, and visual
processing centers.

The human visual cortex takes in gigabytes (or something like that) of
information per second, and processes it down to edges, corners, distance
estimates, salient objects, colors, and many other important features.  To a
slug, a view of a city looks like practically nothing, because its eyes are
crap, its brain is crap, and its visual processing centers are crap.  To a
human, it can have a thousand different features and meanings.

We didn't evolve to process code.  We probably did evolve to handle simple
mathematics and, at some level, the idea of logical processes, so we apply
that machinery to code.

Humans are not general-purpose intellects, capable of doing anything
satisfactorily.  Compared to potential superintelligences, we are idiots.
 Future superintelligences will look back on humans and marvel that we could
write any code at all.  After all, we were designed mainly to mess around
with each other, kill animals, forage, retain our status, and have sex.
 Most human beings alive today are more or less incapable of coding.
 Imagine if human beings had evolved in an environment for millions of years
where we were murdered and prevented from reproducing if our coding
abilities fell short.  Create an environment like that, and you might have a
situation promoting the evolution of specific brain centers for visualizing
and writing computer code.


> This assumes an ability to integrate random other computers that I do not
> think is at all a given.
>

All it requires is that the code can be parallelized.
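
Concretely, as a toy sketch (the function and numbers are illustrative, not any real AGI workload): once the work is split into independent chunks, absorbing more processors just means mapping the same function over more chunks at once.

# Toy sketch of the parallelization point: independent chunks of "thinking"
# can be farmed out to as many processes (or machines) as are available.
from multiprocessing import Pool

def think_about(chunk):
    # Placeholder for an independent sub-computation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool(processes=4) as pool:   # more cores -> just raise this number
        results = pool.map(think_about, chunks)
    print(sum(results))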


> This is simple economics.  Most humans don't take advantage of the many
> such positive sum activities they can perform today without such
> self-copying abilities.  So why is it certain that an AGI would?
>

Not certain, but pretty damn likely, because it could probably perform tasks
without getting bored, and would have innate drives towards increasing its
power and protecting/implementing its utility function.


> There is an interesting debate to be had here, about the details of the
> plausibility of the arguments, but most transhumanists just seem to dismiss
> the conversation out of hand, or don't know that there's a conversation to
> have.
>
> Statements about "most transhumanists" are fraught with many problems.
>

Most of the 500+ transhumanists I have talked to.

> http://singinst.org/upload/LOGI//seedAI.html
>
> Prediction: most comments in response to this post will again ignore the
> specific points in favor of a rapid takeoff and simply dismiss the idea
> based on low intuitive plausibility.
>
>
> Well, that helps a lot.  It is a form of calling those who disagree lazy or
> stupid before they even voice their disagreement.
>

I like to get to the top of the Disagreement Pyramid quickly, and that seems
very close to impossible when transhumanists discuss the Singularity,
particularly the idea of hard takeoff.  As someone arguing for that idea, I
ask critics to address the central point rather than play *ad hominem* with
me.  You're addressing the points -- thanks!

http://www.acceleratingfuture.com/michael/blog/images/disagreement-hierarchy.jpg


> No, you don't have air tight evidence.  You have a reasonable argument for
> it.
>

It depends on what specifically is being argued.

-- 
michael.anissimov at singinst.org
Singularity Institute
Media Director