[ExI] Hard Takeoff
Ben Goertzel
ben at goertzel.org
Fri Nov 19 16:35:23 UTC 2010
Hi all,
I have skimmed this thread and I find that Samantha's views are pretty
similar to mine.
There is a strong argument that a hard takeoff is plausible. This
argument has been known for a long time, and so far as I can tell SIAI
hasn't done much to make it stronger, though they've done a lot to
publicize it. The factors Michael A mentions are certainly part of
this argument...
OTOH, I have not heard any reasonably strong argument that a hard
takeoff is *likely*... from Michael or anyone else. There are simply
too many uncertainties involved, too many fast and loose speculations
about future technologies, to be able to make such an argument.
Whereas, I think there *are* reasonably strong arguments that
transhuman AGI is likely, assuming ongoing overall technological
development.
-- Ben G
2010/11/16 Samantha Atkins <sjatkins at mac.com>:
>
> On Nov 15, 2010, at 6:56 PM, Michael Anissimov wrote:
>
> Hi Samantha,
> 2010/11/15 Samantha Atkins <sjatkins at mac.com>
>>
>> While it "could" do this, it is not at all certain that it would. Humans
>> can improve themselves even today in a variety of ways, but very few take
>> the trouble. An AGI that is not autonomous would do what it was told to do
>> by its owners, who may or may not make improving it drastically a high
>> priority.
>
> Quoting Omohundro:
> http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
> Surely no harm could come from building a chess-playing robot, could it?
> In this paper we argue that such a robot will indeed be dangerous unless
> it is designed very carefully. Without special precautions, it will resist
> being turned off, will try to break into other machines and make copies of
> itself, and will try to acquire resources without regard for anyone else’s
> safety. These potentially harmful behaviors will occur not because they
> were programmed in at the start, but because of the intrinsic nature of
> goal driven systems. In an earlier paper we used von Neumann’s mathematical
> theory of microeconomics to analyze the likely behavior of any sufficiently
> advanced artificial intelligence (AI) system. This paper presents those
> arguments in a more intuitive and succinct way and expands on some of the
> ramifications.
>
>
> I have argued this point (and stronger variants) with Steve. If the AI's
> goals are totally centered on chess playing, then it is extremely unlikely
> that it would diverge along many or all of the possible paths that might
> make it a more powerful chess player. Many, many fields of knowledge could
> possibly make it better at its stated goal, but it would have to be much
> more a generalist than a specialist to notice them and take the time to
> master them. If it could diverge along so many paths, then it would also
> encounter other fields of knowledge, including those for judging the
> relative importance of various values using various methodologies, which
> would tend, if understood, to make it not a single-minded chess-playing
> machine from hell. The argument seems self-defeating.
>
>
>
>> Possibly, depending on its long term memory and integration model. If it
>> came from human brain emulation this is less certain.
>
> I was assuming AGI, not a simulation, but yeah. It just seems likely that
> an AGI would be able to stay awake perpetually, though this is not entirely
> certain. It seems like this would be a priority upgrade for early-stage
> AGIs.
>
>
> One path to AGI is via emulating at least some subsystems of the human
> brain. It is not at all clear to me that this would not also bring in many
> human limitations. For instance, our learning cannot be transferred
> immediately to another person, because learning modifies each of our rather
> individual neural associative patterns. New knowledge, as encoded in the
> human brain, is not in any one discrete place or in some universally,
> instantly useful form. Using a similar learning scheme in an AGI would
> mean that you could not transfer achieved learning very efficiently between
> AGIs. You could only copy whole AGIs.
>>
>> This very much depends on the brain architecture. If it is too close a
>> copy of the human brain, this may not be the case.
>
> Assuming AGI.
>
>>
>> 4. overclock helpful modules on-the-fly
>>
>> Not sure what you mean by this but this is very much a question of
>> specific architecture rather than general AGI.
>
> I doubt it would be hard to implement. You can overclock specific modules
> in chess AI or Brood War AI today. It means giving a specific module extra
> computing power. It would be like temporarily shifting your auditory cortex
> tissue to take up visual cortex processing tasks to determine the trajectory
> of an incoming projectile.
>
>
> I am not sure the analogy holds well though. If the mind is highly
> integrated it is not certain that you could isolate one activity like that
> much more easily than we can in our own brains. Perhaps.
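[Interjecting a concrete illustration: below is a minimal Python sketch of
what "overclocking a module" could mean in practice, i.e. handing one
component a larger share of the compute budget on the fly. The module names
and the "depth" scoring are made up purely for illustration, not taken from
any actual chess or Brood War AI.]

    import concurrent.futures

    def evaluate_position(position, workers):
        # Hypothetical module: pretend analysis depth scales with the number
        # of workers ("clock share") this module has been granted.
        return f"analyzed {position} at depth {4 + workers}"

    def run_modules(budgets, position):
        # budgets maps module name -> worker threads allotted to that module.
        with concurrent.futures.ThreadPoolExecutor(
                max_workers=sum(budgets.values())) as pool:
            futures = {name: pool.submit(evaluate_position, position, share)
                       for name, share in budgets.items()}
            return {name: fut.result() for name, fut in futures.items()}

    # Normal allocation for a routine position...
    print(run_modules({"opening_book": 2, "tactics_search": 2}, "e4 e5"))
    # ...then "overclock" the tactics module on the fly for a critical one.
    print(run_modules({"opening_book": 1, "tactics_search": 7}, "Qxf7+"))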
>>
>> What does this mean? Integrate other systems? How? To what level?
>> Humans do some degree of this all the time.
>
> The human brain stays at a roughly constant 100 billion neurons and a weight
> of 3 lb. I mean directly absorbing computing power into the brain.
>
> I mean that we integrate with computational systems today, albeit through
> slow HCI. Unless you have in mind that the AGI hacks the systems around it,
> most of the computation going on on most of that hardware has nothing to do
> with the AGI, and is written in such a way that it cannot communicate well
> even with other dumb programs, or with other instances of the same program
> on other machines. It is also not certain, and is plausibly unlikely, that
> AGIs run on general-purpose computers. I do grant, of course, that with the
> above caveat an AGI can interface to a computer much more efficiently than
> you or I can. Many systems on other machines were written by humans. You
> almost have to get inside the human programmer's head to use many of these
> efficiently. I am not sure the AGI would be automatically good at that.
>
>
>>
>> It could be so constructed but may or may not in fact be so constructed.
>
> Self-improvement would likely be an emergent property, for the reasons
> given in the Omohundro paper. So even if it weren't developed deliberately
> from the start, self-improvement is an ability that would be likely to
> develop on the road to human-equivalence.
>
> As mentioned I do not find his argument altogether persuasive.
>
>
>>
>> I am not sure exactly what is meant by this. That it is very very good at
>> understanding code amounts to a 'modality'?
>
> Lizards have brain modules highly adapted to evaluating the fitness of
> fellow lizards for fighting or mating. Chimpanzees have the same modules,
> but with respect to other chimpanzees. Trilobites probably had specialized
> neural hardware for doing the same with other trilobites.
>
> A chess-playing AGI, for instance, would not necessarily be at all good at
> understanding code. Our thinking is largely a matter of interactions at the
> level of neural networks and associative logic, but none of us have a
> modality for this that I know of. My argument is that an AGI can have
> human-level or better general intelligence without being a domain expert
> in, much less having a modality for, the stuff it is implemented in: code.
> It may have many modalities, but I am not sure this will be one of them.
>
>
> Some animals can smell very well, but have poor hearing and sight. Or vice
> versa. The reason is that they have dedicated chunks of brainware that
> evolved to deal with sensory data from a particular channel. Humans have
> HUGE visual cortex areas, larger than the brains of mice. We can see in
> more colors than most animals. The way a human sees is different from the
> way an eagle sees, because we have different eyes, brains, and visual
> processing centers.
>
> I get the point, but the AGI will not have such dedicated brain systems
> unless they are designed in on purpose. It will not get them just by virtue
> of being an AGI, afaik.
>
> We didn't evolve to process code. We probably did evolve to process simple
> mathematics and the idea of logical processes on some level, so we apply
> that to code.
>
> The AGI did not evolve at all.
>
> Humans are not general-purpose intellects, capable of doing anything
> satisfactorily.
>
> What do you mean by satisfactorily? We did a great number of things
> satisfactorily enough to get us to this point. We are indeed
> general-purpose intelligent beings. We certainly have our limits but we
> are amazingly flexible nonetheless.
>
> Compared to potential superintelligences, we are idiots.
>
> Well, this seems a fine game. Compared to some hypothetical but arguably
> quite possible being we are of less use than amoebas are to us. So what?
>
> Future superintelligences will look back on humans and marvel that we could
> write any code at all.
>
> If they really are that smart about us, then they will understand how we
> could. After 30 years of writing software for a living, though, I too
> marvel that humans can write any code at all. I fully understand (with
> chagrin) how very limited our abilities in this area are. If I were
> actively pursuing AGI, I would quite likely gear first attempts toward
> various types of programmer assistants and automatic code refactoring and
> code data mining systems. Current human software tools aren't much better
> than they were 20 years ago. IDEs? Almost none have as much power as Lisp
> and Smalltalk environments had in the 80s.
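[Since Samantha mentions programmer assistants and code data mining as a
plausible first step, here is a minimal sketch of the kind of trivial
"code mining" pass she might mean, using Python's standard ast module
(Python 3.8+) to flag overly long functions as refactoring candidates. The
length threshold and command-line handling are arbitrary choices for
illustration, not anyone's actual proposal.]

    import ast
    import sys

    def long_functions(source, threshold=50):
        # Walk the syntax tree and report function definitions whose body
        # spans more than `threshold` lines: a crude refactoring hint.
        tree = ast.parse(source)
        hits = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > threshold:
                    hits.append((node.name, node.lineno, length))
        return hits

    if __name__ == "__main__":
        path = sys.argv[1]
        with open(path) as f:
            for name, line, length in long_functions(f.read()):
                print(f"{path}:{line}: {name} is {length} lines long")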
>
> After all, we were designed mainly to mess around with each other, kill
> animals, forage, retain our status, and have sex. Most human beings alive
> today are more or less incapable of coding. Imagine if human beings had
> evolved in an environment for millions of years where we were murdered and
> prevented from reproducing if our coding abilities fell short.
>
> Are you suggesting that an evolutionary arms race at the level of code will
> exist among AGIs? If not then what will shape them for this purported
> modality?
>
>
>>
>> This assumes an ability to integrate random other computers that I do not
>> think is at all a given.
>
> All it requires is that the code can be parallelized.
>
> I think it requires more than that. It requires that the AGI understand
> these other systems, which may have radically different architectures than
> its own native systems. It requires that it be given permission to run
> processes on these other systems (or that it simply take that permission).
> That said, it can do a much better job than we can of integrating the great
> deal of information available through web services and other means on the
> net today. There is a lot of power there. So I mostly concede this point.
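[To make the "all it requires is that the code can be parallelized" claim
concrete, here is a minimal sketch of an embarrassingly parallel workload,
where extra cores (or machines, with a distributed pool) translate almost
directly into extra throughput. The evaluate_candidate function is a made-up
stand-in for an independent unit of cognitive work; none of this addresses
Samantha's points about permissions or foreign architectures.]

    import multiprocessing as mp

    def evaluate_candidate(design):
        # Made-up stand-in for one independent unit of work, e.g. scoring
        # one candidate self-modification. The "score" here is meaningless.
        return sum(ord(c) for c in design)

    if __name__ == "__main__":
        candidates = [f"design-{i}" for i in range(1000)]
        # Because the tasks share no state, adding workers scales throughput
        # nearly linearly with the available processors.
        with mp.Pool() as pool:
            scores = pool.map(evaluate_candidate, candidates)
        best_score, best = max(zip(scores, candidates))
        print(best, best_score)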
>
>
>>
>> This is simple economics. Most humans don't take advantage of the many
>> such positive sum activities they can perform today without such
>> self-copying abilities. So why is it certain that an AGI would?
>
> Not certain, but pretty damn likely, because it could probably perform tasks
> without getting bored, and would have innate drives towards increasing its
> power and protecting/implementing its utility function.
>
> I still don't see where an innate drive toward increasing power would come
> from unless it was instilled on purpose. Nor do I see why it would never,
> ever re-evaluate its utility function, or why it would see that function as
> more important than the "utility functions" of a great number of other
> agents, AGI and biological, in its environment.
>
>
>>
>> There is an interesting debate to be had here, about the details of the
>> plausibility of the arguments, but most transhumanists just seem to dismiss
>> the conversation out of hand, or don't know that there's a conversation to
>> have.
>>
>> Statements about "most transhumanists" are fraught with many problems.
>
> Most of the 500+ transhumanists I have talked to.
>>
>> http://singinst.org/upload/LOGI//seedAI.html
>> Prediction: most comments in response to this post will again ignore the
>> specific points in favor of a rapid takeoff and simply dismiss the idea
>> based on low intuitive plausibility.
>>
>> Well, that helps a lot. It is a form of calling those who disagree lazy
>> or stupid before they even voice their disagreement.
>
> I like to get to the top of the Disagreement Pyramid quickly, and it seems
> very close to impossible when transhumanists discuss the Singularity, and
> particularly the idea of hard takeoff. As someone arguing on behalf of the
> idea of hard takeoff, I demand that critics address the central point, not
> play ad hominem with me. You're addressing the points -- thanks!
>
> You are welcome. Thanks for the interesting reply.
> - samantha
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Director of Engineering, Vzillion Inc
Adjunct Professor of Cognitive Science, Xiamen University, China
Advisor, Singularity University and Singularity Institute
ben at goertzel.org
"My humanity is a constant self-overcoming" -- Friedrich Nietzsche