[extropy-chat] Singularity economic tradeoffs

Samantha Atkins samantha at objectent.com
Sun Apr 18 22:32:55 UTC 2004

On Apr 17, 2004, at 1:32 PM, Dan Clemmensen wrote:

> Samantha Atkins wrote:
>> Please clarify how increased computer power and (supposedly non-AI) 
>> software will lead to a singularity.
> OK!
> In my opinion, the Singularity will result from any fast-feedback 
> process that uses computers to enhance "technological creativity." 
> "Technological creativity" is that quality or set of qualities that 
> results in new advances in technology. For purposes of this 
> discussion, we can restrict ourselves to computer and related 
> technologies.
> In my favorite scenario, the initial SI is a collaboration between one 
> or more humans and a large computing resource. The humans supply the 
> creativity and high-level pattern recognition, while the computers 
> supply brute-force stuff like web searching, peep-hole optimizations, 
> etc. If the collaboration can be made tight enough, the system as a 
> whole will operate as fast as the human(s) can make high-level 
> decisions.

Bingo!   Since humans are making the high-level decisions, the 
speed/productivity of the system is directly limited by human 
limitations.  These include human irrationality and the various forms 
of monkey politics inherent to human group dynamics.
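The "peep-hole optimizations" Dan mentions are a good example of the 
purely mechanical rewriting that the computer side of such a 
collaboration handles well.  A minimal sketch, assuming a hypothetical 
toy stack-machine instruction set (my own invention, not anything from 
the original discussion), might look like:

```python
def peephole(code):
    """One pass of peephole optimization over a toy stack-machine
    program, represented as a list of opcode tuples (hypothetical
    opcodes, for illustration only)."""
    out = []
    i = 0
    while i < len(code):
        # Constant folding: LOAD_CONST a; LOAD_CONST b; ADD -> LOAD_CONST a+b
        if (i + 2 < len(code)
                and code[i][0] == "LOAD_CONST"
                and code[i + 1][0] == "LOAD_CONST"
                and code[i + 2] == ("ADD",)):
            out.append(("LOAD_CONST", code[i][1] + code[i + 1][1]))
            i += 3
        # Redundant reload: STORE x; LOAD x -> DUP; STORE x
        # (duplicate the value instead of writing it out and reading it back)
        elif (i + 1 < len(code)
                and code[i][0] == "STORE"
                and code[i + 1] == ("LOAD", code[i][1])):
            out.append(("DUP",))
            out.append(("STORE", code[i][1]))
            i += 2
        else:
            out.append(code[i])
            i += 1
    return out
```

A real optimizer would of course run such a pass repeatedly to a fixed 
point; the point is only that the pattern matching is entirely 
mechanical and requires none of the high-level judgment the human 
supplies.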

>  Such a system would permit computer implementation under human 
> guidance. Presumably, the first thing the inventors of such a system 
> will use the system for is improvements to the system.

I have been around corporate/business computing environments for quite 
some time, and I have been involved in businesses selling to such 
environments and improving the productivity of various groups, 
including software groups.  The first and foremost thing any IT 
resource is used for is to enhance profitability.  Self-improvement of 
IT systems, whether done in-house, by external software purchase and 
integration, or by some mixture, is usually not a very high priority.  
It is notoriously hard to sell software based on such improvements to 
infrastructure.  Usually the improvements have to be cast as 
"solutions" to particular onerous problems seen as part of the 
business process directly influencing the bottom line.  This recasting 
as solutions severely limits how much improvement to fundamental 
infrastructure and methodology is achieved or even contemplated.

It is even more difficult to make a profitable business selling tools 
to software producers, i.e., programmers.  The business model just 
doesn't work out that well.  It is possible to pay the bills of a 
small company or group in this manner, but there isn't much way to get 
rich doing it.  Yet the problems of improving software productivity 
and quality are very germane to such a path to SI, and they are 
largely non-trivial problems.  Open Source efforts hold some promise, 
but I have my doubts that the most central problems will be solved in 
the open-source world.

> As soon as the system is implementing things as fast as the human(s) 
> can make decisions, the next problem that the inventors will turn to 
> is increasing the scope of sub-problems that can be solved by the 
> computers rather than the human, using whatever software tools come to 
> hand: there is no particular need for an overall theory of AI here, 
> since the humans are still handling that part. The humans become more 
> and more productive. As they add more and more tools to the computer 
> toolbox, the humans operate at progressively higher levels of 
> abstraction. They use the system to optimize the system. If necessary, 
> they use the system to design new hardware to add to the system. 
> Eventually, the humans are operating at such a high level of 
> abstraction that the non-human part of the system reaches and then 
> exceeds the current human level of technical creativity.

There is a very real AI component needed for such planning, 
scheduling, understanding the intent of decisions, weighing the 
repercussions of implementation choices, deciding when to bring humans 
back into the loop, and so on.  This is very non-trivial and not at 
all in the scope of most business computing today.

Having humans operate at such a high level of abstraction is not at 
all easy to do.  Only persons well trained in formal abstract 
reasoning are likely to operate comfortably at such levels, and then 
only within the severe limits set by our internal computational 
hardware and heavily biologically conditioned minds.  There is some 
truth in the old chestnut that groups of humans often have an 
effective intelligence no greater than 70% of the average intelligence 
of the group.  I am being charitable when I say this is what I have 
observed.  Much of group decision making is not based on the logical 
manipulation of the principles and abstractions involved at all.

> The richer the available computer resource the faster this will go. My 
> gut feeling is that the currently-available computer resources are 
> already rich enough that the process no longer needs new hardware to 
> go to completion, and can therefore go to completion in less than one 
> week, at which point all connected computers form a single 
> fully-optimized system, which has also designed and sent fabrication 
> and purchase orders for the next-generation hardware and for the 
> equipment needed to produce nanotech.

In this scenario the determining factor is the rationality of the 
human component and its ability to abstract effectively.  I very much 
agree there is a lot of promise in the power of IA to augment the 
range, creativity, and abilities of human individuals and groups.  But 
such power is not likely, IMHO, to play more than a bootstrapping role 
in the creation of true SI.

- samantha
