[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Tue Nov 16 05:45:32 UTC 2010


On Nov 14, 2010, at 11:26 AM, Aware wrote:

> Michael, what has always frustrated me about Singularitarians, apart
> from their anthropomorphizing of "mind" and "intelligence", is the
> tendency, natural for isolated elitist technophiles, to ignore the
> much greater social context.  The vast commercial and military
> structure supports and drives development, providing increasingly
> intelligent systems, exponentially augmenting and amplifying human
> capabilities, hugely outweighing, not only in height but in breadth,
> the efforts of a small group of geeks (and I use the term favorably,
> being one myself).
> 

On SL4 especially, but also in many other singularitarian camps, a great deal of attention was paid to avoiding anthropomorphizing, so I am a bit surprised by that charge.  I don't think ignoring social context is that common either.  Some of us are very focused on context, as we are highly concerned with how to get from here (and thus with exactly what here is like) to some relatively positive future there, and with getting some coherence on what that there would look like.  I do grant that the number of transhumanists focused on this aspect is a pretty small percentage of the total.

Commercial efforts are notoriously short-term and drive only some forms of intelligent systems in relatively small niches.  They do drive general communication, computational capability, device proliferation, and so on very well.  Some of these devices are augmenting and changing us, not as fast as an AGI would, but with a lot more commercial viability beneath them.  But this does not go very deep toward new AGI-applicable results.

Military research, to the extent it is not a boondoggle, is another matter.  A lot of very strong research is done on military contract.  Unfortunately.

> The much more significant and accelerating risk is not that of a
> "recursively self-improving" seed AI going rogue and tiling the galaxy
> with paper clips or copies of itself, but of relatively small groups
> of people, exploiting technology (AI and otherwise) disproportionate
> to their context of values.

How would you judge their 'context of values'?  Against what would you judge it?

> 
> The need is not for a singleton nanny-AI but for development of a
> fractally organized synergistic framework for increasing awareness of
> our present but evolving values, and our increasingly effective means
> for their promotion, beyond the capabilities of any individual

I have no idea what a 'fractally organized synergistic framework for increasing awareness of our present but evolving values' is or entails, or when or how you would know that you had achieved it.  Frankly, our values today are, overall, pretty thinly based on our evolved psychology and not, for most human beings, on much in the way of self-examination, wisdom, or ethical inquiry.  I somewhat doubt that human 1.0 is designed to be capable of much more, except in relatively isolated cases.  I submit that that is not good enough for the challenges ahead of us.


> biological or machine intelligence.
> 

If it is beyond the capabilities of any intelligence, then how will it seemingly magically arise, in fractal magnificence, among an accumulation of said inadequate intelligences?

- samantha



