[extropy-chat] RE: Singularitarian versus singularity

Jef Allbright jef at jefallbright.net
Thu Dec 22 23:21:19 UTC 2005


On 12/22/05, mike99 <mike99 at lascruces.com> wrote:

<snip>  // section about potential effectiveness of humans data-mining

>   What would be shocking about these results? I suspect (but cannot by
> any means prove) that the results would indicate that building the world
> we wish to live in would require us to combine seemingly contradictory or
> mutually exclusive components. Like what? Like the contradictory core claims
> of the libertarians and the socialists. Like the mutually exclusive ideals
> of the evolutionary psychologists and the spiritual idealists.
>
>   Our experience of human politics shows that we are incapable of combining
> these elements on our own. In fact, right now we cannot even conceive of the
> possible need to do so. Yet I suspect that our inability to even imagine
> combining these is precisely why we need an SAI to tell us that this is what
> we must do. **Not compel us to do it** mind you -- no coercion -- but TELL
> us to do it. And to give us the benefit of its managerial ability (including
> its ability to design and manage Drexlerian nanotech) to make this possible
> outside of any political wrangling.
>

I find it interesting that discussions of this type argue about the
capabilities of humans versus the capabilities of AIs, but they seldom
consider the emerging alternative -- that a higher level of
organization, built on humans and their developing intellectual tools,
could possess and exercise wisdom accelerating beyond the capabilities
of any individual human.

The inherent advantage of such an organization is that it would be
firmly grounded in human values.  To work, it would require a
subsystem that abstracts subjective human values, weighted according
to how well they work over increasing scope, and a subsystem that
abstracts objective (scientific) principles of effective interaction.
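
To make that concrete -- purely as a toy sketch, with the names, data
shapes, and weighting scheme all my own invention rather than a design
-- the two subsystems might fit together roughly like this in Python:

    # Toy model: combine scope-weighted subjective values with objective
    # principles of effectiveness to score a candidate action.

    def value_weight(scope_level, max_scope=4):
        """Weight a subjective value by how broad a scope it has
        worked over (self, family, community, society, ...)."""
        return (scope_level + 1) / (max_scope + 1)

    def score_action(action, subjective_values, objective_principles):
        """Score a candidate action against both subsystems."""
        # Subsystem 1: abstracted subjective human values, each a dict
        # with a "scope" level and a "support" function returning
        # agreement in [0, 1].
        value_score = sum(
            value_weight(v["scope"]) * v["support"](action)
            for v in subjective_values
        )
        # Subsystem 2: abstracted objective (scientific) principles,
        # each a function estimating how well the action actually works.
        effectiveness = sum(p(action) for p in objective_principles)
        return value_score * effectiveness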

Such an organization would in a sense amplify broad-based human wisdom
(knowledge of what's important and what works) by collecting diverse
human input and, via competitive and cooperative processes, promoting
what works to successively higher levels of abstraction.  The
higher-level understanding of the system would be beyond the
comprehension of its lower-level human elements, who could predict its
ability to achieve "good" goals but not its specific actions.

I think we're already seeing the beginnings of such higher-level forms
of networked human values and knowledge in examples such as music and
entertainment rating systems, del.icio.us, Frappr, Wikipedia, etc.,
with a lot more still to be done.

- Jef
http://www.jefallbright.net


