[extropy-chat] Arrow of morality redux

Jef Allbright jef at jefallbright.net
Tue Nov 15 05:21:27 UTC 2005


On 11/14/05, Samantha Atkins <sjatkins at mac.com> wrote:
>
> On Nov 14, 2005, at 2:57 PM, Jef Allbright wrote:
> >
>
> > I see technological risk accelerating at a rate faster than the
> > development of individual human intelligence (which gives us much of
> > our built-in sense of morality), and faster than cultural intelligence
> > (from which we get moral guidance based on societal beliefs) but
> > maybe--just maybe--not faster than technologically based amplification
> > of human values exploiting accelerating instrumental knowledge to
> > implement effective decision-making which, as I've explained elsewhere
> > in more detail, is a more encompassing concept of morality.
> >
>
> I agree that IA is very important.

A critical distinction is that this system of intelligence
amplification is necessarily composed of multiple independent
viewpoints.  It is not sufficient to merely amplify the capabilities
of a single self--a single set of values--for the same reason that a
single point of view, from its own perspective, seems self-consistent
at any instant, regardless of its actual correspondence with reality
(with what works over increasing scope).

> However it is not obvious that
> higher effective intelligence and much more effective decision making
> [redundant?] will lead to more moral or wise goals.

Yes, those terms seem redundant.  I emphasized two terms, but they
were (1) increasing awareness of (subjective) values exploiting (2)
increasing awareness of (objective) instrumental knowledge to
implement increasingly effective decision-making.

The "morality" of a choice is always evaluated from a subjective
viewpoint because "goodness" is always relative to values which are
necessarily subjective.

The "wisdom" of a choice is a measure of the (increasingly objective)
effectiveness of a moral choice.

> It could lead to
> much more efficiently implementing the same old goals and
> prejudices.

Again, the higher level intelligence requires elements providing
*independent* inputs to the process.

> I still believe it is a net great improvement to
> today's insanity as so much of it seems to grow out of rank
> stupidity.

Yes, there is the wisdom of crowds and there is the mass insanity of
crowds, depending on whether the multiple inputs tend to correct each
other or to reinforce each other.

> If higher intelligence could be more tied to critical
> examination of current assumptions and goals and much more aware
> choosing of goals then we would see much greater improvement.  But
> how are you going to get past the propensity of human beings to ignore
> the knowledge they do have and the amount of decision making power
> they now possess?

Yes, this is why I often refer to the need for a social framework
whereby individual subjective values compete on an objective basis and
those that survive are promoted to compete at successively higher
levels of abstraction.  Each level would provide payoffs for
participation, somewhat analogous to the way cells benefit from their
contribution to the larger organism.

But far from resembling the Borg, such a system thrives on diversity
to achieve higher level goals in common.

- Jef
http://www.jefallbright.net
