[extropy-chat] Re: riots in France
Samantha Atkins
sjatkins at mac.com
Tue Nov 15 00:57:41 UTC 2005
On Nov 14, 2005, at 2:57 PM, Jef Allbright wrote:
>
> I see technological risk accelerating at a rate faster than the
> development of individual human intelligence (which gives us much of
> our built-in sense of morality), and faster than cultural intelligence
> (from which we get moral guidance based on societal beliefs) but
> maybe--just maybe--not faster than technologically based amplification
> of human values exploiting accelerating instrumental knowledge to
> implement effective decision-making which, as I've explained elsewhere
> in more detail, is a more encompassing concept of morality.
>
I agree that IA is very important. However, it is not obvious that
higher effective intelligence and more effective decision-making
will lead to more moral or wise goals. It could lead to much more
efficient implementation of the same old goals and prejudices. I
still believe it would be a great net improvement over today's
insanity, as so much of that insanity seems to grow out of rank
stupidity. If higher intelligence could be tied more closely to
critical examination of current assumptions and goals, and to much
more deliberate choosing of goals, then we would see much greater
improvement. But how are you going to get past the propensity of
human beings to ignore the knowledge they do have and the
decision-making power they already possess?
> I apologize, as usual, for the density of my post.
No apologies needed that I can see.
- samantha