[ExI] extropy-chat Digest, Vol 78, Issue 40

Keith Henson hkeithhenson at gmail.com
Sun Mar 21 21:43:27 UTC 2010


On Sun, Mar 21, 2010 at 5:00 AM,  Stefano Vaj <stefano.vaj at gmail.com> wrote:

> 2010/3/20 Clément 'clemux' Schreiner <ml at clemux.info>:
>> The Culture's Minds are therefore always created with some elements of morality,
>> and that's the reason for their general benevolence.
>
> Yes, if I am not mistaken, a paternalistic, condescending, nosy,
> interfering and sometimes rather harsh kind of benevolence...

I agree.

If any humans pass through the singularity more or less as they are
now, then being treated by godlike AIs the way we treat cats may be
the best they can hope for.  It could be worse:

"Some authors consider rule by secretive technocrats to be virtually
inevitable. In Creating Alternative Futures, Hazel Henderson argues
that complex technologies "become inherently totalitarian" (her
italics) because neither voters nor legislators can understand them.
In The Human Future Revisited, Harrison Brown likewise argues that the
temptation to bypass democratic processes in solving complex crises
brings the danger "that if industrial civilization survives it will
become increasingly totalitarian in nature." If this were so, it would
likely mean our doom: we cannot stop the technology race, and a world
of totalitarian states based on advanced technology needing neither
workers nor soldiers might well discard most of the population."

http://e-drexler.com/d/06/00/EOC/EOC_Chapter_13.html

> From: Lee Corbin <lcorbin at rawbw.com>
> Subject: Re: [ExI] Look to Windward--Banks
>
> Keith writes
>
>> Singularity SF is really hard to write.  Everybody who does so knows
>> they have to cheat or they have no characters the reader can identify
>> with.
>
> I don't think that creating characters that the reader
> can identify with is the problem at all.

They have to be "left behind" humans or AIs that are limited to
something like human level.

snip

> No. The problem is coming up with a realistic plot that
> has to include conflict of some kind.

"Realistic" and "conflict" in a world of godlike AI is tough.  It is
hard to imagine conflict between humans and such AIs.  AIs themselves
in conflict . . . . Unless the AIs had human elements to their
personalities, it's not easy to imagine why AIs would fight.  And if
they did, being light years away a battle would seem like a good idea.

The only element of conflict I wrote into The Clinic Seed was Suskulan
feeling guilty about what he was doing to the people of the village.

> From: Emlyn <emlynoregan at gmail.com>

> On 20 March 2010 09:41, Damien Broderick <thespike at satx.rr.com> wrote:
>> On 3/1/2010 12:16 AM, Keith Henson wrote:
>>
>>> I don't know how deep your knowledge of modern biology, particularly
>>> evolution, is.  Selfish Gene by Dawkins is basic to understanding
>>> evolution; I presume you have read that.  What other parts of the
>>> topic are you up on?
>>
>> Before everyone becomes convinced that it's all Real Simple and Clear, I
>> suggest a careful read of biologist Peter Watts' scathing comments at:
>>
>> <http://www.rifters.com/crawl/?p=1163>
>>
>> (Canadian Watts has just been found guilty on an insanely bogus charge and
>> might well spend a couple of years in jail. See his report at:

I read it.  I don't know Peter; perhaps some of you do.  It seemed to
me to go considerably beyond rational discussion.  If this is normal
for Peter, then don't worry about it.  If it is not, I would strongly
suggest an MRI brain scan.  This, together with getting into a row
with border guards (if that is unusual behavior for him), suggests the
possibility of organic brain problems.

> Evolutionary Psychology is seeming more and more bogus to me. Wall-to-wall
> just-so stories, grand conclusions drawn from very specific,
> restricted-scope studies, mathematical-model-based "proofs" with
> ridiculously parsimonious assumptions.

Normally, parsimonious assumptions are considered better.  Can you be
more specific?

> Plus its popularity seems tied
> to its ability to support the political assumptions of the theorist,
> whatever they may be (too often a very 19th century social-darwinistic
> kind of sensibility), and it has the added benefit that you don't need
> to know anything about the real work done in academic disciplines that
> have studied the field in question.

The first half of this sentence is getting close to Godwin's law.  The
second half looks to me like an ad hominem.

Evolutionary psychology is currently placing a foundation under much
of social science.  My estimate is that it has done this in virtually
all of the top-rated schools, and may be more than halfway through
with the rest.  Google for Evolutionary Psychology graduate programs.
It's a way to look at behavior, and a fundamental outline of how to
construct models.

Models lead to predictions.  If the predictions fail, then the model
needs to be reconsidered.  Even poor models are better than no models
because they lead to better models.

For example, my initial model of what leads to wars failed on the US
Civil War.  The economic situation at the time was not a contributing
factor, since there was no downturn then.  But the _anticipation_ of
hard times due to ending slavery was a factor, and indeed the
population of the South was correct in that assessment.
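
As a toy sketch of that revision loop in Python (the predicates and
the single test case are simplified stand-ins of my own, not a real
model or dataset):

def model_v1(downturn_now, anticipated_downturn):
    """Initial model: war follows only a present economic downturn."""
    return downturn_now

def model_v2(downturn_now, anticipated_downturn):
    """Revised model: anticipated hard times also count."""
    return downturn_now or anticipated_downturn

# US Civil War test case: no downturn in 1860, but the South
# anticipated hard times from ending slavery, and war followed.
case = {"downturn_now": False, "anticipated_downturn": True, "war": True}

for model in (model_v1, model_v2):
    predicted = model(case["downturn_now"], case["anticipated_downturn"])
    status = "ok" if predicted == case["war"] else "fails -> revise"
    print("%s: predicts war=%s (%s)" % (model.__name__, predicted, status))

model_v1 fails on the Civil War case; that failure is exactly what
motivates model_v2.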

Keith



