[extropy-chat] SI morality

Eugen Leitl eugen at leitl.org
Fri Apr 16 16:34:28 UTC 2004

On Sat, Apr 17, 2004 at 01:16:01AM +1200, Paul Bridger wrote:

> Given all the negative AI scenarios played out in popular culture 
> (Matrix, Terminator etc.) I expect the most deadly obstacle to a 
> big-bang type Singularity to be fear. All scientific obstacles can be 

Fear is a very rational reaction in the face of a runaway positive-feedback AI.
It's the only thing that can cook our goose for good.

> conquered by the application of our rational minds, but something that 
> cannot be conquered by rationality is...irrationality. However, I also 
> expect AI to appear in our lives slowly at first and then with 
> increasing prevalence.

No, overthreshold AI is effectively zero to hero in a very short time, at
least on the wall clock. It's a classic punctuated-equilibrium scenario: an
overcritical seed taking over a sea of idling hardware. We can count
ourselves lucky that we don't know how to build that seed, yet.

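As a toy illustration (my own sketch, not anything from the original post), the "overcritical seed taking over a sea of idling hardware" point boils down to exponential spread: if each copy of a seed can compromise one idle host per cycle, the copy count doubles, and takeover time grows only logarithmically with the size of the hardware pool. The function name and one-host-per-cycle assumption are hypothetical simplifications.

```python
def cycles_to_takeover(total_hosts: int, seed_copies: int = 1) -> int:
    """Cycles until every host is occupied, assuming each existing copy
    compromises one new idle host per cycle (i.e. the population doubles)."""
    copies, cycles = seed_copies, 0
    while copies < total_hosts:
        copies = min(copies * 2, total_hosts)
        cycles += 1
    return cycles

# Under these toy assumptions, a billion idle hosts fall in only ~30
# doubling cycles: "zero to hero in a very short time" on the wall clock.
print(cycles_to_takeover(10**9))  # 30
```

The point of the sketch is only that doubling makes the takeover time nearly insensitive to how much hardware is out there, which is what makes the transition look punctuated rather than gradual.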
> Like you, I strongly believe a purely rational artificial intelligence 

Unlike you, I believe "purely rational" is a remarkably meaningless epithet.
The future is intrinsically unpredictable, regardless of the amount of data
gathered and computation expended. As such, even a god is adrift in a sea
of uncertainty. Reality is computationally undecidable; better learn to
live with it.

> would be a benevolent one, but I wouldn't expect most people to agree 
> (simply because most people don't explore issues beyond what they see at 
> the movie theater). There's a fantastic quote on a related issue from 

I don't think this list is very representative of "most people", for better
or worse.

> Greg Egan's Diaspora: "Conquering the universe is what bacteria with 
> spaceships would do." In other words, any culture sufficiently 

Conquering the universe is what any evolutionary system would do. Unless you
can show me how the evolutionary regime can be sustainably abandoned: universe,
consider yourself conquered. Greg Egan has plenty of irrational moments.

> technologically advanced to travel interstellar distances would also 
> likely be sufficiently rationally advanced to not want to annihilate us. 

Non sequitur.

> I think a similar argument applies to any purely rational artificial 
> intelligence we manage to create.

Given that a "purely rational intelligence" is meaningless, you might want to
reconsider your analysis.

> I'm interested: have people on this list speculated much about the 
> morality of a purely rational intelligence? If you value rationality, as 
> extropians do, then surely the morality of this putative rational 
> artificial intelligence would be of great interest - it should be the 

Morality is an evolutionary artifact. Even a superficial analysis does not
bode well for our sustained coexistence with a neutral AI.

> code we all live by. Rationality means slicing away all arbitrary 
> customs, and reducing decisions to a cost-benefit analysis of foreseeable 
> consequences. This is at once no morality, and a perfect morality. 

This is not very different from how we do things now, unsurprisingly so.
Notice that "foreseeable" doesn't mean much. Predictions are notoriously
difficult. Especially about the future.

> Hmm...Zen-like truth, or vacuous pseudo-profundity - you decide. :)

Eugen* Leitl <a href="http://leitl.org">leitl</a>
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net
