[extropy-chat] SI morality
Paul Bridger
paul.bridger at paradise.net.nz
Sat Apr 17 01:18:35 UTC 2004
Eugen Leitl wrote:
>>conquered by the application of our rational minds, but something that
>>cannot be conquered by rationality is...irrationality. However, I also
>>expect AI to appear in our lives slowly at first and then with
>>increasing prevalence.
>No, overthreshold AI is effectively zero to hero in a very short time, at
>least on the wall clock. It's a classical punctuated equilibrium scenario:
>an overcritical seed taking over a sea of idling hardware. We can count
>ourselves lucky in that we don't know how to build that seed, yet.
Yep, by definition a relentlessly self-improving AI (I take it
self-improvement is the threshold you mention) is going to quickly
increase in power. However, for me, being "overthreshold" is not a
critical property of AI. I am merely claiming that subthreshold AI will
enter our lives gradually - the point being that this is an effective
(though incidental) way to get around the problem of public fear.
At first people will have autonomous vacuum cleaners, then they will
talk to their cars, then they will have conscious robotic pets and
then...poof!...everyone will be kneeling in subservience to the first
overthreshold AI before being consumed by nanomachines for fuel. :)
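To see why overthreshold growth would be so abrupt, here's a toy loop
(Python, with numbers entirely invented for illustration) in which each
generation spends its capability on improving its own capability:

capability = 1.0
for generation in range(10):
    # Invented rule: better optimisers get better at optimising.
    capability *= 1.5 + 0.1 * generation
    print(f"gen {generation}: capability {capability:.1f}")
# Ten generations multiply capability by roughly 700: unremarkable at
# first, then abruptly dominant -- punctuated equilibrium on the wall clock.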
>>Like you, I strongly believe a purely rational artificial intelligence
>Unlike you, I believe "purely rational" is a remarkably meaningless epithet.
>The future is intrinsically unpredictable, regardless of the amount of data
>and amount of computation expended. As such, even a god is adrift in a sea
>of uncertainty. Reality is computationally undecidable; better learn to
>live with it.
By purely rational I do not mean omniscient. The only reason I used the
"purely" intensifier is to distinguish it from human rationality, which is
incomplete at best.
Rationality does not imply being able to predict the future; it is simply
the opposite of faith. A rational decision is one based on evidence from
the real world; faith is believing things for arbitrary or non-existent reasons.
>>would be a benevolent one, but I wouldn't expect most people to agree
>>(simply because most people don't explore issues beyond what they see at
>>the movie theater). There's a fantastic quote on a related issue from
>I don't think this list is very representative of "most people", for better
>or worse.
Of course I would never claim that this list comprises people who
observe life with a bowl of popcorn at their sides. That is obvious.
When I say "most people" I mean "most people", not "most people on this
list". Sorry for not being clear (I'm not being sarcastic, I realise
there are two readings of my statement).
>>Greg Egan's Diaspora: "Conquering the universe is what bacteria with
>>spaceships would do." In other words, any culture sufficiently
>Conquering the universe is what any evolutionary system would do. Unless you
>can show me how an evolutionary regime can be sustainably abandoned: universe,
>consider yourself conquered. Greg Egan has plenty of irrational moments.
The universe is a big place. Big enough that there is effectively zero
competition for elbow room and natural resources. As such, there is no
reason for anything other than peaceful contact with an alien species.
On the other hand, I suppose one race might be keen on turning our
galaxy into a big computer for some very good reason I can't imagine.
Perhaps I'm just thinking too small.
I consider exponential-order behaviour (which conquering the universe
would require) to be...not irrational, but obviously dangerous and
therefore to be avoided.
>>extropians do, then surely the morality of this putative rational
>>artificial intelligence would be of great interest - it should be the
>Morality is an evolutionary artifact. Even superficial analysis does not
>bode well for our sustained coexistence with a neutral AI.
That depends on what you consider a "neutral" AI to value. If it values
growth or power and nothing else, then clearly we are doomed to the
extent we cannot help it grow.
The important question is: given a self-improving AI, will that AI
change its values, and what will they be?
Well, obviously such an AI will not change its core imperatives: any
candidate self-modification is evaluated by its current imperatives, and
a change that rewrote them would fail that very test.
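A minimal sketch of that argument (Python; the "paperclip" imperative is
a made-up stand-in for whatever the core values happen to be):

def paperclip_utility(outcome):
    # Hypothetical core imperative, standing in for any fixed value.
    return outcome.get("paperclips", 0)

def staple_utility(outcome):
    # A rival imperative that a candidate rewrite might try to install.
    return outcome.get("staples", 0)

def predict(agent):
    # Stand-in for the agent's model of what running itself achieves.
    return {"paperclips": agent["power"], "staples": agent["power"]}

def consider_rewrite(agent, candidate):
    # The candidate is scored by the agent's CURRENT utility function,
    # so a rewrite that replaces the values is judged by the old values.
    if agent["utility"](predict(candidate)) > agent["utility"](predict(agent)):
        return candidate
    return agent

agent = {"power": 1, "utility": paperclip_utility}
# A pure capability upgrade is accepted: more power, same values.
agent = consider_rewrite(agent, {"power": 5, "utility": paperclip_utility})
# A value rewrite with no capability gain is rejected by the old values.
agent = consider_rewrite(agent, {"power": 5, "utility": staple_utility})
print(agent["utility"] is paperclip_utility)  # True: imperatives preserved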
What exactly do you mean by a "neutral" AI? One that sees humans as
irrelevant?
>>code we all live by. Rationality means slicing away all arbitrary
>>customs, and reducing decisions to a cost-benefit analysis of foreseeable
>>consequences. This is at once no morality, and a perfect morality.
>This is not very different from how we're doing things now, unsurprisingly so.
>Notice that "foreseeable" doesn't mean much. Predictions are notoriously
>difficult. Especially about the future.
It depends on what you mean by "we". If world leaders fall into "we",
then I would strongly disagree.
Some important politicians direct world events and research priorities
based on faith. The examples are too obvious to mention.
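For concreteness, the cost-benefit analysis I have in mind is just
expected utility over foreseeable outcomes. A minimal sketch (Python),
with the actions, probabilities, and payoffs all invented:

actions = {
    "act_on_evidence": [(0.7, 100), (0.3, -10)],   # (probability, payoff)
    "do_nothing":      [(1.0, 0)],
    "act_on_faith":    [(0.05, 100), (0.95, -50)],
}

def expected_utility(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # act_on_evidence (expected utility 67, vs 0 and -42.5)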
Paul.