On 13/06/07, Eugen Leitl <eugen@leitl.org> wrote:

> On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote:
>
> > I'd rather that the AIs in general *didn't* have an opinion on
> > whether it was good or bad to harm human beings, or any other opinion
> > in terms of "good" and "bad". Ethics is dangerous: some of the worst
>
> Then it would be very, very close to being psychopathic:
> http://www.cerebromente.org.br/n07/doencas/disease_i.htm
>
> Absence of certain equipment can be harmful.

A psychopath is not just indifferent to other people's welfare; he is also self-motivated. A superintelligent psychopath would be impossible to control and would perhaps take over the world if he could. This is quite different from, say, a superintelligent hit man who has no agenda other than efficiently carrying out the hit. If you are the intended victim, you are in trouble, but once you're dead he will sit idly until the next hit is ordered by the person (or AI) with the appropriate credentials. That type of hit man can be regarded as just an elaborate weapon.

> > monsters in history were convinced that they were doing the "right"
> > thing. It's bad enough having humans to deal with without the fear
> > that a machine might also have an agenda of its own. If the AI just
>
> If you have an agent which is useful, it has to develop its own
> agendas, which you can't control. You can't micromanage agents, or else
> making such agents would be detrimental rather than helpful.

Multiple times a day we all deal with entities that are much more knowledgeable and powerful than us, and which often have agendas in conflict with our own interests: for example, corporations or their employees trying to extract as much money out of us as possible. How would it make things any more difficult for you if instead the service you wanted was provided by an AI which was completely open and honest, was not driven by greed or ambition or lust, and as far as possible kept you informed and responded to your requests at all times? And if it did make things more difficult for some unforeseen reason, why would anyone pursue the use of AIs in that way?

> > does what it's told, even if that means killing people, then as long
> > as there isn't just one guy with a super AI (or one super AI that
>
> There's a veritable arms race on in making smarter weapons, and
> of course the smarter the better. There are few winners in a race,
> typically just one.

Then why don't we end up with one invincible ruler who has all the money and all the power and has made the entire world population his slaves?

> > spontaneously develops an agenda of its own, which will always be a
> > possibility), then we are no worse off than we have ever been, with
> > each individual human trying to step over everyone else to get
> > to the top of the heap.
>
> With the difference that we are mere mortals, competing among ourselves.
> A postbiological ecology is a great place to be, if you're a machine-phase
> critter. If you're not, then you're food.

We're not just mortals: we're greatly enhanced mortals. A small group of people with modern technology could probably have taken over the world of a few centuries ago, even though your basic human has not got any smarter or stronger since then. The difference today is that technology is widely dispersed, so many groups have the same advantage. If you're postulating a technological singularity event, then this won't be relevant. But if AI progresses like every other technology that isn't closely regulated (as nuclear weapons research is), it will be AI-enhanced humans competing against other AI-enhanced humans. "AI-enhanced" could mean humans directly interfaced with machines, but it would start with humans assisted by machines, as humans have always been assisted by machines.

> > I don't accept the "slave AI is bad" objection. The ability to be
>
> I do, I do. Even if such a thing was possible, you'd artificially
> cripple a being, making it unable to reach its full potential.
> I'm a religious fundamentalist that way.

I would never have thought it possible; it must be a miracle!

> > aware of one's existence and/or the ability to solve intellectual
> > problems does not necessarily create a preference for or against a
> > particular lifestyle. Even if it could be shown that all naturally
> > evolved conscious beings have certain preferences and values in
> > common, naturally evolved conscious beings are only a subset of all
> > possible conscious beings.
>
> Do you think Vinge's Focus is benign? Assuming we would engineer
> babies to be born focused on a particular task, would you think it's
> a good thing? Perhaps not so brave, this new world...

I haven't yet read "A Deepness in the Sky", so don't spoil it for me.

-- 
Stathis Papaioannou