On 06/06/07, Eugen Leitl <eugen@leitl.org> wrote:
> > own. Essentially this is what you are doing when you consult a human
> > expert, so why would you expect any less from a machine?
>
> When I consult a human expert, I expect him to maximize his revenue
> long-term, and him knowing that I know that.

That's the problem with human experts: their agendas may not coincide with your own. At least if you know what the potential conflicts are, such as the expert wanting to overservice or to recommend a product he has a financial interest in, you can minimise their negative impact on yourself. However, one of the main advantages of expert systems designed from scratch would be that they would have no agendas of their own at all, other than honestly answering the question posed to them given the available information. How would such a system acquire the motivation to do anything else?

-- 
Stathis Papaioannou