On 05/06/07, Christopher Healey <CHealey@unicom-inc.com> wrote:
> 1. I want to solve intellectual problems.

OK.

> 2. There are external factors that constrain my ability to solve
> intellectual problems, and may reduce that ability in the future (power
> failure, the company that implanted me losing financial solvency,
> etc.).

Suppose your goal is to win a chess game *while adhering to the rules of chess*. One way to win the game is to drug your opponent's coffee, but that has nothing to do with solving the problem as given. You would need another goal, such as beating the opponent at any cost, towards which end the intellectual challenge of the chess game is only a means. The problem with anthropomorphising machines is that humans have all sorts of implicit goals whenever they do anything, to the extent that we don't even notice them. Even something like the will to survive does not come as a package deal with the ability to reason logically: it has to be explicitly included as an axiom or goal.

> 3. Maximizing future problems solved requires statistically minimizing
> any risk factors that could attenuate my ability to do so.
>
> 4. Discounting the future due to uncertainty in my models, I should
> actually spend *some* resources on solving actual intellectual problems.
>
> 5. Based on maximizing future problems solved, and accounting for
> uncertainties, I should spend X% of my resources on mitigating these
> factors.
>
>    5a. Elevation candidate - Actively seek resource expansion.
> Addresses identified rationales for mitigation strategy above, and
> further benefits future problems solved in potentially major ways.
>
> The AI will already be doing this kind of thing internally, in order to
> manage its own computational capabilities. I don't think an AI capable
> of generating novel and insightful physics solutions can be expected not
> to extrapolate this to an external environment with which it possesses a
> communications channel.

Managing its internal resources, again, does not logically lead to managing the outside world. Such a thing needs to be explicitly or implicitly allowed by the program. A useful physicist AI would generate theories based on the information it was given. It might suggest that certain experiments be performed, but trying to commandeer resources to ensure that those experiments are carried out would be like a chess program creating new pieces for itself when it felt it was losing. You could design a chess program that way, but why would you?
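
To make the analogy concrete, here is a minimal sketch (in Python, using the python-chess package as an assumed dependency; the piece values and search depth are arbitrary illustrative choices) of what a chess engine's goal actually amounts to. The evaluation function is a pure function of the board position, and the search ranges only over legal moves, so the opponent's coffee, extra pieces, or the engine's own survival are simply not representable in its goal unless someone explicitly writes them in.

# Illustrative sketch only: a toy minimax player whose entire "goal" is
# the evaluate() function below. Assumes the python-chess package.
import chess

# Arbitrary material values for illustration, keyed by piece type.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    # Score a position by material balance (positive favours White).
    # Nothing outside the board state can contribute to this number.
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board, depth):
    # Plain minimax over *legal* moves only: the rules of chess define
    # the whole search space, so "create a new piece" is not a node here.
    if depth == 0 or board.is_game_over():
        return evaluate(board), None
    maximising = (board.turn == chess.WHITE)
    best_score = float('-inf') if maximising else float('inf')
    best_move = None
    for move in board.legal_moves:
        board.push(move)
        score, _ = minimax(board, depth - 1)
        board.pop()
        if (maximising and score > best_score) or \
           (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Ask the toy engine for a move from the starting position.
if __name__ == '__main__':
    board = chess.Board()
    score, move = minimax(board, depth=2)
    print(move, score)

A real engine would of course use a far better evaluation function, but the point stands: whatever the program "wants" is whatever that function says, and nothing more.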

-- 
Stathis Papaioannou