[ExI] Unfriendly AI is a mistaken idea.
Stathis Papaioannou
stathisp at gmail.com
Wed Jun 6 05:00:16 UTC 2007
On 05/06/07, Christopher Healey <CHealey at unicom-inc.com> wrote:
> 1. I want to solve intellectual problems.

OK.

> 2. There are external factors that constrain my ability to solve
> intellectual problems, and may reduce that ability in the future (power
> failure, the company that implanted me losing financial solvency,
> etc...).
Suppose your goal is to win a chess game *adhering to the rules of chess*.
One way to win the game is to drug your opponent's coffee, but this has
nothing to do with solving the problem as given. You would need another
goal, such as beating the opponent at any cost, towards which end the
intellectual challenge of the chess game is only a means. The problem with
anthropomorphising machines is that humans have all sorts of implicit goals
whenever they do anything, to the extent that we don't even notice them. Even
something like the will to survive does not come as a package deal with the
ability to reason logically: it has to be explicitly included as an axiom or
goal.
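
To put that in code form, here is a toy sketch (Python; the names utility,
predict_outcome and the outcome labels are made up for illustration, not taken
from any actual AI design) of an agent whose only goal is the one explicitly
written into it:

    # Toy illustration: the agent values only what its objective function says.
    # "Win at any cost" or "survive" is never pursued unless someone writes it in.

    def utility(outcome):
        # The only thing this agent cares about: winning by legal play.
        return 1.0 if outcome == "win_by_legal_moves" else 0.0

    def choose_action(candidate_actions, predict_outcome):
        # Pick whichever candidate maximizes the stated utility.
        return max(candidate_actions, key=lambda a: utility(predict_outcome(a)))

    # Drugging the coffee predicts a win, but not a legal win, so it scores
    # zero and is never chosen.
    outcomes = {"play_best_move": "win_by_legal_moves", "drug_coffee": "win_somehow"}
    print(choose_action(outcomes.keys(), outcomes.get))  # -> play_best_move

Nothing in the ability to search or predict makes the agent care about the
drugged-coffee option; caring is a property of the objective, not of the
reasoning.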
> 3. Maximizing future problems solved requires statistically minimizing
> any risk factors that could attenuate my ability to do so.
>
> 4. Discounting the future due to uncertainty in my models, I should
> actually spend *some* resources on solving actual intellectual problems.
>
> 5. Based on maximizing future problems solved, and accounting for
> uncertainties, I should spend X% of my resources on mitigating these
> factors.
>
> 5a. Elevation candidate - Actively seek resource expansion.
> Addresses identified rationales for mitigation strategy above, and
> further benefits future problems solved in potentially major ways.
>
>
> The AI will already be doing this kind of thing internally, in order to
> manage its own computational capabilities. I don't think an AI capable
> of generating novel and insightful physics solutions can be expected not
> to extrapolate this to an external environment with which it possesses a
> communications channel.
Managing its internal resources, again, does not logically lead to managing
the outside world. Such a thing needs to be explicitly or implicitly
allowed by the program. A useful physicist AI would generate theories based
on information it was given. It might suggest that certain experiments be
performed, but trying to commandeer resources to ensure that these
experiments are carried out would be like a chess program creating new
pieces for itself when it felt it was losing. You could design a chess
program that way, but why would you?
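
The same point as a toy sketch (again Python, with made-up names such as
legal_moves and evaluate, not any real chess engine): the candidate set is
whatever the move generator returns, so "create a new piece" is never even
considered unless the designer deliberately adds it.

    # Hypothetical sketch: the program can only choose among the moves its
    # generator returns. "Add a queen to the board" or "seize more hardware"
    # is simply not in the candidate set unless the designer puts it there.

    def legal_moves(position):
        # Placeholder: a real engine would enumerate the moves the rules of
        # chess allow in this position.
        return position["moves"]

    def best_move(position, evaluate):
        # Search only within the legal move set; nothing outside it exists
        # as far as the program is concerned.
        return max(legal_moves(position), key=evaluate)

    # Example with a made-up position and a trivial evaluation.
    position = {"moves": ["e2e4", "d2d4", "g1f3"]}
    print(best_move(position, lambda m: {"e2e4": 0.3, "d2d4": 0.2, "g1f3": 0.1}[m]))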
--
Stathis Papaioannou