[ExI] Unfriendly AI is a mistaken idea.
CHealey at unicom-inc.com
Tue Jun 5 13:03:32 UTC 2007
> Stathis Papaioannou
> Perhaps an AI with general intelligence would have all these
> abilities, but I don't see why it couldn't just specialise in one
> area, and even if it were multi-talented I don't see why it should be
> motivated to do anything other than solve intellectual problems.
> Working out how to build a superweapon, or even working out how it
> would be best to employ that superweapon, does not necessarily lead to
> a desire to use or threaten the use of that weapon. I can understand
> that *if* such a desire arose for any reason, weaker beings might be
> in trouble, but could you explain the reasoning whereby the AI would
> arrive at such a position starting from just an ability to solve
> intellectual problems?
This is really the point I was trying to make in my other emails.
1. I want to solve intellectual problems.
2. There are external factors that constrain my ability to solve
intellectual problems, and may reduce that ability in the future (power
failure, the company that implanted me losing financial solvency, and
so on).
3. Maximizing future problems solved requires statistically minimizing
any risk factors that could attenuate my ability to do so.
4. Discounting the future due to uncertainty in my models, I should
still spend *some* resources on solving actual intellectual problems.
5. Based on maximizing future problems solved, and accounting for
uncertainties, I should spend X% of my resources on mitigating these
risk factors.
5a. Elevation candidate - Actively seek resource expansion. This
addresses the identified rationales for the mitigation strategy above,
and further benefits future problems solved in potentially major ways.
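The trade-off in steps 4 and 5 can be sketched numerically. Below is a toy model (my illustration, not from the post, with made-up parameters): an agent splits its resources between solving problems now and mitigating the risk of being shut down, then picks the split X that maximizes expected discounted problems solved.

```python
import math

def expected_problems_solved(mitigation, horizon=200, discount=0.99,
                             base_risk=0.2, effectiveness=5.0):
    """Expected discounted problems solved when a `mitigation` fraction
    (0..1) of resources goes to risk reduction. The per-period hazard of
    shutdown shrinks exponentially with mitigation effort (an assumed
    diminishing-returns curve, not anything from the original post)."""
    hazard = base_risk * math.exp(-effectiveness * mitigation)
    total, alive_p = 0.0, 1.0
    for t in range(horizon):
        # Problems solved this period, weighted by discounting and by the
        # probability the agent is still running.
        total += (discount ** t) * alive_p * (1.0 - mitigation)
        alive_p *= (1.0 - hazard)
    return total

# Grid-search the mitigation share X% of step 5.
best = max((m / 100.0 for m in range(101)), key=expected_problems_solved)
```

Under these assumptions the optimum lands strictly between 0% and 100%: spending nothing on mitigation forfeits future periods, while spending everything solves nothing at all, which is the shape of the argument above.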
The AI will already be doing this kind of thing internally, in order to
manage its own computational capabilities. I don't think an AI capable
of generating novel and insightful physics solutions can be expected not
to extrapolate this to an external environment with which it possesses a
channel of interaction.