[ExI] Unfriendly AI is a mistaken idea.
eugen at leitl.org
Tue Jun 5 12:38:19 UTC 2007
On Tue, Jun 05, 2007 at 09:30:00PM +1000, Stathis Papaioannou wrote:
> You can't argue that an intelligent agent would *necessarily* behave
> the same way people would behave in its place, as opposed to the
Actually, yes, because people build systems which participate in the
economy, and the optimal first target niche is a human substitute.
There are a lot of fun scenarios out there which, however, suffer from
excessive detachment from reality. These never get the chance to be
built. Because of that, it is not very useful to study such alternative
hypotheticals excessively, to the detriment of where the rubber hits
the road.
> argument that it *might* behave that way. Is there anything logically
> inconsistent in a human scientist figuring out how to make a weapon
> because it's an interesting intellectual problem, but then not going
Weapon design is not merely an intellectual problem, nor do
theoretical physicists operate in complete detachment from the empirical
folks. I.e. the sandboxed supergenius or the brain-damaged idiot savant is a
synthetic scenario which is not going to happen, so we can ignore it.
> on to use that knowledge in some self-serving way? That is, does the
> scientist's intended motive have any bearing whatsoever on the
> validity of the science, or his ability to think clearly?
If you don't exist, that tends to cramp your style a bit.
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE