[ExI] related replies
hkeithhenson at gmail.com
Sun Feb 28 19:39:52 UTC 2010
On Sun, Feb 28, 2010 at 5:00 AM, Stathis Papaioannou wrote:
> There is no requirement on AI researchers to make an AI by following
> the structure and function of the brain. However, if they do do it
> this way, as mind uploading researchers would, the resulting AI will
> both behave like a human and have the consciousness of a human.
This way is *exceedingly* dangerous unless we deeply understand the
biological mechanisms of human behavior, particularly behaviors such
as capture-bonding which are turned on by behavioral switches (due to
external situations). A powerful human-type AI in "war mode,"
irrational and controlling millions of robot fighting machines, is not
something you want to happen.
Humans do go irrational (for reasons firmly rooted in our evolutionary
past). See the descriptions of the mental state of the machete killers
in Rwanda while they killed close to a million people.
> Ben Zaiboc <bbenzai at yahoo.com> wrote:
> Keith Henson <hkeithhenson at gmail.com>:
>> From an engineering viewpoint, it isn't significantly harder to
>> upload, sideload, or merge with more capable computational resources
>> while maintaining consciousness. The reverse would work as well. I
>> worked this feature into "the clinic seed."
> Yeah, I figured that out too. I don't consider this discussion to be
> of vital importance to me. This "the clinic seed" of yours, though.
> What is it?
Google henson clinic seed, take the first link. It's been discussed