[ExI] singularity summit on foxnews
hkhenson
hkhenson at rogers.com
Sun Sep 16 02:38:48 UTC 2007
At 04:00 PM 9/15/2007, Robert Picone wrote:
snip
>Actually, I don't see a huge difference between this hypothetical AI
>and a human either, but you'll notice that for the most part, humans
>don't kill whenever it would benefit them.
That's true, but humans *do* kill when it would benefit their
*genes,* and this makes complete sense if you understand evolutionary theory.
People don't kill other people very often for the same reason lions
don't: it's dangerous even if you *don't* get caught, because people
fight back, and you take a substantial risk of dying in the process
of killing another top predator.
Now, when the choice is *starving*, or your kids starving (almost the
same thing from a gene's viewpoint), the cost/benefit changes and so
does the human propensity to kill.
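(A gene's-eye gloss that isn't in the original exchange but may help:
Hamilton's rule says a costly act is favored by selection when rB > C,
where r is the genetic relatedness to the beneficiary, B the benefit
to them, and C the cost to the actor. Your children each carry
r = 0.5, so from the genes' standpoint a couple of starving kids weigh
roughly as much as your own starvation, which is why the two
situations look "almost the same" in that calculus.)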
>I am, for example, fairly
>confident that I could kill a man on the street and take whatever was
>in his wallet without being caught, and yet I have no desire to do
>this.
Completely expected from simple evolutionary theory.
>The major difference is that there already is a system in place to
>prevent the creation of unfriendly humans. A human's behaviors, as
>much as they can be, are designed by a society.
Dreamer. How many missed meals do you think it takes for humans to
dump such trained behavior?
snip
>Both good points about our society, but I don't see how they do
>anything but support my argument. When collectives lash out, it has
>always been against those things alien to them, namely minorities,
>foreign cultures, or both at once.
And it is the direct result of a society that, as a group, sees a bleak
future; Germany in the late 1920s is a classic example.
>If AIs acted like humans in this
>respect, and frequently collaborated, humans, or some subset of
>humans, would likely qualify for such a potential lashing out.
>Consider how many people out there are hostile to the concept of AI,
>as you are accusing me of being; do you suppose this hostility would
>be completely one-sided?
The problem is one of speed. There is no reason for AIs to be
limited to human speeds. Imagine trying to fight an opponent who
could think and move ten or a hundred times as fast as you can.
>I do of course advocate trying to remove the behaviors that spawn
>these issues from human society as well, but the steps to shape the
>behaviors of individuals that do not yet exist are much more clear
>than the steps to significantly change the behaviors of large groups
>that already exist.
I have very serious doubts that removing the traits behind these
behaviors would be a good idea. I can make a case for keeping them
turned off by manipulating the environment so that people don't see a
bleak future.
But with respect to AI, if people don't get it right by building
limits/desires into the AIs in the first place, they won't have a chance.
Of course, given my recent experiences, I am not going to put effort
into this probably lost cause.
Keith Henson