[ExI] Education/destruction (was support an AGI?)
hkhenson
hkhenson at rogers.com
Wed Mar 5 08:17:17 UTC 2008
At 11:08 AM 3/4/2008, Kevin Freels wrote:
snip
>One thing you will notice is that the greater the education a person
>has, the less likely they engage in wholesale destruction of life.
I would be interested in what observations you used to make that statement.
In the current world, educated people are generally better off
economically, and richer people are under less stress. Put them
under stress, e.g., hungry with no prospects for the next meal, and I
venture to say they would be as destructive as people with no education at all.
In addition, it wasn't grade school dropouts who invented and
manufactured nuclear weapons. And finally, put *me* under enough
stress, such as being locked up in solitary confinement, and . . .
http://www.kuro5hin.org/story/2007/10/30/18253/301 (Rule of
thumb: don't unjustly lock up engineers. Killing them is much safer.)
snip
>So I would expect with an AGI that people could still choose their
>own destiny. They could remain human and continue as before except
>in a much better world, or they could upload, convert to a
>mechanical body for exploration, or any combination in between.
I think you are missing something. Humans, being evolved animals, have
common characteristics. Not all people are vulnerable to drug
addiction, but I venture to say that all of them are vulnerable to
direct stimulation of the brain reward circuits.
>Some may even make copies of themselves digitally and shoot
>themselves across the universe on a laser beam. Some will "perfect"
>themselves into oblivion. Others will choose to remain as
>traditionally human as possible. Divergence is almost inevitable.
You are making a huge assumption here: that humans will be in charge
of their own destiny post-singularity. You might be right, but I
don't see it being any more likely than a pet rat being in charge of its destiny.
snip
>All this doom and gloom about the pointlessness of it all really concerns me.
The future is not ordained. It's hard to say how it is going to turn
out and it's even harder to say how people will feel about it when it happens.
It's interesting that Bill Hibbard and I both turned to fiction to
express what we felt were potential routes to answering Eliezer's concerns.
http://www.ssec.wisc.edu/~billh/g/mcnrstm.html
http://www.terasemjournals.org/GN0202/henson.html
http://www.singinst.org/upload/artificial-intelligence-risk.pdf
Keith Henson