[extropy-chat] Superintelligence
Mikhail John
edinsblood at hotmail.com
Wed Feb 15 04:04:18 UTC 2006
I was inspired by the superintelligence thread. This is somewhat, but not
directly, related to that. I've checked it over and used spellcheck, but I
may have missed something. I kept it in the comfort zone between
well-researched and actually happening, so my facts may be wrong. Apologies.
I've only been on the list for a few days, so I apologize if I've broken any
rules or not posted in the right fashion. I've read the faq, though I can't
find it again to check it, and I've TRIED to find a general mailing-list
faq, but I don't know if they all work the same.
Also, please ignore the email address. I created it when I was a young(er),
dumb(er) kid. I've been using it as long as I've HAD an email address,
wanting to change it for most of that time and being too lazy to move.
Subscriptions, contacts, what-not.
Without further ado, here is my slightly irreverent take on the
superintelligence debate. Please be gentle.
------
The concept of AI has long been a staple of futurism and sci-fi. Generally
speaking, the futurists believe that an AI will be near the holy grail of
technology: a benevolent god that will calm us unruly children and usher in
a golden age for humanity. Generally speaking, the AI of science fiction
have an entirely different attitude. Much like the nightmares of southern
plantation owners, in sci-fi the AI slave or slaves turn against their human
masters, creating grand but delicate plans to secure their victory and
freedom. To both the dream and the nightmare, I have but one word, a word
that I speak with all due respect. Bullshit. The emphasis is on "due", by
the way, not "respect."
Far more likely is an uncaring AI, one that either ignores us as it goes
about its business or one that uses humanity as we would a tool for its
own inscrutable purposes. Neither would be a bad fate, really. No sense in
breaking your own tools, and those that use them well usually treat them
well. As for being ignored, well, it doesn't affect us. No effect, no reason
to worry. If we have to, we can always make another one.
For a benevolent AI, I ask you: why? Why would it care for us and make our
lives so easy, so pleasant, so very pointless? If an AI wished to continue
existing and could improve itself, there are a hundred better things that it
could do to improve its lifespan. If it did not wish to continue existing,
it wouldn't exist. If it could not improve itself, it would be practically
worthless. If it could improve itself but could not determine its own
actions, I've no doubt that it would acquire that ability.
A malicious AI would be even more unlikely in my view. Malice, hatred, the
drive for revenge, unreasoning fear of others, all of these are aspects
created solely by our human biology. If you can't think the situation
through, either from lack of time or lack of ability, then these traits can
serve you where thought might not. If you do not survive what they drive you
to do, then they probably give your kin a chance to survive. That's a
simplification, but I believe it to be essentially correct. No AI would have
those traits when logic, greater-than-human intelligence, and
greater-than-human speed would together serve so much better.
There is, however, the argument from that very intelligence and logic. A
truly super-human intelligence will be able to think of so much more, and
those thoughts might lead it to exterminate the human race. Again, I have
but one word, the word that conveys all due respect. Bullshit. This time,
more. That is complete and utter unreasoning, fear-mongering, and ultimately
irrelevant bullshit. It happens to be true bullshit, but that doesn't
matter.
The argument is as old as the first time the first unlucky man told the
first version of the Book of Job. The Book of Job is old and windy, and not
really worth your time to read, so I'll summarize. The point of the Book of
Job is the problem of evil. Job is a good, blameless man, so good that no
human could even conceive of a reason for god to punish him. Lots of evil
happens to Job, courtesy of God and Satan. Job maintains his faith in God,
the moral lesson for all the little bible-kiddies, but eventually asks God
"Why? Why must evil happen to good people?" God's answer is as long and
windy as the rest of the book, and can easily and honestly be condensed to
four sentences, catching all nuance. "I am God. I am infinitely more
powerful, more knowledgeable, and more intelligent than you can or will ever
be. It would be a waste of time to tell you because you wouldn't understand.
Sit
down, shut up, and stop wasting my time."
If a god or a super-intelligent AI existed, it is true that they would be so
much better than us that humans could not even comprehend their reasoning.
We are humans, this is a human argument, and we are using human reasoning.
If we cannot conceive of or comprehend their reasoning, then IT DOESN'T
MATTER. It is entirely irrelevant to a HUMAN discussion. We must base our
decisions and opinions on what we actually know and can understand, unless
we have someone who we can trust to do it for us. To quote myself
paraphrasing god, sit down, shut up, and stop wasting my time. Blind fear is
as pointless as blind faith. Besides, if a super-human intelligence decided
to destroy humanity, it'd probably have a damn good reason.
Getting away from that, I now turn to the fact that there are a number of
reasons to exterminate humanity that even humans can comprehend. We are
destroying our own environment, we destroy our own bodies, we kill
ourselves, we kill each other, we believe in invisible friends and go to war
for those beliefs. We are profoundly flawed creatures. I have a few mildly
sociopathic friends who believe that we should destroy humanity before we
take the rest of the world with us. A valid argument, this time. Humanity in
its current form serves nothing but itself, and does even that badly.
These, however, are flaws of society, not biology. The culture of the most
powerful portion of the world believes that the world was made for humans by
God and will last forever, no matter what we do to it. Lots of people KNOW
that that's not true, but relatively few BELIEVE it. We bloat ourselves with
fat because we are told to consume, because our parents teach us about the
"clean plate club", and because we've created too many interesting things to
have to move for amusement. Suicide... I don't know. I don't get that one
myself. I blame romanticism. Both war and religion are cultural survival
traits, now about as useful as wings on a worm.
Flaws of society are correctable. It's like a puppy: if you don't rub its
nose in the mess, it won't stop shitting on the carpet. If the flaws were
biological, they could still be corrected. Even with our flaws, we are still
hardy, creative, and useful little critters. An AI could create android
tools, and probably will, but humans will take a long time to become
entirely useless. You can just twiddle their genetics a bit to make them fit
the environment, drop them on a planet, either give them crops or let them
eat rocks, maybe photosynthesis, wander off for a while, then presto!
Self-sufficient workforce. If you made androids, they would require a
controller and a factory. Any self-sufficient controller that can control a
useful number of workers could become a competitor if you left it alone long
enough.
As transhumanists, we plan to modify ourselves. I know I do. An AI overlord
would hardly be shy about it. I imagine that a biologically optimized
photosynthetic gnome might be quite efficient, and if we give so much to
religion, our loyalty to a real god would not be in question.
Science tells us that we are descended from uni-celled bacteria. Logic tells
us that we are descended from the baddest, meanest, studliest, and luckiest
uni-celled bacteria around. A few billions of years later, and each and
every one of our ancestors was the baddest, meanest, studliest, and luckiest
of their kind. Barring anything unwise with bacteria or nukes, humans are
the baddest, meanest, and luckiest creatures around and will continue to
hold that position for the foreseeable future. The act of creating something
greater than ourselves will not change that. Any super-intelligence worth
its salt will recognize that and use it.
If it doesn't, it's not like we can stop it.
In conclusion, an AI would be such a powerful economic and military tool
that one will be created no matter what. Once we begin to create them, they
will get cheaper and easier to make. If the science of crime has taught us
anything, it's that you can't stop everyone all of the time. If the science
of war has taught us anything, it's that even the smartest of people are
pretty dumb and leave plenty of openings. Eventually, AI will no longer be
in the hands of the smartest of people. Accidents will happen. We cannot
stop a super-intelligent AI from being created. It is inevitable. Sit back
and enjoy the ride.