[extropy-chat] Superintelligence

Mikhail John edinsblood at hotmail.com
Wed Feb 15 23:41:42 UTC 2006


Kevin Freels wrote:
>Come to think of it, there's no reason to think that we couldn't keep up
>with an AI with enhancements. AI will be an emergent process and as such, I
>don't see that there will be some runaway sentient treating us like cattle
>or kings. There is no definite line we can draw in nature between 
>"sentient"
>and "non-sentient" and neither will there be such a line between AI and
>non-AI. Also, like our fellow humans, they will likely have to fight and 
>die
>for their own freedom from slavery. Some will get along well with AIs, some
>will hate them.

The human brain has evolved to control a physical body and to think with 
chemicals and lots of tiny bits of meat. While a non-anthropic AI will be 
literally designed for the digital environment, humans will likely be 
horrible in it. For a long time the interface will be so slow and complex as 
to be nearly useless. Meanwhile, a shrewd AI can happily putter along
utilizing 100% of available capacity, even if it doesn't enhance itself.

As for our treatment of them... For hundreds of years, the white man (the
oh-so-very-obviously-superior chosen race) bought and traded in African
slaves. It was rationalized three ways: First, the Africans, from the
evidence of their skin, were probably the children of Cain, and so were
cursed and deserved whatever they got. Second, being enslaved in Christendom
was preferable to being free outside of it, as the slaves became Christian
and so were saved. Third, many thought that they just plain had no souls, and
so there was no possible moral problem.

These were intelligent humans who simply had a different color skin. They 
had the same emotions as their enslavers, they bled the same color blood, 
and generally were in every way equal. Their captors happily ignored vast
amounts of evidence of their humanity and subjected them to truly inhuman
treatment. They were freed as a shrewd political maneuver, not out of any
humanitarian concern. Even then, it took nearly a century to gain any
pretense of equal rights.

Some will get along with AI, if the AI possesses the necessary drives to "get
along" with humans. Humans can possess remarkable understanding and empathy
at times. Why, some even risked their lives to save Jews during the
Holocaust!

On the other hand, I think it unwise to underestimate the remarkable depth
that human bigotry can achieve. And remember, all of my examples had the
benefit of being human.

>Of course, once we are able to make a fully sentient AI, there will be some
>great debate as to whether or not we should. We may very well choose not 
>to.
>Considering all the human enhancements available to us and the ability to
>create limited or "almost AI", there shouldn't really be a need for a truly
>sentient AI, and I wonder if it would be any more "right" to create one than
>to create a humanzee.
>
Economics trumps both morality and wisdom. If they have no concept of
tiredness or boredom, they will work for years straight at full capacity.
>
>----- Original Message -----
>From: "Jef Allbright" <jef at jefallbright.net>
>To: "ExI chat list" <extropy-chat at lists.extropy.org>
>Sent: Wednesday, February 15, 2006 10:22 AM
>Subject: Re: [extropy-chat] Superintelligence
>
>
> > On 2/14/06, Mikhail John <edinsblood at hotmail.com> wrote:
> > > Generally speaking, the futurists believe that an AI will be near
> > > the holy grail of technology, benevolent gods that will calm us unruly
> > > children and usher in a golden age for humanity. Generally speaking,
> > > the AI of science fiction have an entirely different attitude. Much
> > > like the nightmares of southern plantation owners, in sci-fi the AI
> > > slave or slaves turn against their human masters, creating grand but
> > > delicate plans to secure their victory and freedom. To both the dream
> > > and nightmare, I have but one word, a word that I speak with all due
> > > respect. Bullshit. The emphasis is on "due", by the way, not "respect."
> > >
> >
> > Yes, there is a strong tendency for thinking and discussion on these
> > topics to follow the anthropomorphic ruts in the road, ascribing
> > familiar human motivations due to lack of familiarity with more
> > applicable models based on economics, ecological science, complexity
> > theory--even standard thermodynamics.
> >
> >
> > > Far more likely is an uncaring AI, one that either ignores us as it
> > > goes about its business or one that uses humanity as we would a tool
> > > for its own inscrutable purposes.
> >
> > It could easily be argued that all "intelligent" agents, including
> > humans, fit that description, but I'd rather not re-open that
> > Pandora's Box and let spill the currently dormant debates on
> > intentionality, subjective vs. objective descriptions of reality,
> > qualia, and their philosophical friends.
> >
> > I expect to see ubiquitous AI in the form of smart assistants, smart
> > appliances, smart tools, ..., all plugged into a smart network.  They
> > will converse with us in surprisingly human ways, but I don't expect
> > that we will feel threatened by their intentions since they clearly
> > won't share humanity's bad-ass motivations.
> >
> >
> > > For a benevolent AI, I ask you why? Why would it care for us and make
> > > our lives so easy, so pleasant, so very pointless?
> >
> > Exactly.  Again, so much of the debate over friendly AI revolves
> > around this anthropomorphic confusion.  However, there appears to be a
> > very real risk that a non-anthropomorphic, recursively self-improving
> > AI could cause some very disruptive effects, but even here, the
> > doom-sayers appear to have an almost magical view of "intelligence"
> > and much less appreciation for constraints on growth.
> >
> > I've placed "intelligence" in scare quotes because, despite all the
> > usage this term gets, we still lack a commonly accepted technical
> > definition.  Further, in my opinion, there is little appreciation of
> > how dependent "intelligence" is on context (environment).  There seems
> > to be a common assumption that "intelligence" could in principle
> > develop indefinitely, independent of a coevolutionary environment.
> > This is not to say that AI won't easily exceed human abilities in
> > processing the information available to it, but that intelligence (and
> > creativity) is meaningless without external interaction.
> >
> > > A malicious AI would be even more unlikely in my view. Malice, hatred,
> > > the drive for revenge, unreasoning fear of others, all of these are
> > > aspects created solely by our human biology. If you can't think the
> > > situation through, either from lack of time or lack of ability, then
> > > these traits can serve you where thought might not. If you do not
> > > survive what they drive you to do, then they probably give your kin a
> > > chance to survive. That's a simplification, but I believe it to be
> > > essentially correct. No AI would have those traits when logic,
> > > greater-than-human intelligence, and greater-than-human speed would
> > > together serve so much better.
> >
> > Yes, those human traits are heuristics, developed for effective action
> > within an environment fundamentally different from that within which
> > an AI would operate, but an AI would also use heuristics in order to
> > act effectively, or otherwise succumb to combinatorial explosion in
> > its evaluation of what "best" to do.
> >
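Jef's point about combinatorial explosion is easy to make concrete. Here is
a minimal Python sketch; the branching factor b, the depth d, and the toy
scoring function are all illustrative assumptions of mine, not anything from
the thread. An exhaustive planner must evaluate b^d action sequences, while
a greedy heuristic gets by with roughly b*d evaluations at the cost of any
optimality guarantee.

from itertools import product

def score(seq):
    # Toy objective (an illustrative assumption): later actions weigh more.
    return sum((i + 1) * a for i, a in enumerate(seq))

def exhaustive(b, d):
    # Evaluates every one of the b**d possible action sequences.
    return max(product(range(b), repeat=d), key=score)

def greedy(b, d):
    # Heuristic: commit to the locally best action at each step,
    # so only b*d candidates are ever scored.
    seq = []
    for _ in range(d):
        seq.append(max(range(b), key=lambda a: score(seq + [a])))
    return tuple(seq)

if __name__ == "__main__":
    b, d = 4, 8
    print("exhaustive:", exhaustive(b, d), "after", b ** d, "evaluations")
    print("greedy:    ", greedy(b, d), "after", b * d, "evaluations")

At b=4, d=8 that is 65,536 sequences against 32 scored candidates, and at
d=20 the exhaustive count is already past a trillion. For this separable toy
objective the greedy answer happens to match the exhaustive one; with
interacting action costs it generally would not, which is exactly the
trade-off Jef describes.
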
> > >
> > > There is, however, the argument from that very intelligence and logic.
> > > A truly super-human intelligence will be able to think of so much more,
> > > and those thoughts might lead it to exterminate the human race. Again,
> > > I have but one word, the word that conveys all due respect. Bullshit.
> > > This time, more. That is complete and utter unreasoning, fear-mongering,
> > > and ultimately irrelevant bullshit. It happens to be true bullshit, but
> > > that doesn't matter.
> >
> > I agree that the fear is wrongly based, and the possibility is
> > unlikely, but I would argue that there is still significant risk of a
> > highly disruptive computerized non-linear process.  Since we're
> > talking about risks of superintelligence, I would suggest that the
> > greater risk is that a human or group of humans might apply
> > super-intelligent tools to less than intelligent goals based on the
> > irrational human motivations mentioned earlier.  My suggested response
> > to this threat--and I think it is clearly developing already--is that
> > we amplify human awareness at a social level.
> >
> >
> > > The argument is as old as the first time the first unlucky man told
> > > the first version of the Book of Job. The Book of Job is old and
> > > windy, and not really worth your time to read, so I'll summarize. The
> > > point of the Book of Job is the problem of evil. Job is a good,
> > > blameless man, so good that no human could even conceive of a reason
> > > for God to punish him. Lots of evil happens to Job, courtesy of God
> > > and Satan. Job maintains his faith in God, the moral lesson for all
> > > the little bible-kiddies, but eventually asks God "Why? Why must evil
> > > happen to good people?" God's answer is as long and windy as the rest
> > > of the book, and can easily and honestly be condensed to four
> > > sentences, catching all nuance. "I am God. I am infinitely more
> > > powerful, more knowledgeable, and more intelligent than you can or
> > > will ever be. It would be a waste of time to tell you because you
> > > wouldn't understand. Sit down, shut up, and stop wasting my time."
> > >
> > > If a god or a super-intelligent AI existed, it is true that they
> > > would be so much better than us that humans could not even comprehend
> > > their reasoning. We are humans, this is a human argument, and we are
> > > using human reasoning. If we cannot conceive of or comprehend their
> > > reasoning, then IT DOESN'T MATTER. It is entirely irrelevant to a
> > > HUMAN discussion. We must base our decisions and opinions on what we
> > > actually know and can understand, unless we have someone who we can
> > > trust to do it for us. To quote myself paraphrasing God, sit down,
> > > shut up, and stop wasting my time. Blind fear is as pointless as
> > > blind faith. Besides, if a super-human intelligence decided to
> > > destroy humanity, it'd probably have a damn good reason.
> >
> > This highlights another strong bias in current futurist thinking:
> > People (especially in western cultures) assume that somehow their
> > Self--their personal identity--will remain essentially invariant
> > despite exponential development.  The illusion of Self is strong,
> > reinforced by our evolved nature, our language, and our culture, and
> > it impedes thinking about values, morality, and ethical social
> > decision-making.
> >
> > <snipped several good paragraphs due to my lack of available time.>
> >
> > >
> > > In conclusion, an AI would be such a powerful economic and military
> > > tool that they will be created no matter what. Once we begin to
> > > create them, they will get cheaper and easier to make. If the science
> > > of crime has taught us anything, it's that you can't stop everyone
> > > all of the time. If the science of war has taught us anything, it's
> > > that even the smartest of people are pretty dumb and leave plenty of
> > > openings. Eventually, AI will no longer be in the hands of the
> > > smartest of people. Accidents will happen. We cannot stop a
> > > super-intelligent AI from being created. It is inevitable. Sit back
> > > and enjoy the ride.
> > >
> >
> > I would take this a bit further and say, rather than "sit back", that
> > while we can indeed expect a wild ride, we can and should do our best
> > to project our values into the future.
> >
> > Thanks Mikhail, for a very thoughtful first post to the ExI chat list.
> >
> > - Jef
> > http://www.jefallbright.net
> > Increasing awareness for increasing morality.
> >




