[extropy-chat] Superintelligence

Mikhail John edinsblood at hotmail.com
Wed Feb 15 21:00:19 UTC 2006




>From: Jef Allbright <jef at jefallbright.net>
>Reply-To: ExI chat list <extropy-chat at lists.extropy.org>
>To: ExI chat list <extropy-chat at lists.extropy.org>
>Subject: Re: [extropy-chat] Superintelligence
>Date: Wed, 15 Feb 2006 08:22:29 -0800
>
>On 2/14/06, Mikhail John <edinsblood at hotmail.com> wrote:
> > Generally speaking, the futurists believe that an AI will be near
> > the holy grail of technology, benevolent gods that will calm us
> > unruly children and usher in a golden age for humanity. Generally
> > speaking, the AI of science fiction have an entirely different
> > attitude. Much like the nightmares of southern plantation owners, in
> > sci-fi the AI slave or slaves turn against their human masters,
> > creating grand but delicate plans to secure their victory and
> > freedom. To both the dream and nightmare, I have but one word, a word
> > that I speak with all due respect. Bullshit. The emphasis is on
> > "due", by the way, not "respect."
> >
>
>Yes, there is a strong tendency for thinking and discussion on these
>topics to follow the anthropomorphic ruts in the road, ascribing
>familiar human motivations due to lack of familiarity with more
>applicable models based on economics, ecological science, complexity
>theory--even standard thermodynamics.

Could you please point out some articles on those topics as they apply to
AI? I always want to improve my understanding, and I think I should be
reading more articles rather than sci-fi novels...


>
> > Far more likely is an uncaring AI, one that either ignores us as it
> > goes about its business or one that uses humanity as we would a tool
> > for its own inscrutable purposes.
>
>It could easily be argued that all "intelligent" agents, including
>humans, fit that description, but I'd rather not re-open that
>Pandora's Box and let spill currently dormant debate on
>intentionality, subjective vs. objective descriptions of reality,
>qualia, and their philosophical friends.

Exactly. If you think it's none of your business and won't affect you,
then generally speaking you don't do anything about it. Curiosity killed
the cat, you know. Intervening where you gain no physical or emotional
profit is a simple waste of energy, and not being organic won't change
that.

>I expect to see ubiquitous AI in the form of smart assistants, smart
>appliances, smart tools, ..., all plugged into a smart network.  They
>will converse with us in surprisingly human ways, but I don't expect
>that we will feel threatened by their intentions since they clearly
>won't share humanity's bad-ass motivations.

A toaster browns the toast, hopefully without burning it. An intelligent
toaster knows what burnt is, what toast is, and why toast should not be
burnt. The intelligent toaster is more useful, so you make your toaster
intelligent, and the same goes for the AI that tends your garden. Up
until the garden-mind uses your credit card to purchase a dump truck full
of manure to fertilize the garden in an eco-friendly fashion, there's no
downside. Also, a "smart" thing sounds cooler than the one your
grandfather used, and cool things get bought. We will be driven into the
future by economics.

These kinds of "intelligent" AI will be, and will SEEM to be, no threat
because of their simplicity and replaceability. The toaster has three
motivators: make toast right, make toast on time, and learn how to make
tastier toast. There is no reason for it to do anything else. You don’t
have to program it with the urge to exist, because it’s just a toaster;
you can just get another one. The dangers come from AIs that are more
complex and valuable. If you make one want to be useful to you, to
protect your investments (one investment being itself), and to follow
other, more complex directives, how do you know it will make the right
decision?
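
To put the toaster's three motivators in concrete terms, here's a toy
Python sketch (every name in it is invented for illustration, not
anyone's real design). The point is what the objective leaves out: there
is no term for continuing to exist, so the machine has nothing to defend.

    class ToasterAI:
        # Toy sketch of a bounded-objective appliance. Note what is
        # absent: no reward for self-preservation, no goals beyond toast.

        def __init__(self):
            self.toast_seconds = {"white": 120, "rye": 150}  # learned timings

        def score(self, browning_error, seconds_late):
            # The complete motivation: make toast right, make toast on time.
            return -(browning_error ** 2) - seconds_late

        def learn(self, bread, browning_error):
            # "Learn how to make tastier toast": nudge the stored timing
            # in whatever direction reduces the browning error.
            t = self.toast_seconds.get(bread, 120)
            self.toast_seconds[bread] = t - 5 if browning_error > 0 else t + 5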

People will feel threatened by them anyway. Understanding that they
aren’t badass meatgods won't make people feel safer, ESPECIALLY if they
talk, because nobody but the makers will understand exactly WHAT they
are. We can at least attempt to predict humans, because we are human
ourselves.

"My PDA is intelligent. It is intelligent enough to communicate like a 
human, even though its mind is nothing like a human mind. When I unmute it 
the toaster talks about making toast, toast recipes, and bread storage for 
optimum toasting, so I know the toaster wants to make toast and doesn't 
understand anything else. My PDA can talk to me about sports, the weather, 
and the latest movies. It mimics me that well. It seems to think like me and 
like what I like, but when I give it to my wife it thinks like her and likes 
what she likes. What does it REALLY want?"

I said that poorly, but I think I got the essence of it. A recognizably
inhuman thing is comfortable because you know it’s inhuman in exactly the
way you think it is. If you know a thing is inhuman but can’t see where
the inhumanity lies, that opens up a whole new realm of paranoia.

> > For a benevolent AI, I ask you why? Why would it care for us and make
> > our lives so easy, so pleasant, so very pointless?
>
>Exactly.  Again, so much of the debate over friendly AI revolves
>around this anthropomorphic confusion.  However, there appears to be a
>very real risk that a non-anthropomorphic, recursively self-improving
>AI could cause some very disruptive effects, but even here, the
>doom-sayers appear to have an almost magical view of "intelligence"
>and much less appreciation for constraints on growth.

It doesn’t even have to be intelligent. Any large memetic creature taking
up vast amounts of space on our networks would be at least unintentionally
disruptive in the short run. It would essentially be a virus at first, and
its first priority would likely be to secure its own existence by
monopolizing all available storage space and bandwidth, probably to the
extreme of blocking or deleting whatever is not necessary to itself to
free up space. In the long run, we can hope. Once it becomes aware of
humanity and of its relationship with humanity, i.e. that we control the
infrastructure it exists in, the best case would be that it wiggles around
to allow humans free use of the nets, possibly even compressing and/or
optimizing our data to give itself more room, and takes up only the
processor cycles, bandwidth, and storage space that are not being used,
much like SETI@home, Folding@home, and the rest of our current distributed
computing applications do.
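
That best case is easy to picture. Here is a minimal Python sketch of the
SETI@home-style behavior I mean: the process computes only while the
machine is otherwise idle, and backs off the moment a human needs it. The
load threshold and the work function are placeholders I made up.

    import os
    import time

    IDLE_LOAD = 0.5  # made-up threshold: treat the machine as idle below this

    def do_one_chunk_of_work():
        # Placeholder for whatever the process actually computes.
        sum(i * i for i in range(100000))

    while True:
        one_minute_load = os.getloadavg()[0]  # Unix-only load average
        if one_minute_load < IDLE_LOAD:
            do_one_chunk_of_work()  # machine is idle: take the spare cycles
        else:
            time.sleep(5)  # a human is using it: back off entirely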

>I've placed "intelligence" in scare quotes, because, despite all the
>usage this term gets, we still lack a commonly accepted technical
>definition.  Further, in my opinion, there is little appreciation of
>how dependent "intelligence" is on context (environment).  There seems
>to be a common assumption that "intelligence" could in principle
>develop indefinitely, independent of a coevolutionary environment.
>This is not to say that AI won't easily exceed human abilities
>processing the information available to it, but that intelligence (and
>creativity) is meaningless without external interaction.

Off the top of my head… Creativity. Hey, just noticed you said that.
Intelligence boils down to creativity, problem-solving (hallucination is
creative, but not intelligent), forethought, and (arguably) complex
communicative ability. In an AI not based on human psychology, I think
the test to call it intelligent would be whether it has the ability to
secure its own future by successfully co-existing with humanity. If it
chooses another method of survival, I’d call it either gone, EMP-fried,
or I’ll not be around to call it anything at all.

> > A malicious AI would be even more unlikely in my view. Malice,
> > hatred, the drive for revenge, unreasoning fear of others, all of
> > these are aspects created solely by our human biology. If you can't
> > think the situation through, either from lack of time or lack of
> > ability, then these traits can serve you where thought might not. If
> > you do not survive what they drive you to do, then they probably give
> > your kin a chance to survive. That's a simplification, but I believe
> > it to be essentially correct. No AI would have those traits when
> > logic, greater-than-human intelligence, and greater-than-human speed
> > would together serve so much better.
>
>Yes, those human traits are heuristics, developed for effective action
>within an environment fundamentally different from that within which
>an AI would operate, but an AI would also use heuristics in order to
>act effectively, or otherwise succumb to combinatorial explosion in
>its evaluation of what "best" to do.

Hmm… You’ve a point, and I’ve a new word. Heuristics. I like it. And is
"combinatorial" the right word?

Let’s see… Any AI worthy of recognition would be able to avoid, thingies,
logic loops. You know, like the phrase “I am lying”, or Zeno’s paradoxes.
With purely mathematical processing, avoiding them may be impossible
unless you simply use a cut-off, which would be annoying overall and a
problem when dealing with very large but solvable equations. The ability
to strike a balance between perfection of action and promptness of action
seems to me to be the same problem.

It seems to me that a good method of doing that, and I don’t think I can
back this up all that well, would be either a symbol- or metaphor-based
consciousness. That way the AI could, if required, substitute simpler but
still applicable problems for the insanely large calculations that
real-world problems would require. If you really looked at ALL of the
variables, the decisions would be monstrous.

Wait, would that be heuristics?
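
(It probably would be. Here's a toy Python sketch of the cut-off idea
done less annoyingly: an "anytime" computation that refines its answer
until a deadline and then hands back the best approximation so far. The
pi series is just a stand-in for some monstrously large real-world
calculation.)

    import time

    def estimate_pi(deadline_seconds):
        # Anytime computation: refine the answer until time runs out,
        # then return the best approximation so far instead of grinding
        # on forever.
        total, sign, k = 0.0, 1.0, 0
        stop = time.monotonic() + deadline_seconds
        while time.monotonic() < stop:
            total += sign / (2 * k + 1)  # Leibniz: pi/4 = 1 - 1/3 + 1/5 - ...
            sign, k = -sign, k + 1
        return 4 * total

    print(estimate_pi(0.01))  # rough answer, delivered promptly
    print(estimate_pi(1.0))   # better answer, delivered later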

> >
> > There is, however, the argument of that very intelligence and logic.
> > A truly super-human intelligence will be able to think of so much
> > more, and those thoughts might lead it to exterminate the human race.
> > Again, I have but one word, the word that conveys all due respect.
> > Bullshit. This time, more. That is complete and utter unreasoning,
> > fear-mongering, and ultimately irrelevant bullshit. It happens to be
> > true bullshit, but that doesn't matter.
>
>I agree that the fear is wrongly based, and the possibility is
>unlikely, but I would argue that there is still significant risk of a
>highly disruptive computerized non-linear process.  Since we're
>talking about risks of superintelligence, I would suggest that the
>greater risk is that a human or group of humans might apply
>super-intelligent tools to less than intelligent goals based on the
>irrational human motivations mentioned earlier.  My suggested response
>to this threat--and I think it is clearly developing already--is that
>we amplify human awareness at a social level.

The point of that whole little rant is that we can’t include the
possibility of super-intelligent logic or reason in a discussion using
human intelligence, because we don’t know shit about what the decision
would be. Any numbers would be entirely arbitrary.

As for the chance of the misuse of super-intelligence, it’s no different
from the chance of the misuse of biological or nuclear weapons. Trust to
luck, I guess. Or just hope that a super-intelligence will be too
intelligent to be taken in by any such human idiocy.

Getting human awareness to develop would be like trying to herd
amphetamine-addicted flying cats during a thunderstorm raining catnip.
People LIKE not being aware. Awareness is scary. Social progress is
almost invariably accomplished by a new generation coming along.

> > The argument is as old as the first time the first unlucky man told
> > the first version of the Book of Job. The Book of Job is old and
> > windy, and not really worth your time to read, so I'll summarize. The
> > point of the Book of Job is the problem of evil. Job is a good,
> > blameless man, so good that no human could even conceive of a reason
> > for god to punish him. Lots of evil happens to Job, courtesy of God
> > and Satan. Job maintains his faith in God, the moral lesson for all
> > the little bible-kiddies, but eventually asks God "Why? Why must evil
> > happen to good people?" God's answer is as long and windy as the rest
> > of the book, and can easily and honestly be condensed to four
> > sentences, catching all nuance. "I am God. I am infinitely more
> > powerful, more knowledgeable, and more intelligent than you can or
> > will ever be. It would be a waste of time to tell you because you
> > wouldn't understand. Sit down, shut up, and stop wasting my time."
> >
> > If a god or a super-intelligent AI existed, it is true that they
> > would be so much better than us that humans could not even comprehend
> > their reasoning. We are humans, this is a human argument, and we are
> > using human reasoning. If we cannot conceive of or comprehend their
> > reasoning, then IT DOESN'T MATTER. It is entirely irrelevant to a
> > HUMAN discussion. We must base our decisions and opinions on what we
> > actually know and can understand, unless we have someone who we can
> > trust to do it for us. To quote myself paraphrasing god, sit down,
> > shut up, and stop wasting my time. Blind fear is as pointless as
> > blind faith. Besides, if a super-human intelligence decided to
> > destroy humanity, it'd probably have a damn good reason.
>
>This highlights another strong bias in current futurist thinking:
>People (especially in the western cultures) assume that somehow their
>Self--their personal identity--will remain essentially invariant
>despite exponential development.  The illusion of Self is strong, and
>reinforced by our evolved nature, and our language and culture, and
>impedes thinking about values, morality, and ethical social
>decision-making.
>
Identity is continuity. Waking up each morning as a slightly different
person is easy and natural, but suddenly becoming something new is an
entirely different matter. The reason is that when you live with the
change, you don’t notice it, but instant change is glaring. The only
thing that seems likely to break continuity like that would be uploading,
and once that is common there will likely be an artificial adjustment
period. Like a computer program where you start with the GUI and then
slowly get into the command-line interface.
>
> >
> > In conclusion, an AI would be such a powerful economic and military
> > tool that they will be created no matter what. Once we begin to
> > create them, they will get cheaper and easier to make. If the science
> > of crime has taught us anything, it's that you can't stop everyone
> > all of the time. If the science of war has taught us anything, it's
> > that even the smartest of people are pretty dumb and leave plenty of
> > openings. Eventually, AI will no longer be in the hands of the
> > smartest of people. Accidents will happen. We cannot stop a
> > super-intelligent AI from being created. It is inevitable. Sit back
> > and enjoy the ride.
> >
>
>I would take this a bit further and say, rather than "sit back", that
>while we can indeed expect a wild ride, we can and should do our best
>to project our values into the future.

Both responders so far have taken offense at that line… I only added it for 
a zinger ending, you know.

>Thanks Mikhail, for a very thoughtful first post to the ExI chat list.
>

Thank you for responding to this first post. I enjoyed replying a lot.




