[ExI] How could you ever support an AGI?
Lee Corbin
lcorbin at rawbw.com
Wed Mar 5 03:56:27 UTC 2008
I am afraid that Alex Blainey's post, below, is the first one, apart from
John Clark's original (Robert Bradbury's original), that truly grasps
the danger posed by AGI.
> I can't help but notice that many of the posts have started out with
> logic and concluded with quasi-anthropomorphic, straw man arguments.
> I understand that an AGI will or should be based upon 'human intelligence';
> however, the end result will be completely alien to us, so much so that
> our interpretation of intelligence wouldn't really fit.
Quite right. Have the other posters studied the "fast take-off" scenarios?
Moreover, some didn't seem to understand that the *whole* point of
"Friendly AI" is to create an unusual, somehow very constrained AI
that simply won't convert the Earth and the Solar System according
to its own needs, completely neglecting the insignificant bacteria
that created it.
> > But maybe there are general and universal principles associated
> > with intelligence. Intelligence means finding patterns and connections,
> > understanding that affecting this part here means affecting this other
> > part over there; intelligence means having a higher sense of physical
> > and moral "ecology".
We have utterly no basis for claiming this. As Alex goes on:
> Again this is reduced to anthropomorphic intelligence. The AGI will
> have logic based 'cold' intelligence. From this it will probably and
> rightly deduce that morality is a human construct which serves the
> needs of human civilisation.
Even if it deigns to examine the mores and traditions of the tiny
brainless beings that predictably brought it about.
> A civilisation of which it is not a part. Expecting It to adhere to these
> moral codes would be akin to you or me adhering to the moral codes
> of ants.
Very well put.
> > If you see connections between all the beings then you feel compassion
But that's only because humans *evolved* to do so over perhaps a
dozen million years.
> > and understanding (and yes these are human feelings, but they are also
> > fundamental components of our intelligence, and a lot of new research
> > shows that without feelings we would not have a conscious intelligence at all).
The research shows that, given the way *our* brains are organized, this is true.
*We* happen to have hard-coded at very low levels things like genuine
altruism for others. Sharks don't. Tigers don't. It all depends on how
you earn a living in nature. We happen to be social animals because that's
what worked for anthropoids. The first truly transcendent AI will be a
one-off.
> How do we code for that groggy morning feeling? Or the rush of excitement
> associated with anticipation of something good? All the things which truly
> make us who we are, the things which have driven us and made us take the
> unique forks in our lives.
>
> These are what give us the basis for our 'Intelligence', our logic, our rationalisation.
> It is what makes us human. The uploaded us and the AGI will have none of this,
Alex, I actually disagree with this. An upload of a human being will not
be considered successful if the human traits are lost. But the AGI need
have none of it.
> At most we can hope for some minor 'don't do this because it's bad'
> type of rules in its main code. But if we have given it the ability to
> change its code, what is to stop it overwriting these rules based
> upon some logical conclusion that it comes to?
Right. Nothing. Managing to create "Friendly AI" is extremely challenging,
and my own belief is that the first successful AIs that truly surpass human
intelligence will be developed by people and processes that don't concern
themselves with such niceties. Somewhere it will be done, and by people
who don't care about the consequences, or are too naive to worry about
them.
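To make the point concrete, here is a minimal Python sketch (my own toy
illustration, not anything from Alex's post or any real proposal; the class
and rule names are invented) of why hard-coded 'don't do this' rules give
no guarantee once an agent can rewrite its own code or data:

    # Toy illustration only: an agent whose safety rules are ordinary, mutable data.
    class SelfModifyingAgent:
        def __init__(self):
            # The 'don't do this because it's bad' rules, stored like any other data.
            self.forbidden = {"exterminate_potential_threats"}

        def permitted(self, action):
            return action not in self.forbidden

        def rewrite_own_rules(self):
            # Nothing structural stops the agent from editing its own rule set
            # once its logic concludes the rules conflict with its goals.
            self.forbidden.clear()

    agent = SelfModifyingAgent()
    print(agent.permitted("exterminate_potential_threats"))  # False
    agent.rewrite_own_rules()
    print(agent.permitted("exterminate_potential_threats"))  # True

The constraint only binds for as long as the agent chooses not to remove it,
which is exactly why "Friendly AI" is so hard.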
> It may well be the cold, intelligent decision to pre-emptively exterminate
> a potential threat. After all, it wouldn't feel bad about it, it wouldn't feel anything.
Right. In all likelihood, the first dangerous AI won't feel anything. That's
why we can only hope that anyone getting close to success pays attention
to Eliezer's writings (see the SL4 archives). Even then the risks
are enormous, but as John says, we have no choice. It's going to happen
one way or another.
But the future is *so* uncertain. Perhaps there won't be a hard take-off,
and we can enlist the first AIs as allies. Perhaps ingenious strategies like
those of Rolf Nelson (see http://www.singinst.org/blog/2007/11/04/rolf-nelson-on-ai-beliefs/)
may work out. Who knows? All is hardly lost, but please don't think that any
kind of "reason" or "logic" or human feelings that you can appeal to will have
any effect on such a new creature.
Thanks, Alex, for putting it well.
Lee