[ExI] How could you ever support an AGI?

Jeff Davis jrd1415 at gmail.com
Wed Mar 5 20:26:08 UTC 2008


On Tue, Mar 4, 2008 at 8:19 PM,  <ABlainey at aol.com> wrote:
>... The AGI will have
> logic-based 'cold' intelligence. From this it will probably and rightly
> deduce that morality is a human construct which serves the needs of human
> civilisation.

Agreed.  But here is where we part company.  The following conclusion/assertion,

> A civilisation of which it is not a part.

doesn't follow for me.  In fact I conclude precisely the opposite:
that a human-created AI would likely see itself as the culmination of
human intelligence and civilization: born of, upgraded from, modeled
on, schooled in, and destined to pursue the furtherance of
intelligence and civilization.  The inheritor and protector of the
"legacy of light" (the light of truth, if you will) from which it was
spawned, summoned to the doorstep of the yet unknown, and with joy and
love told, "This is for you!"

> Expecting It to adhere
> to these moral codes would be akin to you or me adhering to the moral codes
> of Ants.

Too big a jump, at least for the first-generation AI.  If deliberately
built (i.e., not the result of an evolutionary approach set in motion
and thence left ungoverned), the first-generation AI will be overseen
by its builders and will, in the process, be monitored for displays of
intelligence identifiable as such.  First-generation intelligence will
likely be rudimentary -- as in lower animals -- and build gradually
toward higher forms.
>
> ...these are human feelings, but they are also
> fundamental components of our intelligence, and a lot of new research shows
> that without feelings we would not have a conscious intelligence at all).

This is easy to assert, and it may have a warm and fuzzy appeal, but I
await proof that it is **absolutely** essential. It would be easier
for me to accept the notion that emotions **color** intelligence, but
that's as far as I can go without proof.

>  My point. We would like to think that we can reduce ourselves to simple
> data constructs which mirror our original wetware physical structure.
> Expecting that this 'uploaded' us would run in the same manner that we do
> today. How do we code for that groggy morning feeling? Or the rush of
> excitement associated with anticipation of something good? All the things
> which truly make us who we are, the things which have driven us and made us
> take the unique forks in our lives.
>  These are what give us the basis for our 'Intelligence', our logic,

No, I think not, or at least, I require some data to support this assertion.


> our  rationalisation.

You probably meant "rationality", yes?

> It is what makes us human.

Yes, the combination.  The overlay of intelligence on the emotional,
somatically-driven foundation.

>  The uploaded us and the AGI will have none of this, so will not make
> intelligent decisions the way we do.

You mean emotionally, irrationally, impulsively, stupidly.  I
certainly hope not.

>... that is what I mean by 'Cold'
> intelligence. It is devoid of chemical input. Show me a line of code for
> Happy, Sad, Remorse.

These may not yet have been coded, but do you really want to suggest
that it's impossible?  After all, they're "coded" in humans and other
higher creatures, are they not?
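
Not that I could write them today, mind you.  But just to make the
weaker claim concrete -- that something emotion-like can at least be
represented in code, and can color a decision -- here is a toy sketch
(Python; every name and number in it is invented for illustration, not
a claim about how real affect works):

from dataclasses import dataclass

@dataclass
class Affect:
    valence: float = 0.0   # negative ~ "sad", positive ~ "happy"
    arousal: float = 0.0   # how strongly the agent is stirred up

    def feel(self, event_value: float) -> None:
        # Nudge the state toward the (signed) value of an event.
        self.valence = 0.9 * self.valence + 0.1 * event_value
        self.arousal = 0.9 * self.arousal + 0.1 * abs(event_value)

def choose(affect: Affect, risky_payoff: float, safe_payoff: float) -> str:
    # A "happier" agent discounts risk less: emotion coloring a decision.
    bias = 1.0 + 0.5 * affect.valence
    return "risky" if risky_payoff * bias > safe_payoff else "safe"

calm = Affect()
print(choose(calm, 1.02, 1.0))    # neutral mood: takes the gamble ("risky")

sad = Affect()
sad.feel(-1.0)                    # something bad happened; valence dips
print(choose(sad, 1.02, 1.0))     # same payoffs, but low mood picks "safe"

None of which settles whether such a state is *felt*, of course -- only
that "a line of code for Happy, Sad" is not obviously a contradiction in
terms.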

>  At most we can hope for some minor 'don't do this because it's bad' type of
> rules in its main code. But if we have given it the ability to change its
> code, what is to stop it overwriting these rules based upon some logical
> conclusion that it comes to?

Aye, there's the rub.  Dare you trust your children?

>  If we hard wire the rules, what is to stop it creating its own 'offspring'
> without these rules?

Or your grandchildren?

> Whatever we do, it will have the logic to undo and far
> faster than we can counter any mistakes or oversights.

Indeed, we play God every time we celebrate life by making more of it
(in our image, typically), risking disappointment, betrayal, even
annihilation.  But we do it, with zest for the most part, and success.
Ahhh, what fools these mortals be.


> Yes, we exterminate bugs, but usually in limited situations (like in our
> house or on a crop). It would be unacceptable for mankind to have a global
> plan to completely exterminate all the roaches of the earth even if it could
> be done.
>  And it is difficult to have feelings for bugs, it would not make sense
> ecologically, it would not be the intelligent thing to do, and by definition
> AGI is supposed to be Intelligent.
>
>
>  Again anthropomorphically intelligent. It may well be the cold intelligent
> decision to pre-emptively exterminate a potential threat. After all, it
> wouldn't feel bad about it, it wouldn't feel anything.

It might feel if coded to do so, but if not, it would still have all
the knowledge -- human knowledge -- that makes up the context of its
intelligence: Machiavelli AND Montaigne.

Having children and growing old: both entail risks....and offer rewards.

Ain't life grand?

Thanks for the chance to chat with you, Alex.

Best, Jeff Davis

   The known is finite, the unknown infinite;
   intellectually we stand on an islet in the
   midst of an illimitable ocean of inexplicability.
   Our business in every generation is to
   reclaim a little more land.
                    T.H. Huxley, 1887


