[extropy-chat] IQ vs Uploads (was: what to do)

Adrian Tymes wingcat at pacbell.net
Mon Jun 13 20:40:12 UTC 2005


--- giorgio gaviraghi <giogavir at yahoo.it> wrote:
> The entire paragraph is based on one important
> assumption: we are assuming that AIs are smarter than
> humans.  Without such an assumption we have a HAL-like
> 2001 situation where, at the end, the human is still
> in control.

Ah, and here we get to another layer of the question: what does it mean
to be "smarter"?  HAL was quite possibly smarter than any of the human
crew.  Certainly, it was capable of forming a plan to kill all the crew
members to ensure its own goals were met, and of mostly carrying it out
(though not completely successfully).  At least in its own mind, it
believed its intelligence to be superior to the humans', and certain IQ
tests might well have given it a higher score (although I recall
hearing that Mr. Clarke once commented that HAL's IQ was only supposed
to be about 50).

> But if we assume that they are smarter, then how can
> we believe that they will allow themselves to be made
> ineffective and practically killed by the first human
> who unplugs them?

Being smart and having much control over the physical world are not the
same thing.  Case in point: George Bush, President of the United
States, who, I think most people (even his supporters) would agree, is
not as smart as most Nobel Prize winners, but who inarguably has much
more control over things that can affect the world and his personal
safety than the average Nobel Prize winner does.  Indeed, a paranoid
focus on survival may actually decrease intelligence - if only because
one spends so many cycles considering scenarios for self-preservation
rather than solving problems.

There's also the key phrase "made ineffective": it's one thing to go
from being a free human being (or equivalent) to being trapped in a
box.  It's another if one has always been an immobile box.

A truly smart AI may realize that the only short-term scenario that
leads to self-preservation is to stop worrying about survival and do
what the humans want, so that they will trust it more and give it more
capabilities.  Or how about the case of a smart AI that has been raised
to care for humanity as its children (so as to design upgrades and/or
upload paths for them), with the same self-sacrificing memeplex seen in
human mothers and fathers throughout history, but applied for the
benefit of all humans (at least, those who would accept the AI's help)?

> The first thing that they would learn is how to
> survive, and they will avoid being eliminated by a
> simple command.

Learning how to survive is very hard - impossible, really - without
first learning about the world, including concepts such as "survival"
and "commands".

You might also want to consider why they would want to survive.  Just
because?  Some AIs might focus on that - but, again, in an
equal-generation competition with other AIs, they'd probably be at a
competitive disadvantage against AIs that focus directly on whatever
fitness/survival criteria are out there, be they designing faster
children sooner, helping humanity along, or whatever.  Some AIs might
excuse themselves from the race and strike out on their own to survive
- just as some humans might.  Similar factors affect the chances of
survival in both cases, once one is cut off from and in self-imposed
opposition to the still-evolving AIs.

While we might not be able to fully predict the behaviors of smarter
AIs, that's not to say we can't predict anything, nor is it to give
implicit blessing to the prediction being made here - and it IS a
prediction, just like the predictions that the same argument says
cannot be made (or believed) - that those AIs will want to survive
first and foremost, and that they are likely to believe their best path
is to dominate and oppress the human race.  (A modified version may
concede that this is merely possible, but hold that if there's any
chance then we should devote our efforts to preventing it...but see
Pascal's Wager, and specifically its disproof.)

