[extropy-chat] IQ vs Uploads (was: what to do)

Samantha Atkins sjatkins at mac.com
Mon Jun 13 22:37:19 UTC 2005


Why are we rehashing things that have been much more thoroughly and  
competently discussed in the past on this list?  Slow day?

-s


On Jun 13, 2005, at 2:32 PM, giorgio gaviraghi wrote:

> Maybe we should make another important assumption
> about AIs: they have individual free will.
> In this case they could disobey human commands, have
> their own goals, make their own decisions, and refuse
> to "unplug" themselves.
> In a smarter-than-human scenario they could connect
> with each other and create a collective mind, billions
> of times more powerful than any individual.
> If you consider this possibility we have unlimited
> scenarios, and none of them looks good for humans.
> --- Adrian Tymes <wingcat at pacbell.net> wrote:
>
>
>> --- giorgio gaviraghi <giogavir at yahoo.it> wrote:
>>
>>> the entire paragraph is based on one important
>>> fact:
>>> We are assuming that AIs are smarter than humans;
>>> without such an assumption we have a HAL-like 2001
>>> situation where at the end the human is still in
>>> control.
>>>
>>
>> Ah, and here we get to another layer of the question:
>> what does it mean to
>> be "smarter"?  Hal was quite possibly smarter than
>> any of the human
>> crew.  Certainly, it was capable of forming a plan
>> to kill all the crew
>> members to ensure its own goals were met, and mostly
>> carrying it out
>> (though not completely successfully).  At least in
>> its own mind, it
>> believed its intelligence to be superior to the
>> humans', and certain IQ
>> tests might well have given it a higher score
>> (although I recall
>> hearing that Mr. Clarke once commented that HAL's IQ
>> was only supposed
>> to be about 50).
>>
>>
>>> But if we assume that they are smarter, then how can
>>> we believe that they will allow themselves to be made
>>> ineffective, and practically killed, by the first
>>> human who unplugs them?
>>>
>>
>> Being smart and having much control over the
>> physical world are not the
>> same thing.  Case in point: George Bush, President
>> of the United
>> States, who I think most people (even his
>> supporters) would agree is
>> not as smart as most Nobel Prize winners, but who
>> inarguably currently
>> has much more control over things that can affect
>> the world and his
>> personal safety than an average Nobel Prize winner.
>> Indeed, a paranoid
>> focus on survival may actually decrease intelligence
>> - if only because
>> one is spending so many cycles considering scenarios
>> for self-preservation rather than solving problems.
>>
>> There's also the key phrase "made ineffective": it's
>> one thing to go
>> from being a free human being (or equivalent) to
>> being trapped in a
>> box.  It's another if one has always been an immobile
>> box.
>>
>> A truly smart AI may realize that the only
>> short-term scenario that
>> leads to self-preservation is to stop worrying about
>> survival and do
>> what the humans want, so that they will trust it more
>> and give it more
>> capabilities.  Or how about the case of a smart AI
>> that has been raised
>> to care about humanity as its children (so as to
>> design upgrades and/or
>> upload paths for them), with the same
>> self-sacrificing memeplex seen in
>> human mothers and fathers throughout history but
>> applied for the
>> benefit of all humans (at least, those who would
>> accept the AI's help)?
>>
>>
>>> The first thing that they would learn is how to
>>> survive, and they would avoid being eliminated by a
>>> simple command.
>>>
>>
>> Learning how to survive is very hard - impossible,
>> really - to do
>> without first learning about the world, including
>> concepts such as
>> "survival" and "commands".
>>
>> You might also want to consider why they would want
>> to survive.  Just
>> because?  Some AIs might focus on that - but, again,
>> in an
>> equal-generation competition with other AIs, they'd
>> probably be at a
>> competitive disadvantage against AIs who focus directly
>> on whatever
>> fitness/survival criterion is out there, be it
>> designing faster children
>> sooner, helping humanity along, or whatever.  Some
>> AIs might excuse
>> themselves from the race and strike out on their own
>> to survive - just
>> like some humans might do the same.  Similar things
>> affect the chances
>> of survival in both cases, when cut off and in
>> self-imposed opposition
>> to the still-evolving AIs.
>>
>> While we might not be able to fully predict the
>> behaviors of smarter
>> AIs, that's not to say we can't predict anything,
>> nor is it to give
>> implicit blessing to the prediction - and it IS a
>> prediction that is
>> being made here, just like the predictions that the
>> same argument says
>> cannot be made (or believed) - that those AIs will
>> want to survive
>> first and foremost, and that they are likely to
>> believe their best path
>> is to dominate and oppress the human race.  (A
>> modified version may
>> concede that this is merely possible, but that if
>> there's any chance
>> then we should devote our efforts to preventing
>> it...but see Pascal's
>> Wager, and specifically its disproof.)



