[ExI] Transhumanism at the IEEE

John Clark johnkclark at gmail.com
Wed Oct 9 18:23:43 UTC 2019


On Tue, Oct 8, 2019 at 9:06 PM Rafal Smigrodzki via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

*> One reason for this evolution in my views is that it seems like
> intelligence is a bit more granular than it seemed before. It is possible
> to achieve vastly superhuman results in an increasing number of domains,
> without creating a system capable of surviving on its own. *


The fact that there already exists one program that can start from knowing
nothing but the basic rules and, in just a few days, teach itself to play
games as different as Chess, Go and Shogi at a superhuman level would seem
to indicate we are well on our way toward developing an AI that is neither
brittle nor specialized.
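
For anyone who hasn't followed this: the kind of program being discussed
learns purely from self-play, given nothing but the rules of the game.
As a rough illustration of that "rules in, learn by playing yourself"
loop (and emphatically not DeepMind's actual method, which pairs a deep
network with Monte Carlo tree search), here is a toy self-play learner
for tic-tac-toe in Python; every name in it is made up for the sketch:

  # Toy sketch only: tabular self-play learning on tic-tac-toe.
  # The real systems use deep networks plus tree search; this just
  # shows the same shape: rules in, play yourself, learn from outcomes.
  import random
  from collections import defaultdict

  LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

  def winner(board):
      for a, b, c in LINES:
          if board[a] != ' ' and board[a] == board[b] == board[c]:
              return board[a]
      return None

  Q = defaultdict(float)      # (board, move) -> value estimate
  ALPHA, EPSILON = 0.3, 0.1   # learning rate, exploration rate

  def choose(board, moves):
      if random.random() < EPSILON:                   # sometimes explore
          return random.choice(moves)
      return max(moves, key=lambda m: Q[(board, m)])  # otherwise exploit

  for episode in range(50_000):
      board, player, history = ' ' * 9, 'X', []
      while True:
          moves = [i for i, c in enumerate(board) if c == ' ']
          move = choose(board, moves)
          history.append((board, move, player))
          board = board[:move] + player + board[move + 1:]
          w = winner(board)
          if w or ' ' not in board:
              # Credit every move with the final result: +1 for the
              # winner's moves, -1 for the loser's, 0 for a draw.
              for b, m, p in history:
                  r = 0.0 if w is None else (1.0 if p == w else -1.0)
                  Q[(b, m)] += ALPHA * (r - Q[(b, m)])
              break
          player = 'O' if player == 'X' else 'X'

  print("states with learned values:", len(Q))

The point of the sketch is just that nothing game-specific beyond the
rules is hard-coded; the values are discovered entirely by playing.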

*> A limited superhuman AI (LiSAI) does rewrite some aspects of its
> programming, but overall its ability to create new goals is low.* [...]  *it's
> less scary to me than it was 25 years ago, now at the level of nuclear
> all-out war rather than the grey goo meltdown.*


If you're worried about a grey goo meltdown, or about an AI doing things
that are obviously stupid like dismantling the Solar System so it can use
the material to make more thumbtacks, then the last thing in the world
you'd want to do is give it a rigid goal structure; you'd want it to have
the ability to modify its goals when conditions demand it, just as people
do. Does giving an AI that freedom mean we may not always be able to tell
it what to do? Yes. There is no way to be certain the AI will always
remain friendly, just as there is no way to be certain your children will
grow up to be good to you when you become old; you just hope for the best.

  John K Clark