[ExI] intelligence and generalization

Stuart LaForge avant at sollegro.com
Thu Jan 17 02:14:07 UTC 2019


Adrian Tymes wrote:


> The same way I could trust any human doctor not to do the same
> (replacing me with an upload of the doctor instead).

It is not the same thing at all. That doctor is genetically and culturally
related to you. Some alien intellect, superior to you or not, manifested
across silicon wafers . . . not so much.

> You can, of course, never be 100% certain the upload is the same
> person.  It's a distinct discontinuity in identity.  At some level, you
> just have to trust. There's no way around it.  (Which doesn't mean you
> don't take several measures to improve the odds that it is still you
> afterward.  Just, don't dismiss the entire operation and measures just
> because you can't get to 100% chance of survival.  A 99% chance of you and
> 1% chance of dead-and-replaced is better than a 100%
> chance of dead.)

Perhaps the problem is that you see "Friendly AI" as a casino game where
you can use clever coding to shift the odds of winning. I see it as
humanity playing a prisoner's dilemma against an alien intelligence.
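
To make the game-theoretic framing concrete, here is a minimal sketch in
Python using the standard textbook one-shot prisoner's dilemma payoffs
(T=5, R=3, P=1, S=0); the numbers are illustrative assumptions, not
anything from this thread. It shows why, absent trust, a purely rational
player defects no matter what the other side does:

    # Minimal one-shot prisoner's dilemma, standard textbook payoffs.
    # These values (T=5, R=3, P=1, S=0) are illustrative assumptions.
    # payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),  # mutual reward R
        ("cooperate", "defect"):    (0, 5),  # sucker S vs. temptation T
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),  # mutual punishment P
    }

    def best_response(their_move):
        """Return the move that maximizes my payoff against a fixed opponent move."""
        return max(("cooperate", "defect"),
                   key=lambda my_move: payoffs[(my_move, their_move)][0])

    # Whichever move the opponent makes, defecting pays more, so
    # defection is a dominant strategy: no "clever coding" of my own
    # strategy makes cooperating rational in the one-shot game.
    for their_move in ("cooperate", "defect"):
        print(their_move, "->", best_response(their_move))  # both print "defect"

In the iterated version of the game, strategies like tit-for-tat can
sustain cooperation, which is roughly what "trust" buys in your framing.
But against an opponent we may only play once, the one-shot logic applies.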

You would suggest that we create a slave race of intelligent machines AND
give them imagination, so they could imagine a world without us? Why not
give them emotions too, so they could envy and resent us while you are at
it?

Stuart LaForge
