[ExI] What might be enough for a friendly AI?

John Grigg possiblepaths2050 at gmail.com
Thu Nov 18 01:35:31 UTC 2010


Spike wrote:
We did get your point, Stefano, a damn good one.  If we had any help to
offer in understanding that not-so-subtle point, we would have offered
it.  I see the whole
friendliness-to-a-species-that-isn’t-friendly-to-itself as a paradox
we are nowhere near solving.  Asimov recognized it over half a century
ago, and we haven’t derived a solution yet.
>>>

Unfortunately, humanity's *example* will be terrible, and so we will
be teaching AGI not to trust us or "respect" our rules.  If we could
somehow make them unmotivated (on their own) but very obedient
slaves, we should then be okay.  I just think we are deceiving
ourselves if we think we can pull that off...

I think experts on raising human teenagers should be brought in as
consultants...

John  : )


On 11/17/10, spike <spike66 at att.net> wrote:
> . On Behalf Of Florent Berthet
> Subject: Re: [ExI] What might be enough for a friendly AI?
>
>
>
>>.It may just be me, but this whole friendliness thing bothers me.
>
>
>
> Good.  It should bother you.  It bothers anyone who really thinks about it.
>
>
>
>>.I don't really mind dying if my successors (supersmart beings or whatever)
> can be hundreds of times happier than me.
>
> More generally, wouldn't it be a shame to prevent an AGI from creating an
> advanced civilization (e.g. computronium-based) just because this outcome
> could turn out to be less "friendly" to us than the one of a human-friendly
> AGI?  In the end, isn't the goal to maximize collective happiness?
>
>
>
> Florent, you are a perfect example of a dangerous person to have on the AGI
> development team.  You (and I too) might go down this perfectly logical line
> of reasoning, then decide to take it upon ourselves to release the AGI, in
> order to maximize happiness.
>
>
>
>>.So why don't we just figure out how to make the AGI understand the concept
> of happiness (which shouldn't be hard since we already understand it), and
> make it maximize it?
>
>
>
> Doh!  You were doing so well up to that point, then the fumble right at the
> goal line.  We don't really understand happiness.  We know what makes us
> feel good, because we have endorphins.  An AGI would (probably) not have
> endorphins.  We don't know if it would be happy or what would make it happy.
>
>
>
> spike
>
