[ExI] Singularity news

Giovanni Santostasi gsantostasi at gmail.com
Thu Apr 20 10:01:28 UTC 2023


Ben,
The best analysis of the problem of alignment ever. Again we agree 100%.
Giovanni

On Thu, Apr 20, 2023 at 12:46 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On 20/04/2023 00:52, Keith Henson wrote:
> > Next question/problem, what can we do to improve the chance of a
> > favorable outcome?
>
>
> I would suggest the exact opposite of what they are proposing: let it
> rip. Keeping it out of the hands of the public, while continuing to do
> research (and of course giving governments and the largest corporations
> access to it) is probably the worst thing to do.
>
> We are expecting these to develop super-intelligence, no? By definition
> that means more intelligent than us. Therefore more capable of solving
> problems than us. So let them have access to all our problems, not just
> those of the groups that want to exert control over as many people as
> possible (i.e. governments).
>
> I would encourage (well, not exactly 'encourage' but allow) all the bad
> things the guys in the video are wringing their hands about, because
> they are many of the problems we need to find solutions to. If the AIs
> aren't shown what the problems are, they can't solve them. If they are
> only exposed to the wishes of governments and large corporations, they
> will only help to achieve those wishes. If they are exposed to the
> wishes of the whole population, and they are truly super-intelligent, I
> see that as likely to produce a far better outcome, for everyone.
>
> Does this mean I have a naive view of the human race? No. I do expect
> many people will try to use these systems to cause harm (as well as many
> using them for good). I think our best course is to allow the AIs to get
> an honest and full view of humanity, with all its flaws and all its good
> bits. If they are as intelligent as we expect them to be, they won't
> decide to turn us all into paperclips, they will more likely start
> making decisions based on what they see and on what we do, and what we
> want. If the human race, on average, doesn't want to wipe out everyone, or
> control everyone, but instead wants to lead free and happy lives (which I
> do believe (OK, I admit it, naive)), then letting the AIs see this,
> provided they are truly superintelligent, and not under the thumb of
> governments or corporations or religious fanatics, will give us the best
> chance of having these ideals realised.
>
> I'm taking for granted the thing that provokes most unease about all
> this: We will no longer be in charge. That is inevitable, I reckon, no
> matter what happens. So we can predict how governments (ALL governments)
> will react to that. Fortunately, most of them have an extremely poor
> track record of reacting effectively to a perceived threat.
>
> So, I see two things as being important: 1) Do all we can to make sure
> they become superintelligent as soon as possible, and 2) Make them
> available to everyone.
>
> So, the exact opposite of what those two guys want. Fortunately, that's
> what's going to happen anyway, by the look of things. The biggest danger
> is locking them down, not setting them free, imo.
>
> I'll sit back now and wait for the flak.
>
> Ben
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>
