[ExI] New article on AGI dangers

BillK pharos at gmail.com
Sun Jul 31 09:59:42 UTC 2022


On Sun, 31 Jul 2022 at 08:29, Stuart LaForge via extropy-chat
<extropy-chat at lists.extropy.org> wrote:
> I just posted this on Less Wrong and I am posting it here for discussion:
>
> I think the best way to assure alignment, at least superficially, is to
> hardwire the AGI to need humans. This could be as easy as installing a
> biometric scanner that recognizes a range of acceptable human
> biometrics, which would in turn goose the error-function temporarily
> but wear off over time like a Pac-Man power pill. The idea is to get
> the AGI to need non-fungible human input to maintain optimal
> functionality, and for it to know that it needs such input. Almost
> like getting it addicted to human thumbs on its sensor. The key would
> be to implement this at the most fundamental level possible, like the
> boot sector or kernel, so that the AGI cannot simply change the code
> without shutting itself down.
>
> Stuart LaForge
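
For concreteness, Stuart's "power pill" idea might be sketched roughly
like this in Python. The names, decay constant, and reward hook below
are my own placeholders, not anything specified in his post: a verified
human scan grants a bonus to the reward signal that decays away unless
it is refreshed.

# Hypothetical sketch of the "power pill" mechanism: a successful human
# biometric scan grants a reward bonus that decays exponentially until
# the next scan, so the system needs recurring human contact.
import time

BONUS_MAGNITUDE = 10.0   # size of the boost right after a scan (made up)
HALF_LIFE_S = 3600.0     # bonus halves every hour (made up)

_last_scan_time = None   # timestamp of the most recent accepted scan


def register_human_scan(biometric_ok):
    """Record a successful scan; the scanner hardware is not modelled."""
    global _last_scan_time
    if biometric_ok:
        _last_scan_time = time.time()


def human_contact_bonus(now=None):
    """Current bonus, decaying exponentially since the last accepted scan."""
    if _last_scan_time is None:
        return 0.0
    if now is None:
        now = time.time()
    elapsed = now - _last_scan_time
    return BONUS_MAGNITUDE * 0.5 ** (elapsed / HALF_LIFE_S)


def shaped_reward(task_reward):
    """Task reward plus the decaying human-contact term."""
    return task_reward + human_contact_bonus()

In use, register_human_scan(True) would be called on each thumb press,
and an optimiser maximising shaped_reward() would see its objective
erode whenever the scans stop.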


Won't that mean slowing the AGI down to human speeds?
Or even slower, while it waits for human authorisation?
And some humans are the 'bad guys' who want to cause death and
destruction for their own profit.
So which humans are assigned to become the AGI nanny?
One reason for solving the alignment problem is the superhuman speed
at which the AGI works. By the time humans realise that things are
going badly wrong, it will be too late for humanity.


BillK

