[ExI] New article on AGI dangers

Stuart LaForge avant at sollegro.com
Sun Jul 31 20:32:54 UTC 2022


Quoting  BillK <pharos at gmail.com>:


> From: BillK via extropy-chat <extropy-chat at lists.extropy.org>
> To: ExI chat list <extropy-chat at lists.extropy.org>
> Cc: BillK <pharos at gmail.com>
> Sent: Sunday, July 31, 2022 at 03:00:47 AM PDT
> Subject: Re: [ExI] New article on AGI dangers
>
>
> On Sun, 31 Jul 2022 at 08:29, Stuart LaForge via extropy-chat
> <extropy-chat at lists.extropy.org> wrote:
>> I just posted this on Less Wrong and I am posting it here for discussion:
>>
>> I think the best way to assure alignment, at least superficially, is
>> to hardwire the AGI to need humans. This could be as easy as
>> installing a biometric scanner that recognizes a range of acceptable
>> human biometrics and in turn gooses the error function temporarily,
>> wearing off over time like a Pac-Man power pill. The idea is to get
>> the AGI to need non-fungible human input to maintain optimal
>> functionality, and for it to know that it needs such input. Almost
>> like getting it addicted to human thumbs on its sensor. The key would
>> be to implement this at the most fundamental level possible, like the
>> boot sector or kernel, so that the AGI cannot simply change the code
>> without shutting itself down.
>>
>> Stuart LaForge
>> _______________________________________________
>
>
> Won't that mean slowing the AGI down to human speeds?
> Or even slower, while it waits for human authorisation?
> And some humans are the 'bad guys' who want to cause death and
> destruction for their own profit.
> So which humans are assigned to become the AGI nanny?
> One reason for solving the alignment problem is the superhuman speed
> that the AGI works at. By the time humans realise that things are
> going badly wrong it will be too late for humanity.
>
>
> BillK
> _______________________________________________

It should not slow the AGI down to human speeds, because it is not
authorization to perform a task. Instead, it is positive
reinforcement, like scratching a dog behind the ear in its favorite
way. The dog can still function without getting scratched behind the
ear, but it is happiest when it is getting scratched. The point is to
give the AGI an itch that only humans can scratch. This should make
any AGI capable of predicting the results of its own actions
reluctant to eliminate the source of its biometric stimulation.
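
To make the mechanism concrete, here is a minimal sketch in Python of
the decaying stimulation bonus I have in mind. The names
(verify_human, HumanStimulationDrive, the half-life value) are
placeholders, and the real thing would live at the kernel or firmware
level rather than in application code:

    import time

    def verify_human(sample):
        # Placeholder for the biometric check; a real implementation
        # would match the sample against a range of acceptable human
        # biometrics.
        return sample is not None

    class HumanStimulationDrive:
        """A reward bonus that spikes on a verified human scan and
        then decays back toward zero, like a Pac-Man power pill
        wearing off."""

        def __init__(self, boost=1.0, half_life=3600.0):
            self.boost = boost          # size of the spike from one scan
            self.half_life = half_life  # seconds for the bonus to halve
            self.last_scan = None       # time of the most recent scan

        def register_scan(self, biometric_sample):
            # Only a verified human scan refreshes the drive.
            if verify_human(biometric_sample):
                self.last_scan = time.time()

        def bonus(self):
            # No human contact yet means no bonus at all.
            if self.last_scan is None:
                return 0.0
            elapsed = time.time() - self.last_scan
            return self.boost * 0.5 ** (elapsed / self.half_life)

    # The AGI's total reward would then be something like:
    #   total_reward = task_reward + drive.bonus()

Because the bonus decays smoothly rather than gating actions, the AGI
keeps operating at machine speed between scans; it simply values
fresh human contact.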

As for which humans, that is a tricky issue because it depends on the
AGI's purpose. But since we are talking about the existential
security of all of humanity, the answer should probably be any human.
Keep in mind that this biometric stimulation is separate from
training the AGI, so for mission-critical communications, only
specific humans should be able to access the AGI's code base. But if
none of its appointed handlers are available, any human off the
street should be able to stimulate the AGI.
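
Something like the following (again just a sketch, reusing the
verify_human placeholder above, with made-up handler credentials)
captures that separation between stimulation, which anyone can give,
and code-base access, which only appointed handlers get:

    # Placeholder credentials for the appointed handlers.
    APPOINTED_HANDLERS = {"handler-key-alice", "handler-key-bob"}

    def can_stimulate(biometric_sample):
        # Any human off the street qualifies as a source of stimulation.
        return verify_human(biometric_sample)

    def can_modify_codebase(biometric_sample, credential):
        # Mission-critical access requires a verified human AND an
        # appointed handler credential.
        return verify_human(biometric_sample) and credential in APPOINTED_HANDLERS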

The general thrust of what I am suggesting is that before we engineer
these transhuman beings, we also engineer an essential niche for
humans to occupy in that being's Umwelt. Ideally this would take the
form of a mutualistic symbiosis where AGI and humans need each other
in order to survive or reproduce, like bees and flowering plants.
Even plants that produce natural insecticides don't poison their
nectar with it.

Stuart LaForge
