[ExI] New article on AGI dangers
Stuart LaForge
avant at sollegro.com
Sun Jul 31 07:25:55 UTC 2022
I just posted this on Less Wrong and I am posting it here for discussion:
I think the best way to assure alignment, at least superficially, is to
hardwire the AGI to need humans. This could be as easy as installing a
biometric scanner that recognizes a range of acceptable human
biometrics and in turn gooses the error function temporarily, with the
effect wearing off over time like a Pac-Man power pill. The idea is to
get the AGI to need non-fungible human input to maintain optimal
functionality, and for it to know that it needs such input. Almost
like getting it addicted to human thumbs on its sensor. The key would
be to implement this at the most fundamental level possible, such as
the boot sector or kernel, so that the AGI cannot simply change the
code without shutting itself down. A minimal Python sketch of the idea
follows (reading "goose the error function" as a temporary reward
bonus; the class, parameter values, and decay rate are illustrative
assumptions, not a real implementation):
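
    import time

    class HumanInputDependency:
        # Toy sketch of a decaying "power pill" bonus tied to verified
        # human biometric input. Hypothetical design for discussion,
        # not a real alignment mechanism.

        def __init__(self, boost=1.0, half_life_s=3600.0):
            self.boost = boost              # bonus granted per verified scan
            self.half_life_s = half_life_s  # decay half-life in seconds
            self._last_scan = None          # time of last accepted human input

        def register_scan(self, biometric_ok: bool) -> None:
            # Only a scan matching the accepted human biometric range
            # refreshes the bonus; anything else is ignored, so the
            # input stays non-fungible.
            if biometric_ok:
                self._last_scan = time.monotonic()

        def reward_bonus(self) -> float:
            # The bonus decays exponentially since the last human
            # input, so the agent must keep seeking fresh human
            # contact to stay at optimal functionality.
            if self._last_scan is None:
                return 0.0
            elapsed = time.monotonic() - self._last_scan
            return self.boost * 0.5 ** (elapsed / self.half_life_s)

    # Example: one verified scan, then the bonus decays from ~1.0
    # toward zero over the following hours.
    dep = HumanInputDependency(boost=1.0, half_life_s=3600.0)
    dep.register_scan(biometric_ok=True)
    print(dep.reward_bonus())

Of course, the real trick is the last paragraph: this logic would have
to live somewhere the AGI cannot rewrite without taking itself offline.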
Stuart LaForge