[ExI] stealth singularity
interzone at gmail.com
Thu May 16 23:37:56 UTC 2019
On Thu, May 16, 2019, 7:29 PM John Clark <johnkclark at gmail.com> wrote:
> Incidentally, this sort of thing could cause social problems in the very
> near future. Imagine being denied a loan because an AI, which has proven
> itself over and over again to be superhumanly good at this, has decided not
> to give you a loan because you are a bad credit risk. You look at your
> credit history and it looks pretty good to you, and in fact it looks good to
> most people, so you ask the AI exactly what it is that it doesn't like,
> but just as it can't explain why it made a particular chess move, it can't
> explain why your loan was denied except to say "it just didn't look right
> to me". You would probably be frustrated by that response, and what would
> make it even worse is the knowledge that its hunch is probably right: you
> probably wouldn't be able to repay it.
John, this is a very valid concern. I work in the financial industry and
can tell you there is already a lot of pushback from regulators on any type
of black-box AI used for these purposes. Generally, large financial firms
need to be able to explain exactly why a decision was made, which
discourages the use of deep learning algorithms and incentivizes tree-based
approaches, where it is more obvious what went into a decision.
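To illustrate the contrast, here is a minimal sketch (with entirely hypothetical features and thresholds) of why a tree-based credit model is auditable: every decision comes with the exact rules that produced it, which is precisely what a deep net cannot provide.

```python
def score_applicant(applicant):
    """Walk a tiny hand-written decision tree, recording each rule fired.

    Features and cutoffs are made up for illustration only.
    """
    trail = []
    if applicant["missed_payments"] > 2:
        trail.append("missed_payments > 2")
        return "deny", trail
    trail.append("missed_payments <= 2")
    if applicant["debt_to_income"] > 0.45:
        trail.append("debt_to_income > 0.45")
        return "deny", trail
    trail.append("debt_to_income <= 0.45")
    return "approve", trail

# The trail is the explanation a regulator could demand:
decision, trail = score_applicant(
    {"missed_payments": 1, "debt_to_income": 0.52})
print(decision)                 # deny
print(" AND ".join(trail))      # missed_payments <= 2 AND debt_to_income > 0.45
```

A real model would be learned from data rather than hand-written, but the point stands: each path from root to leaf is a human-readable rule.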