[ExI] nick's book being sold by fox news
pharos at gmail.com
Mon Oct 27 19:12:46 UTC 2014
On Mon, Oct 27, 2014 at 6:47 PM, Kelly Anderson wrote:
> I would phrase this as "AI will be incredibly dangerous if we do get it
> right." If we can't get intelligence, then it won't be that dangerous. If we
> get AGI, then almost by definition it will be dangerous. I think we have
> exactly ONE chance to get the initial training of AGIs correct. We should
> focus on raising the first generation of AGIs with a generous dose of
> compassion training and the like. Sort of like raising small children with
> good morals, etc.
It's already happening. We will soon be surrounded by AI in everything we touch.
The AI on the horizon looks more like Amazon Web Services--cheap,
reliable, industrial-grade digital smartness running behind
everything, and almost invisible except when it blinks off. This
common utility will serve you as much IQ as you want but no more than
you need. Like all utilities, AI will be supremely boring, even as it
transforms the Internet, the global economy, and civilization. It will
enliven inert objects, much as electricity did more than a century
ago. Everything that we formerly electrified we will now cognitize.
This new utilitarian AI will also augment us individually as people
(deepening our memory, speeding our recognition) and collectively as a
species.
In fact, this won't really be intelligence, at least not as we've come
to think of it. Indeed, intelligence may be a liability--especially if
by "intelligence" we mean our peculiar self-awareness, all our frantic
loops of introspection and messy currents of self-consciousness. We
want our self-driving car to be inhumanly focused on the road, not
obsessing over an argument it had with the garage.
What we want instead of intelligence is artificial smartness. Unlike
general intelligence, smartness is focused, measurable, specific.