[ExI] nick's book being sold by fox news
kellycoinguy at gmail.com
Mon Oct 27 19:24:00 UTC 2014
On Mon, Oct 27, 2014 at 1:12 PM, BillK <pharos at gmail.com> wrote:
> On Mon, Oct 27, 2014 at 6:47 PM, Kelly Anderson wrote:
> > I would phrase this as "AI will be incredibly dangerous if we do get it
> > right." If we can't get intelligence, then it won't be that dangerous.
> > If we
> > get AGI, then almost by definition it will be dangerous. I think we have
> > exactly ONE chance to get the initial training of AGIs correct. We should
> > focus on raising the first generation of AGIs with a generous dose of
> > compassion training and the like. Sort of like raising small children
> > with good morals, etc.
> It's already happening. We will soon be surrounded by AI in everything we ...
I can't argue with that, but we have not yet achieved anything that even
feels remotely dangerous, except perhaps if it all gets blown away in a
solar storm or something like that. That is, for now it is almost all
positive (unless you don't like Internet porn, jihadists communicating over
Twitter, or something along those lines), but in the future, as it reaches
general intelligence and starts forming its own goals, that's when I get
worried. Yes, I might get turned down for cashing a check at Walmart
because the AI says the check smells funny, but that's not a super huge
negative impact.
> Like all utilities, AI will be supremely boring, even as it
> transforms the Internet, the global economy, and civilization. It will
> enliven inert objects, much as electricity did more than a century
> ago. Everything that we formerly electrified we will now cognitize.
> This new utilitarian AI will also augment us individually as people
> (deepening our memory, speeding our recognition) and collectively as a
Until the day it decides to wipe us out. Then we'll notice for a minute.
Then it won't matter.
> In fact, this won't really be intelligence, at least not as we've come
> to think of it. Indeed, intelligence may be a liability--especially if
> by "intelligence" we mean our peculiar self-awareness, all our frantic
> loops of introspection and messy currents of self-consciousness. We
> want our self-driving car to be inhumanly focused on the road, not
> obsessing over an argument it had with the garage.
I don't consider autonomous cars to be the dangerous sort of AGI.
Reference the intelligent elevators in The Hitchhiker's Guide to the Galaxy.
> What we want instead of intelligence is artificial smartness. Unlike
> general intelligence, smartness is focused, measurable, specific.
I don't worry about that stuff too much except that governments might use
it to become ever more dictatorial. "Hey, we detected that you might be
thinking about doing some terrorist stuff here in a year or two, so would
you please go get in the big white van over there?"