[extropy-chat] The emergence of AI
Hal Finney
hal at finney.org
Fri Dec 3 19:28:48 UTC 2004
My guess is that AI will indeed emerge gradually. Even the fans
of self-improving AI systems may agree that before the AI can start
making a significant contribution to improving itself, it must attain
human level competence in at least such fields as programming and AI.
This accomplishment must occur without the help of the AI itself and would
seem to be a minimum before bootstrapping and exponential takeoff could
hope to occur.
Yet achieving this goal will be an amazing milestone with repercussions
all through society, even if the self-improvement never works. And as we
approach this goal everyone will be aware of it, and of the new presence
of human-level capability in machines.
Given such a trajectory, I suspect that we will see regulation of AI as
a threatening technology, following the patterns of biotech regulation
and the incipient nanotech regulation. Recall that these made up the
troika of terror in Bill Joy's seminal Wired article. AI threatens
the economy by taking away jobs, and it threatens humanity if it can
achieve not just human-level, but genius-level intelligence and beyond.
AI systems are almost always painted as sinister and threatening in
science fiction movies, going all the way back to the Golem.
Maybe wrapping it in a fuzzy exterior will help, child robots and talking
dogs: "Hello, I'm Rags, woof, woof". But the reality is that people are
going to be working side by side with these systems, and I think that
experience is what will shape their conception of them as helpful or dangerous.
Hal