[extropy-chat] Fools building AIs (was: Tyranny in place)
Russell Wallace
russell.wallace at gmail.com
Sat Oct 7 20:42:16 UTC 2006
On 10/7/06, Samantha Atkins <sjatkins at mac.com> wrote:
>
> Russell, I am very surprised at you. Almost no one here believes that AGI is in some unknowably distant future. I am certain you know full well that it is not what the humans program the AGI to do that is likely the concern. Hell, if it were just a matter of not programming the AGI to exterminate humans explicitly, there would be nothing to worry about and FAI would be easy! In any field where success is largely a matter of intelligence, information and its timely application, the significantly faster, brighter and better informed will exceed what can be done by others. And that doesn't even touch on the depth of Moravec's argument, which you could easily read for yourself.
>
Samantha, recall that you are talking to one of the people who has actually
been working on this stuff. The idea that human-equivalent AI is just
around the corner was a story we told ourselves because we wanted it to be
true and we didn't know enough about the problem to come up with any form of
realistic estimate, like the eighteenth-century artisans who made mechanical
animals and imagined that all the functionality of a real animal might be just a
little harder to do. In reality human-equivalent AI is not one but several
technological generations away, each generation requiring a set of major
related breakthroughs and the development of an industry to follow through on
them; and we'll need to cover most of that distance before we know enough to
do more than philosophize about what might make an AI Friendly or
Unfriendly.
This is not, mind you, a counsel of despair, nor a call to retreat to
narrow-AI projects of the kind we already know how to do. Smart-tool AI in
particular is, I think, only one generation away; it will be harder to
create than I once dreamed a Transcendent Power might be, but _if_ we
approach it in the right way, it looks just barely doable. And smart-tool AI
would suffice for a great deal; it looks to me both necessary and sufficient
for radical advances in nanotechnology, life extension, and space colonization.
> What is this blunt denial of the obvious about?
>
It's about my opinion that real progress will be assisted if we acknowledge
reality and face up to the full complexity of the tasks ahead of us, neither
contenting ourselves with small narrow-AI projects nor needing to believe in
the modern-day equivalent of the shoemaker's elves.