[extropy-chat] Fools building AIs (was: Tyranny in place)
sjatkins at mac.com
Sun Oct 8 06:41:51 UTC 2006
On Oct 7, 2006, at 1:42 PM, Russell Wallace wrote:
> On 10/7/06, Samantha Atkins <sjatkins at mac.com> wrote:
> Russell, I am very surprised at you. Almost no one here believes
> that AGI is in some unknowably distant future. I am certain you
> know full well that it is not what the humans program the AGI to do
> that is likely the concern. Hell, if it was just a matter of not
> programming the AGI to exterminate humans explicitly there would be
> nothing to worry about and FAI would be easy! In any field where
> success is largely a matter of intelligence, information and its
> timely application the significantly faster, brighter and more well
> informed will exceed what can be done by others. And that doesn't
> even touch on the depth of Moravec's argument which you could easily
> read for yourself.
> Samantha, recall that you are talking to one of the people who's
> been actually working on this stuff.
Precisely why I was surprised, to say the least. I do not remember you
being such a naysayer on the subject.
> The idea that human-equivalent AI is just around the corner was a
> story we told ourselves because we wanted it to be true and we
> didn't know enough about the problem to come up with any form of
> realistic estimate, like the eighteenth century artisans who made
> mechanical animals and imagined all the functionality of a real
> animal might be just a little harder to do. In reality human-
> equivalent AI is not one but several technological generations away,
> each generation requiring a set of major related breakthroughs and
> the development of an industry to follow through on them; and we'll
> need to cover most of that distance before we know enough to do more
> than philosophize about what might make an AI Friendly or Unfriendly.
That is one opinion. I very much doubt it is that difficult. Also,
did you fully factor accelerating change into these "generations"?
In some fields a generation is about a month long.
> This is not, mind you, a counsel of despair, nor a call to retreat
> to narrow-AI projects of the kind we already know how to do. Smart-
> tool AI in particular is, I think, only one generation away; it will
> be harder to create than I once dreamed a Transcendent Power might
> be, but _if_ we approach it in the right way, it looks just barely
> doable. And smart-tool AI would suffice for a great deal; it looks
> to me both necessary and sufficient for radical advances in
> nanotechnology, life extension, space colonization.
> What is this blunt denial of the obvious about?
> It's about my opinion that real progress will be assisted if we
> acknowledge reality and face up to the full complexity of the tasks
> ahead of us, neither contenting ourselves with small narrow-AI
> projects nor needing to believe in the modern-day equivalent of the
> shoemaker's elves.
Eh, it is fun to attempt to build elves. But my point there was that
making an AGI "friendly" is rather more difficult than merely
refraining from explicitly programming in the goal of exterminating
humanity.