[extropy-chat] Fools building AIs (was: Tyranny in place)

Russell Wallace russell.wallace at gmail.com
Sun Oct 8 07:03:03 UTC 2006


On 10/8/06, Samantha Atkins <sjatkins at mac.com> wrote:

>
> Precisely why I was surprised to say the least.  I do not remember you being such a naysayer on the subject.
>

My assessment has become more realistic over the last few years :) Though I
don't think I'm being that much of a naysayer - I'm not writing off the
enterprise, after all, merely noting that it's going to take a lot longer
than we'd hoped.

> That is one opinion.  I very much doubt it is that difficult.  Also
> did you factor in accelerating change fully in these "generations"?
> In some fields a generation is about a month long.
>

By "generation" here I mean the period of time in which a major advance is
invented, polished, widely deployed and integrated as part of the overall
technology base, so that it becomes a routine building block for future
advances. Things like structured programming, microcomputers, the Internet.

Now, timescale is a different matter.

Human-level AGI will take several generations of technological advance from
where we are now, not just one - you can take that prediction to the bank,
because it's not a prediction per se; it's a statement about the nature of
the problem itself.

As for what that translates to in calendar years... well, that's getting into
foretelling the future, which, like non-psychic people in general, I have some
difficulty with :) It seems to me that a typical ballpark figure is a couple
of decades per technological generation, with the speed at which people can
think and learn being the rate-limiting step, and I'm skeptical that the
rate of change is actually accelerating. However, I'm not certain of this;
you could claim it might come down to one decade or less per generation, and
I can't be sure it won't.

> Eh, it is fun to attempt to build elves.  But I was talking there
> about denying that making the AGI "friendly" is a bit more
> difficult than merely refraining from explicitly programming in the
> goal of exterminating humanity.
>

My position isn't "we need merely refrain from explicitly programming such a
goal" (presumably things will be more complicated than that - they always
are), but "it will be a long while yet before we know enough about AGI to do
more about Friendliness than make up stories".