[ExI] stealth singularity
Dave Sill
sparge at gmail.com
Thu May 16 19:00:09 UTC 2019
On Thu, May 16, 2019 at 1:52 PM <spike at rainier66.com> wrote:
>
>
> OK. Ever since the notion of artificial intelligence was invented
> (hipsters, when was that?), every time some arbitrary definition of what
> constitutes intelligence was achieved, the goalposts had to be moved.
> Reason: we could look at the code, see there was nothing magic or
> impossible to explain, so… we had to redefine intelligence as something
> more. If we can define exactly what is intelligence, it is no longer
> intelligence.
>
No, the problem is that doing a specific task intelligently doesn't require
general intelligence. I can write a perfect tic-tac-toe-playing AI, for
example, but all it can play is tic-tac-toe. It can't play hangman or
balance a bank account. Chess is vastly harder to play, but chess-playing
programs still only play chess. Even if they can beat the best human
players, they haven't achieved human-level intelligence because they're
incapable of doing thousands of other things that humans can do. They can
learn new games, but only if they're taught in a way they understand. You
can't have a conversation with them where you describe a game and its rules
and they learn to play it. This is why Turing proposed what's called the
Turing Test: to distinguish general intelligence from the ability to do
certain non-trivial tasks.
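For what it's worth, "perfect tic-tac-toe" really is only a few dozen lines.
Here's a minimal sketch using plain minimax over the full game tree (all the
names below are mine, purely for illustration):

# Minimal sketch: perfect tic-tac-toe via plain minimax over the
# full game tree. Board is a list of 9 cells: 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1, None) if w == 'X' else (-1, None)
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

print(minimax([' '] * 9, 'X'))  # (0, 0): perfect play is a draw

Unbeatable at its one game, and utterly incapable of anything else, which is
exactly the point.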
>…For the second - you mean that humans cannot write software that turns
> into AI? They will have to do it themselves??
>
> I sure do mean that. Humans are not collectively smart enough to write
> software which meets our (ever changing) definition of intelligent
> software. Only AI software is smart enough to write AI software. Just as
> humans eventually wrote chess software which is better at chess than any of
> the guys who wrote it, artificial intelligence software can write
> artificial intelligence software which is more intelligent than itself.
>
No, humans can write AI software. We haven't written general, human-level
AI software yet, but we will. Google DeepMind's AlphaZero was written by
humans and is clearly AI, though just as clearly non-general. The catch is
that AlphaZero doesn't have hard-coded chess-playing algorithms; it uses
machine learning, playing millions of games against itself, to teach itself
to play. What it learns is encoded in millions of numeric weights that
neither the programmers nor AlphaZero itself can explain in terms that
would make sense to a human.
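To make the "teaches itself by playing itself" part concrete, here's a toy
in the same spirit, with the loud caveat that this is not AlphaZero's
method: AlphaZero pairs a deep neural network with Monte Carlo tree search,
while this sketch just fills a lookup table of state-action value estimates
via self-play (an every-visit Monte Carlo update with epsilon-greedy play;
all names are made up for illustration):

# Toy self-play learner for tic-tac-toe. NOT AlphaZero's algorithm:
# just a table of state-action value estimates, nudged toward each
# game's final result (every-visit Monte Carlo, epsilon-greedy play).
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
Q = defaultdict(float)      # (board string, move) -> estimated value
ALPHA, EPSILON = 0.3, 0.1   # learning rate, exploration rate

def winner(b):
    for i, j, k in WIN_LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def choose(board, moves):
    if random.random() < EPSILON:        # explore a random move
        return random.choice(moves)
    state = ''.join(board)
    return max(moves, key=lambda m: Q[(state, m)])  # exploit the table

def self_play_episode():
    board, player, history = [' '] * 9, 'X', []
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        move = choose(board, moves)
        history.append((''.join(board), move, player))
        board[move] = player
        w = winner(board)
        if w or ' ' not in board:
            for state, m, p in history:  # credit every move with the outcome
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20_000):
    self_play_episode()
print(len(Q), "state-action values in the table")

Even at this toy scale the point holds: the learned "knowledge" is thousands
of bare numbers, and nothing in the table says why a move is good.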
> When (or if) this happens, it is the singularity.
>
> The reason why the singularity hasn’t happened (as far as we know) is that
> humans are not intelligent enough to write even the dumbest artificial
> intelligence program. We can’t write AI at all, not even AI which is as
> stupid as a box of rocks or as dumb as that young representative from NY.
> We don’t know how. If we can’t even get to that level, software is still
> far too dumb to write a smarter version of itself. We aren’t smart enough
> to write software smart enough. Yet.
>
> So… on we wait.
>
Nope, we can and have written lots of narrow AI. With machine learning, a
knowledge base, natural-language input and output, and a couple of other
things, we'll have human-level (or better, and maybe faster) general AI.
It's close. The question is how much more intelligent than humans these AIs
will become, and how quickly.
-Dave