[ExI] Moving goal posts
Stuart LaForge
avant at sollegro.com
Thu Sep 1 12:25:50 UTC 2022
As Spike has been saying for a while now, the so-called experts keep
moving the goal posts for AI to be considered intelligent. In an
earlier thread, it was mentioned that AI is not creative outside the
boundaries of whatever data it has been trained on. This has
manifested as ever more hairsplitting over what defines artificial
intelligence. For example, this expert contends that artificial
general intelligence (AGI) is still weak AI, inferior to strong AI,
and also that we will never achieve AGI, let alone strong AI.
https://www.nature.com/articles/s41599-020-0494-4
Abstract
The modern project of creating human-like artificial intelligence (AI)
started after World War II, when it was discovered that electronic
computers are not just number-crunching machines, but can also
manipulate symbols. It is possible to pursue this goal without
assuming that machine intelligence is identical to human intelligence.
This is known as weak AI. However, many AI researchers have pursued the
aim of developing artificial intelligence that is in principle
identical to human intelligence, called strong AI. Weak AI is less
ambitious than strong AI, and therefore less controversial. However,
there are important controversies related to weak AI as well. This
paper focuses on the distinction between artificial general
intelligence (AGI) and artificial narrow intelligence (ANI). Although
AGI may be classified as weak AI, it is close to strong AI because one
chief characteristic of human intelligence is its generality.
Although AGI is less ambitious than strong AI, there were critics
almost from the very beginning. One of the leading critics was the
philosopher Hubert Dreyfus, who argued that computers, which have no
body, no childhood and no cultural practice, could not acquire
intelligence at all. One of Dreyfus’ main arguments was that human
knowledge is partly tacit, and therefore cannot be articulated and
incorporated in a computer program. However, today one might argue
that new approaches to artificial intelligence research have made his
arguments obsolete. Deep learning and Big Data are among the latest
approaches, and advocates argue that they will be able to realize AGI.
A closer look reveals that although development of artificial
intelligence for specific purposes (ANI) has been impressive, we have
not come much closer to developing artificial general intelligence
(AGI). The article further argues that this is in principle
impossible, and it revives Hubert Dreyfus’ argument that computers are
not in the world.
------------------------------------------------------
Meanwhile, AI has been ignoring the experts and doing stuff like
pissing off human artists by winning art contests against them.
https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed
"Jason Allen's AI-generated work "Théâtre D'opéra Spatial" took first
place in the digital category at the Colorado State Fair."
The future seems to be shaping up to be humorously incongruous. :)
Stuart LaForge