[ExI] How could you ever support an AGI?
Stefano Vaj
stefano.vaj at gmail.com
Mon Mar 3 07:54:00 UTC 2008
On Mon, Mar 3, 2008 at 2:11 AM, <ABlainey at aol.com> wrote:
> If we do not put all the technology together in one box in a systematic and
> controlled manner, at some point it will happen spontaneously, through pure
> chance or accident. The internet is a prime example of an opportunity for
> this to occur. When it happens, and it will, we will have no control,
> insight or warning. We (Homo sapiens) will instantly become obsolete. The
> ramifications of this are impossible to predict.
I do not think that AGI is going to be a dramatic quantum leap, a
sudden awakening of a given computer, any more than one could identify
the moment when Homo sapiens or the horse was suddenly born from
something radically different.
What is going to happen is that from machines that can pass the Turing
test over 10 questions 1% of the time with 1% of the users (we are
probably already there), we will get to machines that can do so over
100 questions 50% of the time with 50% of the users, and eventually to
machines that score better than real humans over any finite number of
interactions with any finite number of users.
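(A minimal Python sketch, not from the original post, of this graded
notion of "passing": the per-question detection probability, judge
counts and function names below are purely illustrative assumptions,
meant only to show how the pass rate varies smoothly with conversation
length rather than flipping at a single threshold.)

import random

def simulate_judgement(num_questions: int, per_question_detect_prob: float) -> bool:
    """Return True if a judge still believes the machine is human after
    num_questions exchanges, assuming each exchange independently gives
    the judge a fixed chance of detecting the machine (an assumption)."""
    for _ in range(num_questions):
        if random.random() < per_question_detect_prob:
            return False  # judge spotted the machine
    return True  # machine passed with this judge

def graded_pass_rate(num_questions: int, per_question_detect_prob: float,
                     num_judges: int = 10_000) -> float:
    """Fraction of judges fooled: the 'x% of the time with y% of users'
    figure from the post, collapsed into one number."""
    fooled = sum(simulate_judgement(num_questions, per_question_detect_prob)
                 for _ in range(num_judges))
    return fooled / num_judges

if __name__ == "__main__":
    # Longer interrogations make the graded score drop smoothly,
    # which is the point: there is no single "awakening" moment.
    for n in (10, 100, 1000):
        print(n, "questions:", graded_pass_rate(n, per_question_detect_prob=0.02))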
As for the obsolescence of Homo sapiens, well, biology is not static
itself, so what's the big deal? Even though the definition of
"extinction" may vary, my own favourite is "dying away without leaving
behind any successor". And this, by definition, would certainly not be
the case should fully artificial, uploaded or mixed digital
intelligences become the dominant form of sentience on the planet.
Stefano Vaj