[extropy-chat] Singularity Blues

Edmund Schaefer edmund.schaefer at gmail.com
Wed Apr 6 18:39:11 UTC 2005


On Apr 6, 2005 1:34 PM, Gary Miller <aiguy at comcast.net> wrote:
> 
> [R]eality dictates that the major advances most likely to
> trigger singularity will emerge from private for profit research or
> government funded research projects.
> 
> At the point a privately developed technology gains publicity, either
> through doing amazingly well in the Turing test or in a commercial
> natural language recognition project, whoever owns this
> technology will be targeted for acquisition.

There's nothing special about near-human levels of intelligence from
the AI's point of view. Language ability in humans is not simply a
product of general intelligence; it is facilitated by highly
specialized neurological adaptations (specifically Broca's and
Wernicke's areas). There's no reason to expect an AI to linger at a
quasi-human point of development where it has just enough intelligence
to figure out human language processing, but not enough to figure out
how to engineer molecular nanoassemblers or bootstrap itself up to
superintelligence.

Computers already vastly outperform humans at manual calculation, some
types of mathematical theorem-proving, finding patterns in large
amounts of statistical data, etc., while doing horribly at relatively
simple human tasks like distinguishing a duck from a kitchen table. We
can't assume that language processing is just naturally easier than
engineering dangerous ultratechnology just because humans are so good
at the former and so bad at the latter. We have no guarantee that AIs
smart enough to kickstart a singularity will be recognizable as
exceptionally intelligent before it's too late.

[cut]

> Open sourcing a technology that powerful would be roughly equivalent to
> passing out free enriched plutonium at the UN.

Agreed.


