[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Adrian Tymes atymes at gmail.com
Fri Jan 28 19:27:53 UTC 2011


On Fri, Jan 28, 2011 at 9:51 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
> I guess what I am fighting is the idea (which is very common, it seems to
> me) that lack of horsepower is what is holding back AGI.

Lack of horsepower is one of the things holding back AGI, but hardly the
sole thing.  Solving it gets us "closer", but there remain wildly different
challenges to solve before AGI can be realized.

2011/1/28 John Clark <jonkc at bellsouth.net>:
> Nothing? The pattern is always the same. Solving calculus problems required
> intelligence, beating a Chess Grandmaster required intelligence, being a
> great research Librarian required intelligence, and beating a Jeopardy
> champion required intelligence; but then computers could do these things
> better than humans and suddenly we found that these activities had
> absolutely nothing to do with intelligence. How odd.

Yep.  Because, in the process of solving them, we keep finding tricks and
cheats that those who thought "X requires intelligence" didn't conceive of.

In general, those who propose that some single capability or task - one that
cannot be used to learn and support entirely different capabilities and tasks -
constitutes "intelligence" aren't thinking very hard about it.

> > It ain't AI until it's competitive with human jobs.
>
> Many members of our species won't be satisfied even then.

More to the point, computers have replaced certain human jobs over the
years.  How many human telephone operators are employed these days,
vs. in the 1960s?

> And so just before he was sent into oblivion for eternity the last surviving
> human being turned to the Jupiter Brain and said "You're not 'REALLY' as
> smart as me".

Yes, but note that the Jupiter Brain was capable of doing that.  Watson is
not capable of doing anything but play Jeopardy, and it certainly didn't learn
to do that on its own - rather, several humans figured out how to do it, and
codified their thinking into a tool.

Get me a computer that can learn to do things it was never programmed or
designed to do.  (In the broad sense, not "this theorem solver was not
specifically programmed with this particular theorem, but it solved it".)  Even
that might not be true intelligence, but it will be closer than what we have
now.

Note that the Turing Test is a partial codification of this.  Instruct a
computer, in ordinary English (or another human language of similar breadth
of use - which rules out C++ and similar languages), on how to do a thing it
has never done before.  Have it do that thing, improve its own performance,
and fill in requirements that were never explicitly stated.

The "trick" that's widely used for this today is to get humans to state things
in highly formal ways.  For instance, connecting a telephone call: a telephone
number is just a number, and there is one obvious, almost-all-purpose way to
phrase it.  A computer can be programmed to look up that number, look up
where you are calling from, and plot out a connection given what it knows of
the telephone circuits.  But this is a very different problem from, say, taking
a typical bureaucracy's ill-documented procedures and figuring out whom you
need to call to accomplish a certain action - especially when the
documentation turns out to be wrong (say, by being out of date).
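To make the point concrete, the formalized version of the problem is just
lookup plus pathfinding.  Here is a minimal sketch in Python - the switch
names, topology, and directory are entirely made up for illustration, not any
real routing scheme:

```python
from collections import deque

# Hypothetical toy switch network: each exchange lists its neighbors.
SWITCHES = {
    "local-A": ["trunk-1"],
    "trunk-1": ["local-A", "trunk-2"],
    "trunk-2": ["trunk-1", "local-B"],
    "local-B": ["trunk-2"],
}

# Hypothetical directory: number prefix -> terminating exchange.
DIRECTORY = {"555": "local-B"}

def route_call(origin, number):
    """Return a path of switches from the caller's exchange to the
    exchange serving the dialed number, via breadth-first search."""
    dest = DIRECTORY[number[:3]]        # look up the dialed number
    frontier = deque([[origin]])        # paths to extend
    seen = {origin}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dest:
            return path
        for nxt in SWITCHES[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None                         # no route through the network

print(route_call("local-A", "5551234"))
# -> ['local-A', 'trunk-1', 'trunk-2', 'local-B']
```

Every hard part has been formalized away in advance by humans: the number
format, the directory, the network graph.  The bureaucracy version of the
problem has none of those givens, which is exactly the difference being
pointed at.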



