[ExI] Oxford scientists edge toward quantum PC with 10b qubits.
Kelly Anderson
kellycoinguy at gmail.com
Tue Feb 1 22:03:10 UTC 2011
On Mon, Jan 31, 2011 at 12:05 PM, Richard Loosemore <rpwl at lightlink.com> wrote:
> Kelly Anderson wrote:
>> On Fri, Jan 28, 2011 at 9:01 AM, Richard Loosemore <rpwl at lightlink.com> wrote:
>> Trivial!?! This is the final result of decades of research in both
>> software and hardware. Hundreds of thousands of man hours have gone
>> into the projects that directly led to this development. Trivial! You
>> have to be kidding. The subtle language cues that are used on Jeopardy
>> are not easy to pick up on. This is a really major advance in AI. I
>> personally consider this to be a far more impressive achievement than
>> Deep Blue playing chess.
>
> I stand by my statement that what Watson can do is "trivial".
If what you are saying is that Watson is doing a trivial subset of
human capabilities, then yes, what Watson is doing is trivial. It is
by no means a trivial problem to get computers to do it, as I'm sure
you are aware.
> You are wildly overestimating Watson's ability to handle "subtle language
> cues". It is being asked a direct factual question (so, no need for Watson
> to categorize the speech into the dozens or hundreds of subtle locution
> categories that a human would have to), and there is also no need for Watson
> to try to gauge the speaker's intent on any of the other levels at which
> communication usually happens.
Have you watched Jeopardy? Just figuring out what a category name means
is often quite difficult, and the questions are often full of puns,
innuendo, and other slippery language.
> Furthermore, Watson is unable (as far as I know) to deploy its knowledge in
> such a way as to learn any new concepts just by talking, or answer questions
> that involve mental modeling of situations, or abstractions.
It learns new concepts by reading. As far as I know, it has no
capability for generating follow-up questions. But if a module were
added to ask questions, I have no doubt that it could read the answer
and thus 'learn' a new concept, at least insofar as what Watson is
doing can be classified as learning.
> For example, I
> would bet that if I ask Watson:
>
> "If I have a set of N balls in a bag, and I pull out the same number of
> balls from the bag as there are letters in your name, how many balls would
> be left in the bag?"
>
> It would be completely unable to answer.
Of course, because it has to be in the form of an answer... ;-)
Seriously, you may be correct. However, I would not be surprised if
Watson could handle this kind of simple step-by-step reasoning. We
would have to ask the designers. Natural language processing systems
have been able to parse sentences like that for some time, so I would
not be surprised if Watson could parse yours as well. Whether it could
actually answer it is something I don't know. I hope someday they put
some form of Watson online so we can ask it questions and see how good
it is at answering them.
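Incidentally, the arithmetic your puzzle calls for is trivial once the
sentence is parsed and "your name" is resolved to "Watson"; the hard
part is the parsing and the self-reference, not the computation. Here
is a toy sketch of just that last step (purely illustrative, in Python;
the function name and the example numbers are my own invention, not
anything IBM has published about Watson's pipeline):

# Toy sketch of the arithmetic behind the bag-of-balls puzzle.
# This is NOT Watson's pipeline; it only shows the final computation
# once "your name" has been resolved to the string "Watson".

def balls_left_in_bag(n_balls, responder_name="Watson"):
    """How many balls remain after removing one per letter of the name."""
    removed = len(responder_name)  # "Watson" -> 6 letters
    if n_balls < removed:
        raise ValueError("Not enough balls in the bag for the puzzle to make sense.")
    return n_balls - removed

# Example: a bag of N = 10 balls leaves 10 - 6 = 4.
print(balls_left_in_bag(10))  # prints 4

The point being that the open question is whether Watson can get from
the English sentence to something like that computation, not whether
the computation itself is hard.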
>> Richard, do you think computers will achieve Strong AI eventually?
>
> Kelly, by my reckoning I am one of only a handful of people on this planet
> with the ability to build a strong AI, and I am actively working on the
> problem (in between teaching, fundraising, and writing to the listosphere).
That's fantastic; I truly hope you succeed. If you are working to
build a strong AI, then you must believe it is possible.
I have spent about the last two hours reading your papers, web site,
etc. You have an interesting set of ideas, and I'm still digesting them.
One question comes up from your web site; I quote:
"One reason that we emphasize human-mind-like systems is safety. The
motivation mechanisms that underlie human behavior are quite unlike
those that have traditionally been used to control the behavior of AI
systems. Our research indicates that the AI control mechanisms are
inherently unstable, whereas the human-like equivalent can be
engineered to be extremely stable."
Are you implying that humans are safe? If so, what do you mean by safety?
-Kelly