[ExI] Oxford scientists edge toward quantum PC with 10b qubits.

Kelly Anderson kellycoinguy at gmail.com
Mon Jan 31 17:52:22 UTC 2011


>2011/1/31 Dave Sill <sparge at gmail.com>:
> Perhaps, but I don't think it's trivial for a computer to learn it via an
> explanation, and the communication, reasoning, problem solving, and
> understanding required to do so make it a good test of real intelligence. Of
> course, tic-tac-toe is just an example. If I were tasked to conduct a Turing
> test I wouldn't use tic-tac-toe, I'd make up a simple game of my own.

I think that as long as you stuck with board games, where it is known
that there are rules and that those rules can be discovered by
playing, you could create a specialty computer program (not a general
AI) using today's technology that could learn any arbitrary game. In
many cases it would not get as good on its own as the best humans. In
general, computers beat humans when the game tree is small enough, and
they suck when the game tree gets unwieldy and there is no good
pruning heuristic or easily discoverable measure of goodness.
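To make the "small game tree" point concrete, here is a minimal sketch
(my own illustration, not anything from the thread) of exhaustive
minimax search with alpha-beta pruning, which solves tic-tac-toe
outright; the same code drowns on a game like Go, where the tree is
astronomically larger and no cheap cutoff exists:

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player, alpha=-2, beta=2):
    """Score the position for `player`: +1 win, 0 draw, -1 loss."""
    opponent = 'O' if player == 'X' else 'X'
    if winner(board) == opponent:
        return -1                 # the opponent's previous move already won
    if ' ' not in board:
        return 0                  # board full: draw
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player
            score = -negamax(board, opponent, -beta, -alpha)
            board[i] = ' '
            alpha = max(alpha, score)
            if alpha >= beta:
                break             # prune: opponent will never allow this line
    return alpha

# Perfect play from the empty board is a draw:
print(negamax(list(' ' * 9), 'X'))  # 0
```

The pruning step is exactly the "goodness that is easily discoverable"
above: once a refutation is found, whole subtrees are skipped without
being searched.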

In other words, if IBM spent as much time and research developing a
program to learn to play arbitrary board games as they have on Watson,
I think they would come up with something that would be surprisingly
good. Again, that's not general AI, just another very small slice.

The question is how many very small slices you need to build up a
Strong AI. The answer seems to be all of the slices that humans have,
and while that is an interesting answer that leads directly to machine
learning, is it a useful answer? In other words, is what IBM is doing
with Watson useful? Damn right it is.

>> Now, if you want a challenge for a computer, try the oriental board
>> game Go. As far as I know, there aren't any computers that can grok
>> that as good as people yet. I'm sure it's coming soon though. :-)
>
> No doubt. And it'll be impressive. But it'll still just be a Go computer and
> not generally intelligent.

In all probability that is correct.

>>
>> I think the problem is really related to the definition of
>> intelligence. Nobody has really defined it...
>
> Wikipedia has a pretty good one:
>   "Intelligence is an umbrella term describing a property of the mind
> including related abilities, such as the capacities for abstract thought,
> understanding, communication, reasoning, learning, learning from past
> experiences, planning, and problem solving."

By this definition, a computer will never have intelligence, because
someone will say, "But the computer doesn't have a 'mind'." It's all a
bit circular. I have seen individual computer programs that exhibit
every characteristic in that list (one at a time), but I wouldn't
consider any of them intelligent, except over a very limited domain.

>> ... so the definition seems to
>> fall out as "Things people do that computers don't do yet."
>
> I disagree. Show me a computer that meets the above definition of
> intelligence at an average human level.

There isn't one. But in 2060, when there is a computer that meets and
exceeds the above definition on every measurable level and by every
conceivable test, there will still be people (maybe not you, but some
people) who will say, "But it's all just an elaborate parlor trick.
The computer isn't REALLY intelligent."

In my experience, anything that escapes AI gets a new name: pattern
recognition, computer vision, natural language processing, optical
character recognition, facial recognition, etc., etc. So for all
practical purposes, AI is forever the stuff we don't yet know how to
do very well.

The first computer that passes the Turing test (and I'm sure there are
weaker and stronger forms of the Turing test) will no doubt have a
technology with a name, and that name will probably not be "artificial
intelligence"...

>> So what is
>> "Things computers do that people can't do"? Certainly it is not ALL
>> trivial stuff. For example, using genetic algorithms, computers have
>> designed really innovative jet engines that no people ever considered.
>> Is that artificial intelligence (i.e. the kind people can't do?)
>
> You mean that people have designed and used programs with genetic algorithms
> to create innovative designs. Or did a computer wake up one day and say
> "hey, I've got wicked new idea for a jet engine!"?

As you are aware, computers are not self aware or self directed at
this point. My argument is not that computers have already achieved
artificial intelligence, just that they show a glimmer of hope in the
area, and that they have done things that people haven't, even in the
area of "creativity" and "art" where people are supposed to be the
masters of the domain.
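For readers unfamiliar with the genetic algorithms mentioned above,
here is a toy sketch of the basic loop (my own illustration; the real
engine-design work used far more elaborate encodings and fitness
functions). Selection, crossover, and mutation grind toward designs no
one wrote down explicitly:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=200, seed=0):
    """Evolve bit-strings toward higher fitness via selection + mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents (simple truncation selection).
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Refill the population with crossed-over, mutated children.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]      # one-point crossover
            child[rng.randrange(length)] ^= 1  # flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "One-max" stands in for an engineering objective: maximize the bit count.
best = evolve(sum)
print(sum(best))  # close to the optimum of 20
```

The interesting part, and the part relevant to the "creativity"
question, is that nothing in the code describes what a good solution
looks like; the fitness function only scores candidates after the fact.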

Do the computer programs that generate new compositions in the style
of (insert your favorite classical composer here) have artificial
intelligence in that area? Or is it just another technology that has
escaped AI and gotten a new name?

-Kelly



