[ExI] Character Recognition (was Re: AI milestones)

Kelly Anderson kellycoinguy at gmail.com
Wed Mar 7 18:19:17 UTC 2012


2012/3/6 Stefano Vaj <stefano.vaj at gmail.com>:
> On 6 March 2012 21:04, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>>
>> This might have been using Markov chains... a really primitive kind of
>> limited context. When I say primitive, I mean simple to implement, not
>> ineffective. Any kind of context is absolutely critical to getting
>> good rates of character recognition, and Markov chains are a good way
>> to easily put in a little context.
>
> Yes. This is still the case with computer dictation AFAIK.

I'm sure they are also using many more advanced techniques, along with
models trained on huge databases (such as neural networks), to get the
pretty decent results they are getting these days.

The amazing thing is that even after it works fairly well, people
don't want to use it. I occasionally dictate text messages on my
phone, and I tried to use Dragon on my computer, but I tire of it...
Part of it is that it isn't 100% accurate, but the other part is that
my neural pathways are just better at typing well than at speaking
well...

>> To see how this works, consider that in English, the letter Q is
>> nearly always followed by the letter U.
>
> The only exception, as for other matters, being Al Qaeda... :-)

LOL, not the only exception, but clearly a big one. Words of Arabic
origin are indeed most of the exceptions to this rule.
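
To make the Markov-chain idea concrete, here is a rough Python sketch
(the bigram probabilities and smoothing value are made up purely for
illustration) of how a recognizer could use a little character-level
context to rescore ambiguous readings:

# Toy sketch (not part of the original discussion): a first-order
# Markov chain over characters used to rescore ambiguous OCR readings.
# The bigram probabilities are invented illustrative values.

import math

BIGRAM = {                      # P(next_char | prev_char), hand-picked
    ("Q", "U"): 0.98,           # Q is nearly always followed by U
    ("Q", "A"): 0.01,           # rare: loanwords like "Qatar"
    ("Q", "O"): 0.001,
    ("U", "I"): 0.02,
    ("O", "I"): 0.01,
}
DEFAULT_P = 1e-4                # crude smoothing for unseen bigrams

def sequence_logprob(text):
    """Log-probability of a string under the character Markov chain."""
    return sum(math.log(BIGRAM.get(pair, DEFAULT_P))
               for pair in zip(text, text[1:]))

def rescore(candidates):
    """Pick the candidate with the best visual + context score.

    candidates: list of (string, visual_log_score) pairs from the
    recognizer; the Markov-chain context score is added on top.
    """
    return max(candidates,
               key=lambda c: c[1] + sequence_logprob(c[0]))[0]

# The recognizer can't tell whether the second glyph is U or O, but
# even this little bit of context says QUIT beats QOIT.
print(rescore([("QUIT", -1.2), ("QOIT", -1.1)]))   # prints QUIT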

>> We humans are very good at text recognition because of the context. If
>> you present humans with the text out of context, they do little better
>> than computers with no context. Give humans the context, and they are
>> much better than computers.
>
>
> I also understand that in spoken language we actually hear no more than 20%
> of uttered phonemes in normal circumstances. This is of course even more
> of a disaster for languages that have a lot of homophones (say, Japanese).
> Yet, we manage to put everything in context.

It's entirely possible that we only hear 20%, but as I said there is a
lot of redundancy built into most human languages to compensate.
Context is nearly EVERYTHING in pattern recognition... and computers
are piss poor at putting things into context. That's my basic thesis
on this matter.
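
A second toy sketch along the same lines (the glyph confusions and the
tiny word list are invented for the example) shows how much even
word-level context buys a recognizer that can't tell similar glyphs
apart in isolation:

# Toy sketch (my illustration): ambiguity resolved by word-level
# context. Each position lists the characters the recognizer finds
# visually plausible; a (made-up) word list supplies the context.

from itertools import product

WORDS = {"HELLO", "HELL", "CELLO"}      # hypothetical lexicon

def read_with_context(candidates_per_glyph):
    """Return every candidate spelling that is actually a known word."""
    readings = ("".join(chars)
                for chars in product(*candidates_per_glyph))
    return [word for word in readings if word in WORDS]

# In isolation each glyph is ambiguous: H or M? E or F? O or 0?
ambiguous = [("H", "M"), ("E", "F"), ("L",), ("L",), ("O", "0")]
print(read_with_context(ambiguous))     # prints ['HELLO']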

-Kelly


