<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1"><BASE
href="http://www.saske.net/">
<META content="MSHTML 6.00.2600.0" name=GENERATOR></HEAD>
<BODY text=black bgColor=white>
<DIV>
From <A
href="http://www.theregister.co.uk/content/73/36376.html">The Register</A>: NASA
boffins have pulled off a seemingly impressive feat: reading words which have
not actually been spoken. The system works by computer analysis of
"sub-auditory" speech at the throat. NASA's Ames Research Center developer Chuck
Jorgensen explains further: "A person using the subvocal system thinks of
phrases and talks to himself so quietly it cannot be heard, but the tongue and
vocal cords do receive speech signals from the brain". <BR>From <A
href="http://www.sciencedaily.com/releases/2004/03/040318072412.htm">Science
Daily</A>: NASA scientists have begun to computerize human, silent reading using
nerve signals in the throat that control speech. In preliminary experiments,
NASA scientists found that small, button-sized sensors, stuck under the chin and
on either side of the 'Adam's apple,' could gather nerve signals, send them to a
processor and then to a computer program that translates them into words. "What
is analyzed is silent, or subauditory, speech, such as when a person silently
reads or talks to himself," said Chuck Jorgensen, a scientist whose team is
developing silent, subvocal speech recognition at NASA's Ames Research Center,
Moffett Field, Calif. "Biological signals arise when reading or speaking to
oneself with or without actual lip or facial movement," Jorgensen
explained.</DIV></BODY></HTML>