gogaku at IX.NETCOM.COM
Tue Sep 16 15:29:08 UTC 2003
The version I received on the Outil list also included the following:
The matter is actually more complicated and subtle. It has
to do with the entropy of language. Even low-order models
of English text yield entropy values of about 3 bits per letter,
well below the 4.7 bits a uniform 26-letter alphabet would require.
Higher-order models that account for context yield values lower
still, demonstrating the enormous redundancy in human languages.
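As a minimal sketch of what a low-order estimate looks like, here is an order-0 (letter-frequency-only) entropy calculation; the `letter_entropy` helper and the sample text are illustrative assumptions, not anything from the original post:

```python
# Order-0 entropy estimate: bits per letter from single-letter
# frequencies alone, ignoring all context between letters.
from collections import Counter
from math import log2

def letter_entropy(text: str) -> float:
    """Entropy in bits per letter under an order-0 (frequency-only) model."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Arbitrary sample text, used only for illustration.
sample = (
    "according to a researcher at cambridge university it does not "
    "matter in what order the letters in a word are the only important "
    "thing is that the first and last letter be at the right place"
)
print(f"{letter_entropy(sample):.2f} bits per letter")
```

Even this crude model lands well under the 4.7-bit uniform baseline; models that condition on preceding letters push the estimate lower still.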
The ability to extract meaning from degraded, high-entropy text and
speech is apparently hard-wired into the human brain, in the form of
associative memory. I did some graduate research on it.
Oh, it's effective on pure speech signals, too ... otherwise cell phones
would never work ... the bit rate on them is much, much lower than what
is required for even an approximate representation of the original voice.
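The bit-rate gap can be made concrete with some rough arithmetic; the codec figures below are standard published rates used as illustrative assumptions, not numbers from the original message:

```python
# Rough arithmetic on the bit-rate gap: uncompressed telephone speech
# vs. a typical cell-phone speech codec.
pcm_rate = 8_000 * 8      # telephone PCM: 8 kHz sampling x 8 bits/sample = 64 kbit/s
gsm_full_rate = 13_000    # GSM full-rate speech codec: ~13 kbit/s (assumed figure)

ratio = pcm_rate / gsm_full_rate
print(f"PCM: {pcm_rate} bit/s, GSM FR: {gsm_full_rate} bit/s, "
      f"compression ~{ratio:.1f}x")
```

The codec gets away with roughly a fifth of the raw rate precisely because speech, like text, is highly redundant; the listener's brain fills in the rest.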
Baking the World a Better Place
> -----Original Message-----
> From: American Dialect Society
> [mailto:ADS-L at LISTSERV.UGA.EDU] On Behalf Of Michael B Quinion
> Sent: Tuesday, 16 September 2003 3:15 AM
> "Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it
> deosn't mttaer in waht oredr the ltteers in a wrod are, the
> olny iprmoetnt tihng is taht the frist and lsat ltteer be at
> the rghit pclae The rset can be a total mses and you can
> sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid
> deos not raed ervey lteter by istlef, but the wrod as a wlohe".
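The transform the quoted paragraph describes is easy to reproduce; this sketch shuffles each word's interior letters while pinning the first and last in place (the function names are illustrative):

```python
# Scramble each word's interior letters, keeping the first and last
# letters fixed, as in the quoted "Cmabrigde" text.
import random

def scramble_word(word: str, rng: random.Random) -> str:
    """Shuffle the interior of a word; words of <= 3 letters are unchanged."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble(text: str, seed: int = 0) -> str:
    """Scramble every word in a whitespace-separated text, reproducibly."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

print(scramble("reading scrambled words is surprisingly easy"))
```

Note the anagram is exact: every output word contains the same letters as its input, which is part of why the redundancy of English lets readers recover it.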