OP velar fricative orthography
Bryan Gordon
linguista at gmail.com
Tue Jun 27 18:35:58 UTC 2006
Orthography issues are very complex, and although I'm not one to take sides
(because that can lead to brutal and unproductive arguments), I am one to
point out all the issues at play in all of their frightening splendour.
WARNING: since we are discussing orthographic issues here I feel it is
important to represent the characters as they are, so I use Unicode
encoding. If your email client doesn't display them properly, please copy and
paste into a word processor and change the font to Aboriginal or Gentium or
something that shows all the characters. I have additionally attached a
Unicode (UTF-8) .txt file to this message in case a simple copy-and-paste
operation isn't possible. The .txt file is in AbRomanSerif font, but
if you don't have that one, switching to Gentium or the like should work.
I recognise a lot of different issues at play here, each of which has
partisans emotionally attached to one side or the other. I will try to
present both sides without taking one over the other. Perhaps it would be
useful to disassemble some of these points for the sake of clarity:
1) "normal" symbols vs. "weird" symbols
There's a whole scale of "normalcy" when we discuss orthographic symbols,
and all of us would do well to admit to ourselves and to each other that
this scale is very English-centric.
i. "Very Normal": characters which exist in English, e.g., <s, z, gh, x, sh,
zh>.
ii. "Almost Normal": characters which are "moved around," e.g., <aⁿ, oⁿ,
iⁿ>.
iii. "Sort Of Weird": characters with acute accents are felt to be less
weird than other diacritics, e.g. <á, ú, é, í, óⁿ, áⁿ, íⁿ>.
iv. "Weird": characters with other diacritics, including a lot of the
Americanist and IPA symbols, e.g., <ą, į, ã, ĩ, ã́, ĩ́, ą́, į́, š, ž, ǧ, ȟ,
p̣, ṭ, ḳ, pʻ, tʻ, kʻ>. Note that x-hacek and x-underdot are not universally
supported by Unicode (yet); there is a short illustration of why just after
this list.
v. "Very Weird": characters not present in English at all, e.g., <ð, ʃ, ʒ,
ɣ, ʔ, ˀ>.
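(A quick aside on that Unicode point under iv., as a minimal sketch in
Python 3, purely illustrative: x-hacek and x-underdot have no precomposed
Unicode codepoints, so they have to be assembled from a base <x> plus a
combining mark, and whether they then display correctly depends entirely on
the font.)

    import unicodedata
    x_hacek = "x\u030C"     # x + U+030C COMBINING CARON
    x_underdot = "x\u0323"  # x + U+0323 COMBINING DOT BELOW
    print(x_hacek, x_underdot)
    # NFC normalisation leaves both sequences decomposed, because no
    # precomposed codepoints exist for them (unlike, say, š or ǧ):
    print(unicodedata.normalize("NFC", x_hacek) == x_hacek)   # True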
All of these characters have been debated at one point or another, and the
debate always centres on those who favour the weirder symbols for their
universality or one-to-one phonemic correspondence vs. those who favour the
more normal symbols for an ostensible ease of learning.
2) Learnability
Many of the speakers had traumatic experiences centring on learning to
read and write in English, and it is a frequent sentiment that imposing new
diacritics or letters is akin to forcing people to learn to read and write
all over again. On the other hand, it seems a bit strange that we should
limit ourselves to using only simple English letters, when there are so many
English letters that are not used in Omaha or Ponca at all. And then there's
the issue of learnability for children. I think one-to-one correspondences
are easier to learn for children, even if they mean we have to use
non-English symbols. There's a reason there are no spelling bees in Spain or
Germany: their writing systems are closer to the spoken language than that of
English. They both use digraphs, but they also both use most or all of the
letters of the English alphabet. People on both sides of this issue tend to
talk as if their word is law and the other side is just wrong.
3) Orthographic Disruption
The feeling here is that there are writing systems already in use, and
people have invested time and money into learning and printing them, so
letʼs stick with them. Also, for once in history, the official orthographies
of Omaha and Ponca are close enough that you can skip from one to the other
without much thought. But this is not really a reason to stick with the
orthography if there are problems with it. This is just another of the
issues here.
4) Confusion
Confusion is an even bigger problem than learnability: it arises when people
have learned one value for a letter and then that letter switches to a different one.
This could be a problem, for example, if <x> is used for the voiced velar
fricative, because many people are already familiar with the system which
assigns it to the unvoiced, and <gh> to the voiced. As Rory mentions, this
was never a problem with the transition from <ç> to <s, z>. But it is
something of a problem with using <p, t, k> and even <c> or <ch> for tense
stops, because many people are used to seeing these used for aspirated stops
as well. It's also a big problem with some of our Americanist usages like
<th> for the aspirated stop instead of the fricative/approximant, and it
shows why we need to use standard orthographies in our papers when we are
sharing them with the communities we work with. One thing I like about using
<q> is that it uses one more English letter and thereby reduces the need for
digraphs; but I still think using a simple <x> for the voiced velar
fricative may be a bit confusing for a lot of people. Another thing which
bears mention here is the issue of how to represent the alveopalatal
affricates. There seem to be two competing schools of thought on this: one
which uses <j, c, cʰ, cʼ> and the other which uses <j, ch, chʰ, chʼ>. Use of
haceks on the letters also varies. The first system is fully adequate and
has one-to-one correspondence on its side, while the second system is more
English-like and therefore felt by many people (adults) to be easier. The
big problem is that itʼs easy to get confused about whether a given <ch> is
supposed to be aspirated or tense.
5) Distinctiveness from other languages
This sort of stands in opposition to confusion. Early on, Omaha and Ponca
orthographies used symbols like <ǧ, ȟ, pʻ, tʻ, kʻ>, but nobody ever looked
back after the transition to <x, pʰ, tʰ, kʰ> was made. I think this is
because nobody wanted Omaha or Ponca to look too much like Lakhota. It is
strange, though, that this same sentiment doesnʼt seem to be as strong when
it comes to how much it looks like English.
6) Loss of distinctions in clusters
This is the main point of the post Rory just sent. He points out that the
phonetic realisation of a velar fricative in a cluster is always unvoiced,
so we should use the unvoiced variant in the orthography (or, in his
proposal with x-hacek and x-underdot, the unmarked variant – a very
autosegmentalist solution!). Itʼs not just the fricatives, though.
Comparative and phonetic evidence suggests that the stops in fricative-stop
clusters in Omaha-Ponca are the "simple" stops, i.e., the voiced ones. So
should the orthography move from <shk, xp, st, etc.> to <shg, xb, sd>? I
think that would weird a lot of people out, too! Thereʼs also something to
be said for the fact that Hahn, for one, *did* hear and note such
distinctions as /xð ɣð/.
None of this is intended as an answer to Markʼs original question. Iʼm not,
after all, one of the decision-makers on this. But I am glad to cast light
on this problem in its full, terrifying complexity, so that everyone knows
all the issues at play here, and those who *are* the decision-makers can
make decisions on each and every one of those issues.