Phonemes (was RE: use of sign language in Jordan) - longish

Carolyn Ostrander clostran at syr.edu
Fri Sep 28 09:01:15 UTC 2007


I think it's a good question too. 
I hesitate to jump in because I'm a little out of touch these days - 
 
My available memory cells are taken up with dissertation stuff in a
different field at the moment. 
It's been almost 10 years since I was reading in this area, but here
goes - 
In the late '90s, given the slipperiness of phonemes and the
development of other phonological descriptive systems, some people were
using feature-level taxonomies - generative phonology and Optimality
Theory. I don't recall reading OT texts on SLs, but it seems to me that
several people were publishing feature-level analyses.
 
Diane Brentari (1999). A prosodic model of sign language phonology.
Cambridge, Mass.: MIT Press.

is reviewed in:

Wendy Sandler (1999). Review of Diane Brentari, A prosodic model of
sign language phonology (Cambridge, Mass.: MIT Press). Phonology 16:
443-447. Cambridge University Press.
 
Wendy Sandler & Diane Lillo-Martin (2001). "Natural Sign Languages". In
M. Aronoff & J. Rees-Miller (eds.), Handbook of Linguistics, pp.
533-562.

cites

Sandler, Wendy (then in press). One phonology or two? Sign language and
phonological theory. In R. Sybesma and L. Cheng (eds.), GLOT
International State-of-the-Article Book. The Hague: Holland Academic
Graphics.

 
It seems to me that there was at least one other generative or lexical
model I was looking at - I'll check my shelves. 
 
To me (as I remember it), the advantage of these descriptions was that
they were not tied to terms that originated with spoken languages;
a [+/- feature] hierarchy worked equally well for both modalities, each
on its own terms.
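 
If it helps, here is a toy sketch (in Python) of what I mean by a
modality-neutral feature bundle - the feature names are made up for
illustration, not Brentari's or Sandler's actual inventories:

    # Toy sketch: a "segment" in either modality is just a bundle of
    # +/- features, and contrast is computed the same way for both.
    # (Feature names are invented for illustration only.)

    spoken_b = {"consonantal": True, "voiced": True,  "labial": True}
    spoken_p = {"consonantal": True, "voiced": False, "labial": True}

    signed_A = {"all_fingers_selected": True, "closed": True,  "movement": False}
    signed_5 = {"all_fingers_selected": True, "closed": False, "movement": False}

    def contrast(x, y):
        """Return the features whose values distinguish two segments."""
        return {f for f in x if x.get(f) != y.get(f)}

    print(contrast(spoken_b, spoken_p))  # {'voiced'}
    print(contrast(signed_A, signed_5))  # {'closed'}

The point is only that the machinery doesn't care which modality the
features describe; each side gets its own feature set.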
I was very taken by the idea of the "mora," a two-part unit roughly
corresponding to: 
move-hold patterns, 
Consonant/Vowel pairings, 
Japanese syllabaries' written representations, 
the opening and closing of a set of muscles. 
- but this was only one aspect of feature analysis, not the whole
theory.
I liked to think we split perceptions up arbitrarily, like drumbeats,
in order to process a continuous stream of input as if it had discrete
parts.
Capturing some information about short segments lets us compare them to
what's coming in, to understand new input in spite of variability.
Whenever someone generates a system for coding in another modality - an
alphabet, a syllabary, a pictographic lexicon, a set of dots and dashes
or bumps, you name it - they are encoding an overlapping set of
different levels of "analysis"/categorization, but sorting it by (more
or less) one level that they can understand and manipulate consciously.
(See also: Sandler, W. (1989). Phonological representation of the sign:
Linearity and nonlinearity in American Sign Language. Dordrecht: Foris.)
So one system might break up moves and holds as separate elements, while
another might combine them in a single system.
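 
A toy sketch of that difference (the H/M labels and the chunking rule
here are my own invention, not any published notation system):

    # Toy sketch: the same hold/move stream chunked two different ways.
    # "H" = hold, "M" = move (labels invented for illustration).

    stream = ["H", "M", "H", "M", "H"]

    # System 1: every hold and every move is its own element.
    system1 = list(stream)

    # System 2: fold each move together with the following hold into a
    # single two-part unit.
    def combine(seq):
        units, i = [], 0
        while i < len(seq):
            if seq[i] == "M" and i + 1 < len(seq) and seq[i + 1] == "H":
                units.append("MH")
                i += 2
            else:
                units.append(seq[i])
                i += 1
        return units

    print(system1)          # ['H', 'M', 'H', 'M', 'H']
    print(combine(stream))  # ['H', 'MH', 'MH']

Same stream, two segmentations - and a writing system built on one will
look arbitrary to someone whose head works with the other.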
In the spoken language examples that have been talked about, some
systems code at the mora level; some split the mora into C and V (and,
perhaps, leave the V out); some split out more features and come up with
roughly 20-50 "phonemes" (more if you add tonal elements); others focus
on a morphological or semantic level (Chinese characters, for example) -
but even Chinese characters are phonologically based to some extent, in
spite of also capturing, as you are talking about below, the semantic
content.
 
Feature-based systems have an advantage over semantic-level systems in
that a smaller set of symbols can fully express everything; but they
also have cognitive disadvantages. One of them is that different people
parse the input stream into arbitrary units differently, so some
elements become disconnected from the "standard" or intended
representation.
This really hit me with the following example (sorry it's from a spoken
language; you'll see why in a minute): 
We pronounce "flour" and "flower" the same, but spell them differently. 
Why? 
In moving from an open-mouth, low-tongue position (flah) to a rounded
one (ur), there is a point of merger.
For some people, that point is perceptible; it just happens to be the
same "coarticulation" set that defines "w" as a consonant. 
Hence:  Flo (short o as in "hot") + ur 
but ALSO  Flo + Wur
AND sometimes Flo + U (these two together rhyme with "how") + ur.
The representations, then, are less arbitrary for some than for others:
if your internal system matches the level of representation, the shapes
of the symbols are arbitrary, but their number and deployment are not.
The farther your own internal system is from the symbolic one, the more
the system carries a second level of abstraction that makes it hard to
code/decode.
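 
Here is that same point as a toy sketch (rough, home-made
transcriptions, nobody's official system):

    # Toy sketch: three parses of the same spoken stretch, which is
    # roughly what the two spellings "flour"/"flower" are tracking.
    # (Transcription labels are informal and mine.)

    parses = {
        "Flo + ur":     ["fl", "o", "ur"],
        "Flo + Wur":    ["fl", "o", "w", "ur"],
        "Flo + U + ur": ["fl", "o", "u", "ur"],
    }

    for label, segments in parses.items():
        print(label, "->", segments)

Whichever parse your own system lands on, the spelling that matches it
feels "obvious," and the other one feels like an extra layer to decode.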
 
A second interesting thing about these pairings is that they were very
slippery historically until dictionaries came along.  
SignWriting begins with a dictionary, though. 
I would be really curious to know whether there are persistent "poor
spellers" in SignWriting as time goes on, 
who perceive categories of movement differently and insist on their own
coding system. 
 
Carolyn Ostrander
PhD student, Composition and Cultural Rhetoric
Syracuse University 
clostran at syr.edu



________________________________

	From: slling-l-bounces at majordomo.valenciacc.edu
[mailto:slling-l-bounces at majordomo.valenciacc.edu] On Behalf Of Sonja
Erlenkamp
	Sent: Friday, September 28, 2007 3:34 AM
	To: A list for linguists interested in signed languages
	Subject: SV: [SLLING-L] use of sign language in Jordan
	
	
	 
	Kathy wrote:
	 
	>(And I WOULD like to know if anyone can list the phonemes of
any sign language...and justify their phonemic status...)
	 
	That's a really good question. :) 
	I have searched for an answer to that question for some years
now and haven't found any full description of the phoneme system of any
signed language yet. But of course I may not have found the one that
exists (please let me know if that is the case :)
	One of my Ph.D. students, who is working on notational systems
for sign language dictionaries, seems to be closing in on the conclusion
that one of the major problems for notational systems is capturing
shared iconic features of different signs.
	Personally I believe that many, if not almost all, of the
parameters in a single sign, including nonmanual features, can (and
often do) carry an iconic potential, which makes them by definition
non-arbitrary; that in turn means they could not be phonemic in the
sense of spoken language phonemes, because phonemes are by definition
arbitrary. On the other hand, a handshape, for example, is not always
morphemic either, since it does not carry meaning in a morphemic sense,
just an iconic potential that can be activated in a sign. I think that
signed languages probably do not fit entirely into the linguistic level
model of phonemic - morphemic, and that we probably need a new level,
somewhat in between these two "levels", describing how "iconemes" work.
I use the term "iconeme", roughly speaking, for the "smallest analysable
unit in a language carrying an iconic potential".
	And if (I say IF!) we end up describing an iconeme-level of
signed languages this could also influence our understanding of writing
systems/notational tools for signed languages.
	 
	Just my two cents on a Friday morning :)
	 
	All the best
	 
	Sonja Erlenkamp
	
	
	
	