

LINGUIST List:  Vol-10-1778. Tue Nov 23 1999. ISSN: 1068-4875.

Subject: 10.1778, Disc: New: What exactly are allophones?

Moderators: Anthony Rodrigues Aristar: Wayne State U.<aristar at linguistlist.org>
            Helen Dry: Eastern Michigan U. <hdry at linguistlist.org>
            Andrew Carnie: U. of Arizona <carnie at linguistlist.org>

Reviews: Andrew Carnie: U. of Arizona <carnie at linguistlist.org>

Associate Editors:  Martin Jacobsen <marty at linguistlist.org>
                    Ljuba Veselinova <ljuba at linguistlist.org>
		    Scott Fults <scott at linguistlist.org>
		    Jody Huellmantel <jody at linguistlist.org>
		    Karen Milligan <karen at linguistlist.org>

Assistant Editors:  Lydia Grebenyova <lydia at linguistlist.org>
		    Naomi Ogasawara <naomi at linguistlist.org>
		    James Yuells <james at linguistlist.org>

Software development: John H. Remmers <remmers at emunix.emich.edu>
                      Chris Brown <chris at linguistlist.org>
                      Qian Liao <qian at linguistlist.org>

Home Page:  http://linguistlist.org/


Editor for this issue: Karen Milligan <karen at linguistlist.org>

=================================Directory=================================

1)
Date:  Sun, 21 Nov 1999 23:20:10 +0100
From:  Martin Salzmann <Salzmann.M at gmx.ch>
Subject:  What exactly are Allophones?

-------------------------------- Message 1 -------------------------------

Date:  Sun, 21 Nov 1999 23:20:10 +0100
From:  Martin Salzmann <Salzmann.M at gmx.ch>
Subject:  What exactly are Allophones?

Editor's Note:
Martin Salzmann's summary for his query (both reprinted below)
contained such a wide range of responses that the moderators felt
this topic might prove to be a good one for a discussion.


                                 The Query

Dear all,

        I'm currently trying to teach the basics of phonology to my students
        beginning with the classical distinction between phonemes and
        (allo-)phones.

        Now to my great surprise, I've encountered a problem that has never come
        to my attention before: In the classical structuralist sense, phonemes
        belong to the domain of "langue", i.e. the phonological system, while
        (allo)phones belong to the domain of "parole"; they are the actual
        phonetic realizations of a phoneme. No problem so far.
        A phoneme like /i:/ (any other vocalic phoneme could serve as an example
        as well) in English is realized as [i:] - but this phonetic notation is
        of course an abstraction, since every [i:] that is uttered is somewhat
        different - so we would have to say that there is an infinite number of
        allophones of the phoneme /i:/. Usually one says that there is only one
        allophone - probably because all the realizations are considered in some
        way similar.
        But what about complementarily distributed allophones? Take for
        instance the English voiceless plosives, which depending on the
        environment are realized either as aspirated or as unaspirated. These
        are called allophones, but they are again an abstraction: each of the
        allophones, e.g. [p +asp] and [p -asp], can be realized in an infinite
        number of different ways. Now what should be called an allophone? If the
        term were restricted to the actual phonetic realization (as in classical
        structuralism), we would have to find a new term for the "abstract
        allophones".
        So do we end up with three levels instead of two? Or to put the question
        differently: Which phenomena belong to phonology, and which to phonetics?
        Since complementary distribution is an abstract regular pattern not
        solely due to physiological necessity, it would have to belong to the
        domain of phonology, in my opinion. But is there a way to express this
        in classical structuralist phonology, or is the theory simply flawed, or
        is my argumentation faulty?
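
        To make the intended three-way cut concrete, here is a purely
        illustrative toy sketch (in Python; the rule inventory, the environment
        labels and the function name are invented for exposition, not drawn
        from any of the theories discussed here): a phoneme plus its
        phonological environment selects an "abstract allophone", while a phone
        is whatever token is actually uttered.

        # Hypothetical toy rule; deliberately simplified English aspiration facts.
        def allophone(phoneme: str, environment: str) -> str:
            """Map a voiceless plosive and a coarse environment label to its
            abstract allophone: aspirated in a stressed onset, unaspirated
            after /s/."""
            if phoneme in {"p", "t", "k"}:
                if environment == "after_s":
                    return f"[{phoneme} -asp]"
                if environment == "stressed_onset":
                    return f"[{phoneme} +asp]"
            return f"[{phoneme}]"

        print(allophone("p", "stressed_onset"))  # [p +asp], as in 'pin'
        print(allophone("p", "after_s"))         # [p -asp], as in 'spin'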

        I believe the picture would be different in generative phonology. If
        I'm not completely mistaken, the three-way contrast indicated above is
        in fact represented by the classical derivational model: There are the
        URs (roughly comparable with the phonemes) and the surface
        representations. As far as I can tell, these surface representations are
        not phonetic entities, but still feature bundles (different from the URs
        only in being fully specified and possibly having undergone some rules).
        This is how they are in the phonological component (probably
        corresponding to what I called "abstract allophones" above). What in the
        classical structuralist sense is called a phone, conceived as a physical
        entity, would then be the result of the interaction of the phonological
        component with the sensory system.

        There is a third problem for which I've been unable to find a satisfying
        solution. Quite often, a distinction is made between complementary
        distribution and coarticulation, the distinguishing factor being
        physiological (in)evitability. In a major textbook like Spencer's
        Phonology (1996), the different pronunciations of /k/ in <key> and
        <car>, the first palatal, the second velar, are considered an instance
        of coarticulation because of a physiological inevitability. Now this
        inevitability is more properly called a physiological inevitability of
        the native speakers of English, since the two sounds can be contrastive
        in a number of languages, for instance Turkish. But if there is no
        universal physiological necessity to pronounce the sounds this way,
        couldn't one regard the two realizations as complementarily distributed
        allophones or surface representations?
        In German phonology, when speaking of complementary distribution, the
        example always adduced is the distribution of the voiceless velar and
        palatal fricatives: their occurrence depends on the value for the
        feature [back] of the preceding vowel (i.e. velar after a back vowel,
        palatal after a front vowel). For a native speaker of Standard High
        German, this distribution is about as physiologically inevitable as the
        key/car distribution for a native speaker of English, but no one has
        ever spoken of coarticulation in this case. What might be the reasons?
        Am I simply wrong? Do these examples represent truly different
        phenomena?



=============================================================
                               The Responses



I'm not an expert in phonology, but I do teach it, and the story I
usually tell my students is that there are 3 levels: the phoneme, the
allophone, and the phone.  I have, in fact, seen this perspective
taken in several textbooks.  A phone is a physical event, an allophone
is a category of physical events, and a phoneme is a category of
allophones.  In fact, I don't see any way around looking at it this
way.  Of course, a phone is really still an abstraction, because
speech is not segmented.
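
To make that three-level picture concrete, here is a minimal toy sketch
(in Python; the token labels and the groupings are invented, not data):
phones are grouped into allophone categories, and allophone categories
are grouped into a phoneme.

# Toy illustration only: invented phone tokens, grouped into allophone
# categories, which are in turn grouped into a phoneme.
phone_tokens = ["token_001", "token_002", "token_003"]   # physical events

allophone_of = {                # phone -> allophone (category of phones)
    "token_001": "[p -asp]",
    "token_002": "[p -asp]",
    "token_003": "[p +asp]",
}

phoneme_of = {                  # allophone -> phoneme (category of allophones)
    "[p -asp]": "/p/",
    "[p +asp]": "/p/",
}

for token in phone_tokens:
    allo = allophone_of[token]
    print(token, "->", allo, "->", phoneme_of[allo])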

This does bring up deeper questions about the distinction between
phonetics and phonology, which I have never seen resolved to my
satisfaction.  However, one could, if one wanted to gloss over things,
give the problem of how phones are categorized into allophones to
phonetics (the search for acoustic cues to segment identities is part
of this task) and give the problem of how allophones are grouped into
phonemes to phonology.  Such a division would only convince beginning
students, of course, but it is a sort of ideal that almost reflects
reality.

As for the distinction between complementary distribution and
coarticulation, I think that you've confused the taxonomy a bit.
Coarticulation is merely one process that leads to complementary
distribution.  That is, the two /k/ allophones in English are an
example of complementary distribution.  The source of this
distribution is coarticulation effects.  Furthermore, coarticulation
effects of this sort are not a physiological inevitability, as you
point out.

The book you refer to may be trying to make a distinction between the
minor, though predictable, differences in pronunciation between
different occurrences of the *same allophone*, and the larger
differences recognized as allophonic variation (differences which
distinguish allophones *from each other*).  If so, coarticulation is
not the word to use to describe the minor differences.  I have never
heard a term for this notion, because there is clearly a gradient.
Whether we recognize two physical occurrences as two occurrences of the
same allophone, or as occurrences of different allophones, is not a
question that I think has a definite answer.  That is, the border
between phonetics and phonology is not clearly defined.  Perhaps work
in phonetics will one day make things clearer.

Regards,

Bob Knippen
Dept. of English
Texas A&M University
===============================================================

As far as I know, pure Saussurean theory does not address phonological
variation at this point; the theory of phonemes which led to
distinctive feature theory came from Boas and Sapir (cf. Boas's paper
"On Alternating Sounds" and Sapir's paper "Sound Patterns in
Language") -- that Jakobson was influenced by Boas is clear in his
writing on Boas (to be found in his collected writings).  Boas
explained that the sounds of American languages were not random, that
there was a system; Sapir realized that the sounds of the language
were organized systematically with respect to one another and that
phonemes were therefore psychologically real for speakers in a way
that simple phones are not; that is, one can physically make sounds
not present in one's own language but it is difficult to place them
into one's own speech system; also, it is difficult for most speakers
to hear allophonic variation in their own language.  If I were to put
it into structuralist terms, then, the rule for deriving allophones
which stand in complementary distribution would be a part of langue,
that is, the language system at the type level.  One could make a case
for this in Saussurean terms by referring to the concepts of linearity
and relative motivation (that is, the relationship of phonemes to each
other is determined by pure difference, i.e. distinctive features, but
the presence of a phoneme in a syntagm is realized in parole through
the syntagmatic rules by which a phoneme, standing next to other
phonemes in a linear relation, is realized in speech) -- however, this
is NOT what Saussure argues.  For Saussure, the relation between
type-level langue and token-level parole production is problematic --
he does not give a cogent theory of speech production, and the
relationship of langue to the ideal speech community [masse parlante]
is particularly problematic.  The theory of the phoneme which later
developed into generative phonology did not develop with Saussure;
rather, recognition of a phonemic system as psychologically real for
speakers came through Sapir in the 1920s, after Saussure's time;
Jakobson united this idea of a phonological system with the Saussurean
idea of value as produced through negative difference, creating with
Halle the theory of generative phonology (different still from
articulatory phonetics -- although the connection of the phonological
system with the body as realized in phonetic utterances is important
for some theorists).

In short, YOUR argumentation is not faulty; rather, the problem is the
lack of an interface between type and token in Saussurean theory --
this problem runs throughout, in terms of language change but also in
terms of the relation between phonetic realizations and rule-based
behavior, such as one sees in phonology.  This is a problem with roots
in Western semiotic thinking since Aristotle and Plato.  But in
generative phonology, the insight into distinctive features of the
phonological system did come from the Saussurean concept of value;
that is, that nothing has value except in terms of how it is different
from other possibilities within a syntagm.  However, that view has
been critiqued with respect to the lexicon (Jakobson, Six leçons sur
le son et le sens), but it appears to have validity for the
phonological system at the level of speaker awareness of the system
(that is, what speakers recognize as phonemes and where they have
trouble distinguishing complementary distribution in their hearing).
More cognitively based approaches, such as current Optimality Theory,
appear to critique this view but are not as approachable in terms of
speaker awareness -- whether they are valid as representations of
actual cognitive phenomena is a current or future debate, I'm sure.

John Thiels
Ph.D. student
Department of Anthropology
Brandeis University
Waltham, MA
================================================================
Interesting posting on Linguist.  The skepticism you exhibit towards
the accepted distinction appeals to me.  I'd say you're on the right
track when you suggest that the original distinction between "phoneme"
and "allophone" is flawed.  As you point out, every realization is
unique, so how can there be classes, i.e. allophones?  Without going
into detail, I'd just suggest you look at two works for a different
view of the task of phonology:

William Diver.  "Theory" in _Meaning as Explanation:
Advances in Linguistic Sign Theory._  Ellen
Contini-Morava & Barbara Sussman Goldberg (eds.).
Mouton de Gruyter, 1995.

Yishai Tobin.  _Phonology as Human Behavior._  Duke
University Press.  Recent date.

Joseph Davis
City College of New York
================================================================
Hello Martin:
First of all, my apologies for not replying in English, since I don't
know how to write it; I hope you can understand Spanish. In any case,
my note is very brief. I am going to give you a bibliographic reference
that has been very useful to me in my classes on these topics. It is a
work by Coseriu:
Coseriu, Eugenio (1962): "Forma y Sustancia en los Sonidos del
Lenguaje", in Teoría del lenguaje y lingüística general. Cinco
Estudios. Madrid, Gredos.
You can also find it in Revista de la Facultad de Humanidades y
Ciencias, 12, 143-217.
It also appeared as an independent edition in Montevideo in 1954.
I hope you have understood me and that this is of some help.
Best regards,
Javier

Javier Simón Casas
Departamento de Lingüística General e Hispánica
Universidad de Zaragoza
SPAIN
================================================================
Dear Martin,
I think that the problems you present are very common.

John Laver (1994) distinguishes between the linguistic sound (the
material sound, what you hear) and the phone. Saussure (1916) also
speaks of an "acoustic image" (psychological) and the sound (physical),
and says that the acoustic image and the sound must not be confused.
The phone is abstract.

Coseriu (1952/1973) holds that the sound belongs to 'parole', the phone
to the 'norm', and the phoneme to the 'system'. I think that this way
of classifying sounds in structural phonology is better than saying
that phones are in 'parole' and phonemes in 'langue'.


Francisco Dubert García
Departamento de Filoloxía Galega
Universidade de Santiago de Compostela
Santiago de Compostela
España
e-mail: fgdubert at usc.es
==========================================================
Hi
        Your question on Linguist left me somewhat puzzled. I was under the
impression that allophones were purely in the domain of phonology and
co-articulation in the domain of phonetics. Allophones and
co-articulation are both context motivated, but the former at the
phonological level (i.e. how a certain phoneme has to be realised in a
specific phonological environment) and co-articulation at the phonetic
level (i.e. how certain allophones are realised in a specific phonetic,
or articulatory, environment).
        In your message, you ask whether there are 3 levels of phonological
realisation instead of the usual 2. This somewhat gave me food for
thought, and digestion kept me awake for a little while. Maybe there is
a language-specific phonetics? I.e. the fine tuning of surface
realisation will have an effect on the surrounding segments, an effect
that may not be universal. In Montreal French, there is a phenomenon of
affrication of dental stops when followed by a high front vowel, /i,y/.
In European French, this phenomenon is not present, or at least one
does not hear it. When a sonogram analysis is run on both dialects,
affrication is present in both, though at different "strengths".
Laurent Santerre, deceased phonetician at Universite de Montreal, once
told me that it would be a perception problem, since acoustically both
dialects affricate their dental stops in that environment. The specific
aperture of Montreal French will reinforce the level of affrication,
making it more audible for speakers of that dialect. He noticed the
same distinction in the diphthongisation of long vowels in both
dialects.
        The velar-palatal difference between the 2 realisations of /k/ in
English is, as you mentioned, considered a co-articulation phenomenon,
while in Turkish it is considered a distinctive feature. I would be
inclined to think, from the affrication example in French, that in both
languages there is a co-articulation phenomenon of palatalisation in
front of front vowels and of velarisation in front of back vowels. The
difference being distinctive in Turkish, it is not perceived, but would
show on a sonogram. The distinctiveness of the feature would (and this
is completely conjectural) weaken the perception of the co-articulation,
as the aperture of European French vowels weakens the perception, or
strength, of the affrication of dental stops.
        Now, should there be 3 levels (2 abstract and one concrete) of
sound production? I would say yes: phonology (phonemes and allophones)
being abstract objects, one being the output of the other, and
co-articulation being the acoustic output of allophones. This is a
somewhat risky proposition, since allophones have nearly always been
taught as the actual surface realisation of phonemes, but, as you
wrote, no speaker ever produces the exact same realisation of a
phoneme. It was argued that the differences were trivial and that they
should not be taken into account, since they do not influence
perception, i.e. they pass through the perceptual filters of speakers
of a given language.
        On the other hand, the phonology of a language being nothing more
than the statistical average of the phonologies of that language's
native speakers, I would say that the relation between allophones and
co-articulation-motivated differences in actual surface realisation is
the same as the difference between the phonology of a language and the
individual phonologies of its speakers. One is an average to be
expected from any individual speaker, and the other is the actual
individual realisation of the said phonology. Allophones are the
average, expected realisation of a phoneme in the language, while
co-articulation is the actual individual realisation.
        To come back to the palatal-velar influence in the realisation of
/k/ in English versus the distinctiveness of those features in Turkish,
I would say that both [k+velar] and [k+pal] in English should be
considered allophones as far as perception is concerned, while they are
phonemes in Turkish. As far as acoustics is concerned, I would say that
they are subject to the co-articulation phenomenon (if it is
demonstrated that the sonograms in both English and Turkish show
palatalisation in a similar context).

Yours

Alain Theriault
Ph.D. candidate
Universite de Montreal
================================================================
I am a linguistics and speech pathology student.  I don't have a
"professional" opinion on your problem, but I've certainly thought
about this a lot.  I'm afraid that after studying phonetics, the only
conclusion I can draw is that phonologists just don't consider the
reality of speech enough.  They came up with a system of phonemes and
allophones where one is supposed to be abstract and the other "real",
but as you said there are tons of allophones, not just a few.  For
example, an acoustic phonetician (I can't remember which) suggested
there might be over 100 allophones of /s/ in English alone.  These
depend on context of course, which raises your problem about
coarticulation.
I don't see any difference between complementary distribution and
coarticulation.  If coarticulation is language specific, which you show
with the Turkish example, then it has to be allophonic variation.  I
think the reason allophones came about was maybe an attempt to account
for the realities of speech, but it certainly doesn't come anywhere
close to doing this.  In terms of teaching, I'm not sure there is a
problem, though.  If you teach coarticulation as being a phonological
process, it follows naturally that allophones are involved.  Maybe any
sort of assimilation, dissimilation, deletion, etc. is a type of
language-specific coarticulation - after all, speech is just one long
string of coarticulated sounds.

Cori Kropf
cor9999 at aol.com
=================================================================
Dear Martin,

Phonemes and allophones can be clearly distinguished by taking into
account two criteria: distribution and functional contrast, cf. John
Lyons (1981): Language and Linguistics, Cambridge: CUP, pp. 85ff.
That is to say, allophones are phonetically similar (whatever that may
mean, though) realizations of the abstract unit of a phoneme,
technically in the same way as allomorphs and word-forms are concrete
realizations of abstract morphemes and lexemes respectively. With
regard to phonology, phonemes can be identified on the basis of the
aforesaid two criteria.
If phonetically similar sounds don't occur in the same context
(complementary distribution: e.g. light [l] vs. dark [l]), they can be
referred to as two allophones of one phoneme: there is no minimal pair
in existence in which the distinction between these two allophones
leads to a functional contrast (in meaning).
On the other hand, if sounds - even phonetically similar ones - occur
in the same context and are able to create a functional contrast, i.e.
fulfill a distinctive function (e.g. voiced /b/ vs. voiceless /p/ in
/bit/ vs. /pit/), these sounds are two different phonemes in the
language system at issue.
However, I think your problem lies within the concept of phonetic
similarity. For example, [t] and the regional variant of a glottal stop
in words such as <butter> can just as well be regarded as two
allophones in free variation, because they don't fulfill a distinctive
function, but it goes without saying that the two lack phonetic
similarity, since both place and manner of articulation differ.
Nevertheless, phonemes can always be identified by the criterion of
functional contrast, whereas allophones, as realizations of phonemes,
can always be found either in complementary distribution or in free
variation without functional contrast. I guess the best way is to
ignore, to a certain extent, the traditional belief that allophones
must be phonetically similar. When we concentrate on this criterion,
my students in the foundation courses also tend to get confused.
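
The two criteria can be turned into a rough toy procedure (in Python;
the lexicon and the decision rule are invented simplifications, and
free variation, which requires information about meaning, is left out):
if two sounds ever occur in the same context in different words they
form a minimal pair and count as separate phonemes; if they never share
a context they are in complementary distribution.

# Toy sketch of the two criteria (distribution and functional contrast).
# Words are tuples of segments; the data and the decision rule are invented.
def classify(sound_a, sound_b, lexicon):
    contexts_a, contexts_b = set(), set()
    for word in lexicon:
        for i, segment in enumerate(word):
            context = (word[:i], word[i + 1:])   # left and right neighbours
            if segment == sound_a:
                contexts_a.add(context)
            elif segment == sound_b:
                contexts_b.add(context)
    if contexts_a & contexts_b:
        return "shared context, minimal pair: separate phonemes"
    return "no shared context: allophones in complementary distribution"

# /b/ vs /p/ in 'bit' / 'pit'; "L" = light [l], "5" = dark [l] in 'lip' / 'fill'
print(classify("b", "p", [("b", "i", "t"), ("p", "i", "t")]))
print(classify("L", "5", [("L", "i", "p"), ("f", "i", "5")]))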

Yours sincerely

J. Mukherjee

Englisches Seminar der Rheinischen
Friedrich-Wilhelms-Universität Bonn
Regina-Pacis-Weg 5
D - 53113 Bonn
++49-228-735727
j.mukherjee at uni-bonn.de
======================================================================
In my opinion, most of the answer to your question involves a
meta-rumination, a meta-reflection, on what a theory of phonology (like
a theory of many laboratory-based sciences) is, rather than a position
particular to one brand or another of structuralist or generative
phonology -- though it may be hard to untangle what is particular to
(say) Hockett-ian structuralist phonology and what is Hockett's
meta-rumination about how science works.

Every science has to deal with the fact that there's a smooth (hence
infinite) range of laboratory descriptions of the events that are being
observed. But due to several kinds of restrictions, both theoretical
and practical (and the line there is often hard to draw), we limit our
"representation" of the data -- for example, we limit it to N
significant digits; we often say we do that due to limitations of the
equipment being used to conduct observations.

We -- still not just linguists, but anybody doing laboratory science --
frequently go another step, and collapse into larger categories
observations that we know are different (that we measure as different),
based on explicit or implicit theoretical assumptions. This gives us a
more symbolic representation of the observations. And this gets us to
what a linguist might call the narrowest phonetic representation. Some
linguists, both contemporary and from earlier periods in the century,
would begin to feel uncomfortable at this point, and say that as far as
speakers are concerned, there is no justification for assuming that
this level of representation corresponds to anything in the head; most
linguists would shrug and say, oh, probably it does, and we can't get
anywhere without making that assumption.

This question is not one of levels, at least not of linguistic levels;
it's part of the meta-reflections we share with other sciences.
Linguistic levels are specifically theory-internal, and in all cases
that I'm aware of, deal with alternative symbolic representations.

A lot of phoneticians are uncomfortable with any symbolic system, any
representational system at all, which leaves us rather high and dry
when trying to work out a way to interface with phonologists' theories.
At the other end of the spectrum is SPE's theory of phonetics, designed
specifically to meet the needs of phonologists, not phoneticians.

Questions of allophony are by their very nature questions of phonology,
and phoneticians can say nothing about them until there is at least
some kind of working agreement about how phonetics and phonology
interact and divide up responsibilities. On the other hand, terms like
coarticulation are fundamentally phonetic in nature, and have no
particular status in phonology (except as convenient labels).

No one, I think, is attracted by the notion that phonetics is universal
and phonology is language-particular -- all recognize that there is
universal and particular in both. I myself think that the notion of
phonetically-motivated, or articulatorily-motivated, is unpalatable
and unappealing, and often vacuous, but I'm probably in a minority
on that.

This just touches the surface, but maybe this is helpful.
Best, John Goldsmith
=====================================================================
Dear Mr Salzmann!
I saw your posting about allophones on the LINGUIST List. If my reply
is of any help to you, I'll be very glad.
The problem is seen here from the point of view of the phonological
school that taught me - the St. Petersburg school of phonology.

An allophone does not belong to parole, but to langue. Phoneme and
allophone are both abstractions; they are correlative, as general and
particular. There cannot be an infinite number of allophones. The
number of allophones for each phoneme of a certain language is
limited. The number of phones is infinite.
There are certainly three levels: phone - allophone - phoneme. I
think that the phoneme belongs to phonology, and the allophone to
phonetics.

I don't see how coarticulation and complementary distribution can be
opposed; they are two different phenomena. But we can say that
coarticulation causes a type of complementary distribution, forming
combinative allophones (there are two types of allophones: those caused
by position and those caused by combination - i.e. coarticulation).
And both phenomena that you mention (key/car and back/front vowel -
palatal/velar consonant) can be called complementarily distributed
allophones, caused by coarticulation.

--
Lena Pigrova (Lena at KP3912.spb.edu)
====================================================================
Dear Martin,

I appreciated your discussion of the problems of the notion of
allophony.  I addressed some of these problems in my thesis,
Pronunciation modeling in speech synthesis.  Traditionally, people have
ascribed symbols to certain allophones in certain languages.  For
example, it is common to use a flap symbol in "phonetic" (as opposed to
phonological) transcriptions of American English.  However, it becomes
clear that symbols are most appropriate for phonological analyses, and
their usefulness for allophony is questionable.  In chapter 4, I
describe an experiment comparing gradient and discrete aspects of
postlexical variation.  You can download my thesis from
http://www.ling.upenn.edu/~coreym/diss.html.

Corey Miller
Nuance Communications
coreym at nuance.com
=====================================================================
Bonjour,

1.
>But what about complementarily distributed allophones? Take for
>instance the English voiceless plosives, which depending on the
>environment are realized either as aspirated or as unaspirated. These
>are called allophones, but they are again an abstraction: each of the
>allophones, e.g. [p +asp] and [p -asp], can be realized in an infinite
>number of different ways. Now what should be called an allophone? If
>the term were restricted to the actual phonetic realization (as in
>classical structuralism), we would have to find a new term for the
>"abstract allophones".
>So do we end up with three levels instead of two? Or to put the
>question differently: Which phenomena belong to phonology, and which
>to phonetics?
>Since complementary distribution is an abstract regular pattern not
>solely due to physiological necessity, it would have to belong to the
>domain of phonology, in my opinion. But is there a way to express this
>in classical structuralist phonology, or is the theory simply flawed,
>or is my argumentation faulty?

I confess I don't see where the problem lies here. Allophones are,
strictly speaking, the realizations of phonemes. As such, and as you
underlined for /i:/, their number, for one given phoneme, is virtually
infinite, even in the case of 'complementary distribution'. One might
speak about 'major classes' of allophones (e.g. +asp. and -asp.
plosives in English); I guess, however, that this would be an artefact,
partly due to the use of IPA symbols, and that phonetic data reveal a
gradient, rather than discrete categories, within allophonic variation.

2.
>I believe the picture would be different in generative phonology. If
>I'm not completely mistaken, the three-way contrast indicated above is
>in fact represented by the classical derivational model: There are the
>URs (roughly comparable with the phonemes) and the surface
>representations. As far as I can tell, these surface representations
>are not phonetic entities, but still feature bundles (different from
>the URs only in being fully specified and possibly having undergone
>some rules). This is how they are in the phonological component
>(probably corresponding to what I called "abstract allophones" above).
>What in the classical structuralist sense is called a phone, conceived
>as a physical entity, would then be the result of the interaction of
>the phonological component with the sensory system.

In fact, the three-way contrast established by classical generative
phonology is not similar to the one you suggest, for two reasons.
Firstly, the URs are morpho-phonemes, not phonemes in the structuralist
sense. For example, the /k/ of electriC and the /s/ of electriCity are
distinct phonemes but two 'reflexes' of the same UR in SPE-based
frameworks. Secondly, there is no specific level in generative
phonology corresponding to the phonemic level of earlier theories.
Rather, generative phonology could be said to posit an n-way contrast
according to the number (n) of rules that apply to a given UR.

3.
>There is a third problem for which I've been unable to find a
>satisfying solution. Quite often, a distinction is made between
>complementary distribution and coarticulation, the distinguishing
>factor being physiological (in)evitability. In a major textbook like
>Spencer's Phonology (1996), the different pronunciations of /k/ in
><key> and <car>, the first palatal, the second velar, are considered
>an instance of coarticulation because of a physiological inevitability.
>Now this inevitability is more properly called a physiological
>inevitability of the native speakers of English, since the two sounds
>can be contrastive in a number of languages, for instance Turkish. But
>if there is no universal physiological necessity to pronounce the
>sounds this way, couldn't one regard the two realizations as
>complementarily distributed allophones or surface representations?
>In German phonology, when speaking of complementary distribution, the
>example always adduced is the distribution of the voiceless velar and
>palatal fricatives: their occurrence depends on the value for the
>feature [back] of the preceding vowel (i.e. velar after a back vowel,
>palatal after a front vowel). For a native speaker of Standard High
>German, this distribution is about as physiologically inevitable as
>the key/car distribution for a native speaker of English, but no one
>has ever spoken of coarticulation in this case. What might be the
>reasons? Am I simply wrong? Do these examples represent truly
>different phenomena?

Well, I always told my students that 'complementary distribution' and
'coarticulation' (generally) represent the same phenomena, but from a
different point of view. On pure phonological grounds, you speak about
'complementary distribution'; this is a mere statement of
distributional facts. Now, if you want to 'explain' such distributions
on phonetic grounds, the notion of 'coarticulation' is generally
unavoidable (as is the case for <key> and <car> as well as for the
palatal allophone of German /k/). Note that this (too simple)
distinction between 'phonetics' and 'phonology' is only valid within a
linear framework. In autosegmental models, for example, coarticulation
is directly represented insofar as the palatality of the [k] in <key>
can be viewed as belonging to the phoneme /i/, which will be linked to
two slots. In this theory, 'complementary distribution' explicitly
appears as a segmental *effect* of coarticulation.

Best regards.


  Joaquim Brandao de Carvalho
                       jbrandao at idf.ext.jussieu.fr
  Departement de linguistique
  Faculte des Sciences Humaines et Sociales - Sorbonne
  Universite Rene Descartes - Paris V

====================================================================
Hello Martin,

Your query raises a really interesting point, namely that of the
level(s) of abstraction of our linguistic (and generally "scientific")
descriptions.  The IPA, I guess, tried to distinguish between langue
and parole with respect to the phonological system, and perhaps were
not aware that their "allophones" are as much of an abstraction as
their phonemes, albeit of a different level/nature.  I remember
discussions back when I was a grad student at UCLA (i.e., among others,
with Peter Ladefoged and Vicki Fromkin) about this question of
"abstraction" in the IPA alphabet.  (Note the label!)  They were also
called "phonetic symbols"; IOW, at some level, at least, we were quite
aware of our "abstraction".  This may be another case of a method
(technique) of analysis (data handling) which, once accepted for
convenience, acquires the status of "fact" and is no longer questioned
by the people in the discipline.

With all the work that's been done on categorical perception, one can
understand the IPA's abstractions and the need for them.

Regards,
Peter
====================================================================
Dear Martin Salzmann,

Briefly, on your LINGUIST List query:

ad 1) The problem you describe does indeed seem to me to arise whenever
one tries to characterize the notions 'phoneme' and 'allophone' against
the background of the Saussurean pair 'langue' - 'parole'; to that
extent the problem is quite real and points to an inadequacy of the
langue-parole dichotomy. In German-language Romance linguistics, a
proposal by E. Coseriu has received much attention; it extends the
Saussurean dichotomy to a triad 'system - (usual) norm - speech
(parole)' (e.g. E. Coseriu, Sprachkompetenz, Tübingen: Francke 1988).
Within this framework, your 'abstract allophones' could be assigned to
the level of the norm, and the concrete realizations to the level of
speech (parole). When introductory linguistic treatments simply assign
allophones to the level of 'parole', they evidently (implicitly) treat
allophones as types and leave their realization as tokens out of
consideration - whether or not this is covered by a previously given
definition of 'parole'. In morphology, incidentally, quite similar and
indeed additional problems arise when morphemes are assigned to langue
and allomorphs to parole. I once raised the related problems in an
introductory course, which led to enormous confusion among the
students. Since then I have, as a rule, discussed the concepts of
structuralist phonology and morphology without explicit reference to
the pair 'langue' - 'parole'.

ad 2) I completely agree with your assessment of generative phonology.
The problem does not arise there.

ad 3) This problem does not seem easy to solve. I don't know exactly
what Spencer (1996) writes about English /k/ in <key> and <car>
(unfortunately I don't have the book at home but in my office at the
university, so I can't look it up again). Ultimately, both the English
example and the classic German textbook example you cite are
assimilation phenomena. The question that arises (and whose answer may
well be controversial) is whether the two phenomena have the same
status. The realization of the German phoneme /x/ as a velar or a
palatal fricative is in any case part of the language-specific
phonological rules of Standard High German; I would by no means speak
of physiological necessity or inevitability here. The English case -
the same holds, mutatis mutandis, for German, the Romance languages and
surely many other languages - may be of a more general nature, so that
one could rather speak here of physiological coarticulation and
accordingly subsume the phenomenon under 'low-level phonetic rules'
(no longer phonological rules) - similar, for instance, to the
(acoustically demonstrable) phonetic nasalization of vowels in the
context of nasal consonants (as in German <Mann> or English <man>), in
contrast to allophonic nasalization, as e.g. in Portuguese or, under
abstract analyses of NV sequences, also in French. There are, however,
works in which the different realizations of /k/ before front and back
vowels are treated as (complementarily distributed) allophones and thus
presumably assigned to the phonological component of the grammar.
Indeed, such starting situations can lead to phonologization and to the
restructuring of lexical representations: cf. Latin /k/ before front
vowels, which becomes a palatal affricate in Italian, Spanish etc.,
while Latin /k/ before back vowels is preserved as a velar stop.
Further developments then led to /k/ and /tsch/ (sorry, no phonetic
symbols in this editor!) forming minimal pairs (cf. Italian <chi> /ki/
'who' and <ci> /tschi/ 'there'). So the question is: are the different
realizations of /k/ in the English examples above, or in German
<Kirche> and <Kugel>, in the present-day synchrony the result of
phonological or of phonetic rules? Does the articulatory/auditory/
acoustic distance between the realizations play a role in deciding the
question (is /k/ before a front vowel still velar, or already a palatal
stop, i.e. [c]?)? Can the question be answered "objectively" at all?
Are there criteria for deciding?

Andreas Gather

Dr. Andreas Gather
Ruhr-Universität Bochum
Romanisches Seminar
GB 8/133
Universitätsstr. 150
D-44780 Bochum
Email:  andreas.gather at ruhr-uni-bochum.de    ODER
        ac.gather at t-online.de
=====================================================================
You have raised a number of interesting questions in your posting, and
I thought I'd give you my thoughts on the matter.

First, I'm not sure all structuralists would have accepted your claim
that phonemes are part of langue and allophones part of parole.  Of
course, there was considerable variation among structuralists -- first
of all, between American and European; second, in Europe, between
Britain and the continent (i.e. between, say, Jones and Trubetzkoy);
and in the US between the Sapirians and the Bloomfieldians.  For Sapir,
allophones were probably part of parole, while phonemes were mental
percepts (similar to what Baudouin thought), but for Bloomfield the
langue/parole distinction was meaningless and phonemes were just
classifications of sounds into boxes.  Current generative theory (at
least some of it) would distinguish between phonological rules and
phonetic implementation rules, although it normally allows for
language-specific instances of the latter as well.

The German case you bring up is interesting because it is claimed, at
least for Standard German, that the alternation is actually not
automatic after all -- certain morpheme boundaries block it.  A typical
example is the contrast `tauchen' : `Pfauchen', with supposedly a velar
in the former and a palatal in the latter.  I could dig up some
references to this issue if you're interested.

There's much more that could be said about these questions -- Steven
Anderson deals with some of it in his history of phonological theory,
and others, myself included, have other opinions.  I'd be glad to
discuss it further if you're interested.

Geoff Nathan

Geoffrey S. Nathan
Southern Illinois University at Carbondale
Carbondale, IL, 62901-4517
Phone:  (618) 453-3421 (Office)
     (618) 549-0106 (Home)
                             geoffn at siu.edu

=====================================================================
Hello,

A topic which I often hear mentioned in the recent Laboratory Phonology
literature (or at conferences) is whether there is any difference
between non-contrastive phonetic differences which are traditionally
considered allophonic alternations and non-contrastive phonetic
differences which are usually not mentioned at all or relegated to
"phonetic implementation."  This seems to be the same topic you are
bringing up.

First, as you point out, a distinction between variation which is
physiologically necessary and variation which is not doesn't seem very
useful.  Kingston and Diehl (Language, 1994) argue that much obviously
phonetic variation is language specific, and therefore learned rather
than physiologically caused.  (They also cite various references on
this.)

I'm not sure classic generative phonology deals with this problem any
better than structuralism does.  It seems obvious that structuralism
would also require some sort of "phonetic implementation" (whatever
that is), although we know now that that can't be universal.  But in
generative phonology, how do we decide what's worthy of being part of
the phonology and what's (language-specific) phonetic implementation?
As you point out, physiological necessity isn't a good criterion.
Whether the difference can be distinctive in some other language
probably isn't a good criterion either, since a wide variety of
differences are distinctive in _some_ language, and this has nothing to
do with the system of the language being analyzed.  (For example,
English /u/ is similar to a front rounded vowel in the environment
/tut/ because the alveolars raise the F2.  /u/ vs. /y/ is distinctive
in many languages.  But if this means that English /u/ has an allophone
[y], then the vowel /u/ must have quite a variety of allophones,
conditioned by the place of the preceding and following consonants.)
Whether the variation is language specific or not doesn't seem to be a
good criterion either.

In the end, there are examples of clearly phonetic variation in
language, which may be language specific.  However, these same types of
variation may be phonologized in a given language, so one often finds
similar processes which look more or less phonological.  Some current
work in OT seems to put all such variation (and perhaps all variation
of any sort) into the grammar.  I think it's an open question whether a
distinction between "real" allophonic variation and "lower-level
phonetic" variation is useful, and how such a distinction should be
included in phonological theory.

Natasha Warner

_______________________________________________________________________________

Natasha Warner                                  Ph.: 31-(0)24-3521372
Max Planck Institute for Psycholinguistics      FAX: 31-(0)24-3521213
PB 310                                          Email: Natasha.Warner at mpi.nl
NL-6500 AH Nijmegen
the Netherlands
=====================================================================
Dear Martin,

Allophone does seem to have been an ill-defined notion.

A Greek native speaker once told me he had never realized that palatal
chi and velar chi were different sounds until he read it in a Greek
language course for foreigners.
I am tempted to conclude that realizations of the same phoneme are
labeled as different allophones when they sound different to a linguist
who is not a native speaker. That's an exaggerated conclusion, but I
guess there's some truth in it.
By the way, the Modern Greek chi does not have only a front and a back
realization (before front and back vowels respectively), but also
intermediate realizations (before various consonants - there's a paper
by Mirambel on this). I think the existence of such intermediaries can
be a criterion for deciding whether we have one phoneme or two.
However, there might be the contrary example of the velar and palatal
fricatives in Standard High German, which can be argued to be one and
the same phoneme (though the word "Frauchen" makes the issue
controversial).

In a search for a definition, one should return to the writings of
those who first introduced the notion. It doesn't seem to be
Trubetzkoy. The notion is used by Martinet, but I don't know if he was
the first. I confess I don't know if it is used by e.g. Jakobson or
Chomsky.
Insofar as I remember, Martinet did not provide a definition but
proceeded with examples. With such an approach, it is possible to pick
out just two realizations of one phoneme (among the infinity of
realizations) and to say: these two sounds are not different phonemes,
they are allophones of one another (= realizations of the same
phoneme).
Of course the examples (languages, phonemes and phones) are so selected
as to be cases of sizeable phonetic differences. This is perhaps a
human-science approach, as opposed to the exact-science approach which
wants proper definitions as you have them in mathematics. So to speak,
only when the phonetic difference is really big does one bother to take
the word "allophone" from its shelf.
Insofar as I remember (again), the word "allophone" does not refer to
all differences that are possible in "parole", but only to such
phonetic variations as are determined by the phonological context (i.e.
ignoring variations due to regional or social background, speech
situation, and chance).

Suggestions:
Theoretical definition: allophone is a synonym of realization (more
exactly: two phones are allophones of one another = they are
realizations of the same phoneme). That might not be a very useful
notion, i.e. the words "realization" and "phone" are enough to do the
job.
Practical definition, used for language teaching: only those allophonic
differences are considered that are useful for a good pronunciation of
the language. That would be an ad-hoc choice, determined by the native
language of the pupils as much as by the target language itself. Thus
it would not be an objective (subject-independent) notion, i.e. not a
scientific one.
These two suggestions are not real definitions but only pointers.

Another brand of linguistic research favours subjects such as "The
meaning of soma in Aristotle" or "The use of virtus by Vergil"
(fictitious examples). Perhaps the search for a definition of
"allophones" would be more like "The use and phraseology of 'allophone'
by Martinet and his disciples" than like "What is an allophone?"...

Remy Viredaz, Geneva
====================================================================



---------------------------------------------------------------------------
LINGUIST List: Vol-10-1778


