<Language> Crowley: Book Review

H. Mark Hubey HubeyH at mail.montclair.edu
Fri Mar 12 03:57:07 UTC 1999


<><><><><><><><><><><><>--This is the Language List--<><><><><><><><><><><><><>

Here is the review:

> -------------------------Crowley--------------------------------
>
> Crowley, T. (1992) An Introduction to Historical Linguistics, Oxford
> University Press, NY.
>
> p.37
> The concept of lenition is not very well defined, and linguists who use
> the term often seem to rely more on intuition or guesswork than on a
> detailed understanding of what lenition means.

It is very well defined now. Those who cannot obtain my book should
note that they will be able to read the definition in the pages of the
Journal of the International Quantitative Linguistics Association,
under a title like "Vector Speech Spaces via Dimensional Analysis" or
something resembling it.

Crowley has a disarming candor in this book which no doubt derives from
his knowledge of the field and his love of it. There is no "black magic"
in this book. He either explains things clearly, or explains that nobody
really knows, or that people guess, etc. (except in cases where he
errs!). :-)


> p. 48
> ..I will define the concept of phonetic similarity. Two sounds can be
> described as being phonetically more similar to each other after a sound
> change has taken place if those two sounds have more phonetic features
> in common than they did before the change took place. If a sound change
> results in an increase in the number of shared features, then we can say
> that assimilation has taken place.

Now here is a perfect example of candor, intelligence, and creativity
all rolled into one. First, he has apparently discovered the concept of
similarity all on his own. He cannot exactly put his finger on it, but
it is clear that he is talking about distance. And this distance is
indeed the simplest such distance metric, the one based on distinctive
features. If all the distinctive features are binary, then this distance
is simply the number of bits that differ between two phonemes. This
distance metric is used in computer science for distances between
bitstrings and is called the Hamming metric, after the first person to
have used it. For more on this you can look at my book [Hubey, 1994],
where you will also find other (more accurate and more precise) distance
metrics. The final result of all these spaces (metric spaces) are the
vector spaces via dimensional analysis mentioned above. I think it is my
greatest contribution in the whole book. Unfortunately, a few linguists
whom I asked thought that this particular chapter was snake oil. That's
how things go in life!
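
To make the idea concrete, here is a minimal sketch (my own, with
made-up feature values that are purely illustrative, not Crowley's or
anyone's actual feature system) of the Hamming distance between
phonemes coded as binary feature vectors:

    def hamming(a, b):
        """Count the positions at which two equal-length vectors differ."""
        if len(a) != len(b):
            raise ValueError("feature vectors must have the same length")
        return sum(x != y for x, y in zip(a, b))

    # Toy coding with three features (voiced, nasal, labial).
    p = (0, 0, 1)   # /p/: voiceless, oral, labial
    b = (1, 0, 1)   # /b/: voiced, oral, labial
    m = (1, 1, 1)   # /m/: voiced, nasal, labial

    print(hamming(p, b))   # 1: /p/ and /b/ differ only in voicing
    print(hamming(p, m))   # 2
    print(hamming(b, m))   # 1

On this scale a smaller number means the two sounds are more similar,
which is exactly the sense in which Crowley's "phonetic similarity" can
be read as a distance.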


> p. 73
> A phonetic description of a language simply describes the physical facts
> of the sounds of the language. A phonemic description, however,
> describes not the physical facts, but the way these sounds are related
> to each other for speakers of that language. It is possible for two
> languages to have the same physical sounds, yet to have very different
> phonemic systems. The phonemic description therefore tells us what are
> the basic sound units for a particular language that enables its
> speakers to differentiate meanings.

This is reasonably typical, and yet it also hides a great truth: there
is great confusion in the literature about what these things mean. I
propose that the words "acoustic", "perceptual", and "articulatory" be
used instead of "phonetic". It would go a long way toward clearing up
the confusion in the field. Then we could rename phonetic and phonemic
descriptions as "absolute" and "relative", or as "low-precision/accuracy"
vs. "high-precision/accuracy". This accords reasonably well with the
facts in linguistics.

>
> p.88
> ...you have to look for forms in the various related languages which
> appear to be derived from a common original form. Two such forms are
> cognate with each other, and both are reflexes of the same form in the
> protolanguage [PL].

Here is truth and candor: "appear to be derived".


> p.89
> In deciding whether two forms are cognate or not, you need to consider
> how similar they are both in form and meaning. If they are similar
> enough that it could be assumed that they are derived from a single
> original form with a single original meaning, then we say that they are
> cognate.

More truth and candor: "similar in both form and meaning". This is
nothing but "distance" staring us in the face, again, and again,
and...



> p.93
> Having set out all of the sound correspondences [SC or RegSC] that you
> can find in the data, you can now move on to the third step, which is to
> work out what original sound in the protolanguage might have produced
> that particular range of sounds in the various
>  daughter languages. Your basic assumption should be that each separate
> set of sound correspondences goes back to a distinct original phoneme.
> In reconstructing the shapes of these original phonemes, you should
> always be guided by a number of general principles:
>

This is the only place where I have actually seen the principles listed
in mostly clear terminology.


> (i) Any reconstruction should involve sound changes that are plausible.
> (You should be guided by the kinds of things that you learned in Chapter
> 2 in this respect.)

This again must be "empirical". What else can "plausible" mean? It
mostly means that if a change has already been deemed plausible (because
someone has found it to exist, or has convinced others that it existed),
then you can use it in the same way. Of course, there are other ways.

>
> (ii) Any reconstruction should involve as few changes as possible
> between the protolanguage and the daughter languages.


This is the general principle called Ockham's Razor (sometimes spelled
Occam's Razor) and often called the "Parsimony Principle". It is a
principle that people use for no other reason than the fact that it
exists.


> It is perhaps easiest to reconstruct back from those sound
> correspondences in which the reflexes of the original phoneme (or
> protophoneme) are identical in all daughter languages. By principle (ii)
> you should normally assume that such correspondences go back to the same
> protophoneme as you find in the daughter languages, and that there have
> been no sound changes of any kind.


If sound changes resemble the [in]famous random walk (Brownian motion),
and we divide the changes up into intervals, then the greatest number
will fall into the "no-change" interval, because a zero-mean random walk
has its maximum (its most probable value) at zero.

> p.95
> (iii) Reconstructions should fill gaps in phonological systems rather
> than create unbalanced systems.
>
> Although there will be exceptions among the world's languages, there is
> a strong tendency for languages to have 'balanced' phonological systems.
> By this I mean that there is a set of sounds distinguished by a
> particular feature, this feature is also likely to be used to
> distinguish a different series of sounds in the language. For example,
> if a language has two back rounded vowels (i.e. /u/ and /o/), we would
> expect it also to have two front unrounded vowels (i.e. /i/ and /e/).

This is another general principle that is in use. It is called
"symmetry". There is a whole book written on it [van Fraassen], and
arguments like this have already been used in physics. Maxwell's
equations (of electromagnetics) came out of symmetry considerations
during the last century: he predicted in the 1860s what was confirmed
experimentally in the 1880s. For whatever reason, nature seems to like
symmetry. But it is not exactly clear what he means by "balance". It
seems (from my reading of books on linguistics) that there is a kind of
mental table, something like the table of chemical elements, and it is
this table that linguists somehow mentally try to fill in. But there are
many ways in which symmetry arguments pop up, and many of these can be
seen in, you guessed it, my book [Hubey, 1994]. (It was rejected by some
anonymous reviewer for a publishing company, and I am really upset at
what he wrote. Too bad I can't find out who he is and then check what
mathematical work he has actually accomplished in linguistics and see if
it amounts to anything worth writing about. My guess is that he does not
comprehend it, or that he wants to publish some of these ideas himself
while keeping mine unknown. Nobody will ever be able to figure out who
did what until decades later, and by that time his name will be attached
to my work. It does sound like I am paranoid :-) but I don't believe
that anyone can be that incompetent and still pretend to be doing
mathematical work in linguistics. I can teach this to college students.)

> p. 98
>
> (iv) A phoneme should not be reconstructed in a protolanguage unless it
> is shown to be absolutely necessary from the evidence of the daughter
> languages.
>
> p.109
>
> ..But what do our reconstructions actually represent? Do they represent
> a real language as it was actually spoken at some earlier time, or do
> our reconstructions only give an approximation of some earlier language?
> ....according to this point of view, a 'protolanguage' as it is
> reconstructed is not a 'language' in the same sense as any of its
> descendant languages, or as the 'real' protolanguage itself. It is
> merely an abstract statement of correspondences.
> ...Other linguists, while not going as far as this, have stated that,
> while languages that are related through common descent are derived from
> a single ancestor language, we should not necessarily assume that this
> language really existed as such. The assumption of the comparative
> method is that we should arrive at an entirely uniform protolanguage and
> this is likely to give us a distorted or false view of the
> protolanguage. In some cases, the comparative method may even allow us
> to reconstruct a protolanguage that never existed historically.

This is yet another one of those great truthful discussions, and, even
better, one that touches upon some of the deeper issues of what
diachronic linguistics is about.


> p.110
> ..One frequently employed device in these sorts of situations is to
> distinguish the protophoneme by which two phonetically similar
> correspondence sets are derived by using the lower and upper case forms
> of the same symbol....Another option in these kinds of situations is to
> use subscript or superscript numerals e.g. /*l1/ and /*l2/.

Great usage. These devices, like using Greek letters, script letters,
German Fraktur letters, bold, italics, letters with bars, arrows,
underwiggles, subscripts, superscripts, etc., have all been employed in
mathematics and physics for similar reasons. It would probably be best
to use script upper-case letters, actually (something like a Dingbat
script).



> p. 119                  [Internal Reconstruction chap. 6]
>
> There is a second method of reconstruction that is known as internal
> reconstruction which allows you to make guesses about the history of a
> language as well.

It seems that this is comparative reconstruction applied within a
single language, by looking for clusters of words derived from the same
root.

>
> p.123
> ...you would normally consider using internal method only in the
> following circumstances:
>
> (a) Sometimes, the language you are investigating might be a linguistic
> isolate i.e. it may not be related to any other language (and is
> therefore in a family of its own). In such a case, there is no
> possibility of applying the comparative method as there is nothing to
> compare this language with. Internal reconstruction is therefore the
> only possibility that is available.
>
> (b) A very similar situation to this would be the one in which the
> language you are studying is so distantly related to its sister
> languages that the comparative method is unable to reveal very much about its
> history. This would be because there are so few cognate words between
> the language you are working on and its sister languages that it would
> be difficult to set out the systematic sound correspondences.
>
> (c) You may want to know something about changes that have taken
> place between a reconstructed protolanguage [RPL] and its descendant
> languages.
>
> (d) Finally, you may want to try to reconstruct further back still from
> a protolanguage that you have arrived at by means of the comparative
> method. The earliest language from which a number of languages is
> derived is, of course, itself a linguistic isolate in the sense that we
> are unable to show that any other languages are descended from it. There
> is no reason  why you cannot apply the internal method of reconstruction
> to a protolanguage, just as you could with any linguistic isolate, if
> you wanted to go back still further in time.
>
> ...this method can only be used when a sound change has resulted in
> some kind of morphological alternation in a language. Morphological
> alternations [MA] that arise as a result of sound changes always involve
> conditioned sound changes [CSC]. If an unconditioned sound change [USCh]
> has taken place in a language, there will be no synchronic residue of
> the original situation in the form of morphological alternations, so the
> internal method will be completely unable to produce any results in
> these kinds of situations.

All excellent explanations. Truthful. To the point. No black magic
here.

It would have been so much better if he could have introduced some
ideas on how to evaluate this data objectively and rigorously.


> [more on intermediate changes leading to false reconstructions..]
>
> p. 129          [Grammatical, Semantic, and Lexical Change, chap. 7]
>
> The number of individual phonemes of a language ranges from around a
> dozen or so in some languages, to 140 or so at the very most in other
> languages.
>
> p.132
> There is a tendency for languages to change typologically according to a
> kind of cycle. Isolating languages tend to move towards agglutinating
> structures. Agglutinating languages tend to move towards the
> inflectional type, and finally, inflecting languages tend to become less
> inflectional over time and more isolating. ..[diagram]..
> Isolating languages become agglutinating in structure by a process of
> phonological reduction. By this I mean that free form grammatical
> markers may become phonologically reduced to unstressed bound form
> markers (i.e. suffixes or prefixes).
> p.134
> ...languages which are agglutinating type tend to change towards
> inflectional type. By the process of morphological fusion, two
> originally clearly divisible morphemes in a word may change in such a
> way that the boundary is no longer clearly recognizable.
> [defn of portmanteu morphemes].
> p.135
> Finally, languages of the inflectional type tend to the isolating type;
> this process is called morphological reduction. It is very common for
> inflectional morphemes to become more and more reduced, until sometimes
> they disappear altogether. The forms that are left, after the complete
> disappearance of inflectional morphemes, consist of single phonemes.

This is worth discussing in detail, probably on some other list, because
it is a very interesting and complex problem. But the fact that he
writes about it, and gives such clear scenarios for believing in the
possibility of such occurrences, speaks loudly for his understanding
of the issues of diachronics.


> p.136
> There is, in fact, a fourth type of language: those having polysynthetic
> morphology. Such languages represent extreme forms of agglutinating
> languages in which a single word corresponds to what in other kinds of
> languages are expressed as whole clauses. Thus a single word may include
> nominal subjects and objects, and possibly also adverbial information,
> and even non-core nominal arguments in the clause such as direct objects
> and spatial noun phrases.
>
> p. 137
> Polysynthetic languages can develop out of more analytic (i.e.
> nonpolysynthetic) languages by a process of argument incorporation.

Excellent again. By doing this he has pointed to a way of creating some
kind of a scale between, say, zero and one, which we write as [0,1],
meaning the interval between 0 and 1. This means that we can now treat
typology as a variable that takes values in [0,1]. Since both
probability theory and fuzzy logic take values in [0,1], we now have
the means to create mathematical models and test them.
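
As a toy illustration of what such a scale could look like (entirely my
own sketch; the cutoff of five morphemes per word and the sample
segmentations below are made up for the example, not taken from Crowley
or from any real data), one could rescale an average morphemes-per-word
count into [0,1]:

    def synthesis_degree(words, max_morphemes_per_word=5.0):
        """Toy degree of synthesis in [0,1]: 0.0 ~ isolating (one
        morpheme per word), 1.0 ~ highly polysynthetic."""
        avg = sum(len(morphemes) for morphemes in words) / len(words)
        degree = (avg - 1.0) / (max_morphemes_per_word - 1.0)
        return max(0.0, min(1.0, degree))

    # Hypothetical morpheme segmentations, for illustration only.
    isolating_sample     = [["dog"], ["bite"], ["man"]]
    agglutinating_sample = [["dog", "PL", "ACC"], ["bite", "PAST", "3SG"]]

    print(synthesis_degree(isolating_sample))      # 0.0
    print(synthesis_degree(agglutinating_sample))  # 0.5

Once typology is a number in [0,1], the cycle Crowley describes becomes
a trajectory of that number through time, and that is something a
mathematical model can be fitted to and tested against.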


> p. 144
> Words in languages can be grouped into two basic categories: lexical
> words, and grammatical words. Lexical words are those which have
> definable meanings of their own when they appear independently of any
> linguistic context: elephant, trumpet, large. Grammatical words, on the
> other hand, only have meanings when they occur in the company of other
> words, and they relate those other words together to form a grammatical
> sentence. Such words in English include the, these, on, my. Grammatical
> words constitute the mortar in a wall, while lexical words are more like
> bricks.

Great analogy.


> p.145
> The change from lexical word to grammatical word is only the first step
> in the process of grammaticalization, with the next step being
> morphologisation i.e. the development of a bound form out of what was
> originally a free form.
>
> In fact, morphologisation can involve degrees of bonding between bound
> forms and other forms as it is possible to distinguish between clitics
> and affixes. A clitic is a bound form which is analysed as being
> attached to a whole phrase than to just a single word. An affix,
> however, is attached as either a prefix or a suffix directly to a word.

I prefer words like postfix, prefix, infix.


> p.168                   [Subgrouping  chapter 8]
>
> Similarities between languages can be explained as being due either to
> shared retention from a protolanguage, or to shared innovations since
> the time of the protolanguage. If two languages are similar because they
> share some feature that has been retained from a protolanguage, you
> cannot use this similarity as evidence that they have gone through a
> period of common
> descent. The retention of a particular feature in this way is not
> significant, because you should expect a large number of features to be
> retained this way.
>
> However, if two languages are similar because they have both undergone
> the same innovation or change, then you can say that this is evidence
> that they have had a period of common descent and that they therefore do
> belong to the same subgroup. You can say that a shared innovation in two
> languages is evidence that those two languages belong in the same
> subgroup, because exactly the same change is unlikely to take place
> independently in two separate languages. By suggesting that the
> languages have undergone a period of common descent, you are saying that
> the particular change took place only once between the higher level
> protolanguage and the intermediate protolanguage which is between this
> and the various modern languages that belong in the subgroup. [problem
> of multiple scales!]
>
> p.168
> While it is shared innovations that we use as evidence for establishing
> subgroups, certain kinds of innovations are likely to be stronger
> evidence for subgrouping than other kinds. ...subgrouping rests on the
> assumption that shared similarities are unlikely to be due to chance.
> However some kinds of similarities between languages are in fact due to
> chance, i.e. the same changes do sometimes take place quite
> independently in different languages. This kind of situation is often
> referred to as parallel development or drift.

The concept of distance automatically takes care of this problem.

For example, suppose we are looking at a protolanguage (PL) that has
five features. Let us represent this as PL=[1,1,1,1,1]. Now suppose
three of the languages derived from this PL have A=[0,0,1,1,1],
B=[0,1,1,1,0] and C=[1,1,1,0,0]. Now the distances between these are:

d(A,B)=2, d(A,C)=4, and d(B,C)=2.

The maximum distance possible is 5, so we can obtain the similarities:

s(A,B)=3, s(A,C)=1, s(B,C)=3

As can be seen from their features, B and C have jointly innovated the
last feature, whereas A and B have jointly innovated the first feature.
A and C have not jointly innovated anything, and their similarity is 1,
whereas the others are higher. To make it clearer we can simply compute
their distances from the PL:

d(A,PL)=2, d(B,PL)=2, and d(C,PL)=2, and therefore on the similarity
scale we have

s(A,PL)=s(B,PL)=s(C,PL)=3

So they are all equally removed from PL, whereas their relationships to
each other are seen in the s(..) measures.
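
A minimal sketch in code (mine, not Crowley's; it simply recomputes the
numbers above from the feature vectors given in the text):

    PL = (1, 1, 1, 1, 1)
    A  = (0, 0, 1, 1, 1)
    B  = (0, 1, 1, 1, 0)
    C  = (1, 1, 1, 0, 0)

    def d(x, y):
        """Hamming distance: number of features in which x and y differ."""
        return sum(a != b for a, b in zip(x, y))

    def s(x, y, n=5):
        """Similarity: maximum possible distance minus the distance."""
        return n - d(x, y)

    pairs = {"A,B": (A, B), "A,C": (A, C), "B,C": (B, C),
             "A,PL": (A, PL), "B,PL": (B, PL), "C,PL": (C, PL)}
    for name, (x, y) in pairs.items():
        print(name, "d =", d(x, y), " s =", s(x, y))
    # A,B  d = 2  s = 3
    # A,C  d = 4  s = 1
    # B,C  d = 2  s = 3
    # A,PL d = 2  s = 3   (and likewise for B and C)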


> ...
> In classifying languages into subgroups, you therefore need to avoid the
> possibility that innovations in two languages might be due to drift or
> parallel development. You can do this by looking for the following in
> linguistic changes:
>
> (i) Changes that are particularly unusual.
> (ii) Sets of several phonological changes, especially unusual changes
> which would not ordinarily be expected to have taken place together.
> (iii) Phonological changes which correspond to unconnected grammatical
> or semantic changes.
> ...
> If two languages share common sporadic or irregular phonological change,
> this provides even better evidence for subgrouping those two languages
> together as the same irregular change is unlikely to take place twice
> independently.

Unfortunately, "unusual" here is not defined clearly. Does it mean
"not occurring empirically" amongst the world's languages?

>
> p. 171                  [Lexicostatistics and Glottochronology]
>
> Lexicostatistics is a technique that allows us to determine the degree
> of relationship between two languages, simply by comparing the
> vocabularies of the languages and determining the degree of similarity
> between them. This method operates under two basic assumptions. The
> first of these is that there are some parts of the vocabulary of a
> language that are much less subject to lexical change than other parts,
> i.e. there are certain parts of the lexicon in which words are less
> likely to be completely replaced by non-cognate forms. The area of the
> lexicon that is assumed to be more resistant to lexical change is
> referred to as core vocabulary (or a basic vocabulary).

It seems here that these assumptions belong only to those who practice
lexicostatistics. Is that really true, or is it only that they clearly
state their assumptions while some of the others are just muddling
along? Or is it that there is now a plethora of beliefs and assumptions?


> There is a second aspect to this first general assumption underlying the
> lexicostatistical method, and that is the fact that this core of
> relatively change-resistant vocabulary is the same for all languages.
> The universal core vocabulary includes items such as pronouns, numerals,
> body parts, geographical features, basic actions, and basic states.
> Items like these are unlikely to be replaced by words copied from other
> languages, because all people, whatever their cultural differences, have
> eyes, mouths, and legs, and know about the sky and clouds, the sun, and
> the moon, stones, and trees and so on. Other concepts however may be
> culture-specific.

This can easily be fixed up. Just create another parameter and use it
to change the first parameter.

> ...
> The second assumption that underlies the lexicostatistical method is
> that the actual rate of lexical replacement in the core vocabulary is
> more or less stable, and is therefore about the same for all languages
> over time. In peripheral vocabulary of course, the rate of lexical
> replacement is not stable at all, and may be relatively fast or slow
> depending on the nature of cultural contact between speakers of
> different languages. This second assumption has been tested in 13
> languages for which there are written records going back over long
> periods of time. It has been found that there has been an average
> vocabulary retention of 80.5 percent every 1,000 years.

This number can easily be changed for other languages based on
more intelligent guesswork and model building. Indeed it should be
done.

>
> p.173                   [basic or core vocabulary]
> The most popular list of this length is known as the Swadesh list, which
> is named after the linguist Morris Swadesh who drew it up in the early
> 1960s.
>
> p.181
> Once the percentage of cognate forms has been worked out, we can use the
> following mathematical formula to work out the time depth, or the period
> of separation of two languages;
>
>                 t = log C/(2*logR)
>
> In the formula above, t stands for the number of thousands of years that
> two languages have been separated, C stands for the percentage of
> cognates as worked out by comparing basic vocabularies, and R stands for
> the constant change factor mentioned earlier (the value in this formula
> is set at 0.85).

Too bad he does not show where it comes from.
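
For what it is worth, the standard derivation is short (this is the
usual Swadesh-style reasoning, not something quoted from Crowley): if
each language independently retains a proportion R of the core list per
thousand years, then after t thousand years each retains R^t, and the
expected proportion of shared cognates is C = R^t * R^t = R^(2t).
Solving for t gives t = log C / (2 log R). A minimal sketch:

    import math

    def separation_time(c, r=0.85):
        """Glottochronological time depth in millennia:
        t = ln(C) / (2 * ln(R))."""
        return math.log(c) / (2 * math.log(r))

    # e.g. 70% shared cognates in core vocabulary, with R = 0.85:
    print(round(separation_time(0.70), 2))   # about 1.10 thousand years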

>
> p.183
> Firstly, there is the problem of deciding which words should be regarded
> as core vocabulary and which should not. Obviously, it may be possible
> for different sets of vocabulary to produce differing results.
>
> Another difficulty involves the actual counting of forms that are
> cognate against those that are not cognate in basic vocabulary lists
> from two different languages.
> ...
> Lexicostatisticians in fact rely heavily on what is often
> euphemistically called the inspection method of determining whether two
> forms are cognate or not in a pair of languages. What this amounts to is
> that you are more or less free to apply intelligent guesswork as to
> whether you think two forms are cognate or not.

More candor. Yes, guesswork. Or should we call it a belief? Or should we
call it an axiom or postulate? Why not?

> ...
> Of course, two different linguists can take the same lists from two
> different languages, and since there is no objective way of determining
> what should be ticked 'yes' and what should be ticked 'no', it is
> possible that both will come up with significantly different cognate
> figures at the end of the exercise.
> [p. 186 example of languages of Milne Bay area of Papua New Guinea]

 [minimal spanning tree can be drawn from these figures]

Yes, I sketched one into my copy of the book. We can easily get a tree
from this data. I can sketch it out if anyone is interested. It could
be a good MS thesis.
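
A minimal sketch of the mechanics (mine; the percentages below are
made-up placeholders, NOT the Milne Bay figures from p. 186, which are
not reproduced here): turn cognate percentages into dissimilarities and
grow a minimum spanning tree over them with Prim's algorithm.

    # Hypothetical cognate percentages between four languages.
    pct = {
        ("L1", "L2"): 78, ("L1", "L3"): 45, ("L1", "L4"): 40,
        ("L2", "L3"): 48, ("L2", "L4"): 42, ("L3", "L4"): 81,
    }
    langs = {"L1", "L2", "L3", "L4"}
    dist = {frozenset(k): 100 - v for k, v in pct.items()}   # dissimilarity

    # Prim's algorithm: repeatedly add the cheapest edge that connects a
    # new language to the growing tree.
    in_tree = {"L1"}
    edges = []
    while in_tree != langs:
        a, b = min(((x, y) for x in in_tree for y in langs - in_tree),
                   key=lambda e: dist[frozenset(e)])
        edges.append(((a, b), dist[frozenset((a, b))]))
        in_tree.add(b)

    print(edges)
    # [(('L1', 'L2'), 22), (('L2', 'L3'), 52), (('L3', 'L4'), 19)]

With these toy figures the tree groups L1 with L2 and L3 with L4, which
is exactly the kind of subgrouping sketch one could pencil into the
margin of p. 186.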


> p. 201          [causes of language change ]
>
> One famous linguist Otto Jespersen made a great deal of the importance
>.....
> Despite the obvious appeal of this argument as a major factor in
> explaining language change, there are also several problems associated
> with it. The first is that it is extremely difficult, perhaps even
> impossible, to define explicitly what we mean by 'simplicity' in
> language. Simplicity is clearly a relative term.

Yes, and complexity is also difficult to define, but there is a whole
field and science of complexity now, and simplicity can be defined
from complexity. In fact, a measure of complexity appropriate for
linguistics is desperately needed for many reasons.


> p. 212          [observing language change, chapter 10]
>
> The concept of linguistic indeterminacy also relates to the idea of the
> linguistic system as used by Saussure. He argued that in describing a
> language we are describing its units (i.e. phonemes, morphemes, words,
> phrases, clauses, and so on) and describing the ways in which these
> units interrelate (i.e. the grammatical rules for putting them together
> for making up larger units).
> In talking about describing the system of a particular language,
> Saussure is implying that for every language, there is one -- and only
> one -- linguistic system.

This is also another excellent idea. Unfortunately, many linguists seem
only to repeat the words and not do anything about it. In fact, there
is a way to do it, and some of it will be in my next paper. I have
already written about this on many lists, but unless someone did
something about it and published it somewhere I did not see, nothing is
being done about it. The idea takes its clearest form in thermodynamics,
where we have intensive and extensive parameters. I wrote about this in
my newest book for social scientists and especially linguists. I will
be looking for a publisher soon. I hope I don't get the same kind of
review. I will have to have the IEEE or some other nonlinguistic
entity publish it if this continues. Maybe I will have to do it myself.
At least it will be there for others to read 100 years from now.


> p. 215
> One of the most influential linguists of the past few decades, Noam
> Chomsky, expresses this view when he said that a grammar should describe
> an 'ideal speaker-hearer relationship', and it should ignore factors
> from outside the language itself (such as formality of a social
> situation). But language is not an ideal system at all.

Yes, more "idealization" like ideal gases, frictionless pulleys,
massless springs etc of physics. A necessary evil.


>
> p. 227                  [problems with traditional assumptions, chap. 11]
>
> Jones emphasized that it was similarities in the structure of the
> Indo-European languages, rather than similarities between words,
> that were important in determining language relationships. This
> observation led to a new intellectual climate in the study of language
> relationships, as scholars started looking instead for grammatical
> similarities between languages to determine whether or not they should
> be considered to be related. Lexical similarities, it was argued, were
> poor evidence of genetic relationship, as similarities between
> practically any word in any two languages can be established with enough
> effort.

Here again we run into the same problem. What is a structure? Are there
no mathematical models of these "structures"? Yes, there are. Plenty of
them, about 400 pages' worth [Hubey, 1994].


>
> p. 232
> In reconstructing the history of languages, you therefore need to make
> the important distinction between a systematic (or a regular)
> correspondence and an isolated (or sporadic) correspondence. This is a
> distinction that I did not make in Chapter 5 when I was talking about
> the comparative method, but it is very important.

A good heuristic way to do probability theory.


>
> p. 256                  [Language Contact, chapter 12]
>
> The influence of one of the linguistic systems of an individual on the
> other linguistic system of that individual is referred to in general as
> interference.
>
> Interference can occur in the phonological system of a language, in its
> semantics, or in its grammar. Phonological interference simply means the
> carrying over of the phonological features of one language into the
> other as an accent of some kind.
> ...
> p. 257
> Semantic interference can also be referred to as semantic copying, as
> loan translation, or as calquing. A calque (or a semantic copy or a loan
> translation) is when we do not copy a lexical item as such from one
> language into another, but when just the meanings are transferred from
> one language to the other, while at the same time we use the
> corresponding forms of the original language.

What if people spent 300 years speaking two languages until the two
languages "fused"? Is that possible? I know people (ignorant ones) who
speak 2-3 languages, and they speak them all the same way. I can imagine
how a whole village of such speakers 1,000 years ago could have created
a new language from 2-3 other languages without even trying.


> p. 260
> There is a significant body of literature on the subject of linguistic
> diffusion and convergence, which is based on the assumption that
> languages can and do influence one another. The term diffusion is used
> to refer to the spread of a particular linguistic feature from one
> language to another (or, indeed to several other languages).
>
> p.262
> The diffusion of grammatical features in this way has caused some
> linguists to question further the validity and basic assumptions of the
> whole comparative method. Some languages appear to have undergone so
> much diffusion in the lexicon and the grammar that it can be difficult
> to decide which protolanguage they are derived from. According to the
> comparative method as I have described it in this volume, it is possible
> for a language to be derived from only a single protolanguage, yet some
> linguists have found it necessary to speak of mixed languages, which
> seem to derive from two different protolanguages at once.

This is probably another important development, and it is good that
Crowley writes about it. At least now the poor student does not go away
with the feeling that everything is carved in stone.


> p.270
> Many linguists have been struck by the fact that pidgin and creole
> languages often show stronger parallels in their structure with their
> substrate languages than with their superstrate languages.

Extremely important for language mixing. Just as we can create degrees
of typology, we can also think of language contact in degrees. From one
extreme, in which only the superstrate wins, to the other extreme, in
which the substrate wins out, we have a whole continuum of types/degrees
of change. So all languages can then be considered to be "mixed
languages", but to different degrees. This problem is even better than
the present problem.


> p.312           [cultural reconstruction, chapter 13]
>
> While many attempts at paleolinguistic comparisons fall far short of
> scientific respectability, the writings of Johanna Nichols since the
> mid-1980s have attracted considerable interest among some linguists, as
> well as archaeologists and others interested in establishing
> relationships at much greater-time depths than is possible using the
> comparative method.
>
> Nichols' approach is more akin to population science in that she does
> not aim to study the evolution of individual languages, or even closely
> related groups of languages. Rather she aims to study the history of
> 'populations' of languages. By this, she means that she considers large
> groupings of languages together, dealing not with particular features
> of individual languages, but broader general features of language
> groupings. Thus, she considers for example, the languages of Australia
> or Africa as a whole. She pays attention not to whether structural
> features are present or absent, but to what the statistical
> frequencies and distributions of features are within these larger
> populations of languages.
>
> Such linguistic markers are considered to be akin to biological markers
> in that they can be used to identify affinities between populations at
> considerable time-depths. She argues that if, in the languages of a
> continent (or some other large geographical area) a feature shows up
> with a high frequency, this distribution is not something that is due to
> recent diffusion. When several markers of this type are shared, this is
> taken as being indicative of historical affinity. Of course, such
> features must be known to be typologically unrelated.
> ...
> The actual application and interpretation of Nichols' method is complex
> and it is unlikely to become the standard model by which individual
> historical linguists will attempt to study linguistic relationships.

Nichols is doing with mathematics what other linguists do with words.
Here Crowley fails, but only to a degree, to understand what Nichols is
doing. He does, however, have at least a good understanding of the
importance of what she is doing, which is much more than many other
linguists are apparently capable of.

Here endeth the review!

--
Best Regards,
Mark
-==-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
hubeyh at montclair.edu =-=-=-= http://www.csam.montclair.edu/~hubey
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
<><><><><><><><><><><><><><>----Language----<><><><><><><><><><><><><><><>
Copyrights and "Fair Use":     http://www.templetions.com/brad//copyright.html
"This means that if you are doing things like comment on a copyrighted work, making fun of it,
teaching about it or researching it, you can make some limited use of the work without permission.
For example you can quote excerpts to show how poor the writing quality is. You can teach a
course about T.S. Eliot and quote lines from his poems to the class to do so. Some people think
fair use is a wholesale licence to copy if you don't charge or if you are in  education, and it isn't.
If you want to republish other stuff without permission and think you have  a fair use defence, you
should read the more detailed discussions of the subject you will find through the links above."


