[Lingtyp] Greenbergian word order universals: confirmed after all

Juergen Bohnemeyer jb77 at buffalo.edu
Sun Nov 5 23:22:02 UTC 2023


Dear all – Two points:

First, this thread has somewhat alleviated my concerns about the review processes used by the generic science journals – mostly through offline one-on-one communications I’ve received. Thanks!

Second, I’d like to gently push back on, or maybe just elaborate on, the contrast Simon Greenhill draws between phylogenetic inference and the “overly simplistic correlation methods”, as he calls them. In the process, I’d like to try to clarify what are, to my mind, the strengths and weaknesses of each (type of) method. And I’ll pose a couple of challenge questions to the detractors of phylogenetic/coevolutionary testing and one to its proponents, and I hope none of these questions will go unanswered 😊

Before I do that, let me flag my interest in this: I’m currently in Leipzig spending a sabbatical with Russell Gray’s Department of Linguistic and Cultural Evolution (≈ the Grambank team), with the principal objective of figuring out how best to capitalize on Grambank for my project. The project tracks the typological distribution of functional expressions and attempts to account for that distribution in terms of an evolutionary process that makes grammaticalization sensitive to functional pressures. Since coming here, I’ve been struggling with the question of whether I should adopt stratified sampling, a phylogenetic method (I’m not going to use the term ‘co-evolution’ here, because at this stage of the project I’m just looking at the emergence of individual functional expressions), or both. So I’m airing my thoughts very much in hopes of eliciting feedback.

So. Let’s start with a very basic reminder: We cannot observe causality, we can only infer it, as David Hume taught us (i.e., humanity) nearly three centuries ago. The difference between the methods at issue is one between synchronic inference and diachronic inference.

Now, before I go any further, a big disclaimer: I think I understand, at the most abstract level, the ideas underlying phylogenetic inference. But I don’t understand the statistical “sausage making” involved, meaning I have many basic questions about the algorithms used for phylogenetic inference.

With that said: Imagine an ideal world in which we had perfect and reliable knowledge about the complete phylogeny of every extant human language and every language that ever existed. It seems indisputable to me that in this world, we would test hypotheses about causal relations between two linguistic features not merely by looking at their synchronic distributions, but also by looking at whether the two features tend to co-evolve across phylogenies or whether they appear to have merely accidentally “travelled together” through time in a few families.
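
To make that contrast a bit more tangible, here is a toy simulation I put together (entirely made up, and not meant to represent the actual algorithms used by Jäger & Wahle or Verkerk et al.): two binary features evolve down star-shaped families, once with a genuine coupling between them and once independently, but with two large families that happen to inherit the “agreeing” combination from their proto-language. A naive synchronic count of feature agreement comes out in roughly the same ballpark in both scenarios; only a statistic that looks at where the changes went – a crude stand-in for what proper phylogenetic tests do far more rigorously – separates them.

import random

random.seed(1)

def evolve_family(n_daughters, proto_a, proto_b, coupled, p_change=0.2):
    """Daughters of one proto-language; each feature may flip at most once."""
    daughters = []
    for _ in range(n_daughters):
        a, b = proto_a, proto_b
        if random.random() < p_change:
            a = 1 - a
        if random.random() < p_change:
            # Coupled: a change in B strongly prefers the value that matches A;
            # uncoupled: a change in B ignores A entirely.
            b = a if (coupled and random.random() < 0.9) else 1 - b
        daughters.append((a, b))
    return (proto_a, proto_b), daughters

def raw_agreement(families):
    """Naive synchronic statistic: share of all languages in which A == B."""
    langs = [lang for _, daughters in families for lang in daughters]
    return sum(a == b for a, b in langs) / len(langs)

def changes_toward_agreement(families):
    """Crude 'dynamic' statistic: among daughters whose B differs from the
    proto-language's B, the share that ended up agreeing with their own A."""
    changed = [(a, b) for (_, pb), ds in families for a, b in ds if b != pb]
    return sum(a == b for a, b in changed) / len(changed)

# Scenario 1: genuine coupling; ten mid-sized families with random proto values.
coevolving = [evolve_family(20, random.randint(0, 1), random.randint(0, 1), True)
              for _ in range(10)]

# Scenario 2: no coupling, but two large families (and one mid-sized one)
# happen to have inherited the agreeing combination.
protos = [(60, 1, 1), (60, 1, 1), (40, 0, 0), (20, 0, 1), (20, 1, 0)]
accidental = [evolve_family(n, pa, pb, False) for n, pa, pb in protos]

for name, fams in [("coupled   ", coevolving), ("accidental", accidental)]:
    print(name, "raw agreement:", round(raw_agreement(fams), 2),
          "| changes toward agreement:", round(changes_toward_agreement(fams), 2))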

So this would be my first question to the detractors of phylogenetic/dynamic (Maslova’s term)/co-evolutionary tests for typological generalizations: do you disagree that if we had that perfect knowledge, we would as a matter of course take into account the phylogenetic perspective?

Now, in the real world, our knowledge of phylogenies is itself largely inferred from synchronic distributions. Crucially, though, these inferences are based on data that may overlap with the typological patterns of interest, but are largely independent of them. To make this more concrete:


  *   Jäger & Wahle (2021) use the world tree inferred from ASJP<https://asjp.clld.org/> cognate data presented in Jäger (2018). Notably, Jäger’s method takes into account systematic sound correspondences, so to my uninitiated eye, it looks like an unassailable computational implementation of the historic-comparative method. Of course, the data the analysis is based on is another matter.



  *   Verkerk et al. (ms.), which Martin cites in the post that started the thread, rely on Glottolog, which Jäger & Wahle also used to validate their inferred world tree. In my understanding, Glottolog is based on a compilation of the best available evidence for phylogenetic relations, principally curated by its lead author, Harald Hammarström.

Two more points to keep in mind here:


  *   Stratified sampling of course also takes into account phylogenetic and areal information. So the two types of approaches really have more in common than Simon gives them credit for in his reply, I think. However, stratified sampling uses phylogenetic and areal information, to a first approximation, only “negatively”, as it were: by systematically removing from the analysis observations suspected to be “tainted” by areal or genealogical dependencies. (A minimal sketch of what I mean follows below these two bullets.)



  *   The distance of our real world from that ideal world in which we have perfect knowledge of phylogenetic and areal relations is, in my (again, uninitiated, in the sense that I’m not a historical linguist) mind, pretty darn great. However, the phylogenetic statistics algorithms use “forests” of alternative phylogenetic trees, weighted for the confidence the field (or the analysts) place in them and factored into the tests as Bayesian priors. In this sense, the use of phylogenetic information is more sophisticated in the dynamic tests than in anything based on stratified sampling that I have seen.
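
To make the “negative” use of genealogical and areal information in the first bullet above concrete, here is a minimal sketch (with purely illustrative data and feature values, and not any particular published sampling algorithm): keep at most one language per family and macroarea, then cross-tabulate two features on the reduced sample.

import random
from collections import defaultdict

random.seed(42)

# Purely illustrative records:
# (language, family, macroarea, verb-object order, adposition order)
languages = [
    ("English",  "Indo-European",  "Eurasia",       "VO", "Prep"),
    ("Hindi",    "Indo-European",  "Eurasia",       "OV", "Post"),
    ("Finnish",  "Uralic",         "Eurasia",       "VO", "Post"),
    ("Turkish",  "Turkic",         "Eurasia",       "OV", "Post"),
    ("Japanese", "Japonic",        "Eurasia",       "OV", "Post"),
    ("Yoruba",   "Atlantic-Congo", "Africa",        "VO", "Prep"),
    ("Swahili",  "Atlantic-Congo", "Africa",        "VO", "Prep"),
    ("Quechua",  "Quechuan",       "South America", "OV", "Post"),
    # ... in practice this would be a full Grambank/WALS-sized sample
]

# Step 1: stratify -- group by (macroarea, family) and keep one random member,
# i.e., remove observations suspected of genealogical or areal dependence.
strata = defaultdict(list)
for lang in languages:
    strata[(lang[2], lang[1])].append(lang)
sample = [random.choice(members) for members in strata.values()]

# Step 2: cross-tabulate the two features on the reduced sample; the resulting
# 2x2 table could then be handed to any off-the-shelf exact or chi-square test.
counts = defaultdict(int)
for _, _, _, vo, adp in sample:
    counts[(vo, adp)] += 1
print("retained", len(sample), "of", len(languages), "languages")
for cell, n in sorted(counts.items()):
    print(cell, n)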


So with all this assembled, here’s my semi-informed take on the pros and cons of each approach:


  *   Stratified sampling (I’m going to start with the method that has been practiced the longest)



     *   Pro
        *   Technologically and conceptually simple, easy to implement by anybody with the most basic training in inferential statistics, and transparent, in the sense that the analyst can track and understand the effects of their decisions.
        *   Statistically conservative, in the specific sense that it minimizes the odds of false positives, i.e., of support for generalizations for which the evidence isn’t really there. (One might think that Dunn et al. 2011 present counterevidence to this. But the apparent false positives pointed out by Dunn et al. turn out to be true positives after all, now that Jäger & Wahle and Verkerk et al. have replicated their analysis with a larger sample.)
        *   Validity of testing depends only on one key assumption – the big one: independence of observations. (Detractors of stratified sampling might however argue that it is impossible to ensure that this assumption is warranted in any decent-sized sample.)
     *   Con
        *   There has never been a consensus on an optimal sampling algorithm that balances minimization of areal and genealogical biases with retaining enough statistical power to detect the patterns of interest. With such a consensus absent after 45 years of some of the smartest minds in the field applying themselves to the problem, maybe there is no optimal solution to be had?
        *   Based on information reduction (exclusion of observations); does not exploit phylogenetic information to the full extent of what can be concluded from it.


  *   Dynamic/phylogenetic/co-evolutionary inference


     *   Pro
        *   Utilizes the best available phylogenetic information “for all it’s worth”, directly examining whether features tend to coevolve across lineages or only in particular lineages. (The same cannot currently be said about areal information. However, Verkerk et al. (ms.) use mixed-models regression – which is feasible with Grambank thanks to its relatively low percentage of missing observations – to factor in possible areal effects; a generic sketch of this kind of regression follows right after this list.)
        *   Does not depend on the assumption of independence of observations.


     *   Con
        *   Very steep training demands both on the conceptual and on the technological side. Currently only a handful of specialists are able to perform such analyses. In addition, the computational complexities are so demanding as to render the analyses effectively opaque for all but those specialists, raising questions about the validation of such tests. (Gerhard Jäger’s comparison to carbon dating in archeology is apt here as well, I think: while phylogenetic analysis will likely become more standard in the future, it may be the case that typologists will always have to rely on specialists for actually performing such tests, just like archeologists rely on physicists or lab technicians for performing carbon dating. I think 😉)
        *   Reliability of phylogenetic inference seems to depend on the size of the phylogenies involved: the smaller the family, the higher the uncertainty in the absence of actual historic data.
        *   As with stratified sampling, there does not yet appear to be a consensus approach. Jäger & Wahle (2021) and Verkerk et al. (ms.) use different algorithms (implemented in different software packages), make different assumptions, and their tests are conceptually quite distinct.
        *   Outcomes depend on many assumptions which are surrounded by varying degrees of uncertainty.
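
And since I brought up mixed-models regression under the pros above: here is a generic sketch of that kind of analysis (emphatically not Verkerk et al.’s actual model, just the general idea of controlling for family and area in a regression), fitted to simulated data with statsmodels. A random intercept per family absorbs part of the genealogical dependence, and macroarea enters as a crude fixed-effect control for areal effects; actual coevolutionary work would typically use Bayesian logistic and/or properly phylogenetic models rather than the linear-probability shortcut taken here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400                                      # hypothetical languages
family = rng.integers(0, 40, n)              # 40 made-up families
macroarea = rng.integers(0, 6, n)            # 6 made-up macroareas
family_effect = rng.normal(0, 0.15, 40)      # family-level "inheritance" noise

feature_a = rng.integers(0, 2, n)            # e.g., verb-object order (binary)
# feature_b depends weakly on feature_a, plus the family-level noise
p_b = np.clip(0.35 + 0.25 * feature_a + family_effect[family], 0.01, 0.99)
feature_b = rng.binomial(1, p_b)

df = pd.DataFrame({"feature_a": feature_a, "feature_b": feature_b,
                   "family": family, "macroarea": macroarea})

# Random intercept per family; macroarea as a fixed-effect control. A linear
# probability model is only a crude stand-in for the logistic/phylogenetic
# models used in the actual literature.
model = smf.mixedlm("feature_b ~ feature_a + C(macroarea)", df, groups=df["family"])
result = model.fit()
print(result.summary())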

So at long last, here are my three challenge questions:


  *   For the detractors of phylogenetic inference:
     *   (Repeated from above to make sure this doesn’t get lost) Do you (dis)agree that if we had complete and robust knowledge of the phylogenies of the extant languages, we would perform phylogenetic tests as a matter of course when attempting to validate typological generalizations?
     *   Do you dispute that since phylogenetic testing of typological generalizations has become possible, typology as a field cannot simply ignore it and move on without it?
  *   For the proponents of phylogenetic inference:
     *   Do you dispute that stratified sampling is statistically conservative and has the advantage of transparency and robustness vis-à-vis the underlying assumptions, so it is likely not a method that typology will simply move beyond and disregard in the future?

And there it is. Apologies for the overlong post! – Cheers – Juergen

Juergen Bohnemeyer (He/Him)
Professor, Department of Linguistics
University at Buffalo

Office: 642 Baldy Hall, UB North Campus
Mailing address: 609 Baldy Hall, Buffalo, NY 14260
Phone: (716) 645 0127
Fax: (716) 645 3825
Email: jb77 at buffalo.edu
Web: http://www.acsu.buffalo.edu/~jb77/

Office hours Tu/Th 3:30-4:30pm in 642 Baldy or via Zoom (Meeting ID 585 520 2411; Passcode Hoorheh)

There’s A Crack In Everything - That’s How The Light Gets In
(Leonard Cohen)
--


From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Simon Greenhill <simon at simon.net.nz>
Date: Saturday, November 4, 2023 at 13:39
To: lingtyp at listserv.linguistlist.org <lingtyp at listserv.linguistlist.org>
Cc: Dunn Michael <michael.dunn at lingfil.uu.se>, Russell Gray <russell_gray at eva.mpg.de>
Subject: [Lingtyp] Greenbergian word order universals: confirmed after all
Colleagues, Martin, everyone else

Thank you for sharing your perspectives on our 2011 paper. It's nice to see this still being discussed more than a decade later. However, I would like to express my concerns and disagreements with some of the points you've raised.

I'm very proud of the Dunn et al. paper for a number of reasons. I'll name three.

First, the paper showed that the overly simplistic correlation methods that had been used to make sweeping global claims were problematic. We need better tools to tackle these questions, and the tools we applied were one part of a better toolkit.

Second, it highlighted the need to understand language systems in a diachronic manner. We cannot decouple language typology from language history; instead, we need to understand how these are entangled.

Third, it emphasised the way that particular configurations of languages can be arrived at via different routes in different families at different times. This enables a much richer understanding of how these particular generalisations have arisen.

Have Jäger and Wahle disproved any of that? No. Maybe these were not completely novel insights (Maslova’s work, which touches on a few of these issues, has been mentioned, for example), but these ideas did appear to crystallise in this paper.

While it's certainly important to revisit and reevaluate research findings to ensure accuracy, it is crucial to approach these discussions with an understanding of the scientific process. Scientific paradigms evolve over time, and different studies may yield varying results due to changes in methodologies, data sources, and sample sizes. This doesn't necessarily imply that the initial research was flawed or that the authors were neglectful. In particular, the tools, the data, and our understanding of how languages change are substantially further advanced than they were a decade ago (or, I know that *my* understanding of these things is more advanced now, at least). And these other papers that you mention -- and many other studies -- have built upon the work we did in 2011.

Furthermore, I would like to caution against drawing overly broad conclusions about the quality of research published in high-prestige journals. The peer-review process in such journals is rigorous, and while they may occasionally feature sensationalist claims, this doesn't diminish the overall value they contribute to the scientific community. For the record, of the handful of papers I've had in these journals *all* have been reviewed by people I would infer to be linguists based on the comments and issues they raised. We did not send these papers to these journals to avoid linguistic reviewers but, frankly, I've had better reviews at these journals than at prominent linguistics journals (and by "better" I mean more rigorous, more thorough, and more critical).

Finally, linguistic typology is an ongoing and evolving field trying to tackle very difficult problems. We need all the tools and approaches we can get to solve these problems across all the levels that languages operate on (from detailed language internal analyses to high-level global analyses). Rather than looking back and gate-keeping what is 'real’ typology published in ‘real’ linguistics journals, we should shift our focus forward. Typology can be a welcoming and diverse community that embraces a wide range of approaches, analyses, and styles. Let's look outward to foster connections with other fields and disciplines.

After all, why shouldn't linguistic typology work be everywhere in science? It's certainly interesting enough.

Simon

Dr. Simon J. Greenhill

Associate Professor

Te Kura Mātauranga Koiora | School of Biological Sciences
Te Whare Wānanga o Tāmaki Makaurau | University of Auckland

Abteilung für Sprach- und Kulturevolution | Department of Linguistic and Cultural Evolution
Max-Planck-Institut für Evolutionäre Anthropologie | Max Planck Institute for Evolutionary Anthropology

_______________________________________________
Lingtyp mailing list
Lingtyp at listserv.linguistlist.org
https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/lingtyp