[Lingtyp] complex annotations and inter-rater reliability

Wiemer, Bjoern wiemerb at uni-mainz.de
Sun Jan 11 10:55:39 UTC 2026


Dear Jürgen,
many thanks for this nice summary separating the three types of problems (and thanks for pointing out your forthcoming book). You are right, they are not only related to annotation, but they become highly relevant for annotation IF we, as linguists, are interested in their comparability (with other data sets, with other languages, etc.) and, while annotating, we observe problems in assigning values from our properties (distinctions, categories), and these problems are evidently NOT caused by an insufficient theory, by a lack of understanding of the relevant structure of the language under inspection, or by other things that may result from missing an important point in your grid.
                I referred to the discussion on this list in late 2023. It seems to have ended with the conclusion that, in semantic annotation at least, we can only do our best at carving up the relevant notional domain with distinctions that seem suitable for the particular research aim and then have the relevant data annotated by different people. Then you observe that people disagree to a non-negligible degree. And what then...? Where do we look for the reasons, and how do we diminish them? You can (again and again) refine your description and hope to capture all relevant distinctions. But this will only make your grid, and the instructions for annotators, more complex. I'm sincerely looking forward to reading your book and to learning from it how to cope with this.
In general, I see that, as for the second point summarized by you below, we need a better understanding of how to relate descriptive concepts (used inter alia in annotations) to comparative concepts. And maybe we need an intermediate level in between (as it were, of "semi-comparative" concepts) that captures certain groups of descriptive concepts (ones that describe similar phenomena) but that are not as widespread, or as prominent, as distinctions, say, between verbs and nouns, flagging and indexing, assertive vs directive etc. speech acts, or between being a root or not. Maybe understanding how to do this will help to thin out the jungle of specific terms that compete with traditional (and seemingly well-understood) terms, which together create so many occasions for misunderstandings. And even if there were less confusion about how to use complex and simple concepts, this would not mean that we are able to assign to all, or the majority of, tokens in our data sets labels on which different annotators agree.

Thanks, again, for this summary of yours.
Best,
Björn.

From: Juergen Bohnemeyer <jb77 at buffalo.edu>
Sent: Saturday, 10 January 2026 21:19
To: Wiemer, Bjoern <wiemerb at uni-mainz.de>; lingtyp at listserv.linguistlist.org
Subject: Re: [Lingtyp] complex annotations and inter-rater reliability

Dear Björn - I'm not trying to answer your question, I'm just trying to understand it. Which I have so far evidently failed to do.

Any kind of annotation presupposes an analysis. Semantic annotation presupposes semantic analysis. The problems you describe seem to fall into at least three different categories:


  *   Problems of semantic analysis, such as the analysis of your Polish example. Now, it so happens that I have written a book on how to do semantic analysis without relying on L1 speaker intuitions. It's coming out in February. Please see here: https://www.cambridge.org/ga/universitypress/subjects/languages-linguistics/semantics-and-pragmatics/semantic-research-data-analysis?format=PB&isbn=9781108441926


  *   Problems of the metalanguage used to communicate results of semantic analysis, which may or may not be isomorphic with the labels used for annotation. In any case, this is where the issue of comparative concepts and etic grids arises.


  *   Problems of typology and theory, such as whether there is an exhaustive classification of speech acts (answer: no, at least not so far, and I've begun to think that such a classification may be unattainable) and how, if at all, to distinguish between complementizers and mood markers (knowing that in many languages a grammaticalization continuum is involved here).

What puzzles me about your question, and makes it hard for me to understand where you are going with it, is that the three issues listed above seem to all be problems in their own right (although they are of course closely interrelated), and none of them seems to be particularly intimately tied to annotation.

Maybe you could clarify a bit further? - Best - Juergen


Juergen Bohnemeyer (He/Him)
Professor, Department of Linguistics
University at Buffalo

Office: 642 Baldy Hall, UB North Campus
Mailing address: 609 Baldy Hall, Buffalo, NY 14260
Phone: (716) 645 0127
Fax: (716) 645 3825
Email: jb77 at buffalo.edu
Web: http://www.acsu.buffalo.edu/~jb77/

Office hours Tu/Th 3:30-4:30pm in 642 Baldy or via Zoom (Meeting ID 585 520 2411; Passcode Hoorheh)

There's A Crack In Everything - That's How The Light Gets In
(Leonard Cohen)

--


From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Wiemer, Bjoern via Lingtyp <lingtyp at listserv.linguistlist.org>
Date: Saturday, January 10, 2026 at 09:11
To: lingtyp at listserv.linguistlist.org
Subject: Re: [Lingtyp] complex annotations and inter-rater reliability
Dear All,
a week ago, I posted a mail with issues concerning annotations, in particular semantic ones. I thank those few colleagues who have contributed to that discussion, and I have waited over the week to see whether more reactions would come. As this hasn't been the case, I'd like to summarize some points, but also to be more precise about the particular background. I'm sorry that, maybe, I hadn't been clear enough, and I apologize for this long mail.

From the few reactions I got (from four colleagues) I dare infer that the topic is either considered irrelevant or it hasn't been recognized yet. The issue I picked up was a discussion from October 2023, in which in particular Volker Gast explained that semantic annotation yields astonishingly divergent results among annotators even for distinctions that seem intuitively quite clear (and which, I suppose, the researchers had defined well before they presented material to informants/annotators). Accordingly, my request primarily concerned semantic annotation.
                In summarizing the responses this week, one result is that there were none concerning semantic annotation. There was consensus that, in principle, the burden is on those writing the guidelines (or a codebook). I agree with this, in principle. The problem is, however, that some annotations are done for the purpose of exploring things that are too subtle, or too far off the radar, to be considered in mainstream annotation guidelines or designs. I come back to this below.
Apart from the linguist's task of providing clear annotation guidelines, Christian Lehmann raised another requirement, namely that there be "a complete linguistic description of the language" in question. One might assume that this is possible for morphosyntax and phonology (provided one breaks down all concepts into "atomic" oppositions; see below), at least for languages for which we have enough data and a body of experts who have already worked on them (and probably neglecting much of sociolinguistic variation and abstracting from diachronic change, e.g. by comparing even 19th-century English and PDE). I understand this postulate (exhaustive description) as being based on descriptive concepts (for each particular language, or rather: language stage). Thus, how can this be made compatible with comparative concepts, which Martin draws attention to? From all the discussion on this topic that I can recall, I learned that descriptive and comparative concepts each have their own justification (depending on one's goals), but they are usually incommensurable. Moreover, as far as I understand (and remember from numerous examples), comparative concepts are, as a rule, related to structural notions relevant for phonology and morphosyntax; they thus rather concern notions like "root, morpheme, affix, indexing vs flagging", etc. Some of these may not be comparative concepts after all, but they are meant to enable linguists to speak about the "same" things, in terms of structure, in linguistic comparison.
But what about notions like, e.g., those specifying different kinds of illocution (= the speaker's communicative intention in saying/uttering U), or information source? It has often been emphasized that, probably, nobody would have started looking for evidential (or: information source) marking in (west) European languages if there hadn't been linguists (with a European/western background) like Boas, Sapir, and some people working on the Balkans (since the 19th c.) and, later, on Amazonian languages, who noticed that for the description of these languages it is highly relevant to take account either of bound morphology or of usage types of verbal constructions (e.g., anterior/perfect grams), because the distinctions marked there could not be adequately described by notions like mood, modality, or tense (though many tried to squeeze them into those categorial or grammatical distinctions). That is, we get those "new distinctions" that belong to a different notional dimension than the paradigmatic oppositions known from "classical languages", and we get them because they are part and parcel of the morphosyntactic "outfit" of those "more exotic" languages or, at least, of their function range (e.g. as extensions of the perfect). And once we have them, we get new notional (semantic-pragmatic) distinctions: on the level of functions of utterances, different kinds of information source can be assumed to be discernible also in languages that don't have such clear-cut means of marking them in their morphosyntax.
The tertium comparationis now concerns only functions, irrespective of the way they can be expressed. So, you can also investigate, say, sentence adverbs (or whatever one wants to call them with the vague term "particle") from the point of view of information source marking. Sentence adverbs are not morphosyntax; they are operators on propositions and illocutions (and might be subclassified accordingly). If you want to annotate their functions in samples of authentic speech, you normally adopt some commonly accepted classification of functions, but actually this might turn out to be only a bona-fide application of what linguists (i.e. experts on a certain language and/or on certain linguistic phenomena) think is a valid approximation to the linguistic reality of speakers of the respective languages. How do we know that this is exhaustive, or at least sufficient? Concomitantly, what will we do if we get rather divergent results from different annotators? Here we have a parallel to the problem discussed by Volker Gast in 2023. We may keep the burden with the researchers who write the codebook, but how should they know whether (and which) problems arise because of a bad codebook or because the annotators didn't understand the distinctions made?
Research in AI (and in relevant tools) has started dealing with 'human label variation', that is, with annotation disagreements: "Human label variation arises when annotators assign different labels to the same item for valid reasons, while annotation errors occur when labels are assigned for invalid reasons." (Weber-Genzel et al. 2024: 2256) Here, variation is distinguished from errors (the latter include misunderstanding an instruction). At the same time, the amount of observed variation between annotators is taken as a signal that there is something in the data that needs to be explained as an objective property of that data, not in the sense that one group of annotators is "right" and another one is "wrong". Almost all examples I have read about concern clause linkage and the semantic relation between interrelated clauses (with and without connectives). Variation has thus mainly been observed with distinctions relevant for semantic annotation.
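
To make this concrete: here is a minimal sketch (in Python, with invented item IDs and labels) of how one might flag the items where annotators diverge, as a first step before deciding case by case whether the divergence reflects valid variation or error:

from collections import Counter

# Invented labels from three annotators over the same items (illustration only).
annotations = {
    "ex01": ["directive", "directive", "permissive"],
    "ex02": ["permissive", "permissive", "permissive"],
    "ex03": ["directive", "non-curative", "permissive"],
}

for item, labels in annotations.items():
    counts = Counter(labels)
    majority_label, majority_n = counts.most_common(1)[0]
    agreement = majority_n / len(labels)
    flag = "unanimous" if agreement == 1.0 else "check: variation or error?"
    print(f"{item}: {dict(counts)} -> majority agreement {agreement:.2f} ({flag})")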

Furthermore, Martin remarks that we need "clear and simple definitions of annotation categories". I totally agree. I also agree that notions like 'mood' and 'subordination' are highly complex (and usually fraught with traditions that are often more of an obstacle than a help). In my view, the consequence must be that such complex notions are broken down into simpler oppositions. Thus, in order to get an empirically valid picture of different strategies and degrees of subordination in a language (or rather: its corpus examples), I would start with coding parameters like those pointed out in Verstraete 2007. I would complement them with indicators of those differences that have been pointed out w.r.t. the gradient between quoted/direct and indirect speech [Verstraete deliberately didn't deal with them]. What we then get is a grid of annotation variables that is complex in its number of distinctions; moreover, it requires really good training (especially if you want to "hire" annotators), first of all because many of these simpler distinctions are not immediately obvious (or intuitive), and they anyway require the annotator to interpret each example in a sufficiently rich context. (This is similar to what you need to do if you annotate differences of referential status or (in)definiteness - often you cannot determine these without a broader context.) In addition, even some rich context might not suffice to assign exactly one label from the value set of the given criterion (all of which are specified in your codebook).
                The same could be said about 'mood', on condition that linguists sufficiently specify what they mean by it (which often is not the case). In fact, we can do without this notion, because it can probably be shown that what linguists usually call 'mood' (especially "analytic mood") is just a way of referring to the function of diverse function words (some of them good examples of clitics, others not) to indicate illocutionary force; simultaneously, these function words often restrict which tense (and "synthetic mood") marking, from the inventory of TMA marking admissible in the respective language, may appear on the verb (or the VP). This is where the function of such "mood" markers meets the notion of 'complementizer', i.e. another complex notion (for a representative picture of "intersections" between what people call 'mood' and 'complementizer' cf. Wiemer 2023a, 2023b). Even if you seem to have a good definition of what a (canonical) complementizer is [it is certainly not a comparative concept, and neither is 'mood'!], you need to break it down into more "atomic" distinctions, and among these we find illocutionary functions and various kinds of stance (epistemic, volitional, etc.), which have to be compatible with whatever their complement-taking predicate is. However, do we have an exhaustive list of illocutionary functions? Moreover, in order to check them (so as to assign a value from your annotation grid) you again often need to rely on rich enough context. Here is a simple example:

Polish
(1)         Niech Pan siądzie!
                'May you sit down!'

The function word niech (= the uninflected trunk of obsolete *nexati 'let') requires a finite verb in the non-past (here: siądzie.3SG 'sit down'); this restriction applies to niech when it is used in the volitional domain (vs. its use as a concessive conjunction, in which case niech allows for the past tense). In isolation, this utterance could be assigned different illocutionary functions: it may simply be directive (a command), it may also be permissive (if it is a reaction to a request for action), but it may also be non-curative ('I don't care', likewise reactive, but not necessarily to a request), from where we easily get into concessive use. Whether it is directive, permissive or non-curative depends on simpler distinctions like [+/- reactive], [+/- relates to request], also [+/- interlocutors on equal social level], etc. Now, a codebook could contain the labels 'directive', 'permissive', etc., together with explanations in terms of these more atomic distinctions; but it could also just list these simpler distinctions as binary choices, and the illocutionary function could be computed from their combination (see the sketch below). That is, we would not need to use the labels of illocutionary functions at all; they might be better "known", but will they be understood, and will they be simpler for the annotators?
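
Just to illustrate the "computed from their combination" idea, here is a toy sketch in Python; the feature set and the mapping are my simplification for exposition, not a worked-out analysis of niech:

def illocution(reactive: bool, relates_to_request: bool) -> str:
    # Toy mapping from atomic distinctions to an illocutionary label;
    # further features (e.g. [+/- interlocutors on equal social level]) could refine it.
    if not reactive:
        return "directive"      # plain command
    if relates_to_request:
        return "permissive"     # reaction to a request for action
    return "non-curative"       # 'I don't care' reading

# Annotators would only supply the binary values; the label is derived from them.
print(illocution(reactive=False, relates_to_request=False))  # directive
print(illocution(reactive=True, relates_to_request=True))    # permissive
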
                Does anybody have experience with this kind of question (and with its application in practice, with annotators and inter-rater reliability testing)? This question arises when you compare the Polish construction with equivalent (often cognate) constructions in other Slavic languages, and when you want to grasp whether they have changed functionally over time (for a first impression cf. Wiemer 2023c). But there are many other examples. Which "full description" of any of these languages would help me choose the right solution?

Last but not least, all the remarks and information given by Sasha Berdicevskis from the perspective of computational linguistics are relevant and helpful for what I have been after (thanks, Sasha!). However, I'm not sure whether the methodology he describes for investigating lexical change can be applied one-to-one to research on grammatical change (in the broad sense, as with my Polish example above), as Sasha remarks himself, and not only because it might be too expensive (with a probably depressing outcome) to be done on treebanks. Would your "very optimistic computational linguist" use something like BERT for this purpose? If yes, the methodology seems to be the inverse of composing elaborate codebooks with a large number of criteria/variables (broken down into possibly atomic contrasts), doesn't it? For you would assess the validity of BERT's results ex post via human interpreters (probably language experts?), who would need to understand which contextual features BERT's vector analysis was based on (and which may remain a black box).
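
For what it's worth, here is a rough sketch of what such a BERT-based comparison might look like: contextual embeddings of the same form (here niech) in two contexts, compared by cosine similarity. The model choice and the sentences are only placeholders of mine; this is a sketch of the general technique, not a ready methodology for diachronic annotation.

import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "bert-base-multilingual-cased"   # placeholder model choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def embedding_of(sentence: str, target: str) -> torch.Tensor:
    """Mean hidden state of the subword tokens spelling out the target form."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"form {target!r} not found in {sentence!r}")

e1 = embedding_of("Niech Pan siądzie!", "Niech")
e2 = embedding_of("Niech sobie mówi, co chce.", "Niech")
print(torch.cosine_similarity(e1, e2, dim=0).item())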

After all, all contributors to this discussion converge in saying that being a native speaker of the language or not doesn't matter, at least not in the first place. This was my impression as well before I posted my mail a week ago. But the reason I asked was that obviously not all peer reviewers of project applications or of journal articles share this assumption. And then it gets difficult to argue with them.
                The same applies to reviewers who reject the existence of phenomena (e.g., of data that defy unambiguous annotation) because they are not discussed or noted in UD resources... This is another anecdote, but a really annoying and sad one.

Let's be optimistic, though, and be confident that research on diachronic change can be done without native speakers, and that the parameters indicating change can even be quantified (to some extent), even if data is not that abundant and no full descriptions of earlier language stages are available (and testable). And that peer reviewers will be constructive in this regard.

Best,
Björn.


References
Verstraete, Jean-Christophe. 2007. Rethinking the Coordinate-Subordinate Dichotomy: Interpersonal Grammar and the Analysis of Adverbial Clauses in English. Berlin, New York: Mouton de Gruyter.
Weber-Genzel, Leon, Siyao Peng, Marie-Catherine de Marneffe & Barbara Plank. 2024. VariErr NLI: Separating Annotation Error from Human Label Variation. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2256-2269.
Wiemer, Björn. 2023a. Between analytical mood and clause-initial particles - on the diagnostics of subordination for (emergent) complementizers. Zeitschrift für Slawistik 68-2, 187-260.
Wiemer, Björn. 2023b. Clause-initial connectives, bound and unbound: Indicators of mood, of subordination, or of something more fundamental? Slavia Meridionalis 23 (Special issue: Comparative and typological approaches to Slavic languages. Ed. by Jakub Banasiak, Julia Mazurkiewicz-Sułkowska, Bożena Rozwadowska, Dorota Klimek-Jankowska). DOI: 10.11649/sm.3194
Wiemer, Björn. 2023c. Directive-optative markers in Slavic: observations on their persistence and change. Linguistica Brunensia 71-1, 5-45.



From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Martin Haspelmath via Lingtyp
Sent: Monday, 5 January 2026 12:51
To: lingtyp at listserv.linguistlist.org
Subject: Re: [Lingtyp] complex annotations and inter-rater reliability


Dear Björn,

Since you mentioned works on cross-linguistic inter-coder reliability as well (e.g. Himmelmann et al. 2018 on the universality of intonational phrases):

I think it's important to have clear and simple definitions of annotation categories, so if you are interested, for example, in "the coding of clause-initial "particles" (are they just particles, operators of "analytical mood", or complementizers?)", you need to have clear and simple definitions of particle, mood, and complementizer as comparative concepts. ("The burden is on those who formulate the guidelines", as Christian Lehmann said.)

I think one can define particle as "a bound morph that is neither a root nor an affix nor a person form nor a linker", but this definition of course presupposes that one has a definition of "root", of "affix", and so on. These terms are not understood uniformly either, and mood is perhaps the worst of all traditional terms (even worse than "subordination", I think).

Matters are quite different with materials from little-studied languages, i.e. with "transcribing and annotating recordings", as described by Jürgen Bohnemeyer. Language-particular descriptive categories are much easier to identify across texts than comparatively defined categories are to identify across languages.

Best wishes for the New Year,

Martin
On 03.01.26 12:54, Wiemer, Bjoern via Lingtyp wrote:
Dear All,
since this seems to be the first post on this list this year, I wish everybody a successful, more peaceful and decent year than the previous one.

I want to raise an issue which goes back to a discussion from October 2023 on this list (see the thread below, in inverse chronological order). I'm interested to know whether anybody has a satisfying answer to the question of how to deal with semantic annotation, or the annotation of more complex (and less obvious) relations, in particular with the annotation of interclausal relations, both in syntactic and in semantic terms. Problems arise already with the coordination-subordination gradient, which ultimately is the outcome of a complex bundle of semantic criteria (like independence of illocutionary force, or the perspective from which referential expressions such as tense or person deixis are interpreted; see also the factors that were analyzed meticulously, e.g., by Verstraete 2007). Other questions concern the coding of clause-initial "particles": are they just particles, operators of "analytical mood", or complementizers? (Notably, these things do not exclude one another, but they heavily depend on one's theory, in particular one's stance toward complementation and mood.) Another case in point is the annotation of the functions and properties of constructions in TAME domains, especially if the annotation grid is more fine-grained than mainstream categorizing.
                The problems which I have encountered (in pilot studies) are very similar to those discussed in October 2023 for seemingly even "simpler", or more coarse-grained, annotations. And they get much worse when we turn to data from diachronic corpora: even if being an informed native speaker is usually an asset, with diachronic data this asset is often useless, and native knowledge may even be a hindrance, since it leads the analyst to project the habits and norms of contemporary usage onto earlier stages of the "same" language. (Similar points apply to closely related languages.) I entirely agree that annotators have to be trained, and annotation grids have to be tested, first of all because you have to exclude the (very likely) possibility that raters disagree just because some of the criteria are not clear to at least one of them (with the consequence that you cannot know whether disagreement or a low kappa results from misunderstandings rather than reflecting properties of your object of study). I also agree that each criterion of a grid has to be sufficiently defined, and that the annotation grid (or even its "history") as such should be documented, in order to preserve objective criteria for replicability and comparability (for cross-linguistic research, but also for diachronic studies based on a series of "synchronic cuts" of the given language).
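
(As an aside, for readers less familiar with the kappa mentioned here: a minimal sketch of a chance-corrected agreement computation over two invented annotators' label sets, using scikit-learn's implementation of Cohen's kappa.)

from sklearn.metrics import cohen_kappa_score

# Invented labels from two annotators over the same ten tokens (illustration only).
annotator_a = ["directive", "permissive", "directive", "non-curative", "directive",
               "permissive", "directive", "directive", "non-curative", "permissive"]
annotator_b = ["directive", "directive", "directive", "non-curative", "permissive",
               "permissive", "directive", "non-curative", "non-curative", "permissive"]

print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 = perfect agreement, 0 = chance level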

On this background, I'd like to formulate the following questions:

  1.  What arguments are there that (informed) native speakers are better annotators than linguistically well-trained students/linguists who are not native speakers of the respective language(s) but can be considered experts?
  2.  Conversely, what arguments are there that non-native-speaker experts might be even better suited as annotators (for this or that kind of issue)?
  3.  Have assumptions about the pluses and minuses of both kinds of annotators been tested in practice? That is, do we have empirical evidence for any such assumptions (or do we just rely on some sort of common sense, or on the personal experience of those who have done more complicated annotation work)?
  4.  How can the pluses and minuses of both kinds of annotators be counterbalanced in a not too time- (and money-)consuming way?
  5.  What can we do with data from diachronic corpora if we have to admit that (informed) native speakers are of no use, and non-native experts are not acknowledged either? Are we simply doomed to refrain from any reliable and valid in-depth research based on annotations (and statistics) for diachronically earlier stages and for diachronic change?
  6.  In connection with this, has any cross-linguistic research that is interested in diachrony tried to implement insights from fields such as historical semantics and pragmatics into annotations? In typology, linguistic change has become increasingly prominent during the last 10-15 years (not only from a macro-perspective). I thus wonder whether typologists have tried to "borrow" methodology from fields that have possibly been better at interpreting diachronic data, and even at quantifying them (to some extent).

I don't want to be too pessimistic, but if we have no good answers as to who should be doing annotations - informed native speakers or non-native experts (or only those who are both native and experts)? - and as to how we might test the validity of annotation grids (for comparisons across time and/or languages), there won't be convincing arguments for how to deal with diachronic data (or data from lesser-studied languages for which no native speakers may be available) in empirical studies that are to disclose more fine-grained distinctions and changes, also in order to quantify them. In particular, reviewers of project applications may always ask for a convincing methodology, and if no such research is funded we'll remain ignorant of many of the reasons for and backgrounds of language change.

I'd appreciate advice, in particular if it provides answers to any of the questions under 1-6 above.

Best,
Björn (Wiemer).


From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of William Croft
Sent: Monday, 16 October 2023 15:52
To: Volker Gast <volker.gast at uni-jena.de>
Cc: LINGTYP at LISTSERV.LINGUISTLIST.ORG
Subject: Re: [Lingtyp] typology projects that use inter-rater reliability?

An early cross-linguistic study with multiple annotators is this one:


Gundel, Jeannette K., Nancy Hedberg & Ron Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language 69.274-307.

It doesn't have all the documentation that Volker suggests; our standards for providing documentation have risen.

I have been involved in annotation projects in natural language processing, where the aim is to annotate corpora so that automated methods can "learn" the annotation categories from the "gold standard" (i.e. "expert") annotation -- this is supervised learning in NLP. Recent efforts are aiming at developing a single annotation scheme for use across languages, such as Universal Dependencies (for syntactic annotation), Uniform Meaning Representation (for semantic annotation), and Unimorph (for morphological annotation). My experience is somewhat similar to Volker's: even when the annotation scheme is very coarse-grained (from a theoretical linguist's point of view), getting good enough inter-annotator agreement is hard, even when the annotators are the ones who designed the scheme, or are native speakers, or have done fieldwork on the language. I would add to Volker's comments that one has to be trained for annotation; but that training can introduce (mostly implicit) biases, at least in the eyes of proponents of a different theoretical approach -- something that is more apparent in a field such as linguistics, where there are large differences between theoretical approaches.
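
(To make the supervised-learning setup concrete: a minimal sketch with invented sentences and labels, using scikit-learn's standard pipeline. A classifier is fitted on the gold-standard annotations and then predicts labels for new items.)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Gold-standard ("expert") annotations: sentences paired with category labels (invented).
sentences = ["Sit down now.", "You may sit down.", "Could you open the window?",
             "Open the window immediately.", "Feel free to take a seat."]
labels = ["directive", "permissive", "directive", "directive", "permissive"]

clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)                     # the model "learns" the annotation scheme
print(clf.predict(["Please take a seat."]))    # label predicted for an unseen item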

Bill


On Oct 16, 2023, at 6:02 AM, Volker Gast <volker.gast at uni-jena.de> wrote:


Hey Adam (and others),

I think you could phrase the question differently: What typological studies have been carried out with multiple annotators and careful documentation of the annotation process, including precise annotation guidelines, the training of the annotators, publication of all the (individual) annotations, calculation of inter-annotator agreement etc.?

I think there are very few. The reason is that the process is very time-consuming, and "risky". I was a member of a project co-directed with Vahram Atayan (Heidelberg) where we carried out very careful annotations dealing with what we call 'adverbials of immediate posteriority' (see the references below). Even though we only dealt with a few well-known European languages, it took us quite some time to develop annotation guidelines and train annotators. The inter-rater agreement was surprisingly low even for categories that appeared straightforward to us, e.g. agentivity of a predicate; and we were dealing with well-known languages (English, German, French, Spanish, Italian). So the outcomes of this process were very moderate in comparison with the work that went into the annotations. (Note that the project was primarily situated in the field of contrastive linguistics and translation studies, not linguistic typology, but the challenges are the same).

It's a dilemma: as a field, we often fail to meet even the most basic methodological requirements that are standard in other fields (most notably psychology). I know of at least two typological projects where inter-rater agreement tests were run, but the results were so poor that a decision was made not to pursue this any further (meaning, the projects were continued, but without inter-annotator agreement tests; that's what makes annotation projects "risky": what do you do if you never reach a satisfactory level of inter-annotator agreement?). Most annotation projects, including some of my own earlier work, are based on what we euphemistically call 'expert annotation', with 'expert' referring to ourselves, the authors. Today I would minimally expect the annotations to be done by someone who is not an author, and I try to implement that requirement in my role as a journal editor (Linguistics), but it's hard. We do want to see more empirical work published, and if the methodological standards are too high, we will end up publishing nothing at all.

I'd be very happy if there were community standards for this, and I'd like to hear about any initiatives implementing more rigorous methodological standards in linguistic typology. Honestly, I wouldn't know what to require. But it seems clear to me that we cannot simply go on like this, annotating our own data, which we subsequently analyze, as it is well known that annotation decisions are influenced by (mostly implicit) biases.

Best,
Volker

Gast, Volker & Vahram Atayan (2019). 'Adverbials of immediate posteriority in French and German: A contrastive corpus study of tout de suite, immédiatement, gleich and sofort'. In Emonds, J., M. Janebová & L. Veselovská (eds.): Language Use and Linguistic Structure. Proceedings of the Olomouc Linguistics Colloquium 2018, 403-430. Olomouc Modern Language Series. Olomouc: Palacký University Olomouc.

in German:

Atayan, V., B. Fetzer, V. Gast, D. Möller, T. Ronalter (2019). 'Ausdrucksformen der unmittelbaren Nachzeitigkeit in Originalen und Übersetzungen: Eine Pilotstudie zu den deutschen Adverbien gleich und sofort und ihren Äquivalenten im Französischen, Italienischen, Spanischen und Englischen'. In Ahrens, B., S. Hansen-Schirra, M. Krein-Kühle, M. Schreiber, U. Wienen (eds.): Translation -- Linguistik -- Semiotik, 11-82. Berlin: Frank & Timme.

Gast, V., V. Atayan, J. Biege, B. Fetzer, S. Hettrich, A. Weber (2019). 'Unmittelbare Nachzeitigkeit im Deutschen und Französischen: Eine Studie auf Grundlage des OpenSubtitles-Korpus'. In Konecny, C., C. Konzett, E. Lavric, W. Pöckl (eds.): Comparatio delectat III. Akten der VIII. Internationalen Arbeitstagung zum romanisch-deutschen und innerromanischen Sprachvergleich, 223-249. Frankfurt: Lang.


---
Prof. V. Gast
https://linktype.iaa.uni-jena.de/VG

On Sat, 14 Oct 2023, Adam James Ross Tallman wrote:


Hello all,
I am gathering a list of projects / citations / papers that use or refer to inter-rater reliability. So far I have:
Himmelmann et al. 2018. On the universality of intonational phrases: a cross-linguistic interrater study. Phonology 35.
Gast & Koptjevskaja-Tamm. 2022. Patterns of persistence and diffusibility in the European lexicon. Linguistic Typology (not explicitly the topic of the paper, but interrater reliability metrics are used)
I understand people working with Grambank have used it, but I don't know if there is a publication on that.
best,
Adam
--
Adam J.R. Tallman
Post-doctoral Researcher
Friedrich Schiller Universität
Department of English Studies
_______________________________________________
Lingtyp mailing list
Lingtyp at listserv.linguistlist.org
https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/lingtyp




--
Martin Haspelmath
Max Planck Institute for Evolutionary Anthropology
Deutscher Platz 6
D-04103 Leipzig
https://www.eva.mpg.de/linguistic-and-cultural-evolution/staff/martin-haspelmath/