34.595, Confs: Computational Linguistics, Discourse Analysis, Text/Corpus Linguistics/Germany

The LINGUIST List linguist at listserv.linguistlist.org
Thu Feb 16 21:30:03 UTC 2023


LINGUIST List: Vol-34-595. Thu Feb 16 2023. ISSN: 1069-4875.

Subject: 34.595, Confs: Computational Linguistics, Discourse Analysis, Text/Corpus Linguistics/Germany

Moderator: Malgorzata E. Cavar, Francis Tyers (linguist at linguistlist.org)
Managing Editor: Lauren Perkins
Team: Helen Aristar-Dry, Steven Franks, Everett Green, Sarah Robinson,
      Joshua Sims, Jeremy Coburn, Daniel Swanson, Matthew Fort,
      Maria Lucero Guillen Puon, Billy Dickson
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Hosted by Indiana University

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Everett Green <everett at linguistlist.org>
================================================================


Date: Thu, 16 Feb 2023 21:29:02
From: Anton Benz [benz at leibniz-zas.de]
Subject: Questions under Discussion: Annotation Challenge and Workshop

 
Questions under Discussion: Annotation Challenge and Workshop 
Short Title: QUDAnno22 

Date: 23-Feb-2023 - 24-Feb-2023 
Location: Berlin, Germany 
Contact: Anton Benz 
Contact Email: benz at leibniz-zas.de 
Meeting URL: https://pragma.ruhr-uni-bochum.de/qud-challenge/index.html 

Linguistic Field(s): Computational Linguistics; Discourse Analysis; Text/Corpus Linguistics 

Meeting Description: 

The QUDAnno22 Challenge is a joint effort to annotate three texts of
different genres with text-structuring Questions under Discussion (QUDs).
The results of the challenge will be discussed at the workshop. Submissions
to the challenge count as submissions to the workshop. Based on the results
of the challenge, we plan to publish an edited volume with Language Science
Press (pending final approval).

Background and Motivation:
QUDs are central to many discourse analyses that explain linguistic
regularities as a consequence of the assumption that the sentences and text
segments exhibiting those regularities are answers to an explicit or
implicit question. QUDs were used early on for explaining possible
sequences of dialogue moves (Carlson, 1983; Ginzburg, 1995), clarifying
information-structural concepts such as the topic/focus distinction
(Roberts, 2012 [1996]; van Kuppevelt, 1995), accounting for temporal
progression and foreground–background relations in narration (Klein & von
Stutterheim, 1987; von Stutterheim & Klein, 1989), capturing
information-structural constraints on implicature (van Kuppevelt, 1996),
representing discourse goals and defining contextual relevance (Roberts,
2012 [1996]), and analysing the structure and coherence of discourse, both
text and dialogue (Klein & von Stutterheim, 1987; van Kuppevelt, 1995).
Since then, QUDs have become firmly established as an analytic tool,
leading to fruitful applications to a wide range of linguistic phenomena.

Most theories assume that sentences are subordinated to a focus-congruent
question, which is in turn subordinated to higher discourse-structuring
questions (see, for example, Klein & von Stutterheim 1987b, van Kuppevelt
1995, Roberts 2012 [1996]; see also Benz & Jasinskaja 2017). QUD theories
of phenomena such as non-at-issue content, presupposition projection, and
focus assume that these phenomena can also depend on questions higher up in
the hierarchy. Hence, a proper test of these theories requires explicit
knowledge of the relevant discourse-structuring questions.

Although there is an obvious need for QUD-annotated corpora, there has been
little work in this direction. Exceptions include De Kuthy et al. (2018),
Riester et al. (2018), Riester (2019) and Westera et al. (2020).
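
As an illustration of the kind of question hierarchy assumed above, the
following minimal sketch (in Python; not part of the challenge's annotation
scheme, with invented class names and example questions) represents QUDs as
a tree whose nodes are explicit or implicit questions and whose leaves
record the text segments that answer them.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QUDNode:
    # An explicit or implicit question, its sub-questions, and the text
    # segments that (partially) answer it. Hypothetical representation only.
    question: str
    parent: Optional["QUDNode"] = None
    children: List["QUDNode"] = field(default_factory=list)
    segments: List[str] = field(default_factory=list)

    def add_subquestion(self, question: str) -> "QUDNode":
        child = QUDNode(question=question, parent=self)
        self.children.append(child)
        return child

    def superordinate_questions(self) -> List[str]:
        # All questions higher up in the hierarchy, from parent to root.
        node, result = self.parent, []
        while node is not None:
            result.append(node.question)
            node = node.parent
        return result

# Invented mini-example: a sentence answers a focus-congruent question,
# which is itself subordinated to a higher discourse-structuring question.
root = QUDNode(question="What happened during the trip?")
sub = root.add_subquestion("Where did the group stay?")
sub.segments.append("They stayed in a small hotel near the station.")
print(sub.superordinate_questions())
# -> ['What happened during the trip?']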

The Issue:
We think that this research gap does not exist by chance. For
morphological and syntactic features, there typically exist established
criteria that objectively determine how a text item should be annotated.
How closely the annotations approach the objectively correct ones then
depends only on the clarity of the annotation guidelines, the tag system,
and the qualifications of the annotators. For QUDs, it has yet to be
established, or refuted, whether there is an objective text-structuring
QUD hierarchy that annotators merely have to uncover.

One problem is posed by the many information-structural features that QUDs
are supposed to explain, among them the given/new, focus/background, and
at-issue/not-at-issue distinctions; it is an open question whether they can
all be predicted by a uniform question hierarchy. Another problem is the
representation of discourse goals, which QUDs are also assumed to capture.
Annotating discourse goals in the form of QUDs makes it necessary to
interpret the text and the authors’ motivations, a task that can easily
lead to widely different results. However, testing specific claims about
the role of QUDs requires an explicit representation of these
goal-representing QUDs. For example, testing whether the non-at-issue
content of a sentence is definable as content that does not provide
relevant material for answering any of its superordinate questions requires
explicit knowledge of these questions.
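
Purely as an illustration (the function and the stubbed-out relevance
judgement below are invented for this sketch; deciding relevance is exactly
what annotators have to supply), the test just mentioned can be phrased as:
a piece of content is non-at-issue, on the definition under test, iff it
provides no relevant material for answering any of its superordinate
questions.

from typing import Callable, List

def is_non_at_issue(content: str,
                    superordinate_questions: List[str],
                    answers: Callable[[str, str], bool]) -> bool:
    # Non-at-issue (on the definition under test) iff the content helps
    # answer none of the questions higher up in the hierarchy.
    return not any(answers(content, q) for q in superordinate_questions)

# Hypothetical usage; the relevance judgement is the annotators' call and
# is only stubbed out here.
def judgement(content: str, question: str) -> bool:
    return False  # annotator's relevance decision; stub for the sketch

questions = ["What happened during the trip?", "Where did the group stay?"]
print(is_non_at_issue("(it was raining, by the way)", questions, judgement))
# -> True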
 

Program:

The QUDAnno22 Challenge is a joint effort to annotate three texts of
different genres with text-structuring Questions under Discussion (QUDs).
At the workshop, the results of five annotation teams will be discussed. In
addition to the teams, three invited commentators will present their views
on annotating QUDs and on the results of the challenge.

The program can be found at:
https://pragma.ruhr-uni-bochum.de/qud-challenge/programme.html

Contributing annotation teams:

- Oliver Deck, Tatjana Scheffler and Hannah J. Seemann (RUB): QUD Structure as
Discourse Structure: Segmentation, Labelling, and Genre Characteristics of
Longer Texts.
- Christoph Hesse (ZAS Berlin), Ralf Klabunde (RUB) and Anton Benz (ZAS
Berlin): Non-at-issue content in three text genres.
- Lisa Schäfer, Robin Lemke, Bozhidara Hristova, Heiner Drenhaus and Ingo
Reich (U Saarbrücken): What are you talking about? Estimating the probability
of Questions Under Discussion based on crowdsourced non-expert annotations.
- Zurine Abalos, Elena Castroviejo (UPV/EHU Basque Country) and Melanie S.
Masià (UIB les Illes Balears): Do topic shifts challenge discourse coherence?
- Arndt Riester (Universität Bielefeld): The uncertain lifespan of topics in
the right frontier (and other issues).

Invited commentators: 

Laia Mayol (UPF Barcelona)
Matthijs Westera (Universiteit Leiden)
Lisa Brunetti (Université Paris Cité/CNRS)

The organizing committee of the challenge: 
Anton Benz: Benz(at)leibniz-zas.de 
Christoph Hesse: Hesse(at)leibniz-zas.de 
Ralf Klabunde: ralf.klabunde(at)ruhr-uni-bochum.de 
Maurice Langner: maurice.langner(at)ruhr-uni-bochum.de 
Tatjana Scheffler: tatjana.scheffler(at)ruhr-uni-bochum.de 
Arndt Riester: arndt.riester(at)uni-bielefeld.de 
Oliver Deck: oliver.deck(at)ruhr-uni-bochum.de





------------------------------------------------------------------------------

 


----------------------------------------------------------
LINGUIST List: Vol-34-595	
----------------------------------------------------------




