
LINGUIST List: Vol-36-1447. Tue May 06 2025. ISSN: 1069-4875.

Subject: 36.1447, Confs: Formal Linguistic Approaches to MultiModality (FLAMM) (Ireland)

Moderator: Steven Moran (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Joel Jenkins, Daniel Swanson, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Editor for this issue: Erin Steitz <ensteitz at linguistlist.org>

================================================================


Date: 05-May-2025
From: Valentina Colasanti [valentina.colasanti at tcd.ie]
Subject: Formal Linguistic Approaches to MultiModality (FLAMM)


Formal Linguistic Approaches to MultiModality (FLAMM)
Short Title: FLAMM

Date: 04-Dec-2025 - 05-Dec-2025
Location: Dublin, Ireland
Contact: Valentina Colasanti
Contact Email: FLAMM at tcd.ie
Meeting URL: https://sites.google.com/view/flammtrinity

Linguistic Field(s): General Linguistics

Submission Deadline: 15-Jul-2025

Formal Linguistic Approaches to MultiModality (FLAMM) will take place
at Trinity College Dublin on 4-5 December 2025. The workshop aims to
promote and advance the study of multimodality from a formal
linguistic perspective by bringing together scholars working in this
area.
_____________
Invited speakers:
Cornelia Ebert (Goethe-Universität Frankfurt)
Donna Jo Napoli (Swarthmore College)
Philippe Schlenker (CNRS - Institut Jean-Nicod, Paris / New York
University)
Vadim Kimmelman (University of Bergen)
______________
Decades of insightful work in formal linguistics have succeeded in
providing a largely unified treatment of both spoken and sign
languages despite their differing modalities of externalisation (see
Brentari 1993, 2019; Wilbur 1991, 1996; Petronio and Lillo-Martin
1997; Neidle et al. 2000; Sandler and Lillo-Martin 2006; Cecchetto et
al. 2006; Napoli and Sutton-Spence 2010; Davidson 2014; Pfau et al.
2018; Kimmelman 2019; among many others). However, the overall success of
this unified approach does not mean that modality is a 'solved
problem' or of secondary importance in formal linguistics---far from
it. For example, it has long been noted that the physical properties
of the visual-gestural modality afford a greater degree of
simultaneity of expression than the auditory-spoken modality (Sandler
and Lillo-Martin 2006). Simultaneity of this sort poses a prima facie
challenge for theories of linearisation (particularly those requiring
a total ordering among linguistic objects in a derivation, e.g. Kayne
1994), and yet it remains an under-theorised research area.
Other matters relating to modality of externalisation have received
more attention in the formal literature, particularly within the last
10 years, coinciding with the rise of linguistically-grounded work on
gesture (e.g., Super Linguistics: Patel-Grosz et al. 2023). Recent
advances in sign language linguistics offer exciting prospects for a
formal approach to multimodality in otherwise-spoken languages.
For example, several recent works on the formal semantics of gesture
observe that at least some gestures behave semantically like normal
linguistic objects of a certain kind, e.g. by exhibiting scopal
interactions with pieces of the spoken content they are paired with,
projecting alternatives under focus, etc. (Lascarides and Stone
2009a,b; Ebert and Ebert 2014; Ebert 2024; Schlenker 2014, 2018, 2020;
Schlenker and Chemla 2018; Esipova 2019a,b). Facts of this sort led
Esipova (2019b) to conjecture that, if gestures behave semantically
like normal linguistic objects, then they must be the product of
normal linguistic derivations within a Y-model of grammar (modulo
simultaneity and other modality-specific PF properties).
This conjecture has been further developed in very recent work on the
syntax of gesture (e.g., Sailor and Colasanti 2020). For instance,
Colasanti (2023a,b) argues that the inventory of functional items
within a single language can be multimodal: i.e., a language may have
both spoken and gestural functional morphemes. Functional items
expressed in the visual modality within otherwise-spoken languages
include question particles (see Jouitteau 2007 on Atlantic French and
Colasanti 2023a on Neapolitan), focus markers (see Colasanti and
Cuonzo 2022 and Colasanti 2023b on Lancianese), topic markers
(Colasanti and Marchetiello forthcoming), epistemic markers
(Marchetiello 2024), and negators (Prieto and Espinal 2020; Colasanti
and Sailor 2025).
FLAMM welcomes all submissions adopting a formal approach to
linguistic (multi)modality in order to address questions like those
above. Other questions directly relevant to this call include the
following:
Questions relating to the grammatical integration of gesture:
- To what extent is gesture (and/or specific gestures) a truly
linguistic object, i.e. the output of a modular linguistic derivation
constrained by the Y-model?
- For gestures that can be shown to be 'grammatically integrated' in
this way (i.e., the product of a linguistic derivation):
    - Are there principled reasons for continuing to refer to such
objects as 'gestures' rather than 'signs'? Are there formal
differences between grammatically-integrated gestures in spoken
languages and signs in sign languages?
    - What sorts of syntactic, semantic, and phonological properties
can such objects have or not have? What grammatical principles would
these generalisations follow from?
    - What consequences would these have for our theory of the
Lexicon? From a realisational / Distributed Morphology perspective, is
externalisation in the visual-gestural modality purely a PF property
(i.e. specified in List 2 and exponed during Vocabulary Insertion), or
are gestures lexically special somehow?
- If certain physical movements co-occurring with speech or sign are
para-linguistic rather than the product of a linguistic derivation
(cf. the 'gesture' vs. 'gesticulation' distinction), how can we tell?
What diagnostics can the linguist use to distinguish the linguistic
from the para-linguistic in this empirical domain?
- Are iconic and non-iconic gestures grammatically integrated in the
same way? If not, at what level(s) of representation do they differ,
and why?
- Similarly, what role, if any, does the conventionalisation of
gesture play?
Questions relating to simultaneity:
- Can the temporal alignment of gesture and speech inform our theory
of linearisation?
- Given that both gesture and prosody exhibit the property of
simultaneity (with speech), what formal properties do the two systems
have (or not have) in common?
- To what extent can research into bimodal bilingualism (e.g.
Lillo-Martin et al. 2016) inform our approach to co-speech gesture,
particularly with respect to questions of linearisation and
simultaneity (e.g. Donati and Branchini 2013)?
- Co-speech gestures are expressed simultaneously with speech, but so
is prosody. Can the study of simultaneity in the visual-gestural
modality inform our approach to the meaningful aspects of prosody
(e.g. focal accents, intonational melodies associated with clause
types, expressive lengthening, etc.)?
- Beyond simultaneity, are there properties of signs in sign languages
that are not found in spoken languages due to the exclusive use of the
visual-gestural modality (e.g., modality-specific effects)? What can
we learn about modality-specific effects by studying gestures in
otherwise-spoken languages?
Other questions:
- Since gestures occur only during language use, how do generative
linguists reconcile the competence-performance dichotomy with the
grammatical contribution of gesture?
- What empirical methods (e.g., fieldwork, experimentation, etc.) can
or should be employed to study gesture formally? What lessons can we
learn from formal sign language linguistics in this regard (e.g.
Kimmelman 2021)?
___________________
Abstract guidelines:
Presentations will be 20 minutes each, plus 10 minutes for Q&A.
Abstracts submitted for consideration must be fully anonymised, and
adhere to the following guidelines:
- They must not exceed two pages, including data, references, and
diagrams
- They must be typed in at least 11-point font, with 2.5 cm margins
(A4) or 1" margins (letter)
Submissions are limited to a maximum of two per author, at most one of
which may be single-authored.
Only electronic submissions will be accepted. Please submit your
abstract using EasyChair:
https://easychair.org/conferences?conf=flamm1.


