

LINGUIST List: Vol-33-3245. Tue Oct 25 2022. ISSN: 1069 - 4875.

Subject: 33.3245, Calls: Computational Linguistics, Discourse Analysis, General Linguistics/Switzerland

Moderators:

Editor for this issue: Everett Green <everett at linguistlist.org>
================================================================


Date: Tue, 25 Oct 2022 08:56:23
From: Teodora Vukovic [teodora.vukovic2 at uzh.ch]
Subject: Computational and Quantitative Approaches to Multimodal Video Analysis

 
Full Title: Computational and Quantitative Approaches to Multimodal Video Analysis 
Short Title: CAMVA 2023 

Date: 22-Jun-2023 - 23-Jun-2023
Location: Zurich, Switzerland 
Contact Person: Teodora Vuković
Meeting Email: teodora.vukovic2 at uzh.ch
Web Site: https://www.liri.uzh.ch/en/events/CAMVA-2023.html 

Linguistic Field(s): Computational Linguistics; Discourse Analysis; General Linguistics 

Subject Language(s): English (eng)

Call Deadline: 11-Dec-2022 

Meeting Description:

The CAMVA 2023 workshop aims to connect theoretical and qualitative
approaches to multimodal analysis with computational and quantitative
methods, and to address the challenges that arise at this intersection. It
will discuss the possibilities of using existing software, or building novel
tools, for multimodal analysis in linguistics and other fields, while keeping
in mind established research principles in Conversation Analysis and
Multimodal Analysis in general.


Call for Papers:

Whereas other branches of linguistics and sociology have undergone a
significant computational and quantitative transformation over the last
decades, Conversation Analysis (CA) and Multimodal Interaction Analysis remain
predominantly qualitative fields of research to this day. In part, this is due
to core tenets of the field, such as the emphasis on using emic categories and
adhering to the sequentiality of an interaction. In addition, the local,
contextually embedded accomplishment of actions in interaction makes
quantitative multimodal analyses of interactions a tricky undertaking. In
part, it may also be due to the fact that the extensive transcription and
annotation of large video corpora is extremely time-consuming. Nonetheless, a
few voices within CA have raised the question of whether quantitative analyses
could also be used in CA research (Stivers 2015). These calls coincide with an
increase in computational models that can be applied to video recordings of
human interaction and beyond, and that can greatly facilitate or even fully
automate the process of annotation.

Computer Vision tools can be used to recognize and categorize embodied
elements of communication, such as gestures or facial expressions, as well as
to demarcate environmental features, such as the background or furniture,
distances between participants in an interaction, and much more. Automatic
speech recognition tools have become increasingly precise and reliable, even
in dealing with the challenges of spoken or non-standard language. There is a
rich variety of sophisticated Natural Language Processing tools that can label
grammar, mood, topics, narrative sequences, etc. Furthermore, fields such as
Corpus Linguistics have developed elaborate methods to process, query and
analyse linguistic data quantitatively in order to derive data-driven trends
and insights. This is why we would like to raise the question of whether and
how quantitative methods could be used effectively in CA and interactional
linguistics in order to investigate human interaction from a multimodal
perspective.

In the workshop, we would like to support an exchange of ideas and approaches
that could expand our understanding of how computational methods could
complement qualitative analyses, and of how computational approaches could
benefit from theoretical insights. We therefore invite empirical as well as
theoretical contributions that describe or reflect on the use of quantitative
or computational tools in the context of multimodal analyses of interaction.
These can cover human interactions involving speech, embodied conduct and
sign language, including film and documentary recordings.

The workshop will consist of two presentation sessions and a panel discussion
between invited experts representing theoretical and computational approaches.

Please send an abstract of no more than 500 words (excluding bibliography,
figures and tables). The deadline for submission is December 11, 2022.
Notifications of acceptance will be sent out by January 15, 2023.

In order to guarantee diversity, one person may be the first author of only
one submission and co-author of one other submission.

Please submit your abstracts via EasyChair
(https://easychair.org/conferences/?conf=camva2023).

Organizers: VIAN-DH, URPP Language and Space, Linguistic Research
Infrastructure (University of Zurich)

Funding: URPP Language and Space (University of Zurich)




------------------------------------------------------------------------------



----------------------------------------------------------
LINGUIST List: Vol-33-3245	
----------------------------------------------------------




