31.800, Calls: Comp Ling, Phonetics, Phonology/Canada
The LINGUIST List
linguist at listserv.linguistlist.org
Tue Feb 25 21:19:09 UTC 2020
LINGUIST List: Vol-31-800. Tue Feb 25 2020. ISSN: 1069-4875.
Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Jeremy Coburn
Managing Editor: Becca Morris
Team: Helen Aristar-Dry, Everett Green, Sarah Robinson, Peace Han, Nils Hjortnaes, Yiwen Zhang, Julian Dietrich
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org
Homepage: http://linguistlist.org
Please support the LL editors and operation with a donation at:
https://funddrive.linguistlist.org/donate/
Editor for this issue: Lauren Perkins <lauren at linguistlist.org>
================================================================
Date: Tue, 25 Feb 2020 16:18:26
From: Tomas Lentz [lentz at uva.nl]
Subject: Neural network models for articulatory gestures
Full Title: Neural network models for articulatory gestures
Short Title: NNArt
Date: 09-Jul-2020 - 09-Jul-2020
Location: Vancouver, BC, Canada
Contact Person: Tomas Lentz
Meeting Email: lentz at uva.nl
Web Site: https://staff.science.uva.nl/t.o.lentz/nnart/
Linguistic Field(s): Computational Linguistics; Phonetics; Phonology
Call Deadline: 15-Mar-2020
Meeting Description:
This workshop (a satellite to LabPhon 17, held on the day after the main
conference, 9 July 2020, 13:30-17:00) aims to bring together researchers
interested in articulation and computational modelling, especially neural
networks.
Articulation has been formalised in terms of dynamic articulatory gestures,
i.e., target-driven patterns of articulator movements (e.g., Browman &
Goldstein, 1986). Such a pattern unfolds in time and space and could therefore
also be seen as a sequence of analytically relevant articulatory landmarks,
such as the timepoints of peak velocity and target achievement. Seeing such
sequences as sequences of vectors (of spatial coordinates) makes them
potentially learnable with algorithms for sequence modelling.
Current developments in machine learning offer greatly improved power for
sequence learning and prediction. Recurrent Neural Networks (RNNs) and their
extension, Long Short-Term Memory networks (LSTMs; Hochreiter & Schmidhuber,
1997), allow efficient training over short and even long time intervals (Gers,
Schraudolph & Schmidhuber, 2002). Such networks have been used for acoustic
modelling, but their application in articulation research has mainly been
limited to ultrasound data, and less to the classification of two-dimensional
articulator movement curves as obtained from EMA or from ROI analyses of MRI
data. However, promising approaches to acoustics-to-EMA mapping tentatively
suggest that articulatory movements allow meaningful modelling with deep
neural networks (e.g., Liu et al., 2005; Chartier et al., 2018).
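The idea of treating a gesture as a sequence of coordinate vectors can be
sketched with a toy recurrent model. The sketch below is purely illustrative
and not drawn from the workshop materials: the gesture trajectory, the
dimensions, and the (untrained, randomly initialised) vanilla-RNN weights are
all assumptions; a real study would train, e.g., an LSTM on EMA or MRI-derived
trajectories.

```python
import numpy as np

# Toy sketch: an articulatory gesture as a sequence of 2D coordinate
# vectors (e.g., x/y positions of a tongue-tip sensor from EMA), run
# through a minimal vanilla RNN. All sizes and weights are illustrative.

rng = np.random.default_rng(0)

T, input_dim, hidden_dim = 20, 2, 8  # 20 timepoints, 2D coordinates

# A fake gesture: smooth movement toward a spatial target.
t = np.linspace(0.0, 1.0, T)
gesture = np.stack([np.sin(np.pi * t), 1.0 - np.cos(np.pi * t)], axis=1)  # (T, 2)

# Randomly initialised parameters (untrained; stands in for a trained LSTM).
W_xh = rng.normal(scale=0.5, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.5, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_forward(xs):
    """Run a vanilla RNN over a (T, input_dim) sequence; return all hidden states."""
    h = np.zeros(hidden_dim)
    states = []
    for x in xs:
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        states.append(h)
    return np.stack(states)  # (T, hidden_dim)

hidden = rnn_forward(gesture)
print(hidden.shape)  # one hidden vector per articulatory timepoint
```

The final hidden state (or the full state sequence) could then feed a
classifier over gesture types or landmark labels, which is the kind of
modelling the workshop invites.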
Call for Papers:
We call for abstracts that bring together articulation data and computational
modelling, especially neural network modelling. We welcome any abstract,
including tentative work, on the possibility of using neural and/or deep
computational modelling for articulatory data. Suggestions for topics are:
- Whether it is possible to capture invariants, i.e., language-independent,
predictable patterns that apply to all articulation
- Whether transfer learning is possible, i.e., whether a network trained on
the articulatory features of one speaker (and, ultimately, one language) can
be mapped onto the patterns of another speaker (or language)
- Whether annotation of gestures can be aided by generating the most likely
gesture structures, analogous to the derivation of articulation from acoustics
(e.g., Mitra et al., 2010)
- Whether diagnostic classification is possible on networks that model
articulation, analogous to, e.g., the detection of counterparts to
compositionality in a model of arithmetic grammar by Hupkes & Zuidema (2017)
Please use the link to EasyChair
(https://easychair.org/my/conference?conf=nnart2020) to submit abstracts.
Tentative work is more than welcome! As with the main conference, abstracts
should be written in English and not exceed one page of text. References,
examples and/or figures can optionally be included on a second page. Submitted
abstracts must be in .pdf format, with Times New Roman font, size 12, 1 inch
margins and single spacing. We do not require anonymous abstracts.
Website for more details: https://staff.science.uva.nl/t.o.lentz/nnart/
------------------------------------------------------------------------------