
LINGUIST List: Vol-31-1958. Mon Jun 15 2020. ISSN: 1069-4875.

Subject: 31.1958, Confs: Comp Ling, Phonetics, Phonology/Online

Moderator: Malgorzata E. Cavar (linguist at linguistlist.org)
Student Moderator: Jeremy Coburn
Managing Editor: Becca Morris
Team: Helen Aristar-Dry, Everett Green, Sarah Robinson, Lauren Perkins, Nils Hjortnaes, Yiwen Zhang, Joshua Sims
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Lauren Perkins <lauren at linguistlist.org>
================================================================


Date: Mon, 15 Jun 2020 10:49:54
From: Tomas Lentz [lentz at uva.nl]
Subject: Neural network models for articulatory gestures

 
Neural network models for articulatory gestures 
Short Title: NNArt 

Date: 09-Jul-2020 - 09-Jul-2020 
Location: Vancouver, BC, Canada 
Contact: Tomas Lentz 
Contact Email: lentz at uva.nl 
Meeting URL: https://staff.science.uva.nl/t.o.lentz/nnart/ 

Linguistic Field(s): Computational Linguistics; Phonetics; Phonology 

Meeting Description: 

This workshop (a satellite to LabPhon 17, held the day after the main
conference: 9 July 2020, 1:30pm-5:00pm) aims to bring together researchers
interested in articulation and computational modelling, especially neural
networks.

Articulation has been formalised in terms of dynamic articulatory gestures,
i.e., target-driven patterns of articulator movements (e.g., Browman &
Goldstein, 1986). Such a pattern unfolds in time and space and can therefore
also be seen as a sequence of analytically relevant articulatory landmarks,
such as the timepoints of peak velocity and target achievement. Seeing such
sequences as sequences of vectors (of spatial coordinates) makes them
potentially learnable with algorithms for sequence modelling.

Current developments in machine learning offer greatly improved power for
sequence learning and prediction. Recurrent neural networks (RNNs), and in
particular their extension Long Short-Term Memory (LSTM; Hochreiter &
Schmidhuber, 1997), allow efficient training over short and even long time
intervals (Gers, Schraudolph & Schmidhuber, 2002). Such networks have been
used for acoustic modelling, but their application in articulation research
has mainly been limited to ultrasound data, and less to the classification of
two-dimensional articulator movement curves as obtained from EMA or from ROI
analyses of MRI data.
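
To make the modelling idea concrete, here is a minimal sketch (assuming
PyTorch and random stand-in data; it is not any presenter's model) of an LSTM
that classifies two-dimensional movement curves such as EMA trajectories:

  import torch
  import torch.nn as nn

  class GestureClassifier(nn.Module):
      # LSTM over (x, y) curves; the final hidden state, a fixed-size
      # summary of the whole curve, feeds a linear classifier.
      def __init__(self, n_features=2, hidden=64, n_classes=3):
          super().__init__()
          self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
          self.head = nn.Linear(hidden, n_classes)

      def forward(self, curves):             # curves: (batch, time, features)
          _, (h_n, _) = self.lstm(curves)    # h_n: (layers, batch, hidden)
          return self.head(h_n[-1])          # logits: (batch, n_classes)

  model = GestureClassifier()
  batch = torch.randn(8, 100, 2)             # 8 stand-in curves, 100 samples
  logits = model(batch)                      # shape: (8, 3)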

However, promising approaches to acoustics-to-EMA mapping tentatively suggest
that articulatory movements can be meaningfully modelled using deep neural
networks (e.g., Liu et al., 2005; Chartier et al., 2018).
 

Program Information: 

NNArt offers three pre-recorded 30-minute presentations, available during
(and through) the main LabPhon conference, and one online discussion session
on 9 July, 12:45pm-1:30pm (Vancouver time).

Presentations (prerecorded): 
-  Sam Tilsen, Learning gestural parameters and activation in an RNN
implementation of Task Dynamics
-  Sam Kirkham, Georgina Brown and Emily Gorman, Uncovering phonological
invariance and speaker individuality in articulatory gestures using machine
learning
-  Marco Silva Fonseca and Brennan Dell, Modelling optional sound variation
and obligatory gesture assimilation using LSTM RNNs

Note: This workshop is accessible to registered attendees of the online
conference LabPhon 17. Due to the worldwide Covid-19 pandemic, both the main
conference and our satellite will go virtual instead of taking place in
Vancouver. The presentations will be made available as video files.

Discussion session (live, via Zoom):
You can send questions on the presentations, or topics for discussion, to the
organizers (contact details above), or put them to the presenters directly in
the online session. Please register for both the main conference and our
workshop to be kept up to date and to receive further information (e.g., the
Zoom link) for the discussion session.





------------------------------------------------------------------------------



----------------------------------------------------------
LINGUIST List: Vol-31-1958	
----------------------------------------------------------





