35.1283, Calls: Computational Linguistics / Linguistics Vanguard (Jrnl)

The LINGUIST List linguist at listserv.linguistlist.org
Tue Apr 23 14:05:07 UTC 2024


LINGUIST List: Vol-35-1283. Tue Apr 23 2024. ISSN: 1069-4875.

Subject: 35.1283, Calls: Computational Linguistics / Linguistics Vanguard (Jrnl)

Moderators: Malgorzata E. Cavar, Francis Tyers (linguist at linguistlist.org)
Managing Editor: Justin Fuller
Team: Helen Aristar-Dry, Steven Franks, Everett Green, Daniel Swanson, Maria Lucero Guillen Puon, Zackary Leech, Lynzie Coburn, Natasha Singh, Erin Steitz
Jobs: jobs at linguistlist.org | Conferences: callconf at linguistlist.org | Pubs: pubs at linguistlist.org

Homepage: http://linguistlist.org

Please support the LL editors and operation with a donation at:
           https://funddrive.linguistlist.org/donate/

Editor for this issue: Zackary Leech <zleech at linguistlist.org>

LINGUIST List is hosted by Indiana University College of Arts and Sciences.
================================================================


Date: 22-Apr-2024
From: Vsevolod Kapatsinski [vkapatsi at uoregon.edu]
Subject: Computational Linguistics / Linguistics Vanguard (Jrnl)


Call for Papers:

Special collection: Implications of Neural Networks and other Learning
Models for Linguistic Theory

Managing Editor: Vsevolod Kapatsinski (University of Oregon)
Co-editor: Gašper Beguš (University of California, Berkeley)

This Linguistics Vanguard special collection is motivated by recent
breakthroughs in the application of neural networks to language data.
Linguistics Vanguard publishes short articles of 3000-4000 words on
cutting-edge topics in linguistics and neighboring areas. The
inclusion of multimodal, interactive content (including, but not
limited to, audio and video, images, maps, software code, raw data,
hyperlinks to external databases, and any other media enhancing the
traditional written word) is particularly encouraged. Special
collection contributors should follow the journal's general submission
guidelines
(https://www.degruyter.com/journal/key/lingvan/html#overview).

Overview of the special issue topic:

Neural network models of language have been around for several
decades and became the de facto standard in psycholinguistics by the
1990s. There have also been several important attempts to incorporate
neural network insights into linguistic theory (e.g., Bates &
MacWhinney, 1989; Bybee, 1985; Bybee & McClelland, 2005; Heitmeier et
al., 2021; Smolensky & Legendre, 2006). Until recently, however,
neural network models did not approximate the generative capacity of a
human speaker or writer. This changed in the last few years, when
large language models (e.g., the GPT family), embodying largely the
same principles but trained on vastly larger amounts of data, achieved
a breakthrough: the language they generate is now usually
indistinguishable from language generated by a human. The
accomplishments of these models have led both to calls for further
integration between linguistic theory and neural networks (Beguš,
2020; Kapatsinski, 2023; Kirov & Cotterell, 2018; Pater, 2019;
Piantadosi, 2023) and to criticism suggesting that the way they work
is fundamentally unlike human language learning and processing (e.g.,
Bender et al., 2021; Chomsky et al., 2023).

The present special collection for Linguistics Vanguard aims to foster
a productive discussion between linguists, cognitive scientists,
neural network modelers, neuroscientists, and proponents of other
approaches to learning theory (e.g., Bayesian probabilistic inference,
instance-based lazy learning, reinforcement learning, active
inference; Jamieson et al., 2022; Tenenbaum et al., 2011; Sajid et
al., 2021). We call for contributions addressing the central question
of linguistic theory (Why are languages the way they are?) by means
of a computational modeling approach. Reflections and position papers
motivating the best ways to approach this question computationally are
also welcome.

Contributions are encouraged to compare different models trained on
the same data approximating human experience; insightful position
papers will also be accepted. Contributions should explicitly address
the ways in which the training data of the model(s) they discuss
resembles and differs from human experience. Contributions can involve
either hypothesis testing via minimally different versions of the same
well-motivated model (e.g., Kapatsinski, 2023), or comparisons of
state-of-the-art models from different intellectual traditions (e.g.,
Albright & Hayes, 2003; Sajid et al., 2021) on how well they answer
the question above.
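
To make the first option concrete, here is a minimal sketch (not part
of the call itself) of hypothesis testing via minimally different
versions of the same model: two bigram language models, identical
except for a single assumption (add-one smoothing versus none), are
trained on the same toy corpus and compared on held-out data. The
corpus, sentences, and function names are illustrative only.

    import math
    from collections import Counter

    def train_bigram(corpus, smooth):
        # Count bigrams and context unigrams over the training sentences.
        unigrams, bigrams = Counter(), Counter()
        vocab = {"<s>", "</s>"}
        for sent in corpus:
            tokens = ["<s>"] + sent.split() + ["</s>"]
            vocab.update(tokens)
            unigrams.update(tokens[:-1])      # contexts only
            bigrams.update(zip(tokens, tokens[1:]))
        return unigrams, bigrams, len(vocab), smooth

    def log_prob(model, sent):
        # Log probability of a held-out sentence under the model.
        unigrams, bigrams, v, smooth = model
        tokens = ["<s>"] + sent.split() + ["</s>"]
        lp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            k = 1 if smooth else 0            # the single manipulated assumption
            num = bigrams[(prev, cur)] + k
            if num == 0:
                return float("-inf")          # unsmoothed model rules the sentence out
            lp += math.log(num / (unigrams[prev] + k * v))
        return lp

    train = ["the cat sat", "the dog sat", "a cat ran"]   # toy training data
    held_out = "the dog ran"                              # contains an unseen bigram
    for name, smooth in [("unsmoothed", False), ("add-one", True)]:
        model = train_bigram(train, smooth)
        print(name, log_prob(model, held_out))

Because the data, architecture, and evaluation are held constant, any
difference in the held-out scores is attributable to the one
manipulated assumption, which is the logic of the minimally different
model comparisons the call describes.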

Timeline:

Abstract due by July 1, 2024
Notification of authors (full paper invitations) by August 1, 2024
Full paper due by November 1, 2024
Reviews to be completed by January 31, 2025
Publication by March 2025

For more information and to submit an abstract, please visit
https://blogs.uoregon.edu/ublab/lmlt/



------------------------------------------------------------------------------

Please consider donating to the Linguist List https://give.myiu.org/iu-bloomington/I320011968.html


LINGUIST List is supported by the following publishers:

Cambridge University Press http://www.cambridge.org/linguistics

De Gruyter Mouton https://cloud.newsletter.degruyter.com/mouton

Equinox Publishing Ltd http://www.equinoxpub.com/

John Benjamins http://www.benjamins.com/

Lincom GmbH https://lincom-shop.eu/

Multilingual Matters http://www.multilingual-matters.com/

Narr Francke Attempto Verlag GmbH + Co. KG http://www.narr.de/

Wiley http://www.wiley.com


----------------------------------------------------------
LINGUIST List: Vol-35-1283
----------------------------------------------------------
