LINGUIST List: Vol-27-825. Mon Feb 15 2016. ISSN: 1069-4875.

Subject: 27.825, Diss: Computational Ling, Syntax: Tom S. Juzek: 'Acceptability Judgement Tasks and Grammatical Theory'

Moderators: linguist at linguistlist.org (Damir Cavar, Malgorzata E. Cavar)
Reviews: reviews at linguistlist.org (Anthony Aristar, Helen Aristar-Dry, Sara Couture)
Homepage: http://linguistlist.org

*****************    LINGUIST List Support    *****************
                   25 years of LINGUIST List!
Please support the LL editors and operation with a donation at:
           http://funddrive.linguistlist.org/donate/

Editor for this issue: Ashley Parker <ashley at linguistlist.org>
================================================================


Date: Mon, 15 Feb 2016 10:50:14
From: Tom Juzek [tom.juzek at googlemail.com]
Subject: Acceptability Judgement Tasks and Grammatical Theory

 
Institution: University of Oxford 
Program: D.Phil. in Linguistics 
Dissertation Status: Completed 
Degree Date: 2016 

Author: Tom S. Juzek

Dissertation Title: Acceptability Judgement Tasks and Grammatical Theory 

Dissertation URL:  http://ora.ox.ac.uk/objects/uuid:b276ec98-5f65-468b-b481-f3d9356d86a2

Linguistic Field(s): Computational Linguistics
                     Syntax


Dissertation Director(s):
Mary Dalrymple
Greg Kochanski

Dissertation Abstract:

This thesis addresses several methodological questions about acceptability
judgement tasks (AJTs).

In Chapter 1, we compare the prevalent informal method of syntactic enquiry,
researcher introspection, to formal judgement tasks. We randomly sample 200
sentences from Linguistic Inquiry and compare the original author judgements
to online AJT ratings. Sprouse et al. (2013) provide a similar comparison,
but they limit their analysis to sentence pairs and to extreme cases. We
argue that a comparison at large, i.e. one involving all items, is more
sensible. We find only a moderate match between informal author judgements
and formal online ratings, and we argue that the formal judgements are the
more reliable of the two. That many syntactic theories rest on questionable
informal data thus calls the adequacy of those theories into question.
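
As an illustration of such an at-large comparison, here is a minimal Python
sketch; the coding scheme and all values are invented for this example and
are not taken from the thesis:

    # Illustrative only: correlating informal author judgements with
    # formal online ratings across all items, not just sentence pairs.
    from scipy.stats import spearmanr

    # Author judgements coded numerically, e.g. * = 0, ? = 1, OK = 2
    author_judgements = [2, 0, 1, 2, 0, 2, 1, 0]
    # Mean online AJT rating per item, e.g. on a 1-7 scale
    mean_ratings = [6.1, 2.4, 4.9, 5.5, 3.8, 6.3, 3.1, 2.0]

    # Spearman's rho, since the two scales are ordinal and not
    # directly comparable
    rho, p = spearmanr(author_judgements, mean_ratings)
    print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")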

In Chapter 2, we test whether ratings for constructions from spoken language
and constructions from written language differ when the stimuli are presented
as speech vs. as text, and when they are phrased informally vs. formally. We
analyse the results with a linear mixed-effects (LME) model and find that
neither mode of presentation nor formality is a significant factor. Our
results suggest that a speaker's grammatical intuition is fairly robust.
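
A minimal sketch of such an analysis, using Python's statsmodels; the column
names and model structure are assumptions, as the abstract does not specify
them:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one rating per row, with columns
    # rating, mode (speech/text), formality (informal/formal), participant
    df = pd.read_csv("ratings.csv")

    # Fixed effects for mode of presentation and formality,
    # random intercepts per participant
    model = smf.mixedlm("rating ~ mode + formality", df,
                        groups=df["participant"])
    result = model.fit()
    print(result.summary())  # inspect the mode and formality coefficients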

In Chapter 3, we quantitatively compare raw AJT ratings to their Z-score and
rank transformations. For our analysis, we resample the data and test for
significant differences in statistical power. We find that Z-scores and
ranked data are more powerful than raw data across the most common
measurement methods.
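
The two transformations, together with a toy resampling-based power
estimate, might be sketched as follows; all names and data handling are
illustrative, and the thesis's actual procedure is not reproduced here:

    import numpy as np
    from scipy.stats import mannwhitneyu, rankdata

    rng = np.random.default_rng(0)

    def zscore_by_participant(ratings):
        # ratings: 2-D array, rows = participants, columns = items
        means = ratings.mean(axis=1, keepdims=True)
        sds = ratings.std(axis=1, ddof=1, keepdims=True)
        return (ratings - means) / sds

    def rank_by_participant(ratings):
        # Rank each participant's ratings; ties receive average ranks
        return np.apply_along_axis(rankdata, 1, ratings)

    def estimated_power(a, b, n=20, n_resamples=1000, alpha=0.05):
        # Fraction of resampled comparisons that reach significance
        hits = sum(
            mannwhitneyu(rng.choice(a, n, replace=True),
                         rng.choice(b, n, replace=True)).pvalue < alpha
            for _ in range(n_resamples)
        )
        return hits / n_resamples

Running estimated_power on raw, Z-scored, and ranked versions of the same two
conditions would then show which transformation detects the difference most
often.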

Chapter 4 examines issues surrounding a common equivalence test, the TOST
(two one-sided tests) procedure. It has long been unclear how to set its
controlling parameter, the equivalence bound δ. Based on data simulations, we
outline a way to set δ objectively. Further results suggest that our
guidelines hold for any kind of data.
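
For readers unfamiliar with the test, here is a generic Python sketch of the
TOST logic, with δ as the equivalence bound; this is not the thesis's
simulation-based procedure for setting δ:

    from scipy.stats import ttest_ind

    def tost(x, y, delta, alpha=0.05):
        # Two one-sided t-tests: reject both to conclude that the
        # group means differ by less than delta in either direction.
        # H1 of test 1: mean(x) - mean(y) > -delta
        _, p_lower = ttest_ind(x, [v - delta for v in y],
                               alternative="greater")
        # H1 of test 2: mean(x) - mean(y) < +delta
        _, p_upper = ttest_ind(x, [v + delta for v in y],
                               alternative="less")
        p = max(p_lower, p_upper)
        return p, p < alpha  # True: equivalent within +/- delta

The smaller δ is set, the harder it becomes to establish equivalence, which
is why a principled way of choosing it matters.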

The thesis concludes with an appendix on non-cooperative participants in AJTs.



