[Lingtyp] spectrograms in linguistic description and for language comparison

Matthew Dryer dryer at buffalo.edu
Sun Dec 18 04:53:27 UTC 2022


Randy,

[I sent this out right after my previous email, but I didn't receive a copy, which makes me think that for some reason it didn't reach lingtyp, so I am sending it again.]

To illustrate what I meant in the email I just sent, try using Google Translate to translate the following:

My father has ten eyes
My father is younger than me.
I was born in 1492.
I ate ten large elephants for dinner.

Google Translate will have no idea that there is anything odd about these sentences.
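[A minimal sketch of the experiment Matthew describes, using an open-source MT model as a stand-in for Google Translate; the model choice (Helsinki-NLP/opus-mt-en-de) and the transformers library are assumptions for illustration, not part of the original post.]

```python
# Stand-in for the Google Translate experiment: an open MT model translates
# the anomalous sentences as fluently as an ordinary one, with nothing
# flagging the semantic oddity. Requires transformers and sentencepiece.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

sentences = [
    "My father has ten eyes.",
    "My father is younger than me.",
    "I was born in 1492.",
    "I ate ten large elephants for dinner.",
]

for s in sentences:
    translation = translator(s)[0]["translation_text"]
    print(f"{s}  ->  {translation}")
```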

I don't think you want to use a system like that as the basis for an argument about how to collect linguistic data.

Matthew


From: Juergen Bohnemeyer <jb77 at buffalo.edu>
Date: Saturday, December 17, 2022 at 11:49 PM
To: Matthew Dryer <dryer at buffalo.edu>, Randy J. LaPolla <randy.lapolla at gmail.com>, Adam Singerman <adamsingerman at gmail.com>
Cc: lingtyp at listserv.linguistlist.org <lingtyp at listserv.linguistlist.org>
Subject: Re: [Lingtyp] spectrograms in linguistic description and for language comparison
Dear all - Matthew's post made me think of a possible interpretation of Randy's point about induction where previously I had none; that is, I had failed to understand why Randy was invoking induction and AI. Now, on the off chance that that interpretation is in the right ballpark, I'd like to add the following to Matthew's response:

Computational linguists are currently very much debating, and empirically researching, how much neural networks really know about language. This is not merely a matter of how that knowledge is represented, i.e., in the form of probability vectors rather than rules. Rather, it appears that this knowledge is often rather superficial. Transformers are easily tricked into making mistakes that human speakers, and even human learners, don't make. See for example here, and the works cited therein:

http://www.acsu.buffalo.edu/~rchaves/lookatthat.pdf

Note that this work is not at all coming from a rationalist/innatist bent. Rather, the point is that it seems premature to consider contemporary deep learning networks adequate representations of human linguistic knowledge. I'm persuaded that we may eventually (and perhaps sooner rather than later) see the evolution of artificial systems that are able to acquire such knowledge. But we don't yet seem to be able to predict how exactly such systems will work.
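[As a concrete illustration of how such knowledge gets probed, here is a generic minimal-pair sketch; it is not the specific manipulations used by Chaves & Richter, and GPT-2, the transformers library, and the example pair are assumptions for illustration.]

```python
# Compare a language model's average per-token loss (negative log-likelihood)
# on an acceptable sentence and a matched unacceptable one. A model that has
# learned the relevant generalization should score the acceptable one lower.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_nll(sentence: str) -> float:
    """Average negative log-likelihood per token under the model."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Example agreement pair (invented for illustration); lower = more probable.
pair = ["The keys to the cabinet are on the table.",
        "The keys to the cabinet is on the table."]
for s in pair:
    print(f"{avg_nll(s):6.3f}  {s}")
```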

I'm willing to bet that they will rely on statistical learning, mind you, as we humans do. If this is where Randy was going by invoking induction, we are not in disagreement up to that point.

However, it seems to me that once artificial systems have evolved to the point where they can no longer be tricked into non-human-like generalizations by manipulations such as those used in the Chaves & Richter paper, their linguistic knowledge may well be adequately described in terms of traditional grammar rules, even if that knowledge is not actually represented internally in the form of such rules. Perhaps those will be rules stated in probabilistic rather than categorical form, a format that linguists and psychologists may one day also apply to the description of human linguistic knowledge.

Best -- Juergen

Juergen Bohnemeyer (He/Him)
Professor, Department of Linguistics
University at Buffalo

Office: 642 Baldy Hall, UB North Campus
Mailing address: 609 Baldy Hall, Buffalo, NY 14260
Phone: (716) 645 0127
Fax: (716) 645 3825
Email: jb77 at buffalo.edu
Web: http://www.acsu.buffalo.edu/~jb77/

Office hours Tu/Th 3:30-4:30pm in 642 Baldy or via Zoom (Meeting ID 585 520 2411; Passcode Hoorheh)

There's A Crack In Everything - That's How The Light Gets In
(Leonard Cohen)
--


From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Matthew Dryer <dryer at buffalo.edu>
Date: Saturday, December 17, 2022 at 9:44 PM
To: Randy J. LaPolla <randy.lapolla at gmail.com>, Adam Singerman <adamsingerman at gmail.com>
Cc: lingtyp at listserv.linguistlist.org <lingtyp at listserv.linguistlist.org>
Subject: Re: [Lingtyp] spectrograms in linguistic description and for language comparison
Randy,

(A belated response, but for the best of reasons for Lingtyp: I have been too busy at ALT to read my email the past few days.)

Randy, you don't want to use this experience with computers identifying sentences to support your case. The reason that the symbolic approach in AI didn't work is that it proved impossible to identify sentences without the system having access to the same world knowledge, the same knowledge of context, and the same knowledge of the addressee that people use in interpreting sentences. In other words, the problem was that it is not possible to interpret sentences in context without all that. In order for the symbolic approach to work, the system would have to have all the knowledge of the world and the context that real people have.

The alternative that replaced it used induction on interpreted data to identify transitional probabilities and thereby identify words. But what that means is that the system only identifies what the words are; it cannot interpret what sentences mean in context. I.e., the system doesn't do what real people do. That's not an argument for induction.
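[A toy sketch of the inductive idea Matthew is describing: segment an unsegmented syllable stream by transitional probabilities. The mini-"corpus", the three invented words, and the 0.75 threshold are all assumptions for illustration.]

```python
# Toy word segmentation by transitional probability (TP). Each syllable of
# the invented stream belongs to one of three "words" (pabiku, tibudo,
# golatu); within-word TPs come out at 1.0 and cross-word TPs around 0.67,
# so a 0.75 threshold recovers the word boundaries.
from collections import Counter

stream = ("pa bi ku ti bu do go la tu pa bi ku go la tu "
          "ti bu do go la tu pa bi ku ti bu do").split()

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def tp(a: str, b: str) -> float:
    """Forward transitional probability P(b | a)."""
    return bigrams[(a, b)] / unigrams[a]

segmented = [stream[0]]
for a, b in zip(stream, stream[1:]):
    # Posit a word boundary wherever the TP dips below the threshold.
    segmented.append(("| " if tp(a, b) < 0.75 else "") + b)
print(" ".join(segmented))
# pa bi ku | ti bu do | go la tu | pa bi ku | go la tu | ti bu do | ...
```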

Matthew


From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Randy J. LaPolla <randy.lapolla at gmail.com>
Date: Wednesday, December 14, 2022 at 12:49 PM
To: Adam Singerman <adamsingerman at gmail.com>
Cc: lingtyp at listserv.linguistlist.org <lingtyp at listserv.linguistlist.org>
Subject: Re: [Lingtyp] spectrograms in linguistic description and for language comparison
PS: One thing I forgot to mention about induction:
If you know about the history of AI: in the 1970s and 1980s the main paradigm was symbolic AI, which is rule-based. Researchers worked very hard for many years to get those systems to parse even simple sentences, but once it was clear to all that the rule-based approach was a failure, they experimented with a purely inductive approach, and the difference in output convinced them right away that this was the way to go. After those experiments, Jeff Dean, the head of Google's Brain Lab, famously said, "We don't need grammar". Google Translate is as good as it is now because of this switch to an inductive method. Of course those of us doing fieldwork will not have the large database Google has or the speed of the machines, but the principle is the same: induction can get you there.
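[For readers unfamiliar with what "rule-based" meant in practice, here is a toy sketch of symbolic parsing with a hand-written context-free grammar; the grammar is invented for illustration and assumes the nltk package.]

```python
# 1970s/80s-style rule-based parsing in miniature: sentences the hand-written
# rules anticipate get a parse tree; anything else simply gets no analysis.
import nltk

grammar = nltk.CFG.fromstring("""
  S  -> NP VP
  NP -> Det N
  VP -> V NP | V
  Det -> 'the' | 'a'
  N  -> 'dog' | 'cat'
  V  -> 'chased' | 'slept'
""")
parser = nltk.ChartParser(grammar)

for sentence in ["the dog chased a cat", "a cat the dog chased"]:
    trees = list(parser.parse(sentence.split()))
    if trees:
        print(sentence, "->", trees[0])
    else:
        print(sentence, "-> NO PARSE (the rules do not cover this order)")
```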

Randy
--
Professor Randy J. LaPolla (罗仁地), PhD FAHA
Center for Language Sciences
Institute for Advanced Studies in Humanities and Social Sciences
Beijing Normal University at Zhuhai
A302, Muduo Building, #18 Jinfeng Road, Zhuhai City, Guangdong, China

https://randylapolla.info
ORCID ID: https://orcid.org/0000-0002-6100-6196

Óʱࣺ519000
¹ã¶«Ê¡Ö麣ÊÐÌƼÒÍåÕò½ð·ï·18ºÅľîìÂ¥A302
±±¾©Ê¦·¶´óѧÖ麣УÇø
ÈËÎĺÍÉç»á¿Æѧ¸ßµÈÑо¿Ôº
ÓïÑÔ¿ÆѧÑо¿ÖÐÐÄ

On 14 Dec 2022, at 11:15 PM, Randy J. LaPolla <randy.lapolla at gmail.com> wrote:

Dear Adam,
Sorry to just be getting back to you on this.

We have very different conceptions of language and goals in doing linguistics (your interest in "control vs raising structures, pied-piping, islands, gaps in inflectional paradigms, etc." very much reflects this, as these are not things that concern me). This is what my blog post is about: how different the choices can be. Saying I am "wrong" implies there is only one right way to do linguistics. Not to be rude, but as Popper said, "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve." (Karl Popper, Objective Knowledge: An Evolutionary Approach, Clarendon Press, 1972, p. 266)
All of our theories and methodologies are subjective attempts to achieve some goal; they are really heuristics. As there are different ways of looking at the same phenomenon, and for different purposes, there is no right and wrong, only more or less useful ways of analysing the phenomena relative to some purpose; what counts as useful also depends on our assumptions, our definition of language, etc., as I discuss in the blog. Y. R. Chao and J. R. Firth both argued that there are different ways to analyse language depending on the language and your purposes, including, for example, not using a phoneme-type representation of the phonology but some different type. This is not just linguistics; in physics, for example, light can be understood as a wave or a particle depending on your purposes. And our understanding of the Universe has gone through many major changes, each one thought to be "right" at the time but later overthrown by a subsequent theory.

You assume that we can somehow "fully represent a given language's grammatical possibilities". As language is a complex adaptive system that is constantly changing, and is human behaviour and so not a finite thing, I don't think it will ever be possible to "fully represent a given language's grammatical possibilities". One problem I see with modern linguistics is that it does not acknowledge the tremendous diversity of usages within a single language, as the search has been for universals and for a single tight system, which even Charles Hockett (1967) said was a "wild goose chase". That is just for one language, never mind trying to do it for all languages, and this has led to linguistics missing so much of the diversity between languages.

The beauty of working inductively is that you are only responsible for what is in your data. You don't have to make broad generalisations about the language that in many cases turn out to be problematic; you just say this is what is and is not in my data. Of course, the more data you have, the stronger the generalisations you can make. I did not rule out using some stimuli such as the MPI sets, as these set up contexts that the speaker can talk about, but asking people to translate word lists or sentences will not give you useful data. What you will get back are the categories of the working language. But again, this is part of the problem. Too many linguists think that words are translatable and mean the same thing in different languages. This is easily shown to be false. Not only is the prototype of the cognitive category represented by the word different for different cultures (even different speakers), but the extension (the use of the word for different objects or situations) is also different. This is true of every word in a language. Humboldt knew this, and argued against Aristotle's view that all people have the same object in mind even if the word is different. Humboldt said no: even if we both look at the same horse we are seeing different things, as our cognitive categories are different.

All the best,
Randy

--
Professor Randy J. LaPolla (罗仁地), PhD FAHA
Center for Language Sciences
Institute for Advanced Studies in Humanities and Social Sciences
Beijing Normal University at Zhuhai
A302, Muduo Building, #18 Jinfeng Road, Zhuhai City, Guangdong, China

https://randylapolla.info
ORCID ID: https://orcid.org/0000-0002-6100-6196

Óʱࣺ519000
¹ã¶«Ê¡Ö麣ÊÐÌƼÒÍåÕò½ð·ï·18ºÅľîìÂ¥A302
±±¾©Ê¦·¶´óѧÖ麣УÇø
ÈËÎĺÍÉç»á¿Æѧ¸ßµÈÑо¿Ôº
ÓïÑÔ¿ÆѧÑо¿ÖÐÐÄ





On 11 Dec 2022, at 11:00 AM, Adam Singerman <adamsingerman at gmail.com> wrote:

I think Randy is wrong (sorry if this comes across as blunt) and so I
am writing, on a Saturday night no less, to voice a different view.

Working inductively from a corpus is great, but no corpus is ever
going to be large enough to fully represent a given language's
grammatical possibilities. If we limit ourselves to working
inductively from corpora then many basic questions about the languages
we research will go unanswered. From a corpus of natural data we
simply cannot know whether a given pattern is missing because the
corpus is finite (i.e., it's just a statistical accident that the
pattern isn't attested) or whether there's a genuine reason why the
pattern is not showing up (i.e., its non-attestation is principled).
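[A back-of-the-envelope sketch of this point; the occurrence rate and corpus sizes are invented for illustration. Even a fully grammatical pattern that occurs, say, once per 10,000 clauses has a good chance of being entirely absent from a fieldwork-sized corpus.]

```python
# If a fully possible construction occurs with probability p per clause, the
# chance of it being completely unattested in a corpus of n independent
# clauses is (1 - p)**n. Rate and corpus sizes invented for illustration.
p = 1 / 10_000                      # assume: once per 10,000 clauses
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} clauses: P(no attestation) = {(1 - p) ** n:.3f}")
# Roughly 0.905, 0.368, and 0.000045: a small corpus cannot by itself tell a
# principled gap from an accidental one.
```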

When I am writing up my research on Tuparí I always prioritize
non-elicited data (texts, in-person conversation, WhatsApp chats). But
interpreting and analyzing the non-elicited data requires making
reference to acceptability judgments. The prefix (e)tareman- is a
negative polarity item, and it always co-occurs with (and inside the
scope of) a negator morpheme. But the only way I can make this point
is by showing that speakers invariably reject tokens of (e)tareman-
without a licensing negator. Those rejected examples are by definition
not going to be present in any corpus of naturalistic speech, but they
tell me something crucial about what the structure of Tuparí does and
does not allow. If I limit myself to inductively working from a
corpus, fundamental facts about the prefix (e)tareman- and about
negation in Tuparí more broadly will be missed.
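[A sketch of what corpus inspection alone can and cannot establish here; the mini-corpus, the glossing, and the negator tag are entirely hypothetical, invented for illustration.]

```python
# What a corpus scan can show about (e)tareman-: every attested token
# co-occurs with a negator. What it cannot show: whether the unattested
# combination is impossible or just accidentally missing.
corpus = [
    ["NEG", "etareman-verb"],        # hypothetical clause: prefix + negator
    ["etareman-verb", "NEG", "x"],   # another licensed token
    ["verb", "x"],                   # clause with neither
]
NEGATORS = {"NEG"}                   # hypothetical negator tag

with_npi = [c for c in corpus if any("tareman" in w for w in c)]
licensed = [c for c in with_npi if any(w in NEGATORS for w in c)]

print(f"{len(licensed)} of {len(with_npi)} clauses containing (e)tareman- "
      "also contain a negator")
# A 100% co-occurrence rate is consistent with an NPI analysis, but only
# speakers' rejection of unlicensed examples shows the gap is principled.
```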

A lot of recent scholarship has made major strides towards improving
the methodology of collecting and interpreting acceptability
judgments. The formal semanticists who work on understudied languages
(here I am thinking of Judith Tonhauser, Lisa Matthewson, Ryan
Bochnak, Amy Rose Deal, Scott AnderBois) are extremely careful about
teasing apart utterances that are rejected because of some
morphosyntactic ill-formedness (i.e., ungrammaticality) versus ones
that are rejected because of semantic or pragmatic oddity. The
important point is that such teasing apart can be done, and the
descriptions and analyses that result from this work are richer than
what would result from a methodology that uses corpus examination or
elicitation only.

One more example from Tuparí: this language has an obligatory
witnessed/non-witnessed evidential distinction, but the deictic
orientation of the distinction (to the speaker or to the addressee) is
determined via clause type. There is a nuanced set of interactions
between the evidential morphology and the clause-typing morphology,
and it would have been impossible for me to figure out the basics of
those interactions without relying primarily on conversational data
and discourse context. But I still needed to get some acceptability
judgments to ensure that the picture I'd arrived at wasn't overly
biased by the limitations of my corpus. Finding speakers who were
willing to work with me on those judgments wasn't always easy; a fair
amount of metalinguistic awareness was needed. But it was worth it!
The generalizations that I was able to publish were much more solid
than if I had worked exclusively from corpus data. And the methodology
I learned from the Tonhauser/Matthewson/etc crowd was fundamental to
this work.

The call to work inductively from corpora would have the practical
effect of making certain topics totally inaccessible for research
(control vs raising structures, pied-piping, islands, gaps in
inflectional paradigms, etc) even though large scale acceptability
tasks have shown that these phenomena are "real," i.e., they're not
just in the minds of linguists who are using introspection. Randy's
point that "no other science allows the scientist to make up his or
her own data, and so this is something linguists should give up" is a
straw man argument now that many experimentalist syntacticians use
large-scale acceptability judgments on platforms like Mechanical Turk
to get at speakers' judgments. I think we do a disservice to our
students and to junior scholars if we tell them that the only real
stuff to be studied will be in the corpora that we assemble. Even the
best corpora are finite, whereas L1 speakers' knowledge of their
language is infinitely productive.

-- Adam
_______________________________________________
Lingtyp mailing list
Lingtyp at listserv.linguistlist.org
https://listserv.linguistlist.org/cgi-bin/mailman/listinfo/lingtyp

