[Lingtyp] Does bipolar polysemy exist?
Mattis List
mattis.list at lingpy.org
Sun Jun 3 10:25:20 UTC 2018
Dear Stela,
I still don't see how building a machine that is barely understood, one
that mimics translation from French to English or similar, can be
considered a scientific alternative to linguistic research. You may say
that, from the perspective of AI research, it is scientific, in the
sense that new knowledge is generated if the machine works better, but
it does NOT answer linguistic questions. At best it could provide a
model of linguistic intuition among humans, but this does not answer
the fundamental questions that linguists ask themselves, and we are
discussing linguistic implications here, not questions of engineering,
such as how to tweak your neural network most quickly and efficiently.
Have you followed the debate about Google's AI beating human Go
players? The follow-up was: now that the machine is better than humans,
humans will have to study the machine's moves in order to learn from
it. So a machine was created that beats humans at a particular game,
but the knowledge of what makes a strategy successful in that game was
NOT created. That IS the difference between engineers working on
automatic translation and linguists trying to investigate certain
properties of human languages.
I discussed this some time ago in a blog post that focuses on topics in
historical linguistics, where machine learning techniques have an even
harder time providing useful solutions. There I also point to some
literature showing that engineers and computer scientists are well
aware of this problem (maybe not the people at Google, but perhaps even
them: it is much easier to improve a model if you understand why it
fails):
*
http://phylonetworks.blogspot.com/2016/11/once-more-on-artificial-intelligence.html
I also don't understand why you insist that Google Translate works
without grammar. They use sequence models, right? And what is a model
that generates a sequence, if not a grammar, albeit a rather simple one
on Chomsky's hierarchy? Or am I getting something wrong, and is there a
definition of grammar I am not aware of? That is definitely possible,
but I'd like to know which one you base your distinction on...
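
To make my point concrete, here is a toy sketch of my own (in Python,
and of course not a description of Google's actual system): even a
simple bigram sequence model can be written down as a set of
probabilistic rewrite rules, i.e. as a probabilistic regular grammar,
and generating output is nothing but applying those rules from left to
right.

    import random

    # Toy probabilistic regular grammar (equivalently, a bigram sequence
    # model). Each rule emits a word and names the next state; the words
    # and probabilities are invented purely for illustration.
    RULES = {
        "START": [("the", "DET", 0.6), ("a", "DET", 0.4)],
        "DET":   [("cat", "NOUN", 0.5), ("dog", "NOUN", 0.5)],
        "NOUN":  [("sleeps", "END", 0.7), ("barks", "END", 0.3)],
    }

    def generate(state="START"):
        """Generate one sentence by applying rules until END is reached."""
        words = []
        while state != "END":
            options = RULES[state]
            word, state = random.choices(
                [(w, s) for w, s, _ in options],
                weights=[p for _, _, p in options],
            )[0]
            words.append(word)
        return " ".join(words)

    print(generate())  # e.g. "the dog sleeps"

Whether one calls such a rule table a "grammar" or a "sequence model"
seems to me a question of terminology, not of substance.
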
Best,
Mattis
On 03.06.2018 11:31, Stela Manova wrote:
> Dear Mattis,
>
> You write:
>
>> but there's a misunderstanding regarding scientific endeavor
>> here: google people are engineers, their goal is to get a machine
>> running that replicates a human talking. What linguists want to do is
>> scientific endeavor, we do not only want to replicate a machine doing
>> the same things that we do, but we want to UNDERSTAND what the
>> machine does.
>
> What is scientific endeavor? Imagine that we have to describe
> scientifically, let me say, a brick. We can say: 1) a brick is building
> material; it is used for building houses; when building houses, we order
> bricks in a specific way to construct walls, etc.; 2) Alternative
> definition: a brick is a parallelepiped with 90-degree angles; bricks
> can differ in form and size; a brick usually contains holes that can
> also differ in form and size; etc. Which definition provides a better
> understanding of what is a brick - the one that focuses on what a brick
> is good for or the one that cares about form, size, and holes?
>
> Like all recent research in AI, Google research is inspired by the
> organization of the human brain (thank you, Dmitry, for the addition).
> Is this scientific endeavor or not? I think it is.
>
> I cannot agree that Google engineers do not understand what a machine
> does. On the contrary, exactly because they understand it very well,
> they managed to optimize the Google Translate algorithm. Btw, what
> should make linguists nervous is not the fact that the algorithm without
> grammar performs faster than that with grammar (in my previous message I
> explained why; it is a matter of mathematics). What is really surprising
> is that the algorithm without grammar translates more precisely than the
> algorithm with grammar.
>
> I am aware how sensitive the grammar-non-grammar issue is for the
> linguistic community, but as Volker Gast wrote here: "At the end of the
> day, the various approaches to linguistics should be judged against the
> value of their results…”
>
> Best,
> Stela
>
>
>> On 02.06.2018, at 21:11, Dmitry Nikolaev <dsnikolaev at gmail.com> wrote:
>>
>> Dear Mattis,
>>
>> a small correction:
>>
>> > Furthermore, it is not as trivial as the Google people suggest: they
>> > use extremely large training corpora for automatic translation which is
>> > based on stochastic (albeit apparently simple) grammars. A human,
>> > however, acquires a language with much LESS training material and a
>> > smaller brain. These questions cannot be solved if we rely on Google or
>> > the engineering part of "computer science".
>>
>> The biggest announced neural networks seem to have on the order of 1
>> to 2 hundred billion parameters (weights of connections between
>> neurons). The human brain has ~100 billion neurons and on the order of 100
>> trillion connections / learnable parameters. Huge NLP endeavours
>> probably match and surpass the amount of input humans receive when
>> acquiring a language, but computationally the human brain is not small; it
>> is in another universe.
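>>
>> Just to put those orders of magnitude side by side, a rough
>> back-of-the-envelope sketch (both figures are of course only
>> estimates, taken from above):
>>
>>     # Assumed, order-of-magnitude estimates only.
>>     nn_parameters = 2e11   # ~200 billion weights in the largest announced nets
>>     brain_synapses = 1e14  # ~100 trillion synaptic connections
>>     print(brain_synapses / nn_parameters)  # -> 500.0, i.e. roughly 500x more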
>>
>> With kind regards,
>> Dmitry
>>
>>
>> On Sat, 2 Jun 2018 at 14:29, Mattis List <mattis.list at lingpy.org> wrote:
>>
>> Dear Stela,
>>
>> very brief, but there's a misunderstanding regarding scientific
>> endeavor
>> here: google people are engineers, their goal is to get a machine
>> running that replicates a human talking. What linguists want to do is
>> scientific endeavor, we do not only want to replicate a machine doing
>> the same things that we do, but we want to UNDERSTAND what the
>> machine does.
>>
>> This issue of machine learning approaches, which are all very black-boxy,
>> has now finally gained some attention among scholars, since it is also
>> dangerous if we want to use machines to replace human labor in the
>> future (look at how badly Facebook filters hate speech). But it is
>> also
>> fundamentally different as an approach: we NEED to care about
>> categories, as we want to look inside the box, not simply create a
>> new one.
>>
>> Furthermore, it is not as trivial as the Google people suggest: they
>> use extremely large training corpora for automatic translation
>> which is
>> based on stochastic (albeit apparently simple) grammars. A human,
>> however, acquires a language with much LESS training material and a
>> smaller brain. These questions cannot be solved if we rely on Google or
>> the engineering part of "computer science".
>>
>> Best,
>>
>> Mattis
>>
>>
>>
>> On 02.06.2018 11:17, Stela Manova wrote:
>> > Dear Randy,
>> >
>> > What you write simply shows that you do not know enough about
>> numerical
>> > systems and how a computer works. Yes, there exist different
>> numerical
>> > systems, btw not only the binary and the decimal one, but there are
>> > special notations for the different systems, so that
>> mathematicians and
>> > computers know in which system a number is. Additionally, a computer
>> > works only in binary code. How exactly those things happen in
>> computer
>> > science is explained, e.g.,
>> here: http://www.cplusplus.com/doc/hex/.
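>> >
>> > A trivial illustration: the same digit string denotes different
>> > numbers depending on the base it is declared to be in (a tiny Python
>> > example of my own, not taken from the page above):
>> >
>> >     print(int("10", 2))   # 2  -- "10" read as binary
>> >     print(int("10", 10))  # 10 -- "10" read as decimal
>> >     print(int("10", 16))  # 16 -- "10" read as hexadecimal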
>> >
>> > Regarding induction / deduction and Jeff Dean’s method, I will not
>> > philosophize, there is a clear definition of mathematical
>> induction. In
>> > math, induction is used in recursive situations to establish the
>> basic
>> > case. That MIT professor explains induction and recursion very
>> >
>> well: https://www.youtube.com/watch?v=WPSeyjX1-4s&t=0s&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=23.
>> > Let us let readers decide what type Jeff Dean’s method is.
>> >
>> > What linguists cannot understand is the fact that in order to apply
>> > mathematical logic, one needs elements that are of the same
>> type. If you
>> > assume that there are different types of words (basic elements of a
>> > system), you cannot describe that system mathematically, at
>> least not
>> > without preliminary sortings of the elements, which will make the
>> > analysis more time-consuming = slower computer program.
>> Therefore, Jeff
>> > Dean claims that using grammar is less efficient than doing without
>> > grammar. In sum, the difference between the computer scientist
>> Jeff Dean
>> > and a linguist: Jeff Dean treats all words as units (elements of the
>> > same type) while linguists philosophize on bipolar polysemy =
>> Jeff Dean
>> > solves a problem, linguists create an additional one.
>> >
>> > Btw, if linguists listened to computer scientists, there would not
>> be any
>> > research on complexity in linguistics, either. The above MIT
>> professor
>> > again, part 1
>> >
>> at: https://www.youtube.com/watch?v=o9nW0uBqvEo&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=36, and
>> > part 2
>> >
>> at: https://www.youtube.com/watch?v=7lQXYl_L28w&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=37.
>> >
>> > Best,
>> > Stela
>> >
>> >> On 02.06.2018, at 08:51, Randy J. LaPolla <randy.lapolla at gmail.com> wrote:
>> >>
>> >> Dear Stella,
>> >> The mathematical approach you discussed is very much in the
>> >> Structuralist tradition, and not that much in line with the most
>> >> cutting edge recent AI research. Almost all linguistics (including
>> >> Chomsky), plus most computer science, particularly NLP, is based on
>> >> Structuralist principles (though Interactional Linguistics,
>> >> Usage-based approaches, and Halliday’s approach are not). What you
>> >> said, "in mathematics / computer science, in isolation, a
>> sequence of
>> >> elements always has a single meaning because if it has not, no
>> >> computation is possible”, and you assume it must be true for
>> language,
>> >> is very much the sort of thing I was talking about. Even in
>> computer
>> >> science that is not true, as “10” in a binary system such as
>> machine
>> >> code has a different “meaning” from “10” in a non-binary
>> situation, so
>> >> 1 + 1 = 2 is only true in the context of a non-binary code.
>> >> Mathematics and logic are also tautologies, as Wittgenstein pointed
>> >> out, so quite different from natural language, where even “War
>> is war”
>> >> is not a tautology, and that is why there was the whole Oxford
>> School
>> >> of Ordinary Language Philosophy (Grice, Austin, Searle, etc.),
>> as they
>> >> saw that natural language is quite different from the mathematical
>> >> approach being pushed by the logical positivists and analytic
>> >> philosophers. (Frege and Russell had turned logic into
>> mathematics, and
>> >> tried to apply it to language—the early Wittgenstein went along
>> with
>> >> that initially, but later saw how problematic even his own early
>> >> approach was.)
>> >>
>> >> I am aware of what has been going on in AI, particularly by
>> Jeff Dean,
>> >> in the switch from symbolic (deductive/rule-based) AI to inductive
>> >> approaches, and am quite happy they finally have seen the light in
>> >> that regard, and that has made a big difference in terms of
>> what the
>> >> systems can do. That switch, from rule based deductive
>> algorithms, is
>> >> what Dean means by doing without grammar; what they find using the
>> >> inductive approach is still grammar (as Peirce said “Induction
>> infers
>> >> a rule”), and simply based on symbol manipulation, so a long
>> way from
>> >> modelling actual communication, which is based on meaning, not
>> >> symbols, and so what they are talking about is not really
>> >> “understanding". Induction can only take you so far (Peirce’s
>> view was
>> >> that deduction (which is tautology) and induction do not tell you
>> >> anything new—although abduction is the “weakest” inference, as
>> he put
>> >> it, it is the only one that tells you something new; On the
>> difference
>> >> between the latter two: “. . . the essence of an induction is
>> that it
>> >> infers from one set of facts another set of similar facts, whereas
>> >> hypothesis [abduction—rjl] infers from facts of one kind to
>> facts of
>> >> another.”); the next step is to understand how communication
>> actually
>> >> works (as it isn’t coding/decoding) and try to see if it is
>> possible
>> >> to model abductive inference, which is what real communication is
>> >> based on. I don’t know if that is possible. The problem is they are
>> >> not working with linguists who understand communication, and so
>> on the
>> >> one hand assume it is about symbol manipulation, and on the
>> other end
>> >> up often reinventing the wheel. One example is a talk I went to
>> at our
>> >> Complexity Institute, where the speaker talked about how his
>> algorithm
>> >> had shown that some words in English, such as “a little bit" occur
>> >> together more often than others. We linguists of course knew that
>> >> decades ago, but as this person had not talked to any linguists
>> before
>> >> starting a linguistic study, he had no clue about what had been
>> done
>> >> in terms of collocational relationships.
>> >>
>> >> Yes, the abilities and principles related to meaning creation and
>> >> linguistic behaviour are general cognitive mechanisms and
>> behavioural
>> >> principles, not specific to language, and not unique to humans. You
>> >> say, "Linguists believe that linguistics is a module of its own
>> in the
>> >> brain and love re-defining things as something specific for the
>> >> field”, but that statement only applies to an ever-shrinking
>> minority
>> >> of people doing rationalist philosophy rather than empirical
>> >> linguistics, and the ones associated with the now discredited
>> symbolic
>> >> AI.
>> >>
>> >> All the best,
>> >> Randy
>> >> -----
>> >> *Randy J. LaPolla, PhD FAHA* (羅仁地)
>> >> Professor of Linguistics and Chinese, School of Humanities
>> >> Nanyang Technological University
>> >> HSS-03-45, 14 Nanyang Drive | Singapore 637332
>> >> http://randylapolla.net/
>> >> Most recent book:
>> >>
>> https://www.routledge.com/The-Sino-Tibetan-Languages-2nd-Edition/LaPolla-Thurgood/p/book/9781138783324
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>> On 1 Jun 2018, at 4:57 PM, Stela Manova <stela.manova at univie.ac.at> wrote:
>> >>>
>> >>> Dear Randy,
>> >>>
>> >>> What I wrote does not have anything to do with structuralism
>> but is
>> >>> based on recent research in language understanding on which we
>> rely
>> >>> every day. I mean research carried out by Google. Intriguingly,
>> >>> people who do NLP and LU at Google are not linguists but computer
>> >>> scientists and the senior fellow of the Google Brain Team,
>> Jeff Dean,
>> >>> claims that language understanding does not need grammar, see his
>> >>> slides on Scaling language understanding models
>> >>>
>> at https://blog.ycombinator.com/jeff-deans-lecture-for-yc-ai/, starts
>> >>> at 24:54 in the video, as well as the slides on Google Translate -
>> >>> 27:52 in the video (the slides are below the video), but
>> please watch
>> >>> the whole video if you have time. This is one of Jeff Dean’s many
>> >>> talks on Deep Learning, I give this link because I have it in my
>> >>> computer but you can google the topic and the speaker. So,
>> Google’s
>> >>> LU does not use grammar but is based on combinations /
>> sequences of
>> >>> elements and statistics; and ironically, linguists who believe in
>> >>> grammar and irony (based on your message below) use Google
>> products
>> >>> every day. The wisdom from the Google sequence-to-sequence
>> model is
>> >>> that single examples do not count as evidence for the
>> organization of
>> >>> a system. Now reread our discussion on what bipolar polysemy is, and
>> >>> you will understand why so many linguistics professors from so many
>> >>> different countries cannot agree on a definition.
>> >>>
>> >>> It is not about bipolar polysemy, it is about the future of the
>> >>> field. Google guys claim and prove that the same learning logic
>> >>> applies to all areas of life; roughly, the same rules operate
>> >>> in visual perception, chemistry, language, etc. Linguists believe
>> >>> that linguistics is a module of its own in the brain and love
>> >>> re-defining things as something specific for the field - there is
>> >>> even statistics for linguists which unfortunately differs from
>> Google
>> >>> statistics because people who do statistics in Google are
>> >>> mathematicians while (most of the) linguistic statisticians
>> were bad
>> >>> at math at school and therefore studied languages at the
>> university, etc.
>> >>>
>> >>> I have a PhD in general linguistics from the University of Vienna
>> >>> (and both my PhD supervisors were very bad at math) but I cannot
>> >>> agree that this is sufficient evidence that the tip of my nose
>> is the
>> >>> end of the horizon. OK, I was also educated in math for nine years -
>> >>> intensively.
>> >>>
>> >>> Best,
>> >>>
>> >>> Stela
>> >>>
>> >>>
>> >>>> On 01.06.2018, at 06:06, Randy J. LaPolla <randy.lapolla at gmail.com> wrote:
>> >>>>
>> >>>> Hi All,
>> >>>> This whole discussion shows how problematic some of the a priori,
>> >>>> non-empirical assumptions of the Structuralist approach are. The
>> >>>> assumption that there is a fixed association of sign and
>> signifier,
>> >>>> and so words have meaning in some abstract universe divorced from
>> >>>> context, and the assumption that language can be dealt with
>> >>>> mathematically, and the assumption that communication happens
>> >>>> through coding and decoding (on the computational model), and
>> that
>> >>>> the “real” word is the written, abstract, out-of-phonetic-context
>> >>>> form, and so phonology in context can be ignored, and as there is
>> >>>> only one “real” meaning to a word, the different uses in context,
>> >>>> such as irony, can be simply ignored or treated as deviant. The
>> >>>> assumption that there is a fixed system that has iron-clad rules,
>> >>>> and that there are aspects of the system that are necessary for
>> >>>> communication to occur.
>> >>>>
>> >>>> There is much literature showing how problematic these
>> assumptions
>> >>>> are, but somehow they are still in force in much of
>> linguistics, as
>> >>>> reflected in some of this discussion.
>> >>>>
>> >>>> My own view is that communication involves one person
>> performing a
>> >>>> communicative act in a particular place and time and to a
>> particular
>> >>>> addressee, and the addressee abductively inferring that person’s
>> >>>> reason for performing that act in that particular context to that
>> >>>> particular person at that particular time. So it is completely
>> >>>> context dependent, as Nick shows, and there is no minimum
>> >>>> morphosyntactic structure required, as David Gil has shown.
>> No part
>> >>>> of the communicative situation or act can be left out in terms of
>> >>>> understanding the meaning that the addressee creates in inferring
>> >>>> the communicator’s intention (as Mark shows in including
>> gesture in
>> >>>> his discussion, though it also includes non-conventionalised
>> >>>> behaviour, e.g. gaze and body movements; and it is creation of
>> >>>> meaning, not transfer of meaning, and so subjective and
>> >>>> non-determinative). Language and other conventionalised
>> >>>> communicative behaviour (language is behaviour, not a thing, and
>> >>>> does not differ in nature from other conventionalised behaviour)
>> >>>> emerges out of the interaction of the people involved.
>> >>>>
>> >>>> So the question asked is like a Zen koan: you can’t answer it
>> yes or
>> >>>> no, as it is based on problematic assumptions.
>> >>>>
>> >>>> Randy
>> >>>>
>> >>>> -----
>> >>>> *Randy J. LaPolla, PhD FAHA* (羅仁地)
>> >>>> Professor of Linguistics and Chinese, School of Humanities
>> >>>> Nanyang Technological University
>> >>>> HSS-03-45, 14 Nanyang Drive | Singapore 637332
>> >>>> http://randylapolla.net/
>> >>>> Most recent book:
>> >>>>
>> https://www.routledge.com/The-Sino-Tibetan-Languages-2nd-Edition/LaPolla-Thurgood/p/book/9781138783324
>> >>>>
>> >>>>
>> >>>>
>> >>>>> On 1 Jun 2018, at 7:42 AM, Nick Enfield <nick.enfield at sydney.edu.au> wrote:
>> >>>>>
>> >>>>> In Lao:
>> >>>>>
>> >>>>>
>> >>>>> 1. The verb cak2 means ‘know’, and can be negated as in
>> man2 bòò1
>> >>>>> cak2 [3sg neg know] ‘S/he doesn’t know.’ But when used
>> alone,
>> >>>>> with no subject expressed, often with the perfect marker
>> (as in
>> >>>>> cak2 or cak2 lèèw4) it means “I don’t know.”
>> >>>>> 2. The verb faaw4 means ‘to hurry, rush’, and can be
>> negated as in
>> >>>>> man2 bòò1 faaw4 [3sg neg rush] ‘S/he doesn’t hurry/isn’t
>> >>>>> hurrying.’ But when used alone as an imperative, with no
>> >>>>> subject expressed, often repeated, or with an appropriate
>> >>>>> sentence-final particle (as in faaw4 faaw4 or faaw4 dee4) it
>> >>>>> means “Don’t hurry, Stop hurrying, Slow down”.
>> >>>>> 3. Often, both positive and negative readings of verbs are
>> >>>>> available when the irrealis prefix si is used (with
>> context or
>> >>>>> perhaps intonation doing the work); eg khaw3 si kin3
>> [3pl irr
>> >>>>> eat] could mean ‘They will eat it’ or ‘They will
>> definitely not
>> >>>>> eat it’ with a meaning similar to the colloquial English
>> >>>>> expression “As if they would eat it.” The second meaning is
>> >>>>> made more likely by insertion of the directional paj3 ‘go’
>> >>>>> before the verb (khaw3 si paj3 kin3 [3pl irr go eat] ‘As if
>> >>>>> they would eat it.’).
>> >>>>>
>> >>>>>
>> >>>>> Nick
>> >>>>>
>> >>>>> *N. J. ENFIELD *| FAHA FRSN | Professor of Linguistics
>> >>>>> Head, Post Truth Initiative https://posttruthinitiative.org/
>> >>>>> Director, SSSHARC (Sydney Social Sciences and Humanities
>> Advanced
>> >>>>> Research Centre)
>> >>>>> Faculty of Arts and Social Sciences
>> >>>>> *THE UNIVERSITY OF SYDNEY*
>> >>>>> Rm N364, John Woolley Building A20 | NSW | 2006 | AUSTRALIA
>> >>>>> T +61 2 9351 2391 | M +61 476 239 669
>> >>>>> orcid.org/0000-0003-3891-6973
>> >>>>> E nick.enfield at sydney.edu.au | W sydney.edu.au | nickenfield.org
>> >>>>>
>> >>>>>
>> >>>>> *From: *Lingtyp <lingtyp-bounces at listserv.linguistlist.org>
>> >>>>> on behalf of Mark Donohue <mark at donohue.cc>
>> >>>>> *Date: *Friday, 1 June 2018 at 7:13 AM
>> >>>>> *To: *David Gil <gil at shh.mpg.de>
>> >>>>> *Cc: *"LINGTYP at LISTSERV.LINGUISTLIST.ORG"
>> >>>>> <lingtyp at listserv.linguistlist.org>
>> >>>>> *Subject: *Re: [Lingtyp] Does bipolar polysemy exist?
>> >>>>>
>> >>>>> In Tukang Besi, an Austronesian language of Indonesia, the verb
>> >>>>> 'know' is dahani; verbs are generally prefixed to agree with the
>> >>>>> S,A argument, thus
>> >>>>>
>> >>>>> ku-dahani 'I know'
>> >>>>> 'u-dahani 'you know'
>> >>>>>
>> >>>>> etc.
>> >>>>> In some contexts (imperatives, emphatic generic (TAME-less)
>> >>>>> assertion), the prefix can be omitted.
>> >>>>>
>> >>>>> dahani 'I/you certainly know'
>> >>>>>
>> >>>>> Now, I've heard this (and only this) verb used, in the
>> absence of
>> >>>>> any inflection, with exactly its opposite meaning
>> >>>>>
>> >>>>> Dahani 'I don't know'
>> >>>>>
>> >>>>> in what might be a sarcastic sense. Unlike the antonymic uses of
>> >>>>> many adjectives in many languages, including English, this
>> use of
>> >>>>> dahani is actually a simple (though emphatic) negation of the
>> >>>>> verb's 'normal' meaning.
>> >>>>>
>> >>>>> -Mark
>> >>>>>
>> >>>>> On 1 June 2018 at 04:43, David Gil <gil at shh.mpg.de> wrote:
>> >>>>>> Yes, as Matti points out, negative lexicalization is not
>> quite as
>> >>>>>> rare as I was implying. Yet at the same time, I suspect
>> that it
>> >>>>>> might not be as common as Matti is suggesting. Looking at the
>> >>>>>> examples that he cites in his Handbook chapter, I suspect
>> that in
>> >>>>>> some cases, the negative counterpart isn't "just" negative,
>> but is
>> >>>>>> also associated with some additional meaning components.
>> >>>>>>
>> >>>>>> Matti doesn't list "good"/"bad" as being such a pair, though,
>> >>>>>> citing work by Ulrike Zeshan on sign languages, he does mention
>> >>>>>> other evaluative concepts such as "not right", "not possible",
>> >>>>>> "not enough". in English, at least, "bad" is not the
>> negation of
>> >>>>>> "good", it is the antonym of "good"; there's all kind of
>> stuff in
>> >>>>>> the world which we attach no evaluative content to, and which
>> >>>>>> hence is neither good nor bad. (It's true that in English,
>> in many
>> >>>>>> contexts, the expression "not good" is understood as meaning
>> >>>>>> "bad", which is interesting in and of itself, but still, it
>> is not
>> >>>>>> necessarily understood in this way.) While I have no direct
>> >>>>>> evidence, I would strongly suspect that in languages that have
>> >>>>>> lexicalized expressions for "not right", "not possible",
>> and "not
>> >>>>>> enough", the meanings of these expressions will be the
>> antonyms of
>> >>>>>> "right", "possible" and "enough", and not their negations.
>> >>>>>>
>> >>>>>> Under lexicalized negatives in the domain of tense/aspect,
>> Matti
>> >>>>>> lists "will not", "did not", "not finished". Well the one case
>> >>>>>> that I am familiar with that falls into this category is
>> that of
>> >>>>>> the Malay/Indonesian iamative/perfect marker "sudah", which
>> has a
>> >>>>>> lexicalized negative counterpart "belum". However, "belum"
>> isn't
>> >>>>>> just "not sudah"; it also bears a strong (if not invariant)
>> >>>>>> implicature that at some point in the future, the state or
>> >>>>>> activity that is not complete will be completed — in fact, just
>> >>>>>> like the English expression "not yet". (When people in
>> Indonesia
>> >>>>>> ask you if you're married, it's considered impolite to
>> answer with
>> >>>>>> a simple negation "tidak"; you're supposed to say "belum"
>> >>>>>> precisely because of its implicature that you will, in the
>> future,
>> >>>>>> get married. By avoiding this implicature, the simple negation
>> >>>>>> "tidak" is viewed as a threat to the natural order of
>> things, in
>> >>>>>> which everybody should get married.)
>> >>>>>>
>> >>>>>> I suspect that many if not all of the cases characterized
>> by Matti
>> >>>>>> as "lexicalized negatives" will turn out to be associated with
>> >>>>>> some additional meaning component beyond that of "mere"
>> negation.
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On 31/05/2018 20:06, Miestamo, Matti M P wrote:
>> >>>>>>>
>> >>>>>>> Dear David, Zygmunt and others,
>> >>>>>>>
>> >>>>>>> negative lexicalization is not quite as rare as David seems to
>> >>>>>>> imply. There is a cross-linguistic survey of this
>> phenomenon by
>> >>>>>>> Ljuba Veselinova (ongoing work, detailed and informative
>> >>>>>>> presentation slides available through her website), and Zeshan
>> >>>>>>> (2013) has written on this phenomenon in sign languages.
>> There's
>> >>>>>>> also a short summary in my recent Cambridge Handbook of
>> >>>>>>> Linguistic Typology chapter on negation (preprint
>> available via
>> >>>>>>> the link in the signature below).
>> >>>>>>>
>> >>>>>>> Best,
>> >>>>>>> Matti
>> >>>>>>>
>> >>>>>>> --
>> >>>>>>> Matti Miestamo
>> >>>>>>> http://www.ling.helsinki.fi/~matmies/
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>> Zygmunt Frajzyngier <Zygmunt.Frajzyngier at COLORADO.EDU> wrote
>> >>>>>>>> on 31.5.2018 at 17.23:
>> >>>>>>>>
>> >>>>>>>> David, Friends
>> >>>>>>>> Related to David’s post, not to the original query.
>> >>>>>>>> In any individual language, there may exist a few of
>> ‘Not-X’ items.
>> >>>>>>>> In Mina (Central Chadic) there is a noun which designates
>> >>>>>>>> ‘non-blacksmith’.
>> >>>>>>>> In several Chadic languages there exist negative existential
>> >>>>>>>> verbs unrelated to the affirmative existential verb.
>> >>>>>>>> Zygmunt
>> >>>>>>>>
>> >>>>>>>> On 5/31/18, 5:52 AM, "Lingtyp on behalf of David Gil"
>> >>>>>>>> <lingtyp-bounces at listserv.linguistlist.org on behalf
>> >>>>>>>> of gil at shh.mpg.de> wrote:
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> On 31/05/2018 13:37, Sebastian Nordhoff wrote:
>> >>>>>>>>> On 05/31/2018 01:18 PM, David Gil wrote:
>> >>>>>>>>>> A point of logic. "Not X" and "Antonym (X)" are distinct
>> >>>>>>>>>> notions, and
>> >>>>>>>>>> the original query by Ian Joo pertains to the former,
>> not the
>> >>>>>>>>>> latter.
>> >>>>>>>>> but is there any (monomorphemic) lexeme which expresses
>> not-X
>> >>>>>>>>> which is
>> >>>>>>>>> not the antonym of X?
>> >>>>>>>> But how many (monomorphemic) lexemes expressing not-X are
>> >>>>>>>> there at all?
>> >>>>>>>> The only ones I can think of are suppletive negative
>> >>>>>>>> existentials, e.g.
>> >>>>>>>> Tagalog "may" (exist) > "wala" (not exist). Even
>> suppletive
>> >>>>>>>> negative
>> >>>>>>>> desideratives don't quite fit the bill, e.g. Tagalog
>> >>>>>>>> "nais"/"gusto"
>> >>>>>>>> (want) > "ayaw", which is commonly glossed as "not want",
>> >>>>>>>> but actually
>> >>>>>>>> means "want not-X", rather than "not want-X" — "ayaw" is
>> >>>>>>>> thus an antonym
>> >>>>>>>> but not a strict negation of "nais"/"gusto".
>> >>>>>>>>
>> >>>>>>>> What is not clear to me about the original query is
>> whether
>> >>>>>>>> it is asking
>> >>>>>>>> for negations or for antonyms.
>> >>>>>>>>
>> >>>>>>>> --
>> >>>>>>>> David Gil
>> >>>>>>>>
>> >>>>>>>> Department of Linguistic and Cultural Evolution
>> >>>>>>>> Max Planck Institute for the Science of Human History
>> >>>>>>>> Kahlaische Strasse 10, 07745 Jena, Germany
>> >>>>>>>>
>> >>>>>>>> Email: gil at shh.mpg.de
>> >>>>>>>> Office Phone (Germany): +49-3641686834
>> >>>>>>>> Mobile Phone (Indonesia): +62-81281162816
>> >>>>>>>>
>> >>>>>>
>> >>>>>> --
>> >>>>>> David Gil
>> >>>>>>
>> >>>>>> Department of Linguistic and Cultural Evolution
>> >>>>>> Max Planck Institute for the Science of Human History
>> >>>>>> Kahlaische Strasse 10, 07745 Jena, Germany
>> >>>>>>
>> >>>>>> Email: gil at shh.mpg.de
>> >>>>>> Office Phone (Germany): +49-3641686834
>> >>>>>> Mobile Phone (Indonesia): +62-81281162816
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>> >
>> >
>> >
>>
>>
>> _______________________________________________
>> Lingtyp mailing list
>> Lingtyp at listserv.linguistlist.org
>> http://listserv.linguistlist.org/mailman/listinfo/lingtyp
>
More information about the Lingtyp
mailing list