[Lingtyp] A brief summary of the discussion on bipolar polysemy

Joo Ian ian.joo at outlook.com
Sat Jun 9 08:04:49 UTC 2018


Dear all,

I would like to express my sincere gratitude to everyone for answering my brief, rather simplistic question on “bipolar polysemy” with such a fruitful discussion, which truly gave me detailed insights into lexical polysemy and ambiguity. Since I am quite busy at the moment, I will just briefly summarize the discussion I have followed, to the extent that my limited time and knowledge allow.

First, many have pointed out the cases of auto-antonyms<https://en.wikipedia.org/wiki/Auto-antonym>, which are indeed abundant across many languages. For example, a word can mean either ‘host’ or ‘guest’. Auto-antonyms are very interesting, but they do not necessarily fall into the category of bipolar polysemy, because antonyms are different from negations: ‘host’ is an antonym of ‘guest,’ but is not exactly the negation of ‘guest’ (‘non-guest’).

There was a suggestion that English let may be a true case of bipolar polysemy, as it can mean both ‘to allow’ and ‘to prevent’. I checked this on Wiktionary<https://en.wiktionary.org/wiki/let#English>, and it seems that the two meanings have different origins, the former going back to leten and the latter to letten. So this may be a case of homonymy rather than polysemy.

Next, there are cases where a word can convey bipolar meanings due to phonological reduction, such as French T’inquiète (‘don’t worry’, lit. ‘you worry’), which is derived from the full form Ne t’inquiète pas ‘Do not worry’. Also, David pointed out the case of Malay ta(h)u ‘to know’, which can mean ‘I don’t know’, but I think this could also be a phonological reduction of ‘tak/gak tahu’ (‘don’t know’).

So there remains the question whether there exists a lexical item that can fully convey both a meaning and its negation in any context. I may have overlooked it if someone did come up with such a case; if my memory fails me here, I would appreciate it if someone brought it up again.

The discussion was followed by the philosophical question of what a ‘meaning’ is. What does it really mean to say that ‘X means Y’? In what context? I did not follow this debate in detail, as my time was limited, but it did signal to me that my question was too simplistic in nature, and that I perhaps should have added more details to it to get a meaningful answer.

Lastly, I would like to explain my motivation for asking this question in the first place. I read Goldberg’s (1995) work on Construction Grammar, and its description of the Caused Motion Construction in particular. This Construction, according to Goldberg, means “X causes Y to move Z”, as in (1):


  1.  She sneezed the napkin off the table.

But in some cases of Caused Motion, the Y does not move anywhere, such as in (2):


  2.  He kept her at arm’s length.

In (2), he does not cause her to move, quite the opposite: he causes her not to move (beyond arm’s length). Goldberg’s explanation for the case of (2) is that the Caused Motion Construction is polysemous and that it can mean: “X prevents Y from moving comp(Z)” where comp(Z) is the complementary opposite of Z.

Since Goldberg’s claim was (to my understanding) that every linguistic unit is a construction, including morphemes, words, etc., it seemed unrealistic to me that a linguistic unit could mean both X and not-X. Since there is no morpheme that can mean X and not-X, why should there exist a Construction that can mean something like that?

Please find attached a draft of a paper I wrote to tackle this problem. I left out the question of bipolar polysemy, because I discovered that it is a far more complex issue than I had imagined, but I believe that my argument still holds. I would appreciate any critical comments or suggestions on it.

To conclude, thanks again to everyone who answered my question; I really learned a lot from it. And I am sorry that I was not able to include every helpful comment in this brief summary.

From Hong Kong,
Ian Joo
http://ianjoo.academia.edu

From: m.m.jocelyne.fernandez-vest at vjf.cnrs.fr
Sent: Wednesday, June 6, 2018 8:54 PM
To: ebernard at filol.ucm.es; stela.manova at univie.ac.at
Cc: LINGTYP at LISTSERV.LINGUISTLIST.ORG
Subject: Re: [Lingtyp] Does bipolar polysemy exist?

Right, yes: racism is universal, and sexism very frequent.
Yet, after a 40-year career in language science, I claim that open-minded linguists also exist: you should try to collaborate with them and avoid the sectarian ones.

Keep being proud of your name, language and origin: even in "Western" Europe, women are allowed to be smart.
 Good luck, dear Stela!

M.M.Jocelyne FERNANDEZ-VEST
CNRS & Université Sorbonne Nouvelle

Sent from my iPhone

On 6 June 2018 at 11:41, ENRIQUE BERNARDEZ SANCHIS <ebernard at filol.ucm.es> wrote:
You're quite right, Stela. Racism is rampant, as is sexism.
Enrique

2018-06-06 11:16 GMT+02:00 Stela Manova <stela.manova at univie.ac.at>:
Dear all,
In the past few days, in relation to the discussion on bipolar polysemy, I exchanged a number of messages with Mattis List on the relevance of Google’s research to linguistics. I thank Mattis for his reactions to my postings. I was honored: a male scholar born in Germany, recipient of a highly prestigious ERC Starting Grant, replied to me – a female scholar born in Bulgaria. I have lived in Vienna for almost 20 years. Yesterday, I was reminded one more time that I should be happy that I had the opportunity to study for a PhD degree in Austria, but that I cannot do science because, as a female Bulgarian, I am supposed to be either a cleaning lady or a saleswoman. In other words, I am an Eastern European cockroach that tries to invade foreign territories. At least, I felt so the whole day yesterday, which postponed the writing of this message and also influenced its content.
Mattis suggested that I prepare something on the topic we exchanged views on here, namely Google and linguistics, so that it can be further discussed in blogs or on other Internet platforms. I noticed (I use Google Analytics) that after my postings on this list many linguists from all over the world visited my homepage. I understand this as an indication of interest in the topic. I could imagine preparing something on Google, math and linguistics, but currently I have many problems in Vienna that do not allow me to do it. I am sorry for bothering you with this, but in my opinion it is telling about what is going on in European linguistics at the moment, and these problems are no less important for the future of linguistics than Google’s research. I am specialized in Slavic and general linguistics, so I give examples from these fields. In the part of Europe where I reside, to be from Eastern Europe is a fault - even in the Slavic department of a university! In the Department of Slavic Studies in Vienna, people of Slavic origin are also divided into those born in Austria and those born in a Slavic country, and the former group discriminates against the latter, to show that they are superior. In the Department of Linguistics, the full professor, the natural morphologist Wolfgang U. Dressler, retired and was replaced by the formal semanticist Daniel Büring, who immediately started a generative cleaning: only formal linguistics is linguistics, everything is syntax and syntax/semantics, and all people who do research on other topics have to die. I have joint publications with two famous morphologists, Dressler and Aronoff, and I claim that there is morphology (not only distributed). If I could understand, to some extent, Büring’s incompatibility with Natural Morphology, could anybody explain to me why having published with Mark Aronoff, who has a PhD from MIT, makes me a bad linguist? Why are people who are not aware of basic principles of math very successful in formal grammar? What gives such people the right to impose their theoretical beliefs on others? Where does so much hate in linguistics come from?
And now frankly, how many linguists will read an Internet text by a person with a Slavic name such as Stela Manova? It seems to me that a discussion on Google’s research and its relevance to linguistics, as suggested by Mattis, would make sense only if well-established linguists supported it. I am afraid that I myself will also need some support in order to work on this; ideally, to distance myself from the linguistic absurdity in Vienna by spending some time at an institution where I will not be treated as a cockroach. I am still doing linguistics, somehow; but I am tired of the senseless wars I have to wage every day to survive in linguistics, and as you could guess from my posts, I am now focused more on cognitive science and programming than on pure linguistics. As for my interest in mathematics and computers, I grew up in Bulgaria, where at the age of 10 I was discovered to be mathematically gifted and then received a solid education in mathematics. Yes, under socialism, women were allowed to be smart.
If someone is interested in working with me on what I addressed on this list or related issues, please feel free to contact me.
Best,
Stela


On 04.06.2018, at 11:33, Mattis List <mattis.list at lingpy.org> wrote:

Dear Stela,

I think the points you brought up are very interesting, but it's
probably time to stop the discussion at this point. What I would
encourage you to do now is to write up your arguments in some form of a
blog post (published online in your preferred venue), as I think
they are interesting and important for a broader audience of
linguists, and this is just a mailing list for typologists. If you do
so, it would be very interesting, probably not only for me but also for
colleagues from different fields, to jump on the train of this
discussion and respond in blogs accordingly. It would
also put the discussion on more solid ground, as we could bring in
quotes from colleagues and the like.

Looking forward to reading your arguments in much more detail. I'll try to
do my best to answer them following all due scientific standards.

All the best,

Mattis

On 2018-06-04 11:00, Stela Manova wrote:

Dear Mattis,

You tend to provide misleading information, and not only about the
capacity of the human brain (see Dmitry Nikolaev's correction message
below). You are also mistaken about Google's goal, which is neither
“mimicking translation” nor "to get a machine running that replicates a
human talking". Google's goal is to have an effective search engine -
the most effective one; everything else is more or less related to it.
The Google search algorithm is highly relevant to linguistics - Google
searches primarily texts. Yet for some reason, you never mention the
Google search engine when you discuss Google's research results.

Google's AI beating the professional Go player: yes, I know the story,
but your version is misleading again. This was done by DeepMind in
London within their AlphaGo
program: https://deepmind.com/research/alphago/. Google bought the
start-up DeepMind - for £400 M! The contest between the Go player and
the machine took place afterwards, but it is a DeepMind story. Google did
not pay so much money for a game-developing company. DeepMind was and is
specialized in visual recognition. So, it is not about playing Go or any
other computer game better than a human but about training a neural
network to solve visual recognition tasks, and it is good that the
computer won. Computer vision assists us in many areas of life:
medicine, healthcare, security and navigation, to mention just a few.

Grammar / non-grammar and Google: I speak of non-grammar because
Google's method does not have anything to do with linguistics. I refer
to the method explained in the following
video: https://blog.ycombinator.com/jeff-deans-lecture-for-yc-ai/. A
neural network is trained on different types of data, including
language. I do not see a connection to Chomsky or any other theoretical
framework in linguistics.

In linguistics, Baayen’s Naive Discriminative Learning is in line with
Google’s research.

As for linguistic research / fundamental linguistic questions and
Google’s approach: I do not have Google's resources (human and funding),
and therefore do morphology in linguistics. My research was not inspired
by Google but by Gauss-Jordan
elimination: http://homepage.univie.ac.at/stela.manova/uploads/1/2/2/4/12243901/cognitiveapproachsuff1-suff2.pdf. I
have investigated suffix combinations in a number of languages, and it
turned out that in all those languages suffix combinations are fixed,
i.e. if a word has more than one derivational suffix, then based on the
first suffix one can predict the following suffix, because there is only
one option for a following suffix. I then tested this finding
psycholinguistically. Native speakers know which suffix combinations
exist and which do not in their language, and they do not need bases
(roots / stems) to judge whether a suffix combination is a legitimate
one. All linguistic theories derive morphological structure starting
with a root (or a stem, depending on the theory). Yet, for some reason,
the native speakers who took part in the experiments, all without
linguistic education, did not need roots and stems and could do
something they were not supposed to be able to do. I think that we do
not know enough about the role of memory in language processing. This is
how I understand Google’s research. It seems to me that in language
processing the human brain relies on structures (sequences) of various
lengths and uses them as ready-made blocks, and that it also uses
pieces of structure that linguists do not recognize as linguistic units,
i.e. there are not only phonemes, morphemes, etc. in language
processing; the human brain also operates with structures that are
neither words nor morphemes, neither phrases nor sentences, etc. In the
case of my research, e.g., suffix combinations are structures between
morphemes and words.
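
A minimal sketch of this point in Python, with invented, English-like
suffix pairings rather than the actual data from the study: if suffix
combinations are fixed, the second suffix is a pure lookup from the
first, and the legitimacy of a combination can be judged without any
root or stem.

# Illustrative sketch only: invented suffix pairings, not the study's data.
# If suffix combinations are fixed, the second suffix follows
# deterministically from the first, with no reference to a root or stem.
next_suffix = {
    "-ize": "-ation",    # e.g. modernize -> modernization
    "-ify": "-ication",  # e.g. classify -> classification
    "-able": "-ity",     # e.g. readable -> readability
}

def predict_second(first):
    """Return the only possible following suffix, if the first suffix is known."""
    return next_suffix.get(first)

def is_legitimate(first, second):
    """Judge a suffix combination without seeing any base (root / stem)."""
    return next_suffix.get(first) == second

assert predict_second("-ize") == "-ation"
assert is_legitimate("-ify", "-ication")
assert not is_legitimate("-able", "-ation")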

Best,
Stela


On 03.06.2018, at 12:25, Mattis List <mattis.list at lingpy.org> wrote:

Dear Stela,

I still don't see how building a machine that is barely understood,
mimicking translation from French to English or the like, can be
considered a scientific alternative to linguistic research. You
may say that from the perspective of AI research it is scientific, in the
sense that new knowledge is generated if the machine works better, but
it does NOT answer linguistic questions. It could at best provide a
model of linguistic intuition among humans, but this does not answer the
fundamental questions that linguists ask themselves, and we are
discussing linguistic implications here, not questions of engineering,
like how to tweak your neural network most quickly and efficiently.

Have you followed the debate about Google's AI beating the Go players?
The follow-up was: now that the machine is better than humans, humans
will have to study the moves made by the machine to learn from it. So a
machine was created that beats humans in a particular game, but the
knowledge of what creates successful strategies when playing that game was
NOT created. That IS the difference between engineers working on
automatic translation and linguists trying to investigate certain
properties of human languages.

I have discussed this in an earlier blog post (some time ago) that
focuses on topics of historical linguistics, where machine learning
techniques have an even harder time providing useful solutions.
There I also point to some literature showing that engineers and
computer scientists are well aware of this problem (maybe not the people
at Google, but maybe even them: it is much easier to enhance a model
if you understand why it fails):

*
http://phylonetworks.blogspot.com/2016/11/once-more-on-artificial-intelligence.html

I also don't understand why you insist on the non-grammar part of Google
Translate. They use sequence models, right? And what is a model that
creates a sequence other than a grammar, albeit a rather simple one on
Chomsky's hierarchy? Or am I getting something wrong, and there's a
definition of grammar I am not aware of? Definitely possible, but I'd
like to know which one you base your distinction on...
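
A minimal sketch of that point (an illustrative toy example, not
anything from Google's systems): a bigram sequence model can be written
out directly as weighted rewrite rules, i.e. as a very simple
probabilistic regular grammar.

# Toy illustration only: a bigram sequence model stated as weighted
# productions, i.e. a tiny probabilistic regular grammar.
bigram_probs = {
    ("<s>", "the"): 0.6, ("<s>", "a"): 0.4,
    ("the", "cat"): 0.5, ("the", "dog"): 0.5,
    ("a", "cat"): 1.0,
    ("cat", "</s>"): 1.0, ("dog", "</s>"): 1.0,
}

# Each entry ((prev, next): p) corresponds to a right-linear rule
# "S_prev -> next S_next" with weight p, i.e. a regular-grammar rule.
for (prev, nxt), p in bigram_probs.items():
    print(f"S_{prev} -> {nxt} S_{nxt} [{p}]")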

Best,

Mattis


On 03.06.2018 11:31, Stela Manova wrote:

Dear Mattis,

You write:


   but there's a misunderstanding regarding scientific endeavor
   here: google people are engineers, their goal is to get a machine
   running that replicates a human talking. What linguists want to do is
   scientific endeavor, we do not only want to replicate a machine doing
   the same things that we do, but we want to UNDERSTAND what the
   machine does.

What is scientific endeavor? Imagine that we have to describe
scientifically, let me say, a brick. We can say: 1) a brick is building
material; it is used for building houses; when building houses, we order
bricks in a specific way to construct walls, etc.; 2) alternative
definition: a brick is a parallelepiped with 90-degree angles; bricks
can differ in form and size; a brick usually contains holes that can
also differ in form and size; etc. Which definition provides a better
understanding of what a brick is - the one that focuses on what a brick
is good for or the one that cares about form, size, and holes?

Like all recent research in AI, Google research is inspired by the
organization of the human brain (thank you, Dmitry, for the addition).
Is this scientific endeavor or not? I think it is.

I cannot agree that Google engineers do not understand what a machine
does. On the contrary, exactly because they understand it very well,
they managed to optimize the Google Translate algorithm. Btw, what
should make linguists nervous is not the fact that the algorithm without
grammar performs faster than that with grammar (in my previous message I
explained why; it is a matter of mathematics). What is really surprising
is that the algorithm without grammar translates more precisely than the
algorithm with grammar.

I am aware of how sensitive the grammar / non-grammar issue is for the
linguistic community, but as Volker Gast wrote here: "At the end of the
day, the various approaches to linguistics should be judged against the
value of their results…”

Best,
Stela



On 02.06.2018, at 21:11, Dmitry Nikolaev <dsnikolaev at gmail.com> wrote:

Dear Mattis,

a small correction:


Furthermore, it is not as trivial as the Google people suggest: they
use extremely large training corpora for automatic translation, which is
based on stochastic (albeit apparently simple) grammars. A human,
however, acquires a language with much LESS training material and a
smaller brain. These questions cannot be solved if we rely on Google or
the engineering part of "computer science".

The biggest announced neural networks seem to have on the order of 1
to 2 hundred billion parameters (weights of connections between
neurons). The human brain has ~100 billion neurons and on the order of 100
trillion connections / learnable parameters. Huge NLP endeavours
probably match and surpass the amount of input humans receive when
acquiring a language, but computationally the human brain is not small; it
is in another universe.
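
A quick back-of-the-envelope check, just to make the orders of
magnitude explicit (using the rough figures above):

# Rough orders of magnitude, taken from the figures quoted above.
nn_parameters = 2e11        # ~1-2 hundred billion learnable parameters
brain_neurons = 1e11        # ~100 billion neurons
brain_connections = 1e14    # ~100 trillion connections / learnable parameters

# By this crude count, the brain has about 500 times more learnable
# parameters than the largest announced networks.
print(brain_connections / nn_parameters)  # 500.0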

With kind regards,
Dmitry


On Sat, 2 Jun 2018 at 14:29, Mattis List <mattis.list at lingpy.org> wrote:

    Dear Stela,

    very brief, but there's a misunderstanding regarding scientific endeavor
    here: google people are engineers, their goal is to get a machine
    running that replicates a human talking. What linguists want to do is
    scientific endeavor, we do not only want to replicate a machine doing
    the same things that we do, but we want to UNDERSTAND what the
    machine does.

    This issue of machine learning approaches, which are all very black-boxy,
    has now finally gained some attention among scholars, since it is also
    dangerous if we want to use machines to replace human labor in the
    future (look at how badly Facebook filters hate speech). But it is also
    fundamentally different as an approach: we NEED to care about
    categories, as we want to look inside the box, not simply create a
    new one.

    Furthermore, it is not as trivial as the Google people suggest: they
    use extremely large training corpora for automatic translation, which is
    based on stochastic (albeit apparently simple) grammars. A human,
    however, acquires a language with much LESS training material and a
    smaller brain. These questions cannot be solved if we rely on Google or
    the engineering part of "computer science".

    Best,

    Mattis



   On 02.06.2018 11:17, Stela Manova wrote:

Dear Randy,

What you write simply shows that you do not know enough about numerical
systems and how a computer works. Yes, there exist different numerical
systems, btw not only the binary and the decimal one, but there are
special notations for the different systems, so that mathematicians and
computers know which system a number is in. Additionally, a computer
works only in binary code. How exactly these things happen in computer
science is explained, e.g., here: http://www.cplusplus.com/doc/hex/.
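
A minimal sketch of such notations (an illustrative example, nothing to
do with Google's code): the same value written in different numeral
systems, distinguished by standard base prefixes.

# Illustrative example only: one value, several notations.
n_binary = 0b1010    # binary notation
n_octal = 0o12       # octal notation
n_decimal = 10       # decimal notation
n_hex = 0xA          # hexadecimal notation

# All four literals denote the same integer; only the notation differs.
assert n_binary == n_octal == n_decimal == n_hex == 10

# Parsing a digit string requires knowing (or stating) its base:
assert int("10", 2) == 2     # "10" read as a binary numeral
assert int("10", 10) == 10   # "10" read as a decimal numeral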


Regarding induction / deduction and Jeff Dean's method, I will not
philosophize; there is a clear definition of mathematical induction. In
math, induction is used in recursive situations, starting from a base
case. That MIT professor explains induction and recursion very
well: https://www.youtube.com/watch?v=WPSeyjX1-4s&t=0s&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=23.
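
A minimal sketch of how a recursive definition mirrors an induction
proof (an illustrative example, not taken from the lecture): there is a
base case and a step that reduces n.

def sum_up_to(n):
    """Sum 0 + 1 + ... + n, defined recursively."""
    if n == 0:                   # base case, as in an induction proof
        return 0
    return n + sum_up_to(n - 1)  # inductive step: reduce n to n - 1

# Spot-check against the closed form n * (n + 1) / 2 for small n.
assert all(sum_up_to(n) == n * (n + 1) // 2 for n in range(50))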

Let us let readers decide what type Jeff Dean's method is.

What linguists cannot understand is the fact that in order to apply
mathematical logic, one needs elements that are of the same type. If you
assume that there are different types of words (the basic elements of a
system), you cannot describe that system mathematically, at least not
without a preliminary sorting of the elements, which will make the
analysis more time-consuming = a slower computer program. Therefore, Jeff
Dean claims that using grammar is less efficient than doing without
grammar. In sum, the difference between the computer scientist Jeff Dean
and a linguist: Jeff Dean treats all words as units (elements of the
same type) while linguists philosophize on bipolar polysemy = Jeff Dean
solves a problem, linguists create an additional one.

Btw, if linguists listened to computer scientists, there would not be any
research on complexity in linguistics, either. The above MIT professor
again, part 1
at: https://www.youtube.com/watch?v=o9nW0uBqvEo&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=36, and
part 2
at: https://www.youtube.com/watch?v=7lQXYl_L28w&list=PLUl4u3cNGP63WbdFxL8giv4yhgdMGaZNA&index=37.


Best,
Stela


On 02.06.2018, at 08:51, Randy J. LaPolla <randy.lapolla at gmail.com> wrote:


Dear Stela,
The mathematical approach you discussed is very much in the
Structuralist tradition, and not that much in line with the most
cutting-edge recent AI research. Almost all linguistics (including
Chomsky), plus most computer science, particularly NLP, is based on
Structuralist principles (though Interactional Linguistics,
Usage-based approaches, and Halliday's approach are not). What you
said, "in mathematics / computer science, in isolation, a sequence of
elements always has a single meaning because if it has not, no
computation is possible", and your assumption that it must be true for
language, is very much the sort of thing I was talking about. Even in
computer science that is not true, as "10" in a binary system such as
machine code has a different "meaning" from "10" in a non-binary
situation, so 1 + 1 = 2 is only true in the context of a non-binary
code. Mathematics and logic are also tautologies, as Wittgenstein
pointed out, so quite different from natural language, where even "War
is war" is not a tautology, and that is why there was the whole Oxford
School of Ordinary Language Philosophy (Grice, Austin, Searle, etc.),
as they saw that natural language is quite different from the
mathematical approach being pushed by the logical positivists and
analytic philosophers. (Frege and Russell had turned logic into
mathematics, and tried to apply it to language; the early Wittgenstein
went along with that initially, but later saw how problematic even his
own early approach was.)

I am aware of what has been going on in AI, particularly by Jeff Dean,
in the switch from symbolic (deductive/rule-based) AI to inductive
approaches, and am quite happy they have finally seen the light in
that regard, and that has made a big difference in terms of what the
systems can do. That switch, from rule-based deductive algorithms, is
what Dean means by doing without grammar; what they find using the
inductive approach is still grammar (as Peirce said, "Induction infers
a rule"), and simply based on symbol manipulation, so a long way from
modelling actual communication, which is based on meaning, not
symbols, and so what they are talking about is not really
"understanding". Induction can only take you so far (Peirce's view was
that deduction (which is tautology) and induction do not tell you
anything new; although abduction is the "weakest" inference, as he put
it, it is the only one that tells you something new. On the difference
between the latter two: ". . . the essence of an induction is that it
infers from one set of facts another set of similar facts, whereas
hypothesis [abduction, rjl] infers from facts of one kind to facts of
another."); the next step is to understand how communication actually
works (as it isn't coding/decoding) and try to see if it is possible
to model abductive inference, which is what real communication is
based on. I don't know if that is possible. The problem is they are
not working with linguists who understand communication, and so on the
one hand assume it is about symbol manipulation, and on the other end
up often reinventing the wheel. One example is a talk I went to at our
Complexity Institute, where the speaker talked about how his algorithm
had shown that some words in English, such as "a little bit", occur
together more often than others. We linguists of course knew that
decades ago, but as this person had not talked to any linguists before
starting a linguistic study, he had no clue about what had been done
in terms of collocational relationships.

Yes, the abilities and principles related to meaning creation and
linguistic behaviour are general cognitive mechanisms and behavioural
principles, not specific to language, and not unique to humans. You
say, "Linguists believe that linguistics is a module of its own in the
brain and love re-defining things as something specific for the
field", but that statement only applies to an ever-shrinking minority
of people doing rationalist philosophy rather than empirical
linguistics, and to the ones associated with the now discredited
symbolic AI.

All the best,
Randy
-----
Randy J. LaPolla, PhD FAHA
Professor of Linguistics and Chinese, School of Humanities
Nanyang Technological University
HSS-03-45, 14 Nanyang Drive | Singapore 637332
http://randylapolla.net/
Most recent book:
   https://www.routledge.com/The-Sino-Tibetan-Languages-2nd-Edition/LaPolla-Thurgood/p/book/9781138783324









On 1 Jun 2018, at 4:57 PM, Stela Manova <stela.manova at univie.ac.at> wrote:


Dear Randy,

What I wrote does not have anything to do with structuralism but is
based on recent research in language understanding, on which we rely
every day. I mean research carried out by Google. Intriguingly,
people who do NLP and LU at Google are not linguists but computer
scientists, and the Senior Fellow of the Google Brain Team, Jeff Dean,
claims that language understanding does not need grammar; see his
slides on Scaling language understanding models
at https://blog.ycombinator.com/jeff-deans-lecture-for-yc-ai/ (starting
at 24:54 in the video), as well as the slides on Google Translate
(27:52 in the video; the slides are below the video), but please watch
the whole video if you have time. This is one of Jeff Dean's many
talks on Deep Learning; I give this link because I have it on my
computer, but you can google the topic and the speaker. So, Google's
LU does not use grammar but is based on combinations / sequences of
elements and statistics; and ironically, linguists who believe in
grammar and irony (based on your message below) use Google products
every day. The wisdom from the Google sequence-to-sequence model is
that single examples do not count as evidence for the organization of
a system. Now reread our discussion on what bipolar polysemy is and
you will understand why so many linguistics professors from so many
different countries cannot agree on a definition.

It is not about bipolar polysemy, it is about the future of the
field. Google guys claim and prove that the same learning logic
applies to all areas of life; roughly, the same rules operate
in visual perception, chemistry, language, etc. Linguists believe
that linguistics is a module of its own in the brain and love
re-defining things as something specific for the field - there is
even statistics for linguists, which unfortunately differs from Google
statistics because the people who do statistics at Google are
mathematicians while (most of the) linguistic statisticians were bad
at math at school and therefore studied languages at the university, etc.


I have a PhD in general linguistics from the University of Vienna
(and both my PhD supervisors were very bad at math), but I cannot
agree that this is sufficient evidence that the tip of my nose is the
end of the horizon. OK, I was also educated in math for nine years -
intensively.

Best,

Stela



On 01.06.2018, at 06:06, Randy J. LaPolla <randy.lapolla at gmail.com> wrote:


Hi All,
This whole discussion shows how problematic some of the a priori,
non-empirical assumptions of the Structuralist approach are: the
assumption that there is a fixed association of sign and signifier,
and so words have meaning in some abstract universe divorced from
context; the assumption that language can be dealt with
mathematically; the assumption that communication happens through
coding and decoding (on the computational model), and that the "real"
word is the written, abstract, out-of-phonetic-context form, so that
phonology in context can be ignored, and, as there is only one "real"
meaning to a word, the different uses in context, such as irony, can
simply be ignored or treated as deviant; and the assumption that there
is a fixed system that has iron-clad rules, and that there are aspects
of the system that are necessary for communication to occur.

There is much literature showing how problematic these assumptions
are, but somehow they are still in force in much of linguistics, as
reflected in some of this discussion.

My own view is that communication involves one person performing a
communicative act in a particular place and time and to a particular
addressee, and the addressee abductively inferring that person's
reason for performing that act in that particular context, to that
particular person, at that particular time. So it is completely
context dependent, as Nick shows, and there is no minimum
morphosyntactic structure required, as David Gil has shown. No part
of the communicative situation or act can be left out in terms of
understanding the meaning that the addressee creates in inferring
the communicator's intention (as Mark shows in including gesture in
his discussion, though it also includes non-conventionalised
behaviour, e.g. gaze and body movements; and it is creation of
meaning, not transfer of meaning, and so subjective and
non-determinative). Language and other conventionalised
communicative behaviour (language is behaviour, not a thing, and
does not differ in nature from other conventionalised behaviour)
emerges out of the interaction of the people involved.

So the question asked is like a Zen koan: you can't answer it yes or
no, as it is based on problematic assumptions.

Randy

-----
Randy J. LaPolla, PhD FAHA
Professor of Linguistics and Chinese, School of Humanities
Nanyang Technological University
HSS-03-45, 14 Nanyang Drive | Singapore 637332
http://randylapolla.net/
Most recent book:
   https://www.routledge.com/The-Sino-Tibetan-Languages-2nd-Edition/LaPolla-Thurgood/p/book/9781138783324





On 1 Jun 2018, at 7:42 AM, Nick Enfield <nick.enfield at sydney.edu.au> wrote:


In Lao:


  1. The verb cak2 means ‘know’, and can be negated as in man2 bòò1 cak2 [3sg neg know] ‘S/he doesn’t know.’ But when used alone, with no subject expressed, often with the perfect marker (as in cak2 or cak2 lèèw4), it means “I don’t know.”
  2. The verb faaw4 means ‘to hurry, rush’, and can be negated as in man2 bòò1 faaw4 [3sg neg rush] ‘S/he doesn’t hurry/isn’t hurrying.’ But when used alone as an imperative, with no subject expressed, often repeated, or with an appropriate sentence-final particle (as in faaw4 faaw4 or faaw4 dee4), it means “Don’t hurry, Stop hurrying, Slow down.”
  3. Often, both positive and negative readings of verbs are available when the irrealis prefix si is used (with context or perhaps intonation doing the work); e.g. khaw3 si kin3 [3pl irr eat] could mean ‘They will eat it’ or ‘They will definitely not eat it’, with a meaning similar to the colloquial English expression “As if they would eat it.” The second meaning is made more likely by insertion of the directional paj3 ‘go’ before the verb (khaw3 si paj3 kin3 [3pl irr go eat] ‘As if they would eat it.’).


Nick

N. J. ENFIELD | FAHA FRSN | Professor of Linguistics
Head, Post Truth Initiative https://posttruthinitiative.org/
Director, SSSHARC (Sydney Social Sciences and Humanities Advanced Research Centre)
Faculty of Arts and Social Sciences
THE UNIVERSITY OF SYDNEY
Rm N364, John Woolley Building A20 | NSW | 2006 | AUSTRALIA
T +61 2 9351 2391 | M +61 476 239 669
orcid.org/0000-0003-3891-6973
E nick.enfield at sydney.edu.au | W sydney.edu.au | nickenfield.org

From: Lingtyp <lingtyp-bounces at listserv.linguistlist.org> on behalf of Mark Donohue <mark at donohue.cc>
Date: Friday, 1 June 2018 at 7:13 AM
To: David Gil <gil at shh.mpg.de>
Cc: "LINGTYP at LISTSERV.LINGUISTLIST.ORG" <lingtyp at listserv.linguistlist.org>
Subject: Re: [Lingtyp] Does bipolar polysemy exist?

In Tukang Besi, an Austronesian language of Indonesia, the verb
'know' is dahani; verbs are generally prefixed to agree with the
S,A argument, thus

ku-dahani 'I know'
'u-dahani 'you know'

etc.
In some contexts (imperatives, emphatic generic (TAME-less)
assertion), the prefix can be omitted.

dahani 'I/you certainly know'

Now, I've heard this (and only this) verb used, in the absence of
any inflection, with exactly its opposite meaning

Dahani 'I don't know'

in what might be a sarcastic sense. Unlike the antonymic uses of
many adjectives in many languages, including English, this use of
dahani is actually a simple (though emphatic) negation of the
verb's 'normal' meaning.

-Mark

On 1 June 2018 at 04:43, David Gil <gil at shh.mpg.de> wrote:

Yes, as Matti points out, negative lexicalization is not quite as
rare as I was implying. Yet at the same time, I suspect that it
might not be as common as Matti is suggesting. Looking at the
examples that he cites in his Handbook chapter, I suspect that in
some cases the negative counterpart isn't "just" negative, but is
also associated with some additional meaning components.

Matti doesn't list "good"/"bad" as being such a pair, though,
citing work by Ulrike Zeshan on sign languages, he does mention
other evaluative concepts such as "not right", "not possible",
"not enough". In English, at least, "bad" is not the negation of
"good"; it is the antonym of "good". There is all kind of stuff in
the world which we attach no evaluative content to, and which
hence is neither good nor bad. (It's true that in English, in many
contexts, the expression "not good" is understood as meaning
"bad", which is interesting in and of itself, but still, it is not
necessarily understood in this way.) While I have no direct
evidence, I would strongly suspect that in languages that have
lexicalized expressions for "not right", "not possible", and "not
enough", the meanings of these expressions will be the antonyms of
"right", "possible" and "enough", and not their negations.

Under lexicalized negatives in the domain of tense/aspect, Matti
lists "will not", "did not", "not finished". Well, the one case
that I am familiar with that falls into this category is that of
the Malay/Indonesian iamative/perfect marker "sudah", which has a
lexicalized negative counterpart "belum". However, "belum" isn't
just "not sudah"; it also bears a strong (if not invariant)
implicature that at some point in the future, the state or
activity that is not complete will be completed, in fact just
like the English expression "not yet". (When people in Indonesia
ask you if you're married, it's considered impolite to answer with
a simple negation "tidak"; you're supposed to say "belum",
precisely because of its implicature that you will, in the future,
get married. By avoiding this implicature, the simple negation
"tidak" is viewed as a threat to the natural order of things, in
which everybody should get married.)

I suspect that many if not all of the cases characterized by Matti
as "lexicalized negatives" will turn out to be associated with
some additional meaning component beyond that of "mere" negation.





On 31/05/2018 20:06, Miestamo, Matti M P wrote:


Dear David, Zygmunt and others,

negative lexicalization is not quite as rare as David seems to
imply. There is a cross-linguistic survey of this phenomenon by
Ljuba Veselinova (ongoing work, detailed and informative
presentation slides available through her website), and Zeshan
(2013) has written on this phenomenon in sign languages. There's
also a short summary in my recent Cambridge Handbook of
Linguistic Typology chapter on negation (preprint available via
the link in the signature below).

Best,
Matti

--
Matti Miestamo
http://www.ling.helsinki.fi/~matmies/





Zygmunt Frajzyngier <Zygmunt.Frajzyngier at COLORADO.EDU> wrote on 31.5.2018 at 17.23:

David, Friends,
Related to David's post, not to the original query.
In any individual language, there may exist a few ‘Not-X’ items.
In Mina (Central Chadic) there is a noun which designates
‘non-blacksmith’.
In several Chadic languages there exist negative existential
verbs unrelated to the affirmative existential verb.
Zygmunt

On 5/31/18, 5:52 AM, "Lingtyp on behalf of David Gil"
<lingtyp-bounces at listserv.linguistlist.org on behalf of
gil at shh.mpg.de> wrote:




On 31/05/2018 13:37, Sebastian Nordhoff wrote:

On 05/31/2018 01:18 PM, David Gil wrote:
A point of logic. "Not X" and "Antonym (X)" are distinct notions, and
the original query by Ian Joo pertains to the former, not the latter.

but is there any (monomorphemic) lexeme which expresses not-X which is
not the antonym of X?

But how many (monomorphemic) lexemes expressing not-X are there at all?
The only ones I can think of are suppletive negative existentials, e.g.
Tagalog "may" (exist) > "wala" (not exist). Even suppletive negative
desideratives don't quite fit the bill, e.g. Tagalog "nais"/"gusto"
(want) > "ayaw", which is commonly glossed as "not want", but actually
means "want not-X", rather than "not want-X" — "ayaw" is thus an antonym
but not a strict negation of "nais"/"gusto".

What is not clear to me about the original query is whether it is asking
for negations or for antonyms.



--
David Gil

Department of Linguistic and Cultural Evolution
Max Planck Institute for the Science of Human History
Kahlaische Strasse 10, 07745 Jena, Germany

Email: gil at shh.mpg.de
Office Phone (Germany): +49-3641686834
Mobile Phone (Indonesia): +62-81281162816














--
Enrique Bernárdez
Professor of General Linguistics
Departamento de Lingüística, Estudios Árabes, Hebreos y de Asia Oriental
Facultad de Filología
Universidad Complutense de Madrid
_______________________________________________
Lingtyp mailing list
Lingtyp at listserv.linguistlist.org<mailto:Lingtyp at listserv.linguistlist.org>
http://listserv.linguistlist.org/mailman/listinfo/lingtyp

-------------- next part --------------
A non-text attachment was scrubbed...
Name: No Motion in Caused Motion Construction.pdf
Type: application/pdf
Size: 37367 bytes
Desc: No Motion in Caused Motion Construction.pdf
URL: <http://listserv.linguistlist.org/pipermail/lingtyp/attachments/20180609/6cd580f9/attachment.pdf>

