[Corpora-List] Boot Camp (Continued...)
Wolfgang Teubert
w.teubert at bham.ac.uk
Tue Aug 19 08:02:15 UTC 2008
Dear All,
It seems to me our discussion is trawling off in various directions which all are enlightening in their own ways. Some contributions show what has been dubbed a language engineering angle. Their primary interest, as I see it, lies in the development of language technology to come up with certain useful applications like corpus-based machine translation, knowledge extraction, automatic abstracting and expert systems, for instance. As long as they can show that the performance of their systems goes up, they will get more funding and are happy. They are not interested in a discussion of meaning. I once was told by Dan Melamed that as far as he is concerned, meaning is an illusion.
Others focus on the joys and horrors of corpus compilation, the availability of corpora and the ensuing copyright problems. These are certainly important issues, and with the ongoing privatisation of electronic academic discourse and of the electronic data we use, it is only a matter of time before the only way to gain access will be to find sponsorship from these discourse providers and carry out the kind of research they are interested in. Google, Microsoft and NewsCorp spring to mind. They will soon play a role in academic research comparable to that of Monsanto or GlaxoSmithKline in their respective fields.
Perhaps, though, it might be worth our while to resume a specifically linguistic discussion. I am, for instance, interested in meaning. And it seems to me that for each school of linguistic thought, meaning means something different. Some contributors apparently think that it is a matter of tolerance not to discriminate against any of them. I agree. But linguistics, as I see it, is not only a belief system allowing everyone to be happy in their own way. Because it is, for me at least, to quite a considerable extent part of the human, the interpretive sciences, it thrives on clashes of argumentation and interpretation. To my taste, not nearly enough of that is taking place. There are already many sectarian tendencies in linguistics, obvious from the lists of references at the end of academic papers: one tends to quote only people within one's own camp. Inside these camps there is the same kind of stated homogeneity as one finds among the frontbenchers of our parliamentary parties. But the more monovocal a discourse is, the more static and the less open to innovation it will be. Only a plurivocal, democratic discourse can come up with new ideas.
Being a corpus linguist, I am not affiliated with any of the many cognitive camps. Some of them, it seems to me, have moved away from their former vicinity to the philosophy of mind. They are quite content to provide models that allow the representation of meaning in some formalistic and abstract way, but in no way assume their models to be isomorphic with, or even just functionally equivalent to, the workings of the mind. (Indeed, I recently read a textbook of cognitive linguistics in which the word 'mind' did not occur at all.) I see this as a return to the happy days of the 1960s and 70s, when people like Greimas and Pottier developed their structuralist theory of semantic features.
As similar as this theory is to some of what has been said about mental concepts, sememes were never more than abstract notions and were never thought to have ontological status. Originally, at least, what cognitive linguistics called mental/cognitive concepts/representations were said to be isomorphic models of what we could actually find in the mind if only we had access to it. It is only in this respect that I find cognitive linguistics flawed, namely in those varieties of it that are concerned with the way thought is turned into an utterance and vice versa. These varieties are, from my outside perspective, connected with names like Langacker, Lakoff, Jackendoff, Levinson, Sperber/Wilson, Wierzbicka, Fodor, Pinker, Chomsky. For all of them, even though some of them reject the cognitive label, the meaning of an utterance is its mental representation, a representation in some kind of mentalese.
There are, of course, differences concerning the nature of these representations. For Lakoff, they are non-symbolic, embodied entities of experience. For Pinker, they are symbolic. Either way, this brings in a complication. What is it that makes a non-symbolic entity symbolic? What needs to be added, and where does this addition come from? If mental concepts are symbolic, they need to be interpreted, but by whom? By Searle's homunculus or by Dennett's central meaner? The other problem is that the language of thought is held to be language-independent. But is it really possible for one cognitive linguist to convince a colleague of what the content of a mental concept is, if all they can come up with is a translation into some natural language? Levinson's inconclusive work on Tzeltal springs to mind. A third problem is that we, the language users, are obviously unaware of our mental representations. Does that mean we are also unaware of our thoughts? What about intentionality? Does the mental processing of utterances mean that mental concepts are processed as uninterpreted symbols, just as a computer summarises a text without knowing what it means? Is meaning, as Melamed would probably like to have it, no more than a supervenient feature, a figment of our imagination?
I know that for many, what I say here is no more than a crude and mistaken caricature of the status of mental concepts in the various cognitive camps. Again I announce my willingness to be converted to the camp that can show me the 'true' mental representation of the word 'globalisation'. Could it ever be more than what has been said in the discourse about globalisation? Once it has been translated into a language of thought, does it not have to be translated back into a natural language again for someone like me? Is this more than a triplication of the same content?
More recently I find that many cognitive linguists like to pass their mental concepts on to the neural sciences. They then appear to become synaptically connected clusters of neurons firing. But once I have identified the neurons in question, do I then know what 'globalisation' means? Or are we told that it really does not matter a bit what it means as long as we behave in the prescribed manner?
For me, the meaning of 'globalisation' is all that has been said about globalisation. Meaning is only in the discourse, and nowhere else. Our task as language users (I do not believe that linguists have privileged access to meaning) is to collaborate in interpreting this discourse evidence. There is no single valid interpretation as such. As long as interpretations are based on evidence accepted as such by the interpretive community (Stanley Fish), they will have an impact on the discourse and add something to the meaning of 'globalisation'. Meanings and their interpretations remain provisional as long as the discourse goes on. The clash of different interpretations is what makes innovation, or progress, possible.
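To make concrete what I mean by discourse evidence: the raw material of any such interpretation is nothing more mysterious than the totality of contexts in which the word occurs. Below is a minimal keyword-in-context sketch of the kind of concordance a corpus linguist might start from; the corpus file name and the window size are only placeholders of my own, not a recipe.

    import re

    def kwic(path, keyword, window=6):
        # Crudely tokenise a plain-text corpus into lowercase word tokens.
        with open(path, encoding="utf-8") as f:
            tokens = re.findall(r"\w+", f.read().lower())
        # Yield one concordance line (left context, node word, right context) per hit.
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                yield "{:>45}  [{}]  {}".format(left, keyword, right)

    # 'corpus.txt' stands in for whatever discourse one has collected.
    for line in kwic("corpus.txt", "globalisation"):
        print(line)

Such concordance lines are not the meaning itself; they are the evidence over which the interpretive community argues.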
What language engineers are doing (and often doing very successfully) will never tell us anything about meaning. An automatic summarisation of a paper is never an interpretation of it. For me, however, the sole raison d'être for linguistics is to try and find ways to make sense of what is said.
To appeal to a sense of tolerance and let everyone be happy in their own way will not promote the new ideas we need. We have to show where we differ.
Cheers
Wolfgang
_______________________________________________
Corpora mailing list
Corpora at uib.no
http://mailman.uib.no/listinfo/corpora