People who speak tone languages aren’t really different from everyone else. In using tone in everyday speech, they are not driven by fierce hostility towards vowels and consonants or a devilish wish to frustrate language learners. As a speaker of two tonal languages (Mandarin and Fuzhou Min), I can tell you as much. Through this blog post, I hope to provide preliminary answers to two questions: What is tone, and where does tone come from?
Tone, intonation and stress are all ways in which we fine-tune the pitch of our voice to express differences in meaning. Pitch is often conceptualised on a scale from “low” to “high”, and the ability to manipulate pitch allows us to signal a question with a rising pitch or indicate stress in words like “present”. Tone is most commonly defined as contrastive pitch used to distinguish morphemic units. With tone, you can contrast words with identical segments but distinct pitch patterns, through differences in either pitch height (High vs. Low) or pitch shape (Level vs. Contour). In Mandarin Chinese, for example, almost every morpheme is associated with one of four tones, which have distinct pitch shapes. For tone language speakers, distinguishing words through tonal differences is as natural as doing so through differences in vowels and consonants. In the following sound file, you can observe how the sounds [ma] combine with tones in Mandarin to form morphemes with distinct meanings (“mother”, “hemp”, “horse” and “scold”, in this order).
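For the programmatically inclined, the four-tone system can be summarised as data. The sketch below uses the standard Chao tone numerals (1 = lowest pitch, 5 = highest) to describe each contour; the exact pitch values vary by speaker and are descriptive conventions, not measurements.

```python
# Mandarin's four lexical tones on the syllable [ma], described with
# Chao tone numerals (1 = lowest pitch, 5 = highest). The glosses are
# the classic textbook examples mentioned above.
MANDARIN_TONES = {
    1: {"contour": "55",  "shape": "high level",        "example": "mā", "gloss": "mother"},
    2: {"contour": "35",  "shape": "rising",            "example": "má", "gloss": "hemp"},
    3: {"contour": "214", "shape": "dipping fall-rise", "example": "mǎ", "gloss": "horse"},
    4: {"contour": "51",  "shape": "falling",           "example": "mà", "gloss": "scold"},
}

def describe(tone: int) -> str:
    """Return a one-line description of a Mandarin tone."""
    t = MANDARIN_TONES[tone]
    return f"Tone {tone} ({t['shape']}, {t['contour']}): {t['example']} '{t['gloss']}'"

for n in MANDARIN_TONES:
    print(describe(n))
```

The same segments [ma] plus a different tonal specification yield four distinct morphemes, which is exactly what “contrastive pitch” means.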
Tone is a highly areal feature. Although an estimated 50 to 70 percent of all human languages are tonal, the vast majority of these are clustered in Sub-Saharan Africa, Asia-Pacific, and Central and North America. Tone languages also differ considerably amongst themselves, and such differences often seem to be driven by language genealogy. One salient divide is between the so-called register tone languages and contour tone languages, which are respectively the norm in Africa and East Asia. Roughly speaking, register tone systems are made up of tones with level pitch (e.g., Yoruba’s high, mid and low tones), while contour tone languages have more complex tonal shapes. Fuzhou Min, for example, has a contour tone characterised by rising-falling pitch. Take a listen below (the word means “two”).
In many African languages, entire morphemes can consist solely of tones – these tones seem able to “float” without being permanently attached to segments. The definite article in Bambara is said to be an example of this. The word for “the” in Bambara is a floating low tone, which docks onto nouns and changes their pitch shapes. To exemplify, the word for “river” in Bambara is pronounced [bá] (the acute accent denotes a high tone) in isolation, but “the river” is rendered as [bâ] (the circumflex denotes a falling tone). Here, we can construe the falling tone as the combination of a high tone (from the noun) and a low tone (from the floating definite article). The discovery of floating tones played an important role in launching the theory known as Autosegmental Phonology, which continues to dominate the way we represent our objects of study in phonology. The 1970s; those were glorious days for tones.
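The Bambara docking process can be sketched as a toy autosegmental model, with tones living on their own tier separate from the segments; this is a deliberately minimal illustration, not a serious analysis of Bambara.

```python
# A toy model of autosegmental tone docking, based on the Bambara example
# above: the definite article is a floating low tone that attaches to the
# end of the noun's tonal tier, so the H of [bá] plus the floating L yields
# the HL falling contour heard in [bâ].
def dock(noun_tones: list[str], floating_tone: str) -> list[str]:
    """Attach a floating tone to the end of a word's tonal tier."""
    return noun_tones + [floating_tone]

river = {"segments": "ba", "tones": ["H"]}   # [bá] 'river'
the_river_tones = dock(river["tones"], "L")  # 'the river'
print(the_river_tones)  # ['H', 'L'] — realised as a falling contour: [bâ]
```

The key autosegmental insight captured here is that the article has tonal content but no segmental content of its own.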
A frequent question for me at college formals is why people would want to “do” tones. I get where they are coming from. After all, English and many other languages seem perfectly able to cope without tones. Even if people take a liking to tones, why the hell do some languages use a monstrously large number of them (some Cantonese varieties reportedly have 10 tones)? A whole branch of tonology has been devoted to these questions under the banner of “tonogenesis”. This is what got me interested in tones, so allow me to indulge in an example.
The best-known source of tones is the voicing contrast in obstruents (stops like /b/ and fricatives like /s/). The story goes like this. When you produce a voiced obstruent, say /b/, you tend to lower your larynx and draw your arytenoid cartilages together, allowing your vocal folds to vibrate (for a very close view of vocal fold vibration, see the video at the end of this paragraph). These movements often depress pitch on following vowels. As time goes by, your listeners may pick up on this lowering effect as a consistent correlate of voicing. Then one day, you wake up to find your voicing contrast has gone (language change is brutal, man). Your listeners panic – how are they supposed to deal with all these new homophones? In desperation, they turn to pitch as the key to distinguishing pairs of words, at which point we may say the language has become tonal. On this hypothesis, if English were to lose the contrast between /b/ and /p/, we’d expect words like “bet” to develop a low tone and “pet” a high tone. The scenario I sketched above may seem far-fetched, but we have very good evidence that this exact process happened in Khmu in Northern Laos, among other languages.
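The hypothetical bet/pet scenario can be sketched as a little simulation; the segment mappings below are simplified illustrations of the general mechanism, not a claim about any actual sound change.

```python
# A schematic of tonogenesis through loss of a voicing contrast: pitch
# lowering after voiced obstruents is reinterpreted as a low tone once
# the voiced and voiceless onsets merge. Purely illustrative.
VOICED = {"b", "d", "g", "z", "v"}
DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def transphonologise(word: str) -> tuple[str, str]:
    """Merge initial voicing away and move the old contrast onto tone."""
    onset = word[0]
    tone = "L" if onset in VOICED else "H"   # lowered pitch becomes low tone
    merged = DEVOICE.get(onset, onset) + word[1:]
    return merged, tone

print(transphonologise("bet"))  # ('pet', 'L')
print(transphonologise("pet"))  # ('pet', 'H') — the homophones rescued by tone
```

Segmentally the two words are now identical; the contrast survives only as a tonal one, which is the moment the toy language “becomes tonal”.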
This post has not touched on the more exciting (in my view, anyway) phenomenon of tone sandhi, which I hope to write about in my next contribution. Meanwhile, I have prepared some take-away reading:
1. The best introduction to tone:
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.
2. On tonogenesis and its manifestation in Kammu:
Hombert, Jean-Marie. 1975. Towards a theory of tonogenesis: an empirical, physiologically and perceptually based account of the development of tonal contrasts in languages. University of California, Berkeley Doctoral dissertation.
Svantesson, Jan-Olof & David House. 2006. Tone production, tone perception and Kammu tonogenesis. Phonology 23(2). 309–333.
3. Bambara floating tone:
Clements, Nick & Kevin C. Ford. 1979. Kikuyu tone shift and its synchronic consequences. Linguistic Inquiry 10. 179–210.
4. Autosegmental Phonology:
Goldsmith, John A. 1976. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68.
The modern discipline of linguistics, especially historical linguistics, owes a lot to the rather more arcane field of philology, a subject which had its greatest flowering in the nineteenth century—the term is still used by some as a synonym for ‘historical linguistics’. Traditional philology dealt with European, Middle-Eastern and South Asian languages, aiming to trace their histories and thus reconstruct their prehistories. To do these things, it was first important to describe the oldest records of these languages in detail. This ‘basic’ descriptive work might sound straightforward, but, as any historical linguist can tell you, the messy nature of the evidence means that it’s anything but.
Let’s take English as an example. A philologist or historical linguist interested in mapping out developments that have taken place in the history of English needs to have a clear idea of what English was like at different points in time. The earliest period in which English was written is the Old English period—this covers a relatively long stretch of time, from as early as AD 600 up to the year 1066 or so, and lots of change happened in this time. To work out what the language looked like at different points in this period and so what change happened, it’s obvious what we need to do: we need to take all our documents in Old English, order them by the dates they were written, interpret them all and describe how language is used in each.
This is much easier said than done. For one thing, ancient and medieval documents are very rarely dated—unlike in modern published books, there was no custom of writing the year of creation at the beginning of every codex. Some sorts of documents—particularly ‘charters’ and other legal documents—do have explicit dates, while others can be associated with particular historical figures. So one thing we can do is look at features of the language of just those texts which can be dated, and then try to date the others by comparison. One famous attempt to do this with Old English was the so-called Lichtenheld Test, named after the scholar who first made the relevant observation (Lichtenheld 1873). I won’t go into the drier linguistic details, but in simple terms this was built on the observation that a particular syntactic pattern of adjectives (that of ‘weak’ adjectives occurring without a determiner) occurred often in Beowulf, which was generally believed to be a very early text, and barely at all in the poetry of Cynewulf, a poet who can be confidently dated much later in the OE period. The obvious conclusion is that this pattern was possible in ‘early’ Old English, but fell out of favour over time, and so that it should be possible to date a sample of Old English by how often it uses this pattern.
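The logic of the test can be sketched in a few lines; the counts below are invented for illustration and are not real corpus figures.

```python
# A sketch of the reasoning behind the Lichtenheld Test: estimate a text's
# relative date from how often it uses the 'weak adjective without
# determiner' pattern. All counts here are invented for illustration.
texts = {
    "Beowulf":  {"pattern": 40, "total_adjectives": 200},
    "Cynewulf": {"pattern": 2,  "total_adjectives": 180},
}

def pattern_rate(counts: dict) -> float:
    """Frequency of the dated pattern per adjective."""
    return counts["pattern"] / counts["total_adjectives"]

# If the pattern fell out of favour over time, a higher rate suggests an
# earlier composition date.
ranked_earliest_first = sorted(texts, key=lambda t: pattern_rate(texts[t]),
                               reverse=True)
print(ranked_earliest_first)  # ['Beowulf', 'Cynewulf']
```

Note that the inference only works if the pattern’s decline over time is itself established independently, which is precisely where the circularity discussed next creeps in.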
It turns out that there are two problems with this. Firstly, it just doesn’t work. The test was carried out to its fullest extent by Adriaan Barnouw (Barnouw 1902), and the datings it gives for Old English poetry just don’t match any of the other evidence very well. The second problem is that it’s circular. Beowulf was at that point widely agreed to be an especially ancient poem, and many scholars still hold this view. The problem is that much of the evidence for the idea that Beowulf is a very old text comes from the ‘fact’ that its language is very archaic—but at the same time, one of our best pieces of evidence for what ‘archaic’ Old English is like is the language of Beowulf!
Nevertheless, we might still suggest that the way this odd adjectival pattern in Old English differs from text to text is best explained by assuming that its popularity fell over time, even if the evidence doesn’t fit this picture very straightforwardly. But this leads us into another challenge faced by scholars of historical languages. The clearest observation about this pattern is that it’s never used in prose texts—it only occurs in poetry. So evidently if we’re going to describe how Old English was used differently at different times, we’re also going to have to describe how it was used differently in different genres. The problem is that what texts survive from different genres is inconsistent over time—some periods are better represented in Biblical translations, some with saints’ lives, some with different sorts of poetry, some with legal charters… (Incidentally, this problem is multiplied again by the existence of different dialects from different regions). In short, given that we don’t really have enough reliably datable material of enough different genres from every period, how can we ever confidently work out why a particular writer chose a particular linguistic expression? How can we tell whether our odd adjective pattern was used more in some texts because they were composed earlier, or whether it was a feature of poetic style that some poets simply preferred?
In short, it’s a messy business. Our surviving evidence is a tiny, scattershot selection from an unknowable—but undoubtedly vastly larger—whole.
To end on a cheerful note, however, this makes it all the more exciting that we are still making real, unqualified advances in our understanding of this material. A particularly resonant recent example is put forward in Walkden (2013), dealing with the Old English word hwæt. This is famously the first word of the poem Beowulf, traditionally translated vaguely as an interjection (‘Lo!’) and more recently in Seamus Heaney’s lyrical translation and accompanying introduction as ‘So.’—in either case, a word standing outside clausal syntax used by the poet to call for the audience’s attention. Walkden shows that these are not quite right. Hwæt does actually affect clausal word order, so it must be inside the clause after all. By collecting and comparing all the times it occurs in Old English and Old Saxon, Walkden shows that hwæt introduces exclamative clauses, rather like Modern English how in ‘how cold it is today!’, or what in ‘what a wonderful piece of news that is!’
So thanks to Walkden’s research, we can now propose a new, more accurate translation of the first sentence of this most translated of texts—How much we have heard of the might of the nation-kings in the ancient times of the Spear-Danes!
The concept of “word” would seem fairly central to linguistics. One of the definitions of “syntax” given by the Oxford English Dictionary is:
“The ways in which a particular word … can be arranged with other words …”.
And “morphology” is defined:
“the structure, form, or variation in form … of a word or words …”.
Semanticists talk about “word meaning”, phonologists about “word stress” and so on. All this is very well – but what is a “word”? This question, it turns out, is like most other questions in linguistics in not being answered as easily as we might like. A big part of the problem arises from conflicts between different criteria for wordhood. Take, for example, the element ‘m in I’m. From a purely grammatical point of view, ignoring the sound (and writing) side of things, ‘m acts like a word – shown most clearly by the fact that it can always be substituted with am with no real change in meaning: I’m playing means the same as I am playing, and so forth. But ‘m isn’t like words in other respects: it doesn’t contain a vowel, and it can’t occur on its own (am shows much more wordy behaviour in both respects). Thus while am needn’t immediately follow I, ‘m must:
Other items like ‘m in English are things like the ‘ll of I’ll be playing, the n’t of isn’t, hasn’t etc., the ‘s in the king of France’s head and so on. These can be called “clitics”. One definition of a clitic is that it is a grammatical word but not a phonological word. Grammatically, ‘m behaves like am, ‘ll behaves like will and n’t behaves like not, so they can be said to be grammatical words*. But they can’t appear on their own: they must form a single phonological unit with another item, being pronounced (and written, not completely incidentally) as if they were part of it. Neither can they bear stress, like more typical words:
On these sound-based criteria, then, clitics don’t seem to be words. There are a lot of complications here and I’m oversimplifying some issues slightly, but it’s hopefully clear that the issue of what a word is isn’t terribly clear-cut. To make matters worse, some items seem able to be both full words and clitics – e.g. the usually doesn’t bear any stress and is pronounced quite weakly, like a clitic, but sometimes it is stressed, shown most clearly in something like I didn’t say A book, I said THE book. And there’s dispute about whether some items in some languages are clitics or just inflections.
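The “grammatical word but not phonological word” definition lends itself to a toy classifier; the criteria and feature values below are simplified illustrations of the diagnostics discussed above, not a serious analysis of English.

```python
# A toy classifier for the clitic definition above: an item is a clitic
# if it behaves as a grammatical word (substitutable by a full word) but
# fails the phonological-word tests (standing alone, bearing stress).
ITEMS = {
    "am":  {"full_word_substitute": None,   "stands_alone": True,  "bears_stress": True},
    "'m":  {"full_word_substitute": "am",   "stands_alone": False, "bears_stress": False},
    "'ll": {"full_word_substitute": "will", "stands_alone": False, "bears_stress": False},
    "n't": {"full_word_substitute": "not",  "stands_alone": False, "bears_stress": False},
}

def classify(item: str) -> str:
    props = ITEMS[item]
    grammatical_word = props["stands_alone"] or props["full_word_substitute"] is not None
    phonological_word = props["stands_alone"] and props["bears_stress"]
    if grammatical_word and not phonological_word:
        return "clitic"
    return "word" if phonological_word else "affix?"

print(classify("'m"))  # clitic
print(classify("am"))  # word
```

Real diagnostics are gradient and language-specific, of course, which is exactly why the concluding paragraph below is so cautious.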
To conclude, then, the idea of a word is somewhat complicated. Some things behave like words in some ways but not others; some words can sometimes be substituted for things that are not words, or at least not on all criteria. If there is a moral, it is that even the most basic concepts (in linguistics, and presumably elsewhere) cannot necessarily be taken for granted.
* – Possessive ‘s is a bit more complicated. There’s no full word that can be substituted for it, unless you change the order of things to get the head of the King of France, and even then the two aren’t totally equivalent. But unlike items like plural -s (e.g. in kings) it doesn’t attach to words but to whole phrases: we say the king of France’s head but not the king’s of France head. As it isn’t a word by all criteria and also isn’t an affix like plural -s, it gets lumped in the clitic category.
Dixon, R.M.W., & Aikhenvald, Alexandra. 2003. Word: A Cross-linguistic Typology. Cambridge: Cambridge University Press. See particularly the introduction.
The title is not a typo, but you’ll have to work out what it means (if anything) for yourselves! By way of introduction to this linguistic- and universal-themed ramble, have a little read through the following pretty lengthy quote from C.S. Lewis’ 1938 novel Out of the Silent Planet. In this passage, Ransom, a Cambridge philologist, comes across an alien creature on the planet Mars, or Malacandra as it is known in the book:
“A lifetime of linguistic study assured Ransom almost at once that these were articulate noises. The creature was talking. It had language. If you are not a philologist, I am afraid you must take on trust the prodigious emotional consequences of this realisation in Ransom’s mind. A new world he had already seen – but a new, an extraterrestrial, a non-human language was a different matter… The love of knowledge is a kind of madness. In the fraction of a second which it took Ransom to decide that the creature was really talking, and while he still knew that he might be facing instant death, his imagination had leaped over every fear and hope and probability of his situation to follow the dazzling project of making a Malacandrian grammar. ‘An Introduction to the Malacandrian Language’ – ‘The Lunar Verb’ – ‘A Concise Martian-English Dictionary’ … the titles flitted through his mind. And what might one not discover from the speech of a non-human race? The very form of language itself, the principle behind all possible languages, might fall into his hands.”
There are many points in this passage to talk about, but I’d like to focus on the last sentence – the principle behind all possible languages.
Noam Chomsky is famous for many things, one of which is the idea of Universal Grammar (UG). In brief, UG represents the means by which a child can constrain the types of hypotheses they make about the language(s) they are acquiring such that the grammar they acquire more or less resembles that of the older generations. When a child hears a sentence, there are an infinite number of possible ways to generate such a sentence, yet the types of grammatical rules that they hypothesise to be at work in generating the sentences of that language represent only a tiny fraction of all the infinite logical possibilities. UG was conceived as the innate knowledge that a child has which allows the child to entertain just that tiny fraction of all the logical possibilities. In short, UG makes the problem of language acquisition tractable (in the mathematical sense).
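The filtering role of UG can be sketched schematically; the rule names below are invented purely for illustration of the idea, not actual proposals from the acquisition literature.

```python
# A schematic of UG as a filter on the learner's hypothesis space: of the
# many rules logically compatible with the input, the child only ever
# entertains the UG-licensed ones. Rule names are invented illustrations.
all_hypotheses = [
    {"rule": "structure-dependent movement", "ug_licensed": True},
    {"rule": "linear: front the 3rd word",   "ug_licensed": False},
    {"rule": "head-initial verb phrase",     "ug_licensed": True},
    {"rule": "mirror-reverse the sentence",  "ug_licensed": False},
]

def entertain(hypotheses):
    """The learner considers only the UG-licensed hypotheses."""
    return [h["rule"] for h in hypotheses if h["ug_licensed"]]

print(entertain(all_hypotheses))
# ['structure-dependent movement', 'head-initial verb phrase']
```

Shrinking an infinite hypothesis space down to a small licensed subset is what “making acquisition tractable” amounts to in this schematic sense.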
UG is thus a mathematical and logical necessity (Nowak, 2006). The existence of something that constrains the hypotheses of the language acquirer is thus on firm conceptual ground. The question of what UG is like, what it consists in and of, however, is another matter.
The earlier Chomskyan approach to this question was that UG is innate, human-specific, language-specific, and rich in content. The current Chomskyan approach, however, is that UG is innate, human-specific, language-specific, but impoverished in content. The reason for this change is the shift to what Chomsky calls Third Factors, i.e. “principles not specific to the faculty of language” (Chomsky, 2005: 6). The exact nature of these Third Factors is currently under discussion, but the suggestion is that Third Factors include various principles of computation, which are not specific to language but which nonetheless play a role in shaping the forms that language can take. From an evolutionary perspective this seems to be desirable. Not only has language in its current form arisen in a reasonably short span of evolutionary time (at most, seven million years, when humans split from their most closely related living species, i.e. chimpanzees), but it is unlikely that language evolved in isolation from other biological and/or mental properties, i.e. language has co-evolved (see Lenneberg, 1967).
If some of these third factor principles of computation are mathematical principles, then it is possible that they are not simply non-specific to the faculty of language, but also non-specific to the human species; in fact, they’d be more like laws of nature. If that is the case, alien languages really would provide a gateway to the principle (or maybe principles) behind all possible languages. A crazy note to end on perhaps, but then the passage did say that the love of knowledge is a kind of madness …
Chomsky, N. (2005). Three Factors in Language Design. Linguistic Inquiry, 36(1), 1–22.
Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.
Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, MA: Belknap Press of Harvard University Press.
My research is on the Englishes spoken by Aboriginal people in Australia. Aboriginal English is a fascinating dialect group which is often viewed negatively by outsiders but which is nonetheless hugely important to its speakers, many of whom no longer have access to their ancestral languages. Today’s blogpost is therefore on a topic close to my heart. As with Rowena’s previous post on sign language FAQs, it evolved from many conversations over beers, food, or in any other situation that made me end up talking about my research. Versions of the 4 statements below have been cropping up more or less frequently in these conversations, and each time I’ve felt like they needed a better response than what I could manage on the fly. And that’s what you have a blog for, right?
I thought we would start out with a classic (which has already been alluded to in Eleni’s post on language and dancing, amongst others). Sometimes called the Sapir-Whorf hypothesis, the idea that language has a dominant influence on thought is an interesting point to start a debate on endangered languages. In its strongest form, the hypothesis is that each language embodies a worldview of its own, which can be contrasted with the worldviews inherent in other languages. Thus it follows that a speaker of Arrernte, an Australian Aboriginal language, will see the world in a fundamentally different way than a speaker of English, simply because of linguistic differences between the languages. However, empirical evidence such as Berlin and Kay’s 1969 study on universals in colour term semantics, as well as theoretical criticism (most notably Pinker 1994), means the original version of this idea has fallen from grace in contemporary linguistics. Nevertheless, the Sapir-Whorf hypothesis still survives, both in beer gardens in Cambridge, and in the urban myth that Inuits have countless words for snow – more on this interesting and super-pervasive myth here.
But wait, does this mean one language is as good as another? That would mean there was no reason to care about – or save – endangered languages. Worse, even, being “stuck” with a less widespread language could then be nothing but a barrier to getting an education and a job. Now, to answer these questions properly I would need an entire blog post just on this subject (or even a book). I’ll try to be brief, but if you’ve got more time on hand, you might want to take 10 minutes to view this TEDx video by Felicity Meakins.
You’ll notice Felicity saying that “encoded in [Gurindji] grammar is a different world view”. Now, that might sound familiar, but what she alludes to is a weaker and more widely acknowledged version of the Sapir-Whorf hypothesis, whereby rather than determining the way we think, the particular characteristics of a language may influence how we draw distinctions, or how we categorise things in our mind. What she emphasises is not only the differences in pronunciation, grammar and lexis between English and Gurindji, but also how the two languages convey different ways of conceptualising the world – e.g. with reference to ourselves (left or right in English), or to external directions (north, south, east and west in Gurindji). And if you ask speakers of endangered languages themselves, they will tell you that their language and culture are intricately linked, and how losing a language feels like losing a connection to culture and group identity.
In the video above, Felicity Meakins shows some of the potential problems caused by the lack of awareness of Indigenous languages in education. Similarly, Diana Eades has conducted a number of studies on misunderstandings in courtroom settings caused by linguistic differences between speakers of minority and majority languages (and dialects). The problems facing second language English speakers with an endangered first language are a point of discussion in Australia, among other places, and are currently coming into focus for both linguists and the public.
For scholars, endangered languages have plenty of practical implications in and of themselves. Many great linguistic discoveries have been made on the basis of a small language that few from the outside had heard about. And there’s knowledge of the culture in the languages that can, for instance, help archaeologists understand the motifs of cave paintings, or ancient migration patterns, better.
Language death is what happens when a language no longer has any native speakers. The language may be recorded (for instance on tape, or in dictionaries) and can be revitalised, but language death will still have serious effects on the language itself, such as loss of complexity and dialectal variation. Furthermore, as I’ve argued above, the community associated with the language loses a significant connection to their culture and traditions, and to their own identity as a social group. But what does this have to do with me, you might ask? Well, language is knowledge. Of a culture, of a way of living, of the past. All this is worth keeping! So if you’re the kind of person who thinks that knowing as much as possible about the world is important, and that linguistic and cultural variety is amazing, then I hope you find that small Australian Aboriginal, Chinese, or Celtic languages are worth preserving.
In Romeo and Juliet, Shakespeare wrote the following famous lines:
What’s in a name? that which we call a rose/
by any other name would smell as sweet.
There are many linguistic aspects that one could highlight about these two lines, and many ways in which one could answer Juliet’s question. I want to highlight a recent story that raises many linguistic issues with respect to the use of proper names, and has far-reaching consequences for the lives of certain people.
This issue is a news item that I read a couple of months ago. This June, the highest court in Malaysia apparently settled a question that it had been dealing with for some time: the question was whether the Catholic Malaysian newspaper Herald (http://www.heraldmalaysia.com/) could use the word “Allah” when referring to the Christian god.
The legal battle had been going on for a couple of years, and the issue seems to go back to at least 1984, when “the use of the word ‘Allah’ was prohibited in the Bahasa Malaysia version of the publication of the Herald newsletter.” (http://goo.gl/IGRi2K)
From what I understand, the reason for the conflict is the following: people who are in favour of the ban argue that “Allah” is an Arabic word that refers to the Muslim god, whereas Malaysian Christians argue that the word has been used for a long time in Malay, meaning “God”, and not (just) referring to the Muslim god (http://goo.gl/C7F4LK).
The linguistic issue, as I see it, is the following. There is a term in Arabic, “Allah” that some people understand to be a proper name, i.e. a linguistic expression that refers to a particular individual (setting aside certain obvious questions here). In this sense, when speakers of Malay use this term, they refer to that entity. Since that entity is the Muslim god, the argument seems to be that the name cannot be used in a Christian context, as it could be misleading to certain people. Indeed, Al Jazeera wrote that “authorities say using ‘Allah’ in non-Muslim literature could confuse Muslims and entice them to convert, a crime in Malaysia.” (http://goo.gl/C7F4LK)
Catholics contend, however, that in Malay, “Allah” can also be used to refer to the Christian god.
How is that possible? What is, after all, in a name? Proper names are special linguistic expressions in some ways: they have been argued to be “rigid designators”. This means that across all possible worlds (basically, whatever one could imagine the world to be like), a proper name always refers to the same entity.
Some philosophers (for example, Bertrand Russell and Gottlob Frege) argued that names signify properties of the people they refer to; Frege writes in his “Über Sinn und Bedeutung” (On sense and reference) that the name “Aristotle” could be understood as the person of whom “the pupil of Plato and the teacher of Alexander the Great” holds.
Philosophers who argue in favour of rigid designation, however, point out that properties like “the pupil of Plato and the teacher of Alexander the Great” are contingent; they can in principle be true or false. Imagine that for some reason, Aristotle didn’t teach Alexander the Great, but someone else did. The property “the pupil of Plato and the teacher of Alexander the Great” could not refer to Aristotle any more, yet all other things being equal, the name “Aristotle” would still refer to Aristotle. So while a list of properties like “the pupil …” can change across situations that one can imagine, philosophers like Saul Kripke argue that the reference of names doesn’t: their reference is fixed (hence the term rigid designator). “Aristotle” refers to Aristotle, whatever the circumstances (of course, this is an extremely simplified account of the matter; a lot of the discussion here is based on Abbott 2010, Chapter 5).
OK, so does this help us with analysing the issue? Proponents of the ban on “Allah” might seem to hold the view that “Allah” is a rigid designator that always refers to the Muslim god, independently of whether it is used in Arabic or in Malay, or in a Muslim or a Christian context. Are they not right, if proper names refer in this way? In a way, they are — but at the same time, when Malay-speaking Catholics use the term “Allah”, they do not intend to refer to the Muslim god, but to their own god.
It looks like we have two names that sound the same: let’s call them “Allah”-1 and “Allah”-2. These are two linguistic expressions, both of which might even be rigid designators, but which refer (rigidly) to two distinct entities (this opens many questions that I have to ignore here; some people would argue that given the close connection between the monotheistic religions in question, the two terms actually refer to the same entity; some people would argue that they do not refer at all, etc.). And it seems that people in favour of banning the use of “Allah”-2 are of the opinion that there can be one and only one such linguistic expression (arguably for the same reason that there can be one and only one relevant deity). People who use “Allah”-2 regularly beg to differ, I bet.
Why do we get this confusion with something like “Allah” but not with “Aristotle”? Well, for one thing, you might not have too many friends who are called “Aristotle”, causing relatively little confusion. But it might also be instructive to look at the expression “Allah” in some more detail. While it can be used as a name, it seems clear that even in Arabic, “Allah” (or rather الله) does not only refer to the Muslim god, but it can also mean “god” or “deity”. In fact, if one looks at the Bible in Arabic, “Allah” or “الله” is all over it, corresponding to “god” in the English version (thanks a lot to Sarah Ouwayda for the screenshot showing this). “الله” itself is a contraction of the Arabic definite determiner “al” and the common noun إلٰه “ʾilāh” meaning, as you might guess, “god”.
So the origin of the proper name “Allah” was at some point the phrase “the god”, which of course, being a definite description, refers to a single entity. The reference of a definite description like “the god”, however, can easily change from context to context. As such, a Muslim text could use “the god” and refer to the Muslim god, whereas a Christian text could use “the god” and refer to the Christian god. (In other words, definite descriptions are not necessarily rigid designators.)
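The contrast between a rigid designator and a context-dependent definite description can be sketched as follows; the string-valued “contexts” and “referents” are of course a crude stand-in for real contexts of use.

```python
# A toy contrast between a rigid designator and a definite description:
# the name's referent is fixed once and for all, while the description
# 'the god' is evaluated afresh against each context of use.
ARISTOTLE = "Aristotle (the individual)"  # rigid: same referent everywhere

def the_god(context: str) -> str:
    """A definite description: its referent depends on the context."""
    referents = {
        "Muslim text": "the Muslim god",
        "Christian text": "the Christian god",
    }
    return referents[context]

def aristotle(context: str) -> str:
    """A rigid designator: the context is irrelevant to the referent."""
    return ARISTOTLE

print(the_god("Muslim text"))     # the Muslim god
print(the_god("Christian text"))  # the Christian god
print(aristotle("any context"))   # Aristotle (the individual)
```

On this picture, the dispute turns on whether “Allah” in Malay still behaves like the description (context-sensitive) or has hardened into the name (context-insensitive).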
I am describing only one scenario here, in which both sides possibly have a Kripkean perspective on proper names; as Luca Sbordone points out, one could also imagine that each side subscribes to a different theory of referring. For example, Russellian Muslims could argue that “Allah” refers to the entity that has all the attributes that the Quran ascribes to it. Now if Catholics claim that “Allah” has all kinds of other properties as well, one can see how these views clash.
Summarising, from a linguistic point of view, there are a couple of levels to this story. Given that proper names are often held to pick out one and only one referent, the reasoning behind the court’s decision might have been that Muslims would be confused if a Christian text ascribed certain properties to an entity (the Muslim god) that doesn’t have them. Catholics would contend that the proper name they use has a different reference. What makes the issue even more complicated is that it’s difficult to decide out of context whether “Allah”-1 or “Allah”-2 is being used, as they sound the same.
Finally, this issue of ambiguity arguably doesn’t arise with a name like “Aristotle” because it picks out a unique referent and has always been a name, and thus a rigid designator. “Allah”, on the other hand, coming from “the god”, only became a proper name in the course of the expression’s history, and people have used it to refer to different entities at different times and different places (and in different languages).
In a follow-up to this post, I will mention somewhat similar cases, in which the bearers of proper names fear that their names are “deteriorating” into mere common nouns, a kind of inverse of the situation discussed here.
Thanks to Sarah Ouwayda for some clarifications about Arabic, and to Luca Sbordone for many helpful comments. I would like to stress that I don’t mean to hurt anyone’s religious feelings; this post is merely meant to highlight some interesting linguistic issues.
One of the things languages need to be able to do is distinguish who performed an action, who was affected by that action, and so on. There are a number of ways in which they do this. English largely just uses word order - Luke loves Lucy does not mean the same as Lucy loves Luke. A great number of other languages are very like English in this respect. Another common way of achieving the same goal is through case: different forms of a noun (or pronoun) which realise this sort of function. Latin is a well-known example of a language with case. In Latin, endings called nominative are used with “subjects” (prototypically, nouns which perform or are responsible for actions) and endings called accusative are used with “objects” (prototypically, nouns in some way acted upon). For example, compare this sentence -
Natural translation: “Lucy loves the dog”
- with this one:
Natural translation: “the dog loves Lucy”
The different endings on the nouns convey their different roles in each instance. English does something similar with some pronouns (compare “I” in I love Lucy with “me“ in Lucy loves me).
Other languages also use case for a similar purpose, but do things a bit differently. To illustrate this it will be helpful to introduce the distinction between “intransitive” and “transitive” verbs. With intransitive verbs only a single noun (or pronoun) is associated with the action, giving sentences like I fall, Luke died, she went away etc. With transitive verbs there are two associated nouns: I like linguistics, Lucy loves Luke etc.
In Latin (and English) the same case – the nominative – is used for the subject of intransitive verbs as for the subject of transitive verbs. Thus “Lucy arrived” in Latin is Lucia advenit (not Luciam advenit with the accusative, or any other case), and we say I fall and not me fall. But in many languages the form used with the “subject” of intransitive verbs is the same as that of the object of transitives, with a separate form for the subject of transitives. This makes the traditional terms “subject” and “object” a bit confusing when talking about different languages, and many linguists prefer labels something like the following instead:
S – the single argument of an intransitive verb
A – the more agent-like argument (the “subject”) of a transitive verb
P – the more patient-like argument (the “object”) of a transitive verb
Nominative, then, is a case used for S and A but not P; accusative is used for P only. A case used for S and P but not A is called absolutive; a case used for A only is called ergative.
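The two alignment systems just described amount to a simple mapping from argument role (S, A, P) to case. As a toy illustration – the dictionary and function names here are mine, not a standard linguistic tool – the mapping can be sketched in a few lines of Python:

```python
# Toy model of the two case alignment systems described above.
# Roles: "S" (single argument of an intransitive verb),
#        "A" ("subject" of a transitive verb),
#        "P" ("object" of a transitive verb).

ALIGNMENTS = {
    # Latin/English style: S and A share the nominative; P takes the accusative.
    "nominative-accusative": {"S": "nominative", "A": "nominative", "P": "accusative"},
    # Yup'ik style: S and P share the absolutive; A takes the ergative.
    "ergative-absolutive": {"S": "absolutive", "A": "ergative", "P": "absolutive"},
}

def case_for(alignment, role):
    """Return the case label that a given argument role takes under an alignment."""
    return ALIGNMENTS[alignment][role]

# "Lucy arrived": the S argument gets different cases under the two systems.
print(case_for("nominative-accusative", "S"))  # nominative (like Latin Lucia)
print(case_for("ergative-absolutive", "S"))    # absolutive (like Yup'ik "Doris")
```

The point the table-like dictionary makes visible is that the four case labels are defined purely by which roles pattern together: nominative = {S, A}, accusative = {P}, absolutive = {S, P}, ergative = {A}.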
An example of a language with an ergative-absolutive system is Yup’ik, spoken in Alaska. The following is an intransitive sentence in Yup’ik:
Natural translation: “Doris travelled”
And the following is a transitive one:
Natural translation: “Tom greeted Doris”
(Sentences from Payne 1997, p. 135.)
Note that the same form is used for “Doris” in both sentences, whereas “Tom” takes a different ending.
In another type of system – the one in which I’m currently most interested – there are two (or sometimes more) cases which can occur with S (the intransitive “subject”). Typically one of these is the same case as used with the transitive subject A and the other is that used with the transitive object P: these can be referred to as agentive and patientive cases respectively.
In one variety of Tibetan, the agentive is marked with a suffix -s, whereas the patientive doesn’t take any suffix. This is seen with A and P in the following transitive sentence:
Natural translation: “I killed the tiger”
Compare this with intransitive sentences where S takes the agentive -
Natural translation: “I cried”
- and the patientive (note the absence of the -s suffix):
Natural translation: “I died”
(Sentences from DeLancey 1984, pp. 132-3.)
This is as if we in English were to say I cried but me died.
The exact criteria which decide whether the agentive or patientive is used vary between languages: roughly speaking the agentive is generally used when S is more “in control” of the action and the patientive when it performs the action involuntarily. Part of my research is aimed at trying to understand and explain these patterns across and within languages in more detail. I also want to explore the ways in which agentive-patientive languages relate to languages with other types of case system at the more abstract, underlying level within the mind which theoretical linguistics aims to understand.
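The control-based split just described can be added to the same kind of toy model. Reducing “control” to a single boolean is of course a drastic simplification of the real conditioning factors, which is exactly what makes the cross-linguistic variation worth studying; the function below is only a sketch of the rough generalisation:

```python
def case_for_s(in_control):
    """Toy agentive-patientive split: case assignment for S, the single
    argument of an intransitive verb, conditioned on whether the referent
    is in control of the action. In the Tibetan variety cited above, the
    agentive is marked with the suffix -s and the patientive is unmarked."""
    return "agentive" if in_control else "patientive"

# "I cried" (a controllable action) vs. "I died" (an involuntary one):
print(case_for_s(True))   # agentive  (Tibetan S with -s)
print(case_for_s(False))  # patientive (Tibetan S with no suffix)
```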
DeLancey, Scott. 1984. Transitivity and ergative case in Lhasa Tibetan. Proceedings of the Tenth Annual Meeting of the Berkeley Linguistics Society, pp. 131-40.
Payne, Thomas E. 1997. Describing Morphosyntax. Cambridge: Cambridge University Press.
Hello there! The CamLangSci blog is taking a late summer break this week, before term kicks off again (yes, the Cambridge year really does start in October). But we wouldn’t want to miss the opportunity to wish you, our readers, a belated but happy European Day of Languages!
In case you missed out on the fun last Friday, the European Day of Languages is a chance to celebrate and encourage language diversity and multilingualism across Europe. The perpetual flag-waver of language learning, the Guardian, obviously took advantage of the occasion, with pieces including ‘Three European languages you didn’t know exist’ – do you know where Karaim is spoken? Why not take a look if you’re experiencing CamLangSci withdrawal?
Or, if you’d prefer to give your eyes a rest from the screen, listen to BBC R4’s ‘The Forum’, which this week looked at how speaking more than one language affects the brain, and featured two of the most established names in bilingualism research, Ellen Bialystok and Antonella Sorace.
Finally, if you’re in Cambridge look out for the Festival of Ideas coming up at the end of October. As ever, there are tons of interesting and free events from all corners of the arts, humanities and social sciences, including quite a few linguisticky ones – check out the programme online or pick up a copy in town, and come along!
In any Pragmatics 101, you’ll learn that Paul Grice, one of the fathers of the field as we know it today, originally proposed four maxims fleshing out his Co-operative Principle for communication: quality, quantity, relevance, and manner. Relying on the assumption that these maxims hold of their interlocutor, hearers make inferences from the speaker’s utterance: pragmatic enrichments of the literal semantic content – what the speaker meant, though didn’t literally say. These aspects of the meaning are called implicatures.
Now, subsequent theorists – neo-Griceans and post-Griceans – have rightly pointed out that Grice’s four maxims are not the be-all and end-all: the maxims overlap and contain redundancies, and Grice himself suggested that there may be others besides. With the exception of Relevance Theory, though, later theories have maintained a plurality of maxims, for example Horn’s Q and R principles or Levinson’s Q, M and I (Horn 1984; Levinson 2000). They’re all assumed to be able to cover at least a basic diversity of cases such as these (where +> indicates the implicated meaning and an informal reasoning is given in brackets):
Mavis: Would you like a camomile tea?
Mary: I need to work late tonight.
+> Mary does not want a camomile tea
(given the world knowledge that camomile tea is soporific, it relates to the question as a negative answer as it would not aid working late)
Bob: Did you cycle to Brighton?
Ben: I cycled to London.
+> Ben did not cycle to Brighton
(given the knowledge that Brighton is further from Cambridge than London, had Ben cycled further, he would have said so)1
John made the car stop.
+> John made the car stop not in the normal way
(otherwise he would have used the conventional phrase ‘stopped the car’)
Terry: Did you eat the cookies?
Tom: I ate some (of the) cookies.2
+> Tom did not eat all the cookies.
(Given that Tom knows how many cookies he ate, if he had eaten all of them, he would have been informative and said so)
These are examples of relevance, manner, quantity ad hoc and quantity scalar implicatures, respectively. However, if you trawl through a database of academic articles for studies on the subject over, say, the last fifteen years, you will find that the final case is almost the only one studied. Scalar Implicatures rule the pragmatic roost at present. But, as was said at the recent Formal and Experimental Pragmatics Workshop at ESSLLI (of which proceedings here), we need to remember that ‘some’ is not the only word.
Studies have been restricted to this one type of implicature, a sub-type of quantity implicature: scalar implicature. They are, in many ways, a paradigmatic case, and the basic intuitions about them, according to standard theory, are pretty clear. Furthermore, there arose some intricate debates about particular cases (if you’re interested, the default vs nonce and globalist vs localist battles) which kept the theoretical market buoyant with new theories and counter-examples, and the experimentalists in a job testing all these theories.
However, while this means that we might be making some progress on understanding something about Scalar Implicatures, and perhaps Quantity Implicatures in general, what we know about Manner and Relevance is lagging behind. And this is unsatisfactory, because, on a Gricean view, we want a unified approach to these different inferences. We also don’t know much about how the different inferences interact. What happens when multiple inferences could be derived from a single utterance? How does one support (or interfere with) the other? For example, relevance-type inferences may well be crucial in generating, or constraining, the alternative utterances that are negated as part of Quantity Implicature derivation (e.g., negating the stronger alternative ‘all’ enriches ‘some’ from ‘some and possibly all’ to ‘some and not all’).
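The alternative-negation recipe for scalar implicature can be made concrete with a toy model. The set-based semantics below is the standard textbook treatment, not something from the studies cited here, and the function names are mine:

```python
def some_true(domain, pred):
    """Literal semantics of 'some': at least one member of the domain satisfies pred."""
    return any(pred(x) for x in domain)

def all_true(domain, pred):
    """The stronger scalar alternative, 'all': every member satisfies pred."""
    return all(pred(x) for x in domain)

def some_enriched(domain, pred):
    """'Some' plus its scalar implicature, i.e. 'some and not all'.
    Derived by negating the stronger alternative the speaker chose not to use."""
    return some_true(domain, pred) and not all_true(domain, pred)

# The cookie example: Tom ate two of the three cookies.
cookies = ["cookie1", "cookie2", "cookie3"]
eaten = {"cookie1", "cookie2"}
was_eaten = lambda c: c in eaten

print(some_true(cookies, was_eaten))      # True: the literal content holds
print(some_enriched(cookies, was_eaten))  # True: 'some but not all' also holds
```

If Tom had in fact eaten all three cookies, `some_true` would still be `True` but `some_enriched` would be `False` – capturing the intuition that saying ‘some’ would then be literally true yet pragmatically misleading.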
But further, as Bart Geurts pointed out in his talk on Co-operativity at the Workshop, work on implicature has also restricted itself to only one type of speech act – assertion – while it is clear that other speech acts may also yield implicatures:
1 Where did you last see your poodle?
+> That may help you to find it.
2 Shoot the piano player!
+> The drummer can stay.
3 Do you have a pen or pencil?
+> Either will do.
(Taken from Bart Geurts’s talk – slides available here)
This imbalance takes on another hue from the perspective of my research in acquisition. Work on language acquisition is always a bit chicken-and-egg: we want to look at how children acquire a certain feature of language, and to do so we need to know what that feature of language is. This makes it rather problematic when it comes to developmental pragmatics: how can we investigate how children learn to derive implicatures when we’re not sure how adults process them? On the other hand (the egg-first perspective), looking at how children acquire a linguistic feature can tell you a lot about its nature. And that’s where work on the big picture of children’s pragmatic competence (or lack of it) is exciting for theorists too3.
This year saw a milestone in the field of developmental pragmatics with an edited volume with the does-what-it-says-on-the-tin title Pragmatic Development in First Language Acquisition (Ed. Danielle Matthews). There are chapters on the state-of-the-art of speech acts, metaphor, irony, evidentiality, prosody, conversation, word learning, and – you guessed it – scalar implicature. But were Manner and Relevance anywhere to be seen?
If we want a really Gricean view – in which speakers are always pragmatic as part of a more general rationality and co-operativity – we need to broaden our attention to include more types of inference – in processing as well as acquisitional studies. Here endeth the plea.
3 This is what my PhD research is (partly) about – so watch this space for more on this topic in future posts!
Degen, J., & Tanenhaus, M. K. (2011). Making inferences: the case of scalar implicature processing. In Proceedings of the 33rd annual conference of the Cognitive Science Society (pp. 3299–3304). Cognitive Science Society Austin, TX.
Grice, H. P. (1989). Studies in the Way of Words. Harvard University Press.
Horn, L. (1984). Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. Meaning, Form, and Use in Context, 42.
Levinson, S. C. (2000). Presumptive meanings: The theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Matthews, D. (2014). Pragmatic Development in First Language Acquisition (Vol. 10). John Benjamins Publishing Company.
Pouscoulous, N., Noveck, I. A., Politzer, G., & Bastide, A. (2007). A developmental investigation of processing costs in implicature production. Language Acquisition, 14(4), 347–375.
Don’t worry, this is not going to be a judgmental blog post. I really, really enjoy different varieties of native and non-native English – although in rare cases I have been heard teasing friends about their ways of speaking. Instead, I hope it will be the kind of blog post that inspires reflection, while trying to impart some of this overly enthusiastic sociolinguist’s fondness for pronunciation patterns.
I guess I should start by explaining what I mean by accent. I chose this everyday term to cover, roughly speaking, the part of linguistic variation which isn’t covered by grammar or word choice (sociophonetic variation, in linguists’ terms). Although the way you pronounce your words might seem insignificant, such variation is actually able to impart quite a bit of information about you. In order to disentangle the most important ways your accent can differ from others’, I’m going to divide accent differences into three broad types, depending on what aspect of communication is leaving those traces in the way you speak.
Let’s start with what is arguably the most basic source of phonetic variation: differences in our physiology. Our bodies, mouths and throats are different from each other, and this affects the sounds they are able to produce, rather like the differences between cellos and violins. This affects multiple levels of the way we speak, but a tangible example is the differences in our speech organs that are caused by sexual dimorphism. Men generally have larger vocal folds than women, and, like the strings on cellos and violins, this affects the pitch range we are able to produce: because they are generally larger, men’s vocal folds vibrate at lower frequencies than women’s, which leads to a deeper pitch. Age generally changes the vocal folds, making them less flexible, which is why older people frequently sound more hoarse or creaky. A similar effect can happen if you have a cold or smoke over longer periods of time, both of which can create changes to the structure of the vocal folds. Although these differences are only probabilistic (some men have high-pitched voices), most people find they’re able to guess the approximate sex and age of a voice.
Secondly, your accent is influenced by your social circumstances. Social phonetic variation originates from associations between accent features and groups of people, in the same way as someone saying “yo” makes you think of rap culture. A well-known type of social variation is geographical pronunciation patterns – you sound like someone from Yorkshire because you use a number of accent features which people associate with Yorkshire speakers. Interestingly, this kind of variation is likely to affect what people think of you. The BBC Voices project recorded 34 accents of English and had about 5,000 British listeners judge how attractive and/or prestigious they sounded. The researchers found that accents associated with stereotypes of power, like American English and German English, ranked high for prestige but low for attractiveness, whereas accents like Southern Irish English and Caribbean English ranked low for prestige but high for attractiveness. Interestingly, awareness of such links can be utilised for sociolinguistic ends. For example, in Japanese a trend has been reported for women, who naturally have high-pitched voices, to make their voices even higher to come across as more feminine. Similarly, some homosexual men speak in a higher pitch range, thus associating themselves with a less stereotypical kind of masculinity. Which accent features are used in this way depends partly on how noticeable they are. Some accent features are highly noticeable, like the Uptalk or HRT intonation pattern, and these can be used – or left out – as part of a conscious strategy. Other accent features are less noticeable and are used as part of a long-standing speech habit, but can nonetheless be used by listeners to unpack links between you and social groups.
The last type of accent variation that I’m going to cover arises from the context of the conversation itself. Conversational context, such as what you’re talking about and who you’re talking to, also affects the way you sound. Contextual accent variation can be part of a long-standing habit which gets activated by certain situations, like the way you talk differently when you’re in a formal situation such as at a court of justice, or the difference between talking with a close friend in comparison with someone you’ve just met. It can also be conditioned by the immediate situation, like if someone has mentioned Wales and you do a poor imitation of a Welsh accent. Emotions, like if you suddenly feel happy or angry during a conversation, can also affect the way you sound. Many people report they can hear if a person is smiling, even if they can’t see them. Like with social variation, contextual variation constrains and/or enriches the other kinds of accent variation – it might be the case that you identify as a hip-hopper and generally try to sound like a black American (= social variation), but if you’re taking an IELTS test, you’ll probably try and sound as standard as possible.
As an experiment, next time you’re speaking with someone on the phone, put on your detective hat and try to identify how much you could infer about them from their voice alone. You’d be surprised by how much subtle cues such as vowels and consonants, voice quality or rhythm really say about people. And about you.