The title is not a typo, but you’ll have to work out what it means (if anything) for yourselves! By way of introduction to this linguistics- and universals-themed ramble, have a little read through the following pretty lengthy quote from C.S. Lewis’ 1938 novel Out of the Silent Planet. In this passage, Ransom, a Cambridge philologist, comes across an alien creature on the planet Mars, or Malacandra as it is known in the book:
“A lifetime of linguistic study assured Ransom almost at once that these were articulate noises. The creature was talking. It had language. If you are not a philologist, I am afraid you must take on trust the prodigious emotional consequences of this realisation in Ransom’s mind. A new world he had already seen – but a new, an extraterrestrial, a non-human language was a different matter… The love of knowledge is a kind of madness. In the fraction of a second which it took Ransom to decide that the creature was really talking, and while he still knew that he might be facing instant death, his imagination had leaped over every fear and hope and probability of his situation to follow the dazzling project of making a Malacandrian grammar. ‘An Introduction to the Malacandrian Language’ – ‘The Lunar Verb’ – ‘A Concise Martian-English Dictionary’ … the titles flitted through his mind. And what might one not discover from the speech of a non-human race? The very form of language itself, the principle behind all possible languages, might fall into his hands.”
There are many points in this passage to talk about, but I’d like to focus on the last sentence – the principle behind all possible languages.
Noam Chomsky is famous for many things, one of which is the idea of Universal Grammar (UG). In brief, UG represents the means by which a child can constrain the types of hypotheses they make about the language(s) they are acquiring such that the grammar they acquire more or less resembles that of the older generations. When a child hears a sentence, there are an infinite number of possible ways to generate such a sentence, yet the types of grammatical rules that they hypothesise to be at work in generating the sentences of that language represent only a tiny fraction of all the infinite logical possibilities. UG was conceived as the innate knowledge that a child has which allows the child to entertain just that tiny fraction of all the logical possibilities. In short, UG makes the problem of language acquisition tractable (in the mathematical sense).
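To make the idea of constraining a hypothesis space concrete, here is a deliberately tiny sketch in Python. Everything in it is invented for illustration (the “grammars” are just word orders, and the “UG constraint” is a made-up filter): the point is only the shape of the argument, that an innate filter prunes the logically possible hypotheses before the data are consulted.

```python
# Toy model: UG as a prior filter on the learner's hypothesis space.
# The "grammars" and the constraint are invented purely for illustration.

from itertools import permutations

# Every logically possible ordering of Subject, Verb, Object.
all_hypotheses = {"".join(p) for p in permutations("SVO")}

# A stand-in "UG" constraint: suppose the learner innately ignores
# any order in which the object precedes the subject.
def ug_allows(order):
    return order.index("S") < order.index("O")

constrained = {h for h in all_hypotheses if ug_allows(h)}

# Only then do the data select among the surviving hypotheses.
observed = "SVO"
acquired = {h for h in constrained if h == observed}

print(sorted(all_hypotheses))  # all 6 logically possible grammars
print(sorted(constrained))     # only the UG-compatible ones survive
print(acquired)
```

In a real acquisition setting the hypothesis space is infinite, which is exactly why some such filter is needed to make the problem tractable.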
UG is thus a mathematical and logical necessity (Nowak, 2006): the existence of something that constrains the hypotheses of the language acquirer is on firm conceptual ground. The question of what UG is actually like, what it consists in and of, however, is another matter.
The earlier Chomskyan approach to this question was that UG is innate, human-specific, language-specific, and rich in content. The current Chomskyan approach, however, is that UG is innate, human-specific, language-specific, but impoverished in content. The reason for this change is the shift to what Chomsky calls Third Factors, i.e. “principles not specific to the faculty of language” (Chomsky, 2005: 6). The exact nature of these Third Factors is still under discussion, but the suggestion is that they include various principles of computation, which are not specific to language but which nonetheless play a role in shaping the forms that language can take. From an evolutionary perspective this seems desirable. Not only has language in its current form arisen in a reasonably short span of evolutionary time (at most seven million years, since the human lineage split from that of its closest living relatives, the chimpanzees), but it is unlikely that language evolved in isolation from other biological and/or mental properties, i.e. language has co-evolved (see Lenneberg, 1967).
If some of these third factor principles of computation are mathematical principles, then it is possible that they are not simply non-specific to the faculty of language, but also non-specific to the human species; in fact, they’d be more like laws of nature. If that is the case, alien languages really would provide a gateway to the principle (or maybe principles) behind all possible languages. A crazy note to end on perhaps, but then the passage did say that the love of knowledge is a kind of madness …
Chomsky, N. (2005). Three Factors in Language Design. Linguistic Inquiry, 36(1), 1–22.
Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.
Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, MA: Belknap Press of Harvard University Press.
My research is on the Englishes spoken by Aboriginal people in Australia. Aboriginal English is a fascinating dialect group which is often viewed negatively by outsiders but which is nonetheless hugely important to its speakers, many of whom no longer have access to their ancestral languages. Today’s blogpost is therefore on a topic close to my heart. As with Rowena’s previous post on sign language FAQs, it evolved from many conversations over beers, food, or in any other situation that made me end up talking about my research. Versions of the four statements below have been cropping up more or less frequently in these conversations, and each time I’ve felt like they needed a better response than I could manage on the fly. And that’s what you have a blog for, right?
I thought we would start out with a classic (which has already been alluded to in Eleni’s post on language and dancing, amongst others). Sometimes called the Sapir-Whorf hypothesis, the idea that language has a dominant influence on thought is an interesting point from which to start a debate on endangered languages. In its strongest form, the hypothesis is that each language embodies a worldview of its own, which can be contrasted with the worldviews inherent to other languages. It follows that a speaker of Arrernte, an Australian Aboriginal language, will see the world in a fundamentally different way from a speaker of English, simply because of linguistic differences between the languages. However, empirical evidence such as Berlin and Kay’s 1969 study of universals in colour term semantics, as well as theoretical criticism (most notably Pinker 1994), means that the original version of this idea has fallen out of favour in contemporary linguistics. Nevertheless, the Sapir-Whorf hypothesis still survives, both in beer gardens in Cambridge and in the urban myth that Inuits have countless words for snow – more on this interesting and super-pervasive myth here.
But wait, does this mean one language is as good as another? That would mean there was no reason to care about – or save – endangered languages. Worse still, being “stuck” with a less widespread language could then be nothing but a barrier to getting an education and a job. Now, to answer these questions properly I would need an entire blog post just on this subject (or even a book). I’ll try to be brief, but if you’ve got more time on your hands, you might want to take 10 minutes to watch this TEDx video by Felicity Meakins.
You’ll notice Felicity saying that “encoded in [Gurindji] grammar is a different world view”. Now, that might sound familiar, but what she alludes to is a weaker and more widely acknowledged version of the Sapir-Whorf hypothesis, whereby rather than determining the way we think, the particular characteristics of a language may influence how we draw distinctions, or how we categorise things in our mind. What she emphasises is not only the differences in pronunciation, grammar and lexis between English and Gurindji, but also how the two languages convey different ways of conceptualising the world – e.g. with reference to ourselves (left or right in English), or to external directions (north, south, east and west in Gurindji). And if you ask speakers of endangered languages themselves, they will tell you that their language and culture are intricately linked, and that losing a language feels like losing a connection to culture and group identity.
In the video above, Felicity Meakins shows some of the potential problems caused by the lack of awareness of Indigenous languages in education. Similarly, Diana Eades has conducted a number of studies on misunderstandings in courtroom settings caused by linguistic differences between speakers of minority and majority languages (and dialects). The problems facing second language English speakers with an endangered first language are a point of discussion in Australia, among other places, and are steadily coming into focus, both for linguists and for the public.
For scholars, endangered languages have plenty of practical implications in and of themselves. Many great linguistic discoveries have been made on the basis of a small language that few outsiders had ever heard of. And there is knowledge of the culture in these languages that can, for instance, help archaeologists better understand the motifs of cave paintings, or ancient migration patterns.
Language death is what happens when a language no longer has any native speakers. The language may be recorded (for instance on tape, or in dictionaries) and can be revitalised, but language death will still have serious effects on the language itself, such as loss of complexity and dialectal variation. Furthermore, as I’ve argued above, the community associated with the language loses a significant connection to its culture and traditions, and to its own identity as a social group. But what does this have to do with me, you might ask? Well, language is knowledge. Of a culture, of a way of living, of the past. All this is worth keeping! So if you’re the kind of person who thinks that knowing as much as possible about the world is important, and that linguistic and cultural variety is amazing, then I hope you find that small Australian Aboriginal, Chinese, or Celtic languages are worth preserving.
In Romeo and Juliet, Shakespeare wrote the following famous lines:
What’s in a name? that which we call a rose/
by any other name would smell as sweet.
There are many linguistic aspects that one could highlight about these two lines, and many ways in which one could answer Juliet’s question. I want to highlight a recent story that raises many linguistic issues with respect to the use of proper names, and has far-reaching consequences for the lives of certain people.
This issue is a news item that I read a couple of months ago. This June, the highest court in Malaysia apparently settled a question that it had been dealing with for some time: whether the Catholic Malaysian newspaper Herald (http://www.heraldmalaysia.com/) could use the word “Allah” when referring to the Christian god.
The legal battle had been going on for a couple of years, and the issue seems to go back to at least 1984, when “the use of the word ‘Allah’ was prohibited in the Bahasa Malaysia version of the publication of the Herald newsletter.” (http://goo.gl/IGRi2K)
From what I understand, the reason for the conflict is the following: people who are in favour of the ban argue that “Allah” is an Arabic word that refers to the Muslim god, whereas Malaysian Christians argue that the word has been used for a long time in Malay, meaning “God”, and not (just) referring to the Muslim god (http://goo.gl/C7F4LK).
The linguistic issue, as I see it, is the following. There is a term in Arabic, “Allah” that some people understand to be a proper name, i.e. a linguistic expression that refers to a particular individual (setting aside certain obvious questions here). In this sense, when speakers of Malay use this term, they refer to that entity. Since that entity is the Muslim god, the argument seems to be that the name cannot be used in a Christian context, as it could be misleading to certain people. Indeed, Al Jazeera wrote that “authorities say using ‘Allah’ in non-Muslim literature could confuse Muslims and entice them to convert, a crime in Malaysia.” (http://goo.gl/C7F4LK)
Catholics contend, however, that in Malay, “Allah” can also be used to refer to the Christian god.
How is that possible? What is, after all, in a name? Proper names are special linguistic expressions in some ways: they have been argued to be “rigid designators”. This means that across all possible worlds (basically, whatever one could imagine the world to be like), a proper name always refers to the same entity.
Some philosophers (for example, Bertrand Russell and Gottlob Frege) argued that names signify properties of the people they refer to; Frege writes in his “Über Sinn und Bedeutung” (On sense and reference) that the name “Aristotle” could be understood as referring to the person of whom the description “the pupil of Plato and the teacher of Alexander the Great” holds.
Philosophers who argue in favour of rigid designation, however, point out that properties like “the pupil of Plato and the teacher of Alexander the Great” are contingent; they can in principle be true or false. Imagine that for some reason, Aristotle didn’t teach Alexander the Great, but someone else did. The property “the pupil of Plato and the teacher of Alexander the Great” could not refer to Aristotle any more, yet all other things being equal, the name “Aristotle” would still refer to Aristotle. So while a list of properties like “the pupil …” can change across situations that one can imagine, philosophers like Saul Kripke argue that the reference of names doesn’t: their reference is fixed (hence the term rigid designator). “Aristotle” refers to Aristotle, whatever the circumstances (of course, this is an extremely simplified account of the matter; a lot of the discussion here is based on Abbott 2010, Chapter 5).
OK, so does this help us with analysing the issue? Proponents of the ban on “Allah” might seem to hold the view that “Allah” is a rigid designator that always refers to the Muslim god, independently of whether it is used in Arabic or in Malay, or in a Muslim or a Christian context. Are they not right, if proper names refer in this way? In a way, they are — but at the same time, when Malay-speaking Catholics use the term “Allah”, they do not intend to refer to the Muslim god, but to their own god.
It looks like we have two names that sound the same: let’s call them “Allah”-1 and “Allah”-2. These are two linguistic expressions, both of which might even be rigid designators, but which refer (rigidly) to two distinct entities (this opens many questions that I have to ignore here; some people would argue that given the close connection between the monotheistic religions in question, the two terms actually refer to the same entity; some people would argue that they do not refer at all, etc.). And it seems that people in favour of banning the use of “Allah”-2 are of the opinion that there can be one and only one such linguistic expression (arguably for the same reason that there can be one and only one relevant deity). People who regularly use “Allah”-2 beg to differ, I bet.
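The “Allah”-1 / “Allah”-2 idea can be pictured with a small toy model: treat a name not as a bare sound shape but as a pairing of a form with a referent. Everything here is schematic (the class, the referent labels, and the `heard` helper are all invented for illustration), but it shows how two distinct expressions collapse into one when only the form is available.

```python
# Toy model: a name as a (form, referent) pair rather than a bare form.
# The class and the referent labels are schematic placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class Name:
    form: str      # what the name sounds like
    referent: str  # the entity it (rigidly) picks out

allah_1 = Name(form="Allah", referent="the Muslim god")
allah_2 = Name(form="Allah", referent="the Christian god")

# Same form...
assert allah_1.form == allah_2.form
# ...but two distinct linguistic expressions:
assert allah_1 != allah_2

# Out of context a hearer only gets the form, so the two collapse:
def heard(name):
    return name.form

assert heard(allah_1) == heard(allah_2)
```

On this picture the dispute is partly about whether the lexicon may contain two entries sharing the form “Allah”, or only one.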
Why do we get this confusion with something like “Allah” but not with “Aristotle”? Well, for one thing, you might not have too many friends who are called “Aristotle”, causing relatively little confusion. But it might also be instructive to look at the expression “Allah” in some more detail. While it can be used as a name, it seems clear that even in Arabic, “Allah” (or rather الله) does not only refer to the Muslim god, but it can also mean “god” or “deity”. In fact, if one looks at the Bible in Arabic, “Allah” or “الله” is all over it, corresponding to “god” in the English version (thanks a lot to Sarah Ouwayda for the screenshot showing this). “الله” itself is a contraction of the Arabic definite determiner “al” and the common noun إلٰه “ʾilāh” meaning, as you might guess, “god”.
So the origin of the proper name “Allah” was at some point the phrase “the god”, which of course, being a definite description, refers to a single entity. The reference of a definite description like “the god”, however, can easily change from context to context. As such, a Muslim text could use “the god” and refer to the Muslim god, whereas a Christian text could use “the god” and refer to the Christian god. (In other words, definite descriptions are not necessarily rigid designators.)
I am describing only one scenario here, in which both sides possibly have a Kripkean perspective on proper names; as Luca Sbordone points out, one could also imagine that each side subscribes to a different theory of referring. For example, Russellian Muslims could argue that “Allah” refers to the entity that has all the attributes that the Quran ascribes to it. Now if Catholics claim that “Allah” has all kinds of other properties as well, one can see how these views clash.
Summarising, from a linguistic point of view, there are a couple of levels to this story. Given that proper names are often held to pick out one and only one referent, the reasoning behind the court’s decision might have been that Muslims would be confused if a Christian text ascribed certain properties to an entity (the Muslim god) that doesn’t have them. Catholics would contend that the proper name that they use has a different reference. What makes the issue even more complicated is that it’s difficult to decide out of context which of “Allah”-1 or “Allah”-2 is being used, as they sound the same.
Finally, this issue of ambiguity arguably doesn’t arise with a name like “Aristotle” because its reference is far less contested, and it has always been a name and thus a rigid designator. “Allah”, on the other hand, coming from “the god”, became a proper name in the course of the expression’s history, and people have used it to refer to different entities at different times and in different places (and in different languages).
In a follow-up to this post, I will mention some similar cases, in which the bearers of proper names fear that their names are “deteriorating” into mere common nouns, a kind of inverse of the situation discussed here.
Thanks to Sarah Ouwayda for some clarifications about Arabic, and to Luca Sbordone for many helpful comments. I would like to stress that I don’t mean to hurt anyone’s religious feelings; this post is merely meant to highlight some interesting linguistic issues.
Abbott, Barbara. 2010. Reference. Oxford: Oxford University Press.
Frege, Gottlob. 1892. Über Sinn und Bedeutung. English translation: http://philo.ruc.edu.cn/logic/reading/On%20sense%20and%20reference.pdf
One of the things it is important for languages to be able to do is distinguish who performed an action, who was affected by that action, and so on. There are a number of ways in which they do this. English largely just uses word order - Luke loves Lucy does not mean the same as Lucy loves Luke. A great number of other languages are very like English in this respect. Another common way of achieving the same goal is through case: different forms of a noun (or pronoun) which realise this sort of function. Latin is a well-known example of a language with case. In Latin, endings called nominative are used with “subjects” (prototypically, nouns which perform or are responsible for actions) and endings called accusative are used with “objects” (prototypically, nouns in some way acted upon). For example, compare this sentence -
Lucia canem amat
Natural translation: “Lucy loves the dog”
- with this one:
Luciam canis amat
Natural translation: “the dog loves Lucy”
The different endings on the nouns convey their different roles in each instance. English does something similar with some pronouns (compare “I” in I love Lucy with “me“ in Lucy loves me).
Other languages also use case for a similar purpose, but do things a bit differently. To illustrate this it will be helpful to introduce the distinction between “intransitive” and “transitive” verbs. With intransitive verbs only a single noun (or pronoun) is associated with the action, giving sentences like I fall, Luke died, she went away etc. With transitive verbs there are two associated nouns: I like linguistics, Lucy loves Luke etc.
In Latin (and English) the same case – the nominative – is used for the subject of intransitive verbs as for the subject of transitive verbs. Thus “Lucy arrived” in Latin is Lucia advenit (not Luciam advenit with the accusative, or any other case), and we say I fall and not me fall. But in many languages the form used with the “subject” of intransitive verbs is the same as that of the object of transitives, with a separate form for the subject of transitives. This makes the traditional terms “subject” and “object” a bit confusing when talking about different languages, and many linguists prefer labels something like the following instead:
S – the single argument (“subject”) of an intransitive verb
A – the more agent-like argument (“subject”) of a transitive verb
P – the more patient-like argument (“object”) of a transitive verb
Nominative, then, is a case used for S and A but not P; accusative is used for P only. A case used for S and P but not A is called absolutive; a case used for A only is called ergative.
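The two alignment systems just described can be summarised in a toy sketch (Python is used purely for illustration; the role labels and case names follow the post, but the table itself is my own schematic rendering):

```python
# Toy summary of the two case-alignment systems: which case each
# system assigns to the roles S, A and P. Illustrative only.

ALIGNMENTS = {
    "nominative-accusative": {"S": "nominative", "A": "nominative", "P": "accusative"},
    "ergative-absolutive":   {"S": "absolutive", "A": "ergative",   "P": "absolutive"},
}

def case_for(system, role):
    return ALIGNMENTS[system][role]

# In a nominative-accusative system (like Latin), S and A pattern together:
assert case_for("nominative-accusative", "S") == case_for("nominative-accusative", "A")

# In an ergative-absolutive system, S and P pattern together instead:
assert case_for("ergative-absolutive", "S") == case_for("ergative-absolutive", "P")
```

The whole typological difference comes down to which pair of roles shares a case.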
An example of a language with an ergative-absolutive system is Yup’ik, spoken in Alaska. The following is an intransitive sentence in Yup’ik:
Doris-aq ayallruuq.
Natural translation: “Doris travelled”
And the following is a transitive one:
Tom-am Doris-aq cingallrua.
Natural translation: “Tom greeted Doris”
(Sentences from Payne 1997, p. 135.)
Note that the same form is used for “Doris” in both sentences, whereas “Tom” takes a different ending.
In another type of system – the one in which I’m currently most interested – there are two (or sometimes more) cases which can occur with S (the intransitive “subject”). Typically one of these is the same case as used with the transitive subject A and the other is that used with the transitive object P: these can be referred to as agentive and patientive cases respectively.
In one variety of Tibetan, the agentive is marked with a suffix -s, whereas the patientive doesn’t take any suffix. This is seen with A and P in the following transitive sentence:
Natural translation: “I killed the tiger”
Compare this with intransitive sentences where S takes the agentive -
Natural translation: “I cried”
- and the patientive (note the absence of the -s suffix):
Natural translation: “I died”
(Sentences from DeLancey 1984, pp. 132-3.)
This is as if we in English were to say I cried but me died.
The exact criteria which decide whether the agentive or patientive is used vary between languages: roughly speaking the agentive is generally used when S is more “in control” of the action and the patientive when it performs the action involuntarily. Part of my research is aimed at trying to understand and explain these patterns across and within languages in more detail. I also want to explore the ways in which agentive-patientive languages relate to languages with other types of case system at the more abstract, underlying level within the mind which theoretical linguistics aims to understand.
DeLancey, Scott. 1984. Transitivity and ergative case in Lhasa Tibetan. Proceedings of the Tenth Annual Meeting of the Berkeley Linguistics Society, pp. 131-40.
Payne, Thomas E. 1997. Describing Morphosyntax. Cambridge: Cambridge University Press.
Hello there! The CamLangSci blog is taking a late summer break this week, before term kicks off again (yes, the Cambridge year really does start in October). But we wouldn’t want to miss the opportunity to wish you, our readers, a belated but happy European Day of Languages!
In case you missed out on the fun last Friday, the European Day of Languages is a chance to celebrate and encourage language diversity and multilingualism across Europe. The perpetual flag-waver of language learning, the Guardian, obviously took advantage of the occasion, with pieces including ‘Three European languages you didn’t know exist’ – do you know where Karaim is spoken? Why not take a look if you’re experiencing CamLangSci withdrawal?
Or, if you’d prefer to give your eyes a rest from the screen, listen to BBC R4’s ‘The Forum’, which this week looked at how speaking more than one language affects the brain, and featured two of the most established names in bilingualism research, Ellen Bialystok and Antonella Sorace.
Finally, if you’re in Cambridge look out for the Festival of Ideas coming up at the end of October. As ever, there are tons of interesting and free events in all corners of the arts, humanities and social sciences, including quite a few linguisticky ones – check out the programme online or pick up a copy in town, and come along!
In any Pragmatics 101, you’ll learn that Paul Grice, one of the fathers of the field as we know it today, originally proposed four maxims fleshing out his Co-operative Principle for communication: quality, quantity, relevance, and manner. Relying on the assumption that their interlocutor is obeying these maxims, hearers make inferences from the speaker’s utterance: pragmatic enrichments of the literal semantic content – what the speaker meant, though didn’t literally say. These aspects of meaning are called implicatures.
Now, subsequent theorists – neo-Griceans and post-Griceans – have, rightly, pointed out that Grice’s four maxims are not the be-all and end-all – they overlap and contain redundancy, and Grice himself suggested that there may be others besides. With the exception of Relevance Theory, though, later theories have maintained the plurality of maxims, for example Horn’s Q and R principles or Levinson’s Q, M and I (Horn 1984; Levinson 2000). They’re all assumed to be able to cover at least a basic diversity of cases such as these (where +> indicates the implicated meaning and an informal reasoning is given in brackets):
Mavis: Would you like a camomile tea?
Mary: I need to work late tonight.
+> Mary does not want a camomile tea
(given the world knowledge that camomile tea is soporific, it relates to the question as a negative answer as it would not aid working late)
Bob: Did you cycle to Brighton?
Ben: I cycled to London.
+> Ben did not cycle to Brighton
(given the knowledge that Brighton is further from Cambridge than London, had Ben cycled further, he would have said so)1
John made the car stop.
+> John made the car stop not in the normal way
(otherwise he would have used the conventional phrase ‘stopped the car’)
Terry: Did you eat the cookies?
Tom: I ate some (of the) cookies.2
+> Tom did not eat all the cookies.
(Given that Tom knows how many cookies he ate, if he had eaten all of them, he would have been informative and said so)
These are examples of relevance, quantity ad hoc, manner and quantity scalar implicatures, respectively. However, if you trawl through a database of academic articles for studies on the subject over, say, the last fifteen years, you will find almost exclusively the final case. Scalar Implicatures rule the pragmatic roost at present. But, as was said at the recent Formal and Experimental Pragmatics Workshop at ESSLLI (proceedings here), we need to remember that ‘some’ is not the only word.
Studies have been largely restricted to this one type of implicature: scalar implicature, a sub-type of quantity implicature. They are, in many ways, a paradigmatic case, and the basic intuitions about them, according to standard theory, are pretty clear. Furthermore, there arose some intricate debates about particular cases (if you’re interested, the default vs nonce and globalist vs localist battles) which kept the theoretical market buoyant with new theories and counter-examples, and the experimentalists in a job testing all these theories.
However, while this means that we might be making some progress on understanding Scalar Implicatures, and perhaps Quantity Implicatures in general, what we know about Manner and Relevance is lagging behind. And this is unsatisfactory, because, on a Gricean view, we want a unified approach to these different inferences. We also don’t know much about how the different inferences interact. What happens when multiple inferences could be derived from a single utterance? How does one support (or interfere with) the other? For example, relevance-type inferences may well be crucial in generating, or constraining, the alternative utterances that are negated as part of Quantity Implicature derivation (e.g. ‘all’ is the stronger alternative that is negated to enrich ‘some’, whose literal meaning is ‘some and possibly all’, to ‘some and not all’).
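The negate-the-stronger-alternative recipe for quantity implicatures can be written out as a toy algorithm. This is a deliberately simplified sketch, not anyone’s actual theory: the scale and glosses are the textbook ⟨some, all⟩ case, and the string glosses stand in for real semantic representations.

```python
# Toy derivation of a scalar (quantity) implicature: enrich an
# utterance by negating its stronger, unuttered scale-mates.

SCALES = [["some", "all"]]  # each scale ordered weak -> strong

def stronger_alternatives(word):
    """Scale-mates the speaker could have used but didn't."""
    for scale in SCALES:
        if word in scale:
            return scale[scale.index(word) + 1:]
    return []

def enrich(word):
    """Literal meaning plus the negation of each stronger alternative."""
    literal = f"{word} (and possibly more)"
    implicatures = [f"not {alt}" for alt in stronger_alternatives(word)]
    return literal, implicatures

literal, implicatures = enrich("some")
print(literal)       # some (and possibly more)
print(implicatures)  # ['not all']
```

Note what the sketch leaves out, which is exactly where relevance-type reasoning would come in: nothing here decides which alternatives were actually relevant in context.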
But further, as Bart Geurts pointed out in his talk on Co-operativity at the Workshop, work on implicature has also restricted itself to only one type of speech act – assertion – while it is clear that other speech acts may also yield implicatures:
1 Where did you last see your poodle?
+> That may help you to find it.
2 Shoot the piano player!
+> The drummer can stay.
3 Do you have a pen or pencil?
+> Either will do.
(Taken from Bart Geurts’s talk – slides available here)
This imbalance takes on another hue from the perspective of my research in acquisition. Work on language acquisition is always a bit chicken-and-egg: we want to look at how children acquire a certain feature of language, and to do so we need to know what that feature of language is. This makes it rather problematic when it comes to developmental pragmatics: how can we investigate how children learn to derive implicatures when we’re not sure how adults process them? On the other hand (the egg-first perspective), looking at how children acquire a linguistic feature can tell you a lot about its nature. And that’s where work on the big picture of children’s pragmatic competence (or lack of it) is exciting for theorists too3.
This year saw a milestone in the field of developmental pragmatics with an edited volume with the does-what-it-says-on-the-tin title Pragmatic Development in First Language Acquisition (Ed. Danielle Matthews). There are chapters on the state-of-the-art of speech acts, metaphor, irony, evidentiality, prosody, conversation, word learning, and – you guessed it – scalar implicature. But were Manner and Relevance anywhere to be seen?
If we want a really Gricean view – in which speakers are always pragmatic as part of a more general rationality and co-operativity – we need to broaden our attention to include more types of inference – in processing as well as acquisitional studies. Here endeth the plea.
3 This is what my PhD research is (partly) about – so watch this space for more on this topic in future posts!
Degen, J., & Tanenhaus, M. K. (2011). Making inferences: the case of scalar implicature processing. In Proceedings of the 33rd annual conference of the Cognitive Science Society (pp. 3299–3304). Cognitive Science Society Austin, TX.
Grice, H. P. (1989). Studies in the Way of Words. Harvard University Press.
Horn, L. (1984). Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. Meaning, Form, and Use in Context, 42.
Levinson, S. C. (2000). Presumptive meanings: The theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Matthews, D. (2014). Pragmatic Development in First Language Acquisition (Vol. 10). John Benjamins Publishing Company.
Pouscoulous, N., Noveck, I. A., Politzer, G., & Bastide, A. (2007). A developmental investigation of processing costs in implicature production. Language Acquisition, 14(4), 347–375.
Don’t worry, this is not going to be a judgmental blog post. I really, really enjoy different varieties of native and non-native English – although in rare cases I have been heard teasing friends about their ways of speaking. Instead, I hope it will be the kind of blog post that inspires reflection, while trying to impart some of this overly enthusiastic sociolinguist author’s fondness for pronunciation patterns.
I guess I should start by explaining what I mean by accent. I chose this everyday term to cover, roughly speaking, the part of linguistic variation which isn’t covered by grammar or word choice (sociophonetic variation, in linguists’ terms). Although the way you pronounce your words might seem insignificant, such variation is actually able to impart quite a bit of information about you. In order to disentangle the most important ways your accent can differ from others’, I’m going to divide accent differences into three broad types, depending on what aspect of communication is leaving those traces in the way you speak.
Let’s start with what is arguably the most basic source of phonetic variation: differences in our physiology. Our bodies, mouths and throats differ from each other, and this affects the sounds they are able to produce, rather like the differences between cellos and violins. This affects multiple levels of the way we speak, but a tangible example is the differences in our speech organs caused by sexual dimorphism. Men generally have larger vocal folds than women and, like the strings on cellos and violins, size affects the pitch range we are able to produce: larger vocal folds vibrate at lower frequencies, which leads to a deeper voice. Age also changes the vocal folds, making them less flexible, which is why older people frequently sound more hoarse or creaky. A similar effect can occur if you have a cold or smoke over a long period of time, both of which can alter the structure of the vocal folds. Although these differences are only probabilistic (some men have high-pitched voices), most people find they’re able to guess the approximate sex and age of a voice.
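The string analogy can even be made quantitative. As a toy illustration – an idealised vibrating string, emphatically not a physiological model of the larynx, with made-up numbers – the fundamental frequency of a string of length L, tension T and mass per unit length μ is f = (1/2L)·√(T/μ), so doubling the vibrating length halves the frequency, all else being equal:

```python
import math

# Idealised string model - a rough analogy for vocal folds, NOT a
# physiological model: f = (1 / (2L)) * sqrt(T / mu).
def fundamental_hz(length_m: float, tension_n: float, mass_per_m: float) -> float:
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_n / mass_per_m)

shorter = fundamental_hz(0.015, 2.0, 0.001)  # shorter "string"
longer = fundamental_hz(0.030, 2.0, 0.001)   # twice the length, same tension/mass
print(shorter / longer)  # 2.0: half the length, double the frequency
```

Real vocal folds differ in tension and mass as well as length, which is why the size difference only shifts the probable pitch range rather than fixing it.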
Secondly, your accent is influenced by your social circumstances. Social phonetic variation originates in associations between accent features and groups of people, in the same way as someone saying “yo” makes you think of rap culture. A well-known type of social variation is geographical pronunciation patterns – you sound like someone from Yorkshire because you use a number of accent features which people associate with Yorkshire speakers. Interestingly, this kind of variation is likely to affect what people think of you. The BBC Voices project recorded 34 accents of English and asked about 5,000 British listeners to judge how attractive and/or prestigious they sounded. The researchers found that accents associated with stereotypes of power, like American English and German English, ranked high for prestige but low for attractiveness, whereas e.g. Southern Irish English and Caribbean English ranked low for prestige but high for attractiveness. Awareness of such links can also be exploited for sociolinguistic ends. For example, a trend has been reported in Japanese for women, who naturally have high-pitched voices, to make their voices even higher in order to come across as more feminine. Similarly, some homosexual men speak in a higher pitch range, thereby associating themselves with a less stereotypical kind of masculinity. Which accent features get used in this way depends partly on how noticeable they are. Some accent features, like the Uptalk or HRT intonation pattern, are highly noticeable, and these can be used – or left out – as part of a conscious strategy. Other, less noticeable features are used as part of long-standing speech habits, but listeners can nonetheless use them to unpack links between you and social groups.
The last type of accent variation I’m going to cover arises from the context of the conversation itself. Conversational context, such as what you’re talking about and who you’re talking to, also affects the way you sound. Contextual accent variation can be part of a long-standing habit that certain situations activate, like the way you talk differently in a formal setting such as a court of law, or the difference between talking with a close friend and with someone you’ve just met. It can also be conditioned by the immediate situation, as when someone mentions Wales and you do a poor imitation of a Welsh accent. Emotions, like suddenly feeling happy or angry during a conversation, can also affect the way you sound – many people report that they can hear whether a person is smiling, even when they can’t see them. As with social variation, contextual variation constrains and/or enriches the other kinds of accent variation: you might identify as a hip-hopper and generally try to sound like a black American (social variation), but if you’re taking an IELTS test, you’ll probably try to sound as standard as possible.
As an experiment, next time you’re speaking with someone on the phone, put on your detective hat and try to work out how much you could infer about them from their voice alone. You’d be surprised how much subtle cues such as vowels and consonants, voice quality or rhythm really say about people. And about you.
When we talk about pidgin languages, we might all have some typical examples in mind, from a stereotypical pidgin like Chinese Pidgin English to highly creolised languages such as Tok Pisin. In case you’re not sure what a pidgin language is, it is usually characterised as a simplified but conventionalised language without any native speakers. Although pidgins may differ in aspects of their grammar, they share the basic process by which they emerge and develop: in simple terms, they arise from sustained language contact, and their major purpose is to provide a lingua franca among people speaking different mother tongues. Most of the time this process is quite “peaceful” – the word “pidgin” itself, a (mis)pronunciation of the English word “business”, already provides some evidence: these languages often originated in commercial activities between different countries.
But in the world of pidgin languages, peaceful development is not always the case. Commerce is not the only thing that can bring languages into contact and give birth to a new pidgin; invasion and war can do so too. In the dust of history we can trace some dead pidgins back to their grim origins, and by studying these languages we can recover some fragments of the past. In this post I would like to introduce one such pidgin, Manchurian Chinese-Japanese Pidgin, which is dead today but was alive and well during the Manchukuo era (1932–1945) and the Second Sino-Japanese War (1937–1945). Born of war, this language has a complicated history of language policies and negative evaluation, but it is worth closer investigation.
A brief history of Sino-Japanese language contact
Although the Japanese language historically absorbed a series of influences from archaic Chinese, I would like to skip that part and focus only on the situation in modern Japan, i.e. after the Meiji Restoration (1868). Since China was one of Japan’s biggest military targets at that time, Chinese was an important part of the training of its military forces. When the Imperial Japanese Army Academy was founded in 1881, Chinese was already one of its well-established courses, and specialised textbooks were published during the First Sino-Japanese War to help Japanese soldiers fight and communicate in China (Ando, 1988).
Right after the Russo-Japanese War in 1905, Japanese began to exert its influence in north-eastern China: while the South Manchurian Railway was being built, a large group of Japanese soldiers settled along the railway, and their language started to mix with the local Chinese dialect. This mixed language, a pidgin, is called “Railway Mandarin” (沿線官話) in Standard Chinese; it was the embryo of Manchurian Chinese-Japanese Pidgin.
The establishment of Manchukuo in 1932 signalled the start of Japan’s colonial project in China; the Manchukuo government took over the north-eastern part of China and Japanese immigrants rushed in. After the establishment of Manchukuo, language contact between Chinese and Japanese became more frequent, and the Japanese government issued policies regarding the use of languages. Chinese residents in Manchukuo and other Japanese-occupied areas were forced to study Japanese at school, and top-performing students could go to Japan for higher education (Kawashima, 2006). At the same time, the Japanese military continued its invasion of other parts of China, and its soldiers needed to learn Chinese in order to give orders to the local Chinese people.
A complicated grammar of Manchurian Chinese-Japanese Pidgin
The Chinese-Japanese pidgin that developed during the Manchukuo era and the Second Sino-Japanese War goes by a number of names, because different historical and linguistic texts refer to it differently. Here I partly follow the suggestion of Sakurai (2012, “Manchurian Pidgin Chinese”) and adopt the neutral name “Manchurian Chinese-Japanese Pidgin” (MCJP), because it clearly captures the main features of the language: it is a pidgin of Chinese and Japanese; it clearly shows features of both languages; it was never creolised; and its “hometown” was Manchukuo. Other names appear in the historical records: the name accepted by most Chinese scholars is Xiehe Yu (協和語, pronounced Kyowa-go in Japanese, literally “Concord Language”; see Sakurai 2012 for a review), while Ando (1988) refers to one particular MCJP variant as Military Chinese Language (兵隊シナ語, sometimes 兵隊中国語).
MCJP is marked by its complex origins, numerous variants and complicated structures. As I explained in the previous section, both Japanese and Chinese native speakers needed a pidgin to communicate; however, owing to their distinct patterns of language exposure, they ended up constructing several variants of it. We can divide this pidgin into two main branches: one takes Chinese as its target language (hence MCJP-Chinese), the other Japanese (hence MCJP-Japanese). According to historical documents, these two variants stabilised in the early 1940s and were widely used in communication between Japanese soldiers and Chinese people (Ando, 1988; Sakurai, 2012); thus they are not simply the product of language learning and foreigner-talk. Because of the similar function and status they had in the linguistic environment of Manchukuo, recent research tends to treat them as variants of a single pidgin (Sakurai, 2012; Zhang, 2012), even though at first glance they look quite different from each other.
Young people who grew up in China may still remember some remarkable scenes in TV programmes about the Second Sino-Japanese War: typically, a commander dressed in a yellow-green Japanese military uniform, with a little round moustache, shouts at the hero or heroine fighting the Japanese, “你的, 良心, 大大的坏了” (“you, the conscience went badly bad”, a weird variant of “你没有良心” – “you have no conscience”). Such strings are easily recognisable by Chinese people with little knowledge of Japanese, but they do not sound like authentic Chinese sentences to native speakers – you will seldom hear a Chinese native speaker put the main verb at the end of a sentence without an auxiliary verb like ba or bei, unless she intends some particular rhetorical effect. These lines are vivid examples of MCJP-Chinese, the variant typically used by Japanese soldiers; in Ando’s (1988) analysis it is called Military Chinese Language.
Chart 1: The syntax structures of MCJP-Chinese and Standard Chinese.
|MCJP-Chinese||Topic (subject)||(Object)||Main verb|
|Standard Chinese||Subject||Main verb||(Object)|
From the comparison above, we can see that the sentence structure of MCJP-Chinese mainly follows Japanese: it has SOV word order, and the particle “的” becomes part of the topic, functioning rather like a topic-marker (Zhang, 2012) – even though Chinese has no overt topic-marker. In practice, however, the target language of MCJP-Chinese is Chinese: the core vocabulary and the pronunciation of words are rough replications of Chinese words and sounds, as shown in Chart 2.
Chart 2: The pronunciation of MCJP based on Chinese and Standard Chinese (example from Ando, 1988)
|Corresponding Chinese expressions||你的||干活计||不行||顶好||坏了|
|Meaning||you or your||labour, work||not good||excellent||go bad|
The other variant of MCJP, whose target language is Japanese, was used by Chinese native speakers in the Japanese-occupied areas, especially in Manchukuo. These Chinese people were forced to learn Japanese at school, but the level of language education varied from place to place, and Chinese remained the dominant language of their daily communication with other Chinese people; this situation led to the development of a “new type” of Japanese, involving code-mixing and some special grammar. Japanese native speakers once regarded it as a “low-level” variant of Japanese, but recent studies show that it is in fact an overlooked pidgin (Zhang, 2012).
One prominent characteristic of MCJP-Japanese is its use of the auxiliary verb aru (hiragana ある; katakana アル): in Standard Japanese, auxiliary aru should only be used after transitive verbs, to indicate a resulting state, but in MCJP-Japanese it replaced other auxiliary verbs and lost its original meaning. Interestingly, not only Chinese people in the Japanese-occupied areas but also Japanese soldiers, whose mother tongue was Japanese, adopted this structure in conversation with Chinese people:
The local Chinese: 二十銭安いあるか？(“Is twenty cents cheap enough?” Standard Japanese: 二十銭安いですか？)
The Japanese soldier: うわ！高いあるな！(“Oh! Too expensive!” Standard Japanese: うわ！高いですな！)
(Example from Zhang, 2012)
As well as this extension of aru, a number of Standard Japanese particles are dropped in MCJP-Japanese, especially case-markers such as “が” (ga, nominative), “を” (wo, accusative) and even “の” (no, possessive) (Zhang, 2012).
Although we can broadly divide MCJP into two variants, there is no clear boundary between them. In the actual language material, we find that speakers could switch between the two variants of MCJP, and even mix them within one sentence, regardless of their native language. From a sociolinguistic perspective, MCJP was a lingua franca used in Manchukuo and other Japanese-occupied areas of China during the Second Sino-Japanese War.
Even though MCJP disappeared several decades ago, its influence is still visible today. In fictional works about that war by Chinese and Japanese authors we can find traces of MCJP: the aru of MCJP-Japanese has become a (mistaken) stereotype used in Japanese comics to show that a Chinese character is speaking Japanese, while the iconic MCJP-Chinese line “你的, 良心, 大大的坏了” has likewise become a stereotype of Japanese speakers’ learner Chinese.
Although the war was a terrible event, the language it bore is precious to linguists in China and Japan, because it opens a door to understanding how a pidgin language develops.
References (I am terribly sorry that all of them are in Japanese; there is no reliable English resource about this language, which is a great pity):
Ando, H. (1988). Chinese and Modern Japan (中国語と近代日本). Tokyo: Iwanami Shoten.
Kawashima, S. (2006). War-time system and Japanese language – Japanese research (戰時體制與日本語·日本研究). Proceeding in International Symposium of Transformation of Modern Japanese Society, Taiwan: Academia Sinica.
Sakurai, T. (2012). Manchuria Pidgin Chinese and Kyowa-go (満州ピジン中国語と協和語). Meikai Japanese 17:2. Retrieved from http://www.urayasu.meikai.ac.jp/japanese/meikainihongo/17/sakurai.pdf
Zhang, S. (2012). The language contact in Manchukuo: Realities of language contact shown in new materials (「満洲国」における言語接触－新資料に見られる言語接触の実態). Retrieved from https://glim-re.glim.gakushuin.ac.jp/bitstream/10959/2750/1/jinbun_10_51_68.pdf
A few weeks ago there was a two-part programme on the BBC entitled Talk to the Animals, presented by Lucy Cooke. As you might imagine, it was about ‘cracking the animal code’ – finding out what animals are communicating to each other and how they are doing so. It was a great programme and got me thinking again about the differences between humans and other animals in terms of the way we communicate.
Our most stand-out method of communication is, of course, language. And our language is used to communicate just about anything and everything we can think of. Whilst animal communication typically concerns food, danger and mating, human communication goes way beyond these things. The more interesting question for me, however, is not so much what we are communicating, but how we are communicating it. How do we package the information we wish to convey and how do we structure it? How is language designed such that it allows us to do these things in the first place?
This question is huge and, surprise, surprise, unanswered. Therefore, I’m simply going to muse on one of the most significant design features of language that has been identified – duality of patterning.
Every human language has a system by which meaningless sounds are combined to make meaningful units (these can be thought of as words), and every human language has a separate system which combines these meaningful units into phrases and sentences (the same applies to Sign Languages). This means that a language can have a reasonably small number of meaningless elements from which it can generate a very large number of distinct words. Furthermore, this very large number of distinct words can be combined to form an even larger number of distinct sentences (in fact, an infinite number of sentences). The capacity of human language to take discrete elements from one level and combine them to make discrete units at another level is what Charles F. Hockett called duality of patterning (Hockett 1960).
It is an immensely efficient way of doing things. Imagine what language would be like if this were not the case. To be meaningful at all, the elements of language would have to be meaningful in and of themselves. Since there would be no way of combining them, we could only express as many things as we have words for. The shapes of these words would be chaotic as well since there would be no way of combining smaller meaningless elements into words.
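The combinatorial arithmetic behind this efficiency is easy to sketch. The toy Python snippet below (an invented six-element inventory, purely illustrative) counts how many distinct strings a handful of meaningless elements can yield, and how the second system then multiplies words into utterances:

```python
from itertools import product

# Six meaningless elements (a toy "phoneme inventory").
phonemes = ["p", "t", "k", "a", "i", "u"]

# First system: how many distinct strings of length 1-4 can be built?
word_space = sum(len(phonemes) ** length for length in range(1, 5))
print(word_space)  # 6 + 36 + 216 + 1296 = 1554

# Only a few of those strings are conventionalised as meaningful words;
# a second system then combines words into larger utterances.
words = ["pat", "kit", "tuk"]
two_word_utterances = [" ".join(pair) for pair in product(words, repeat=2)]
print(len(two_word_utterances))  # 3 * 3 = 9
```

Allow longer strings, and recursion over phrases, and the second number grows without bound – the infinite number of sentences mentioned above.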
A number of authors have suggested that a system for combining meaningless elements into meaningful words does exist in other animals, e.g. humpback whales and chaffinches (see Hurford 2007), but a system for combining meaningful words into phrases and sentences appears to be much rarer and possibly unique to humans. Why should humans have two combinatorial systems at their disposal? Or could it be that the two systems are fundamentally the same but appear different purely because of the nature of the elements they manipulate? This suggests that studying the similarities and differences between phonology and syntax will shed light on the underpinnings of our combinatorial abilities (see Nevins (2010) who argues that the operation Agree is found in both syntax and phonology). Comparing these with the abilities of other animals may then shed light on the evolution of language itself.
Hockett, C. F. (1960). The Origin of Speech. Scientific American, 203(3), 89–96.
Hurford, J. R. (2007). The Origins of Meaning: Language in the Light of Evolution. Oxford: Oxford University Press.
Nevins, A. (2010). Locality in Vowel Harmony. Cambridge, MA: MIT Press.
As I sat on the edge of my sofa on Saturday night watching Doctor Who and trying to acclimatise myself to a slightly softened version of Malcolm Tucker as the new identity of everyone’s favourite Time Lord, I wondered whether I could call myself a Whovian. A Whovian, as you may know, is someone who self-identifies as part of the Doctor Who fanbase. It is one of a seemingly endless set of terms that have been created to describe one’s particular fandom affiliation.
Nicknames for groups of fans have been around for a long time: the name Whovian, for example, was first used in the 1980s, when fans created a fan club newsletter called the Whovian Times. For years we have heard football fans identifying themselves as part of the Toon Army or as Gooners (Newcastle United fans and Arsenal fans, respectively). However, there seems to have been a recent explosion of fan nicknames in a host of areas: music (Beliebers, Directioners, Swifties), TV (Sherlockians, Gleeks), books (Ringers, Tributes, Twihards), films (Trekkies) and even celebrities (Cumberbitches, Pine Nuts). In this post I want to consider a few questions: Why do we feel the need to create these nicknames? Why do some fan groups have nicknames whilst others do not? And what makes a good fandom nickname – how are they created?
Firstly, why are these nicknames coined? I think there are four core reasons:
This brings me on to another question – why do some fan groups have nicknames and some do not? Some brands are enormously popular and yet do not have a fan group nickname. For example, Oprah Winfrey is arguably the most powerful woman in America. She has immense influence, is allegedly worth $2.9 billion and has over 25 million followers on Twitter. However, her many millions of fans do not have a nickname. I think this is due to two of the reasons mentioned above. As nicknames may stem from brands being divisive, there must be a feeling that the brand needs defending. Oprah is not criticised enough for her fans to rally together under one name. Secondly, to create an ingroup, a brand must be in some way exclusive. Oprah is too ubiquitous and popular to really be the source of an ingroup and therefore a fan name. Other huge fanbases that do not have a clear nickname include fans of Game of Thrones (or more generally the book series A Song of Ice and Fire) and fans of Harry Potter (notably some people call this group Pottheads but this started as a derogatory term and does not unite the fanbase). In these cases I suggest the final reason is at play again. With their phenomenal popularity, one cannot affiliate oneself with these brands as an ingroup due to the sheer size of the fanbase. However, one may choose to affiliate with certain characters or groups within the brands. For example, fans may side with the Lannisters or the Starks in the Game of Thrones fandom and Gryffindor or Slytherin in Harry Potter. Indeed, some of these sub-sections do have fan group nicknames. For example, the group of Harry Potter fans who wish that Hermione had chosen Harry instead of Ron call themselves Harmonians!
So, how are these nicknames formed? Sometimes the artists select them themselves. This is not a new occurrence: George Harrison called the superfans of The Beatles who gathered outside the Apple Corps building Apple Scruffs. In 2009 Lady Gaga dubbed her fans her Little Monsters (after her album The Fame Monster) and Ke$ha called her fans Animals (after her album Animal). Often, however, the communities develop the nicknames themselves. Sometimes they have a selection of names and ask the celebrity to pick from them (for example, Ed Sheeran picked Sheerios from some fan-suggested possibilities). Sometimes the fans’ choice is one the artist in question does not necessarily approve of (Benedict Cumberbatch would prefer his fans to call themselves Cumberbabes or the Cumber Collective, but they have dubbed themselves his Cumberbitches). When fans do select nicknames, a number may abound for a while until one wins out (as Ringers won out over other possibilities for Lord of the Rings fans, such as LOTRians). Fans of the Hunger Games series rather democratically held an online vote to choose their fan name.
So then, how does one create a good fan group nickname? The easiest way is to take the name of the object of your affection and add a suffix; the most popular appear to be –ers, –ies and –ians. Notably, fanbases seem to steer clear of –phile (the suffix meaning ‘having a fondness for’), perhaps because of unfavourable connotations from its use in unsavoury words such as paedophile and necrophile. A second option is to create an amusing portmanteau, such as Gleeks (Glee + geek), Twihards (Twilight + try-hard), Bey Hive (Beyonce + beehive), Fanilows (fan + Barry Manilow) and, finally, for men who like a retro kids’ TV show, Bronies (brother + My Little Pony). As mentioned earlier, a fan group nickname can act as a shibboleth, so some groups may choose something slightly more obscure that only ‘real fans’ will understand. For example, Hunger Games fans chose Tributes as their nickname (a term used in the books for a certain heroic group to which most of the main characters belong). Similarly, Miley Cyrus fans are called Smilers because Miley was nicknamed Smiley as a child, Bruce Springsteen fans call themselves Tramps after a line in one of his songs, and Katy Perry fans named themselves KatyCats after their idol’s love of cats.
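The suffixation strategy is mechanical enough to parody in code. Here is a purely toy sketch (the vowel-trimming rule is my own crude assumption, not a real morphological analysis):

```python
SUFFIXES = ["ers", "ies", "ians"]

def nickname_candidates(name: str) -> list[str]:
    """Generate fan-group nickname candidates by simple suffixation."""
    stem = name.lower()
    # Crude assumption: drop a final vowel so the suffix attaches smoothly.
    if stem.endswith(tuple("aeiou")):
        stem = stem[:-1]
    return [stem.capitalize() + suffix for suffix in SUFFIXES]

print(nickname_candidates("Sherlock"))  # ['Sherlockers', 'Sherlockies', 'Sherlockians']
```

Of course, which candidate wins out is decided socially, not mechanically – as the Ringers-versus-LOTRians case shows.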
Due to my inherent narcissism, I could not help but consider what my fans would call themselves if I ever gained celebrity. I think that Rowena would only have to drop its first syllable to become a passable fan group name. I can therefore only hope that I maintain my obscurity, to save any group from ever having to declare themselves Weeners.