Do happiness and sadness taste like sweet and sour chicken?

This title may sound a bit weird to you; the question sounded just as weird to me when I was invited to answer it on a Chinese Q&A website – ‘in Chinese, why do we use the same word sour for the taste of vinegar and for the sad feeling you get when you hear a touching story?’ Several similar questions can be found on that website, such as ‘why do we use up/high for something good but down/low for something bad?’, or ‘why does English use in to talk about time?’ Fortunately (or not), my current work is on semantics, specifically on metaphor, which meant I could give an answer when they turned to me. Today’s post starts from that story and goes slightly beyond it, to explore the question: when we mean ‘happy’ and ‘sad’ by saying ‘sweet’ and ‘sour’, do we really taste them in our minds?

 

CC stu_spivack

The whole story begins with the development of the so-called ‘contemporary theory of metaphor’ (henceforth CTM), which comes out of the field of cognitive semantics and is best represented by Lakoff and Johnson and their book Metaphors We Live By (1980). Lakoff and Johnson’s idea concerns the cognitive realisation and conceptual formation of metaphor. They treat metaphor as a mapping between two concepts in different conceptual domains, which turns ‘metaphor’ into a phenomenon at the level of concept formation. Lakoff and Johnson believe that metaphor, like a mirror, faithfully reflects our perception and cognition of the world, and that such reflection is embedded in our everyday language. The reason we use ‘up’ for happiness (e.g. ‘cheer up’) and ‘down’ for sadness (e.g. ‘his mood is low’) is not simply that we want to make our speech fancier; rather, we really do feel ‘high’ and jump ‘up’ when we are full of joy, while we lower our heads when we are disappointed. They also claim that these metaphorical mappings should be universal, since human beings should perceive these events in a similar way – which is also a fundamental proposal of cognitive linguistics.

The arrival of CTM led to an earthquake-like shift in the field of metaphor research. Our definition of ‘metaphor’ changed drastically because of their proposal that metaphor is a mapping at the conceptual level. In the traditional view, such as a Gricean account (Grice 1989), a metaphorical sentence is always non-literal, and we can always sense the deviance when we hear someone saying to his lover ‘you are the cream in my coffee’. Under the framework of CTM, however, even some typically literal sentences can contain a conceptual metaphor. For instance, ‘her voice is sweet’, which sounds quite literal to most native speakers of English and to many English learners, contains the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD. (When we refer to conceptual metaphors, we use small capital letters to show that the mapping is at the level of concepts: ‘pleasurable experiences’ is the target domain of the metaphor and ‘sweet food’ is the source domain – see Barcelona 2000 for more examples.) Pleasurable experiences put people in a good mood, just as sweet food does. The linguistic realisation of a conceptual metaphor is called a ‘linguistic metaphor’, even though it may be classified as ‘literal’ in the traditional semantic view. Iconic conceptual metaphors identified by Lakoff and Johnson include ARGUMENT IS WAR, TIME IS SPACE, LIFE IS A JOURNEY and so on – you won’t miss them if you read any article on CTM.
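
To make the distinction between a conceptual metaphor and its linguistic realisations concrete, here is a tiny sketch in Python. This is purely my own toy illustration (nothing Lakoff and Johnson themselves propose): the conceptual metaphor is a mapping between two domains, and the linguistic metaphors are just sentences that realise it.

```python
# A toy representation (my own, purely illustrative) of one conceptual metaphor:
# the mapping holds at the level of concepts; the sentences merely realise it.

conceptual_metaphor = {
    "name": "PLEASURABLE EXPERIENCES ARE SWEET FOOD",
    "source_domain": "sweet food",                 # the domain we map from
    "target_domain": "pleasurable experiences",    # the domain we map onto
    "linguistic_metaphors": [                      # surface realisations, however 'literal' they sound
        "her voice is sweet",
        "sweet music",
    ],
}

for expression in conceptual_metaphor["linguistic_metaphors"]:
    print(f"'{expression}' realises {conceptual_metaphor['name']}")
```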

Let’s go back to our sweet and sour examples, with some analyses and counterexamples. Within CTM, a series of interpretations of ‘sweet’ and ‘sour’ sentences has been produced, making use of conceptual metaphors like PLEASURABLE EXPERIENCES ARE SWEET FOOD (Dirven 1985; Barcelona 2000), UNPLEASANT EXPERIENCES ARE SOUR OR BITTER FOOD (Barcelona 2000) and JEALOUSY IS SOUR/BITTER (Yu 1998; Buss 2000). These observations show that, cross-linguistically, sweetness is associated with pleasant experiences and joyful objects, while sourness is associated with the opposite. The reason for this association, in the spirit of CTM, is that the source domain and the target domain evoke similar cognitive effects. Soon, however, we will see that these basic conceptual metaphors cannot cater for all the possibilities that ‘sweet’ and ‘sour’ present in different languages.

Although Lakoff and Johnson claim that conceptual metaphors exist across languages and cultures, the realisation of these conceptual metaphors varies from language to language, which means the mapping may not be truly ‘universal’. Take our favourite example, ‘sweet’. In a number of languages, the word ‘sweet’ is associated with nice feelings and delicate objects, for instance ‘sweet music’ and ‘sweet voice’ in English, or ‘xinli ganjue hentian’ (feeling sweet in one’s heart) and ‘tianyan miyu’ (sweet sentences and honey words) in Chinese. But an extraordinary counterexample is found in Japanese: the Japanese counterpart ‘amai’ (sweet) can be used to describe a naive person without any knowledge, which has an obviously negative implication. This use has also been transferred into Chinese, and I was totally surprised when one of my close friends said ‘ta taitian-le’ (he is too sweet) when what she meant was ‘he is so naive’. There is even a semi-formulaic popular expression in Chinese, ‘sha bai tian’ (lit. stupid, white and sweet), describing ‘a super naive, super foolish person’. The use of ‘sweet’ for naivety is clearly not part of the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD.

Another interesting example is that both English and Japanese demonstrate a (limited) use of ‘sweet’ to describe ‘a large amount’, reflected in ‘a sweet amount of time’ and ‘mizu ga amai’ (lit. the water is a large amount); in Chinese, however, this expression is absent. It is also difficult to cover the meaning ‘a large amount’ by applying the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD.

Such cross-linguistic differences lead me to question whether these associations are systematic or merely coincidental, or a combination of the two. The case above clearly shows that the use of ‘sweet’ for ‘naive’ in Chinese is a borrowing from Japanese, while in English the connection ‘naivety is sweet’ is entirely absent. At this point, we have three options for explaining the phenomenon. First, maybe we do have a conceptual metaphor NAIVETY IS SWEET FOOD; this is difficult to argue for, because cognitively we cannot directly associate naivety with sweetness, and we would also need a reason why it appears in only a limited number of languages. Second, maybe ‘naivety is sweet’ is derived from some existing conceptual metaphor that has not yet been identified, since ‘naivety’ is definitely not a pleasant experience; finding that conceptual metaphor, however, is no less difficult. Third, it is mere coincidence that Japanese uses ‘sweet’ for naivety, which would make the apparent conceptual metaphor nothing more than an accident. The use of ‘sweet’ for ‘a large amount’ in English and Japanese faces the same problem: either we find a valid conceptual metaphor that caters for these expressions and explains why it is present in only some languages, or we admit that it is not a metaphor at all, even though it involves a mapping between domains.

These are the problems that challenge CTM today. Maybe humans do systematically use ‘sweet’ for happiness because they feel good when they encounter a sweet flavour, but until we have investigated the possibilities across different languages and cultures, we cannot claim that this usage is universal, and we cannot attribute every usage to human cognition. We should always keep in mind that cross-linguistic similarities might be just a coincidence, or the result of semantic borrowing. When we use ‘sweet and sour’ to describe a mixture of happiness, unease and anxiety, it is possible that we do so only because it is a linguistic convention. Maybe we do not have a plate of sweet and sour chicken in our minds after all.

For more sweet and sour feelings, have a look at these references:

Barcelona, Antonio. 2000. ‘On the plausibility of claiming a metonymic motivation for conceptual metaphor’, in Antonio Barcelona (ed.), Metaphor and Metonymy at the Crossroads: A Cognitive Perspective (Walter de Gruyter), pp. 31–58

Buss, David M. 2000. The Dangerous Passion: Why Jealousy Is as Necessary as Love and Sex (Simon and Schuster)

Dirven, René. 1985. ‘Metaphor as a basic means for extending the lexicon’, in Wolf Paprotté and René Dirven (eds.), The Ubiquity of Metaphor: Metaphor in language and thought (John Benjamins Publishing), pp. 85–119

Grice, H. Paul. 1989. Studies in the Way of Words (Cambridge, Massachusetts: Harvard University Press)

Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By (Chicago: University Of Chicago Press)

Yu, Ning. 1998. The Contemporary Theory of Metaphor: A Perspective from Chinese (John Benjamins Publishing)

Tones like to move it

In my last post, I wrote about some characteristics of tones (among others, that they can “float”) and the theory of their origin – the science of tonogenesis. I mentioned that tone is a highly areal feature: tones either have a huge presence in a language family (Niger-Congo and Sino-Tibetan) or hardly show up at all (Indo-European). Even among regions where tones show up in large numbers, there are significant differences in how they typically behave. Traditionally, tonologists tend to concentrate on either African (esp. Bantu) tone languages or Asian (esp. Chinese) ones, with relatively little conversation between the two camps. This is partly for historical reasons, and partly because the points of interest are so very different between these two groups of languages. I will use today’s post and my next one to introduce salient characteristics of African and Asian tone languages, and to show their impact on our understanding of phonology and, of course, language.

African tones are famous for their mobility. The Bantu language Chizigula (aka Zigula), spoken in Tanzania and Somalia, provides a particularly striking example. In this language, a verb is either toneless, or one of its syllables carries an H (high) tone. When I talk about verbs, I am really referring to verbal stems, which you can think of as the basic form of a verb without all the affixes. As often happens in African languages, Chizigula has a rich morphological system, with potentially many layers of affixes. The interesting thing is that when a Chizigula verbal stem with an H tone gets suffixes, the H tone always moves to the penultimate (second-to-last) syllable of the newly affixed verb. I said “always” because the H tone is absolutely hellbent on moving, no matter how many syllables it has to jump to do so. Consider the Chizigula verb for “request”, with and without suffixes, in (1).

(1a)     lómbez                    ‘request’

(1b)    ku-lombéz-a             ‘to request’

(1c)     ku-lombez-éz-a            ‘to request for’

(1d)    ku-lombez-ez-án-a       ‘to request for each other’

Example (1a) shows the verbal stem, /lómbez/, where the H tone is attached to the segment /o/, marked with an acute accent. We take this tonal assignment to be basic and “underlying”, given that the verbal stem appears in isolation here. In (1b), with the addition of the suffix -a, the H tone moves rightwards to the now second-to-last syllable, /be/. In (1c) and (1d), with progressively more suffixes added, the H tone moves further and further to the right (no pun intended, for the politically conscious), but true to form it always ends up on the penultimate syllable, even when this means moving three syllables away from its underlying position.

So the Chizigula tone is a travel freak. What’s so interesting about that? Well, as I alluded to in my last post, the consequence of this and other findings about tonal mobility is nothing short of revolutionary for phonological theory. One resulting insight is that tones are “autosegments”: they are autonomous and independent of segments, which they can leave, across which they can move, and onto which they can dock. Phonologists formalise this insight by positing separate tonal and segmental tiers, linked by association lines. I won’t go deeper into the finer theory, except to say that this formalism, in essence, is what we today know as Autosegmental Phonology. The following diagram depicts how the itinerary of the Chizigula H tone is represented in this scheme.

H de-linking and re-association

The H tone is originally linked to the syllable /lom/; under the pressure to have all H tones docked onto the penultimate syllable, the H is delinked from /lom/ and re-links with the penultimate /be/. You can easily extend this scheme to (1c) and (1d): all you have to do is do the same delinking operation, and then re-link the H tone to /ze/ in (1c) and /za/ in (1d).
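
If it helps to see the rule as a procedure, here is a minimal sketch in Python – my own toy illustration, not anything from the Chizigula literature – in which the H tone is delinked from its underlying syllable and re-associated with the penultimate syllable of the affixed verb, however many suffixes have been added.

```python
# A minimal sketch (my own simplification) of Chizigula H-tone docking:
# whatever syllable the H is linked to underlyingly, it re-links to the
# penultimate syllable of the fully affixed verb.

def dock_h_to_penult(syllables, underlying_h_index):
    """Return a tone tier with one slot per syllable ('H' or None)."""
    tones = [None] * len(syllables)
    if underlying_h_index is None:          # a toneless verb stays toneless
        return tones
    penult = len(syllables) - 2 if len(syllables) > 1 else 0
    tones[penult] = "H"                     # delink from the stem syllable, re-link to the penult
    return tones

# (1b)-(1d): no matter how many suffixes are added, the H ends up on the penult.
print(dock_h_to_penult(["ku", "lom", "be", "za"], 1))              # H on /be/
print(dock_h_to_penult(["ku", "lom", "be", "ze", "za"], 1))        # H on /ze/
print(dock_h_to_penult(["ku", "lom", "be", "ze", "za", "na"], 1))  # H on /za/
```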

Another insight gained from the Chizigula tones’ unusual migration pattern has to do with the problem of locality. Linguists tend to think that linguistic objects like tones can’t wander around unrestrained from where they belong. In other words, there must be some kind of “locality condition”, by which objects may only move to an adjacent position. Chizigula tones moving three syllables away from their underlying position obviously stretches our definition of adjacency. In response to this and other so-called long-distance processes, phonologists now recognise “relativised locality”, in contrast to the stricter “absolute locality”. In a nutshell, it’s not the absolute distance (x syllables or y segments) that determines adjacency, but whether there are obstacles along the path of movement. Chizigula tones can do long-distance travel because nothing intervenes on their path; if Chizigula had low tones and one of them stood between the H tone and the penultimate syllable, the H tone might well have to cancel its travel plans. One of the languages that do show this blocking effect is Luganda, where an H tone spreads freely until it encounters an L tone.

(2a) à-, bala, e-, bi-, kópo

(2b) à-bálá é-bí-kópo        ‘he counts cups’

When the stems and affixes in (2a) stand alone, only the first syllable of /kópo/ has an H tone and the prefix /à/ has an L tone; the rest are toneless. When these are strung together to form the sentence in (2b), the H tone has spread across four more syllables, stopping only at the L tone on /à/. This is but one small example illustrating relativised locality – more can be found in vowel and consonant harmony processes.
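
Here is the same idea as a toy Python sketch (again entirely my own, and a gross simplification of Luganda tonology): the H spreads leftward across toneless syllables, which are transparent, and stops as soon as it reaches a syllable that already carries an L.

```python
# A toy sketch of relativised locality: an H tone spreads over toneless
# syllables (transparent) until a linked L tone (an obstacle) blocks it.

def spread_h_leftward(tones):
    """`tones` has one slot per syllable: 'H', 'L' or None (toneless)."""
    out = list(tones)
    for i, tone in enumerate(out):
        if tone == "H":
            j = i - 1
            while j >= 0 and out[j] is None:   # toneless syllables are transparent
                out[j] = "H"
                j -= 1
            break                              # an L (or the word edge) halts the spreading
    return out

# (2b) a-ba-la-e-bi-ko-po: L on the prefix /a/, H on /ko/, the rest toneless.
print(spread_h_leftward(["L", None, None, None, None, "H", None]))
# -> ['L', 'H', 'H', 'H', 'H', 'H', None]: four newly H-toned syllables, blocked by the L
```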

Thus, from just one tonal process – a data point from a language spoken by around 20,000 people – we have seen a great deal of tonal phonology, and tonal phonology at its best. From the way the Chizigula H tone moves (and the Luganda H tone stops moving), we have a solid piece of evidence for how our brains manipulate mental objects: the autonomous movement of tones (Autosegmental Phonology) and the conditions on their movement (relativised locality). Things will get even better, however, when we move on to Chinese tone sandhi next time.

Further reading:

1. Autosegmental Phonology

Goldsmith, John A. 1976. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68

Excellent slides on autosegmental phonology by Jochen Trommer. The figure in this post is taken from his slides: http://www.uni-leipzig.de/~jtrommer/Nonconcatenative/1a.pdf

2. Relativised locality

Nevins, A., & Vaux, B. (2004). The transparency of contrastive segments in Sibe: Evidence for relativized locality. GLOW, Thessaloniki.

Vaux, B. (1999). Does Consonant Harmony Exist. Presented at the Linguistic Society of America Annual Meeting.

How easy is it for someone to change their own grammar?

A few days ago, a friend of mine engaged me in conversation about gendered pronoun usage in English. After spending some time considering the topic, they’d decided it would be a positive political move to start using gender neutral terms by default—not assuming, in other words, that they could automatically guess the gender identity and pronoun preference of anyone they met. They wanted my view—as a linguist, as well as someone interested in queer and feminist politics—on the practicality of switching to entirely gender neutral pronoun use, and on which pronoun was the best option.

A lot of ink, both figurative and physical, has been spilt on the issue of pronoun choice and gender neutral pronouns. Most mainstream discussion of the topic has concerned how to refer to individuals of unspecified gender in formal writing. Traditional style manuals advocated using ‘he/him/his’ in this context, but this has been criticised from a feminist standpoint for a long time. To my ears—and, I assume, to others of my generation—using ‘he/him/his‘ to refer to individuals of unspecified gender now sounds stylistically weird nearly to the point of ungrammaticality. You’ll more commonly come across other solutions in formal writing, like ‘she or he / her or him / her or his’ or ‘they/them/their’.

In queer politics, the same need for a gender neutral pronoun arises for a different reason. People who are neither female nor male, or not solely female or male, such as nonbinary trans* people, may feel the need for a pronoun that doesn’t misgender them. In these spheres ‘they/them/their’ is common, but other alternatives are also used, such as the so-called Spivak pronouns ‘E/Em/Eir’ and the gender neutral ‘ze/hir/hir’ used by some online genderqueer communities.

My friend’s proposal is radical, but is not unique. One similar example you may have come across in the news in the last few years comes from Sweden. Some preschools in Sweden such as Egalia in Södermalm practice genuspedagogik—pedagogy focused on highlighting the effect of gender on children in educational contexts—and aim to use the recently coined gender neutral pronoun ‘hen’ (instead of feminine ‘hon’ or masculine ‘han’) for all children. This pronoun is a convenient fit in Swedish: it obviously resembles the feminine and masculine forms, and it happens also to have the same form as the (gender neutral) 3rd person pronoun in neighbouring Finnish. It is beginning to gain a little ground in Swedish: it has been used in children’s books, in parliament and even in a published legal judgement.

In this post, however, our focus is on the linguistic issues involved. So can you just choose to use a new, gender neutral pronoun in lots of contexts where your native grammar specifies you should use a gender marked form? Will people understand you? If it is possible, which of the various options are preferable?

Introducing a new pronoun into a language is an unusual enterprise. Languages add new words all the time and speakers have no trouble acquiring and using them, but these are what are referred to as ‘open class’ words: nouns, verbs, adjectives, adverbs. These classes of words are open in the sense that they can be added to and speakers have many strategies (derivational morphology) in their grammars for doing this. Consider a recently coined word like ‘selfie’: it’s immediately obvious how it has been formed and how that composition relates to its meaning. However, ‘closed class’ or grammatical words, such as pronouns, auxiliary verbs and prepositions, are much harder to coin. Speakers have no strategies for creating these words, but instead seem to list them as a fixed—closed—set in their mental grammars. So when we try to add a new one, we’re not really engaging in a normal linguistic process, and accordingly it’s a lot harder for such usage to become entirely automatic and unconscious in the way that most of our language use is. Anyone at home in queer social spaces is probably aware of how easy it is to make mistakes in using others’ preferred pronouns, especially when those pronouns are neologisms such as ‘E/Em/Eir’ or ‘ze/hir/hir’.

Nevertheless, there’s no reason to believe it an impossible task, and, I think, several good reasons to assume that it’s quite feasible. Speakers clearly do change their grammars over the course of their lifetimes, at least in minor ways, as they’re exposed to new grammatical variants through diffusion (that’s the spread of new forms between speakers). This is one of the normal processes of language change, and is going on all the time. Although this individual change is limited, the evidence is that the sort of changes that are easiest for adult native speakers to acquire are structural mergers—changes which remove a previously maintained grammatical distinction. And the introduction of a gender neutral pronoun is effectively just such a merger.

In addition, the target situation—one in which the 3rd person singular pronoun used in many, most or even all situations doesn’t encode gender—is perfectly normal, cross-linguistically. The map below (reproduced from WALS) shows gender distinctions in independent pronouns in languages across the world: white dots represent languages with no gender distinctions, and it’s easy to see that they’re pretty common.

Gender distinctions in independent personal pronouns (reproduced from WALS)

So which gender neutral pronoun should my friend pick? Obviously this is primarily a political question. Nevertheless, I think we can get another interesting insight here by comparing the introduction of a gender neutral pronoun to ‘normal’ (that is, not consciously initiated) language change. Generally, when innovative forms or usage patterns enter a language, they do so by gradual spread along many axes: they spread between adjacent geographical areas, between interconnected social groups, by a gradual increase in frequency and, crucially, gradually from linguistic context to linguistic context. A new form is generally innovated in a particular grammatical context, from which it spreads first to very similar grammatical contexts and eventually to very different ones. As a result, we’re relatively used to coming across and acquiring new usages that are partially familiar to us but have been extended to related-but-slightly-different contexts.

The proposed gender neutral pronoun that most closely resembles this ‘natural’ situation of language change is ‘they/them/their’. This already exists in most varieties of spoken Modern English as a gender neutral pronoun used in contexts where the gender of the referent is unknown or the referent is non-specific – in speech, sentences like ‘if someone wants a piece of cake, they should have one’ don’t sound at all marked. You might even come across it used by speakers who are intentionally avoiding mentioning a referent’s gender, such as when maintaining the anonymity of someone in an anecdote. So whereas the other proposed pronouns would require introducing an entirely new form, for ‘they/them/their’ all that’s needed is to extend the use of an existing form into new – but clearly related – contexts.

Who or what does what or who or what has what done to it (with what) and different ways of saying this

Traditional grammar makes use of the terms “subject” and “object” to describe the roles of nouns in a sentence. Prototypically a subject does the action; an object has the action done to it, for example:

(1) Lucy reads the book
subject   object

Now this is all very well up to a point, but when we want to use “subject” and “object” as general labels referring to the meaning of a noun in relation to an action, we run into problems even within a language like English (for some problems that arise cross-linguistically, see my previous post). Consider the following:

(2) the book is read by Lucy
subject

In this (passive) sentence, the book has the same relation to the action described by the verb as before, but it appears as the subject not as the object. In case there’s any doubt about this, consider the following pair of examples:

(3) Lucy loves me
subject   object
(4) I am loved by Lucy
subject

I in (4) behaves exactly like a subject: it is in the nominative case (I and not me) and it triggers agreement with the verb (I am), as well as preceding the verb. Yet its relation to the act of loving is pretty much the same as in (3).

Linguists have dealt with this problem by coming up with the notion of thematic roles, employing labels like AGENT and PATIENT to describe them. Unlike the relations of subject and object, these remain constant whether the sentence is active or passive:

(5) Lucy reads the book
AGENT PATIENT
(6) the book is read by Lucy
PATIENT AGENT

Lucy is the agent in both sentences; the book is the patient.

Whilst linguists have not yet managed to come to any sort of agreement as to what all the different thematic roles actually are, the notion nevertheless helps us make a number of interesting observations. For example, a lot of verbs denoting changes of state can occur both as intransitives (with only one noun, or “argument”, involved in the action) and as transitives (with two arguments). This is the case, for example, with freeze (the words in capitals again refer to thematic roles):

(7) Nick froze the ice cream
CAUSE EXPERIENCER
(8) the ice cream froze
EXPERIENCER

What happened to the ice cream is the same in both instances (it froze), and therefore it seems rational to give it the same thematic role (here labelled EXPERIENCER). In (7), though, the ice cream is an object; in (8) it is the subject. One possible analysis of this is that when the CAUSE argument (e.g. Nick in (7)) – to be understood simply as the argument which causes the change of state described by the verb to occur – is not expressed overtly, an EXPERIENCER is “promoted” into the now-vacant subject position.

This might, in fact, be similar to what we see in the passive. Compare example (4) above with example (9) below, where in the absence of an agent the patient is promoted to subject:

(9) the book is read
PATIENT

As a general rule, we might want to say that all sentences require subjects, and that while these are preferentially agents or causes, they may be patients or experiencers too if no agent or cause is available.

Another thematic role which has been suggested is that of INSTRUMENT. In the following example, this is the role of the knife:

(10) Tiberius sliced the bread with the knife
AGENT PATIENT INSTRUMENT

An instrument, informally speaking, is the thing which the agent uses to effect the action. But instruments can also occur in subject position, e.g.

(11) the knife sliced the bread
INSTRUMENT PATIENT

The knife here can still be considered an instrument: unlike a typical agent, it isn’t doing the action of its own accord, and we assume there is still some unexpressed agent responsible for the slicing. So we have another type of possible alternation: where an agent is omitted, an instrument may be promoted to subject position in its place.

It’s fascinating that as a result of alternations like this the subject of a verb can actually be associated with multiple possible meanings. To give some more examples, the following show that a subject of break might be associated with at least three different roles:

(12) Wilhelmina broke the window (with the snowball)
CAUSE/AGENT
(13) the snowball broke the window
INSTRUMENT
(14) the window broke
PATIENT/EXPERIENCER

Equally fascinating is that in some cases these alternations can’t occur. For example, in Imhotep ate the peas with a fork we have an instrument – a fork – but eat can’t take an instrument as its subject the way break or slice can: we can’t (generally) say *A fork ate the peas to mean “some unspecified person ate the peas with a fork”.
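
For the computationally inclined, the promotion pattern can be sketched as a simple preference hierarchy. The following Python snippet is my own toy illustration (not a claim about any particular syntactic theory): agents and causes are chosen as subjects first, then instruments – but only for verbs like break or slice that permit them – and finally patients or experiencers.

```python
# A toy sketch of subject selection by thematic-role hierarchy, with a
# per-verb restriction on instrument subjects (my own illustration).

ALLOWS_INSTRUMENT_SUBJECT = {"break", "slice"}   # 'eat' is deliberately not listed

def choose_subject(verb, roles):
    """`roles` maps thematic roles to noun phrases, e.g. {'AGENT': 'Wilhelmina'}."""
    for role in ("AGENT", "CAUSE"):
        if role in roles:
            return roles[role]
    if "INSTRUMENT" in roles and verb in ALLOWS_INSTRUMENT_SUBJECT:
        return roles["INSTRUMENT"]
    for role in ("PATIENT", "EXPERIENCER"):
        if role in roles:
            return roles[role]
    return None

print(choose_subject("break", {"INSTRUMENT": "the snowball", "PATIENT": "the window"}))  # the snowball
print(choose_subject("break", {"PATIENT": "the window"}))                                # the window (it broke)
print(choose_subject("eat", {"INSTRUMENT": "a fork", "PATIENT": "the peas"}))            # the peas (via the passive), never 'a fork'
```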

These possible and impossible alternations seem to suggest a lot about the nature of the lexicon and/or the grammar. They are therefore invaluable tools to linguists seeking to understand better how languages work.

A Christmas Cracker

Well, it had to be done. It’s that time of year when each subject tries to find its link to Christmas, its festive cheer, however tenuous.

Our very own Cambridge University website (‘for staff’ section) posted this seasonal offering last week: ‘Festive tastes have changed but Christmas is still a cracker’. It’s about a (pilot) study with a corpus of spoken English – much needed for corpus-based linguistics, of course, which has historically been limited to the written mode. But rather than ‘language as a window into the mind’, this is ‘language as a window into society’. For example: “When it comes to Christmas stalwarts, sherry and brandy appear to have fallen out of favour over the last 20 years, replaced by vodka, gin and even champagne, all of which are being talked about more”.

I’m not personally convinced that we can draw firm conclusions about what’s important/popular/uplifting from the frequency of word use – are we in the realms of sociology rather than linguistics? – but it’s a festive, fun and thought-provoking read. And most importantly, it tells you how you can contribute to the Spoken British National Corpus.

Happy Christmas to all our readers!

CC en-User-Cgros841

Pragmatics: it’s a monkey business

Every day the useful and now omnipresent Google serves me up a selection of ‘language’ news items. Quite frequently, these have little to do with language at all, at least as we linguists think about it (‘body language experts say X is lying when he says…’), but this morning one caught my eye: ‘How to speak MONKEY: Researchers uncover the sophisticated primate language – which even has local dialects’. I was intrigued to see whether this was just another round in the ‘oh yes they talk like us, oh no they don’t’ match, so I clicked through to the Daily Mail site.

As I scanned down the page, what really caught my eye was the word ‘implicatures’ – in common parlance in linguistic circles, of course, but a tad surprising in this particular publication1. And with my Gricean hat firmly on, questions immediately exploded in my mind: monkeys and implicatures?! Monkeys computing implicatures. That’s a surprising thought, because implicatures, in a Gricean world, are inferences you make about a speaker’s meaning with reference to their intentions; this requires ‘mind-reading’, or Theory of Mind. But nonhuman primates have at most only first-order intentionality, not the second-order intentionality required2. And such inferences are what makes human communication so rich, versatile and, well, human.

CC CERCOPAN

In fact, the newspaper piece was a report of an article recently published in Linguistics and Philosophy, less sensationally entitled ‘Monkey semantics: two ‘dialects’ of Campbell’s monkey alarm calls’3. Tempting as it was, I admit I did not wade through all 60 pages of the original in detail. But here are the main points:

  • the aim is to apply a formal framework of syntax (complete with derivational morphology) and compositional semantics to Campbell’s monkey alarm calls
  • a particularly interesting feature of this monkey’s calls is the variation observed between regional groups: both groups use the calls krak (a general alarm) and hok (an aerial threat), which can also take an ‘attenuating suffix’ -oo. However, in the Tai forest on the Ivory Coast, krak functions as a leopard alarm call, whereas on Tiwai island in Sierra Leone it is a general alarm call.
  • two theories to account for this variation are proposed: 1. a lexical account: this variation is just due to the call in question having a different ‘meaning’ in each communicative context. 2. a pragmatic account: there is a common underspecified meaning of ‘krak’ – something like a general alarm call – that is enriched differently in each region by some sort of inference.

Here you can see where the implicatures come into all this: the ‘strengthening’ of meaning that the authors suggest is very much akin to the strengthening of meaning in scalar implicatures, e.g. from some (and possibly all) to some (and not all). Specifically, in Tai the alternatives are krak-oo, hok and hok-oo, and given that the more informative ‘weak general threat’ and ‘(weak) aerial threat’ calls are available but not used, those meanings are negated and krak is left with the meaning ‘dangerous ground threat’, which in this region means leopards. On Tiwai, however, ‘dangerous ground predator’ is pretty much contradictory, as there are no such predators, and so the inference does not occur.
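
For the computationally minded, here is how that alternative-negation story might be cashed out as a toy Python sketch. This is entirely my own simplification – the labels and set-based ‘meanings’ are mine, not the formalism of the original paper.

```python
# A toy sketch of the pragmatic account: krak starts out underspecified, the
# meanings of the more informative alternative calls are negated, and the
# strengthened reading survives only if the region actually has such threats.

UNDERSPECIFIED_KRAK = {"weak general threat", "aerial threat",
                       "weak aerial threat", "dangerous ground threat"}

ALTERNATIVE_CALLS = {                  # more informative calls and what they convey
    "krak-oo": {"weak general threat"},
    "hok": {"aerial threat"},
    "hok-oo": {"weak aerial threat"},
}

def krak_meaning(region_threats):
    strengthened = UNDERSPECIFIED_KRAK - set().union(*ALTERNATIVE_CALLS.values())
    if strengthened & region_threats:             # the strengthened reading is usable here
        return strengthened & region_threats
    return UNDERSPECIFIED_KRAK & region_threats   # otherwise the inference is cancelled

# Tai forest: leopards exist, so krak strengthens to a dangerous ground threat.
print(krak_meaning({"weak general threat", "aerial threat", "dangerous ground threat"}))
# Tiwai island: no dangerous ground predators, so krak stays a general alarm.
print(krak_meaning({"weak general threat", "aerial threat", "weak aerial threat"}))
```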

The authors are quick to point out that of course this has to be a very simple inference: “all that is needed is a—possibly automatic, unconscious, and non-rational—optimization device by which more informative calls ‘suppress’ less informative ones” (p.480). But then, is it much like an implicature at all, as I described implicatures earlier?

The interesting thing is that as soon as you start doing a formal description of nonhuman language using the same categories and terms, you have to ask again what you mean by them when they are applied to human language. You see, I was giving you just one view of implicature, and of scalar implicature in particular. That view, in the tradition of Grice, holds that when a speaker produces an utterance which does not conform to the maxim of quantity, in that it could be more informative (e.g., I ate some of the cookies would be more informative if it were I ate all of the cookies), the hearer reasons that, given that the speaker is rational, co-operative, informative, etc., there must be a reason why they did not utter the more informative alternative, namely that it isn’t true. Trying to attribute this kind of pragmatic process to other primates, going on present knowledge, is not going to pass muster.

Another approach, however, has scalar implicatures down as something much more grammatical – there is a silent and invisible element in the sentence that adds in the extra meaning. This means that some (but not all) of the inference can be derived without complex higher-order reasoning about speaker intentions. And it’s perhaps on this kind of view that one can talk about ‘monkey pragmatics’.

Incidentally, Julia Fischer, of the German Primate Centre, also considers “the investigation of the role of previous and actual contextual information on animals’ responses to signals as one of the most exciting challenges in our field. Studying animal pragmatics may turn out to be more fruitful than assessing the symbolic or syntactic aspects of animal communication”4. Here, it seems that ‘pragmatics’ is being used in yet another sense – any inference in communication that draws on cues from context.

So here, from our distant relations, we have a reminder that ‘Pragmatics’ for humans is a murky area, with vastly differing scopes and approaches; pragmatics for primates may be an even harder nut to crack.

    1. For example:
    Arthur: Did you meet her parents?
    Malcolm: I met her mother.
    +> I met her mother and not her father.

    2. first-order vs second-order ToM
    First order intentionality involves intending to change another person’s behaviour; second-order their state of mind.
    Theory of Mind is what, on one theory, enables us to think about other people’s thoughts; a full-blown Theory of Mind has yet to be found in any non-human.

    3. Schlenker, P., Keenan, C. S., Ryder, R., & Zuberbühler, K. (2013). Monkey semantics: Towards a formal analysis of primate alarm calls. In Twenty-Third Semantics and Linguistic Theory Conference (pp. 3–5).

    4. Fischer J (2013) Information, inference and meaning in primate vocal behaviour
    In: Stegmann U (ed) Animal Communication Theory: Information and Influence.
    Cambridge University Press, Cambridge, pp 297–317

Tone: what it is and where it comes from

People who speak tone languages aren’t really different from everyone else. In using tone in everyday speech, they are not driven by fierce hostility towards vowels and consonants or a devilish wish to frustrate language learners. As a speaker of two tonal languages (Mandarin and Fuzhou Min), I can tell you as much. Through this blog post, I hope to provide preliminary answers to two questions: What is tone, and where does tone come from?

Tone, intonation and stress are all ways in which we fine-tune the pitch of our voice to express differences in meaning. Pitch is often conceptualised on a scale from “low” to “high”, and the ability to manipulate pitch allows us to signal a question with a rising pitch or to indicate stress in words like “present”. Tone is most commonly defined as contrastive pitch used to distinguish morphemic units. With tone, you can contrast words with identical segments but distinct pitch patterns, through differences in either pitch height (High vs. Low) or pitch shape (Level vs. Contour). In Mandarin Chinese, for example, almost every morpheme is associated with one of four tones, which have distinct pitch shapes. For tone language speakers, distinguishing words through tonal differences is as natural as doing so through differences in vowels and consonants. In the following sound file, you can observe how the sounds [ma] combine with tones in Mandarin to form morphemes with distinct meanings (“mother”, “hemp”, “horse” and “scold”, in that order).

Tone is a highly areal feature. Although an estimated 50 to 70 percent of all human languages are tonal, the vast majority of these are clustered in Sub-Saharan Africa, the Asia-Pacific region, and Central and North America. Tone languages also differ considerably amongst themselves, and such differences often seem to be driven by language genealogy. One salient divide is between the so-called register tone languages and contour tone languages, which are respectively the norm in Africa and East Asia. Roughly speaking, register tone systems are made up of tones with level pitch (e.g. Yoruba’s high, mid and low tones), while contour tone languages have more complex tonal shapes. Fuzhou Min, for example, has a contour tone characterised by a rising-falling pitch. Take a listen below (the word means “two”).

In many African languages, entire morphemes can consist only of tones – these tones seem able to “float” without being permanently attached to segments. The definite article in Bambara is said to be an example of this. The word for “the” in Bambara is a floating low tone, which docks onto nouns and changes their pitch shapes. To exemplify, the word for “river” in Bambara is pronounced [bá] (the acute accent denotes a high tone) in isolation, but “the river” is rendered as [bâ] (the circumflex denotes a falling tone). Here, we can construe the falling tone as the combination of a high tone (from the noun) and a low tone (from the floating definite article). The discovery of floating tones played an important role in launching the theory known as Autosegmental Phonology, which continues to dominate the way we represent our objects of study in phonology. The 1970s: those were glorious days for tones.
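
As a quick illustration of the mechanics, here is a tiny Python sketch (my own, and very much a simplification of Bambara tonology): the floating L of the definite article docks onto the noun’s final syllable, turning a final H into a falling HL contour.

```python
# A toy sketch of a floating tone docking onto its host (my own illustration).

def add_definite_article(noun_tones):
    """`noun_tones` lists one tone per syllable, e.g. ['H'] for [ba] 'river'."""
    tones = list(noun_tones)
    tones[-1] = tones[-1] + "L"        # the floating L attaches to the final syllable
    return tones

print(add_definite_article(["H"]))     # ['HL']: [ba] 'river' -> falling tone, 'the river'
```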

A frequent question for me at college formals is why people would want to “do” tones. I get where they are coming from. After all, English and many other languages seem perfectly able to cope without tones. And even if people take a liking to tones, why the hell do some languages use a monstrously large number of them (some Cantonese varieties reportedly have 10 tones)? There is a whole branch of tonology devoted to these questions under the banner of “tonogenesis”. This is what got me interested in tones, so allow me to indulge in an example.

The best-known source of tones is a voicing contrast in obstruents (stops like /b/ and fricatives like /s/). The story goes like this. When you produce a voiced obstruent, say /b/, you tend to lower your larynx and draw your arytenoid cartilages together, allowing your vocal folds to vibrate (for a very close view of vocal fold vibration, see the video at the end of this paragraph). These movements often depress the pitch of following vowels. As time goes by, your listeners may pick up on this lowering effect as a consistent correlate of voicing. Then one day, you wake up to find your voicing contrast has gone (language change is brutal, man). Your listeners panic – how are they supposed to deal with all these new homophones? In desperation, they turn to pitch as the key to distinguishing pairs of words, at which point we may say the language has become tonal. On this hypothesis, if English were to lose the contrast between /b/ and /p/, we’d expect words like “bet” to develop a low tone and words like “pet” to associate with a high tone. The scenario I have sketched may seem far-fetched, but we have very good evidence that this exact process happened in Khmu in Northern Laos, among other languages.
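
To make the hypothetical English scenario concrete, here is a small Python sketch (my own illustration of the thought experiment above, not data from Khmu): once the voicing contrast on initial obstruents collapses, the old pitch difference is reinterpreted as a low vs. high tone.

```python
# A toy sketch of tonogenesis: voiced onsets depress pitch, so when voicing is
# lost, the low/high pitch difference takes over the contrastive work.

DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def tonogenesis(word):
    """Return the word after the merger, plus the tone that rescues the contrast."""
    onset = word[0]
    tone = "L" if onset in DEVOICE else "H"    # voiced onset -> low tone on the vowel
    return DEVOICE.get(onset, onset) + word[1:], tone

print(tonogenesis("bet"))   # ('pet', 'L') -- what used to be 'bet'
print(tonogenesis("pet"))   # ('pet', 'H') -- now distinguished from old 'bet' by tone alone
```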

This post has not touched on the more exciting (in my view, anyway) phenomenon of tone sandhi, which I hope to write about in my next contribution. Meanwhile, I have prepared three take-away messages:

  1. Tone language speakers are not crazies.
  2. Tone languages are concentrated but incredibly diverse.
  3. Tones are versatile, fun to study, and well worth (future) linguists’ time to look into.

 

Further reading

1. The best introduction to tone:

Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.

2. On tonogenesis and its manifestation in Kammu:

Hombert, Jean-Marie. 1975. Towards a theory of tonogenesis: an empirical, physiologically and perceptually based account of the development of tonal contrasts in languages. University of California, Berkeley Doctoral dissertation.

Svantesson, J. & David House. 2006. Tone production, tone perception and Kammu tonogenesis. Phonology 23(2). 309.

3. Bambara floating tone:

Clements, Nick & Kevin C. Ford. 1979. Kikuyu tone shift and its synchronic consequences. Linguistic Inquiry 10. 179–210.

4. Autosegmental Phonology

Goldsmith, John A. 1976. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68.

Working with fragmentary evidence

The modern discipline of linguistics, especially historical linguistics, owes a lot to the rather more arcane field of philology, a subject which had its greatest flowering in the nineteenth century—the term is still used by some as a synonym for ‘historical linguistics’. Traditional philology dealt with European, Middle-Eastern and South Asian languages, aiming to trace their histories and thus reconstruct their prehistories. To do these things, it was first important to describe the oldest records of these languages in detail. This ‘basic’ descriptive work might sound straightforward, but, as any historical linguist can tell you, the messy nature of the evidence means that it’s anything but.

Let’s take English as an example. A philologist or historical linguist interested in mapping out developments that have taken place in the history of English needs to have a clear idea of what English was like at different points in time. The earliest period in which English was written is the Old English period—this covers a relatively long period of time, from as early as 600AD up to the year 1066 or so, and lots of change happened in this time. To work out what the language looked like at different points in this period and so what change happened, it’s obvious what we need to do: we need to take all our documents in Old English, order them by the dates they were written, interpret them all and describe how language is used in each.

This is much easier said than done. For one thing, ancient and medieval documents are very rarely dated—unlike in modern published books, there was no custom of writing the year of creation at the beginning of every codex. Some sorts of documents—particularly ‘charters’ and other legal documents—do have explicit dates, while others can be associated with particular historical figures. So one thing we can do is look at features of the language of just those texts which can be dated, and then try to date the others by comparison. One famous attempt to do this with Old English was the so-called Lichtenheld Test, named after the scholar who first made the relevant observation (Lichtenheld 1873). I won’t go into the drier linguistic details, but in simple terms this was built on the observation that a particular syntactic pattern of adjectives (that of ‘weak’ adjectives occurring without a determiner) occurred often in Beowulf, which was generally believed to be a very early text, and barely at all in the poetry of Cynewulf, a poet who can be confidently dated much later in the OE period. The obvious conclusion is that this pattern was possible in ‘early’ Old English but fell out of favour over time, and so it should be possible to date a sample of Old English by how often it uses this pattern.
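
Purely to show the mechanics, here is a toy Python sketch of how such a dating test would work. The texts and rates below are entirely invented by me for illustration; the next paragraphs explain why the real thing ran into trouble.

```python
# A toy version of the Lichtenheld-style reasoning: measure how often the
# diagnostic pattern (a weak adjective without a determiner) occurs per 1,000
# lines in datable texts, then place an undated text on that scale.
# All names and figures below are invented for illustration only.

DATED_TEXTS = {
    "hypothetical early poem": (12.0, 700),   # (rate per 1,000 lines, approximate date)
    "hypothetical mid-period poem": (6.0, 850),
    "hypothetical late poem": (1.5, 950),
}

def estimate_date(rate):
    """Date an undated text by the dated text whose rate it most resembles."""
    _, date = min(DATED_TEXTS.values(), key=lambda pair: abs(pair[0] - rate))
    return date

print(estimate_date(10.5))   # a high rate looks 'early' -> c. 700
```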

It turns out that there are two problems with this. Firstly, it just doesn’t work. The test was carried out to its fullest extent by Adriaan Barnouw (Barnouw 1902), and the datings it gives for Old English poetry just don’t match any of the other evidence very well. The second problem is that it’s circular. Beowulf was at that point widely agreed to be an especially ancient poem, and many scholars still hold this view. The problem is that much of the evidence for the idea that Beowulf is a very old text comes from the ‘fact’ that its language is very archaic—but at the same time, one of our best pieces of evidence for what ‘archaic’ Old English is like is the language of Beowulf!

Nevertheless, we might still suggest that the way this odd adjectival pattern in Old English differs from text to text is best explained by assuming that its popularity fell over time, even if the evidence doesn’t fit this picture very straightforwardly. But this leads us into another challenge faced by scholars of historical languages. The clearest observation about this pattern is that it’s never used in prose texts—it only occurs in poetry. So evidently if we’re going to describe how Old English was used differently at different times, we’re also going to have to describe how it was used differently in different genres. The problem is that what texts survive from different genres is inconsistent over time—some periods are better represented in Biblical translations, some with saints’ lives, some with different sorts of poetry, some with legal charters… (Incidentally, this problem is multiplied again by the existence of different dialects from different regions). In short, given that we don’t really have enough reliably datable material of enough different genres from every period, how can we ever confidently work out why a particular writer chose a particular linguistic expression? How can we tell whether our odd adjective pattern was used more in some texts because they were composed earlier, or whether it was a feature of poetic style that some poets simply preferred?

In short, it’s a messy business. Our surviving evidence is a tiny, scattershot selection from an unknowable—but undoubtedly vastly larger—whole.

To end on a cheerful note, however, this makes it all the more exciting that we are still making real, unqualified advances in our understanding of this material. A particularly resonant recent example is put forward in Walkden (2013), dealing with the Old English word hwæt. This is famously the first word of the poem Beowulf, traditionally translated vaguely as an interjection (‘Lo!’) and more recently in Seamus Heaney’s lyrical translation and accompanying introduction as ‘So.’—in either case, a word standing outside clausal syntax used by the poet to call for the audience’s attention. Walkden shows that these are not quite right. Hwæt does actually affect clausal word order, so it must be inside the clause after all. By collecting and comparing all the times it occurs in Old English and Old Saxon, Walkden shows that hwæt introduces exclamative clauses, rather like Modern English how in ‘how cold it is today!’, or what in ‘what a wonderful piece of news that is!’

So thanks to Walkden’s research, we can now propose a new, more accurate translation of the first sentence of this most translated of texts—How much we have heard of the might of the nation-kings in the ancient times of the Spear-Danes!

What is a word?

The concept of “word” would seem fairly central to linguistics. One of the definitions of “syntax” given by the Oxford English Dictionary is:

“The ways in which a particular word … can be arranged with other words …”.

And “morphology” is defined:

“the structure, form, or variation in form … of a word or words …”.

Semanticists talk about “word meaning”, phonologists about “word stress” and so on. This is all very well – but what is a “word”? This question, it turns out, is like most other questions in linguistics in not being answered as easily as we might like. A big part of the problem arises because of conflicts between different criteria for wordness. Take, for example, the element ’m in I’m. From a purely grammatical point of view, ignoring the sound (and writing) side of things, ’m acts like a word – shown most clearly by the fact that it can always be substituted with am with no real change in meaning: I’m playing means the same as I am playing, and so forth. (am itself shows much more word-like behaviour.) But ’m isn’t like words in other respects: it doesn’t contain a vowel, and it can’t occur on its own. Thus while am needn’t immediately follow I, ’m must:

  • OK: Am I playing? / I probably am playing
  • Not OK: ’M I playing? / I probably’m playing

Other items like ’m in English are things like the ’ll of I’ll be playing, the n’t of isn’t, hasn’t etc., the ’s in the king of France’s head and so on. These can be called “clitics”. One definition of a clitic is that it is a grammatical word but not a phonological word. Grammatically, ’m behaves like am, ’ll behaves like will and n’t behaves like not, so they can be said to be grammatical words*. But they can’t appear on their own: they must form a single phonological unit with another item, being pronounced (and written, not completely incidentally) as if they were part of it. Nor can they bear stress, as more typical words can:

  • OK: You must NOT go (emphasis on not)
  • Not OK: You mustN’T go (emphasis on n’t)

On these sound-based criteria, then, clitics don’t seem to be words. There are a lot of complications here and I’m oversimplifying some issues slightly, but it’s hopefully clear that the issue of what a word is isn’t terribly clear-cut. To make matters worse, some items seem able to be both full words and clitics – e.g. the usually doesn’t bear any stress and is pronounced quite weakly, like a clitic, but sometimes it is stressed, shown most clearly in something like I didn’t say A book, I said THE book. And there’s dispute over whether some items in some languages are clitics or just inflections.
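
Just to make the two-criteria definition explicit, here is a tiny Python sketch (my own, oversimplifying exactly as warned above): an item that is a grammatical word but not a phonological word comes out as a clitic.

```python
# A toy classifier based on the two criteria discussed above (my own sketch).

def classify(is_grammatical_word, is_phonological_word):
    if is_grammatical_word and is_phonological_word:
        return "full word"
    if is_grammatical_word and not is_phonological_word:
        return "clitic"
    if not is_grammatical_word and not is_phonological_word:
        return "affix"
    return "unclear"

print(classify(True, True))     # am: stands alone, bears stress -> full word
print(classify(True, False))    # 'm: substitutes for am but can't stand alone -> clitic
print(classify(False, False))   # plural -s: neither -> affix
```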

To conclude, then, the idea of a word is somewhat complicated. Some things behave like words in some ways but not others; some words can sometimes be substituted for things that are not words, or at least not on all criteria. If there is a moral, it is that even the most basic concepts (in linguistics, and presumably elsewhere) cannot necessarily be taken for granted.

* – Possessive ’s is a bit more complicated. There’s no full word that can be substituted for it, unless you change the order of things to get the head of the King of France, and even then the two aren’t totally equivalent. But unlike items like plural -s (e.g. in kings), it doesn’t attach to words but to whole phrases: we say the king of France’s head but not the king’s of France head. As it isn’t a word by all criteria and also isn’t an affix like plural -s, it gets lumped into the clitic category.

Further reading

Dixon, R.M.W., & Aikhenvald, Alexandra. 2003. Word: A Cross-linguistic Typology. Cambridge: Cambridge University Press. See particularly the introduction.

Universal Universal Grammar

The title is not a typo, but you’ll have to work out what it means (if anything) for yourselves! By way of introduction to this linguistic- and universal-themed ramble, have a little read through the following pretty lengthy quote from C.S. Lewis’ 1938 novel Out of the Silent Planet. In this passage, Ransom, a Cambridge philologist, comes across an alien creature on the planet Mars, or Malacandra as it is known in the book:

“A lifetime of linguistic study assured Ransom almost at once that these were articulate noises. The creature was talking. It had language. If you are not a philologist, I am afraid you must take on trust the prodigious emotional consequences of this realisation in Ransom’s mind. A new world he had already seen – but a new, an extraterrestrial, a non-human language was a different matter… The love of knowledge is a kind of madness. In the fraction of a second which it took Ransom to decide that the creature was really talking, and while he still knew that he might be facing instant death, his imagination had leaped over every fear and hope and probability of his situation to follow the dazzling project of making a Malacandrian grammar. ‘An Introduction to the Malacandrian Language’ – ‘The Lunar Verb’ – ‘A Concise Martian-English Dictionary’ … the titles flitted through his mind. And what might one not discover from the speech of a non-human race? The very form of language itself, the principle behind all possible languages, might fall into his hands.”

There are many points in this passage to talk about, but I’d like to focus on the last sentence – the principle behind all possible languages.

Noam Chomsky is famous for many things, one of which is the idea of Universal Grammar (UG). In brief, UG represents the means by which a child can constrain the types of hypotheses they make about the language(s) they are acquiring such that the grammar they acquire more or less resembles that of the older generations. When a child hears a sentence, there are an infinite number of possible ways to generate such a sentence, yet the types of grammatical rules that they hypothesise to be at work in generating the sentences of that language represent only a tiny fraction of all the infinite logical possibilities. UG was conceived as the innate knowledge that a child has which allows the child to entertain just that tiny fraction of all the logical possibilities. In short, UG makes the problem of language acquisition tractable (in the mathematical sense).

UG is thus a mathematical and logical necessity (Nowak, 2006). The existence of something that constrains the hypotheses of the language acquirer is therefore on firm conceptual ground. The question of what UG is like, what it consists in and of, however, is another matter.

The earlier Chomskyan approach to this question was that UG is innate, human-specific, language-specific, and rich in content. The current Chomskyan approach, however, is that UG is innate, human-specific, language-specific, but impoverished in content. The reason for this change is the shift to what Chomsky calls Third Factors, i.e. “principles not specific to the faculty of language” (Chomsky, 2005: 6). The exact nature of these Third Factors is currently under discussion, but the suggestion is that Third Factors include various principles of computation, which are not specific to language but which nonetheless play a role in shaping the forms that language can take. From an evolutionary perspective this seems to be desirable. Not only has language in its current form arisen in a reasonably short span of evolutionary time (at most, seven million years, when humans split from their most closely related living species, i.e. chimpanzees), but it is unlikely that language evolved in isolation from other biological and/or mental properties, i.e. language has co-evolved (see Lenneberg, 1967).

If some of these third factor principles of computation are mathematical principles, then it is possible that they are not simply non-specific to the faculty of language, but also non-specific to the human species; in fact, they’d be more like laws of nature. If that is the case, alien languages really would provide a gateway to the principle (or maybe principles) behind all possible languages. A crazy note to end on perhaps, but then the passage did say that the love of knowledge is a kind of madness …

References

Chomsky, N. (2005). Three Factors in Language Design. Linguistic Inquiry, 36(1), 1–22.

Lenneberg, E. (1967). Biological foundations of language. New York: Wiley.

Nowak, M. A. (2006). Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, MA: Belknap Press of Harvard University Press.