The grass is greener… and sometimes bluer in other languages

What’s in a colour? Since it is often claimed that more than 80% of the information we take in comes through vision, the way we colour the world around us matters quite a lot. Have you ever considered that in other languages the sky might not be blue and the grass might not be green? These colour categories seem universal, but in fact they are not.

Guy Deutscher gives a fine account of the problem in his book ‘Through the Language Glass: Why the World Looks Different in Other Languages’, but here I will offer a very brief overview of the linguistic ‘colour issue’ and mention some interesting points for further analysis.

In fact, it was William Gladstone who first noticed colour differences (or at least first brought them to public attention) and set the stage for the colour debate. In 1858 he published his study of Homer, in which he asked why the ancient poet described the sea as wine-dark, honey as green, and sheep and iron as violet. Why were his skies never blue, but iron or copper? These oddities cannot be blamed on Homer being blind or colour-blind, since other ancient Greek writers (along with the authors of the Indian Vedas, the Bible and early Chinese texts) shared this way of describing the world. Gladstone conjectured that there was a universal anatomical deficiency in the ancient world which has since gradually evolved away. But evolutionary studies indicate that humans must have had the same degree of colour vision for millennia, which means that our vision is hardly different from Homer’s.

But if colour distinctions are not determined by anatomy, are they formed by language? Does language really determine or, at least, influence the way we colour the world around us, as the Sapir-Whorf hypothesis supposes?

CC Colourfeeling


We tend to think of colour names in terms of our basic eleven-colour paradigm, but this is not typical of all languages. For example, Russian has two words for blue (‘goluboy’ and ‘siniy’), distinguishing light blue from dark blue. At the same time, pink is not considered a ‘basic’ colour in Russian, but rather a very light hue of red. Polish also has two words for blue — ‘niebieski’ and ‘granatowy’ — but their semantics differs from that of Russian ‘goluboy’ and ‘siniy’. Japanese originally did not distinguish between blue and green, having only one word for both hues — ‘aoi’ (roughly, blue with a far broader range of shades than in English). Nowadays, under the influence of the European tradition, the semantics of the word has shifted towards our ‘classical’ blue, and green is now described with another word — ‘midori’. Nevertheless, the grass is still ‘aoi’ in Japanese, as is a green traffic light. Some New Guinea Highland languages have terms only for ‘dark’ and ‘light’. Hanunó’o, spoken in the Philippines, has only four basic colour words: black, white, red and green. Pirahã, spoken by an Amazonian tribe, is said to have no fixed words for colours at all: according to Dan Everett, if you show its speakers a red cup, they are likely to say “This looks like blood”.

The issue of whether the colour spectrum is carved up into categories at random, or whether there are universal constraints on where those categories form, has long been at the centre of linguistic, anthropological, psychological and philosophical debate. Some interesting discoveries have been made, the most famous being that of Berlin and Kay (1969). They treated colour cognition as an innate, physiological process and found that colour words emerge in all languages in a predictable order. They identified eleven possible basic colour categories (white, black, red, green, yellow, blue, brown, purple, pink, orange and grey) and argued that these follow a specific evolutionary pattern: black and white come first, then red, then yellow, then green, and finally blue. Researchers have looked for explanations of this pattern in nature. Red probably comes first because it is the colour of blood and of the dyes easiest to make in the wild; green and yellow are the colours of plants; and blue comes last because – with the exception of the sky – few things in nature are blue.
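To make the implicational claim concrete, here is a minimal sketch in Python (my own toy encoding of the simplified sequence given above, not Berlin and Kay’s actual survey data or methodology) that checks whether a colour-term inventory is consistent with the proposed ordering:

```python
# Rough sketch of Berlin & Kay's (1969) implicational ordering of basic colour
# terms, following the simplified sequence given in the text. The example
# inventories below are illustrative, not survey data.

STAGES = [
    {"black", "white"},                      # stage I
    {"red"},                                 # stage II
    {"yellow", "green"},                     # stages III-IV (either order)
    {"blue"},                                # stage V
    {"brown"},                               # stage VI
    {"purple", "pink", "orange", "grey"},    # stage VII
]

def consistent_with_hierarchy(inventory):
    """True if every term at a later stage is 'licensed' by all earlier stages
    being fully present, as the implicational hierarchy predicts."""
    inventory = set(inventory)
    for i, stage in enumerate(STAGES):
        if inventory & stage and not all(s <= inventory for s in STAGES[:i]):
            return False
    return True

# Hypothetical inventories:
print(consistent_with_hierarchy({"black", "white", "red"}))           # True
print(consistent_with_hierarchy({"black", "white", "red", "blue"}))   # False: blue before yellow/green
```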

Though the theory was much criticized later on, it revolutionized and revived colour studies, which had lain dormant for almost 100 years. In the last few decades a whole range of experiments has been carried out to find out whether speakers of ‘colour-deficient’ languages can see all the colours and distinguish between them.

The fact is that we all see more or less the same. If asked to pick the lighter or the darker of two colours, most of us would do it correctly. But a number of tests have shown that people can remember and sort coloured objects more easily if their language has a name for that colour. For instance, bilingual (French–Wolof) children in Senegal distinguish between red and orange faster than monolingual Wolof children: Wolof has only one word for these two hues, whereas French has two. And in some tests Russian speakers were faster at distinguishing certain shades of blue than English speakers (since, as already mentioned, Russian has two different terms for light and dark shades of blue).

It goes without saying that linguists are interested in describing the semantics of colour terms not just in one language but across a range of different languages. Yet comparing them through translation equivalents (like ‘niebieski’ = blue; ‘siniy’ = blue; ‘aoi’ = blue) simply will not do: the semantics, collocations and connotations of every word are unique.

It seems that it would make sense to look at colour categories from a cognitive perspective. The way people of different cultures set boundaries in the spectrum and distinguish between certain hues depends on how they conceptualize colours, rather than on how they perceive them. Anna Wierzbicka argues that colour concepts are bound to certain universals of human experience, such as day and night, sun, fire, vegetation, sky and earth. The number of colour terms may depend on how important it is for a particular culture to distinguish between them. For example, yellow and red hues are more relevant for southern cultures (because of sun, sand, etc.) than blue, green or black, which may all seem equally (un)important. Why invent three different words for a phenomenon conceptualized as one? Language strives for economy, hence the differences in the number of colour categories.

A curious afterthought to all this “colour debate” is that colours are used in very different ways in idioms across languages. For example, in English one argues until one is blue in the face, whereas in Russian one would definitely turn red. In English hair goes grey, whereas in Russian it goes white.
And what would you say of this short advertisement?

Green bags available in seven colours.
(Placed at Cambridge University Press bookshop)

In this context it seems to acquire far more hidden meanings. ;-)

Darwin’s ideas on the evolution of language

When Charles Darwin (eventually) published On the Origin of Species by Means of Natural Selection in 1859, one species on whose origins he remained deliberately quiet or, at most, vague was Homo sapiens, i.e. us. That humans had evolved from a ‘lower animal’ was profoundly controversial in Darwin’s time (I say was, but it remains controversial for many even now, as Richard Dawkins continually reminds his readers). One of the major difficulties lay in accounting for the differences between humans’ mental capacities and those of other animals, and one of the chief differences concerned the evolution of language.

Darwin addressed the evolution of Homo sapiens in The Descent of Man published in 1871. It’s quite a hefty work and dedicates all of about 10 pages to the evolution of language, but those pages are full of insights and observations and many of the ideas and conclusions were ahead of their time. I’ll run through some of them, but it is well worth reading the original!

Darwin made the crucial distinction between articulate language, which he said was peculiar to humans, and things such as cries, gestures, facial expressions, etc, which are found in many species besides humans. This distinction is often blurred, especially when people talk about the evolution of “communication” as opposed to the evolution of language.

It is also common to blur the distinction between speech and language, but Darwin was careful to separate the two, with language being primarily a mental capacity. Drawing on several observations, Darwin concluded that the defining aspect of language was not in “the understanding of articulate sounds”, nor in “the mere articulation”, nor even in “the mere capacity of connecting definite sounds with definite ideas”, all of which are found in some species or other. Instead, “the lower animals differ from man solely in his almost infinitely larger power of associating together the most diversified sounds and ideas; and this obviously depends on the high development of his mental powers” (the aspects of language which are uniquely human and those which are not might nowadays be referred to as the Faculty of Language in the Narrow (FLN) and Broad (FLB) senses respectively; see Hauser, Chomsky & Fitch 2002). Whatever exactly is meant by this “almost infinitely larger power” (a vast lexicon, or the principle of compositionality, maybe?), the major point here is hard to miss – language is primarily a mental faculty and its evolution is tied in with the evolution of human cognition.

Darwin makes a very interesting analogy with birdsong. Songbirds show an instinctive tendency to sing and go through a ‘babbling’ stage, but ultimately they learn the particular song of their parents. In the same vein, particular languages have to be learned but there is an instinct to learn a language in the first place. Over evolutionary time, speaking and singing would have led to modifications of the vocal organs, but Darwin pointed out that “the relation between the continued use of language and the development of the brain has no doubt been far more important.” As evidence that the brain and language are connected, Darwin observed that there are cases of “brain-disease” which specifically affect language or parts of language. In these observations, Darwin was using comparative and neurological evidence – a highly interdisciplinary undertaking!

Darwin argued that things like the human capacity for concept formation evolved from more rudimentary capacities. He showed that many animals form concepts and do so without language. Language is thus not a prerequisite for concept formation, contrary to what some argued at the time. Incidentally, the claim in some of the recent Minimalist literature that the appearance of the operation Merge (the operation used to form sets) was the breakthrough moment in the evolution of language strikes me as implausible: concept formation presumably involves the ability to form sets and determine whether an entity is a member of a set or not, and if non-human animals can do this, then Merge presumably existed before language.

There’s a nice summary of Darwin’s arguments and the history of ideas about language evolution in Fitch’s book The Evolution of Language, including an entertaining section on the intellectual battle between Darwin and the linguist Max Müller. Darwin’s ideas on language evolution are by no means the final word on the matter (to the extent that there ever can be a final word on this subject), but they do show that careful observation, interdisciplinary evidence, and the courage and perseverance to pursue ideas despite various (apparent) challenges and problems can be both fruitful and illuminating.

(Non-Darwin) References

Fitch, W. T. (2010). The Evolution of Language. Cambridge: Cambridge University Press.

Hauser, M., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science 298: 1569–1579.

The ups and downs of a temporal adverb

You may remember that back in February I wrote about that little not-so-innocent word, ‘again’. It turned out to be a tricksy linguistic nut to crack because it appears to have two meanings. Our example was the following:

Frederick opened the door again.

This can be uttered in two different contexts, giving a repetitive and restitutive meaning of ‘again’:

a. Frederick opened the door, and he had opened it before.
b. Frederick opened the door, and it had been open before.

This can be taken either as a simple case of polysemy, or as a single repetitive meaning targeting different parts of the verb’s ‘internal’ meaning:

a. again (CAUSE_Frederick (BECOME (open_the door)))
b. CAUSE_Frederick (BECOME (again (open_the door)))

Today I’ll consider another puzzling aspect of this adverb.

In my last post I may have given the impression that the interesting properties of ‘again’ only show themselves when it meets telic verbs like ‘open’ and ‘close’. Well, I’m sorry to admit that that’s not the whole story. What about these cases?

On Monday the shares rose, yesterday they fell, and today they rose again.
Yesterday the shares rose, and today they rose again.
The road widened, then narrowed, then widened again.

On the face of it, these look just like our open/close cases. There seems to be an ambiguity between the repetitive and restitutive readings – they can comfortably occur in contexts where the event is repeated, and in contexts where there is a counterdirectional movement. This is how von Stechow (1996) treats them, at any rate, suggesting an analysis like (BECOME[MORE[low]]), similar to (CAUSE[BECOME[open]]) – low and open being the end states.

 

But hang on, is there (always) an inherent endpoint for these verbs – are they telic? A classic, though not unproblematic, test of telicity is the ‘in X time / for X time’ test. Usually atelic verbs sit happily with a ‘for X time’ adjunct, while telic ones do not, preferring instead ‘in X time’. For example:

Ben read for an hour / *in an hour.
Belinda opened the window *for a second / in a second.
(NB ‘for a second’ sounds okay – but on a different reading, that Belinda opened the window and closed it again after a second, not that it took one second to open the window).

Note that telicity isn’t a property of verbs per se, at least not in isolation, but of predicates, because the arguments the verb takes affect it. We can quite happily, although rather improbably, say

Ben read Crime & Punishment in an hour.

And actually, we can frame the event of Ben’s reading Crime & Punishment in two ways, that seem to make it either telic or atelic, because this is also fine:

Ben read Crime & Punishment for an hour.

Likewise:

Julie walked around the park for 20 minutes / in 20 minutes.

With the idea of telicity under our belt, let’s return to our degree achievement verbs (like ‘widen’ and ‘cool’) and directed motion verbs (like ‘rise’ and ‘fall’). What happens when we run them through the telicity test?

The road widened for 2 metres.
?* The road widened in 2 metres.
(NB this sounds fine on the reading of ‘in 2 metres further down the road’, but that is not the one we’re interested in here)
The shares fell for 3 days.
?The shares fell in 3 days.

These examples seem happier (at least to me – what about for you?) with ‘for X time’. So they don’t have an inherent endpoint (after all, we know nothing about the endpoint, apart from its being wider or lower than the start point), can’t be analysed as (BECOME[MORE[x]]), and, on this reading, would only have a repetitive reading when combined with ‘again’.


But, wait, we just said a few paragraphs ago that these verbs also have a repetitive/restitutive ambiguity when they appear alongside ‘again’. How can we account for that? Well, I suggest that, just like ‘walk around the park’, predicates with ‘fall’, ‘widen’ and so on are themselves ambiguous, and may have telic or atelic readings. This is what Hay, Kennedy & Levin (1999) argued when they wrote that these verbs’ telicity “depends on the boundedness of the difference value” – in other words, whether there is a maximum or minimum bound, or a fixed end point, provided. We can see this by tweaking our examples slightly:

The road widened to four lanes in 2 metres.
The shares fell to rock bottom in three days.

Suddenly the ‘in X time’ adjunct is quite fine! And this is where we can get both readings for ‘again’:

The road widened to four lanes again.
a. The road widened to four lanes and it had widened to four lanes before.
b. The road widened to four lanes and it had been four lanes before.

The shares fell to rock bottom again.
a. The shares fell to rock bottom and they had fallen to rock bottom before.
b. The shares fell to rock bottom and they had been at rock bottom before.
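A toy way to picture the generalisation being argued for here (a sketch of the idea only, not Hay, Kennedy & Levin’s formal analysis; the predicate representation is my own): a degree-achievement or directed-motion predicate with no bound is atelic and supports only the repetitive reading, while supplying a bound, whether explicitly or from context, makes the restitutive reading available too.

```python
# Sketch of the claim in this post: degree-achievement / directed-motion
# predicates get a restitutive reading with 'again' only when a bound
# (an end state such as 'four lanes' or 'rock bottom') is supplied.
# Informal illustration only, not Hay, Kennedy & Levin's formalism.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ScalarPredicate:
    verb: str                     # e.g. 'widen', 'fall'
    bound: Optional[str] = None   # explicit or contextually inferred end state

    @property
    def telic(self) -> bool:
        return self.bound is not None

def again_readings(pred: ScalarPredicate) -> list[str]:
    readings = ["repetitive"]     # always available
    if pred.telic:                # a bound gives an end state to restore
        readings.append("restitutive")
    return readings

print(again_readings(ScalarPredicate("fall")))                        # ['repetitive']
print(again_readings(ScalarPredicate("fall", bound="rock bottom")))   # ['repetitive', 'restitutive']
```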

The really interesting thing is that this bound does not have to be explicit – it does not have to be stated as part of the sentence. Rather the speaker and hearer can ‘fill it in’ based on the context and their world knowledge.

The sheep fell down the cliff again.

Assuming that this is a small cliff and the sheep survives his descent, we again get two readings, presumably because the ground below implicitly provides a lower bound.

And where there is a restitutive context – one where a reversal of direction has been made explicit – there are always upper or lower bounds that allow both readings of ‘again’ (although the context might favour the restitutive one).

Yesterday the shares rose. Today they fell again.

Here, the fact that the shares fell after rising means that the shares must have stopped rising at some inferred point, and this can be taken by the speaker and hearer as the upper bound of the rising eventuality.

So there we have it, some more fun and games with ‘again’ (and the predicates it interacts with). But the real amusement lies – for the linguist at least – in looking out for real life examples that confirm – or question – the theory. How many ‘agains’ can you spot today?

References

Hay, J., Kennedy, C., & Levin, B. (1999). Scalar structure underlies telicity in “degree achievements”. In Proceedings of SALT (Vol. 9, pp. 127–144).

von Stechow, A. 1996. The different readings of wieder ‘again’: A structural account. Journal of Semantics 13: 87–138.

Several long, tedious hours in the life of a philologist

One of the things I’ve been looking at recently is a particular grammatical pattern in various languages including Middle English (i.e. English as spoken in the period 1066 to 1470-ish). Simplifying matters a bit, in older varieties of English some verbs employed have in the “perfect” construction, whereas other verbs took be:

(1) I am come, thou art gone, he is fallen …

(2) I have worked, thou hast made, she hath said …

In present-day English we basically only use have, so we use the following forms in place of those in (1):

(3) I have come, you have gone, he has fallen …

But when exactly did Middle English use have and when did it use be? The best way to answer this (the best I’ve been able to come up with at any rate) is to trawl through a great deal of text and see what patterns emerge. A body of texts put together for the purpose of trawling through to look for answers to particular questions in this way is known as a corpus. The corpus I’ve been using is the Helsinki corpus, a collection of texts up to the year 1710 – specifically the 609,000 words of texts from the period 1150-1500.

Obviously 609,000 words is a lot of words (The Lord of the Rings is about 480,000, for comparison, and my copy is 6.3cm thick in very small font). And the frequency of what I’m looking for is pretty low: as a rough estimate, there are about 6 instances of the perfect construction in every thousand words, and only about 5% of these constructions use be rather than have.
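(As a rough back-of-the-envelope calculation, 6 per thousand across 609,000 words works out at somewhere around 3,600 perfect constructions in the whole corpus, of which only 180 or so would be be-perfects.)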

Thankfully advances in modern technology (specifically, in my case, the Microsoft Word search function) mean I don’t have to read through the entire length of the corpus hoping to spot the relevant constructions on the rare occasions when they do turn up. But even with the aid of the search facility, the process is still a rather drawn out one. There are two reasons for this: firstly, the irregularity of the verb to be, and secondly, the irregularity of English spelling in the period in question.

Regarding the first, observe that be in English has multiple different forms: be, am, are, is, were, was, etc. For one thing, there are simply more forms than we find for any other verb: compare the following:

(4) I am, you are; I was, you were (different forms for different persons)

(5) I love, you love; I loved, you loved (same forms in each tense regardless of person)

For another, many of the forms of be are completely different from each other, with no shared material. Thus, whilst all the forms of love begin with the letters lov- (love, loves, loved, loving), there is no sequence of letters which is common to all the forms of be.

To make matters worse, in Middle English there were even more forms of be: art, as in thou art, was very common, and there were also forms like they weren (= they were), sindan (= they are) and he/she bið (= he/she is). To get the full picture, these all need searching for as well.

This is compounded still further by the second problem: spelling. Spelling in Middle English wasn’t standardised and there was a great deal of variation in how words were spelled. Even for a little word like is, the spellings found include is, iss, esse, ys, ysse, his, hys, hes, yes and so on and so forth. am is spelled am, eom, eam, æm, ham … All these various spellings need to be taken into account for a comprehensive survey.
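For the curious, here is the sort of quick-and-dirty script that can speed the trawling up a little. It is only a sketch under my own assumptions: the variant list is illustrative, based on the spellings mentioned above; the file name is hypothetical; and every hit still needs checking by a real person, not least because spellings like his and yes will mostly match entirely different words.

```python
import re

# Sketch: gather lines of a corpus file that contain any variant spelling of a
# form of 'be'. The list below is illustrative and incomplete; a real survey
# needs a much fuller list, and every hit must still be checked by hand.

BE_FORMS = [
    "is", "iss", "esse", "ys", "ysse", "his", "hys", "hes", "yes",   # 'is'
    "am", "eom", "eam", "æm", "ham",                                 # 'am'
    "art", "are", "beon", "ben", "be",                               # other forms
    "was", "wes", "were", "weren", "sindan", "bið",
]

pattern = re.compile(r"\b(" + "|".join(map(re.escape, BE_FORMS)) + r")\b",
                     re.IGNORECASE)

def find_candidates(path):
    """Yield (line number, line) pairs containing a possible form of 'be'."""
    with open(path, encoding="utf-8") as f:
        for n, line in enumerate(f, start=1):
            if pattern.search(line):
                yield n, line.rstrip()

# Hypothetical usage:
# for n, line in find_candidates("helsinki_1150-1500.txt"):
#     print(n, line)
```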

Some corpora may allow you to get around this sort of problem through tagging. In a tagged corpus, each word is associated with a tag which tells you what sort of word it is. The tags used vary, but some corpora specifically mark forms of be and have with their own particular codes, which makes them a lot easier to track down. Obviously, though, the corpus has to be tagged in the first place, which is a lot of work. This can be mitigated to some extent by getting a computer to do it for you, although computers aren’t 100% accurate at this sort of thing so it still needs to be checked by a real person.

After all this, what have I discovered? I’m approaching my word limit, so I’ll have to be quick, but basically the verbs in English which took be in the perfect seem to have been either “change of location” verbs like go, come and fall, or “change of state” verbs like become. This is interesting because – whilst languages which have this construction vary in how many verbs take be rather than have – it has been predicted that if any verbs take be they will include the change of location verbs, and that if the class of be verbs is any larger than that it will also include the change of state verbs. So Middle English supports that prediction.

In fact, the class of verbs which took be in Middle English is much the same as in modern French (where you say je suis allé(e) “I am gone” and not *j’ai allé “I have gone”). Might this be due to contact between English and French? Probably not, because the French spoken at the time of Middle English allowed be with a much larger set of verbs. This suggests we need to seek out a deeper explanation for the similarities, rooted in the psychology of linguistic processing.

Ultimately, then, I’ve found something out, and so all this corpus-trawling has been worth it.

Do you pronounce your ahs or your ars?


One of the most salient differences between different varieties of English—most famously between the standard, prestige varieties of North America and those of most of the rest of the English speaking world—concerns rhoticity. Even if you’ve never heard the term, or given any particular thought to the phenomenon, I can almost guarantee that you’ll recognise it.

Rhoticity has to do with a sound change which began to affect some English varieties in the fifteenth century and then had its main period of fast expansion in the eighteenth century. This sound change deleted an /r/ (by then already pronounced in different ways in different dialects) in coda position—in layman’s terms, in all positions except before a vowel (for this reason linguists also sometimes use the term ‘nonprevocalic r’ to refer to the /r/s affected by this change). It left some traces on the pronunciation of the preceding vowel, lengthening it and for some vowels also changing their quality. This sound change has affected some varieties of English, such as British RP, but not others, such as General American—the two types are then referred to as non-rhotic and rhotic varieties respectively. The difference should be clear if you compare typical GA and RP pronunciations of words like ‘sister’, ‘car’ and ‘work’:

rhoticity sample words

In each case, the GA pronunciation preserves some relatively r-like gesture in the position where historically there was an /r/: a ‘retroflex’ gesture, involving curling the tongue-tip back towards the hard palate, which is relatively similar to the GA pronunciation of /r/ as a consonant before a vowel. By contrast, the RP pronunciation has no such r-like gestures: in ‘work’, the trace of the historical /r/ is shown by the fact that the vowel is lengthened and has a different quality than it once had (as shown by the spelling, this word would once have had an ‘o’-like sound, perhaps [ɔ] or [o]); in ‘car’ the only trace is the lengthening of the vowel; and in ‘sister’, no trace remains at all.

The distribution of this sound change in different dialects is actually much more complicated than just American English vs. British English varieties. In the UK, it originally affected a series of varieties in the South East of England, the Midlands, the North of England (excluding some areas in the North West), and most of Wales—that is, the English spoken in these areas became non-rhotic. This left Scottish English, Irish English, and the varieties spoken in the West Country and some of the North West of England as rhotic varieties. However, the variety that has become most prestigious, at least in England—meaning that it has become associated with financial and political success and influence—is RP, which happens to be a non-rhotic variety. Because of its prestige, RP has exerted a lot of influence on other varieties in the UK, and as a result rhoticity has been consistently retreating into smaller and smaller regions. In some of the regions it was once completely general, such as the West Country, rhoticity is now primarily only found in the speech of older speakers, as young people are switching more to eastern, non-rhotic pronunciations.

Almost the inverse picture is found in North America. Here, immigrants from different parts of the UK brought different varieties with them: some rhotic, and some non-rhotic. Non-rhotic varieties were particularly well-established in the Southern States of the US and in New York City. However, the variety which has gained the most prestige in the US and Canada happens to be a rhotic one. As a result, the areas which preserve non-rhoticity have long been shrinking, often leaving older speakers with non-rhotic pronunciations while younger generations switch to rhotic ones.

These differences in the sociolinguistic and historical status of rhoticity in North America and the UK often make themselves felt in speakers’ creative and socially marked uses. As African American Vernacular English is a non-rhotic variety, in contrast to the prestige norm, speakers can choose to spell words in a way which indicates non-rhotic pronunciations to indicate that AAVE is the variety they speak: consider book titles like The Savvy Sistahs by Brenda Jackson, track titles like Whateva Man by Redman, or the stage name of rap artist DeAndre Cortez Way, Soulja Boy. This may be completely lost on speakers of British English varieties, for whom the non-rhotic pronunciation is the prestige norm.

Album cover by US rap artist Soulja Boy, exhibiting non-rhotic spelling <Soulja> for <Soldier>

Nevertheless, non-rhotic spellings are also used creatively by speakers to communicate social information in the non-rhotic parts of the UK. Here, however, the subtle difference is that such spellings do not point towards any specific variety spoken by the writer, and so don’t carry any particular ethnic or regional connotations: instead, they work simply by subverting the arbitrary, prestige orthographic norm. The connotations they do carry thus have more to do with social background and attitudes to education and authority than with ethnicity.


Graffiti on the London overground, exhibiting non-rhotic spelling of <neva> for <never> (taken from http://www.grafflix.co.uk/logcp8.html)

There’s lots more to say about the history and sociolinguistics of rhoticity in English, so I may return to it in a future post. For the time being, though, I’ll leave you with a hint about another interesting, related phenomenon: if you’d call something that tasted like oranges ‘orangey’, what would you call something that tasted like banana?

Instruments don’t kill people, Agents do

This is essentially a follow-up to my previous post, with a more practical focus, but it shouldn’t be necessary to read the earlier post to understand this one.

The pro-gun activists in the United States have a slogan: “guns don’t kill people, people kill people” (parodied by Welsh rap act Goldie Lookin Chain in the song Guns Don’t Kill People, Rappers Do.) The basic idea, presumably, is that guns, being inanimate objects, clearly cannot take responsibility for killing: rather, the responsibility for killing lies with people who use guns to that end. (And therefore we should, the argument goes, focus our attentions on stopping people from using guns to kill people, not on getting rid of guns themselves.) Even if we disagree with the sentiments behind this, we have no trouble understanding what is meant.

This is an interesting use of language because, from a strictly literal viewpoint, it’s undeniable that guns do kill people. Not as animate, volitional “agents”, of course, but nevertheless Guns kill people is a perfectly acceptable English sentence. And indeed, it’s quite normal for inanimate, non-volitional “instruments” to be used as subjects: there’s nothing syntactically or semantically wrong with Scissors cut paper or The knife sliced easily through the soft, white cheese.

Perhaps, we might argue, kill is different from cut or slice – it requires an animate agent as its subject. (Maybe it’s a bit like eat, which can’t take an instrument as its subject: as I pointed out in my last post, we can’t usually say The fork ate the peas to mean “someone ate the peas with the fork”.) But this is clearly false: surely nobody has any problem with The avalanche killed the skier or Trains kill people who ignore red lights at level crossings.  

No, Guns kill people is fine (strictly speaking, at any rate). But the aforementioned slogan does highlight something interesting about attitudes to language: although there’s nothing ungrammatical or unmeaningful about a sentence with an instrument as its subject, there is nevertheless a feeling that volitional agents make better subjects, and perhaps that it may even be in some sense incorrect to use an instrument as a subject when an agent would be available instead.

We see something similar in the arguments by cycling campaigners (e.g. this article) regarding the use of language in journalism relating to road collisions. Often, newspapers phrase things along the lines of A car collided with a cyclist or A lorry ran over a pedestrian. This, the cycling lobby claims, is undesirable because it appears to remove responsibility from the drivers of motor vehicles: cars and lorries do not generally run into things of their own accord, but because of actions taken by their drivers. In other words, given that in such incidents there is an agent (the driver), it is infelicitous to promote an instrument (the vehicle) to the status of subject.

Of course, in parallel with the gun case, an inanimate thing like a car or lorry is a perfectly acceptable subject of a verb like collide or run over as far as grammar or literal meaning is concerned. But the cyclists’ arguments nevertheless highlight, and indeed rest upon, an intuition that volitional agents are once again “better” subjects than instruments. Ordinary users of English have an impression that some types of construction are preferable to others, even when both are technically acceptable: an impression which links closely to what linguists have described as “thematic roles” like agent and instrument. This intuition may seem to support the linguistic analysis that agents are subjects by default, and instruments are only promoted to subject status when an agent is absent.

(In other cases the line between what is merely inappropriate and what is grammatically/semantically unacceptable becomes blurred. The article I linked to gives the example of [the cyclist] collided with a van, referring to an incident where the van was driven into the cyclist from behind. We would probably think of the cyclist here in terms of the thematic role of “patient”: he was not the principal cause of the action, didn’t bring it about on purpose and was the participant most affected by it. Is the use of a patient as a subject syntactically acceptable (as the journalist would appear to think), even if it is an undesirable phrasing, or is it just wrong in every way?)

So: even though things like thematic roles may seem like quite abstract linguistic concepts, it appears that they do have a role to play in the ways in which even non-linguists think about language – and in what is deemed advisable not merely semantically and syntactically, but socially as well.

Silent Phonology

Something you might find surprising if you delve into sign language literature is the familiarity of the terminology. When you see the word phonology you probably think about the study of sounds. You might even be shocked to discover that there is phonology for sign languages. This post will explain how the phonological terms of spoken languages can be applied to sign languages.

Spoken language phonology identifies the smallest contrastive sound units of language. In spoken languages phonemes differ in various ways (for example, place of articulation, voicing or aspiration). We know that phonemes are contrastive in a certain language when we find minimal pairs where only one of these features differs. For example, when we say the English words came and game, we know that the only difference between them is the voicing of the first consonant but this contrast is enough for them to be considered two different words. Place of articulation (PoA) is also contrastive. Game and dame both start with voiced stops, but one is velar and one is (usually) alveolar and this marks them as separate words. However, certain speakers pronounce dame with a dental stop (some Scottish accents, for example). As alveolar and dental stops are not contrastive in English, both would be considered acceptable variations of the same word. What this tells us is that not all contrasts are meaningful in all languages. We find the same situation when we look at contrastive units of sign languages.

PoA is contrastive in sign languages as well as in spoken languages. In sign languages PoAs are not places along the vocal tract but are various body parts where a sign takes place. These are called Locations. The same exact sign produced in two different Locations yields two different meanings. SEE and TELL in British Sign Language (BSL) are identical apart from their Location (from the eyes for SEE and from the lips for TELL) and this difference is what gives them separate meanings. Location is the first of five parameters that make up the phonology of signs.

The second parameter is Handshape. There are many possible handshapes and each sign language uses a certain sub-set of these as meaningful components of the language. Again, we can identify the handshapes used in a particular sign language by looking for minimal pairs. BSL, for example, has a contrast between a fist with the little finger raised (the [I] handshape) and a fist with the thumb raised (the [Ȧ] handshape). When we keep all other parameters the same and change just the handshape, two different signs are produced, for example PRAISE and CRITICISE. There are also handshapes that are contrastive in other languages but not contrastive in BSL. In American Sign Language there is a contrast between a fist made with the thumb over the fingers and a fist made with the fingers resting on the thumb. BSL does not have this distinction and use of either handshape for a sign such as EUROPE would not alter its meaning.

The orientation of the hand used in a sign is the third parameter. Orientation is the exact direction in which the handshape faces (upwards/downwards, leftwards/rightwards and towards/away from the signer). Even in gesture we can see how important hand orientation is, as we get a very different meaning if we turn the two-fingered peace sign around. In Britain this is offensive, yet this orientation may be seen as simply a variant of the same meaning in other cultures. Again, we can find minimal pairs in BSL where the only difference between signs is the orientation of the hand, for example NOW (in some varieties) and BRITISH (the former having the handshape oriented palm up and the latter palm down).

Young Bieber has no idea how offensive he is being to Brits (and not just with his singing).

The fourth parameter in sign language phonology is Movement. This parameter concerns exactly how a handshape moves in a sign. LIVE and FEEL have the same location, handshape and orientation, but the movement (repeated up and down Movement or short upwards Movement) marks them as distinct signs.

The final parameter concerns the non-manual features (NMFs) of the sign. This parameter includes facial expressions and lip patterns. There are some signs that share the same Location, Handshape, Orientation and Movement and are only differentiated by NMFs. By including English mouthing alongside the sign, we can clarify whether a sign means GARAGE or GERMANY. As well as mouthing, there are facial expressions in sign languages that distinguish between signs. For example, the signs DEPRESSED and RELIEVED are differentiated only by the facial expression displaying these two emotions. There is also an NMF that marks negation (head shakes, mouth turns down and eyebrows raise and furrow). The sign MILK with the negation NMF becomes NO-MILK. At sentence level, NMFs can also turn a plain form into a question (through raising of the eyebrows and head tilt) so ALL-RIGHT can become the question form ALL-RIGHT?
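If it helps to see the five parameters working together, here is a toy sketch (the parameter values for SEE and TELL are simplified placeholders of my own, not accurate BSL transcriptions) in which two signs count as a minimal pair when they differ in exactly one parameter:

```python
from dataclasses import dataclass

# Toy representation of a sign as a bundle of the five parameters discussed
# above. The parameter values are simplified placeholders, not accurate BSL
# transcriptions.

@dataclass(frozen=True)
class Sign:
    gloss: str
    location: str
    handshape: str
    orientation: str
    movement: str
    nmf: str

PARAMS = ("location", "handshape", "orientation", "movement", "nmf")

def differing_parameters(a: Sign, b: Sign):
    return [p for p in PARAMS if getattr(a, p) != getattr(b, p)]

def is_minimal_pair(a: Sign, b: Sign) -> bool:
    """Two signs form a minimal pair if exactly one parameter differs."""
    return len(differing_parameters(a, b)) == 1

see = Sign("SEE", location="eyes", handshape="G", orientation="palm-down",
           movement="forward", nmf="neutral")
tell = Sign("TELL", location="lips", handshape="G", orientation="palm-down",
            movement="forward", nmf="neutral")

print(is_minimal_pair(see, tell), differing_parameters(see, tell))
# True ['location']  -> Location alone distinguishes SEE from TELL
```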

These five parameters are the same across all sign languages and, like spoken language phonology, each sign language has restrictions on the way in which these parameters may combine. Certain combinations of these parameters are phonotactically illegal (for example, some Handshapes are not made in certain Orientations). Orfanidou et al. (2009) found that when they presented BSL signers with phonotactically illegal nonsense signs, signers often used phonotactic knowledge to correct them. This suggests that native signers, like native speakers, have an underlying understanding of the phonotactics of their language.

Although phonology may at first seem about as far away as possible from the study of sign languages, I hope this post has shown that spoken language terminology and concepts can be successfully applied to another language modality. If you enjoy reading about sign linguistics, have a look at BSL QED’s short linguistics notes on BSL for more.

References
Sutton-Spence, R., & Woll, B. (1999). The linguistics of British Sign Language: An introduction. Cambridge: Cambridge University Press.

Orfanidou, E., Adam, R., McQueen, J. M., & Morgan, G. (2009). Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition. Memory & Cognition, 37(3), 302–15.

Sign BSL Dictionary

BSL SignBank

 

Edited 18/03/15 to revise and clarify section on NMFs

Again again!

Again. A useful little word. Rather common. Rather uninteresting? Absolutely not! It’s kept a considerable number of linguists in work for the past 40 years. Consider this sentence.

Frederick opened the door again.

Now, what does ‘again’ add to the information conveyed here? It must be the case that Frederick had opened the door at some point before. This makes ‘again’ a presupposition trigger. The sentence it is part of does not just assert something – the proposition that Frederick opened the door – but also presupposes, or assumes, something else – that he had done it another time, and that other time was before the time that is asserted. This means that ‘again’ joins other additive particles like ‘too’ and ‘as well’, which behave in a similar way (consider ‘Frederick opened the door too’, which presupposes that someone else also opened the door).


But the fun doesn’t stop there. Have a look at these two contexts for our ‘again’ sentence:

A: Frederick opened the door. The wind blew it shut.
B: Frederick closed the door.
Frederick opened the door again.

Context A is what we’ve been thinking about already. The important thing is that Frederick had opened the door before, somehow it was shut, and now he’s doing it for a second, or nth, time. But would you agree that Context B also works as a background for our sentence? And here Frederick has not opened the door before; he’s reversing what he’s just done, restoring the door’s state of being open. For this reason, the reading in Context A is often called repetitive, and in Context B restitutive.

Perhaps you’re thinking: what’s so surprising about this? Doesn’t this just make ‘again’ like loads of polysemous words that have several related meanings? (Think of ‘newspaper’ here: I read the newspaper that my friend works at.) Well, some linguists (like Fabricius-Hansen, 2001) would agree with you. Others, noticing that in both cases there is repetition – either of the whole event of Frederick’s opening the door, or of the door’s state of being open – have tried another approach, one that has been fundamental to the development of decompositional semantics (Dowty, 1979).

The problem is that in Context B, what is repeated is not the action of opening the door (the verb ‘open’), but only part of that meaning, the end result – the state of the door’s being open. How can ‘again’ affect (or scope over, to use the technical phrase) only part of a verb’s meaning? Perhaps it’s because the verb’s meaning itself is made up of more basic building blocks. One solution (Dowty, 1979; von Stechow, 1996; Beck, 2005) is to decompose ‘open’ into CAUSE, BECOME, open (the capitals just tell us that these aren’t the same as English words, but rather semantic operators). Very informally, you then get something like this:

CAUSE_Frederick (BECOME (open_the door))

We can then drop ‘again’ in at different spots, giving us the repetitive reading (a), and restitutive reading (b) – ‘again’ scopes over what comes in the brackets to the right:

a. again (CAUSE_Frederick (BECOME (open_the door)))
b. CAUSE_Frederick (BECOME (again (open_the door)))
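To see how the two readings fall out purely from where ‘again’ attaches, here is a toy sketch (nested tuples standing in for the semantic structure; an illustration of the scope idea only, not von Stechow’s actual formal semantics):

```python
# Toy illustration of the scope idea: represent the decomposed verb meaning as
# nested structure and attach AGAIN at different points. Nested tuples stand in
# for the semantic operators; this is not von Stechow's formal semantics.

OPEN_THE_DOOR = ("CAUSE", "Frederick", ("BECOME", ("open", "the door")))

def attach_again(structure, depth):
    """Wrap AGAIN around the sub-structure found `depth` operators down."""
    if depth == 0:
        return ("AGAIN", structure)
    head, *rest = structure
    # recurse into the most deeply embedded argument
    return (head, *rest[:-1], attach_again(rest[-1], depth - 1))

repetitive = attach_again(OPEN_THE_DOOR, 0)   # AGAIN over CAUSE: whole event repeated
restitutive = attach_again(OPEN_THE_DOOR, 2)  # AGAIN over the result state only
# attach_again(OPEN_THE_DOOR, 1) gives a third attachment site, over BECOME,
# which comes up again below.

print(repetitive)
# ('AGAIN', ('CAUSE', 'Frederick', ('BECOME', ('open', 'the door'))))
print(restitutive)
# ('CAUSE', 'Frederick', ('BECOME', ('AGAIN', ('open', 'the door'))))
```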

This may seem neat, or it might strike you as like constructing a theoretical Taj Mahal to house a guinea pig. But actually it’s more appealing than that, because we can see that lots of telic verbs (that’s verbs with an inherent endpoint) work in the same way:

Mary closed the window again.
Bob locked the gate again.
Jane emptied the bucket again.
Philip painted the wall blue again.
Ted remembered the shopping list again.

Plus a host of other types of verb, that we don’t have time to get into here.

CC Yarl


One intriguing point, though, is that breaking down the verb meaning into these more basic building blocks, between which, at the semantic level, ‘again’ can nestle, opens up perhaps more possibilities than we want.

CAUSE_Frederick (again (BECOME (open_the door)))

What context would make this semantic structure true? Context A, certainly, but also C:

C: Gerry opened the door. Maureen closed it.
Frederick opened the door again.

Here the event of the door’s being opened is repeated, but not the whole event including the agent (Frederick). Do we ever get this interpretation? It’s hard to tell, because a context like Context C also entails Context B (repetition of the door’s being open), and it’s hard to disentangle our intuitions. In a study for my master’s degree, I looked into real speakers’ intuitions (not those of dodgy linguists) about such sentences and got mixed results for whether scenarios like this are acceptable:

The recital began. Sue played the piano, then Anthony read poetry, then Sue played the piano again.

What do you think?

And that’s just the start of the fascinating properties of that innocent word ‘again’. Look out for another post, where I explore ‘again’, again!

References

Beck, S. 2005. “There and Back Again: A Semantic Analysis”. Journal of Semantics 22: 3–51.

Dowty, D. 1979. Word Meaning and Montague Grammar. Dordrecht: Reidel.

Fabricius-Hansen, C. 2001. Wi(e)der and Again(st). In C. Féry and W. Sternefeld (eds), Audiatur Vox Sapientiae. A Festschrift for Arnim von Stechow. Berlin: Akademie Verlag. 101–130.

von Stechow, A. 1996. The different readings of wieder ‘again’: A structural account. Journal of Semantics 13: 87–138.

von Stechow, A. 2003. How are results represented and modified? Remarks on Jäger & Blutner’s anti-decomposition. Modifying adjuncts, 416–451.

A Secret Vice: Conlanging Tolkien-style

A secret vice – this was how J.R.R. Tolkien described his love of creating, crafting and changing his invented languages. With the popularity of his books and the modern film adaptations, the product of this vice is no longer as ‘secret’ as it once was – almost everyone will have heard of Elvish by now; some will have heard of Quenya and Sindarin; and a small number will have heard of more besides …

I started thinking about the theme of this post having read this article from the Guardian on constructed languages (or ‘conlangs’):

http://www.theguardian.com/education/2014/dec/05/star-wars-ewokese-star-trek-klingon-language?CMP=share_btn_fb

Conlangs can be used to add depth, character, culture, history among many other things, but I think that Tolkien’s invented languages are in a class apart from other famous invented languages, e.g. Klingon, Na’vi, Dothraki, Esperanto, etc.

What many people don’t know is that Tolkien’s Elvish languages weren’t ‘invented for’ the Lord of the Rings, or the Hobbit or even what was to become the Silmarillion. In fact, in many ways it is more accurate to say that these stories and legends were invented for the Elvish languages!

Tolkien’s Elvish languages began to grow at about the time of the First World War, and they continued to grow for the rest of Tolkien’s life. Tolkien gave to two of these languages, Sindarin and Quenya, the aesthetic of two of his favourite languages, Welsh and Finnish respectively. However, rather than develop comprehensive dictionaries and grammars of the Elvish languages, Tolkien approached their invention from a primarily historical and philological perspective – something that the other famous conlangs do not do to anywhere near the same extent.

Sindarin and Quenya were designed to be natural languages, i.e. languages with their own irregularities, quirks and oddities (like real-world languages) but whose peculiarities would make sense when looked at from a historical linguistic perspective. Furthermore, Sindarin and Quenya are related languages, i.e. they share a common (and invented!) ancestor. Whenever Tolkien compiled anything like a dictionary, it was more akin to an etymological dictionary or a list of primitive roots and affixes. He would build up a vocabulary using these roots and affixes, then submit the results to various phonological changes (with language contact effects, borrowings, reanalyses, etc. thrown in for good measure! Did you know that the Sindarin word heledh ‘glass’ was borrowed from Khuzdul (Dwarvish) kheled?). The result is a family of related languages and dialects.

But these languages and dialects needed speakers, and their speakers needed a history and a world in which this history could play out. Tolkien believed that language and myth were intimately related – the words of our language reflect the way we perceive the world and myths embody these perceptions and are couched in language, yielding a rich melting pot of associations. To appreciate something of what Tolkien might have felt consider the English names for the days of the week or the months of the year. Why do they have the names they do? What does this tell us about our heritage and cultural history? What does it say about what we used to think and feel about the world? Now imagine thinking like this about other words … I found out earlier this week that English lobster is from Old English lobbe+stre ‘spider(y) creature’ (incidentally, lobbe ‘spider’ provided Tolkien with the inspiration for Shelob, the giant spider from The Two Towers (or, if you’re more familiar with the films, The Return of the King)). That is the kind of philological delight Tolkien wanted Sindarin and Quenya to have, and they do (nai elyë hiruva)!

Do happiness and sadness taste like sweet and sour chicken?

The title may sound a bit weird to you; it did to me when I was invited to answer that question on a Chinese Q&A website – ‘in Chinese, why do we use the same word sour to represent the taste of vinegar and the sad feeling when you hear a touching story?’ Several similar questions can be found on that website, such as ‘why do we use up/high for something good but down/low for something bad’, or ‘why does English use in to talk about time relations’. Fortunately (or not), my current work is on semantics, specifically on metaphor, which meant I could give an answer when they turned to me. Today’s post starts from that story and goes slightly beyond it, to explore the question: when we mean ‘happy’ and ‘sad’ by saying ‘sweet’ and ‘sour’, do we really taste them in our minds?

 

CC stu_spivack


The whole story comes from the development of the so-called ‘contemporary theory of metaphor’ (henceforth CTM), which comes out of the field of cognitive semantics and is represented by Lakoff and Johnson and their book Metaphors We Live By (1980). Lakoff and Johnson are concerned with the cognitive realisation and conceptual formation of metaphor: they characterise metaphor as a mapping between two concepts in different conceptual domains, which turns ‘metaphor’ into a phenomenon at the level of concept formation. Lakoff and Johnson believe that metaphor, like a mirror, faithfully reflects our perception and cognition of the world, and that this reflection is embedded in our everyday language. The reason we use ‘up’ for happiness (e.g. ‘cheer up’) and ‘down’ for sadness (e.g. ‘his mood is low’) is not simply that we want to make our speech fancier; rather, we really do feel ‘high’ and jump ‘up’ when we are full of joy, while we lower our heads when we are disappointed. They also claim that these metaphorical mappings should be universal, since human beings should perceive such experiences in a similar way – which is also a fundamental proposal of cognitive linguistics.

The advent of CTM led to an earthquake-like shift in the field of metaphor research. Our definition of ‘metaphor’ changes drastically under their proposal that metaphor is a mapping at the conceptual level. In the traditional view, such as a Gricean account (Grice 1989), a metaphorical sentence is always non-literal, and we can always sense the deviance when we hear someone say to his lover ‘you are the cream in my coffee’. Under the framework of CTM, however, even some typically literal sentences can contain a conceptual metaphor. For instance, ‘her voice is sweet’, which sounds quite literal to most native speakers of English and to many English learners, contains the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD. (When we refer to conceptual metaphors, we use small capital letters to show that the mapping is at the level of concepts: ‘pleasurable experiences’ is the target domain of the metaphor, and ‘sweet food’ is the source domain – see Barcelona 2000 for more examples.) Pleasurable experiences can put people in a good mood, just as sweet food does. The linguistic realisation of a conceptual metaphor is called a ‘linguistic metaphor’, even though it may be classified as ‘literal’ in the traditional semantic view. Iconic conceptual metaphors identified by Lakoff and Johnson include ARGUMENT IS WAR, TIME IS SPACE, LIFE IS A JOURNEY and so on – you won’t miss them if you read any article on CTM.

Let’s go back to our sweet and sour examples, with some analyses and counterexamples. Building on CTM, a series of interpretations of ‘sweet’ and ‘sour’ sentences has been produced, making use of conceptual metaphors like PLEASURABLE EXPERIENCES ARE SWEET FOOD (Dirven 1985; Barcelona 2000), UNPLEASANT EXPERIENCES ARE SOUR OR BITTER FOOD (Barcelona 2000) and JEALOUSY IS SOUR/BITTER (Yu 1998; Buss 2000). These observations suggest that, cross-linguistically, sweetness is associated with pleasant experiences and joyful objects, while sourness is associated with the opposite. The reason for this association, as inferred from the spirit of CTM, is that the source domain and the target domain evoke similar cognitive effects. However, we will soon see that these basic conceptual metaphors cannot cater for all the possibilities that ‘sweet’ and ‘sour’ present in different languages.

Although Lakoff and Johnson claim that conceptual metaphors exist across languages and cultures, the realisation of these conceptual metaphors varies from language to language, which means the mapping may not be truly ‘universal’. Take our favourite example, ‘sweet’. In a number of languages the word ‘sweet’ is associated with nice feelings and delicate objects, for instance ‘sweet music’ and ‘sweet voice’ in English, or ‘xinli ganjue hentian’ (feeling sweet in one’s heart) and ‘tianyan miyu’ (sweet sentences and honey words) in Chinese. But an extraordinary example is found in Japanese: the Japanese equivalent ‘amai’ (sweet) can be used to describe a naive person without any knowledge, which has an obviously negative implication. This use has also been transferred to Chinese, and I was completely surprised when one of my close friends said ‘ta taitian-le’ (he is too sweet) when her intention was ‘he is so naive’. There is even a semi-formulaic popular expression in Chinese, ‘sha bai tian’ (lit. stupid, white and sweet), describing ‘a super naive, super foolish person’. The use of ‘sweet’ for naivety is clearly not part of the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD.

Another interesting example is that both English and Japanese demonstrate a (limited) use of ‘sweet’ to describe ‘a large amount’, as in ‘a sweet amount of time’ and ‘mizu ga amai’ (lit. the water is a large amount); in Chinese, however, this expression is absent. It is also difficult to cover the meaning ‘a large amount’ if we apply the conceptual metaphor PLEASURABLE EXPERIENCES ARE SWEET FOOD.

Such cross-linguistic differences lead me to question whether these associations are systematic or merely coincidental, or a combination of the two. The case above clearly shows that the use of ‘sweet’ for ‘naive’ in Chinese is a borrowing from Japanese, while in English the connection ‘naivety is sweet’ is entirely absent. At this point, we have three ways to explain the phenomenon. First, maybe we really do have a conceptual metaphor NAIVETY IS SWEET FOOD; this is difficult to argue, because cognitively we cannot directly associate naivety with sweetness, and we would also need to explain why it appears in only a limited number of languages. Second, maybe ‘naivety is sweet’ is derived from some existing conceptual metaphor which has not yet been discovered, since naivety is certainly not a pleasant experience; finding that conceptual metaphor is no less difficult, however. Third, it is a mere coincidence that Japanese uses ‘sweet’ for naivety, which would make the apparent conceptual metaphor nothing more than an accident. The use of ‘sweet’ for ‘a large amount’ in English and Japanese faces the same problem: either we find a valid conceptual metaphor that covers these expressions and explain why it is present only in some languages, or we admit that it is not a metaphor at all, even though it involves a mapping between domains.

These are the problems that challenge CTM today. Maybe humans do systematically use ‘sweet’ to represent happiness because they feel good when they encounter a sweet flavour, but until we have examined the possibilities across different languages and cultures, we cannot claim that this usage is universal, and we cannot attribute every such usage to human cognition. We should always keep in mind that cross-linguistic similarities might be only a coincidence or the result of semantic borrowing. When we use ‘sweet and sour’ to describe a mixture of happiness, unease and anxiety, it is possible that we do so only because it is a linguistic convention. Maybe we do not have a plate of sweet and sour chicken in our minds after all.

For more sweet and sour feelings, have a look at these references:

Barcelona, Antonio. 2000. ‘On the plausibility of claiming a metonymic motivation for conceptual metaphor’, in Antonio Barcelona (ed.), Metaphor and Metonymy at the Crossroads: A Cognitive Perspective (Walter de Gruyter), pp. 31–58

Buss, David M. 2000. The Dangerous Passion: Why Jealousy Is as Necessary as Love and Sex (Simon and Schuster)

Dirven, René. 1985. ‘Metaphor as a basic means for extending the lexicon’, in Wolf Paprotté and René Dirven (eds.), The Ubiquity of Metaphor: Metaphor in language and thought (John Benjamins Publishing), pp. 85–119

Grice, H. Paul. 1989. Studies in the Way of Words (Cambridge, Massachusetts: Harvard University Press)

Lakoff, George, and Mark Johnson. 1980. Metaphors We Live By (Chicago: University Of Chicago Press)

Yu, Ning. 1998. The Contemporary Theory of Metaphor: A Perspective from Chinese (John Benjamins Publishing)