Language Unlimited: The Science Behind Our Most Creative Power
David Adger
All humans, but no other species, have the capacity to create and understand language. It provides structure to our thoughts, allowing us to plan, communicate, and create new ideas, without limit. Yet we have only finite experiences, and our languages have finite stores of words. Where does our linguistic creativity come from? How does the endless scope of language emerge from our limited selves?
Drawing on research from neuroscience, psychology, and linguistics, David Adger takes the reader on a journey to the hidden structure behind all we say (or sign) and understand. Along the way you'll meet children who created language out of almost nothing, and find out how new languages emerge using structures found in languages spoken continents away. David Adger will show you how the more than 7000 languages in the world appear to obey the same deep scientific laws, how to invent a language that breaks these, and how our brains go crazy when we try to learn languages that just aren't possible. You'll discover why rats are better than we are at picking up certain language patterns, why apes are far worse at others, and how artificial intelligences, such as those behind Alexa and Siri, understand language in a very un-human way.
Language Unlimited explores the many mysteries about our capacity for language and reveals the source of its endless creativity.
Year: 2019
Publisher: Oxford University Press, USA
Language: English
Pages: 272
ISBN-10: 0198828098
ISBN-13: 9780198828099
File: EPUB, 675 KB
LANGUAGE UNLIMITED

Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries

© David Adger 2019

The moral rights of the author have been asserted

First Edition published in 2019
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above

You must not circulate this work in any other form and you must impose this same condition on any acquirer

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2019939545

ISBN 978–0–19–882809–9
ebook ISBN 978–0–19–256319–4

Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.

CONTENTS

Preface
1. Creating language
2. Beyond symbols and signals
3. A sense of structure
4. The question of Psammetichus
5. Impossible patterns
6. All in the mind
7. A Law of Language
8. Botlang
9. Merge
10. Grammar and culture
Notes
Acknowledgments
Index

PREFACE

My fascination for language appeared when I was about ten years old.
I’d been reading Ursula Le Guin’s A Wizard of Earthsea, still one of my favourite books. In it, our hero, Ged, is sent to a windy isolated tower on Roke, an island in the centre of Le Guin’s world of Earthsea. The tower is the home of the Master Namer, Kurremkarmerruk, who teaches the core of the magical system of Earthsea: the true names of things. There, Ged learns name after name. Each plant and all its leaves, sepals, and stamens, each animal, and all their scales, feathers, and fangs. Kurremkarmerruk teaches his students that to work magic on something, you need to know the name of not just that thing, but all of its parts and their parts. To enchant the sea, Ged needed to know not just the name of the sea, but also the names of each gully and inlet, each reef and trench, each whirlpool, channel, shallows, and swell, down to the name of the foam that appears momentarily on a wave. I found this thought fascinating even at the age of ten. I didn’t really understand it, because it is paradoxical. How infinitesimal do you need to go before there are no more names? How particular do you need to be? A wave on the sea appears once, for a moment in time, and the foam on that wave is unique and fleeting. No language could have all the words to name every iota of existence. How could a language capture the numberless things and unending possibilities of the world? I was captivated by this question. And I still am. For although Le Guin’s Language of Making is mythical, human language does, in fact, have this almost mystical power. It can describe the infinite particularity of the world as we perceive it. Language doesn’t do this through words, giving a unique name to each individual thing. It does it through sentences, through the power to combine words, what linguists call syntax. Syntax is where the magic happens. It takes the words we use to slice up our reality, and puts them together in infinitely varied ways.
It allows me to talk about the foam I saw on a wave, the first one that tickled my bare toes on a beach in Wemyss, in Fife, on my tenth birthday. It gave Le Guin the power to put Kurremkarmerruk’s Isolate Tower into the mind’s eye of that same ten-year-old. It both captures the world as it is, and gives us the power to create new worlds. In this book, I explain how syntax gives language its infinitely creative power. The book is a dip into the sea of the syntax of human language. It is no more than a skimming of the foam on a single wave, but I hope it gives an idea of how important understanding syntax is to the broader project of understanding human language.

David Adger
London
October 2018

1
CREATING LANGUAGE

I want to begin this book by asking you to make up a sentence. It should be more than a few words long. Make one up that, say, spans at least one line on the page. Now go to your favourite search engine and put in the sentence you’ve made up, in inverted commas, so that the search engine looks for an exact match. Now hit return. Question: does your sentence exist anywhere else on the internet? I’ve tried this many times and each time, the answer is no. I’m guessing that that was your experience too. This isn’t just a side effect of using the internet either. The British National Corpus is an online collection of texts, some from newspapers, some that have been transcribed from real conversations between people speaking English. There are over 100 million words in this collection. I took the following sentence from the corpus at random, and searched for it again, to see if it appeared elsewhere in the millions of sentences in the corpus. I then did the same on Google. It’s amazing how many people leave out one or more of those essential details. There are no other examples. It seems crazy, but sentences almost never reoccur. Think about your sense of familiarity with the sentences you hear or say.
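The quoted-search experiment described above can be mimicked in miniature. The following sketch is purely illustrative: the three-sentence corpus and the helper function are invented here, while the real British National Corpus holds over 100 million words.

```python
# Toy version of the exact-match search described above: count how
# often a sentence recurs verbatim in a corpus. The corpus below is
# invented for illustration.

corpus = [
    "It's amazing how many people leave out one or more of those essential details.",
    "The cat sat on the mat.",
    "The cat sat on the mat.",  # short, formulaic sentences do recur
]

def count_exact(sentence, sentences):
    """Count verbatim occurrences, like a quoted search-engine query."""
    return sum(1 for s in sentences if s == sentence)

print(count_exact("The cat sat on the mat.", corpus))                        # 2
print(count_exact("An amoeba with tinnitus is usually perplexed.", corpus))  # 0
```

Run against a real corpus, almost any freshly invented sentence of ordinary length scores zero, which is the point of the experiment.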
None of the sentences I’ve written so far feel new or strange. You aren’t surprised when you read them. You just accept them and get on with it. This is, if you think about it, quite remarkable. These sentences are new to you, in fact perhaps new to the human race. But they don’t seem new. The fact that sentences hardly reoccur shows us that we use our language in an incredibly rich, flexible, and creative way, while barely noticing that we are doing this. Virtually every sentence we utter is novel. New to ourselves, and, quite often, new to humanity. We come up with phrases and sentences as we need to, and we make them express what we need to express. We do this with incredible ease. We don’t think about it, we just do it. We create language throughout our lives, and respond creatively to the language of others. How can we do this? How can humans, who are finite creatures, with finite experiences, use language over such an apparently limitless range? This book is an answer to that question. It is an explanation of what it is about human language that allows us to create sentences as we need them, and understand sentences we’ve never heard before. The answer has three parts. The first is that human languages are organized in a special way. This organization is unique, as far as we know, to humans. Sentences look as though they consist of words in a sequence, but that is not how the human mind understands them. We sense, instead, a structure in every sentence of every language. We cannot consciously perceive this structure, but it contours and limits everything we say, and much of what we think. Our sense of linguistic structure, like our other senses, channels particular aspects of our linguistic experience into our minds. The second part of the answer is that linguistic structure builds meaning in a hierarchical way. Words cluster together and these clusters have special properties. 
A simple sentence, like Lilly bit Anson, is a complex weave of inaudible, invisible relationships. The words bit and Anson cluster together, creating a certain meaning. Lilly connects to that cluster, adding in a different kind of meaning. Laws of Language, universal to our species, govern the ways that this happens. The final part of the answer tells us where this special structure comes from, and explains why we can use our languages with such flexibility and creativity. Throughout Nature, when life or matter is organized in a hierarchical way, we see smaller structures echoing the shape of the larger ones that contain them. We find this property of self-similarity everywhere. A fern frond contains within it smaller fronds, almost identical in shape, which in turn contain yet smaller ones. Lightning, when it forks from the sky, branches down to earth over and over, each new fork forming in the same way as higher forks, irrespective of scale. From slime mould to mountain ranges, from narwhale tusks to the spiraling of galaxies, Nature employs the same principle: larger shapes echo the structure of what they contain. I argue, in this book, that human language is also organized in this way. Phrases are built from smaller phrases and sentences from smaller sentences. Self-similarity immediately makes available an unending collection of structures to the speaker of a language. The infinite richness of languages is a side-effect of the simplest way Nature has of organizing hierarchies. These three ideas, that we have a sense of linguistic structure, that that structure is governed by Laws of Language, and that it emerges through self-similarity, provide a coherent explanation of creative powers that lie at the heart of human language. I wrote this book because I think that the three core positions it takes are deep explanations of how language works. Each of these ideas is about how our minds impose structure, of a particular sort, on our experiences of reality. 
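The hierarchical, self-similar clustering just described can be sketched as a toy data structure. This is only an informal illustration of the idea, not a formal linguistic definition; the function name and the nested-pair representation are choices made here for clarity.

```python
# A minimal sketch of hierarchical structure in a sentence: each
# phrase is a pair of smaller pieces, so larger structures echo the
# shape of the ones they contain (self-similarity).

def merge(a, b):
    """Combine two pieces of structure into a larger cluster."""
    return (a, b)

# 'bit' and 'Anson' cluster first; 'Lilly' then connects to that cluster.
clause = merge("Lilly", merge("bit", "Anson"))
print(clause)  # ('Lilly', ('bit', 'Anson'))

# Because a clause is itself a piece of structure, it can be merged
# again, without limit: a sentence inside a sentence.
bigger = merge("Anita", merge("said", clause))
print(bigger)  # ('Anita', ('said', ('Lilly', ('bit', 'Anson'))))
```

The one operation applies to its own output, so an unending collection of structures becomes available from a single rule, which is the sense in which self-similarity yields limitless sentences.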
Over recent years, however, an alternative to these ideas, with an impressive pedigree, has emerged. This alternative focusses not on how the mind imposes structure on our linguistic experiences, but rather on how we humans have very general powerful learning abilities that extract structure from experience. Language, from this perspective, is like many other aspects of human culture. It is learned from our experiences, not imposed upon them by limits of the mind. This view goes back to Darwin in his book The Descent of Man. The idea is that our minds are powerful processors of the information in our environment, and language is just one kind of information. The way that language works depends totally on what language users have heard or seen throughout their lives. This idea places an emphasis not on the limits of the mind, but on the organization of the world we experience. These two different perspectives on how the mind encounters the world are both important. This book is intended to show how the first approach is better suited to language in particular. How would language look, from a perspective where its structure emerges from our experiences? Language, Darwin said, should be thought of in the same way as all the other mental traits. Darwin gave examples of monkeys using different calls to signify different kinds of danger, and argued that this was analogous to human language, just more limited. He argued that, since dogs may understand words like ‘fetch!’ and parrots might articulate ‘Pretty Polly’, the capacity to understand and imitate words does not distinguish us from other animals. The difference between humans and animals in language, as in everything else, is a matter of degree. 
… the lower animals differ from man solely in his almost infinitely larger power of associating together the most diversified sounds and ideas; and this obviously depends on the high development of his mental powers.1 Darwin believed that humans have rich and complex language because we have highly developed, very flexible, and quite general, intellectual abilities. These allow us to pass on, augment, and refine what we do. They underpin our culture, traditions, religions, and languages. The vast range of diversity we see in culture and language is because our general mental powers are so flexible that they allow huge variation. Darwin argued that this cultural development of language augmented our ability to think and reason. More concretely, the idea is that we can understand sentences we’ve never produced because we’re powerful learners of patterns in general. We apply that talent to language. We hear sentences as we grow up, and we extract from these certain common themes. For example, we might hear certain words together over and over again, say, give Mummy the toy. We store this as a pattern, alongside give me the banana. As we develop, we generalize these into more abstract patterns, something like give SOMEONE SOMETHING, where the capitalized words stand in for lots of different things that have been heard.2 Once this general pattern is in place, we can use it to make new sentences. The structure of our language emerges from what we experience of it as we grow up, combined with very general skills we have to create and generalize patterns. The same skills we’d use in other complex activities, like learning to bake a cake, or tie shoelaces. Other animals have pattern matching abilities too, but, in Darwin’s words, their ‘mental powers’ are less developed. The reason humans are the only species with syntax, from this viewpoint, is the huge gulf between us and other animals in our ability to generalize patterns. We have more oomph. 
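The abstraction step in the pattern-learning story above, from stored phrases like give Mummy the toy to a schema like give SOMEONE SOMETHING, can be sketched in a few lines. The two-example input and the SLOT label are invented here for illustration; real proposals are far more sophisticated.

```python
# Toy version of pattern generalization: align remembered phrases
# word by word, keep what is constant, and abstract what varies
# into a placeholder slot.

heard = [
    ("give", "Mummy", "the toy"),
    ("give", "me", "the banana"),
]

def generalize(examples):
    """Keep words shared by every example; abstract the rest to SLOT."""
    pattern = []
    for words in zip(*examples):
        pattern.append(words[0] if len(set(words)) == 1 else "SLOT")
    return tuple(pattern)

def instantiate(pattern, fillers):
    """Fill each SLOT, left to right, with the next filler."""
    fillers = iter(fillers)
    return tuple(next(fillers) if w == "SLOT" else w for w in pattern)

print(generalize(heard))                                       # ('give', 'SLOT', 'SLOT')
print(instantiate(generalize(heard), ["Daddy", "the spoon"]))  # ('give', 'Daddy', 'the spoon')
```

Once the pattern is stored, it licenses new utterances the learner has never heard, which is exactly the kind of generality the experience-driven view relies on.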
This book was written to make the argument that it’s not a matter of more oomph, it’s a matter of different oomph! We are not powerful pattern learners when it comes to language. We are limited—only really able to use one kind of pattern for syntax, a hierarchical one. This is what I’ll argue in the first half of the book. I’ll also argue that patterns that depend on sequences of words are invisible to us, while syntactic hierarchy is unavailable to other animals. Though we do of course learn our languages as we grow up, what we can learn is constrained. Our limited minds are oblivious to the continuous in language, and to the sequential, and to many possible kinds of patterns that other animals can pick up on. The source of hierarchy in language is not creating patterns, storing them, and generalizing them. It’s an inner sense that can’t help but impose hierarchical structure, and it’s the self-similarity of that structure that creates limitless sentences. That, rather than highly developed mental powers, underpins our incredible ability to use language creatively. That is our different oomph. Unless you’re an editor, or a teacher, you probably don’t notice the hundreds or thousands of sentences you come across during your day. Most fly by you. In a sense you hear what they mean, without hearing what they are. But sometimes you might come across someone writing or saying something and think ‘That’s a bit odd.’ Maybe a verb is missing. Maybe the sentence starts but doesn’t end. Maybe it doesn’t mean what the speaker obviously wanted it to mean. You know certain things about the sentences of your language, though you usually don’t stop to think about it. Here are some examples. Which of them are clearly sentences of English, and which are ‘a bit odd’? Zfumkxqviestblwzzulnxdsorjj kwwapotud jjqltu ykualfzgixz, zfna ngu izyqr jgnsougdd. Sunglasses traumatize to likes that water by perplexed usually is tinnitus with amoeba an.
An amoeba with tinnitus is usually perplexed by water that likes to traumatize sunglasses. A cat with dental disease is rarely treated by a vet who is unable to cure it. If you’re a native English speaker—and probably even if you’re not—you have probably judged that the first two are not good sentences of English but the latter two are. Of these, the last one is a completely normal English sentence, while the one about the amoeba is weird, but definitely English. If I give you many more examples of this sort, your judgments about their oddness are likely to agree with mine, and with those of many other native English speakers. Not entirely, of course. There may be words that I don’t know that you do, or vice versa. Our dialects might differ in some way. I might allow the dog needs fed, while you might think this should be the dog needs feeding. You might have learned at school that prepositions are not something that we end sentences with—or not! I might not care about what they taught at school. You might be a copy editor, armed with a red pen to swiftly excise every split infinitive. I might think that split infinitives have been part of English since Chaucer, and be very happy with phrases like to swiftly excise every split infinitive. If we put these minor differences aside, however, we’d agree about most of it and we could agree to disagree about the rest. How do we all do this? Why do we mostly agree? Every speaker of every language has a store of linguistic information in their minds that allows them to create and to understand new sentences. Part of that store is a kind of mental dictionary. It grows over our lives, and sometimes shrinks as we forget words. It is a finite list of the basic bits of our language. But that’s not enough. We also need something that will allow us to combine words to express ourselves, and to understand those combinations when we hear them. Linguists call this the mental grammar. 
It is what is responsible for distinguishing between the first two examples and the latter two. As every speaker grows up, they learn words, but they also develop an ability that allows them to put words together to make sentences of their languages, to understand sentences, and to judge whether certain sentences are unremarkable or odd. But do we really need a mental grammar? Maybe all we need is the mental dictionary, and we just put words together and figure out the meanings from there. Knowing what the words mean isn’t, however, enough. The meanings of sentences depend on more than just the meanings of the words in them. Take a simple example like the following: The flea bit the woman. Using exactly the same words we can come up with a quite different meaning. The woman bit the flea. How we put words together matters for what a sentence means. Just knowing the meanings of words isn’t sufficient. There’s something more going on. These two sentences also show us that how likely one word is to follow another makes no difference to whether we judge a sentence to be English or not. A bit of quick Googling gives about a million results for the phrase ‘bit the woman’ and just eight results for ‘bit the flea’. This makes complete sense of course. We talk more about people being bitten than fleas being bitten. But the likelihood of these two sentences makes no difference as to whether they are both English or not. One is more probable than the other, but they are both perfect English. The mental grammar can’t be reduced to the mental dictionary plus meaning, or frequency. We need both the mental dictionary and the mental grammar to explain how each of us speaks and understands our language(s). The question of whether we have mental grammars or not isn’t really disputed. 
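The flea and woman examples above can be made concrete with a toy grammar that reads who-did-what-to-whom off word order alone. The five-word template and the role names are a simplification invented here, not a serious model of the mental grammar.

```python
# Toy demonstration that meaning depends on how words are put
# together, not just on which words occur: for the template
# 'The X bit the Y', roles are assigned purely by position.

def interpret(sentence):
    """Assign biter/bitten roles by position in 'The X bit the Y'."""
    words = sentence.rstrip(".").split()
    assert len(words) == 5 and words[2] == "bit"
    return {"biter": words[1], "bitten": words[4]}

print(interpret("The flea bit the woman."))  # {'biter': 'flea', 'bitten': 'woman'}
print(interpret("The woman bit the flea."))  # {'biter': 'woman', 'bitten': 'flea'}
```

Same mental dictionary, opposite meanings: the difference is carried entirely by the arrangement, which is what a mental grammar, over and above a dictionary, must account for.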
Whether we think of the human capacity for syntax as emerging from the structure of experience, or from the particular limits of our minds, we still need to say that the general rules of our particular languages are somehow stored in our minds. But we can use the nature of what our mental grammars must be like to begin to dig into the question of the source of syntax. Is it part of our nature as human beings, or is it something we pick up from the world we experience? To ask a certain kind of question in English, you use a word like what, who, where, when. Take a scenario where someone is chatting away and mentions that my cat, Lilly, had caught something in the garden. I didn’t quite hear the full details, so I ask: What did you say that Lilly had caught? Here, the word what is asking a question about the thing that Lilly caught. Although what is pronounced at the start of the sentence, it is really meant at the end. After all, we say Lilly caught something. In many other languages, like Mandarin Chinese, Japanese, or Hindi, to ask a question like this you’d just leave the word for what right next to the word for caught, giving the equivalent of You say Lilly caught what? Here’s how this looks in Mandarin Chinese: Nǐ shuō Lìlì zhuā shénme You say Lilly catch what The Chinese word shénme corresponds to English what, and it comes after the verb zhuā which means catch. That’s the normal order of words in a Chinese sentence. In a question, nothing changes. Let’s think about how to capture this difference if what we have learned of our language, our mental grammars, develops through noticing and storing patterns from our experiences. Imagine a person, Pat, whose mental grammar grows and is refined over time in this way. Pat learns through noticing, and storing, patterns. If Pat grew up speaking Mandarin Chinese, they would learn to treat question words no differently from non-question words.
If exposed to English, they would learn that a question word is placed at the start of the sentence. Pat’s mental grammar in this latter case would contain a statement something like this: If you want to ask a question about a thing, a time, a place etc., use a word like what, when, where, etc. or a phrase like which X, and place this at the start of the sentence. Pat doesn’t consciously know this, but something about Pat’s mind makes them behave according to this pattern. Pat has unconsciously learned how to make and understand certain questions in English. The word that, as we just saw in the example above, is used in English after words like say, think, believe, and so on, to introduce what is said, thought, or believed. When that introduces a sentence in this way, it is often optional in English. We see this in sentences like the following: Anita said Lilly had caught a mouse. Anita said that Lilly had caught a mouse. We can put the word that in here, or leave it out. Both sentences are perfectly fine ways to express what we mean here. It’s not at all surprising, then, that we can leave out the word that when we ask a question too. Both of these next examples are perfectly fine ways of asking the same question: What did Anita say Lilly had caught? What did Anita say that Lilly had caught? How would Pat’s mental grammar look if they were an English speaker? They would have learned that the word that is optional after the verb say, and other verbs like it, so their mental grammar would contain something like this generalization: Optionally put the word that after verbs like say, believe, think … So far, so good. Pat’s mental grammar contains these two patterns, and many more. But now let’s imagine I had had a different conversation. Imagine the discussion was about one of the neighbourhood cats catching a frog in my garden. If I want to identify the cat, I can ask: Which cat did Anita say had caught a frog? 
A superficial difference between these two questions is whether we are asking about what was being caught, or who did the catching. Given that the word that is optional after say, we expect Pat to think that the following sentence should also be fine: Which cat did Anita say that had caught a frog? For most speakers of English, though, this sentence is ‘a bit odd’. It is much better without the that. This poses a problem for Pat. They would be led to the wrong conclusion about this sentence. It is a question, using which cat, and, as expected, which cat occurs at the start. Pat, as we know, has learned a pattern which allows the word that to appear as an option after say. The sentence matches the pattern: we have taken the option to put in the word that. The trouble is that Pat, who is a good pattern learner, would think this sentence is perfectly fine. But most speakers of English think it’s not fine—it’s decidedly odd. This suggests that most speakers of English, unlike Pat, are not good pattern learners. This argument doesn’t prove that the pattern learning approach is wrong. Real English speakers could be more sophisticated than Pat is. For example, it could be that children learning English do learn patterns like Pat does, and use those patterns to predict what they will hear. They expect to hear sentences like What did you say that Lilly caught? But, the explanation goes, they never do. This means that what they experience doesn’t match up with their expectations. The way that the children deal with this is to store an exception to the pattern they have learned. In this scenario, the children’s experiences would contain enough structure to help them come to a more complex pattern. This is an interesting idea, which we can test. In 2013, two linguists, Lisa Pearl and Jon Sprouse, did a careful study of the speech directed at young children who are acquiring English.
They looked at over 11,000 real examples where parents, or other caregivers, speak to their children.3 They found that parents, when they asked their children these kinds of questions, almost always dropped the word that. They did this whether they were asking a question about what had had something done to it, or what was doing something. It made no difference. The parents never took the option to put that after words like say, believe, etc. This means that the children didn’t ever get the information they would need to learn that there was a difference between the two types of questions. If we think about this from Pat’s perspective, the syntax of English is completely mysterious. Pat’s mental grammar consists of patterns they’ve learned from their experiences. If Pearl and Sprouse are right, Pat couldn’t have learned the exception to the pattern that allows that to disappear. Pat’s experiences, which we are assuming are just the experiences children learning English have, aren’t rich enough to learn an exception to the generalization about when that appears. Adult English speakers’ mental grammars, however, clearly have that exception in them. This seems like a strong argument that English speakers don’t work like our imaginary friend Pat. They aren’t simply good pattern learners. Intriguingly, many other languages behave in the same way as English, even though these languages are not related to English or to each other. For example, Jason Kandybowicz studied the Nupe language, spoken in Nigeria, and found exactly the same pattern there. Here’s how you say What did Gana say that Musa cooked? in Nupe, with a word by word translation:4 Ké Gana gàn gànán Musa du o? What Gana say that Musa cook o The order of words here is quite similar to English. The little word o at the end marks that a question is being asked and the word gànán is the equivalent of English that. Just as in English, it is impossible to say the equivalent of Who did Gana say that cooked the meat? 
You can put the words together, but Nupe speakers don’t judge it to be a sentence of Nupe:

Zě Gana gàn gànán du nakàn o?
Who Gana say that cook meat o

There are many other languages that work similarly (Russian, Wolof, French, Arabic, and some Mayan languages).5 It is a fascinating puzzle. Speakers end up with judgments about sentences of the languages they speak that don’t depend on what they have heard as children. Certain ways of putting words together just aren’t right, even though, logically, they should be. And these quite subtle patterns appear in unrelated languages over and over again. We humans seem to be biased against our languages working in perfectly reasonable ways!

There are many puzzles just like this in the syntax of human languages. Languages do have a logic, but that logic is not one that emerges from the patterns of language we experience. The linguist’s task is to understand the special logic of language, what laws govern it, and how different languages find different ways to obey those laws. We’ll find out in the rest of the book that it’s the hierarchical structures that underlie sentences that are responsible for many of these quirks. Some are, without doubt, learned from experience, but others, as we’ve just seen, are not.

Syntax is a deep source of human creativity. You constantly come across sentences that you’ve never heard before, but you have no trouble understanding them. My favourite headline of 2017 simply said Deep in the belly of a gigantic fibreglass triceratops, eight rare bats have made a home. Beautiful, crazy, and true. Syntax gives us the capacity to describe even the weirdest aspects of our existence, and, of course, allows us to create new worlds of the imagination.

The most basic units of language, words and parts of words, are limited. We can create new ones on the fly, if we need to, but we don’t have a distinct word for every aspect of our existence, unlike the wizards of Earthsea.
The number of words speakers know is a finite store, a kind of dictionary. We can add words to that store, and we can forget words. But the sentences we can create, or understand, are unlimited in number. There is no store of them.

This book makes the argument that hierarchy and self-similarity underlie our creative use of language. On the way, we’ll find out why language is not just communication, how we can sense linguistic structure without being aware of it, and how sentences are like gestures in the mind. We’ll meet children who cannot experience the language spoken around them, and so they create new languages for themselves, languages that are taken up by communities and become fully-fledged ways of expressing thoughts. We’ll see how human languages follow particular, limited, patterns; how scientists have invented languages that break these; and how they have used these languages to test the limits of the human brain. We’ll invent languages to be spoken by imaginary beings, and imagine languages that could never be used. I’ll show you how rats can pick up on linguistic structures humans cannot perceive, and how humans can discern ones invisible to our closest evolutionary cousins, the apes. I’ll reveal the mysteries of how AIs understand sentences, and how different that is from what we do when we speak and understand language.

We’ll also do a little linguistics. You’ll learn about some of the Laws that limit how human languages work, and why these Laws can be Universal without being universal. You’ll also meet some unusual languages, from Chechen to Gaelic, Korean to Passamaquoddy, and Yoruba to Zinacantán Sign Language. I’ll gently introduce you to one of the most cutting-edge ideas in linguistics: Noam Chomsky’s proposal that one linguistic rule creates all the innumerable structures of human language.
This idea provides a foundation for understanding what underlies our ability to use language in the creative ways we do, but it also leaves open a space for understanding how that use is affected by our social nature, our identity, emotions, and personal style.

2

BEYOND SYMBOLS AND SIGNALS

In 2011, an internet entrepreneur, Fred Benenson, crowdsourced a translation of Moby Dick into emojis. The word emoji comes from two Japanese words: e, meaning picture, and moji, meaning a written symbol, like a Chinese character, a hieroglyph, or even a letter of the alphabet. Emojis, then, are intended to be similar to written words: they convey meaning through a written form.

Because emojis seem like words, people have talked about their use as the ‘fastest growing language’. The initial set of about 180 emojis has grown to over 3,000. Over five billion emojis are used every day on Facebook. Even more exciting is the idea that emojis are somehow universal. They are pictures, so we can understand them no matter what language we speak. But they are also like words, opening up the idea that emojis could be a universal way of communicating, a language for everyone. Are emojis like words? When we string them together in our electronic communication is that a universal language?

The linguists Gretchen McCulloch and Lauren Gawne have argued that emojis, as we actually use them, are far more like gestures than like words. They are a body language for our bodiless internet selves. The thumbs-up, middle finger, or eye-roll emojis directly represent gestures, but the way we use other emojis is also gesture-like. McCulloch points out that we often repeat gestures three or four times to emphasize what we’re saying, adding to speech by thumping a fist repeatedly on a table, or opening up our hands, entreatingly, in front of our bodies. The most common sequences of emojis are just repetitions: lots of smiley faces, love hearts, or thumbs-ups.
We don’t repeat most words in the same way—words have a place in our sentences, and few of them can be repeated without something going wrong.1 When we play charades, or watch mime artists, we’re using and understanding a kind of pantomime. This, McCulloch argues, is very similar to the ways that you can use strings of emojis to tell stories. This is why Benenson’s project was never going to work. It’s the equivalent of miming the whole of Moby Dick.

Emojis are not really like words then. Though we use them to communicate, that communication is more like what happens with body language. It’s interesting to think about what we’d have to do to emojis to make them work more like words. Perhaps if we enriched emojis, they could work more like a universal language?

Unlike words in spoken or written language, emojis don’t express sounds. Expressing sounds, though, is important for even something so simple as someone’s name. Benenson’s translations of characters from Moby Dick, like Ishmael or Queequeg, are impenetrable. Ishmael becomes a boat, a whale, and an OK sign, signifying what roles he plays in the novel, not the sound of his name.

It is possible, though, to develop emojis so that they could express sounds. For example, you could associate certain emojis with the sounds of the English words that those emojis make you think of. An emoji for a cat could be used for the syllable cat, so you could express catatonic, say, by using a cat emoji and an emoji of a gin and tonic. Each emoji would stand for a sound, rather than for what it pictures. This would allow us to express the sounds of names. My name, for example, could be a picture of a sun rising (day) and an old-style video cassette (vid).

The alphabetic system that English uses connects written letters to sounds, so it can easily represent how names are pronounced. The Chinese writing system works differently, and is similar to the original intent of emojis.
It involves symbols for particular words as opposed to sounds. Because of this, it also faces challenges representing names, especially those that are not native Chinese. However, the users of this system have developed sophisticated ways of writing foreign names by using Chinese characters that have sounds similar to the syllables of the name.

I was once given the Chinese name Ai Dao Fu. Surnames in Chinese come first, and usually consist of just one syllable. The Mandarin Chinese word ài, which means ‘love’, is close in sound to the first syllable of my surname (the ‘a’ in Adger). The words dào (meaning ‘way’, as in Daoism), and fú ‘happiness’ are close in sound, when put together, to David. Chinese has characters for the words ‘love’, ‘way’, and ‘happiness’, so you can use these characters with their associated sounds to write something that is pronounced a bit like my name: Ai Dao Fu—with some lovely meanings to go with the sounds.

A bit more abstract than this would be to use the cat emoji as a kind of shorthand for the sound k—often written in English as a c—that appears at the start of the word. Doing this connects the symbol to a sound, and that’s how many of us learned the alphabet. ‘A’ is for apple, ‘B’ is for book, ‘C’ is for cat, and ‘D’ is for dog.

This basic idea has appeared again and again in the history of writing systems. Pictures which are initially used to represent ideas end up being used to represent sounds. Ancient Egyptian hieroglyphs worked like this. The word for ‘mouth’ in that language was pronounced something like re, and it could be written using a picture of a mouth. This hieroglyph is actually usually used to convey the sound r.
For example, the Ancient Egyptian god Ra, the sun god, was written as the sound r above another hieroglyph that was used for a sound that comes out a bit like what happens when you try to cough and swallow at the same time—linguists write this in the International Phonetic Alphabet as ʕ, and the Ancient Egyptian word for ‘arm’ started with it.

Adopting this idea would allow us to use emojis to write sounds. We could use a cat emoji for the k sound, an arm emoji for an a sound, and a cup of tea emoji for a t sound. We could then express the word for a furry purring animal as follows.2 It’s rather hard to see how this would be an improvement on just texting though!

Much of the early hype around emojis was about how they were universal. Anyone who spoke any language would be able to understand them. This is certainly an exciting idea, but no symbol is truly universal to humankind. A symbol is just some kind of a mark made on the world that stands in for something else, usually an idea in your head. This means that there are two parts to a symbol. There’s a concept, something inside your mind. This is the meaning of the symbol. There’s also something that is external to you, something which you can see (like an emoji), hear (like a spoken word), or feel (like Braille letters). This is called the form of the symbol. So a symbol is a connection between a mind-internal meaning and a mind-external form, between something abstract, and something concrete.

When the mind-external part of the symbol, the part you can see, hear, or feel, resembles the symbol’s content, then there’s a direct psychological link between the two. In this case, the symbol is a bit like a computer icon, say one for a wastepaper basket. Symbols like this are called iconic symbols. Many emojis are iconic, like the ones we just saw for cat, arm, and tea.
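The rebus scheme just described is mechanical enough to sketch in code. Here is a minimal Python sketch, assuming an invented emoji-to-sound table built from the chapter’s three examples (cat for k, arm for a, cup of tea for t); the table and function name are illustrative, not any real standard:

```python
# A toy rebus reader: each emoji stands for a sound, not for what it
# pictures. The table is invented for illustration, following the
# chapter's examples (cat -> k, arm -> a, cup of tea -> t).
EMOJI_TO_SOUND = {
    "🐱": "k",  # cat emoji, read as the sound k
    "💪": "a",  # arm emoji, read as the sound a
    "🍵": "t",  # cup of tea emoji, read as the sound t
}

def read_rebus(emojis):
    """Decode a string of emojis into the sequence of sounds it spells."""
    return "".join(EMOJI_TO_SOUND[e] for e in emojis)

print(read_rebus("🐱💪🍵"))  # prints "kat": the furry purring animal
```

The point of the sketch is only that, once an emoji is read for its sound, what it pictures becomes irrelevant.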
There are also symbols without that direct link of resemblance—the relationship between the content and its expression can be quite abstract or even arbitrary. A love heart is like this. A love heart doesn’t look much like a real heart, and the association between the emotion of love and an internal organ is, at best, indirect.

Could we build a universal language out of iconic symbols like emojis? Since they’d be iconic, people should understand them regardless of what language they speak or culture they come from.

In the early 1990s the US Government commissioned a report on nuclear waste. It had the rather dry title ‘Expert judgment on markers to deter inadvertent human intrusion into the waste isolation pilot plant’. A team of experts was set up to figure out how to communicate to unknown people in the far distant future that a particular plot of land in New Mexico was going to be dangerously radioactive for many millennia. The standard symbol for nuclear waste may not be recognizable in millennia. There may be many radical environmental or cultural changes for humanity.

Various ideas were considered as possible symbols, including ‘menacing earthworks’, ‘forbidding blocks’, and ‘horrifying facial icons’ like Munch’s The Scream. Carl Sagan, the astronomer, physicist, and novelist, suggested a skull and crossbones. That didn’t fly. The team reported that ‘The lineage of the skull and crossbones … leads back to medieval alchemists, for whom the skull represented Adam’s skull and the crossed bones the cross that promised resurrection. It is almost certainly a Western cultural artefact’.

The fundamental problem is that all of the symbols of danger that the team could come up with simply might not mean danger to an unknown population in the future. As the report says, No symbol is certain to stay in use for the 10,000 year period.
Future societies will probably create many of their own symbols, and symbols from our time may have their meanings changed or distorted with the passage of time. Compare how the meaning of the swastika has changed in our own century, going from positive religious symbol of India to a hated emblem of the Nazis.3

The basic idea that emojis could be truly universal, then, could never get off the ground. Human symbols are always, in the end, deeply connected to our cultures.

Words are the crème de la crème of arbitrary symbols. Aside from a few cases, like animal noises—did you know that the Mandarin Chinese word for ‘cat’ is māo?—they are associated with their meanings through a socially agreed convention. They don’t resemble them in any psychological way. This is why the word for ‘dog’ is dog in English, txakur in Basque, and inja in Xhosa—the same concept expressed by quite different sounds.

We could, then, just as we do with words of spoken languages, or the signs in sign languages, link emojis to meaning using social conventions. The resemblance relationship between an emoji and its meaning would then be useful in guessing a meaning, but the meaning itself would be fixed by communities of emoji users. In fact, such conventions have arisen already through internet users interacting with each other. Sanjaya Wijeratne, while researching his PhD at Wright State University, discovered that gang members were using a gas-pump emoji in their tweets to signify marijuana. Other researchers have found that the meaning of emojis changes across cultures. In some cultures the handwave emoji is just a sign off, in others it’s a snub.4

Emojis, then, could be developed to work more like words, though if McCulloch’s gesture idea is right, it is intriguing that this is not what has happened naturally. Such an emoji language wouldn’t, however, be universal. There is more to a language than just words, though.
If someone texted you the stream of emojis you see here, what would it mean? Does it mean a cat is kicking something? Or someone is kicking a cat? Or is it about the story of Puss in Boots? Or maybe your friend wants you to get a pair of boots with a cat on them? And how would you even go about clarifying which of these you meant?

In spoken or written English (or Cantonese, or Swahili), it’s easy to express what you mean with a fair level of precision—in fact, I just did. When you are using emojis, the context might make the message clear. Perhaps you’ve already been talking about one of these topics with your friend. But in the absence of context, emojis are far too vague to work like a language.

Or think of this the other way around, in terms of expressing yourself, rather than understanding what someone else is trying to convey. How would you express, in emojis, that something has happened in the past? Or the thought that, if something were to happen, so would something else? Or that something didn’t happen? Or how would you express that every cat was kicked—not lots of cats, but every cat? These concepts, so easily expressed in a few words using a language like English or any other human language, are completely beyond the capacity of emojis, at least without changing what emojis are: a simple connection of a picture and an idea, obvious to everyone when they see it.

The failure of emojis to express past time, events not happening, possibilities of events taking place, quantifying objects, and hundreds of other purely grammatical ideas, gives us a clue to why emojis are different from a natural human language, even if we let emojis include arbitrary symbols. Emojis do communicate ideas using symbols, but human language goes beyond symbols and, as we will see, beyond communication.

To see this, let’s go back to our cat and boot emojis. Is the cat kicking or walking? Or is it being kicked? Let’s add one more emoji: We’ve got a cat, a boot, and a boy.
What message is being expressed? If you speak a language like English, you might be tempted to assume that the order of the emojis is linked to the order of the corresponding English words ‘cat’ ‘kick’ ‘boy’ in the sentence The cat kicked the boy. This would give you the meaning that the cat kicked the boy. But isn’t it more likely that the boy kicked the cat—after all, boys wear footwear, but cats generally don’t? That would be a more sensible and likely message, so maybe you should ignore the order and just go for what is the most probable message that’s being communicated.

But maybe you speak a language like Malagasy. The order of words in Malagasy is quite different to that in English. In Malagasy you’d say something like ‘kicked the cat the boy’, to express that the boy kicked the cat. The person doing the kicking comes last in the Malagasy sentence. This might tempt a speaker of Malagasy towards the meaning that the boy did the kicking. Or since in Malagasy the verb actually comes first, maybe you’d think that these emojis mean that the boy miaowed at the boot—maybe he was pretending to be a cat.

This discussion tells us something important: human languages have ways and means of expressing certain ideas—who did what to whom, for example—that go beyond iconic symbols. English can express who does what to whom partly by the order of the words it uses. Malagasy does the same, but uses a different kind of link between aspects of meaning and the order of the words. Emojis, even if we enrich them, and make them true symbols, don’t have this property. There’s no convention about how emojis express who did what to whom.

Let’s imagine we can somehow add such a convention. Let’s say that the first emoji is always the individual performing some kind of action, the second emoji represents that action, and the last emoji is an individual who gets affected by the action. This is similar to what we just saw in English.
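That proposed convention, and the Malagasy-style alternative just discussed, can each be written down as a small decoding rule. In the minimal Python sketch below (the function names, and the use of plain words as stand-ins for the emojis, are my own, purely for illustration), the very same three-symbol sequence comes out with different doers under the two conventions:

```python
# Decode a three-emoji sequence under two invented conventions.
# English-style (as just proposed): doer, action, affected.
# Malagasy-style: action first, doer last, following the chapter's
# description of Malagasy word order.

def decode_english_style(seq):
    doer, action, affected = seq
    return {"doer": doer, "action": action, "affected": affected}

def decode_malagasy_style(seq):
    action, affected, doer = seq
    return {"doer": doer, "action": action, "affected": affected}

seq = ["cat", "boot", "boy"]  # stand-ins for the cat, boot, and boy emojis

print(decode_english_style(seq)["doer"])   # prints "cat"
print(decode_malagasy_style(seq)["doer"])  # prints "boy"
```

The point is not the code itself, but that a link between order and meaning has to be fixed by convention, and different languages fix it differently.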
Would this bring us closer to how human languages actually work? We’d be adding in a new kind of convention, a kind of extended symbol. It would still be a link between meaning and form. The meaning of who did what to whom is linked to the form, the observable order that the emojis come in.

This idea of an extended symbol doesn’t really work in the way spoken or signed languages do, though. Take the simple English sentence: The boy was kicked by the cat. In this sentence, the boy comes first, but he’s not doing the kicking. So although English can express who does what to whom through one particular order, it actually has many possible orders. Emojis don’t lend themselves to this kind of complexity.

Another kind of interesting example is a sentence like: The boot filled with water. In this sentence, it’s the boot that is affected by the action, and the water that’s causing that filling up to happen, even though the words the boot come before the word water. This time it’s the particular meaning of the verb that overrides the usual conventions. These kinds of examples tell us that the link between the form of a sentence in a human language and what it means is quite subtle and indirect. We can’t make emojis into a language by just adding in some conventions about how meanings link to orders. Languages are far more sophisticated and intricate than that.

The extended symbol idea also falls foul of one of the most important properties of the sentences of human languages: words cluster together in groups and languages are exquisitely sensitive to this grouping. To see this, imagine I say to you that Anson is off to run a marathon up and down the mountains of Glen Coe in Scotland. You might say to your friend, Anson’s doctor: Wow! Can Anson run a marathon with his sprained calf? Your friend, if she likes, could reply: Yes. Anson can run a marathon with his sprained calf. If we compare these two sentences, you can see that the difference is where the word can appears.
If can appears before Anson, then the meaning is a question, not a statement. Let’s try to understand this difference in terms of symbols. It would again be a kind of extended symbol. Putting the word can, and other words like it, before Anson links to the meaning that a question is being asked. Putting it after links to the meaning that a statement is being made. The position of the word can in the sentence is the form, linked to the question or statement meaning.

We need to be a bit more precise about the position of the word can. First, it’s not just this word that has this effect. We can see this by looking at other similar cases. In these examples, I’ve put the word that shifts around in bold:

Lilly is jumping.
Is Lilly jumping?

The cat has caught a frog.
Has the cat caught a frog?

We did, in fact, arrive early.
Did we, in fact, arrive early?

There’s a particular set of words that shifts around like this in English. They are called auxiliary verbs. We can see, in each of the statements, that the auxiliary verb appears after a certain word or phrase. In the corresponding question, it appears before that word or phrase. This is quite abstract but could serve as the form to which the meaning is linked.

At first glance, then, it looks like we can understand these statement-question examples in terms of a kind of extended symbol. The form is the order of words, as opposed to just how particular words are pronounced, and the meaning is what the form can be used for, a statement or a question. When we look a little deeper, though, we see that we need to go beyond symbols to really understand what is going on in these examples. Since a symbol is a link between form and meaning, we’d expect that whenever we see the form, we get the meaning, and, whenever we want the meaning, we use the form.
For these statement-question examples, and many others, it turns out that you can get the meaning without the form, and the form without the meaning, undermining the idea that this should be thought of symbolically. I can express a question without putting the word can before Anson. A verb like ask explicitly calls for a question, as in the following sentence: I’ll ask if Anson can run a marathon with his sprained calf. What comes after ask, which is in bold, expresses a question, in fact the same question that is expressed by saying Can Anson run a marathon with his sprained calf? But the word can stays put. Instead, we find the word if at the start of the question.

Maybe there are two different extended symbols for questions then? Either we put can before Anson, or we leave it where it is and put the word if before Anson. But that isn’t sufficient. We don’t, for example, just put if at the start of a sentence in English to make a question. Otherwise the next sentence would be a perfectly good way to ask a question in English, and it’s not, though some languages do actually work like this, Scottish Gaelic, for example: If Anson can run a marathon with his sprained calf?

Similarly, in many people’s English—although not everyone’s—you can’t swap can and Anson around after the word ask. The next sentence isn’t a way of saying the same thing as I’ll ask if Anson can run a marathon with his sprained calf. It means something quite different, and would have a different punctuation: I’ll ask can Anson run a marathon with his sprained calf. This little discussion shows that you can have the same meaning with a different form.

There are also problems for the extended symbol idea the other way around. For example, there are examples where we swap around the order of can and Anson but we don’t get a question. Instead we get an even stronger statement.
Our doctor friend, who may have been administering a miracle cure to Anson, could reply to our very first question like this: Boy can Anson run a marathon with his sprained calf! This shows us that there are different forms linked to the same meaning (two ways of making a question), and the same form linking to different meanings (two meanings swapping round can and Anson). The link between form and meaning in a language like English just isn’t the same as that between form and meaning in a symbol like an emoji.

There’s one final way in which these kinds of sentence show us that human languages go beyond the symbolic. A symbol, as we’ve seen, is a link between a concept or idea and something we can see or hear. But it turns out that, in human languages, sometimes the form of sentences is actually invisible. This means that symbols, however extended or elaborated, are just insufficient as an explanation of language.

In the following sentence the word can appears twice: The person who can run fastest can win the marathon. Now, if we want to make a question of such a statement, we say: Can the person who can run fastest win the marathon? Weirdly, we’ve taken the second can and put it at the start of the sentence, not the first one.

Maybe it’s always the last can that is affected by the rule that makes questions? That would explain what happens in the next sentences:

The person who can catch the cat that can run fastest can win the marathon.
Can the person who can catch the cat that can run fastest win the marathon?

But no. It’s not the last one:

That person can win any marathon you can.
Can that person win any marathon you can?

Now it’s the first can that is placed at the start of the question. What is happening here? What is the rule of English that picks out the right can in these sentences? The best answer we have to this goes beyond the idea of symbol entirely. Think about the collection of words that can hops over to turn a statement into a question.
We can replace these words by a single word—in this case the word he, given that Anson is male. We can do this no matter how long that collection of words is. I’ve put them in bold here so it’s easy to see:

The person who can run fastest can win the marathon.
The person who can catch the cat that can run fastest can win the marathon.
He can win the marathon.

This replacement preserves the basic message that the sentence communicates as long as we know who he is being used to refer to. This shows us that these words behave as a single group. The auxiliary verb that appears after that group in a statement appears before it in a question. But there’s nothing that visibly signals the ‘groupiness’ of the group. It’s an invisible, inaudible, property of those words that they group together. There are lots of other properties that single out this same group of words. But these properties are not symbolic. They don’t involve a simple relationship between something you can directly perceive (a sound, or a written symbol) and a concept or meaning.

Many people will recognize that the rule of English at work here involves the notion of a grammatical Subject. But what exactly is a Subject in English? This is actually a pretty hard question, but here are some things it’s not. It’s not the first word or phrase in a sentence. In fact is not the Subject of: In fact, Lilly will scratch the sleepy girl. We can see this if we try to make this sentence into a question. The word will hops in front of just the word Lilly, not in fact: In fact, will Lilly scratch the sleepy girl?

The Subject is also not the person or thing that does the action in a sentence. The sleepy girl is not the ‘doer’ in either: The sleepy girl will get scratched by Lilly. or: The sleepy girl was frightened of Lilly. but the sleepy girl is the Subject of these sentences: if we make them into questions, the words will and was hop in front of the sleepy girl. Will the sleepy girl get scratched by Lilly?
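The contrast drawn above can be made concrete in code. In the minimal Python sketch below, the subject group is supplied by hand, since its ‘groupiness’ is exactly the invisible property under discussion; the word lists and function names are my own illustrative rendering, not a real parser. A rule that fronts the auxiliary following the whole subject group gets the question right, while a plausible linear rule, front the first auxiliary, does not:

```python
# Words English treats as auxiliary verbs (a partial, illustrative set).
AUXILIARIES = {"can", "will", "is", "has", "did", "was"}

def question_structural(subject_group, rest):
    """Structure-sensitive rule: front the auxiliary that immediately
    follows the whole subject group."""
    aux, *remainder = rest
    return [aux] + subject_group + remainder

def question_linear(words):
    """Linear guess: front the first auxiliary in the sentence."""
    i = next(i for i, w in enumerate(words) if w in AUXILIARIES)
    return [words[i]] + words[:i] + words[i + 1:]

subject = ["the", "person", "who", "can", "run", "fastest"]
rest = ["can", "win", "the", "marathon"]

print(" ".join(question_structural(subject, rest)))
# prints "can the person who can run fastest win the marathon" -- right

print(" ".join(question_linear(subject + rest)))
# prints "can the person who run fastest can win the marathon" -- wrong
```

The linear rule grabs the can inside the subject group; only the rule that respects the invisible grouping picks out the right one.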
Was the sleepy girl frightened of Lilly? This shows that the meaning of the words is not relevant to the idea of Subject, whether it’s what the words are being used to talk about, or what kind of role they are playing in the situation being described. The specific place of the words in the sentence—first word, second word, etc.—is also not relevant. The notion of Subject can’t be reduced to meaning, or to word order.

There are other properties of words that allow you to work out what the Subject is in English. Sometimes the number of things the Subject is used to refer to affects the shape of the verb. When the Subject is used to refer to multiple things, like Anson and Minnie, the verb in the following sentence takes the form fear. Anson and Minnie fear Lilly. But, if we change the Subject and use it to refer to one thing, the verb changes its form to fears, with a final s. Minnie fears Lilly.

We don’t see the same change in the verb when we alter the number of individuals that non-Subjects in the sentence refer to. Minnie fears Dodger and Lilly. Minnie fears Lilly. It doesn’t matter here how many people—well, cats—the words after the verb are being used to refer to. The verb doesn’t change its form. In English, the form of the verb cares about the Subject. This phenomenon, where the verb changes to track properties of the Subject, is called Agreement. We say that the verb agrees with the Subject. We see Agreement in examples like those above, and also when the verb be changes its form—in most dialects of English, you say I am, you are, and she is, and not I are, she am, and you is. In English, verbs agree with Subjects.

There are languages that allow Agreement with non-Subjects. My favourite of these is Kiowa, an endangered Native American language spoken mainly in Oklahoma, which I worked on with my colleague Daniel Harbour.5 In Kiowa, a verb will show different Agreement depending on properties of not just the Subject, but also of other phrases in the sentence.
Here’s how you say I gave a book to the man in Kiowa:

náw k’yáahîĩ kút

The Kiowa word for ‘man’ is k’yáahîĩ, and the word for ‘book’ is kút. The náw at the start means ‘I’ or ‘we’. The verb comes at the end, and its final part signifies ‘give’. The rest of that verb is the syllable yán, which signifies that the Subject is the person speaking, that the thing that’s being given away is just one thing, and that there is just a single individual receiving it. That one syllable is the part of the verb that agrees, and it agrees with everything else in the sentence, not just the Subject. We can line up Kiowa and English to make the correspondences clearer:

náw k’yáahîĩ kút
I man book I-it-him-give

If a bunch of people were giving someone two books, that syllable at the start of the verb would look completely different. It would have been mé, not yán. Kiowa Agreement gets pretty complex because so much of the sentence gets involved. Kiowa shows us that Agreement with a verb is not restricted to Subjects across languages. Languages like Kiowa don’t single out the Subject as something special. Languages like English do.

There’s an abstract property of parts of English sentences—grammatical Subject—that is central to how that language works. This abstract property is not a symbol. It is not a link between what we see or hear and a meaning. We can’t reduce it to a link between, say, the first word in a sentence and the actor in a situation. To really define what a Subject is in English, we need to look at the way that English syntax works as a whole, taking into account word order, Agreement, and many other properties of the way that English works. A Subject is a crucial part of the invisible weave of structure that makes up English sentences. This notion of Subject is not something that is detectable in the hearable or seeable form of the sentence. It is an imperceptible property.
But if a symbol is a link between a concrete form and a meaning, then the notion of Subject can’t be a symbol. It neither has visible form, nor does it signify a particular meaning, yet it is crucial for explaining how English works. Language goes beyond symbols. We’ve used emojis so far in this chapter as a kind of tool, as a way of thinking about how far simple symbols are from human language. I’ve shown you how we might augment symbols to try to capture some of the properties of actual language. In the end, even extended symbols aren’t sufficient. Abstract properties, that can’t be seen or heard, are an inescapable characteristic of how language works. Is language just communication? To answer that, we need to ask: what is communication? At first blush, we might say that communication is the exchange of information. We do use the English word ‘communication’ to talk about when information is exchanged, but we also use it to talk about expressing desires, feelings, orders, hopes, and all sorts of other aspects of our internal mental life. At least as we use the word in English, human languages seem to go beyond mere exchange of information. We don’t communicate only through language. We can communicate all sorts of things through fashion, painting, music, dance, and other cultural activities. We can also communicate through raised eyebrows, smiles and groans, and emojis. Some of our communication is intentional, some of it is inadvertent— think of those emails where you’ve cced the wrong person. Some of it is truth, some of it lies, and some of it neither. Some communication is about social status, or expectations of the moment, and much of it is unconscious. When my cat’s miaowing at me for food, and I impatiently say ‘Yes, yes. I’m getting it. Just hang on till I get the tin opener,’ do I communicate to her? She’s not a person, I’m pretty sure her miaowing isn’t a human language, and I’m pretty sure she’s no idea what I’m saying. 
In fact, she continually miaows at me in a more and more desperate fashion as I struggle to open the tin of food, so me telling her I’m opening it is definitely not being successfully communicated. Saying that language is communication doesn’t really give us much insight if we just think about what the English word ‘communication’ means. Can we do better than just trying to analyse the concept? Is there a way of understanding communication from the point of view of science? There are, in fact, scientific theories of what communication is. Communication can be understood as what happens when some information gets encoded as a signal and is transmitted to something that receives it, decodes it, and thereby ends up with the message. Language doesn’t need to be involved at all. A digital radio transmitter communicates information to a radio receiver by coding the sounds made in the studio as a digital signal. This is then sent zooming over the internet, or over digital radio networks, to your phone or laptop, which decodes it, and plays the music. Human beings communicate without language too. In the Sherlock Holmes story, the Hound of the Baskervilles, there’s a murderer living on the moors (spoiler alert!). The moors are barren and freezing and there’s nothing to eat. But luckily for the murderer, his sister works in the big manor house and is married to the butler there. The butler and the sister concoct a plan to feed the murderer—the sister has a soft heart. The butler communicates to the murderer that he can come and pick up food by holding a candle by a particular window at a particular time. The murderer communicates he’s got the message, by holding up his own candle, in return. All this ends up disastrously when the intrepid Dr Watson gets involved. This butler-murderer example is particularly instructive. 
There’s no language involved in the actual act of communication—though there probably was to set up how the communication would work—but a life-or-death message is communicated. How does this happen? It’s because both the sender of the message (the butler) and the receiver (the murderer) know what the range of messages can be: it’s safe to come and get food, or it’s not safe. There are only two possibilities: a candle at the window conveys it’s safe. No candle, it’s not. Communication happens when the butler produces a signal. This is carried by light waves through the night, to the eyes then the brain of the murderer, who is able to decode it. The act of communication has an effect on what the murderer believes about the situation: his uncertainty about whether there is food to be got at the back door is reduced. You can even lie with this incredibly simple system of communication. Imagine that someone had learned what the butler was up to, and signalled using a candle with the intention of luring the murderer to the back door to capture him. Communication would still have happened, as the murderer’s uncertainty about the situation would have been reduced. Unfortunately for him, that particular act of communication would have effectively been a lie. However, it was still communication: a meaning was got across by means of a signal. The American engineer and mathematician Claude Shannon, sometimes called the father of information theory, developed a scientific understanding of communication along these lines. At the heart of this is the idea that communication happens when the uncertainty of the receiver of the message is reduced. In our Sherlock Holmes example, the murderer has a finite set of possible messages—there are just two possible messages. Before he’s seen the signal, he doesn’t know whether coming to the back door to get food is going to be successful. After the signal, he at least thinks he knows. 
So he’s received a unit of information— what Shannon called a bit. For Shannon, communication happens when something receives units of information and a unit of information is just something that affects your certainty about the world.6 Shannon’s theory also allowed for what happens when the message is corrupted as it’s transmitted. In our example, we could imagine that the murderer might be hallucinating, and see a candle when there was none. The message—no candle at the appointed time, so it’s not safe—is not received properly because of the murderer’s hallucinations. Or perhaps a gargoyle, knocked off its perch by a Dartmoor storm, blocks the line of sight from the murderer’s hideaway, so he doesn’t see the signal. In this case the signal is given, but not received. Shannon modelled interference like this as noise in the signal, and its effect was to lower the amount of information that the receiver gets. Less is communicated. We certainly do use language to communicate in Shannon’s sense. When I’m writing this, I’m attempting to provide information to you that reduces your uncertainty about what I think about the topics in this book. You gain information, that you can then think about, ignore, criticize, laugh at, blog about, or whatever. We can, in fact, take a well developed scientific approach to communication, like Shannon’s, and say that language is used to communicate in that sense. Perhaps all of the other things we do with language which aren’t strictly communication—like me talking to my cat—are offshoots of that primary fact. In Shannon’s approach, communication has happened when a signal is transmitted that changes the receiver’s certainty about the world. This means that the receiver has to have a finite bunch of possible ways she or he thinks the world is, and all the signal does is shrink these down to a smaller bunch. 
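Shannon’s ‘bit’ can be made concrete with a little arithmetic. The sketch below is my own illustration, not from the book, and it assumes all the possibilities are equally likely: the information a signal carries is the logarithm, base two, of the possibilities before the signal, minus the logarithm of the possibilities left afterwards.

```python
import math

# A minimal sketch (an illustration, not from the book) of Shannon's
# measure of information, assuming all possibilities are equally likely.
# The bits gained from a signal are the drop in uncertainty:
# log2(possibilities before) - log2(possibilities after).
def bits_gained(possibilities_before, possibilities_after):
    return math.log2(possibilities_before) - math.log2(possibilities_after)

# The murderer's world has two possibilities (safe / not safe); the
# candle collapses them to one, so he gains exactly one bit.
print(bits_gained(2, 1))  # 1.0
```

On this picture, noise simply means fewer possibilities are ruled out than the sender intended, so fewer bits arrive than were sent.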
For the murderer, there are two possible ways the world can be (safe or not safe), and the candle signal reduces these down to one. Meaning in language doesn’t work like that, though. Sentences in language create meanings where there were none before: part of the amazingness of language is its creativity, its ability to conjure up new ideas that have never been considered before. It’s the engine of our imaginations. If I say to you The giant spider knitted me a beautiful new hat, or A purple hippo just licked my toe, I’ve not reduced your uncertainty about the world, I’ve created new concepts in your mind. I’ve created a fictional world for you. There is another objection to thinking of communication in Shannon’s sense as central to what language is. Communication is one of the things language can be used for, certainly, but just because something is used to do something, that doesn’t tell us what that something is. Use isn’t essence. Alcohol—strictly speaking, ethyl alcohol—is used to, shall we say, lubricate social situations. But it is also used to disinfect wounds or medical instruments, or to ease stress or heartache. It can be used to dissolve other chemicals to make a solution (think sloe gin), to preserve foods, and it is used in thermometers because of its low freezing point. However, although alcohol is used for all of these things, the uses don’t tell us what alcohol is. To know what alcohol is, we ask a chemist, who tells us that its chemical formula is CH3CH2OH. Alcohol has a structure, and many uses. It occurs naturally as a side effect of fermenting sugar. Certainly, human beings and some enterprising other animals, including the chimpanzees of Guinea in West Africa, have learned to use alcohol to alter the way they feel, and, we humans have learned how to make it ourselves. But to understand why alcohol has the uses it has, we need to understand it scientifically. 
For example, the reason that alcohol gets us drunk is that its chemical structure allows it to lock onto a particular kind of neural organization in our brains. When it does this, we end up with an imbalance in our neurotransmitters, and that lowers inhibitions, lowers our control over our physical actions and thought capacities, and produces the various other pleasurable and not-so-pleasurable effects of being drunk. Other aspects of alcohol’s chemical structure ensure it has a low freezing point, is inimical to bacteria, and so on. When we talk about what alcohol is (its chemical structure), and what it’s used for (lots of things), these are quite distinct things. On the one hand we have the form of alcohol, which we understand by using chemistry, and on the other hand we have the functions of alcohol, what it does and what it is used for. The structure tells us why the alcohol does what it does. Both the structure and the use of alcohol are important in understanding what it is and how it works in human societies. Language is just the same. Language is used to communicate à la Shannon or in some other way, without doubt, but it is used to do many other things too. Some of these might be thought of as side effects of its primary use as communication. Talking to my cat might be like this: I’m so used to using language to communicate that I still use it in circumstances where communication is impossible. We also use language to order our thoughts, when we speak to ourselves in our heads: planning what to do next, thinking about why the things that happened took place, considering other people’s feelings, motivations, and intentions. We use language to express our own feelings and thoughts, even when no one is around to hear them. Reams of poetry, and diaries, and academic papers have been written that were never meant to be read by anyone else than their author. I have tens of notebooks full of writing that (I hope) no one else is going to see. 
The function of that writing is not to communicate. It is to help me to think. I’m not communicating to myself, since I can’t be transferring information to me that I already have. There are at least two broad functions for language: communication, and expressing, ordering, and even creating our thoughts. We don’t really have any way of saying which is the primary use. We do, however, have ways of trying to find out what the structure of language is. Fred Benenson’s idea of translating Moby Dick into emojis worked as an art project but showed the fundamental limitations of emojis as a language. We can use symbols to communicate, but human languages go beyond symbols because they have abstract structure. While communication is one of the uses of language, we cannot identify what something is used for with what it is. To understand use, we need to understand structure. I began Chapter 1 of the book by showing you that a central use of language is the ability to respond creatively to our experiences and to use language to invent new ideas and ways of thinking. With this in mind, we can ask: what is the structure of language that allows us to use it in this way?

3

A SENSE OF STRUCTURE

Massachusetts, 2014: A marijuana dealer in Middlesex County attempts to sell some drugs to an undercover police officer. This, as you might imagine, turns out not to be a good idea. Worse for the dealer, Massachusetts has a special law that applies extra penalties to drug dealers when they are plying their trade within a hundred feet of a public park or playground. Guess where the undercover police officer had set up the sting! It’s not often that grammar comes to the rescue of criminals, but in this case the drug dealer won an appeal in the Massachusetts Appeals Court.1 His lawyer argued that the law banned him from selling within a hundred feet of a public park or playground, and he was actually within a hundred feet of a privately owned playground.
The actual phrase, public park or playground, is ambiguous between the two meanings: the word public might be taken just to restrict the meaning of park, or, of the whole phrase park or playground. It’s likely that the legislators, when they drew up this law, didn’t even notice the ambiguity, because, given what they were trying to do, it would be a pretty perverse law that allowed dealers a more lenient sentence when they were dealing drugs near a private playground, as opposed to a public one. But, irrespective of their intention, the phrase means what it means, and the dealer’s lawyer must have made a good case that the perverse interpretation was, in fact, legitimate. Unfortunately for the pot-dealer, he wasn’t so lucky on the other thirteen counts he was facing. In this example, the words public, or, park, and playground, have fixed meanings. The phrase ends up being ambiguous not because of the properties of the words it is made up out of, but because of how those words are put together—its syntax. The kind of ambiguity at play in the pot-dealer situation, where the different meanings emerge from the way the words are put together, is called syntactic ambiguity. Syntactically ambiguous phrases can be represented, a bit like the chemist’s formula for alcohol, by using diagrams, which we can sketch with brackets like this:

[public [park or playground]]        [[public park] or playground]

You can see that we have the same order of words in both of these structures, but the word public is crammed up against the word park in the right-hand structure. If we say that the meaning of the word public restricts just what it is right next to, we can explain the ambiguity of the phrase public park or playground. A common sense interpretation of the law would be that it intended the structure on the right: dealers should have extra penalties if they dealt drugs in either a public park, or any playground at all.
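The two readings can even be spelled out as a small computation. The sketch below is my own illustration (the function names and the encoding are invented, not from the legal case): each bracketing of public park or playground becomes a rule deciding whether a location is covered by the law.

```python
# Right-hand structure: [[public park] or playground]
# "public" restricts only "park", so any playground at all is covered.
def covered_right(place_kind, is_public):
    return (place_kind == "park" and is_public) or place_kind == "playground"

# Left-hand structure: [public [park or playground]]
# "public" restricts the whole disjunction, so only public places are covered.
def covered_left(place_kind, is_public):
    return is_public and place_kind in ("park", "playground")

# The dealer was near a PRIVATE playground:
print(covered_right("playground", is_public=False))  # True: covered on this reading
print(covered_left("playground", is_public=False))   # False: not covered on this reading
```

Same words, same order, different groupings, different verdicts: that difference in grouping is exactly what syntactic ambiguity amounts to.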
However, the Massachusetts Appeal Court decided that it was reasonable to interpret the law as intending the structure on the left: the pot-dealer would be penalized extra if he sold his wares within a hundred feet of something that was public, and was either a park or playground. Since the dealer was actually dealing near a private playground, his lawyer was able to successfully argue that the law didn’t apply. The word public in the structure on the left applies to parks and playgrounds equally, and this therefore means that only public playgrounds are in the scope of the law. Private playgrounds are fair game for dealers! Just as the chemical structure for alcohol explains aspects of why alcohol can be used in various ways, these syntactic structures for the phrase public park or playground can explain why that phrase has the property of ambiguity. In fact, when I first heard the story of the pot-dealer, I envisaged lots of new jobs for linguists, who could go through laws as they were written down, and disambiguate them once and for all, using structures like the ones above. I still think it’s a good idea, but a lawyer friend of mine told me that if there were no ambiguities, lawyers would have nothing to do. The marijuana dealer story shows us that phrases of languages can be ambiguous in their structure. None of the actual words in public park or playground are ambiguous in this example in the way that, say, sty is ambiguous (an eye inflammation or a place to keep pigs). This is why we conclude that the ambiguity comes from how the words are put together. The ambiguity is structural. But this, if you think about it, means that there’s something quite odd going on. It entails that a sentence or phrase doesn’t just consist of the words that we hear or read or write down (or signs, if we are using a sign language). Beyond the words, inaudible and invisible, there’s something extra that we are unconsciously sensitive to when we hear sentences. 
It’s as though we have a sixth sense, a sense of linguistic structure, that allows us to detect the ways that words can be put together. Otto Jespersen, a Danish grammarian and the author of The Philosophy of Grammar, published in 1924, writes of a child learning a language that

without any grammatical instruction, from innumerable sentences heard and understood, he will abstract some notion of their structure which is definite enough to guide him in framing sentences of his own, though it is difficult or impossible to state what that notion is except by means of technical terms like subject, verb, etc.

Jespersen put his finger on the issue in this quote. He talks about a notion of structure that guides us when we form and understand sentences. Human beings appear to have an ability to unconsciously sense what the structure of sentences is when we hear them, though this structure is abstract. This is what allows us to judge, as we saw in Chapter 1, whether sentences are unremarkable, or somehow ‘a bit odd’. Everybody has the sense of structure that gives us the ability to make such judgments, not just people who have been schooled in the grammar of their language. I’ve worked with speakers of languages that have never been codified by linguists. They certainly didn’t learn any grammar at school, their languages have never been written down, but they have just as firm a sense of structure as highly literate speakers of other languages. They will tell you quite firmly and consistently which sentences are part of their language, and which are not. They know when sentences are ambiguous, or have untoward meanings. We not only sense the structure of sentences we hear or read, our sense of structure also guides us in producing sentences. Each time we turn a thought into a sentence, we give it a particular structure, a structure which connects it with meaning—hence the ambiguity of public park or playground—and, as we will see, with sound.
Like a sculptor using their senses of touch and vision to create a sculptural form from clay, we use our sense of structure to create sentences from thought. Whoever hears or reads the sentences we construct perceives not just the sounds or letters, but also how we have created them, what invisible ties bind them together. Together with the context in which the sentence is uttered, our sense of structure allows understanding to flow. People’s sense of linguistic structure is, in some ways, not too different from their other senses. We often think of our senses as simply passively taking in information from the world. However, although we certainly perceive the world through our senses, these senses structure what we perceive. I had a bizarre experience of this a while back at a friend’s party. We were staying in a house that she had rented for the weekend, and it had a log fire. I was sitting across from the pile of wood that was to feed the fire, and I saw a face, a quite demonic face, in the woodpile. I knew consciously that it was just a collection of wood logs, red string netting, and other bits and pieces, but there was no getting away from what my brain wanted to do with it: a red demonic visage. When I got other people to sit in the same position as me, they also saw it. That illusion arose because human brains have a propensity to interpret shapes with a face-like configuration as an actual face, even if those shapes arise from how bits of wood, netting, and so on are arranged. If you see a face, your brain also has a propensity to attribute to it all the things that usually go along with faces: intention, thoughts, emotions, etc. Hence, the spookiness of the image. My sense of vision didn’t allow me to perceive sticks and wood; it created a face and that’s what I saw. The ancient Greek philosopher Epikharmos of Kos wrote that ‘only the mind sees and hears, all else is blind and deaf.’ Although poetically expressed, this is not far off of the truth. 
Similarly, when we look at an illusion, like the famous Müller-Lyer illusion, we can’t help but see the lines as being different lengths, even though when we measure them they are identical. We don’t consciously calculate aspects of the world, we just unthinkingly perceive them, and we have no conscious access to how that perception works. The philosopher Daniel Dennett has suggested that we have conscious access to the results of the processes of our minds, but we never have conscious access to the processes themselves. We don’t know what the mental processes are that make us see the two lines in the Müller-Lyer illusion as different sizes unless we learn about the psychology of vision. However, we are conscious of the result of whatever our mind is doing to make them appear so.2 Just as we don’t really have conscious access to how our sense of vision works, we don’t have conscious access to how our sense of linguistic structure works. We automatically and unthinkingly know that sentences and phrases have certain properties, without knowing how we know that. The process that assigns the structure is an unconscious one. Here’s another simple example of our sense of linguistic structure that shows this. The sentence She looked up the mountain is ambiguous, as can be seen from two quite different ways we can continue the sentence:

She looked up the mountain (and saw tiny goats climbing its flanks).

She looked up the mountain (in her compendium of mountains).

Compare the ambiguity of this sentence to the Necker Cube illusion. When you look at an image of a Necker Cube for a minute or so, it flips between appearing as though it is oriented down towards your left, or up towards your right. Like the linguistic example, it is ambiguous. The cube never has both orientations at once, or some kind of a mishmash between the two. It’s always one or the other.
This is very like our perception of the meanings of an ambiguous sentence like She looked up the mountain: the sentence can have one meaning, or the other, but never both at the same time, or a mixture between the two. The Necker Cube illusion goes away if we colour one side of the cube with an opaque tint obscuring some of the lines. That signals to our sense of vision how the image should be interpreted. We find the same kind of effect with ambiguous sentences. If we put the word desperately just after looked in She looked up the mountain, only one of the meanings is possible. The meaning which involves gazing, with cricked neck, at the goats is fine:

She looked desperately up the mountain (and saw tiny goats climbing its flanks).

The presence of desperately disambiguates the sentence so that our sense of linguistic structure only perceives one meaning, in just the same way that colouring in one side of a Necker Cube disambiguates the image. We can see this very clearly by continuing the sentence in a way that tries to force the meaning where our heroine needs to find details of the mountain in her book:

She looked desperately up the mountain (in her compendium of mountains).

This is just a weird sentence. The intuitive reason is that the word up is more closely associated with looked when it means something like ‘find information’ than it is when it means ‘perceive in an upwards direction’. But what is that ‘closer association’? There’s nothing visible or audible about it. Our sense of linguistic structure, working beyond the level of our consciousness, just tells us that these are the meanings that these sentences can have. A final, striking, example of this comes from sentences like What Anson is is silly. Like our previous examples, this sentence has two meanings. We can see these by setting up the context in two different ways:

Anson is always joking around and being an idiot. If I were asked, I’d say that what Anson is is silly.

Anson has just been appointed to the job of secretary of the new committee. But that committee doesn’t even need a secretary, so what Anson is is silly.

The two meanings are quite distinct. In the first, we just emphasize the fact of Anson himself being silly. In the second meaning, we’re not saying that Anson is silly, but that being the secretary of the committee is silly, and that Anson has that job. Anson could, in fact, be very sensible and you could still say What Anson is is silly. Not all sentences with this surface form have this ambiguity. For example, What Anita is is proud of her garden only has the meaning that Anita is proud of her garden. The meaning where the job that Anita has is proud of Anita’s garden doesn’t make sense, and so isn’t present. However, when the two meanings both make sense, the ambiguity arises. For example, in What Amelia is is important, it could be that Amelia is important, or that whatever job she has, or role she plays, is important. The sense of structure shared by native English speakers just forces us to interpret this sentence in both ways. But now look at the following sentence:

What Amelia is is important to her.

If we take her and Amelia to pick out the same person, suddenly, one of the meanings disappears: the sentence can only mean that Amelia’s job is important to her. It can’t mean that Amelia is important to herself. Just like the Necker Cube or the look desperately up examples, the ambiguity is gone. Our sense of linguistic structure is unable to provide this sentence with both meanings. If you ask yourself why, there’s no obvious answer. To provide an answer, we have to understand what the invisible structures are that underlie the ambiguity. The examples we’ve looked at already, where sentences are structurally ambiguous, are solid evidence for the existence of our sense of linguistic structure.
There’s also interesting evidence from psychological experiments that our sense of structure can’t be reduced to the meaning of sentences, or how they are pronounced. The psycholinguist Kathryn Bock carried out a series of experiments starting in the mid 1980s, to test whether the abstract structures that linguists propose are subconsciously used by people as they process sentences.3 She developed an experiment where she showed the participants pictures of people giving gifts, showing things to other people, doing things for others, and so on. Now, in English you can describe these kinds of actions in various ways. For example, you could say:

The girl is giving the book to the boy.

but you can also say:

The girl is giving the boy the book.

These sentences differ in structure, although they are close paraphrases of each other. The obvious differences are that the order of the words is not the same, and there’s an extra word to in the first sentence. Bock was interested in finding out whether this difference in structure was something people were sensitive to. To test this, she first explained to the participants in her experiment what task she wanted them to do. For example, she’d show them a picture of a girl, a boy, and a book, where the girl is giving the book to the boy. She’d say one of the sentences above as part of explaining that she wanted the participants to describe the scene. Then the participants were shown a new picture (of, say, a man throwing a stick to a dog), and asked how they’d describe what was going on. Unknown to the participants, Bock was carefully controlling whether, in describing what she wanted them to do, she used one kind of sentence structure as opposed to another. She then noted down which kind of structure the participants themselves used to describe their picture. In doing this experiment, Bock discovered something which no one had seen so clearly before.
If Bock used the first kind of sentence to set up the task, the participants would be far more likely to use that kind of sentence to describe their picture. This was true even though the experiment used a completely different scene and completely different words. What was even more striking is that the participants in the experiment had no idea they were doing this. They made the choice subconsciously. Since the meaning of the sentences they were using was quite different from the meanings of the sentences Bock had used, and since the words themselves were different, the participants must have been accessing the abstract structure. This experiment shows the sense of structure at play not only in understanding sentences, which is what we’ve seen already, but also in creating and producing them. Bock’s experiment has been done over and over again by different researchers, with many different kinds of grammatical structure, using different kinds of set-up, and doing it with speakers of different languages. The researchers always come up with the same result: people are subconsciously sensitive to the abstract structure that sentences have, and that abstract structure influences their behaviour in doing a similar task. Bock and other researchers have also shown that you can’t tie this effect down to the particular words used, for example, the word to in the sentences above: what matters is the structure, which isn’t even pronounced. The meaning and the words can be completely different, so the abstract unpronounced structure has to be somehow there. It’s something that we subconsciously, unreflectingly, impose on what we hear, just as we impose a meaning on a visual image. We have seen good evidence from sentence ambiguities and from how people process language that we humans subconsciously attribute an abstract structure to sentences, both sentences we hear and sentences we produce. How should we conceive of this? 
The way I like to think of these structures is as kinds of mental gestures. Look at one of your hands, or your face in a mirror, and make a gesture or an expression. Your hand or face has taken on a particular structure, for just a moment: your thumb is crossing your palm, or your eyebrows are raised and your lips are opened, or whatever. What structures are possible depend on the limits your anatomy puts on your hand or face. Which particular structure ends up happening depends on your intentions or reactions. A sentence is a bit like this. You intend to say something, and your mind creates a gesture. This gesture is just a particular configuration, for a moment in time, of your brain. The structure of this gesture is limited by the rules of your language. This structure is profoundly connected to what you intend to say, as we’ve already seen in the pot-dealer example. It also has an impact on how the sentence is pronounced. Certain aspects of intonation depend on the structure, as do where pauses can go, as does the order of words. The analogy with a hand gesture or facial expression isn’t perfect here. There’s only a limited set of gestures you can make with your hand or face, but there’s a vast number of sentences that any human mind will create and understand over a lifetime. We’ll see in later chapters that language works on discrete elements, but gestures are continuous. We’ll also see that our mind’s capacity for language is, in principle, infinitely more flexible than a hand’s for gesture. The kinds of structure we see in a hand, and in language are profoundly different, but what we do with structure, whether we make a gesture or say a sentence, is quite similar. I’ve been talking in terms of gestures of the mind. When I say mind, I’m just talking about the brain, but in a more abstract way. We know that the human brain must be doing something with abstract structures, as Bock’s work shows that human beings are sensitive to these. 
Most of what linguists do is abstracted away from what the brain does. We look at languages and linguistic behaviour and see what kind of understanding we can build of that, and we are a long way from connecting particular patterns in language to particular brain signals at a detailed level. However, although studies of how the linguistic abilities of human brains work are at a very early stage, there is some good evidence from brain imaging research for the kinds of structure linguists have proposed. In 2011, a Paris-based team of researchers, Christophe Pallier, Anne-Dominique Devauchelle, and Stanislas Dehaene, used functional MRI scanning technology to see if there was any particular part of the brain that got more active when processing syntactic structures.4 They did this by showing lists of words to people who had been placed in MRI machines. Some people got lists that were just unrelated words. Others got lists where two words right next to each other could be understood as a single unit. Yet others got lists where this was the case with three words, and so on. The researchers then looked at how the participants’ brains reacted. They found that particular parts of the participants’ brains increased in activity in a way that matched the increase in grammatical structure in the lists of words the participants saw. You might think that this isn’t about grammatical structure; perhaps it’s rather about meaning. To control for this, the team used an idea from Lewis Carroll’s famous poem ‘Jabberwocky’, which starts:

’Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

‘Jabberwocky’ is from Carroll’s 1871 novel, Through the Looking-Glass, and What Alice Found There. In the strange new land that Alice finds herself in, she comes across a book containing it.
After Alice reads the poem, Carroll’s novel continues: “It seems very pretty,” she said when she had finished it, “but it’s rather hard to understand!” (You see she didn’t like to confess, even to herself, that she couldn’t make it out at all.) “Somehow it seems to fill my head with ideas—only I don’t exactly know what they are! …” Alice’s problem is that the poem is full of nonsense words: brillig, mimsy, mome raths, and so on. But Alice still gets an idea of what is going on because the poem recognizably follows the structures of English. This is because, although he used nonsense words which are not English, Carroll kept in English all the special little grammatical words and parts of words that signify the structure of the language: ’twas, and, the, all, the out in outgrabe, the -s that signifies Plural on toves, borogoves, and raths, and so on. These grammatical elements, together with the word order, are what make Carroll’s poem seem like some kind of English, even though Alice has little idea about what is actually going on. In fact, if we were to change each of these little grammatical elements, I think Alice would have been completely perplexed:

Va bright iss na slimey hindep
rayn dance iss jiggle awns an grove
oolya weary ro na porcupinep
iss na little pigep machdove.

In this version, I’ve changed the forms of the grammatical words and word-parts, and, instead of nonsense words, I’ve used bits of real English. But, unless you know the original, it’s now pretty unrecognizable as a set of English sentences: it reads like some English words surrounded by random sounds. That’s quite different from Carroll’s original, where the grammatical words are responsible for holding the meaning together. This is what allowed him to write ‘Jabberwocky’, and what filled Alice’s head with ideas. The structure of sentences is held together in many languages by these little grammatical words and endings, and the Paris team took advantage of this.
As a second experiment, the team used a Jabberwocky-style list of words. Just like in Carroll’s poem, the grammar was clear, but the meaning was impossible to work out. By doing this, they hoped to find areas of the brain that were sensitive to structure and not meaning. And they did. A number of areas of the brain, working together, seemed to be particularly active in just the cases where there was grammatical sentence structure, even when there was no real content to the words in the sentences. One of these areas is low down at the front of the brain—it’s called the Inferior Frontal Gyrus. Combining the two experiments allowed the team to show that, as the brain processes linguistic structure, particular networks of connections become active: the more structure in the sentences presented to the participants, the more neural activity in the Inferior Frontal Gyrus. This work was complemented by research published in 2016 by a team led by David Poeppel and based in New York.5 Poeppel’s team used the fact that the brain has a kind of pulse, in fact many pulses. These are called brain rhythms. The neurons in our brains work as an organized system of rhythms which tune our minds to our environment. Brain rhythms are implicated in much of what we do: walking, breathing and, it turns out, language. For example, syllables in languages have an average length of about a quarter of a second. This is true no matter which language you are speaking, and it is a consequence of the rhythmic processing of language by the brain. If you try to stretch out some syllables and compress others, speech becomes much more difficult to make out. Brain rhythms can be detected by a kind of brain scanner that measures the magnetic field around the brain. What this team did was show how the rhythms of the brain get in sync with the sounds people hear.
They carefully carried out experiments, playing different kinds of sequences of sounds to people in magnetic scanners, and showing how different brain rhythms synchronize with aspects of these sounds. You can probably guess what I’m going to tell you. Certain brain rhythms synced with the abstract structure of sentences. The New York team were able to show that the syncing went beyond what could be connected to either the intonation of sentences, or the statistical frequency of aspects of sentences. This means that our brains, as we listen to sentences, get into rhythm with the abstract structure. Slightly more scarily, the team also inserted electrodes into people’s brains to find out the location of the bits of the brain where the abstract structure tracking takes place. They found that the parts of the brain that seemed to be the source of this tracking behaviour included, but weren’t limited to, the same Inferior Frontal Gyrus that the Paris team identified. These experiments, which try to localize where in the brain abstract structure happens and how it happens, are still quite limited. There’s a huge amount more to learn, and we can only really take them as indications of how the brain encodes abstract linguistic structure. But they do show quite conclusively that that’s what our brains do. There are, then, a lot of reasons to think that sentences are associated with an abstract structure. Our brains seem to be sensitive to it in particular ways, our behaviour seems to be sensitive to it as we process sentences, and the languages we speak show a great deal of evidence for this structure in the patterns they allow. We can think of abstract linguistic structure as a momentary mental gesture. A human language, like English, goes beyond symbols and beyond a means of communication. A speaker of a language has Jespersen’s ‘notion of structure’, rather than a list of symbols or a way to communicate.
This underlies both our ability to create sentences, and our sense of linguistic structure, which we use to work out the properties of the sentences we hear. It doesn’t matter what the language is that people are speaking or signing around us. Whatever it is, we impose upon it, to the extent that we are able, the kind of structure that human language has. Our job as baby language acquirers is to work out, subconsciously, what particular variety of human language we are immersed in, but we will always be using the same basic principles. The innate resource that we bring to bear in doing this is what the American linguist Noam Chomsky calls Universal Grammar. Universal Grammar is just the specialized inbuilt capacities we humans bring to bear when we are acquiring a language or languages. From Chomsky’s perspective, Universal Grammar, plus our linguistic experiences, as well as our general intellectual skills, allow us to develop Jespersen’s ‘notion of structure’ for our own language. Using this notion of structure, we are able to frame sentences of our own, and understand those of other people.6 Is there any reason to say that we have Universal Grammar, an innate and particularly human capacity, rather than just an ability to extract patterns and generalize them? Surely it would be simpler and more elegant if we didn’t have to say anything special about human beings beyond saying that we are particularly good at learning. I already provided some initial reasons, in Chapter 1, to think that there is Universal Grammar. We seem to be able to judge whether something is a sentence of our language or not even when all the evidence points to us never having had the relevant experiences. But to really get evidence, we’d have to raise a child in a situation where we could completely control the language they get to hear, which would be deeply unethical. Surprisingly, however, there are