A Syntax of Substance
David Adger
Year: 2013
Publisher: The MIT Press
Language: English
Pages: 203
ISBN 10: 0262518309
ISBN 13: 9780262518307
Series: Linguistic Inquiry Monographs
A Syntax of Substance

Linguistic Inquiry Monographs
Samuel Jay Keyser, general editor
A complete list of books published in the Linguistic Inquiry Monographs series appears at the back of this book.

A Syntax of Substance

David Adger

The MIT Press
Cambridge, Massachusetts
London, England

© 2013 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please email special_sales@mitpress.mit.edu or write to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

This book was set in Times Roman by Westchester Books Group. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Adger, David.
A syntax of substance / David Adger.
p. cm. — (Linguistic Inquiry Monographs)
Includes bibliographical references and index.
ISBN 978-0-262-01861-6 (alk. paper) — ISBN 978-0-262-51830-7 (pbk. : alk. paper)
1. Phrase structure grammar. 2. Grammar, Comparative and general—Syntax. 3. Semantics. I. Title.
P158.3.A34 2013
415—dc23
2012020867

Contents

Series Foreword
Preface

Chapter 1  Introduction
Chapter 2  Labels and Structures
  2.1 Introduction
  2.2 The Specifier Problem
  2.3 Diagnosis: The Problem Is Heads, Not Labels
  2.4 Conclusion
Chapter 3  Syntactic Interpretation
  3.1 Introduction
  3.2 I-Complements and I-Specifiers
  3.3 Labeled Structures and the Impossibility of Roll-up Derivations
  3.4 Semantic Interpretation
  3.5 Linearization
  3.6 Conclusion
Chapter 4  Puzzles in the Syntax of Relational Nominals
  4.1 A Settled View
  4.2 Optionality of “Arguments” of Relational Nominals
  4.3 Relationality in Functional, Not Lexical, Structure
  4.4 Conclusion
Chapter 5  The PP Peripherality Generalization
  5.1 Introduction
  5.2 PP Complements
  5.3 Head-Initial Languages
  5.4 Determiners and Possessives
  5.5 Conclusion
Chapter 6  The Etiology of the PP Argument
  6.1 Introduction
  6.2 Analyzing PP Peripherality
  6.3 Bound-Pronoun Interpretations
  6.4 Variable-Order PPs
  6.5 PP Peripherality Redux
  6.6 Head-Final Languages
  6.7 Conclusion
Chapter 7  Conclusion
Notes
References
Index

Series Foreword

We are pleased to present the sixty-fourth in the series Linguistic Inquiry Monographs. These monographs present new and original research beyond the scope of the article. We hope they will benefit our field by bringing to it perspectives that will stimulate further research and insight.

Originally published in limited edition, the Linguistic Inquiry Monographs are now more widely available. This change is due to the great interest engendered by the series and by the needs of a growing readership. The editors thank the readers for their support and welcome suggestions about future directions for the series.

Samuel Jay Keyser
for the Editorial Board

Preface

This book arose because two shorter papers I was working on separately wouldn’t leave each other alone.
One was to be an attempt at defending and analyzing a new cross-linguistic generalization (PP Peripherality: PP complements are always more peripheral to their noun heads than adjectives). The other was to be a theoretical solution to a problem of phrase structure theory (how to label specifier–head structures) that had the added consequence of ruling out roll-up and remnant roll-up derivations. However, it became clear to me that the theoretical article needed an in-depth case study of a particular domain to give it bite, whereas the more empirical paper relied heavily on theoretical proposals articulated in the other article. Each needed to be complemented with the other. However, doing that would have led to far too many words for any self-respecting journal editor to accept. I hope, however, that the result makes for a reasonable read in book form.

The book, of course, is longer than I had planned, but it is also far too short in that it leaves many questions open. I have not touched on clausal complements to nouns (but see Moulton 2009 for a proposal that fits well with the system developed here) nor on analyses of complementation that treat it as relativization (Arsenijević 2009; Kayne 2010), and although the discussion of head-initial languages has some depth, there is still much work to do on the realization of nominal relations in head-final (and Ezafe) languages. I have also left aside much of the literature that takes certain nominal relations to be, at heart, a form of predication (den Dikken 2007a; Boneh and Sichel 2010). Furthermore, I only briefly touch on event nominalizations, which have generated a huge literature in the history of generative grammar, choosing to focus instead on what Barker and Dowty (1993) call “ultranominal” nouns.
The material presented here has, in various incarnations, been presented at the following venues, and I’d like to thank the participants for helpful and stimulating feedback: the LISSIM Summer School, Kausani, Uttarakhand (2009); the 6th Celtic Linguistics Conference, Dublin (2010); the Comparative Germanic Syntax Workshop, Tromsø (2010); the MIT Colloquium (2011); and Richie Kayne’s Advanced Syntax Seminar, New York University (2011), as well as at seminars and colloquia at the University of Tromsø, the University of Cambridge, Boğaziçi University, and of course presentations at Queen Mary’s Syntax Semantics Research Group (thanks here, especially, to Hagit Borer, Paul Elbourne, Daniel Harbour, Luisa Martí, and Linnaea Stockall).

I would also like to thank the following people for discussions about the ideas, or, indeed, for comments on written drafts: two anonymous referees for the MIT Press, Klaus Abels, Chris Barker, Hagit Borer (again), Dirk Bury, Terje Lohndal, Daniel Harbour (again), Gillian Ramchand, and especially Peter Svenonius for some detailed comments on a last-minute draft.

For linguistic aid, many thanks to: Iseabail NicIlleathainn, Iain MacLeòid, Murchadh MacLeòid, Boyd Robasdan, and Marion NicAoidh (Gaelic) and Mark Wringe and Sìlas Landgraf and the staff and students of Sabhal Mòr Ostaig, Isle of Skye, for data-gathering advice and help; ‘Ōiwi Parker Jones (Hawaiian); Peadar Ó Muircheartaigh (Irish); Maria Arché, Luisa Martí, and Álvaro Recio Diego (Spanish); Chiara Ciarlo, Roberta d’Alessandro, and Vieri Samek-Lodovici (Italian); Erez Levon and Itamar Kastner (Hebrew); Issa Razaq and Abdul Gadalla (Arabic); Øystein Nilsen and Kristine Bentzen (Norwegian); Shiti Malhotra (Hindi); Meltem Kelepır (Turkish); Tanmoy Bhattacharya (Bangla); Mythili Menon and Parvati Nair (Malayalam); Itziar Laka and Marta Uzchanga (Basque); Kaori Takamine (Japanese); Éva Dékány (Hungarian); Deepak Alok Sharma (Angika); and Anson Mackay (English).
Thanks also to Anson for putting up with my linguistics obsessions for 25 years! For caffeinic assistance (at times, subsistence), thanks to @NudeEspresso on Hanbury Street, Spitalfields, for an endless supply of flat whites. The core empirical work on Gaelic that is reported here was undertaken during a Leverhulme Major Research Fellowship, for which I am extremely grateful.

Finally, a word on the title. I propose in this book that the apparent relationality of nominals does not inhere in the nominal itself but rather in higher structure. This means that nouns are never relations; they simply denote undifferentiated substance. In terms of Aristotle’s Categories:

Moreover, primary substances are most properly called substances in virtue of the fact that they are the entities which underlie everything else, and that everything else is either predicated of them or present in them. (Aristotle, Categories 1.5)

It is in this Aristotelian sense that I mean “substance” here, with no claims about issues such as the mass/count distinction, which the book does not touch on.

Chapter 1
Introduction

The aims of this book are to develop a syntactic system that entirely separates structure building from the labeling of structure and to examine the theoretical, and some of the empirical, consequences of this idea. The primary reason to explore such a system comes from a number of problems that arise in the Bare Phrase Structure approach to syntactic representation (Chomsky 1995b). In Bare Phrase Structure, labeling is a side effect of the structure-building operation Merge: when two elements X and Y are Merged, creating a new syntactic object, one of these elements is chosen to be the label. However, this raises the question of how to choose the label. There are a number of possible approaches in the literature, but none of these is entirely satisfactory.
I argue in chapter 2 that they all have problems in providing a unified labeling algorithm, especially when specifier–head structures are considered. The alternative solution I propose builds on the idea that there are actually no true functional heads qua lexical items. Rather, structure is always built from lexical roots via Self Merge or standard binary Merge, where Self Merge is just the subcase of binary Merge where both inputs to the operation are token identical. The structures so built are directly labeled on the basis of a (set of) universal sequences of functional categories (roughly equivalent to the extended projections of Grimshaw 1991). That is, Self Merged roots are labeled with the start of some extended projection, and then that structure undergoes further structure-building operations. Each new structure is built from the previous one and is labeled on the basis of the labels of its immediate constituents and the relevant extended projection.

For example, take the root of the word cat, √cat. It has no category but may Self Merge, giving the set {√cat}, a syntactic object distinct from the root it contains. Now this object needs a label. That label can be any category that can start an extended projection. We could choose N, in which case each further structure-building operation will elaborate a nominal extended projection, or we could choose V, or A, depending in part on the root’s categorial flexibility. Let’s say we take the label of {√cat} to be N. Now we can either Self Merge this, giving {{√cat}}, or we could Merge, say, (the extended projection of) some quantifier with it, giving {{√some}, {√cat}}. In either case, the new object needs a label, and in both cases that label will be a function of the labels of the constituent(s) that the object contains and the sequence of categories in the independently given extended projection of N.
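As a rough computational illustration of this labeling scheme, consider the following toy sketch. The sequence ["N", "Num", "Q"] standing in for a nominal extended projection, and the tuple representation of labeled objects, are my own illustrative assumptions, not part of the proposal itself:

```python
# Toy sketch (an illustrative assumption, not the book's formal system) of
# structure building where labels come from a fixed extended-projection
# sequence rather than from designated heads.

NOMINAL_SEQUENCE = ["N", "Num", "Q"]  # stand-in for a universal sequence

def self_merge(x):
    """Self Merge: Merge(X, X) = {X}."""
    return frozenset([x])

def merge(x, y):
    """Binary Merge: Merge(X, Y) = {X, Y}."""
    return frozenset([x, y])

def label_start(obj, seq=NOMINAL_SEQUENCE):
    """A Self-Merged root is labeled with the start of an extended projection."""
    return (seq[0], obj)

def label_unary(labeled, seq=NOMINAL_SEQUENCE):
    """Unary case: a further Self Merge is labeled with the next category."""
    cat, _ = labeled
    return (seq[seq.index(cat) + 1], self_merge(labeled))

# {√cat} is labeled N; Self Merging that object projects Num above it.
cat_n = label_start(self_merge("√cat"))   # ('N', {'√cat'})
cat_num = label_unary(cat_n)              # label 'Num' over the N object
print(cat_n[0], cat_num[0])               # N Num
```

The point the sketch makes is that the label is computed from the sequence plus the daughter's label: no lexical head ever "projects".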
For example, in the binary case, the label will be some category in the extended projection of N whose specifier can be a quantifier (say, Q). In the unary case, it will be a further category in the semantic development of the nominal (e.g., a category Num, marking number). The structures that emerge from a system like this are what Brody (2000a) calls “telescoped” (see also Starke 2004). There is no independent head for any category except the root. Thus, rather than (1), we have (2).

(1) [Q [QuantP some] [NumP Num [NP N √cat]]]

(2) [Q [Quant some] [Num [N √cat]]]

I argue that this way of labeling structure is simpler than the standard Bare Phrase Structure system. In either system, one needs to state both the order of categories in an extended projection or functional sequence (see Starke 2001; Adger 2003; Williams 2003) and to provide categories for roots. Bare Phrase Structure just adds to that an extra notion of endocentricity that arises because functional categories are taken to be lexical items. I reject that assumption. Within this new system, the labeling problems do not arise, and as I argue in chapter 2, a unified labeling algorithm can be given.

This then is the basic architecture of the system I propose for separating off structure building from structure labeling. There are some immediate properties of this system that need comment.

First, Self Merge (Guimaraes 2000; Kayne 2010) is a fundamental operation. I argue that this operation comes for “free” by removing a stipulation in the standard version of Merge, thus simplifying the definition of Merge.

Second, it is, in this system, impossible to Merge a root with a syntactic object distinct from that root. This is because roots on their own are not in the domain of the labeling algorithm (see section 2.3.1). It follows that arguments cannot be introduced as sisters to lexical roots and that the semantic relation between a root and an argument must be negotiated by functional structure.
Of course, this is no surprise, given the huge range of work that has argued for just this conclusion, on mainly empirical grounds, in the last decade (Kratzer 1996; Hale and Keyser 2002; Ramchand 2003; Borer 2005b; Bowers 2010, among many others). However, in the theory I develop here, this conclusion is a consequence of the computational system rather than an empirical claim. This property of the system also highlights the stipulative nature of the notion of a special local domain for the introduction of arguments: there is no theoretically sound reason to take arguments to be local to their apparent root. In fact, the phrase structure system forces a divorce between a root and its arguments.

Third, if there are no functional heads, what are we to make of functional morphemes, both bound and free? I propose that bound morphemes are just pronunciations of functional categories attached to roots via extended projections (in a way that is similar to Brody 2000a or more particularly to the notion of spanning developed in Williams 2003), whereas at least some free functional morphemes are spellouts of these categories that are not so attached (i.e., they are spellouts of fragments of extended projections). Other free, apparently functional, morphemes, like auxiliaries, are spellouts of structures built up from lexical roots, as described above for √cat.

Finally, in a binary structure like the uppermost branches in (2), given that the label is dependent on both daughters, there is no way of defining the classical notion of specifier or complement (as, say, second and first Merge, respectively). The structure is, as far as the syntactic operations are concerned, entirely symmetrical. However, asymmetrical interpretations need to be imposed by the semantic interface for identification of function–argument structure and by the articulatory or acoustic interface for identification of linear order.
I define new notions of complement and specifier that read these asymmetries off of the extended projection information in the tree. If a mother and daughter are in the same extended projection, and the daughter is lower in that projection, then the daughter is a complement of the mother; otherwise, the daughter is a specifier of the mother. So in (2), because Num and Q are in the same extended projection, and Num is lower than Q, Num is the complement of Q. Because Quant and Q are not in the same extended projection, Quant is a specifier of Q. These relations are then treated asymmetrically by both the semantics (where complements are composed before specifiers) and by the linearization systems (where complements are linearized after specifiers).

This last point has an important consequence, probably the most important of the entire system. It makes roll-up (and hence remnant roll-up) derivations impossible. To see why, consider a structure like (3). In this structure, suppose that all nodes labeled X are in the same extended projection, and that the subscripted numbers indicate the height of the label in that extended projection.

(3) [X5 X2 [X4 . . . ⟨X2⟩ . . . ]]

In (3), where X2 has moved from inside X4, both daughters of X5 are in the same extended projection, and both are lower in that extended projection than their mother. In such a configuration, it is impossible to determine which is the complement, no asymmetry can be imposed by the interfaces, and the structure is uninterpretable. This system then rules out roll-up derivations as a matter of the computational system and therefore provides a more restrictive theory of syntax than that currently supposed. I argue for a different way of capturing apparent roll-up effects in chapter 3 that replaces them with base-generated structures (see also Brody and Szabolcsi 2003; Adger, Harbour, and Watkins 2009).
I explore these various consequences of the theory in chapters 4 to 6, concentrating on the syntax of relational nominals, which provides a strong argument for the nonexistence of a notion of a locality domain for the satisfaction of argument structure. I also show that there is surprising evidence for a base-generation approach over a roll-up movement approach to the ordering and hierarchy of the constituents of the noun phrase. I provide a brief summary here.

The standard view of relational nominals emerges from a combination of the syntactic analysis proposed in Chomsky 1970 combined with the idea that relational nominals are semantically parallel to transitive verbs in being two-place predicates:

(4) [N̄: λx.side(x, the_table) [side: λyλx.side(x, y)] [PP: the_table of the table]]

However, a major problem with this approach is that, across languages, the presence of the internal argument of the relational nominal is systematically optional, whereas for verbs it is (at least descriptively speaking) lexically determined. I argue that the evidence for true argument structure in relational nominals is lacking (see Higginbotham 1983; Zubizarreta 1987; Grimshaw 1990). Furthermore, connected to this lexicosemantic claim, the theoretical system developed in this book makes (4) an impossible representation because the PP cannot be a complement of a lexical head. Instead, the closest representation is (5), where N̂ and G are categories in the extended projection of the nominal:

(5) [G [PP of the table] [N̂ side]]

This representation itself raises two problems: one of the ordering of the constituents, and one of the etiology of the relational semantics. Given that the PP is a specifier of G, why is the order not of the table side, and how is the semantics of side appropriately projected through its extended projection to the point where it can take of the table as an argument?
I show, however, that the representation in (5) should be replaced by (6), where side is not relational (i.e., it just means λx.side(x)), and where the type of the relation (in this case, it is a part type of relation) is introduced by a light root. The structure built from Self Merge of this root is labeled with a category I dub ק, which is responsible for the function–argument structure that encodes relationality and for the introduction of the prepositional case-marking morphology.1

(6) [ק [N̂ side] [ק [PP of the table] [ק √PART]]]

The semantics of ק is a relation whose type is identified by the root, in this case λyλx.part(x, y). This directly combines with its specifier the table to give a meaning of λx.part(x, the_table). Morphosyntactically, ק values the case feature on the table, and this valued case feature is realized as of. Once ק and of the table have Merged, the new constituent is then of the correct semantic type to combine with side as a predicate modifier, giving:

(7) λx.side(x) ∧ part(x, the_table)

This approach provides a solution for the ordering problem in that the projection of the relational nominal (understanding this phrase as now being purely descriptive) is in a specifier of ק. The category ק combines first with its argument PP, which is a specifier and which linearizes to the left of the projection line, assuming a standard view that takes specifiers to linearize to the left of their complement (Kayne 1994; Brody 2000a). The N̂ containing side is then also a specifier of the ק category that has a complement that contains the PP. If we continue to assume the standard view of linearization, the N̂ containing side will, perforce, appear to the left of the PP. The etiology of the relationality is in a functional category ק, whose relation is named by the root it contains (in this case, √PART) and whose semantics projects through the structure as is standard. This approach immediately captures the optionality of the “argument” of a relational nominal.
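The composition in (6) and (7) can be imitated with ordinary lambdas over a toy model. The two-element domain and the particular extensions of side and part below are illustrative stand-ins of my own, not part of the analysis:

```python
# Toy model of the composition in (6)-(7). The domain and the extensions of
# 'side' and 'part' are illustrative assumptions.

DOMAIN = {"side1", "side2", "the_table"}

def side(x):                  # non-relational noun meaning: λx.side(x)
    return x in {"side1", "side2"}

def part(x, y):               # the relation named by the light root √PART
    return (x, y) in {("side1", "the_table")}

# ק denotes λyλx.part(x, y); Merging the PP 'of the table' saturates y:
qoph = lambda y: (lambda x: part(x, y))
qoph_pp = qoph("the_table")   # λx.part(x, the_table)

# Predicate modification with 'side' gives (7): λx.side(x) ∧ part(x, the_table)
side_of_the_table = lambda x: side(x) and qoph_pp(x)

print([x for x in sorted(DOMAIN) if side_of_the_table(x)])  # ['side1']
```

Only the side that stands in the part relation to the table survives, which is the intersective result (7) describes.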
There is a perfectly well-formed syntactic derivation for side that does not involve relational ק, in much the same way that there is a perfectly well-formed syntactic derivation for side that does not involve a numeral, or an adjective, giving the “optionality” of numerals and adjectives. Assuming that D is Merged higher than ק, and that there is a syntactic dependency between the structure projected by the root side and D, we rule out a structure with no “lexical” root, containing only ק (so *the of the table).

The next question that arises is the identity of N̂. If N̂ is actually just N, then the resulting structure closely mimics the traditional view, with the PP being structurally separated from the N by a minimal layer of functional structure. However, unlike the standard approach, the perspective adopted here takes N̂ to be a specifier, so it is possible that N̂ is actually rather larger than just the root plus the lexical category N. We therefore, unlike the classical approach, allow constituent structures where the PP is external to a constituent containing a fair amount of nominal material:

(8) [ק [N̂ three rough sides] [ק [PP of the table] [ק √PART]]]

This contrasts with a structure that would be more similar to the classical view proposed and defended in Chomsky 1970:

(9) [three [rough [ק [N √sides] [ק [PP of the table] [ק √PART]]]]]

It is then an empirical question as to which is superior. I show in chapter 5 that the correct view is the one allowed by the new system: the relational nominal projects sufficient structure to allow Merge of intersective APs, numerals, and cardinal quantifiers, and some markers of definiteness before it is Merged with ק. The primary evidence for this is the interaction of the syntax of APs, PPs, and N, which I show is best captured by this new approach.
The conclusions of this investigation also allow us an understanding of a new typological generalization that I call PP Peripherality:

(10) PP Peripherality
When (intersective) AP modifiers and PP “complements” both occur to one side of N inside a noun phrase, the PP is separated from the N by the AP.

What (10) captures is the fact that, across languages, the PP complement appears further away from the head noun than most AP modifiers. This is entirely unexplained on the standard account but is expected on the picture drawn here.

Chapter 5 also takes up the issue of the relations within the DP in more detail. It proposes that articles are actually the spellout of a definiteness projection “lower” than ק, when that projection has moved to the specifier of D. Combined with a view of genitive possessors that takes them to be derived via movement from a ק-like projection to the specifier of D for case reasons, this predicts the complementarity between articles and genitive possessors seen in many unrelated languages.

However, the empirical evidence presented in chapter 5 for the order and constituency of AP and PP constituents of the noun phrase is actually also compatible with a movement account. That is, the structure in (9) can be mimicked by taking the PP to be generated in the standard position as a complement of N and then to raise to a higher specifier position, followed by movement of the remnant, as in Kayne 2004, Cinque 2006, and elsewhere:

(11) [N̂ [sides ⟨of the table⟩] [[PP of the table] [rough ⟨N̂⟩]]]

Here the PP is Merged with the noun side, this constituent is then modified by an AP (rough), the PP is then moved leftward, and the remnant raised yet further leftward. The theory laid out in chapters 2 and 3 rules out such a derivation, creating a sharp contrast with a looser remnant movement approach.
This issue is taken up in chapter 6, where I show that the system developed here makes superior predictions to those of a remnant roll-up analysis in the domain of the interaction of binding and linear order (Pesetsky 1995; Cinque 2006). I argue that a simple surface binding algorithm is available in the representations predicted by the theory developed here, whereas the remnant movement analysis must appeal to selective reconstruction in a way that simply recapitulates the empirical observations.

Overall, on an empirical level, chapters 4 to 6 of the book argue that relational nominals are not relational, that relationality is negotiated at some structural distance from its apparent source, and that its true source is a light root that names a relation that is semantically negotiated via functional structure. This leads to an explanation for the new (putative) universal mentioned before: PP “complements” are more peripheral with respect to their apparent selector than intersective modifiers. On a theoretical level, the book makes a case for separating off the algorithm for labeling from the structure-building operations (Hornstein 2009) and for telescoped syntactic representations (Brody 2000a) whose labels are determined by universally given sequences of categories. The resulting symmetry of structure requires that the interfaces impose asymmetries for semantic and phonological interpretation, and I propose that the sequences of categories (extended projections) are responsible for this assignment.
This theoretical configuration leads to two constraints on syntactic representations: first, lexical roots cannot Merge with phrases, forcing complete severance of argument introduction from the root; and second, roll-up and remnant roll-up derivations are impossible, and so cannot be used as a means for capturing apparent mirror effects in syntactic hierarchy and linear order.2

Chapter 2
Labels and Structures

2.1 Introduction

In this chapter, I outline the reasons that the standard Bare Phrase Structure–style system (Chomsky 1995a) has problems with labeling structures containing specifiers. I argue against a range of proposals for the labeling of specifiers and propose a new system where this problem no longer arises. The new system is a theoretical advance on the standard system in that it removes a stipulation built into Merge and simultaneously solves the labeling problem for specifiers. Additionally, it has the consequence that phrases can never be Merged with lexical roots, forcing a complete severance of arguments from their lexical entries; arguments must be introduced by syntactic categories rather than by lexical properties of roots (Borer 2005a, and also work in the Distributed Morphology tradition, following Marantz 1997).

2.2 The Specifier Problem

The standard view of how syntactic structures are built up in minimalist theorizing is that lexical items are subject to Merge, defined in (1), where X, Y, and {X, Y} are all syntactic objects:

(1) “. . . we take Merge(X, Y) = {X, Y} . . .” (Chomsky 2007, 8)

More explicitly (adapted from Collins and Stabler 2009):

(2) Let W be a workspace and let X, Y be syntactic objects where X, Y ∈ W and X and Y are distinct (X ≠ Y). Then External-Merge_W(X, Y) = {X, Y}.

This definition takes Merge to be responsible for the creation of structure in an unstructured space (the workspace). For example, take a workspace like (3), where an application of Merge(A, B) yields the workspace in (4).
(3) {A, B, C, D}

(4) {{A, B}, C, D}

Given that no restriction is placed on the provenance of the inputs to Merge, this definition also yields a movement configuration, generated in the same basic way but with X in the definition taken to contain Y (or vice versa). So Merge(A, {A, B}) yields:1

(5) {{A, {A, B}}, C, D}

This gives the, by now familiar, distinction between External Merge and Internal Merge. The output of Merge then enters into further syntactic operations. Chomsky suggests that the output therefore must have some properties so that these further operations can apply to it:

Each SO [syntactic object] generated enters into further computations. Some information about the SO is relevant to these computations. In the best case, a single designated element should contain all the relevant information: the label (the item “projected” in X̄ theories; the locus in the label-free system of Collins 2002). The label selects and is selected in EM [External Merge], and is the probe that seeks a goal for operations internal to the SO: Agree or IM [Internal Merge]. (Chomsky 2008, 141)

Assuming this to be the case, a means of determining the label is necessary. Chomsky (2008, 145) suggests:

(6) a. In {H, α}, H an LI [lexical item], H is the label.
b. If α is Internally Merged to β, forming {α, β}, then the label of β is the label of {α, β}.

That is, the label is predictable from the internal configuration of the output of Merge: in the head–complement case of Merge at least, the label is the element whose properties are available with minimal search. It follows that in this case the label is H, which is a lexical item. If we Merge C to the complex syntactic object created in (5) (i.e., we apply Merge(C, {A, {A, B}}) to this workspace, yielding (7)), then C is the label of {C, {A, {A, B}}} by virtue of clause (6a).
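The workspace transitions in (3) through (7) can be traced mechanically. The sketch below is one way to model them, with strings and frozensets standing in for syntactic objects (a representational convenience, not part of the definition):

```python
# Sketch of External and Internal Merge over a workspace, following
# Merge(X, Y) = {X, Y}. Frozensets model both workspaces and syntactic objects.

def merge(x, y):
    return frozenset([x, y])

def external_merge(ws, x, y):
    """(2): X, Y distinct members of the workspace; replace them with {X, Y}."""
    assert x != y and x in ws and y in ws
    return (ws - {x, y}) | {merge(x, y)}

def internal_merge(ws, container, x):
    """Movement: X is drawn from inside an object already in the workspace."""
    assert container in ws and x in container
    return (ws - {container}) | {merge(x, container)}

ws = frozenset({"A", "B", "C", "D"})          # (3) {A, B, C, D}
ws = external_merge(ws, "A", "B")             # (4) {{A, B}, C, D}
ab = merge("A", "B")
ws = internal_merge(ws, ab, "A")              # (5) {{A, {A, B}}, C, D}
ws = external_merge(ws, "C", merge("A", ab))  # (7) {{C, {A, {A, B}}}, D}
print("D" in ws, len(ws))                     # True 2
```

Note that Internal Merge here is just Merge with one input drawn from inside the other, exactly as the text describes; no separate movement operation is defined.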
(7) {{C, {A, {A, B}}}, D}

With suitable statements about the timing of operations, it may be possible to unify the movement case in (6b) with (6a) (e.g., minimal search identifies the LI β in {β, γ}; this then probes α in γ and is therefore identified as the label of {α, β}, as it is still "active" until α is raised and Merged). However, Chomsky does not explicitly unify the two parts of the labeling algorithm, and doing so is not trivial. As matters stand, we have a nonuniform labeling algorithm.

However, as noted by Chomsky (2008, 145), even this nonunified algorithm is insufficient. There are three problematic cases. The first is the initial step of the derivation (in fact, of most extended projections in the derivation), where two LIs Merge. Given that both inputs to Merge are LIs, the algorithm determines neither as the label uniquely. Chomsky suggests that, in this case, either may be the label and that, if the wrong choice is made, the resulting structure will be quickly filtered out by the interface systems. This approach actually takes labeling to be nondeterministic, in contrast to the thrust of the labeling algorithm given in (6) (see Citko 2008): the label may be drawn from either constituent of the relevant SO, and derivations with incorrect choices are filtered out (contra the crash-proof approach of Frampton and Gutmann 2002).

Another problematic case is when an LI (α in (8)) is Internally Merged to some higher projection:

(8) {α, {β, {γ, α}}}

In this case, the two subclauses of the labeling algorithm conflict, with (6a) making the LI α the label, whereas (6b) makes β the label. Chomsky, taking labeling to be nondeterministic in this case, and following Donati (2006), suggests that in such a situation both outcomes are in fact possible. For example, in a case where some wh-word has been raised to the specifier of CP, either the wh-determiner itself labels the resulting structure as a DP or the attracting probe C labels it as a CP.
This is then taken to be what happens in an English-style free-relative construction:

(9) [CP/DP What[D] [C̄ C [TP you wrote what]]]

This particular analysis is problematic, because it is unclear that what is itself not syntactically complex. In other languages, free-relative constructions have an entirely different syntax and morphology from indirect wh-questions, casting doubt on their unification in English. Compare the indirect-question and free-relative variants in Scottish Gaelic, for example:

(10) a. Dh'fhaighnich mi dè a sgrìobh thu.
        ask.PAST I what that write.PAST you
        'I asked what you wrote.'
     b. Leugh mi na sgrìobh thu.
        read.PAST I F-REL write.PAST you
        'I read what you wrote.'

The indirect question is formed by a fronted wh-expression dè, which shares distribution with any other wh-expression (i.e., instead of dè 'what', we can also have cò 'who', cuine 'when', etc.). The free relative can only be formed with the particle na, which is plausibly a contraction of the definite article an and the relative particle a. Further evidence for this analysis is that certain prepositions in Gaelic carry a special inflection when they appear with a following definite article (although not with other definite expressions, such as proper names):

(11) a. ris na caileagan
        to.DEF the.PL girls
        'with/to the girls'
     b. *ri na caileagan
        to the.PL girls
        'with/to the girls'
     c. ri Màiri
        to Màiri
        'with/to Màiri'
     d. *ris Màiri
        to.DEF Màiri
        'with/to Màiri'

We can take this inflection as diagnostic, then, of the presence of a definite article (rather than of syntactic definiteness generally). This inflection appears obligatorily with free relatives:

(12) Èisd mi ris na sgrìobh thu.
     listen.PAST I to.DEF F-REL write.PAST you
     'I listened to what you wrote.'

However, in constructions where the wh-word dè 'what' is in situ (e.g., echo questions), it does not trigger the definiteness inflection on a P, which suggests that these expressions do not incorporate an article:

(13) Èisd thu ri(*s) dè?
     listen.PAST you to.(*DEF) what
     'You listened to what?'

This suggests that free relatives in Gaelic involve an article (D) taking a relative-clause complement. For example, following Adger and Ramchand's (2005) proposal that Gaelic relatives simply involve the direct binding of a variable from C, we would have:

(14) [D na [C[REL_i] you wrote pro_i]]

A similar analysis might be extended to the English free-relative case, reducing the force of Donati's argument and Chomsky's appeal to it to explain this problem for the labeling algorithm.2

The final case is, however, the most problematic. I quote Chomsky here:

The exceptions are EM of nonheads XP, YP, forming {XP, YP}, as in external argument merger of DP to v*P. The conventional assumption is that the label is v*. A possibility is that either label projects, but only v*-labeling will yield a coherent argument structure at CI. Another possible case is small clauses, if they are headless. A suggestive approach, along the general lines of Moro (2000), is that these structures lack a label and have an inherent instability, so that one of the two members of the small clause must raise. (Chomsky 2008, 160, note 34)

This is what I term the Specifier Problem:

(15) In a configuration {XP, YP}, how is the label determined?

Given that neither XP nor YP is an LI, Chomsky's algorithm does not apply. Moreover, no obvious considerations based on simplicity of search seem to pertain. For External Merge of a specifier, Chomsky suggests two possible solutions. One is based on Moro's (2000) idea that {XP, YP} structures are somehow too symmetrical, and this symmetricality has to be disrupted by movement of one or the other of XP and YP. Applying this to Merge of the specifier of v*P, we could say that the specifier has to raise, leaving a structure with just a head (LI), v*, which provides the label (assuming that the trace can be ignored).
However, this would mean that all base-Merged specifiers have to raise, given that they will all give rise to the same problem. But the question then is whether there is always a target for such raising. Take, for example, small-clause absolutives in English:

(16) With the vase on the table, the room looks perfect.

There is no evidence that the vase has moved. In fact, the lack of expletives in such structures suggests that there is no target position:

(17) *With there a vase on the table, the room looks perfect.

Furthermore, taking Moro's (2000) position, we might expect predicate inversion in such constructions, which is also impossible in English:

(18) *With on the table the vase, . . .

Similar considerations apply to PP and AP complements of small-clause-taking predicates like consider and possibly also to causative make and perception see.

Connected to this empirical problem is a theoretical one: if it is the label that selects and is selected in External Merge, when T combines with the unlabeled constituent {Subject, {v, V}}, before movement of the subject, the constituent has no label and so cannot be selected by T. The subject cannot move before Merge of whatever selects {v, V}, because there is no position for it to move to; but if the subject does not move, then {Subject, {v, V}} cannot be selected, leading to a paradox: the subject must move so the constituent can have a label, but it cannot move because there is no position to move to unless {Subject, {v, V}} already has a label.3

The other idea Chomsky considers is the same strategy as for External Merge of two LIs: either can be the label, but the interface will filter out the incorrect labeling via appeal to the argumental properties of the embedded predicate. This will be the case for initial Merge of the specifier of v* or, more generally, for any subject-introducing functional head such as PredP (Bowers 1993) or the head that introduces possessors (Radford 2000), and so forth.
One problem with this proposal is its inconsistency with the general algorithm. Why not just allow either label to project in general, with the interface filtering out the problematic cases? The answer is that, in most other instances, it is not possible to appeal to general considerations of coherent argument structure (e.g., in Merge of T and Asp, no obvious argument-structure considerations arise that will determine which of the two projects). Moreover, appeal to conceptual-intentional (CI) properties for this structure, but to syntactic labeling algorithms for the others, seems decidedly unminimalist: either the interface conditions apply generally across the various subcases, or the syntactic system determines the label via some formal property deriving from the functioning of Merge.

One response to this could be to specify that one of T or Asp is the semantic functor, that semantic functors correspond with syntactic argument-taking status, and that therefore the projecting head is whatever the semantic functor is. However, this is not an appeal to argument structure at the CI interface in the same sense, and it effectively amounts to just stating which constituent is to be taken as the head. Recall that it is always possible to raise the semantic type of an argument to that of a functor (as in the standard analysis of generalized quantifiers), effectively negating the possibility of an appeal to function-argument representation as a constraint on syntactic labeling.

Furthermore, Chomsky's appeal to coherent argument structure will not apply to all cases of the Specifier Problem. For possessors in particular, it seems unlikely that the interface will simply filter out the wrong answer by appeal to properties of argument structure. Take DPs like Anson's picture of Lilly or Anson's side of the table. The argument structure of picture or side is irrelevant to the interpretation of the possessor.
On initial Merge of Anson with whatever structure is built above picture of Lilly or side of the table that allows the possessive interpretation, we need to ensure that the label is that of picture or side, and not that of Anson. This is for two reasons: first, the label is a signal to the CI interface as to what the phrase is to mean, and projecting the wrong label will give a meaning something like 'Anson who is relevantly related to the picture of Lilly'; and second, languages treat the two projections differently (e.g., Hungarian requires a special possessive morpheme on the noun picture when it is possessed, so something must identify it as the syntactic possessee).

Given this discussion, we can in fact strengthen the Specifier Problem and ask:

(19) Is there a unified labeling algorithm that applies in the same way to all syntactic configurations?

One response to this question is that of Collins 2002: structures are not labeled. The various syntactic relations that elements enter into are asymmetrical enough to provide information about which of the two subconstituents of a syntactic object is the head. That information will serve the purposes of labels in a labeled system. As Collins notes, this requires the syntactic system to be sensitive to all sorts of syntactic relations (syntactic selection, agreement, θ-role assignment, Extended Projection Principle [EPP], etc.), with the asymmetry of each relation effectively providing the information about which subconstituent of a syntactic object is taken to be the label. There have been a number of criticisms of this label-free system (Seely 2006; Hendrick 2007). However, I think the most compelling reason not to adopt this approach is how it interacts with movement theory. Take, for example, a derivation where an unaccusative verb combines with a DP containing a specifier:

(20) Anson's cat arrived.
This derivation includes, in a label-free system, a structure of the following form, at the point before EPP-driven movement applies:

(21) [T[uD] [v [arrive[V] [DP Anson's [D cat]]]]]

Now the EPP feature on T attracts the closest DP. But there is no information on the label of the complement of V to ensure that Anson's cat counts as the closest DP, and we incorrectly predict generalized possessor raising, as in (22).4 The issue is, given (20), how to ensure that the whole constituent containing Anson's cat is moved, rather than just Anson:

(22) *Anson's arrived cat

There are solutions to this problem, including developing particular theories of pied-piping, but these all effectively restate standard labeling. The label-less structures effectively predict that a DP in the specifier of another DP will always be more prominent for syntactic relations outside the latter DP. That prediction does not seem, in general, to be correct.

Adger (2003) suggested an alternative but related solution to these problems that relies on the idea that operations such as Merge are triggered (or at least swiftly checked by the syntactic system). In that system, selectional features are uninterpretable features that have to be checked by a matching feature on the selectee (see Chomsky 2000, 133). Merge of a head with its complement will invariably require the head to bear such a feature (regardless of whether the complement is just an LI or a phrase). The proposal takes the X̄-projection of an LI to be identical to that LI, which allows, for example, v* to bear a selectional feature (uD) that projects to v̄* and that can then be checked by Merge of a DP subject under sisterhood. In such a system, the slogan what selects projects determines labels. Boeckx (2008) proposes a similar system but takes the labels to be given by the element that probes for ϕ-features.
More recently, Cecchetto and Donati (2010), following Adger (2003) and Boeckx (2008), have suggested that it is always the probe in any syntactic operation that labels the output of that operation.

All of these systems attempt to argue that there is, in fact, a unified labeling algorithm (for Collins, the unification of labeling is to be achieved by eliminating labeling), and they all adopt the intuition that it is an internal property of a lexical item (possibly inherited by that LI's projections) that determines its capacity to have complements and specifiers. However, none of them solve the essential problem for specifiers: in a configuration {XP, YP}, how can the label be determined without additional syntactic computation? What these proposals do instead is say: in a configuration {XP, YP}, inspect the heads of XP and YP to see whether they have a property that will determine the label of the whole configuration. For example, imagine the derivation has reached a point where Y, which bears a selectional feature for Z and a selectional feature for X, has Merged with Z:

(23) [Y[Z, X] Z]

In the Adger/Boeckx/Cecchetto/Donati approach, Y labels the new syntactic object. At the next stage of the derivation, XP is Merged and the label for the new structure is to be calculated. For Boeckx, and for Cecchetto and Donati, the asymmetry of selection (i.e., the fact that Y still has an X feature to be satisfied) is enough to label the resulting object as Y. For Adger, the whole complex Y[Z, X] labels the mother node in (23), and then the X-feature is satisfied under sisterhood, so that the new object is a YP. In either case, the XP must first be Merged before the Y–X relationship is determined.
But now the basic problem reemerges:

(24) [α XP [Y[Z, X] Y[Z, X] Z]]

We must look at internal properties of both XP and the projection of Y to determine the label, but this means that the label of α is not determined by properties of the elements that α immediately contains.5 Furthermore, if a single LI can have both a complement and a specifier, and these have different syntactic properties, the system requires further stipulations to order them. For example, if a ditransitive verb takes both a DP and a PP as internal arguments, how is this "base" order effected so that one is the complement and the other the specifier? Solutions to this problem are unappealing: we could stipulate that the P selectional feature is somehow less embedded inside the LI than the D selectional feature and hence accessed first, or we could have a lexical representation that stipulates that PP is a complement and DP is a specifier, keying the syntactic combinatory rules to this featural stipulation. In such a system, we are forced to pack the information about the syntactic computation into the lexical item itself, effectively stipulating structure and order in each lexical item. This is surely one valid theoretical move, and one that was taken in the development of X̄-theory by Jackendoff (1975) as well as in unification-based and categorial frameworks (Bach 1984; Pollard and Sag 1987). Syntactic generalizations then become generalizations about classes of lexical items. However, I want to pursue a different line here, keeping to the view expressed in Adger 2010a that hierarchical structure is always built by the syntax rather than specified lexically. In that paper, I made the point that all hierarchical structure in human language should be built by the same operation. Given that Merge builds hierarchical structure and that, by hypothesis, LIs are the input to Merge, LIs should lack hierarchical structure entirely.
This has the effect of constraining what an LI can look like in a way that appears to be empirically useful. For example, it rules out an LI that selects a complement whose head in turn selects a particular category (i.e., it imposes a locality of selection on LIs). Keeping to this No Complex Values Hypothesis, we are led to pursue an alternative to the line of Jackendoff and Head-Driven Phrase Structure Grammar (HPSG) for solving the following two problems.

(25) a. The Specifier Problem
        In {α, β}, where neither α nor β is a lexical item, how is the label to be determined?
     b. The Labeling Problem
        Is there a unified labeling algorithm that will suffice for all cases, and if so, what is it?

2.3 Diagnosis: The Problem Is Heads, Not Labels

The approach I explore takes these problems to emerge because of the LI-drivenness of the system. In Adger 2010b, for rather different reasons, I suggested that labeling should be exocentric rather than endocentric (see also Boeckx 2010). In the remainder of this chapter, I develop this idea more fully and show how it provides a solution to both the Specifier Problem and the Labeling Problem.

Merge is usually understood to be a binary operation. As discussed earlier, given a binary operation, the logical possibility exists that one operand may be part of the other, allowing us to distinguish between binary and singulary transformations (see Chomsky 1961) and resulting in a system with both External Merge and Internal Merge. Consider again the formalization of this idea given by Collins and Stabler (2009):

(26) Let W be a workspace and let X, Y be syntactic objects where X, Y ∈ W and X and Y are distinct (X ≠ Y). Then, ExternalMerge_W(X,Y) = {X,Y}.

This formalization makes clear that there is a further logical possibility, currently ruled out by the distinctness clause: the operands may be identical.
If the operands are identical, the output of the Merge operation is a singleton set (see also Guimaraes 2000; Kayne 2010).6

Following these suggestions, the first theoretical proposal I would like to make is:

(27) Remove the distinctness condition on Merge.

We can schematize the three ensuing possibilities as:

(28) a. Merge(X, Y), X distinct from Y, → {X, Y} (External Merge).
     b. Merge(X, Y), X part of Y, → {X, Y/X} (where Y/X signifies X is contained in Y) (Internal Merge).
     c. Merge(X, Y), X = Y, → {X, X} = {X} (Self Merge).

Whereas External Merge and Internal Merge give rise to a syntactic object with a cardinality of 2 (i.e., it has a binary structure), Self Merge gives rise to a syntactic object with a cardinality of 1 (i.e., it is a unary structure).7 Schematically, one kind of derivation, utilizing only Self Merge, will look as follows:

(29) a. Merge x with x = {x, x} = {x}.
     b. Merge {x} with {x} = {{x}, {x}} = {{x}}.
     c. Merge {{x}} with {{x}} = {{{x}}, {{x}}} = {{{x}}}.
     d. . . .

Mixing Self Merge with External Merge will give a derivation of the following general shape:

(30) a. Merge x with x = {x, x} = {x}.
     b. Merge {x} with {x} = {{x}, {x}} = {{x}}.
     c. Merge y with y = {y, y} = {y}.
     d. Merge {{x}} with {y} = {{{x}}, {y}}.
     e. . . .

The immediate issue to address now is that of the label of these various constituents. Taking the Self Merge derivation first, it appears that we would expect no labeling to be possible, because no head is Merged (i.e., the only LI is x). If it is heads that provide labels, and all structure needs to be labeled, then we could rule out unary branching structures (see Kayne 2010). However, I want to pursue here the idea that the effect of iterated Self Merge is to create an extended projection of the initial root category in the absence of any further merger of heads.
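Because {X, X} = {X}, dropping the distinctness clause makes Self Merge fall out of ordinary set formation with no extra machinery. A minimal sketch, mirroring the schematic derivations in (29) and (30) (the variable names are mine):

```python
def merge(x, y):
    # With no distinctness condition, {x, x} collapses to {x} (Self Merge).
    return frozenset([x, y])

# (29): iterated Self Merge builds a unary "spine" {x}, {{x}}, {{{x}}}, ...
s1 = merge("x", "x")      # {x}
s2 = merge(s1, s1)        # {{x}}
s3 = merge(s2, s2)        # {{{x}}}

# (30): mixing in External Merge yields a binary object at step d.
t1 = merge("y", "y")      # {y}
d = merge(s2, t1)         # {{{x}}, {y}} -- cardinality 2

print(len(s3), len(d))    # 1 2
```

The cardinality contrast in the last line is exactly the unary/binary distinction drawn in the text: Self Merge outputs have cardinality 1, External Merge outputs cardinality 2.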
I am going to adopt a methodology that simply assumes that work in the cartographic approach to syntactic structure (e.g., Rizzi 1997; Cinque 1999) is along the right lines, and I will further assume that there is a solution to the problem of what gives rise to the cartographic ordering and that this solution is not based on one functional head syntactically or semantically selecting the next (see, especially, Starke 2001 and Adger 2003, which take the extended projection of a root to be given by an interface constraint on Merge, and Williams 2003, who applies this same methodology in theory development). Starke states this as:

(31) there exists an "fseq"—a sequence of functional projections—such that the output of [Merge] must respect fseq. (Starke 2001, 155)

Williams takes "the existence of the functional sequence and its linear structure as axiomatic" (Williams 2003, 175) and leaves open the mystery of the difference between functional embedding (i.e., the hierarchical ordering of functional categories) and what he calls complement embedding (i.e., the capacity of a verb or other lexical category to take a whole new functional hierarchy as a complement). Adger, following ideas stemming from Abney (1987) and Grimshaw (1991), defines a Hierarchy of Projections, taking, for example, vP to be "an extension of the projection of VP, in that it is still verbal, but it adds further semantic information" (Adger 2003, 135). For Adger (2003), Merge requires either satisfaction of a selectional relationship via feature checking or satisfaction of the Hierarchy of Projections (the acuteness of the mystery raised by complement vs. functional embedding becomes especially clear in the partial formalization of the system given by Adger (2010a), where two different definitions of Merge have to be developed—a problem solved here by actually simplifying the definition of Merge).
Adopting this method of theory construction, let us take the extended projection of any root to be given axiomatically, as far as the syntax is concerned. It is simply a property of Universal Grammar (UG) (hopefully to be derived in some fashion; see, e.g., Nilsen 2003). This leads to the second major theoretical proposal:8

(32) There are no functional categories qua lexical items.

In any particular act of syntactic combination, the label can be given directly, and locally, on the basis of antecedently assigned labeling and the axiomatic functional sequence. The only lexical item necessary is the root of the extended projection, and this provides the initial label. For the moment, I will take the core lexical category labels to be N, V, and A (following Baker 2003) and assume that these categories label the output of Self Merge of lexical roots (taking roots themselves to be labelless; e.g., Marantz 2006; Borer 2005a; and ultimately Chomsky 1970).9 Rather than a single lexicon consisting of both "functional" and "lexical" LIs, we have:10

(33) a. RLex = {√1, . . . , √n}, the set of LIs (roots)
     b. CLex = {l1, . . . , ln}, the set of category labels

In this system, elements of RLex are in the domain of Merge, as are outputs of Merge. Structure is built from RLex plus Merge. On the assumption that CLex is disjoint from RLex, elements of CLex are simply labels for the structures built by Merge. Making explicit the assumption defended earlier that the extended projections given by UG (however derived) can be treated as axiomatic, I define such extended projections as:

(34) A Universal Extended Projection of a category C (UEP_C) is a sequence of labels drawn from CLex (l_s, . . . , l_t), where l_s is the Start Label and l_t is the Terminal Label.

I assume, initially, three of these, started by N, V, and A (Baker 2003), so we have UEP_N, UEP_V, and UEP_A. N, V, and A are the labels of the syntactic objects immediately containing roots.
We can then state the binary Cartesian product of CLex as a set of Label Transition Functions (LTFs), which I will call Λ:

(35) Λ = CLex × CLex = {<N, Cl>, <N, N>, <Cl, N>, <Cl, Cl>, <N, Num>, . . . }

Λ itself is subject to no constraints; it allows mappings from any category to any other. It is therefore extremely liberal in what it allows. However, for any particular (I-)language, some subset of Λ will exist and will define, for that language, the particular extended projections available, as well as the possible mappings from one projection to another (i.e., what category can be a specifier of what). Part of the acquisition process is determining what the content of Λ is. Evidence for this in particular languages will be found in the morphology and in the distributional patterns found in the primary linguistic data. It is plausible to assume that during acquisition of a particular language's Λ, certain LTFs are universally ruled out, as they do not track the properties of the relevant UEP. For example, we might impose conditions on elements of Λ universally that will restrict the way that subparts of the UEPs are instantiated in a particular language. Such conditions might, for example, bar LTFs that map from higher to lower categories in a UEP (imposing a sequence on the particular extended projections instantiated in a language), or they might bar LTFs that map to start categories (effectively defining a set of such start categories). We will see one such condition when we consider the definition of syntactic relations in chapter 3, which will ensure that in an instantiated structure the labels in an extended projection go "up."

The idea is that during development, a child acquiring a language will successively manipulate the LTFs in Λ, subject to whatever universal constraints apply, so that the fully developed language has only a subset of the possible LTFs.
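To make this picture concrete, Λ can be modeled as a subset of the Cartesian product in (35), with one of the candidate universal conditions just mentioned (barring LTFs that map from higher to lower categories in a UEP) implemented as a filter. This is only a sketch under my own simplifying assumptions: a four-member CLex and a single nominal UEP.

```python
from itertools import product

CLEX = ["N", "Cl", "Num", "D"]
FULL_LAMBDA = set(product(CLEX, CLEX))   # (35): all 16 possible LTFs

# One candidate universal condition from the text: bar LTFs that map
# from a higher (or equal) to a lower position in the UEP.
UEP_N = ["N", "Cl", "Num", "D"]

def goes_up(ltf, uep=UEP_N):
    source, target = ltf
    return uep.index(target) > uep.index(source)

acquired = {ltf for ltf in FULL_LAMBDA if goes_up(ltf)}
print(("N", "Cl") in acquired, ("D", "N") in acquired)   # True False
```

The pruned set still overgenerates relative to a real grammar (it keeps, e.g., <N, D>), which is in the spirit of the text: further conditions, and the primary linguistic data, narrow Λ down during acquisition.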
Additionally, I assume that the labels in the LTFs may have further idiosyncratic properties that will have an impact on the morphosyntax of the learned language (these correspond to the second-order interface features of Adger and Svenonius 2011). These properties will be learned on the basis of the primary linguistic data and are not properties of elements of CLex directly (given that CLex is just the set of universal category labels)—for example, properties that identify a piece of labeled structure as the locus for spellout (see section 3.5) or as requiring a specifier. These properties will not be a focus of this investigation, but see Adger and Svenonius 2011 for discussion.

We now tackle the labeling problem and define a unified labeling function, where α and β are syntactic objects:

(36) a. Transition Labeling
        If α, β ∈ γ, then Label(γ) = some L ∈ CLex, such that there are (possibly nondistinct) f and g ∈ Λ such that f(Label(α)) = g(Label(β)) = L.
     b. Root Labeling
        Label({√x}) = some L ∈ {N, V, A}

What (36) does is the following: it says that the label of a syntactic object built by Merge is dependent on (but not identical to) the label of both of its subconstituents. Rather than drawing a functional category from the lexicon and Merging it with some syntactic object, and hence labeling the result, the system capitalizes on the idea that the order of functional categories must be given anyway. This order is specified universally in the UEPs, but in any particular language, Λ will specify allowable subsequences of the universal orders (as well as allowable transitions from one sequence to another). So rather than having a functional lexicon, we simply use the antecedently given order of functional categories in a language as the source of labeling information. The label of some syntactic object is L if there is a transition from the labels of that object's subconstituents to L.
I will (rather laxly) use a function or an ordered-pair notation for LTFs, depending on what makes best expositional sense.

Structure is then built by Merge but labeled by (36). However, we have not yet specified how Λ is constrained by the UEPs. It cannot be the case that every LTF must be within a UEP, or else specifiers would be impossible. Rather, we must ensure that, for any structure, at least one LTF must be within a UEP. I will do this in the next chapter, but roughly, the proposal is tied to the interpretability of a labeled structure. In any particular structure, there will be at least one root whose Self Merge is labeled by some category that is not the output of an LTF (i.e., it will effectively be the start category of an extended projection instantiated as a series of labeled structures in a containment relation—note that it need not be the start category of a UEP). This means that, in a particular language, we can identify Rooted Extended Projections (REPs): they are subparts of structures that track UEPs in a language. In chapter 3, we will see how REPs are used to define syntactic relations and how a general condition on syntactic relations effectively forces every structure in a language to contain at least one REP. This constraint restricts the LTFs in Λ to be just those that track UEPs, plus a set of LTFs that license specifiers.

Architecturally, then, we have UEPs, given by UG; Λ, a result of the acquisition process allowing only certain transitions between labels; and a condition on the interpretability of structures that forces the existence of an extended projection relation in every structure. This condition will be the major focus of the next chapter.
Let us now return to the workings of the Labeling Function: for unary branching structures, the system builds on the fact that {A} = {A, A}, so a label for {A} can be calculated by seeing if there are LTFs ∈ Λ that will take us from the label of A to another label, which will be the label of {A}. Given that f and g can be nondistinct, all we need is that there is some function f that will take us from the label of A to that of {A}. Assume that Λ for English is partially specified as in (37), where Cl is the category that a classified noun bears (see Borer 2005a; Svenonius 2008) and Num is the category that a counted nominal has. As usual, D is the category of a determined nominal projection.

(37) Λ = {<N, Cl>, <Cl, Num>, <Num, D>, . . .}

We have, then, the following kind of derivation:

(38) a. Merge √cat with √cat = {√cat, √cat} = {√cat}
     b. Label({√cat}) = N by Root Labeling
     c. Merge {√cat} with {√cat} = {{√cat}, {√cat}} = {{√cat}}
     d. Label({{√cat}}) = Cl because there are f and g ∈ Λ such that f(N) = g(N) = Cl (f and g nondistinct = <N, Cl>)
     e. Merge {{√cat}} with {{√cat}} = {{{√cat}}, {{√cat}}} = {{{√cat}}}
     f. Label({{{√cat}}}) = Num because there are f and g ∈ Λ such that f(Cl) = g(Cl) = Num
     g. Merge {{{√cat}}} with {{{√cat}}} = {{{{√cat}}}, {{{√cat}}}} = {{{{√cat}}}}
     h. Label({{{{√cat}}}}) = D because there are f and g ∈ Λ such that f(Num) = g(Num) = D

The function Label takes an unlabeled syntactic object as its argument and provides it with a label. This function involves minimal search: it inspects the unlabeled object to see what it immediately contains and uses that information to provide the new label. The new label, however, is not identical to the label of what is contained in the object; rather, it is calculated from that label, conforming with the language-particular instantiation of the universally given extended projection of the root category.
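The labeling function in (36) and the derivation in (38) can be sketched as a small search over Λ. Here Λ is the partial English specification from (37), Root Labeling is simply stipulated to return N for √cat, and the helper names are my own:

```python
LAMBDA = {("N", "Cl"), ("Cl", "Num"), ("Num", "D")}   # (37)

def transition_label(label_a, label_b):
    """Transition Labeling (36a): return an L such that some f, g in
    LAMBDA map Label(a) and Label(b) respectively to L, else None."""
    targets_a = {t for (s, t) in LAMBDA if s == label_a}
    targets_b = {t for (s, t) in LAMBDA if s == label_b}
    common = targets_a & targets_b
    return next(iter(common)) if common else None

# Derivation (38): Root Labeling gives N (38b); iterated Self Merge then
# climbs the extended projection, with f and g nondistinct at each step.
label = "N"                      # stipulated Root Labeling for √cat
labels = [label]
for _ in range(3):
    label = transition_label(label, label)
    labels.append(label)
print(labels)   # ['N', 'Cl', 'Num', 'D']
```

Passing the same label twice models the unary (Self Merge) case, where f and g are nondistinct; the loop halts at D because the partial Λ in (37) defines no transition out of D.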
We can represent this as a tree structure that contains a sequence of labels erected above a lexical category:

(39) [D [Num [Cl [N √cat]]]]

This is a "telescoped" representation in the sense of Brody 2000a, although built by different means. The structure in (39) instantiates the UEP of N as a syntactic object that is the result of a particular derivation in a particular language. This object has a hierarchical structure such that each element contains a further syntactic object (until we hit the start category N, which contains only the root √cat). These are the "terms" of Chomsky (1995b). Furthermore, each term is associated with a label. Given that all of the labels in (39) are drawn from UEPN, and that the labels are organized into a sequence comporting with UEPN by virtue of the labeling function and Λ, (39) is an Extended Projection of N instantiated in a structure rooted by √cat. Merge plus the labeling function will produce instantiated extended projections that will consist of a root contained in a series of higher structures each bearing a label that respects the relevant UEP sequence. This exemplifies the notion of REP mentioned previously and discussed in more depth in the next chapter. We have now provided a particular solution to the labeling problem by defining a system that treats the labels of categories immediately containing both specifiers and complements in a unified fashion. In a sense, the proposal here deconstructs the structure-creation and structure-labeling components of a classical production system (e.g., a phrase structure grammar), using Merge for the former and extended projections plus a labeling function for the latter. To see how the whole system works for the Specifier Problem, let us look at the most recalcitrant situation: specifier of v*. For concreteness, take, for example, v* as it appears in an unergative structure with some verb (say √jump).
To create a structure where v* has both a complement and a specifier, there simply have to be, in Λ, two LTFs:

(40) Λ = {. . . , <V, v*>, <D, v*>, . . . }

<D, v*> maps from labels in one extended projection to another, whereas <V, v*> follows the extended projection of V. It follows from these, and from the unified definition of binary labels, that in a binary Merge structure, v* can label a structure that immediately contains both V and D. More concretely, a derivation of Lilly jumps (assuming, possibly counterfactually, that Lilly is a single lexical item) is:11

(41) a. Self Merge √jump = {√jump}
     b. Label({√jump}) = V
     c. Self Merge √Lilly = {√Lilly}
     d. Label({√Lilly}) = D
     e. Merge {√jump} and {√Lilly} = {{√jump}, {√Lilly}}
     f. Label({{√jump}, {√Lilly}}) = v* because there are f and g ∈ Λ such that f(V) = g(D) = v*

As a tree structure:

(42) [v* [D Lilly] [V √jump]]

The same kind of derivation will also be needed for objects of transitives. These also require binary Merge. In this system, objects are introduced via a piece of functional structure, rather than being directly Merged with a root, because it is Self Merge of a root that provides the "start category" for the relevant extended projection (see Section 2.3.1 for further discussion). Following Adger, Harbour, and Watkins (2009), I simply call this functional structure O (cf. the aspectual projections of Borer 2005a or Ramchand 2008). An object will then be licensed as there is an LTF that maps from V to O, and similarly one that maps from D to O.

(43) [O [D Anson] [V √bite]]

Unaccusative and zero-place predicates involve projection of v rather than v* above O or V, respectively (with potentially richer structures required, as in Ramchand 2008). For example, we will have:

(44) a. [v [O [D Anson] [V √fall]]]
     b. [v [V √rain]]

The binary structures introducing specifier of v* and specifier of O presented in (42) and (44) look very similar, qua structures, but it is crucial that what is interpreted is the labeled structure, so that it is the presence of O versus v* (vs. v) that signals the correct interpretation of the single argument in each case. This is in contrast to the standard view that takes structural position and labeling to be relevant (Hale and Keyser 2002). There are, in this system, no true complements of lexical roots, only specifiers of labeled structures built above such roots. A transitive, then, can be represented as:

(45) [v* [D Lilly] [O [D Anson] [V √bite]]]

More generally, specifier-head-complement structures are simply the case of Merge(X, Y) where X ≠ Y, whereas unary branching complement structures are built when X = Y. Labels are given in the same way for both: via a labeling function that maps from the information already present in the derivation. Some transitions are within an extended projection whereas others map from one extended projection to another. I use this distinction in the next chapter to define the notion of complement and specifier. With this in hand, the Specifier Problem and the Labeling Problem both melt away. Recall that we had:

(46) a. The Specifier Problem: In {α, β}, where neither α nor β is a lexical item, how is the label to be determined?
     b. The Labeling Problem: Is there a unified labeling algorithm that will suffice for all cases, and if so, what is it?

In a system where labels are determined by the LI status of α or β, and where neither is an LI, there is no obvious answer to the Specifier Problem. In a system where the label is determined by whichever of α or β probes, we need to inspect further the properties of the constituents of α and β, or we need to stipulate that the probing capacity is somehow able to project upward.
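The binary labeling step in (41f), which looks for a label L such that some f and g in Λ map both daughters' labels to L, can be sketched in the same style (again, my own illustrative encoding, not the author's):

```python
# Sketch of binary labeling (my encoding): a label L for {alpha, beta} exists
# iff there are (possibly nondistinct) f, g in LAMBDA with
# f(Label(alpha)) = g(Label(beta)) = L.

LAMBDA = {("V", "v*"), ("D", "v*"), ("N", "Cl")}

def label_binary(label_a, label_b, ltfs):
    """Return the unique common target label for the two daughters, if any."""
    targets_a = {out for (inp, out) in ltfs if inp == label_a}
    targets_b = {out for (inp, out) in ltfs if inp == label_b}
    common = targets_a & targets_b
    return common.pop() if len(common) == 1 else None

print(label_binary("V", "D", LAMBDA))  # v*   (structure (42), Lilly jumps)
print(label_binary("N", "D", LAMBDA))  # None (no common target: unlabelable)
```

The second call illustrates why a structure is unlabelable when Λ supplies no common target for the two daughters' labels.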
Even accepting this, some statement of the relationship between probing or selecting and the labelhood or locushood of α and/or β needs to be made. The problem arises because of the assumption that it is properties of heads that are relevant. Under the alternative system I have sketched in this section, the Specifier Problem just does not arise. We jettison heads and adopt LTFs in their place. These map from one category to another. The higher label is dependent on the lower, but it is not identical to it. There is a single general algorithm for the use of these functions in labeling that applies uniformly to unary and binary structures, providing a positive answer and concrete proposal for the Labeling Problem. Importantly, any system will need to have some statement of the ordering of the various functional categories. The standard system has heads, and the ordering of functional heads is either given axiomatically (as in Starke 2001; Adger 2003) or by selection. However, any system that gives the order via selection needs either to allow disjunctive selectional requirements (to allow for optional functional heads intervening between selector and selectee) or to assume that there are no optional projections. In the latter case, statement of the selectional properties of the functional heads is once again simply axiomatic. Given this, the standard system needs to state the transition from one functional head to the next, whether via an independent constraint or via selection. So the standard system has what Williams calls "functional embedding," however executed, plus the Specifier Problem and the Unified Labeling Problem. The system I have proposed here as an alternative also has functional embedding but lacks the other problems. A major reason that labeling is generally taken to be endocentric is inclusiveness (Chomsky 1995b).
However, whether we label structure in the way I have proposed or we label structure by drawing heads from a functional lexicon is actually immaterial. Inclusiveness would be trivially satisfied if we were to build all syntactic structure into lexical items and allow that structure to project via simple syntactic processes; it is not the lexical nature of the source of information that is crucial to inclusiveness: inclusiveness effectively bars the introduction of descriptive technology during the course of a derivation (see Chomsky 2008), minimizing the addition of information to the derivation. But adding information via Merge of functional heads or adding information via LTFs are effectively equivalent in these terms (in fact, Λ plays a role that is equivalent to that of a functional lexicon in terms of how it introduces information into the derivation; the only real difference is that it does not introduce structure). I conclude that the system presented here is an improvement over the standard Bare Phrase Structure system, at least inasmuch as it sidesteps the Labeling Problem and the Specifier Problem but does not increase the general complexity of the system. LTFs replace the lexicon of functional categories, the equivalent of extended projections or functional sequences are required to organize structures in both systems, but in the new system Merge is simplified and there is a unified labeling algorithm. Structure building is sharply separated from labeling, which is taken to be dependent on a language-particular instantiation of a universal sequence of categories.12 In the next section, I explore one important theoretical consequence of the new system.

2.3.1 No Complements of Lexical Roots

The system that has been set up has an interesting corollary: a root cannot Merge with another syntactic object. Recall the definition of the function Label:

(47) a.
Transition Labeling
If α, β ∈ γ, then Label(γ) = some L ∈ CLex, such that there are (possibly nondistinct) f and g ∈ Λ such that f(Label(α)) = g(Label(β)) = L.
     b. Root Labeling
Label({√x}) = some L ∈ {N, V, A}

I have already argued for the necessity of Transition Labeling. All systems have some means of specifying the embedding relation between one functional category and another. All systems, equally, need a way of specifying the category of a root, whether by stipulation as a lexical property or via labeling in the syntactic system by some category-bearing element. Root Labeling also, therefore, has to be stated in any system (note that this issue is orthogonal to the question of the underspecification of roots for category information—even if roots do not carry syntactic information, they must be embedded in something that does carry some syntactic information). Now, if some root (e.g., √picture) were to Merge with some previously constructed syntactic object (say, the PP of Lilly), then we have (simplifying the structure of of Lilly):

(48) a. Merge(√picture, {of Lilly}) = {√picture, {of Lilly}}
     b. Label({√picture, {of Lilly}}) = L if there are LTFs f, g ∈ Λ such that f(Label(√picture)) = g(Label({of Lilly})) = L
     c. By hypothesis, Label({of Lilly}) = P, and assume that there is an LTF g = <P, N>, allowing a prepositional element to combine with the extended projection of a nominal root
     d. But √picture is not in the domain of Label, because Root Labeling applies only to {√picture}

The crucial step here is (48d). The way a derivation of a {√, Complement} structure would have to work would require the Label function to apply to a root, but roots are not in the domain of that function. The root has to Self Merge, which creates a structure which can be labeled, but then we do not have a binary {√, Complement} configuration. This means that no Label can be determined.
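The failure at (48d) comes down to a domain restriction: Root Labeling applies to the Self-Merged object {√x}, never to the bare root. A small sketch, in my own encoding with an illustrative set representation, makes the asymmetry concrete:

```python
# Why (48) fails (my illustrative encoding): a bare root is outside the
# domain of Label, so Transition Labeling cannot fire on {√picture, {of Lilly}}.

ROOT_CATEGORIES = {"√picture": "N"}  # Root Labeling: Label({√picture}) = N

def label_of(obj):
    """Return the label of a syntactic object; bare roots receive none."""
    if isinstance(obj, frozenset) and len(obj) == 1:
        (inner,) = obj
        if isinstance(inner, str) and inner.startswith("√"):
            return ROOT_CATEGORIES.get(inner)  # Root Labeling
    return None  # a bare root (or unmodelled object) is not in the domain

print(label_of("√picture"))               # None: not in the domain of Label
print(label_of(frozenset({"√picture"})))  # N: Self Merge feeds Root Labeling
```

Only the Self-Merged singleton is labelable, which is exactly why the binary {√, Complement} configuration in (48) can never be assigned a label.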
We therefore rule out the following, in the general case:

(49) *{√root, XP}

We will explore this consequence as we go on, but in brief it means that expressions like (50) cannot have the structure attributed to them by most theories of syntax since Chomsky 1970 (but cf. Kayne 2010):

(50) the color of the car

Rather, the structure of such examples must be:

(51) [[N √root] . . . PP]

That is, the PP complement is Merged in a position outside of the projection of the lexical root. This will rule out Bare Phrase Structure representations (Chomsky 1995a) of the following sort:

(52) [picture picture [of of Lilly]]

Instead we have, at best, the PP being a daughter of the category containing the root:

(53) [F [PP of Lilly] [N √picture]]

I place of Lilly to the left in this tree because, if F is a category in the extended projection of N, then of Lilly is a specifier (because N is the complement of F). We will look at the relevant notions of specifier and complement in the next chapter, but, anticipating the issues discussed there, this consequence of the system raises two analytical questions:

(54) a. The Ordering Question: Given the PP is not a complement to the root, why can it occur to the right of the root, assuming that syntax disallows rightward specifiers (Kayne 1994 and the next chapter)?
     b. The Etiology Question: Given the PP is outside of the projection of the root, how is the semantic relation between the root and the PP negotiated?

The same issues, of course, arise for verbal structures. The following is ruled out:

(55) [V √arrive DP]

Rather we have:

(56) [[V √root] . . . DP]

Here the DP must actually be a specifier of some element within the extended projection of V. There are, of course, many proposals that separate the root from its object, generating arguments of the verb in specifier position (Travis 2000; Borer 2005b; Ramchand 2008, etc.); in the theory of phrase structure developed here, the alternative standard view is not an option. We are forced into, rather than simply stipulating, the introduction of arguments by syntactic structure. It is important to see just where the system developed here differs from the standard system. In the standard system, it is possible to Merge an XP with a root:

(57) [XP √arrive DP]

Moreover, this classical head–complement structure is usually interpreted as involving an internal argument of a predicate: in this framework, the thematic relation between a verb root and its arguments is syntactically instantiated as a maximally local relation between the category V and the category of the argument. On the proposal here, what is available is:

(58) [X DP [V √arrive]]

Here, the "internal argument" can never be in a maximally local syntactic relation with the lexical root. At most, it is the specifier of a category that takes the category containing the root as its complement. This is not a local relation between argument and lexical root at all. In fact, there is nothing in the theory developed here that disallows various elements of functional structure to appear before introduction of the "internal argument":

(59) [X DP [F [G [V √arrive]]]]

That is, the current system allows the dissociation of argument introduction from the lexical root entirely—something that is unexpected on the standard view. We are then left with an empirical question: is there evidence for such dissociation? That is, do we find cases where syntactic functional structure is built above a lexical root before the introduction of the argument? If we do, then the current system is superior to the standard one.

2.3.2 Spellout of Functional Categories

The denial of the existence of functional heads forces us into a position that takes the phonology and morphology of functional morphemes to be read off of labeled structures, rather than being able to adopt the standard position, where they are functional heads Merged as independent pieces of structure.
There are two cases to consider:

(60) a. bound functional morphemes (bfms)
     b. free functional morphemes (ffms)

For bfms, I adopt a version of the approach advocated by Brody (2000a). Take a structure like the following, where each H is in the same extended projection with E as the root. (Brody draws this tree slanted toward the right; in the current system, of course, no such slant is necessary for unary structures.)

(61) [H3 [H2 [H1 √E]]]

One of the insights of Brody's system is that the syntactic complement line corresponds to a morphological structure. In (62), each h is the morpheme corresponding to the category H.

(62) [h3 [h2 [h1 e]]]

For Brody, (62) is a morphological specifier structure, with a general principle, that specifiers precede heads, ensuring the linear order of the affixes. I do not adopt Brody's Mirror Axiom here, so I will assume that the linear order is stipulated for elements in a complement line (and, in fact, may be dependent on the linearization properties of particular morphemes, as in Bye and Svenonius 2010, allowing certain limited violations of the Mirror Principle). However, the scope order of the labels of syntactic structure is, at heart, the source of the sequential order of affixes. Following Brody, we can assume that it is a property of the label of a syntactic object that is responsible for where a sequence of morphemes is spelled out, so that, whereas one language spells out h2h3 at H2, another might spell out h2h3 at H3, with concomitant ordering effects if H2 and H3 have specifiers. This "spellout here" diacritic, which replaces head movement, will be a second-order feature of the label, acquired during the acquisition of Λ.
Turning to ffms, in Brody's system (e.g., Brody 2000a, Brody 2000b), if x is the complement of y then y is suffixed to x; that is, the syntactic complement line corresponds to a morphological specifier relation (this is the Mirror Axiom of Brody's theory). It then follows that if y is not suffixed to x, then x cannot be the complement of y. This leads Brody to take separate morphological words in the same extended projection to involve a wiggly complement line. For example, if eh1 is a ffm, and h2h3 is a ffm, then the structure will look as follows:

(63) [H3 [H2 [H1 E]]] (with [H1 E] in the specifier of H2)

Here, H1 is a specifier of H2, which means that H1 and H2 do not correspond to a single morphological word. The morphological words in this structure are those that correspond to (H2, H3) and (E, H1). There is an alternative to the wiggly word approach, sketched in Williams 2003 (see also Svenonius 2012 for further arguments). Williams does not assume that there is a Mirror Axiom. Rather, he takes morphological words to span sections of an extended projection:

(64) [H3 [H2→h2h3 [H1→eh1 √E]]]

Here the complement line of functional categories above the root is H1–H2–H3, but the free lexical word is the bimorphemic eh1, which spans the structure H1–E and is spelled out at H1. The free functional word h2h3 spans the structure H3–H2 and is spelled out at H2. This approach makes immediate sense of fusional morphology, in that a single morpheme can correspond to a number of functional category labels. Rather than the bimorphemic h2h3, we could have the single fusional morpheme h5 spanning H3–H2. I will adopt this approach to ffms in what follows, taking such morphemes (i.e., morphemes without lexical roots) to be the spellouts of spans of functional categories.
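Williams-style spans as in (64) can be illustrated with a toy spellout procedure. The vocabulary entries and the greedy longest-span search below are my own illustrative assumptions, not a claim about Williams's or Adger's implementation:

```python
# Toy spellout of spans, in the spirit of (64) (my illustrative encoding).
# A vocabulary maps contiguous spans of the complement line to morphemes.

COMPLEMENT_LINE = ["E", "H1", "H2", "H3"]              # root up through the EP
VOCAB = {("E", "H1"): "e-h1", ("H2", "H3"): "h2-h3"}   # hypothetical entries

def spell_out(line, vocab):
    """Cover the complement line with vocabulary spans, longest span first."""
    out, i = [], 0
    while i < len(line):
        for j in range(len(line), i, -1):  # try the longest span starting at i
            if tuple(line[i:j]) in vocab:
                out.append(vocab[tuple(line[i:j])])
                i = j
                break
        else:
            return None  # some category lacks an exponent: spellout fails
    return out

print(spell_out(COMPLEMENT_LINE, VOCAB))  # ['e-h1', 'h2-h3']
```

A fusional morpheme like h5 would simply be another entry for the span ("H2", "H3"), which is what makes the span approach a natural fit for fusional morphology.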
In the system developed here, then, there are no functional categories qua lexical items, which means that free functional morphemes must either be spellouts of fragments of structure or must actually have lexical roots in them.

2.4 Conclusion

I have argued in this chapter for a new view of how structures are built and labeled. The resulting system provides a unified solution to the problem of labeling. A consequence of this unified theory of labeling is that a root must be embedded in some structure, which receives a label, before any further phrase can be Merged. This theory contrasts with the standard view of phrase structure in a number of ways. Because the notion of complement line is reserved for extended projections, the standard view of the syntax-semantics interface, which correlates semantic internal argumenthood with syntactic complement status, is taken to be false. This implies what one might call complete severance: not only is the external argument severed from the predicate and introduced by a (semi)functional category v (Kratzer 1996), but the internal argument also has to be introduced outside the lexical category (Borer 2005b; and more recently, work following Pietroski's conjunctivist program for semantics, such as Pietroski 2005; Hunter 2011; Lohndal 2011). This amounts to a deconstruction of the notion of θ-domain (or θ-structure or first phase) as a series of syntactic Merge operations, each of which correlates with the introduction of a semantic argument. Rather, arguments can be introduced anywhere. This might seem to introduce a puzzle into our syntactic system, because there is now no notion of a syntactic domain that correlates with the semantic function of argument introduction.
However, notice that the notion of a θ-domain is, under standard assumptions, just a stipulation about particular Merge operations.13 In fact, in the standard system of Chomsky (1995b et seq.), it is only first Merge (the complement of the lexical category) that counts as Merge to the lexical category, and nothing, beyond a stipulation, guarantees that subsequent Merge operations will correlate with argument introduction. Complete severance simply removes the stipulation. However, although there is no theoretical puzzle, one might think that there is an empirical one: the standard system's assumption that all arguments are introduced before any other functional structure has classically been adopted with little ill effect. In the remainder of this book, however, I will argue that, at least for relational nominals, the apparent internal argument is in fact introduced at a point in the derivation where much other functional structure has already been built. That is, removal of the stipulation that there is a special θ-domain seems, in nominals at least, to be well motivated. This then requires a reappraisal of verbal syntax, allowing more interspersal of modifiers and arguments than is classically assumed, a job that I will not undertake in this book. In addition to the empirical question about locality just mentioned, we have two more analytical puzzles: the Ordering Question (if traditional complements are actually specifiers, why can they occur to the right of their root?) and the Etiology Question (what is the source of the semantic relation between the root and the "argument" if it does not inhere in the root?). We also have a theoretical puzzle to solve: the structures built by the syntactic system developed in this chapter are purely symmetrical. There is nothing in the structure-building operations, or in the labeling algorithm, that allows one to distinguish traditional asymmetries in syntactic relations (such as complement vs.
specifier); all that we have are binary or unary structures bearing labels (with no representation of headedness in the structures). I address this theoretical puzzle first, further developing the system with notions that introduce the relevant asymmetry at the semantic and phonological interfaces. The system so developed imposes some restrictions on movement (specifically, it rules out roll-up and remnant roll-up derivations). With these in place, I then turn to the analytical problems (the Ordering Question and the Etiology Question) and show how the theoretical assumptions defended here play out in the analysis of relational nominals in chapters 4, 5, and 6, and how the expectation that there is no θ-domain is met.

Chapter 3
Syntactic Interpretation

3.1 Introduction

In the system I developed in the last chapter, complement and specifier are not structurally distinguished by derivational timing, as in the standard First Merge versus Second Merge definitions (Chomsky 1995b). In fact, as far as the syntactic computation goes, all cases of Merge give a perfectly symmetrical structure. However, this seems to be empirically incorrect. Numerous asymmetries must be determined for appropriate interpretation by the interface systems. For example, for a label that is interpreted as a relation, two separate arguments have to be identified (e.g., v* needs to determine which of its two dependents is the agent and which is the event). Similarly, the spellout systems seem to be sensitive to whether a moved expression can appear to the right or the left of the structure that it targets (i.e., movement to a right specifier seems to be disallowed, at least in spoken languages). In the theory developed here, the only information available to the derivation of the structure is the extended projection of the root built via the labeling function. Accordingly, I use this information to define two syntactic relations, similar to the classical notions of complement and specifier.
These syntactic relations are then used by the interface systems to constrain the composition of the information in that structure both semantically and in terms of linear order. The idea here is not to offer any new claims about how syntactic relations affect the interface systems. The effect they have will be familiar: just as in the classical view, complements compose semantically first, followed by specifiers, and when labels of mother and daughter are identical (segments), only one is interpreted; specifiers are, following Kayne (1994), barred from appearing on right branches (alternatively, specifiers are linearized to the left of their head). However, the definitions of syntactic relations I offer to capture these classical views on the interpretation of syntactic relations have a further effect: they make the generation of roll-up structures impossible. This, we will see in following chapters, turns out to be the right result.

3.2 I-Complements and I-Specifiers

Let us then define the notions we need, based on the existence of an extended projection relation in a structure. Recall that UG provides a small set of universal extended projections (UEPs). In a particular language, a root will Self Merge and the resulting structure will be labeled by a category from a UEP. Some roots will be labeled by the start category of a UEP, but, given the way that Λ is defined, nothing restricts roots to be so labeled. That is, there is no constraint imposed that blocks the Self Merge of a root being labeled with some other category in CLex, with the relevant LTFs in Λ then allowing the elaboration of an extended projection above this. For example, it is uncontroversial that eat will be contained in a structure labeled by V; however, nothing stops us taking a modal verb, say will, to be contained in a structure labeled by a category Modal. The category Modal is not in the UEP of V, and, in English, verbs do not inflect for modality.
This allows us to say that, in English, there are two distinct rooted extended projections (REPs): one started by a category Modal that immediately contains the root √will, and one started by V immediately containing √eat:

(1) a. [C [T [Modal √will]]]
    b. [C [T [v* [V √eat]]]]

The start category of an REP bears a selectional relation to a subset of RLex, with the categories V, N, and A being very liberal in the roots they combine with, whereas categories like Modal and Pass are much more restricted. The view that the functional category selects a set of roots is similar to the proposal made by Kayne (2006). Contrast this with C or T, for example, which do not start REPs in English. UG, then, provides for an REP in a language to be any subsequence of a UEP, effectively parceling out UEP information into REPs started by the categories immediately containing roots. It is the REP that is relevant for determining in any particular structure what the syntactic relations will be, as we will see directly. There is an intricate relation between Λ and REPs. An REP is a subportion of a structure, where, for every structure that immediately contains another structure, the label of the first is higher in the relevant UEP than the label of the second. The labeling of any structure is, however, dependent on what LTFs are in the Λ of the particular language. If in any immediate containment structure there must be an REP, then acquisition of Λ is equivalent to determining which possible subparts of the UEPs are part of the language and which other label transitions are available. The notion of REP gives us the wherewithal to define the relevant syntactic relations. The intuition is that if an REP relation holds between mother and daughter we have a kind of complementation relation (cf. Williams's (2003) notion of functional complementation), whereas if it does not hold, we have a specifier-like relation.
More precisely, we can define the notions of i(nterpretive)-complement and i(nterpretive)-specifier:

(2) In a unary labeled structure [γ β], β is assigned the syntactic relation of being an i-complement of γ iff there is a rooted extended projection Σ such that (i) β and γ ∈ Σ and (ii) label(γ) ≥ label(β) in Σ.

(3) In a binary labeled structure [γ α β],
    a. β is assigned the syntactic relation of being an i-complement of γ iff there is a rooted extended projection Σ such that (i) β and γ ∈ Σ, (ii) label(γ) ≥ label(β) in Σ, and (iii) α ∉ Σ; and
    b. α is assigned the syntactic relation of being an i-specifier of γ iff β is an i-complement of γ.

I-complement and i-specifier relations hold between mothers and daughters and, when there is more than one daughter, provide an asymmetry that can be exploited by the interface systems, both for meaning and linearization. I have adopted a view of i-specifiers that takes them to be defined only when an i-complement is defined, rather than allowing them to be defined independently. This reflects the idea that it is the i-complement relation defined by extended projections that is fundamental, and the i-specifier effectively breaks the symmetry that emerges from a unary i-complement relation. I will also assume the following condition on labeling as an interface condition.

(4) Full Interpretation of Labeled Structures (FILS)
In a labeled structure, there must be a unique successful assignment of syntactic relations to mother-daughter pairs.

The condition in (4) ensures that for any structure there is just one successful way of assigning syntactic relations, where assignments are differentiated by choice of REP, and it also ensures that in every structure it must be possible to discern an REP. Roots themselves will not fall under FILS, because an expression consisting of only a root is not a labeled structure (given that the root does not have a label).
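Definitions (2) and (3) can be stated procedurally. The sketch below is my own encoding, with an REP modelled as a map from labels to their heights in the extended projection; it returns the (i-complement, i-specifier) pair or nothing:

```python
# Procedural sketch of definitions (2)-(3) (my encoding). An REP is a map
# from labels to heights; a node is "in" the REP iff its label occurs there.

def assign_relations(mother, d1, d2, rep):
    """For binary [mother d1 d2], return (i-complement, i-specifier) or None."""
    for comp, spec in ((d1, d2), (d2, d1)):
        # (3a): comp and mother share the REP, mother is at least as high,
        # and the other daughter is outside the REP; (3b): spec follows.
        if comp in rep and mother in rep and rep[mother] >= rep[comp] and spec not in rep:
            return comp, spec
    return None  # no legitimate assignment of syntactic relations

REP_V = {"V": 1, "O": 2, "v*": 3}  # heights within the extended projection of V

print(assign_relations("v*", "O", "D", REP_V))    # ('O', 'D'): O is the i-complement
print(assign_relations("X5", "Y4", "Z3", REP_V))  # None: neither daughter in the REP
```

The first call reflects the transitive structure of chapter 2, where O is the i-complement of v* and the D is its i-specifier; the second reflects case (9) below, where no relation can be assigned.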
The intuition behind FILS is that the interfaces are to be presented with an unambiguous signal for the interpretation of syntactic structure so that meaning and linear order can be connected: that is, for any particular structure, there is a unique interpretation and unique linearization.1

3.3 Labeled Structures and the Impossibility of Roll-up Derivations

In a unary branching structure, the i-complement relation is trivial. If there is a single REP containing the two labels and the label of the containing object is higher in that extended projection than that of the contained object, then the contained object is an i-complement of the containing object.

(5) [X5 X3]

Here the subscripts give the height that the category has in the relevant extended projection and the identity of the letter specifies sameness of that extended projection (e.g., all Xs are in the extended projection of V, and all Ys in the extended projection of N). An i-complement relation is definable for a unary branching structure also in cases where the same label recurses, by virtue of the definition of i-complement involving ≥:

(6) [X3 X3]

If the height relations in (5) are reversed, there is no i-complement defined. However, the i-specifier relation is only defined when the i-complement relation is, so (7) lacks any syntactic relation between its substructures, violating FILS:

(7) [X3 X5]

Similar considerations rule out the following:

(8) [Y5 X3]

Turning to binary structures, there are three basic possibilities: both daughters are in different REPs from the mother; one is in the same REP, and the other in a different REP; or both daughters are in the same REP as the mother. The first case looks as follows:

(9) [X5 Y4 Z3]

Given that neither Y nor Z, by hypothesis, is in the same REP as X, no i-complement relation is defined, and so no i-specifier relation is definable.
The same, of course, will hold for cases where the daughters are both higher in an extended projection than the mother, because although the daughters of X₅ are potentially in the same REP as X₅, they are higher in that REP.

(10) [X₅ X₁₀ X₇]

Such structures are again ruled out by the requirement that there must be a legitimate assignment of syntactic relations to the (daughter) syntactic objects in (10).

The second case is the typical one. The definitions of i-specifier and i-complement mean that, in a structure like (11), we can uniquely determine both the i-complement and the i-specifier.

(11) [X₄ Y₁₀ X₃]

Given our definitions, X₃ is the i-complement of X₄, because there is an REP that contains both, X₃ is lower in that REP than X₄, and Y, by hypothesis, is not in that REP; Y₁₀ is then an i-specifier of X₄. We have the same assignment of syntactic relations if X₄ here is replaced with X₃, where the lower X₃ counts as an i-complement of the higher (note that this makes Y₁₀ an i-specifier; there is no separate definition of adjunct in this system, at least as it is developed here):

(12) [X₃ Y₁₀ X₃]

I-complement and i-specifier are equally defined in (13), where X₁₀ is an i-specifier of X₄: although X₄ contains X₁₀, X₄ is < X₁₀ in the REP of X, so X₁₀ cannot be an i-complement of X₄. However, X₃ can be an i-complement of X₄, in which case X₁₀ is successfully assigned the i-specifier relation.

(13) [X₄ X₁₀ X₃]

The most interesting case is the third, where both daughters are in the same UEP as the mother and both are lower in that EP, hence satisfying the conditions, at least potentially, to be i-complements. Consider (14).

(14) [X₅ X₃ X₄]

There are two possibilities for how this structure might be derived. The first is that X₃ and X₄ are externally Merged, containing different roots. In such a circumstance, the two daughters are in different REPs:

(15) [X₅ [X₃ √a] [X₄ √b]]

There is now an ambiguity in the assignment of syntactic relations.
There is an REP, rooted by √a, which contains both X₅ and X₃, with X₅ ≥ X₃. Furthermore, X₄ is not in this REP, and so this choice of REP allows assignment of the i-complement relation to X₃; however, there is also an REP, rooted by √b, which also meets the conditions for assignment of the i-complement relation, mutatis mutandis. It follows that there is more than one choice of REP that leads to successful assignment of syntactic relations to mother-daughter pairs, but FILS requires a unique successful assignment, and hence (15) is excluded by FILS. Violation of FILS in this structure breaks the link between sound and meaning, because what is an i-specifier to the semantic systems could be an i-complement to the linearization systems.

This effectively rules out structures where, for example, a D has a NumP daughter and an NP daughter. Such structures are typically taken to be ill formed anyway, although I know of no theory that rules them out as the present one does. The system developed here derives the result, generally taken to be true but never, to my knowledge, explained,² that if a category has a specifier that is in the same (U)EP, the specifier must be higher in that extended projection.³

The second possibility is also interesting from the perspective of current syntactic theory: X₃ is internally Merged from inside X₄; that is, there is a single root token in the structure and hence a single REP. This would entail the following kind of derivation:

(16) a. Build X₃.
b. Self Merge X₃ and Label the structure as X₄, using a Label Transition Function <3,4>.
c. Merge X₃ with X₄.
d. Label the resulting structure, using Label Transition Functions <3,5>, <4,5>.

(17) [X₅ X₃ [X₄ X₃ ...]]

For X₃ to be an i-complement of X₅, X₄ must not be in the same REP as X₃, but it is. Similarly, for X₄ to be an i-complement of X₅, X₃ must not be in the same REP as X₄, but it is.
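The double failure can be traced mechanically. The following is a minimal self-contained sketch, under my own illustrative encoding (the node records and REP token are my assumptions, not the book's): with a single root token, every X-labeled node lies on the same rooted extended projection, so clause (iii) of definition (3) fails whichever daughter we try as i-complement.

```python
# Standalone trace of derivation (16): a single root token means every
# X-labeled node in [X5 X3 [X4 X3 ...]] lies on one REP, so at the top node
# each daughter blocks the other from being an i-complement.

root = "root-a"                          # the single root token in (16)
x3_moved = {"height": 3, "rep": root}    # internally Merged copy of X3
x4 = {"height": 4, "rep": root}          # [X4 X3], via transition <3,4>
x5 = {"height": 5, "rep": root}          # top label, via <3,5>/<4,5>

def icomplement_ok(mother, b, other):
    """Definition (3a): b shares the mother's REP, the mother is at least as
    high in it, and the other daughter lies outside that REP."""
    return (b["rep"] == mother["rep"]
            and mother["height"] >= b["height"]
            and other["rep"] != mother["rep"])

# Try each daughter of X5 as the i-complement, the other as i-specifier.
successes = [icomplement_ok(x5, b, a)
             for b, a in ((x3_moved, x4), (x4, x3_moved))]
print(successes)   # [False, False] -- no assignment at all, violating FILS
```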
So neither daughter can be an i-complement (and, a fortiori, neither can be an i-specifier), violating FILS: there is no successful assignment of grammatical relations to this structure. The same result emerges if we replace X₅ with X₄, resulting in a configuration that mimics a classical adjunction structure:

(18) [X₄ X₃ [X₄ X₃ ...]]

Intuitively, the system excludes movement of part of an extended projection line to some position within that same projection line. That is, it follows from the system that a certain class of roll-up derivations (those that result in roll-up of the same extended projection) is impossible. For example, in a VP topicalization construction such as (19), the moved VP must actually be part of a different REP from that containing the auxiliary:

(19) ... and eat the mouse Lilly certainly will!

This rules out a set of standard analyses for these cases, where Lilly raises from the vP-internal subject position, followed by movement of the vP to some focus position in the C-domain (I annotate the trace of the lower v* with a P only for the sake of clarity, to signal that what has been moved is a phrase):

(20) [Foc [v*P ⟨Lilly⟩ [V [D the mouse] √eat]] [C [T Lilly ⟨v*P⟩]]]

Foc and C are in the same REP (that rooted by √eat), and Foc > C in that REP. However, v* is also in the same REP, so no i-complement relation can be assigned in the structure, violating FILS. However, if the modal in (20) begins its own REP, as in (21), then a different result emerges (recall that Modal is not in the same UEP as V; compare (15)):

(21) [Foc [v*P eat the mouse] [C [T Lilly [Modal ⟨v*P⟩ [Modal √will]]]]]

In this structure, I have assumed that the modal auxiliary will is a member of RLex. As a root it must Self Merge, and the resulting syntactic object is labeled Modal by Root Labeling.
The structure labeled v* is then Merged with the object labeled Modal, which recurses (i.e., there is a Label Transition Function that takes us from v* to Modal and from Modal to Modal), but Modal is not in the REP of V (although, of course, there may be categories contributing modal semantics that might be in both the REP of V and the REP of Modal). In (21), there is an REP (the REP rooted by √will) containing Foc and C and not containing v*, so there is a successful assignment of syntactic relations to the structure where C is the i-complement of Foc and v*(P) its i-specifier. There is another potential assignment here, where Foc, C, and v* are all in the REP rooted by √eat, but such a choice of REP fails to assign syntactic relations, as detailed in the discussion of (17). It follows that there is a unique successful assignment of syntactic relations to the structure, and FILS is satisfied.

The system actually has a broader consequence if specifiers are generally phasal (e.g., Adger 2003), and movement from a phase first requires movement to the specifier of that phase (Chomsky 2008). With these assumptions in place, it follows that movement of a part of an extended projection (i.e., movement of an i-complement) is impossible in general, given that to move part of an extended projection out of a specifier, we must first move to a specifier of that extended projection. This is legitimate in the standard system, but in the system developed here it will always give rise to the FILS problem just outlined. Of course, movement of a true i-specifier from an i-specifier (e.g., movement of Lilly to [Spec, T] above) is legitimate. We derive, then, the consequence that only i-specifiers can move and that apparent cases of movement of a part of an extended projection must rather be movement of an i-specifier in an extended projection started by some root, where the upper part of that extended projection and the upper part of the extended projection of the apparentl