Generative Grammar and the Faculty of Language: Insights, Questions, and Challenges*

Noam Chomsky (Massachusetts Institute of Technology)
Ángel J. Gallego (Universitat Autònoma de Barcelona)
Dennis Ott (University of Ottawa)

1. Introduction

Generative Grammar (GG) is the study of linguistic capacity as a component of human cognition. Its point of departure is Descartes' observation that "there are no men so dull-witted or stupid […] that they are incapable of arranging various words together and forming an utterance from them in order to make their thoughts understood; whereas there is no other animal, however perfect and well endowed it may be, that can do the same" (Discours de la méthode, 1662). Studies in comparative cognition over the last decades vindicate Descartes' insight: only humans appear to possess a mental grammar—an "I-language," or internal-individual language system—that permits the composition of infinitely many meaningful expressions from a finite stock of discrete units (Hauser et al. 2002; Anderson 2004; Chomsky 2012a, 2017).

The term Universal Grammar (UG) is simply a label for this striking difference in cognitive capacity between "us and them." As such, UG is the research topic of GG: what is it, and how did it evolve in us? While we may never find a satisfying answer to the latter question, any theory of UG seeking to address the former must meet a criterion of evolvability: any mechanisms and primitives ascribed to UG rather than derived from independent factors must plausibly have emerged in what appears to have been a unique and relatively sudden event on the evolutionary timescale (Bolhuis et al. 2014; Berwick & Chomsky 2016).

* For feedback and suggestions, we are indebted to Luigi Rizzi and Juan Uriagereka. Parts of this paper are based on a Question & Answer session with Noam Chomsky that took place at the Residència d'Investigadors (Barcelona) on November 6, 2016. We would like to thank the students who helped with the transcription of that session: Alba Cerrudo, Elena Ciutescu, Natalia Jardón, Pablo Rico, and Laura Vela. Ángel J. Gallego would like to acknowledge support from the Ministerio de Economía y Competitividad (FFI2014-56968-C4-2-P), the Generalitat de Catalunya (2014SGR-1013), and the Institució Catalana de Recerca i Estudis Avançats (ICREA Acadèmia 2015).

GG's objectives open up many avenues for interdisciplinary research into the nature of UG. Fifty years ago, Eric Lenneberg published his now-classic work that founded the study of the biology of language, sometimes called "biolinguistics" (Lenneberg 1967). In conjunction with the then-nascent generative-internalist perspective on language (Chomsky 1956[1975], 1957, 1965), this major contribution inspired a wealth of research, and much has been learned about language as a result. The techniques of psychological experimentation have become far more sophisticated in recent years, and work in neurolinguistics is beginning to connect in interesting ways with the concerns of GG (Berwick et al. 2013; Nelson et al. 2017; Friederici to appear). Important results have emerged from the study of language acquisition, which is concerned with the interaction of UG and learning mechanisms in the development of an I-language (Yang 2002, 2016; Yang et al. in press).
Work by Rosalind Thornton and others shows that children spontaneously produce expressions conforming to UG-compliant options realized in languages other than the local "target" language, without any relevant evidence; but they do not systematically produce innovative sentences that violate UG principles. This continuity between children's seemingly imperfect knowledge and the range of variation in adult grammars suggests that children are following a developmental pathway carved out by UG, exploring the range of possible languages and ultimately converging on a steady state (for review and references, see Crain & Thornton 1998, 2012; Crain et al. 2016; for a theory of the steady state as a probability distribution over I-languages, see Yang 2016). Converging conclusions follow from the spontaneous creation of sign languages by deaf children without linguistic input (Feldman et al. 1978; Kegl et al. 1999; Sandler & Lillo-Martin 2006). On the whole, we believe that GG has made significant progress in identifying some of the computational mechanisms distinguishing man from animal in the way recognized by Descartes.

In this paper, we offer our view of the current state of the field, highlighting some of its central achievements and some of the many remaining challenges, in the hope of inspiring future research. Section 2 discusses the fundamental, "non-negotiable" properties of human language that any theory of UG has to account for. Section 3 focuses on core computational operations and their properties. Section 4 turns to the interfaces of I-language and systems entering into language use, and how conditions imposed by these systems constrain syntactic computation. Section 5 reviews a number of challenges emerging from recent work, which call for resolution under minimalist desiderata. Section 6 concludes.

2. Basic Properties of I-language

A traditional characterization of language, going back to Aristotle, defines it as "sound with meaning." Building on this definition, we can conceive of an I-language as a system that links meaning and sound/sign in a systematic fashion, equipping the speaker with knowledge of these correlations. What kind of system is an I-language? We consider two empirical properties non-negotiable, in the sense that any theory that shares GG's goal of providing an explanatory model of human linguistic capacity must provide formal means of capturing them: discrete infinity and displacement.1 Atomic units—lexical items, whose nature remains the subject of much debate2—are assembled into syntactic objects, and such objects can occupy more than one position within a larger structure. The first property is the technical statement of the traditional observation that "there is no longest sentence," the informal notion "sentence" now abandoned in favor of hierarchically structured objects. The second property is illustrated by a plethora of facts across the world's languages. To pick one illustration at random, consider the familiar active/passive alternation:

(1) a. Sensei-ga John-o sikar-ta. (Japanese)
       teacher-NOM John-ACC scold-PST
       'The teacher scolded John.'
    b. John-ga sensei-ni sikar-are-ta.
       John-NOM teacher-by scold-PASS-PST
       'John was scolded by the teacher.'

The noun phrase John bears the same thematic relation to the verb sikar in both (1a) and (1b), but appears sentence-initially (displaced from its base position) in the latter.
On the assumption that thematic relations are established in a uniform and strictly local fashion—a guiding idea of GG since its inception—this entails that the nominal is displaced from its original position in (1b).

1 The latter notion is non-negotiable in its abstract sense: there can be multiple determinants of interpretation for some syntactic object. The mechanisms implementing this basic fact vary dramatically across theoretical frameworks, of course.
2 For a sample, see Hale & Keyser 1993, 1999; Borer 2005; Marantz 2001, 2013; Mateu 2005; Ramchand 2008; Starke 2014.

To account for these elementary properties, any theory of GG must assume the existence of a computational system that constructs hierarchically structured expressions with displacement. The optimal course to follow, we think, is to assume a basic compositional operation MERGE, which applies to two objects X and Y, yielding a new one, K = {X,Y}. If X, Y are distinct (taken directly from the lexicon or independently assembled), K is constructed by External MERGE (EM); if Y is a term of X, by Internal MERGE (IM). If K is formed by IM, Y will occur twice in K, otherwise once; but the object generated is {X,Y} in either case. IM thus turns Y into a discontinuous object (or chain), which can be understood as a sequence of occurrences of Y in K. (2) illustrates for (1b) above (abstracting away from irrelevant details), where MERGE combines K and the internal NP John-ga:

(2) a. {sensei-ni, {sikarareta, John-ga}} = K → MERGE(K, John-ga)
    b. {John-ga, {sensei-ni, {sikarareta, John-ga}}} = K′

MERGE, applying recursively so that any generated object is accessible to further operations,3 thus suffices in principle to model the basic properties of discrete infinity and displacement. Furthermore, it is the computationally simplest operation that implements the basic properties of an I-language, and as such a conceptually necessary, irreducible component of UG.

MERGE(X,Y), yielding K = {X,Y}, imposes hierarchical structure (X, Y are terms of K, but not vice versa) but no order ({X,Y} = {Y,X}). Languages differ in how they ultimately linearize objects constructed by MERGE, an important research topic for the study of the interaction between core syntax and the sensorimotor systems involved in perception and articulation. In (1a) above, the VP is linearized with OV order (John-o sikarta), whereas a corresponding English VP would surface with VO order (scolded John). Interpretation is not affected by this difference, suggesting that the relevant parameter should be a matter of externalization of internally generated expressions alone (see Travis 1984 for original ideas along these lines).

3 Recursion is thus a "deep" property of the generative procedure; to what extent constructions displaying category recursion are used in some particular language (e.g., English but not German permits recursive possessors, as in Maria's neighbor's friend's house) is an entirely different issue. See Arsenijević & Hinzen 2012; Chomsky 2014.

A corollary of restricting composition to MERGE is the structure-dependence of syntactic operations.
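The set-theoretic definition of MERGE and the derivation in (2) can be made concrete with a short computational sketch. The following Python fragment is purely illustrative and not part of the original text: it models lexical items as strings and syntactic objects as nested frozensets, and the names merge, terms, and is_internal_merge are our own labels for the notions defined above, not an established implementation.

```python
from typing import Union

# A syntactic object is either a lexical item (modeled here as a string)
# or an unordered two-membered set built by MERGE.
SyntacticObject = Union[str, frozenset]

def terms(so: SyntacticObject) -> set:
    """The terms of a syntactic object: the object itself plus,
    recursively, the terms of its members."""
    result = {so}
    if isinstance(so, frozenset):
        for member in so:
            result |= terms(member)
    return result

def merge(x: SyntacticObject, y: SyntacticObject) -> frozenset:
    """MERGE(X, Y) = {X, Y}: hierarchical (X and Y are terms of the
    result, not vice versa) but unordered ({X, Y} = {Y, X})."""
    return frozenset({x, y})

def is_internal_merge(x: SyntacticObject, y: SyntacticObject) -> bool:
    """Internal MERGE: Y is already a term of X, so the output contains
    two occurrences of Y (a chain); otherwise MERGE is External."""
    return y in terms(x)

# (2a): K = {sensei-ni, {sikarareta, John-ga}}, built by External MERGE.
k = merge("sensei-ni", merge("sikarareta", "John-ga"))

# (2b): K' = {John-ga, K}. John-ga is a term of K, so this is Internal
# MERGE, and John-ga now occurs twice in K' (displaced and in situ).
assert is_internal_merge(k, "John-ga")
k_prime = merge(k, "John-ga")

# MERGE imposes no linear order: {X, Y} = {Y, X}.
assert merge("scolded", "John") == merge("John", "scolded")
```

The final comparison reflects the point about linearization made above: the object produced by MERGE carries no left-to-right order, so the OV/VO difference between Japanese and English must be fixed in externalization rather than in the generative procedure itself.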