linguistics

linguistics
/ling gwis′tiks/, n. (used with a sing. v.)
the science of language, including phonetics, phonology, morphology, syntax, semantics, pragmatics, and historical linguistics.
[1850-55; see LINGUISTIC, -ICS]

* * *

Study of the nature and structure of language.

Linguists use a synchronic (describing a language as it exists at a given time) or a diachronic (tracing a language's development through its history) approach to language study. Greek philosophers in the 5th century BC who debated the origins of human language were the first in the West to be concerned with linguistic theory. The first complete Greek grammar, written by Dionysius Thrax in the 1st century BC, was a model for Roman grammarians, whose work led to the medieval and Renaissance vernacular grammars. With the rise of historical linguistics in the 19th century, linguistics became a science. In the late 19th and early 20th centuries Ferdinand de Saussure established the structuralist school of linguistics (see structuralism), which analyzed actual speech to learn about the underlying structure of language. In the 1950s Noam Chomsky challenged the structuralist approach, arguing that linguistics should study native speakers' unconscious knowledge of their own language (competence), not their actual production of language (performance), and developed generative grammar.

* * *

Introduction

      the scientific study of language. The word was first used in the middle of the 19th century to emphasize the difference between a newer approach to the study of language that was then developing and the more traditional approach of philology. The differences were and are largely matters of attitude, emphasis, and purpose. The philologist is concerned primarily with the historical development of languages as it is manifest in written texts and in the context of the associated literature and culture. The linguist, though he may be interested in written texts and in the development of languages through time, tends to give priority to spoken languages and to the problems of analyzing them as they operate at a given point in time.

      The field of linguistics may be divided in terms of three dichotomies: synchronic (synchronic linguistics) versus diachronic (historical linguistics), theoretical versus applied, microlinguistics versus macrolinguistics. A synchronic description of a language describes the language as it is at a given time; a diachronic description is concerned with the historical development of the language and the structural changes that have taken place in it. The goal of theoretical linguistics is the construction of a general theory of the structure of language or of a general theoretical framework for the description of languages; the aim of applied linguistics is the application of the findings and techniques of the scientific study of language to practical tasks, especially to the elaboration of improved methods of language teaching. The terms microlinguistics and macrolinguistics are not yet well established, and they are, in fact, used here purely for convenience. The former refers to a narrower and the latter to a much broader view of the scope of linguistics. According to the microlinguistic view, languages should be analyzed for their own sake and without reference to their social function, to the manner in which they are acquired by children, to the psychological mechanisms that underlie the production and reception of speech, to the literary and the aesthetic or communicative function of language, and so on. In contrast, macrolinguistics embraces all of these aspects of language. Various areas within macrolinguistics have been given terminological recognition: psycholinguistics, sociolinguistics, anthropological linguistics, dialectology, mathematical and computational linguistics, and stylistics. Macrolinguistics should not be identified with applied linguistics. The application of linguistic methods and concepts to language teaching may well involve other disciplines in a way that microlinguistics does not. But there is, in principle, a theoretical aspect to every part of macrolinguistics, no less than to microlinguistics.

      A large portion of this article is devoted to theoretical, synchronic microlinguistics, which is generally acknowledged as the central part of the subject; it will be abbreviated henceforth as theoretical linguistics.

History of linguistics

Earlier history
Non-Western traditions
      Linguistic speculation and investigation, insofar as is known, has gone on in only a small number of societies. To the extent that Mesopotamian, Chinese, and Arabic learning dealt with grammar, their treatments were so enmeshed in the particularities of those languages and so little known to the European world until recently that they have had virtually no impact on Western linguistic tradition. Chinese (Chinese literature) linguistic and philological scholarship stretches back for more than two millennia, but the interest of those scholars was concentrated largely on phonetics, writing, and lexicography; their consideration of grammatical problems was bound up closely with the study of logic.

      Certainly the most interesting non-Western grammatical tradition—and the most original and independent—is that of India (Indian languages), which dates back at least two and one-half millennia and which culminates with the grammar of Pāṇini, of the 5th century BC. There are three major ways in which the Sanskrit (Sanskrit language) tradition has had an impact on modern linguistic scholarship. As soon as Sanskrit became known to the Western learned world the unravelling of comparative Indo-European grammar ensued and the foundations were laid for the whole 19th-century edifice of comparative philology and historical linguistics. But, for this, Sanskrit was simply a part of the data; Indian grammatical learning played almost no direct part. Nineteenth-century workers, however, recognized that the native tradition of phonetics in ancient India was vastly superior to Western knowledge; and this had important consequences for the growth of the science of phonetics in the West. Thirdly, there is in the rules or definitions (sutras) of Pāṇini a remarkably subtle and penetrating account of Sanskrit grammar. The construction of sentences, compound nouns, and the like is explained through ordered rules operating on underlying structures in a manner strikingly similar in part to modes of contemporary theory. As might be imagined, this perceptive Indian grammatical work has held great fascination for 20th-century theoretical linguists. A study of Indian logic in relation to Pāṇinian grammar alongside Aristotelian and Western logic in relation to Greek grammar and its successors could bring illuminating insights.

      Whereas in ancient Chinese learning a separate field of study that might be called grammar scarcely took root, in ancient India a sophisticated version of this discipline developed early alongside the other sciences. Even though the study of Sanskrit grammar may originally have had the practical aim of keeping the sacred Vedic texts and their commentaries pure and intact, the study of grammar in India in the 1st millennium BC had already become an intellectual end in itself.

Greek and Roman antiquity
      The emergence of grammatical learning in Greece is less clearly known than is sometimes implied, and the subject is more complex than is often supposed; here only the main strands can be sampled. The term hē grammatikē technē (“the art of letters”) had two senses. It meant the study of the values of the letters and of accentuation and prosody and, in this sense, was an abstract intellectual discipline; and it also meant the skill of literacy and thus embraced applied pedagogy. This side of what was to become “grammatical” learning was distinctly applied, particular, and less exalted by comparison with other pursuits. Most of the developments associated with theoretical grammar grew out of philosophy and criticism; and in these developments a repeated duality of themes crosses and intertwines.

      Much of Greek philosophy was occupied with the distinction between that which exists “by nature” and that which exists “by convention.” So in language it was natural to account for words and forms as ordained by nature (by onomatopoeia, i.e., by imitation of natural sounds) or as arrived at arbitrarily by a social convention. This dispute regarding the origin of language and meanings paved the way for the development of divergences between the views of the “analogists,” who looked on language as possessing an essential regularity as a result of the symmetries that convention can provide, and the views of the “anomalists,” who pointed to language's lack of regularity as one facet of the inescapable irregularities of nature. The situation was more complex, however, than this statement would suggest. For example, it seems that the anomalists among the Stoics credited the irrational quality of language precisely to the claim that language did not exactly mirror nature. In any event, the anomalist tradition in the hands of the Stoics brought grammar the benefit of their work in logic and rhetoric. This led to the distinction that, in modern theory, is made with the terms signifiant (“what signifies”) and signifié (“what is signified”) or, somewhat differently and more elaborately, with “expression” and “content”; and it laid the groundwork of modern theories of inflection, though by no means with the exhaustiveness and fine-grained analysis reached by the Sanskrit grammarians.

      The Alexandrians (Alexandria, Library of), who were analogists working largely on literary criticism and text philology, completed the development of the classical Greek grammatical tradition. Dionysius Thrax, in the 2nd century BC, produced the first systematic grammar of Western tradition; it dealt only with word morphology. The study of sentence syntax was to wait for Apollonius Dyscolus, of the 2nd century AD. Dionysius called grammar “the acquaintance with [or observation of] what is uttered by poets and writers,” using a word meaning a less general form of knowledge than what might be called “science.” His typically Alexandrian literary goal is suggested by the headings in his work: pronunciation, poetic figurative language, difficult words, true and inner meanings of words, exposition of form-classes, literary criticism. Dionysius defined a sentence as a unit of sense or thought, but it is difficult to be sure of his precise meaning.

      The Romans, who largely took over, with mild adaptations to their highly similar language, the total work of the Greeks, are important not as originators but as transmitters. Aelius Donatus, of the 4th century AD, and Priscian, an African of the 6th century, and their colleagues were slightly more systematic than their Greek models but were essentially retrospective rather than original. Up to this point a field that was at times called ars grammatica was a congeries of investigations, both theoretical and practical, drawn from the work and interests of literacy, scribeship, logic, epistemology, rhetoric, textual philology, poetics, and literary criticism. Yet modern specialists in the field still share their concerns and interests. The anomalists, who concentrated on surface irregularity and then looked for regularities deeper down (as the Stoics sought them in logic), bear a resemblance to contemporary scholars of the transformationalist school. And the philological analogists, with their regularizing surface segmentation, show striking kinship of spirit with the modern school of structural (or taxonomic or glossematic) grammatical theorists.

The European Middle Ages
      It is possible that developments in grammar during the Middle Ages constitute one of the most misunderstood areas of the field of linguistics. It is difficult to relate this period coherently to other periods and to modern concerns because surprisingly little is accessible and certain, let alone analyzed with sophistication. In the early 1970s the majority of the known grammatical treatises had not yet been made available in full to modern scholarship, so that not even their true extent could be assessed with confidence. These works must be analyzed and studied in the light of medieval learning, especially the learning of the schools of philosophy then current, in order to understand their true value and place.

      The field of linguistics has almost completely neglected the achievements of this period. Students of grammar have tended to see as high points in their field the achievements of the Greeks, the Renaissance growth and “rediscovery” of learning (which led directly to modern school traditions), the contemporary flowering of theoretical study (men usually find their own age important and fascinating), and, in recent decades, the astonishing monument of Pāṇini. Many linguists have found uncongenial the combination of medieval Latin learning and premodern philosophy. Yet medieval scholars might reasonably be expected to have bequeathed to modern scholarship the fruits of more than ordinarily refined perceptions of a certain order. These scholars used, wrote in, and studied Latin, a language that, though not their native tongue, was one in which they were very much at home; such scholars in groups must often have represented a highly varied linguistic background.

      Some of the medieval treatises continue the tradition of grammars of late antiquity; so there are versions based on Donatus and Priscian, often with less incorporation of the classical poets and writers. Another genre of writing involves simultaneous consideration of grammatical distinctions and scholastic logic; modern linguists are probably inadequately trained to deal with these writings.

      Certainly the most obviously interesting theorizing to be found in this period is contained in the “speculative grammar” of the modistae, who were so called because the titles of their works were often phrased De modis significandi tractatus (“Treatise Concerning the Modes of Signifying”). For the development of the Western grammatical tradition, work of this genre was the second great milestone after the crystallization of Greek thought with the Stoics and Alexandrians. The scholastic philosophers were occupied with relating words and things—i.e., the structure of sentences with the nature of the real world—hence their preoccupation with signification. The aim of the grammarians was to explore how a word (an element of language) matched things apprehended by the mind and how it signified reality. Since a word cannot signify the nature of reality directly, it must stand for the thing signified in one of its modes or properties; it is this discrimination of modes that the study of categories and parts of speech is all about. Thus the study of sentences should lead one to the nature of reality by way of the modes of signifying.

      The modistae did not innovate in discriminating categories and parts of speech; they accepted those that had come down from the Greeks through Donatus and Priscian. The great contribution of these grammarians, who flourished between the mid-13th and mid-14th century, was their insistence on a grammar to explicate the distinctions found by their forerunners in the languages known to them. Whether they made the best choice in selecting logic, metaphysics, and epistemology (as they knew them) as the fields to be included with grammar as a basis for the grand account of universal knowledge is less important than the breadth of their conception of the place of grammar. Before the modistae, grammar had not been viewed as a separate discipline but had been considered in conjunction with other studies or skills (such as criticism, preservation of valued texts, foreign-language learning). The Greek view of grammar was rather narrow and fragmented; the Roman view was largely technical. The speculative medieval grammarians (who dealt with language as a speculum, “mirror” of reality) inquired into the fundamentals underlying language and grammar. They wondered whether grammarians or philosophers discovered grammar, whether grammar was the same for all languages, what the fundamental topic of grammar was, and what the basic and irreducible grammatical primes are. Signification was reached by imposition of words on things; i.e., the sign was arbitrary. Those questions sound remarkably like current issues of linguistics, which serves to illustrate how slow and repetitious progress in the field is. While the modistae accepted, by modern standards, a restrictive set of categories, the acumen and sweep they brought to their task resulted in numerous subtle and fresh syntactic observations. A thorough study of the medieval period would greatly enrich the discussion of current questions.

      It is customary to think of the Renaissance as a time of great flowering. There is no doubt that linguistic and philological developments of this period are interesting and significant. Two new sets of data that modern linguists tend to take for granted became available to grammarians during this period: (1) the newly recognized vernacular languages of Europe, for the protection and cultivation of which there subsequently arose national academies and learned institutions that live down to the present day; and (2) the exotic languages of Africa, the Orient, the New World, and, later, of Siberia, Inner Asia, Papua, Oceania, the Arctic, and Australia, which the voyages of discovery opened up. Earlier, the only non-Indo-European grammar at all widely accessible was that of the Hebrews (and to some extent Arabic); and Semitic in fact shares many categories with Indo-European in its grammar. Indeed, for many of the exotic languages scholarship barely passed beyond the most rudimentary initial collection of word lists; grammatical analysis was scarcely approached.

      In the field of grammar, the Renaissance did not produce notable innovation or advance. Generally speaking, there was a strong rejection of speculative grammar and a relatively uncritical resumption of late Roman views (as stated by Priscian). This was somewhat understandable in the case of Latin or Greek grammars, since here the task was less evidently that of intellectual inquiry and more that of the schools, with the practical aim of gaining access to the newly discovered ancients. But, aside from the fact that, beginning in the 15th century, serious grammars of European vernaculars were actually written, it is only in particular cases and for specific details (e.g., a mild alteration in the number of parts of speech or cases of nouns) that real departures from Roman grammar can be noted. Likewise, until the end of the 19th century, grammars of the exotic languages, written largely by missionaries and traders, were cast almost entirely in the Roman model, to which the Renaissance had added a limited medieval syntactic ingredient.

      From time to time a degree of boldness may be seen in France: Petrus Ramus, a 16th-century logician, worked within a taxonomic framework of the surface shapes of words and inflections, such work entailing some of the attendant trivialities that modern linguistics has experienced (e.g., by dividing up Latin nouns on the basis of equivalence of syllable count among their case forms). In the 17th century, members of the Solitaires (a group of hermits who lived in the deserted abbey of Port-Royal in France) produced a grammar that has exerted noteworthy continuing influence, even in contemporary theoretical discussion. Drawing their basic view from scholastic logic as modified by rationalism, these people aimed to produce a philosophical grammar that would capture what was common to the grammars of languages—a general grammar, but not aprioristically universalist. This grammar has attracted recent attention because it employs certain syntactic formulations that resemble in detail contemporary transformational rules, which formulate the relationship between the various elements of a sentence.

      Roughly from the 15th century to World War II, however, the version of grammar available to the Western public (together with its colonial expansion) remained basically that of Priscian with only occasional and subsidiary modifications, and the knowledge of new languages brought only minor adjustments to the serious study of grammar. As education has become more broadly disseminated throughout society by the schools, attention has shifted from theoretical or technical grammar as an intellectual preoccupation to prescriptive grammar suited to pedagogical purposes, which started with Renaissance vernacular nationalism. Grammar increasingly parted company with its older fellow disciplines within philosophy as they moved over to the domain known as natural science, and technical academic grammatical study has increasingly become involved with issues represented by empiricism versus rationalism and their successor manifestations on the academic scene.

      Nearly down to the present day, the grammar of the schools has had only tangential connections with the studies pursued by professional linguists; for most people prescriptive grammar has become synonymous with “grammar,” and the prevailing view held by educated people regards grammar as an item of folk knowledge open to speculation by all, and in nowise a formal science requiring adequate preparation such as is assumed for chemistry.

Eric P. Hamp Ed.

The 19th century
Development of the comparative method
      It is generally agreed that the most outstanding achievement of linguistic scholarship in the 19th century was the development of the comparative method, which comprised a set of principles whereby languages could be systematically compared with respect to their sound systems, grammatical structure, and vocabulary and shown to be “genealogically” related. As French, Italian, Portuguese, Romanian, Spanish, and the other Romance languages had evolved from Latin, so Latin, Greek, and Sanskrit as well as the Celtic, Germanic, and Slavic languages and many other languages of Europe and Asia had evolved from some earlier language, to which the name Indo-European (Indo-European languages) or Proto-Indo-European is now customarily applied. That all the Romance languages were descended from Latin and thus constituted one “family” had been known for centuries; but the existence of the Indo-European family of languages and the nature of their genealogical relationship was first demonstrated by the 19th-century comparative philologists. (The term philology in this context is not restricted to the study of literary languages.)

      The main impetus for the development of comparative philology came toward the end of the 18th century, when it was discovered that Sanskrit (Sanskrit language) bore a number of striking resemblances to Greek and Latin. An English orientalist, Sir William Jones (Jones, Sir William), though he was not the first to observe these resemblances, is generally given the credit for bringing them to the attention of the scholarly world and putting forward the hypothesis, in 1786, that all three languages must have “sprung from some common source, which perhaps no longer exists.” By this time, a number of texts and glossaries of the older Germanic languages (Gothic, Old High German, and Old Norse) had been published, and Jones realized that Germanic as well as Old Persian and perhaps Celtic had evolved from the same “common source.” The next important step came in 1822, when the German scholar Jacob Grimm, following the Danish linguist Rasmus Rask (Rask, Rasmus) (whose work, being written in Danish, was less accessible to most European scholars), pointed out in the second edition of his comparative grammar of Germanic that there were a number of systematic correspondences between the sounds of Germanic and the sounds of Greek, Latin, and Sanskrit in related words. Grimm noted, for example, that where Gothic (the oldest surviving Germanic language) had an f, Latin, Greek, and Sanskrit frequently had a p (e.g., Gothic fotus, Latin pedis, Greek podós, Sanskrit padás, all meaning “foot”); when Gothic had a p, the non-Germanic languages had a b; when Gothic had a b, the non-Germanic languages had what Grimm called an “aspirate” (Latin f, Greek ph, Sanskrit bh). In order to account for these correspondences he postulated a cyclical “soundshift” (Lautverschiebung) in the prehistory of Germanic, in which the original “aspirates” became voiced unaspirated stops (bh became b, etc.), the original voiced unaspirated stops became voiceless (b became p, etc.), and the original voiceless (unaspirated) stops became “aspirates” (p became f). Grimm's term, “aspirate,” it will be noted, covered such phonetically distinct categories as aspirated stops (bh, ph), produced with an accompanying audible puff of breath, and fricatives (f ), produced with audible friction as a result of incomplete closure in the vocal tract.

      In the work of the next 50 years the idea of sound change was made more precise, and, in the 1870s, a group of scholars known collectively as the Junggrammatiker (“young grammarians,” or Neogrammarians (Neogrammarian)) put forward the thesis that all changes in the sound system of a language as it developed through time were subject to the operation of regular sound laws. Though the thesis that sound laws were absolutely regular in their operation (unless they were inhibited in particular instances by the influence of analogy) was at first regarded as most controversial, by the end of the 19th century it was quite generally accepted and had become the cornerstone of the comparative method. Using the principle of regular sound change, scholars were able to reconstruct “ancestral” common forms from which the later forms found in particular languages could be derived. By convention, such reconstructed forms are marked in the literature with an asterisk. Thus, from the reconstructed Proto-Indo-European word for “ten,” *dekm, it was possible to derive Sanskrit daśa, Greek déka, Latin decem, and Gothic taihun by postulating a number of different sound laws that operated independently in the different branches of the Indo-European family. The question of sound change is dealt with in greater detail in the section entitled Historical (diachronic) linguistics (linguistics).
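
      The regularity of these correspondences can be made concrete with a small worked example. The following sketch (in Python, purely illustrative and not part of the comparative method itself) encodes only the Grimm's-law stop correspondences cited above; real sound laws are conditioned by phonetic environment and operate on whole word forms, not isolated consonants.

```python
# Illustrative sketch: a regular sound correspondence stated as a simple mapping.
# Only the Grimm's-law stop shifts mentioned in the text are encoded here.
GERMANIC_REFLEX = {
    "p": "f",    # PIE *p > Germanic f  (Latin pedis : Gothic fotus "foot")
    "t": "th",   # PIE *t > Germanic th (Latin tres : English three)
    "k": "h",    # PIE *k > Germanic h  (Latin centum : English hundred)
    "b": "p",    # voiced unaspirated stops become voiceless
    "d": "t",
    "g": "k",
    "bh": "b",   # the "aspirates" become voiced unaspirated stops
    "dh": "d",
    "gh": "g",
}

def germanic_consonants(pie_consonants):
    """Map a sequence of Proto-Indo-European stops to their expected Germanic reflexes."""
    return [GERMANIC_REFLEX.get(c, c) for c in pie_consonants]

# The consonantal skeleton of reconstructed *dekm "ten" is d-k-m; vowels and the
# treatment of the final nasal are left aside here.
print(germanic_consonants(["d", "k", "m"]))   # ['t', 'h', 'm'] -- compare Gothic taihun
```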

The role of analogy
      Analogy has been mentioned in connection with its inhibition of the regular operation of sound laws in particular word forms. This was how the Neogrammarians thought of it. In the course of the 20th century, however, it has come to be recognized that analogy, taken in its most general sense, plays a far more important role in the development of languages than simply that of sporadically preventing what would otherwise be a completely regular transformation of the sound system of a language. When a child learns to speak he tends to regularize the anomalous, or irregular, forms by analogy with the more regular and productive patterns of formation in the language; e.g., he will tend to say “comed” rather than “came,” “dived” rather than “dove,” and so on, just as he will say “talked,” “loved,” and so forth. The fact that the child does this is evidence that he has learned or is learning the regularities or rules of his language. He will go on to “unlearn” some of the analogical forms and substitute for them the anomalous forms current in the speech of the previous generation. But in some cases, he will keep a “new” analogical form (e.g., “dived” rather than “dove”), and this may then become the recognized and accepted form.

Other 19th-century theories and development

Inner and outer form
      One of the most original, if not one of the most immediately influential, linguists of the 19th century was the learned Prussian statesman, Wilhelm von Humboldt (Humboldt, Wilhelm, Baron von) (died 1835). His interests, unlike those of most of his contemporaries, were not exclusively historical. Following the German philosopher Johann Gottfried von Herder (1744–1803), he stressed the connection between national languages and national character: this was but a commonplace of romanticism. More original was Humboldt's theory of “inner” and “outer” form in language. The outer form of language was the raw material (the sounds) from which different languages were fashioned; the inner form was the pattern, or structure, of grammar and meaning that was imposed upon this raw material and differentiated one language from another. This “structural” conception of language was to become dominant, for a time at least, in many of the major centres of linguistics by the middle of the 20th century. Another of Humboldt's ideas was that language was something dynamic, rather than static, and was an activity itself rather than the product of activity. A language was not a set of actual utterances produced by speakers but the underlying principles or rules that made it possible for speakers to produce such utterances and, moreover, an unlimited number of them. This idea was taken up by a German philologist, Heymann Steinthal, and, what is more important, by the physiologist and psychologist Wilhelm Wundt (Wundt, Wilhelm), and thus influenced late 19th- and early 20th-century theories of the psychology of language. Its influence, like that of the distinction of inner and outer form, can also be seen in the thought of Ferdinand de Saussure (Saussure, Ferdinand de), a Swiss linguist. But its full implications were probably not perceived and made precise until the middle of the 20th century, when the U.S. linguist Noam Chomsky re-emphasized it and made it one of the basic notions of generative grammar (see below Transformational-generative grammar (linguistics)).

Phonetics and dialectology
      Many other interesting and important developments occurred in 19th-century linguistic research, among them work in the areas of phonetics and dialectology. Research in both these fields was promoted by the Neogrammarians' concern with sound change and by their insistence that prehistoric developments in languages were of the same kind as developments taking place in the languages and dialects currently spoken. The development of phonetics in the West was also strongly influenced at this period, as were many of the details of the more philological analysis of the Indo-European languages, by the discovery of the works of the Indian grammarians who, from the time of the Sanskrit grammarian Pāṇini (5th or 6th century BC), if not before, had arrived at a much more comprehensive and scientific theory of phonetics, phonology, and morphology than anything achieved in the West until the modern period.

The 20th century
      The term structuralism has been used as a slogan and rallying cry by a number of different schools of linguistics, and it is necessary to realize that it has somewhat different implications according to the context in which it is employed. It is convenient to draw first a broad distinction between European and American structuralism and, then, to treat them separately.

Structural linguistics in Europe
      Structural linguistics in Europe is generally said to have begun in 1916 with the posthumous publication of the Cours de Linguistique Générale (Course in General Linguistics) of Ferdinand de Saussure. Much of what is now considered as Saussurean can be seen, though less clearly, in the earlier work of Humboldt, and the general structural principles that Saussure was to develop with respect to synchronic linguistics in the Cours had been applied almost 40 years before (1879) by Saussure himself in a reconstruction of the Indo-European vowel system. The full significance of the work was not appreciated at the time. Saussure's structuralism can be summed up in two dichotomies (which jointly cover what Humboldt referred to in terms of his own distinction of inner and outer form): (1) langue versus parole and (2) form versus substance. By langue, best translated in its technical Saussurean sense as language system, is meant the totality of regularities and patterns of formation that underlie the utterances of a language; by parole, which can be translated as language behaviour, is meant the actual utterances themselves. Just as two performances of a piece of music given by different orchestras on different occasions will differ in a variety of details and yet be identifiable as performances of the same piece, so two utterances may differ in various ways and yet be recognized as instances, in some sense, of the same utterance. What the two musical performances and the two utterances have in common is an identity of form, and this form, or structure, or pattern, is in principle independent of the substance, or “raw material,” upon which it is imposed. “Structuralism,” in the European sense then, refers to the view that there is an abstract relational structure that underlies and is to be distinguished from actual utterances—a system underlying actual behaviour—and that this is the primary object of study for the linguist.

      Two important points arise here: first, that the structural approach is not in principle restricted to synchronic linguistics; second, that the study of meaning, as well as the study of phonology and grammar, can be structural in orientation. In both cases “structuralism” is opposed to “atomism” in the European literature. It was Saussure who drew the terminological distinction between synchronic and diachronic linguistics in the Cours; despite the undoubtedly structural orientation of his own early work in the historical and comparative field, he maintained that, whereas synchronic linguistics should deal with the structure of a language system at a given point in time, diachronic linguistics should be concerned with the historical development of isolated elements—it should be atomistic. Whatever the reasons that led Saussure to take this rather paradoxical view, his teaching on this point was not generally accepted, and scholars soon began to apply structural concepts to the diachronic study of languages. The most important of the various schools of structural linguistics to be found in Europe in the first half of the 20th century have included the Prague school, most notably represented by Nikolay Sergeyevich Trubetskoy (died 1938) and Roman Jakobson (born 1896), both Russian émigrés, and the Copenhagen (or glossematic) school, centred around Louis Hjelmslev (died 1965). John Rupert Firth (died 1960) and his followers, sometimes referred to as the London school, were less Saussurean in their approach, but, in a general sense of the term, their approach may also be described appropriately as structural linguistics.

Structural linguistics in America
      American and European structuralism shared a number of features. In insisting upon the necessity of treating each language as a more or less coherent and integrated system, both European and American linguists of this period tended to emphasize, if not to exaggerate, the structural uniqueness of individual languages. There was especially good reason to take this point of view given the conditions in which American linguistics developed from the end of the 19th century. There were hundreds of indigenous American Indian languages that had never been previously described. Many of these were spoken by only a handful of speakers and, if they were not recorded before they became extinct, would be permanently inaccessible. Under these circumstances, such linguists as Franz Boas (Boas, Franz) (died 1942) were less concerned with the construction of a general theory of the structure of human language than they were with prescribing sound methodological principles for the analysis of unfamiliar languages. They were also fearful that the description of these languages would be distorted by analyzing them in terms of categories derived from the analysis of the more familiar Indo-European languages.

      After Boas, the two most influential American linguists were Edward Sapir (Sapir, Edward) (died 1939) and Leonard Bloomfield (died 1949). Like his teacher Boas, Sapir was equally at home in anthropology and linguistics, the alliance of which disciplines has endured to the present day in many American universities. Boas and Sapir were both attracted by the Humboldtian view of the relationship between language and thought, but it was left to one of Sapir's pupils, Benjamin Lee Whorf (Whorf, Benjamin Lee), to present it in a sufficiently challenging form to attract widespread scholarly attention. Since the republication of Whorf's more important papers in 1956, the thesis that language determines perception and thought has come to be known as the Whorfian hypothesis.

      Sapir's work has always held an attraction for the more anthropologically inclined American linguists. But it was Bloomfield who prepared the way for the later phase of what is now thought of as the most distinctive manifestation of American “structuralism.” When he published his first book in 1914, Bloomfield was strongly influenced by Wundt's psychology of language. In 1933, however, he published a drastically revised and expanded version with the new title Language; this book dominated the field for the next 30 years. In it Bloomfield explicitly adopted a behaviouristic (behaviourism) approach to the study of language, eschewing in the name of scientific objectivity all reference to mental or conceptual categories. Of particular consequence was his adoption of the behaviouristic theory of semantics according to which meaning is simply the relationship between a stimulus and a verbal response. Because science was still a long way from being able to give a comprehensive account of most stimuli, no significant or interesting results could be expected from the study of meaning for some considerable time, and it was preferable, as far as possible, to avoid basing the grammatical analysis of a language on semantic considerations. Bloomfield's followers pushed even further the attempt to develop methods of linguistic analysis that were not based on meaning. One of the most characteristic features of “post-Bloomfieldian” American structuralism, then, was its almost complete neglect of semantics.

      Another characteristic feature, one that was to be much criticized by Chomsky, was its attempt to formulate a set of “discovery procedures”—procedures that could be applied more or less mechanically to texts and could be guaranteed to yield an appropriate phonological and grammatical description of the language of the texts. Structuralism, in this narrower sense of the term, is represented, with differences of emphasis or detail, in the major American textbooks published during the 1950s.

Transformational-generative grammar
      The most significant development in linguistic theory and research in recent years was the rise of generative grammar, and, more especially, of transformational-generative grammar, or transformational grammar, as it came to be known. Two versions of transformational grammar were put forward in the mid-1950s, the first by Zellig S. Harris and the second by Noam Chomsky, his pupil. It is Chomsky's system that has attracted the most attention so far. As first presented by Chomsky in Syntactic Structures (1957), transformational grammar can be seen partly as a reaction against post-Bloomfieldian structuralism and partly as a continuation of it. What Chomsky reacted against most strongly was the post-Bloomfieldian concern with discovery procedures. In his opinion, linguistics should set itself the more modest and more realistic goal of formulating criteria for evaluating alternative descriptions of a language without regard to the question of how these descriptions had been arrived at. The statements made by linguists in describing a language should, however, be cast within the framework of a far more precise theory of grammar than had hitherto been the case, and this theory should be formalized in terms of modern mathematical notions. Within a few years, Chomsky had broken with the post-Bloomfieldians on a number of other points also. He had adopted what he called a “mentalistic” theory of language, by which term he implied that the linguist should be concerned with the speaker's creative linguistic competence and not his performance, the actual utterances produced. He had challenged the post-Bloomfieldian concept of the phoneme (see below), which many scholars regarded as the most solid and enduring result of the previous generation's work. And he had challenged the structuralists' insistence upon the uniqueness of every language, claiming instead that all languages were, to a considerable degree, cut to the same pattern—they shared a certain number of formal and substantive universals.

Tagmemic, stratificational, and other approaches
      The effect of Chomsky's ideas has been phenomenal. It is hardly an exaggeration to say that there is no major theoretical issue in linguistics today that is debated in terms other than those in which he has chosen to define it, and every school of linguistics tends to define its position in relation to his. Among the rival schools are tagmemics, stratificational grammar, and the Prague school. Tagmemics is the system of linguistic analysis developed by the U.S. linguist Kenneth L. Pike (Pike, Kenneth L.) and his associates in connection with their work as Bible translators. Its foundations were laid during the 1950s, when Pike differed from the post-Bloomfieldian structuralists on a number of principles, and it has been further elaborated since then. Tagmemic analysis has been used for analyzing a great many previously unrecorded languages, especially in Central and South America and in West Africa. Stratificational grammar, developed by a U.S. linguist, Sydney M. Lamb (Lamb, Sydney M.), has been seen by some linguists as an alternative to transformational grammar. Not yet fully expounded or widely exemplified in the analysis of different languages, stratificational grammar is perhaps best characterized as a radical modification of post-Bloomfieldian linguistics, but it has many features that link it with European structuralism. The Prague school has been mentioned above for its importance in the period immediately following the publication of Saussure's Cours. Many of its characteristic ideas (in particular, the notion of distinctive features in phonology) have been taken up by other schools. But there has been further development in Prague of the functional approach to syntax (see below). The work of M.A.K. Halliday in England derived much of its original inspiration from Firth (above), but Halliday provided a more systematic and comprehensive theory of the structure of language than Firth had, and it has been quite extensively illustrated.

Methods of synchronic linguistic analysis

Structural linguistics
      This section is concerned mainly with a version of structuralism (which may also be called descriptive linguistics) developed by scholars working in a post-Bloomfieldian tradition.

      With the great progress made in phonetics in the late 19th century, it had become clear that the question whether two speech sounds were the same or not was more complex than might appear at first sight. Two utterances of what was taken to be the same word might differ quite perceptibly from one occasion of utterance to the next. Some of this variation could be attributed to a difference of dialect or accent and is of no concern here. But even two utterances of the same word by the same speaker might vary from one occasion to the next. Variation of this kind, though it is generally less obvious and would normally pass unnoticed, is often clear enough to the trained phonetician and is measurable instrumentally. It is known that the “same” word is being uttered, even if the physical signal produced is variable, in part, because the different pronunciations of the same word will cluster around some acoustically identifiable norm. But this is not the whole answer, because it is actually impossible to determine norms of pronunciation in purely acoustic terms. Once it has been decided what counts as “sameness” of sound from the linguistic point of view, the permissible range of variation for particular sounds in particular contexts can be measured, and, within certain limits, the acoustic cues for the identification of utterances as “the same” can be determined.

      What is at issue is the difference between phonetic and phonological (or phonemic) identity, and for these purposes it will be sufficient to define phonetic identity in terms solely of acoustic “sameness.” Absolute phonetic identity is a theoretical ideal never fully realized. From a purely phonetic point of view, sounds are more or less similar, rather than absolutely the same or absolutely different. Speech sounds considered as units of phonetic analysis in this article are called phones, and, following the normal convention, are represented by enclosing the appropriate alphabetic symbol in square brackets. Thus [p] will refer to a p sound (i.e., what is described more technically as a voiceless, bilabial stop); and [pit] will refer to a complex of three phones—a p sound, followed by an i sound, followed by a t sound. A phonetic transcription may be relatively broad (omitting much of the acoustic detail) or relatively narrow (putting in rather more of the detail), according to the purpose for which it is intended. A very broad transcription will be used in this article except when finer phonetic differences must be shown.

      Phonological, or phonemic, identity was referred to above as “sameness of sound from the linguistic point of view.” Considered as phonological units—i.e., from the point of view of their function in the language—sounds are described as phonemes (phoneme) and are distinguished from phones by enclosing their appropriate symbol (normally, but not necessarily, an alphabetic one) between two slash marks. Thus /p/ refers to a phoneme that may be realized on different occasions of utterance or in different contexts by a variety of more or less different phones. Phonological identity, unlike phonetic similarity, is absolute: two phonemes are either the same or different, they cannot be more or less similar. For example, the English words “bit” and “pit” differ phonemically in that the first has the phoneme /b/ and the second has the phoneme /p/ in initial position. As the words are normally pronounced, the phonetic realization of /b/ will differ from the phonetic realization of /p/ in a number of different ways: it will be at least partially voiced (i.e., there will be some vibration of the vocal cords), it will be without aspiration (i.e., there will be no accompanying slight puff of air, as there will be in the case of the phone realizing /p/), and it will be pronounced with less muscular tension. It is possible to vary any one or all of these contributory differences, making the phones in question more or less similar, and it is possible to reduce the phonetic differences to the point that the hearer cannot be certain which word, “bit” or “pit,” has been uttered. But it must be either one or the other; there is no word with an initial sound formed in the same manner as /p/ or /b/ that is halfway between the two. This is what is meant by saying that phonemes are absolutely distinct from one another—they are discrete rather than continuously variable.

      How it is known whether two phones realize the same phoneme or not is dealt with differently by different schools of linguists. The “orthodox” post-Bloomfieldian school takes the first criterion to be phonetic similarity. Two phones are not said to realize the same phoneme unless they are sufficiently similar. What is meant by “sufficiently similar” is rather vague, but it must be granted that for every phoneme there is a permissible range of variation in the phones that realize it. As far as occurrence in the same context goes, there are no serious problems. More critical is the question of whether two phones occurring in different contexts can be said to realize the same phoneme or not. To take a standard example from English: the phone that occurs at the beginning of the word “pit” differs from the phone that occurs after the initial /s/ of “spit.” The “p sound” occurring after the /s/ is unaspirated (i.e., it is pronounced without any accompanying slight puff of air). The aspirated and unaspirated “p sounds” may be symbolized rather more narrowly as [ph] and [p], respectively. The question then is whether [ph] and [p] realize the same phoneme /p/ or whether each realizes a different phoneme. They satisfy the criterion of phonetic similarity, but this, though a necessary condition of phonemic identity, is not a sufficient one.

      The next question is whether there is any pair of words in which the two phones are in minimal contrast (or opposition); that is, whether there is any context in English in which the occurrence of the one rather than the other has the effect of distinguishing two or more words (in the way that [ph] versus [b] distinguishes the so-called minimal pairs “pit” and “bit,” “pan” and “ban,” and so on). If there is, it can be said that, despite their phonetic similarity, the two phones realize (or “belong to”) different phonemes—that the difference between them is phonemic. If there is no context in which the two phones are in contrast (or opposition) in this sense, it can be said that they are variants of the same phoneme—that the difference between them is nonphonemic. Thus, the difference between [ph] and [p] in English is nonphonemic; the two sounds realize, or belong to, the same phoneme, namely /p/. In several other languages—e.g., Hindi—the contrast between such sounds as [ph] and [p] is phonemic, however. The question is rather more complicated than it has been represented here. In particular, it should be noted that [p] is phonetically similar to [b] as well as to [ph] and that, although [ph] and [b] are in contrast, [p] and [b] are not. It would thus be possible to regard [p] and [b] as variants of the same phoneme. Most linguists, however, have taken the alternative view, assigning [p] to the same phoneme as [ph]. Here it will suffice to note that the criteria of phonetic similarity and lack of contrast do not always uniquely determine the assignment of phones to phonemes. Various supplementary criteria may then be invoked.

      Phones that can occur and do not contrast in the same context are said to be in free variation in that context, and, as has been shown, there is a permissible range of variation for the phonetic realization of all phonemes. More important than free variation in the same context, however, is systematically determined variation according to the context in which a given phoneme occurs. To return to the example used above: [p] and [ph], though they do not contrast, are not in free variation either. Each of them has its own characteristic positions of occurrence, and neither occurs, in normal English pronunciation, in any context characteristic for the other (e.g., only [ph] occurs at the beginning of a word, and only [p] occurs after s). This is expressed by saying that they are in complementary distribution. (The distribution of an element is the whole range of contexts in which it can occur.) Granted that [p] and [ph] are variants of the same phoneme /p/, it can be said that they are contextually, or positionally, determined variants of it. To use the technical term, they are allophones of /p/. The allophones of a phoneme, then, are its contextually determined variants and they are in complementary distribution.
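
      The distributional reasoning just described can be stated procedurally. The sketch below is a minimal illustration, using an invented broad transcription in which [ph] marks the aspirated and [p] the unaspirated phone; it checks two things over a toy word list: whether two phones ever distinguish a minimal pair, and whether their environments overlap at all (an empty overlap is complementary distribution).

```python
# Toy illustration of the distributional tests described above.
# Words are tuples of phone symbols in a very broad, invented transcription.
WORDS = [
    ("ph", "i", "t"),        # "pit"  -- aspirated [ph] word-initially
    ("b", "i", "t"),         # "bit"
    ("ph", "a", "n"),        # "pan"
    ("b", "a", "n"),         # "ban"
    ("s", "p", "i", "t"),    # "spit" -- unaspirated [p] after [s]
    ("s", "p", "a", "n"),    # "span"
]

def environments(phone, words):
    """All (preceding, following) contexts in which a phone occurs; '#' marks a word boundary."""
    envs = set()
    for word in words:
        for i, p in enumerate(word):
            if p == phone:
                before = word[i - 1] if i > 0 else "#"
                after = word[i + 1] if i < len(word) - 1 else "#"
                envs.add((before, after))
    return envs

def minimally_contrast(a, b, words):
    """True if replacing phone a by phone b in some word yields another word in the list."""
    wordset = set(words)
    return any(
        word[:i] + (b,) + word[i + 1:] in wordset
        for word in words
        for i, p in enumerate(word)
        if p == a
    )

print(minimally_contrast("ph", "b", WORDS))                  # True  -> a phonemic contrast
print(minimally_contrast("p", "b", WORDS))                   # False -> no minimal pair
print(environments("ph", WORDS) & environments("p", WORDS))  # set() -> complementary distribution
```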

      The post-Bloomfieldians made the assignment of phones to phonemes subject to what is now generally referred to as the principle of bi-uniqueness. The phonemic specification of a word or utterance was held to determine uniquely its phonetic realization (except for free variation), and, conversely, the phonetic description of a word or utterance was held to determine uniquely its phonemic analysis. Thus, if two words or utterances are pronounced alike, then they must receive the same phonemic description; conversely, two words or utterances that have been given the same phonemic analysis must be pronounced alike. The principle of bi-uniqueness was also held to imply that, if a given phone was assigned to a particular phoneme in one position of occurrence, then it must be assigned to the same phoneme in all its other positions of occurrence; it could not be the allophone of one phoneme in one context and of another phoneme in other contexts.

      A second important principle of the post-Bloomfieldian approach was its insistence that phonemic analysis should be carried out prior to and independently of grammatical analysis. Neither this principle nor that of bi-uniqueness was at all widely accepted outside the post-Bloomfieldian school, and they have been abandoned by the generative phonologists (see below).

      Phonemes of the kind referred to so far are segmental; they are realized by consonantal or vocalic segments of words, and they can be said to occur in a certain order relative to one another. For example, in the phonemic representation of the word “bit,” the phoneme /b/ precedes /i/, which precedes /t/. But nonsegmental, or suprasegmental, aspects of the phonemic realization of words and utterances may also be functional in a language. In English, for example, the noun “import” differs from the verb “import” in that the former is accented on the first and the latter on the second syllable. This is called a stress accent: the accented syllable is pronounced with greater force or intensity. Many other languages distinguish words suprasegmentally by tone. For example, in Mandarin Chinese the words hào “day” and hǎo “good” are distinguished from one another in that the first has a falling tone and the second a falling-rising tone; these are realized, respectively, as (1) a fall in the pitch of the syllable from high to low and (2) a change in the pitch of the syllable from medium to low and back to medium. Stress and tone are suprasegmental in the sense that they are “superimposed” upon the sequence of segmental phonemes. The term tone is conventionally restricted by linguists to phonologically relevant variations of pitch at the level of words. Intonation, which is found in all languages, is the variation in the pitch contour or pitch pattern of whole utterances, of the kind that distinguishes (either of itself or in combination with some other difference) statements from questions or indicates the mood or attitude of the speaker (as hesitant, surprised, angry, and so forth). Stress, tone, and intonation do not exhaust the phonologically relevant suprasegmental features found in various languages, but they are among the most important.

      A complete phonological description of a language includes all the segmental phonemes and specifies which allophones occur in which contexts. It also indicates which sequences of phonemes are possible in the language and which are not: it will indicate, for example, that the sequences /bl/ and /br/ are possible at the beginning of English words but not /bn/ or /bm/. A phonological description also identifies and states the distribution of the suprasegmental features. Just how this is to be done, however, has been rather more controversial in the post-Bloomfieldian tradition. Differences between the post-Bloomfieldian approach to phonology and approaches characteristic of other schools of structural linguistics will be treated below.
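
      The phonotactic part of such a description can likewise be thought of as an explicit list or rule set. The fragment below is only a sketch under the assumption that permissible word-initial clusters are simply enumerated; the list given is partial and illustrative, not a full statement of English phonotactics.

```python
# A partial, illustrative list of word-initial consonant clusters permitted in English.
PERMITTED_ONSETS = {"bl", "br", "pl", "pr", "tr", "dr", "kl", "kr", "sp", "st", "sk", "spl", "spr"}

def possible_initial_cluster(cluster):
    """Return True if the cluster is listed as a permissible English word onset."""
    return cluster in PERMITTED_ONSETS

for cluster in ("bl", "br", "bn", "bm"):
    print(cluster, possible_initial_cluster(cluster))
# bl True, br True, bn False, bm False -- matching the examples in the text
```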

      The grammatical description of many, if not all, languages is conveniently divided into two complementary sections: morphology and syntax. The relationship between them, as generally stated, is as follows: morphology accounts for the internal structure of words, and syntax describes how words are combined to form phrases, clauses, and sentences.

      There are many words in English that are fairly obviously analyzable into smaller grammatical units. For example, the word “unacceptability” can be divided into un-, accept, abil-, and -ity (abil- being a variant of -able). Of these, at least three are minimal grammatical units, in the sense that they cannot be analyzed into yet smaller grammatical units—un-, abil-, and -ity. The status of accept, from this point of view, is somewhat uncertain. Given the existence of such forms as accede and accuse, on the one hand, and of except, exceed, and excuse, on the other, one might be inclined to analyze accept into ac- (which might subsequently be recognized as a variant of ad-) and -cept. The question is left open. Minimal grammatical units like un-, abil-, and -ity are what Bloomfield called morphemes; he defined them in terms of the “partial phonetic-semantic resemblance” holding within sets of words. For example, “unacceptable,” “untrue,” and “ungracious” are phonetically (or phonologically) similar as far as the first syllable is concerned and are similar in meaning in that each of them is negative by contrast with a corresponding positive adjective (“acceptable,” “true,” “gracious”). This “partial phonetic-semantic resemblance” is accounted for by noting that the words in question contain the same morpheme (namely, un-) and that this morpheme has a certain phonological form and a certain meaning.

      Bloomfield's definition of the morpheme in terms of “partial phonetic-semantic resemblance” was considerably modified and, eventually, abandoned entirely by some of his followers. Whereas Bloomfield took the morpheme to be an actual segment of a word, others defined it as being a purely abstract unit, and the term morph was introduced to refer to the actual word segments. The distinction between morpheme and morph (which is, in certain respects, parallel to the distinction between phoneme and phone) may be explained by means of an example. If a morpheme in English is posited with the function of accounting for the grammatical difference between singular and plural nouns, it may be symbolized by enclosing the term plural within brace brackets. Now the morpheme {plural} is represented in a number of different ways. Most plural nouns in English differ from the corresponding singular forms in that they have an additional final segment. In the written forms of these words, it is either -s or -es (e.g., “cat” : “cats”; “dog” : “dogs”; “fish” : “fishes”). The word segments written -s or -es are morphs. So also is the word segment written -en in “oxen.” All these morphs represent the same morpheme. But there are other plural nouns in English that differ from the corresponding singular forms in other ways (e.g., “mouse” : “mice”; “criterion” : “criteria”; and so on) or not at all (e.g., “this sheep” : “these sheep”). Within the post-Bloomfieldian framework no very satisfactory account of the formation of these nouns could be given. But it was clear that they contained (in some sense) the same morpheme as the more regular plurals.

      Morphs that are in complementary distribution and represent the same morpheme are said to be allomorphs of that morpheme. For example, the regular plurals of English nouns are formed by adding one of three morphs on to the form of the singular: /s/, /z/, or /iz/ (in the corresponding written forms both /s/ and /z/ are written -s and /iz/ is written -es). Their distribution is determined by the following principle: if the morph to which they are to be added ends in a “sibilant” sound (e.g., s, z, sh, ch), then the syllabic allomorph /iz/ is selected (e.g., fish-es /fiš-iz/, match-es /mač-iz/); otherwise the nonsyllabic allomorphs are selected, the voiceless allomorph /s/ with morphs ending in a voiceless consonant (e.g., cat-s /kat-s/) and the voiced allomorph /z/ with morphs ending in a vowel or voiced consonant (e.g., flea-s /fli-z/, dog-s /dog-z/). These three allomorphs, it will be evident, are in complementary distribution, and the alternation between them is determined by the phonological structure of the preceding morph. Thus the choice is phonologically conditioned.
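
      Because the alternation is determined entirely by the phonological shape of the preceding morph, the choice of allomorph can be stated as a simple decision procedure. The sketch below is an illustration of the principle only; the crude transcriptions and the symbol sets (with š and č standing for the sounds spelled sh and ch) are simplifying assumptions made for the example.

# Sketch of the phonologically conditioned selection of the plural allomorph.
# Stems are given in a crude phonemic transcription (an assumption made for
# the example); š, ž, č, ǰ stand for the "sh", "zh", "ch", "j" sounds.
SIBILANTS = {"s", "z", "š", "ž", "č", "ǰ"}
VOICELESS = {"p", "t", "k", "f", "θ"}

def plural_allomorph(stem: str) -> str:
    """Select /iz/, /s/, or /z/ according to the final sound of the stem."""
    final = stem[-1]
    if final in SIBILANTS:
        return "iz"        # syllabic allomorph: fish-es, match-es
    if final in VOICELESS:
        return "s"         # voiceless allomorph: cat-s
    return "z"             # voiced allomorph: dog-s, flea-s

for stem in ("fiš", "mač", "kat", "dog", "fli"):
    print(stem + "-" + plural_allomorph(stem))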

      Very similar is the alternation between the three principal allomorphs of the past participle ending, /id/, /t/, and /d/, all of which correspond to the -ed of the written forms. If the preceding morph ends with /t/ or /d/, then the syllabic allomorph /id/ is selected (e.g., wait-ed /weit-id/). Otherwise, one of the nonsyllabic allomorphs is selected—the voiceless allomorph /t/ when the preceding morph ends with a voiceless consonant (e.g., pack-ed /pak-t/) and the voiced allomorph /d/ when the preceding morph ends with a vowel or voiced consonant (e.g., row-ed /rou-d/; tame-d /teim-d/). This is another instance of phonological conditioning. Phonological conditioning may be contrasted with the principle that determines the selection of yet another allomorph of the past participle morpheme. The final /n/ of show-n or see-n (which marks them as past participles) is not determined by the phonological structure of the morphs show and see. For each English word that is similar to “show” and “see” in this respect, it must be stated as a synchronically inexplicable fact that it selects the /n/ allomorph. This is called grammatical conditioning. There are various kinds of grammatical conditioning.
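
      The contrast between the two kinds of conditioning can be made explicit in the same style: grammatically conditioned forms are simply listed, and the phonological rule applies only to stems that are not listed. The transcriptions and the tiny exception list in the sketch below are assumptions made for the illustration.

# Sketch: grammatically conditioned allomorphs of the past participle must be
# listed item by item; phonological conditioning covers the remaining stems.
GRAMMATICALLY_CONDITIONED = {"šou": "n", "si": "n"}   # show-n, see-n (illustrative)
VOICELESS = {"p", "t", "k", "f", "θ", "s", "š", "č"}

def past_participle_allomorph(stem: str) -> str:
    if stem in GRAMMATICALLY_CONDITIONED:      # synchronically inexplicable: listed
        return GRAMMATICALLY_CONDITIONED[stem]
    if stem[-1] in ("t", "d"):
        return "id"                            # wait-ed /weit-id/
    if stem[-1] in VOICELESS:
        return "t"                             # pack-ed /pak-t/
    return "d"                                 # row-ed /rou-d/, tame-d /teim-d/

for stem in ("weit", "pak", "rou", "šou"):
    print(stem + "-" + past_participle_allomorph(stem))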

      Alternation of the kind illustrated above for the allomorphs of the plural morpheme and the /id/, /d/, and /t/ allomorphs of the past participle is frequently referred to as morphophonemic. Some linguists have suggested that it should be accounted for not by setting up three allomorphs each with a distinct phonemic form but by setting up a single morph in an intermediate morphophonemic representation. Thus, the regular plural morph might be said to be composed of the morphophoneme /Z/ and the most common past-participle morph of the morphophoneme /D/. General rules of morphophonemic interpretation would then convert /Z/ and /D/ to their appropriate phonetic form according to context. This treatment of the question foreshadows, on the one hand, the stratificational treatment and, on the other, the generative approach, though they differ considerably in other respects.

      An important concept in grammar and, more particularly, in morphology is that of free and bound forms. A bound form is one that cannot occur alone as a complete utterance (in some normal context of use). For example, -ing is bound in this sense, whereas wait is not, nor is waiting. Any form that is not bound is free. Bloomfield based his definition of the word on this distinction between bound and free forms. Any free form consisting entirely of two or more smaller free forms was said to be a phrase (e.g., “poor John” or “ran away”), and phrases were to be handled within syntax. Any free form that was not a phrase was defined to be a word and to fall within the scope of morphology. One of the consequences of Bloomfield's definition of the word was that morphology became the study of constructions involving bound forms. The so-called isolating languages, which make no use of bound forms (e.g., Vietnamese), would have no morphology.

      The principal division within morphology is between inflection and derivation (or word formation). Roughly speaking, inflectional constructions can be defined as yielding sets of forms that are all grammatically distinct forms of single vocabulary items, whereas derivational constructions yield distinct vocabulary items. For example, “sings,” “singing,” “sang,” and “sung” are all inflectional forms of the vocabulary item traditionally referred to as “the verb to sing”; but “singer,” which is formed from “sing” by the addition of the morph -er (just as “singing” is formed by the addition of -ing), is one of the forms of a different vocabulary item. When this rough distinction between derivation and inflection is made more precise, problems occur. The principal consideration, undoubtedly, is that inflection is more closely integrated with and determined by syntax. But the various formal criteria that have been proposed to give effect to this general principle are not uncommonly in conflict in particular instances, and it probably must be admitted that the distinction between derivation and inflection, though clear enough in most cases, is in the last resort somewhat arbitrary.

      Bloomfield and most linguists have discussed morphological constructions in terms of processes. Of these, the most widespread throughout the languages of the world is affixation (affix); i.e., the attachment of an affix to a base. For example, the word “singing” can be described as resulting from the affixation of -ing to the base sing. (If the affix is put in front of the base, it is a prefix; if it is put after the base, it is a suffix; and if it is inserted within the base, splitting it into two discontinuous parts, it is an infix.) Other morphological processes recognized by linguists need not be mentioned here, but reference may be made to the fact that many of Bloomfield's followers from the mid-1940s were dissatisfied with the whole notion of morphological processes. Instead of saying that -ing was affixed to sing they preferred to say that sing and -ing co-occurred in a particular pattern or arrangement, thereby avoiding the implication that sing is in some sense prior to or more basic than -ing. The distinction of morpheme and morph (and the notion of allomorphs) was developed in order to make possible the description of the morphology and syntax of a language in terms of “arrangements” of items rather than in terms of “processes” operating upon more basic items. Nowadays, the opposition to “processes” is, except among the stratificationalists, almost extinct. It has proved to be cumbersome, if not impossible, to describe the relationship between certain linguistic forms without deriving one from the other or both from some common underlying form, and most linguists no longer feel that this is in any way reprehensible.

      Syntax, for Bloomfield, was the study of free forms that were composed entirely of free forms. Central to his theory of syntax were the notions of form classes and constituent structure. (These notions were also relevant, though less central, in the theory of morphology.) Bloomfield defined form classes, rather imprecisely, in terms of some common “recognizable phonetic or grammatical feature” shared by all the members. He gave as examples the form class consisting of “personal substantive expressions” in English (defined as “the forms that, when spoken with exclamatory final pitch, are calls for a person's presence or attention”—e.g., “John,” “Boy,” “Mr. Smith”); the form class consisting of “infinitive expressions” (defined as “forms which, when spoken with exclamatory final pitch, have the meaning of a command”—e.g., “run,” “jump,” “come here”); the form class of “nominative substantive expressions” (e.g., “John,” “the boys”); and so on. It should be clear from these examples that form classes are similar to, though not identical with, the traditional parts of speech and that one and the same form can belong to more than one form class.

      What Bloomfield had in mind as the criterion for form class membership (and therefore of syntactic equivalence) may best be expressed in terms of substitutability. Form classes are sets of forms (whether simple or complex, free or bound), any one of which may be substituted for any other in a given construction or set of constructions throughout the sentences of the language.

 The smaller forms into which a larger form may be analyzed are its constituents, and the larger form is a construction. For example, the phrase “poor John” is a construction analyzable into, or composed of, the constituents “poor” and “John.” Because there is no intermediate unit of which “poor” and “John” are constituents that is itself a constituent of the construction “poor John,” the forms “poor” and “John” may be described not only as constituents but also as immediate constituents (immediate constituent analysis) of “poor John.” Similarly, the phrase “lost his watch” is composed of three word forms—“lost,” “his,” and “watch”—all of which may be described as constituents of the construction. Not all of them, however, are its immediate constituents. The forms “his” and “watch” combine to make the intermediate construction “his watch”; it is this intermediate unit that combines with “lost” to form the larger phrase “lost his watch.” The immediate constituents of “lost his watch” are “lost” and “his watch”; the immediate constituents of “his watch” are the forms “his” and “watch.” By the constituent structure of a phrase or sentence is meant the hierarchical organization of the smallest forms of which it is composed (its ultimate constituents) into layers of successively more inclusive units. Viewed in this way, the sentence “Poor John lost his watch” is more than simply a sequence of five word forms associated with a particular intonation pattern. It is analyzable into the immediate constituents “poor John” and “lost his watch,” and each of these phrases is analyzable into its own immediate constituents and so on, until, at the last stage of the analysis, the ultimate constituents of the sentence are reached. The constituent structure of the whole sentence is represented by means of a tree diagram in Figure 1—>.
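
      A hierarchical organization of this kind is naturally represented as a nested grouping of forms. The sketch below merely encodes the bracketing just described as nested tuples and recovers the ultimate constituents of the sentence; it adds nothing to the analysis itself.

# Sketch: the constituent structure of "Poor John lost his watch" as nested
# tuples, each tuple listing the immediate constituents of a construction.
sentence = (("poor", "John"), ("lost", ("his", "watch")))

def ultimate_constituents(form):
    """Flatten a construction into the smallest forms of which it is composed."""
    if isinstance(form, str):
        return [form]
    parts = []
    for constituent in form:
        parts.extend(ultimate_constituents(constituent))
    return parts

print(ultimate_constituents(sentence))
# ['poor', 'John', 'lost', 'his', 'watch']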

 Each form, whether it is simple or composite, belongs to a certain form class. Using arbitrarily selected letters to denote the form classes of English, “poor” may be a member of the form class A, “John” of the class B, “lost” of the class C, “his” of the class D, and “watch” of the class E. Because “poor John” is syntactically equivalent to (i.e., substitutable for) “John,” it is to be classified as a member of B. So too, it can be assumed, is “his watch.” In the case of “lost his watch” there is a problem. There are very many forms—including “lost,” “ate,” and “stole”—that can occur, as here, in constructions with a member of B and can also occur alone; for example, “lost” is substitutable for “stole the money,” as “stole” is substitutable for either or for “lost his watch.” This being so, one might decide to classify constructions like “lost his watch” as members of C. On the other hand, there are forms that—though they are substitutable for “lost,” “ate,” “stole,” and so on when these forms occur alone—cannot be used in combination with a following member of B (cf. “died,” “existed”); and there are forms that, though they may be used in combination with a following member of B, cannot occur alone (cf. “enjoyed”). The question is whether one respects the traditional distinction between transitive and intransitive verb forms. It may be decided, then, that “lost,” “stole,” “ate” and so forth belong to one class, C (the class to which “enjoyed” belongs), when they occur “transitively” (i.e., with a following member of B as their object) but to a different class, F (the class to which “died” belongs), when they occur “intransitively.” Finally, it can be said that the whole sentence “Poor John lost his watch” is a member of the form class G. Thus the constituent structure not only of “Poor John lost his watch” but of a whole set of English sentences can be represented by means of the tree diagram given in Figure 2—>. New sentences of the same type can be constructed by substituting actual forms for the class labels.

 Any construction that belongs to the same form class as at least one of its immediate constituents is described as endocentric; the only endocentric construction in the model sentence above is “poor John.” All the other constructions, according to the analysis, are exocentric. This is clear from the fact that in Figure 2—> the letters at the nodes above every phrase other than the phrase A + B (i.e., “poor John,” “old Harry,” and so on) are different from any of the letters at the ends of the lower branches connected directly to these nodes. For example, the phrase D + E (i.e., “his watch,” “the money,” and so forth) has immediately above it a node labelled B, rather than either D or E. Endocentric constructions fall into two types: subordinating and coordinating. If attention is confined, for simplicity, to constructions composed of no more than two immediate constituents, it can be said that subordinating constructions are those in which only one immediate constituent is of the same form class as the whole construction, whereas coordinating constructions are those in which both constituents are of the same form class as the whole construction. In a subordinating construction (e.g., “poor John”), the constituent that is syntactically equivalent to the whole construction is described as the head, and its partner is described as the modifier: thus, in “poor John,” the form “John” is the head, and “poor” is its modifier. An example of a coordinating construction is “men and women,” in which, it may be assumed, the immediate constituents are the word “men” and the word “women,” each of which is syntactically equivalent to “men and women.” (It is here implied that the conjunction “and” is not a constituent, properly so called, but an element that, like the relative order of the constituents, indicates the nature of the construction involved. Not all linguists have held this view.)
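
      Once every node carries a form-class label, the distinction can be checked mechanically by comparing the label of a construction with the labels of its immediate constituents. The labelling in the sketch below follows the class letters introduced above; assigning “lost his watch” to the class F is an assumption consistent with the description of Figure 2.

# Sketch: a construction is a pair (form-class label, immediate constituents);
# a lexical form is a pair (label, word). Class letters follow the text.
poor_john      = ("B", [("A", "poor"), ("B", "John")])
his_watch      = ("B", [("D", "his"), ("E", "watch")])
lost_his_watch = ("F", [("C", "lost"), his_watch])      # class F assumed
whole_sentence = ("G", [poor_john, lost_his_watch])

def is_endocentric(construction) -> bool:
    """Endocentric: the construction belongs to the same form class as at
    least one of its immediate constituents."""
    label, constituents = construction
    return any(part[0] == label for part in constituents)

for name, form in [("poor John", poor_john), ("his watch", his_watch),
                   ("lost his watch", lost_his_watch),
                   ("Poor John lost his watch", whole_sentence)]:
    print(name + ":", "endocentric" if is_endocentric(form) else "exocentric")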

      One reason for giving theoretical recognition to the notion of constituent is that it helps to account for the ambiguity of certain constructions. A classic example is the phrase “old men and women,” which may be interpreted in two different ways according to whether one associates “old” with “men and women” or just with “men.” Under the first of the two interpretations, the immediate constituents are “old” and “men and women”; under the second, they are “old men” and “women.” The difference in meaning cannot be attributed to any one of the ultimate constituents but results from a difference in the way in which they are associated with one another. Ambiguity of this kind is referred to as syntactic ambiguity. Not all syntactic ambiguity is satisfactorily accounted for in terms of constituent structure.

      Bloomfield thought that semantics, or the study of meaning, was the weak point in the scientific investigation of language and would necessarily remain so until the other sciences whose task it was to describe the universe and man's place in it had advanced beyond their present state. In his textbook Language (1933), he had himself adopted a behaviouristic theory of meaning, defining the meaning of a linguistic form as “the situation in which the speaker utters it and the response which it calls forth in the hearer.” Furthermore, he subscribed, in principle at least, to a physicalist thesis, according to which all science should be modelled upon the so-called exact sciences and all scientific knowledge should be reducible, ultimately, to statements made about the properties of the physical world. The reason for his pessimism concerning the prospects for the study of meaning was his feeling that it would be a long time before a complete scientific description of the situations in which utterances were produced and the responses they called forth in their hearers would be available. At the time that Bloomfield was writing, physicalism was more widely held than it is today, and it was perhaps reasonable for him to believe that linguistics should eschew mentalism and concentrate upon the directly observable. As a result, for some 30 years after the publication of Bloomfield's textbook, the study of meaning was almost wholly neglected by his followers; most American linguists who received their training during this period had no knowledge of, still less any interest in, the work being done elsewhere in semantics.

      Two groups of scholars may be seen to have constituted an exception to this generalization: anthropologically minded linguists and linguists concerned with Bible translation. Much of the description of the indigenous languages of America has been carried out since the days of Boas and his most notable pupil Sapir by scholars who were equally proficient both in anthropology and in descriptive linguistics; such scholars have frequently added to their grammatical analyses of languages some discussion of the meaning of the grammatical categories and of the correlations between the structure of the vocabularies and the cultures in which the languages operated. It has already been pointed out that Boas and Sapir and, following them, Whorf were attracted by Humboldt's view of the interdependence of language and culture and of language and thought. This view was quite widely held by American anthropological linguists (although many of them would not go as far as Whorf in asserting the dependence of thought and conceptualization upon language).

      Also of considerable importance in the description of the indigenous languages of America has been the work of linguists trained by the American Bible Society and the Summer Institute of Linguistics, a group of Protestant missionary linguists. Because their principal aim is to produce translations of the Bible, they have necessarily been concerned with meaning as well as with grammar and phonology. This has tempered the otherwise fairly orthodox Bloomfieldian approach characteristic of the group.

      The two most important developments evident in recent work in semantics are, first, the application of the structural approach to the study of meaning and, second, a better appreciation of the relationship between grammar and semantics. The second of these developments will be treated in the following section on Transformational-generative grammar (linguistics). The first, structural semantics, goes back to the period preceding World War II and is exemplified in a large number of publications, mainly by German scholars—Jost Trier, Leo Weisgerber, and their collaborators.

      The structural approach to semantics is best explained by contrasting it with the more traditional “atomistic” approach, according to which the meaning of each word in the language is described, in principle, independently of the meaning of all other words. The structuralist takes the view that the meaning of a word is a function of the relationships it contracts with other words in a particular lexical field, or subsystem, and that it cannot be adequately described except in terms of these relationships. For example, the colour terms in particular languages constitute a lexical field, and the meaning of each term depends upon the place it occupies in the field. Although the denotation of each of the words “green,” “blue,” and “yellow” in English is somewhat imprecise at the boundaries, the position that each of them occupies relative to the other terms in the system is fixed: “green” is between “blue” and “yellow,” so that the phrases “greenish yellow” or “yellowish green” and “bluish green” or “greenish blue” are used to refer to the boundary areas. Knowing the meaning of the word “green” implies knowing what cannot as well as what can be properly described as green (and knowing of the borderline cases that they are borderline cases). Languages differ considerably as to the number of basic colour terms that they recognize, and they draw boundaries within the psychophysical continuum of colour at different places. Blue, green, yellow, and so on do not exist as distinct colours in nature, waiting to be labelled differently, as it were, by different languages; they come into existence, for the speakers of particular languages, by virtue of the fact that those languages impose structure upon the continuum of colour and assign to three of the areas thus recognized the words “blue,” “green,” “yellow.”

      The language of any society is an integral part of the culture of that society, and the meanings recognized within the vocabulary of the language are learned by the child as part of the process of acquiring the culture of the society in which he is brought up. Many of the structural differences found in the vocabularies of different languages are to be accounted for in terms of cultural differences. This is especially clear in the vocabulary of kinship (kinship terminology) (to which a considerable amount of attention has been given by anthropologists and linguists), but it holds true of many other semantic fields also. A consequence of the structural differences that exist between the vocabularies of different languages is that, in many instances, it is in principle impossible to translate a sentence “literally” from one language to another.

      It is important, nevertheless, not to overemphasize the semantic incommensurability of languages. Presumably, there are many physiological and psychological constraints that, in part at least, determine one's perception and categorization of the world. It may be assumed that, when one is learning the denotation of the more basic words in the vocabulary of one's native language, attention is drawn first to what might be called the naturally salient features of the environment and that one is, to this degree at least, predisposed to identify and group objects in one way rather than another. It may also be that human beings are genetically endowed with rather more specific and linguistically relevant principles of categorization. It is possible that, although languages differ in the number of basic colour categories that they distinguish, there is a limited number of hierarchically ordered basic colour categories from which each language makes its selection and that what counts as a typical instance, or focus, of these universal colour categories is fixed and does not vary from one language to another. If this hypothesis is correct, then it is false to say, as many structural semanticists have said, that languages divide the continuum of colour in a quite arbitrary manner. But the general thesis of structuralism is unaffected, for it still remains true that each language has its own unique semantic structure even though the total structure is, in each case, built upon a substructure of universal distinctions.

Transformational-generative grammar (transformational grammar)
      A generative grammar, in the sense in which Noam Chomsky uses the term, is a system of rules, formalized with mathematical precision, that generates, without need of any information that is not represented explicitly in the system, the grammatical sentences of the language that it describes, or characterizes, and assigns to each sentence a structural description, or grammatical analysis. All the concepts introduced in this definition of “generative” grammar will be explained and exemplified in the course of this section. Generative grammars fall into several types; this exposition is concerned mainly with the type known as transformational (or, more fully, transformational-generative). Transformational grammar was initiated by Zellig S. Harris (Harris, Zellig S.) in the course of work on what he called discourse analysis (the formal analysis of the structure of continuous text). It was further developed and given a somewhat different theoretical basis by Chomsky.

Harris's grammar
      Harris distinguished within the total set of grammatical sentences in a particular language (for example, English) two complementary subsets: kernel sentences (the set of kernel sentences being described as the kernel of the grammar) and nonkernel sentences. The difference between these two subsets lies in nonkernel sentences being derived from kernel sentences by means of transformational rules. For example, “The workers rejected the ultimatum” is a kernel sentence that may be transformed into the nonkernel sentences “The ultimatum was rejected by the workers” or “Did the workers reject the ultimatum?” Each of these may be described as a transform of the kernel sentence from which it is derived. The transformational relationship between corresponding active and passive sentences (e.g., “The workers rejected the ultimatum” and “The ultimatum was rejected by the workers”) is conventionally symbolized by the rule N1 V N2 → N2 be V + en by N1, in which N stands for any noun or noun phrase, V for any transitive verb, en for the past participle morpheme, and the arrow (→) instructs one to rewrite the construction to its left as the construction to the right. (There has been some simplification of the rule as it was formulated by Harris.) This rule may be taken as typical of the whole class of transformational rules in Harris's system: it rearranges constituents (what was the first nominal, or noun, N1, in the kernel sentence is moved to the end of the transform, and what was the second nominal, N2, in the kernel sentence is moved to initial position in the transform), and it adds various elements in specified positions (be, en, and by). Other operations carried out by transformational rules include the deletion of constituents; e.g., the entire phrase “by the workers” is removed from the sentence “The ultimatum was rejected by the workers” by a rule symbolized as N2 be V+en by N1 → N2 be V+en. This transforms the construction on the left side of the arrow (which resulted from the passive transformation) by dropping the by-phrase, thus producing “The ultimatum was rejected.”
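
      Because Harris's transformations operate on sentences analyzed as strings of classified segments, their effect can be sketched as operations that reorder, insert, or delete elements of a list. The encoding below is a schematic illustration of the two rules just mentioned, not Harris's own formalism; the choice of “was” for be, and the fact that the past-tense and participle forms of “reject” coincide, gloss over the treatment of tense.

# Sketch of Harris-style transformations as operations on an analyzed string.
# A sentence is a list of (class, form) pairs; classes follow the rule above.
kernel = [("N1", "the workers"), ("V", "rejected"), ("N2", "the ultimatum")]

def passive(sentence):
    """N1 V N2 -> N2 be V+en by N1 (simplified; tense is glossed over)."""
    (_, n1), (_, v), (_, n2) = sentence
    return [("N2", n2), ("be", "was"), ("V+en", v), ("by", "by"), ("N1", n1)]

def agent_deletion(sentence):
    """N2 be V+en by N1 -> N2 be V+en (the by-phrase is deleted)."""
    return sentence[:3]

def realize(sentence):
    return " ".join(form for _, form in sentence)

print(realize(passive(kernel)))                   # the ultimatum was rejected by the workers
print(realize(agent_deletion(passive(kernel))))   # the ultimatum was rejected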

Chomsky's grammar
      Chomsky's system of transformational grammar, though it was developed on the basis of his work with Harris, differs from Harris's in a number of respects. It is Chomsky's system that has attracted the most attention and has received the most extensive exemplification and further development. As outlined in Syntactic Structures (1957), it comprised three sections, or components: the phrase-structure component, the transformational component, and the morphophonemic component. Each of these components consisted of a set of rules operating upon a certain “input” to yield a certain “output.” The notion of phrase structure may be dealt with independently of its incorporation in the larger system. In the following system of rules, S stands for Sentence, NP for Noun Phrase, VP for Verb Phrase, Det for Determiner, Aux for Auxiliary (verb), N for Noun, and V for Verb stem.

      This is a simple phrase-structure grammar. It generates and thereby defines as grammatical such sentences as “The man will hit the ball,” and it assigns to each sentence that it generates a structural description. The kind of structural description assigned by a phrase-structure grammar is, in fact, a constituent structure analysis of the sentence.

 In these rules, the arrow can be interpreted as an instruction to rewrite (this is to be taken as a technical term) whatever symbol appears to the left of the arrow as the symbol or string of symbols that appears to the right of the arrow. For example, rule (2) rewrites the symbol VP as the string of symbols Verb + NP, and it thereby defines Verb + NP to be a construction of the type VP. Or, alternatively and equivalently, it says that constructions of the type VP may have as their immediate constituents constructions of the type Verb and NP (combined in that order). Rule (2) can be thought of as creating or being associated with the tree structure in Figure 3—>.

      Rules (1)–(8) do not operate in isolation but constitute an integrated system. The symbol S (standing mnemonically for “sentence”) is designated as the initial symbol. This information is not given in the rules (1)–(8), but it can be assumed either that it is given in a kind of protocol statement preceding the grammatical rules or that there is a universal convention according to which S is always the initial symbol. It is necessary to begin with a rule that has the initial symbol on the left. Thereafter any rule may be applied in any order until no further rule is applicable; in doing so, a derivation can be constructed of one of the sentences generated by the grammar. If the rules are applied in the following order: (1), (2), (3), (3), (4), (5), (5), (6), (6), (7), (8), then assuming that “the” is selected on both applications of (5), “man” on one application of (6), and “ball” on the other, “will” on the application of (7), and “hit” on the application of (8), the following derivation of the sentence “The man will hit the ball” will have been constructed:
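
      The numbered rules themselves can be given a direct encoding. The sketch below assumes a plausible reconstruction of rules (1)–(8), inferred from the category definitions and from the derivation just described (for example, rule (1) as S → NP + VP and rule (2) as VP → Verb + NP), and enumerates every terminal string that the resulting toy grammar generates.

# Sketch of a phrase-structure grammar of the kind described. The numbered
# rule forms below are assumptions reconstructed from the surrounding text.
rules = {
    "S":    [["NP", "VP"]],        # (1)
    "VP":   [["Verb", "NP"]],      # (2)
    "NP":   [["Det", "N"]],        # (3)
    "Verb": [["Aux", "V"]],        # (4)
    "Det":  [["the"]],             # (5)
    "N":    [["man"], ["ball"]],   # (6)
    "Aux":  [["will"]],            # (7)
    "V":    [["hit"]],             # (8)
}

def expand(symbols):
    """Yield every terminal string derivable from the given symbol sequence
    by rewriting the leftmost nonterminal in all possible ways."""
    for i, symbol in enumerate(symbols):
        if symbol in rules:
            for alternative in rules[symbol]:
                yield from expand(symbols[:i] + alternative + symbols[i + 1:])
            return
    yield " ".join(symbols)

for terminal_string in sorted(expand(["S"])):
    print(terminal_string)
# the ball will hit the ball ... the man will hit the man (four strings in all)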

 Many other derivations of this sentence are possible, depending on the order in which the rules are applied. The important point is that all these different derivations are equivalent in that they can be reduced to the same tree diagram; namely, the one shown in Figure 4—>. If this is compared with the system of rules, it will be seen that each application of each rule creates or is associated with a portion (or subtree) of the tree. The tree diagram, or phrase marker, may now be considered as a structural description of the sentence “The man will hit the ball.” It is a description of the constituent structure, or phrase structure, of the sentence, and it is assigned by the rules that generate the sentence.

      It is important to interpret the term generate in a static, rather than a dynamic, sense. The statement that the grammar generates a particular sentence means that the sentence is one of the totality of sentences that the grammar defines to be grammatical or well formed. All the sentences are generated, as it were, simultaneously. The notion of generation must be interpreted as would be a mathematical formula containing variables. For example, in evaluating the formula y² + y for different values of y, one does not say that the formula itself generates these various resultant values (2, when y = 1; 6, when y = 2; etc.) one after another or at different times; one says that the formula generates them all simultaneously or, better still perhaps, timelessly. The situation is similar for a generative grammar. Although one sentence rather than another can be derived on some particular occasion by making one choice rather than another at particular places in the grammar, the grammar must be thought of as generating all sentences statically or timelessly.

      It has been noted that, whereas a phrase-structure grammar is one that consists entirely of phrase-structure rules, a transformational grammar (as formalized by Chomsky) includes both phrase-structure and transformational rules (as well as morphophonemic rules). The transformational rules depend upon the prior application of the phrase-structure rules and have the effect of converting, or transforming, one phrase marker into another. What is meant by this statement may be clarified first with reference to a purely abstract and very simple transformational grammar, in which the letters stand for constituents of a sentence (and S stands for “sentence”):

  The first five rules are phrase-structure rules (PS rules); rule (6) is a transformational rule (T rule). The output of rules (1)–(5) is the terminal string a + b + c + e + f + d + g + h, which has associated with it the structural description indicated by the phrase marker shown in Figure 5—> (left). Rule (6) applies to this terminal string of the PS rules and the associated phrase marker. It has the effect of deleting C (and the constituents of C) and permuting A and D (together with their constituents). The result is the string of symbols d + g + h + a + b, with the associated phrase marker shown in Figure 5—> (right).
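
      The effect of rule (6) can be sketched as an operation on a labelled structure. The constituency assumed below, with A dominating a and b, C dominating c, e, and f, and D dominating d, g, and h, is the flattest arrangement consistent with the terminal string and with the description of the rule; the full phrase markers of Figure 5 are not reproduced.

# Sketch of transformational rule (6): delete C, with its constituents, and
# permute A and D. The internal structure of A, C, and D is an assumption.
underlying = [("A", ["a", "b"]), ("C", ["c", "e", "f"]), ("D", ["d", "g", "h"])]

def rule_6(phrase_marker):
    """A + C + D  ->  D + A (C and everything it dominates is deleted)."""
    parts = dict(phrase_marker)
    return [("D", parts["D"]), ("A", parts["A"])]

def terminal_string(phrase_marker):
    return " + ".join(symbol for _, segment in phrase_marker for symbol in segment)

print(terminal_string(underlying))          # a + b + c + e + f + d + g + h
print(terminal_string(rule_6(underlying)))  # d + g + h + a + b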

  The phrase marker shown in Figure 5—> (left) may be described as underlying, and the phrase marker shown in Figure 5—> (right) as derived with respect to rule (6). One of the principal characteristics of a transformational rule is its transformation of an underlying phrase marker into a derived phrase marker in this way. Transformational rules, in contrast with phrase-structure rules, are also formally more heterogeneous and may have more than one symbol on the left-hand side of the arrow. The linguistic importance of these abstract considerations may be explained with reference to the relationship that holds in English between active and passive sentences.

      Chomsky's rule for relating active and passive sentences (as given in Syntactic Structures) is very similar, at first sight, to Harris's, discussed above. Chomsky's rule is:

  This rule, called the passive transformation, presupposes and depends upon the prior application of a set of phrase-structure rules. For simplicity, the passive transformation may first be considered in relation to the set of terminal strings generated by the phrase-structure rules (1)–(8) given earlier. The string “the + man + will + hit + the + ball” (with its associated phrase marker, as shown in Figure 4—>) can be treated not as an actual sentence but as the structure underlying both the active sentence “The man will hit the ball” and the corresponding passive “The ball will be hit by the man.” The passive transformation is applicable under the condition that the underlying, or “input,” string is analyzable in terms of its phrase structure as NP - Aux - V - NP (the use of subscript numerals to distinguish the two NPs in the formulation of the rule is an informal device for indicating the operation of permutation). In the phrase marker in Figure 4—> “the” + “man” are constituents of NP, “will” is a constituent of Aux, “hit” is a constituent of V, and “the” + “ball” are constituents of NP. The whole string is therefore analyzable in the appropriate sense, and the passive transformation converts it into the string “the + ball + will + be + en + hit + by + the + man.” A subsequent transformational rule will permute “en + hit” to yield “hit + en,” and one of the morphophonemic rules will then convert “hit + en” to “hit” (as “ride + en” will be converted to “ridden”; “open + en” to “opened,” and so on).
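
      The operation of the passive transformation on this example can be sketched directly. The encoding below assumes that the rule has the form NP1 - Aux - V - NP2 → NP2 - Aux + be + en - V - by + NP1, which is what the surrounding description implies; it is an illustration, not a quotation of the rule as Chomsky formulated it.

# Sketch of the passive transformation as described: if the string is
# analyzable as NP - Aux - V - NP, rearrange the two NPs and insert be, en,
# and by. The statement of the rule is reconstructed from the text.
underlying = [("NP", ["the", "man"]), ("Aux", ["will"]),
              ("V", ["hit"]), ("NP", ["the", "ball"])]

def passive(analysis):
    if [label for label, _ in analysis] != ["NP", "Aux", "V", "NP"]:
        return None                       # the rule is not applicable
    np1, aux, v, np2 = (words for _, words in analysis)
    return np2 + aux + ["be", "en"] + v + ["by"] + np1

print(" + ".join(passive(underlying)))
# the + ball + will + be + en + hit + by + the + man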

   Every transformational rule has the effect of converting an underlying phrase marker into a derived phrase marker. The manner in which the transformational rules assign derived constituent structure to their input strings is one of the major theoretical problems in the formalization of transformational grammar. Here it can be assumed not only that “be + en” is attached to Aux and “by” to NP (as indicated by the plus signs in the rule as it has been formulated above) but also that the rest of the derived structure is as shown in Figure 6—>. The phrase marker in Figure 6—> formalizes the fact, among others, that “the ball” is the subject of the passive sentence “The ball will be hit by the man,” whereas “the man” is the subject of the corresponding active “The man will hit the ball” (cf. Figure 4—>).

      Although the example above is a very simple one, and only a single transformational rule has been considered independently of other transformational rules in the same system, the passive transformation must operate, not only upon simple noun phrases like “the man” or “the ball,” but upon noun phrases that contain adjectives (“the old man”), modifying phrases (“the man in the corner”), relative clauses (“the man who checked in last night”), and so forth. The incorporation, or embedding, of these other structures within the noun phrase will be brought about by the prior application of other transformational rules. It should also be clear that the phrase-structure rules require extension to allow for the various forms of the verb (“is hitting,” “hit,” “was hitting,” “has hit,” “has been hitting,” etc.) and for the distinction of singular and plural.

      It is important to note that, unlike Harris's, Chomsky's system of transformational grammar does not convert one sentence into another: the transformational rules operate upon the structures underlying sentences and not upon actual sentences. A further point is that even the simplest sentences (i.e., kernel sentences) require the application of at least some transformational rules. Corresponding active and passive sentences, affirmative and negative sentences, declarative and interrogative sentences, and so on are formally related by deriving them from the same underlying terminal string of the phrase-structure component. The difference between kernel sentences and nonkernel sentences in Syntactic Structures (in Chomsky's later system the category of kernel sentences is not given formal recognition at all) resides in the fact that kernel sentences are generated without the application of any optional transformations. Nonkernel sentences require the application of both optional and obligatory transformations, and they differ one from another in that a different selection of optional transformations is made.

Modifications in Chomsky's grammar
 Chomsky's system of transformational grammar was substantially modified in 1965. Perhaps the most important modification was the incorporation, within the system, of a semantic component, in addition to the syntactic component and phonological component. (The phonological component may be thought of as replacing the morphophonemic component of Syntactic Structures.) The rules of the syntactic component generate the sentences of the language and assign to each not one but two structural analyses: a deep structure analysis as represented by the underlying phrase marker, and a surface structure analysis, as represented by the final derived phrase marker. The underlying phrase marker is assigned by rules of the base (roughly equivalent to the PS [Phrase-Structure] rules of the earlier system); the derived phrase marker is assigned by the transformational rules. The interrelationship of the four sets of rules is shown diagrammatically in Figure 7—>. The meaning of the sentence is derived (mainly, if not wholly) from the deep structure by means of the rules of semantic interpretation; the phonetic realization of the sentence is derived from its surface structure by means of the rules of the phonological component. The grammar (“grammar” is now to be understood as covering semantics and phonology, as well as syntax) is thus an integrated system of rules for relating the pronunciation of a sentence to its meaning. The syntax, and more particularly the base, is at the “heart” of the system, as it were: it is the base component (as the arrows in the diagram indicate) that generates the infinite class of structures underlying the well-formed sentences of a language. These structures are then given a semantic and phonetic “interpretation” by the other components.

      The base consists of two parts: a set of categorial rules and a lexicon. Taken together, they fulfill a similar function to that fulfilled by the phrase-structure rules of the earlier system. But there are many differences of detail. Among the most important is that the lexicon (which may be thought of as a dictionary of the language cast in a particular form) lists, in principle, all the vocabulary words in the language and associates with each all the syntactic, semantic, and phonological information required for the correct operation of the rules. This information is represented in terms of what are called features. For example, the entry for “boy” might say that it has the syntactic features: [+ Noun], [+ Count], [+ Common], [+ Animate], and [+ Human]. The categorial rules generate a set of phrase markers that have in them, as it were, a number of “slots” to be filled with items from the lexicon. With each such “slot” there is associated a set of features that define the kind of item that can fill the “slot.” If a phrase marker is generated with a “slot” for the head of a noun phrase specified as requiring an animate noun (i.e., a noun having the feature [+ Animate]), the item “boy” would be recognized as being compatible with this specification and could be inserted in the “slot” by the rule of lexical substitution. Similarly, it could be inserted in “slots” specified as requiring a common noun, a human noun, or a countable noun, but it would be excluded from positions that require an abstract noun (e.g., “sincerity”) or an uncountable noun (e.g., “water”). By drawing upon the syntactic information coded in feature notation in the lexicon, the categorial rules might permit such sentences as “The boy died,” while excluding (and thereby defining as ungrammatical) such nonsentences as “The boy elapsed.”
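
      The way in which such features control lexical insertion can be sketched as a simple compatibility check between the features carried by a lexical entry and the features required by a “slot.” The entry for “boy” follows the illustration in the text; the specifications given for “sincerity” and “water,” and the feature names of the slot, are assumptions added for the example.

# Sketch of feature-based lexical insertion: an item can fill a slot only if
# it carries, with the same value, every feature the slot specifies.
LEXICON = {
    "boy":       {"Noun": True, "Count": True, "Common": True,
                  "Animate": True, "Human": True},
    "sincerity": {"Noun": True, "Count": False, "Common": True,
                  "Animate": False, "Abstract": True},    # assumed entry
    "water":     {"Noun": True, "Count": False, "Common": True,
                  "Animate": False},                      # assumed entry
}

def can_fill(item: str, slot: dict) -> bool:
    entry = LEXICON[item]
    return all(entry.get(feature) == value for feature, value in slot.items())

animate_subject_slot = {"Noun": True, "Animate": True}    # e.g. the subject of "died"
for word in LEXICON:
    print(word, can_fill(word, animate_subject_slot))
# boy True, sincerity False, water False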

      One of the most controversial topics in the development of transformational grammar is the relationship between syntax and semantics. Scholars working in the field are now agreed that there is a considerable degree of interdependence between the two, and the problem is how to formalize this interdependence. One school of linguists, called generative semanticists, accept the general principles of transformational grammar but have challenged Chomsky's conception of deep structure as a separate and identifiable level of syntactic representation. In their opinion, the basic component of the grammar should consist of a set of rules for the generation of well-formed semantic representations. These would then be converted by a succession of transformational rules into strings of words with an assigned surface-structure syntactic analysis, there being no place in the passage from semantic representation to surface structure identifiable as Chomsky's deep structure. Chomsky himself has denied that there is any real difference between the two points of view and has maintained that the issue is purely one of notation. That this argument can be put forward by one party to the controversy and rejected by the other is perhaps a sufficient indication of the uncertainty of the evidence. Of greater importance than the overt issues, in so far as they are clear, is the fact that linguists are now studying much more intensively than they have in the past the complexities of the interdependence of syntax, on the one hand, and semantics and logic, on the other. Whether it will prove possible to handle all these complexities within a comprehensive generative grammar remains to be seen.

 The role of the phonological component of a generative grammar of the type outlined by Chomsky is to assign a phonetic “interpretation” to the strings of words generated by the syntactic component. These strings of words are represented in a phonological notation (taken from the lexicon) and have been provided with a surface-structure analysis by the transformational rules (see Figure 7—>). The phonological elements out of which the word forms are composed are segments consisting of what are referred to technically as distinctive features (following the usage of the Prague school, see below The Prague school (linguistics)). For example, the word form “man,” represented phonologically, is composed of three segments: the first consists of the features [+ consonantal], [+ bilabial], [+ nasal], etc.; the second of the features [+ vocalic], [+ front], [+ open], etc.; and the third of the features [+ consonantal], [+ alveolar], [+ nasal], etc. (These features should be taken as purely illustrative; there is some doubt about the definitive list of distinctive features.) Although these segments may be referred to as the “phonemes” /m/, /a/, and /n/, they should not be identified theoretically with units of the kind discussed in the section on Phonology (linguistics) under Structural linguistics. They are closer to what many American structural linguists called “morphophonemes” or the Prague school linguists labelled “archiphonemes,” being unspecified for any feature that is contextually redundant or predictable. For instance, the first segment of the phonological representation of “man” will not include the feature [+ voice]; because nasal consonants are always phonetically voiced in this position in English, the feature [+ voice] can be added to the phonetic specification by a rule of the phonological component.
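
      The underspecification described here can be sketched as a redundancy rule that adds predictable feature values to stored representations. The feature names and the rule below are illustrative assumptions in the spirit of the example, not a definitive analysis of English.

# Sketch: segments as bundles of distinctive features, left unspecified for
# predictable values; a rule of the phonological component fills them in.
man = [
    {"consonantal": True, "bilabial": True, "nasal": True},   # /m/
    {"vocalic": True,     "front": True,    "open": True},    # /a/
    {"consonantal": True, "alveolar": True, "nasal": True},   # /n/
]

def add_redundant_voicing(segments):
    """Nasal consonants are always phonetically voiced in this position in
    English, so [+ voice] need not be stored; it is supplied by rule."""
    for segment in segments:
        if segment.get("nasal") and "voice" not in segment:
            segment["voice"] = True
    return segments

for segment in add_redundant_voicing(man):
    print(segment)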

      One further important aspect of generative phonology (i.e., phonology carried out within the framework of an integrated generative grammar) should be mentioned: its dependence upon syntax. Most American structural phonologists made it a point of principle that the phonemic analysis of an utterance should be carried out without regard to its grammatical structure. This principle was controversial among American linguists and was not generally accepted outside America. Not only has the principle been rejected by the generative grammarians, but they have made the phonological description of a language much more dependent upon its syntactic analysis than has any other school of linguists. They have claimed, for example, that the phonological rules that assign different degrees of stress to the vowels in English words and phrases and alter the quality of the relatively unstressed vowel concomitantly must make reference to the derived constituent structure of sentences and not merely to the form class of the individual words or the places in which the word boundaries occur.

Tagmemics
      The system of tagmemic analysis, as presented by Kenneth L. Pike, was developed for the analysis not only of language but of all human behaviour that manifests the property of patterning. In the following treatment, only language will be discussed.

Modes of language
      Every language is said to be trimodal—i.e., structured in three modes: phonology, grammar, and lexicon. These modes are interrelated but have a considerable degree of independence and must be described in their own terms. Phonology and lexicon should not be seen as mere appendages to grammar, the former simply specifying which phonemes can combine to form morphemes (or morphs), and the latter simply listing the morphemes and other meaningful units with a description of their meaning. There are levels of structure in each of the modes, and the units of one level are not necessarily coterminous with those of another. Phonemes, for example, may combine to form syllables and syllables to form phonological words (“phonological word” is defined as the domain of some phonological process such as accentuation, assimilation, or dissimilation), but the morpheme (or morph) will not necessarily consist of an integral number of syllables, still less of a single syllable. Nor will the word as a grammatical unit necessarily coincide with the phonological word. Similarly, the units of lexical analysis, sometimes referred to as lexemes (in one sense of this term), are not necessarily identifiable as single grammatical units, whether as morphemes, words, or phrases. No priority, then, is ascribed to any one of the three modes.

      The originality of tagmemic analysis and the application of the term tagmeme is most clearly manifest in the domain of grammar. By a tagmeme is meant an element of a construction, the element in question being regarded as a composite unit, described in such terms as “slot-filler” or “function-class.” For example, one of the tagmemes required for the analysis of English at the syntactic level might be noun-as-subject, in which “noun” refers to a class of substitutable, or paradigmatically related, morphemes or words capable of fulfilling a certain grammatical function, and “subject” refers to the function that may be fulfilled by one or more classes of elements. In the tagmeme noun-as-subject—which, using the customary tagmemic symbolism, may be represented as Subject:noun—the subject slot is filled by a noun. When a particular tagmeme is identified in the analysis of an actual utterance, it is said to be manifested by the particular member of the grammatical class that occurs in the appropriate slot in the utterance. For example, in the utterance “John is asleep,” the subject tagmeme is manifested by the noun “John.” Tagmemicists insist that tagmemes, despite their bipartite structure, are single units. In grammatical analysis, the distribution of tagmemes, not simply of classes, is stated throughout the sentences of the language. Subject:noun is a different tagmeme from Object:noun, as it is also a different tagmeme from Subject:pronoun.

Hierarchy of levels
      Within the grammar of a language there is a hierarchy of levels, units of one level being composed of sequences of units of the level below. In many languages, five such levels are recognized, defined in terms of the following units: morpheme, word, phrase, clause, and sentence. (The term level is being used in a different sense from that in which it was used earlier to refer to phonology and grammar.) The difference between morphology and syntax is simply a difference between two of these five levels, no greater than the difference, for example, between the phrase level and the clause level. Normally, tagmemes at one level are manifested by units belonging to the level below: clause tagmemes by phrases, phrase tagmemes by words, and so on. Intermediate levels may, however, be skipped. For example, the subject tagmeme in a clause may be manifested by a single word in English (e.g., “John,” “water”) and not necessarily by a phrase (“the young man”).

      It is also possible for there to be loop-backs in the grammatical hierarchy of a language. This means that a unit of higher level may be embedded within the structure of a unit of lower level; for example, a clause may fill a slot within a phrase (e.g., “who arrived late,” in “the man who arrived late”).

      In regard to the notation of tagmemics, a construction is symbolized as a string of tagmemes (which commonly, though not necessarily, will be sequentially ordered according to the order in which elements manifesting the tagmemes occur in utterances). Each tagmeme is marked as obligatory or optional by having preposed to it a plus sign (+) or a plus-or-minus sign (±), respectively. For example, a formula representing the structure of a clause composed of an obligatory subject tagmeme, an obligatory predicate tagmeme, and an optional object tagmeme might be Cl = + S:n + P:v ± O:n (in which Cl stands for a clause of a certain type and n and v stand for the classes of nouns and verbs, respectively). This formula does not represent in any way the fact (if it is a fact) that the predicate tagmeme and object tagmeme together form a unit that is one of the two immediate constituents of the clause. It is one of the characteristic features of tagmemic grammar that it gives much less emphasis to the notion of constituent structure than other American approaches to grammatical analysis.
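
      A formula of this kind lends itself to an explicit statement of what counts as a clause of the type in question. The sketch below encodes the formula as a sequence of tagmemes, each marked obligatory or optional, and checks analyzed clauses against it; the encoding is an illustration and is not Pike's own notation.

# Sketch: the formula Cl = + S:n + P:v ± O:n as a list of tagmemes, each a
# triple (obligatory?, slot, class). A clause is analyzed as a sequence of
# (slot, class, manifesting form) triples.
CLAUSE_FORMULA = [("+", "S", "n"), ("+", "P", "v"), ("±", "O", "n")]

def matches(formula, clause):
    """Match an analyzed clause against the formula, tagmeme by tagmeme."""
    i = 0
    for sign, slot, cls in formula:
        if i < len(clause) and clause[i][:2] == (slot, cls):
            i += 1                      # the tagmeme is manifested
        elif sign == "+":
            return False                # an obligatory tagmeme is missing
    return i == len(clause)             # nothing left over

print(matches(CLAUSE_FORMULA, [("S", "n", "John"), ("P", "v", "sleeps")]))                   # True
print(matches(CLAUSE_FORMULA, [("S", "n", "John"), ("P", "v", "hit"), ("O", "n", "Bill")]))  # True
print(matches(CLAUSE_FORMULA, [("P", "v", "sleeps")]))                                       # False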

Stratificational grammar
      This system of analysis (whose principal advocate is Sydney M. Lamb, a U.S. linguist) is called stratificational because it is based upon the notion that every language comprises a restricted number of structural layers or strata, hierarchically related in such a way that units or combinations of units on one stratum realize units or combinations of units of the next higher stratum. The number of strata may vary from language to language. Four strata have been recognized for English, and it is probable that all languages may have at least these four: the sememic, the lexemic, the morphemic, and the phonemic strata. The sememic stratal system constitutes the semology of the language; the lexemic and morphemic stratal systems constitute the grammar (in the narrower sense of this term); and the phonemic system constitutes the phonology. In some later stratificational work, the term grammar covers the three higher stratal systems—the sememic, the lexemic, and the morphemic—and is opposed to “phonology.” The deep structure of sentences is described on the sememic stratum and the surface structure on the morphemic. In the present account, “grammar” is used in the narrower sense and will be opposed to “semology” as well as “phonology.”

      The originality of stratificational grammar does not reside in the recognition of these three major components of a linguistic description. The stratificational approach to linguistic description is distinguished from others in that it relates grammar to semology and phonology by means of the same notion of realization that it employs to relate the lexemic and the morphemic stratal systems within the grammatical component. Another distinguishing feature of stratificational grammar, in its later development at least, is its description of linguistic structure in terms of a network of relationships, rather than by means of a system of rules; linguistic units are said to be nothing more than points, or positions, in the relational network.

Technical terminology
      Lamb (Lamb, Sydney M.) has been very careful to make the terminology of stratificational grammar as consistent and perspicuous as possible; but, in fitting some of the more or less established terms into his own theoretical framework, he has reinterpreted them in a potentially confusing manner. Thus, the same terms have been used in different senses in different versions of the system. For example, “morpheme” in stratificational grammar corresponds neither to the unit to which Bloomfield (Bloomfield, Leonard) applied the term (i.e., to a word segment consisting of phonemes) nor to the more abstract grammatical unit that a Bloomfieldian morpheme might be described as representing (e.g., the past-tense morpheme that might be variously represented by such allomorphs as /id/, /t/, /d/, etc.). Lamb describes the morpheme as a unit composed of morphons (roughly equivalent to what other linguists have called morphophonemes) that is related to a combination of one or more compositional units of the stratum above, lexons, by means of the relationship of realization. For example, the word form “hated” realizes (on the morphemic stratum) a combination of two lexons, one of which, the stem, realizes the lexeme HATE and the other, the suffix, realizes the PAST TENSE lexeme; each of these two lexons is realized on the stratum below by a morpheme. Another example brings out more clearly the difference between morphemes (the minimal grammatical elements) and lexemes (the minimal meaningful elements). The word form “understood” realizes a combination of three morphemes UNDER, STAND, and PAST. UNDER and STAND jointly realize the single lexeme UNDERSTAND (whose meaning cannot be described as a function of the meanings of UNDER and STAND), whereas the single PAST morpheme directly realizes the single lexeme PAST TENSE.
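
      The relationships described for “understood” can be set out explicitly. The sketch below does no more than encode the example from the text, with three morphemes realizing two lexemes and with UNDER and STAND jointly realizing a single lexeme; the tabular encoding is an illustration, not Lamb's notation.

# Sketch: the realization of "understood" as described in the text. Three
# morphemes jointly realize two lexemes; the encoding is illustrative only.
morphemes = ["UNDER", "STAND", "PAST"]
realization = {("UNDER", "STAND"): "UNDERSTAND",   # joint realization of one lexeme
               ("PAST",): "PAST TENSE"}

lexemes, i = [], 0
while i < len(morphemes):
    for group, lexeme in realization.items():
        if tuple(morphemes[i:i + len(group)]) == group:
            lexemes.append(lexeme)
            i += len(group)
            break
    else:                      # no listed realization begins here
        i += 1

print("understood ->", morphemes, "->", lexemes)
# understood -> ['UNDER', 'STAND', 'PAST'] -> ['UNDERSTAND', 'PAST TENSE']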

      The stratificational framework, presented in Lamb's work, consistently separates compositional and realizational units, the former being designated by terms ending in the suffix -on (semon, lexon, morphon, phonon), the latter by terms ending in the suffix -eme (sememe, lexeme, morpheme, phoneme). Ons are components or compounds of emes on the same stratum (semons are components of sememes, lexons are components of lexemes, etc.) and emes realize ons of the stratum above (phonemes realize morphons, morphemes realize lexons, etc.). Each stratum has its own combinatorial pattern specifying the characteristic combinations of elements on that stratum. Syllable structure is specified on the phonemic stratum, the structure of word forms on the morphemic stratum, the structure of phrases on the lexemic stratum, and the structure of clauses and sentences on the sememic stratum. Phonons are roughly equivalent to phonological distinctive features and include such properties or components of phonemes as labial, nasal, and so on. Semons are roughly equivalent to what other linguists have called semantic components or features and include such aspects of the meaning of the lexeme “man” as “male,” “adult,” “human,” and so forth. Once again, however, compositional function is distinguished from interstratal realizational function, so that no direct equivalence can be established with nonstratificational terminology. In more recent work in stratificational grammar, the notion that emes are composed of ons has been abandoned, and greater emphasis is laid upon the fact that emes are points, or positions, in a relational network; they are connected to other points in the network but have themselves no internal structure.
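
      The layered organization described above can be illustrated with a small computational sketch. The Python fragment below is not Lamb's own notation; the miniature mappings and the phonemic spellings are invented for illustration only, and morphophonemic alternations (such as the fusion seen in “understood”) are deliberately ignored. It shows only how a combination of lexemes is realized by morphemes, and how each morpheme is in turn realized by lower-level units.

```python
# Illustrative sketch of stratificational realization (invented mini-lexicon).
# Units on a higher stratum are realized by combinations of units on the
# stratum below; alternations and fusion are ignored for simplicity.

# Lexemic stratum -> morphemic stratum
LEXEME_TO_MORPHEMES = {
    "HATE": ["HATE"],
    "PAST TENSE": ["PAST"],
    "UNDERSTAND": ["UNDER", "STAND"],   # one lexeme realized by two morphemes
}

# Morphemic stratum -> (roughly) phonemic shape; purely schematic spellings
MORPHEME_TO_MORPHONS = {
    "HATE": list("heit"),
    "PAST": list("d"),                  # ignoring the /id/ ~ /t/ ~ /d/ alternation
    "UNDER": list("ʌndə"),
    "STAND": list("stʊd"),              # the fused "-stood" shape, for simplicity
}

def realize(lexemes):
    """Map a combination of lexemes down through the morphemic stratum."""
    morphemes = [m for lx in lexemes for m in LEXEME_TO_MORPHEMES[lx]]
    morphons = [p for m in morphemes for p in MORPHEME_TO_MORPHONS[m]]
    return morphemes, "".join(morphons)

for combination in (["HATE", "PAST TENSE"], ["UNDERSTAND", "PAST TENSE"]):
    print(combination, "->", realize(combination))
```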

Interstratal relationships
      One of the principal characteristics of the stratificational approach is that it sets out to describe languages without making use of rules that convert one entity into another. (Reference has been made above to the antipathy many linguists have felt towards describing languages in terms of processes.) The stratificationalist would handle such phenomena in terms of the interstratal relationships of realization. Various kinds of interstratal relationships, other than that of one-to-one correspondence, may be recognized: diversification, in which one higher unit has alternative realizations; zero realization, in which a higher unit has no overt realization on the lower stratum; neutralization, in which two or more higher units are mapped into the same lower-level unit; and so on. All these interstratal one–many or many–one relations are then analyzed in terms of the logical notions of conjunction and disjunction (AND-relations versus OR-relations), of ordering (x precedes y in an AND-relationship, x is selected in preference to y in an OR-relationship), and of directionality (“upward” towards meaning, or “downward” towards sound). Many of the phenomena that are described by other linguists in terms of processes that derive one unit from another can be described elegantly enough in terms of interstratal relationships of this kind.
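
      The following sketch suggests how such a relational network might be represented in program form. The node types and the selection policy are invented for the purpose of illustration; only the AND/OR distinction, diversification, and zero realization correspond to the notions described above.

```python
# A toy relational network with ordered AND- and OR-nodes (invented notation).
# Traversing the network "downward" realizes a higher unit as lower units.

class AndNode:
    """Ordered conjunction: realize every branch, in order."""
    def __init__(self, *parts):
        self.parts = parts

class OrNode:
    """Ordered disjunction: alternative realizations (diversification)."""
    def __init__(self, *alternatives):
        self.alternatives = alternatives

ZERO = AndNode()        # zero realization: no overt unit on the lower stratum

def realize(node, choose):
    if isinstance(node, AndNode):
        return [unit for part in node.parts for unit in realize(part, choose)]
    if isinstance(node, OrNode):
        return realize(choose(node.alternatives), choose)
    return [node]       # a terminal lower-stratum unit

# Diversification: a PAST unit with alternative realizations /t/, /d/, /id/.
PAST = OrNode("t", "d", "id")
# Zero realization: the plural of "sheep" has no overt plural marker.
SHEEP_PLURAL = AndNode("ʃiːp", ZERO)

first = lambda alternatives: alternatives[0]    # trivial selection policy
print(realize(AndNode("weɪt", PAST), first))    # ['weɪt', 't']
print(realize(SHEEP_PLURAL, first))             # ['ʃiːp']
```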

      Critics, however, have objected to the proliferation of strata and theoretical constructs in stratificational grammar, arguing that they result from an a priori commitment to the notion of realization and that the only stratal distinction for which there is any independent evidence is the distinction of phonology and grammar. It has been suggested by Lamb that stratificational grammar provides a model for the way in which linguistic information is stored in the brain and activated during the production and reception of speech. But little is known yet about the neurology of language and speech, and it would be premature to draw any firm conclusions about this aspect of stratificational grammar (see below Psycholinguistics (linguistics)).

The Prague school
      What is now generally referred to as the Prague school comprises a fairly large group of scholars, mainly European, who, though they may not themselves have been members of the Linguistic Circle of Prague, derived their inspiration from the work of Vilém Mathesius, Nikolay Trubetskoy, Roman Jakobson, and other scholars based in Prague in the decade preceding World War II.

Combination of structuralism and functionalism
      The most characteristic feature of the Prague school approach is its combination of structuralism with functionalism. The latter term (like “structuralism”) has been used in a variety of senses in linguistics. Here it is to be understood as implying an appreciation of the diversity of functions fulfilled by language and a theoretical recognition that the structure of languages is in large part determined by their characteristic functions. Functionalism, taken in this sense, manifests itself in many of the more particular tenets of Prague school doctrine.

      One very famous functional analysis of language, which, though it did not originate in Prague, was very influential there, was that of the German psychologist Karl Bühler (Bühler, Karl), who recognized three general kinds of function fulfilled by language: Darstellungsfunktion, Kundgabefunktion, and Appellfunktion. These terms may be translated, in the present context, as the cognitive (cognition), the expressive, and the conative (or instrumental) functions. The cognitive function of language refers to its employment for the transmission of factual information; by expressive function is meant the indication of the mood or attitude of the speaker (or writer); and by the conative function of language is meant its use for influencing the person one is addressing or for bringing about some practical effect. A number of scholars working in the Prague tradition have suggested that these three functions correlate in many languages, at least partly, with the grammatical categories of mood and person. The cognitive function is fulfilled characteristically by 3rd-person nonmodal utterances (i.e., utterances in the indicative mood, making no use of modal verbs); the expressive function by 1st-person utterances in the subjunctive or optative mood; and the conative function by 2nd-person utterances in the imperative. The functional distinction of the cognitive and the expressive aspects of language has also been applied by Prague school linguists in their work on stylistics and literary criticism. One of their key principles is that language is being used poetically or aesthetically when the expressive aspect is predominant, and that it is typical of the expressive function of language that this should be manifest in the form of an utterance and not merely in the meanings of the component words.

Phonological contributions
      The Prague school is best known for its work on phonology. Unlike the American phonologists, Trubetskoy (Trubetskoy, Nikolay Sergeyevich) and his followers did not take the phoneme to be the minimal unit of analysis. Instead, they defined phonemes as sets of distinctive features. For example, in English, /b/ differs from /p/ in the same way that /d/ differs from /t/ and /g/ from /k/. Just how they differ in terms of their articulation is a complex question. For simplicity, it may be said that there is just one feature, the presence of which distinguishes /b/, /d/, and /g/ from /p/, /t/, and /k/, and that this feature is voicing (vibration of the vocal cords). Similarly, the feature of labiality can be extracted from /p/ and /b/ by comparing them with /t/, /d/, /k/, and /g/; the feature of nasality from /n/ and /m/ by comparing them with /t/ and /d/, on the one hand, and with /p/ and /b/, on the other. Each phoneme, then, is composed of a number of articulatory features and is distinguished by the presence or absence of at least one feature from every other phoneme in the language. The distinctive function of phonemes, which depends upon and supports the principle of the duality of structure, can be related to the cognitive function of language. This distinctive feature analysis of Prague school phonology as developed by Jakobson (Jakobson, Roman) has become part of the generally accepted framework for generative phonology (see above).
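
      A minimal sketch of the idea that phonemes are sets, or bundles, of distinctive features is given below. The feature assignments are deliberately simplified and are not Trubetskoy's or Jakobson's own inventories; the point is only that the same opposition (here voicing or nasality) can be extracted by comparing pairs of phonemes.

```python
# Phonemes represented as sets of distinctive features (simplified assignments).
FEATURES = {
    "p": {"stop", "labial"},
    "b": {"stop", "labial", "voiced"},
    "t": {"stop", "alveolar"},
    "d": {"stop", "alveolar", "voiced"},
    "k": {"stop", "velar"},
    "g": {"stop", "velar", "voiced"},
    "m": {"stop", "labial", "voiced", "nasal"},
    "n": {"stop", "alveolar", "voiced", "nasal"},
}

def distinguishing(a, b):
    """Features present in one phoneme but not in the other."""
    return FEATURES[a] ^ FEATURES[b]     # symmetric difference

print(distinguishing("b", "p"))   # {'voiced'}
print(distinguishing("d", "t"))   # {'voiced'}  -- the same opposition again
print(distinguishing("m", "b"))   # {'nasal'}
```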

      Two other kinds of phonologically relevant function are also recognized by linguists of the Prague school: expressive and demarcative. The former term is employed here in the sense in which it was employed above (i.e., in opposition to “cognitive”); it is characteristic of stress, intonation, and other suprasegmental aspects of language that they are frequently expressive of the mood and attitude of the speaker in this sense. The term demarcative is applied to those elements or features that in particular languages serve to indicate the occurrence of the boundaries of words and phrases and, presumably, make it easier to identify such grammatical units in the stream of speech. There are, for example, many languages in which the set of phonemes that can occur at the beginning of a word differs from the set of phonemes that can occur at the end of a word. These and other devices are described by the Prague school phonologists as having demarcative function: they are boundary signals that reinforce the identity and syntagmatic unity of words and phrases.

Theory of markedness
      The notion of markedness was first developed in Prague school phonology but was subsequently extended to morphology and syntax. When two phonemes are distinguished by the presence or absence of a single distinctive feature, one of them is said to be marked and the other unmarked for the feature in question. For example, /b/ is marked and /p/ unmarked with respect to voicing. Similarly, in morphology, the regular English verb can be said to be marked for past tense (by the suffixation of -ed) but to be unmarked in the present (cf. “jumped” versus “jump”). It is often the case that a morphologically unmarked form has a wider range of occurrences and a less definite meaning than a morphologically marked form. It can be argued, for example, that, whereas the past tense form in English (in simple sentences or the main clause of complex sentences) definitely refers to the past, the so-called present tense form is more neutral with respect to temporal reference: it is nonpast in the sense that it fails to mark the time as past, but it does not mark it as present. There is also a more abstract sense of markedness, which is independent of the presence or absence of an overt feature or affix. The words “dog” and “bitch” provide examples of markedness of this kind on the level of vocabulary. Whereas the use of the word “bitch” is restricted to females of the species, “dog” is applicable to both males and females. “Bitch” is the marked and “dog” the unmarked term, and, as is commonly the case, the unmarked term can be neutral or negative according to context (cf. “That dog over there is a bitch” versus “It's not a dog, it's a bitch”). The principle of markedness, understood in this more general or more abstract sense, is now quite widely accepted by linguists of many different schools, and it is applied at all levels of linguistic analysis.

Recent contributions
      Current Prague school work is still characteristically functional in the sense in which this term was interpreted in the pre-World War II period. The most valuable contribution made by the postwar Prague school is probably the distinction of theme and rheme and the notion of “functional sentence perspective” or “communicative dynamism.” By the theme of a sentence is meant that part that refers to what is already known or given in the context (sometimes called, by other scholars, the topic or psychological subject); by the rheme, the part that conveys new information (the comment or psychological predicate). It has been pointed out that, in languages with a free word order (such as Czech or Latin), the theme tends to precede the rheme, regardless of whether the theme or the rheme is the grammatical subject and that this principle may still operate, in a more limited way, in languages, like English, with a relatively fixed word order (cf. “That book I haven't seen before”). But other devices may also be used to distinguish theme and rheme. The rheme may be stressed (“Jóhn saw Mary”) or made the complement of the verb “to be” in the main clause of what is now commonly called a cleft sentence (“It's Jóhn who saw Mary”).

      The general principle that has guided research in “functional sentence perspective” is that the syntactic structure of a sentence is in part determined by the communicative function of its various constituents and the way in which they relate to the context of utterance. A somewhat different but related aspect of functionalism in syntax is seen in current work in what is called case grammar. Case grammar is based upon a small set of syntactic functions (agentive, locative, benefactive, instrumental, and so on) that are variously expressed in different languages but that are held to determine the grammatical structure of sentences. Although case grammar does not derive directly from the work of the Prague school, it is very similar in inspiration.

Historical (diachronic) linguistics (historical linguistics)

Linguistic change
      All languages change in the course of time. Written records make it clear that 15th-century English is quite noticeably different from 20th-century English, as is 15th-century French or German from modern French or German. It was the principal achievement of the 19th-century linguists not only to realize more clearly than their predecessors the ubiquity of linguistic change but also to put its scientific investigation on a sound footing by means of the comparative method (see the section History of linguistics: The 19th century (linguistics)). This will be treated in greater detail in the following section. Here various kinds, or categories, of linguistic change will be listed and exemplified.

Sound change
      Since the beginning of the 19th century, when scholars observed that there were a number of systematic correspondences in related words between the sounds of the Germanic languages and the sounds of what were later recognized as other Indo-European languages, particular attention has been paid in diachronic linguistics to changes in the sound systems of languages.

      Certain common types of sound change, most notably assimilation and dissimilation, can be explained, at least partially, in terms of syntagmatic, or contextual, conditioning. By assimilation is meant the process by which one sound is made similar in its place or manner of articulation to a neighbouring sound. For example, the word “cupboard” was presumably once pronounced, as the spelling indicates, with the consonant cluster pb in the middle. The p was assimilated to b in manner of articulation (i.e., voicing was maintained throughout the cluster), and subsequently the resultant double consonant bb was simplified. With a single b in the middle and an unstressed second syllable, the word “cupboard,” as it is pronounced nowadays, is no longer so evidently a compound of “cup” and “board” as its spelling still shows it to have been. The Italian words notte “night” and otto “eight” manifest assimilation of the first consonant to the second consonant of the cluster in place of articulation (cf. Latin nocte(m), octo). Assimilation is also responsible for the phenomenon referred to as umlaut in the Germanic languages. The high front vowel i of suffixes had the effect of fronting and raising preceding back vowels and, in particular, of converting an a sound into an e sound. In Modern German this is still a morphologically productive process (cf. Mann “man”: Männer “men”). In English it has left its mark in such irregular forms as “men” (from *manniz), “feet” (from *fotiz), and “length” (from *langa).

      Dissimilation refers to the process by which one sound becomes different from a neighbouring sound. For example, the word “pilgrim” (French pèlerin) derives ultimately from the Latin peregrinus; the l sound results from dissimilation of the first r under the influence of the second r. A special case of dissimilation is haplology, in which the second of the two identical or similar syllables is dropped. Examples include the standard modern British pronunciations (articulation) of “Worcester” and “Gloucester” with two syllables rather than three and the common pronunciation of “library” as if it were written “libry.” Both assimilation and dissimilation are commonly subsumed under the principle of “ease of articulation.” This is clearly applicable in typical instances of assimilation. It is less obvious how or why a succession of unlike sounds in contiguous syllables should be easier to pronounce than a succession of identical or similar sounds. But a better understanding of this phenomenon, as of other “slips of the tongue,” may result from current work in the physiological and neurological aspects of speech production.
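
      Contextually conditioned changes of this kind are often written as replacement rules, and a very rough sketch of that practice is given below. The transcriptions and the rules themselves are crude approximations devised only to mirror the “cupboard” and “library” examples above.

```python
import re

# Simplified contextual sound changes written as regular-expression replacements.
def apply_changes(form, rules):
    for pattern, replacement in rules:
        form = re.sub(pattern, replacement, form)
    return form

# Assimilation and simplification in "cupboard": pb -> bb -> b
assimilation = [(r"pb", "bb"), (r"bb", "b")]
print(apply_changes("cupboard", assimilation))   # "cuboard", roughly /kʌbəd/

# Haplology in "library": one of two similar syllables is dropped
haplology = [(r"rəri$", "ri")]
print(apply_changes("laibrəri", haplology))      # "laibri", cf. "libry"
```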

      Not all sound change is to be accounted for in terms of syntagmatic conditioning. The change of p, t, and k to f, θ (the th sound in “thin”), and h or of b, d, g to p, t, and k in early Germanic cannot be explained in these terms. Nor can the so-called Great Vowel Shift in English, which, in the 15th century, modified the quality of all the long vowels (cf. “profane” : “profanity”; “divine” : “divinity”; and others). Attempts have been made to develop a general theory of sound change, notably by the French linguist André Martinet. But no such theory has yet won universal acceptance, and it is likely that the causes of sound change are multiple.

      Sound change is not necessarily phonological; it may be merely phonetic (see above Structural linguistics: Phonology (linguistics)). The pronunciation of one or more of the phones realizing a particular phoneme may change slightly without affecting any of the previously existing phonological distinctions; this no doubt happens quite frequently as a language is transmitted from one generation to the next. Two diachronically distinct states of the language would differ in this respect in the same way as two coexistent but geographically or socially distinct accents of the same language might differ. It is only when two previously distinct phonemes are merged or a unitary phoneme splits into two (typically when allophonic variation becomes phonemic) that sound change must definitely be considered as phonological. For example, the sound change of p to f, t to θ (th), and k to h, on the one hand, and of b to p, d to t, and g to k, on the other, in early Germanic had the effect of changing the phonological system. The voiceless stops did not become fricatives in all positions; they remained as voiceless stops after s. Consequently, the p sound that was preserved after s merged with the p that derived by sound change from b. (It is here assumed that the aspirated p sound and the unaspirated p sound are to be regarded as allophones of the same phoneme.) Prior to the Germanic sound shift the phoneme to be found at the beginning of the words for “five” or “father” also occurred after s in words for “spit” or “spew”; after the change this was no longer the case.

Grammatical change
      A language can acquire a grammatical distinction that it did not have previously, as when English developed the progressive (“He is running”) in contrast to the simple present (“He runs”). It can also lose a distinction; e.g., modern spoken French has lost the distinction between the simple past (Il marcha “he walked”) and the perfect (Il a marché “he has walked”). What was expressed by means of one grammatical device may come to be expressed by means of another. For example, in the older Indo-European languages the syntactic function of the nouns and noun phrases in a sentence was expressed primarily by means of case endings (the subject of the sentence being in the nominative case, the object in the accusative case, and so on); in most of the modern Indo-European languages these functions are expressed by means of word order and the use of prepositions. It is arguable, although it can hardly be said to have been satisfactorily demonstrated yet, that the grammatical changes that take place in a language in the course of time generally leave its deep structure unaffected and tend to modify the ways in which the deeper syntactic functions and distinctions are expressed (whether morphologically, by word order, by the use of prepositions and auxiliary verbs, or otherwise), without affecting the functions and distinctions themselves. Many grammatical changes are traditionally accounted for in terms of analogy.

Semantic change
      Towards the end of the 19th century, a French scholar, Michel Bréal, set out to determine the laws that govern changes in the meaning of words. This was the task that dominated semantic research until the 1930s, when scholars began to turn their attention to the synchronic study of meaning. Many systems for the classification of changes of meaning have been proposed, and a variety of explanatory principles have been suggested. So far no “laws” of semantic change comparable to the phonologist's sound laws have been discovered. It seems that changes of meaning can be brought about by a variety of causes. Most important, perhaps, and the factor that has been emphasized particularly by the so-called words-and-things movement in historical semantics is the change undergone in the course of time by the objects or institutions that words denote. For example, the English word “car” goes back through Latin carrus to a Celtic word for a four-wheeled wagon. It now denotes a very different sort of vehicle; confronted with a model of a Celtic wagon in a museum, one would not describe it as a car.

      Some changes in the meaning of words are caused by their habitual use in particular contexts. The word “starve” once meant “to die” (cf. Old English steorfan, German sterben); in most dialects of English, it now has the more restricted meaning “to die of hunger,” though in the north of England “He was starving” can also mean “He was very cold” (i.e., “dying” of cold, rather than hunger). Similarly, the word “deer” has acquired a more specialized meaning than the meaning “wild animal” that it once bore (cf. German Tier); and “meat,” which originally meant food in general (hence, “sweetmeats” and the archaic phrase “meat and drink”) now denotes the flesh of an animal treated as food. In all such cases, the narrower meaning has developed from the constant use of the word in a more specialized context, and the contextual presuppositions of the word have in time become part of its meaning.

      Languages borrow words freely from one another. Usually this happens when some new object or institution is developed for which the borrowing language has no word of its own. For example, the large number of words denoting financial institutions and operations borrowed from Italian by the other western European languages at the time of the Renaissance testifies to the importance of the Italian bankers in that period. (The word “bank” itself, in this sense, comes through French from the Italian banca). Words now pass from one language to another on a scale that is probably unprecedented, partly because of the enormous number of new inventions that have been made in the 20th century and partly because international communications are now so much more rapid and important. The vocabulary of modern science and technology is very largely international.

The comparative (comparative linguistics) method
      The comparative method in historical linguistics is concerned with the reconstruction of an earlier language or earlier state of a language on the basis of a comparison of related words and expressions in different languages or dialects derived from it. The comparative method was developed in the course of the 19th century for the reconstruction of Proto-Indo-European and was subsequently applied to the study of other language families. It depends upon the principle of regular sound change—a principle that, as explained above, met with violent opposition when it was introduced into linguistics by the Neogrammarians in the 1870s but by the end of the century had become part of what might be fairly described as the orthodox approach to historical linguistics. Changes in the phonological systems of languages through time were accounted for in terms of sound laws.

Grimm's law
      The most famous of the sound laws is Grimm's law (though Grimm himself did not use the term law). Some of the correspondences accounted for by Grimm's law are given in Table 1. It will be observed that, when other Indo-European languages, including Latin and Greek, have a voiced unaspirated stop (b, d), Gothic has the corresponding voiceless unaspirated stop (p, t) and that, when other Indo-European languages have a voiceless unaspirated stop, Gothic has a voiceless fricative (f, θ). The simplest explanation would seem to be that, under the operation of what is now called Grimm's law, in some prehistoric period of Germanic (before the development of a number of distinct Germanic languages), the voiced stops inherited from Proto-Indo-European became voiceless and the voiceless stops became fricatives. The situation with respect to the sounds corresponding to the Germanic voiced stops is more complex. Here there is considerable disagreement between the other languages: Greek has voiceless aspirates (ph, th), Sanskrit (Sanskrit language) has voiced aspirates (bh, dh), Latin has voiceless fricatives in word-initial position (f) and voiced stops in medial position (b, d), Slavic has voiced stops (b, d), and so on. The generally accepted hypothesis is that the Proto-Indo-European sounds from which the Germanic voiced stops developed were voiced aspirates and that they are preserved in Sanskrit but were changed in the other Indo-European languages by the loss of either voice or aspiration. (Latin, having lost the voice in initial position, subsequently changed both of the resultant voiceless aspirates into the fricative f, and it lost the aspiration in medial position.) It is easy to see that this hypothesis yields a simpler account of the correspondences than any of the alternatives. It is also in accord with the fact that voiced aspirates are rare in the languages of the world and, unless they are supported by the coexistence in the same language of phonologically distinct voiceless aspirates (as they are in Hindi and other north Indian languages), appear to be inherently unstable.
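
      The correspondences just described can be summarized as a simple mapping on segments. The sketch below is a deliberate simplification: it ignores the retention of voiceless stops after s (discussed below under sound change) and the further complications covered by Verner's law, and the segmentation of *dekm is purely illustrative.

```python
# Grimm's law as a mapping on Proto-Indo-European stop consonants (simplified;
# the position-after-s exception and Verner's law are ignored).
GRIMM = {
    # voiceless stops -> voiceless fricatives
    "p": "f", "t": "θ", "k": "h",
    # voiced stops -> voiceless stops
    "b": "p", "d": "t", "g": "k",
    # voiced aspirates -> voiced stops
    "bh": "b", "dh": "d", "gh": "g",
}

def shift(segments):
    """Apply the Grimm correspondences to a list of PIE segments."""
    return [GRIMM.get(segment, segment) for segment in segments]

# *dekm 'ten': d > t and k > h, cf. Gothic taihun (vowels aside)
print(shift(["d", "e", "k", "m"]))   # ['t', 'e', 'h', 'm']
```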

Proto-Indo-European reconstruction
      Reconstruction of the Proto-Indo-European labial stops (made with the lips) and dental stops (made with the tip of the tongue touching the teeth) is fairly straightforward. More controversial is the reconstruction of the Proto-Indo-European sounds underlying the correspondences shown in Table 2 (Velar and palatal stops in the Indo-European languages). According to the most generally accepted hypothesis, there were in Proto-Indo-European at least two distinct series of velar (or “guttural”) consonants: simple velars (or palatals), symbolized as *k, *g, and *gh, and labiovelars, symbolized as *kw, *gw, and *gwh. The labiovelars may be thought of as velar stops articulated with simultaneous lip-rounding. In one group of languages, the labial component is assumed to have been lost, in another group the velar component; and it is only in the Latin reflex of the voiceless *kw that both labiality and velarity are retained (cf. Latin quis from *kwi-). It is notable that the languages that have a velar for the Proto-Indo-European labiovelar stops (e.g., Sanskrit and Slavic) have a sibilant or palatal sound (s or ś) for the Proto-Indo-European simple velars. Earlier scholars attached great significance to this fact and thought that it represented a fundamental division of the Indo-European family into a western and an eastern group. The western group—comprising Celtic, Germanic, Italic, and Greek—is commonly referred to as the centum group; the eastern group—comprising Sanskrit, Iranian, Slavic, and others—is called the satem (satəm) group. (The words centum and satem come from Latin and Iranian, respectively, and mean “hundred.” They exemplify, with their initial consonant, the two different treatments of the Proto-Indo-European simple velars.) Nowadays less importance is attached to the centum–satem distinction. But it is still generally held that in an early period of Indo-European, there was a sound law operative in the dialect or dialects from which Sanskrit, Iranian, Slavic and the other so-called satem languages developed that had the effect of palatalizing the original Proto-Indo-European velars and eventually converting them to sibilants.

Steps in the comparative method
      The information given in the previous paragraphs is intended to illustrate what is meant by a sound law and to indicate the kind of considerations that are taken into account in the application of the comparative method. The first step is to find sets of cognate or putatively cognate forms in the languages or dialects being compared: for example, Latin decem = Greek deka = Sanskrit daśa = Gothic taihun, all meaning “ten.” From sets of cognate forms such as these, sets of phonological correspondences can be extracted; e.g., (1) Latin d = Greek d = Sanskrit d = Gothic t; (2) Latin e = Greek e = Sanskrit a = Gothic ai (in the Gothic orthography this represents an e sound); (3) Latin c (i.e., a k sound) = Greek k = Sanskrit ś = Gothic h; (4) Latin em = Greek a = Sanskrit a = Gothic un. A set of “reconstructed” phonemes can be postulated (marked with an asterisk by the standard convention) to which the phonemes in the attested languages can be systematically related by means of sound laws. The reconstructed Proto-Indo-European word for “ten” is *dekm. From this form the Latin word can be derived by means of a single sound change, *m changes to em (usually symbolized as *m > em); the Greek by means of the sound change *m > a (i.e., vocalization of the syllabic nasal and loss of nasality); the Sanskrit by means of the palatalizing sound law, *k > ś and the sound change *m > a (whether this is assumed to be independent of the law operative in Greek or not); and the Gothic by means of Grimm's law (*d > t, *k > h) and the sound change *m > un.
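
      The first step described above, extracting sets of correspondences from cognate forms, can be sketched programmatically. The fragment below assumes, quite unrealistically, that the cognates have already been segmented and aligned by hand; the segmentations are those implicit in the correspondences (1) to (4) listed above.

```python
# The comparative method in miniature: extract sets of regular correspondences
# from pre-aligned cognate forms.  Alignment and segmentation are done by hand
# here and are drastically simplified for illustration.
from collections import defaultdict

LANGS = ["Latin", "Greek", "Sanskrit", "Gothic"]

# 'ten': Latin decem, Greek deka, Sanskrit daśa, Gothic taihun
COGNATES = {
    "ten": [
        ["d", "e", "c", "em"],    # Latin
        ["d", "e", "k", "a"],     # Greek
        ["d", "a", "ś", "a"],     # Sanskrit
        ["t", "ai", "h", "un"],   # Gothic
    ],
}

def correspondence_sets(cognates):
    sets = defaultdict(list)
    for gloss, forms in cognates.items():
        for column in zip(*forms):        # read the alignment column by column
            sets[column].append(gloss)
    return sets

for correspondence, glosses in correspondence_sets(COGNATES).items():
    print(" = ".join(f"{lang} {seg}" for lang, seg in zip(LANGS, correspondence)),
          "  (attested in:", ", ".join(glosses) + ")")
```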

      Most 19th-century linguists took it for granted that they were reconstructing the actual word forms of some earlier language, that *dekm, for example, was a pronounceable Proto-Indo-European word. Many of their successors have been more skeptical about the phonetic reality of reconstructed starred forms like *dekm. They have said that such forms are no more than formulae summarizing the correspondences observed to hold between attested forms in particular languages and that they are, in principle, unpronounceable. From this point of view, it would be a matter of arbitrary decision which letter is used to refer to the correspondences: Latin d = Greek d = Sanskrit d = Gothic t, and so on. Any symbol would do, provided that a distinct symbol is used for each distinct set of correspondences. The difficulty with this view of reconstruction is that it seems to deny the very raison d'être of historical and comparative linguistics. Linguists want to know, if possible, not only that Latin decem, Greek deka, and so on are related, but also the nature of their historical relationship—how they have developed from a common ancestral form. They also wish to construct, if feasible, some general theory of sound change. This can be done only if some kind of phonetic interpretation can be given to the starred forms. The important point is that the confidence with which a phonetic interpretation is assigned to the phonemes that are reconstructed will vary from one phoneme to another. It should be clear from the discussion above, for example, that the interpretation of *d as a voiced dental or alveolar stop is more certain than the interpretation of *k as a voiceless velar stop. The starred forms are not all on an equal footing from a phonetic point of view.

Criticisms of the comparative method
      One of the criticisms directed against the comparative method is that it is based upon a misleading genealogical metaphor. In the mid-19th century, the German linguist August Schleicher introduced into comparative linguistics the model of the “family tree.” There is obviously no point in time at which it can be said that new languages are “born” of a common parent language. Nor is it normally the case that the parent language “lives on” for a while, relatively unchanged, and then “dies.” It is easy enough to recognize the inappropriateness of these biological expressions. No less misleading, however, is the assumption that languages descended from the same parent language will necessarily diverge, never to converge again, through time. This assumption is built into the comparative method as it is traditionally applied. And yet there are many clear cases of convergence in the development of well-documented languages. The dialects (dialect) of England are fast disappearing and are far more similar in grammar and vocabulary today than they were even a generation ago. They have been strongly influenced by the standard language. The same phenomenon, the replacement of nonstandard or less prestigious forms with forms borrowed from the standard language or dialect, has taken place in many different places at many different times. It would seem, therefore, that one must reckon with both divergence and convergence in the diachronic development of languages: divergence when contact between two speech communities is reduced or broken and convergence when the two speech communities remain in contact and when one is politically or culturally dominant.

      The comparative method presupposes linguistically uniform speech communities and independent development after sudden, sharp cleavage. Critics of the comparative method have pointed out that this situation does not generally hold. In 1872 a German scholar, Johannes Schmidt, criticized the family-tree theory and proposed instead what is referred to as the wave theory, according to which different linguistic changes will spread, like waves, from a politically, commercially, or culturally important centre along the main lines of communication, but successive innovations will not necessarily cover exactly the same area. Consequently, there will be no sharp distinction between contiguous dialects, but, in general, the further apart two speech communities are, the more linguistic features there will be that distinguish them.

      The comparative method is used to reconstruct earlier forms of a language by drawing upon the evidence provided by other related languages. It may be supplemented by what is called the method of internal reconstruction. This is based upon the existence of anomalous or irregular patterns of formation and the assumption that they must have developed, usually by sound change, from earlier regular patterns. For example, the existence of such patterns in early Latin as honos : honoris (“honor” : “of honor”) and others in contrast with orator : oratoris (“orator” : “of the orator”) and others might lead to the supposition that honoris developed from an earlier *honosis. In this case, the evidence of other languages shows that *s became r between vowels in an earlier period of Latin. But it would have been possible to reconstruct the earlier intervocalic *s with a fair degree of confidence on the basis of the internal evidence alone. Clearly, internal reconstruction depends upon the structural approach to linguistics.
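
      A sketch of the reasoning in the honos : honoris example is given below. It presupposes the sound law of rhotacism (intervocalic *s > r), whereas in internal reconstruction that law is precisely what the anomalous paradigm is used to infer; the suffixation shown is also simplified. The point is only that the posited earlier form plus the law yields the attested form, while the regular pattern is left untouched.

```python
# Internal reconstruction sketch: given honos : honoris beside orator :
# oratoris, posit an earlier *honosis and a change of intervocalic s to r.
import re

def rhotacism(form):
    """Apply the Latin change of s to r between vowels (simplified)."""
    return re.sub(r"(?<=[aeiou])s(?=[aeiou])", "r", form)

# A genitive built regularly on the nominative honos (crude suffixation):
reconstructed = "honos" + "is"          # *honosis
print(reconstructed, "->", rhotacism(reconstructed))   # honosis -> honoris

# The regular pattern is unaffected, since its s is not intervocalic:
print(rhotacism("oratoris"))            # oratoris
```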

      The most recent development in the field of historical and comparative linguistics has come from the theory of generative grammar (see above Transformational-generative grammar (linguistics)). If the grammar and phonology of a language are described from a synchronic point of view as an integrated system of rules, then the grammatical and phonological similarities and differences between two closely related languages, or dialects, or between two diachronically distinct states of the same language can be described in terms of the similarities and differences in two descriptive rule systems. One system may contain a rule that the other lacks (or may restrict its application more or less narrowly); one system may differ from the other in that the same set of rules will apply in a different order in the one system from the order in which they apply in the other. Language change may thus be accounted for in terms of changes introduced into the underlying system of phonological and grammatical rules (including the addition, loss, or reordering of rules) during the process of language acquisition. So far these principles have been applied principally to sound change. There has also been a little work done on diachronic syntax.
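
      A toy example of this way of comparing rule systems is sketched below. The two rules and the underlying form are invented; the point is only that the same two rules applied in different orders yield different surface forms, which is one way in which two dialects or two historical stages of a language may be said to differ.

```python
import re

# Two invented rules; related varieties are modeled as different orderings.
RULES = {
    "palatalize": lambda form: re.sub(r"k(?=i)", "tʃ", form),   # k -> tʃ before i
    "raise":      lambda form: re.sub(r"e$", "i", form),        # word-final e -> i
}

def derive(form, order):
    for name in order:
        form = RULES[name](form)
    return form

underlying = "kake"
print(derive(underlying, ["raise", "palatalize"]))   # katʃi (raising feeds palatalization)
print(derive(underlying, ["palatalize", "raise"]))   # kaki  (the other order does not)
```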

Language classification
      There are two kinds of classification of languages practiced in linguistics: genetic (or genealogical) and typological. The purpose of genetic classification is to group languages into families according to their degree of diachronic relatedness. For example, within the Indo-European family, such subfamilies as Germanic or Celtic are recognized; these subfamilies comprise German, English, Dutch, Swedish, Norwegian, Danish, and others, on the one hand, and Irish, Welsh, Breton, and others, on the other. So far, most of the languages of the world have been grouped only tentatively into families, and many of the classificatory schemes that have been proposed will no doubt be radically revised as further progress is made.

      A typological classification groups languages into types according to their structural characteristics. The most famous typological classification is probably that of isolating, agglutinating, and inflecting (or fusional) languages, which was frequently invoked in the 19th century in support of an evolutionary theory of language development. Roughly speaking, an isolating language is one in which all the words are morphologically unanalyzable (i.e., in which each word is composed of a single morph); Chinese and, even more strikingly, Vietnamese are highly isolating. An agglutinating (agglutination) language (e.g., Turkish) is one in which the word forms can be segmented into morphs, each of which represents a single grammatical category. An inflecting (inflection) language is one in which there is no one-to-one correspondence between particular word segments and particular grammatical categories. The older Indo-European languages tend to be inflecting in this sense. For example, the Latin suffix -is represents the combination of categories “singular” and “genitive” in the word form hominis “of the man,” but one part of the suffix cannot be assigned to “singular” and another to “genitive,” and -is is only one of many suffixes that in different classes (or declensions) of words represent the combination of “singular” and “genitive.”
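
      One crude way of quantifying this typology is an index of synthesis, the average number of morphs per word in a sample of text, as sketched below. The segmentations are simplified and the samples are tiny; such an index also does not by itself separate agglutinating from inflecting (fusional) languages, which would require some further measure of how many categories each morph expresses.

```python
# A crude "index of synthesis": average number of morphs per word, computed
# from hand-segmented examples (segmentations simplified for illustration).
SAMPLES = {
    # Vietnamese-like isolating sample: every word is a single morph
    "isolating":     [["toi"], ["di"], ["cho"]],
    # Turkish-like agglutinating sample: ev-ler-de 'in the houses', gel-di-m 'I came'
    "agglutinating": [["ev", "ler", "de"], ["gel", "di", "m"]],
    # Latin-like inflecting sample: homin-is, orator-is (one fused ending each)
    "inflecting":    [["homin", "is"], ["orator", "is"]],
}

for label, words in SAMPLES.items():
    index = sum(len(word) for word in words) / len(words)
    print(f"{label:14s} morphs per word = {index:.2f}")
```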

      There is, in principle, no limit to the variety of ways in which languages can be grouped typologically. One can distinguish languages with a relatively rich phonemic inventory from languages with a relatively poor phonemic inventory, languages with a high ratio of consonants to vowels from languages with a low ratio of consonants to vowels, languages with a fixed word order from languages with a free word order, prefixing languages from suffixing languages, and so on. The problem lies in deciding what significance should be attached to particular typological characteristics. Although there is, not surprisingly, a tendency for genetically related languages to be typologically similar in many ways, typological similarity of itself is no proof of genetic relationship. Nor does it appear true that languages of a particular type will be associated with cultures of a particular type or at a certain stage of development. What has emerged from recent work in typology is that certain logically unconnected features tend to occur together, so that the presence of feature A in a given language will tend to imply the presence of feature B. The discovery of unexpected implications of this kind calls for an explanation and gives a stimulus to research in many branches of linguistics.

Linguistics and other disciplines

Psycholinguistics
      The term psycholinguistics was coined in the 1940s and came into more general use after the publication of Charles E. Osgood and Thomas A. Sebeok's Psycholinguistics: A Survey of Theory and Research Problems (1954), which reported the proceedings of a seminar sponsored in the United States by the Social Science Research Council's Committee on Linguistics and Psychology.

      The boundary between linguistics (in the narrower sense of the term: see the introduction of this article) and psycholinguistics is difficult, perhaps impossible, to draw. So too is the boundary between psycholinguistics and psychology. What characterizes psycholinguistics as it is practiced today as a more or less distinguishable field of research is its concentration upon a certain set of topics connected with language and its bringing to bear upon them the findings and theoretical principles of both linguistics and psychology. The range of topics that would be generally held to fall within the field of psycholinguistics nowadays is rather narrower, however, than that covered in the survey by Osgood and Sebeok.

Language acquisition by children
      One of the topics most central to psycholinguistic research is the acquisition of language by children. The term acquisition is preferred to “learning,” because “learning” tends to be used by psychologists in a narrowly technical sense, and many psycholinguists believe that no psychological theory of learning, as currently formulated, is capable of accounting for the process whereby children, in a relatively short time, come to achieve a fluent control of their native language. Since the beginning of the 1960s, research on language acquisition has been strongly influenced by Chomsky's theory of generative grammar, and the main problem to which it has addressed itself has been how it is possible for young children to infer the grammatical rules underlying the speech they hear and then to use these rules for the construction of utterances that they have never heard before. It is Chomsky's conviction, shared by a number of psycholinguists, that children are born with a knowledge of the formal principles that determine the grammatical structure of all languages, and that it is this innate knowledge that explains the success and speed of language acquisition. Others have argued that it is not grammatical competence as such that is innate but more general cognitive principles and that the application of these to language utterances in particular situations ultimately yields grammatical competence. Many recent works have stressed that all children go through the same stages of language development regardless of the language they are acquiring. It has also been asserted that the same basic semantic categories and grammatical functions can be found in the earliest speech of children in a number of different languages operating in quite different cultures in various parts of the world.

      Although Chomsky was careful to stress in his earliest writings that generative grammar does not provide a model for the production or reception of language utterances, there has been a good deal of psycholinguistic research directed toward validating the psychological reality of the units and processes postulated by generative grammarians in their descriptions of languages. Experimental work in the early 1960s appeared to show that nonkernel sentences took longer to process than kernel sentences and, even more interestingly, that the processing time increased proportionately with the number of optional transformations involved. More recent work has cast doubt on these findings, and most psycholinguists are now more cautious about using grammars produced by linguists as models of language processing. Nevertheless, generative grammar continues to be a valuable source of psycholinguistic experimentation, and the formal properties of language, discovered or more adequately discussed by generative grammarians than they have been by others, are generally recognized to have important implications for the investigation of short-term and long-term memory and perceptual strategies.

Speech perception
      Another important area of psycholinguistic research that has been strongly influenced by recent theoretical advances in linguistics and, more especially, by the development of generative grammar is speech perception. It has long been realized that the identification of speech sounds and of the word forms composed of them depends upon the context in which they occur and upon the hearer's having mastered, usually as a child, the appropriate phonological and grammatical system. Throughout the 1950s, work on speech perception was dominated (as was psycholinguistics in general) by information theory, according to which the occurrence of each sound in a word and each word in an utterance is statistically determined by the preceding sounds and words. Information theory is no longer as generally accepted as it was a few years ago, and more recent research has shown that in speech perception the cues provided by the acoustic input are interpreted, unconsciously and very rapidly, with reference not only to the phonological structure of the language but also to the more abstract levels of grammatical organization.

Other areas of research
      Other areas of psycholinguistics that should be briefly mentioned are the study of aphasia and neurolinguistics. The term aphasia is used to refer to various kinds of language disorders; recent work has sought to relate these, on the one hand, to particular kinds of brain injury and, on the other, to psychological theories of the storage and processing of different kinds of linguistic information. One linguist has put forward the theory that the most basic distinctions in language are those that are acquired first by children and are subsequently most resistant to disruption and loss in aphasia. This, though not disproved, is still regarded as controversial. Two kinds of aphasia are commonly distinguished. In motor aphasia the patient manifests difficulty in the articulation of speech or in writing and may produce utterances with a simplified grammatical structure, but his comprehension is not affected. In sensory aphasia the patient's fluency may be unaffected, but his comprehension will be impaired and his utterances will often be incoherent.

      Neurolinguistics should perhaps be regarded as an independent field of research rather than as part of psycholinguistics. In 1864 it was shown that motor aphasia is produced by lesions in the third frontal convolution of the left hemisphere of the brain. Shortly after the connection had been established between motor aphasia and damage to this area (known as Broca's area), the source of sensory aphasia was localized in lesions of the posterior part of the left temporal lobe. More recent work has confirmed these findings. The technique of electrically stimulating the cortex in conscious patients has enabled brain surgeons to induce temporary aphasia and so to identify a “speech area” in the brain. It is no longer generally believed that there are highly specialized “centres” within the speech area, each with its own particular function; but the existence of such a speech area in the dominant hemisphere of the brain (which for most people is the left hemisphere) seems to be well established. The posterior part of this area is involved more in the comprehension of speech and the construction of grammatically and semantically coherent utterances, and the anterior part is concerned with the articulation of speech and with writing. Little is yet known about the operation of the neurological mechanisms underlying the storage and processing of language. (See also the articles entitled perception; speech (phonetics).)

Sociolinguistics

Delineation of the field
      Just as it is difficult to draw the boundary between linguistics and psycholinguistics and between psychology and psycholinguistics, so it is difficult to distinguish sharply between linguistics and sociolinguistics and between sociolinguistics and sociology. There is the further difficulty that, because the boundary between sociology and anthropology is also unclear, sociolinguistics merges with anthropological linguistics (see below).

      It is frequently suggested that there is a conflict between the sociolinguistic and the psycholinguistic approach to the study of language, and it is certainly the case that two distinct points of view are discernible in the literature at the present time. Chomsky has described linguistics as a branch of cognitive psychology, and neither he nor most of his followers have yet shown much interest in the relationship between language and its social and cultural matrix. On the other hand, many modern schools of linguistics that have been very much concerned with the role of language in society would tend to relate linguistics more closely to sociology and anthropology than to any other discipline. It would seem that the opposition between the psycholinguistic and the sociolinguistic viewpoint must ultimately be transcended. The acquisition of language, a topic of central concern to psycholinguists, is in part dependent upon and in part itself determines the process of socialization; and the ability to use one's native language correctly in the numerous socially prescribed situations of daily life is as characteristic a feature of linguistic competence, in the broad sense of this term, as is the ability to produce grammatical utterances. Some of the most recent work in sociolinguistics and psycholinguistics has sought to widen the notion of linguistic competence in this way. So far, however, sociolinguistics and psycholinguistics tend to be regarded as relatively independent areas of research.

Social dimensions
      Language is probably the most important instrument of socialization that exists in all human societies and cultures. It is largely by means of language that one generation passes on to the next its myths, laws, customs, and beliefs, and it is largely by means of language that the child comes to appreciate the structure of the society into which he is born and his own place in that society.

      As a social force, language serves both to strengthen the links that bind the members of the same group and to differentiate the members of one group from those of another. In many countries there are social dialects (dialect) as well as regional dialects, so that it is possible to tell from a person's speech not only where he comes from but what class he belongs to. In some instances social dialects can transcend regional dialects. This is notable in England, where standard English in the so-called Received Pronunciation (RP) can be heard from members of the upper class and upper middle class in all parts of the country. The example of England is but an extreme manifestation of a tendency that is found in all countries: there is less regional variation in the speech of the higher than in that of the lower socioeconomic classes. In Britain and the United States and in most of the other English-speaking countries, people will almost always use the same dialect, regional or social, however formal or informal the situation and regardless of whether their listeners speak the same dialect or not. (Relatively minor adjustments of vocabulary may, however, be made: an Englishman speaking to an American may employ the word “elevator” rather than “lift” and so on.) In many communities throughout the world, it is common for members to speak two or more different dialects and to use one dialect rather than another in particular social situations. This is commonly referred to as code-switching. Code-switching may operate between two distinct languages (e.g., Spanish and English among Puerto Ricans in New York) as well as between two dialects of the same language. The term diglossia (rather than bilingualism) is frequently used by sociolinguists to refer to this by no means uncommon phenomenon.

      In every situation, what one says and how one says it depends upon the nature of that situation, the social role being played at the time, one's status vis-à-vis that of the person addressed, one's attitude towards him, and so on. Language interacts with nonverbal behaviour in social situations and serves to clarify and reinforce the various roles and relationships important in a particular culture. Sociolinguistics is far from having satisfactorily analyzed or even identified all the factors involved in the selection of one language feature rather than another in particular situations. Among those that have been discussed in relation to various languages are: the formality or informality of the situation; power and solidarity relationships between the participants; differences of sex, age, occupation, socioeconomic class, and educational background; and personal or transactional situations. Terms such as style and register (as well as a variety of others) are employed by many linguists to refer to the socially relevant dimensions of phonological, grammatical, and lexical variation within one language. So far there is very little agreement as to the precise application of such terms. (For further treatment of sociolinguistics, see dialect (language).)

Other relationships
      The fundamental concern of anthropological linguistics is to investigate the relationship between language and culture. To what extent the structure of a particular language is determined by or determines the form and content of the culture with which it is associated remains a controversial question. Vocabulary differences between languages correlate obviously enough with cultural differences, but even here the interdependence of language and culture is not so strong that one can argue from the presence or absence of a particular lexical distinction to the presence or absence of a corresponding cultural difference. For example, from the fact that English—unlike French, German, Russian, and many other languages—distinguishes lexically between monkeys and apes, one cannot conclude that there is an associated difference in the cultural significance attached to these animals by English-speaking societies. Some of the major grammatical distinctions in certain languages may have originated in culturally important categories (e.g., the distinction between an animate and an inanimate gender). But they seem to endure independently of any continuing cultural significance. The “Whorfian hypothesis” (the thesis that one's thought and even perception are determined by the language one happens to speak), in its strong form at least, is no longer debated as vigorously as it was a few years ago. Anthropologists continue to draw upon linguistics for the assistance it can give them in the analysis of such topics as the structure of kinship. A more recent development, but one that has not so far produced any very substantial results, is the application of notions derived from generative grammar to the analysis of ritual and other kinds of culturally prescribed behaviour.

      By computational linguistics is meant no more than the use of electronic digital computers in linguistic research. At a theoretically trivial level, computers are employed to scan texts and to produce, more rapidly and more reliably than was possible in the past, such valuable aids to linguistic and stylistic research as word lists, frequency counts, and concordances. Theoretically more interesting, though much more difficult, is the automatic grammatical analysis of texts by computer. Considerable progress was made in this area by research groups working on machine translation and information retrieval in the United States, Great Britain, the Soviet Union, France, and a few other countries in the decade between the mid-1950s and the mid-1960s. But much of the original impetus for this work disappeared, for a time at least, in part because of the realization that the theoretical problems involved in machine translation are much more difficult than they were at first thought to be and in part as a consequence of a loss of interest among linguists in the development of discovery procedures. Whether automatic syntactic analysis and fully automatic high-quality machine translation are even feasible in principle remains a controversial question.
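
      The simpler aids mentioned above are easy to illustrate. The sketch below produces a small frequency list and a keyword-in-context concordance from a toy sample; real concordance software differs mainly in scale and in its handling of lemmatization, punctuation, and markup.

```python
# Word-frequency list and a simple keyword-in-context (KWIC) concordance.
import re
from collections import Counter

TEXT = ("All languages change in the course of time. "
        "Written records make it clear that languages change.")

words = re.findall(r"[a-z]+", TEXT.lower())

# Frequency count (word list)
for word, count in Counter(words).most_common(5):
    print(f"{count:3d}  {word}")

# Concordance: every occurrence of a keyword with a window of context
def kwic(words, keyword, window=3):
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            print(f"{left:>25s}  [{w}]  {right}")

kwic(words, "languages")
```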

Mathematical linguistics
      What is commonly referred to as mathematical linguistics comprises two areas of research: the study of the statistical structure of texts and the construction of mathematical models of the phonological and grammatical structure of languages. These two branches of mathematical linguistics, which may be termed statistical and algebraic linguistics, respectively, are typically distinct. Attempts have been made to derive the grammatical rules of languages from the statistical structure of texts written in those languages, but such attempts are generally thought to have been not only unsuccessful so far in practice but also, in principle, doomed to failure. That languages have a statistical structure is a fact well known to cryptographers. Within linguistics, it is of considerable typological interest to compare languages from a statistical point of view (the ratio of consonants to vowels, of nouns to verbs, and so on). Statistical considerations are also of value in stylistics (see below).
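      As a toy illustration of the kind of typological comparison mentioned above, the following sketch computes the ratio of consonant letters to vowel letters in two short samples. Counting letters rather than phonemes, and the two invented sample strings, are simplifying assumptions; a serious study would work from phonemic transcriptions of sizable corpora.

```python
# A toy illustration of comparing texts "from a statistical point of view":
# the ratio of consonant letters to vowel letters. Counting letters rather
# than phonemes, and the two invented sample strings, are simplifying
# assumptions made purely for illustration.
VOWELS = set("aeiou")

def consonant_vowel_ratio(text):
    letters = [c for c in text.lower() if c.isalpha()]
    vowels = sum(1 for c in letters if c in VOWELS)
    consonants = len(letters) - vowels
    return consonants / vowels if vowels else float("inf")

samples = {
    "consonant-heavy sample": "strength through struggle",
    "vowel-heavy sample": "aloha oe i ka ua o ke ao",   # invented Hawaiian-like string
}
for name, sample in samples.items():
    print(f"{name:24s} C/V ratio: {consonant_vowel_ratio(sample):.2f}")
```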

      Algebraic linguistics derives principally from the work of Noam Chomsky in the field of generative grammar (see above Chomsky's grammar (linguistics)). In his earliest work Chomsky described three different models of grammar—finite-state grammar, phrase-structure grammar, and transformational grammar—and compared them in terms of their capacity to generate all and only the sentences of natural languages and, in doing so, to reflect in an intuitively satisfying manner the underlying formal principles and processes. Other models have also been investigated, and it has been shown that certain different models are equivalent in generative power to phrase-structure grammars. The problem is to construct a model that has all the formal properties required to handle the processes found to be operative in languages but that prohibits rules that are not required for linguistic description. It is an open question whether such a model, or one that approximates more closely to this ideal than current models do, will be a transformational grammar or a grammar of some radically different character.
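      To make the idea of a grammar “generating” sentences slightly more concrete, here is a minimal sketch of a phrase-structure (context-free) grammar together with a generator that enumerates every string it produces. The rules and the tiny lexicon are invented for illustration and are not drawn from Chomsky's work; a transformational grammar would add a further layer of rules mapping structures of this kind onto others.

```python
# A minimal phrase-structure (context-free) grammar and a generator that
# enumerates the strings it produces. The rules and the tiny lexicon are
# invented purely to illustrate what "generating all and only the sentences"
# means; they are not a serious grammar of English.
import itertools

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["sentence"]],
    "V":   [["analyzes"], ["generates"]],
}

def expand(symbol):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in GRAMMAR:          # terminal word
        yield [symbol]
        return
    for production in GRAMMAR[symbol]:
        # Cartesian product of the expansions of each symbol in the production.
        for parts in itertools.product(*(list(expand(s)) for s in production)):
            yield [word for part in parts for word in part]

for words in expand("S"):
    print(" ".join(words))
```

      The grammar above generates exactly thirty-two sentences; the theoretical question raised in the preceding paragraph is whether some such formal device can be constructed that generates all and only the sentences of a natural language.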

      The term stylistics is employed in a variety of senses by different linguists. In its widest interpretation it is understood to deal with every kind of synchronic variation in language other than what can be ascribed to differences of regional dialect. In its narrowest interpretation it refers to the linguistic analysis of literary texts. One of the aims of stylistics in this sense is to identify those features of a text that give it its individual stamp and mark it as the work of a particular author. Another is to identify the linguistic features of the text that produce a certain aesthetic response in the reader. In these respects the aims of stylistics coincide with the traditional aims of literary criticism. What distinguishes stylistics as a branch of linguistics (for those who regard it as such) is the fact that it draws upon the methodological and theoretical principles of modern linguistics.

Philosophy of language
      The analysis of language has always been a subject of particular concern to philosophers, and traditional grammar was strongly influenced by the dominant philosophical attitudes of the day. Modern linguistics and modern philosophical theories have so far had little influence on one another. Some philosophers have shown an interest in Chomsky's (Chomsky, Noam) controversial suggestion that work in generative grammar lends support to the rationalists in their long-standing dispute with the empiricists about the source of human knowledge. Potentially more fruitful, perhaps, is the interest shown by a number of linguists in philosophical treatments of reference, quantification, and presupposition, in systems of modal logic, and in the work of the so-called philosophers of ordinary language.

      In the sense in which the term applied linguistics is most commonly used nowadays, it is restricted to the application of linguistics to language teaching. Much of the recent expansion of linguistics as a subject of teaching and research in the universities in many countries has come about because of its value, actual and potential, for writing better language textbooks and devising more efficient methods of teaching languages. Linguistics is also widely held to be relevant to the training of teachers of the deaf and speech therapists. Outside the field of education in the narrower sense, applied linguistics (and, more particularly, applied sociolinguistics) has an important part to play in what is called language planning; i.e., in advising governments, especially in recently created states, as to which language or dialect should be made the official language of the country and how it should be standardized.

Sir John Lyons

dialectology and linguistic geography

Dialect geography
      Dialect study as a discipline—dialectology—dates from the first half of the 19th century, when local dialect dictionaries and dialect grammars first appeared in western Europe. Soon thereafter, dialect maps were developed; most often they depicted the division of a language's territory into regional dialects. The 19th-century rise of nationalism, coupled with the Romantic view of dialects and folklore as manifestations of the ethnic soul, furnished a great impetus for dialectology.

Early dialect studies
      The first dialect dictionaries and grammars were most often written by scholars describing the dialect of their birthplace or by fieldworkers whose main method of investigation was free conversation with speakers of the dialect, usually older persons and, preferably, those who showed the least degree of literacy and who had travelled as little as possible. Many of these grammars and dictionaries recorded dialectal traits that deviated from the standard language. In the second half of the 19th century, when historical and comparative linguistic study was flourishing, it became customary to focus attention on the fate of particular elements of the archaic language in a given dialect; e.g., the changes that Latin vowels and consonants underwent when used in different positions in a particular Romance dialect.

      With the accumulation of dialectal data, investigators became increasingly conscious of the inadequacy of viewing dialects as internally consistent units that were sharply differentiated from neighbouring dialects. It became more and more clear that each dialectal element or phenomenon refused to stay neatly within the borders of a single dialect area and that each had its own isogloss; consequently, maps of dialects would have to be replaced by maps showing the distribution of each particular feature. Scientifically sound though this approach is, the preparation and compilation of such maps, called linguistic atlases (atlas), is a difficult, costly, voluminous, and time-consuming job.

Dialect atlases
      Dialect atlases are compiled on the basis of investigations of the dialects of a large number of places; a questionnaire provides uniform data. There are two basic methods of data collection: fieldwork and survey by correspondence. Fieldwork, in which a trained investigator transcribes dialectal forms directly (or on tape), affords more precise data and enables the questionnaire to include a greater number of diverse questions, but it necessarily limits the number of points that can be covered. The advantage of the correspondence method lies in its ability to encompass more points at less cost and with less time expended in gathering the data. On the other hand, rural schoolteachers, normally the persons who complete such questionnaires, can answer only a relatively small number of questions, and often do so imperfectly.
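      A hypothetical sketch of how responses gathered by either method might be tabulated, item by item, so that each feature can later be mapped point by point. The localities below are invented; the variant words for “dragonfly” are well-known American regionalisms, used here only as an example.

```python
# A hypothetical sketch of tabulating questionnaire responses from a dialect
# survey so that each item can later be mapped point by point. The localities
# and the choice of questionnaire item are invented for illustration.
from collections import defaultdict

# One record per (locality, questionnaire item): the form the informant used.
responses = [
    ("Northville", "word for 'dragonfly'", "darning needle"),
    ("Southbury",  "word for 'dragonfly'", "snake doctor"),
    ("Eastham",    "word for 'dragonfly'", "darning needle"),
    ("Westport",   "word for 'dragonfly'", "mosquito hawk"),
]

# Group localities by the variant they reported: the raw material from which
# a feature map (or, eventually, an isogloss) could be drawn.
by_variant = defaultdict(list)
for locality, item, variant in responses:
    by_variant[variant].append(locality)

for variant, places in sorted(by_variant.items()):
    print(f"{variant:16s} -> {', '.join(places)}")
```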

      The first large-scale enterprise in linguistic geography was the preparation of the German linguistic atlas. In the 1880s, the initiator of this great undertaking, Georg Wenker, composed 40 test sentences that illustrated most of the important ways in which dialects differed and sent them to schoolmasters in over 40,000 places in the German Empire. The sentences were to be translated into the local dialect. Publication of the results was not begun until 1926; the main cause of the delay was the enormous quantity of material to be arranged and analyzed.

      The famous French linguistic atlas of Jules Gilliéron and Edmond Edmont was based on a completely different concept. Using a questionnaire of about 2,000 words and phrases that Gilliéron had composed, Edmont surveyed 639 points in the French-speaking area. The atlas, compiled under the direction of Gilliéron, was published in fascicles from 1902 to 1912 and furnished both a strong stimulus and the basic model for work on linguistic atlases elsewhere in the world. European linguists, especially in Romance- and Germanic-speaking countries, were the first to participate in such atlas projects. One of the most significant contributions is the linguistic atlas of Italy and southern Switzerland by Karl Jaberg and Jakob Jud (Jud, Jakob); it appeared from 1928 to 1940. Particularly noteworthy in its attention to precise definitions of meaning, this atlas often used illustrations and described objects and actions of village life denoted by the questionnaire's words.

      At present, dialects of virtually all European languages have been treated in linguistic geography studies. In some countries, data are still being collected and classified and maps are being drawn, but in others a second generation of atlases is already under way. French dialectologists, for instance, are now working on regional atlases that will complement data contained in the Atlas linguistique de la France. In England, work began in 1946 under the direction of Harold Orton and Eugen Dieth; the first volume of the Survey of English Dialects was published in 1962. In Slavic-speaking countries, work is now under way both on atlases of separate Slavic languages and on the large general Slavic linguistic atlas that will cover nearly 1,000 locales in all parts of the European territory where Slavic languages are spoken. Outside Europe, the greatest amount of work in linguistic geography has been completed in Japan and in the United States.

      As early as 1905–06, a committee of Japanese dialectologists published the first linguistic atlas of Japan in two volumes, one devoted to phonology and one to morphology. Subsequent work has been done on a new atlas of Japan as a whole and on several regional atlases. The extensive activity of Chinese (Chinese languages) specialists has concentrated on descriptions of particular local and regional dialects. The Chinese situation is a peculiar one because of the enormous number of people who speak Chinese, the very significant dialectal differentiation (certain dialects, particularly those in the south of China, would be considered by Western standards as separate languages), and the nature of the Chinese script. Chinese characters do not represent sounds but concepts. Because of this, the written language can be read without difficulty in many different dialect areas, although its spoken form varies greatly from one region to another.

      Because of the enormous size of the United States, atlas surveys were done by region. Between 1931 and 1933, fieldworkers under the direction of the linguist Hans Kurath (Kurath, Hans) surveyed 213 New England communities; the results were published in the Linguistic Atlas of New England (with 734 maps) in 1939–43. Based on the methodological experience of Jaberg and Jud in their atlas of Italy and southern Switzerland, this work involved systematic investigations not only among the relatively uneducated but also among better educated, more cultured informants and among the very well educated, cultured, and informed members of a community. Thus the dimension of social stratification of language was introduced into linguistic geography, and valuable material about regional linguistic standards became available.

      After 1933, fieldwork was extended to the other Atlantic states. Lack of financial support, however, has hindered the publication of these atlases. Nevertheless, several works based on the material gathered have appeared, among them Kurath's Word Geography of the Eastern United States, E. Bagby Atwood's Survey of Verb Forms in the Eastern United States, and Kurath and McDavid's Pronunciation of English in the Atlantic States. Independent work was carried out in other U.S. regions, mainly with an adapted form of the questionnaire developed for the Atlantic states; only introductions or summaries of material in the files have been published, however, because of lack of funds.

      The most effective and thorough—as well as the most expensive—way of presenting data in linguistic atlases is by printing the actual responses to questionnaire items right on the maps. Phenomena of linguistic geography, however, are usually represented by geometric symbols or figures at the proper points on the map or, even more summarily, by the drawing of isoglosses (linguistic boundaries) or by shading or colouring the areas of particular features.

      Only dialect atlases can furnish data on the major dialectal phenomena in a multitude of geographic locations in a manner that both assures the commensurability of the data and allows a panoramic examination of the whole range of phenomena. The inventory of linguistic phenomena is so rich, however, that no one questionnaire can encompass it all. Moreover, the use of a questionnaire unavoidably brings about a schematization of answers that is lacking in spontaneity. For these reasons, other kinds of publications, such as dialect dictionaries or monographs based on extensive free conversation with speakers of local dialects, are indispensable complements to linguistic atlases.

The value and applications of dialectology
      The scientific interest of dialectology lies in the fact that dialects are a valuable source of information about popular culture. They reflect not only the history of a language but, to a great extent, the ethnic, cultural, and even political history of a people as well. A knowledge of dialectal facts provides practical guidance to school systems that are trying to teach the standard language to an ever greater number of pupils.

      In the 1930s the value of dialectology to the study of language types became apparent. Because dialects greatly outnumber standard languages, they provide a much greater variety of phenomena than standard languages do and thus have become the main source of information about the types of phenomena possible in linguistic systems. Also, in some languages, but not in others, an extremely wide structural variation among dialects has been found. In the Balkan region, where two closely related Slavic languages, Serbo-Croatian and Slovene, are spoken, dialects are found with synthetic declension (case endings, as in Latin) and analytic declension (use of prepositions and word order, as in English). In addition, there are among these dialects complex systems of verbal tenses contrasting with simple ones, as well as dialects with or without the dual number or the neuter gender. The dialects of Serbo-Croatian and Slovene also exhibit almost every type of prosodic structure (e.g., tone, stress, length) found in European languages. Some dialects differentiate long and short vowels or rising and falling accents, while others do not; and in some, but not all, of them stress fulfills a grammatical function. Of the several dozen vowel and diphthong sounds that occur in these dialects, only five are common to all of them; all the rest are restricted to relatively small areas. All of this rich variety contrasts sharply with the relative structural uniformity of the English language—not only in the United States but wherever it is spoken. (The outstanding exceptions are the creolized dialects, which are distinguished by far-reaching structural peculiarities.)

Social dialectology
      The methodology of generative grammar was first applied to dialectology in the 1960s, when the use of statistical means to measure the similarity or difference between dialects also became increasingly common. The most important development of that time, however, was the rapid growth of methods for investigating the social (sociolinguistics) variation of dialects; social variation, in contrast to geographic variation, is prominent in the United States, above all in large urban centres. In cities such as New York, a whole scale of speech variation can be found to correlate with the social status (social class) and educational level of the speakers. In addition, age groups exhibit different patterns, but such patterns of variation differ from one social stratum to another. Still another dimension of variation, especially important in the United States, is connected with the race and ethnic origin of a speaker as well as with the speaker's date of immigration. So-called Black English (African American English) has been influenced by the southeastern U.S. origin of most of the black population of non-southern U.S. regions: many Black English peculiarities are in reality transplanted southeastern dialectal traits.

      Normally, speakers of one of the social dialects of a city possess at least some awareness of the other dialects. In this way, speech characteristics also become subjectively integrated into the system of signs indicating social status. And, in seeking to enhance their social status, poorer and less educated speakers may try to acquire the dialect of the socially prestigious. Under certain conditions, however, certain groups—e.g., blacks and the working class—will show a consciousness of solidarity and a tendency to reject members who imitate either the speech or other types of behaviour of models outside their own social group.

      As a consequence of an individual's daily contacts with speakers of the various social dialects of a city, elements of the other dialects are imperceptibly drawn into his dialect. The collective result of such experiences is the spread of linguistic variables—i.e., groups of variants (sounds or grammatical phenomena) primarily determined by social (educational, racial, age, class) influences, an example being the existence of the two forms “He don't know” and the standard “He doesn't know.” Traits representing variables in intergroup relations can become variable features in the speech of individuals as well; i.e., an individual may employ two or more variants for the same feature in his own speech, such as “seeing” and “seein' ” or “he don't” and “he doesn't.” The frequency of usage for each variable varies with the individual speaker as well as with the social group. There are intermediate stages of frequency between different social groups and entire scales of transitions between different age groups, thus creating even greater variation within the dialect of an individual. The variables also behave differently in the various styles of written or spoken language used by each speaker.
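      The notion of a variable whose variants occur with different frequencies in a single speaker's usage can be illustrated with a small tally. In the sketch below the variable is the “-ing” versus “-in'” alternation mentioned above; the utterances attributed to the two speakers are invented.

```python
# A small sketch of how the frequency of a variable can be computed for
# individual speakers. The variable here is the (ing) alternation mentioned
# above ("seeing" vs. "seein'"); the utterances themselves are invented.
import re

utterances = {
    "speaker A": "I'm seeing him later, he's working late and waiting around",
    "speaker B": "I'm seein' him later, he's workin' late and waiting around",
}

def ing_rate(text):
    """Share of -ing words realized with the standard -ing variant."""
    standard = len(re.findall(r"\b\w+ing\b", text))
    reduced = len(re.findall(r"\b\w+in'", text))
    total = standard + reduced
    return standard / total if total else None

for speaker, text in utterances.items():
    print(f"{speaker}: {ing_rate(text):.0%} standard -ing")
```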

      The study of variables is one of the central tasks of any investigation of the dialects of American cities. Applying the statistical methods of modern sociology, linguists have worked out investigative procedures sharply different from those of traditional dialectology. The chief contributor has been William Labov, the pioneer of social dialectology in the U.S. The basic task is to determine the correlation between a group of linguistic variables—such as the different ways of pronouncing a certain vowel—and extralinguistic variables, such as education, social status, age, and race. For a reasonable degree of statistical reliability, one must record a great number of speakers. In general, several examples of the same variable must be elicited from each individual in order to examine the frequency and probability of its usage. Accordingly, the number of linguistic variables that can be examined is quite limited, in comparison with the number of dialectal features normally recorded by traditional fieldworkers in rural communities; in these situations, the investigator is often satisfied with one or two responses for each feature.

      A completely new, flexible, and imaginative method of interviewing is needed for such work in urban centres, as well as new ways of finding and making contact with informants. One example is Labov's method for testing the fate of final and preconsonantal r in speakers of different social levels. Choosing three New York City department stores, each oriented to a completely different social stratum, he approached a large number of salesladies, asking each of them about the location of a certain department that he knew to be on the fourth floor. Thus, their answers always contained two words with potential r's—“fourth” and “floor.” This shortcut enabled Labov to establish in a relatively short time that the salesladies in the store with richer customers clearly tended to use “r-full” forms, whereas those in the stores geared to the poorer social strata more commonly used “r-less” forms.
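      The tally behind such a rapid survey is simple to reproduce in outline. In the sketch below each response is scored for how many of the two potential r's in “fourth floor” were pronounced; the store labels and all of the counts are invented placeholders, not Labov's actual figures.

```python
# An outline of the tally behind a rapid survey of this kind. The store labels
# and the counts below are invented placeholders, not Labov's actual figures;
# each elicited response is scored for how many of the two potential r's in
# "fourth floor" were pronounced.
responses = {
    # store: list of (r's pronounced, potential r's) per response
    "upper-priced store":  [(2, 2), (2, 2), (1, 2), (2, 2)],
    "middle-priced store": [(1, 2), (2, 2), (0, 2), (1, 2)],
    "lower-priced store":  [(0, 2), (1, 2), (0, 2), (0, 2)],
}

for store, scores in responses.items():
    pronounced = sum(r for r, _ in scores)
    possible = sum(n for _, n in scores)
    print(f"{store:20s} r-full rate: {pronounced / possible:.0%}")
```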

      Social dialectology has focused on the subjective evaluation of linguistic features and the degree of an individual's linguistic security, phenomena that have considerable influence on linguistic change. Linguistic scientists, in studying the mechanism of such change, have found that it seems to proceed gradually from one social group to another, always attaining greater frequency among the young. Social dialectology also has great relevance for a society as a whole, in that the data it furnishes will help deal with the extremely complex problems connected with the speech of the socially underprivileged, especially of minority groups. Thus, the recent emphasis on the speech of minority groups, such as the Black English of American cities, is not a chance phenomenon. Specific methods for such investigation are being developed, as well as ways of applying the results of such investigation to educational policies.

Pavle Ivić Ed.

Additional Reading
Robert H. Robins, A Short History of Linguistics, 2nd ed. (1979), and General Linguistics: An Introductory Survey, 3rd ed. (1980), together offer a comprehensive and balanced treatment of the whole field. Leonard Bloomfield, Language (1933), a classic introduction to the subject, is still not completely superseded and is essential reading for an understanding of subsequent American work. Charles F. Hockett, A Course in Modern Linguistics (1958), a comprehensive, stimulating, though somewhat personal textbook, represents the post-Bloomfieldian period in the United States. John Lyons has produced a number of notable surveys: Introduction to Theoretical Linguistics (1968), attempts to synthesize more traditional and more modern ideas on language, paying particular attention to generative grammar and semantics; New Horizons in Linguistics (ed., 1970), contains previously unpublished chapters on developments in most areas of linguistics; Language and Linguistics: An Introduction (1981), is a textbook covering theoretical developments. Martin Joos (ed.), Readings in Linguistics (1957), is an excellent selection of key articles on structuralism in the post-Bloomfieldian period. Z.S. Harris, Methods in Structural Linguistics (1951), offers the most extreme and most consistent expression of the distributional approach to linguistic analysis—important for the development of generative grammar. Noam Chomsky, Syntactic Structures (1957), is the first generally accessible and relatively non-technical treatment of generative grammar, widely recognized as one of the most revolutionary books on language to appear in the 20th century; J.P.B. Allen and Paul Van Buren (eds.), Chomsky: Selected Readings (1971), contains an annotated selection of key passages from Chomsky's main works. S. Pit Corder (ed.), The Edinburgh Course in Applied Linguistics, 4 vol. (1973–77), is a collection of readings covering a wide range of views. Richard C. Oldfield and J.C. Marshall (eds.), Language (1968); J.A. Fodor, T.G. Bever, and M.F. Garrett, The Psychology of Language (1974); and Joseph F. Kess, Psycholinguistics (1976), are important works in psycholinguistics. Dell Hymes (ed.), Language in Culture and Society (1964), is an excellent selection of articles in sociolinguistics and anthropological linguistics. Journals in the field include Language, Word, International Journal of American Linguistics (United States); Philological Society Transactions, Journal of Linguistics (Great Britain); Lingua, Studies in Language (Holland); and Bulletin de la Société de Linguistique de Paris (France). Ed.

* * *


Universalium. 2010.
