logic, history of

Introduction

      This article treats the history of the discipline from its origins among the ancient Greeks to the present time.

Origins of logic in the West

Precursors of ancient logic
      There was a medieval tradition according to which the Greek philosopher Parmenides (5th century BC) invented logic while living on a rock in Egypt. The story is pure legend, but it does reflect the fact that Parmenides was the first philosopher to use an extended argument for his views, rather than merely proposing a vision of reality. But using arguments is not the same as studying them, and Parmenides never systematically formulated or studied principles of argumentation in their own right. Indeed, there is no evidence that he was even aware of the implicit rules of inference used in presenting his doctrine.

      Perhaps Parmenides' use of argument was inspired by the practice of early Greek mathematics among the Pythagoreans. Thus it is significant that Parmenides is reported to have had a Pythagorean teacher. But the history of Pythagoreanism in this early period is shrouded in mystery, and it is hard to separate fact from legend.

      If Parmenides was not aware of general rules underlying his arguments, the same perhaps is not true for his disciple Zeno of Elea (5th century BC). Zeno was the author of many arguments, known collectively as “Zeno's paradoxes,” purporting to infer impossible consequences from a non-Parmenidean view of things and so to refute such a view and indirectly to establish Parmenides' monist position. The logical strategy of establishing a claim by showing that its opposite leads to absurd consequences is known as reductio ad absurdum. The fact that Zeno's arguments were all of this form suggests that he recognized and reflected on the general pattern.

      Other authors too contributed to a growing Greek interest in inference and proof. Early rhetoricians and sophists—e.g., Gorgias, Hippias of Elis, Prodicus, and Protagoras (all 5th century BC)—cultivated the art of defending or attacking a thesis by means of argument. This concern for the techniques of argument on occasion merely led to verbal displays of debating skills, what Plato called “eristic.” But it is also true that the sophists were instrumental in bringing argumentation to the central position it came uniquely to hold in Greek thought. The sophists were, for example, among the first people anywhere to demand that moral claims be justified by reasons.

      Certain particular teachings of the sophists and rhetoricians are significant for the early history of logic. For example, Protagoras is reported to have been the first to distinguish different kinds of sentences: questions, answers, prayers, and injunctions. Prodicus appears to have maintained that no two words can mean exactly the same thing. Accordingly, he devoted much attention to carefully distinguishing and defining the meanings of apparent synonyms, including many ethical terms.

       Socrates (c. 470–399 BC) is said to have attended Prodicus' lectures. Like Prodicus, he pursued the definitions of things, particularly in the realm of ethics and values. These investigations, conducted by means of debate and argument as portrayed in the writings of Plato (428/427–348/347 BC), reinforced Greek interest in argumentation and emphasized the importance of care and rigour in the use of language.

      Plato continued the work begun by the sophists and by Socrates. In the Sophist, he distinguished affirmation from negation and made the important distinction between verbs and names (including both nouns and adjectives). He remarked that a complete statement (logos) cannot consist of either a name or a verb alone but requires at least one of each. This observation indicates that the analysis of language had developed to the point of investigating the internal structures of statements, in addition to the relations of statements as a whole to one another. This new development would be raised to a high art by Plato's pupil Aristotle (384–322 BC).

      There are passages in Plato's writings where he suggests that the practice of argument in the form of dialogue (Platonic “dialectic”) has a larger significance beyond its occasional use to investigate a particular problem. The suggestion is that dialectic is a science in its own right, or perhaps a general method for arriving at scientific conclusions in other fields. These seminal but inconclusive remarks indicate a new level of generality in Greek speculation about reasoning.

Aristotle
      The logical work of all these men, important as it was, must be regarded as piecemeal and fragmentary. None of them was engaged in the systematic, sustained investigation of inference in its own right. That seems to have been done first by Aristotle. At the end of his Sophistic Refutations, Aristotle acknowledges that in most cases new discoveries rely on previous labours by others, so that, while those others' achievements may be small, they are seminal. But then he adds:

Of the present inquiry, on the other hand, it was not the case that part of the work had been thoroughly done before, while part had not. Nothing existed at all. . . . [O]n the subject of deduction we had absolutely nothing else of an earlier date to mention, but were kept at work for a long time in experimental researches.
(From The Complete Works of Aristotle: The Revised Oxford Translation, ed. Jonathan Barnes, 1984, by permission of Oxford University Press.)

      Aristotle's logical writings comprise six works, known collectively as the Organon (“Tool”). The significance of the name is that logic, for Aristotle, was not one of the theoretical sciences. These were physics, mathematics, and metaphysics. Instead, logic was a tool used by all the sciences. (To say that logic is not a science in this sense is in no way to deny it is a rigorous discipline. The notion of a science was a very special one for Aristotle, most fully developed in his Posterior Analytics.)

      Aristotle's logical works, in their traditional but not chronological order, are:
Categories, which discusses Aristotle's 10 basic kinds of entities: substance, quantity, quality, relation, place, time, position, state, action, and passion. Although the Categories is always included in the Organon, it has little to do with logic in the modern sense.
De interpretatione (On Interpretation), which includes a statement of Aristotle's semantics, along with a study of the structure of certain basic kinds of propositions and their interrelations.
Prior Analytics (two books), containing the theory of syllogistic (described below).
Posterior Analytics (two books), presenting Aristotle's theory of “scientific demonstration” in his special sense. This is Aristotle's account of the philosophy of science or scientific methodology.
Topics (eight books), an early work, which contains a study of nondemonstrative reasoning. It is largely a miscellany of advice on how to conduct a good argument.
Sophistic Refutations, a discussion of various kinds of fallacies. It was originally intended as a ninth book of the Topics.

      Aristotle's logic was a term logic, in the following sense. Consider the schema: “If every β is an α and every γ is a β, then every γ is an α.” The “α,” “β,” and “γ” are variables—i.e., placeholders. Any argument that fits this pattern is a valid syllogism and, in fact, a syllogism in the form known as Barbara. (On this terminology, see below.)

      The variables here serve as placeholders for terms or names. Thus, replacing “α” by “substance,” “β” by “animal,” and “γ” by “dog” in the schema yields: “If every animal is a substance and every dog is an animal, then every dog is a substance,” a syllogism in Barbara. Aristotle's logic was a term logic in the sense that it focused on logical relations among such terms in valid inferences.
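
      In modern terms, this focus can be illustrated extensionally by treating each term as the set of individuals it is true of, so that “every β is an α” becomes set inclusion; Barbara then amounts to the transitivity of the subset relation. The following Python sketch is a present-day illustration under that assumption, not Aristotle's own formulation; the particular sets are arbitrary examples.

```python
# A modern, extensional sketch of Barbara: terms are modeled as sets
# of individuals, and "every B is an A" as the subset relation B <= A.
# (An illustration only; Aristotle worked with terms, not sets.)

substance = {"Socrates", "Fido", "this stone"}
animal = {"Socrates", "Fido"}
dog = {"Fido"}

# Premises: every animal is a substance; every dog is an animal.
assert animal <= substance and dog <= animal

# Conclusion (Barbara): every dog is a substance.
assert dog <= substance  # the subset relation is transitive
```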

      Aristotle was the first logician to use variables. This innovation was tremendously important, since without them it would have been impossible for him to reach the level of generality and abstraction that he did.

Categorical forms
      Most of Aristotle's logic was concerned with certain kinds of propositions that can be analyzed as consisting of (1) usually a quantifier (“every,” “some,” or the universal negative quantifier “no”), (2) a subject, (3) a copula, (4) perhaps a negation (“not”), and (5) a predicate. Propositions analyzable in this way were later called categorical propositions and fall into one or another of the following forms:
● Universal affirmative: “Every β is an α.”
● Universal negative: “Every β is not an α,” or equivalently “No β is an α.”
● Particular affirmative: “Some β is an α.”
● Particular negative: “Some β is not an α.”
● Indefinite affirmative: “β is an α.”
● Indefinite negative: “β is not an α.”
● Singular affirmative: “x is an α,” where “x” refers to only one individual (e.g., “Socrates is an animal”).
● Singular negative: “x is not an α,” with “x” as before.

      Sometimes, and very often in the Prior Analytics, Aristotle adopted alternative but equivalent formulations. Instead of saying, for example, “Every β is an α,” he would say, “α belongs to every β” or “α is predicated of every β.”

      In syllogistic, singular propositions (affirmative or negative) were generally ignored, and indefinite affirmatives and negatives were treated as equivalent to the corresponding particular affirmatives and negatives. In the Middle Ages, propositions of types 1–4 were said to be of forms A, E, I, and O, respectively. This notation will be used below.

      In the De interpretatione Aristotle discussed ways in which affirmative and negative propositions with the same subjects and predicates can be opposed to one another. He observed that when two such propositions are related as forms A and E, they cannot be true together but can be false together. Such pairs Aristotle called contraries. When the two propositions are related as forms A and O or as forms E and I or as affirmative and negative singular propositions, then it must be that one is true and the other false. These Aristotle called contradictories. He had no special term for pairs related as forms I and O, although they were later called subcontraries. Subcontraries cannot be false together, although, as Aristotle remarked, they may be true together. The same holds for indefinite affirmatives and negatives, construed as equivalent to the corresponding particular forms. Note that if a universal proposition (affirmative or negative) is true, its contradictory is false, and so the subcontrary of that contradictory is true. Thus propositions of form A imply the corresponding propositions of form I, and those of form E imply those of form O. These last relations were later called subalternation, and the particular propositions (affirmative or negative) were said to be subalternate to the corresponding universal propositions.
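
      These relations can be checked mechanically. The Python sketch below is a modern reconstruction rather than anything Aristotle states: it reads the four forms extensionally and enumerates all non-empty terms over a three-element domain (non-emptiness reflects the existential import the traditional forms are usually taken to carry, and which subalternation requires).

```python
from itertools import combinations

domain = {1, 2, 3}
# All non-empty subsets of the domain, serving as candidate terms.
terms = [set(c) for r in range(1, len(domain) + 1)
         for c in combinations(domain, r)]

def A(b, a): return b <= a               # every B is an A
def E(b, a): return b.isdisjoint(a)      # no B is an A
def I(b, a): return not b.isdisjoint(a)  # some B is an A
def O(b, a): return not b <= a           # some B is not an A

for b in terms:
    for a in terms:
        assert not (A(b, a) and E(b, a))   # contraries: never true together
        assert A(b, a) != O(b, a)          # contradictories: exactly one true
        assert E(b, a) != I(b, a)          # contradictories: exactly one true
        assert I(b, a) or O(b, a)          # subcontraries: never false together
        assert not A(b, a) or I(b, a)      # subalternation: A implies I
        assert not E(b, a) or O(b, a)      # subalternation: E implies O
```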

      Near the beginning of the Prior Analytics, Aristotle formulated several rules later known collectively as the theory of conversion. To “convert” a proposition in this sense is to interchange its subject and predicate. Aristotle observed that propositions of forms E and I can be validly converted in this way: if no β is an α, then so too no α is a β, and if some β is an α, then so too some α is a β. In later terminology, such propositions were said to be converted “simply” (simpliciter). But propositions of form A cannot be converted in this way; if every β is an α, it does not follow that every α is a β. It does follow, however, that some α is a β. Such propositions, which can be converted provided that not only are their subjects and predicates interchanged but also the universal quantifier is weakened to a particular quantifier “some,” were later said to be converted “accidentally” (per accidens). Propositions of form O cannot be converted at all; from the fact that some animal is not a dog, it does not follow that some dog is not an animal. Aristotle used these laws of conversion in later chapters of the Prior Analytics to reduce other syllogisms to syllogisms in the first figure, as described below.
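
      The conversion laws admit the same kind of check. Under the extensional reading used above (again a modern gloss), E- and I-forms are symmetric in subject and predicate, an A-form with a non-empty subject yields the corresponding I-form with the terms interchanged, and a single counterexample shows that O-forms do not convert.

```python
def A(b, a): return b <= a               # every B is an A
def E(b, a): return b.isdisjoint(a)      # no B is an A
def I(b, a): return not b.isdisjoint(a)  # some B is an A
def O(b, a): return not b <= a           # some B is not an A

beta = {"Fido", "Rex"}
alpha = {"Fido", "Rex", "Socrates"}

# Simple conversion: E and I are symmetric in their terms.
assert E(beta, alpha) == E(alpha, beta)
assert I(beta, alpha) == I(alpha, beta)

# Per accidens: "every B is an A" yields "some A is a B"
# (given a non-empty B, i.e., existential import).
assert not (A(beta, alpha) and beta) or I(alpha, beta)

# O does not convert: some animal is not a dog, yet it is
# false that some dog is not an animal.
animal, dog = {"Fido", "Socrates"}, {"Fido"}
assert O(animal, dog) and not O(dog, animal)
```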

Syllogisms
      Aristotle defined a syllogism as “discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so.” (From The Complete Works of Aristotle: The Revised Oxford Translation, ed. Jonathan Barnes, 1984, by permission of Oxford University Press.) But in practice he confined the term to arguments containing two premises and a conclusion, each of which is a categorical proposition. The subject and predicate of the conclusion each occur in one of the premises, together with a third term (the middle) that is found in both premises but not in the conclusion. A syllogism thus argues that because α and γ are related in certain ways to β (the middle) in the premises, they are related in a certain way to one another in the conclusion.

      The predicate of the conclusion is called the major term, and the premise in which it occurs is called the major premise. The subject of the conclusion is called the minor term and the premise in which it occurs is called the minor premise. This way of describing major and minor terms conforms to Aristotle's actual practice and was proposed as a definition by the 6th-century Greek commentator John Philoponus. But in one passage Aristotle put it differently: the minor term is said to be “included” in the middle and the middle “included” in the major term. This remark, which appears to have been intended to apply only to the first figure (see below), has caused much confusion among some of Aristotle's commentators, who interpreted it as applying to all three figures.

      Aristotle distinguished three different figures of syllogisms, according to how the middle is related to the other two terms in the premises. In one passage, he says that if one wants to prove α of γ syllogistically, one finds a middle β such that either α is predicated of β and β of γ (first figure), or β is predicated of both α and γ (second figure), or else both α and γ are predicated of β (third figure). All syllogisms must fall into one or another of these figures.

      But there is plainly a fourth possibility, that β is predicated of α and γ of β. Many later logicians recognized such syllogisms as belonging to a separate, fourth figure. Aristotle explicitly mentioned such syllogisms but did not group them under a separate figure; his failure to do so has prompted much speculation among commentators and historians. Other logicians included these syllogisms under the first figure. The earliest to do this was Theophrastus (see below Theophrastus of Eresus), who reinterpreted the first figure in so doing.

      Four figures, each with three propositions in one of four forms (A, E, I, O), yield a total of 256 possible syllogistic patterns. Each pattern is called a mood. Only 24 moods are valid, 6 in each figure. Some valid moods may be derived from others by subalternation—that is, if premises validly yield a conclusion of form A, the same premises will yield the corresponding conclusion of form I. So too with forms E and O. Such derived moods were not discussed by Aristotle; they seem to have been first recognized by Ariston of Alexandria (c. 50 BC). In the Middle Ages they were called “subalternate” moods. Disregarding them, there are 4 valid moods in each of the first two figures, 6 in the third figure, and 5 in the fourth. Aristotle recognized all 19 of them.

      Here are the valid moods, including subalternate ones, under their medieval mnemonic names (subalternate moods are marked with an asterisk):

      First figure: Barbara, Celarent, Darii, Ferio, *Barbari, *Celaront.

      Second figure: Cesare, Camestres, Festino, Baroco, *Cesaro, *Camestrop.

      Third figure: Darapti, Disamis, Datisi, Felapton, Bocardo, Ferison.

      Fourth figure: Bramantip, Camenes, Dimaris, Fesapo, Fresison, *Camenop.

      The sequence of vowels in each name indicates the sequence of categorical propositions in the mood in the order: major, minor, conclusion. Thus, for example, Celarent is a first-figure syllogism with an E-form major, A-form minor, and E-form conclusion.
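
      Because the vowels alone carry this information, extracting the forms from a name is mechanical, as the short sketch below shows (the names are simply those listed above):

```python
# Decode a medieval mnemonic name into its three categorical forms,
# in the order major premise, minor premise, conclusion.
def forms(name):
    vowels = [c for c in name.lower() if c in "aeio"]
    return tuple(v.upper() for v in vowels[:3])

assert forms("Barbara") == ("A", "A", "A")
assert forms("Celarent") == ("E", "A", "E")
assert forms("Felapton") == ("E", "A", "O")
assert forms("Baroco") == ("A", "O", "O")
```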

      If one assumes the validity of the nonsubalternate moods of the first figure, then, with two exceptions, all valid moods in the other figures can be proved by “reducing” them to one of those “axiomatic” first-figure moods. This reduction shows that, if the premises of the reducible mood are true, then it follows, by rules of conversion and one of the axiomatic moods, that the conclusion is true. The procedure is encoded in the medieval names:
● The initial letter is the initial letter of the first-figure mood to which the given mood is reducible. Thus Felapton is reducible to Ferio.
● When it is not the final letter, “s” after a vowel means “Convert the sentence simply,” and “p” there means “Convert the sentence per accidens.”
● When “s” or “p” is the final letter, the conclusion of the first-figure syllogism to which the mood is reduced must be converted simply or per accidens, respectively.
● The letter “m” means “Change the order of the premises.”
● When it is not the first letter, “c” means that the syllogism cannot be directly reduced to the first figure but must be proved by reductio ad absurdum. (There are two such moods; see below.)
● The letters “b” and “d” (except as initial letters) and “l,” “n,” “t,” and “r” serve only to facilitate pronunciation.

      Thus the premises of Felapton (third figure) are “No β is an α” and “Every β is a γ.” Convert the minor premise per accidens to “Some γ is a β,” as instructed by the “p” after the second vowel. This new proposition and the major premise of Felapton form the premises of a syllogism in Ferio (first figure), the conclusion of which is “Some γ is not an α,” which is also the conclusion of Felapton. Hence, given Ferio and the rule of per accidens conversion, the premises of Felapton validly imply its conclusion. In this sense, Felapton has been “reduced” to Ferio.
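
      The result of such a reduction can also be confirmed semantically. On the extensional reading used earlier (a modern check, not a medieval procedure), every model over a small domain in which Felapton's premises hold, with the middle term non-empty, makes its conclusion true:

```python
from itertools import combinations

domain = {1, 2, 3}
terms = [set(c) for r in range(1, len(domain) + 1)
         for c in combinations(domain, r)]   # non-empty terms only

# Felapton (third figure): "No B is an A" and "Every B is a C"
# imply "Some C is not an A" (validity requires a non-empty B).
for b in terms:
    for a in terms:
        for c in terms:
            if b.isdisjoint(a) and b <= c:
                assert not c <= a            # some C is not an A
```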

      The two exceptional cases, which must be proven indirectly by reductio ad absurdum, are Baroco and Bocardo. Both are reducible indirectly to Barbara in the first figure as follows: Assume the A-form premise (the major in Baroco, the minor in Bocardo). Assume the contradictory of the conclusion. These yield a syllogism in Barbara, the conclusion of which contradicts the O-form premise of the syllogism to be reduced. Thus, given Barbara as axiomatic, and given the premises of the reducible syllogism, the contradictory of its conclusion is false, so that the original conclusion is true.

      Reduction and indirect proof together suffice to prove all moods not in the first figure. This fact, which Aristotle himself showed, makes his syllogistic the first deductive system in the history of logic.

      While the medieval names of the moods contain a great deal of information, they provide no way by themselves to determine to which figure a mood belongs, and so no way to reconstruct the actual form of the syllogism. Mnemonic verses were developed in the Middle Ages for this purpose.

      Categorical propositions in which α is merely said to belong (or not) to some or every β are called assertoric categorical propositions; syllogisms composed solely of such categoricals are called assertoric syllogisms. Aristotle was also interested in categoricals in which α is said to belong (or not) necessarily or possibly to some or every β. Such categoricals are called modal categoricals, and syllogisms in which the component categoricals are modal are called modal syllogisms (they are sometimes called “mixed” if only one of the premises is modal).

      Aristotle discussed two notions of the “possible”: (1) as what is not impossible (i.e., the opposite of which is not necessary) and (2) as what is neither necessary nor impossible (i.e., the contingent). In his modal syllogistic, the term “possible” (or “contingent”) is always used in sense 2 in syllogistic premises, but it is sometimes used in sense 1 in syllogistic conclusions if a conclusion in sense 2 would be incorrect.
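
      In a modern idiom the two senses come to: possible-1(p) = not necessary(not-p), and possible-2(p) = neither necessary(p) nor necessary(not-p). The toy possible-worlds sketch below is an anachronistic convenience for stating the contrast, not Aristotle's apparatus; the two “worlds” are arbitrary.

```python
# Two senses of "possible" on a toy possible-worlds model
# (a modern, anachronistic illustration).
worlds = [{"p": True}, {"p": False}]

def necessary(prop):  return all(w[prop] for w in worlds)
def impossible(prop): return not any(w[prop] for w in worlds)

def possible_1(prop):  # sense 1: not impossible
    return not impossible(prop)

def possible_2(prop):  # sense 2: contingent (neither necessary nor impossible)
    return not necessary(prop) and not impossible(prop)

# "p" holds in some but not all worlds, so it is possible in both senses.
assert possible_1("p") and possible_2("p")
```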

      Aristotle's procedure in his modal syllogistic is to survey each valid mood of the assertoric syllogistic and then to test the several modal syllogisms that can be formed from an assertoric mood by changing one or more of its component categoricals into a modal categorical. The interpretation of this part of Aristotle's logic, and the correctness of his arguments, have been disputed since antiquity.

      Although Aristotle did not develop a full theory of propositions in tenses other than the present, there is a famous passage in the De interpretatione that was influential in later developments in this area. In chapter 9 of that work, Aristotle discussed the assertion “There will be a sea battle tomorrow.” The discussion assumes that as of now the question is still unsettled. Although there are different interpretations of the passage, Aristotle seems there to have been maintaining that although now, before the fact, it is neither true nor false that there will be a sea battle tomorrow, nevertheless it is true even now, before the fact, that there either will or will not be a sea battle tomorrow. In short, Aristotle appears to have affirmed the law of excluded middle (for any proposition replacing “p,” it is true that either p or not-p), but to have denied the principle of bivalence (that every proposition is either true or false) in the case of future contingent propositions.
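
      One modern reconstruction of this combination of views is supervaluationist: a statement about tomorrow counts as true now only if it comes out true on every admissible way tomorrow could go. On that reading, offered here only as an illustrative gloss and not as Aristotle's own formulation, “p or not-p” is true now while “p” itself is neither true nor false:

```python
# A supervaluationist gloss on the sea-battle passage (one modern
# reading among several).
futures = [True, False]          # the battle happens / does not happen

def supertrue(sentence):         # true on every way tomorrow could go
    return all(sentence(p) for p in futures)

def superfalse(sentence):        # false on every way tomorrow could go
    return all(not sentence(p) for p in futures)

sea_battle = lambda p: p
excluded_middle = lambda p: p or not p

assert supertrue(excluded_middle)    # the law of excluded middle holds
assert not supertrue(sea_battle)     # but bivalence fails:
assert not superfalse(sea_battle)    # "p" is neither true nor false now
```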

      Aristotle's logic presupposes several principles that he did not explicitly formulate about logical relations among any propositions whatever, independent of the propositions' internal analyses into categorical or any other form. For example, it presupposes that the principle “If p then q; but p; therefore q” (where p and q are replaced by any propositions) is valid. Such patterns of inference belong to what is called the logic of propositions. Aristotle's logic is, by contrast, a logic of terms in the sense described above. A sustained study of the logic of propositions came only after Aristotle.

Theophrastus of Eresus
      Aristotle's successor as head of his school at Athens was Theophrastus of Eresus (c. 371–c. 286 BC). All Theophrastus' logical writings are now lost, and much of what was said about his logical views by late ancient authors was attributed to both Theophrastus and his colleague Eudemus of Rhodes, so that it is difficult to isolate their respective contributions.

      Theophrastus is reported to have added to the first figure of the syllogism the five moods that others later classified under a fourth figure. These moods were then called indirect moods of the first figure. In order to accommodate them, he had in effect to redefine the first figure as that in which the middle is the subject in one premise and the predicate in the other, not necessarily the subject in the major premise and the predicate in the minor, as Aristotle had it.

      Theophrastus' most significant departure from Aristotle's doctrine occurred in modal syllogistic. He abandoned Aristotle's notion of the possible as neither necessary nor impossible and adopted Aristotle's alternative notion of the possible as simply what is not impossible. This allowed him to effect a considerable simplification in Aristotle's modal theory. Thus, his conversion laws for modal categoricals were exact parallels to the corresponding laws for assertoric categoricals. In particular, for Theophrastus “problematic” universal negatives (“No β is possibly an α”) can be simply converted. Aristotle had denied this.

      In addition, Theophrastus adopted a rule that the conclusion of a valid modal syllogism can be no stronger than its weakest premise. (Necessity is stronger than possibility, and an assertoric claim without any modal qualification is intermediate between the two.) This rule simplifies modal syllogistic and eliminates several moods that Aristotle had accepted. Yet Theophrastus himself allowed certain modal moods that, combined with the principle of indirect proof (which he likewise accepted), yield results that perhaps violate this rule.

      Theophrastus also developed a theory of inferences involving premises of the form “α is universally predicated of everything of which γ is universally predicated” and of related forms. Such propositions he called prosleptic propositions, and inferences involving them were termed prosleptic syllogisms. Greek proslepsis can mean “something taken in addition,” and Theophrastus claimed that propositions like these implicitly contain a third, indefinite term, in addition to the two definite terms (“α” and “γ” in the example).

      The term prosleptic proposition appears to have originated with Theophrastus, although Aristotle discussed such propositions briefly in his Prior Analytics without exploring their logic in detail. The implicit third term in a prosleptic proposition Theophrastus called the middle. After an analogy with syllogistic for categorical propositions, he distinguished three “figures” for prosleptic propositions and syllogisms, based on the position of the implicit middle. The prosleptic proposition “α is universally predicated of everything that is universally predicated of γ” belongs to the first figure and can be a premise in a first-figure prosleptic syllogism. “Everything predicated universally of α is predicated universally of γ” belongs to the second figure and can be a premise in a second-figure syllogism, and so too “α is universally predicated of everything of which γ is universally predicated” for the third figure. Thus, for example, the following is a prosleptic syllogism in the third figure: “α is universally affirmed of everything of which γ is universally affirmed; γ is universally affirmed of β; therefore, α is universally affirmed of β.”

      Theophrastus observed that certain prosleptic propositions are equivalent to categoricals and differ from them only “potentially” or “verbally.” Some late ancient authors claimed that this made prosleptic syllogisms superfluous. But in fact not all prosleptic propositions are equivalent to categoricals.

      Theophrastus is also credited with investigations into hypothetical syllogisms. A hypothetical proposition, for Theophrastus, is a proposition made up of two or more component propositions (e.g., “p or q” or “if p then q”), and a hypothetical syllogism is an inference containing at least one hypothetical proposition as a premise. The extent of Theophrastus' work in this area is uncertain, but it appears that he investigated a class of inferences called totally hypothetical syllogisms, in which both premises and the conclusion are conditionals. This class would include, for example, syllogisms such as “If α then β; if β then γ; therefore, if α then γ” or “If α then β; if not α then γ; therefore, if not β then γ.” As with his prosleptic syllogisms, Theophrastus divided these totally hypothetical syllogisms into three “figures,” after an analogy with categorical syllogistic.
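
      Read truth-functionally (a modern gloss; how Theophrastus himself understood the conditional is not recorded), both schemata can be verified by exhausting the truth-value assignments:

```python
from itertools import product

def implies(p, q):               # truth-functional conditional (modern gloss)
    return (not p) or q

for a, b, c in product([True, False], repeat=3):
    # "If A then B; if B then C; therefore, if A then C."
    if implies(a, b) and implies(b, c):
        assert implies(a, c)
    # "If A then B; if not A then C; therefore, if not B then C."
    if implies(a, b) and implies(not a, c):
        assert implies(not b, c)
```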

      Theophrastus was the first person in the history of logic known to have examined the logic of propositions seriously. Still, there was no sustained investigation in this area until the period of the Stoics.

The Megarians and Stoics
      Throughout the ancient world, the logic of Aristotle and his followers was one main stream. But there was also a second tradition of logic, that of the Megarians and Stoics.

      The Megarians were followers of Euclid (or Euclides) of Megara (c. 430–c. 360 BC), a pupil of Socrates. In logic the most important Megarians were Diodorus Cronus (4th century BC) and his pupil Philo of Megara. The Stoics were followers of Zeno of Citium (c. 336–c. 265 BC). By far the most important Stoic logician was Chrysippus (c. 279–206 BC). The influence of Megarian on Stoic logic is indisputable, but many details are uncertain, since all but fragments of the writings of both groups are lost.

      The Megarians were interested in logical puzzles. Many paradoxes have been attributed to them, including the “liar paradox” (someone says that he is lying; is his statement true or false?), the discovery of which has sometimes been credited to Eubulides of Miletus, a pupil of Euclid of Megara. The Megarians also discussed how to define various modal notions and debated the interpretation of conditional propositions.

      Diodorus Cronus originated a mysterious argument called the Master Argument. It claimed that the following three propositions are jointly inconsistent, so that at least one of them is false:
● (1) Everything true about the past is now necessary. (That is, the past is now settled, and there is nothing to be done about it.)
● (2) The impossible does not follow from the possible.
● (3) There is something that is possible, and yet neither is nor will be true. (That is, there are possibilities that will never be realized.)

      It is unclear exactly what inconsistency Diodorus saw among these propositions. Whatever it was, Diodorus was unwilling to give up 1 or 2, and so rejected 3. That is, he accepted the opposite of 3, namely: Whatever is possible either is or will be true. In short, there are no possibilities that are not realized now or in the future. It has been suggested that the Master Argument was directed against Aristotle's discussion of the sea battle tomorrow in the De interpretatione.

      Diodorus also proposed an interpretation of conditional propositions. He held that the proposition “If p, then q” is true if and only if it neither is nor ever was possible for the antecedent p to be true and the consequent q to be false simultaneously. Given Diodorus' notion of possibility, this means that a true conditional is one that at no time (past, present, or future) has a true antecedent and a false consequent. Thus, for Diodorus a conditional does not change its truth value; if it is ever true, it is always true. But Philo of Megara had a different interpretation. For him, a conditional is true if and only if it does not now have a true antecedent and a false consequent. This is exactly the modern notion of material implication. In Philo's view, unlike Diodorus', conditionals may change their truth value over time.
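
      The contrast can be made vivid by modeling a proposition as a function from times to truth values, a modern device rather than anything in the Megarian sources. Philo's conditional looks only at the present moment; Diodorus' quantifies over all times, so it can never change its truth value:

```python
# Philonian vs. Diodorean conditionals, with propositions modeled
# as functions from times to truth values (a modern illustration).
times = range(10)

def philo(p, q, now):
    # True iff it is not NOW the case that p is true and q false.
    return not (p(now) and not q(now))

def diodorus(p, q):
    # True iff at NO time is p true while q is false.
    return all(not (p(t) and not q(t)) for t in times)

p = lambda t: True
q = lambda t: t != 3             # q fails only at time 3

assert not diodorus(p, q)        # a counterinstance exists, so never true
assert not philo(p, q, now=3)    # false at this moment ...
assert philo(p, q, now=4)        # ... but true a moment later
```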

      These and other theories of modality and conditionals were discussed not only by the Megarians but by the Stoics as well. Stoic logicians, like the Megarians, were not especially interested in scientific demonstration in Aristotle's special sense. They were more concerned with logical issues arising from debate and disputation: fallacies, paradoxes, forms of refutation. Aristotle had also written about such things, but his interests gradually shifted to his special notion of science. The Stoics kept their interest focused on disputation and developed their studies in this area to a high degree.

      Unlike the Aristotelians, the Stoics developed propositional logic to the neglect of term logic. They did not produce a system of logical laws arising from the internal structure of simple propositions, as Aristotle had done with his account of opposition, conversion, and syllogistic for categorical propositions. Instead, they concentrated on inferences from hypothetical propositions as premises. Theophrastus had already taken some steps in this area, but his work had little influence on the Stoics.

      Stoic logicians studied the logical properties and defining features of words used to combine simpler propositions into more complex ones. In addition to the conditional, which had already been explored by the Megarians, they investigated disjunction (“or”) and conjunction (“and”), along with words like “since” and “because.” Some of these they defined truth-functionally (i.e., solely in terms of the truth or falsehood of the propositions they combined). For example, they defined a disjunction as true if and only if exactly one disjunct is true (the modern “exclusive” disjunction). They also knew “inclusive” disjunction (defined as true when at least one disjunct is true), but this was not widely used. More important, the Stoics seem to have been the first to show how some of these truth-functional words may be defined in terms of others.

      Unlike Aristotle, who typically formulated his syllogisms as conditional propositions, the Stoics regularly presented principles of logical inference in the form of schematic arguments. While Aristotle had used Greek letters as variables replacing terms, the Stoics used ordinal numerals as variables replacing whole propositions. Thus: “Either the first or the second; but not the second; therefore, the first.” Here the expressions “the first” and “the second” are variables or placeholders for propositions, not terms.

      Chrysippus regarded five valid inference schemata as basic or indemonstrable. They are:
● If the first, then the second; but the first; therefore, the second.
● If the first, then the second; but not the second; therefore not the first.
● Not both the first and the second; but the first; therefore, not the second.
● Either the first or the second; but the first; therefore, not the second.
● Either the first or the second; but not the second; therefore, the first.

      Using these five “indemonstrables,” Chrysippus proved the validity of many further inference schemata. Indeed, the Stoics claimed (falsely, it seems) that all valid inference schemata could be derived from the five indemonstrables.
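
      Read truth-functionally, with the Stoics' disjunction taken as exclusive in accordance with their own definition above (schema 4 fails on the inclusive reading), the five indemonstrables can be verified by brute force:

```python
from itertools import product

def xor(p, q):                    # Stoic (exclusive) disjunction
    return p != q

for p, q in product([True, False], repeat=2):
    if ((not p) or q) and p:      # 1. if the first then the second; the first
        assert q
    if ((not p) or q) and not q:  # 2. if the first then the second; not the second
        assert not p
    if not (p and q) and p:       # 3. not both the first and the second; the first
        assert not q
    if xor(p, q) and p:           # 4. either the first or the second; the first
        assert not q
    if xor(p, q) and not q:       # 5. either the first or the second; not the second
        assert p
```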

      The differences between Aristotelian and Stoic logic were ones of emphasis, not substantive theoretical disagreements. At the time, however, it appeared otherwise. Perhaps because of their real disputes in other areas, Aristotelians and Stoics at first saw themselves as holding incompatible theories in logic as well. But by the late 1st century BC, an eclectic movement had begun to weaken these hostilities. Thereafter the two traditions were combined in commentaries and handbooks for general education.

Late representatives of ancient Greek logic
      After Chrysippus, little important logical work was done in Greek. But the commentaries and handbooks that were written did serve to consolidate the previous traditions and in some cases are the only extant sources for the doctrines of earlier writers. Among late authors, the physician Galen of Pergamum (AD 129–c. 199) wrote several commentaries, now lost, and an extant Introduction to Dialectic. Galen observed that the study of mathematics and logic was important to a medical education, a view that had considerable influence in the later history of logic, particularly in the Arab world. Tradition has credited Galen with “discovering” the fourth figure of the Aristotelian syllogism, although in fact he explicitly rejected it.

       Alexander of Aphrodisias (fl. c. AD 200) wrote extremely important commentaries on Aristotle's writings, including the logical works. Other important commentators include Porphyry of Tyre (c. 232–before 306), Ammonius Hermeiou (5th century), Simplicius of Cilicia (6th century), and John Philoponus (6th century). Sextus Empiricus (late 2nd–early 3rd centuries) and Diogenes Laërtius (probably early 3rd century) are also important sources for earlier writers. Significant contributions to logic were not made again in Europe until the 12th century.

Medieval logic

Transmission of Greek logic to the Latin West
      As the Greco-Roman world disintegrated and gave way to the Middle Ages, knowledge of Greek declined in the West. Nevertheless, several authors served as transmitters of Greek learning to the Latin world. Among the earliest of them, Cicero (106–43 BC) introduced Latin translations for technical Greek terms. Although his translations were not always finally adopted by later authors, he did make it possible to discuss logic in a language that had not previously had any precise vocabulary for it. In addition, he preserved much information about the Stoics. In the 2nd century AD Lucius Apuleius passed on some knowledge of Greek logic in his De philosophia rationali (“On Rational Philosophy”).

      In the 4th century Marius Victorinus produced Latin translations of Aristotle's Categories and De interpretatione and of Porphyry of Tyre's Isagoge (“Introduction,” on Aristotle's Categories), although these translations were not very influential. He also wrote logical treatises of his own. A short De dialectica (“On Dialectic”), doubtfully attributed to St. Augustine (354–430), shows evidence of Stoic influence, although it had little influence of its own. The pseudo-Augustinian Decem categoriae (“Ten Categories”) is a late 4th-century Latin paraphrase of a Greek compendium of the Categories. In the late 5th century Martianus Capella's allegorical De nuptiis Philologiae et Mercurii (The Marriage of Philology and Mercury) contains “On the Art of Dialectic” as book IV.

      The first truly important figure in medieval logic was Boethius (480–524/525). Like Victorinus, he translated Aristotle's Categories and De interpretatione and Porphyry's Isagoge, but his translations were much more influential. He also seems to have translated the rest of Aristotle's Organon, except for the Posterior Analytics, but the history of those translations and their circulation in Europe is much more complicated; they did not come into widespread use until the first half of the 12th century. In addition, Boethius wrote commentaries and other logical works that were of tremendous importance throughout the Latin Middle Ages. Until the 12th century his writings and translations were the main sources for medieval Europe's knowledge of logic. In the 12th century they were known collectively as the Logica vetus (“Old Logic”).

Arabic logic
      Between the time of the Stoics and the revival of logic in 12th-century Europe, the most important logical work was done in the Arab world. Arabic interest in logic lasted from the 9th to the 16th century, although the most important writings were done well before 1300.

      Syrian Christian authors in the late 8th century were among the first to introduce Alexandrian scholarship to the Arab world. Through Galen's influence, these authors regarded logic as important to the study of medicine. (This link with medicine continued throughout the history of Arabic logic and, to some extent, later in medieval Europe.) By about 850, at least Porphyry's Isagoge and Aristotle's Categories, De interpretatione, and Prior Analytics had been translated via Syriac into Arabic. Between 830 and 870 the philosopher and scientist al-Kindī (c. 805–873) produced in Baghdad what seem to have been the first Arabic writings on logic that were not translations. But these writings, now lost, were probably mere summaries of others' work.

      By the late 9th century, the school of Baghdad was the focus of logic studies in the Arab world. Most of the members of this school were Nestorian or Jacobite Christians, but the Muslim al-Fārābī (c. 873–950) wrote important commentaries and other logical works there that influenced all later Arabic logicians. Many of these writings are now lost, but among the topics al-Fārābī discussed were future contingents (in the context of Aristotle's De interpretatione, chapter 9), the number and relation of the categories, the relation between logic and grammar, and non-Aristotelian forms of inference. This last topic showed the influence of the Stoics. Al-Fārābī, along with Avicenna and Averroës, was among the best logicians the Arab world produced.

      By 1050 the school of Baghdad had declined. The 11th century saw very few Arabic logicians, with one distinguished exception: the Persian Ibn Sīnā, or Avicenna (980–1037), perhaps the most original and important of all Arabic logicians. Avicenna abandoned the practice of writing on logic in commentaries on the works of Aristotle and instead produced independent treatises. He sharply criticized the school of Baghdad for what he regarded as their slavish devotion to Aristotle. Among the topics Avicenna investigated were quantification of the predicates of categorical propositions, the theory of definition and classification, and an original theory of “temporally modalized” syllogistic, in which premises include such modifiers as “at all times,” “at most times,” and “at some time.”

      The Persian mystic and theologian al-Ghazālī, or Algazel (1058–1111), followed Avicenna's logic, although he differed sharply from Avicenna in other areas. Al-Ghazālī was not a significant logician but is important nonetheless because of his influential defense of the use of logic in theology.

      In the 12th century the most important Arab logician was Ibn Rushd, or Averroës (1126–98). Unlike the Persian followers of Avicenna, Averroës worked in Moorish Spain, where he revived the tradition of al-Fārābī and the school of Baghdad by writing penetrating commentaries on Aristotle's works, including the logical ones. Such was the stature of these excellent commentaries that, when they were translated into Latin in the 1220s or 1230s, Averroës was often referred to simply as “the Commentator.”

      After Averroës, logic declined in western Islām because of the antagonism felt to exist between logic and philosophy on the one hand and Muslim orthodoxy on the other. But in eastern Islām, owing in part to the work of al-Ghazālī, logic was not regarded as being so closely linked with philosophy. Instead, it was viewed as a tool that could be profitably used in any field of study, even (as al-Ghazālī had done) on behalf of theology against the philosophers. Thus the logical tradition continued in Persia long after it died out in Spain. The 13th century produced a large number of logical writings, but these were mostly unoriginal textbooks and handbooks. After about 1300, logical study was reduced to producing commentaries on these earlier, already derivative handbooks.

The revival of logic in Europe
St. Anselm and Peter Abelard
      Except in the Arabic world, there was little activity in logic between the time of Boethius and the 12th century. Certainly Byzantium produced nothing of note. In Latin Europe there were a few authors, including Alcuin of York (c. 730–804) and Garland the Computist (fl. c. 1040). But it was not until late in the 11th century that serious interest in logic revived. St. Anselm of Canterbury (1033–1109) discussed semantical questions in his De grammatico, and investigated the notions of possibility and necessity in surviving fragments, but these texts did not have much influence. More important was Anselm's general method of using logical techniques in theology. His example set the tone for much that was to follow.

      The first important Latin logician after Boethius was Peter Abelard (1079–1142). He wrote three sets of commentaries and glosses on Porphyry's Isagoge and Aristotle's Categories and De interpretatione; these were the Introductiones parvulorum (also containing glosses on some writings of Boethius), Logica “Ingredientibus,” and Logica “Nostrorum petitioni sociorum” (on the Isagoge only), together with the independent treatise Dialectica (extant in part). These works show a familiarity with Boethius but go far beyond him. Among the topics discussed insightfully by Abelard are the role of the copula in categorical propositions, the effects of different positions of the negation sign in categorical propositions, modal notions like “possibility,” future contingents (as treated, for example, in chapter 9 of Aristotle's De interpretatione), and conditional propositions or “consequences.”

      Abelard's fertile investigations raised logical study in medieval Europe to a new level. His achievement is all the more remarkable since the sources at his disposal were the same ones that had been available in Europe for the preceding 600 years: Aristotle's Categories and De interpretatione and Porphyry's Isagoge, together with the commentaries and independent treatises by Boethius.

The “properties of terms” and discussions of fallacies
      Even in Abelard's lifetime, however, things were changing. After about 1120, Boethius' translations of Aristotle's Prior Analytics, Topics, and Sophistic Refutations began to circulate. Sometime in the second quarter of the 12th century, James of Venice translated the Posterior Analytics from Greek, thus making the whole of the Organon available in Latin. These newly available Aristotelian works were known collectively as the Logica nova (“New Logic”). In a flurry of activity, others in the 12th and 13th centuries produced additional translations of these works and of Greek and Arabic commentaries on them, along with many other philosophical writings and other works from Greek and Arabic sources.

      The Sophistic Refutations proved an important catalyst in the development of medieval logic. It is a little catalog of fallacies, how to avoid them, and how to trap others into committing them. The work is very sketchy. Many kinds of fallacies are not discussed, and those that are could have been treated differently. Unlike the Posterior Analytics, the Sophistic Refutations was relatively easy to understand. And unlike the Prior Analytics—where, except for modal syllogistic, Aristotle had left little to be done—there was obviously still much to be investigated about fallacies. Moreover, the discovery of fallacies was especially important in theology, particularly in the doctrines of the Trinity and the Incarnation. In short, the Sophistic Refutations was tailor-made to exercise the logical ingenuity of the 12th century. And that is exactly what happened.

      The Sophistic Refutations, and the study of fallacy it generated, produced an entirely new logical literature. A genre of sophismata (“sophistical”) treatises developed that investigated fallacies in theology, physics, and logic. The theory of “supposition” (see below The theory of supposition) also developed out of the study of fallacies. Whole new kinds of treatises were written on what were called “the properties of terms,” semantic properties important in the study of fallacy. In addition, a new genre of logical writings developed on the topic of “syncategoremata”—expressions such as “only,” “inasmuch as,” “besides,” “except,” “lest,” and so on, which posed quite different logical problems than did the terms and logical particles in traditional categorical propositions or in the simpler kind of “hypothetical” propositions inherited from the Stoics. The study of valid inference generated a literature on “consequences” that went into far more detail than any previous studies. By the late 12th or early 13th century, special treatises were devoted to insolubilia (semantic paradoxes such as the liar paradox, “This sentence is false”) and to a kind of disputation called “obligationes,” the exact purpose of which is still in question.

      All these treatises, and the logic contained in them, constitute the peculiarly medieval contribution to logic. It is primarily on these topics that medieval logicians exercised their best ingenuity. Such treatises, and their logic, were called the Logica moderna (“Modern Logic”), or “terminist” logic, because they laid so much emphasis on the “properties of terms.” These developments began in the mid-12th century and continued to the end of the Middle Ages.

Developments in the 13th and early 14th centuries
      In the 13th century the sophismata literature continued and deepened. In addition several authors produced summary works that surveyed the whole field of logic, including the “Old” and “New” logic as well as the new developments in the Logica moderna. These compendia are often called “summulae” (“little summaries”), and their authors “summulists.” Among the most important of the summulists are: (1) Peter of Spain (also known as Petrus Hispanus; later Pope John XXI), who wrote a Tractatus more commonly known as Summulae logicales (“Little Summaries of Logic”) probably in the early 1230s; it was used as a textbook in some late medieval universities; (2) Lambert of Auxerre, who wrote a Logica sometime between 1253 and 1257; and (3) William of Sherwood, who produced Introductiones in logicam (Introduction to Logic) and other logical works sometime about the mid-century.

      Despite his significance in other fields, Thomas Aquinas (Aquinas, Thomas, Saint) is of little importance in the history of logic. He did write a treatise on modal propositions and another one on fallacies. But there is nothing especially original in these works; they are early writings and are confined to passing on received doctrine. He also wrote an incomplete commentary on the De interpretatione, but it is of no great logical significance.

      About the end of the 13th century John Duns Scotus (Duns Scotus, John) (c. 1266–1308) composed several works on logic. There also are some very interesting logical texts from the same period that have been falsely attributed to Scotus and were published in the 17th century among his authentic works. These are now referred to as the works of “the Pseudo-Scotus,” although they may not all be by the same author.

      The first half of the 14th century saw the high point of medieval logic. Much of the best work was done by people associated with the University of Oxford. Among them was William of Ockham (c. 1285–1347), the author of an important Summa logicae (“Summary of Logic”) and other logical writings. Perhaps because of his importance in other areas of medieval thought, Ockham's originality in logic has sometimes been exaggerated. But there is no doubt that he was one of the most important logicians of the century. Another Oxford logician was Walter Burley (or Burleigh), an older contemporary of Ockham. Burley was a bitter opponent of Ockham in metaphysics. He wrote a work De puritate artis logicae (“On the Purity of the Art of Logic”; in two versions), apparently in response and opposition to Ockham's views, although on some points Ockham simply copied Burley almost verbatim.

      Slightly later, on the Continent, Jean Buridan was a very important logician at the University of Paris. He wrote mainly during the 1330s and '40s. In many areas of logic and philosophy his views were close to Ockham's, although the extent of Ockham's influence on Buridan is not clear. Buridan's Summulae de dialectica (“Little Summaries of Dialectic”), intended for instructional use at Paris, was largely an adaptation of Peter of Spain's Summulae logicales. He appears to have been the first to use Peter of Spain's text in this way. Originally meant as the last treatise of his Summulae de dialectica, Buridan's extremely interesting Sophismata (published separately in early editions) discusses many issues in semantics and philosophy of logic. Among Buridan's pupils was Albert of Saxony (d. 1390), the author of a Perutilis logica (“A Very Useful Logic”) and later first rector of the University of Vienna. Albert was not an especially original logician, although his influence was by no means negligible.

The theory of supposition
      Many of the characteristically medieval logical doctrines in the Logica moderna centred around the notion of “supposition” (suppositio). Already by the late 12th century the theory of supposition had begun to form. In the 13th century, special treatises on the topic multiplied. The summulists all discussed it at length. Then, after about 1270, relatively little was heard about it. In France, supposition theory was replaced by a theory of “speculative grammar” or “modism” (so called because it appealed to “modes of signifying”). Modism was not so popular in England, but there too the theory of supposition was largely neglected in the late 13th century. In the early 14th century the theory reemerged both in England and on the Continent. Burley wrote a treatise on the topic in about 1302, and Buridan revived the theory in France in the 1320s. Thereafter the theory remained the main vehicle for semantic analysis until the end of the Middle Ages.

      Supposition theory, at least in its 14th-century form, is best viewed as two theories under one name. The first, sometimes called the theory of “supposition proper,” is a theory of reference and answers the question “To what does a given occurrence of a term refer in a given proposition?” In general (the details depend on the author) three main types of supposition were distinguished: (1) personal supposition (which, despite the name, need not have anything to do with persons), (2) simple supposition, and (3) material supposition. These types are illustrated, respectively, by the occurrences of the term horse in the statements “Every horse is an animal” (in which the term horse refers to individual horses), “Horse is a species” (in which the term refers to a universal), and “Horse is a monosyllable” (in which it refers to the spoken or written word). The theory was elaborated and refined by considering how reference may be broadened by tense and modal factors (for example, the term horse in “Every horse will die,” which may refer to future as well as present horses) or narrowed by adjectives or other factors (for example, horse in “Every horse in the race is less than two years old”).

      The second part of supposition theory applies only to terms in personal supposition. It divides personal supposition into several types, including (again the details vary according to the author): (1) determinate (e.g., horse in “Some horse is running”), (2) confused and distributive (e.g., horse in “Every horse is an animal”), and (3) merely confused (e.g., animal in “Every horse is an animal”). These types were described in terms of a notion of “descent to (or ascent from) singulars.” For example, in the statement “Every horse is an animal,” one can “descend” under the term “horse” to: “This horse is an animal, and that horse is an animal, and so on,” but one cannot validly “ascend” from “This horse is an animal” to the original proposition. There are many refinements and complications.
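
      A small model makes the asymmetry concrete. In the sketch below (a modern illustration of the doctrine; the horses and the model are arbitrary), the universal proposition implies each singular conjunct under the distributed term, while a single true singular fails to yield the universal:

```python
# "Descent to singulars" under the distributed term "horse" in
# "Every horse is an animal".
horses = ["Bucephalus", "Rocinante"]
animals = {"Bucephalus", "Rocinante", "Socrates"}

universal = all(h in animals for h in horses)    # every horse is an animal
singulars = [h in animals for h in horses]       # this horse is..., that horse is...

# Descent: the universal implies the conjunction of the singulars.
assert not universal or all(singulars)

# Ascent fails: in a model where only one horse is an animal, the
# singular "This horse is an animal" is true but the universal is not.
animals2 = {"Bucephalus"}
assert ("Bucephalus" in animals2) and not all(h in animals2 for h in horses)
```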

      The purpose of this second part of the theory of supposition has been disputed. Since the question of what it is to which a given occurrence of a term refers is already answered in the first part of supposition theory, the purpose of this second part must have been different. The main suggestions are (1) that it was devised to help detect and diagnose fallacies, (2) that it was intended as a theory of truth conditions for propositions or as a theory of analyzing the senses of propositions, and (3) that, like the first half of supposition theory, it originated as part of an account of reference, but, once its theoretical insufficiency for that task was recognized, it was gradually divorced from that first part of supposition theory and by the early 14th century was left as a conservative vestige that continued to be disputed but no longer had any question of its own to answer. There are difficulties with all of these suggestions. The theory of supposition survived beyond the Middle Ages and was frequently applied not only in logical discussions but also in theology and in the natural sciences.

      In addition to supposition and its satellite theories, several logicians during the 14th century developed a sophisticated theory of “connotation” (connotatio or appellatio; in which the term black, for instance, not only refers to black things but also “connotes” the quality, blackness, that they possess) and a subtle theory of “mental language,” in which tools of semantic analysis were applied to epistemology and the philosophy of mind. Important treatises on insolubilia and obligationes, as well as on the theory of consequence or inference, continued to be produced in the 14th century, although the main developments there were completed by mid-century.

Developments in modal logic
      Medieval logicians continued the tradition of modal syllogistic inherited from Aristotle. In addition, modal factors were incorporated into the theory of supposition. But the most important developments in modal logic occurred in three other contexts: (1) whether propositions about future contingent events are now true or false (Aristotle had raised this question in De interpretatione, chapter 9), (2) whether a future contingent event can be known in advance, and (3) whether God (who, the tradition says, cannot be acted upon causally) can know future contingent events. All these issues link logical modality with time. Thus, Peter Aureoli (c. 1280–1322) held that if something is in fact ϕ (“ϕ” is some predicate) but can be not-ϕ, then it is capable of changing from being ϕ to being not-ϕ.

      Duns Scotus in the late 13th century was the first to sever the link between time and modality. He proposed a notion of possibility that was not linked with time but based purely on the notion of semantic consistency. This radically new conception had a tremendous influence on later generations down to the 20th century. Shortly afterward, Ockham developed an influential theory of modality and time that reconciles the claim that every proposition is either true or false with the claim that certain propositions about the future are genuinely contingent.

Late medieval logic
      Most of the main developments in medieval logic were in place by the mid-14th century. On the Continent, the disciples of Jean Buridan—Albert of Saxony (Albert Of Saxony) (c. 1316–90), Marsilius of Inghen (d. 1399), and others—continued and developed the work of their predecessors. In 1372 Pierre d'Ailly (Ailly, Pierre d') wrote an important work, Conceptus et insolubilia (Concepts and Insolubles), which appealed to a sophisticated theory of mental language in order to solve semantic paradoxes such as the liar paradox.

      In England the second half of the 14th century produced several logicians who consolidated and elaborated earlier developments. Their work was not very original, although it was often extremely subtle. Many authors during this period compiled brief summaries of logical topics intended as textbooks. The doctrine in these little summaries is remarkably uniform, making it difficult to determine who their authors were. By the early 15th century, informal collections of these treatises had been gathered under the title Libelli sophistarum (“Little Books for Arguers”)—one collection for Oxford and a second for Cambridge; both were printed in early editions. Among the notable logicians of this period are Henry Hopton (fl. 1357), John Wycliffe (c. 1330–84), Richard Lavenham (d. after 1399), Ralph Strode (fl. c. 1360), Richard Ferrybridge (or Feribrigge; fl. c. 1360s), and John Venator (also known as John Huntman or Hunter; fl. 1373).

      Beginning in 1390, the Italian Paul Of Venice studied for at least three years at Oxford and then returned to teach at Padua and elsewhere in Italy. Although English logic was studied in Italy even before Paul's return, his own writings advanced this study greatly. Among Paul's logical works were the very popular Logica parva (“Little Logic”), printed in several early editions, and possibly the huge Logica magna (“Big Logic”) that has sometimes been regarded as a kind of encyclopaedia of the whole of medieval logic.

      After about 1400, serious logical study was dead in England. However, it continued to be pursued on the Continent until the end of the Middle Ages and afterward.

Paul Vincent Spade

Modern logic
      It is customary to speak of logic since the Renaissance as “modern logic.” This is not to suggest that there was a smooth development of a unified conception of reasoning, or that the logic of this period is “modern” in the usual sense. Logic in the modern era has exhibited an extreme diversity, and its chaotic development has reflected all too clearly the surrounding political and intellectual turmoil. These upheavals include the Renaissance itself, the diminishing role of the Roman Catholic church and of Latin, the Reformation and subsequent religious wars, the scientific revolution and the growth of modern mathematics, the rise and fall of empires and nation-states, and the waxing influence of the New World and the former Soviet Union.

The 16th century
      Renaissance writers sometimes denounced all of scholastic logic. The humanism of the Renaissance is often seen as promoting the study of Greek and Roman classics, but Aristotle's logic was frequently regarded as being so hopelessly bound together with “sterile” medieval logic as to constitute an exception to this spirit of rebirth. Some, such as Martin Luther (1483–1546), were repelled by any hint of Aristotelianism. Others, such as the great humanist essayist Desiderius Erasmus (Erasmus, Desiderius) (1466–1536), occasionally praised Aristotle but never his logical theory; like many writers in the Renaissance, Erasmus found in the theory of the syllogism only “subtlety and arid ingenuity” (Johan Huizinga, Erasmus [1924]). The German Lutheran humanist Philipp Melanchthon (Melanchthon, Philipp) (1497–1560) had a more balanced appreciation of Aristotle's logic. Melanchthon's Compendaria dialectices ratio (“Brief Outline of Dialectics”) of 1520, built upon his Institutiones Rhetoricae of the previous year, became a popular Lutheran text. There he described his purpose as presenting “a true, pure and uncomplicated logic, just as we have received it from Aristotle and some of his judicious commentators.” Elsewhere, influential writers such as Rabelais, Petrarch, and Montaigne had few kind words for logic as they knew it.

      The French reformer and pamphleteer Petrus Ramus (Ramus, Petrus) (Pierre de la Ramée) was also the author of extremely influential “Reform” logical texts. His Dialectique (Dialectics) of 1555 (translated into English in 1574) was the first major logical work in a modern language. In this work and in his Dialecticae libri duo (“Two Books of Dialectics”) of 1556 he combined attacks on scholastic logic, an emphasis on the use of logic in actual arguments (“dialectics”), and a presentation of a much simplified approach to categorical syllogism (without an attempt to follow Aristotle). Elsewhere, he proposed that reasoning should be taught by using Euclid's Elements rather than by the study of the syllogism. He devoted special attention to valid syllogisms with singular premises, such as “Octavius is the heir of Caesar. I am Octavius. Therefore, I am the heir of Caesar.” Singular terms (such as proper names) had been treated by earlier logicians: Pseudo-Scotus, among others, had proposed assimilating them to universal propositions by understanding “Julius Caesar is mortal” as “All Julius Caesars are mortal.” Although Ramus' proposals for singular terms were not widely accepted, his concern for explicitly addressing them and his refusal to use artificial techniques to convert them to standard forms prefigured more recent interests. Although it had its precursors in medieval semantic thought, Ramus' division of thought into a hierarchy composed of concepts, judgments, arguments, and method was influential in the 17th and 18th centuries.

 Scholastic logic remained alive, especially in predominantly Roman Catholic universities and countries, such as Italy and Spain. Some of this work had considerable value, even though it was outside of the mainstream logical tradition, from which it diverged in the 16th century. If the Reform tradition of Melanchthon and Ramus represents one major tradition in modern logic, and the neo-scholastic tradition another, then (here following the historian of logic Nikolai Ivanovich Styazhkin) a third tradition is found in the followers of the Spanish (Majorcan) soldier, priest, missionary, and mystic Ramón Lull (Llull, Ramon) (1235–1315). His Ars magna, generalis et ultima (1501; “Great, General and Ultimate Art”) represents an attempt to symbolize (symbol) concepts and derive propositions from various combinations of possibilities. These notions, associated with lore of the Kabbala, later influenced Pascal and Leibniz and the rise of probability theory. Lull's influence can be seen more directly in the work of his fellow Spaniard Juan Luis Vives (Vives, Juan Luis) (1492–1540), who used a V-shaped symbol to indicate the inclusion of one term in another. Other work inspired by Lull includes the logic and notational system of the German logician Johann Heinrich Alsted (1588–1638). The work of Vives and Alsted represents perhaps the first systematic effort at a logical symbolism.

      With the 17th century came increasing interest in symbolizing logic. These symbolizations sometimes took graphic or pictorial forms but more often used letters in the manner of algebra to stand for propositions, concepts, classes, properties, and relations, as well as special symbols for logical notions. Inspired by the triumphs achieved in mathematics after it had turned to the systematic use of special symbols, logicians hoped to imitate this success. The systematic application of symbols and abbreviations and the conscious hope that through this application great progress could be made have been a distinguishing characteristic of modern logic into the 20th century.

      The modern era saw major changes not only in the external appearance of logical writings but also in the purposes of logic. Logic for Aristotle was a theory of ideal human reasoning (reason) and inference that also had clear pedagogical value. Early modern logicians stressed what they called “dialectics” (or “rhetoric”), because “logic” had come to mean an elaborate scholastic theory of reasoning that was not always directed toward improving reasoning. A related goal was to extend the scope of human reasoning beyond textbook syllogistic theory and to acknowledge that there were important kinds of valid inference that could not be formulated in traditional Aristotelian syllogistic. But another part of the rejection of Aristotelian logic (broadly conceived to include scholastic logic) is best explained by the changing and quite new goals that logic took on in the modern era. One such goal was the development of an ideal (ideal language) logical language that naturally expressed ideal thought and was more precise than natural languages. Another goal was to develop methods of thinking and discovery that would accelerate or improve human thought or would allow its replacement by mechanical devices. Whereas Aristotelian logic had seen itself as a tool for training “natural” abilities at reasoning, later logics proposed vastly improving meagre and wavering human tendencies and abilities. The linking of logic with mathematics was an especially characteristic theme in the modern era. Finally, in the modern era came an intense consciousness of the importance of logical form (forms of sentences, as well as forms or patterns of arguments). Although the medievals made many distinctions among patterns of sentences and arguments, the modern logical notion of “form” perhaps first crystallized in the work of Sir William Hamilton and the English mathematician and logician Augustus De Morgan (De Morgan, Augustus) (De Morgan's Formal Logic of 1847). The now standard discussions of validity, invalidity, and the self-conscious separation of “formal” from nonformal aspects of sentences and arguments all trace their roots to this work.

The 17th century
      The Logica Hamburgensis (1638) of Joachim Jung (also called Jungius or Junge) was one replacement for the “Protestant” logic of Melanchthon. Its chief virtue was the care with which late medieval theories and techniques were gathered and presented. Jung devoted considerable attention to valid arguments that do not fit into simpler, standard conceptions of the syllogism and immediate inference. Of special interest is his treatment of quantified relational arguments, then called “oblique” syllogisms because of the oblique (non-nominative) case that is used to express them in Latin. An example is: “The square of an even number is even; 6 is even; therefore, the square of 6 is even.” The technique of dealing with such inferences involved rewriting a premise so that the term in the oblique case (for example, “of an even number”) would occur in the subject position and thus be amenable to standard syllogistic manipulation. Such arguments had in fact been noticed by Aristotle and were also treated in late medieval logic.

      An especially widely used text of the 17th century is usually termed simply the Port-Royal Logic after the seat of the Jansenist movement outside Paris. It was written by Antoine Arnauld (Arnauld, Antoine) and Pierre Nicole (Nicole, Pierre), possibly with others, and was published in French in 1662 with the title La Logique ou l'art de penser (“Logic or the Art of Thinking”). It was promptly translated into Latin and English and underwent many reprintings in the late 17th and 18th centuries. In its organization, it followed Ramus' outline of concept, judgment, argument, and method; it also briefly mentioned oblique syllogisms. The Port-Royal Logic followed the general Reform program of simplifying syllogistic theory, reducing the number of syllogistic figures from four, and minimizing distinctions thought to be useless. In addition, the work contained an important contribution to semantics in the form of the distinction between comprehension (intension and extension) and extension. Although medieval semantic theory had used similar notions, the Port-Royal notions found their way into numerous 18th- and 19th-century discussions of the meanings and reference of terms; they appeared, for example, in John Stuart Mill's influential text A System of Logic (1843). The “comprehension” of a term consisted of all the essential attributes in it (those that cannot be removed without “destroying” the concept), and the extension consisted of all those objects to which the concept applies. Thus the comprehension of the term “triangle” might include the attributes of being a polygon, three-sided, three-angled, and so on. Its extension would include all kinds of triangles. The Port-Royal Logic also contained an influential discussion of definitions that was inspired by the work of the French mathematician and philosopher Blaise Pascal (Pascal, Blaise). According to this discussion, some terms could not be defined (“primitive” terms), and definitions were divided between nominal and real ones. Real definitions were descriptive and stated the essential properties in a concept, while nominal definitions were creative and stipulated the conventions by which a linguistic term was to be used.

      Discussions of “nominal” and “real” definitions go back at least to the nominalist/realist debates of the 14th century; Pascal's application of the distinction is interesting for the emphasis that it laid on mathematical definitions being nominal and on the usefulness of nominal definitions. Although the Port-Royal logic itself contained no symbolism, the philosophical foundation for introducing symbols by means of nominal definitions was nevertheless laid.

      One intriguing 17th-century treatment of logic in terms of demonstrations, postulates, and definitions in a Euclidean fashion occurs in the otherwise quite traditional Logica Demonstrativa (1697; “Demonstrative Logic”) of the Italian Jesuit Gerolamo Saccheri. Saccheri is better known for his suggestion of the possibility of a non-Euclidean geometry in Euclides ab Omni Naevo Vindicatus (1733; “Euclid Cleared of Every Flaw”). Another incisive traditional logic was that of the Dutch philosopher Arnold Geulincx (Geulincx, Arnold), Logica fundamentis suis restituta (1662; “Logic Restored to its Fundamentals”). This work attempted to resurrect the rich detail of scholastic (Scholasticism) logic, including the theory of suppositio and issues of existential import.

 With the logical work of the German mathematician, philosopher, and diplomat Gottfried Wilhelm Leibniz, we encounter one of the great triumphs, and tragedies, in the history of logic. He created in the 1680s a symbolic logic (formal logic) that is remarkably similar to George Boole's (Boole, George) system of 1847—and Boole is widely regarded as the initiator of mathematical or symbolic logic. But nothing other than vague generalities about Leibniz' goals for logic was published until 1903—well after symbolic logic was in full blossom. Thus one could say that, great though Leibniz' discoveries were, they were virtually without influence in the history of logic. (There remains some slight possibility that Lambert or Boole may have been directly or indirectly influenced by Leibniz' logical system.)

      Leibniz' logical research was not entirely symbolic, however, nor was he without influence in the history of (nonsymbolic) logic. Early in his life, Leibniz was strongly interested in the program of Lull (Llull, Ramon), and he wrote the De arte combinatoria (1666); this work followed the general Lullian goal of discovering truths by combining concepts into judgments in exhaustive ways and then methodically assessing their truth. Leibniz later developed a goal of devising what he called a “universally characteristic language” (lingua characteristica universalis) that would, first, notationally represent concepts by displaying the more basic concepts of which they were composed, and second, naturally represent (in the manner of graphs or pictures, “iconically”) the concept in a way that could be easily grasped by readers, no matter what their native tongue. Leibniz studied and was impressed by the method of the Egyptians and Chinese in using picturelike expressions for concepts. The goal of a universal language had already been suggested by Descartes (Descartes, René) for mathematics as a “universal mathematics”; it had also been discussed extensively by the Scottish philologist George Dalgarno (c. 1626–87) and, for mathematical language and communication, by the French algebraist François Viète (Viète, François, Seigneur De La Bigotiere) (1540–1603). The search for a universal language to replace Latin was seriously taken up again in the late 19th century, first by Giuseppe Peano (Peano, Giuseppe)—whose work on Interlingua, an uninflected form of Latin, was directly inspired by Leibniz' conception—and then with Esperanto. The goal of a logical language also inspired Gottlob Frege, and in the 20th century it prompted the development of the logical language LOGLAN and the computer language PROLOG.

      Another and distinct goal Leibniz proposed for logic was a “calculus of reason” (calculus ratiocinator). This would naturally first require a symbolism but would then involve explicit manipulations of the symbols according to established rules by which either new truths could be discovered or proposed conclusions could be checked to see if they could indeed be derived from the premises. Reasoning (reason) could then take place in the way large sums are done—that is, mechanically or algorithmically—and thus not be subject to individual mistakes and failures of ingenuity. Such derivations could be checked by others or performed by machines (machine), a possibility that Leibniz seriously contemplated. Leibniz' suggestion that machines could be constructed to draw valid inferences or to check the deductions of others was followed up by Charles Babbage (Babbage, Charles), William Stanley Jevons (Jevons, William Stanley), and Charles Sanders Peirce (Peirce, Charles Sanders) and his student Allan Marquand in the 19th century, and with wide success on modern computers (computer) after World War II.

      The symbolic calculus that Leibniz devised seems to have been more of a calculus of reason than a “characteristic” language. It was motivated by his view that most concepts (concept) were “composite”: they were collections or conjunctions of other more basic concepts. Symbols (letters, lines, or circles) were then used to stand for concepts and their relationships. This resulted in what is called an “intensional” rather than an “extensional” logic—one whose terms stand for properties or concepts rather than for the things having these properties. Leibniz' basic notion of the truth of a judgment was that the concepts making up the predicate were “included in” the concept of the subject. What Leibniz symbolized as “A ∞ B,” or what we might write as “A = B,” was that all the concepts making up concept A also are contained in concept B, and vice versa.

      Leibniz used two further notions to expand the basic logical calculus. In his notation, “A ⊕ B ∞ C” indicates that the concepts in A and those in B wholly constitute those in C. We might write this as “A + B = C” or “A ∪ B = C”—if we keep in mind that A, B, and C stand for concepts or properties, not for individual things. Leibniz also used the juxtaposition of terms in the following way: “AB ∞ C,” which we might write as “A × B = C” or “A ∩ B = C,” signifies in his system that all the concepts in both A and B wholly constitute the concept C.

      A universal affirmative judgment, such as “All A's are B's,” becomes in Leibniz' notation “A ∞ AB.” This equation states that the concepts included in the concepts of both A and B are the same as those in A. A syllogism, “All A's are B's; all B's are C's; therefore all A's are C's,” becomes the sequence of equations “A = AB; B = BC; therefore A = AC.” This conclusion can be derived from the premises by two simple algebraic substitutions and the associativity of logical multiplication (the derivation is written out below). Leibniz' interpretation of particular and negative statements was more problematic. Although he later seemed to prefer an algebraic, equational symbolic logic, he experimented with many alternative techniques, including graphs.
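
      Using the modern equational rendering (with juxtaposition as logical multiplication), the derivation runs as follows; this is a reconstruction in Leibniz' spirit rather than a quotation of his own presentation:

         A = AB              (first premise: “All A's are B's”)
         B = BC              (second premise: “All B's are C's”)
         A = A(BC)           (replacing B in the first premise by BC, using the second premise)
           = (AB)C           (associativity of logical multiplication)
           = AC              (replacing AB by A, using the first premise again)

      The result, A = AC, is precisely the equational form of “All A's are C's.”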

      As with many early symbolic logics, including many developed in the 19th century, Leibniz' system had difficulties with particular and negative statements, and it included little discussion of propositional logic and no formal treatment of quantified relational statements. (Leibniz later became keenly aware of the importance of relations and relational inferences.) Although Leibniz might seem to deserve to be credited with great originality in his symbolic logic—especially in his equational, algebraic logic—it turns out that such insights were relatively common to mathematicians of the 17th and 18th centuries who had a knowledge of traditional syllogistic logic. In 1685 Jakob Bernoulli (Bernoulli, Jakob) published a pamphlet on the parallels of logic and algebra and gave some algebraic renderings of categorical statements. Later the symbolic work of Lambert, Ploucquet, Euler, and even Boole—all apparently uninfluenced by Leibniz' or even Bernoulli's work—seems to show the extent to which these ideas were apparent to the best mathematical minds of the day.

The 18th and 19th centuries
      In the 18th century there were three major contributors to the development of formal logic: Ploucquet, Lambert, and Euler, although none went far beyond Leibniz and none influenced subsequent developments in the way that Boole and Frege later did. Leibniz' major goals for logic, such as the development of a “characteristic” language; the parallels among arithmetic, algebra, and syllogistic; and his notion of the truth of a judgment as the concept of the predicate being “included in” the concept of the subject, were carried forward by Christian Wolff (Wolff, Christian, Freiherr (Baron) von) but without any significant development of a logic, symbolic or otherwise. The prolific Wolff publicized Leibniz' general views widely and spawned two minor symbolic formulations of logic: that of J.A. Segner (Segner, Johann Andreas von) in 1740 and that of Joachim Georg Darjes (1714–91) in 1747. Segner used the notation “B < A” to signify, intensionally in the manner of Leibniz, that the concept of B is included in the concept of A (i.e., “All A's are B's”).

Gottfried Ploucquet
 The work of Gottfried Ploucquet (1716–90) was based on the ideas of Leibniz, although the symbolic calculus Ploucquet developed does not resemble that of Leibniz. The basis of Ploucquet's symbolic logic was the sign “>,” which he unfortunately used to indicate that two concepts are disjoint—i.e., having no basic concepts in common; in its propositional interpretation, it is equivalent to what became known in the 20th century as the “Sheffer stroke” function (also known to Peirce) meaning “neither . . . nor.” The universal negative proposition, “No A's are B's,” would become “A > B” (or, convertibly, “B > A”). The equality sign was used to denote conceptual identity, as in Leibniz. Capital letters were used for distributed terms, lowercase ones for undistributed terms. The intersection of concepts was represented by “+”; the multiplication sign (or juxtaposition) stood for the inclusive union of concepts; and a bar over a letter stood for complementation (in the manner of Leibniz). Thus “Ā” represented all non-A's, while “ā” meant the same as “some non-A.” Rules of inference were the standard algebraic substitution of identicals along with more complicated implicit rules for manipulating the nonidentities using “>.” Ploucquet was interested in graphic representations of logical relations—using lines, for example. He was also one of the first symbolic logicians to have worried extensively about representing quantification—although his own contrast of distributed and undistributed terms is a clumsy and limited device. Not a mathematician, Ploucquet did not pursue the logical interpretation of inverse operations (e.g., division, square root, and so on) and of binomial expansions; the interpretation of these operations was to plague some algebras of logic and sidetrack substantive development—first in the work of Leibniz and the Bernoullis, then in that of Lambert, Boole, and Schröder. Ploucquet published and promoted his views widely (his publications included an essay on Leibniz' logic); he influenced his contemporary Lambert and had a still greater influence upon Georg Jonathan von Holland and Christian August Semler.

Johann Heinrich Lambert (Lambert, Johann Heinrich)
 The greatest 18th-century logician was undoubtedly Johann Heinrich Lambert. Lambert was the first to demonstrate the irrationality of π (pi), and, when asked by Frederick the Great in what field he was most capable, is said to have curtly answered “All.” His own highly articulated philosophy was a more thorough and creative reworking of rationalist ideas from Leibniz and Wolff. His symbolic and formal logic, developed especially in his Sechs Versuche einer Zeichenkunst in der Vernunftlehre (1777; “Six Attempts at a Symbolic Method in the Theory of Reason”), was an elegant and notationally efficient calculus, extensively duplicating, apparently unwittingly, sections of Leibniz' calculus of a century earlier. Like the systems of Leibniz, Ploucquet, and most Germans, it was intensional, using terms to stand for concepts, not individual things. It used an identity sign and the plus sign in the natural algebraic way that one sees in Leibniz and Boole. Five features distinguish it from other systems. First, Lambert was concerned to separate the simpler concepts constituting a more complex concept into the genus and differentia—the broader and narrowing concepts—typical of standard definitions: the symbols for the genus and differentia of a concept were operations on terms, extracting the genus or differentia of a concept. Second, Lambert carefully differentiated among letters for known, undetermined, and genuinely unknown concepts, using different letters from the Latin alphabet; the lack of such distinctions in algebra instruction has probably caused extensive confusion. Third, his disjunction or union operation, “+,” was taken in the exclusive sense—excluding the overlap of two concepts, in distinction to Ploucquet's inclusive operation, for example. Fourth, Lambert accomplished the expression of quantification such as that in “Every A is B” by writing “a = mb”—that is, the known concept a is identical to the concepts in both the known concept b and an indeterminate concept m; this device is similar enough to Boole's later use of the letter “y” to suggest some possible influence. Finally, Lambert considered briefly the symbolic theorems that would not hold if the concepts were relations, such as “is the father of.” He also introduced a notation for expressing relational notions in terms of single-placed functions: in his system, “i = α::c” indicates that the individual (concept) i is the result of applying a function α to the individual concept c. Although it is not known whether Frege (Frege, Gottlob) had read Lambert, it is possible that Lambert's analysis influenced Frege's analysis of quantified relations, which depends on the notion of a function.
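
      Lambert's “a = mb” device extends naturally to syllogistic inference. The following is a modern-style sketch, not Lambert's own worked example; the indeterminate letters m and n belong to the reconstruction:

         a = mb              (“Every A is B”)
         b = nc              (“Every B is C”)
         a = m(nc) = (mn)c   (substitution and associativity)

      Since the product mn is itself an indeterminate concept, the final equation has exactly the form of “Every A is C.”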

Other 18th-century logicians
 Lambert also developed a method of pictorially displaying the overlap of the content of concepts with overlapping line segments. Leibniz had experimented with similar techniques. Two-dimensional techniques were popularized by the Swiss mathematician Leonhard Euler (Euler, Leonhard) in his Lettres à une princesse d'Allemagne (1768–74; “Letters to a German Princess”). These techniques and the related Venn diagrams have been especially popular in logic education. In Euler's method the interior areas of circles represented (intensionally) the more basic concepts making up a concept or property. To display “All A's are B's,” Euler drew a circle labeled “A” that was entirely contained within another circle, “B.” Such circles could be manipulated to discover the validity of syllogisms. Euler did not develop this method very far, and it did not constitute a significant logical advance. Leibniz himself had occasionally drawn such illustrations, and they apparently first entered the literature in the Universalia Euclidea (1661) of Johann C. Sturm and were more frequently used by Johann C. Lange in 1712. (Vives had employed triangles for similar purposes in 1555.) Euler's methods were systematically developed by the French mathematician Joseph-Diez Gergonne in 1816–17, although Gergonne retreated from two-dimensional graphs to linear formulas that could be more easily printed and manipulated. For complicated reasons, almost all German formal logic came from the Protestant areas of the German-speaking world.

      The German philosophers Immanuel Kant (Kant, Immanuel) and Georg Wilhelm Friedrich Hegel (Hegel, Georg Wilhelm Friedrich) made enormous contributions to philosophy, but their contributions to formal logic can only be described as minimal or even harmful. Kant refers to logic as a virtually completed artifice in his important Critique of Pure Reason (1781). He showed no interest in Leibniz' goal of a natural, universal, and efficient logical language and no appreciation of symbolic or mathematical formulations. His own lectures on logic, published in 1800 as Immanuel Kants Logik: ein Handbuch zu Vorlesungen, and his earlier The Mistaken Subtlety of the Four Syllogistic Figures (1762) were minor contributions to the history of logic. Hegel refers early in his massive Science of Logic (1812–16) to the centuries of work in logic since Aristotle as a mere preoccupation with “technical manipulations.” He took issue with the claim that one could separate the “logical form” of a judgment from its substance—and thus with the very possibility of logic based on a theory of logical form. When the study of logic blossomed again on German-speaking soil, contributors came from mathematics and the natural sciences.

      In the English-speaking world, logic had always been more easily and continuously tolerated, even if it did not so early reach the heights of mathematical sophistication that it had in the German- and French-speaking worlds. Logic textbooks in English appeared in considerable numbers in the 17th and 18th centuries: some were translations, while others were handy, simplified handbooks with some interesting and developed positions, such as John Wallis' Institutio Logicae (1687) and works by Henry Aldrich, Isaac Watts, and the founder of Methodism, John Wesley. Out of this tradition arose Richard Whately's (Whately, Richard) Elements of Logic (1826) and, in the same tradition, John Stuart Mill's enormously popular A System of Logic (1843). Although now largely relegated to a footnote, Whately's nonsymbolic textbook reformulated many concepts in such a thoughtful and clear way that it is generally (and first by De Morgan) credited with single-handedly bringing about the “rebirth” of English-language logic.

Boole and De Morgan
      The two most important contributors to British logic in the first half of the 19th century were undoubtedly George Boole (Boole, George) and Augustus De Morgan (De Morgan, Augustus). Their work took place against a more general background of logical work in English by figures such as Whately, George Bentham, Sir William Hamilton, and others. Although Boole cannot be credited with the very first symbolic logic, he was the first major formulator of a symbolic extensional logic that is familiar today as a logic or algebra of classes. (A correspondent of Lambert, Georg von Holland, had experimented with an extensional theory, and in 1839 the English writer Thomas Solly presented an extensional logic in A Syllabus of Logic, though not an algebraic one.)

      Boole published two major works, The Mathematical Analysis of Logic in 1847 and An Investigation of the Laws of Thought in 1854. It was the first of these two works that had the deeper impact on his contemporaries and on the history of logic. The Mathematical Analysis of Logic arose as the result of two broad streams of influence. The first was the English logic-textbook tradition. The second was the rapid growth in the early 19th century of sophisticated discussions of algebra and anticipations of nonstandard algebras. The British mathematicians D.F. Gregory and George Peacock were major figures in this theoretical appreciation of algebra. Such conceptions gradually evolved into “nonstandard” abstract algebras such as quaternions, vectors, linear algebra, and Boolean algebra itself.

      Boole used capital letters to stand for the extensions of terms; they are referred to (in 1854) as classes of “things” but should not be understood as modern sets. The universal class or term—which he called simply “the Universe”—was represented by the numeral “1,” and the null class by “0.” The juxtaposition of terms (for example, “AB”) created a term referring to the intersection of two classes or terms. The addition sign signified the non-overlapping union; that is, “A + B” referred to the entities in A or in B; in cases where the extensions of terms A and B overlapped, the expression was held to be “undefined.” For designating a proper subclass of a class, Boole used the notation “v,” writing for example “vA” to indicate some of the A's. Finally, he used subtraction to indicate the removing of terms from classes. For example, “1 − x” would indicate what one would obtain by removing the elements of x from the universal class—that is, obtaining the complement of x (relative to the universe, 1).

 Basic equations included: 1A = A, 0A = 0, A + 0 = A, A + 1 = 1 (but only where A = 0), A + B = B + A, AB = BA, AA = A (but not A + A = A), (AB)C = A(BC), and the distribution laws, A(B + C) = AB + AC and A + (BC) = (A + B)(A + C). Boole offered a relatively systematic, but not rigorously axiomatic, presentation. For a universal affirmative statement such as “All A's are B's,” Boole used three alternative notations: AB = A (somewhat in the manner of Leibniz), A(1 − B) = 0, or A = vB (the class of A's is equal to some proper subclass of the B's). The first and second interpretations allowed one to derive syllogisms by algebraic substitution; the third required manipulation of subclass (“v”) symbols.
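
      The second of these notations lends itself to a purely algebraic derivation of the syllogism “Barbara.” The following sketch is in Boole's spirit rather than a quotation from his text; note that the two classes added together below are disjoint (one lies within B, the other within 1 − B), so the “+” is defined:

         A(1 − B) = 0                       (“All A's are B's”)
         B(1 − C) = 0                       (“All B's are C's”)
         A(1 − C) = A(1 − C)(B + (1 − B))   (since B + (1 − B) = 1)
                  = AB(1 − C) + A(1 − B)(1 − C)
                  = A·0 + 0·(1 − C) = 0     (applying the two premises)

      Hence A(1 − C) = 0, which is Boole's form of “All A's are C's.”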

      In contrast to earlier symbolisms, Boole's was extensively developed, with a thorough exploration of a large number of equations (including binomial-like expansions) and techniques. The formal logic was separately applied to the interpretation of propositional logic, which became an interpretation of the class or term logic—with terms standing for occasions or times rather than for concrete individual things. Following the English textbook tradition, deductive logic is but one half of the subject matter of the book, with inductive logic and probability theory constituting the other half of both his 1847 and 1854 works.

      Seen in historical perspective, Boole's logic was a remarkably smooth blend of the new “algebraic” perspective and the English-logic textbook tradition. His 1847 work begins with a slogan that could have served as the motto of abstract algebra: “. . . the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely upon the laws of combination.”

      Modifications to Boole's system were swift in coming: in the 1860s Peirce and Jevons both proposed replacing Boole's “+” with a simple inclusive union or summation: the expression “A + B” was to be interpreted as designating the class of things in A, in B, or in both. This results in accepting the equation “1 + 1 = 1,” which is certainly not true of the ordinary numerical algebra and at which Boole apparently balked.

      Interestingly, one defect in Boole's theory, its failure to detail relational inferences, was dealt with almost simultaneously with the publication of his first major work. In 1847 Augustus De Morgan published his Formal Logic; or, the Calculus of Inference, Necessary and Probable. Unlike Boole and most other logicians in the United Kingdom, De Morgan knew the medieval theory of logic and semantics and also knew the Continental, Leibnizian symbolic tradition of Lambert, Ploucquet, and Gergonne. The symbolic system that De Morgan introduced in his work and used in subsequent publications is, however, clumsy and does not show the appreciation of abstract algebras that Boole's did. De Morgan did introduce the enormously influential notion of a possibly arbitrary and stipulated “universe of discourse” that was used by later Booleans. (Boole's original universe referred simply to “all things.”) This view influenced 20th-century logical semantics. De Morgan contrasted uppercase and lowercase letters: a capital letter represented a class of individuals, while a lowercase letter represented its complement relative to the universe of discourse, a convention Boole might have expressed by writing “x = (1 − X)”; this stipulation results in the general principle: xX = 0. A period indicated a (propositional) negation, and the parentheses “(” and “)” indicated, respectively, distributed (if the parenthesis faces toward the nearby term) and undistributed terms. Thus De Morgan would write “All A's are B's” as “A))B” and “Some A's are B's” as “A()B.” These distinctions parallel Boole's account of distribution (quantification) in “A = vB” (where A is distributed but B is not) and “vA = B” (where both terms are distributed). Although his entire system was developed with wit, consistency, and brilliance, it is remarkable that De Morgan never saw the inferiority of his notation to almost all available symbolisms.

      De Morgan's other essays on logic were published in a series of papers from 1846 to 1862 (and an unpublished essay of 1868) entitled simply “On the Syllogism.” The first series of four papers found its way into the middle of the Formal Logic of 1847. The second series, published in 1850, is of considerable significance in the history of logic, for it marks the first extensive discussion of quantified relations since late medieval logic and Jung's massive Logica Hamburgensis of 1638. In fact, De Morgan made the point, later to be exhaustively repeated by Peirce and implicitly endorsed by Frege, that relational (logical relation) inferences (inference) are the core of mathematical inference and scientific reasoning (reason) of all sorts; relational inferences are thus not just one type of reasoning but rather are the most important type of deductive reasoning (deduction). Often attributed to De Morgan—not precisely correctly but in the right spirit—was the observation that all of Aristotelian (Aristotelianism) logic was helpless to show the validity of the inference, “All horses are animals; therefore, every head of a horse is the head of an animal.” The title of this series of papers, De Morgan's devotion to the history of logic, his reluctance to mathematize logic in any serious way, and even his clumsy notation—apparently designed to represent as well as possible the traditional theory of the syllogism—show De Morgan to be a deeply traditional logician.

Charles Sanders Peirce (Peirce, Charles Sanders)
      Charles Sanders Peirce, the son of the Harvard mathematics professor and discoverer of linear associative algebra Benjamin Peirce (Peirce, Benjamin), was the first significant American figure in logic. Peirce had read the work of Aristotle, Whately, Kant, and Boole as well as medieval works and was influenced by his father's sophisticated conceptions of algebra and mathematics. Peirce's first published contribution to logic was his improvement in 1867 of Boole's system. Although Peirce never published a book on logic (he did edit a collection of papers by himself and his students, the Studies in Logic of 1883), he was the author of an important article in 1870, whose abbreviated title was “On the Notation of Relatives,” and of a series of articles in the 1880s on logic and mathematics; these were all published in American mathematics journals.

 It is relatively easy to describe Peirce's main approach to logic, at least in his earlier work: it was a refinement of Boole's algebra of logic and, especially, the development of techniques for handling relations within that algebra. In a phrase, Peirce sought a blend of Boole (on the algebra of logic) and De Morgan (on quantified relational inferences). Described in this way, however, it is easy to underestimate the originality and creativity (even idiosyncrasy) of Peirce. Although committed to the broadly “algebraic” tradition of Boole and his father, Peirce quickly moved away from the equational style of Boole and from efforts to mimic numerical algebra. In particular, he argued that a transitive (transitive law) and asymmetric logical relation of inclusion, for which he used the symbol “—<,” was more useful than equations; the importance of such a basic, transitive relation was first stressed by De Morgan, and much of Peirce's work can be seen as an exploration of the formal, abstract properties of this distinctively logical relation. He used it to express class inclusion, the “if . . . then” connective of propositional logic, and even the relation between the premises and conclusion of an argument (“illation”). Furthermore, Peirce slowly abandoned the strictly substitutional character of algebraic terms and increasingly used notation that resembled modern quantifiers. Quantifiers were briefly introduced in 1870 and were used extensively in the papers of the 1880s. They were borrowed by Schröder for his extremely influential treatise on the algebra of logic and were later adopted by Peano from Schröder; thus in all probability they are the source of the notation for quantifiers now widely used. In his earlier works, Peirce might have written “A —< B” to express the universal statement “All A's are B's”; however, he often wrote this as “Πi Ai —< Πi Bi” (the class of all the i's that are A is included in the class of all the i's that are B) or, still later and interpreted in the modern way, as “For all i's, if i is A, then i is B.” Peirce and Schröder were never clear about whether they thought these quantifiers and variables were necessary for the expression of certain statements (as opposed to using strictly algebraic formulas), and Frege did not address this vital issue either; the Boolean algebra without quantifiers, even with extensions for relations that Peirce introduced, was demonstrated to be inadequate only in the mid-20th century by Alfred Tarski and others.
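
      The correspondence between this notation and modern quantifier notation can be made explicit. The renderings below are a reconstruction (the modern symbols ∀, ∃, →, and ∧ are anachronisms here), but they capture the intended readings:

         Πi Ai —< Πi Bi   ≈   ∀i (Ai → Bi)   (“All A's are B's”)
         Σi Ai Bi         ≈   ∃i (Ai ∧ Bi)   (“Some A is B”)

      The sign Π (a product taken over all individuals i) thus plays the role of the universal quantifier, and Σ (a sum over all individuals) that of the existential quantifier.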

      Peirce developed this symbolism extensively for relations. His earlier work was based on versions of multiplication and addition for relations—called relative multiplication and addition—so that Boolean laws still held. Both Peirce's conception of the purposes of logic and the details of his symbolism and logical rules were enormously complicated by highly developed and unusual philosophical views, by elaborate theories of mind and thought, and by his theory of mental and visual signs (semiotics). He argued that all reasoning (reason) was “diagrammatic” but that some diagrams were better (more iconic) than others if they more accurately represented the structure of our thoughts. His earlier work seems to be more in the tradition of developing a calculus of reason that would make reasoning quicker and better and permit one to validate others' reasoning more accurately and efficiently. His later views, however, seem to be more in the direction of developing a “characteristic” language. In the late 1880s and 1890s Peirce developed a far more extensively iconic system of logical representation, his existential graphs. This work was, however, not published in his lifetime and was little recognized until the 1960s.

      Peirce did not play a major role in the important debates at the end of the 19th century on the relationship of logic and mathematics (mathematics, foundations of) and on set theory. In fact, in responding to an obviously quick reading of Russell's restatements of Frege's position that mathematics could be derived from logic, Peirce countered that logic was properly seen as a branch of mathematics, not vice versa. He had no influential students: the brilliant O.H. Mitchell died at an early age, and Christine Ladd-Franklin never adapted to the newer symbolic tradition of Peano, Frege, and Russell. On the other hand, Peano and especially Schröder had read Peirce's work carefully and adopted much of his notation and his doctrine of the importance of relations (although they were less fervent than De Morgan and Peirce). Peano and Schröder, using much of Peirce's notation, had an enormous influence into the 20th century.

      In Germany, the older formal and symbolic logical tradition was barely kept alive by figures such as Salomon Maimon, Semler, August Detlev Twesten, and Moritz Wilhelm Drobisch. The German mathematician and philologist Hermann Günther Grassmann (Grassmann, Hermann Günther) published in 1844 his Ausdehnungslehre (“The Theory of Extension”), in which he used a novel and difficult notation to explore quantities (“extensions”) of all sorts—logical extension and intension, numerical, spatial, temporal, and so on. Grassmann's notion of extension is very similar to the use of the broad term “quantity” (and the phrase “logic of quantity”) that is seen in the works of George Bentham and Sir William Hamilton from the same period in the United Kingdom; it is from this English-language tradition that the terms, still in use, of logical “quantification” (quantification) and “quantifiers” derive. Grassmann's work influenced Robert Grassmann's Die Begriffslehre oder Logik (1872; “The Theory of Concepts or Logic”), Schröder, and Peano. The stage for a rebirth of German formal logic was further set by Friedrich Adolf Trendelenburg's (Trendelenburg, Friedrich Adolf) works, published in the 1860s and '70s, on Aristotle's and Leibniz' logic and on the relationship of mathematics and philosophy. Alois Riehl's much-read article “Die englische Logik der Gegenwart” (1876; “Contemporary English Logic”) introduced German speakers to the works of Boole, De Morgan, and Jevons.

Gottlob Frege (Frege, Gottlob)
 In 1879 the young German mathematician Gottlob Frege—whose mathematical specialty, like Boole's, had actually been calculus—published perhaps the finest single book on symbolic logic in the 19th century, Begriffsschrift (“Conceptual Notation”). The title was taken from Trendelenburg's translation of Leibniz' notion of a characteristic language. Frege's small volume is a rigorous presentation of what would now be called the first-order predicate logic. It contains a careful use of quantifiers and predicates (although predicates are described as functions, suggestive of the technique of Lambert). It shows no trace of the influence of Boole and little trace of the older German tradition of symbolic logic. One might surmise that Frege was familiar with Trendelenburg's discussion of Leibniz, had probably encountered works by Drobisch and Hermann Grassmann, and possibly had a passing familiarity with the works of Boole and Lambert, but was otherwise ignorant of the history of logic. He later characterized his system as inspired by Leibniz' goal of a characteristic language but not of a calculus of reason. Frege's notation was unique and problematically two-dimensional; this alone caused it to be little read.

      Frege was well aware of the importance of functions in mathematics, and these form the basis of his notation for predicates; he never showed an awareness of the work of De Morgan and Peirce on relations or of older medieval treatments. The work was reviewed (by Schröder, among others), but never very positively, and the reviews always chided him for his failure to acknowledge the Boolean and older German symbolic tradition; reviews written by philosophers chided him for various sins against reigning idealist dogmas. Frege stubbornly ignored the critiques of his notation and persisted in publishing all his later works using it, including his little-read magnum opus, Grundgesetze der Arithmetik (1893–1903; The Basic Laws of Arithmetic).

      His first writings after the Begriffsschrift were bitter attacks on Boolean methods (Boole, George) (showing no awareness of the improvements by Peirce, Jevons, Schröder, and others) and a defense of his own system. His main complaint against Boole was the artificiality of mimicking notation better suited for numerical analysis rather than developing a notation for logical analysis alone. This work was followed by Die Grundlagen der Arithmetik (1884; The Foundations of Arithmetic) and then by a series of extremely important papers on precise mathematical and logical topics. After 1879 Frege carefully developed his position that all of mathematics (mathematics, philosophy of) could be derived from, or reduced to, basic “logical” laws—a position later to be known as logicism in the philosophy of mathematics. His view paralleled similar ideas about the reducibility of mathematics to set theory from roughly the same time—although Frege always stressed that his was an intensional logic of concepts, not of extensions and classes. His views are often marked by hostility to British extensional logic and to the general English-speaking tendencies toward nominalism and empiricism that he found in authors such as J.S. Mill. Frege's work was much admired in the period 1900–10 by Bertrand Russell (Russell, Bertrand), who promoted Frege's logicist research program—first in The Principles of Mathematics (1903) and then, with Alfred North Whitehead, in Principia Mathematica (1910–13)—but who used a Peirce-Schröder-Peano system of notation rather than Frege's; Russell's development of relations and functions was very similar to Schröder's and Peirce's. Nevertheless, Russell's formulation of what is now called the “set-theoretic” paradoxes (Russell's paradox) was taken by Frege himself, perhaps too readily, as a shattering blow to his goal of founding mathematics and science in an intensional, “conceptual” logic. Almost all progress in symbolic logic in the first half of the 20th century was accomplished using set theories and extensional logics and thus mainly relied upon work by Peirce, Schröder, Peano, and Georg Cantor. Frege's care and rigour were, however, admired by many German logicians and mathematicians, including David Hilbert and Ludwig Wittgenstein. Although he did not formulate his theories in an axiomatic form, Frege's derivations were so careful and painstaking that he is sometimes regarded as a founder of this axiomatic tradition in logic. Since the 1960s Frege's works have been translated extensively into English and reprinted in German, and they have had an enormous impact on a new generation of mathematical and philosophical logicians.

Ernst Schröder
 German symbolic logic (in a broad sense) was cultivated by two other major figures in the 19th century. The tradition of Hermann Grassmann was continued by the German mathematician and algebraist Ernst Schröder. His first work, Der Operationskreis des Logikkalkuls (1877; “The Circle of Operations of the Logical Calculus”), was an equational algebraic logic influenced by Boole and Grassmann but presented in an especially clear, concise, and careful manner; it was, however, intensional in that letters stand for concepts, not classes or things. Although Jevons and Frege complained of what they saw as the “mysterious” relationship between numerical algebra and logic in Boole, Schröder announced with great clarity: “There is certainly a contrast of the objects of the two operations. They are totally different. In arithmetic, letters are numbers, but here, they are arbitrary concepts.” He also used the phrase “mathematical logic.” Schröder's main work was his three-volume Vorlesungen über die Algebra der Logik (1890–1905; “Lectures on the Algebra of Logic”). This is an extensive and sometimes original presentation of all that was known about the algebra of logic circa 1890, together with derivations of thousands of theorems and an extensive bibliography of the history of logic. It is an extensional logic with a special sign for inclusion or “subsumption” (paralleling Peirce's inclusion sign “—<”), an inclusive notion of class union, and the usual Boolean operations and rules.

      The first volume is devoted to the basic theory of an extensional theory of classes (which Schröder called Gebiete, logical “domains,” a term that is somewhat suggestive of Grassmann's “extensions”). Schröder was especially interested in formal features of the resulting calculus, such as the property he called “dualism” (carried over from his 1877 work): any theorem remains valid if the addition and multiplication, as well as 0 and 1, are switched—for example, AĀ = 0, A + Ā = 1, and the pair of De Morgan laws. The second volume is a discussion of propositional (propositional calculus) logic, with propositions taken to refer to domains of times in the manner of Boole's Laws of Thought but using the same calculus. Schröder, unlike Boole and Peirce, distinguished between the universes for the separate cases of the class and propositional logics, using respectively 1 and 1̇ (a “dotted” 1). The third volume contains Schröder's masterful but leisurely development of the logic of relations, borrowing heavily from Peirce's work. In the first decades of the 20th century, Schröder's volumes were the only major works in German on symbolic logic other than Frege's, and they had an enormous influence on important figures writing in German, such as Thoralf Albert Skolem, Leopold Löwenheim, Julius König, Hilbert, and Tarski. (Frege's influence was felt mainly through Russell and Whitehead's Principia Mathematica, but this tradition had a rather minor impact on 20th-century German logic.) Although it was an extensional logic more in the English tradition, Schröder's logic exhibited the German tendency of focusing exclusively upon deductive logic; it was a legacy of the English textbook tradition always to cover inductive logic in addition, and this trait survived in (and often cluttered) the works of Boole, De Morgan, Venn, and Peirce.

Georg Cantor
      A development in Germany originally completely distinct from logic but later to merge with it was Georg Cantor's (Cantor, Georg) development of set theory. In work originating from discussions on the foundations of the infinitesimal and derivative calculus by Baron Augustin-Louis Cauchy and Karl Weierstrass, Cantor and Richard Dedekind (Dedekind, Richard) developed methods of dealing with the large, and in fact infinite, sets of the integers and points on the real number line. Although the Booleans had used the notion of a class, they rarely developed tools for dealing with infinite classes, and no one systematically considered the possibility of classes whose elements were themselves classes, which is a crucial feature of Cantorian set theory. The conception of “real” or “closed” infinities of things, as opposed to infinite possibilities, was a medieval problem that had also troubled 19th-century German mathematicians, especially the great Carl Friedrich Gauss. The Bohemian mathematician and priest Bernhard Bolzano (Bolzano, Bernhard) emphasized the difficulties posed by infinities in his Paradoxien des Unendlichen (1851; “Paradoxes of the Infinite”); in 1837 he had written an anti-Kantian and pro-Leibnizian nonsymbolic logic that was later widely studied. First Dedekind, then Cantor used Bolzano's tool of measuring sets by one-to-one mappings; using this technique, Dedekind gave in Was sind und was sollen die Zahlen? (1888; “What Are and Should Be the Numbers?”) a precise definition of an infinite set. A set is infinite if and only if the whole set can be put into one-to-one correspondence with a proper part of the set. (De Morgan and Peirce had earlier given quite different but technically correct characterizations of infinite domains; these were not especially useful in set theory and went unnoticed in the German mathematical world.)
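
      A standard illustration (not Dedekind's own wording) shows how the definition works. The natural numbers are infinite in this sense because the doubling map

         f(n) = 2n

      puts the whole set {0, 1, 2, 3, . . .} into one-to-one correspondence with the even numbers {0, 2, 4, 6, . . .}, which form only a proper part of it.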

      Although Cantor developed the basic outlines of a set theory, especially in his treatment of infinite sets and the real number line, he did not worry about rigorous foundations for such a theory—thus, for example, he did not give axioms of set theory—nor about the precise conditions governing the concept of a set and the formation of sets. Although there are some hints in Cantor's writing of an awareness of problems in this area (such as hints of what later came to be known as the class/set distinction), these difficulties were forcefully posed by the paradoxes of Russell and the Italian mathematician Cesare Burali-Forti and were first overcome in what has come to be known as Zermelo-Fraenkel set theory.

Other 19th-century logicians
 French logic was ably, though not originally, represented in this period by Louis Liard and Louis Couturat (Couturat, Louis). Couturat's L'Algèbre de la logique (1905; The Algebra of Logic) and De l'Infini mathématique (1896; “On Mathematical Infinity”) were important summaries of German and English research on symbolic logic, while his book on Leibniz' logic (1901) and an edition of Leibniz' previously unpublished writings on logic (1903) were very important events in the study of the history of logic. In Russia V.V. Bobyin (1886) and Platon Sergeevich Poretsky (1884) initiated a school of algebraic logic. In the United Kingdom a vast amount of work on formal and symbolic logic was published in the best philosophical journals from 1870 until 1910. This includes work by William Stanley Jevons (Jevons, William Stanley), whose intensional logic is unusual in the English-language tradition; John Venn, who was notable for his (extensional) diagrams of class relationships but who retained Boole's noninclusive class union operator; Hugh MacColl; Alexander Bain; Sophie Bryant; Emily Elizabeth Constance Jones; Arthur Thomas Shearman; Lewis Carroll (Charles Lutwidge Dodgson); and Whitehead (Whitehead, Alfred North), whose A Treatise on Universal Algebra (1898) was the last major English logical work in the algebraic tradition. Little of this work influenced Russell's (Russell, Bertrand) conception, which was soon to sweep through English-language logic; Russell was more influenced by Frege, Peano, and Schröder. The older nonsymbolic syllogistic (syllogism) tradition was represented in major English universities well into the 20th century by John Cook Wilson, William Ernest Johnson, Lizzie Susan Stebbing, and Horace William Brindley Joseph and in the United States by Ralph Eaton, James Edwin Creighton, Charles West Churchman, and Daniel Sommer Robinson.

      The Italian mathematician Giuseppe Peano's (Peano, Giuseppe) contributions represent a more extensive impetus to the new, nonalgebraic logic. He had a direct influence on the notation of later symbolic logic that exceeded that of Frege and Peirce. His early works (such as the logical section of the Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann [1888; “Calculus of Geometry According to the Theory of Extension of H. Grassmann”]) were squarely in the algebraic tradition of Boole, Grassmann, Peirce, and Schröder. Writing in the 1890s in his own journal, Rivista di matematica, with a growing appreciation of the use of quantifiers in the first and third volumes of Schröder's Vorlesungen, Peano evolved a notation for quantifiers. This notation, along with Peano's use of the Greek letter epsilon, ε, to denote membership in a set, was adopted by Russell and Whitehead and used in later logic and set theory. Although Peano himself was not interested in the logicist program of Frege and Russell, his five postulates governing the structure of the natural numbers (now known as the Peano Postulates), together with similar ideas in the work of Peirce and Dedekind, came to be regarded as the crucial link between logic and mathematics. It was widely thought that all mathematics could be derived from the theory of the natural numbers; if the Peano postulates could in turn be derived from logic, or from logic including set theory, the feasibility of this program would have been demonstrated. Simultaneously with his work in logic, Peano wrote many articles on universal languages and on the features of an ideal notation in mathematics and logic—all explicitly inspired by Leibniz.

      Logic in the 19th century culminated grandly with the First International Congress of Philosophy and the Second International Congress of Mathematics held consecutively in Paris in August 1900. The overlap between the two congresses was extensive and fortunate for the future of logic and philosophy. Peano, Alessandro Padoa, Burali-Forti, Schröder, Cantor, Dedekind, Frege, Felix Klein, Christine Ladd-Franklin (Peirce's student), Couturat, and Henri Poincaré were on the organizing committee of the Philosophical Congress; for the subsequent development of logic, Bertrand Russell was perhaps its most important attendee. The influence of algebraic logic was already ebbing, and the importance of nonalgebraic symbolic logics, of axiomatizations, and of logic (and set theory) as a foundation for mathematics was ascendant. Until the congresses of 1900 and the work of Russell and Hilbert (Hilbert, David), mathematical logic lacked full academic legitimacy. None of the 19th-century logicians had achieved major positions at first-rank universities: Peirce never obtained a permanent university position, Dedekind was a high-school teacher, and Frege and Cantor remained at provincial universities. The mathematicians stayed for the Congress of Mathematics, and it was here that David Hilbert gave his presentation of the 23 most significant unsolved problems of mathematics—several of which were foundational issues in mathematics and logic that were to dominate logical research during the first half of the 20th century.

20th-century logic
      In 1900 logic was poised on the brink of the most active period in its history. The late 19th-century work of Frege, Peano, and Cantor, as well as Peirce's and Schröder's extensions of Boole's insights, had broken new ground, raised considerable interest, established international lines of communication, and formed a new alliance between logic and mathematics. Several projects internal to late 19th-century logic coalesced in the early 20th century, especially in works such as Russell and Whitehead's Principia Mathematica. These were the development of a consistent set or property theory (originating in the work of Cantor and Frege), the application of the axiomatic method (including non-symbolically), the development of quantificational (quantification) logic, and the use of logic to understand mathematical objects and the nature of mathematical proof. These projects were unified by a general effort to use symbolic (symbol) techniques, sometimes called mathematical, or formal (formal logic), techniques. Logic became increasingly “mathematical,” then, in two senses. First, it attempted to use symbolic methods like those that had come to dominate mathematics. Second, an often dominant purpose of logic came to be its use as a tool for understanding the nature of mathematics—such as in defining mathematical concepts, precisely characterizing mathematical systems, or describing the nature of ideal mathematical proof. (See mathematics, history of: Mathematics in the 19th and 20th centuries (mathematics), and mathematics, foundations of.)

Russell (Russell, Bertrand) and Whitehead's (Whitehead, Alfred North) Principia Mathematica
      The three-volume Principia Mathematica (1910–13) was optimistically named after the Philosophiae naturalis principia mathematica of another hugely important Cambridge thinker, Isaac Newton. Like Newton's Principia, it was imbued with an optimism about the application of mathematical techniques, this time not to physics but to logic and to mathematics itself—what the first sentence of their preface calls “the mathematical treatment of the principles of mathematics.” It was intended by Russell and Whitehead both as a summary of then-recent work in logic (especially by Frege, Cantor, and Peano) and as a ground-breaking, large-scale treatise systematically developing mathematical logic and deriving basic mathematical principles from the principles of logic alone.

      The Principia was the natural outcome of Russell's earlier polemical book, The Principles of Mathematics (published in 1903 but largely written in 1900), and his views were later summarized in Introduction to Mathematical Philosophy (1919). Whitehead's A Treatise on Universal Algebra (1898) was more in the algebraic (algebra) tradition of Boole, Peirce, and Schröder, but there is a sense in which Principia Mathematica became the second volume both of it and of Russell's Principles.

      The main idea in the Principia is the view, taken from Frege, that all of mathematics (mathematics, philosophy of) could be derived from the principles of logic alone. This view later came to be known as logicism and was one of the principal philosophies of mathematics in the early 20th century. Number theory, the core of mathematics, was organized around the Peano postulates, stated in works by Peano of 1889 and 1895 (and anticipated by similar but less influential theories of Peirce and Dedekind). These postulates state and organize the fundamental laws of the “natural” numbers (the nonnegative integers), and thus of all of mathematics:
● 0 is a number.
● The successor of any number is also a number.
● No two distinct numbers have the same successor.
● 0 is not the successor of any number.
● If any property is possessed by 0 and also by the successor of any number having the property, then all numbers have that property.

      If some entities satisfying these conditions could be derived or constructed in logic, it would have been shown that mathematics was (or at least could be) founded in pure logic, requiring no additional assumptions.
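
      A minimal computational sketch can make the postulates concrete (the representation by nested tuples is purely illustrative, not Peano's or the Principia's construction): 0 is a distinguished object, the successor operation never yields 0 and never identifies distinct numbers, and structural recursion mirrors the induction postulate.

    # Peano numerals as nested tuples: 0 is (), successor wraps in a tuple.
    ZERO = ()                       # postulate 1: 0 is a number

    def succ(n):
        return (n,)                 # postulate 2: successors are numbers

    # Postulates 3 and 4 hold because tuples compare structurally:
    # succ(m) == succ(n) only if m == n, and succ(n) is never ().
    def to_int(n):
        # structural recursion, the computational counterpart of induction
        return 0 if n == ZERO else 1 + to_int(n[0])

    assert to_int(succ(succ(ZERO))) == 2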

      Although his system actually used the intensional, second-order language of functions and properties, Frege had claimed to have accomplished precisely this, identifying 0 with the empty set, 1 with the set of all single-membered sets (singletons), 2 with the set of all dual-membered sets (doubletons), and so on. These sets of equinumerous sets were then what numbers really were. Unfortunately, Russell showed through his famous paradox (Russell's paradox) that the theory is inconsistent and, hence, that any statement at all can be derived in Frege's system, not merely desired logical truths, the Peano postulates, and what follows from them. Russell, in a famous letter to Frege, asked him to consider “the set of all those sets not members of themselves.” Paradox follows whether one assumes that this set is a member of itself or that it is not. After meditating on this paradox and a great many other paradoxes devised by Burali-Forti, George Godfrey Berry, and others, Russell and Whitehead concluded that the main difficulty lies in allowing the construction of entities that contain a “vicious circle”—i.e., entities that are used in the construction or definition of themselves.
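
      In symbols the difficulty is immediate. Writing R for the set Russell described, its defining condition is $R = \{x : x \notin x\}$, and instantiating that condition with R itself yields

      $$R \in R \iff R \notin R,$$

a contradiction on either alternative.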

      Russell and Whitehead sought to rule out this possibility while at the same time allowing a great many of the operations that Frege had deemed desirable. The result was the theory of types: (types, theory of) all sets and other entities have a logical “type,” and sets are always constructed by specifying members of lower types. (The version used in Principia Mathematica, with its hierarchy of orders as well as types, is known as the “ramified” theory of types; F.P. Ramsey offered a criticism that led to its later simplification into the so-called simple theory of types.) Consequently, to speak of sets that are, or are not, “members of themselves” is simply to violate this rule governing the specification of sets. There is some evidence that Cantor (Cantor, Georg) had been aware of the difficulties created when there is no such restriction (he permitted large collective entities that do not obey the usual rules for sets), and a parallel intuition concerning the pitfalls of certain operations was independently followed by Ernst Zermelo in the development of his set theory.

      In addition to its notation (much of it borrowed from Peano), its masterful development of logical systems for propositional and predicate logic, and its overcoming of difficulties that had beset earlier logical theories and logistic conceptions, the Principia offered discussions of functions, definite descriptions, truth, and logical laws that had a deep influence on discussions in analytical philosophy (analytic philosophy) and logic throughout the 20th century. What is perhaps missing is any hesitation or perplexity about the limits of logic: whether this logic is, for example, provably consistent, complete, or decidable, or whether there are concepts expressible in natural languages but not in this logical notation. This is somewhat odd, given the well-known list of problems posed by Hilbert in 1900 that came to animate 20th-century logic, especially German logic. The Principia is a work of confidence and mastery and not of open problems and possible difficulties and shortcomings; it is a work closer to the naive progressive elements of the Jahrhundertwende than to the agonizing fin de siècle.

20th-century set theory
      Independently of Russell and Whitehead's work, and more narrowly in the German mathematical tradition of Dedekind and Cantor, in 1908 Ernst Zermelo described axioms of set theory that, slightly modified, came to be standard in the 20th century. The type theory of the Principia Mathematica has, by contrast, gradually faded in influence. Like that of Russell and Whitehead, Zermelo's system avoids the paradoxes inherent in Frege's and Cantor's systems by imposing certain restrictions on what may be a set.

      Zermelo's axioms are:
● Axiom of extensionality. If two sets have the same members, then they are identical.
● Axiom of elementary sets. There exists a set with no members, the null or empty set. For any object, there exists the (singleton) set containing only it, and, for any two objects, there exists the (doubleton) set containing just those two.
● Axiom of separation. For any well-formed property and any set S, there is a set, S′, containing all and only the members of S having this property. That is, already-existing sets can be partitioned or separated into parts by certain properties.
● Power set axiom. If S is a set, then there exists a set, S′, which contains all and only the subsets of S.
● Union axiom. If S is a set, then there is a set containing all and only the members of the sets in S.
● Axiom of choice. (Discussed below.)
● Axiom of infinity. There exists at least one set that contains an infinite number of members.

      With the exceptions of the axioms of extensionality, elementary sets, and infinity, these axioms allow new sets to be constructed from already-constructed sets by carefully constrained operations. This method embodies what has come to be known as the “iterative” conception of a set. This list of axioms was eventually modified by Zermelo and by Abraham Fraenkel, and the result is widely known as Zermelo-Fraenkel set theory, or ZF for short. (See the article set theory: Axiomatic set theory (set theory).)
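
      The iterative conception can be miniaturized in a few lines of code: starting from the empty set, each stage collects all subsets of the stage before. The sketch below uses Python's immutable frozenset as a stand-in for sets of sets; it is an illustration of the iterative picture only, not a model of full ZF.

    # The first stages of the iterative hierarchy: V0 = {}, V(n+1) = power set of Vn.
    from itertools import chain, combinations

    def power_set(s):
        members = list(s)
        return frozenset(frozenset(c) for c in
                         chain.from_iterable(combinations(members, r)
                                             for r in range(len(members) + 1)))

    stage = frozenset()              # V0: the empty set
    for _ in range(3):
        stage = power_set(stage)     # V1, V2, V3 have 1, 2, 4 members
    assert len(stage) == 4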

      Axiomatized Zermelo-Fraenkel set theory is almost always what mathematicians and logicians now mean by “set theory.” The system was later modified by John von Neumann (von Neumann, John) and others with the addition of a “foundation axiom” explicitly prohibiting (among other things) sets that contain themselves as members. The system was further modified for technical reasons by von Neumann, Paul Bernays (Bernays, Paul Isaak), and Kurt Gödel (Gödel, Kurt) in the 1920s and '30s, and the result is called von Neumann-Bernays-Gödel set theory, or NBG. A more distinct alternative was proposed by the American logician Willard Van Orman Quine (Quine, Willard Van Orman) and is called New Foundations (NF; from 1936–37). Quine's system is not widely used, however, and there have been recurrent suspicions that it is inconsistent. Other set theories have been proposed, but most of them, such as relevant, fuzzy, or multivalued set theories, differ from ZF in having different underlying logics. ZF was soon shown to be capable of deriving the Peano postulates by several alternative methods—for example, by identifying the natural numbers with certain sets, such as 0 with the empty set, Λ, 1 with the singleton empty set {Λ}, and so on. The crucial mathematical notions of relation and function were defined as certain sets of ordered pairs, and ordered pairs were defined strictly within set theory using suggestions made first by the American mathematician and cyberneticist Norbert Wiener (Wiener, Norbert) and then by the Polish logician Kazimierz Kuratowski and the Norwegian logician Thoralf Skolem. With these proposals, the need for the primitive notions of function and relation (and, generally, of order) that had been proposed by Frege, Peirce, and Schröder disappeared. Zermelo and other early set theorists (obviously influenced by Hilbert's list of open problems) were concerned with a number of issues about the properties of the whole system: Was ZF consistent? Was its consistency provable? Were the axioms independent of one another? Were there other desirable axioms that should be added? Particularly problematic was the status of the axiom of choice.
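
      Both reductions mentioned here are short enough to state outright. Using frozensets again as an illustrative stand-in, the numeral construction cited in the text (0 as the empty set, each number as the singleton of its predecessor) and Kuratowski's ordered pair look like this as a sketch:

    # Zermelo numerals and Kuratowski ordered pairs, built from the empty set.
    EMPTY = frozenset()

    def numeral(n):
        return EMPTY if n == 0 else frozenset({numeral(n - 1)})   # n+1 = {n}

    def pair(a, b):
        return frozenset({frozenset({a}), frozenset({a, b})})     # (a,b) = {{a},{a,b}}

    # Unlike an unordered doubleton, the Kuratowski pair records order:
    assert frozenset({1, 2}) == frozenset({2, 1})
    assert pair(1, 2) != pair(2, 1)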

      The axiom of choice states (in Zermelo's first version) that, given any set of disjoint (nonoverlapping) sets, a set can be formed with one and only one element from each of these disjoint sets. The issue is whether elements can be “chosen” or selected from sets; the problem is acute only when infinite sets are permitted or when numerous nonidentical memberless entities (similar to the empty set), which Zermelo called Urelemente (literally “primitive” or “original” elements), are permitted. The axiom of choice has a large number of formulations that are logically equivalent to it, some quite surprisingly so: these include the well-ordering theorem (that the elements of any set can be arranged in an order in which every nonempty subset has a first element). Early perplexity in set theory centred on whether the axiom of choice is consistent with the other axioms and whether or not it is independent of them. While clearly desirable, the axiom of choice has the nonintuitive character of a postulate, rather than being self-evident. The first question is whether the addition of the axiom of choice to a system of axiomatic set theory makes the resulting system inconsistent if it was not so previously. The second question is whether the axiom of choice can be derived from the other axioms or whether its inclusion really adds anything to the system—i.e., whether every useful implication of it could be derived without it. The consistency of the axiom of choice with the other axioms of set theory (specifically in NBG set theory) was shown by Kurt Gödel in 1940. The independence of the axiom of choice from the other axioms was shown, trivially, for set theories with Urelemente very early; the independence of the axiom of choice in NBG or ZF set theories was one of the major outstanding problems in 20th-century mathematical logic until Paul Cohen (Cohen, Paul Joseph) showed in 1963 that the axiom of choice was indeed independent of the other standard axioms for set theory.
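
      In modern notation (not Zermelo's own), his first version of the axiom can be rendered:

      $$\forall S\,\Bigl[(\forall A \in S)(A \neq \varnothing) \wedge (\forall A, B \in S)(A \neq B \rightarrow A \cap B = \varnothing) \;\rightarrow\; \exists C\,(\forall A \in S)(\exists!\, x)(x \in A \cap C)\Bigr]$$

that is, for every set S of nonempty, pairwise disjoint sets there exists a “choice set” C containing exactly one element from each member of S.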

      Another early outstanding issue in axiomatic set theory was whether what came to be known as the “continuum hypothesis” (continuum hypothesis) was consistent with the other axioms of ZF, and whether it was independent of them. The continuum hypothesis states that there is no cardinality intermediate between ℵ0 (aleph-null; the “smallest” infinite cardinality, that of the integers) and the cardinality of its power set, 2^ℵ0 (usually identified with the continuum of points on a real number line)—that is, that 2^ℵ0 is exactly ℵ1, the next aleph, with no “ℵ1.5,” so to speak; a generalized form asserts the same for the power sets of the other alephs. This is the first of Hilbert's 1900 list of 23 open problems in mathematics and its foundations. The second problem on that list concerned the consistency of the Peano postulates and of general axiom systems for mathematics. In his work of 1938–40, Kurt Gödel had shown that the continuum hypothesis—that there are no intermediate cardinalities—was consistent with the other axioms of set theory. In 1963, employing techniques similar to those that he used for showing the independence of the axiom of choice, Paul Cohen showed the independence of the continuum hypothesis. Since 1963 a number of alternative and less difficult methods of showing these independence results have emerged.
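
      In the notation of cardinal arithmetic (where a set of cardinality ℵα has a power set of cardinality 2^ℵα), the hypothesis and its generalized form are stated compactly:

      $$2^{\aleph_0} = \aleph_1 \qquad\text{and, generalized,}\qquad 2^{\aleph_\alpha} = \aleph_{\alpha+1}\ \text{for every ordinal } \alpha.$$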

      A third, but less conceptually vital, area of research in set theory has been in the precise form of axioms of infinity. It became evident that there are a variety of “stronger” axioms of infinity that can be added to ZF: these declare the existence of infinite sets with cardinalities larger than any whose existence is provable in ZF itself. With the results of Gödel from 1931, which have implications for the completeness and consistency of set theory (and are discussed below), and with the independence results of Cohen from 1963, basic questions concerning standard set theories (ZF and NBG) are considered to have been answered, even if the answers are somewhat unsatisfactory. The questions that have lingered about set theory, now a very well understood formal system, have centred on philosophical issues of whether numbers and mathematical operations are really “just” sets and set-theoretic operations, or whether one can usefully understand mathematics and the world in other than set-theoretic terms.

      Various substantive alternatives to set theory have been proposed. One is the part-whole calculus, or “calculus of individuals,” also called mereology, of Stanisław Leśniewski (Leśniewski, Stanisław) (1916, 1927–31). This theory rejects the hierarchy of sets, sets of sets, and so on, that emerges in set theory through the member-of relation and the power set axiom and instead proposes a part-whole relationship. It obeys rules like those for the subset relationship in set theory. Its inspiration seems to have been the earlier Boolean theory of classes (especially as described by Schröder), as well as the work of the German philosopher Edmund Husserl (Husserl, Edmund) and his followers on the conceptualization of collections in everyday thought (“Phenomenology” (Phenomenology)). This work was developed by Henry Leonard and Nelson Goodman in the United States in the mid-20th century. It has continued to attract philosophers of logic and mathematics who are nominalists (nominalism), who suspect set theory of being inherently Platonistic (Platonism), or who are otherwise suspicious of the complex entities proposed by, and the complicated assumptions needed for, set theory. Although some interesting proposals have been made, it does not appear that the part-whole calculus is capable of grounding mathematics, or at least of doing so in as straightforward a manner as does ZF. A much different approach to logical foundations for mathematics is to be seen in the category theory of Saunders Mac Lane (Mac Lane, Saunders) and others. Category theory proposes that mathematics is based on highly abstract formal objects: categories (including the set-like categories known as “topoi,” singular “topos”) that are neither sets nor properties. In set theory there is a distinction between the objects of the theory—sets—and what one does to them: intersecting them, unioning them, and so forth. In category theory, this distinction between objects and operations on them (transformations, or morphisms) disappears. In the latter part of the 20th century, interest has also arisen in the logic of collective entities other than sets, classes, or classes of individuals: this includes theories of heaps and aggregates and theories for mass terms—such as water or butter—that are not conceptualized as formed from distinct individuals. The goal has been to give formal theories for collective or quantitative terms used in natural language.

Logic and philosophies of mathematics
      Philosophies of mathematics are more extensively discussed in the article mathematics, foundations of; the major schools are mentioned here briefly. An outgrowth of the theory of Russell and Whitehead, and of most modern set theories, was a better articulation of a philosophy of mathematics known as logicism: (logicism) that operations and objects spoken about in mathematics are really purely logical constructions. This has focused increased attention on what exactly “pure” logic is and whether, for example, set theory is really logic in a narrow sense. There seems little doubt that set theory is not “just” logic in the way in which, for example, Frege viewed logic—i.e., as a formal theory of functions and properties. Because set theory engenders a large number of interestingly distinct kinds of nonphysical, nonperceived abstract objects, it has also been regarded by some philosophers and logicians as suspiciously (or endearingly) Platonistic. Others, such as Quine, have “pragmatically” endorsed set theory as a convenient way—perhaps the only such way—of organizing the whole world around us, especially if this world contains the richness of transfinite mathematics.

      For most of the first half of the 20th century, new work in logic saw logic's goal as being primarily to provide a foundation for, or at least to play an organizing role in, mathematics. Even for those researchers who did not endorse the logicist program, logic's goal was closely allied with techniques and goals in mathematics, such as giving an account of formal systems ( formalism) or of the ideal nature of nonempirical proof and demonstration. (Interest in the logicist and formalist program waned after Gödel's demonstration that logic could not provide exactly the sort of foundation for mathematics or account of its formal systems that had been sought. Namely, mathematics could not be reduced to a provably complete and consistent logical theory, but logic has still remained closely allied with mathematical foundations and principles.)

      Traditionally, logic had set itself the task of understanding valid arguments of all sorts, not just mathematical ones. It had developed the concepts and operations needed for describing concepts, propositions, and arguments—especially in terms of “logical form”—insofar as such tools could conceivably affect the assessment of any argument's quality or ideal persuasiveness. It is this general ideal that many logicians have developed and endorsed, and that some, such as Hegel (Hegel, Georg Wilhelm Friedrich), have rejected as impossible or useless. For the first decades of the 20th century, logic threatened to become exclusively preoccupied with a new and historically somewhat foreign role of serving in the analysis of arguments in only one field of study, mathematics. The philosophical-linguistic task of developing tools for analyzing statements and arguments that can be expressed in some natural language about some field of inquiry, or even for analyzing propositions as they are actually (and perhaps necessarily) thought or conceived by human beings, was almost completely lost. There were scattered efforts to eliminate this gap by reducing basic principles in all disciplines—including physics, biology, and even music—to axioms, particularly axioms in set theory or first-order logic. But these attempts, beyond showing that it could be done, did not seem especially enlightening. Thus, such efforts, at their zenith in the 1950s and '60s, had all but disappeared in the '70s: one did not better and more usefully understand an atom or a plant by being told it was a certain kind of set.

Logic narrowly construed
Formal logical systems: syntax
      Although set theory and the type theory of Russell and Whitehead were considered to be “logic” for the purposes of the logicist program, a narrower sense of logic reemerged in the mid-20th century as what is usually called the “underlying logic” of these systems: whatever concerns only rules for propositional connectives, quantifiers, and nonspecific terms for individuals and predicates. (An interesting issue is whether the privileged relation of identity, typically denoted by the symbol “=,” is a part of logic: most researchers have assumed that it is.) In the early 20th century and especially after Tarski's work in the 1920s and '30s, a formal logical system (formal system) was regarded as being composed of three parts, all of which could be rigorously described. First, there was the notation: the rules of formation for terms and for well-formed formulas (wffs) in the logical system. This theory of notation itself became subject to exacting treatment in the concatenation theory, or theory of strings, of Tarski (Tarski, Alfred), and in the work of the American Alonzo Church. Previously, notation was often a haphazard affair in which it was unclear what could be formulated or asserted in a logical theory and whether expressions were finite or were schemata standing for infinitely long wffs. Issues that arose out of notational questions include definability of one wff by another (addressed in Beth's and Craig's theorems, and in other results), creativity, and replaceability, as well as the expressive power and complexity of different logical languages.
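
      Rules of formation amount to a grammar, and a checker for them is a short recursion. The following sketch covers only a small propositional fragment, with nested tuples as an illustrative (not historical) representation of formulas:

    # A recursive well-formedness check for a small propositional language.
    VARIABLES = {'p', 'q', 'r'}
    ARITY = {'not': 1, 'and': 2, 'or': 2, 'implies': 2}

    def is_wff(e):
        if isinstance(e, str):                  # atomic formulas
            return e in VARIABLES
        return (isinstance(e, tuple) and len(e) > 0 and e[0] in ARITY
                and len(e) - 1 == ARITY[e[0]]
                and all(is_wff(part) for part in e[1:]))

    assert is_wff(('implies', ('and', 'p', 'q'), 'p'))
    assert not is_wff(('and', 'p'))             # wrong arity: ill formed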

      The second part of a logical system consisted of the axioms (axiom), rules of inference, or other ways of identifying what counts as a theorem. This is what is usually meant by the logical “theory” proper: a (typically recursive) description of the theorems of the theory, including axioms and every wff derivable from axioms by admitted rules. Although the axiomatic method of characterizing such theories with axioms or postulates or both and a small number of rules of inference had a very old history (going back to Euclid or further), two new methods arose in the 1930s and '40s. First, in 1934, there was the German mathematician Gerhard Gentzen's method of Sequenzen (sequents, or rules of consequents), which was especially useful for deriving metalogical decidability results. This method originated with Paul Hertz in 1932, and a related method was described by Stanisław Jaśkowski in 1934. Next to appear was the similarly axiomless method of “natural deduction,” which uses only rules of inference; it originated in a suggestion by Russell in 1925 but was developed by Quine (Quine, Willard Van Orman) and the American logicians Frederick Fitch and George David Wharton Berry. The natural deduction technique is widely used in the teaching of logic, although it makes the demonstration of metalogical results somewhat difficult, partly because historically these arose in axiomatic and consequent formulations.

      A formal description of a language, together with a specification of a theory's theorems (derivable propositions), are often called the “syntax” (syntax) of the theory. (This is somewhat misleading when one compares the practice in linguistics, which would limit syntax to the narrower issue of grammaticality.) The term “calculus” is sometimes chosen to emphasize the purely syntactic, uninterpreted nature of a formal theory.

      Finally, the third component of a logical system was the semantics for such a theory and language: a declaration of what the terms of a theory refer to, and how the basic operations and connectives are to be interpreted in a domain of discourse, including truth conditions for wffs in this domain. A specification of a domain of objects (De Morgan's “universe of discourse”), and of rules for interpreting the symbols of a logical language in this domain such that all the theorems of the logical theory are true is then said to be a “model” of the theory (or sometimes, less carefully, an “interpretation” of the theory).
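
      For the propositional fragment, a model reduces to an assignment of truth values to the atomic formulas, and the truth conditions become a short recursion. A sketch, using the same illustrative tuple representation as above:

    # Truth in a "model" for propositional logic: an assignment of values
    # to variables, extended recursively through the connectives.
    def value(e, assignment):
        if isinstance(e, str):
            return assignment[e]
        op, *parts = e
        vals = [value(p, assignment) for p in parts]
        if op == 'not':  return not vals[0]
        if op == 'and':  return vals[0] and vals[1]
        if op == 'or':   return vals[0] or vals[1]
        return (not vals[0]) or vals[1]          # 'implies'

    assert value(('implies', 'p', ('or', 'p', 'q')), {'p': True, 'q': False})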

      The notion of a rigorous logical theory, in the sense of a specification, often axiomatic, of theorems of a theory, was fairly well understood by Euclid, Aristotle, and others in ancient times. With the crises in geometry of the 19th century, the need developed for very careful presentations of these theories. Hilbert's work, as well as that of a group of American mathematicians that included Edward Vermilye Huntington, Oswald Veblen (Veblen, Oswald), and Benjamin Abram Bernstein (the postulate theorists, working shortly after 1900), reestablished this tradition with even higher standards. Frege and, in his footsteps, Russell and Whitehead had separate claims to emphasizing standards of precision and care in the statement of logical theories. Cantor, Zermelo, and most other early set theorists did not often state the content of their axioms and theorems in symbolic form, or restrict themselves to certain symbols. Zermelo, in fact, did not often use the formal language for quantifiers and binding variables that was then available; instead, he used ordinary expressions such as “for any,” “all,” or “there exists.” Through the 1920s, logical axioms and rules of inference were typically not all explicitly and precisely stated, especially the various principles of substitution that mimicked widely understood algebraic practices.

Formal semantics
      What is known as formal semantics, or model theory, has a more complicated history than does logical syntax; indeed, one could say that the history of the emergence of semantic conceptions of logic in the late 19th and early 20th centuries is poorly understood even today. Certainly, Frege's notion that propositions refer to (German: bedeuten) “The True” or “The False”—and this for complex propositions as a function of the truth values of simple propositions—counts as semantics. Earlier medieval theories of supposition incorporated useful semantic observations. So, too, do Boolean techniques of letters taking or referring to the values 1 and 0 that are seen from Boole through Peirce (Peirce, Charles Sanders) and Schröder. Both Peirce and Schröder occasionally gave brief demonstrations of the independence of certain logical postulates using models in which some postulates were true, but not others. (The first explicit use of such techniques seems to have arisen earlier in the 19th century, and in geometry.) The first clear and significant general result in model theory is usually accepted to be a result discovered by Löwenheim in 1915 and strengthened in work by Skolem from the 1920s. This is the Löwenheim-Skolem theorem, which states that a theory that has a model at all has a countable model. That is to say, if there exists some model of a theory (i.e., an application of it to some domain of objects), then there is sure to be one with a domain no larger than the natural numbers. Although Löwenheim and Skolem understood their results perfectly well, they did not explicitly use the modern language of “theories” being true in “models.” The Löwenheim-Skolem theorem is in some ways a shocking result, since it implies that any consistent formal theory of anything—no matter how hard it tries to address the phenomena unique to a field such as biology, physics, or even sets—can just as well be understood from its formalisms alone as being about natural numbers.

      The second major result in formal semantics, Gödel's completeness theorem of 1930, required even for its description, let alone its proof, more careful development of precise concepts about logical systems—metalogical concepts—than existed in earlier decades. One question for all logicians since Boole, and certainly since Frege, had been: Was the theory consistent? In its purely syntactic analysis, this amounts to the question: Was a contradictory sentence (of the form A & ∼ A) a theorem? In its semantic analysis, it is equivalent to the question: Does the theory have a model at all? For a logical theory, consistency means that a contradictory theorem cannot be derived in the theory. But since logic was intended to be a theory of necessarily true statements, the goal was stronger: a theory is Post-consistent (named for the Polish-American logician Emil Post) if every theorem is valid—that is, if no theorem is a contradictory or a contingent statement. (In nonclassical logical systems, one may define many other interestingly distinct notions of consistency; these notions were not distinguished until the 1930s.) Consistency was quickly acknowledged as a desired feature of formal systems: it was widely and correctly assumed that various earlier theories of propositional (propositional calculus) and first-order logic were consistent. Zermelo was, as has been observed, concerned with demonstrating that ZF was consistent; Hilbert had even observed that there was no proof that the Peano postulates were consistent. These questions received an answer that was not what was hoped for in a later result of Gödel (discussed below). A clear proof of the consistency of propositional logic was first given by Post in 1921. Its tardiness in the history of symbolic logic is a commentary not so much on the difficulty of the problem as it is on the slow emergence of the semantic and syntactic notions necessary to characterize consistency precisely. The first clear proof of the consistency of the first-order predicate logic is found in the work of Hilbert and Wilhelm Ackermann from 1928. Here the problem was not only the precise awareness of consistency as a property of formal theories but also of a rigorous statement of first-order predicate logic as a formal theory.

      In 1928 Hilbert and Ackermann also posed the question of whether a logical system, and, in particular, first-order predicate logic, was (as it is now expressed) “complete.” This is the question of whether every valid proposition—that is, every proposition that is true in all intended models—is provable in the theory. In other words, does the formal theory describe all the noncontingent truths of a subject matter? Although some sort of completeness had clearly been a guiding principle of formal logical theories dating back to Boole, and even to Aristotle (and to Euclid in geometry)—otherwise they would not have sought numerous axioms or postulates, risking nonindependence and even inconsistency—earlier writers seemed to have lacked the semantic terminology to specify what their theory was about and wherein “aboutness” consists. Specifically, they lacked a precise notion of a proposition being “valid”—that is, “true in all (intended) models”—and hence lacked a way of precisely characterizing completeness. Even the language of Hilbert and Ackermann from 1928 is not perfectly clear by modern standards.

      Gödel proved the completeness of first-order predicate logic in his doctoral dissertation of 1930; Post had shown the completeness of propositional logic in 1921. In many ways, however, explicit consideration of issues in semantics, along with the development of many of the concepts now widely used in formal semantics and model theory (including the term metalanguage), first appeared in a paper by Alfred Tarski (Tarski, Alfred), “The Concept of Truth in Formalized Languages,” published in Polish in 1933; it became widely known through a German translation of 1936. Although the theory of truth Tarski advocated has had a complex and debated legacy (see the article semantics), there is little doubt that the concepts there (and in later papers from the 1930s) developed for discussing what it is for a sentence to be “true in” a model marked the beginning of model theory in its modern phase. Although the outlines of how to model propositional logic had been clear to the Booleans and to Frege, one of Tarski's most important contributions was an application of his general theory of semantics in a proposal for the semantics of the first-order predicate logic (now termed the set-theoretic, or Tarskian, interpretation).

      Tarski's techniques and language for precisely discussing semantic concepts, as well as properties of formal systems described using his concepts—such as consistency, completeness, and independence—rapidly and almost imperceptibly entered the literature in the late 1930s and after. This influence accelerated with the publication of many of his works in German and then in English, and with his move to the United States in 1939.

The first and second incompleteness theorem
      Gödel's first incompleteness theorem, from 1931, stands as a major turning point of 20th-century logic. It states that no effectively axiomatizable theory sufficient to derive the Peano postulates is both consistent and complete. (How Gödel proved this fascinating result is discussed more extensively in the article mathematics, foundations of.) In other words, if we try to build a theory sufficient for a foundation for mathematics, stating the axioms and rules of inference so that we have stipulated precisely, and by mechanical means, what is and what is not an axiom, then the resulting theory will either (1) not be sufficient for mathematics (i.e., not allow the derivation of the Peano postulates for number theory) or (2) not be complete (i.e., there will be some valid proposition that is not derivable in the theory) or (3) be inconsistent. (Gödel actually distinguished between consistency and a stronger feature, ω- [omega-] consistency.) A corollary of this result is that, if a theory is effectively axiomatizable, consistent, and sufficient to derive the Peano postulates, then that theory cannot be used as a metalanguage to show its own consistency; that is, such an axiomatized set theory cannot be used to show the consistency of that same set theory, if set theory is consistent. This is often called Gödel's second incompleteness theorem.
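
      Stated schematically for a consistent, effectively axiomatized theory T strong enough to derive the Peano postulates, the two theorems assert the existence of a sentence G_T such that

      $$T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T \quad\text{(first theorem)}; \qquad T \nvdash \mathrm{Con}(T) \quad\text{(second theorem)},$$

where Con(T) is a sentence of T's own language expressing the consistency of T.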

      These results were widely interpreted as a blow to both the logicist (logicism) and formalist programs. Logicists seemed to have taken as their goal the construction of rigorously described theories that were sufficient for deriving mathematics and also consistent and complete. Gödel showed that, if this was their goal, they would necessarily fail. It was also a blow to the longer-standing axiomatic, or formalist (formalism), program, since it seemed to show that precise axiomatic descriptions of valuable domains like mathematics would also necessarily fail. Gödel himself eventually interpreted the result as showing that there exist entities with well-defined properties, namely numbers, that are beyond our ability to describe precisely with standard logical tools. This is one source of his inclination toward what is usually called mathematical Platonism.

      One reply of the logicists could have been to abandon as ideal the first-order, finitely axiomatized theories, such as first-order predicate logic, the system of Russell and Whitehead, and NBG, and instead to accept theories that were less rigorously described. First-order theories allow explicit reference to, and quantification over, individuals, such as numbers or sets, but not quantification over (and hence rules for manipulating) properties of these individuals. For example, one possible logicist reply is to note that the Peano postulates themselves seem acceptable. It is true that Gödel's result implies that we cannot prove (as Hilbert hoped in his second problem) that these postulates are consistent; furthermore, the fifth postulate is a schema or second-order formulation, rather than being strictly in the finitely axiomatizable first-order language that was once preferred. This reply, however, clashes with another desired feature of a formal theory, namely, decidability: that there exists a finite mechanical procedure for determining whether a proposition is, or is not, a theorem of the theory. This property took on added interest after World War II with the advent of electronic computers (computer), since modern computers can actually apply algorithms to determine whether a given proposition is, or is not, a theorem, whereas some algorithms had only been shown theoretically to exist. The decidability of propositional logic, through the use of truth tables, was known to Frege and Peirce; a proof of its decidability is attributable to Jan Łukasiewicz and Emil Post independently in 1921. Löwenheim showed in 1915 that first-order predicate logic with only single-place predicates was decidable and that the full theory was decidable if the first-order predicate calculus with only two-place predicates was decidable; further developments were made by Skolem, Heinrich Behmann, Jacques Herbrand, and Quine. Herbrand showed the existence of an algorithm that, if a formula of first-order predicate logic is valid, will eventually determine it to be so; the difficulty, then, was in designing an algorithm that in a finite amount of time would also determine that a proposition is invalid. As early as the 1880s, Peirce seems to have suspected that the propositional logic was decidable but that the full first-order predicate logic with relations was undecidable. The proof that first-order predicate logic (in any general formulation) was undecidable was first given by Alan Turing (Turing, Alan M.) and Alonzo Church independently in 1936. Together with Gödel's (second) incompleteness theorem and the earlier Löwenheim-Skolem theorem, the Church-Turing theorem of the undecidability of the first-order predicate logic is one of the most important, even if “negative,” results of 20th-century logic.
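
      The truth-table method that makes propositional logic decidable is exhaustive but finite: a formula in n variables has only 2^n assignments to check. A sketch (the evaluator repeats the one given earlier; the representation is, again, illustrative only):

    # Decide propositional validity by checking every truth assignment.
    from itertools import product

    def value(e, assignment):                    # as in the earlier sketch
        if isinstance(e, str):
            return assignment[e]
        op, *parts = e
        vals = [value(p, assignment) for p in parts]
        if op == 'not':  return not vals[0]
        if op == 'and':  return vals[0] and vals[1]
        if op == 'or':   return vals[0] or vals[1]
        return (not vals[0]) or vals[1]          # 'implies'

    def variables_of(e):
        return {e} if isinstance(e, str) else set().union(*map(variables_of, e[1:]))

    def is_valid(e):
        vs = sorted(variables_of(e))
        return all(value(e, dict(zip(vs, row)))
                   for row in product((True, False), repeat=len(vs)))

    assert is_valid(('or', 'p', ('not', 'p')))   # excluded middle, classically valid
    assert not is_valid(('and', 'p', 'q'))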

      By the 1930s almost all work in the foundations of mathematics and in symbolic logic was being done in a standard first-order predicate logic, often extended with axioms or axiom schemata of set- or type-theory. This underlying logic consisted of a theory of “classical” truth functional connectives, such as “and,” “not,” and “if . . . then,” and first-order quantification permitting propositions that “all” and “at least one” individual satisfy a certain formula. Only gradually in the 1920s and '30s did a conception of a “first-order” logic, and of alternatives, arise—and then without a name.

      Certainly with Hilbert and Ackermann's Grundzüge der Theoretischen Logik (1928; “Basic Elements of Theoretical Logic”), and Hilbert's and Paul Bernays' minor corrections to this work in the 1930s, a rigorous theory of first-order predicate logic achieved its mature state. Even Hilbert and his coworkers, however, sometimes deviated from previous and subsequent treatments of quantification, preferring to base their theory on a single term-forming operator, ε, which was to be interpreted as extracting an arbitrary individual satisfying a given predicate. In the 1920s and '30s considerable energy went into formulating various alternative but equivalent axiom systems for classical propositional and first-order logic and demonstrating that these axioms were independent. Some of these efforts were concentrated on the “implicational” (if . . . then) fragment of propositional logic. Others sought reductions of truth-functional connectives (connective) to a short list of primitive connectives, especially to the single Sheffer or, in modern terminology, NAND function, named for the American logician Henry M. Sheffer. Peirce had been aware in the 1880s that single connectives based either on not-both or on neither-nor sufficed for the expression of all truth-functional connectives. Alternative formulations of classical propositional logic reached their apex in J.G.P. Nicod's, Mordchaj Wajsberg's, and Łukasiewicz's different single-axiom formulations of 1917, 1929, and 1932. A basic underlying classical propositional logic and a first-order quantificational theory had become widely accepted by 1928, and different systems varied primarily in provably equivalent, notational aspects.
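
      Peirce's not-both observation is easily verified: the one connective generates all the others. A minimal sketch:

    # All truth-functional connectives from NAND ("not both") alone.
    from itertools import product

    def nand(a, b):    return not (a and b)

    def NOT(a):        return nand(a, a)
    def AND(a, b):     return nand(nand(a, b), nand(a, b))
    def OR(a, b):      return nand(nand(a, a), nand(b, b))
    def IMPLIES(a, b): return nand(a, nand(b, b))

    for a, b in product((True, False), repeat=2):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert IMPLIES(a, b) == ((not a) or b)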

Other developments
      A notable exception to this orthodoxy was intuitionistic logic. Arising from observations by the Dutch mathematicians Arend Heyting and L.E.J. Brouwer (Brouwer, Luitzen Egbertus Jan) concerning the results of indirect proof in traditional mathematics and distantly inspired by Kant's views on constructions in mathematics (and less distantly by views of French mathematicians Henri Poincaré and Émile Borel at the turn of the century), these theorists proposed that a proof in mathematics should be accepted only if it constructed the mathematical entity it talked about, and not if it merely showed that the entity “could” be constructed or that supposing its nonexistence would result in contradiction. This view is called intuitionism or sometimes constructivism, because of the weight it places on mental apprehension through construction of purported mathematical entities. (A still more severe form of constructivism is strict finitism, in which one rejects infinite sets; for further discussion, see mathematics, foundations of: Intuitionism (mathematics, foundations of).)

      Brouwer's logical critique centred on the principle of the excluded middle—which states that, for any proposition p, “p or not-p” is a theorem of logic—and on what one could typically infer with it. So, if not-p can be shown to be false, then in classical, but not intuitionistic, propositional logic, p is thereby proven. Intuitionistic propositional logic was formulated in 1930 by Heyting; the independence of Heyting's axioms was shown in 1939 by J.C.C. McKinsey. The primary difference between classical and intuitionistic propositional logics is concentrated in axioms and rules involving negation. Heyting in fact used the symbol ¬ for intuitionistic negation, to distinguish it from the symbol ∼ of classical logic.
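
      A standard textbook illustration (not one of Brouwer's own) of the kind of inference at stake is the nonconstructive proof that there exist irrational numbers a and b with a^b rational. By excluded middle, $\sqrt{2}^{\sqrt{2}}$ is either rational or irrational:

      $$\text{if } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q},\ \text{take } a = b = \sqrt{2}; \qquad \text{if not, take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2},\ \text{so that } a^b = \sqrt{2}^{\,2} = 2.$$

The proof establishes existence without ever determining which case obtains, which is exactly what the intuitionists refused to accept.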

      The intuitionistic first-order predicate logic, aside from the differing propositional logic on which it is based, differs from classical first-order predicate logic only in small respects. A number of results concerning Heyting's system, as well as stronger and weaker versions of the intuitionistic propositional theory, were produced in the 1930s and '40s by the Russian theorist Andrei Nikolayevich Kolmogorov and by Mordchaj Wajsberg, Gentzen, McKinsey, Tarski, and others. Few practicing mathematicians have followed the intuitionistic doctrine of constructivism, but the theory has exerted attraction for and elicited respect from many researchers in the foundations and philosophy of mathematics. (One oddity is that metalogical results for intuitionistic logics have nearly always been shown using the theory of classical logic.)

      The other major competitor to first-order predicate logic based on a classical propositional logic arose with the renewed interest in Frege's (Frege, Gottlob) theory of properties begun by Alonzo Church in the late 1930s. The first result was a logical theory called the λ-calculus, in which the application of a λ operator to a formula allows one precisely to characterize, or “extract,” the property the formula expresses. Later developments included his investigation of formal Fregean theories (“A Logic of Sense and Denotation”) that allow quantification over properties and incorporate Frege's semantic views in distinguishing between the highly individuated “sense” of an expression and its denotation (extension, or referent). These two developments laid the basis for formal theories of second- and higher-order logic that permit quantification over properties and other non-individuals, and for intensional logics. While Boolean and most first-order theories, including type and set theories, had dealt with individuals and collections of these (collective extensions), intensional (intension and extension) logics allow one to develop theories of properties that have the same extension but differ in intension—such as “polygons with three sides” and “polygons with three angles” or Frege's example of the morning and the evening star (i.e., Venus).
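
      Church's λ operator survives in the anonymous functions of modern programming languages, which are named for it. The following loose illustration extracts the two properties just mentioned as predicates; note that Python's lambda, unlike Church's intensional theories, offers no account of sense beyond the distinctness of the program texts:

    # "Extracting" properties with lambda: same extension, different definitions.
    has_three_sides  = lambda polygon: polygon['sides'] == 3
    has_three_angles = lambda polygon: polygon['angles'] == 3

    triangle = {'sides': 3, 'angles': 3}
    assert has_three_sides(triangle) == has_three_angles(triangle)  # agree in extension here
    assert has_three_sides is not has_three_angles                  # yet distinct as objects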

      Intension had often been equated with how a property is thought (its associations for the conceiver), while Frege, Church, and a number of philosophers and philosophers of language equated it with abstract, formally described entities that constitute the “meaning” or “sense” of expressions. Second-order theories and intensional logical systems have been extensively developed, and the metalogical features have been well explored. For a system like that described in Church's “A Formulation of the Logic of Sense and Denotation” (1946), consistency was shown by Gentzen in 1936, and for many similar systems it was less rigorously demonstrated by Herbrand in 1930. Weak completeness was demonstrated by Leon Henkin in 1947, although what counts as the intended interpretation and domain of such a powerful theory is problematic; strong completeness has yet to be shown, and, since it embraces first-order predicate logic, it is not decidable by a corollary of Church's own theorem. Debates about whether second-order logic is philosophically acceptable, technically usable, or even should count as “logic” in comparison with first-order theories have raged since its resurrection in the 1940s and '50s.

Nonmathematical formal logic
      Early 20th-century formal logic was almost entirely fixated upon the project of exploring the foundations of mathematics. Furthering or exploring the logicist program and the related formalist program of Hilbert, and linking mathematics with pure logic or with rigorous formal theories, had been the original motivation for many developments. The Löwenheim-Skolem theorem might also have seemed to give a reason for this mathematical, and specifically numerical, fixation, since there is a sense in which any consistent (first-order) formal theory is always about numbers. These developments reached their height in the 1930s with the finite axiomatizations of NBG and with the formulations of the first-order predicate logic of Hilbert, Ackermann, and Gentzen. Major metalogical results for the underlying first-order predicate logic were completed in 1936 with the Church-Turing theorem. After first-order logic had been rigorously described in the 1930s and had become well understood and after set theory coalesced into ZF (with the exception of the then outstanding independence results), a period of reflection set in. There were now increasing doubts about the ability of the logicist (logicism) and formalist (formalism) programs to connect mathematics and logic. The intuitionist critiques became well known, if not always accepted. A number of authors suggested approaching logic with entirely different formalisms—without quantifiers, for example. These included the American mathematician Haskell Curry and the category theorists, as well as algebraists who urged a return to algebraic—though not always Boolean—methods; the latter included Tarski and Paul Halmos. There were doubts about the exact form or notation and the general approach of the first-order, set-theoretic enterprise. As with many large-scale completed projects—and this project, moreover, had been accompanied by considerable disappointment, owing to the negative results of Gödel, Church, and Turing—there was also a search for new logical terrain to explore.

      Łukasiewicz had, as early as 1923, begun exploring the logical theories of Aristotle and the Stoics (Stoicism) and formalizing (formal system) them as modern logical systems; this work culminated in his 1951 and 1957 editions of Aristotle's Syllogistic and in further work on Aristotle's logic by John Corcoran and T.J. Smiley. Benson Mates' careful study of Stoic logic similarly served to renew interest in older logics. These theories had no obvious bearing on the foundations of mathematics, but they were of interest as formal theories in their own right—and perhaps as theories of ideal reasoning, of abstract conceptual entities, or as theories of the referents of terms in natural language. Similarly, not all of Church's work on Fregean theories of properties and intensions had obvious utility for constructing the simplest possible foundation for mathematics with the fewest arguable postulates, but his work was also motivated by more general theoretical features in the theory of properties and of language—especially by the richness of natural languages. These might be termed nonmathematical influences in the development of 20th-century logic. Another challenge to “classical” propositional logic—specifically to the standard interpretation of propositional logics—has been posed by many-valued logics. Propositions can be regarded as taking more than (or other than) the traditional “values” (truth-value) of true or false. Such possibilities had been speculated about by Peirce and Schröder (and even by medieval logicians) and were used in the 1920s and '30s by Carnap, Łukasiewicz, and others to derive independence results for various propositional calculi (propositional calculus). In the 1940s and after, formal theories for many-valued (including infinitely valued, probabilistic-like) logics have been taken increasingly seriously—albeit for nonmathematical purposes.

      Many nonmathematical goals and considerations arose from philosophy (especially from metaphysics but also from epistemology and even ethics), from the study of the history of logic and mathematics, from quantum mechanics (quantum logic), and from the philosophy of language, as well as, more recently, from cognitive psychology (starting with Jean Piaget's interest in syllogistic logics). This work rekindled interest in logic for purposes other than giving or exploring the foundations of mathematics. Foremost among the nonmathematical interests was the development of modal logic, beginning with C.I. Lewis' (Lewis, C.I.) theories of 1932, specifically the study of the alethic modal operators of necessity, possibility, contingency, and impossibility. Viable semantic accounts of modal systems in terms of Leibnizian “possible worlds” were developed by Saul Kripke, David Lewis, and others in the 1960s and '70s and led to greatly intensified research. Tense logics and logics of knowledge, causation, and ethical or legal obligation also moved rapidly forward, together with specialized logics for analyzing the “if . . . then” conditional in ordinary language (first due to C.I. Lewis, as a theory of entailment, and then elaborately developed as relevance logic by Alan Ross Anderson, Nuel Belnap, Jr., and their students).
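      The possible-worlds account admits a compact summary; the clauses below are a standard presentation of Kripke's semantics (in LaTeX notation), sketched here rather than quoted from any particular source. A model is a triple $\langle W, R, V \rangle$, where $W$ is a set of “worlds,” $R \subseteq W \times W$ is an accessibility relation, and $V$ assigns truth-values to atomic sentences at each world. The alethic operators are then evaluated by

      \[
      w \models \Box\varphi \iff v \models \varphi \ \text{for every } v \ \text{such that } wRv,
      \qquad
      w \models \Diamond\varphi \iff v \models \varphi \ \text{for some } v \ \text{such that } wRv.
      \]

      Necessity is thus truth in all accessible worlds and possibility truth in at least one; impossibility and contingency are definable from these. Structural conditions on $R$ (such as reflexivity, transitivity, and symmetry) characterize the familiar systems T, S4, and S5, which is part of what made these semantic accounts so productive for research.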

      From the turn of the century through the mid-1930s, and with the almost singular exception of Russell and Whitehead's Principia Mathematica, logic was dominated by mathematicians from the German-speaking world. The work of Frege (Frege, Gottlob), Dedekind, and Cantor at the end of the 19th century, even if little recognized at the time, as well as the more widely recognized work of Hilbert and Zermelo, had given German mathematical logic a strong boost into the 20th century. Widespread institutional interest in the new mathematical logic in the early decades of the 20th century appears to have been far weaker in the United States, France, and, rather surprisingly, the United Kingdom and Italy. In the 1920s and into the early 1930s, Poland developed an especially strong logical tradition, and Polish logicians, writing in both Polish and German, made a number of major contributions; in those decades they were the only real exception to an almost absolute German hegemony in logic.

      By the late 1930s, both the political and the logical situations had shifted dramatically. American logic, as represented by younger figures such as Church, McKinsey, and Quine, made a number of important contributions in the late 1930s, while the young Alan Turing in England contributed both to logic and to the infant field of the theory of computation. France's role diminished with the untimely death of Jacques Herbrand in 1931. The Moravian-born Austrian Gödel fled to the United States as the political situation in central Europe worsened, as did Tarski and Carnap. Set theory and some set theorists fell under the pall of anti-Semitism, as did other logical theories, together with the theory of relativity and several philosophical orientations. Communication between scholars in Germany, both with each other and with the increasing number of researchers outside the country, was hindered in the late 1930s and '40s. With the death of Hilbert in 1943, interest in logic and in the foundations of mathematics at the University of Göttingen, where such interests had flourished since the time of the German mathematician Bernhard Riemann, declined. With the flight of promising students and the lack of political stability and academic support, German logic became critically weakened. Heinrich Scholz, primarily a historian of logic and one of the few figures in the Germany of the time working in a philosophy rather than a mathematics department, attempted to rally German logic by establishing the Ernst Schröder Prize in mathematical logic; its winner in 1941 was J.C.C. McKinsey, an American.

      Largely because of its often predominantly mathematical orientation, the influence and place of logic within intellectual activity changed dramatically in the 20th century. On the one hand, through its affiliation with rigorous mathematics, the “queen of the sciences,” logic regained the respectability as an academic discipline that it had lost in the Renaissance. On the other hand, the number of people who could profitably study modern symbolic logic and understand its more impressive achievements dwindled as its techniques became more austere and more distant from ordinary language and required ever more background simply to understand. Consequently, one could say of Gödel's incompleteness theorem (for example) what Einstein once said about the theory of special relativity: that there have at times been only a handful of people who understand it. Few general university programs required an understanding of symbolic logic in the way they had once required an understanding of the rudiments of Aristotelian syllogistic or even of Venn's version of Boolean logic. Twentieth-century symbolic logic thus also reexperienced logic's traditional problem of finding a place in the modern university.

      In the early decades of the 20th century, the study of logic and the foundations of mathematics (metamathematics) acquired considerable prestige within mathematics departments, especially owing to the influence of Hilbert (Hilbert, David) and the Göttingen school. In the 1920s and '30s, logic, existing on the borderline of philosophy and mathematics and bolstered by the burgeoning interest in the philosophy of science, achieved additional legitimacy through the work and participation of Wittgenstein (and Russell's philosophy of logical atomism), Carnap, and others in the Vienna (Vienna Circle) and Berlin schools of scientific philosophy. (Gödel (Gödel, Kurt) sometimes attended sessions of the Vienna Circle.)

      The usefulness of logic within philosophy reached a critical point, however, with Gödel's incompleteness theorem and then with Church's logical critique of one version of the principle of verification. Roughly since the death of Hilbert, logicians and mathematical foundationalists have been accepted less readily in mathematics departments, and after solutions to the major problems of metalogic were achieved, many practicing mathematicians in the West increasingly regarded logic and foundational work as mere tinkering. (This is less true in Russia, other former Soviet republics, and Poland, where logic has survived as a major mathematical subject.) Philosophy departments in the English-speaking world have often proved more stable homes for symbolic logicians, especially as logicians have increasingly addressed issues in formal philosophy, such as theories of properties and the development of nonstandard philosophical (mathematics, philosophy of) logics, that are not necessarily issues in the foundations (mathematics, foundations of) of mathematics.

Randall R. Dipert

Additional Reading
A broad survey of the history of logic is found in William Kneale and Martha Kneale, The Development of Logic (1962, reprinted 1984), covering ancient, medieval, modern, and contemporary periods. Articles on particular authors and topics are found in The Encyclopedia of Philosophy, ed. by Paul Edwards, 8 vol. (1967); and New Catholic Encyclopedia, 18 vol. (1967–89). I.M. Bochenski, Ancient Formal Logic (1951, reprinted 1968), is an overview of early Greek developments. On Aristotle, see Jan Łukasiewicz, Aristotle's Syllogistic from the Standpoint of Modern Formal Logic, 2nd ed., enlarged (1957, reprinted 1987); Günther Patzig, Aristotle's Theory of the Syllogism (1968; originally published in German, 2nd ed., 1959); Otto A. Bird, Syllogistic and Its Extensions (1964); and Storrs McCall, Aristotle's Modal Syllogisms (1963). I.M. Bochenski, La Logique de Théophraste (1947, reprinted 1987), is the definitive study of Theophrastus' logic. On Stoic logic, see Benson Mates, Stoic Logic (1953, reprinted 1973); and Michael Frede, Die stoische Logik (1974).

Detailed treatment of medieval logic is found in Norman Kretzmann, Anthony Kenny, and Jan Pinborg (eds.), The Cambridge History of Later Medieval Philosophy: From the Rediscovery of Aristotle to the Disintegration of Scholasticism, 1100–1600 (1982); and translations of important texts of the period are presented in Norman Kretzmann and Eleonore Stump (eds.), Logic and the Philosophy of Language (1988). For Boethius, see Margaret Gibson (ed.), Boethius, His Life, Thought, and Influence (1981); and, for Arabic logic, Nicholas Rescher, The Development of Arabic Logic (1964). L.M. de Rijk, Logica Modernorum: A Contribution to the History of Early Terminist Logic, 2 vol. in 3 (1962–67), is a classic study of 12th- and early 13th-century logic, with full texts of many important works. Norman Kretzmann (ed.), Meaning and Inference in Medieval Philosophy (1988), is a collection of topical studies.

A broad survey of modern logic is found in Wilhelm Risse, Die Logik der Neuzeit, 2 vol. (1964–70). See also Robert Adamson, A Short History of Logic (1911, reprinted 1965); C.I. Lewis, A Survey of Symbolic Logic (1918, reissued 1960); Jørgen Jørgensen, A Treatise of Formal Logic: Its Evolution and Main Branches with Its Relations to Mathematics and Philosophy, 3 vol. (1931, reissued 1962); Alonzo Church, Introduction to Mathematical Logic (1956); I.M. Bochenski, A History of Formal Logic, 2nd ed. (1970; originally published in German, 1962); Heinrich Scholz, Concise History of Logic (1961; originally published in German, 1959); Alice M. Hilton, Logic, Computing Machines, and Automation (1963); N.I. Styazhkin, History of Mathematical Logic from Leibniz to Peano (1969; originally published in Russian, 1964); Carl B. Boyer, A History of Mathematics, 2nd ed., rev. by Uta C. Merzbach (1991); E.M. Barth, The Logic of the Articles in Traditional Philosophy: A Contribution to the Study of Conceptual Structures (1974; originally published in Dutch, 1971); Martin Gardner, Logic Machines and Diagrams, 2nd ed. (1982); and E.J. Ashworth, Studies in Post-Medieval Semantics (1985).

Developments in the science of logic in the 20th century are reflected mostly in periodical literature. See Warren D. Goldfarb, “Logic in the Twenties: The Nature of the Quantifier,” The Journal of Symbolic Logic 44:351–368 (September 1979); R.L. Vaught, “Model Theory Before 1945,” and C.C. Chang, “Model Theory 1945–1971,” both in Leon Henkin et al. (eds.), Proceedings of the Tarski Symposium (1974), pp. 153–172 and 173–186, respectively; and Ian Hacking, “What is Logic?” The Journal of Philosophy 76:285–319 (June 1979). Other journals devoted to the subject include History and Philosophy of Logic (biannual); Notre Dame Journal of Formal Logic (quarterly); and Modern Logic (quarterly).
