formal logic

formal logic
the branch of logic concerned exclusively with the principles of deductive reasoning and with the form rather than the content of propositions.
[1855-60]

* * *

Introduction

      the abstract study of propositions, statements, or assertively used sentences and of deductive arguments. The discipline abstracts from the content of these elements the structures or logical forms that they embody. The logician customarily uses a symbolic notation to express such structures clearly and unambiguously and to enable manipulations and tests of validity to be more easily applied. Although the following discussion freely employs the technical notation of modern symbolic logic, its symbols are introduced gradually and with accompanying explanations so that the serious and attentive general reader should be able to follow the development of ideas.

      Formal logic is an a priori, and not an empirical, study. In this respect it contrasts with the natural sciences and with all other disciplines that depend on observation for their data. Its nearest analogy is with pure mathematics; indeed, many logicians and pure mathematicians would regard their respective subjects as indistinguishable, or as merely two stages of the same unified discipline. Formal logic, therefore, is not to be confused with the empirical study of the processes of reasoning, which belongs to psychology. It must also be distinguished from the art of correct reasoning, which is the practical skill of applying logical principles to particular cases; and, even more sharply, it must be distinguished from the art of persuasion, in which invalid arguments are sometimes more effective than valid ones.

General observations
      Probably the most natural approach to formal logic is through the idea of the validity of an argument of the kind known as deductive. A deductive argument can be roughly characterized as one in which the claim is made that some proposition (the conclusion) follows with strict necessity from some other proposition or propositions (the premises)—i.e., that it would be inconsistent or self-contradictory to assert the premises but deny the conclusion.

      If a deductive argument is to succeed in establishing the truth of its conclusion, two quite distinct conditions must be met: first, the conclusion must really follow from the premises—i.e., the deduction of the conclusion from the premises must be logically correct—and, second, the premises themselves must be true. An argument meeting both these conditions is called sound. Of these two conditions, the logician as such is concerned only with the first; the second, the determination of the truth or falsity of the premises, is the task of some special discipline or of common observation appropriate to the subject matter of the argument. When the conclusion of an argument is correctly deducible from its premises, the inference from the premises to the conclusion is said to be (deductively) valid, irrespective of whether the premises are true or false. Other ways of expressing the fact that an inference is deductively valid are to say that the truth of the premises gives (or would give) an absolute guarantee of the truth of the conclusion or that it would involve a logical inconsistency (as distinct from a mere mistake of fact) to suppose that the premises were true but the conclusion false.

      The deductive inferences with which formal logic is concerned are, as the name suggests, those for which validity depends not on any features of their subject matter but on their form or structure. Thus the two inferences

(1) Every dog is a mammal.
Some quadrupeds are dogs.
∴ Some quadrupeds are mammals.
and
(2) Every anarchist is a believer in free love.
Some members of the government party are anarchists.
∴ Some members of the government party are believers in free love.
differ in subject matter and hence require different procedures to check the truth or falsity of their premises. But their validity is ensured by what they have in common—namely, that the argument in each is of the form:
(3) Every X is a Y.
Some Z's are X's.
∴ Some Z's are Y's.
Line (3) above may be called an inference form, and (1) and (2) are then instances of that inference form. The letters—X, Y, and Z—in (3) mark the places into which expressions of a certain type may be inserted. Symbols used for this purpose are known as variables; their use is analogous to that of the x in algebra, which marks the place into which a numeral can be inserted. An instance of an inference form is produced by replacing all the variables in it by appropriate expressions—i.e., ones that make sense in the context—and by doing so uniformly—i.e., by substituting the same expression wherever the same variable recurs. The feature of (3) that guarantees that every instance of it will be valid is its construction in such a manner that every uniform way of replacing its variables to make the premises true automatically makes the conclusion true also, or, in other words, that no instance of it can have true premises but a false conclusion. In virtue of this feature, the form (3) is termed a valid inference form. In contrast,
(4) Every X is a Y.
Some Z's are Y's.
∴ Some Z's are X's.
is not a valid inference form, for, although instances of it can be produced in which premises and conclusion are all true, instances of it can also be produced in which the premises are true but the conclusion is false—e.g.,
(5) Every dog is a mammal.
Some winged creatures are mammals.
∴ Some winged creatures are dogs.
Formal logic as a study is concerned with inference forms rather than with particular instances of them; one of its tasks is to discriminate between valid and invalid inference forms and to explore and systematize the relations that hold among valid ones.

      Closely related to the idea of a valid inference form is that of a valid proposition form. A proposition form is an expression of which the instances (produced as before by appropriate and uniform replacements for variables) are not inferences from several propositions to a conclusion but rather propositions taken individually, and a valid proposition form is one for which all of the instances are true propositions. A simple example is

(6) Nothing is both an X and a non-X.
Formal logic is concerned with proposition forms as well as with inference forms. The study of proposition forms can, in fact, be made to include that of inference forms in the following way: let the premises of any given inference form (taken together) be abbreviated by alpha (α) and its conclusion by beta (β). Then the condition stated above for the validity of the inference form “α, therefore β” amounts to saying that no instance of the proposition form “α and not-β” is true—i.e., that every instance of the proposition form
(7) Not both: α and not-β
is true—or, in other words, that (7), fully spelled out, is a valid proposition form. The study of proposition forms, however, cannot be similarly accommodated under the study of inference forms, and so for reasons of comprehensiveness it is usual to regard formal logic as the study of proposition forms. Because a logician's handling of proposition forms is in many ways analogous to a mathematician's handling of numerical formulas, the systems he constructs are often called calculi.

      Much of the work of a logician proceeds at a more abstract level than that of the foregoing discussion. Even a formula such as (3) above, though not referring to any specific subject matter, contains expressions like “every” and “is a,” which are thought of as having a definite meaning, and the variables are intended to mark the places for expressions of one particular kind (roughly, common nouns or class names). It is possible, however—and for some purposes it is essential—to study formulas without attaching even this degree of meaningfulness to them. The construction of a system of logic, in fact, involves two distinguishable processes: one consists in setting up a symbolic apparatus—a set of symbols, rules for stringing these together into formulas, and rules for manipulating these formulas; the second consists in attaching certain meanings to these symbols and formulas. If only the former is done, the system is said to be uninterpreted, or purely formal; if the latter is done as well, the system is said to be interpreted. This distinction is important, because systems of logic turn out to have certain properties quite independently of any interpretations that may be placed upon them. An axiomatic system of logic can be taken as an example—i.e., a system in which certain unproved formulas, known as axioms, are taken as starting points, and further formulas (theorems) are proved on the strength of these. As will appear later (see Axiomatization of PC), the question whether a sequence of formulas in an axiomatic system is a proof or not depends solely on which formulas are taken as axioms and on what the rules are for deriving theorems from axioms, and not at all on what the theorems or axioms mean. Moreover, a given uninterpreted system is in general capable of being interpreted equally well in a number of different ways; hence, in studying an uninterpreted system, one is studying the structure that is common to a variety of interpreted systems. Normally a logician who constructs a purely formal system does have a particular interpretation in mind, and his motive for constructing it is the belief that when this interpretation is given to it the formulas of the system will be able to express true principles in some field of thought; but, for the above reasons among others, he will usually take care to describe the formulas and state the rules of the system without reference to interpretation and to indicate as a separate matter the interpretation that he has in mind.

      Many of the ideas used in the exposition of formal logic, including some that are mentioned above, raise problems that belong to philosophy rather than to logic itself. Examples are: What is the correct analysis of the notion of truth? What is a proposition, and how is it related to the sentence by which it is expressed? Are there some kinds of sound reasoning that are neither deductive nor inductive? Fortunately, it is possible to learn to do formal logic without having satisfactory answers to such questions, just as it is possible to do mathematics without answering questions belonging to the philosophy of mathematics such as: Are numbers real objects or mental constructs?

The propositional calculus

Basic features of PC
      The simplest and most basic branch of logic is the propositional calculus, hereafter called PC, named from the fact that it deals only with complete, unanalyzed propositions and certain combinations into which they enter. Various notations for PC are used in the literature. In that used here the symbols employed in PC first comprise variables (for which the letters p, q, r, . . . are used, with or without numerical subscripts); second, operators (for which the symbols “∼,” “·,” “∨,” “⊃,” “≡” are employed); and third, brackets or parentheses. The rules for constructing formulas are discussed below (see Formation rules for PC), but the intended interpretations of these symbols—i.e., the meanings to be given to them—are indicated here immediately: The variables are to be viewed as representing unspecified propositions or as marking the places in formulas into which sentences, and only sentences, may be inserted. (This is sometimes expressed by saying that variables range over propositions, or that they take propositions as their values.) Hence they are often called propositional variables. It is assumed that every proposition is either true or false and that no proposition is both true and false. Truth and falsity are said to be the truth values of propositions. The function of an operator is to form a new proposition from one or more given propositions, called the arguments of the operator. The operators ∼, · , ∨, ⊃, and ≡ correspond respectively to the English expressions “not,” “and,” “or,” “if . . . , then” (or “implies”), and “is equivalent to,” when these are used in the following senses:
● Given a proposition p, then ∼p (“not p”) is to count as false when p is true and true when p is false; “∼” (when thus interpreted) is known as the negation sign, and ∼p as the negation of p.
● Given any two propositions p and q, then p · q (“p and q”) is to count as true when p and q are both true and as false in all other cases (namely, when p is true and q false, when p is false and q true, and when p and q are both false); p · q is said to be the conjunction of p and q; “ · ” is known as the conjunction sign, and its arguments (p, q) as conjuncts.
● p ∨ q (“p or q”) is to count as false when p and q are both false and true in all other cases; thus it represents the assertion that at least one of p and q is true. p ∨ q is known as the disjunction of p and q; “∨” is the disjunction sign, and its arguments (p, q) are known as disjuncts.
● p ⊃ q (“if p [then] q” or “p [materially] implies q”) is to count as false when p is true and q is false and as true in all other cases; hence it has the same meaning as “either not-p or q” or as “not both: p and not-q.” The symbol “⊃” is known as the (material) implication sign, the first argument as the antecedent, and the second as the consequent; q ⊃ p is known as the converse of p ⊃ q.
● Finally, p ≡ q (“p is [materially] equivalent to q” or “p if and only if q”) is to count as true when p and q have the same truth value (i.e., either when both are true or when both are false), and false when they have different truth values; the arguments of “≡” (the [material] equivalence sign) are called equivalents.

      Brackets are used to indicate grouping; they make it possible to distinguish, for example, between p · (q ∨ r) (“both p and either-q-or-r”) and (p · q) ∨ r (“either both-p-and-q or r”). Precise rules for bracketing are given below.

      All PC operators take propositions as their arguments, and the result of applying them is also in each case a proposition. For this reason they are sometimes called proposition-forming operators on propositions or, more briefly, propositional connectives. An operator that, like ∼, requires only a single argument is known as a monadic operator; operators that, like all the others listed, require two arguments are known as dyadic.

      All PC operators also have the following important characteristic: given the truth values of the arguments, the truth value of the proposition formed by them and the operator is determined in every case. An operator that has this characteristic is known as a truth-functional operator, and a proposition formed by such an operator is called a truth function of the operator's argument(s). The truth functionality of the PC operators is clearly brought out by summarizing the above account of them in Table 1. In it, “true” is abbreviated by “1” and “false” by “0,” and to the left of the vertical line are tabulated all possible combinations of truth values of the operators' arguments. The columns of 1s and 0s under the various truth functions indicate their truth values for each of the cases; these columns are known as the truth tables of the relevant operators. It should be noted that any column of four 1s or 0s or both will specify a dyadic truth-functional operator. Because there are precisely 2⁴ (i.e., 16) ways of forming a string of four symbols each of which is to be either 1 or 0 (1111, 1110, 1101, . . . , 0000), there are 16 such operators in all; the four that are listed here are only the four most generally useful ones.
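
      Because the operators are truth-functional, their behaviour can be reproduced mechanically. The following sketch in Python (an editorial illustration, not part of the traditional exposition) encodes the truth functions of Table 1, with 1 for truth and 0 for falsity:

    from itertools import product

    NOT = lambda p: 1 - p                   # negation
    AND = lambda p, q: p * q                # conjunction: 1 only when both are 1
    OR  = lambda p, q: max(p, q)            # disjunction: 0 only when both are 0
    IMP = lambda p, q: max(1 - p, q)        # implication: 0 only when p = 1, q = 0
    EQV = lambda p, q: 1 if p == q else 0   # equivalence: 1 when the values agree

    print("p q | and or imp eqv")
    for p, q in product((1, 0), repeat=2):
        print(p, q, "|", AND(p, q), " ", OR(p, q), " ", IMP(p, q), "  ", EQV(p, q))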

Formation rules for PC
      In any system of logic it is necessary to specify which sequences of symbols are to count as acceptable formulas—or, as they are usually called, well-formed formulas (wffs). Rules that specify this are called formation rules. From an intuitive point of view, it is desirable that the wffs of PC be just those sequences of PC symbols that, in terms of the interpretation given above, make sense and are unambiguous; and this can be ensured by stipulating that the wffs of PC are to be all those expressions constructed in accordance with the following PC formation rules, and only these:

FR1. A variable standing alone is a wff.
FR2. If α is a wff, so is ∼α.
FR3. If α and β are wffs, (α·β), (α ∨ β), (α ⊃ β), and (α ≡ β) are wffs.
(In these rules α and β are variables representing arbitrary formulas of PC. They are not themselves symbols of PC but are used in discoursing about PC. Such variables are known as metalogical variables.) It should be noted that the rules, though designed to ensure unambiguous sense for the wffs of PC under the intended interpretation, are themselves stated without any reference to interpretation and in such a way that there is an effective procedure for determining, again without any reference to interpretation, whether any arbitrary string of symbols is a wff or not. (An effective procedure is one that is “mechanical” in nature and can always be relied on to give a definite result in a finite number of steps. The notion of effectiveness plays an important role in formal logic.)

      Examples of wffs are: p; ∼q; ∼(p · q), “not both p and q”; [∼p ∨ (q ≡ p)], “either not p or else q is equivalent to p.”

      For greater ease in writing or reading formulas, the formation rules are often relaxed. The following relaxations are common: (1) Brackets enclosing a complete formula may be omitted. (2) The typographical style of brackets may be varied within a formula to make the pairing of brackets more evident to the eye. (3) Conjunctions and disjunctions may be allowed to have more than two arguments—for example, p · (q ∨ r) · ∼r may be written instead of [p · (q ∨ r)] · ∼r. (The conjunction p · q · r is then interpreted to mean that p, q, and r are all true, p ∨ q ∨ r to mean that at least one of p, q, and r is true, and so forth.)
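
      The formation rules lend themselves directly to an effective procedure in the sense just described. The following Python sketch (illustrative only; it assumes ASCII stand-ins ~, &, |, >, = for the operators and uses only the unsubscripted variables p, q, r, s) decides by recursion whether a string is a wff under the strict rules FR1–FR3:

    def is_wff(s):
        if len(s) == 1:
            return s in "pqrs"                      # FR1: a variable standing alone
        if s.startswith("~"):
            return is_wff(s[1:])                    # FR2: ~a is a wff if a is
        if s.startswith("(") and s.endswith(")"):   # FR3: (a op b)
            depth = 0
            for i, c in enumerate(s):
                if c == "(":
                    depth += 1
                elif c == ")":
                    depth -= 1
                elif depth == 1 and c in "&|>=":    # the main dyadic operator
                    return is_wff(s[1:i]) and is_wff(s[i + 1:-1])
        return False

    print(is_wff("(~p|(q=p))"))   # True: cf. the fourth example above
    print(is_wff("pq"))           # False: not constructible by the rules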

Validity in PC
      Given the standard interpretation, a wff of PC becomes a sentence, true or false, when all its variables are replaced by actual sentences. Such a wff is therefore a proposition form in the sense explained above and hence is valid if and only if all its instances express true propositions. A wff of which all instances are false is said to be unsatisfiable, and one with some true and some false instances is said to be contingent.

      An important problem for any logical system is the decision problem for the class of valid wffs of that system (sometimes simply called the decision problem for the system). This is the problem of finding an effective procedure, in the sense explained above in Formation rules for PC, for testing the validity of any wff of the system. Such a procedure is called a decision procedure. For some systems a decision procedure can be found; the decision problem for a system of this sort is then said to be solvable, and the system is said to be a decidable one. For other systems it can be proved that no decision procedure is possible; the decision problem for such a system is then said to be unsolvable, and the system is said to be an undecidable one.

      PC is a decidable system. In fact, several decision procedures for it are known. Of these the simplest and most important theoretically (though not always the easiest to apply in practice) is the method of truth tables, which will now be explained briefly. Since all the operators in a wff of PC are truth-functional, in order to discover the truth value of any instance of such a wff, it is unnecessary to consider anything but the truth values of the sentences replacing the variables. In other words, the assignment of a truth value to each of the variables in a wff uniquely determines a truth value for the whole wff. Since there are only two truth values and each wff contains only a finite number of variables, there are only a finite number of truth-value assignments to the variables to be considered (if there are n distinct variables in the wff, there are 2ⁿ such assignments); these can easily be systematically tabulated. For each of these assignments the truth tables for the operators then enable one to calculate the resulting truth value of the whole wff; and if and only if this truth value is truth in each case is the wff valid. As an example, [(p ⊃ q) · r] ⊃ [(∼r ∨ p) ⊃ q] may be tested for validity. This formula states that “if one proposition implies a second one, and a certain third proposition is true, then if either that third proposition is false or the first is true, the second is true.”

      The calculation is shown in Table 2. As before, 1 represents truth and 0 falsity. Since the wff contains three variables, there are 2³ (i.e., 8) different assignments to the variables to be considered, which therefore generate the eight lines of the table. These assignments are tabulated to the left of the vertical line. The numbers in parentheses at the foot indicate the order in which the steps (from 1 through 6) are to be taken in determining the truth values (1 or 0) to be entered in the table. Thus column 1, falling under the symbol ⊃, sets down the values of p ⊃ q for each assignment, obtained from the columns under p and q by the truth table for ⊃; column 2, for (p ⊃ q) · r, is then obtained by employing the values in column 1 together with those in the column under r by use of the truth table for ·; . . . until finally column 6, which gives the values for the whole wff, is obtained from columns 2 and 5. This column is called the truth table of the whole wff. Since it consists entirely of 1s, it shows that the wff is true for every assignment given to the variables and is therefore valid. A wff for which the truth table consists entirely of 0s is unsatisfiable, and a wff for which the truth table contains at least one 1 and at least one 0 is contingent. It follows from the formation rules and from the fact that an initial truth table has been specified for each operator that a truth table can be constructed for any given wff of PC.
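
      The procedure just described is easily mechanized. The following Python sketch (illustrative only) tests the example wff by evaluating it under all 2ⁿ assignments:

    from itertools import product

    IMP = lambda a, b: max(1 - a, b)    # material implication on {1, 0}

    def valid(wff, n):
        # a wff is valid if it evaluates to 1 under all 2**n assignments
        return all(wff(*v) == 1 for v in product((1, 0), repeat=n))

    # [(p > q) . r] > [(~r v p) > q], the wff tested in Table 2
    example = lambda p, q, r: IMP(IMP(p, q) * r, IMP(max(1 - r, p), q))

    print(valid(example, 3))                  # True: the wff is valid
    print(valid(lambda p, q: IMP(p, q), 2))   # False: p > q alone is contingent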

      Among the more important valid wffs of PC are those of Table 3, all of which can be shown to be valid by a mechanical application of the truth-table method. They can also be seen to express intuitively sound general principles about propositions. For instance, because “not ( . . . or . . . )” can be rephrased as “neither . . . nor . . . ,” the first De Morgan law can be read as “both p and q if and only if neither not-p nor not-q”; thus it expresses the principle that two propositions are jointly true if and only if neither of them is false. Whenever, as is the case in most of the examples given, a wff of the form α ≡ β is valid, the corresponding wffs α ⊃ β and β ⊃ α are also valid. For instance, because (p · q) ≡ ∼(∼p ∨ ∼q) is valid, so are (p · q) ⊃ ∼(∼p ∨ ∼q) and ∼(∼p ∨ ∼q) ⊃ (p · q).

      Moreover, although p ⊃ q does not mean that q can be deduced from p, yet whenever a wff of the form α ⊃ β is valid, the inference form “α, therefore β” is likewise valid. This fact is easily seen from the fact that α ⊃ β means the same as “not both: α and not-β”; for, as was noted above, whenever the latter is a valid proposition form, “α, therefore β” is a valid inference form.

      Let α be any wff. If any variable in it is now uniformly replaced by some wff, the resulting wff is called a substitution-instance of α. Thus [p ⊃ (q ∨ ∼r)] ≡ [∼(q ∨ ∼r) ⊃ ∼p] is a substitution-instance of (p ⊃ q) ≡ (∼q ⊃ ∼p), obtained from it by replacing q uniformly by (q ∨ ∼r). It is an important principle that, whenever a wff is valid, so is every substitution-instance of it (the rule of [uniform] substitution).
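
      Because substitution is a purely mechanical operation on symbols, it too can be sketched in a few lines of Python (illustrative, using the ASCII stand-ins introduced earlier; naive string replacement suffices so long as variables are single letters):

    def substitute(wff, var, replacement):
        # uniform substitution: every occurrence of var is replaced alike
        return wff.replace(var, replacement)

    # (p > q) = (~q > ~p) with q uniformly replaced by (q | ~r):
    print(substitute("((p>q)=(~q>~p))", "q", "(q|~r)"))
    # prints ((p>(q|~r))=(~(q|~r)>~p)), the substitution-instance in the text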

      A further important principle is the rule of substitution of equivalents. Two wffs, α and β, are said to be equivalents when α ≡ β is valid. (The wffs α and β are equivalents if and only if they have identical truth tables.) The rule states that, if any part of a wff is replaced by an equivalent of that part, the resulting wff and the original are also equivalents. Such replacements need not be uniform. The application of this rule is said to make an equivalence transformation.

Interdefinability of operators
      The rules that have just been stated would enable the first De Morgan law listed in Table 3 to be used to transform any wff containing any number of occurrences of · into an equivalent wff in which · does not appear at all but in place of it certain complexes of ∼ and ∨ arise. Similarly, since ∼p ∨ q has the same truth table as p ⊃ q, (p ⊃ q) ≡ (∼p ∨ q) is valid, and any wff containing ⊃ can therefore be transformed into an equivalent wff containing ∼ and ∨ but not ⊃. And, since (p ≡ q) ≡ [(p ⊃ q) · (q ⊃ p)] is valid, any wff containing ≡ can be transformed into an equivalent containing ⊃ and · but not ≡, and thus in turn by the previous steps it can be further transformed into one containing ∼ and ∨ but neither ≡ nor ⊃ nor · . Thus, for every wff of PC there is an equivalent wff, expressing precisely the same truth function, in which the only operators are ∼ and ∨, though the meaning of this wff will usually be much less clear than that of the original.

      An alternative way of presenting PC, therefore, is to begin with the operators ∼ and ∨ only and to define the others in terms of these. The operators ∼ and ∨ are then said to be primitive. If “=Df” is used to mean “is defined as,” then the relevant definitions can be set down as follows:

(α · β) =Df ∼(∼α ∨ ∼β)
(α ⊃ β) =Df (∼α ∨ β)
(α ≡ β) =Df [(α ⊃ β) · (β ⊃ α)]
in which α and β are any wffs of PC. These definitions are not themselves wffs of PC, nor is =Df a symbol of PC; they are metalogical statements about PC, used to introduce the new symbols · , ⊃, and ≡ into the system. If PC is regarded as a purely uninterpreted system, the expression on the left in a definition is simply a convenient abbreviation of the expression on the right. If, however, PC is thought of as having its standard interpretation, the meanings of ∼ and ∨ will first of all have been stipulated by truth tables, and then the definitions will lay it down that the expression on the left is to be understood as having the same meaning (i.e., the same truth table) as the expression on the right. It is easy to check that the truth tables obtained in this way for · , ⊃, and ≡ are precisely the ones that were originally stipulated for them.

      An alternative to taking ∼ and ∨ as primitive is to take ∼ and · as primitive and to define (α ∨ β) as ∼(∼α · ∼β), to define (α ⊃ β) as ∼(α · ∼β), and to define (α ≡ β) as before. Yet another possibility is to take ∼ and ⊃ as primitive and to define (α ∨ β) as (∼α ⊃ β), (α · β) as ∼(α ⊃ ∼β), and (α ≡ β) as before. In each case, precisely the same wffs that were valid in the original presentation of the system are still valid.
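
      That each choice of primitives yields the same truth functions can itself be confirmed by exhaustive enumeration. The following Python sketch (illustrative only) checks the first basis, with ∼ and ∨ primitive:

    from itertools import product

    NOT = lambda p: 1 - p
    OR  = lambda p, q: max(p, q)

    AND_def = lambda p, q: NOT(OR(NOT(p), NOT(q)))   # (a . b) =Df ~(~a v ~b)
    IMP_def = lambda p, q: OR(NOT(p), q)             # (a > b) =Df (~a v b)
    EQV_def = lambda p, q: AND_def(IMP_def(p, q), IMP_def(q, p))

    for p, q in product((1, 0), repeat=2):
        assert AND_def(p, q) == p * q                           # stipulated table for .
        assert IMP_def(p, q) == (0 if (p, q) == (1, 0) else 1)  # stipulated table for >
        assert EQV_def(p, q) == (1 if p == q else 0)            # stipulated table for =
    print("the defined operators match the stipulated truth tables")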

Axiomatization of PC
      The basic idea of constructing an axiomatic system is that of choosing certain wffs (known as axioms) as starting points and giving rules for deriving further wffs (known as theorems) from them. Such rules are called transformation rules. Sometimes the word “theorem” is used to cover axioms as well as theorems; the word “thesis” is also used for this purpose.

      An axiomatic basis consists of:
● (1) a list of primitive symbols, together with any definitions that may be thought convenient,
● (2) a set of formation rules, specifying which sequences of symbols are to count as wffs,
● (3) a list of wffs selected as axioms, and
● (4) a set of (one or more) transformation rules, which enable new wffs (theorems) to be obtained by performing certain specified operations on axioms or previously obtained theorems.

      Definitions, where they occur, can function as additional transformation rules, to the effect that, if in any theorem any expression of the form occurring on one side of a definition is replaced by the corresponding expression of the form occurring on the other side, the result is also to count as a theorem. A proof or derivation of a wff α in an axiomatic system S is a sequence of wffs of which the last is α itself and each wff in the sequence is either an axiom of S or is derived from some axiom(s) or some already-derived theorem(s) or both by one of the transformation rules of S. A wff is a theorem of S if and only if there is a proof of it in S.

      Care is usually taken, in setting out an axiomatic basis, to avoid all reference to interpretation. It must be possible to tell purely from the construction of a wff whether it is an axiom or not. Moreover, the transformation rules must be so formulated that there is an effective way of telling whether any purported application of them is a correct application or not, and hence whether a purported proof of a theorem really is a proof or not. An axiomatic system will then be a purely formal structure, on which any one of a number of interpretations, or none at all, may be imposed without affecting the question of which wffs are theorems. Normally, however, an axiomatic system is constructed with a certain interpretation in mind; the transformation rules are so formulated that under that interpretation they are validity-preserving (i.e., the results of applying them to valid wffs are always themselves valid wffs); and the chosen axioms either are valid wffs or are expressions of principles of which it is desired to explore the consequences.

      Probably the best-known axiomatic system for PC is the following one, which, since it is derived from Principia Mathematica, by Whitehead and Russell, is often called PM:

      Primitive symbols: ∼, ∨, (,), and an infinite set of variables, p, q, r, . . . (with or without numerical subscripts).

      Definitions of · , ⊃, ≡ as above (see Interdefinability of operators).

      Formation rules as above (see Formation rules for PC), except that formation rule 3 can be abbreviated to “If α and β are wffs, (α ∨ β) is a wff,” since · , ⊃, and ≡ are not primitive.

      Axioms:
● (p ∨ p) ⊃ p
● q ⊃ (p ∨ q)
● (p ∨ q) ⊃ (q ∨ p)
● (q ⊃ r) ⊃ [(p ∨ q) ⊃ (p ∨ r)]

      Axiom 4 can be read, “If q implies r, then, if either p or q, either p or r.”

      Transformation rules:
● The result of uniformly replacing any variable in a theorem by any wff is a theorem (rule of substitution).
● If α and (α ⊃ β) are theorems, then β is a theorem (rule of detachment, or modus ponens).

      Relative to a given criterion of validity, an axiomatic system is sound if every theorem is valid, and it is complete (or, more specifically, weakly complete) if every valid wff is a theorem. The axiomatic system PM can be shown to be both sound and complete relative to the criterion of validity given in Validity in PC above.
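
      The soundness half of this claim can be checked mechanically for the axioms themselves: each is valid by the truth-table test, as the following Python sketch (illustrative only) confirms; that the two transformation rules preserve validity is a separate metalogical matter:

    from itertools import product

    OR  = lambda a, b: max(a, b)
    IMP = lambda a, b: max(1 - a, b)

    axioms = [
        lambda p, q, r: IMP(OR(p, p), p),                        # (p v p) > p
        lambda p, q, r: IMP(q, OR(p, q)),                        # q > (p v q)
        lambda p, q, r: IMP(OR(p, q), OR(q, p)),                 # (p v q) > (q v p)
        lambda p, q, r: IMP(IMP(q, r), IMP(OR(p, q), OR(p, r))), # axiom 4
    ]
    assert all(ax(*v) == 1 for ax in axioms for v in product((1, 0), repeat=3))
    print("all four PM axioms are valid")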

      An axiomatic system is consistent if, whenever a wff α is a theorem, ∼α is not a theorem. (In terms of the standard interpretation, this means that no pair of theorems can ever be derived one of which is the negation of the other.) It is strongly complete if the addition to it (as an extra axiom) of any wff whatever that is not already a theorem would make the system inconsistent. Finally, an axiom or transformation rule is independent (in a given axiomatic system) if it cannot be derived from the remainder of the basis (or—which comes to the same thing—if its omission from the basis would make the derivation of certain theorems impossible). It can, moreover, be shown that PM is consistent and strongly complete and that each of its axioms and transformation rules is independent.

      A considerable number of other axiomatic bases for PC, each having all the above properties, are known. The task of proving that they have these properties belongs to metalogic.

      In some standard expositions of formal logic, the place of axioms is taken by axiom schemata, which, instead of presenting some particular wff as an axiom, lay it down that any wff of a certain form is an axiom. For example, in place of axiom 1 in PM one might have the axiom schema “Every wff of the form (α ∨ α) ⊃ α is an axiom”; and analogous schemata can be substituted for the other axioms. The number of axioms would then become infinite, but, on the other hand, the rule of substitution would no longer be needed, and modus ponens could be the only transformation rule. This method makes no difference to the theorems that can be derived; but in some branches of logic (though not in PC) it is simpler to work with axiom schemata rather than with particular axioms and substitution rules. Having an infinite number of axioms causes no trouble provided that there is an effective way of telling whether a wff is an axiom or not.

Special systems of PC
Partial systems of PC
      Various propositional calculi have been devised to express a narrower range of truth functions than those of PC as expounded above. Of these, the one that has been most fully studied is the pure implicational calculus (PIC), in which the only operator is ⊃, and the wffs are precisely those wffs of PC that can be built up from variables, ⊃, and brackets alone. Formation rules 2 and 3 of Formation rules for PC (above) are therefore replaced by the rule that if α and β are wffs, (α ⊃ β) is a wff. As in ordinary PC, p ⊃ q is interpreted as “p materially implies q”—i.e., as true except when p is true but q false. The truth-table test of validity can then be straightforwardly applied to wffs of PIC.

      The task of axiomatizing PIC is that of finding a set of valid wffs, preferably few in number and relatively simple in structure, from which all other valid wffs of the system can be derived by straightforward transformation rules. The best-known basis, which was formulated in 1930, has the transformation rules of substitution and modus ponens (as in PM) and the following axioms:
● p ⊃ (q ⊃ p)
● [(p ⊃ q) ⊃ p] ⊃ p
● (p ⊃ q) ⊃ [(q ⊃ r) ⊃ (p ⊃ r)]

      Axioms 1 and 3 are closely related to axioms 2 and 4 of PM respectively (see above Axiomatization of PC). It can be shown that the basis is complete and that each axiom is independent.

      Under the standard interpretation, the above axioms can be thought of as expressing the following principles: (1) “If a proposition p is true, then if some arbitrary proposition q is true, p is (still) true.” (2) “If the fact that a certain proposition p implies some arbitrary proposition q implies that p itself is true, then p is (indeed) true.” (3) “If one proposition (p) implies a second (q), then if that second proposition implies a third (r), the first implies the third.” The completeness of the basis is, however, a formal matter, not dependent on these or any other readings of the formulas.

      An even more economical complete basis for PIC contains the same transformation rules but the sole axiom

[(p ⊃ q) ⊃ r] ⊃ [(r ⊃ p) ⊃ (s ⊃ p)].
It has been proved that this is the shortest possible single axiom that will give a complete basis for PIC with these transformation rules.
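
      Its validity, at least, is quickly confirmed by the truth-table test, as in this short Python sketch (the completeness of the single-axiom basis is a deeper result and is not checked here):

    from itertools import product

    IMP = lambda a, b: max(1 - a, b)

    # [(p > q) > r] > [(r > p) > (s > p)]
    axiom = lambda p, q, r, s: IMP(IMP(IMP(p, q), r), IMP(IMP(r, p), IMP(s, p)))
    print(all(axiom(*v) == 1 for v in product((1, 0), repeat=4)))   # True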

      Since PIC contains no negation sign, the previous account of consistency is not significantly applicable to it. Alternative accounts of consistency have, however, been proposed, according to which a system is consistent (1) if no wff consisting of a single variable is a theorem or (2) if not every wff is a theorem. The bases stated are consistent in these senses.

Nonstandard versions of PC
      Qualms have sometimes been expressed about the intuitive soundness of some formulas that are valid in “orthodox” PC, and these qualms have led some logicians to construct a number of propositional calculi that deviate in various ways from PC as expounded above.

      Underlying ordinary PC is the intuitive idea that every proposition is either true or false, an idea that finds its formal expression in the stipulation that variables shall have two possible values only—namely, 1 and 0. (For this reason the system is often called the two-valued propositional calculus.) This idea has been challenged on various grounds. Following a suggestion made by Aristotle, some logicians have maintained that propositions about those events in the future that may or may not come to pass are neither true nor false but “neuter” in truth value. Aristotle's example, which has received much discussion, is “There will be a sea battle tomorrow.” It has also been maintained, by the English philosopher Sir Peter Strawson and others, that, for propositions with subjects that do not have anything actual corresponding to them—such as “The present king of France is wise” (assuming that France has no king) or “All John's children are asleep” (assuming that John has no children)—the question of truth or falsity “does not arise.” Another view is that a third truth value (say “half-truth”) ought to be recognized as existing intermediate between truth and falsity; thus it has been advanced that certain familiar states of the weather make the proposition “It is raining” neither definitely true nor definitely false but something in between the two.

      The issues raised by the above examples no doubt differ significantly, but they all suggest a threefold rather than a twofold division of propositions and hence the possibility of a logic in which the variables may take any of three values (say 1, 1/2, and 0), with a consequent revision of the standard PC account of validity. Several such three-valued logics have been constructed and investigated; a brief account will be given here of one of them, in which the most natural interpretation of the additional value (1/2) is as “half-true,” with 1 and 0 representing truth and falsity as before. The formation rules are as they were for orthodox PC, but the meaning of the operators is extended to cover cases in which at least one argument has the value 1/2 by the five entries in Table 4. (Adopting one of the three values of the first argument, p, given in the leftmost column [1, 1/2, or 0] and, for the dyadic operators, one of the three values of the second, q, in the top row—above the line—one then finds the value of the whole formula by reading across for p and down for q.) It will be seen that these tables, due to the Polish logician Jan Łukasiewicz, are the same as the ordinary two-valued ones when the arguments have the values 1 and 0. The other values are intended to be intuitively plausible extensions of the principles underlying the two-valued calculus to cover the cases involving half-true arguments. Clearly, these tables enable a person to calculate a determinate value (1, 1/2, or 0) for any wff, given the values assigned to the variables in it; a wff is valid in this calculus if it has the value 1 for every assignment to its variables. Since the values of formulas when the variables are assigned only the values 1 and 0 are the same as in ordinary PC, every wff that is valid in the present calculus is also valid in PC. Some wffs that are valid in PC are, however, now no longer valid. An example is (p ∨ ∼p), which, when p has the value 1/2, also has the value 1/2. This reflects the idea that if one admits the possibility of a proposition's being half-true, he can no longer hold of every proposition without restriction that either it or its negation is true.
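
      The Łukasiewicz tables admit a compact arithmetic description, which makes them easy to experiment with: taking the values to be the numbers 0, 1/2, and 1, ∼p is 1 − p, p ⊃ q is min(1, 1 − p + q), disjunction is the maximum, and conjunction the minimum of the arguments, while p ≡ q is 1 − |p − q|. A Python sketch (illustrative only):

    from fractions import Fraction
    from itertools import product

    HALF = Fraction(1, 2)
    NOT = lambda p: 1 - p
    OR  = lambda p, q: max(p, q)
    IMP = lambda p, q: min(1, 1 - p + q)

    def valid3(wff, n):
        # valid = value 1 under all 3**n assignments
        return all(wff(*v) == 1 for v in product((1, HALF, 0), repeat=n))

    print(OR(HALF, NOT(HALF)))                        # 1/2: p v ~p at p = 1/2
    print(valid3(lambda p: OR(p, NOT(p)), 1))         # False: no longer valid
    print(valid3(lambda p, q: IMP(p, IMP(q, p)), 2))  # True: axiom 1 below is valid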

      Given the truth tables for the operators in Table 4, it is possible to take ∼ and ⊃ as primitive and to define (α ∨ β) as [(α ⊃ β) ⊃ β]—though not as (∼α ⊃ β) as in ordinary PC; (α · β) as ∼(∼α ∨ ∼β); and (α ≡ β) as [(α ⊃ β) · (β ⊃ α)]. With these definitions as given, all valid wffs constructed from variables and ∼, · , ∨, ⊃, and ≡ can be derived by substitution and modus ponens from the following four axioms:

● p ⊃ (q ⊃ p)
● (p ⊃ q) ⊃ [(q ⊃ r) ⊃ (p ⊃ r)]
● [(p ⊃ ∼p) ⊃ p] ⊃ p
● (∼p ⊃ ∼q) ⊃ (q ⊃ p)

      Other three-valued logics can easily be constructed. For example, the above tables might be modified so that ∼1/2, 1/2 ⊃ 0, 1/2 ≡ 0, and 0 ≡ 1/2 all have the value 0 instead of 1/2 as before, leaving everything else unchanged. The same definitions are then still possible, but the list of valid formulas is different; e.g., ∼∼p ⊃ p, which was previously valid, now has the value 1/2 when p has the value 1/2. This system can also be successfully axiomatized. Other calculi with more than three values can also be constructed along analogous lines.

      Other nonstandard calculi have been constructed by beginning with an axiomatization instead of a definition of validity. Of these, the best-known is the intuitionistic calculus, devised by Arend Heyting, one of the chief representatives of the intuitionist school of mathematicians, a group of theorists who deny the validity of certain types of proof used in classical mathematics (see mathematics, the foundations of: Intuitionism). At least in certain contexts, members of this school regard the demonstration of the falsity of the negation of a proposition (a proof by reductio ad absurdum) as insufficient to establish the truth of the proposition in question. Thus they regard ∼∼p as an inadequate premise from which to deduce p and hence do not accept the validity of the law of double negation in the form ∼∼p ⊃ p. They do, however, regard a demonstration that p is true as showing that the negation of p is false and hence accept p ⊃ ∼∼p as valid. For somewhat similar reasons, these mathematicians also refuse to accept the validity of arguments based on the law of excluded middle (p ∨ ∼p). The intuitionistic calculus aims at presenting in axiomatic form those and only those principles of propositional logic that are accepted as sound in intuitionist mathematics. In this calculus, ∼, · , ∨, and ⊃ are all primitive, the transformation rules as before are substitution and modus ponens, and the axioms are the following:
● p ⊃ (p · p)
● (p · q) ⊃ (q · p)
● (p ⊃ q) ⊃ [(p · r) ⊃ (q · r)]
● [(p ⊃ q) · (q ⊃ r)] ⊃ (p ⊃ r)
● p ⊃ (q ⊃ p)
● [p · (p ⊃ q)] ⊃ q
● p ⊃ (p ∨ q)
● (p ∨ q) ⊃ (q ∨ p)
● [(p ⊃ r) · (q ⊃ r)] ⊃ [(p ∨ q) ⊃ r]
● ∼p ⊃ (p ⊃ q)
● [(p ⊃ q) · (p ⊃ ∼q)] ⊃ ∼p

      From this basis neither p ∨ ∼p nor ∼∼p ⊃ p can be derived, though p ⊃ ∼∼p can. In this respect this calculus resembles the second of the three-valued logics described above. It is, however, not possible to give a truth-table account of validity—no matter how many values are used—that will bring out as valid precisely those wffs that are theorems of the intuitionistic calculus and no others.

Natural deduction method in PC
      PC is often presented by what is known as the method of natural deduction. Essentially this consists of a set of rules for drawing conclusions from hypotheses (assumptions, premises) represented by wffs of PC and thus for constructing valid inference forms. It also provides a method of deriving from these inference forms valid proposition forms, and in this way it is analogous to the derivation of theorems in an axiomatic system. One such set of rules is presented in Table 5 (and there are various other sets that yield the same results).

      A natural deduction proof is a sequence of wffs beginning with one or more wffs as hypotheses; fresh hypotheses may also be added at any point in the course of a proof. The rules may be applied to any wff or group of wffs, as appropriate, that have already occurred in the sequence. In the case of rules 1–7, the conclusion is said to depend on all of those hypotheses that have been used in the series of applications of the rules that have led to this conclusion; i.e., it is claimed simply that the conclusion follows from these hypotheses, not that it holds in its own right. An application of rule 8 or rule 9, however, reduces by one the number of hypotheses on which the conclusion depends; and a hypothesis so eliminated is said to be a discharged hypothesis. In this way a wff may be reached that depends on no hypotheses at all. Such a wff is a theorem of logic. It can be shown that those theorems derivable by the rules stated above—together with the definition of α ≡ β as (α ⊃ β) · (β ⊃ α)—are precisely the valid wffs of PC. A set of natural deduction rules yielding as theorems all the valid wffs of a system is complete (with respect to that system) in a sense obviously analogous to that in which an axiomatic basis was said to be complete in Axiomatization of PC (above).

      As an illustration, the formula [(p ⊃ q) · (p ⊃ r)] ⊃ [p ⊃ (q · r)] will be derived as a theorem of logic by the natural deduction method. (The sense of this formula is that, if a proposition [p] implies each of two other propositions [q, r], then it implies their conjunction.) Explanatory comments follow the proof.

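1            (1) (p ⊃ q) · (p ⊃ r)    hypothesis
2            (2) p    hypothesis
1            (3) p ⊃ q    from 1, by ·-elimination
1            (4) p ⊃ r    from 1, by ·-elimination
1,2          (5) q    from 3, 2, by modus ponens
1,2          (6) r    from 4, 2, by modus ponens
1,2          (7) q · r    from 5, 6, by ·-introduction
1            (8) p ⊃ (q · r)    from 7, by conditional proof
             (9) [(p ⊃ q) · (p ⊃ r)] ⊃ [p ⊃ (q · r)]    from 8, by conditional proof

(The derivation is reconstructed here in the customary layout; the rule names are the usual ones and may differ in detail from those of Table 5.)
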
      The figures in parentheses immediately preceding the wffs are simply for reference. To the right is indicated either that the wff is a hypothesis or that it is derived from the wffs indicated by the rules stated. On the left are noted the hypotheses on which the wff in question depends (either the first or the second line of the derivation, or both). Note that since 8 is derived by conditional proof from hypothesis 2 and from 7, which is itself derived from hypotheses 1 and 2, 8 depends only on hypothesis 1, and hypothesis 2 is discharged. Similarly, 9 depends on no hypotheses and is therefore a theorem.

      By varying the above rules it is possible to obtain natural deduction systems corresponding to other versions of PC. For example, if the second part of the double negation rule is omitted and the rule is added that, given α·∼α, one may then conclude β, it can be shown that the theorems then derivable are precisely the theorems of the intuitionistic calculus.

The predicate calculus
      Propositions may also be built up, not out of other propositions but out of elements that are not themselves propositions. The simplest kind to be considered here are propositions in which a certain object or individual (in a wide sense) is said to possess a certain property or characteristic; e.g., “Socrates is wise” and “The number 7 is prime.” Such a proposition contains two distinguishable parts: (1) an expression that names or designates an individual and (2) an expression, called a predicate, that stands for the property that that individual is said to possess. If x, y, z, . . . are used as individual variables (replaceable by names of individuals) and the symbols ϕ (phi), ψ (psi), χ (chi), . . . as predicate variables (replaceable by predicates), the formula ϕx is used to express the form of the propositions in question. Here x is said to be the argument of ϕ; a predicate (or predicate variable) with only a single argument is said to be a monadic, or one-place, predicate (variable). Predicates with two or more arguments stand not for properties of single individuals but for relations between individuals. Thus the proposition “Tom is a son of John” is analyzable into two names of individuals (“Tom” and “John”) and a dyadic or two-place predicate (“is a son of”), of which they are the arguments; and the proposition is thus of the form ϕxy. Analogously, “. . . is between . . . and . . .” is a three-place predicate, requiring three arguments, and so on. In general, a predicate variable followed by any number of individual variables is a wff of the predicate calculus. Such a wff is known as an atomic formula, and the predicate variable in it is said to be of degree n if n is the number of individual variables following it. The degree of a predicate variable is sometimes indicated by a superscript—e.g., ϕxyz may be written as ϕ³xyz; ϕ³xy would then be regarded as not well formed. This practice is theoretically more accurate, but the superscripts are commonly omitted for ease of reading when no confusion is likely to arise.

      Atomic formulas may be combined with truth-functional operators to give formulas such as ϕx ∨ ψy [Example: “Either the customer (x) is friendly (ϕ) or else John (y) is disappointed (ψ)”]; ϕxy ⊃ ∼ψx [Example: “If the road (x) is above (ϕ) the flood line (y), then the road is not wet (∼ψ)”]; and so on. Formulas so formed, however, are valid when and only when they are substitution-instances of valid wffs of PC and hence in a sense do not transcend PC. More interesting formulas are formed by the use, in addition, of quantifiers. There are two kinds of quantifiers: universal quantifiers, written as “(∀ )” or often simply as “( ),” where the blank is filled by a variable, which may be read “For all ”; and existential quantifiers, written as “(∃ ),” which may be read “For some ” or “There is a  such that.” (“Some” is to be understood as meaning “at least one.”) Thus (∀x)ϕx is to mean “For all x, x is ϕ” or, more simply, “Everything is ϕ”; and (∃x)ϕx is to mean “For some x, x is ϕ” or, more simply, “Something is ϕ” or “There is a ϕ.” Slightly more complex examples are (∀x)(ϕx ⊃ ψx) for “Whatever is ϕ is ψ,” (∃x)(ϕx · ψx) for “Something is both ϕ and ψ,” (∀x)(∃y)ϕxy for “Everything bears the relation ϕ to at least one thing,” and (∃x)(∀y)ϕxy for “There is something that bears the relation ϕ to everything.” To take a concrete case, if ϕxy means “x loves y” and the values of x and y are taken to be human beings, then the last two formulas mean, respectively, “Everybody loves somebody” and “Somebody loves everybody.”

      Intuitively, the notions expressed by the words “some” and “every” are connected in the following way: to assert that something has a certain property amounts to denying that everything lacks that property (for example, to say that something is white is to say that not everything is nonwhite); and, similarly, to assert that everything has a certain property amounts to denying that there is something that lacks it. These intuitive connections are reflected in the usual practice of taking one of the quantifiers as primitive and defining the other in terms of it. Thus ∀ may be taken as primitive, and ∃ introduced by the definition

(∃a)α =Df ∼(∀a)∼α
in which a is any variable and α is any wff; or, alternatively, ∃ may be taken as primitive, and ∀ introduced by the definition
(∀a)α =Df ∼(∃a)∼α.
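
      On any particular finite interpretation the defining equivalence can be checked directly: “something is ϕ” comes out true exactly when “everything is non-ϕ” comes out false. A Python sketch (the domain and the predicate are arbitrary illustrative choices):

    D = (1, 2, 3, 4)
    phi = lambda x: x % 2 == 0        # "x is even", say

    something_phi = any(phi(x) for x in D)
    not_everything_nonphi = not all(not phi(x) for x in D)
    print(something_phi == not_everything_nonphi)   # True, whatever D and phi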

The lower predicate calculus
      A predicate calculus in which the only variables that occur in quantifiers are individual variables is known as a lower (or first-order) predicate calculus. Various lower predicate calculi have been constructed. In the most straightforward of these, to which the most attention will be devoted in this discussion and which subsequently will be referred to simply as LPC, the wffs can be specified as follows: Let the primitive symbols be (1) x, y, . . . (individual variables), (2) ϕ, ψ, . . . , each of some specified degree (predicate variables), and (3) the symbols ∼, ∨, ∀, (, and ). An infinite number of each type of variable can now be secured as before by the use of numerical subscripts. The symbols · , ⊃, and ≡ are defined as in PC, and ∃ as explained above. The formation rules are:
● An expression consisting of a predicate variable of degree n followed by n individual variables is a wff.
● If α is a wff, so is ∼α.
● If α and β are wffs, so is (α ∨ β).
● If α is a wff and a is an individual variable, then (∀a)α is a wff. (In such a wff, α is said to be the scope of the quantifier.)

      If a is any individual variable and α is any wff, every occurrence of a in α is said to be bound (by the quantifiers) when occurring in the wffs (∀a)α and (∃a)α. Any occurrence of a variable that is not bound is said to be free. Thus, in (∀x)(ϕx ∨ ϕy) the x in ϕx is bound, since it occurs within the scope of a quantifier containing x, but y is free. In the wffs of a lower predicate calculus, every occurrence of a predicate variable (ϕ, ψ, χ, . . . ) is free. A wff containing no free individual variables is said to be a closed wff of LPC. If a wff of LPC is considered as a proposition form, instances of it are obtained by replacing all free variables in it by predicates or by names of individuals, as appropriate. A bound variable, on the other hand, indicates not a point in the wff where a replacement is needed but a point (so to speak) at which the relevant quantifier applies.

      For example, in ϕx, in which both variables are free, each variable must be replaced appropriately if a proposition of the form in question (such as “Socrates is white”) is to be obtained; but in (∃x)ϕx, in which x is bound, it is necessary only to replace ϕ by a predicate in order to obtain a complete proposition (e.g., replacing ϕ by “is white” yields the proposition “Something is white”).

Validity in LPC
      Intuitively, a wff of LPC is valid if and only if all its instances are true—i.e., if and only if every result of replacing each of its free variables appropriately and uniformly is a true proposition. A formal definition of validity in LPC to express this intuitive notion more precisely can be given as follows: for any wff of LPC, any number of LPC models can be formed. An LPC model has two elements: One is a set, D, of objects, known as a domain. D may contain as many or as few objects as one chooses, but it must contain at least one, and the objects may be of any kind. The other element, V, is a system of value assignments satisfying the following conditions. To each individual variable there is assigned some member of D (not necessarily a different one in each case). Assignments are next made to the predicate variables in the following way: if ϕ is monadic, there is assigned to it some subset of D (possibly the whole of D); intuitively this subset can be viewed as the set of all the objects in D that have the property ϕ. If ϕ is dyadic, there is assigned to it some set of ordered pairs (i.e., pairs of objects of which one is marked out as the first and the other as the second) drawn from D; intuitively these can be viewed as all the pairs of objects in D in which the relation ϕ holds between the first object in the pair and the second. In general, if ϕ is of degree n, there is assigned to it some set of ordered n-tuples (groups of n objects) of members of D. It is then stipulated that an atomic formula is to have the value 1 in the model if the members of D assigned to its individual variables form, in that order, one of the n-tuples assigned to the predicate variable in it; otherwise, it is to have the value 0. Thus in the simplest case, ϕx will have the value 1 if the object assigned to x is one of the objects in the set assigned to ϕ; and, if it is not, then ϕx will have the value 0. The values of truth functions are determined by the values of their arguments as in PC. Finally, the value of (∀x)α is to be 1 if both (1) the value of α itself is 1 and (2) α would always still have the value 1 if a different assignment were made to x but all the other assignments were left precisely as they were; otherwise (∀x)α is to have the value 0. Since ∃ can be defined in terms of ∀, these rules cover all the wffs of LPC. A given wff may of course have the value 1 in some LPC models but the value 0 in others. But a valid wff of LPC may now be defined as one that has the value 1 in every LPC model. If 1 and 0 are viewed as representing truth and falsity, respectively, then validity is defined as truth in every model.
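
      A small Python sketch may make the definition more concrete (the domain and the value assignments are arbitrary illustrative choices):

    D = {"a", "b", "c"}                            # the domain
    loves = {("a", "b"), ("b", "b"), ("c", "b")}   # ordered pairs assigned to phi

    phi = lambda x, y: 1 if (x, y) in loves else 0

    # (for all x)(for some y) phi(x, y): everything bears phi to something
    print(all(any(phi(x, y) == 1 for y in D) for x in D))   # True in this model

    # (for some y)(for all x) phi(x, y): something is borne phi by everything
    print(any(all(phi(x, y) == 1 for x in D) for y in D))   # True here: y = "b"

Such a calculation shows only what value a wff takes in one model; validity requires the value 1 in every model, and, as noted in the next paragraph, no effective procedure can decide that in general.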

      Although the above definition of validity in LPC is quite precise, it does not yield, as did the corresponding definition of PC validity in terms of truth tables, an effective decision procedure. It can, indeed, be shown that no generally applicable decision procedure for LPC is possible—i.e., that LPC is not a decidable system. This does not mean that it is never possible to prove that a given wff of LPC is valid—the validity of an unlimited number of such wffs can in fact be demonstrated—but it does mean that in the case of LPC, unlike that of PC, there is no general procedure, stated in advance, that would enable one to determine, for any wff whatever, whether it is valid or not.

Logical manipulations in LPC
      The intuitive connections between some and every noted earlier are reflected in the fact that the following equivalences are valid:

(∃x)ϕx ≡ ∼(∀x)∼ϕx
(∀x)ϕx ≡ ∼(∃x)∼ϕx
These equivalences remain valid when ϕx is replaced by any wff, however complex; i.e., for any wff α whatsoever,
(∃x)α ≡ ∼(∀x)∼α
and
(∀x)α ≡ ∼(∃x)∼α
are valid. Because the rule of substitution of equivalents can be shown to hold in LPC, it follows that (∃x) may be replaced anywhere in a wff by ∼(∀x)∼, or (∀x) by ∼(∃x)∼, and the resulting wff will be equivalent to the original. Similarly, because the law of double negation permits the deletion of a pair of consecutive negation signs, ∼(∃x) may be replaced by (∀x)∼, and ∼(∀x) by (∃x)∼.

      These principles are easily extended to more complex cases. To say that there is a pair of objects satisfying a certain condition is equivalent to denying that every pair of objects fails to satisfy that condition, and to say that every pair of objects satisfies a certain condition is equivalent to denying that there is any pair of objects that fails to satisfy that condition. These equivalences are expressed formally by the validity, again for any wff α, of

(∃x)(∃y)α ≡ ∼(∀x)(∀y)∼α
and
(∀x)(∀y)α ≡ ∼(∃x)(∃y)∼α
and by the resulting replaceability anywhere in a wff of (∃x)(∃y) by ∼(∀x)(∀y)∼, or of (∀x)(∀y) by ∼(∃x)(∃y)∼.

      Analogously, (∃x)(∀y) can be replaced by ∼(∀x)(∃y)∼ [e.g., (∃x)(∀y)(x loves y)—“There is someone who loves everyone”—is equivalent to ∼(∀x)(∃y)∼(x loves y)—“It is not true of everyone that there is someone whom he does not love”]; (∀x)(∃y) can be replaced by ∼(∃x)(∀y)∼; and in general the following rule, covering sequences of quantifiers of any length, holds:

      1. If a wff contains an unbroken sequence of quantifiers, then the wff that results from replacing ∀ by ∃ and vice versa throughout that sequence and inserting or deleting ∼ at each end of it is equivalent to the original wff.

      This may be called the rule of quantifier transformation. It reflects, in a generalized form, the intuitive connections between “some” and “every” that were noted above.
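      Because the rule operates purely on the shape of a formula, it is easily mechanized. A small Python sketch (reusing the illustrative tuple encoding introduced above):

# Rule of quantifier transformation: drive ∼ through a quantifier prefix,
# swapping 'all' (∀) and 'some' (∃) at each step.

DUAL = {'all': 'some', 'some': 'all'}

def push_negation(f):
    """Replace ∼(Qx)α by (Q'x)∼α repeatedly, Q' being the dual quantifier."""
    if f[0] == 'not' and f[1][0] in DUAL:
        q, v, body = f[1]
        return (DUAL[q], v, push_negation(('not', body)))
    return f

f = ('not', ('all', 'x', ('some', 'y', ('atom', 'phi', 'x', 'y'))))
print(push_negation(f))
# -> ('some', 'x', ('all', 'y', ('not', ('atom', 'phi', 'x', 'y'))))
# i.e., ∼(∀x)(∃y)ϕxy is transformed into (∃x)(∀y)∼ϕxy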

      The following are also valid, again where α is any wff:

(∀x)(∀y)α ≡ (∀y)(∀x)α
(∃x)(∃y)α ≡ (∃y)(∃x)α
The extensions of these lead to the following rule:

      2. If a wff contains an unbroken sequence either of universal or of existential quantifiers, these quantifiers may be rearranged in any order and the resulting wff will be equivalent to the original wff.

      This may be called the rule of quantifier rearrangement.

      Two other important rules concern implications (implication), not equivalences:

      3. If a wff β begins with an unbroken sequence of quantifiers, and β′ is obtained from β by replacing ∀ by ∃ at one or more places in the sequence, then β is stronger than β′—in the sense that (β ⊃ β′) is valid but (β′ ⊃ β) is in general not valid.

      4. If a wff β begins with an unbroken sequence of quantifiers in which some existential quantifier Q1 precedes some universal quantifier Q2, and if β′ is obtained from β by moving Q1 to the right of Q2, then β is stronger than β′.

      As illustrations of these rules, the following are valid for any wff α: by rule 3, (∀x)(∀y)α ⊃ (∀x)(∃y)α and (∀x)(∀y)α ⊃ (∃x)(∀y)α; and, by rule 4, (∃x)(∀y)α ⊃ (∀y)(∃x)α. In each case the converses are not valid (though they may be valid in particular cases in which α is of some special form).

      Some of the uses of the above rules can be illustrated by considering a wff α that contains precisely two free individual variables. By prefixing to α two appropriate quantifiers and possibly one or more negation signs, it is possible to form a closed wff (called a closure of α) that will express a determinate proposition when a meaning is assigned to the predicate variables. The above rules can be used to list exhaustively the nonequivalent closures of α and the implication relations between them. The simplest example is ϕxy, which for illustrative purposes can be taken to mean “x loves y.” Application of rules 1 and 2 will show that every closure of ϕxy is equivalent to one or another of the following 12 wffs (none of which is in fact equivalent to any of the others):
● (a) (∀x)(∀y)ϕxy (“Everybody loves everybody”);
● (b) (∃x)(∀y)ϕxy (“Somebody loves everybody”);
● (c) (∃y)(∀x)ϕxy (“There is someone whom everyone loves”);
● (d) (∀y)(∃x)ϕxy (“Each person is loved by at least one person”);
● (e) (∀x)(∃y)ϕxy (“Each person loves at least one person”);
● (f) (∃x)(∃y)ϕxy (“Somebody loves somebody”); and
● (g)–(l) the respective negations of each of the above.

      Rules 3 and 4 show that the following implications among formulas (a)–(f) are valid:

(a) ⊃ (b); (b) ⊃ (d); (d) ⊃ (f); (a) ⊃ (c); (c) ⊃ (e); (e) ⊃ (f).

      The implications holding among the negations of (a)–(f) follow from these by the law of transposition; e.g., since (a) ⊃ (b) is valid, so is ∼(b) ⊃ ∼(a). The quantification of wffs containing three, four, etc., variables can be dealt with by the same rules.
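      That such converses fail can be confirmed by searching small models for counterexamples. The following Python sketch (illustrative throughout) enumerates every assignment to ϕ over a two-member domain and finds one in which (d) is true but (b) is false, so that (d) ⊃ (b) is not valid:

from itertools import product

# Search two-member models for one in which (d) (∀y)(∃x)ϕxy holds
# but (b) (∃x)(∀y)ϕxy fails.
D = (0, 1)
pairs = list(product(D, D))
for bits in product((0, 1), repeat=len(pairs)):
    phi = {p for p, bit in zip(pairs, bits) if bit}        # extension of ϕ
    d = all(any((x, y) in phi for x in D) for y in D)      # (∀y)(∃x)ϕxy
    b = any(all((x, y) in phi for y in D) for x in D)      # (∃x)(∀y)ϕxy
    if d and not b:
        print(phi)   # e.g. {(0, 1), (1, 0)}: each is loved by someone,
        break        # but no one loves everyone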

      Intuitively, (∀x)ϕx and (∀y)ϕy both “say the same thing”—namely, that everything is ϕ—and (∃x)ϕx and (∃y)ϕy both mean simply that something is ϕ. Clearly, so long as the same variable occurs both in the quantifier and as the argument of ϕ, it does not matter what letter is chosen for this purpose. The procedure of replacing some variable in a quantifier, together with every occurrence of that variable in its scope, by some other variable that does not occur elsewhere in its scope is known as relettering a bound variable. If β is the result of relettering a bound variable in a wff α, then α and β are said to be bound alphabetical variants of each other, and bound alphabetical variants are always equivalent. The reason for restricting the replacement variable to one not occurring elsewhere in the scope of the quantifier can be seen from an example: If ϕxy is taken as before to mean “x loves y,” the wff (∀x)ϕxy expresses the proposition form “Everyone loves y,” in which the identity of y is left unspecified, and so does its bound alphabetical variant (∀z)ϕzy. If x were replaced by y, however, the closed wff (∀y)ϕyy would be obtained, which expresses the proposition that everyone loves himself and is clearly not equivalent to the original.

      A wff in which all the quantifiers occur in an unbroken sequence at the beginning, with the scope of each extending to the end of the wff, is said to be in prenex normal form (PNF). Wffs that are in PNF are often more convenient to work with than those that are not. For every wff of LPC, however, there is an equivalent wff in PNF (often simply called its PNF). One effective method for finding the PNF of any given wff is the following:
● Reletter bound variables as far as is necessary to ensure (a) that each quantifier contains a distinct variable and (b) that no variable in the wff occurs both bound and free.
● Use definitions or PC equivalences to eliminate all operators except ∼, ·, and ∨.
● Use the De Morgan laws and the rule of quantifier transformation to eliminate all occurrences of ∼ immediately before parentheses or quantifiers.
● Gather all of the quantifiers into a sequence at the beginning in the order in which they appear in the wff and take the whole of what remains as their scope. Example:
(∀x){[ϕx · (∃y)ψxy] ⊃ (∃y)χxy} ⊃ (∃z)(ϕz ⊃ ψzx).

      Step 1 can be achieved by relettering the third and fourth occurrences of y and every occurrence of x except the last (which is free); thus

(∀w){[ϕw · (∃y)ψwy] ⊃ (∃u)χwu} ⊃ (∃z)(ϕz ⊃ ψzx).
Step 2 now yields
∼(∀w){∼[ϕw · (∃y)ψwy] ∨ (∃u)χwu} ∨ (∃z)(∼ϕz ∨ ψzx).
By step 3 this becomes
(∃w){[ϕw · (∃y)ψwy] · (∀u)∼χwu} ∨ (∃z)(∼ϕz ∨ ψzx).
Finally, step 4 yields
(∃w)(∃y)(∀u)(∃z){[(ϕw · ψwy) · ∼χwu] ∨ (∼ϕz ∨ ψzx)},
which is in PNF.
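      Steps 2–4 of this procedure lend themselves to a recursive implementation. The Python sketch below is an illustrative simplification: it assumes that step 1 has already been carried out, so that all bound variables are distinct and none occurs both bound and free, and that step 2 has left only the operators ∼, ·, and ∨ (encoded 'not', 'and', 'or'):

DUAL = {'all': 'some', 'some': 'all'}

def nnf(f):
    """Step 3: drive negations inward by De Morgan's laws and the rule of
    quantifier transformation."""
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'not':  return nnf(g[1])
        if g[0] == 'and':  return ('or',  nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':   return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] in DUAL:   return (DUAL[g[0]], g[1], nnf(('not', g[2])))
        return f                                   # negated atomic formula
    if f[0] in ('and', 'or'):
        return (f[0], nnf(f[1]), nnf(f[2]))
    if f[0] in DUAL:
        return (f[0], f[1], nnf(f[2]))
    return f

def pull(f):
    """Step 4: return (quantifier prefix, quantifier-free matrix), gathering
    quantifiers in their order of appearance."""
    if f[0] in DUAL:
        prefix, matrix = pull(f[2])
        return [(f[0], f[1])] + prefix, matrix
    if f[0] in ('and', 'or'):
        p1, m1 = pull(f[1])
        p2, m2 = pull(f[2])
        return p1 + p2, (f[0], m1, m2)
    return [], f

def pnf(f):
    prefix, matrix = pull(nnf(f))
    for q, v in reversed(prefix):                  # wrap innermost first
        matrix = (q, v, matrix)
    return matrix

Applied to the relettered wff of the example, pull gathers (∃w), (∃y), (∀u), and (∃z) in exactly the order required by step 4.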

Classification of dyadic relations
      Consider the closed wff,

(∀x)(∀y)(ϕxy ⊃ ϕyx),
which means that, whenever the relation ϕ holds between one object and a second, it also holds between that second object and the first. This expression is not valid, since it is true for some relations but false for others. A relation for which it is true is called a symmetrical relation (example: “is parallel to”). If the relation ϕ is such that, whenever it holds between one object and a second, it fails to hold between the second and the first—i.e., if ϕ is such that
(∀x)(∀y)(ϕxy ⊃ ∼ϕyx)
—then ϕ is said to be asymmetrical (example: “is greater than”). A relation that is neither symmetrical nor asymmetrical is said to be nonsymmetrical. Thus ϕ is nonsymmetrical if
(∃x)(∃y)(ϕxy · ϕyx) · (∃x)(∃y)(ϕxy · ∼ϕyx)
(example: “loves”).

      Dyadic relations can also be characterized in terms of another threefold division: A relation ϕ is said to be transitive (transitive law) if, whenever it holds between one object and a second and also between that second object and a third, it holds between the first and the third—i.e., if

(∀x)(∀y)(∀z)[(ϕxy · ϕyz) ⊃ ϕxz]
(example: “is greater than”). An intransitive relation is one that, whenever it holds between one object and a second and also between that second and a third, fails to hold between the first and the third; i.e., ϕ is intransitive if
(∀x)(∀y)(∀z)[(ϕxy · ϕyz) ⊃ ∼ϕxz]
(example: “is father of”). A relation that is neither transitive nor intransitive is said to be nontransitive. Thus ϕ is nontransitive if
(∃x)(∃y)(∃z)(ϕxy · ϕyz · ϕxz) · (∃x)(∃y)(∃z)(ϕxy · ϕyz · ∼ϕxz)
(example: “is a first cousin of”).

      A relation ϕ that always holds between any object and itself is said to be reflexive; i.e., ϕ is reflexive if

(∀x)ϕxx
(example: “is identical with”). If ϕ never holds between any object and itself—i.e., if
∼(∃x)ϕxx
—then ϕ is said to be irreflexive (example: “is greater than”). If ϕ is neither reflexive nor irreflexive—i.e., if
(∃x)ϕxx · (∃x)∼ϕxx
—then ϕ is said to be nonreflexive (example: “admires”).

      A relation such as “is of the same length as” is not strictly reflexive, as some objects do not have a length at all and thus are not of the same length as anything, even themselves. But this relation is reflexive in the weaker sense that whenever an object is of the same length as anything it is of the same length as itself. Such a relation is said to be quasi-reflexive. Thus ϕ is quasi-reflexive if

(∀x)[(∃y)ϕxy ⊃ ϕxx].
A reflexive relation is of course also quasi-reflexive.

      For the most part, these three classifications are independent of each other; thus a symmetrical relation may be transitive (like “is equal to”) or intransitive (like “is perpendicular to”) or nontransitive (like “is one mile distant from”). There are, however, certain limiting principles, of which the most important are:
● Every relation that is symmetrical and transitive is at least quasi-reflexive.
● Every asymmetrical relation is irreflexive.
● Every relation that is transitive and irreflexive is asymmetrical.

      A relation that is reflexive, symmetrical, and transitive is called an equivalence relation.
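      When a relation is given in extension over a finite domain, every one of these properties can be tested directly. A Python sketch (the domain and the sample relation are chosen for illustration):

from itertools import product

# A relation is represented as a set of ordered pairs over a domain D.
def symmetrical(r):      return all((y, x) in r for (x, y) in r)
def asymmetrical(r):     return all((y, x) not in r for (x, y) in r)
def transitive(r):       return all((x, z) in r
                                    for (x, y) in r for (w, z) in r if y == w)
def reflexive(r, D):     return all((x, x) in r for x in D)
def irreflexive(r):      return all(x != y for (x, y) in r)
def quasi_reflexive(r):  return all((x, x) in r for (x, y) in r)

def equivalence(r, D):
    return reflexive(r, D) and symmetrical(r) and transitive(r)

D = {1, 2, 3}
greater = {(x, y) for x, y in product(D, D) if x > y}   # "is greater than"
print(transitive(greater), irreflexive(greater), asymmetrical(greater))
# -> True True True, in accordance with the third limiting principle above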

Axiomatization of LPC
      Rules of uniform substitution for predicate calculi, though formulable, are mostly very complicated, and, to avoid the necessity for these rules, axioms for these systems are therefore usually given by axiom schemata in the sense explained earlier in Axiomatization of PC. Given the formation rules and definitions stated in the introductory paragraph of The lower predicate calculus, the following is presented as one standard axiomatic basis for LPC:

      Axiom schemata:
● Any LPC substitution-instance of any valid wff of PC is an axiom.
● Any wff of the form (∀a)α ⊃ β is an axiom, if β is either identical with α or differs from it only in that, wherever α has a free occurrence of a, β has a free occurrence of some other individual variable b.
● Any wff of the form (∀a)(α ⊃ β) ⊃ [α ⊃ (∀a)β] is an axiom, provided that α contains no free occurrence of a.

      Transformation rules:
● Modus ponens (as given above in Axiomatization of PC).
● If α is a theorem, so is (∀a)α, where a is any individual variable (rule of universal generalization).

      The axiom schemata call for some explanation and comment. By an LPC substitution-instance of a wff of PC is meant any result of uniformly replacing every propositional variable in that wff by a wff of LPC. Thus one LPC substitution-instance of (p ⊃ ∼q) ⊃ (q ⊃ ∼p) is [ϕxy ⊃ ∼(∀x)ψx] ⊃ [(∀x)ψx ⊃ ∼ϕxy]. Axiom schema 1 makes available in LPC all manipulations such as commutation, transposition, and distribution, which depend only on PC principles. Examples of wffs that are axioms by axiom schema 2 are (∀x)ϕx ⊃ ϕx, (∀x)ϕx ⊃ ϕy, and (∀x)(∃y)ϕxy ⊃ (∃y)ϕzy. To see why it is necessary for the variable that replaces a to be free in β, consider the last example: Here a is x, α is (∃y)ϕxy, in which x is free, and β is (∃y)ϕzy, in which z is free and replaces x. But had y, which would become bound by the quantifier (∃y), been chosen as a replacement instead of z, the result would have been (∀x)(∃y)ϕxy ⊃ (∃y)ϕyy, the invalidity of which can be seen intuitively by taking ϕxy to mean “x is a child of y”: for then (∀x)(∃y)ϕxy will mean that everyone is a child of someone, which is true, but (∃y)ϕyy will mean that someone is a child of himself, which is false. The need for the proviso in axiom schema 3 can also be seen from an example. Defiance of the proviso would give as an axiom (∀x)(ϕx ⊃ ψx) ⊃ [ϕx ⊃ (∀x)ψx]; and, if ϕx were taken to mean “x is a Spaniard,” ψx to mean “x is a European,” and the free occurrence of x (the first occurrence in the consequent) to stand for General Franco, then the antecedent would mean that every Spaniard is a European, but the consequent would mean that, if General Franco is a Spaniard, then everyone is a European.

      It can be proved—though the proof is not an elementary one—that the theorems derivable from the above basis are precisely the wffs of LPC that are valid by the definition of validity given above in Validity in LPC (formal logic). Several other bases for LPC are known that also have this property. The axiom schemata and transformation rules here given are such that any purported proof of a theorem can be effectively checked to determine whether it really is a proof or not; nevertheless, theoremhood in LPC, like validity in LPC, is not effectively decidable, in that there is no effective method of telling with regard to any arbitrary wff whether it is a theorem or not. In this respect, axiomatic bases for LPC contrast with those for PC (propositional calculus).

Semantic tableaux
      Since the 1980s another technique for determining the validity of arguments in either PC or LPC has gained some popularity, owing both to its ease of learning and to its straightforward implementation by computer programs. Originally suggested by the Dutch logician Evert W. Beth (1908–64), it was more fully developed and publicized by the American mathematician and logician Raymond M. Smullyan (b. 1919). Resting on the observation that it is impossible for the premises of a valid argument to be true while the conclusion is false, this method attempts to interpret (or evaluate) the premises in such a way that they are all simultaneously satisfied and the negation of the conclusion is also satisfied. Success in such an effort would show the argument to be invalid, while failure to find such an interpretation would show it to be valid.

      The construction of a semantic tableau proceeds as follows: Express the premises and negation of the conclusion of an argument in PC using only negation (∼) and disjunction (∨) as propositional connectives. Eliminate every occurrence of two negation signs in a sequence (e.g., ∼∼∼∼∼a becomes ∼a). Now construct a tree diagram branching downward such that each disjunction is replaced by two branches, one for the left disjunct and one for the right. The original disjunction is true if either branch is true. Reference to De Morgan's laws shows that a negation of a disjunction is true just in case the negations of both disjuncts are true (i.e., ∼(p ∨ q) ≡ (∼p · ∼q)). This semantic observation leads to the rule that the negation of a disjunction is developed by adding the negations of both disjuncts to the same branch.

      Consider, for example, the argument whose premises are a ∨ b and ∼a and whose conclusion is b. Write down the premises together with the negation of the conclusion:

a ∨ b
∼a
∼b

      Now strike out the disjunction and form two branches, one containing a and the other containing b; each branch retains the sentences ∼a and ∼b standing above it.
      Only if all the sentences in at least one branch are true is it possible for the original premises to be true and the conclusion false (equivalently for the negation of the conclusion). By tracing the line upward in each branch to the top of the tree, one observes that no valuation of a in the left branch will result in all the sentences in that branch receiving the value true (because of the presence of a and ∼a). Similarly, in the right branch the presence of b and ∼b makes it impossible for a valuation to result in all the sentences of the branch receiving the value true. These are all the possible branches; thus, it is impossible to find a situation in which the premises are true and the conclusion false. The original argument is therefore valid.
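      The whole procedure, restricted as in the text to ∼ and ∨, can be captured in a few lines of Python (the encoding is an illustrative convention): each branch is developed until only variables and negated variables remain, and the argument is pronounced valid exactly when every branch closes by containing some wff together with its negation.

def complement(f):
    return f[1] if f[0] == 'not' else ('not', f)

def closed(literals):
    """A branch closes when it contains a wff and that wff's negation."""
    return any(complement(f) in literals for f in literals)

def open_branch_exists(todo, literals):
    """True when some fully developed branch stays open (argument invalid)."""
    if not todo:
        return not closed(literals)
    f, rest = todo[0], todo[1:]
    if f[0] == 'not' and f[1][0] == 'not':       # cancel double negations
        return open_branch_exists([f[1][1]] + rest, literals)
    if f[0] == 'or':                             # disjunction: two branches
        return (open_branch_exists([f[1]] + rest, literals) or
                open_branch_exists([f[2]] + rest, literals))
    if f[0] == 'not' and f[1][0] == 'or':        # negated disjunction:
        return open_branch_exists([('not', f[1][1]),   # one branch with
                                   ('not', f[1][2])] + rest, literals)
    return open_branch_exists(rest, literals + [f])    # literal: record it

def valid(premises, conclusion):
    return not open_branch_exists(list(premises) + [('not', conclusion)], [])

# a ∨ b and ∼a, therefore b:
print(valid([('or', 'a', 'b'), ('not', 'a')], 'b'))   # -> True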

      This technique can be extended to deal with other connectives: a conjunction α · β adds both conjuncts to its branch; a negated conjunction ∼(α · β) branches into ∼α and ∼β; and α ⊃ β may first be rewritten as its PC equivalent ∼α ∨ β.

      Furthermore, in LPC, rules for instantiating quantified wffs need to be introduced. Clearly, any branch containing both (∀x)ϕx and ∼ϕy is one in which not all the sentences in that branch can be simultaneously satisfied (under the assumption of ω-consistency; see the article metalogic). Again, if all the branches fail to be simultaneously satisfiable, the original argument is valid.

Special systems of LPC
      LPC as expounded above may be modified by either restricting or extending the range of wffs in various ways.
● Partial systems of LPC. Some of the more important systems produced by restriction are here outlined:

a. It may be required that every predicate variable be monadic, while still allowing an infinite number of individual and predicate variables. The atomic wffs are then simply those consisting of a predicate variable followed by a single individual variable. Otherwise, the formation rules remain as before, and the definition of validity is also as before, though simplified in obvious ways. This system is known as the monadic LPC; it provides a logic of properties but not of relations. One important characteristic of this system is that it is decidable. (The introduction of even a single dyadic predicate variable, however, would make the system undecidable, and, in fact, even the system that contains only a single dyadic predicate variable and no other predicate variables at all has been shown to be undecidable.)
b. A still simpler system can be formed by requiring (1) that every predicate variable be monadic, (2) that only a single individual variable (e.g., x) be used, (3) that every occurrence of this variable be bound, and (4) that no quantifier occur within the scope of any other. Examples of wffs of this system are (∀x)[ϕx ⊃ (ψx · χx)] (“Whatever is ϕ is both ψ and χ”); (∃x)(ϕx · ∼ψx) (“There is something that is ϕ but not ψ”); and (∀x)(ϕx ⊃ ψx) ⊃ (∃x)(ϕx · ψx) (“If whatever is ϕ is ψ, then something is both ϕ and ψ”). The notation for this system can be simplified by omitting x everywhere and writing ∃ϕ for “Something is ϕ,” ∀(ϕ ⊃ ψ) for “Whatever is ϕ is ψ,” and so on. Although this system is more rudimentary even than the monadic LPC (of which it is a fragment), the forms of a wide range of inferences can be represented in it. It is also a decidable system, and decision procedures of an elementary kind can be given for it.
● Extensions of LPC. More elaborate systems, in which a wider range of propositions can be expressed, have been constructed, by adding to LPC new symbols of various types. The most straightforward of such additions are:

a. One or more individual constants (constant) (say a, b, . . .): these constants are interpreted as names of specific individuals; formally they are distinguished from individual variables by the fact that they cannot occur within quantifiers: e.g., (∀x) is a quantifier but (∀a) is not.
b. One or more predicate constants (say A, B, . . .), each of some specified degree, thought of as designating specific properties or relations.

      A further possible addition, which calls for somewhat fuller explanation, consists of symbols designed to stand for functions (function). The notion of a function may be sufficiently explained for present purposes as follows: There is said to be a certain function of n arguments (or, of degree n) when there is a rule that specifies a unique object (called the value of the function) whenever all the arguments are specified. In the domain of human beings, for example, “the mother of —” is a monadic function (a function of one argument), since for every human being there is a unique individual who is his mother; and in the domain of the natural numbers (i.e., 0, 1, 2, . . .), “the sum of —and —” is a function of two arguments, since for any pair of natural numbers there is a natural number that is their sum. A function symbol can be thought of as forming a name out of other names (its arguments): thus, whenever x and y name numbers, “the sum of x and y” also names a number, and similarly for other kinds of functions and arguments.

      To enable functions to be expressed in LPC there may be added:
c. One or more function variables (say f, g, . . .) or one or more function constants (say, F, G, . . .) or both, each of some specified degree. The former are interpreted as ranging over functions of the degrees specified, and the latter as designating specific functions of that degree.

      When any or all of a–c are added to LPC, the formation rules listed in the first paragraph of the section The lower predicate calculus (formal logic) need to be modified to enable the new symbols to be incorporated into wffs. This can be done as follows: A term is first defined as either (1) an individual variable or (2) an individual constant or (3) any expression formed by prefixing a function variable or function constant of degree n to any n terms (these terms—the arguments of the function symbol—are usually separated by commas and enclosed in parentheses). Formation rule 1 is then replaced by:
● 1′. An expression consisting of a predicate variable or predicate constant of degree n followed by n terms is a wff.

      The axiomatic basis given in Axiomatization of LPC (formal logic) also requires the following modification: in axiom schema 2 any term is allowed to replace a when β is formed, provided that no variable that is free in the term becomes bound in β. The following examples will illustrate the use of the aforementioned additions to LPC: Let the values of the individual variables be the natural numbers; let the individual constants a and b stand for the numbers 2 and 3, respectively; let A mean “is prime”; and let F represent the dyadic function “the sum of.” Then AF(a,b) expresses the proposition “The sum of 2 and 3 is prime,” and (∃x) AF(x,a) expresses the proposition “There exists a number such that the sum of it and 2 is prime.”

      The introduction of constants is normally accompanied by the addition, to the axiomatic (axiom) basis, of special axioms containing those constants, designed to express principles that hold of the objects, properties, relations, or functions represented by them—though they do not hold of objects, properties, relations, or functions in general. It may be decided, for example, to use the constant A to represent the dyadic relation “is greater than” (so that Axy is to mean “x is greater than y,” and so forth). This relation, unlike many others, is transitive; i.e., if one object is greater than a second and that second is in turn greater than a third, then the first is greater than the third. Hence, the following special axiom schema might be added: If t1, t2, and t3 are any terms, then

(At1t2 · At2t3) ⊃ At1t3
is an axiom. By such means systems can be constructed to express the logical structures of various particular disciplines. The area in which most work of this kind has been done is that of natural-number arithmetic.

      PC (propositional calculus) and LPC are sometimes combined into a single system. This may be done most simply by adding propositional variables to the list of LPC primitives, adding a formation rule to the effect that a propositional variable standing alone is a wff, and deleting “LPC” in axiom schema 1. This yields as wffs such expressions as (p ∨ q) ⊃ (∀x)ϕx and (∃x)[p ⊃ (∀y)ϕxy].
● LPC-with-identity. The word “is” is not always used in the same way. In a proposition such as (1) “Socrates is snub-nosed,” the expression preceding the “is” names an individual and the expression following it stands for a property attributed to that individual. But, in a proposition such as (2) “Socrates is the Athenian philosopher who drank hemlock,” the expressions preceding and following the “is” both name individuals, and the sense of the whole proposition is that the individual named by the first is the same individual as the individual named by the second. Thus in (2) “is” can be expanded to “is the same individual as,” whereas in (1) it cannot. As used in (2), “is” stands for a dyadic relation—namely, identity—that the proposition asserts to hold between the two individuals. An identity proposition is to be understood in this context as asserting no more than this; in particular it is not to be taken as asserting that the two naming expressions have the same meaning. A much discussed example to illustrate this last point is “The morning star is the evening star.” It is false that the expressions “the morning star” and “the evening star” mean the same, but true that the object referred to by the former is the same as that referred to by the latter.

      To enable the forms of identity propositions to be expressed, a dyadic predicate constant is added to LPC, for which the most usual notation is = (written between, rather than before, its arguments). The intended interpretation of x = y is that x is the same individual as y, and the most convenient reading is “x is identical with y.” Its negation ∼(x = y) is commonly abbreviated to x ≠ y. To the definition of an LPC model in Validity in LPC (formal logic) there is now added the rule (which accords in an obvious way with the intended interpretation) that the value of x = y is to be 1 if the same member of D is assigned to both x and y and that otherwise its value is to be 0; validity can then be defined as before. The following additions (or some equivalent ones) are made to the axiomatic basis for LPC: the axiom (I1) x = x and the axiom schema (I2) that, where a and b are any individual variables and α and β are wffs that differ only in that, at one or more places where α has a free occurrence of a, β has a free occurrence of b, (a = b) ⊃ (α ⊃ β) is an axiom. Such a system is known as a lower-predicate-calculus-with-identity; it may of course be further augmented in the other ways referred to above in Extensions of LPC, in which case any term may be an argument of =.

      Identity is an equivalence relation; i.e., it is reflexive, symmetrical, and transitive. Its reflexivity is directly expressed in axiom I1, and theorems expressing its symmetry and transitivity can easily be derived from the basis given.

      Certain wffs of LPC-with-identity express propositions about the number of things that possess a given property. “At least one thing is ϕ” could, of course, already be expressed by (∃x)ϕx; “At least two distinct (nonidentical) things are ϕ” can now be expressed by (∃x)(∃y)(ϕx · ϕy · x ≠ y); and the sequence can be continued in an obvious way. “At most one thing is ϕ” (i.e., “No two distinct things are both ϕ”) can be expressed by the negation of the last-mentioned wff or by its equivalent, (∀x)(∀y)[(ϕx · ϕy) ⊃ x = y], and the sequence can again be easily continued. A formula for “Exactly one thing is ϕ” may be obtained by conjoining the formulas for “At least one thing is ϕ” and “At most one thing is ϕ,” but a simpler wff equivalent to this conjunction is (∃x)[ϕx · (∀y)(ϕy ⊃ x = y)], which means “There is something that is ϕ, and anything that is ϕ is that thing.” The proposition “Exactly two things are ϕ” can be represented by

(∃x)(∃y){ϕx · ϕy · x ≠ y · (∀z)[ϕz ⊃ (z = x ∨ z = y)]};
i.e., “There are two nonidentical things each of which is ϕ, and anything that is ϕ is one or the other of these.” Clearly, this sequence can also be extended to give a formula for “Exactly n things are ϕ” for every natural number n. It is convenient to abbreviate the wff for “Exactly one thing is ϕ” to (∃!x)ϕx. This special quantifier is frequently read aloud as “E-Shriek x.”
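      Over a finite domain these numerical wffs can be checked against ordinary counting. A brief Python sketch (names illustrative):

# "Exactly one thing is ϕ", following (∃x)[ϕx · (∀y)(ϕy ⊃ x = y)].
def exactly_one(D, phi):
    return any(phi(x) and all((not phi(y)) or x == y for y in D) for x in D)

# "At least two distinct things are ϕ": (∃x)(∃y)(ϕx · ϕy · x ≠ y).
def at_least_two(D, phi):
    return any(phi(x) and phi(y) and x != y for x in D for y in D)

D = range(10)
print(exactly_one(D, lambda n: n == 7))        # -> True
print(at_least_two(D, lambda n: n % 2 == 0))   # -> True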

Definite descriptions
      When a certain property ϕ belongs to one and only one object, it is convenient to have an expression that names that object. A common notation for this purpose is (ιx)ϕx, which may be read as “the thing that is ϕ” or more briefly as “the ϕ.” In general, where a is any individual variable and α is any wff, (ιa)α then stands for the single value of a that makes α true. An expression of the form “the so-and-so” is called a definite description; and (ιx), known as a description operator, can be thought of as forming a name of an individual out of a proposition form. (ιx) is analogous to a quantifier in that, when prefixed to a wff α, it binds every free occurrence of x in α. Relettering of bound variables is also permissible: in the simplest case, (ιx)ϕx and (ιy)ϕy can each be read simply as “the ϕ.”

      As far as formation rules are concerned, definite descriptions can be incorporated into LPC by letting expressions of the form (ιa)α count as terms; rule 1′ of Extensions of LPC will then allow them to occur in atomic formulas (including identity formulas). “The ϕ is (i.e., has the property) ψ” can then be expressed as ψ(ιx)ϕx; “y is (the same individual as) the ϕ” as y = (ιx)ϕx; “The ϕ is (the same individual as) the ψ” as (ιx)ϕx = (ιy)ψy; and so forth. The correct analysis of propositions containing definite descriptions has been the subject of considerable philosophical controversy. One widely accepted account, however—substantially that presented in Principia Mathematica and known as Russell's theory of descriptions, after Bertrand Russell (Russell, Bertrand)—holds that “The ϕ is ψ” is to be understood as meaning that exactly one thing is ϕ and that thing is also ψ. In that case it can be expressed by a wff of LPC-with-identity that contains no description operators—namely,

(1) (∃x)[ϕx · (∀y)(ϕy ⊃ x = y) · ψx].
Analogously, “y is the ϕ” is analyzed as “y is ϕ and nothing else is ϕ,” and hence as expressible by
(2) ϕy · (∀x)(ϕx ⊃ x = y);
and “The ϕ is the ψ” is analyzed as “Exactly one thing is ϕ, exactly one thing is ψ, and whatever is ϕ is ψ,” and hence as expressible by
(3) (∃x)[ϕx · (∀y)(ϕy ⊃ x = y)] · (∃x)[ψx · (∀y)(ψy ⊃ x = y)] · (∀x)(ϕx ⊃ ψx)
ψ(ιx)ϕx, y = (ιx)ϕx, and (ιx)ϕx = (ιy)ψy can then be regarded as abbreviations for (1), (2), and (3), respectively; and by generalizing to more complex cases, all wffs that contain description operators can be regarded as abbreviations for longer wffs that do not.

      The analysis that leads to (1) as a formula for “The ϕ is ψ” leads to the following for “The ϕ is not ψ”:

(4) (∃x)[ϕx · (∀y)(ϕy ⊃ x = y) · ∼ψx].
It is important to note that (4) is not the negation of (1); this negation is, instead,
(5) ∼(∃x)[ϕx · (∀y)(ϕy ⊃ x = y) · ψx].
The difference in meaning between (4) and (5) lies in the fact that (4) is true only when there is exactly one thing that is ϕ and that thing is not ψ, but (5) is true both in this case and also when nothing is ϕ at all and when more than one thing is ϕ. Neglect of the distinction between (4) and (5) can result in serious confusion of thought: in ordinary speech it is frequently unclear whether someone who denies that the ϕ is ψ is conceding that exactly one thing is ϕ but denying that it is ψ, or denying that exactly one thing is ϕ.
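      The distinction is easily exhibited in a finite model. In the following Python sketch (illustrative), nothing in the domain has the property ϕ; formula (4) then comes out false, while (5), the genuine negation of (1), comes out true:

# Russell's analysis: (4) "The ϕ is not ψ" versus (5), the negation of
# "The ϕ is ψ", in a domain where nothing is ϕ.
D = range(5)
phi = lambda x: False              # nothing is ϕ
psi = lambda x: x == 0

def the_phi_is_psi():              # (1): (∃x)[ϕx · (∀y)(ϕy ⊃ x = y) · ψx]
    return any(phi(x) and all((not phi(y)) or x == y for y in D) and psi(x)
               for x in D)

def the_phi_is_not_psi():          # (4): as (1), with ∼ψx in place of ψx
    return any(phi(x) and all((not phi(y)) or x == y for y in D)
               and not psi(x) for x in D)

print(the_phi_is_not_psi())        # -> False: (4) demands a unique ϕ
print(not the_phi_is_psi())        # -> True: (5) holds vacuously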

      The basic contention of Russell's theory of descriptions is that a proposition containing a definite description is not to be regarded as an assertion about an object of which that description is a name but rather as an existentially quantified assertion that a certain (rather complex) property has an instance. Formally, this is reflected in the rules for eliminating description operators that were outlined above.

Higher-order predicate calculi
      A feature shared by LPC and all its extensions so far mentioned is that the only variables that occur in quantifiers are individual variables. It is in virtue of this feature that they are called lower (or first-order) calculi. Various predicate calculi of higher order can be formed, however, in which quantifiers may contain other variables as well, hence binding all free occurrences of these that lie within their scope. In particular, in the second-order predicate calculus, quantification is permitted over both individual and predicate variables; hence wffs such as (∀ϕ)(∃x)ϕx can be formed. This last formula, since it contains no free variables of any kind, expresses a determinate proposition—namely, the proposition that every property has at least one instance. One important feature of this system is that in it identity need not be taken as primitive but can be introduced by defining x = y as (∀ϕ)(ϕx ≡ ϕy)—i.e., “Every property possessed by x is also possessed by y and vice versa.” Whether such a definition is acceptable as a general account of identity is a question that raises philosophical issues too complex to be discussed here; they are substantially those raised by the principle of the identity of indiscernibles, best known for its exposition in the 17th century by Leibniz. (Leibniz, Gottfried Wilhelm)

The Rev. G.E. Hughes Morton L. Schagrin

Modal logic

      True propositions can be divided into those—like “2 + 2 = 4”—that are true by logical necessity (necessary propositions), and those—like “France is a republic”—that are not (contingently true propositions). Similarly, false propositions can be divided into those—like “2 + 2 = 5”—that are false by logical necessity (impossible propositions), and those—like “France is a monarchy”—that are not (contingently false propositions). Contingently true and contingently false propositions are known collectively as contingent propositions. A proposition that is not impossible (i.e., one that is either necessary or contingent) is said to be a possible proposition. Intuitively, the notions of necessity and possibility are connected in the following way: to say that a proposition is necessary is to say that it is not possible for it to be false; and to say that a proposition is possible is to say that it is not necessarily false.

      If it is logically impossible for a certain proposition, p, to be true without a certain proposition, q, being also true (i.e., if the conjunction of p and not-q is logically impossible), then it is said that p strictly implies q. An alternative, equivalent way of explaining the notion of strict implication is by saying that p strictly implies q if and only if it is necessary that p materially implies q. “John's tie is scarlet,” for example, strictly implies “John's tie is red,” because it is impossible for John's tie to be scarlet without being red (or: it is necessarily true that if John's tie is scarlet it is red); and in general, if p is the conjunction of the premises, and q the conclusion, of a deductively valid inference, p will strictly imply q.

      The notions just referred to—necessity, possibility, impossibility, contingency, strict implication—and certain other closely related ones are known as modal (modality) notions, and a logic designed to express principles involving them is called a modal logic.

      The most straightforward way of constructing such a logic is to add to some standard nonmodal system a new primitive operator intended to represent one of the modal notions mentioned above, to define other modal operators in terms of it, and to add certain special axioms or transformation rules or both. A great many systems of modal logic have been constructed, but attention will be restricted here to a few closely related ones in which the underlying nonmodal system is ordinary PC (propositional calculus).

Alternative systems of modal logic
      All the systems to be considered here have the same wffs but differ in their axioms. The wffs can be specified by adding to the symbols of PC a primitive monadic operator L and to the formation rules of PC the rule that if α is a wff, so is Lα. L is intended to be interpreted as “It is necessary that,” so that Lp will be true if and only if p is a necessary proposition. The monadic operator M and the dyadic operator ℨ (to be interpreted as “It is possible that” and “strictly implies,” respectively) can then be introduced by the following definitions, which reflect in an obvious way the informal accounts given above of the connections between necessity, possibility, and strict implication: if α is any wff, then Mα is to be an abbreviation of ∼L∼α; and if α and β are any wffs, then α ℨ β is to be an abbreviation of L(α ⊃ β) [or alternatively of ∼M(α · ∼β)].

      The modal system known as T has as axioms some set of axioms adequate for PC (such as those of PM), and in addition
1. Lp ⊃ p
2. L(p ⊃ q) ⊃ (Lp ⊃ Lq)

      Axiom 1 expresses the principle that whatever is necessarily true is true, and 2 the principle that, if q logically follows from p, then, if p is a necessary truth, so is q (i.e., that whatever follows from a necessary truth is itself a necessary truth). These two principles seem to have a high degree of intuitive plausibility, and 1 and 2 are theorems in almost all modal systems. The transformation rules of T are uniform substitution, modus ponens (modus ponens and modus tollens), and a rule to the effect that if α is a theorem so is Lα (the rule of necessitation). The intuitive rationale of this rule is that, in a sound axiomatic system, it is expected that every instance of a theorem α will be not merely true but necessarily true—and in that case every instance of Lα will be true.

      Among the simpler theorems of T are
● p ⊃ Mp,
● L(p · q) ≡ (Lp · Lq),
● M(p ∨ q) ≡ (Mp ∨ Mq),
● (Lp ∨ Lq) ⊃ L(p ∨ q) (but not its converse),
● M(p · q) ⊃ (Mp · Mq) (but not its converse),

      and

● LMp ≡ ∼ML∼p,
● (p ℨ q) ⊃ (Mp ⊃ Mq),
● (∼p ℨ p) ≡ Lp,
● L(p ⊃ q) ⊃ (Lp ⊃ Mq).

      There are many modal formulas that are not theorems of T but that have a certain claim to express truths about necessity and possibility. Among them are

Lp ⊃ LLp, Mp ⊃ LMp, and p ⊃ LMp.
The first of these means that if a proposition is necessary, its being necessary is itself a necessary truth; the second means that if a proposition is possible, its being possible is a necessary truth; and the third means that if a proposition is true, then not merely is it possible but its being possible is a necessary truth. These are all various elements in the general thesis that a proposition's having the modal characteristics it has (such as necessity, possibility) is not a contingent matter but is determined by logical considerations. Although this thesis may be philosophically controversial, it is at least plausible, and its consequences are worth exploring. One way of exploring them is to construct modal systems in which the formulas listed above are theorems. None of these formulas, as was said, is a theorem of T; but each could be consistently added to T as an extra axiom to produce a new and more extensive system. The system obtained by adding Lp ⊃ LLp to T is known as S4; that obtained by adding Mp ⊃ LMp to T is known as S5; and the addition of p ⊃ LMp to T gives the Brouwerian system, here called B for short.

      The relations between these four systems are as follows: S4 is stronger than T—i.e., it contains all the theorems of T and others besides. B is also stronger than T. S5 is stronger than S4 and also stronger than B. S4 and B, however, are independent of each other in the sense that each contains some theorems that the other does not have. It is of particular importance that if Mp ⊃ LMp is added to T then Lp ⊃ LLp can be derived as a theorem but that if one merely adds the latter to T the former cannot then be derived.

      Examples of theorems of S4 that are not theorems of T are Mp ≡ MMp, MLMp ⊃ Mp, and (p ℨ q) ⊃ (Lp ℨ Lq). Examples of theorems of S5 that are not theorems of S4 are Lp ≡ MLp, L(p ∨ Mq) ≡ (Lp ∨ Mq), M(p · Lq) ≡ (Mp · Lq), and (Lp ℨ Lq) ∨ (Lq ℨ Lp). One important feature of S5 but not of the other systems mentioned is that any wff that contains an unbroken sequence of monadic modal operators (Ls or Ms or both) is provably equivalent to the same wff with all these operators deleted except the last.

      Considerations of space preclude an account of the many other axiomatic systems of modal logic that have been investigated. Some of these are weaker than T; such systems normally contain the axioms of T either as axioms or as theorems but have only a restricted form of the rule of necessitation. Another group comprises systems that are stronger than S4 but weaker than S5; some of these have proved fruitful in developing a logic of temporal relations. Yet another group includes systems that are stronger than S4 but independent of S5 in the sense explained above.

      Modal predicate logics can be formed also by making analogous additions to LPC instead of to PC.

Validity in modal logic
      The task of defining validity for modal wffs is complicated by the fact that, even if the truth values (truth-value) of all of the variables in a wff are given, it is not obvious how one should set about calculating the truth value of the whole wff. Nevertheless, a number of definitions of validity applicable to modal wffs have been given, each of which turns out to match some axiomatic modal system, in the sense that it brings out as valid those wffs, and no others, that are theorems of that system. Most, if not all, of these accounts of validity can be thought of as variant ways of giving formal precision to the idea that necessity is truth in every “possible world” or “conceivable state of affairs.” The simplest such definition is this: Let a model be constructed by first assuming a (finite or infinite) set W of “worlds.” In each world, independently of all the others, let each propositional variable then be assigned either the value 1 or the value 0. In each world the values of truth functions are calculated in the usual way from the values of their arguments in that world. In each world, however, Lα is to have the value 1 if α has the value 1 not only in that world but in every other world in W as well and is otherwise to have the value 0; and in each world Mα is to have the value 1 if α has value 1 either in that world or in some other world in W and is otherwise to have the value 0. These rules enable one to calculate a value (1 or 0) in any world in W for any given wff, once the values of the variables in each world in W are specified. A model is defined as consisting of a set of worlds together with a value assignment of the kind just described. A wff is valid if and only if it has the value 1 in every world in every model. It can be proved that the wffs that are valid by this criterion are precisely the theorems of S5; for this reason models of the kind here described may be called S5-models, and validity as just defined may be called S5-validity.

      A definition of T-validity (i.e., one that can be proved to bring out as valid precisely the theorems of T) can be given as follows: a T-model consists of a set of worlds W and a value assignment to each variable in each world, as before. It also includes a specification, for each world in W, of some subset of W as the worlds that are “accessible” to that world. Truth functions are evaluated as before; but in each world in the model, Lα is to have the value 1 if α has value 1 in that world and in every other world in W accessible to it and is otherwise to have value 0. And in each world, Mα is to have the value 1 if α has value 1 either in that world or in some other world accessible to it and is otherwise to have value 0. (In other words, in computing the value of Lα or Mα in a given world, no account is taken of the value of α in any other world not accessible to it.) A wff is T-valid if and only if it has the value 1 in every world in every T-model.

      An S4-model is defined as is a T-model except that it is required that the accessibility relation be transitive—i.e., that, where w1, w2, and w3 are any worlds in W, if w1 is accessible to w2 and w2 is accessible to w3, then w1 is accessible to w3. A wff is S4-valid if and only if it has the value 1 in every world in every S4-model. The S4-valid wffs can be shown to be precisely the theorems of S4. Finally, a definition of validity is obtained that will match the system B by requiring that the accessibility relation be symmetrical but not that it be transitive.
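      For a finite set of worlds these definitions are directly computable. The following Python sketch (encoding illustrative) evaluates L and M by the T-model rules, counting each world as having access to itself, and exhibits a reflexive but non-transitive model in which Lp ⊃ LLp, the formula whose addition to T yields S4, fails:

# Evaluate modal wffs in a finite model: worlds W, accessibility R (a set
# of ordered pairs), and a valuation val[world][variable].
def value(f, w, W, R, val):
    op = f[0]
    if op == 'var':  return val[w][f[1]]
    if op == 'not':  return 1 - value(f[1], w, W, R, val)
    if op == 'imp':  return max(1 - value(f[1], w, W, R, val),
                                value(f[2], w, W, R, val))
    if op == 'L':    # 1 iff true at w and at every world accessible to it
        return min(value(f[1], u, W, R, val)
                   for u in W if u == w or (w, u) in R)
    if op == 'M':    # 1 iff true at w or at some world accessible to it
        return max(value(f[1], u, W, R, val)
                   for u in W if u == w or (w, u) in R)
    raise ValueError(op)

# World 1 sees world 2 and world 2 sees world 3, but 1 does not see 3.
W = {1, 2, 3}
R = {(1, 2), (2, 3)}
val = {1: {'p': 1}, 2: {'p': 1}, 3: {'p': 0}}
f = ('imp', ('L', ('var', 'p')), ('L', ('L', ('var', 'p'))))   # Lp ⊃ LLp
print(value(f, 1, W, R, val))   # -> 0: Lp ⊃ LLp fails in this T-model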

      For all four systems, effective decision procedures (decision problem) for validity can be given. Further modifications of the general method described have yielded validity definitions that match many other axiomatic modal systems, and the method can be adapted to give a definition of validity for intuitionistic PC. For a number of axiomatic modal systems, however, no satisfactory account of validity has been devised. Validity can also be defined for various modal predicate logics by combining the definition of LPC-validity given above in Validity in LPC (formal logic) with the relevant accounts of validity for modal systems, but a modal logic based on LPC is, like LPC itself, an undecidable system.

Set theory

      Only a sketchy account of set theory is given here; for further information see the article set theory. Set theory is a logic of classes—i.e., of collections (finite or infinite) or aggregations of objects of any kind, which are known as the members of the classes in question. Some logicians use the terms “class” and “set” interchangeably; others distinguish between them, defining a set (for example) as a class that is itself a member of some class and defining a proper class as one that is not a member of any class. It is usual to write “∊” for “is a member of” and to abbreviate ∼(x ∊ y) to x ∉ y. A particular class may be specified either by listing all its members or by stating some condition of membership, in which (latter) case the class consists of all those things and only those that satisfy that condition (it is used, for example, when one speaks of the class of inhabitants of London or the class of prime numbers). Clearly, the former method is available only for finite classes and may be very inconvenient even then; the latter, however, is of more general applicability. Two classes that have precisely the same members are regarded as the same class or are said to be identical with each other, even if they are specified by different conditions; i.e., identity of classes is identity of membership, not identity of specifying conditions. This principle is known as the principle of extensionality. A class with no members, such as the class of Chinese popes, is said to be null. Since the membership of all such classes is the same, there is only one null class, which is therefore usually called the null class (or sometimes the empty class); it is symbolized by Λ or ø. The notation x = y is used for “x is identical with y,” and ∼(x = y) is usually abbreviated to x ≠ y. The expression x = Λ therefore means that the class x has no members, and x ≠ Λ means that x has at least one member.

      A member of a class may itself be a class. The class of dogs, for example, is a member of the class of species of animals. An individual dog, however, though a member of the former class, is not a member of the latter—because an individual dog is not a species of animal (if the number of dogs increases, the number of species of animals does not thereby increase). Class membership is therefore not a transitive relation (transitive law). The relation of class inclusion, however (to be carefully distinguished from class membership), is transitive. A class x is said to be included in a class y (written: x ⊆ y) if and only if every member of x is also a member of y. (This is not meant to exclude the possibility that x and y may be identical.) If x is included in, but is not identical with, y—i.e., if every member of x is a member of y but some members of y are not members of x—x is said to be properly included in y (written: x ⊂ y).

      It is perhaps natural to assume that for every statable condition there is a class (null or otherwise) of objects that satisfy that condition. This assumption is known as the principle of comprehension. In the unrestricted form just mentioned, however, this principle has been found to lead to inconsistencies and hence cannot be accepted as it stands. One statable condition, for example, is non-self-membership—i.e., the property possessed by a class if and only if it is not a member of itself. This in fact appears to be a condition that most classes do fulfill; the class of dogs, for example, is not itself a dog and hence is not a member of the class of dogs.

      Let it now be assumed that the class of all classes that are not members of themselves can be formed and let this class be y. Then any class x will be a member of y if and only if it is not a member of itself; i.e., for any class x, (x ∊ y) ≡ (x ∉ x). The question can then be asked whether y is a member of itself or not, with the following awkward result: If it is a member of itself, then it fails to fulfill the condition of membership of y, and hence it is not a member of y—i.e., not a member of itself. On the other hand, if y is not a member of itself, then it does fulfill the required condition, and therefore it is a member of y—i.e., of itself. Hence the equivalence (y ∊ y) ≡ (y ∉ y) results, which is self-contradictory. This perplexing conclusion, which was pointed out by Bertrand Russell (Russell, Bertrand), is known as Russell's paradox. Russell's own solution to it and to other similar difficulties was to regard classes as forming a hierarchy of types (types, theory of) and to posit that a class could only be regarded sensibly as a member, or a nonmember, of a class at the next higher level in the hierarchy. The effect of this theory is to make x ∊ x, and therefore x ∉ x, ill-formed. Another kind of solution, however, is based upon the distinction made earlier between two kinds of classes, those that are sets and those that are not—a set being defined as a class that is itself a member of some class. The unrestricted principle of comprehension is then replaced by the weaker principle that for every condition there is a class the members of which are the individuals or sets that fulfill that condition. Other solutions have also been devised, but none has won universal acceptance, with the result that several different versions of set theory are found in the literature of the subject.

      Formally, set theory can be derived by the addition of various special axioms to a rather modest form of LPC that contains no predicate variables and only a single primitive dyadic predicate constant (∊) to represent membership. Sometimes LPC-with-identity is used, and there are then two primitive dyadic predicate constants (∊ and =). In some versions the variables x, y, . . . are taken to range only over sets or classes; in other versions they range over individuals as well. The special axioms vary, but the basis normally includes the principle of extensionality and some restricted form of the principle of comprehension, or some elements from which these can be deduced.

      A notation to express theorems about classes can be either defined in various ways (not detailed here) in terms of the primitives mentioned above or else introduced independently. The main elements of one widely used notation are the following: if α is an expression containing some free occurrence of x, the expression {x : α} is used to stand for the class of objects fulfilling the condition expressed by α—e.g., {x : x is a prime number} represents the class of prime numbers; {x} represents the class the only member of which is x; {x, y} the class the only members of which are x and y; and so on. ⟨x, y⟩ represents the ordered pair consisting of x and y in that order (thus {x, y} and {y, x} are identical, but ⟨x, y⟩ and ⟨y, x⟩ are in general distinct). Let x and y be any classes, as (for example) those of the dots on the two arms of a stippled cross. The intersection of x and y, symbolized as x ∩ y, is the class the members of which are the objects common to x and y—in this case the dots within the area where the arms cross—i.e., {z : z ∊ x · z ∊ y}. Similarly, the union of x and y, symbolized as x ∪ y, is the class the members of which are the members of x together with those of y—in this case all the dots on the cross—i.e., {z : z ∊ x ∨ z ∊ y}; the complement of x, symbolized as -x, is the class the members of which are all those objects that are not members of x—i.e., {y : y ∉ x}; the complement of y in x, symbolized as x - y, is the class of all objects that are members of x but not of y—i.e., {z : z ∊ x · z ∉ y}; the universal class, symbolized as V, is the class of which everything is a member, definable as the complement of the null class—i.e., as -Λ. Λ itself is sometimes taken as a primitive individual constant, sometimes defined as {x : x ≠ x}—the class of objects that are not identical with themselves.

      Among the simpler theorems of set theory are

(∀x)(x ∩ x = x),
(∀x)(∀y)(x ∩ y = y ∩ x);
and corresponding theorems for ∪:
(∀x)(∀y)(∀z)[x ∩ (y ∪ z) = (x ∩ y) ∪ (x ∩ z)],
(∀x)(∀y)[-(x ∩ y) = -x ∪ -y];
and corresponding theorems with ∩ and ∪ interchanged:
(∀x)(- -x = x),
(∀x)(∀y)(x - y = x ∩ -y),
(∀x)(Λ ⊆ x),
(∀x)(x ∩ Λ = Λ),
(∀x)(x ∪ Λ = x).
In these theorems, the variables range over classes. In several cases, there are obvious analogies to valid wffs of PC.
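      Because machine sets obey these laws, the theorems can be spot-checked in Python on particular finite classes; complements must be taken relative to an explicitly chosen universal class V, since no absolute complement is available for finite machine sets. An illustrative sketch:

from itertools import product

# Spot-check the theorems above on classes drawn from a small universe V.
V = frozenset(range(4))
comp = lambda s: V - s                         # complement relative to V
classes = [frozenset(), frozenset({0}), frozenset({1, 2}),
           frozenset({0, 1, 2}), V]

for x, y, z in product(classes, repeat=3):
    assert x & x == x                          # x ∩ x = x
    assert x & y == y & x                      # x ∩ y = y ∩ x
    assert x & (y | z) == (x & y) | (x & z)    # distribution of ∩ over ∪
    assert comp(x & y) == comp(x) | comp(y)    # -(x ∩ y) = -x ∪ -y
    assert comp(comp(x)) == x                  # - -x = x
    assert x - y == x & comp(y)                # x - y = x ∩ -y
    assert frozenset() <= x                    # Λ ⊆ x
    assert x & frozenset() == frozenset()      # x ∩ Λ = Λ
    assert x | frozenset() == x                # x ∪ Λ = x
print("all of the listed identities hold on this sample")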

      Apart from its own intrinsic interest, set theory has an importance for the foundations of mathematics (mathematics, foundations of) in that it is widely held that the natural numbers can be adequately defined in set-theoretic terms. Moreover, given suitable axioms, standard postulates for natural-number arithmetic can be derived as theorems within set theory.

The Rev. G.E. Hughes Morton L. Schagrin

Additional Reading
Michael Dummett, Elements of Intuitionism (1977), offers a clear presentation of the philosophic approach that demands constructibility in logical proofs. G.E. Hughes and M.J. Cresswell, An Introduction to Modal Logic (1968, reprinted 1989), treats operators acting on sentences in first-order logic (or predicate calculus) so that, instead of being interpreted as statements of fact, they become necessarily or possibly true or true at all or some times in the past, or they denote obligatory or permissible actions, and so on. Jon Barwise et al. (eds.), Handbook of Mathematical Logic (1977), provides a technical survey of work in the foundations of mathematics (set theory) and in proof theory (theories with infinitely long expressions). Elliott Mendelson, Introduction to Mathematical Logic, 3rd ed. (1987), is the standard text; and G. Kreisel and J.L. Krivine, Elements of Mathematical Logic: Model Theory (1967, reprinted 1971; originally published in French, 1967), covers all standard topics at an advanced level. A.S. Troelstra, Choice Sequences: A Chapter of Intuitionistic Mathematics (1977), offers an advanced analysis of the philosophical position regarding what are legitimate proofs and logical truths; and A.S. Troelstra and D. van Dalen, Constructivism in Mathematics, 2 vol. (1988), applies intuitionist strictures to the problem of the foundations of mathematics.

* * *


Universalium. 2010.
