metalogic

metalogic
/met"euh loj'ik/, n.
the logical analysis of the fundamental concepts of logic.
[1835-45; META- + LOGIC]

* * *

Study of the syntax and the semantics of formal languages and formal systems.

It is related to, but does not include, the formal treatment of natural languages (e.g., English, Russian, etc.). Metalogic has led to a great deal of work of a mathematical nature in axiomatic set theory, model theory, and recursion theory (in which functions that are computable in a finite number of steps are studied).

* * *

Introduction

      the study and analysis of the semantics (relations between expressions and meanings) and syntax (relations among expressions) of formal languages and formal systems. It is related to, but does not include, the formal treatment of natural languages. (For a discussion of the syntax and semantics of natural languages, see the articles linguistics and semantics.)

Nature, origins, and influences of metalogic

Syntax and semantics
      A formal language usually requires a set of formation rules—i.e., a complete specification of the kinds of expressions that shall count as well-formed formulas (sentences or meaningful expressions), applicable mechanically, in the sense that a machine could check whether a candidate satisfies the requirements. This specification usually contains three parts: (1) a list of primitive symbols (basic units) given mechanically, (2) certain combinations of these symbols, singled out mechanically as forming the simple (atomic) sentences, and (3) a set of inductive clauses—inductive inasmuch as they stipulate that natural combinations of given sentences formed by such logical connectives as the disjunction “or,” which is symbolized “∨”; “not,” symbolized “∼”; and “for all x,” symbolized “(∀x),” are again sentences. [“(∀x)” is called a quantifier, as is also “there is some x,” symbolized “(∃x)”.] Since these specifications are concerned only with symbols and their combinations and not with meanings, they involve only the syntax of the language.

      An interpretation of a formal language is determined by formulating an interpretation of the atomic sentences of the language with regard to a domain of objects—i.e., by stipulating which objects of the domain are denoted by which constants of the language and which relations and functions are denoted by which predicate letters and function symbols. The truth-value (whether “true” or “false”) of every sentence is thus determined according to the standard interpretation of logical connectives. For example, p · q is true if and only if p and q are true. (Here, the dot means the conjunction “and,” not the multiplication operation “times.”) Thus, given any interpretation of a formal language, a formal concept of truth is obtained. Truth, meaning, and denotation are semantic concepts.

      If, in addition, a formal system in a formal language is introduced, certain syntactic concepts arise—namely, axioms, rules of inference, and theorems. Certain sentences are singled out as axioms. These are (the basic) theorems. Each rule of inference is an inductive clause, stating that, if certain sentences are theorems, then another sentence related to them in a suitable way is also a theorem. If p and “either not-p or q” (∼pq) are theorems, for example, then q is a theorem. In general, a theorem is either an axiom or the conclusion of a rule of inference whose premises are theorems.

      In 1931 Gödel made the fundamental discovery that, in most of the interesting (or significant) formal systems, not all true sentences are theorems. It follows from this finding that semantics cannot be reduced to syntax; thus syntax, which is closely related to proof theory, must often be distinguished from semantics, which is closely related to model theory. Roughly speaking, syntax—as conceived in the philosophy of mathematics—is a branch of number theory, and semantics is a branch of set theory, which deals with the nature and relations of aggregates.

      Historically, as logic and axiomatic systems became more and more exact, there emerged, in response to a desire for greater lucidity, a tendency to pay greater attention to the syntactic features of the languages employed rather than to concentrate exclusively on intuitive meanings. In this way, logic, the axiomatic method (such as that employed in geometry), and semiotic (the general science of signs) converged toward metalogic.

      The best-known axiomatic system is that of Euclid for geometry. In a manner similar to that of Euclid, every scientific theory involves a body of meaningful concepts and a collection of true or believed assertions. The meaning of a concept can often be explained or defined in terms of other concepts, and, similarly, the truth of an assertion or the reason for believing it can usually be clarified by indicating that it can be deduced from certain other assertions already accepted. The axiomatic method proceeds in a sequence of steps, beginning with a set of primitive concepts and propositions and then defining or deducing all other concepts and propositions in the theory from them.

      The realization in the 19th century that there are different possible geometries led to a desire to separate abstract mathematics from spatial intuition; in consequence, many hidden axioms were uncovered in Euclid's geometry. These discoveries were organized into a more rigorous axiomatic system by David Hilbert in his Grundlagen der Geometrie (1899; The Foundations of Geometry). In this and related systems, however, logical connectives and their properties are taken for granted and remain implicit. If the logic involved is taken to be that of the predicate calculus, the logician can then arrive at such formal systems as that discussed above.

      Once such formal systems are obtained, it is possible to transform certain semantic problems into sharper syntactic problems. It has been asserted, for example, that non-Euclidean geometries must be self-consistent systems because they have models (or interpretations) in Euclidean geometry, which in turn has a model in the theory of real numbers. It may then be asked, however, how it is known that the theory of real numbers is consistent in the sense that no contradiction can be derived within it. Obviously, modeling can establish only a relative consistency and has to come to a stop somewhere. Having arrived at a formal system (say, of real numbers), however, the consistency problem then has the sharper focus of a syntactic problem: that of considering all the possible proofs (as syntactic objects) and asking whether any of them ever has (say) 0 = 1 as the last sentence.

      As another example, the question whether a system is categorical—that is, whether it determines essentially a unique interpretation in the sense that any two interpretations are isomorphic—may be explored. This semantic question can to some extent be replaced by a related syntactic question, that of completeness: whether there is in the system any sentence having a definite truth-value in the intended interpretation such that neither that sentence nor its negation is a theorem. Even though it is now known that the semantic and syntactic concepts are different, the vague requirement that a system be “adequate” is clarified by both concepts. The study of such sharp syntactic questions as those of consistency and completeness, which was emphasized by Hilbert, was named “metamathematics” (or “proof theory”) by him about 1920.

Logic and metalogic
      In one sense, logic is to be identified with the predicate calculus of the first order, the calculus in which the variables are confined to individuals of a fixed domain—though it may include as well the logic of identity, symbolized “=,” which takes the ordinary properties of identity as part of logic. In this sense Gottlob Frege achieved a formal calculus of logic as early as 1879. Sometimes logic is construed, however, as including also higher-order predicate calculi, which admit variables of higher types, such as those ranging over predicates (or classes and relations) and so on. But then it is a small step to the inclusion of set theory, and, in fact, axiomatic set theory is often regarded as a part of logic. For the purposes of this article, however, it is more appropriate to confine the discussion to logic in the first sense.

      It is hard to separate out significant findings in logic from those in metalogic, because all theorems of interest to logicians are about logic and therefore belong to metalogic. If p is a mathematical theorem—in particular, one about logic—and P is the conjunction of the mathematical axioms employed for proving p, then p can be turned into a theorem of logic, “either not-P or p.” Mathematics is not done, however, by carrying out explicitly all the steps as formalized in logic; the selection and intuitive grasp of the axioms is important both for mathematics and for metamathematics. Actual derivations in logic, such as those carried out just prior to World War I by Alfred North Whitehead and Bertrand Russell, are of little intrinsic interest to logicians. It might therefore appear redundant to introduce the term metalogic. In the present classification, however, metalogic is conceived as dealing not only with findings about logical calculi but also with studies of formal systems and formal languages in general.

      An ordinary formal system differs from a logical calculus in that the system usually has an intended interpretation, whereas the logical calculus deliberately leaves the possible interpretations open. Thus, one speaks, for example, of the truth or falsity of sentences in a formal system, but with respect to a logical calculus one speaks of validity (i.e., being true in all interpretations or in all possible worlds) and of satisfiability (or having a model—i.e., being true in some particular interpretation). Hence, the completeness of a logical calculus has quite a different meaning from that of a formal system: a logical calculus permits many sentences such that neither the sentence nor its negation is a theorem because it is true in some interpretations and false in others, and it requires only that every valid sentence be a theorem.

Semiotic
      Originally, the word “semiotic” meant the medical theory of symptoms; however, an empiricist, John Locke, used the term in the 17th century for a science of signs and significations. The current usage was recommended especially by Rudolf Carnap—see his Introduction to Semantics (1942) and his reference there to Charles William Morris, who suggested a threefold distinction. According to this usage, semiotic is the general science of signs and languages, consisting of three parts: (1) pragmatics (in which reference is made to the user of the language), (2) semantics (in which one abstracts from the user and analyzes only the expressions and their meanings), and (3) syntax (in which one abstracts also from the meanings and studies only the relations between expressions).

      Considerable effort since the 1970s has gone into the attempt to formalize some of the pragmatics of natural languages. The use of indexical expressions to incorporate reference to the speaker, his or her location, or the time of either the utterance or the events mentioned was of little importance to earlier logicians, who were primarily interested in universal truths or mathematics. With the increased interest in linguistics there has come an increased effort to formalize pragmatics.

      At first Carnap exclusively emphasized syntax. But gradually he came to realize the importance of semantics, and the door was thus reopened to many difficult philosophical problems.

      Certain aspects of metalogic have been instrumental in the development of the approach to philosophy commonly associated with the label of Logical Positivism. In his Tractatus Logico-Philosophicus (1922; originally published under another title, 1921), Ludwig Wittgenstein, a seminal thinker in the philosophy of language, presented an exposition of logical truths as sentences that are true in all possible worlds. One may say, for example, “It is raining or it is not raining,” and in every possible world one of the disjuncts is true. On the basis of this observation and certain broader developments in logic, Carnap tried to develop formal treatments of science and philosophy.

      It has been thought that the success that metalogic had achieved in the mathematical disciplines could be carried over into physics and even into biology or psychology. In so doing, the logician gives a branch of science a formal language in which there are logically true sentences having universal logical ranges and factually true sentences having more restricted ranges. (Roughly speaking, the logical range of a sentence is the set of all possible worlds in which it is true.)

      A formal solution of the problem of meaning has also been proposed for these disciplines. Given the formal language of a science, it is possible to define a notion of truth. Such a truth definition determines the truth condition for every sentence—i.e., the necessary and sufficient conditions for its truth. The meaning of a sentence is then identified with its truth condition because, as Carnap wrote:

To understand a sentence, to know what is asserted by it, is the same as to know under what conditions it would be true. . . . To know the truth condition of a sentence is (in most cases) much less than to know its truth-value, but it is the necessary starting point for finding out its truth-value.

Influences in other directions
      Metalogic has led to a great deal of work of a mathematical nature in axiomatic set theory, model theory, and recursion theory (in which functions that are computable in a finite number of steps are studied).

      In a different direction, the devising of Turing computing machines, involving abstract designs for the explication of mechanical logical procedures, has led to the investigation of idealized computers, with ramifications in the theory of finite automata and mathematical linguistics.

      Among philosophers of language, there is a widespread tendency to stress the philosophy of logic. The contrast, for example, between intensional concepts and extensional concepts; the role of meaning in natural languages as providing truth conditions; the relation between formal and natural logic (i.e., the logic of natural languages); and the relation of ontology, the study of the kinds of entities that exist, to the use of quantifiers—all these areas are receiving extensive consideration and discussion. There are also efforts to produce formal systems for empirical sciences such as physics, biology, and even psychology. Many scholars have doubted, however, whether these latter efforts have been fruitful.

Nature of a formal system and of its formal language

Example of a formal system
      In order to clarify the abstract concepts of metalogic, a formal system N (with its formal language) may be considered for illustration.

Formation rules
      The system may be set up by employing the following formation rules (a mechanical check of these rules is sketched after the list):
● The following are primitive symbols: “∼,” “∨,” “∀,” and “=”; the symbols used for grouping, “(” and “)”; the function symbols for “successor,” “S,” and for arithmetical addition and multiplication, “+” and “ · ”; the constants 0 and 1; and the variables x, y, z, . . . .
● The following are terms: a constant is a term; a variable is a term; if a is a term, Sa is a term; and, if a and b are terms, a + b and a · b are terms.
● Atomic sentences are thus specified: if a and b are terms, a = b is a sentence.
● Other sentences can be defined as follows: if A and B are sentences and v is a variable, then ∼A, AB, and (∀v)A are sentences.
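      Because these formation rules are mechanical, a machine can check whether a string of primitive symbols is well formed. The following sketch shows one minimal way to implement the clauses for terms in Python; the nested-tuple encoding of expressions is an assumption made purely for illustration and is no part of the system N itself.

      # Terms of N, encoded as nested tuples (illustrative encoding only):
      #   constants '0', '1'; variables 'x', 'y', 'z';
      #   ('S', a) for Sa; ('+', a, b) for a + b; ('*', a, b) for a · b.
      CONSTANTS = {'0', '1'}
      VARIABLES = {'x', 'y', 'z'}

      def is_term(t):
          """Decide mechanically whether t is a term (formation rule 2)."""
          if t in CONSTANTS or t in VARIABLES:        # a constant or variable is a term
              return True
          if isinstance(t, tuple):
              if len(t) == 2 and t[0] == 'S':         # if a is a term, Sa is a term
                  return is_term(t[1])
              if len(t) == 3 and t[0] in ('+', '*'):  # a + b and a · b are terms
                  return is_term(t[1]) and is_term(t[2])
          return False

      assert is_term(('S', ('+', 'x', '1')))          # S(x + 1) is a term
      assert not is_term(('S', '+'))                  # S applied to a bare '+' is not

      A checker for sentences would extend the same recursion to the clauses for ∼, ∨, and ∀ in the obvious way.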

Axioms and rules of inference
      The system may be developed by adopting certain sentences as axioms and following certain rules of inference.

      1. The basic axioms and rules are to be those of the first-order predicate calculus with identity.

      2. The following additional axioms of N are stipulated:

      a. Zero (0) is not a successor:

      ∼Sx = 0

      b. No two different numbers have the same successor:

      ∼(Sx = Sy) ∨ x = y

      c. Recursive definition of addition:

      x + 0 = x

      x + Sy = S(x + y)

      (From this, with the understanding that 1 is the successor of 0, one can easily show that Sx = x + 1; the short derivation is written out after the list of rules.)

      d. Recursive definition of multiplication:

      x · 0 = 0

      x · Sy = (x · y) + x

      3. Rule of inference (the principle of mathematical induction): If zero has some property A and it is the case that if any number has A then its successor does, then every number has A. With some of the notation from above, this can be expressed: if A(0) and (∀x)(∼A(x) ∨ A(Sx)) are theorems, then (∀x)A(x) is a theorem.
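      For instance, taking 1 as an abbreviation of S0, the remark under axiom 2c that Sx = x + 1 can be verified directly from the recursion equations:

      x + 1 = x + S0 (since 1 abbreviates S0)
            = S(x + 0) (by the second equation of 2c)
            = Sx (by the first equation of 2c).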

      The system N as specified by the foregoing rules and axioms is a formal system in the sense that, given any combination of the primitive symbols, it is possible to check mechanically whether it is a sentence of N, and, given a finite sequence of sentences, it is possible to check mechanically whether it is a (correct) proof in N—i.e., whether each sentence either is an axiom or follows from preceding sentences in the sequence by a rule of inference. Viewed in this way, a sentence is a theorem if and only if there exists a proof in which it appears as the last sentence. It is not required of a formal system, however, that it be possible to decide mechanically whether or not a given sentence is a theorem; and, in fact, it has been proved that no such mechanical method exists.
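      The mechanical checkability of proofs can be made concrete in the same illustrative spirit. The sketch below assumes that two helper predicates, is_axiom and follows_by_rule, are supplied; both names are hypothetical stand-ins for the axioms and rules of N, not implementations of them.

      def is_proof(sentences, is_axiom, follows_by_rule):
          """Check that each sentence is an axiom or follows from
          earlier sentences in the sequence by a rule of inference."""
          for k, s in enumerate(sentences):
              if not (is_axiom(s) or follows_by_rule(sentences[:k], s)):
                  return False
          return True

      def proves(proof, sentence, is_axiom, follows_by_rule):
          """A sentence is a theorem iff some correct proof ends with it."""
          return bool(proof) and proof[-1] == sentence and \
                 is_proof(proof, is_axiom, follows_by_rule)

      Note that this checks a given proof; as the paragraph above observes, no program can decide whether a proof exists.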

Truth definition of the given language
      The formal system N admits of different interpretations, according to findings of Gödel (from 1931) and of the Norwegian mathematician Thoralf Skolem, a pioneer in metalogic (from 1933). The originally intended, or standard, interpretation takes the ordinary nonnegative integers {0, 1, 2, . . . } as the domain, the symbols 0 and 1 as denoting zero and one, and the symbols + and · as standing for ordinary addition and multiplication. Relative to this interpretation, it is possible to give a truth definition of the language of N.

      It is necessary first to distinguish between open and closed sentences. An open sentence, such as x = 1, is one that may be either true or false depending on the value of x, but a closed sentence, such as 0 = 1 or (∀x)(x = 0) (“all x's are zero”), is one that has a definite truth-value—in each of these cases, false (in the intended interpretation).

      1. A closed atomic sentence is true if and only if it is true in the intuitive sense; for example, 0 = 0 is true, 0 + 1 = 0 is false.

      This specification as it stands is not syntactic, but, with some care, it is possible to give an explicit and mechanical specification of those closed atomic sentences that are true in the intuitive sense.

      2. A closed sentence ∼A is true if and only if A is not true.

      3. A closed sentence AB is true if and only if either A or B is true.

      4. A closed sentence (∀v)A(v) is true if and only if A(v) is true for every value of v—i.e., if A(0), A(1), A(1 + 1), . . . are all true.

      The above definition of truth is not an explicit definition; it is an inductive one. Using concepts from set theory, however, it is possible to obtain an explicit definition that yields a set of sentences that consists of all the true ones and only them. If Gödel's method of representing symbols and sentences by numbers is employed, it is then possible to obtain in set theory a set of natural numbers that are just the Gödel numbers of the true sentences of N.
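      Clauses 1 through 4 can be read as a recursive program, except that clause 4 quantifies over all natural numbers and so cannot be executed as it stands. The sketch below therefore truncates the quantifier at an arbitrary bound; it is an approximation for illustration only—as is explained below, truth in N is neither definable in N nor mechanically decidable. The tuple encoding extends the one used for terms above and is likewise only an assumption of the illustration.

      def value(t, env):
          """Value of a term of N in the standard interpretation."""
          if t == '0': return 0
          if t == '1': return 1
          if isinstance(t, str): return env[t]            # a variable
          op = t[0]
          if op == 'S': return value(t[1], env) + 1
          if op == '+': return value(t[1], env) + value(t[2], env)
          if op == '*': return value(t[1], env) * value(t[2], env)

      def true_in_N(s, env=None, bound=1000):
          """Clauses 1-4 of the truth definition; clause 4 is truncated at `bound`."""
          env = env or {}
          op = s[0]
          if op == '=':                                   # clause 1: closed atomic
              return value(s[1], env) == value(s[2], env)
          if op == 'not':                                 # clause 2
              return not true_in_N(s[1], env, bound)
          if op == 'or':                                  # clause 3
              return true_in_N(s[1], env, bound) or true_in_N(s[2], env, bound)
          if op == 'all':                                 # clause 4 (truncated)
              return all(true_in_N(s[2], {**env, s[1]: n}, bound)
                         for n in range(bound + 1))

      assert true_in_N(('=', '0', '0'))                   # 0 = 0 is true
      assert not true_in_N(('all', 'x', ('=', 'x', '0'))) # (∀x)(x = 0) fails at x = 1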

      There is a definite sense in which it is impossible to define the concept of truth within a language itself. This is proved by the liar paradox: if the sentence “I am lying,” or alternatively

      (1) This sentence is not true.

      is considered, it is clear—since the phrase “this sentence” in (1) refers to (1) itself—that if (1) is true, then (1) is false; on the other hand, if (1) is false, then (1) is true. In the case of the system N, if the concept of truth were definable in the system itself, then (using a device invented by Gödel) it would be possible to obtain in N a sentence that amounts to (1) and that thereby yields a contradiction.
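      The self-reference used here is constructible, not merely paradoxical. A rough programming analogue is a quine—a program that outputs its own text by substituting a description of itself into itself, much as Gödel's device substitutes a sentence's own number into it. The two lines below are a standard Python quine; the analogy is heuristic only.

      s = 's = %r\nprint(s %% s)'
      print(s % s)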

Discoveries about formal mathematical systems
      The two central questions of metalogic are those of the completeness and consistency of a formal system based on axioms. In 1931 Gödel made fundamental discoveries in these areas for the most interesting formal systems. In particular, he discovered that, if such a system is ω-consistent—i.e., devoid of contradiction in a sense to be explained below—then it is not complete and that, if a system is consistent, then the statement of its consistency, easily expressible in the system, is not provable in it.

      Soon afterward, in 1934, Gödel modified a suggestion that had been offered by Jacques Herbrand, a French mathematician, and introduced a general concept of recursive functions—i.e., of functions mechanically computable by a finite series of purely combinatorial steps. In 1936 Alonzo Church, a mathematical logician, Alan Mathison Turing, originator of a theory of computability, and Emil L. Post, a specialist in recursive unsolvability, all argued for this concept (and certain equivalent notions), thereby arriving at stable and exact conceptions of “mechanical,” “computable,” “recursive,” and “formal” that explicate the intuitive concept of what a mechanical computing procedure is. As a result of the development of recursion theory, it is now possible to prove not only that certain classes of problems are mechanically solvable (which could be done without the theory) but also that certain others are mechanically unsolvable (or absolutely unsolvable). The most notable example of such unsolvability is the discovery, made in 1970, that there is no algorithm, or rule of repetitive procedure, for determining which Diophantine equations (i.e., polynomial equations of which the coefficients are whole numbers) have integer solutions. This result gives a negative solution to the 10th problem in the famous list presented by Hilbert at the International Mathematical Congress in 1900.
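      The recursion equations 2c and 2d of the system N above are themselves tiny instances of this concept: addition and multiplication are obtained from the successor function by finitely many purely combinatorial steps. Transcribed directly into Python (again only as an illustration):

      def succ(x):               # the successor function S
          return x + 1

      def add(x, y):             # x + 0 = x;  x + Sy = S(x + y)
          return x if y == 0 else succ(add(x, y - 1))

      def mul(x, y):             # x · 0 = 0;  x · Sy = (x · y) + x
          return 0 if y == 0 else add(mul(x, y - 1), x)

      assert add(2, 3) == 5 and mul(2, 3) == 6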

      In this way, logicians have finally arrived at a sharp concept of a formal axiomatic system, because it is no longer necessary to leave “mechanical” as a vague nonmathematical concept. In this way, too, they have arrived at sharp concepts of decidability. In one sense, decidability is a property of sets (of sentences): that of being subject (or not) to mechanical methods by which to decide in a finite number of steps, for any closed sentence of a given formal system (e.g., of N), whether it is true or not or—as a different question—whether it is a theorem or not. In another sense, decidability can refer to a single closed sentence: the sentence is called undecidable in a formal system if neither it nor its negation is a theorem. Using this concept, Gödel's incompleteness theorem is sometimes stated thus: “Every interesting (or significant) formal system has some undecidable sentences.”

      Given these developments, it was easy to extend Gödel's findings, as Church did in 1936, to show that interesting formal systems such as N are undecidable (both with regard to theorems and with regard to true sentences).

The two incompleteness theorems
      The first and most central finding in this field is that systems such as N are incomplete and incompletable because Gödel's theorem applies to any reasonable and moderately rich system. The proof of this incompleteness may be viewed as a modification of the liar paradox, which shows that truth cannot be defined in the language itself. Since provability in a formal system can often be expressed in the system itself, one is led to the conclusion of incompleteness.

      Let us consider the sentence

      (2) This sentence is not provable in the system.

      In particular, N may be thought of as the system being studied. Representing expressions by numbers and using an ingenious substitution function, Gödel was able to find in the system a sentence p that could be viewed as expressing (2).

      Once such a sentence is obtained, some strong conclusions result. If the system is complete, then either the sentence p or its negation is a theorem of the system. If p is a theorem, then intuitively p—i.e., (2)—is false, and there is in some sense a false theorem in the system. Similarly, if ∼p is a theorem, then it says that ∼(2) holds—i.e., that p is provable in the system. Since ∼p is a theorem, it should be true, and there seem then to be two conflicting sentences that are both true—namely, p is provable in the system and ∼p is provable in it. This can be the case only if the system is inconsistent.

      A careful examination of this inexact line of reasoning leads to Gödel's exact theorem, which says that, if a system is reasonably rich and ω-consistent, then p is undecidable in it. The notion of ω-consistency is stronger than consistency, but it is a very reasonable requirement, since it demands merely that one cannot prove in a system both that some number does not have the property A and also, for each number, that it does have the property A—i.e., that (∃x)∼A(x) and all of A(0), A(1), . . . are theorems. The American mathematician J. Barkley Rosser, who also contributed to number theory and applied mathematics, weakened the hypothesis to mere consistency in 1936, at the expense of complicating somewhat the initial sentence (2).

      More exactly, Gödel showed that, if the system is consistent, then p is not provable; if it is ω-consistent, then ∼p is not provable. The first half leads to Gödel's theorem on consistency proofs, which says that if a system is consistent, then the arithmetic sentence expressing the consistency of the system cannot be proved in the system. This is usually stated briefly thus: that no interesting system can prove its own consistency or that there exists no consistency proof of a system that can be formalized in the system itself.

      The proof of this theorem consists essentially of a formalization in arithmetic of the arithmetized version of the proof of the statement, “If a system is consistent, then p is not provable”; i.e., it consists of a derivation within number theory of p itself from the arithmetic sentence that says that the system is consistent. Hence, if the arithmetic sentence were provable, p would also be provable—contradicting the previous result. This proof, which was only briefly outlined by Gödel, has been carried out in detail by Paul Bernays in his joint work with Hilbert. Moreover, the undecidable sentence p is always of a relatively simple form—namely, the form (∀x)A(x), “For every x, x is A,” in which A is a recursive, in fact a primitive recursive, predicate.

Decidability and undecidability
      The first incompleteness theorem yields directly the fact that truth in a system (e.g., in N) to which the theorem applies is undecidable. If it were decidable, then all true sentences would form a recursive set, and they could be taken as the axioms of a formal system that would be complete. This claim depends on the reasonable and widely accepted assumption that all that is required of the axioms of a formal system is that they make it possible to decide effectively whether a given sentence is an axiom.

      Alternatively, the above assumption can be avoided by resorting to a familiar lemma, or auxiliary truth: that all recursive or computable functions and relations are representable in the system (e.g., in N). Since truth in the language of a system is itself not representable (definable) in the system, it cannot, by the lemma, be recursive (i.e., decidable).

      The same lemma also yields the undecidability of such systems with regard to theorems. Thus, if there were a decision procedure, there would be a computable function f such that f(i) equals 1 or 0 according as the ith sentence is a theorem or not. But then what f(i) = 0 says is just that the ith sentence is not provable. Hence, using Gödel's device, a sentence (say the tth) is again obtained saying of itself that it is not provable. If f(t) = 0 is true, then, because f is representable in the system, it is a theorem of the system. But then, because f(t) = 0 is (equivalent to) the tth sentence, f(t) = 1 is also true and therefore provable in the system. Hence, the system, if consistent, is undecidable with regard to theorems.

      Although the system N is incompletable and undecidable, it has been discovered by the Polish logician M. Presburger and by Skolem (both in 1930) that arithmetic with addition alone or multiplication alone is decidable (with regard to truth) and therefore has complete formal systems. Another well-known positive finding is that of the Polish-American semanticist and logician Alfred Tarski, who developed a decision procedure for elementary geometry and elementary algebra (1951).

Consistency proofs
      The best-known consistency proof is that of the German mathematician Gerhard Gentzen (1936) for the system N of classical (or ordinary, in contrast to intuitionistic) number theory. Taking ω (omega) to represent the next number beyond the natural numbers (called the “first transfinite number”), Gentzen's proof employs an induction in the realm of transfinite numbers (ω + 1, ω + 2, . . . ; ω·2, ω·2 + 1, . . . ; ω^2, ω^2 + 1, . . . ), extended up to the first epsilon-number, ε0 (defined as the limit of the sequence ω, ω^ω, ω^(ω^ω), . . . )—an induction that is not formalizable in N itself. This proof, which has appeared in several variants, has opened up an area of rather extensive work.

      Intuitionistic number theory, which denies the classical concept of truth and consequently eschews certain general laws such as “either A or ∼A,” and its relation to classical number theory have also been investigated (see mathematics, foundations of: Intuitionism). This investigation is considered significant, because intuitionism is believed to be more constructive and more evident than classical number theory. In 1932 Gödel found an interpretation of classical number theory in the intuitionistic theory (also found by Gentzen and by Bernays). In 1958 Gödel extended his findings to obtain constructive interpretations of sentences of classical number theory in terms of primitive recursive functionals.

      More recently, work has been done to extend Gentzen's findings to ramified theories of types and to fragments of classical analysis and to extend Gödel's interpretation and to relate classical analysis to intuitionistic analysis. Also, in connection with these consistency proofs, various proposals have been made to present constructive notations for the ordinals of segments of the German mathematician Georg Cantor's second number class, which includes ω and the first epsilon-number and much more. A good deal of discussion has been devoted to the significance of the consistency proofs and the relative interpretations for epistemology (the theory of knowledge).

Discoveries about logical calculi
      The two main branches of formal logic are the propositional calculus and the predicate calculus.

The propositional calculus
      It is easy to show that the propositional calculus is complete in the sense that every valid sentence in it—i.e., every tautology, or sentence true in all possible worlds (in all interpretations)—is a theorem, as may be seen in the following example. “Either p or not-p” ( p ∨ ∼p) is always true because p is either true or false. In the former case, p ∨ ∼p is true because p is true; in the latter case, because ∼p is true. One way to prove the completeness of this calculus is to observe that it is sufficient to reduce every sentence to a conjunctive normal form—i.e., to a conjunction of disjunctions of single letters and their negations. But any such conjunction is valid if and only if every conjunct is valid; and a conjunct is valid if and only if it contains some letter p as well as ∼p as parts of the whole disjunction. Completeness follows because (1) such conjuncts can all be proved in the calculus and (2) if these conjuncts are theorems, then the whole conjunction is also a theorem.

      The consistency of the propositional calculus (its freedom from contradiction) is more or less obvious, because it can easily be checked that all its axioms are valid—i.e., true in all possible worlds—and that the rules of inference carry from valid sentences to valid sentences. But a contradiction is not valid; hence, the calculus is consistent. The conclusion, in fact, asserts more than consistency, for it holds that only valid sentences are provable.

      The calculus is also easily decidable. Since all valid sentences, and only these, are theorems, each sentence can be tested mechanically by putting true and false for each letter in the sentence. If there are n letters, there are 2^n possible substitutions. A sentence is then a theorem if and only if it comes out true in every one of the 2^n possibilities.
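      This procedure is easily programmed. In the sketch below a sentence is supplied as a Python function of an assignment of truth values to its letters—an encoding chosen for brevity, not dictated by the calculus—and is tested against all 2^n substitutions.

      from itertools import product

      def is_tautology(sentence, letters):
          """Test the sentence under all 2**n assignments of true and false."""
          return all(sentence(dict(zip(letters, vals)))
                     for vals in product([True, False], repeat=len(letters)))

      assert is_tautology(lambda v: v['p'] or not v['p'], ['p'])       # p ∨ ∼p
      assert not is_tautology(lambda v: v['p'] or v['q'], ['p', 'q'])  # p ∨ q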

      The independence of the axioms is usually proved by using more than two truth values. These values are divided into two classes: the desired and the undesired. The axiom to be shown independent can then acquire some undesired value, whereas all the theorems that are provable without this axiom always get the desired values. This technique is what originally suggested the many-valued logics.

The first-order predicate calculus
      The problem of consistency for the predicate calculus is relatively simple. A world may be assumed in which there is only one object a. In this case, both the universally quantified and the existentially quantified sentences (∀x)A(x) and (∃x)A(x) reduce to the simple sentence A(a), and all quantifiers can be eliminated. It may easily be confirmed that, after the reduction, all theorems of the calculus become tautologies (i.e., theorems in the propositional calculus). If F is any predicate, such a sentence as “Every x is F and not every x is F ”—i.e., (∀x)F(x) · ∼(∀x)F(x)—is then reduced to “a is both A and not-A”—A(a) · ∼A(a)—which is not a tautology; therefore, the original sentence is not a theorem; hence, no contradiction can be a theorem. If F is simple, then F and A are the same. If F is complex and contains (∀y) or (∃z), etc., then A is the result obtained by iterating the transformation of eliminating (∀y), etc. In fact, it can be proved quite directly not only that the calculus is consistent but also that all its theorems are valid.

      The discoveries that the calculus is complete and undecidable are much more profound than the discovery of its consistency. Its completeness was proved by Gödel in 1930; its undecidability was established with quite different methods by Church and Turing in 1936. Given the general developments that occurred up to 1936, its undecidability also follows in another way from Theorem X of Gödel's paper of 1931.

      Completeness means that every valid sentence of the calculus is a theorem. It follows that if ∼A is not a theorem, then ∼A is not valid; and, therefore, A is satisfiable; i.e., it has an interpretation, or a model. But to say that A is consistent means nothing other than that ∼A is not a theorem. Hence, from the completeness, it follows that if A is consistent, then A is satisfiable. Therefore, the semantic concepts of validity and satisfiability are seen to coincide with the syntactic concepts of derivability and consistency.

The Löwenheim-Skolem theorem
      A finding closely related to the completeness theorem is the Löwenheim-Skolem theorem (1915, 1920), named after Leopold Löwenheim, a German schoolteacher, and Skolem, which says that if a sentence (or a formal system) has any model, it has a countable or enumerable model (i.e., a model whose members can be matched with the positive integers). The most direct methods of proving this theorem provide the logician with very useful tools for model theory and for studies on relative consistency and independence in set theory.

      In the predicate calculus there are certain reduction or normal-form theorems. One useful example is the prenex normal form: every sentence can be reduced to an equivalent sentence expressed in the prenex form—i.e., in a form such that all the quantifiers appear at the beginning. This form is especially useful for displaying the central ideas of some of the proofs of the Löwenheim-Skolem theorem.
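      A simple instance of the reduction: the sentence (∀x)F(x) ∨ (∃y)G(y), in which one quantifier stands inside a disjunction, is equivalent (over nonempty domains) to the prenex sentence (∀x)(∃y)(F(x) ∨ G(y)), in which both quantifiers have been moved to the front.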

      As an illustration, one may consider a simple schema in prenex form, “For every x, there is some y such that x bears the (arbitrary) relation M to y”; i.e.,

      (3) (∀x)(∃y)Mxy.

      If (3) now has a model with a nonempty domain D, then, by a principle from set theory (the axiom of choice), there exists a function f of x, written f(x), that singles out for each x a corresponding y. Hence, “For every x, x bears the relation M to f(x)”; i.e.,

      (4) (∀x)Mxf(x).

      If a is now any object in D, then the countable subdomain {a, f(a), f(f(a)), . . .} already contains enough objects to satisfy (4) and therefore to satisfy (3). Hence, if (3) has any model, it has a countable model, which is in fact a submodel of the original.
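      The countable subdomain can be generated mechanically once a Skolem function is in hand. In the sketch below, f stands for any function realizing “for each x some y with Mxy,” and a for the chosen starting object; both are assumptions of the illustration.

      def skolem_hull(a, f, depth):
          """Generate a, f(a), f(f(a)), ... up to the given depth."""
          hull, x = [], a
          for _ in range(depth + 1):
              hull.append(x)
              x = f(x)
          return hull

      # If M is the relation "y = x + 2" on the integers, f(x) = x + 2 works:
      assert skolem_hull(0, lambda x: x + 2, 3) == [0, 2, 4, 6]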

      An alternative proof, developed by Skolem in 1922 to avoid appealing to the principles of set theory, has turned out to be useful also for establishing the completeness of the calculus. Instead of using the function f as before, the object a can arbitrarily be denoted by 1. Since (3) is true, there must be some object y such that the number 1 bears the relation M to y, or symbolically M1y, and one of these y's may be called 2. When this process is repeated indefinitely, one obtains

      (5) M12; M12 · M23; M12 · M23 · M34; . . . ,

      all of which are true in the given model. The argument is elementary, because in each instance one merely argues from “There exists some y such that n is M of y”—i.e., (∃y)Mny—to “Let one such y be n + 1.” Consequently, every member of (5) is true in some model. It is then possible to infer that all members of (5) are simultaneously true in some model—i.e., that there is some way of assigning truth values to all its atomic parts so that all members of (5) will be true. Hence, it follows that (3) is true in some countable model.

The completeness theorem
      Gödel's (Gödel, Kurt) original proof of the completeness theorem is closely related to the second proof above. Consideration may again be given to all the sentences in (5) that contain no more quantifiers. If they are all satisfiable, then, as before, they are simultaneously satisfiable and (3) has a model. On the other hand, if (3) has no model, some of its terms—say M12 · . . . · M89—are not satisfiable; i.e., their negations are tautologies (theorems of the propositional calculus). Thus, ∼M12 ∨ . . . ∨ ∼M89 is a tautology, and this remains true if 1, 2, . . . , 9 are replaced by variables, such as r, s, . . . , z; hence, ∼Mrs ∨ . . . ∨ ∼Myz, being a tautology expressed in the predicate calculus as usually formulated, is a theorem in it. It is then easy to use the usual rules of the predicate calculus to derive also the statement, “There exists an x such that, for every y, x is not M of y”; i.e., (∃ x)(∀y)∼Mxy. In other words, the negation of (3) is a theorem of the predicate calculus. Hence, if (3) has no model, then its negation is a theorem of the predicate calculus. And, finally, if a sentence is valid (i.e., if its negation has no model), then it is itself a theorem of the predicate calculus.

The undecidability theorem and reduction classes
      Given the completeness theorem, it follows that the task of deciding whether any sentence is a theorem of the predicate calculus is equivalent to that of deciding whether any sentence is valid or whether its negation is satisfiable.

      Turing's method of proving that this class of problems is undecidable is particularly suggestive. Once the concept of mechanical procedure was crystallized, it was relatively easy to find absolutely unsolvable problems—e.g., the halting problem, which asks for each Turing machine the question of whether it will ever stop, beginning with a blank tape. In other words, each Turing machine operates in a predetermined manner according to what is given initially on the (input) tape; we consider now the special case of a blank tape and ask the special question whether the machine will eventually stop. This infinite class of questions (one for each machine) is known to be unsolvable.

      Turing's method shows that each such question about a single Turing machine can be expressed by a single sentence of the predicate calculus so that the machine will stop if and only if that sentence is not satisfiable. Hence, if there were a decision procedure of validity (or satisfiability) for all sentences of the predicate calculus, then the halting problem would be solvable.
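      The unsolvability itself rests on a short diagonal argument that can be displayed as code. The sketch below is a simplified, runnable rendering: rather than program texts on tapes, it diagonalizes directly against any candidate Python predicate halts; the version with machine descriptions is exactly Turing's.

      def make_paradox(halts):
          """Build a program on which the candidate decider `halts` must err."""
          def paradox():
              if halts(paradox):   # told "it halts": loop forever instead
                  while True:
                      pass
              # told "it loops": halt immediately
          return paradox

      # However `halts` is written, it is wrong about its own paradox:
      always_yes = lambda f: True
      p = make_paradox(always_yes)
      assert always_yes(p)         # the decider claims p halts...
      # ...but calling p() would loop forever, so the claim is false.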

      In more recent years (1962), Turing's formulation has been improved to the extent that all that is needed are sentences of the relatively simple form (∀x)(∃y)(∀z)Mxyz, in which all the quantifiers are at the beginning; i.e., M contains no more quantifiers. Hence, given the unsolvability of the halting problem, it follows that, even for the simple class of sentences in the predicate calculus having the quantifiers ∀ ∃ ∀, the decision problem is unsolvable. Moreover, the method of proof also yields a procedure by which every sentence of the predicate calculus can be correlated with one in the simple form given above. Thus, the class of ∀ ∃ ∀ sentences forms a “reduction class.” (There are also various other reduction classes.)

Model theory

Background and typical problems
      In model theory one studies the interpretations (models) of theories formalized in the framework of formal logic, especially in that of the first-order predicate calculus with identity—i.e., in elementary logic. A first-order language is given by a collection S of symbols for relations, functions, and constants, which, in combination with the symbols of elementary logic, single out certain combinations of symbols as sentences. Thus, for example, in the case of the system N (see above Example of a formal system), the formation rules yield a language that is determined in accordance with a uniform procedure by the set (indicated by braces) of uninterpreted extralogical symbols:

      L = {S, +, · , 0, 1}.

      A first-order theory is determined by a language and a set of selected sentences of the language—those sentences of the theory that are, in an arbitrary, generalized sense, the “true” ones (called the “distinguished elements” of the set). In the particular case of the system N, one theory Ta is built up on the basis of the language and the set of theorems of N, and another theory Tb is determined by the true sentences of N according to the natural interpretation or meaning of its language. In general, the language of N and any set of sentences of the language can be used to make up a theory.

Satisfaction of a theory by a structure: finite and infinite models
      A realization of a language (for example, the one based on L) is a structure 𝔄 identified by six elements so arranged:

      𝔄 = <A, S′, +′, ·′, 0′, 1′>,

      in which the first term, A, is a nonempty domain of objects, the second term is a function that assigns a member of the set A to each member of the set A, the next two terms are functions correlating each member of the Cartesian product A × A (i.e., each ordered pair <a, b> in which a and b belong to A) with a member of A, and the last two terms are members of A. The structure 𝔄 satisfies, or is a model of, the theory Ta (or Tb) if all of the distinguished sentences of Ta (or Tb) are true in 𝔄 (or satisfied by 𝔄). Thus, if 𝔄 is the structure of the ordinary nonnegative integers <ω, S, +, ·, 0, 1>, in which ω is the set of all such integers and S, +, ·, 0, and 1 the elements for their generation, then it is not only a realization of the language based on L but also a model of both Ta and Tb. Gödel's incompleteness theorem permits nonstandard models of Ta that contain more objects than ω but in which all the distinguished sentences of Ta (namely, the theorems of the system N) are true. Skolem's constructions (related to ultraproducts, see below) yield nonstandard models for both theory Ta and theory Tb.

      The use of the relation of satisfaction, or being-a-model-of, between a structure and a theory (or a sentence) can be traced to the book Wissenschaftslehre (1837; Theory of Science) by Bernhard Bolzano, a Bohemian theologian and mathematician, and, in a more concrete context, to the introduction of models of non-Euclidean geometries about that time. In the mathematical treatment of logic, these concepts can be found in works of the late 19th-century German mathematician Ernst Schröder and in Löwenheim (in particular, in his paper of 1915). The basic tools and results achieved in model theory—such as the Löwenheim-Skolem theorem, the completeness theorem of elementary logic, and Skolem's construction of nonstandard models of arithmetic—were developed during the period from 1915 to 1933. A more general and abstract study of model theory began after 1950, in the work of Tarski and others.

      One group of developments may be classified as refinements and extensions of the Löwenheim-Skolem theorem. These developments employ the concept of a “cardinal number,” which—for a finite set—is simply the number at which one stops in counting its elements. For infinite sets, however, the elements must be matched from set to set instead of being counted, and the “sizes” of these sets must thus be designated by transfinite numbers. A rather direct generalization can be drawn that says that, if a theory has any infinite model, then, for any infinite cardinal number, it has a model of that cardinality. It follows that no theory with an infinite model can be categorical or such that any two models of the theory are isomorphic (i.e., matchable in one-to-one correspondence), because models of different cardinalities can obviously not be so matched. A natural question is whether a theory can be categorical in certain infinite cardinalities—i.e., whether there are cardinal numbers such that any two models of the theory of the same cardinality are isomorphic. According to a central discovery made in 1963 by the American mathematician Michael Morley, if a theory is categorical in any uncountable cardinality (i.e., any cardinality higher than the countable), then it is categorical in every uncountable cardinality. On the other hand, examples are known for all four combinations of countable and uncountable cardinalities: specifically, there are theories that are categorical (1) in every infinite cardinality, (2) in the countable cardinality but in no uncountable cardinality, (3) in every uncountable cardinality but not in the countable, and (4) in no infinite cardinality.

      In another direction, there are “two-cardinal” problems that arise from the possibilities of changing, from one model to another, not only the cardinality of the domain of the first model but also the cardinality of some chosen property (such as being a prime number). Various answers to these questions have been found, including proofs of independence (based on the ordinary axioms employed in set theory) and proofs of conditional theorems made on the basis of certain familiar hypotheses of set theory.

Elementary logic
      An area that is perhaps of more philosophical interest is that of the nature of elementary logic itself. On the one hand, the completeness discoveries seem to show in some sense that elementary logic is what the logician naturally wishes to have. On the other hand, he is still inclined to ask whether there might be some principle of uniqueness according to which elementary logic is the only solution that satisfies certain natural requirements on what a logic should be. The development of model theory has led to a more general outlook that enabled the Swedish logician Per Lindström to prove in 1969 a general theorem to the effect that, roughly speaking, within a broad class of possible logics, elementary logic is the only one that satisfies the requirements of axiomatizability and of the Löwenheim-Skolem theorem. Although Lindström's theorem does not settle satisfactorily whether or not elementary logic is the right logic, it does seem to suggest that mathematical findings can help the logician to clarify his concepts of logic and of logical truth.

      A particularly useful tool for obtaining new models from the given models of a theory is the construction of a special combination called the “ultraproduct” of a family of structures (see below Ultrafilters, ultraproducts, and ultrapowers)—in particular, the ultrapower when the structures are all copies of the same structure (just as the product of a1, . . . , an is the same as the power a^n if ai = a for each i). The intuitive idea in this method is to establish that a sentence is true in the ultraproduct if and only if it is true in “almost all” of the given structures (i.e., “almost everywhere”—an idea that was present in a different form in Skolem's construction of a nonstandard model of arithmetic in 1933). It follows that, if the given structures are models of a theory, then their ultraproduct is such a model also, because every sentence in the theory is true everywhere (which is a special case of “almost everywhere” in the technical sense employed). Ultraproducts have been applied, for example, to provide a foundation for what is known as “nonstandard analysis” that yields an unambiguous interpretation of the classical concept of infinitesimals—the division into units as small as one pleases. They have also been applied by two mathematicians, James Ax and Simon B. Kochen, to problems in the field of algebra (on p-adic fields).

Nonelementary logic and future developments
      There are also studies, such as second-order logic and infinitary logics, that develop the model theory of nonelementary logic. Second-order logic contains, in addition to variables that range over individual objects, a second kind of variable ranging over sets of objects so that the model of a second-order sentence or theory also involves, beyond the basic domain, a larger set (called its “power set”) that encompasses all the subsets of the domain. Infinitary logics may include functions or relations with infinitely many arguments, infinitely long conjunctions and disjunctions, or infinite strings of quantifiers. From studies on infinitary logics, William Hanf, an American logician, was able to define certain cardinals, some of which have been studied in connection with the large cardinals in set theory. In yet another direction, logicians are developing model theories for modal logics—those dealing with such modalities as necessity and possibility—and for the intuitionistic logic.

      There is a large gap between the general theory of models and the construction of interesting particular models such as those employed in the proofs of the independence (and consistency) of special axioms and hypotheses in set theory. It is natural to look for further developments of model theory that will yield more systematic methods for constructing models of axioms with interesting particular properties, especially in deciding whether certain given sentences are derivable from the axioms. Relative to the present state of knowledge, such goals appear fairly remote. The gap is not unlike that between the abstract theory of computers and the basic properties of actual computers.

Characterizations of the first-order logic
      A proof of the completeness of elementary logic without sentences asserting identity has been outlined above. The proof can be extended, however, to the full elementary logic in a fairly direct manner. Thus, if F is a sentence containing equality, a sentence G can be adjoined to it that embodies the special properties of identity relevant to the sentence F. The conjunction of F and G can then be treated as a sentence not containing equality (i.e., “=” can be treated as an arbitrary relation symbol). Hence, the conjunction has a model in the sense of logic-without-identity if and only if F has a model in the sense of logic-with-identity; and the completeness of elementary logic (with identity) can thus be inferred.

      A concept more general than validity is that of the relation of logical entailment or implication between a possibly infinite set X of sentences and a single sentence p that holds if and only if p is true in every model of X. In particular, p is valid if the empty set, defined as having no members, logically entails p—for this is just another way of saying that p is true in every model. This suggests a stronger requirement on a formal system of logic—namely, that p be derivable from X by the system whenever X logically entails p. The usual systems of logic satisfy this requirement because, besides the completeness theorem, there is also a compactness theorem:

      A theory X has a model if every finite subset of X has a model.

      Roughly speaking, this theorem enables the logician to reduce an infinite set X to a finite subset X1 in each individual case, and the case of entailment when X1 is finite is taken care of by the completeness of the system.

      These findings show that the ordinary systems of elementary logic comprise the correct formulation, provided that the actual choice of the truth functions (say negation and disjunction), of the quantifiers, and of equality as the “logical constants” is assumed to be the correct one. There remains the question, however, of justifying the particular choice of logical constants. One might ask, for example, whether “For most x” or “For finitely many x” should not also be counted as logical constants. Lindström has formulated a general concept of logic and shown that logics that apparently extend the first-order logic all end up being the same as that logic, provided that they satisfy the Löwenheim-Skolem theorem and either have the compactness property or are formally axiomatizable. There remains the question, however, of whether or why these requirements (especially that of the Löwenheim-Skolem theorem) are intrinsic to the nature of logic.

Generalizations and extensions of the Löwenheim-Skolem theorem
      A generalized theorem can be proved using basically the same ideas as those employed in the more special case discussed above.

      If a theory has any infinite model, then, for any infinite cardinality α, that theory has a model of cardinality α. More explicitly, this theorem contains two parts: (1) If a theory has a model of infinite cardinality β, then, for each infinite cardinal α that is greater than β, the theory has a model of cardinality α. (2) If a theory has a model of infinite cardinality β, then, for each infinite cardinal α less than β, the theory has a model of cardinality α.

      It follows immediately that any theory having an infinite model has two nonisomorphic models and is, therefore, not categorical. This applies, in particular, to the aforementioned theories Ta and Tb of arithmetic (based on the language of N), the natural models of which are countable, as well as to theories dealing with real numbers and arbitrary sets, the natural models of which are uncountable; both kinds of theory have both countable and uncountable models. There is much philosophical discussion about this phenomenon.

      The possibility is not excluded that a theory may be categorical in some infinite cardinality. The theory Td, for example, of dense linear ordering (such as that of the rational numbers) is categorical in the countable cardinality. One application of the Löwenheim-Skolem theorem is: If a theory has no finite models and is categorical in some infinite cardinality α, then the theory is complete; i.e., for every closed sentence in the language of the theory, either that sentence or its negation belongs to the theory. An immediate consequence of this application of the theorem is that the theory of dense linear ordering is complete.

      A theorem that is generally regarded as one of the most difficult to prove in model theory is the theorem by Michael Morley, as follows:

      A theory that is categorical in one uncountable cardinality is categorical in every uncountable cardinality.

      Two-cardinal theorems deal with languages having some distinguished predicate U. A theory is said to admit the pair <α, β> of cardinals if it has a model whose domain is of cardinality α in which the value of U is a set of cardinality β. The central two-cardinal theorem says:

      If a theory admits the pair <α, β> of infinite cardinals with β less than α, then for each regular cardinal γ the theory admits <γ+, γ>, in which γ+ is the next larger cardinal after γ.

      The most interesting case is when γ is the least infinite cardinal, ℵ0. (The general theorem can be established only when the “generalized continuum hypothesis” is assumed, according to which the next highest cardinality for an infinite set is that of its power set.)

Ultrafilters, ultraproducts, and ultrapowers
      An ultrafilter on a nonempty set I is defined as a set D of subsets of I such that
      (1) the empty set does not belong to D;
      (2) if A and B are in D, so is their intersection A ∩ B, the set of elements common to both;
      (3) if A is a subset of B and A is in D, then B is in D; and
      (4) for every subset A of I, either A is in D or its complement I − A is in D.

      Roughly stated, an ultrafilter on a set I singles out a notion of “large” subsets of I, so that a property holds “almost everywhere” exactly when the set of indices at which it holds belongs to D; a small computational sketch of conditions (1)-(4) is given below.
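      The conditions can be checked mechanically in a small, finite case. The following Python sketch (an illustration added here, not part of the original exposition) builds a principal ultrafilter, the set of all subsets containing a fixed element i0, which on a finite set is the only kind of ultrafilter there is, and verifies conditions (1)-(4) by brute force:

   from itertools import combinations

   def powerset(s):
       s = list(s)
       return [frozenset(c) for r in range(len(s) + 1)
               for c in combinations(s, r)]

   def principal_ultrafilter(I, i0):
       """All subsets of I that contain the distinguished element i0."""
       return {A for A in powerset(I) if i0 in A}

   def is_ultrafilter(D, I):
       subsets = powerset(I)
       if frozenset() in D:                           # (1) empty set not in D
           return False
       if any(A & B not in D for A in D for B in D):  # (2) closed under intersection
           return False
       if any(A <= B and B not in D                   # (3) closed under superset
              for A in D for B in subsets):
           return False
       # (4) exactly one of A and I - A lies in D; "exactly" rather than
       # "at least" is equivalent here, since (1) and (2) forbid both.
       if any((A in D) == (frozenset(I) - A in D) for A in subsets):
           return False
       return True

   I = frozenset({0, 1, 2, 3})
   D = principal_ultrafilter(I, 2)
   print(is_ultrafilter(D, I))   # prints: True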

      Let {𝔄i}, where 𝔄i = <Ai, Ri> and i ranges over the members of the set I, be a family of structures indexed by I, and let D be an ultrafilter on I. Consider now the Cartesian product B of the domains {Ai} (for example, if I is {0, 1, 2, . . .}, then B is the set of all sequences f such that f(i) belongs to Ai). The members of B are divided into equivalence classes with the help of D : f ≡ g if and only if {i|f(i) = g(i)} ∊ D; in other words, the set of indices i such that f(i) = g(i) belongs to D [or f(i) and g(i) are equal “almost everywhere”]. Let W be the set of these equivalence classes, i.e., the set of all f* such that f* is the set of all members g of B with g ≡ f. Similarly, a relation S is introduced such that Sf*g* if and only if Ri holds between f(i) and g(i) for “almost all” i; i.e.,

      {i|Ri [ f(i), g(i)]} ∊ D.

      In this way, we arrive at a new structure 𝔘 = <W, S>, called the ultraproduct of the family {𝔄i} over D. In the special case when all the 𝔄i are the same, the resulting structure 𝔘 is called the ultrapower of the original structure over D.
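      The construction can be inspected concretely in the degenerate finite case. In the following Python sketch (added for illustration, not part of the original exposition), D is the principal ultrafilter fixed on an index i0, so “true almost everywhere” simply means “true at i0,” and the ultraproduct collapses to an isomorphic copy of the factor at i0:

   from itertools import product

   domains   = [{0, 1}, {0, 1, 2}, {0, 1}]            # the domains A_i
   relations = [{(0, 1)}, {(0, 1), (1, 2)}, {(1, 0)}] # binary relations R_i
   i0 = 1                                             # D = {A : i0 in A}

   # B: the Cartesian product of the domains (all choice sequences f).
   B = list(product(*domains))

   def equivalent(f, g):
       # f equals g "almost everywhere" iff {i : f[i] == g[i]} is in D,
       # which for the principal D means simply f[i0] == g[i0].
       return f[i0] == g[i0]

   # W: the equivalence classes f*; each class is keyed by its value at i0.
   W = {}
   for f in B:
       W.setdefault(f[i0], []).append(f)

   def S(f, g):
       # S holds between f* and g* iff R_i(f(i), g(i)) almost everywhere,
       # i.e., at the index i0.
       return (f[i0], g[i0]) in relations[i0]

   print(sorted(W))                # [0, 1, 2] -- a copy of domains[i0]
   print(S((0, 0, 0), (1, 1, 1)))  # True, since (0, 1) is in relations[i0]

Over an infinite index set a nonprincipal ultrafilter yields genuinely new structures, but no such ultrafilter can be written down explicitly; their existence rests on the axiom of choice.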

      The central theorems are the following:
● If 𝔄i (i ∊ I) are realizations of the same language, then a sentence p is true in the ultraproduct 𝔘 if and only if the set of i such that p is true in 𝔄i belongs to D (a result known as Łoś's theorem, restated symbolically after this list). In particular, if each 𝔄i is a model of a theory, then 𝔘 is also a model of the theory.
● Two realizations of the same language are said to be elementarily equivalent if they have the same set of true sentences. A necessary and sufficient condition for two realizations to be elementarily equivalent is that they admit ultrapowers that are isomorphic.
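      In the ⊨ notation used earlier, Łoś's theorem takes the compact form:

   \[ \mathfrak{U} \models p \iff \{\, i \in I : \mathfrak{A}_i \models p \,\} \in D. \]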

      One application of these theorems is in the introduction of nonstandard analysis, which was originally arrived at by other considerations. By using a suitable ultrapower of the structure of the field ℜ of real numbers, a real closed field that is elementarily equivalent to ℜ is obtained that is non-Archimedean; i.e., it contains positive numbers a and b such that no natural number n makes na greater than b. This development supplies an unexpected exact foundation for the classical differential calculus (differentiation) using infinitesimals (infinitesimal), which has considerable historical, pedagogical, and philosophical (mathematics, philosophy of) interest.
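      A brief indication of how such a field grounds the calculus, using the standard-part function st of nonstandard analysis (a notion not introduced in the original text): in the ultrapower, a positive element ε with ε < 1/n for every natural number n is an infinitesimal, and the derivative of a function f can then be defined directly, without limits, by

   \[ f'(x) = \operatorname{st}\!\left(\frac{f(x+\varepsilon) - f(x)}{\varepsilon}\right), \]

where st assigns to every finite nonstandard real the unique ordinary real infinitely close to it.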

      A widely known application to the area of algebra deals with the fields Qp of p-adic numbers, the completions of the rational numbers with respect to the prime p. Emil Artin conjectured that every form (homogeneous polynomial) of degree d (in the same sense as degrees of ordinary polynomials) over Qp in which the number of variables exceeds d² has a nontrivial zero in Qp. Using ultraproducts, James Ax and Simon Kochen showed that the conjecture is true for arbitrary d with the possible exception of a finite set of primes p (depending on d). Subsequently, it was found that the original conjecture is not true when extended to full generality.

      Other useful tools in model theory include the pigeonhole principles, of which the basic principle is that, if a set of large cardinality is partitioned into a small number of classes, some one class will have large cardinality. Those elements of the set that lie in the same class cannot be distinguished by the property defining that class.
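      In its simplest infinite form (one instance of the principle, stated here for a partition into finitely many classes):

   \[ |S| = \kappa \text{ infinite}, \quad S = C_1 \cup \cdots \cup C_n \implies |C_i| = \kappa \text{ for some } i, \]

since a union of finitely many sets each of cardinality less than κ has cardinality less than κ.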

      A related concept is that of “indiscernibles,” which also has rather extensive applications in set theory. An ordered subset of the domain of a model 𝔄 of a theory is a homogeneous set, or a set of indiscernibles for 𝔄, if 𝔄 cannot distinguish the members of the subset from one another. More exactly, given any x1 < . . . < xn and y1 < . . . < yn in the subset, then for any formula F(a1, . . . , an) of the language of the theory, that formula (with arguments x) is satisfied by (symbolized ⊩) the structure, i.e.,

      𝔄 ⊩ F(x1, . . . , xn),

      if and only if that formula (with arguments y) is also satisfied by it, i.e.,

      𝔄 ⊩ F(y1, . . . , yn).

      There is also a first theorem on this notion, due to Andrzej Ehrenfeucht and Andrzej Mostowski, which says that, given a theory with an infinite model and a linearly ordered set (set theory) X, there is a model 𝔄 of the theory such that X is a set of indiscernibles for 𝔄.

Hao Wang; Morton L. Schagrin

Additional Reading
Jon Barwise and S. Feferman (eds.), Model-Theoretic Logics (1985), emphasizes semantics of models. J.L. Bell and A.B. Slomson, Models and Ultraproducts: An Introduction, 3rd rev. ed. (1974), explores technical semantics. Richard Montague, Formal Philosophy: Selected Papers of Richard Montague, ed. by Richmond H. Thomason (1974), uses modern logic to deal with the semantics of natural languages. Martin Davis, Computability & Unsolvability (1958, reprinted with a new preface and appendix, 1982), is an early classic on important work arising from Gödel's theorem, and the same author's The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions (1965), is a collection of seminal papers on issues of computability. Rolf Herken (ed.), The Universal Turing Machine: A Half-Century Survey (1988), takes a look at where Gödel's theorem on undecidable sentences has led researchers. Hans Hermes, Enumerability, Decidability, Computability, 2nd rev. ed. (1969, originally published in German, 1961), offers an excellent mathematical introduction to the theory of computability and Turing machines. A classic treatment of computability is presented in Hartley Rogers, Jr., Theory of Recursive Functions and Effective Computability (1967, reissued 1987). M.E. Szabo, Algebra of Proofs (1978), is an advanced treatment of syntactical proof theory. P.T. Johnstone, Topos Theory (1977), explores the theory of structures that can serve as interpretations of various theories stated in predicate calculus. H.J. Keisler, “Logic with the Quantifier ‘There Exist Uncountably Many',” Annals of Mathematical Logic 1:1–93 (January 1970), reports on a seminal investigation that opened the way for Jon Barwise et al. (eds.), Handbook of Mathematical Logic (1977); and Carol Ruth Karp, Language with Expressions of Infinite Length (1964), which expands the syntax of the language of predicate calculus so that expressions of infinite length can be constructed. C.C. Chang and H.J. Keisler, Model Theory, 3rd rev. ed. (1990), is the single most important text on semantics. F.W. Lawvere, C. Maurer, and G.C. Wraith (eds.), Model Theory and Topoi (1975), is an advanced, mathematically sophisticated treatment of the semantics of theories expressed in predicate calculus with identity. Michael Makkai and Gonzalo Reyes, First Order Categorical Logic: Model-Theoretical Methods in the Theory of Topoi and Related Categories (1977), analyzes the semantics of theories expressed in predicate calculus. Saharon Shelah, “Stability, the F.C.P., and Superstability: Model-Theoretic Properties of Formulas in First Order Theory,” Annals of Mathematical Logic 3:271–362 (October 1971), explores advanced semantics.
