mathematics, foundations of

Scientific inquiry into the nature of mathematical theories and the scope of mathematical methods.

It began with Euclid's Elements as an inquiry into the logical and philosophical basis of mathematics; in essence, the question is whether the axioms of any system (be it Euclidean geometry or calculus) can ensure its completeness and consistency. In the modern era, this debate for a time divided into three schools of thought: logicism, formalism, and intuitionism. Logicists supposed that abstract mathematical objects can be entirely developed starting from basic ideas of sets and rational, or logical, thought; a variant of logicism, known as mathematical Platonism, views these objects as existing external to and independent of an observer. Formalists believed mathematics to be the manipulation of configurations of symbols according to prescribed rules, a "game" independent of any physical interpretation of the symbols. Intuitionists rejected certain concepts of logic and the notion that the axiomatic method would suffice to explain all of mathematics, instead seeing mathematics as an intellectual activity dealing with mental constructions (see constructivism) independent of language and any external reality. In the 20th century, Gödel's theorem ended any hope of finding an axiomatic basis of mathematics that was both complete and free from contradictions.

* * *

Introduction

      The study of the logical and philosophical basis of mathematics, including whether the axioms of a given system ensure its completeness and its consistency. Because mathematics has served as a model for rational inquiry in the West and is used extensively in the sciences, foundational studies have far-reaching consequences for the reliability and extensibility of rational thought itself.

      For 2,000 years the foundations of mathematics seemed perfectly solid. Euclid's Elements (c. 300 BC), which presented a set of formal logical arguments based on a few basic terms and axioms, provided a systematic method of rational exploration that guided mathematicians, philosophers, and scientists well into the 19th century. Even serious objections to the lack of rigour in Sir Isaac Newton's notion of fluxions (derivatives) in the calculus, raised by the Anglo-Irish empiricist George Berkeley (among others), did not call into question the basic foundations of mathematics. The discovery in the 19th century of consistent alternative geometries (non-Euclidean geometry), however, precipitated a crisis, for it showed that Euclidean geometry, based on seemingly the most intuitively obvious axiomatic assumptions, did not correspond with reality as mathematicians had believed. This, together with the bold discoveries of the German mathematician Georg Cantor in set theory, made it clear that, to avoid further confusion and satisfactorily resolve paradoxical results, a new and more rigorous foundation for mathematics was necessary.

      Thus began the 20th-century quest to rebuild mathematics on a new basis independent of geometric intuitions. Early efforts included those of the logicist school of the British mathematicians Bertrand Russell and Alfred North Whitehead, the formalist school of the German mathematician David Hilbert, the intuitionist school of the Dutch mathematician L.E.J. Brouwer, and the French set theory school of mathematicians collectively writing under the pseudonym of Nicolas Bourbaki. Some of the most promising current research is based on the development of category theory by the American mathematician Saunders Mac Lane and the Polish-born American mathematician Samuel Eilenberg following World War II.

      This article presents the historical background of foundational questions and 20th-century efforts to construct a new foundational basis for mathematics.

Ancient Greece to the Enlightenment
      A remarkable amount of practical mathematics, some of it even fairly sophisticated, was already developed as early as 2000 BC by the agricultural civilizations of Egypt and Mesopotamia, and perhaps even farther east. However, the first to exhibit an interest in the foundations of mathematics were the ancient Greeks.

Arithmetic or geometry
      Early Greek philosophy was dominated by a dispute as to which is more basic, arithmetic or geometry, and thus whether mathematics should be concerned primarily with the (positive) integers or the (positive) reals, the latter then being conceived as ratios of geometric quantities. (The Greeks confined themselves to positive numbers, as negative numbers were introduced only much later in India by Brahmagupta.) Underlying this dispute was a perceived basic dichotomy, not confined to mathematics but pervading all nature: is the universe made up of discrete atoms (as the philosopher Democritus believed) which hence can be counted, or does it consist of one or more continuous substances (as Thales of Miletus is reputed to have believed) and thus can only be measured? This dichotomy was presumably inspired by a linguistic distinction, analogous to that between English count nouns, such as “apple,” and mass nouns, such as “water.” As Aristotle later pointed out, in an effort to mediate between these divergent positions, water can be measured by counting cups.

      The Pythagorean school of mathematics, founded on the doctrines of the Greek philosopher Pythagoras, originally insisted that only natural and rational numbers exist. Its members only reluctantly accepted the discovery that √2, the ratio of the diagonal of a square to its side, could not be expressed as the ratio of whole numbers. The remarkable proof of this fact has been preserved by Aristotle.

      The contradiction between rationals and reals was finally resolved by Eudoxus of Cnidus, a disciple of Plato, who pointed out that two ratios of geometric quantities are equal if and only if they partition the set of (positive) rationals in the same way, thus anticipating the German mathematician Richard Dedekind (1831–1916), who defined real numbers as such partitions.

Being versus becoming
      Another dispute among pre-Socratic philosophers was more concerned with the physical world. Parmenides claimed that in the real world there is no such thing as change and that the flow of time is an illusion, a view with parallels in the Einstein-Minkowski four-dimensional space-time model of the universe. Heracleitus, on the other hand, asserted that change is all-pervasive and is reputed to have said that one cannot step into the same river twice.

      Zeno of Elea, a follower of Parmenides, claimed that change is actually impossible and produced four paradoxes (the paradoxes of Zeno) to show this. The most famous of these describes a race between Achilles and a tortoise. Since Achilles can run much faster than the tortoise, let us say twice as fast, the latter is allowed a head start of one mile. When Achilles has run one mile, the tortoise will have run half as far again—that is, half a mile. When Achilles has covered that additional half-mile, the tortoise will have run a further quarter-mile. After n + 1 stages, Achilles has run

1 + 1/2 + 1/4 + ⋯ + 1/2ⁿ = 2 − 1/2ⁿ

miles and the tortoise has run

1/2 + 1/4 + ⋯ + 1/2ⁿ⁺¹ = 1 − 1/2ⁿ⁺¹

miles, being still 1/2ⁿ⁺¹ miles ahead. So how can Achilles ever catch up with the tortoise (see Figure 1)?

      Zeno's paradoxes may also be interpreted as showing that space and time are not made up of discrete atoms but are substances which are infinitely divisible. Mathematically speaking, his argument involves the sum of the infinite geometric progression

1 + 1/2 + 1/4 + 1/8 + ⋯ = 2,

no finite partial sum of which adds up to 2. As Aristotle would later say, this progression is only potentially infinite. It is now understood that Zeno was trying to come to grips with the notion of limit, which was not formally explained until the 19th century, although a start in that direction had been made by the French encyclopaedist Jean Le Rond d'Alembert (1717–83).
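
      The behaviour of these partial sums is easy to examine numerically. The following Python fragment (a minimal sketch, using exact rational arithmetic) shows each partial sum falling short of 2 by exactly 1/2ⁿ:

from fractions import Fraction

# Partial sums of 1 + 1/2 + 1/4 + ...; each falls short of 2 by 1/2**n.
total = Fraction(0)
for n in range(10):
    total += Fraction(1, 2 ** n)
    print(n, total, 2 - total)  # the shortfall 2 - total equals 1/2**n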

Universals
      The Athenian philosopher Plato believed that mathematical entities are not just human inventions but have a real existence. For instance, according to Plato, the number 2 is an ideal object. This is sometimes called an “idea,” from the Greek eide, or “universal,” from the Latin universalis, meaning “that which pertains to all.” But Plato did not have in mind a “mental image,” as “idea” is usually used. The number 2 is to be distinguished from a collection of two stones or two apples or, for that matter, two platinum balls in Paris.

      What, then, are these Platonic ideas? Already in ancient Alexandria some people speculated that they are words. This is why the Greek word logos, originally meaning “word,” later acquired a theological meaning as denoting the ultimate reality behind the “thing.” An intense debate occurred in the Middle Ages over the ontological status of universals. Three dominant views prevailed: realism, from the Latin res (“thing”), which asserts that universals have an extra-mental reality—that is, they exist independently of perception; conceptualism, which asserts that universals exist as entities within the mind but have no extra-mental existence; and nominalism, from the Latin nomen (“name”), which asserts that universals exist neither in the mind nor in the extra-mental realm but are merely names that refer to collections of individual objects.

      It would seem that Plato believed in a notion of truth independent of the human mind. In the Meno Plato's teacher Socrates asserts that it is possible to come to know this truth by a process akin to memory retrieval. Thus, by clever questioning, Socrates managed to bring an uneducated person to “remember,” or rather to reconstruct, the proof of a mathematical theorem.

The axiomatic method
      Perhaps the most important contribution to the foundations of mathematics made by the ancient Greeks was the axiomatic method and the notion of proof. This was insisted upon in Plato's Academy and reached its high point in Alexandria about 300 BC with Euclid's Elements. This notion survives today, except for some cosmetic changes.

      The idea is this: there are a number of basic mathematical truths, called axioms or postulates, from which other true statements may be derived in a finite number of steps. It may take considerable ingenuity to discover a proof; but it is now held that it must be possible to check mechanically, step by step, whether a purported proof is indeed correct, and nowadays a computer should be able to do this. The mathematical statements that can be proved are called theorems, and it follows that, in principle, a mechanical device, such as a modern computer, can generate all theorems.

      Two questions about the axiomatic method were left unanswered by the ancients: are all mathematical truths axioms or theorems (this is referred to as completeness), and can it be determined mechanically whether a given statement is a theorem (this is called decidability)? These questions were raised implicitly by David Hilbert (1862–1943) about 1900 and were resolved later in the negative, completeness by the Austrian-American logician Kurt Gödel (1906–78) and decidability by the American logician Alonzo Church (1903–95).

      Euclid's work dealt with number theory and geometry, essentially all the mathematics then known. Since the middle of the 20th century a gradually changing group of mostly French mathematicians under the pseudonym Nicolas Bourbaki has tried to emulate Euclid in writing a new Elements of Mathematics based on their theory of structures. Unfortunately, they just missed out on the new ideas from category theory.

Number systems
      While the ancient Greeks were familiar with the positive integers, rationals, and reals, zero (used as an actual number instead of denoting a missing number) and the negative numbers were first used in India, as far as is known, by Brahmagupta in the 7th century AD. Complex numbers were introduced by the Italian Renaissance mathematician and physician Gerolamo Cardano (1501–76), not just to solve equations such as x² + 1 = 0 but because they were needed to find real solutions of certain cubic equations with real coefficients. Much later, the German mathematician Carl Friedrich Gauss (1777–1855) proved the fundamental theorem of algebra, that all equations with complex coefficients have complex solutions, thus removing the principal motivation for introducing new numbers. Still, the Irish mathematician Sir William Rowan Hamilton (1805–65) and the French mathematician Olinde Rodrigues (1794–1851) invented quaternions in the mid-19th century, but these found little favour in the scientific community until quite recently.

      Currently, a logical presentation of the number system, as taught at the university level, would be as follows:

ℕ → ℤ → ℚ → ℝ → ℂ → ℍ.
Here the letters, introduced by Nicolas Bourbaki, refer to the natural numbers, integers, rationals, reals, complex numbers, and quaternions, respectively, and the arrows indicate inclusion of each number system into the next. However, as has been shown, the historical development proceeded differently:
ℕ+ → ℚ+ → ℝ+ → ℝ → ℂ → ℍ,
where the plus sign indicates restriction to positive elements. This is the development, up to ℝ, which is often adhered to at the high-school level.

The reexamination of infinity

The calculus reopens foundational questions
      Although mathematics flourished after the end of the Classical Greek period for 800 years in Alexandria and, after an interlude in India and the Islāmic world, again in Renaissance Europe, philosophical questions concerning the foundations of mathematics were not raised until the invention of calculus and then not by mathematicians but by the philosopher George Berkeley (1685–1753).

      Sir Isaac Newton in England and Gottfried Wilhelm Leibniz in Germany had independently developed the calculus on a basis of heuristic rules and methods markedly deficient in logical justification. As is the case in many new developments, utility outweighed rigour, and, though Newton's fluxions (or derivatives) and Leibniz's infinitesimals (or differentials) lacked a coherent rational explanation, their power in answering heretofore unanswerable questions was undeniable. Unlike Newton, who made little effort to explain and justify fluxions, Leibniz, as an eminent and highly regarded philosopher, was influential in propagating the idea of infinitesimals, which he described as infinitely small actual numbers—that is, less than 1/n in absolute value for each positive integer n and yet not equal to zero. Berkeley, concerned over the deterministic and atheistic implications of philosophical mechanism, set out to reveal contradictions in the calculus in his influential book The Analyst; or, A Discourse Addressed to an Infidel Mathematician. There he scathingly wrote about these fluxions and infinitesimals, “They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?” and further asked, “Whether mathematicians, who are so delicate in religious points, are strictly scrupulous in their own science? Whether they do not submit to authority, take things upon trust, and believe points inconceivable?”

      Berkeley's criticism was not fully met until the 19th century, when it was realized that, in the expression dy/dx, dx and dy need not lead an independent existence. Rather, this expression could be defined as the limit of ordinary ratios Δy/Δx, as Δx approaches zero without ever being zero. Moreover, the notion of limit was then explained quite rigorously, in answer to such thinkers as Zeno and Berkeley.
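
      The limit definition lends itself to a simple numerical illustration. In the following Python sketch (with y = x² taken as a hypothetical example), the ratios Δy/Δx at x = 1 approach the limit 2 as Δx shrinks without ever being zero:

def f(x):
    return x * x  # y = x**2, so dy/dx at x = 1 should be 2

x = 1.0
for k in range(1, 7):
    dx = 10.0 ** -k                     # Δx approaches 0 but is never 0
    print(dx, (f(x + dx) - f(x)) / dx)  # Δy/Δx tends to the limit 2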

      It was not until the middle of the 20th century that the logician Abraham Robinson (1918–74) showed that the notion of infinitesimal was in fact logically consistent and that, therefore, infinitesimals could be introduced as new kinds of numbers. This led to a novel way of presenting the calculus, called nonstandard analysis, which has, however, not become as widespread and influential as it might have.

      Robinson's argument was this: if the assumptions behind the existence of an infinitesimal ξ led to a contradiction, then this contradiction must already be obtainable from a finite set of these assumptions, say from

0 < ξ, ξ < 1, ξ < 1/2, . . . , ξ < 1/n.

      But this finite set is consistent, as is seen by taking ξ = 1/(n + 1).

Non-Euclidean geometries
      When Euclid presented his axiomatic treatment of geometry, one of his assumptions, his fifth postulate, appeared to be less obvious or fundamental than the others. As it is now conventionally formulated, it asserts that there is exactly one parallel to a given line through a given point. Attempts to derive this from Euclid's other axioms did not succeed, and, at the beginning of the 19th century, it was realized that Euclid's fifth postulate is, in fact, independent of the others. It was then seen that Euclid had described not the one true geometry but only one of a number of possible geometries.

Elliptic and hyperbolic geometries
      Within the framework of Euclid's other four postulates (and a few that he omitted), there were also possible elliptic and hyperbolic geometries. In plane elliptic geometry there are no parallels to a given line through a given point; it may be viewed as the geometry of a spherical surface on which antipodal points have been identified and all lines are great circles. This was not viewed as revolutionary. More exciting was plane hyperbolic geometry, developed independently by the Hungarian mathematician János Bolyai (1802–60) and the Russian mathematician Nikolay Lobachevsky (1792–1856), in which there is more than one parallel to a given line through a given point. This geometry is more difficult to visualize, but a helpful model presents the hyperbolic plane as the interior of a circle, in which straight lines take the form of arcs of circles perpendicular to the circumference.

      Another way to distinguish the three geometries is to look at the sum of the angles of a triangle. It is 180° in Euclidean geometry, as first reputedly discovered by Thales of Miletus (fl. 6th century BC), whereas it is more than 180° in elliptic and less than 180° in hyperbolic geometry. See Figure 2.

      The discovery that there is more than one geometry was of foundational significance and contradicted the German philosopher Immanuel Kant (1724–1804). Kant had argued that there is only one true geometry, Euclidean, which is known to be true a priori by an inner faculty (or intuition) of the mind. For Kant, and practically all other philosophers and mathematicians of his time, this belief in the unassailable truth of Euclidean geometry formed the foundation and justification for further explorations into the nature of reality. With the discovery of consistent non-Euclidean geometries, there was a subsequent loss of certainty and trust in this innate intuition, and this was fundamental in separating mathematics from a rigid adherence to an external sensory order (no longer vouchsafed as “true”) and led to the growing abstraction of mathematics as a self-contained universe. This divorce from geometric intuition added impetus to later efforts to rebuild assurance of truth on the basis of logic. (See below The quest for rigour.)

      What then is the correct geometry for describing the space (actually space-time) we live in? It turns out to be none of the above, but a more general kind of geometry, as was first discovered by the German mathematician Bernhard Riemann (1826–66). In the early 20th century, Albert Einstein showed, in the context of his general theory of relativity, that the true geometry of space is only approximately Euclidean. It is a form of Riemannian geometry in which space and time are linked in a four-dimensional manifold, and it is the curvature at each point that is responsible for the gravitational “force” at that point. Einstein spent the last part of his life trying to extend this idea to the electromagnetic force, hoping to reduce all physics to geometry, but a successful unified field theory eluded him.

Cantor
      In the 19th century, the German mathematician Georg Cantor (1845–1918) returned once more to the notion of infinity and showed that, surprisingly, there is not just one kind of infinity but many kinds. In particular, while the set ℕ of natural numbers and the set of all subsets of ℕ are both infinite, the latter collection is more numerous, in a way that Cantor made precise, than the former. He proved that ℕ, ℤ, and ℚ all have the same size, since it is possible to put them into one-to-one correspondence with one another, but that ℝ is bigger, having the same size as the set of all subsets of ℕ.
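
      The heart of Cantor's argument is the diagonal construction, which can be illustrated for a finite set in Python (a sketch with a made-up map f): whatever function f assigns a subset to each element, the diagonal set D differs from every f(x), so no such f can be onto.

def diagonal_set(S, f):
    # D contains exactly those x that are not members of f(x)
    return {x for x in S if x not in f(x)}

S = {0, 1, 2}
f = {0: set(), 1: {0, 1}, 2: {1, 2}}   # an arbitrary example map

D = diagonal_set(S, lambda x: f[x])
print(D)                               # {0}
print(all(D != f[x] for x in S))       # True: D is missed by f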

      However, Cantor was unable to prove the so-called continuum hypothesis, which asserts that there is no set that is larger than ℕ yet smaller than the set of its subsets. It was shown only in the 20th century, by Gödel and the American logician Paul Cohen (b. 1934), that the continuum hypothesis can be neither proved nor disproved from the usual axioms of set theory. Cantor had his detractors, most notably the German mathematician Leopold Kronecker (1823–91), who felt that Cantor's theory was too metaphysical and that his methods were not sufficiently constructive (see below The quest for rigour: Formal foundations: Nonconstructive arguments).

The quest for rigour

Formal foundations
Set theoretic beginnings
      While laying rigorous foundations for mathematics, 19th-century mathematicians discovered that the language of mathematics could be reduced to that of set theory (developed by Cantor), dealing with membership (∊) and equality (=), together with some rudimentary arithmetic, containing at least symbols for zero (0) and successor (S). Underlying all this were the basic logical concepts: conjunction (∧), disjunction (∨), implication (⊃), negation (¬), and the universal (∀) and existential (∃) quantifiers (formalized by the German mathematician Gottlob Frege [1848–1925]). (The modern notation owes more to the influence of the English logician Bertrand Russell [1872–1970] and the Italian mathematician Giuseppe Peano [1858–1932] than to that of Frege.) For an extensive discussion of logic symbols and operations, see logic: Logic systems: Formal logic.

      For some time, logicians were obsessed with a principle of parsimony, called Ockham's razor, which justified them in reducing the number of these fundamental concepts, for example, by defining p ⊃ q (read “p implies q”) as ¬p ∨ q or even as ¬(p ∧ ¬q). While this definition, even if unnecessarily cumbersome, is legitimate classically, it is not permitted in intuitionistic logic (see below Intuitionistic logic). In the same spirit, many mathematicians adopted the Wiener-Kuratowski definition of the ordered pair ⟨a, b⟩ as {{a}, {a, b}}, where {a} is the set whose sole element is a, which disguises its true significance.
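
      That the two reductions agree with implication is, classically, a matter of checking four truth-value assignments, as the following Python sketch does:

from itertools import product

# Classically, p ⊃ q, (not p) or q, and not (p and not q) coincide.
for p, q in product([True, False], repeat=2):
    implies = (not p) or q
    alt = not (p and not q)
    print(p, q, implies, alt, implies == alt)  # last column: always True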

      Logic had been studied by the ancients, in particular by Aristotle and the Stoic philosophers. Philo of Megara (fl. c. 250 BC) had observed (or postulated) that p ⊃ q is false if and only if p is true and q is false. Yet the intimate connection between logic and mathematics had to await the insight of 19th-century thinkers, in particular Frege.

      Frege was able to explain most mathematical notions with the help of his comprehension scheme, which asserts that, for every ϕ (formula or statement), there should exist a set X such that, for all x, x ∊ X if and only if ϕ(x) is true. Moreover, by the axiom of extensionality, this set X is uniquely determined by ϕ(x). A flaw in Frege's system was uncovered by Russell, who pointed out some obvious contradictions involving sets that contain themselves as elements—e.g., by taking ϕ(x) to be ¬(x ∊ x). Russell illustrated this by what has come to be known as the barber paradox: A barber states that he shaves all who do not shave themselves. Who shaves the barber? Any answer contradicts the barber's statement. To avoid these contradictions Russell introduced the concept of types, a hierarchy (not necessarily linear) of elements and sets such that definitions always proceed from more basic elements (sets) to more inclusive sets, hoping that self-referencing and circular definitions would then be excluded. With this type distinction, x ∊ X only if X is of an appropriate higher type than x.

      The type theory proposed by Russell, later developed in collaboration with the English mathematician Alfred North Whitehead (1861–1947) in their monumental Principia Mathematica (1910–13), turned out to be too cumbersome to appeal to mathematicians and logicians, who managed to avoid Russell's paradox in other ways. Mathematicians made use of the von Neumann-Gödel-Bernays set theory, which distinguishes between small sets and large classes, while logicians preferred an essentially equivalent first-order language, the Zermelo-Fraenkel axioms, which allow one to construct new sets only as subsets of given old sets. Mention should also be made of the system of the American philosopher Willard Van Orman Quine (b. 1908), which admits a universal set. (Cantor had not allowed such a “biggest” set, as the set of all its subsets would have to be still bigger.) Although type theory was greatly simplified by Alonzo Church and the American mathematician Leon Henkin (b. 1921), it came into its own only with the advent of category theory (see below Category theory).

Foundational logic
      The prominence of logic in foundations led some people, referred to as logicists, to suggest that mathematics is a branch of logic. The concepts of membership and equality could reasonably be incorporated into logic, but what about the natural numbers? Kronecker had suggested that, while everything else was made by man, the natural numbers were given by God. The logicists, however, believed that the natural numbers were also man-made, inasmuch as definitions may be said to be of human origin.

      Russell proposed that the number 2 be defined as the set of all two-element sets, that is, X ∊ 2 if and only if X has distinct elements x and y and every element of X is either x or y. The Hungarian-born American mathematician John von Neumann (1903–57) suggested an even simpler definition, namely that X ∊ 2 if and only if X = 0 or X = 1, where 0 is the empty set and 1 is the set consisting of 0 alone. Both definitions require an extralogical axiom to make them work—the axiom of infinity, which postulates the existence of an infinite set. Since the simplest infinite set is the set of natural numbers, one cannot really say that arithmetic has been reduced to logic. Most mathematicians follow Peano, who preferred to introduce the natural numbers directly by postulating the crucial properties of 0 and the successor operation S, among which one finds the principle of mathematical induction.

      The logicist program might conceivably be saved by a 20th-century construction usually ascribed to Church, though he had been anticipated by the Austrian philosopher Ludwig Wittgenstein (1889–1951). According to Church, the number 2 is the process of iteration; that is, 2 is the function which to every function f assigns its iterate 2(f) = f∘f, where (f∘f)(x) = f(f(x)). There are some type-theoretical difficulties with this construction, but these can be overcome if quantification over types is allowed; this is finding favour in theoretical computer science.
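
      Church's construction can be written out directly in Python, where a numeral is a higher-order function (a minimal sketch; the helper to_int is added only to display results):

two = lambda f: lambda x: f(f(x))          # 2(f) = f ∘ f
three = lambda f: lambda x: f(f(f(x)))     # 3(f) = f ∘ f ∘ f

# Addition: (m + n)(f) applies f first n times, then m more times.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)   # read a numeral off as an int
print(to_int(two), to_int(three), to_int(add(two)(three)))  # 2 3 5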

Impredicative constructions
      A number of 19th-century mathematicians found fault with the program of reducing mathematics to arithmetic and set theory as suggested by the work of Cantor and Frege. In particular, the French mathematician Henri Poincaré (1854–1912) objected to impredicative constructions, which construct an entity of a certain type in terms of entities of the same or higher type—i.e., self-referencing constructions and definitions. For example, when proving that every bounded nonempty set X of real numbers has a least upper bound a, one proceeds as follows. (For this purpose, it will be convenient to think of a real number, following Dedekind, as a set of rationals that contains all the rationals less than any element of the set.) One lets x ≤ a if and only if x ≤ y for some y ∊ X; but here y is of the same type as a.

      It would seem that to do ordinary analysis one requires impredicative constructions. Russell and Whitehead tried unsuccessfully to base mathematics on a predicative type theory; but, though reluctant, they had to introduce an additional axiom, the axiom of reducibility, which rendered their enterprise impredicative after all. More recently, the Swedish logician Per Martin-Löf presented a new predicative type theory, but no one claims that this is adequate for all of classical analysis. However, the German-American mathematician Hermann Weyl (1885–1955) and the American mathematician Solomon Feferman have shown that impredicative arguments such as the above can often be circumvented and are not needed for most, if not all, of analysis. On the other hand, as was pointed out by the Italian computer scientist Giuseppe Longo (b. 1929), impredicative constructions are extremely useful in computer science—namely, for producing fixpoints (entities that remain unchanged under a given process).

Nonconstructive arguments
      Another criticism of the Cantor-Frege program was raised by Kronecker, who objected to nonconstructive arguments, such as the following proof that there exist irrational numbers a and b such that aᵇ is rational. If

√2^√2

is rational, then the proof is complete (one may take a = b = √2); otherwise take

a = √2^√2

and b = √2, so that aᵇ = (√2^√2)^√2 = √2² = 2. The argument is nonconstructive, because it does not tell us which alternative holds, even though more powerful mathematics will, as was shown by the Russian mathematician Aleksandr Osipovich Gelfond (1906–68). In the present case, the result can be proved constructively by taking a = √2 and b = 2 log₂3. But there are other classical theorems for which no constructive proof exists.
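
      Floating-point arithmetic cannot replace the proof, but it can illustrate both alternatives (a hypothetical numerical check in Python):

import math

# Constructive instance: a = √2 and b = 2·log2(3) are irrational,
# yet a**b = 2**log2(3) = 3.
a = math.sqrt(2)
b = 2 * math.log2(3)
print(a ** b)                       # approximately 3

# Nonconstructive instance: one of the two candidates works,
# but the classical proof does not say which.
c = math.sqrt(2) ** math.sqrt(2)
print(c ** math.sqrt(2))            # approximately 2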

      Consider, for example, the statement

∃x(∃yϕ(y) ⊃ ϕ(x)),
which symbolizes the statement that there exists a person who is famous if there are any famous people. This can be proved with the help of De Morgan's laws, named after the English mathematician and logician Augustus De Morgan (1806–71). These laws assert the equivalence of ∃yϕ(y) with ¬∀y¬ϕ(y) in classical logic; but there is no way one can construct such an x, for example, when ϕ(x) asserts that x is a well-ordering of the reals, as was proved by Feferman. An ordered set is said to be well-ordered if every nonempty subset has a least element. It had been shown by the German mathematician Ernst Zermelo (1871–1951) that every set can be well-ordered, provided one adopts another axiom, the axiom of choice, which says that, for every nonempty family of nonempty sets, there is a set obtainable by picking out exactly one element from each of these sets. This axiom is a fertile source of nonconstructive arguments.

Intuitionistic logic
      The Dutch mathematician L.E.J. Brouwer (1881–1966) in the early 20th century had the fundamental insight that such nonconstructive arguments will be avoided if one abandons a principle of classical logic which lies behind De Morgan's laws. This is the principle of the excluded third (or excluded middle), which asserts that, for every proposition p, either p or not p; and equivalently that, for every p, not not p implies p. This principle is basic to classical logic and had already been enunciated by Aristotle, though with some reservations, as he pointed out that the statement “there will be a sea battle tomorrow” is neither true nor false.

      Brouwer did not claim that the principle of the excluded third always fails, only that it may fail in the presence of infinite sets. Of two natural numbers x and y one can always decide whether x = y or x ≠ y, but of two real numbers this may not be possible, as one might have to know an infinite number of digits of their decimal expansions. Similar objections apply to De Morgan's laws, a consequence of the principle of the excluded third. For a finite set A, if it has been shown that the assertion ∀x ∊ A ¬ϕ(x) leads to a contradiction, ∃x ∊ A ϕ(x) can be verified by looking at each element of A in turn; i.e., the statement that no members of a given set have a certain property can be disproved by examining in turn each element of the set. For an infinite set A, there is no way in which such an inspection can be carried out.

      Brouwer's philosophy of mathematics is called intuitionism. Although Brouwer himself felt that mathematics was language-independent, his disciple Arend Heyting (1898–1980) set up a formal language for first-order intuitionistic arithmetic. Some of Brouwer's later followers even studied intuitionistic type theory (see below Intuitionistic type theories), which differs from classical type theory only by the absence of a single axiom (double negation):

∀x ∊ Ω(¬¬x ⊃ x),
where Ω is the type of truth-values.

      While it cannot be said that many practicing mathematicians have followed Brouwer in rejecting this principle on philosophical grounds, it came as a great surprise to people working in category theory that certain important categories called topoi (singular: topos; see below Topos theory) have associated with them a language that is intuitionistic in general. In consequence of this fact, a theorem about sets proved constructively was immediately seen to be valid not only for sets but also for sheaves, which, however, lie beyond the scope of this article.

      The moderate form of intuitionism considered here embraces Kronecker's constructivism but not the more extreme position of finitism. According to this view, which goes back to Aristotle, infinite sets do not exist, except potentially. In fact, it is precisely in the presence of infinite sets that intuitionists drop the classical principle of the excluded third.

      An even more extreme position, called ultrafinitism, maintains that even very large numbers do not exist, say numbers greater than 10^(10¹⁰). Of course, the vast majority of mathematicians reject this view by referring to 10^(10¹⁰) + 1, but the true believers have subtle ways of getting around this objection, which, however, lie beyond the scope of this discussion.

Other logics
      While intuitionistic logic is obtained from classical logic by dropping the principle of the excluded third, other logics have also been proposed, though none has had a comparable impact on the foundations of mathematics. One may mention many-valued, or multivalued, logics, which admit a finite number of truth-values; fuzzy logic, with an imprecise membership relationship (though, paradoxically, a precise equality relation); and quantum logic, where conjunction may be only partially defined and implication may not be defined at all. Perhaps more important have been various so-called substructural logics in which the usual properties of the deduction symbol are weakened: relevance logic is studied by philosophers, linear logic by computer scientists, and a noncommutative version of the latter by linguists.

Formalism
      Russell's discovery of a hidden contradiction in Frege's attempt to formalize set theory, with the help of his simple comprehension scheme, caused some mathematicians to wonder how one could make sure that no other contradictions existed. Hilbert's program, called formalism, was to concentrate on the formal language of mathematics and to study its syntax. In particular, the consistency of mathematics, which may be taken, for instance, to be the metamathematical assertion that the mathematical statement 0 = 1 is not provable, ought to be a metatheorem—that is, provable within the syntax of mathematics. This formalization project made sense only if the syntax of mathematics was consistent, for otherwise every syntactical statement would be provable, including that which asserts the consistency of mathematics.

      Unfortunately, a consequence of Gödel's incompleteness theorem is that the consistency of mathematics can be proved only in a language which is stronger than the language of mathematics itself. Yet, formalism is not dead—in fact, most pure mathematicians are tacit formalists—but the naive attempt to prove the consistency of mathematics in a weaker system had to be abandoned.

      While no one, except an extremist intuitionist, will deny the importance of the language of mathematics, most mathematicians are also philosophical realists who believe that the words of this language denote entities in the real world. Following the Swiss mathematician Paul Bernays (1888–1977), this position is also called Platonism, since Plato believed that mathematical entities really exist.

Gödel
      Implicit in Hilbert's program had been the hope that the syntactic notion of provability would capture the semantic notion of truth. Gödel came up with the surprising discovery that this was not the case for type theory and related languages adequate for arithmetic, as long as the following assumptions are insisted upon:
● The set of theorems (provable statements) is effectively enumerable, by virtue of the notion of proof being decidable.
● The set of true statements of mathematics is ω-complete in the following sense: given any formula ϕ(x), containing a free variable x of type N, the universal statement ∀x ∊ N ϕ(x) will be true if ϕ(n) is true for each numeral n—that is, for n = 0, n = S0, n = SS0, and so on.
● The language is consistent.

      Actually, Gödel also made a somewhat stronger assumption, which, as the American mathematician J. Barkley Rosser later showed, could be replaced by assuming consistency. Gödel's ingenious argument was based on the observation that syntactical statements about the language of mathematics can be translated into statements of arithmetic, hence into the language of mathematics. It was partly inspired by an argument that supposedly goes back to the ancient Greeks and which went something like this: Epimenides says that all Cretans are liars (the liar paradox); Epimenides is a Cretan; hence Epimenides is a liar. Under assumptions 1 and 2, Gödel constructed a mathematical statement g that is true but not provable. If it is assumed that all theorems are true, it follows that neither g nor ¬g is a theorem.

      No mathematician doubts assumption 1; by looking at a purported proof of a theorem, suitably formalized, it is possible for a mathematician, or even a computer, to tell whether it is a proof. By listing all proofs in, say, alphabetic order, an effective enumeration of all theorems is obtained. Classical mathematicians also accept assumption 2 and therefore reluctantly agree with Gödel that, contrary to Hilbert's expectation, there are true mathematical statements which are not provable.

      However, moderate intuitionists could draw a different conclusion, because they are not committed to assumption 2. To them, the truth of the universal statement ∀x ∊ N ϕ(x) can be known only if the truth of ϕ(n) is known, for each natural number n, in a uniform way. This would not be the case, for example, if the proof of ϕ(n) increases in difficulty, hence in length, with n. Moderate intuitionists might therefore identify truth with provability and not be bothered by the fact that neither g nor ¬g is true, as they would not believe in the principle of the excluded third in the first place.

      Intuitionists have always believed that, for a statement to be true, its truth must be knowable. Moreover, moderate intuitionists might concede to formalists that to say that a statement is known to be true is to say that it has been proved. Still, some intuitionists do not accept the above argument. Claiming that mathematics is language-independent, these intuitionists would say that, in Gödel's metamathematical proof of his incompleteness theorem, the appeal to ω-completeness to establish the truth of a universal statement yields a uniform proof of the latter after all.

      Gödel considered himself to be a Platonist, inasmuch as he believed in a notion of absolute truth. He took it for granted, as do many mathematicians, that the set of true statements is ω-complete. Other logicians are more skeptical and want to replace the notion of truth by that of truth in a model. In fact, Gödel himself, in his completeness theorem, had shown that for a mathematical statement to be provable it is necessary and sufficient that it be true in every model. His incompleteness theorem now showed that truth in every ω-complete model is not sufficient for provability. This point will be returned to later, as the notion of model for type theory is most easily formulated with the help of category theory, although this is not the way Gödel himself proceeded. See below Gödel and category theory.

Recursive definitions
      Peano had observed that addition of natural numbers can be defined recursively thus:

x + 0 = x, x + Sy = S(x + y).
Other numerical functions ℕᵏ → ℕ that can be defined with the help of such a recursion scheme (and with the help of 0, S, and substitution) are called primitive recursive. Gödel used this concept to make precise what he meant by “effectively enumerable.” A set of natural numbers is said to be recursively enumerable if it consists of all f(n) with n ∊ ℕ, where f ∶ ℕ → ℕ is a primitive recursive function.

      This notion can easily be extended to subsets of ℕᵏ and, by a simple trick called arithmetization, to sets of strings of words in a language. Thus Gödel was able to assert that the set of theorems of mathematics is recursively enumerable, and, more recently, the American linguist Noam Chomsky (b. 1928) could say that the set of grammatical sentences of a natural language, such as English, is recursively enumerable.

      It is not difficult to show that all primitive recursive functions can be calculated. For example, to calculate x + y when x = 3 and y = 2, making use of Peano's recursive definition of x + y and of the definitions 1 = S0, 2 = S1, and so on, one proceeds as follows:

3 + 2 = S2 + S1 = S(S2 + 1) = S(S2 + S0) = SS(S2 + 0) = SSS2 = SS3 = S4 = 5.
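
      The same computation can be carried out mechanically, for instance by the following Python sketch, in which a numeral is either zero or the successor of another numeral:

Z = 0                        # zero
def S(n):                    # successor: S(n) represents n + 1
    return ('S', n)

def add(x, y):
    # Peano's recursion: x + 0 = x ;  x + S(y) = S(x + y)
    return x if y == Z else S(add(x, y[1]))

def to_int(n):               # decode a numeral for display
    return 0 if n == Z else 1 + to_int(n[1])

three, two = S(S(S(Z))), S(S(Z))
print(to_int(add(three, two)))   # 5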

      But primitive recursive functions are not the only numerical functions that can be calculated. More general are the recursive functions, where f ∶ ℕ → ℕ is said to be recursive if its graph is recursively enumerable—that is, if there exist primitive recursive functions u, v ∶ ℕ → ℕ such that, for all natural numbers x and y, y = f(x) if and only if, for some z ∊ ℕ, x = u(z) and y = v(z).
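
      This definition suggests a procedure: to compute f(x), search for the least z with u(z) = x and return v(z). A Python sketch (with a made-up enumeration of the graph of the squaring function) looks like this:

def from_graph(u, v):
    # f(x) = v(z) for the least z with u(z) = x; the unbounded
    # search need not terminate if x is outside the range of u.
    def f(x):
        z = 0
        while u(z) != x:
            z += 1
        return v(z)
    return f

# Toy enumeration of the graph of squaring: u(z) = z, v(z) = z * z.
square = from_graph(lambda z: z, lambda z: z * z)
print(square(7))   # 49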

      All recursive functions can be calculated with pencil and paper or, even more primitively, by moving pebbles (calculi in Latin) from one location to another, using some finite set of instructions, nowadays called a program. Conversely, only recursive functions can be so calculated, or computed by a theoretical machine introduced by the English mathematician Alan Turing (1912–54), or by a modern computer, for that matter. The Church-Turing thesis asserts that the informal notion of calculability is completely captured by the formal notion of recursive functions and hence, in theory, replicable by a machine.

      Gödel's incompleteness theorem had proved that any useful formal mathematical system will contain undecidable propositions—propositions which can be neither proved nor disproved. Church and Turing, while seeking an algorithmic (mechanical) test for deciding theoremhood and thus potentially detecting nontheorems, proved independently, in 1936, that such an algorithmic method was impossible for the first-order predicate logic (see logic, history of: 20th-century logic). The Church-Turing theorem of undecidability, combined with the related result of the Polish-born American mathematician Alfred Tarski (1902–83) on undecidability of truth, eliminated the possibility of a purely mechanical device replacing mathematicians.

Computers and proof
      While many mathematicians use computers only as word processors and for the purpose of communication, computer-assisted computations can be useful for discovering potential theorems. For example, the prime number theorem was first suggested as the result of extensive hand calculations on the prime numbers up to 3,000,000 by the German mathematician Carl Friedrich Gauss (1777–1855), a process that would have been greatly facilitated by the availability of a modern computer. Computers may also be helpful in completing proofs when there are a large number of cases to be considered. The renowned computer-aided proof of the four-colour mapping theorem by the American mathematicians Kenneth Appel (b. 1932) and Wolfgang Haken (b. 1928) even goes beyond this, as the computer helped to determine which cases were to be considered in the next step of the proof. Yet, in principle, computers cannot be asked to discover proofs, except in very restricted areas of mathematics—such as elementary Euclidean geometry—where the set of theorems happens to be recursive, as was proved by Tarski.

      As the result of earlier investigations by Turing, Church, the American mathematician Haskell Brooks Curry (1900–82), and others, computer science has itself become a branch of mathematics. Thus, in theoretical computer science, the objects of study are not just theorems but also their proofs, as well as calculations, programs, and algorithms. Theoretical computer science turns out to have a close connection to category theory. Although this lies beyond the scope of this article, an indication will be given below.

Category theory
Abstraction in mathematics
      One recent tendency in the development of mathematics has been the gradual process of abstraction. The Norwegian mathematician Niels Henrik Abel (1802–29) proved that equations of the fifth degree cannot, in general, be solved by radicals. The French mathematician Évariste Galois (1811–32), motivated in part by Abel's work, introduced certain groups of permutations to determine the necessary conditions for a polynomial equation to be solvable. These concrete groups soon gave rise to abstract groups, which were described axiomatically. Then it was realized that to study groups it was necessary to look at the relation between different groups—in particular, at the homomorphisms which map one group into another while preserving the group operations. Thus people began to study what is now called the concrete category of groups, whose objects are groups and whose arrows are homomorphisms. It did not take long for concrete categories to be replaced by abstract categories, again described axiomatically.

      The important notion of a category was introduced by Samuel Eilenberg and Saunders Mac Lane at the end of World War II. These modern categories must be distinguished from Aristotle's categories, which are better called types in the present context. A category has not only objects but also arrows (referred to also as morphisms, transformations, or mappings) between them.

      Many categories have as objects sets endowed with some structure and as arrows the mappings that preserve this structure. Thus, there exist the categories of sets (with empty structure) and mappings, of groups and group-homomorphisms, of rings and ring-homomorphisms, of vector spaces and linear transformations, of topological spaces and continuous mappings, and so on. There even exists, at a still more abstract level, the category of (small) categories and functors, as the morphisms between categories are called, which preserve relationships among the objects and arrows.

      Not all categories can be viewed in this concrete way. For example, the formulas of a deductive system may be seen as objects of a category whose arrows f ∶ A → B are deductions of B from A. In fact, this point of view is important in theoretical computer science, where formulas are thought of as types and deductions as operations.

      More formally, a category consists of (1) a collection of objects A, B, C, . . . , (2) for each ordered pair of objects in the collection an associated collection of transformations including the identity 1A ∶ A → A, and (3) an associated law of composition for each ordered triple of objects in the category such that for f ∶ A → B and g ∶ B → C the composition g∘f (or gf) is a transformation from A to C—i.e., g∘f ∶ A → C. Additionally, the associative law and the identities are required to hold (where the compositions are defined)—i.e., h∘(g∘f) = (h∘g)∘f and 1B∘f = f = f∘1A.
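
      A concrete finite instance can be checked directly. In the Python sketch below (with arrows represented, hypothetically, as records of source, target, and underlying mapping), composition and the identity law behave as the axioms require:

def identity(A):
    return {'src': A, 'tgt': A, 'map': {a: a for a in A}}

def compose(g, f):   # g ∘ f, defined only when f's target is g's source
    assert f['tgt'] == g['src']
    return {'src': f['src'], 'tgt': g['tgt'],
            'map': {a: g['map'][f['map'][a]] for a in f['src']}}

A, B, C = frozenset({1, 2}), frozenset({'x', 'y'}), frozenset({'*'})
f = {'src': A, 'tgt': B, 'map': {1: 'x', 2: 'y'}}
g = {'src': B, 'tgt': C, 'map': {'x': '*', 'y': '*'}}

print(compose(g, f)['map'])   # {1: '*', 2: '*'}, an arrow A → C
print(compose(f, identity(A)) == f == compose(identity(B), f))   # True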

      In a sense, the objects of an abstract category have no windows, like the monads of Leibniz. To infer the interior of an object A one need only look at all the arrows from other objects to A. For example, in the category of sets, elements of a set A may be represented by arrows from a typical one-element set into A. Similarly, in the category of small categories, if 1 is the category with one object and no nonidentity arrows, the objects of a category A may be identified with the functors 1 → A. Moreover, if 2 is the category with two objects and one nonidentity arrow, the arrows of A may be identified with the functors 2 → A.

Isomorphic structures
      An arrow f ∶ A → B is called an isomorphism if there is an arrow g ∶ B → A inverse to f—that is, such that g∘f = 1A and f∘g = 1B. This is written A ≅ B, and A and B are called isomorphic, meaning that they have essentially the same structure and that there is no need to distinguish between them. Inasmuch as mathematical entities are objects of categories, they are given only up to isomorphism. Their traditional set-theoretical constructions, aside from serving a useful purpose in showing consistency, are really irrelevant.

      For example, in the usual construction of the ring of integers, an integer is defined as an equivalence class of pairs (m,n) of natural numbers, where (m,n) is equivalent to (m′,n′) if and only if m + n′ = m′ + n. The idea is that the equivalence class of (m,n) is to be viewed as m − n. What is important to a categorist, however, is that the ring ℤ of integers is an initial object in the category of rings and homomorphisms—that is, that for every ring R there is a unique homomorphism ℤ → R. Seen in this way, ℤ is given only up to isomorphism. In the same spirit, it should be said not that ℤ is contained in the field ℚ of rational numbers but only that the homomorphism ℤ → ℚ is one-to-one. Likewise, it makes no sense to speak of the set-theoretical intersection of π and √(-1), if both are expressed as sets of sets of sets (ad infinitum).
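
      The construction is easy to make concrete. In this Python sketch, a pair (m, n) of naturals stands for the integer m − n, and the stated equivalence identifies pairs denoting the same integer:

def equivalent(p, q):
    # (m, n) ~ (m2, n2) if and only if m + n2 = m2 + n
    (m, n), (m2, n2) = p, q
    return m + n2 == m2 + n

def add(p, q):       # componentwise addition respects the equivalence
    (m, n), (m2, n2) = p, q
    return (m + m2, n + n2)

minus_one, two = (0, 1), (5, 3)          # (5, 3) denotes 5 - 3 = 2
print(equivalent((5, 3), (2, 0)))        # True: the same integer
print(equivalent(add(minus_one, two), (1, 0)))   # True: -1 + 2 = 1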

      Of special interest in foundations and elsewhere are adjoint functors (F,G). These are pairs of functors between two categories 𝒜 and ℬ, which go in opposite directions such that a one-to-one correspondence exists between the set of arrows F(A) → B in ℬ and the set of arrows A → G(B) in 𝒜—that is, such that the sets are isomorphic.

Topos theory
      The original purpose of category theory had been to make precise certain technical notions of algebra and topology and to present crucial results of divergent mathematical fields in an elegant and uniform way, but it soon became clear that categories had an important role to play in the foundations of mathematics. This observation was largely the contribution of the American mathematician F.W. Lawvere (b. 1937), who elaborated on the seminal work of the German-born French mathematician Alexandre Grothendieck (b. 1928) in algebraic geometry. At one time he considered using the category of (small) categories (and functors) itself for the foundations of mathematics. Though he did not abandon this idea, later he proposed a generalization of the category of sets (and mappings) instead.

      Among the properties of the category of sets, Lawvere singled out certain crucial ones, only two of which are mentioned here:
● There is a one-to-one correspondence between subsets B of A and their characteristic functions χ ∶ A → {true, false}, where, for each element a of A, χ(a) = true if and only if a is in B (see the sketch after this list).
● Given an element a of A and a function h ∶ A → A, there is a unique function f ∶ ℕ → A such that f(n) = hⁿ(a).
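
      The first of these properties is easily made concrete in Python (a sketch over a small finite set):

A = {1, 2, 3, 4}
B = {2, 4}                             # a subset of A

chi = {a: a in B for a in A}           # subset → characteristic function
recovered = {a for a in A if chi[a]}   # characteristic function → subset
print(chi)              # e.g. {1: False, 2: True, 3: False, 4: True}
print(recovered == B)   # True: the correspondence is one-to-one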

      Suitably axiomatized, a category with these properties is called an (elementary) topos. However, in general, the two-element set {true, false} must be replaced by an object Ω with more than two truth-values, though a distinguished arrow into Ω is still labeled as true.

Intuitionistic type theories
      Topoi are closely related to intuitionistic type theories. Such a theory is equipped with certain types, terms, and theorems.

      Among the types there should be a type Ω for truth-values, a type N for natural numbers, and, for each type A, a type P(A) for all sets of entities of type A.

      Among the terms there should be in particular:
● The formulas a = a′ and a ∊ α of type Ω, if a and a′ are of type A and α is of type P(A)
● The numerals 0 and Sn of type N, if the numeral n is of type N
● The comprehension term {x ∊ A|ϕ(x)} of type P(A), if ϕ(x) is a formula of type Ω containing a free variable x of type A

      The set of theorems should contain certain obvious axioms and be closed under certain obvious rules of inference, neither of which will be spelled out here.

      At this point the reader may wonder what happened to the usual logical symbols. These can all be defined—for example, universal quantification

xAϕ(x) as {xA|ϕ(x)} = {xA|x = x}
and disjunction
p ∨ q as ∀t ∊ Ω((p ⊃ t) ⊃ ((q ⊃ t) ⊃ t)).
For a formal definition of implication see formal logic.
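
      Over the classical two-element Ω, the definition of disjunction can be verified by brute force (a Python sketch; in an intuitionistic type theory Ω may have more than two truth-values):

from itertools import product

def implies(a, b):
    return (not a) or b

# p ∨ q defined as ∀t ∊ Ω((p ⊃ t) ⊃ ((q ⊃ t) ⊃ t))
for p, q in product([True, False], repeat=2):
    defined = all(implies(implies(p, t), implies(implies(q, t), t))
                  for t in (True, False))
    print(p, q, defined, p or q, defined == (p or q))  # always equal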

      In general, the set of theorems will not be recursively enumerable. However, this will be the case for pure intuitionistic type theory ℒ0, in which types, terms, and theorems are all defined inductively. In ℒ0 there are no types, terms, or theorems other than those that follow from the definition of type theory. ℒ0 is adequate for the constructive part of the usual elementary mathematics—arithmetic and analysis—but not for metamathematics, if this is to include a proof of Gödel's completeness theorem, and not for category theory, if this is to include the Yoneda embedding of a small category into a set-valued functor category.

Internal language
      It turns out that each topos 𝒯 has an internal language L(𝒯), an intuitionistic type theory whose types are objects and whose terms are arrows of 𝒯. Conversely, every type theory ℒ generates a topos T(ℒ), by the device of turning (equivalence classes of) terms into objects, which may be thought of as denoting sets.

      Nominalists may be pleased to note that every topos 𝒯 is equivalent (in the sense of category theory) to the topos generated by a language—namely, the internal language of 𝒯. On the other hand, Platonists may observe that every type theory ℒ has a conservative extension to the internal language of a topos—namely, the topos generated by ℒ, assuming that this topos exists in the real (ideal) world. Here, the phrase “conservative extension” means that ℒ can be extended to LT(ℒ) without creating new theorems. The types of LT(ℒ) are names of sets in ℒ and the terms of LT(ℒ) may be identified with names of sets in ℒ for which it can be proved that they have exactly one element. This last observation provides a categorical version of Russell's theory of descriptions: if one can prove the unique existence of an x of type A in ℒ such that ϕ(x), then this unique x has a name in LT(ℒ).

      The interpretation of a type theory ℒ in a topos 𝒯 means an arrow ℒ → L(𝒯) in the category of type theories or, equivalently, an arrow T(ℒ) → 𝒯 in the category of topoi. Indeed, T and L constitute a pair of adjoint functors.

Gödel and category theory
      It is now possible to reexamine Gödel's theorems from a categorical point of view. In a sense, every interpretation of ℒ in a topos ℰ may be considered as a model of ℒ, but this notion of model is too general, for example, when compared with the models of classical type theories studied by Henkin. Therefore, it is preferable to restrict ℰ to being a special kind of topos called local. Given an arrow p into Ω in ℰ, p is said to be true in ℰ if p coincides with the arrow true in ℰ or, equivalently, if p is a theorem in the internal language of ℰ. ℰ is called a local topos provided that (1) 0 = 1 is not true in ℰ, (2) p ∨ q is true in ℰ only if p is true in ℰ or q is true in ℰ, and (3) ∃x ∊ A ϕ(x) is true in ℰ only if ϕ(a) is true in ℰ for some arrow a ∶ 1 → A in ℰ. Here the statement 0 = 1 in provision (1) can be replaced by any other contradiction—e.g., by ∀t ∊ Ω t, which says that every proposition is true.

      A model of ℒ is an interpretation of ℒ in a local topos ℰ. Gödel's completeness theorem, generalized to intuitionistic type theory, may now be stated as follows: A closed formula of ℒ is a theorem if and only if it is true in every model of ℒ.

      Gödel's incompleteness theorem, generalized likewise, says that, in the usual language of arithmetic, it is not enough to look only at ω-complete models: Assuming that ℒ is consistent and that the theorems of ℒ are recursively enumerable, with the help of a decidable notion of proof, there is a closed formula g in ℒ, which is true in every ω-complete model, yet g is not a theorem in ℒ.

The search for a distinguished model
      A Platonist might still ask whether, among all the models of the language of mathematics, there is a distinguished model, which may be considered to be the world of mathematics. Take as the language pure intuitionistic type theory ℒ₀ (see above Intuitionistic type theories). It turns out, somewhat surprisingly, that the topos generated by ℒ₀ is a local topos; hence, the unique interpretation of ℒ₀ in the topos generated by it may serve as a distinguished model.

      This so-called free topos has been constructed linguistically to satisfy any formalist, but it should also satisfy a moderate Platonist, one who is willing to abandon the principle of the excluded third, inasmuch as the free topos is the initial object in the category of all topoi. Hence, the free topos may be viewed, in the words of Leibniz, as the best of all possible worlds. More modestly speaking, the free topos is to an arbitrary topos as the ring of integers is to an arbitrary ring.
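      The final analogy can be made precise: the ring of integers is the initial object in the category of rings, in that every ring receives exactly one ring homomorphism from it, the one forced by sending 1 to 1. A Haskell sketch, with a minimal illustrative Ring class (not the standard library's Num):

  class Ring r where
    zero, one :: r
    add, mul  :: r -> r -> r
    neg       :: r -> r

  -- The unique ring homomorphism from the integers into any ring r;
  -- its values are completely determined by the ring axioms.
  initialHom :: Ring r => Integer -> r
  initialHom n
    | n < 0     = neg (initialHom (negate n))
    | n == 0    = zero
    | otherwise = add one (initialHom (n - 1))

The initiality of the free topos among all topoi plays exactly this structural role.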

      The language ℒ₀ should also satisfy any constructivist: if an existential statement ∃x ∊ A ϕ(x) can be proved in ℒ₀, then ϕ(a) can be proved for some term a of type A; moreover, if p ∨ q can be proved, then either p can be proved or q can be proved.

      The above argument would seem to make a strong case for the acceptance of pure intuitionistic type theory as the language of elementary mathematics—that is, of arithmetic and analysis—and hence for the acceptance of the free topos as the world of mathematics. Nonetheless, most practicing mathematicians prefer to stick to classical mathematics. In fact, classical arguments seem to be necessary for metamathematics—for example, in the usual proof of Gödel's completeness theorem—even for intuitionistic type theory.

      In this connection, one celebrated consequence of Gödel's incompleteness theorem may be recalled, to wit: the consistency of ℒ cannot be proved (via arithmetization) within ℒ. This is not to say that it cannot be proved in a stronger metalanguage. Indeed, to exhibit a single model of ℒ would constitute such a proof.

      It is more difficult to make a case for the classical world of mathematics, although this is what most mathematicians believe in. This ought to be a distinguished model of pure classical type theory ℒ₁. Unfortunately, Gödel's argument shows that the interpretation of ℒ₁ in the topos generated by it is not a model in this sense.

Boolean local topoi
      A topos is said to be Boolean if its internal language is classical. It is named after the English mathematician George Boole (1815–64), who was the first to give an algebraic presentation of the classical calculus of propositions. A Boolean topos is local under the following circumstances: the disjunction property (2) holds in a Boolean topos if and only if, for every closed formula p, either p is true or ¬p is true. Moreover, with the help of De Morgan's laws, the existence property (3) may then be rephrased thus: if ϕ(a) is true for all closed terms a of type A, then ∀x ∊ A ϕ(x) is true. As it turns out, a Boolean local topos may be described more simply, without referring to the internal language, as a topos with the following property: if f, g ∶ A → B are arrows such that fa = ga for all a ∶ 1 → A, then f = g. (Here 1 is the so-called terminal object, with the property that, for each object C, there is a unique arrow C → 1.) For the Boolean topos to be ω-complete requires furthermore that all numerals—that is, closed terms of type N in its internal language—be standard—that is, have the form 0, S0, SS0, and so on.
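      In symbols, this simpler description (often called well-pointedness) reads:

\forall f, g \colon A \to B,\quad \bigl(\forall a \colon 1 \to A,\ f \circ a = g \circ a\bigr) \;\Rightarrow\; f = g.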

      Of course, Gödel's completeness theorem shows that there are plenty of Boolean local topoi to model pure classical type theory in, but the usual proof of their existence requires nonconstructive arguments. It would be interesting to exhibit at least one such model constructively.

      As a first step toward constructing a distinguished ω-complete Boolean model of ℒ₁, one might wish to define the notion of truth in ℒ₁, as induced by this model. Tarski (Tarski, Alfred) had shown how truth can be defined for classical first-order arithmetic, a language that admits, aside from formulas, only terms of type N. Tarski achieved this essentially by incorporating ω-completeness into the definition of truth. It is not obvious whether his method can be extended to classical higher-order arithmetic—that is, to classical type theory. In fact, Tarski himself showed that the notion of truth is not definable (in a technical sense) in such a system. If his notion of definability corresponds to what is here meant by constructibility, then it is possible to conclude that, indeed, no Boolean model can be constructed.
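      To illustrate how such a truth definition recurses on the shape of a formula, here is a toy Haskell sketch for closed formulas of first-order arithmetic. It is only an approximation: the genuine definition ranges over all natural numbers at the quantifier clause, which is exactly where ω-completeness is built in, whereas the sketch cuts the range off at a bound, truth in arithmetic being undecidable. All names are illustrative:

  data Term    = Z | Succ Term | Add Term Term | Mul Term Term

  data Formula = Equal Term Term
               | Not  Formula
               | And  Formula Formula
               | ForAll (Integer -> Formula)  -- quantification via a Haskell binder

  -- Evaluate a closed term to the natural number it denotes.
  evalT :: Term -> Integer
  evalT Z         = 0
  evalT (Succ t)  = evalT t + 1
  evalT (Add s t) = evalT s + evalT t
  evalT (Mul s t) = evalT s * evalT t

  -- Tarski-style truth, with quantifiers restricted to [0 .. bound].
  trueUpTo :: Integer -> Formula -> Bool
  trueUpTo _ (Equal s t) = evalT s == evalT t
  trueUpTo b (Not p)     = not (trueUpTo b p)
  trueUpTo b (And p q)   = trueUpTo b p && trueUpTo b q
  trueUpTo b (ForAll f)  = all (trueUpTo b . f) [0 .. b]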

      One may be tempted to consider as a candidate for the distinguished Boolean local topos the so-called von Neumann universe. This is defined as the union of a class of sets containing the empty set (the initial object in the category of sets) and closed under the power-set operation and under transfinite unions—thus, as a subcategory of the category of sets. But what is the category of sets if not the distinguished Boolean local topos being sought?
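      Concretely, the stages of the von Neumann universe are indexed by the ordinals (a standard presentation, summarized here):

V_0 = \varnothing, \qquad V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{for limit } \lambda, \qquad V = \bigcup_{\alpha} V_\alpha.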

      A better candidate may be Gödel's constructible universe, whose original purpose was to serve as a model of Zermelo-Fraenkel set theory in which the continuum hypothesis holds. It is formed like the von Neumann universe, except that the notion of subset, implicit in the power-set operation, is replaced by that of definable subset. Is it possible that this universe can be constructed syntactically, like the free topos, without reference to any previously given category of sets, or by a universal property?

      In the internal language of a Boolean local topos, the logical connectives and quantifiers have their natural meanings. In particular, quantifiers admit a substitutional interpretation, a desirable property that has been discussed by philosophers (among them, Russell (Russell, Bertrand) and the American logician Saul Kripke [b. 1941])—to wit: if an existential statement is true, then it can be witnessed by a term of appropriate type in the language; and a universal statement is true if it is witnessed by all terms of the appropriate type.

      Note that, in the internal language of the free topos, and therefore in pure intuitionistic type theory, the substitutional interpretation is valid for existential quantifiers, by virtue of the free topos being local, but that it fails for universal quantifiers, in view of the absence of ω-completeness and the fact that in the free topos all numerals are standard. For a Boolean local topos, ω-completeness will also ensure that all numerals are standard, so that numerals mean exactly what they are intended to mean.

One distinguished model or many models
      Some mathematicians do not believe that a distinguished world of mathematics should be sought at all, but rather that the multiplicity of such worlds should be looked at simultaneously. A major result in algebraic geometry, due to Alexandre Grothendieck (Grothendieck, Alexandre), was the observation that every commutative ring may be viewed as a continuously variable local ring, as Lawvere would put it. In the same spirit, an amplified version of Gödel's completeness theorem would say that every topos may be viewed as a continuously variable local topos, provided sufficiently many variables (Henkin constants) are adjoined to its internal language. Put in more technical language, this makes the possible worlds of mathematics stalks of a sheaf. However, the question still remains as to where this sheaf lives if not in a distinguished world of mathematics or—perhaps better to say—metamathematics.

      These observations suggest that the foundations of mathematics have not achieved a definitive shape but are still evolving; they form the subject of a lively debate among a small group of interested mathematicians, logicians, and philosophers.

Additional Reading
W.S. Anglin and J. Lambek, The Heritage of Thales (1995), a textbook aimed primarily at undergraduate mathematics students, deals with the history, philosophy, and foundations of mathematics and includes an elementary introduction to category theory. Collections of important readings and original articles include Paul Benacerraf and Hilary Putnam (eds.), Philosophy of Mathematics: Selected Readings, 2nd ed. (1983), treating the foundations of mathematics, the existence of mathematical objects, the notion of mathematical truth, and the concept of set; Jaakko Hintikka (ed.), The Philosophy of Mathematics (1969), which includes articles by Henkin on completeness, by Feferman on predicativity, by Robinson on the calculus, and by Tarski on elementary geometry; and Jean van Heijenoort (compiler), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (1967, reissued 1977). Bertrand Russell, A History of Western Philosophy and Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day, 2nd ed. (1961, reprinted 1991), an extremely readable work, portrays the relevant views of the pre-Socratics, Plato, Aristotle, Leibniz, and Kant. Mario Bunge, Treatise on Basic Philosophy, vol. 7, Epistemology & Methodology III: Philosophy of Science and Technology, part 1, Formal and Physical Sciences (1985), contains a discussion by a philosopher of the different philosophical schools in the foundations of mathematics. William Kneale and Martha Kneale, The Development of Logic (1962, reprinted 1984), offers a thorough scholarly account of the growth of logic from ancient times to the contributions by Frege, Russell, Brouwer, Hilbert, and Gödel. Saunders Mac Lane, Mathematics, Form and Function (1986), records the author's personal views on the form and function of mathematics as a background to the philosophy of mathematics, touching on many branches of mathematics. Michael Hallett, Cantorian Set Theory and Limitation of Size (1984), provides a scholarly account of Cantor's set theory and its further development by Fraenkel, Zermelo, and von Neumann. William S. Hatcher, Foundations of Mathematics (1968), surveys different systems, including those of Frege, of Russell, of von Neumann, Bernays, and Gödel, and of Quine, as well as Lawvere's category of categories. Y.I. Manin (Iu.I. Manin), A Course in Mathematical Logic, trans. from Russian (1977), is addressed to mathematicians at a sophisticated level and presents the most significant discoveries up to 1977 concerning the continuum hypothesis, the nonexistence of algorithmic solutions, and other topics. George S. Boolos and Richard C. Jeffrey, Computability and Logic, 3rd ed. (1989), for graduate and advanced undergraduate philosophy or mathematics students, deals with computability, Gödel's theorems, and the definability of truth, among other topics. J. Lambek and P.J. Scott, Introduction to Higher Order Categorical Logic (1986), is an advanced textbook addressed to graduate students in mathematics and computer science in which the relationship between topoi and type theories is explored in detail and some of the metatheorems cited in this article are proved.

Joachim Lambek
