combinatorics

combinatorics
/keuhm buy'neuh tawr"iks, -tor"-, kom'beuh-/, n. (used with singular v.)

* * *

Branch of mathematics concerned with the selection, arrangement, and combination of objects chosen from a finite set.

The number of possible bridge hands is a simple example; more complex problems include scheduling classes in classrooms at a large university and designing a routing system for telephone signals. No standard algebraic procedures apply to all combinatorial problems; a separate logical analysis may be required for each problem. Combinatorics has its roots in antiquity, but new uses in computer science and systems management have increased its importance in recent years. See also permutations and combinations.

* * *

Introduction
also called  combinatorial mathematics,  

      the field of mathematics concerned with problems of selection, arrangement, and operation within a finite or discrete system. Included is the closely related area of combinatorial geometry.

      One of the basic problems of combinatorics is to determine the number of possible configurations (e.g., graphs, designs, arrays) of a given type. Even when the rules specifying the configuration are relatively simple, enumeration may sometimes present formidable difficulties. The mathematician may have to be content with finding an approximate answer or at least a good lower and upper bound.

      In mathematics, generally, an entity is said to “exist” if a mathematical example satisfies the abstract properties that define the entity. In this sense it may not be apparent that even a single configuration with certain specified properties exists. This situation gives rise to problems of existence and construction. There is again an important class of theorems that guarantee the existence of certain choices under appropriate hypotheses. Besides their intrinsic interest, these theorems may be used as existence theorems in various combinatorial problems.

      Finally, there are problems of optimization. As an example, a function f, the economic function, assigns the numerical value f(x) to any configuration x with certain specified properties. In this case the problem is to choose a configuration x0 that minimizes f(x) or makes it ε-minimal—that is, for any number ε > 0, f(x0) ≤ f(x) + ε for all configurations x with the specified properties.

History

Early developments
      Certain types of combinatorial problems have attracted the attention of mathematicians since early times. Magic squares, for example, which are square arrays of numbers with the property that the rows, columns, and diagonals add up to the same number, occur in the I Ching, a Chinese book dating back to the 12th century BC. The binomial coefficients, or integer coefficients in the expansion of (a + b)n, were known to the 12th-century Indian mathematician Bhāskara (Bhāskara II), who in his Līlāvatī (“The Graceful”), dedicated to a beautiful woman, gave the rules for calculating them together with illustrative examples. “Pascal's triangle,” a triangular array of binomial coefficients, had been taught by the 13th-century Persian philosopher Naṣīr ad-Dīn aṭ-Ṭūsī.

      In the West, combinatorics may be considered to begin in the 17th century with Blaise Pascal and Pierre de Fermat, both of France, who discovered many classical combinatorial results in connection with the development of the theory of probability. The term combinatorial was first used in the modern mathematical sense by the German philosopher and mathematician Gottfried Wilhelm Leibniz in his Dissertatio de Arte Combinatoria (“Dissertation Concerning the Combinational Arts”). He foresaw the applications of this new discipline to the whole range of the sciences. The Swiss mathematician Leonhard Euler was finally responsible for the development of a school of authentic combinatorial mathematics beginning in the 18th century. He became the father of graph theory when he settled the Königsberg bridge problem, and his famous conjecture on Latin squares was not resolved until 1959.

      In England, Arthur Cayley, near the end of the 19th century, made important contributions to enumerative graph theory, and James Joseph Sylvester discovered many combinatorial results. The British mathematician George Boole at about the same time used combinatorial methods in connection with the development of symbolic logic, and the combinatorial ideas and methods of Henri Poincaré, which developed in the early part of the 20th century in connection with the problem of n bodies, have led to the discipline of topology, which occupies the centre of the stage of mathematics. Many combinatorial problems were posed during the 19th century as purely recreational problems and are identified by such names as “the problem of eight queens” and “the Kirkman school girl problem.” On the other hand, the study of triple systems begun by Thomas P. Kirkman in 1847 and pursued by Jakob Steiner, a Swiss-born German mathematician, in the 1850s was the beginning of the theory of design. Among the earliest books devoted exclusively to combinatorics are the German mathematician Eugen Netto's Lehrbuch der Combinatorik (1901; “Textbook of Combinatorics”) and the British mathematician Percy Alexander MacMahon's Combinatory Analysis (1915–16), which provide a view of combinatorial theory as it existed before 1920.

Combinatorics during the 20th century
      Many factors have contributed to the quickening pace of development of combinatorial theory since 1920. One of these was the development of the statistical theory of the design of experiments by the English statisticians Ronald Fisher and Frank Yates, which has given rise to many problems of combinatorial interest; the methods initially developed to solve them have found applications in such fields as coding theory. Information theory, which arose around midcentury, has also become a rich source of combinatorial problems of a quite new type.

      Another source of the revival of interest in combinatorics is graph theory, the importance of which lies in the fact that graphs can serve as abstract models for many different kinds of schemes of relations among sets of objects. Its applications extend to operations research, chemistry, statistical mechanics, theoretical physics, and socioeconomic problems. The theory of transportation networks can be regarded as a chapter of the theory of directed graphs. One of the most challenging theoretical problems, the four-colour problem (see below), belongs to the domain of graph theory. It also has applications to such other branches of mathematics as group theory.

      The development of computer technology in the second half of the 20th century is a main cause of the interest in finite mathematics in general and combinatorial theory in particular. Combinatorial problems arise not only in numerical analysis but also in the design of computer systems and in the application of computers to such problems as those of information storage and retrieval.

      Statistical mechanics is one of the oldest and most productive sources of combinatorial problems. Much important combinatorial work has been done by applied mathematicians and physicists since the mid-20th century—for example, the work on Ising models (see below The Ising problem).

      In pure mathematics, combinatorial methods have been used with advantage in such diverse fields as probability, algebra (finite groups and fields, matrix and lattice theory), number theory (difference sets), set theory (Sperner's theorem), and mathematical logic (Ramsey's theorem).

      In contrast to the wide range of combinatorial problems and the multiplicity of methods that have been devised to deal with them stands the lack of a central unifying theory. Unifying principles and cross connections, however, have begun to appear in various areas of combinatorial theory. The search for an underlying pattern that may indicate in some way how the diverse parts of combinatorics are interwoven is a challenge that faces mathematicians in the last quarter of the 20th century.

Problems of enumeration

Permutations and combinations
Binomial coefficients
      An ordered set a1, a2, . . . , ar of r distinct objects selected from a set of n objects is called a permutation of n things taken r at a time. The number of such permutations is given by nPr = n(n - 1)(n - 2) · · · (n - r + 1). When r = n, the number nPn = n(n - 1)(n - 2) · · · 2 · 1 is simply the number of ways of arranging n distinct things in a row. This expression is called factorial n and is denoted by n!. It follows that nPr = n!/(n - r)!. By convention 0! = 1.
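
      As a concrete illustration, the following short Python sketch (the function name is an arbitrary choice) computes nPr from the factorial formula and checks it against direct enumeration for a small case.

```python
from itertools import permutations
from math import factorial

def n_p_r(n, r):
    # nPr = n! / (n - r)!  -- the number of ordered selections of r items from n
    return factorial(n) // factorial(n - r)

# Cross-check against direct enumeration for a small case.
n, r = 5, 3
assert n_p_r(n, r) == len(list(permutations(range(n), r)))  # 60 ordered triples
print(n_p_r(n, r))  # 60
```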

      A set of r objects selected from a set of n objects without regard to order is called a combination of n things taken r at a time. Because each combination gives rise to r! permutations, the number of combinations, which may be written C(n, r), can be expressed in terms of factorials: C(n, r) = n!/[r!(n - r)!] (see formula 1—>).

      The number C(n, r) is called a binomial coefficient because it occurs as the coefficient of p^r q^(n - r) in the binomial expansion—that is, the re-expression of (q + p)^n as a linear combination of products of powers of p and q (see 2—>).

      The term C(n, r)p^r q^(n - r) in the binomial expansion is the probability that an event whose chance of occurrence is p occurs exactly r times in n independent trials (see probability theory).

      The answer to many different kinds of enumeration problems can be expressed in terms of binomial coefficients. The number of distinct solutions of the equation x1 + x2 + · · · + xn = m, for example, in which m is a non-negative integer and in which only non-negative integral values of xi are allowed, is the binomial coefficient C(n + m - 1, m), as was found by the 17th–18th-century French-born British mathematician Abraham De Moivre (see 3—>).
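
      A brief Python sketch can make these counts concrete; it checks the binomial probability term and De Moivre's count of non-negative solutions against brute-force enumeration (the helper name and the small test values are arbitrary choices).

```python
from math import comb
from itertools import product

# C(n, r) = n!/(r!(n - r)!), here obtained from math.comb.
n, r, p = 6, 2, 0.3
q = 1 - p

# Probability of exactly r successes in n independent trials
# (one term of the binomial expansion of (q + p)**n).
prob = comb(n, r) * p**r * q**(n - r)

# The number of non-negative integer solutions of x1 + ... + xk = m is C(k + m - 1, m).
def count_solutions(n_vars, m):
    return sum(1 for xs in product(range(m + 1), repeat=n_vars) if sum(xs) == m)

assert count_solutions(3, 4) == comb(3 + 4 - 1, 4)  # 15 solutions of x1 + x2 + x3 = 4
print(round(prob, 4), comb(3 + 4 - 1, 4))
```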

Multinomial coefficients
      If S is a set of n objects and n1, n2, · · · , nk are non-negative integers satisfying n1 + n2 + · · · + nk = n, then the number of ways in which the objects can be distributed into k boxes, X1, X2, · · · , Xk, such that the box Xi contains exactly ni objects, is given by the ratio of factorials n!/(n1!n2! · · · nk!) (see 4—>). This number, called a multinomial coefficient, is the coefficient in the multinomial expansion of the nth power of the sum of the {pi} (see 5—>). If all of the {pi} are non-negative and sum to 1 and if there are k possible outcomes in a trial in which the chance of the ith outcome is pi, then the corresponding term of the multinomial expansion gives the probability that in n independent trials the ith outcome will occur exactly ni times, for each i, 1 ≤ i ≤ k.
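
      The multinomial coefficient can be checked in the same spirit; the sketch below (with an illustrative helper name) compares the factorial ratio with a direct count of distinct arrangements of a small multiset.

```python
from math import factorial
from itertools import permutations

def multinomial(counts):
    # n! / (n1! * n2! * ... * nk!)
    n = sum(counts)
    result = factorial(n)
    for c in counts:
        result //= factorial(c)
    return result

# Distribute 4 objects into boxes of sizes 2, 1, 1: the count should be 4!/(2!1!1!) = 12.
# Cross-check by enumerating the distinct arrangements of the multiset A, A, B, C.
arrangements = set(permutations("AABC"))
assert multinomial([2, 1, 1]) == len(arrangements) == 12
print(multinomial([2, 1, 1]))
```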

Recurrence relations and generating functions
      If fn is a function defined on the positive integers, then a relation that expresses fn+k as a linear combination of function values of integer index less than n + k, in which a fixed constant in the linear combination is written ai, is called a recurrence relation (see 6—>). The relation together with the initial values f0, f1, · · · , fk-1 determines fn for all n. The function F(x) constructed of a sum of terms of the type fnx^n, the convergence of which is assumed in the neighbourhood of the origin, is called the generating function of fn (see 7—>).

      The set of the first n positive integers will be written Xn. It is possible to find the number of subsets of Xn containing no two consecutive integers, with the convention that the null set counts as one set. The required number will be written fn. A subset of the required type is either a subset of Xn-1 or is obtained by adjoining n to a subset of Xn-2. Therefore fn is determined by the recurrence relation fn = fn-1 + fn-2 with the initial values f0 = 1, f1 = 2. Thus f2 = 3, f3 = 5, f4 = 8, and so on. The generating function F(x) of fn can be calculated (see 8—>), and from this a formula for the desired function fn can be obtained (see 9—>). That fn = fn-1 + fn-2 can now be directly checked.
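
      The recurrence and its initial values are easy to verify computationally; the following sketch, with illustrative function names, compares the recurrence against a brute-force count of the admissible subsets.

```python
from itertools import combinations

def f_recurrence(n):
    # f0 = 1, f1 = 2, fn = f(n-1) + f(n-2)
    a, b = 1, 2
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, a + b
    return b

def f_bruteforce(n):
    # Count subsets of {1, ..., n} with no two consecutive integers (the null set counts).
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(1, n + 1), r):
            if all(subset[i + 1] - subset[i] >= 2 for i in range(len(subset) - 1)):
                count += 1
    return count

for n in range(0, 10):
    assert f_recurrence(n) == f_bruteforce(n)
print([f_recurrence(n) for n in range(7)])  # [1, 2, 3, 5, 8, 13, 21]
```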

Partitions
      A partition of a positive integer n is a representation of n as a sum of positive integers n = x1 + x2 + · · · + xk, xi ≥ 1, i = 1, 2, · · · , k. The numbers xi are called the parts of the partition. The number of ordered partitions of n into k parts is the binomial coefficient C(n - 1, k - 1), for this is the number of ways of putting k - 1 separating marks in the n - 1 spaces between n dots in a row. The theory of unordered partitions is much more difficult and has many interesting features. An unordered partition can be standardized by listing the parts in decreasing order. Thus n = x1 + x2 + · · · + xk, x1 ≥ x2 ≥ · · · ≥ xk ≥ 1. In what follows partition will mean an unordered partition.

      The number of partitions of n into k parts will be denoted by Pk(n), and a recurrence formula for it can be obtained from the definition (see 10—>). This recurrence formula, together with the initial conditions Pk(n) = 0 if n < k and Pk(k) = 1, determines Pk(n). It can be shown that Pk(n) depends on the value of n (mod k!), in which the notation x ≡ a (mod b) means that x is any number that, if divided by b, leaves the same remainder as a does. For example, P3(n) = n^2/12 + cn, in which cn = 0, −1/12, −1/3, +1/4, −1/3, or −1/12, according as n is congruent to 0, 1, 2, 3, 4, or 5 (mod 6). P(n), the sum of Pk(n) over all values of k from 1 to n, denotes the number of partitions of n into n or fewer parts.
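
      A short sketch can confirm both the standard recurrence Pk(n) = Pk−1(n − 1) + Pk(n − k) (one form of the recurrence alluded to above) and the quasi-polynomial behaviour of P3(n).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(k, n):
    # Number of partitions of n into exactly k parts.
    # Either the partition contains a part 1 (remove it) or every part is >= 2
    # (subtract 1 from each part), giving P(k, n) = P(k - 1, n - 1) + P(k, n - k).
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    return P(k - 1, n - 1) + P(k, n - k)

def partitions(n):
    # P(n): partitions of n into any number of parts.
    return sum(P(k, n) for k in range(1, n + 1))

print([partitions(n) for n in range(1, 8)])   # [1, 2, 3, 5, 7, 11, 15]
print([P(3, n) for n in range(3, 10)])        # [1, 1, 2, 3, 4, 5, 7]
# P3(n) = n**2/12 + c_n is the integer nearest to n**2/12.
assert all(P(3, n) == round(n * n / 12) for n in range(3, 50))
```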

The Ferrers diagram
 Many results on partitions can be obtained by the use of Ferrers' diagram. The diagram of a partition is obtained by putting down a row of squares equal in number to the largest part, then immediately below it a row of squares equal in number to the next part, and so on. Such a diagram for 14 = 5 + 3 + 3 + 2 + 1 is shown in Figure 1—>.

      By rotating the Ferrers' diagram of the partition about the diagonal, it is possible to obtain from the partition n = x1 + x2 + · · · + xk the conjugate partition n = x1* + x2* + · · · + xh*, in which h = x1 and xi* is the number of parts in the original partition of cardinality i or more. Thus the conjugate of the partition of 14 already given is 14 = 5 + 4 + 3 + 1 + 1. Hence, the following result is obtained:

      (F1) The number of partitions of n into k parts is equal to the number of partitions of n with k as the largest part.

      Other results obtainable by using Ferrers' diagrams are:

      (F2) The number of self-conjugate partitions of n equals the number of partitions of n with all parts unequal and odd.

      (F3) The number of partitions of n into unequal parts is equal to the number of partitions of n into odd parts.

      Generating functions can be used with advantage to study partitions. For example, it can be proved that:

      (G1) The generating function F1(x) of P(n), the number of partitions of the integer n, is a product of reciprocals of terms of the type (1 - x^k), for all positive integers k, with the convention that P(0) = 1 (see 11—>).

      (G2) The generating function F2(x) of the number of partitions of n into unequal parts is a product of terms of the type (1 + x^k), for all positive integers k (see 12—>).

      (G3) The generating function F3(x) of the number of partitions of n into odd parts only is a product of reciprocals of terms of the type (1 - x^k), for all positive odd integers k (see 13—>).

      Thus to prove (F3) it is necessary only to show that the generating functions described in (G2) and (G3) are equal. This method was used by Euler.
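
      Euler's identity (F3) can also be mimicked numerically: the sketch below (helper names are illustrative) counts partitions into unequal parts and partitions into odd parts for small n and checks that the two counts agree.

```python
def partitions_with(n, allowed_parts, distinct=False):
    # Count partitions of n using the given parts; optionally all parts distinct.
    def rec(remaining, parts):
        if remaining == 0:
            return 1
        total = 0
        for i, p in enumerate(parts):
            if p > remaining:
                break
            rest = parts[i + 1:] if distinct else parts[i:]
            total += rec(remaining - p, rest)
        return total
    return rec(n, sorted(allowed_parts))

for n in range(1, 15):
    distinct_parts = partitions_with(n, range(1, n + 1), distinct=True)
    odd_parts = partitions_with(n, range(1, n + 1, 2))
    assert distinct_parts == odd_parts
print("partitions into unequal parts = partitions into odd parts, for n = 1..14")
```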

The principle of inclusion and exclusion: derangements
      For a case in which there are N objects and n properties A1, A2, · · · An, the number N(A1, A2), for example, will be the number of objects that possess the properties A1, A2. If N(Ā1, Ā2, · · · , Ān) is the number of objects possessing none of the properties A1, A2, · · · , An, then this number can be computed as an alternating sum of sums involving the numbers of objects that possess the properties (see 14—>). This is the principle of inclusion and exclusion expressed by Sylvester.

      A permutation of n elements that displaces every object is called a derangement. The permutations themselves may be taken as the objects, and the property i may be the property that a permutation does not displace the ith element. In such a case N = n! and N(A1, A2) = (n - 2)!, for example. Hence the number Dn of derangements can be shown to be approximately n!/e (see 15—>). This number was first obtained by Euler. If n persons check their hats in a restaurant, and the waiter loses the checks and returns the hats at random, the chance that no one receives his own hat is Dn/n! = e^-1 approximately. It is surprising that the approximate answer is independent of n. To six decimal places e^-1 = 0.367879. When n = 6 the error of approximation is less than 0.0002.
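
      The following sketch computes Dn by inclusion and exclusion, checks it against brute-force enumeration, and displays how close Dn/n! comes to 1/e.

```python
from math import factorial, e
from itertools import permutations

def derangements(n):
    # Inclusion and exclusion: D_n = n! * sum_{i=0}^{n} (-1)**i / i!
    return sum((-1) ** i * factorial(n) // factorial(i) for i in range(n + 1))

# Brute-force check: permutations of n elements with no fixed point.
for n in range(1, 8):
    brute = sum(1 for p in permutations(range(n)) if all(p[i] != i for i in range(n)))
    assert derangements(n) == brute

n = 6
print(derangements(n), derangements(n) / factorial(n), 1 / e)
# 265  0.3680555...  0.3678794...  -- the ratio is close to 1/e, as noted above
```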

      If n is expressed as the product of powers of its prime factors p1, p2, · · · pk, and if the objects are the integers less than or equal to n, and if Ai is the property of being divisible by pi, then Sylvester's formula gives, as the number of integers less than n and prime to it, a function of n, written ϕ(n), composed of a product of n and k factors of the type (1 - 1/pi) (see 16—>). The function ϕ(n) is the Euler function.

Polya's theorem
      It is required to make a necklace of n beads out of an infinite supply of beads of k different colours. The number of different necklaces, c(n, k), that can be made is given by the reciprocal of n times a sum of terms of the type ϕ(d)k^(n/d), in which the summation is over all divisors d of n and ϕ is the Euler function (see 17—>).
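
      Assuming the rotation-only notion of equivalence implied by this formula (reflections are not identified), the count can be verified for small cases:

```python
from math import gcd
from itertools import product

def phi(n):
    # Euler's function, by direct count (adequate for the small n used here).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def necklaces(n, k):
    # c(n, k) = (1/n) * sum over divisors d of n of phi(d) * k**(n/d)
    return sum(phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def necklaces_bruteforce(n, k):
    # Two colourings are the same necklace if one is a rotation of the other.
    seen = set()
    for colouring in product(range(k), repeat=n):
        canonical = min(colouring[i:] + colouring[:i] for i in range(n))
        seen.add(canonical)
    return len(seen)

for n in range(1, 7):
    for k in range(1, 4):
        assert necklaces(n, k) == necklaces_bruteforce(n, k)
print(necklaces(6, 2))  # 14 binary necklaces of length 6
```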

      Though the problem of the necklaces appears to be frivolous, the formula given above can be used to solve a difficult problem in the theory of Lie algebras, of some importance in modern physics.

      The general problem of which the necklace problem is a special case was solved by the Hungarian-born U.S. mathematician George Polya in a famous 1937 memoir in which he established connections between groups, graphs, and chemical bonds. It has been applied to enumeration problems in physics, chemistry, and mathematics.

The Möbius inversion theorem
      In 1832 the German astronomer and mathematician August Ferdinand Möbius proved that, if f and g are functions defined on the set of positive integers, such that f evaluated at x is a sum of values of g evaluated at divisors of x, then inversely g at x can be evaluated as a sum involving f evaluated at divisors of x (see 18—>).
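
      A small numerical check of the inversion formula, taking g(n) = n^2 + 1 as an arbitrary test function and using the classical Möbius function μ:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # mu(n): 0 if n has a squared prime factor, otherwise (-1)**(number of prime factors).
    if n == 1:
        return 1
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

# Build f(n) = sum of g(d) over divisors d of n, then recover g by
# g(n) = sum of mu(n/d) * f(d) over divisors d of n.
g = {n: n * n + 1 for n in range(1, 50)}          # arbitrary test function
f = {n: sum(g[d] for d in divisors(n)) for n in g}
for n in g:
    recovered = sum(mobius(n // d) * f[d] for d in divisors(n))
    assert recovered == g[n]
print("Mobius inversion verified for n = 1..49")
```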

      In 1964 the U.S. mathematician Gian-Carlo Rota obtained a powerful generalization of this theorem, providing a fundamental unifying principle of enumeration. One consequence of Rota's theorem is the following:

      If f and g are functions defined on subsets of a finite set A, such that f(A) is a sum of terms g(S), in which S is a subset of A, then g(A) can be expressed in terms of f (see 19—>).

Special problems
      Despite the general methods of enumeration already described, there are many problems in which they do not apply and which therefore require special treatment. Two of these are described below, and others will be met further in this article.

The Ising problem
      A rectangular m × n grid is made up of unit squares, each coloured either red or green. How many different colour patterns are there if the number of boundary edges between red squares and green squares is prescribed?

      This problem, though easy to state, proved very difficult to solve. A complete and rigorous solution was not achieved until the early 1960s. The importance of the problem lies in the fact that it is the simplest model that exhibits the macroscopic behaviour expected from certain natural assumptions made at the microscopic level. Historically, the problem arose from an early attempt, made in 1925, to formulate the statistical mechanics of ferromagnetism.

      The three-dimensional analogue of the Ising problem remains unsolved in spite of persistent attacks.

Non-self-intersecting random walk
      A random walk consists of a sequence of n steps of unit length on a flat rectangular grid, taken at random either in the x- or the y-direction, with equal probability in each of the four directions. What is the number Rn of random walks that do not touch the same vertex twice? This problem has defied solution, except for small values of n, though a large amount of numerical data has been amassed.
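
      Exhaustive backtracking easily produces the first few values of Rn on the square lattice (the known values 4, 12, 36, 100, 284, 780 for n = 1, · · · , 6), even though no general formula is known:

```python
def self_avoiding_walks(n):
    # Count n-step walks on the square lattice that never revisit a vertex.
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(path, visited, remaining):
        if remaining == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                total += extend(path, visited, remaining - 1)
                path.pop()
                visited.remove(nxt)
        return total

    start = (0, 0)
    return extend([start], {start}, n)

print([self_avoiding_walks(n) for n in range(1, 7)])  # [4, 12, 36, 100, 284, 780]
```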

Problems of choice

Systems of distinct representatives
      Subsets S1, S2, · · · , Sn of a finite set S are said to possess a set of distinct representatives if elements x1, x2, · · · , xn can be found such that xi ∊ Si, i = 1, 2, · · · , n, and xi ≠ xj for i ≠ j. It is possible that Si and Sj, i ≠ j, may have exactly the same elements and are distinguished only by the indices i, j. In 1935 the British mathematician Philip Hall proved that a necessary and sufficient condition for S1, S2, · · · , Sn to possess a system of distinct representatives is that, for every k ≤ n, any k of the n subsets contain between them at least k distinct elements.

      For example, the sets S1 = (1, 2, 2), S2 = (1, 2, 4), S3 = (1, 2, 5), S4 = (3, 4, 5, 6), S5 = (3, 4, 5, 6) satisfy the conditions of the theorem, and a set of distinct representatives is x1 = 1, x2 = 2, x3 = 5, x4 = 3, x5 = 4. On the other hand, the sets T1 = (1, 2), T2 = (1, 3), T3 = (1, 4), T4 = (2, 3), T5 = (2, 4), T6 = (1, 2, 5) do not possess a system of distinct representatives because T1, T2, T3, T4, T5 possess between them only four elements.

      The following theorem due to König is closely related to Hall's theorem and can be easily deduced from it; conversely, Hall's theorem can be deduced from König's: if the elements of a rectangular matrix are 0s and 1s, the minimum number of lines (rows and columns) that contain all of the 1s is equal to the maximum number of 1s that can be chosen with no two on a line.

Ramsey's numbers
      If X = {1, 2, . . . , n}, and if T, the family of all subsets of X containing exactly r distinct elements, is divided into two mutually exclusive families α and β, the following conclusion, originally obtained by the British mathematician Frank Plumpton Ramsey, holds. He proved that for r ≥ 1, p ≥ r, q ≥ r, there exists a number Nr(p, q) depending solely on p, q, r, such that if n > Nr(p, q), there is either a subset A of p elements all of whose r-element subsets are in the family α or a subset B of q elements all of whose r-element subsets are in the family β.

      The set X can be a set of n persons. For r = 2, T is the family of all pairs. If two persons have met each other, the pair can belong to the family α. If two persons have not met, the pair can belong to the family β. If these things are assumed, then by Ramsey's theorem, for any given p ≥ 2, q ≥ 2 there exists a number N2(p, q), such that if n > N2(p, q), then among n persons invited to a party there will be either a set of p persons all of whom have met each other or a set of q persons no two of whom have met.

      Although the existence of Nr(p, q) is known, actual values are known only for a few cases. Because Nr(p, q) = Nr(q, p), it is possible to take p ≤ q. It is known that N2(3, 3) = 6, N2(3, 4) = 9, N2(3, 5) = 14, N2(3, 6) = 18, N2(4, 4) = 18. Some bounds are also known, for example, 25 ≤ N2(4, 5) ≤ 28.
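
      The value N2(3, 3) = 6 can be confirmed by exhaustive computation: every 2-colouring of the 15 edges of the complete graph on six vertices contains a one-coloured triangle, while a suitable colouring of the complete graph on five vertices contains none.

```python
from itertools import combinations, product

def has_mono_triangle(n, red_edges):
    # red_edges: set of frozensets; every other edge of K_n counts as "blue".
    for tri in combinations(range(n), 3):
        pairs = [frozenset(p) for p in combinations(tri, 2)]
        colours = {p in red_edges for p in pairs}
        if len(colours) == 1:          # all three edges have the same colour
            return True
    return False

# Every 2-colouring of the 15 edges of K6 contains a monochromatic triangle ...
edges6 = [frozenset(p) for p in combinations(range(6), 2)]
assert all(
    has_mono_triangle(6, {e for e, red in zip(edges6, bits) if red})
    for bits in product([False, True], repeat=len(edges6))
)

# ... but K5 admits a colouring with none: colour the 5-cycle red, the diagonals blue.
cycle5 = {frozenset({i, (i + 1) % 5}) for i in range(5)}
assert not has_mono_triangle(5, cycle5)
print("N2(3, 3) = 6 confirmed by exhaustive check")
```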

      A consequence of Ramsey's theorem is the following result obtained in 1935 by the Hungarian mathematicians Paul Erdös and George Szekeres. For a given integer n there exists an integer N = N(n), such that a set of any N points on a plane, no three on a line, contains n points forming a convex n-gon.

Design theory

BIB (balanced incomplete block) designs.
      A design is a set T = {1, 2, . . . , υ} of objects, called treatments, together with a family of subsets B1, B2, . . . , Bb of T, called blocks, such that the block Bi contains exactly ki treatments, all distinct. The number ki is called the size of the block Bi, and the ith treatment is said to be replicated ri times if it occurs in exactly ri blocks. Specific designs are subject to further constraints. The name design comes from statistical theory, in which designs are used to estimate effects of treatments applied to experimental units.

      A BIB design is a design with υ treatments and b blocks in which each block is of size k, each treatment is replicated r times, and every pair of distinct treatments occurs together in λ blocks. The design is said to have the parameters (υ, b, r, k, λ). Some basic relations are easy to establish: bk = υr and λ(υ − 1) = r(k − 1) (see 20—>). These conditions are necessary but not sufficient for the existence of the design. The design is said to be proper if k < υ—that is, if the blocks are incomplete. For a proper BIB design Fisher's inequality b ≥ υ, or equivalently r ≥ k, holds.

      A BIB design is said to be symmetric if υ = b, and consequently r = k. Such a design is called a symmetric (υ, k, λ) design, and λ(υ − 1) = k(k − 1). A necessary condition for the existence of a symmetric (υ, k, λ) design is given by the following:

      A. If υ is even, k − λ is a perfect square.

      B. If υ is odd, a certain Diophantine equation (see 21—>) has a solution in integers not all zero.

      For example, the designs (υ, k, λ) = (22, 7, 2) and (46, 10, 2) are ruled out by (A) and the design (29, 8, 2) by (B).

      Because necessary and sufficient conditions for the existence of a BIB design with given parameters are not known, it is often a very difficult problem to decide whether a design with given parameters (satisfying the known necessary conditions) really exists. By 1972 there were only two unsettled cases with r ≤ 10. These are (υ, b, r, k, λ) = (46, 69, 9, 6, 1) and (51, 85, 10, 6, 1).

      Methods of constructing BIB designs depend on the use of finite fields, finite geometries, and number theory. Some general methods were given in 1939 by the Indian mathematician Raj Chandra Bose, who has since emigrated to the United States.

      A finite field is a finite set of marks with two operations, addition and multiplication, subject to the usual nine laws of addition and multiplication obeyed by rational numbers. In particular the marks may be taken to be the set X of non-negative integers less than a prime p. If this is so, then addition and multiplication are defined by modified addition and multiplication laws (see 22—>) in which a, b, r, and p belong to X. For example, if p = 7, then 5 + 4 = 2, 5 · 4 = 6. There exist more general finite fields in which the number of elements is p^n, p a prime. There is essentially one field with p^n elements, with given p and n. It is denoted by GF(p^n).

      Finite geometries can be obtained from finite fields in which the coordinates of points are now elements of a finite field.

      A set of k non-negative integers d1, d2, · · · , dk is said to form a perfect difference set mod υ if, among the k(k − 1) differences di − dj, i ≠ j, i, j = 1, 2, · · · , k, reduced mod υ, each nonzero positive integer less than υ occurs exactly the same number of times λ. For example, 1, 4, 5, 9, 3 is a difference set mod 11, with λ = 2. From a perfect difference set can be obtained the symmetric (υ, k, λ) design using as treatments the integers 0, 1, 2, · · · , υ − 1. The jth block contains the treatments obtained by reducing mod υ the numbers d1 + j, d2 + j, · · · , dk + j, j = 0, 1, · · · , υ − 1.
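
      The difference set quoted above can be checked, and the corresponding symmetric (11, 5, 2) design developed from it, with a few lines of code:

```python
from itertools import combinations
from collections import Counter

diff_set = [1, 4, 5, 9, 3]       # the perfect difference set mod 11 given above
v = 11

# Check the difference property: every nonzero residue occurs the same number of times.
diffs = Counter((a - b) % v for a in diff_set for b in diff_set if a != b)
assert set(diffs.values()) == {2}                 # lambda = 2

# Develop the set mod v to obtain the blocks of a symmetric (11, 5, 2) design.
blocks = [sorted((d + j) % v for d in diff_set) for j in range(v)]

# Every pair of distinct treatments should occur together in exactly lambda = 2 blocks.
pair_counts = Counter(pair for block in blocks for pair in combinations(block, 2))
assert set(pair_counts.values()) == {2}
print(blocks[0], blocks[1])
```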

      It can be shown that any two blocks of a symmetric (υ, k, λ) design intersect in exactly λ treatments. By deleting one block and all the treatments contained in it, it is possible to obtain from the symmetric design its residual, which is a BIB design (unsymmetric) with parameters υ* = υ − k, b* = υ − 1, r* = k, k* = k − λ, λ* = λ. One may ask whether it is true that a BIB design with the parameters of a residual can be embedded in a symmetric BIB design. The truth of this is rather easy to demonstrate when λ = 1. Hall and W.S. Connor in 1953 showed that it is also true for λ = 2. The Indian mathematician K.N. Bhattacharya in 1944, however, gave a counterexample for λ = 3 by exhibiting a BIB design with parameters υ = 16, b = 24, r = 9, k = 6, λ = 3 for which two particular blocks intersect in four treatments and which for that reason cannot be embedded in a symmetric BIB design.

      A BIB design is said to be resolvable if the set of blocks can be partitioned into subsets, such that the blocks in any subset contain every treatment exactly once. For the case k = 3 this problem was first posed during the 19th century by the British mathematician T.P. Kirkman as a recreational problem. There are υ girls in a class. Their teacher wants to take the class out for a walk for a number of days, the girls marching abreast in triplets. It is required to arrange the walk so that any two girls march abreast in the same triplet exactly once. It is easily shown that this is equivalent to the construction of a resolvable BIB design with υ = 6t + 3, b = (2t + 1)(3t + 1), r = 3t + 1, k = 3, λ = 1. Solutions were known for only a large number of special values of t until a completely general solution was finally given by the Indian and U.S. mathematicians Dwijendra K. Ray-Chaudhuri and R.M. Wilson in 1970.

PBIB (partially balanced incomplete block) designs.
      Given υ objects 1, 2, · · · , υ, a relation satisfying the following conditions is said to be an m-class partially balanced association scheme:

      A. Any two objects are either 1st, or 2nd, · · · , or mth associates, the relation of association being symmetrical.

      B. Each object α has ni ith associates, the number ni being independent of α.

      C. If any two objects α and β are ith associates, then the number of objects that are jth associates of α and kth associates of β is pjki and is independent of the pair of ith associates α and β.

      The constants υ, ni, pjki are the parameters of the association scheme. A number of identities connecting these parameters were given by the Indian mathematicians Bose and K.R. Nair in 1939, but Bose and the U.S. mathematician D.M. Mesner in 1959 discovered new identities when m > 2.

      A PBIB design is obtained by identifying the υ treatments with the υ objects of an association scheme and arranging them into b blocks satisfying the following conditions:

      A. Each block contains k treatments.

      B. Each treatment occurs in r blocks.

      C. If two treatments are ith associates, they occur together in λi blocks.

      Two-class association schemes and the corresponding designs are especially important both from the mathematical point of view and because of statistical applications. For a two-class association scheme the constancy of υ, ni, p111, and p112 ensures the constancy of the other parameters. Seven relations hold (see 23—>). Sufficient conditions for the existence of association schemes with given parameters are not known, but for a two-class association scheme Connor and the U.S. mathematician Willard H. Clatworthy in 1954 obtained some necessary conditions (see 24—>).

Latin squares and the packing problem

Orthogonal Latin squares
 A Latin square of order k is defined as a k × k square grid, the k^2 cells of which are occupied by k distinct symbols of a set X = {1, 2, . . . , k}, such that each symbol occurs once in each row and each column. Two Latin squares are said to be orthogonal if, when superposed, any symbol of the first square occurs exactly once with each symbol of the second square. Two orthogonal Latin squares of order 4 are exhibited in Figure 2—>.

      A set of mutually orthogonal Latin squares is a set of Latin squares any two of which are orthogonal. It is easily shown that there cannot exist more than k - 1 mutually orthogonal Latin squares of a given order k. When k - 1 mutually orthogonal Latin squares of order k exist, the set is complete. A complete set always exists if k is the power of a prime. An unsolved question is whether there can exist a complete set of mutually orthogonal Latin squares of order k if k is not a prime power.
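
      For a prime order p, one standard construction takes La(i, j) = (ai + j) mod p for a = 1, · · · , p − 1; the sketch below builds such a complete set for p = 5 and verifies the Latin and orthogonality properties (the helper names are illustrative).

```python
from itertools import product

def latin_squares_prime(p):
    # For a prime p, L_a(i, j) = (a*i + j) mod p, a = 1, ..., p - 1,
    # gives a complete set of p - 1 mutually orthogonal Latin squares.
    return [[[(a * i + j) % p for j in range(p)] for i in range(p)] for a in range(1, p)]

def is_latin(sq):
    n = len(sq)
    return all(len(set(row)) == n for row in sq) and \
           all(len({sq[i][j] for i in range(n)}) == n for j in range(n))

def orthogonal(sq1, sq2):
    n = len(sq1)
    pairs = {(sq1[i][j], sq2[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n       # every ordered pair of symbols occurs exactly once

p = 5
squares = latin_squares_prime(p)
assert all(is_latin(sq) for sq in squares)
assert all(orthogonal(squares[i], squares[j])
           for i in range(len(squares)) for j in range(i + 1, len(squares)))
print(f"{len(squares)} mutually orthogonal Latin squares of order {p}")
```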

      Many types of experimental designs are based on Latin squares. Hence, the construction of mutually orthogonal Latin squares is an important combinatorial problem. Letting the prime power decomposition of an integer k be given, the arithmetic function n(k) is defined as one less than the minimum of the prime-power factors in such a decomposition (see 25—>).

      Letting N(k) denote the maximum number of mutually orthogonal Latin squares of order k, the U.S. mathematician H.F. MacNeish in 1922 showed that there always exist n(k) mutually orthogonal Latin squares of order k and conjectured that this is the maximum number of such squares—that is, N(k) = n(k). There was also the long-standing conjecture of Euler, formulated in 1782, that there cannot exist mutually orthogonal Latin squares of order 4t + 2, for any integer t. MacNeish's conjecture, if true, would imply the truth of Euler's but not conversely. The U.S. mathematician E.T. Parker in 1958 disproved the conjecture of MacNeish. This left open the question of Euler's conjecture. Bose and the Indian mathematician S.S. Shrikhande in 1959–60 obtained the first counterexample to Euler's conjecture by constructing two mutually orthogonal Latin squares of order 22 and then generalized their method to disprove Euler's conjecture for an infinity of values of k ≡ 2 (mod 4). Parker in 1959 used the method of differences to show the falsity of Euler's conjecture for all k = (3q + 1)/2, in which q is a prime power, q ≡ 3 (mod 4). Finally these three mathematicians in 1960 showed that N(k) ≥ 2 whenever k > 6. It is pertinent to inquire about the behaviour of N(k) for large k. The best result in this direction is due to R.M. Wilson in 1971: he showed that N(k) ≥ k^(1/17) − 2 for large k.

Orthogonal arrays and the packing problem
      A k × N matrix A with entries from a set X of s ≥ 2 symbols is called an orthogonal array of strength t, size N, k constraints, and s levels if each t × N submatrix of A contains all possible t × 1 column vectors with the same frequency λ. The array may be denoted by (N, k, s, t). The number λ is called the index of the array, and N = λs^t. This concept was introduced by the Indian mathematician C.R. Rao in 1947.

      Orthogonal arrays are a generalization of orthogonal Latin squares. Indeed, the existence of an orthogonal array of k constraints, s levels, strength 2, and index unity is combinatorially equivalent to the existence of a set of k − 2 mutually orthogonal Latin squares of order s. For a given λ, s, and t it is an important combinatorial problem to obtain an orthogonal array (N, k, s, t), N = λs^t, for which the number of constraints k is maximal.

      Orthogonal arrays play an important part in the theory of factorial designs in which each treatment is a combination of factors at different levels. For an orthogonal array (λs^t, k, s, t), t ≥ 2, the number of constraints k satisfies an inequality (see 26—>) in which λs^t is greater than or equal to a linear expression in powers of (s − 1), with binomial coefficients giving the number of combinations of k − 1 or k things taken i at a time (i ≤ u).

      Letting GF(q) be a finite field with q = p^h elements, an n × r matrix with elements from the field is said to have the property Pt if any t rows are independent. The problem is to construct for any given r a matrix H with the maximum number of rows possessing the property Pt. The maximal number of rows is denoted by nt(r, q). This packing problem is of great importance in the theory of factorial designs and also in communication theory, because the existence of an n × r matrix with the property Pt leads to the construction of an orthogonal array (q^r, n, q, t) of index unity.

      Again, n × r matrices H with the property Pt may be used in the construction of error-correcting codes. A row vector c is taken as a code word if and only if cH = 0. The code words then are of length n and differ from one another in at least t + 1 places. If t = 2u, then u or fewer errors of transmission can be corrected if such a code is used. If t = 2u + 1, then an additional error can be detected.

      A general solution of the packing problem is known only for the case t = 2, the corresponding codes being the one-error-correcting codes of the U.S. mathematician Richard W. Hamming. When t = 3 the solution is known for general r when q = 2 and for general q when r ≤ 4. Thus, n2(r, q) = (q^r − 1)/(q − 1), n3(r, 2) = 2^(r − 1), and n3(3, q) = q + 1 or q + 2, according as q is odd or even. If q > 2, then n3(4, q) = q^2 + 1. The case q = 2 is especially important because in practice most codes use only two symbols, 0 or 1. Only fairly large values of r are useful, say, r ≥ 25. The optimum value of nt(r, 2) is not known. The BCH codes obtained by Bose and Ray-Chaudhuri and independently by the French mathematician Alexis Hocquenghem in 1959 and 1960 are based on a construction that yields an n × r matrix H with the property P2u in which r ≤ mu, n = 2^m − 1, q = 2. They can correct up to u errors.
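
      For the case q = 2, r = 3 the matrix H whose rows are the seven nonzero binary vectors of length 3 realizes n2(3, 2) = 7; the resulting code is the (7, 4) Hamming code, whose code words differ in at least three places, as the following sketch verifies.

```python
from itertools import product

r = 3
# H: an n x r binary matrix whose rows are all nonzero vectors of length r.
# Any two distinct nonzero rows are independent over GF(2), so H has property P_2,
# and n = 2**r - 1 = 7 rows.
H = [list(bits) for bits in product([0, 1], repeat=r) if any(bits)]
n = len(H)                                   # 7

def syndrome(c):
    # c * H over GF(2): a vector of length r.
    return tuple(sum(c[i] * H[i][k] for i in range(n)) % 2 for k in range(r))

codewords = [c for c in product([0, 1], repeat=n) if syndrome(c) == (0,) * r]

# There are 2**(n - r) = 16 code words, and distinct code words differ in >= 3 places,
# so any single transmission error can be corrected.
assert len(codewords) == 2 ** (n - r)
min_dist = min(sum(a != b for a, b in zip(c1, c2))
               for i, c1 in enumerate(codewords) for c2 in codewords[i + 1:])
assert min_dist == 3
print(len(codewords), min_dist)
```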

Graph theory

Definitions
      A graph G consists of a non-empty set of elements V(G) and a subset E(G) of the set of unordered pairs of distinct elements of V(G). The elements of V(G), called vertices of G, may be represented by points. If (x, y) ∊ E(G), then the edge (x, y) may be represented by an arc joining x and y. Then x and y are said to be adjacent, and the edge (x, y) is incident with x and y. If (x, y) is not an edge, then the vertices x and y are said to be nonadjacent. G is a finite graph if V(G) is finite. A graph H is a subgraph of G if V(H) ⊂ V(G) and E(H) ⊂ E(G).

      A chain of a graph G is an alternating sequence of vertices and edges x0, e1, x1, e2, · · · , en, xn, beginning and ending with vertices, in which each edge is incident with the two vertices immediately preceding and following it. This chain joins x0 and xn and may also be denoted by x0, x1, · · · , xn, the edges being evident by context. The chain is closed if x0 = xn and open otherwise. If the chain is closed, it is called a cycle, provided its vertices (other than x0 and xn) are distinct and n ≥ 3. The length of a chain is the number of edges in it.

 A graph G is labelled when the various υ vertices are distinguished by such names as x1, x2, · · · , xυ. Two graphs G and H are said to be isomorphic (written G ≅ H) if there exists a one–one correspondence between their vertex sets that preserves adjacency. For example, G1 and G2, shown in Figure 3—>, are isomorphic under the correspondence xi ↔ yi.

 Two isomorphic graphs count as the same (unlabelled) graph. A graph is said to be a tree if it contains no cycle—for example, the graph G3 of Figure 3—>.

Enumeration of graphs
      The number of labelled graphs with υ vertices is 2^(υ(υ − 1)/2) because υ(υ − 1)/2 is the number of pairs of vertices, and each pair is either an edge or not an edge. Cayley in 1889 showed that the number of labelled trees with υ vertices is υ^(υ − 2).
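
      Both counts are easy to confirm by machine for small υ; the brute-force tree count below simply tests which sets of υ − 1 edges form a connected graph (the helper names are illustrative).

```python
from itertools import combinations

def count_labelled_graphs(v):
    pairs = v * (v - 1) // 2
    return 2 ** pairs

def count_labelled_trees(v):
    # Brute force: a tree on v labelled vertices is a connected graph with v - 1 edges.
    all_edges = list(combinations(range(v), 2))
    trees = 0
    for edges in combinations(all_edges, v - 1):
        parent = list(range(v))          # simple union-find to test connectivity
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        components = v
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                components -= 1
        if components == 1:
            trees += 1
    return trees

for v in range(2, 7):
    assert count_labelled_trees(v) == v ** (v - 2)      # Cayley's formula
print(count_labelled_graphs(4), count_labelled_trees(4))  # 64, 16
```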

      The number of unlabelled graphs with υ vertices can be obtained by using Polya's theorem. The first few terms of the generating function F(x), in which the coefficient of xυ gives the number of (unlabelled) graphs with υ vertices, can be given (see 27—>).

      A rooted tree has one point, its root, distinguished from others. If Tυ is the number of rooted trees with υ vertices, the generating function for Tυ can also be given (see 28—>).

      Polya in 1937 showed in his memoir already referred to that the generating function for rooted trees satisfies a functional equation (see 29—>). Letting tυ be the number of (unlabelled) trees with υ vertices, the generating function t(x) for tυ can be obtained in terms of T(x) (see 30—>). This result was obtained in 1948 by the American mathematician Richard R. Otter.

      Many enumeration problems on graphs with specified properties can be solved by the application of Polya's theorem and a generalization of it made by a Dutch mathematician, N.G. de Bruijn, in 1959.

Characterization problems of graph theory
      If there is a class C of graphs each of which possesses a certain set of properties P, then the set of properties P is said to characterize the class C, provided every graph G possessing the properties P belongs to the class C. Sometimes it happens that there are some exceptional graphs that possess the properties P. Many such characterizations are known. Here is presented a typical example.

      A complete graph Km is a graph with m vertices, any two of which are adjacent. The line graph H of a graph G is a graph the vertices of which correspond to the edges of G, any two vertices of H being adjacent if and only if the corresponding edges of G are incident with the same vertex of G.

      A graph G is said to be regular of degree n1 if each vertex is adjacent to exactly n1 other vertices. A regular graph of degree n1 with υ vertices is said to be strongly regular with parameters (υ, n1, p111, p112) if any two adjacent vertices are both adjacent to exactly p111 other vertices and any two nonadjacent vertices are both adjacent to exactly p112 other vertices. A strongly regular graph and a two-class association scheme are isomorphic concepts. The treatments of the scheme correspond to the vertices of the graph, two treatments being first associates or second associates according as the corresponding vertices are adjacent or nonadjacent.

      It is easily proved that the line graph T2(m) of a complete graph Km, m ⋜ 4 is strongly regular with parameters υ = m(m − 1)/2, n1 = 2(m − 2), p111 = m − 2, p112 = 4.

      It is surprising that these properties characterize T2(m) except for m = 8, in which case there exist three other strongly regular graphs with the same parameters nonisomorphic to each other and to T2(m).

      A partial geometry (r, k, t) is a system of two kinds of objects, points and lines, with an incidence relation obeying the following axioms:

      1. Any two points are incident with not more than one line.

      2. Each point is incident with r lines.

      3. Each line is incident with k points.

      4. Given a point P not incident with a line ℓ, there are exactly t lines incident with P and also with some point of ℓ.

      A graph G is obtained from a partial geometry by taking the points of the geometry as vertices of G, two vertices of G being adjacent if and only if the corresponding points are incident with the same line of the geometry. It is strongly regular with parameters υ = k[(r − 1)(k − 1) + t]/t, n1 = r(k − 1), p111 = (r − 1)(t − 1) + k − 2, p112 = rt.

      The question of whether a strongly regular graph with the above parameters is the graph of some partial geometry is of interest. It was shown by Bose in 1963 that the answer is in the affirmative if a certain condition holds (see 31—>). Not much is known about the case in which this condition is not satisfied, except for certain values of r and t. For example, T2(m) is isomorphic with the graph of a partial geometry (2, m − 1, 2). Hence, for m > 8 its characterization is a consequence of the above theorem. Another consequence is the following:

      Given a set of k − 1 − d mutually orthogonal Latin squares of order k, the set can be extended to a complete set of k − 1 mutually orthogonal squares if a condition holds (see 32—>). The case d = 2 is due to Shrikhande in 1961 and the general result to the American mathematician Richard H. Bruck in 1963.

Applications of graph theory

Planar graphs
 A graph G is said to be planar if it can be represented on a plane in such a fashion that the vertices are all distinct points, the edges are simple curves, and no two edges meet one another except at their terminals. For example, K4, the complete graph on four vertices, is planar, as Figure 4A—> shows.

  Two graphs are said to be homeomorphic if both can be obtained from the same graph by subdivisions of edges. For example, the graphs in Figure 4A—> and Figure 4B—> are homeomorphic.

      The Km,n graph is a graph for which the vertex set can be divided into two subsets, one with m vertices and the other with n vertices. Any two vertices of the same subset are nonadjacent, whereas any two vertices of different subsets are adjacent. The Polish mathematician Kazimierz Kuratowski in 1930 proved the following famous theorem:

 A necessary and sufficient condition for a graph G to be planar is that it does not contain a subgraph homeomorphic to either K5 or K3,3 shown in Figure 5—>.

      An elementary contraction of a graph G is a transformation of G to a new graph G1, such that two adjacent vertices u and υ of G are replaced by a new vertex w in G1 and w is adjacent in G1 to all vertices to which either u or υ is adjacent in G. A graph G* is said to be a contraction of G if G* can be obtained from G by a sequence of elementary contractions. The following is another characterization of a planar graph due to the German mathematician K. Wagner in 1937.

      A graph is planar if and only if it is not contractible to K5 or K3,3.

      For more than a century the solution of the four-colour map problem eluded every analyst who attempted it. The problem may have attracted the attention of Möbius, but the first written reference to it seems to be a letter from one Francis Guthrie to his brother, a student of Augustus De Morgan, in 1852.

      The problem concerns planar maps—that is, subdivisions of the plane into nonoverlapping regions bounded by simple closed curves. In geographical maps it has been observed empirically, in as many special cases as have been tried, that, at most, four colours are needed in order to colour the regions so that two regions that share a common boundary are always coloured differently, and in certain cases that at least four colours are necessary. (Regions that meet only at a point, such as the states of Colorado and Arizona in the United States, are not considered to have a common boundary). A formalization of this empirical observation constitutes what is called “the four-colour theorem.” The problem is to prove or disprove the assertion that this is the case for every planar map. That three colours will not suffice is easily demonstrated, whereas the sufficiency of five colours was proved in 1890 by the British mathematician P.J. Heawood.

      In 1879 A.B. Kempe, an Englishman, proposed a solution of the four-colour problem. Although Heawood showed that Kempe's argument was flawed, two of its concepts proved fruitful in later investigation. One of these, called unavoidability, correctly states the impossibility of constructing a map in which every one of four configurations is absent (these configurations consist of a region with two neighbours, one with three, one with four, and one with five). The second concept, that of reducibility, takes its name from Kempe's valid proof that if there is a map that requires at least five colours and that contains a region with four (or three or two) neighbours, then there must be a map requiring five colours for a smaller number of regions. Kempe's attempt to prove the reducibility of a map containing a region with five neighbours was erroneous, but it was rectified in a proof published in 1976 by Kenneth Appel and Wolfgang Haken of the United States. Their proof attracted some criticism because it necessitated the evaluation of 1,936 distinct cases, each involving as many as 500,000 logical operations. Appel, Haken, and their collaborators devised programs that made it possible for a large digital computer to handle these details. The computer required more than 1,000 hours to perform the task, and the resulting formal proof is several hundred pages long.

Eulerian cycles and the Königsberg bridge problem
      A multigraph G consists of a non-empty set V(G) of vertices and a subset E(G) of the set of unordered pairs of distinct elements of V(G) with a frequency f ≥ 1 attached to each pair. If the pair (x1, x2) with frequency f belongs to E(G), then vertices x1 and x2 are joined by f edges.

      An Eulerian cycle of a multigraph G is a closed chain in which each edge appears exactly once. Euler showed that a multigraph possesses an Eulerian cycle if and only if it is connected (apart from isolated points) and every vertex has even degree; if exactly two vertices have odd degree, the multigraph possesses instead an open Eulerian chain, which traverses every edge once but begins and ends at different vertices.

  This problem first arose in the following manner. The Pregel River, formed by the confluence of its two branches, runs through the town of Königsberg and flows on either side of the island of Kneiphof. There were seven bridges, as shown in Figure 6A—>. The townspeople wondered whether it was possible to go for a walk and cross each bridge once and once only. This is equivalent to finding an Eulerian cycle for the multigraph in Figure 6B—>. Euler showed it to be impossible because there are four vertices of odd order.
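
      The degree count behind Euler's argument can be reproduced directly (the vertex labels A, B, K, D for the four land areas are an arbitrary choice):

```python
from collections import Counter

# The Königsberg multigraph: vertices are the two banks A and B, the island K
# (Kneiphof), and the remaining land area D; each entry below is one of the seven bridges.
bridges = [("A", "K"), ("A", "K"), ("B", "K"), ("B", "K"),
           ("A", "D"), ("B", "D"), ("K", "D")]

degree = Counter()
for x, y in bridges:
    degree[x] += 1
    degree[y] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))   # {'A': 3, 'K': 5, 'B': 3, 'D': 3}
print(len(odd))       # 4 vertices of odd degree, so no Eulerian cycle (or open chain) exists
```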

Directed graphs
      A directed graph G consists of a non-empty set of elements V(G), called vertices, and a subset E(G) of ordered pairs of distinct elements of V(G). Elements (x, y) of E(G) may be called edges, the direction of the edge being from x to y. Both (x, y) and (y, x) may be edges.

      A closed path in a directed graph is a sequence of vertices x0, x1, x2, · · · , xn = x0, such that (xi, xi+1) is a directed edge for i = 0, 1, · · · , n - 1. To each edge (x, y) of a directed graph G there can be assigned a non-negative weight function f(x, y). The problem then is to find a closed path in G traversing all vertices so that the sum of the weights of all edges in the path is a minimum. This is a typical optimization problem. If the vertices are certain cities, the edges are routes joining cities, and the weights are the lengths of the routes, then this becomes the travelling-salesman problem—that is, the problem of finding a shortest route that visits each city exactly once and returns to the starting point. This problem still remains unsolved except for certain special cases.
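
      For small instances the optimization can be carried out by brute force over all orderings of the vertices; the weights below are a hypothetical symmetric example.

```python
from itertools import permutations

def tsp_bruteforce(weights):
    # weights[i][j] = weight of the directed edge (i, j); find the cheapest closed
    # path that starts at vertex 0, visits every vertex once, and returns to 0.
    n = len(weights)
    best_cost, best_tour = None, None
    for order in permutations(range(1, n)):
        tour = (0,) + order + (0,)
        cost = sum(weights[tour[i]][tour[i + 1]] for i in range(n))
        if best_cost is None or cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical 4-city example with symmetric weights.
w = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_bruteforce(w))   # (18, (0, 1, 3, 2, 0))
```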

Raj C. Bose Ed.

Combinatorial geometry
      The name combinatorial geometry, first used by Hadwiger, is not quite accurately descriptive of the nature of the subject. Combinatorial geometry does touch on those aspects of geometry that deal with arrangements, combinations, and enumerations of geometric objects; but it takes in much more. The field is so new that there has scarcely been time for it to acquire a well-defined position in the mathematical world. Rather it tends to overlap parts of topology (especially algebraic topology), number theory, analysis, and, of course, geometry. The subject concerns itself with relations among members of finite systems of geometric figures subject to various conditions and restrictions. More specifically, it includes problems of covering, packing, symmetry, extrema (maxima and minima), continuity, tangency, equalities, and inequalities, many of these with special emphasis on their application to the theory of convex bodies. A few of the fundamental problems of combinatorial geometry originated with Newton and Euler; the majority of the significant advances in the field, however, have been made since 1946.

      The unifying aspect of these disparate topics is the quality or style or spirit of the questions and the methods of attacking these questions. Among those branches of mathematics that interest serious working mathematicians, combinatorial geometry is one of the few branches that can be presented on an intuitive basis, without recourse by the investigator to any advanced theoretical considerations or abstractions.

      Yet the problems are far from trivial, and many remain unsolved. They can be handled only with the aid of the most careful and often delicate reasoning that displays the variety and vitality of geometric methods in a modern setting. A few of the answers are natural and are intuitively suggested by the questions. Many of the others, however, require proofs of unusual ingenuity and depth even in the two-dimensional case. Sometimes a plane solution may be readily extendible to higher dimensions, but sometimes just the opposite is true, and a three-dimensional or n-dimensional problem may be entirely different from its two-dimensional counterpart. Each new problem must be attacked individually. Attempts to create standard methods or theories capable of being applied to the solution of any significant group of the currently unsolved problems in the field had by the late 20th century met with no success. The continuing charm and challenge of the subject are at least in part due to the relative simplicity of the statements coupled with the elusive nature of their solutions.

Some historically important topics of combinatorial geometry
packing and covering
 It is easily seen that six equal circular disks may be placed around another disk of the same size so that the central one is touched by all the others but no two overlap (Figure 7—>) and that it is not possible to place seven disks in such a way. In the analogous three-dimensional situation, around a given ball (solid sphere) it is possible to place 12 balls of equal size, all touching the first one but not overlapping it or each other. One such arrangement may be obtained by placing the 12 surrounding balls at the midpoints of edges of a suitable cube that encloses the central ball; each of the 12 balls then touches four other balls in addition to the central one. But if the 12 balls are centred at the 12 vertices of a suitable regular icosahedron surrounding the given ball, there is an appreciable amount of free space between each of the surrounding balls and its neighbours. (If the spheres have radius 1, the distances between the centres of the surrounding spheres are at least 2/cos 18° = 2.1029 · · · .) It appears, therefore, that by judicious positioning it might be possible to have 13 equal non-overlapping spheres touch another of the same size. This dilemma between 12 and 13, one of the first nontrivial problems of combinatorial geometry, was the object of discussion between Isaac Newton and David Gregory in 1694. Newton believed 12 to be the correct number, but this claim was not proved until 1874. The analogous problem in four-dimensional space is still open, the answer being one of the numbers 24, 25, or 26.

      The problem of the 13 balls is a typical example of the branch of combinatorial geometry that deals with packings and coverings. In packing problems the aim is to place figures of a given shape or size without overlap as economically as possible, either inside another given figure or subject to some other restriction.

 Problems of packing and covering have been the objects of much study, and some striking conclusions have been obtained. For each plane convex set K, for example, it is possible to arrange nonoverlapping translates of K so as to cover at least two-thirds of the plane; if K is a triangle (and only in that case), no arrangement of nonoverlapping translates covers more than two-thirds of the plane (Figure 8—>). On the other hand, many easily stated questions are still open. One of them concerns the densest packing of spheres. If the spheres are packed in cannonball fashion—that is, in the way cannonballs are stacked to form a triangular pyramid, indefinitely extended—then they fill π/√18, or about 0.74, of the space. Whether this is the greatest density possible is not known, but it was proved in 1958 by the British mathematician C. Ambrose Rogers that, if there exists a closer packing, its density cannot exceed 0.78.

      Covering problems deal in an analogous manner with economical ways of placing given figures so as to cover (that is, contain in their union) another given figure. One famous covering problem, posed by the French mathematician Henri Lebesgue in 1914, is still unsolved: What is the size and shape of the universal cover of least area? Here a convex set C is called a universal cover if for each set A in the plane such that diam A ≤ 1 it is possible to move C to a suitable position in which it covers A. The diameter diam A of a set A is defined as the least upper bound of the mutual distances of points of the set A. If A is a compact set, then diam A is simply the greatest distance between any two points of A. Thus, if A is an equilateral triangle of side 1, then diam A = 1; and if B is a cube of edge length 1, then diam B = √3.

Polytopes
 A (convex) polytope is the convex hull of some finite set of points. Each polytope of dimension d has as faces finitely many polytopes of dimensions 0 (vertices), 1 (edges), 2 (2-faces), · · · , d − 1 (facets). Two-dimensional polytopes are usually called polygons, three-dimensional ones polyhedra. Two polytopes are said to be isomorphic, or of the same combinatorial type, provided there exists a one-to-one correspondence between their faces, such that two faces of the first polytope meet if and only if the corresponding faces of the second meet. The prism and the truncated pyramid of Figure 9—> are isomorphic, the correspondence being indicated by the letters at the vertices. To classify the convex polygons by their combinatorial types, it is sufficient to determine the number of vertices υ; for each υ ≥ 3, all polygons with υ vertices (υ-gons) are of the same combinatorial type, while a υ-gon and a υ′-gon are not isomorphic if υ ≠ υ′. Euler was the first to investigate in 1752 the analogous question concerning polyhedra. He found that υ − e + f = 2 for every convex polyhedron, where υ, e, and f are the numbers of vertices, edges, and faces of the polyhedron. Though this formula became one of the starting points of topology (see topology), Euler was not successful in his attempts to find a classification scheme for convex polytopes or to determine the number of different types for each υ. Despite efforts of many famous mathematicians since Euler (J. Steiner, Kirkman, Cayley, O. Hermes, M. Brückner, to mention only a few from the 19th century), the problem is still open. It was established by P.J. Federico in the U.S. that there are 2,606 different combinatorial types of convex polyhedra with nine vertices. The numbers of different types with four, five, six, seven, or eight vertices have been known for some time to be 1, 2, 7, 34, and 257, respectively.
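
      Euler's relation is easily checked against the familiar solids; for instance, the five Platonic solids all satisfy υ − e + f = 2.

```python
# (v, e, f) for the five Platonic solids: tetrahedron, cube, octahedron,
# dodecahedron, icosahedron.
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name
print("v - e + f = 2 holds for all five Platonic solids")
```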

      The theory of convex polytopes has been more successful in other directions. The regular polytopes in dimensions higher than three have been under investigation since 1880, together with extensions of Euler's relation to higher dimensions. (The Swiss geometer Ludwig Schläfli made many of these discoveries some 30 years earlier, but his work was published only posthumously in 1901.) The interest in regular polyhedra and other special polyhedra goes back to ancient Greece, as indicated by the names Platonic solids and Archimedean solids.

      Since 1950 there has been considerable interest, in part created by practical problems related to computer techniques such as linear programming, in questions of the following type: for polytopes of a given dimension d and having a given number υ of vertices, how large and how small can the number of facets be? Such problems have provided great impetus to the development of the theory. The U.S. mathematician Victor L. Klee solved the maximum problem in 1963 in most cases (that is, for all but a finite number of υ's for each d), but the remaining cases were disposed of only in 1970 by P. McMullen, in the United States, who used a completely new method. The minimum problem and many related questions are still unsolved.
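
      The maximum that Klee and McMullen established is attained by the so-called cyclic polytopes; taking the standard facet-count formula of the upper-bound theorem as given, the maximum can be tabulated in a few lines:

```python
# Maximum number of facets of a d-dimensional polytope with v vertices,
# i.e. the facet count of the cyclic polytope C(d, v) (upper-bound theorem).
from math import comb

def max_facets(d, v):
    m = d // 2
    if d % 2 == 0:
        return comb(v - m, m) + comb(v - m - 1, m - 1)
    return 2 * comb(v - m - 1, m)

print(max_facets(2, 7))   # a convex 7-gon has 7 edges
print(max_facets(3, 8))   # 2*8 - 4 = 12 faces, the familiar bound for 3-polytopes
print(max_facets(4, 10))  # 35 facets for the cyclic polytope C(4, 10)
```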

Incidence problems
      In 1893 Sylvester posed the question: If a finite set S of points in a plane has the property that each line determined by two points of S meets at least one other point of S, must all points of S be on one line? Sylvester never found a satisfactory solution to the problem, and the first (affirmative) solutions were published a half century later. Since then, Sylvester's problem has inspired many investigations and led to many other open questions, both in the plane and in higher dimensions.
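
      The content of Sylvester's question can be explored by brute force for any particular finite set: one simply searches for an “ordinary” line, a line containing exactly two points of the set. A small sketch (the 3 × 3 grid of points is used here purely as a made-up example):

```python
# Brute-force search for lines containing exactly two points of a finite set.
from itertools import combinations

def collinear(p, q, r):
    # Exact integer test that r lies on the line through p and q.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def ordinary_lines(points):
    found = []
    for p, q in combinations(points, 2):
        if not any(collinear(p, q, r) for r in points if r != p and r != q):
            found.append((p, q))
    return found

grid = [(x, y) for x in range(3) for y in range(3)]   # a non-collinear example set
print(len(ordinary_lines(grid)), "ordinary lines")    # at least one, as the affirmative answer predicts
```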

Helly's theorem
      In 1912 E. Helly proved the following theorem, which has since found applications in many areas of geometry and analysis and has led to numerous generalizations, extensions, and analogues known as Helly-type theorems. If K1, K2, · · · , Kn are convex sets in d-dimensional Euclidean space Ed, where n ≥ d + 1, and if for every choice of d + 1 of the sets Ki there exists a point that belongs to all the chosen sets, then there exists a point that belongs to all the sets K1, K2, · · · , Kn. The theorem stated in two dimensions is easier to visualize and yet is not shorn of its strength: If every three of a set of n convex figures in the plane have a common point (not necessarily the same point for all trios), then all n figures have a point in common. If, for example, convex sets A, B, and C have the point p in common, and convex sets A, B, and D have the point q in common, and sets A, C, and D have the point r in common, and sets B, C, and D have the point s in common, then some point x is a member of A, B, C, and D.
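
      The planar statement can be illustrated numerically. In the sketch below (made-up data, and SciPy assumed to be available) each convex set is written as a system of linear inequalities Ax ≤ b, so that the existence of a common point becomes a linear-programming feasibility question:

```python
# Numerical illustration (not a proof) of Helly's theorem in the plane.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def have_common_point(sets):
    """Do the convex sets, each given as (A, b) with A x <= b, share a point?"""
    A = np.vstack([a for a, _ in sets])
    b = np.concatenate([c for _, c in sets])
    return linprog(c=[0, 0], A_ub=A, b_ub=b, bounds=[(None, None)] * 2).success

K = [  # a square, a triangle, a diagonal strip, and a half plane
    (np.array([[1, 0], [-1, 0], [0, 1], [0, -1]]), np.array([1.0, 1.0, 1.0, 1.0])),
    (np.array([[1, 1], [-1, 0], [0, -1]]), np.array([1.5, 0.0, 0.0])),
    (np.array([[1, -1], [-1, 1]]), np.array([1.0, 1.0])),
    (np.array([[0, 1]]), np.array([0.6])),
]

# Hypothesis for d = 2: every three of the sets have a common point ...
assert all(have_common_point(triple) for triple in combinations(K, 3))
# ... and, as Helly's theorem guarantees, so do all of them.
print(have_common_point(K))
```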

      Although the connection is often far from obvious, many consequences may be derived from Helly's theorem. Among them are the following, stated for d = 2 with some higher dimensional analogues indicated in square brackets:

      A. Two finite subsets X and Y of the plane [d-space] may be strictly separated by a suitable straight line [hyperplane] if and only if, for every set Z consisting of at most 4 [d + 2] points taken from X ∪ Y, the points of X ∩ Z may be strictly separated from those of Y ∩ Z. (A line [hyperplane] L strictly separates X and Y if X is contained in one of the open half planes [half spaces] determined by L and if Y is contained in the other. A numerical sketch of this statement follows the list.)

      B. Each compact convex set K in the plane [d-space] contains a point P with the following property: each chord of K that contains P is divided by P into two segments the ratio of whose lengths is at most 2 [d].

      C. If G is an open subset of the plane [d-space] with finite area [d-dimensional content], then there exists a point P such that each open half plane [half space] that contains P also contains at least 1/3 [1/(d + 1)] of the area [d-content] of G.

      D. If I1, · · · , In are segments parallel to the y-axis in a plane with a coordinate system (x, y), and if for every choice of three of the segments there exists a straight line intersecting each of the three segments, then there exists a straight line that intersects all the segments I1, · · · , In.
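
      Statement A can be illustrated in the same linear-programming spirit as the sketch above: strict separability of two finite point sets is itself a feasibility question, and A asserts that it suffices to test the subsets of at most four points. A sketch with made-up data, again assuming SciPy:

```python
# Numerical illustration of statement A (separation by a line) in the plane.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def strictly_separable(X, Y):
    """Is there (w, c) with w.x >= c + 1 for all x in X and w.y <= c - 1 for all y in Y?"""
    if not X or not Y:
        return True  # an empty side can always be separated
    rows = [[-x[0], -x[1], 1.0] for x in X] + [[y[0], y[1], -1.0] for y in Y]
    res = linprog(c=[0, 0, 0], A_ub=np.array(rows), b_ub=-np.ones(len(rows)),
                  bounds=[(None, None)] * 3)
    return res.success

X = [(0.0, 0.0), (1.0, 0.2), (0.3, 1.0)]
Y = [(2.0, 2.0), (3.0, 1.5), (2.5, 3.0)]
labelled = [(p, "X") for p in X] + [(p, "Y") for p in Y]

# Every subset Z of X ∪ Y with at most 4 points, separating X ∩ Z from Y ∩ Z ...
print(all(strictly_separable([p for p, s in Z if s == "X"],
                             [p for p, s in Z if s == "Y"])
          for r in range(1, 5) for Z in combinations(labelled, r)))
# ... and, as A asserts, X and Y themselves are strictly separable.
print(strictly_separable(X, Y))
```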

      Theorem D has generalizations in which kth degree polynomial curves y = akxk + · · · + a1x + a0 take the place of the straight lines and k + 2 replaces 3 in the assumptions. These are important in the theory of best approximation of functions by polynomials.

Methods of combinatorial geometry
      Many other branches of combinatorial geometry are as important and interesting as those mentioned above, but rather than list them here it is more instructive to provide a few typical examples of frequently used methods of reasoning. Because the emphasis is on illustrating the methods rather than on obtaining the most general results, the examples will deal with problems in two and three dimensions.

Exhausting the possibilities
      Using the data available concerning the problem under investigation, it is often possible to obtain a list of all potential, a priori possible, solutions. The final step then consists in eliminating the possibilities that are not actual solutions or that duplicate previously found solutions. An example is the proof that there are only five regular convex polyhedra (the Platonic solids) and the determination of what these five are.

      From the definition of regularity it is easy to deduce that all the faces of a Platonic solid must be congruent regular k-gons for a suitable k, and that all the vertices must belong to the same number j of k-gons. Because the sum of the face angles at a vertex of a convex polyhedron is less than 2π, and because each angle of the k-gon is (k − 2)π/k, it follows that j(k − 2)π/k < 2π, or (j − 2)(k − 2) < 4. Therefore, the only possibilities for the pair (j, k) are (3, 3), (3, 4), (3, 5), (4, 3), and (5, 3). It may be verified that each of these pairs actually corresponds to a Platonic solid, namely, to the tetrahedron, the cube, the dodecahedron, the octahedron, and the icosahedron, respectively. Very similar arguments may be used in the determination of Archimedean solids and in other instances.
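
      The inequality (j − 2)(k − 2) < 4 leaves only finitely many cases, so the enumeration can even be carried out mechanically; a minimal sketch:

```python
# Enumerate pairs (j, k), j, k >= 3, with (j - 2)(k - 2) < 4; each gives a Platonic solid.
names = {(3, 3): "tetrahedron", (3, 4): "cube", (3, 5): "dodecahedron",
         (4, 3): "octahedron", (5, 3): "icosahedron"}

# The inequality forces j, k < 6, so a search up to 9 is more than enough.
pairs = [(j, k) for j in range(3, 10) for k in range(3, 10) if (j - 2) * (k - 2) < 4]
assert len(pairs) == 5
for j, k in pairs:
    print(f"{j} {k}-gons at each vertex: {names[(j, k)]}")
```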

      The most serious drawback of the method is that in many instances the number of potential (and perhaps actual) solutions is so large as to render the method unfeasible. For example, it is known that the number of different combinatorial types of convex polyhedra with 10 vertices exceeds 30,000, the number with 11 vertices exceeds 400,000, and the number with 12 vertices exceeds 5,000,000. Therefore, the exact determination of these numbers by the method just discussed is out of the question, certainly if attempted by hand and probably even with the aid of a computer.

Use of extremal properties
 In many cases the existence of a figure or an arrangement with certain desired properties may be established by considering a more general problem (or a completely different problem) and by showing that a solution of the general problem that is extremal in some sense also provides a solution to the original problem. Frequently there seems to be very little connection between the initial question and the extremal problem. As an illustration the following theorem will be proved: If K is a two-dimensional compact convex set with a centre of symmetry, there exists a parallelogram P containing K, such that the midpoints of the sides of P belong to K. The proof proceeds as follows: Of all the parallelograms that contain K, the one with least possible area is labeled P0. The existence of such a P0 is a consequence of the compactness of K and may be established by standard arguments. It is also easily seen that the centres of K and P0 coincide. The interesting aspect of the situation is that P0 may be taken as the P required for the theorem. In fact (Figure 10), if the midpoints A′ and A″ of a pair of sides of P0 do not belong to K, it is possible to strictly separate them from K by parallel lines L′ and L″ that, together with the other pair of sides of P0, determine a new parallelogram containing K but with area smaller than that of P0. The above theorem and its proof generalize immediately to higher dimensions and lead to results that are important in functional analysis (see functional analysis).

      Sometimes this type of argument is used in reverse to establish the existence of certain objects by disproving the possibility of existence of some extremal figures. As an example, consider the following solution of the problem of Sylvester discussed above. By a standard argument of projective geometry (duality), Sylvester's problem is equivalent to the question: If through the point of intersection of any two of n coplanar lines, no two of which are parallel, there passes a third, are the n lines necessarily concurrent? To show that they must be concurrent, a contradiction can be derived from the assumption that they are not. If L is one of the lines, then not all the intersection points lie on L (otherwise any two of the remaining lines would meet on L, at the single point where each of them crosses L, so that all n lines would pass through that point and be concurrent). Among the intersection points not on L, there must be one nearest to L, which can be called A. Through A pass at least three of the lines, which meet L in points B, C, and D, so that C is between B and D. Through C passes a line L* different from L and from the line through A and C. Since L* enters the triangle ABD, it intersects either the segment AB or the segment AD, yielding an intersection point nearer to L than the supposedly nearest intersection point A, thus providing the contradiction.

      The difficulties in applying this method are caused in part by the absence of any systematic procedure for devising an extremal problem that leads to the solution of the original question.

Use of figures with special properties
      Sometimes a general theorem may be established by the use of appropriate special figures, even if they are not of the kind that the theorem is concerned with. This method is used in considering the question known as Borsuk's problem.

  The Polish mathematician K. Borsuk proved in 1933 that in any decomposition of the d-dimensional ball Bd into d subsets, at least one of the subsets has a diameter equal to diam Bd; and he asked whether it is possible to decompose every bounded subset A of d-dimensional space into d + 1 subsets, each of which has a diameter smaller than diam A. (Such a decomposition is easily found if A is the ball Bd.) In case d = 2 Borsuk's problem reduces to the question of whether each plane set A may be decomposed into three parts, each of diameter less than diam A. An affirmative answer follows in this case from the fact (which is not hard to prove) that each planar set A with diam A = 1 may be covered by a regular hexagon H of edge length 1/√3 = 0.577 · · · (the diameter of H is diam H = 2/√3 = 1.155 · · · > 1, and the distance between the pairs of parallel sides is 1; see Figure 11). Such a hexagon H may be cut into three pentagons (indicated in Figure 11 by dotted lines), each of which has a diameter of only √3/2 = 0.866 · · · < 1. This partition of H may clearly be used to partition each planar set of diameter 1, thus establishing the following stronger variant of Borsuk's problem in the plane: each planar set A may be decomposed into three subsets, each of diameter at most 0.866 · · · × (diam A). An affirmative solution of Borsuk's problem in the three-dimensional case may be proved by a similar method, in which the hexagon H is replaced by a polyhedron obtained by appropriate triple truncation of the regular octahedron.
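
      The numerical facts used in this argument are easy to confirm for a regular hexagon of edge length 1/√3; a short check with the Python standard library:

```python
# Edge, width, and diameter of the regular hexagon H used in the planar Borsuk argument.
from math import cos, sin, pi, sqrt, dist

edge = 1 / sqrt(3)
# Vertices of a regular hexagon whose circumradius equals its edge length.
H = [(edge * cos(k * pi / 3), edge * sin(k * pi / 3)) for k in range(6)]

diameter = max(dist(p, q) for p in H for q in H)   # distance across opposite corners
width = edge * sqrt(3)                             # distance between parallel sides
print(round(diameter, 3), round(2 / sqrt(3), 3))   # both 1.155
print(round(width, 3))                             # 1.0
print(round(sqrt(3) / 2, 3))                       # 0.866, the bound for the three pieces
```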

Use of transformations between different spaces and applications of Helly's theorem
      Although those two methods do not necessarily go together, both may be illustrated in one example: the proof of theorem D concerning parallel segments. Let the segment Ii have end-points (xi, yi′) and (xi, yi″), where yi′ ≤ yi″ and i = 1, 2, · · · , n. The case in which two of the segments lie on one line is easily disposed of, so it may be assumed that x1, x2, · · · , xn are all different. With each straight line y = ax + b in the (x, y)-plane can be associated the point (a, b) in another plane, the (a, b)-plane. Now, for i = 1, 2, · · · , n, the set of all points (a, b) for which the corresponding line y = ax + b in the (x, y)-plane meets the segment Ii can be denoted by Ki. The condition for meeting Ii is yi′ ≤ axi + b ≤ yi″, so each set Ki is convex. The hypothesis that some line intersects every three of the segments means that every three of the sets Ki have a common point. Helly's theorem applied in the (a, b)-plane then implies the existence of a point (a*, b*) that belongs to all the sets Ki. This in turn means that the line y = a*x + b* meets all the segments I1, I2, · · · , In, and the proof of theorem D is complete.
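
      The computational content of this proof is a linear-programming feasibility problem in the unknowns a and b, since the line y = ax + b meets Ii exactly when yi′ ≤ axi + b ≤ yi″. A sketch with made-up segments, assuming SciPy:

```python
# Find a line y = a*x + b meeting every vertical segment from (x_i, y_i') to (x_i, y_i'').
import numpy as np
from scipy.optimize import linprog

segments = [(0.0, 0.0, 1.0), (1.0, 0.5, 1.5), (2.0, 0.8, 2.2), (3.0, 1.5, 3.0)]  # (x, y', y'')

A_ub, b_ub = [], []
for x, lo, hi in segments:
    A_ub.append([x, 1.0]);   b_ub.append(hi)    #  a*x + b <= y''
    A_ub.append([-x, -1.0]); b_ub.append(-lo)   # -a*x - b <= -y'
res = linprog(c=[0, 0], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 2)

if res.success:                                  # a common point (a*, b*) of the sets K_i
    a_star, b_star = res.x
    print(f"y = {a_star:.3f} x + {b_star:.3f} meets every segment")
```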

      In addition to the methods illustrated above, many other techniques of proof are used in combinatorial geometry, ranging from simple mathematical induction to sophisticated decidability theorems of formal logic. The variety of methods available and the likelihood that there are many more not yet invented continue to stimulate research in this rapidly developing branch of mathematics.

Branko Grünbaum

Additional Reading
James Legge (trans.), The Yî-King, vol. 16 of the Sacred Books of the East (1882, reprinted 1962); Algebra, with Arithmetic and Mensuration from the Sanscrit of Brahmagupta and Bháskara, trans. by H.T. Colebrooke (1817); and Nasir ad-Din al-Tusi, “Handbook of Arithmetic Using Board and Dust,” Math. Rev., 31:5776 (1966), complete Russian trans. by S.A. Ahmedov and B.A. Rozenfeld in Istor.-Mat. Issled., 15:431–444 (1963), give glimpses of some of the early beginnings of the subject in the Orient. The term combinatorial was first used in Gottfried Wilhelm Leibniz, Dissertatio de arte combinatoria (1666). W.W. Rouse Ball, Mathematical Recreations and Essays, rev. by H.S. MacDonald Coxeter (1942), contains an account of some of the famous recreational combinatorial problems of the 19th century, such as the problem of eight queens, Hamiltonian circuits, and the Kirkman schoolgirl problem. Eugen Netto, Lehrbuch der Combinatorik, 2nd ed. (1927, reprinted 1958); and Percy A. MacMahon, Combinatory Analysis, 2 vol. (1915–16, reprinted 1960), show the state of the subject in the early part of the 20th century. Herbert J. Ryser, Combinatorial Mathematics (1963); Marshall Hall, Jr., Combinatorial Theory (1967); and C.L. Liu, Introduction to Combinatorial Mathematics (1968), all deal with combinatorics in general. John Riordan, An Introduction to Combinatorial Analysis (1958); and Claude Berge, Principes de combinatoire (1968; Eng. trans., Principles of Combinatorics, 1971), deal with problems of enumeration. Claude Berge, Théorie des graphes et ses applications (1957; Eng. trans., The Theory of Graphs and Its Applications, 1962); Claude Berge and A. Ghouila-Houri, Programmes, jeux et réseaux de transport (1962; Eng. trans., Programming, Games and Transportation Networks, 1965); and Frank Harary, Graph Theory (1969), deal with graph theoretic problems. Oystein Ore, The Four-Color Problem (1967), gives an introduction to this problem. Peter Dembowski, Finite Geometries (1968), contains most of the important developments on designs, including partially balanced and group divisible designs. Elwyn R. Berlekamp, Algebraic Coding Theory (1968), may be consulted for combinatorial aspects of coding theory. Marshall Hall, Jr., “A Survey of Combinatorial Analysis,” in Surveys in Applied Mathematics, vol. 4 (1958), gives a very good survey of combinatorial developments up to 1958. E.F. Beckenbach (ed.), Applied Combinatorial Mathematics (1964), gives a good idea of the wide range of applications of modern combinatorics. G.C. Rota, “Combinatorial Analysis,” in G.A.W. Boehm (ed.), The Mathematical Sciences: A Collection of Essays (1969), in addition to surveying some of the famous combinatorial problems, brings out modern trends and indicates where combinatorics is headed. Hugo Hadwiger and Hans Debrunner, Combinatorial Geometry in the Plane (1964); I.M. Yaglom and V.G. Boltyansky, Convex Figures (1961; orig. pub. in Russian, 1951); V.G. Boltyansky, Equivalent and Equidecomposable Figures (1963; orig. pub. in Russian, 1956); and L.A. Lyusternik, Convex Figures and Polyhedra (1966; orig. pub. in Russian, 1956), deal with aspects of combinatorial geometry on an elementary level. On an advanced level, see H.S. MacDonald Coxeter, Regular Polytopes, 2nd ed. (1963); L. Fejes Tóth, Lagerungen in der Ebene, auf der Kugel und im Raum (1953) and Regular Figures (1964); Branko Grünbaum, Convex Polytopes (1967) and Arrangements and Spreads (1972); and C.A. Rogers, Packing and Covering (1964). See also Kenneth P. Bogart, Introductory Combinatorics (1983); and R.J. Wilson (ed.), Applications of Combinatorics (1982), both accessible to laymen.
Raj C. Bose; Branko Grünbaum

* * *


Universalium. 2010.
