optimization

optimization
/op'teuh meuh zay"sheuhn/
1. the fact of optimizing; making the best of anything.
2. the condition of being optimized.
3. Math. a mathematical technique for finding a maximum or minimum value of a function of several variables subject to a set of constraints, as linear programming or systems analysis.
[1855-60; OPTIMIZE + -ATION]

* * *

Field of applied mathematics whose principles and methods are used to solve quantitative problems in disciplines including physics, biology, engineering, and economics.

Questions of maximizing or minimizing functions arising in the various disciplines can be solved using the same mathematical tools (see maximum; minimum). In a typical optimization problem, the goal is to find the values of controllable factors determining the behaviour of a system (e.g., a physical production process, an investment scheme) that maximize productivity or minimize waste. The simplest problems involve functions (systems) of a single variable (input factor) and may be solved with differential calculus. Linear programming was developed to solve optimization problems involving two or more input variables. See also simplex method.

* * *

Introduction
also known as mathematical programming

      collection of mathematical principles and methods used for solving quantitative problems in many disciplines, including physics, biology, engineering, economics, and business. The subject grew from a realization that quantitative problems in manifestly different disciplines have important mathematical elements in common. Because of this commonality, many problems can be formulated and solved by using the unified set of ideas and methods that make up the field of optimization.

      The historic term mathematical programming, broadly synonymous with optimization, was coined in the 1940s before programming became equated with computer programming. Mathematical programming includes the study of the mathematical structure of optimization problems, the invention of methods for solving these problems, the study of the mathematical properties of these methods, and the implementation of these methods on computers. Faster computers have greatly expanded the size and complexity of optimization problems that can be solved. The development of optimization techniques has paralleled advances not only in computer science but also in operations research, numerical analysis, game theory, mathematical economics, control theory, and combinatorics.

      Optimization problems typically have three fundamental elements. The first is a single numerical quantity, or objective function, that is to be maximized or minimized. The objective may be the expected return on a stock portfolio, a company's production costs or profits, the time of arrival of a vehicle at a specified destination, or the vote share of a political candidate. The second element is a collection of variables, which are quantities whose values can be manipulated in order to optimize the objective. Examples include the quantities of stock to be bought or sold, the amounts of various resources to be allocated to different production activities, the route to be followed by a vehicle through a traffic network, or the policies to be advocated by a candidate. The third element of an optimization problem is a set of constraints, which are restrictions on the values that the variables can take. For instance, a manufacturing process cannot require more resources than are available, nor can it employ less than zero resources. Within this broad framework, optimization problems can have different mathematical properties. Problems in which the variables are continuous quantities (as in the resource allocation example) require a different approach from problems in which the variables are discrete or combinatorial quantities (as in the selection of a vehicle route from among a predefined set of possibilities).
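
      For readers who think in code, the following minimal sketch (not part of the original article) shows how the three elements might be written down for a hypothetical two-activity resource-allocation problem; the coefficients and the helper names objective and constraints_satisfied are illustrative assumptions rather than data from the text.

```python
# A minimal sketch of the three elements of an optimization problem,
# using a made-up two-variable resource-allocation example.

def objective(x):
    """Objective function to maximize: total profit for activity levels x = (x1, x2)."""
    return 3.0 * x[0] + 5.0 * x[1]

def constraints_satisfied(x):
    """Constraints: limited resources and nonnegative activity levels."""
    return (
        2.0 * x[0] + 1.0 * x[1] <= 100.0   # machine-hours available
        and x[0] + 4.0 * x[1] <= 80.0      # raw material available
        and x[0] >= 0.0 and x[1] >= 0.0    # activity levels cannot be negative
    )

candidate = (20.0, 15.0)                    # one possible setting of the variables
print(constraints_satisfied(candidate), objective(candidate))
```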

      An important class of optimization is known as linear programming. Linear indicates that no variables are raised to higher powers, such as squares. For this class, the problems involve minimizing (or maximizing) a linear objective function whose variables are real numbers that are constrained to satisfy a system of linear equalities and inequalities. Another important class of optimization is known as nonlinear programming. In nonlinear programming the variables are real numbers, and the objective or some of the constraints are nonlinear functions (possibly involving squares, square roots, trigonometric functions, or products of the variables). Both linear and nonlinear programming are discussed in this article. Other important classes of optimization problems not covered in this article include stochastic programming, in which the objective function or the constraints depend on random variables, so that the optimum is found in some “expected,” or probabilistic, sense; network optimization, which involves optimization of some property of a flow through a network, such as the maximization of the amount of material that can be transported between two given locations in the network; and combinatorial optimization, in which the solution must be found among a finite but very large set of possible values, such as the many possible ways to assign 20 manufacturing plants to 20 locations.

Origins and influences
      Although widely used now to solve everyday decision problems, linear programming was comparatively unknown before 1947. No work of any significance was carried out before this date, even though the French mathematician Joseph Fourier seemed to be aware of the subject's potential as early as 1823. In 1939 a Russian mathematician, Leonid Vitalyevich Kantorovich, published an extensive monograph, Matematicheskie metody organizatsii i planirovaniya proizvodstva (“Mathematical Methods for Organization and Planning of Production”), which is now credited with being the first treatise to recognize that certain important broad classes of scheduling problems had well-defined mathematical structures. Unfortunately, Kantorovich's proposals remained mostly unknown both in the Soviet Union and elsewhere for nearly two decades. Meanwhile, linear programming had developed considerably in the United States and Western Europe. In the period following World War II, officials in the United States government came to believe that efficient coordination of the energies and resources of a whole nation in the event of nuclear war would require the use of scientific planning techniques. The advent of the computer made such an approach feasible.

      Intensive work began in 1947 in the U.S. Air Force. The linear programming model was proposed because it was relatively simple from a mathematical viewpoint, and yet it provided a sufficiently general and practical framework for representing interdependent activities that share scarce resources. In the linear programming model, the modeler views the system to be optimized as being made up of various activities that are assumed to require a flow of inputs (e.g., labour and raw materials) and outputs (e.g., finished goods and services) of various types proportional to the level of the activity. Activity levels are assumed to be representable by nonnegative numbers. The revolutionary feature of the approach lies in expressing the goal of the decision process in terms of minimizing or maximizing a linear objective function—for example, maximizing possible sorties in the case of the air force, or maximizing profits in industry. Before 1947 all practical planning was characterized by a series of authoritatively imposed rules of procedure and priorities. General objectives were never stated, probably because of the impossibility of performing the calculations necessary to minimize an objective function under constraints. In 1947 a method (described in the section The simplex method) was introduced that turned out to solve practical problems efficiently. Interest in linear programming grew rapidly, and by 1951 its use spread to industry. Today it is almost impossible to name an industry that is not using mathematical programming in some form, although the applications and the extent to which it is used vary greatly, even within the same industry.

      Interest in linear programming has also extended to economics. In 1937 the Hungarian-born mathematician John von Neumann analyzed a steadily expanding economy based on alternative methods of production and fixed technological coefficients. As far as mathematical history is concerned, the study of linear inequality systems excited virtually no interest before 1936. In 1911 a vertex-to-vertex movement along edges of a polyhedron (as is done in the simplex method) was suggested as a way to solve a problem that involved optimization, and in 1941 movement along edges was proposed for a problem involving transportation. Credit for laying much of the mathematical foundations should probably go to von Neumann. In 1928 he published his famous paper on game theory, and his work culminated in 1944 with the publication, in collaboration with the Austrian economist Oskar Morgenstern, of the classic Theory of Games and Economic Behavior. In 1947 von Neumann conjectured the equivalence of linear programs and matrix games, introduced the important concept of duality, and made several proposals for the numerical solution of linear programming and game problems. Serious interest by other mathematicians began in 1948 with the rigorous development of duality and related matters.

      The general simplex method was first programmed in 1951 for the United States Bureau of Standards SEAC computer. Starting in 1952, the simplex method was programmed for use on various IBM computers and later for those of other companies. As a result, commercial applications of linear programs in industry and government grew rapidly. New computational techniques and variations of older techniques continued to be developed.

      More recently there has been much interest in solving large linear problems with special structures—for example, corporate models and national planning models that are multistaged, are dynamic, and exhibit a hierarchical structure. It is estimated that certain developing countries will have the potential of increasing their gross national product (GNP) by 10 to 15 percent per year if detailed growth models of the economy can be constructed, optimized, and implemented.

Theory
Basic ideas
      A simple problem in linear programming is one in which it is necessary to find the maximum (or minimum) value of a simple function subject to certain constraints. An example might be that of a factory producing two commodities. In any production run, the factory produces x1 of the first type and x2 of the second. If the profit on the second type is twice that on the first, then x1 + 2x2 represents the total profit. The function x1 + 2x2 is known as the objective function.

      Clearly the profit will be highest if the factory devotes its entire production capacity to making the second type of commodity. In a practical situation, however, this may not be possible; a set of constraints is introduced by such factors as availability of machine time, labour, and raw materials. For example, if the second type of commodity requires a raw material that is limited so that no more than five can be made in any batch, then x2 must be less than or equal to five; i.e., x2 ≤ 5. If the first commodity requires another type of material limiting it to eight per batch, then x1 ≤ 8. If x1 and x2 take equal time to make and the machine time available allows a maximum of 10 to be made in a batch, then x1 + x2 must be less than or equal to 10; i.e., x1 + x2 ≤ 10.

 Two other constraints are that x1 and x2 must each be greater than or equal to zero, because it is impossible to make a negative number of either; i.e., x1 ≥ 0 and x2 ≥ 0. The problem is to find the values of x1 and x2 for which the profit is a maximum. Any solution can be denoted by a pair of numbers (x1, x2); for example, if x1 = 3 and x2 = 6, the solution is (3, 6). These numbers can be represented by points plotted on two axes, as shown in the figure. On this graph the distance along the horizontal axis represents x1 and that along the vertical represents x2. Because of the constraints given above, the feasible solutions must lie within a certain well-defined region of the graph. For example, the constraint x1 ≥ 0 means that points representing feasible solutions lie on or to the right of the x2 axis. Similarly, the constraint x2 ≥ 0 means that they also lie on or above the x1 axis. Application of the entire set of constraints gives the feasible solution set, which is bounded by a polygon formed by the intersection of the lines x1 = 0, x2 = 0, x1 = 8, x2 = 5, and x1 + x2 = 10. For example, production of three items of commodity x1 and four of x2 is a feasible solution since the point (3, 4) lies in this region.
      To find the best solution, however, the objective function x1 + 2x2 = k is plotted on the graph for some value of k, say k = 4. As k is increased, a family of parallel lines is produced, and the line for k = 15 just touches the constraint set at the point (5, 5). If k is increased further, the values of x1 and x2 will lie outside the set of feasible solutions. Thus, the best solution is that in which equal quantities of each commodity are made. It is no coincidence that an optimal solution occurs at a vertex, or “extreme point,” of the region. This will always be true for linear problems, although an optimal solution may not be unique. Thus, the solution of such problems reduces to finding which extreme point (or points) yields the largest value for the objective function.
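
      The same factory problem can also be handed to an off-the-shelf linear programming routine. The sketch below assumes the SciPy library is available; because scipy.optimize.linprog minimizes by convention, the profit x1 + 2x2 is maximized by minimizing its negative.

```python
# Sketch: the factory example solved with an off-the-shelf LP solver (assumes SciPy).
from scipy.optimize import linprog

c = [-1.0, -2.0]                 # negated objective coefficients: maximize x1 + 2*x2
A_ub = [[1.0, 0.0],              # x1        <= 8
        [0.0, 1.0],              #       x2  <= 5
        [1.0, 1.0]]              # x1 + x2   <= 10
b_ub = [8.0, 5.0, 10.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # expected: approximately [5. 5.] and 15.0
```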

The simplex method
      The graphical method of solution illustrated by the example in the preceding section is useful only for systems of inequalities involving two variables. In practice, problems often involve hundreds of equations with thousands of variables, which can result in an astronomical number of extreme points. In 1947 George Dantzig, a mathematical adviser for the U.S. Air Force, devised the simplex method to restrict the number of extreme points that have to be examined. The simplex method is one of the most useful and efficient algorithms ever invented, and it is still the standard method employed on computers to solve optimization problems. First, the method assumes that an extreme point is known. (If no extreme point is given, a variant of the simplex method, called Phase I, is used to find one or to determine that there are no feasible solutions.) Next, using an algebraic specification of the problem, a test determines whether that extreme point is optimal. If the test for optimality is not passed, an adjacent extreme point is sought along an edge in the direction for which the value of the objective function increases at the fastest rate. Sometimes one can move along an edge and make the objective function value increase without bound. If this occurs, the procedure terminates with a prescription of the edge along which the objective goes to positive infinity. If not, a new extreme point is reached having at least as high an objective function value as its predecessor. The sequence described is then repeated. Termination occurs when an optimal extreme point is found or the unbounded case occurs. Although in principle the number of extreme points can grow exponentially with the number of variables and constraints, in practice the method typically finds an optimal solution after visiting only a small fraction of them—usually in a number of steps that is a small multiple of the number of constraints.

      To illustrate the simplex method, the example from the preceding section will be solved again. The problem is first put into canonical form by converting the linear inequalities into equalities by introducing “slack variables” x3 ≥ 0 (so that x1 + x3 = 8), x4 ≥ 0 (so that x2 + x4 = 5), x5 ≥ 0 (so that x1 + x2 + x5 = 10), and the variable x0 for the value of the objective function (so that x1 + 2x2 − x0 = 0). The problem may then be restated as that of finding nonnegative quantities x1, …, x5 and the largest possible x0 satisfying the resulting equations. One obvious solution is to set the objective variables x1 = x2 = 0, which corresponds to the extreme point at the origin. If one of the objective variables is increased from zero while the other one is fixed at zero, the objective value x0 will increase as desired (subject to the slack variables satisfying the equality constraints). The variable x2 produces the largest increase of x0 per unit change; so it is used first. Its increase is limited by the nonnegativity requirement on the variables. In particular, if x2 is increased beyond 5, x4 becomes negative.

      At x2 = 5, this situation produces a new solution—(x0, x1, x2, x3, x4, x5) = (10, 0, 5, 8, 0, 5)—that corresponds to the extreme point (0, 5) in the figure. The system of equations is put into an equivalent form by solving for the nonzero variables x0, x2, x3, x5 in terms of those variables now at zero; i.e., x1 and x4. Thus, the objective equation becomes x1 − 2x4 − x0 = −10, while the constraints are x1 + x3 = 8, x2 + x4 = 5, and x1 − x4 + x5 = 5. It is now apparent that an increase of x1 while holding x4 equal to zero will produce a further increase in x0. The nonnegativity restriction on x5 prevents x1 from going beyond 5. The new solution—(x0, x1, x2, x3, x4, x5) = (15, 5, 5, 3, 0, 0)—corresponds to the extreme point (5, 5) in the figure. Finally, since solving for x0 in terms of the variables x4 and x5 (which are currently at zero value) yields x0 = 15 − x4 − x5, it can be seen that any increase in these slack variables from zero will decrease the objective value. Hence, an optimal solution exists at the extreme point (5, 5).
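
      The two pivots described above can be reproduced numerically with elementary row operations. The following sketch assumes NumPy is available; the tableau layout and the helper function pivot are illustrative choices, not part of the original exposition.

```python
# Sketch: reproducing the two simplex pivots of the worked example with NumPy.
import numpy as np

# Columns: x0, x1, x2, x3, x4, x5 | right-hand side
T = np.array([
    [-1., 1., 2., 0., 0., 0.,  0.],   # objective row:  x1 + 2*x2 - x0 = 0
    [ 0., 1., 0., 1., 0., 0.,  8.],   # x1 + x3 = 8
    [ 0., 0., 1., 0., 1., 0.,  5.],   # x2 + x4 = 5
    [ 0., 1., 1., 0., 0., 1., 10.],   # x1 + x2 + x5 = 10
])

def pivot(T, row, col):
    """Scale the pivot row, then clear the pivot column from every other row."""
    T = T.copy()
    T[row] /= T[row, col]
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]
    return T

T = pivot(T, row=2, col=2)   # bring x2 into the basis (x4 leaves); extreme point (0, 5)
T = pivot(T, row=3, col=1)   # bring x1 into the basis (x5 leaves); extreme point (5, 5)
print(T)                     # objective row now reads -x0 - x4 - x5 = -15, i.e. x0 = 15
```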

Standard formulation
      In practice, optimization problems are formulated in terms of matrices—a compact symbolism for manipulating the constraints and testing the objective function algebraically. The original (or “primal”) optimization problem was given its standard formulation by von Neumann in 1947. In the primal problem the objective is replaced by the product (px) of a vector x = (x1, x2, x3, …, xn)T, whose components are the objective variables and where the superscript “transpose” symbol indicates that the vector should be written vertically, and another vector p = (p1, p2, p3, …, pn), whose components are the coefficients of each of the objective variables. In addition, the system of inequality constraints is replaced by Ax ≤ b, where the m by n matrix A replaces the m constraints on the n objective variables, and b = (b1, b2, b3, …, bm)T is a vector whose components are the inequality bounds.
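
      In this notation the factory example has p = (1, 2), a 3 by 2 matrix A, and b = (8, 5, 10). A brief sketch, assuming NumPy is available, verifies that the extreme point (5, 5) is feasible and gives the objective value 15:

```python
# Sketch: the factory example in the standard matrix form p.x with A x <= b (assumes NumPy).
import numpy as np

p = np.array([1.0, 2.0])          # objective coefficients
A = np.array([[1.0, 0.0],         # x1        <= 8
              [0.0, 1.0],         #       x2  <= 5
              [1.0, 1.0]])        # x1 + x2   <= 10
b = np.array([8.0, 5.0, 10.0])

x = np.array([5.0, 5.0])          # the optimal extreme point found above
print(p @ x)                      # objective value: 15.0
print(np.all(A @ x <= b))         # feasibility check: True
```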

Nonlinear programming

Origins
      Although the linear programming model works fine for many situations, some problems cannot be modeled accurately without including nonlinear components. One example would be the isoperimetric problem: determine the shape of the closed plane curve having a given length and enclosing the maximum area. The solution, but not a proof, was known by Pappus of Alexandria c. AD 340:

Bees, then, know just this fact which is useful to them, that the hexagon is greater than the square and the triangle and will hold more honey for the same expenditure of material in constructing each. But we, claiming a greater share of wisdom than the bees, will investigate a somewhat wider problem, namely that, of all equilateral and equiangular plane figures having the same perimeter, that which has the greater number of angles is always greater, and the greatest of them all is the circle having its perimeter equal to them.

      The branch of mathematics known as the calculus of variations began with efforts to prove this solution, together with the challenge in 1696 by the Swiss mathematician Johann Bernoulli to find the curve that minimizes the time it takes an object to slide, under only the force of gravity, between two points not on the same vertical line. (The solution is the brachistochrone.) In addition to Johann Bernoulli, his brother Jakob Bernoulli, the German Gottfried Wilhelm Leibniz, and the English Isaac Newton all supplied correct solutions. In particular, Newton's approach to the solution plays a fundamental role in many nonlinear algorithms. Other influences on the development of nonlinear programming, such as convex analysis, duality theory, and control theory, developed largely after 1940. For problems that include constraints as well as an objective function, the optimality conditions discovered by the American mathematician William Karush and others in the late 1940s became an essential tool for recognizing solutions and for driving the behaviour of algorithms.

      An important early algorithm for solving nonlinear programs was given by the Nobel Prize-winning Norwegian economist Ragnar Frisch in the mid-1950s. Curiously, his approach fell out of favour for some decades, reemerging as a viable and competitive approach only in the 1990s. Other important algorithmic approaches include sequential quadratic programming, in which an approximate problem with a quadratic objective and linear constraints is solved to obtain each search step; and penalty methods, including the “method of multipliers,” in which points that do not satisfy the constraints incur penalty terms in the objective to discourage algorithms from visiting them.

      The Nobel Prize-winning American economist Harry M. Markowitz provided a boost for nonlinear optimization in 1958 when he formulated the problem of finding an efficient investment portfolio as a nonlinear optimization problem with a quadratic objective function. Nonlinear optimization techniques are now widely used in finance, economics, manufacturing, control, weather modeling, and all branches of engineering.

Theory
      An optimization problem is nonlinear if the objective function f(x) or any of the inequality constraints ci(x) ≤ 0, i = 1, 2, …, m, or equality constraints dj(x) = 0, j = 1, 2, …, n, are nonlinear functions of the vector of variables x. For example, if x contains the components x1 and x2, then the function 3 + 2x1 − 7x2 is linear, whereas the functions (x1)^3 + 2x2 and 3x1 + 2x1x2 + x2 are nonlinear.
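
      Written as ordinary code (a trivial sketch, with Python used purely for illustration), the example functions just mentioned are:

```python
# The example functions from the text, as Python functions of the components x1, x2.
def f_linear(x1, x2):
    return 3 + 2 * x1 - 7 * x2          # linear: variables appear only to the first power

def f_cubic(x1, x2):
    return x1 ** 3 + 2 * x2             # nonlinear: x1 is cubed

def f_bilinear(x1, x2):
    return 3 * x1 + 2 * x1 * x2 + x2    # nonlinear: contains the product x1*x2
```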

      Nonlinear problems arise when the objective or constraints cannot be expressed as linear functions without sacrificing some essential nonlinear feature of the real world system. For example, the folded conformation of a protein molecule is believed to be the one that minimizes a certain nonlinear function of the distances between the nuclei of its component atoms—and these distances themselves are nonlinear functions of the positions of the nuclei. In finance, the risk associated with a portfolio of investments, as measured by the variance of the return on the portfolio, is a nonlinear function of the amounts invested in each security in the portfolio. In chemistry, the concentration of each chemical in a solution is often a nonlinear function of time, as reactions between chemicals usually take place according to exponential formulas.

      Nonlinear problems can be categorized according to several properties. There are problems in which the objective and constraints are smooth functions, and there are nonsmooth problems in which the slope or value of a function may change abruptly. There are unconstrained problems, in which the aim is to minimize (or maximize) the objective function f(x) with no restrictions on the value of x, and there are constrained problems, in which the components of x must satisfy certain bounds or other more complex interrelationships. In convex problems the graph of the objective function and the feasible set are both convex (where a set is convex if a line joining any two points in the set is contained in the set). Another special case is quadratic programming, in which the constraints are linear but the objective function is quadratic; that is, it contains terms that are multiples of the product of two components of x. (For instance, the function 3(x1)^2 + 1.4x1x2 + 2(x2)^2 is a quadratic function of x1 and x2.) Another useful way to classify nonlinear problems is according to the number of variables (that is, components of x). Loosely speaking, a problem is said to be “large” if it has more than a thousand or so variables, although the threshold of “largeness” continually increases as computers become more powerful. Another useful distinction is between problems that are computationally “expensive” to evaluate and those that are relatively cheap, as is the case in linear programming.
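
      A quadratic objective such as the one just mentioned can be written compactly with a symmetric matrix, and its convexity can be checked from the signs of that matrix's eigenvalues. The sketch below assumes NumPy; the matrix Q is simply a rearrangement of the coefficients 3, 1.4, and 2.

```python
# Sketch: the quadratic objective 3*x1^2 + 1.4*x1*x2 + 2*x2^2 written as x^T Q x,
# together with a convexity check (assumes NumPy).
import numpy as np

Q = np.array([[3.0, 0.7],
              [0.7, 2.0]])        # the two off-diagonal entries share the 1.4*x1*x2 term

def quadratic(x):
    return x @ Q @ x              # equals 3*x1**2 + 1.4*x1*x2 + 2*x2**2

x = np.array([1.0, 2.0])
print(quadratic(x))                          # 3 + 2.8 + 8 = 13.8
print(np.all(np.linalg.eigvalsh(Q) >= 0))    # True: no negative eigenvalues, so convex
```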

      Nonlinear programming algorithms typically proceed by making a sequence of guesses of the variable vector x (known as iterates and distinguished by superscripts x^1, x^2, x^3, …) with the goal of eventually identifying an optimal value of x. Often, it is not practical to identify the globally optimal value of x. In these cases, one must settle for a local optimum—the best value in some region of the feasible solutions. Each iterate is chosen on the basis of knowledge about the constraint and objective functions gathered at earlier iterates. Most nonlinear programming algorithms are targeted to a particular subclass of problems. For example, some algorithms are specifically targeted to large, smooth unconstrained problems in which the matrix of second derivatives of f(x) contains few nonzero entries and is expensive to evaluate, while other algorithms are aimed specifically at convex quadratic programming problems, and so on.
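
      As a concrete, deliberately simple illustration of such an iterative scheme, the sketch below applies gradient descent—one of the most basic methods for smooth unconstrained problems, chosen here for illustration rather than singled out by this article—to the quadratic function used earlier; the step size and stopping tolerance are arbitrary illustrative choices.

```python
# Sketch: gradient descent generating a sequence of iterates x^1, x^2, x^3, ...
# for the smooth unconstrained problem of minimizing a convex quadratic (assumes NumPy).
import numpy as np

def f(x):
    return 3 * x[0] ** 2 + 1.4 * x[0] * x[1] + 2 * x[1] ** 2

def grad_f(x):
    return np.array([6 * x[0] + 1.4 * x[1], 1.4 * x[0] + 4 * x[1]])

x = np.array([4.0, -3.0])                 # initial guess x^1
for k in range(100):                      # each pass produces the next iterate
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:          # stop when the gradient is (nearly) zero
        break
    x = x - 0.1 * g                       # fixed step along the negative gradient
print(k, x, f(x))                         # converges toward the minimizer (0, 0)
```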

      Software for solving optimization problems is available both commercially and in the public domain. In addition to computer optimization programs, a number of optimization modeling languages are available that allow a user to describe the problem in intuitive terms, which are then automatically translated to the mathematical form required by the optimization software.
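
      As an illustration of the modeling-language style, the factory example can be stated almost verbatim and passed to a solver. The sketch below assumes the open-source cvxpy package (one of many such tools); the package translates the algebraic description into a standard mathematical form and invokes a solver automatically.

```python
# Sketch: the factory problem written in a modeling-language style (assumes cvxpy).
import cvxpy as cp

x = cp.Variable(2, nonneg=True)                 # the decision variables x1, x2
profit = x[0] + 2 * x[1]                        # objective written as ordinary algebra
constraints = [x[0] <= 8, x[1] <= 5, x[0] + x[1] <= 10]

problem = cp.Problem(cp.Maximize(profit), constraints)
problem.solve()
print(x.value, problem.value)                   # expected: approximately [5. 5.] and 15.0
```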

Stephen J. Wright

Additional Reading
George B. Dantzig, Linear Programming and Extensions (1963, reprinted 1998), is a classic introduction to the history and theory of linear programming with a wealth of practical applications from finance and management. G.V. Shenoy, Linear Programming: Methods and Applications (1989), focuses on linear programming from a management point of view and is accessible to readers without a background in mathematics. H. Paul Williams, Model Building in Mathematical Programming, 4th ed. (1999), shows how many practical problems can be formulated as optimization problems. Jorge Nocedal and Stephen J. Wright, Numerical Optimization (1999), discusses both linear and nonlinear programming. Dimitri P. Bertsekas, Nonlinear Programming, 2nd ed. (1999), is an introductory textbook with a companion Web site where many fully worked-out solutions are available.

Stephen J. Wright

* * *


Universalium. 2010.
