Next: Primality testing Up: Discrete Math, Eleventh Problem Previous: Lovász Lattice Reduction -

Linear programming

The objective of linear programming is to maximize a linear objective function $\mathbf{c}^T\mathbf{x} = c_1x_1+\cdots+c_nx_n$ subject to a system of constraints of the form $A\mathbf{x}\le\mathbf{b}$, where $A$ is a fixed $k\times n$ matrix, $\mathbf{b}$ is a fixed $k\times 1$ vector, and ``$\le$'' denotes the simultaneous system of inequalities in $\mathbf{x}$:

\begin{align*}
a_{11}x_1+\cdots+a_{1n}x_n &\le b_1\\
&\;\;\vdots\\
a_{k1}x_1+\cdots+a_{kn}x_n &\le b_k
\end{align*}

The objective is then to maximize, within this domain, the value of the objective function.

For example, $\mathbf{c}^T\mathbf{x}$ could be a profit function, and the constraints given by $A$ and $\mathbf{b}$ could be production constraints.
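To make this concrete, here is a small sketch in Python using SciPy's `linprog`; the profit figures and resource constraints below are made-up illustrations, and since `linprog` minimizes, the objective is negated to maximize.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical production problem (numbers made up for illustration):
# profit c = (4, 3) per unit of two products, two resource constraints
# A x <= b, and nonnegative production levels x >= 0.
c = np.array([4.0, 3.0])
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([10.0, 15.0])

# linprog minimizes, so negate c to maximize c^T x.
res = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
best_x, best_profit = res.x, -res.fun   # optimal plan (3, 4), profit 24
```

The optimum sits at the vertex where both resource constraints are tight, which is exactly the geometric picture used in the rest of this section.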

Throughout we will use words such as ``up'' to denote the direction of the vector $\mathbf{c}$. In this terminology, we need to reach the apex (top vertex) of the polytope defined by the constraints (if it has vertices).

The simplex algorithm (Dantzig, 1947) solves this problem by hopping from vertex to neighboring vertex, always ascending in the polytope. The algorithm terminates when the apex is reached. For the standard pivot rules this algorithm is provably exponential in the worst case (the Klee-Minty cubes are the classical bad examples), yet in practice it is extremely efficient.
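The vertex-hopping can be sketched in code. Below is a minimal tableau implementation for the special case $\mathbf{x}\ge 0$, $\mathbf{b}\ge 0$ (so the origin is a feasible starting vertex); the pivot rule (most negative reduced cost) and the example numbers are illustrative choices, not part of the original formulation.

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c@x subject to A@x <= b and x >= 0, assuming b >= 0
    (so the origin is a feasible starting vertex)."""
    k, n = A.shape
    # Tableau with slack variables: rows [A | I | b], objective row [-c | 0 | 0].
    T = np.zeros((k + 1, n + k + 1))
    T[:k, :n] = A
    T[:k, n:n + k] = np.eye(k)
    T[:k, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + k))            # slack variables start basic
    while True:
        j = int(np.argmin(T[-1, :-1]))       # entering variable
        if T[-1, j] >= -1e-12:
            break                            # no ascending edge: optimal
        ratios = np.full(k, np.inf)          # minimum-ratio test
        pos = T[:k, j] > 1e-12
        ratios[pos] = T[:k, -1][pos] / T[:k, j][pos]
        i = int(np.argmin(ratios))
        if not np.isfinite(ratios[i]):
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                      # pivot on (i, j)
        for r in range(k + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + k)
    x[basis] = T[:k, -1]
    return x[:n], T[-1, -1]

# Example: max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18.
x, value = simplex(np.array([3.0, 5.0]),
                   np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                   np.array([4.0, 12.0, 18.0]))
# x == (2, 6), value == 36
```

Each pass of the loop is one hop to a neighboring vertex along an edge on which the objective strictly increases.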

This situation presented one of the foremost challenges to the notion of ``polynomial time algorithms.''

The ellipsoid method (Khachiyan, 1979) was the first polynomial-time algorithm for the linear programming problem. Roughly, the algorithm encloses the polytope in an ellipsoid and repeatedly replaces it by a smaller ellipsoid, shrinking the volume by a constant factor at each step while making certain that the topmost vertex (though unknown) is still retained. While this algorithm runs in polynomial time, it is not a practical method for linear programming.

Remark 2.1   The ellipsoid method also suffers from the fact that it may become unstable if the polytope is lower-dimensional; in that case it is important first to reduce to a hyperplane on which the problem is ``of maximal rank'' and then solve there. The ellipsoid method itself pinpoints the approximate direction of a vector perpendicular to such a hyperplane; a simultaneous diophantine approximation of the coordinates of this approximate normal vector then yields the exact direction. This ``rounding'' was Lovász' original objective in developing the basis reduction algorithm.

Remark 2.2   An advantage of the ellipsoid method is that, in cases where the constraints are only implicitly defined, it needs nothing more than a separation oracle: an algorithm that, given a point outside the polytope, returns some constraint that the point violates.
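The idea can be illustrated with a central-cut ellipsoid sketch for the feasibility version of the problem, which touches the polytope only through a separation oracle. The polytope, starting radius, and iteration cap below are illustrative assumptions.

```python
import numpy as np

def ellipsoid_feasible(oracle, n, R=10.0, max_iter=5000):
    """Find a feasible point with the central-cut ellipsoid method.
    oracle(x) returns None if x is feasible, otherwise the normal vector a
    of a constraint a@y <= beta that x violates.  The search starts from
    the ball of radius R around the origin."""
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)    # ellipsoid {y : (y-x) @ inv(P) @ (y-x) <= 1}
    for _ in range(max_iter):
        a = oracle(x)
        if a is None:
            return x                              # current center is feasible
        a = a / np.linalg.norm(a)
        Pa = P @ a
        aPa = a @ Pa
        x = x - Pa / ((n + 1) * np.sqrt(aPa))     # shift center off the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / aPa)
    return None                                   # gave up

# Illustrative polytope: x1 + x2 <= 1, x1 >= 0.2, x2 >= 0.2.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, -0.2, -0.2])

def oracle(x):
    """Separation oracle: return the most violated constraint's normal, if any."""
    v = A @ x - b
    i = int(np.argmax(v))
    return A[i] if v[i] > 1e-12 else None

x_feas = ellipsoid_feasible(oracle, n=2)          # a point with A @ x_feas <= b
```

Each iteration halves the ellipsoid along the violated constraint and takes the minimum-volume ellipsoid containing the feasible half, so the volume shrinks by a fixed factor per step.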

A third approach to this problem is the interior point method of Karmarkar. It starts from the ``center'' of the polytope, chooses a ball contained in the interior of the polytope around the starting point, and moves up inside the ball. Then a projective transformation is applied to make the new point appear to be at the center, and the previous step is repeated. (Note: any interior point of a ball can be transformed into the center of the ball without changing the shape of the ball.) The result is that the point approaches the apex from the inside along a curved trajectory, which is a straight line in the hyperbolic space into which the interior of the polytope can be converted.

This, again, is a polynomial time algorithm. In contrast with the ellipsoid method, it is also practical; for very large numbers of variables, the interior point method beats the simplex algorithm on benchmark inputs.
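A sketch of the interior point idea, using the log-barrier (central path) variant rather than Karmarkar's original projective algorithm; the parameters $t$, $\mu$ and the example problem below are illustrative assumptions.

```python
import numpy as np

def barrier_lp(c, A, b, x0, mu=10.0, outer=10, inner=30):
    """Maximize c@x subject to A@x <= b with a log-barrier method:
    for an increasing parameter t, minimize
        f_t(x) = -t*(c@x) - sum_i log(b_i - (A@x)_i)
    by damped Newton steps.  x0 must be strictly feasible (A@x0 < b)."""
    x = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(outer):
        def f(y):
            s = b - A @ y
            return np.inf if np.any(s <= 0) else -t * (c @ y) - np.log(s).sum()
        for _ in range(inner):
            s = b - A @ x                            # positive slacks
            g = -t * c + A.T @ (1.0 / s)             # gradient of f_t
            H = A.T @ ((1.0 / s ** 2)[:, None] * A)  # Hessian of f_t
            dx = np.linalg.solve(H, -g)              # Newton direction
            step = 1.0                               # backtracking line search
            while f(x + step * dx) > f(x) + 0.25 * step * (g @ dx):
                step *= 0.5
                if step < 1e-12:
                    break
            x = x + step * dx
        t *= mu                                      # tighten the barrier
    return x

# Illustrative example: maximize x1 + x2 over the unit square [0, 1]^2.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_opt = barrier_lp(c, A, b, x0=np.array([0.5, 0.5]))   # approaches (1, 1)
```

The iterates trace out the curved central path toward the apex described above, with the barrier term keeping every iterate strictly inside the polytope.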


Varsha Dani 2003-07-25