Linear Optimization with the Optimization Package: Matrix Form
This worksheet introduces the matrix form of the linear optimization solver LPSolve in the Optimization package. For an introduction to the algebraic form, please refer to the Optimization example worksheet.
In this worksheet, we will be minimizing a linear function (referred to as the objective function) subject to a number of linear constraints. Certain maximization examples are included as well. In many of the examples, the maximize option can be added to the command to find the maximum instead of the minimum.
|
Linear Optimization Primer
|
|
|
A linear optimization problem, often called a linear program or simply an LP, is an optimization problem of the form

minimize        c^T x              (objective function)

subject to:
                A x ≤ b            (linear inequality constraints)
                Aeq x = beq        (linear equality constraints)
                bl ≤ x ≤ bu        (bounds)

where x is the vector of problem variables; c, b, beq, bl, and bu are vectors; and A and Aeq are matrices. The relations involving matrices and vectors are element-wise. In the objective function, c^T refers to the transpose of the vector c.
|
|
The standard form for linear programs has no linear inequality constraints (no A and b) and uses the bound x ≥ 0 (that is, we let bl = 0 and bu = ∞ for all components). The standard form LP is given by

minimize        c^T x              (objective function)

subject to:
                Aeq x = beq        (linear equality constraints)
                x ≥ 0              (nonnegativity bounds)
Another common form for linear optimization problems is the symmetric form. The symmetric form is a maximization problem that has no linear equality constraints (no Aeq and beq) and uses the bound x ≥ 0.

maximize        c^T x              (objective function)

subject to:
                A x ≤ b            (linear inequality constraints)
                x ≥ 0              (nonnegativity bounds)
While the symmetric form has only linear inequality constraints, it should be noted that the algorithms used to solve the problem will generally convert the inequality constraints to equality constraints by adding slack variables. For example, an inequality constraint such as

2x + 3y ≤ 4

where x ≥ 0 and y ≥ 0, is converted into an equality constraint by adding a new nonnegative variable u:

2x + 3y + u = 4

where x ≥ 0, y ≥ 0, and u ≥ 0.

For each inequality constraint in the optimization problem, a slack variable is introduced in order to convert that constraint to an equality constraint. Consequently, when solving a symmetric form optimization problem, the number of variables used in actually solving the problem is the number of variables in the problem statement plus the number of inequality constraints.
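As a concrete sketch of this conversion, the augmented constraint matrix can be built directly in Maple. The data below are the small example above, and the construction assumes the LinearAlgebra package:

> with(LinearAlgebra):
> A := Matrix([[2, 3]]):           # one inequality: 2x + 3y <= 4
> b := Vector([4]):
> Aeq := <A | IdentityMatrix(1)>:  # append one slack column per inequality
> Aeq;                             # Matrix([[2, 3, 1]]), encoding 2x + 3y + u = 4

The same b serves as the right-hand side of the resulting equality constraint, with the slack variable u required to be nonnegative.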
All of the constraints of the problem, including the bounds, define a feasible region for the problem. The feasible region for a linear optimization problem is always a convex polytope, which may or may not be bounded. Since the objective function is also linear, it follows that if a linear optimization problem has an optimal solution, then either there exists a unique optimal solution or there exist infinitely many optimal solutions. If there is a unique optimal solution, it is a corner point of the feasible region; in this case, the line defined by setting the objective function equal to the optimal value intersects the feasible region at a single point. When there is more than one optimal solution, the line defined by setting the objective function equal to the optimal value must intersect the feasible region in at least two points. However, since the feasible region is a convex polytope, this intersection must be an edge or a face of the polytope, and so it intersects the feasible region at infinitely many points.
|
A Simple Example
|
|
|
Consider the following two-variable linear optimization problem, written algebraically:
|
min
subject to
The set of constraint inequalities, including the non-negativity inequalities, is given by
>
|
|
| (1.1.1) |
The set of points that satisfy all of these constraint inequalities defines the feasible region of the problem. The feasible region for this example is shown in the shaded region (including the borders) in the plot below.
>
|
|
The objective function of the problem is given by
Some contours of the objective function are shown in the following plot along with the location of a point in the feasible region that minimizes the objective function.
>
|
|
The example problem can be written in matrix form using the following vectors and matrix:
>
|
|
The Vector c corresponds to the objective function, the Matrix A is the inequality constraint matrix and holds the coefficients of the inequality constraint functions, and the Vector b corresponds to the constants on the right-hand sides of the constraint inequalities.
To solve this problem we use the LPSolve function of the Optimization package. This function will output the optimal value (minimal value of the objective function) followed by a point in the feasible region that achieves this optimal value. The function accepts the LP in either matrix form or algebraic form and is called as follows.
>
|
|
| |
| (1.1.4) |
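Because the worksheet input is not preserved in this export, the following is a minimal sketch of a matrix-form LPSolve call; the data c, A, and b are illustrative values, not the worksheet's original problem:

> with(Optimization):
> c := Vector([-1, -2]):            # minimize -x - 2y
> A := Matrix([[1, 1], [2, 1]]):    # x + y <= 4 and 2x + y <= 6
> b := Vector([4, 6]):
> LPSolve(c, [A, b], assume = nonnegative);

For these illustrative data the result is [-8., [0., 4.]]: the minimal objective value followed by a feasible point that achieves it.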
|
|
Methods for Solving Linear Optimization Problems
|
|
Linear optimization solvers are typically based on one of three techniques: the simplex method, an active-set method, or an interior point method. While these methods arrive at a solution in different ways, they are all iterative methods: each computes a sequence of successively better (or no worse) solutions until either an exact optimal solution is found or a good enough solution is found, where "good enough" is defined by acceptable floating-point tolerances.
The first technique that we consider is the simplex method. Invented in 1947 by George Dantzig, the simplex method uses the corner points of the feasible region as its iterates. At each iteration of the algorithm, a neighboring corner point is chosen such that the objective value either improves or does not change. When the algorithm determines that no further step can improve the objective value it terminates with the current corner point as an optimal solution.
Consider the two dimensional feasible region in the shaded region of the following plot.
>
|
|
| |
If the objective function for a maximization problem is simply the y coordinate, then the optimal solution should be the point in the feasible region with the greatest y value. In a simplex-based method, an initial feasible point (corner point) is first constructed. Suppose we have the initial point
>
|
|
| (1.2.2) |
A simplex-based method will look at all neighboring corner points (relative to the current point) and decide which point gives the best improvement in the objective function. If the objective value at the current point is strictly better than at all neighboring corner points, then the current point is an optimal point and the algorithm can terminate with it. Otherwise, a neighbor that improves the objective value the most (where the improvement might be zero) is chosen, and the current point is updated to this neighbor.
In this example, three new points need to be considered before an optimal solution is found. The sequence of points is shown in the following plot.
>
|
|
| |
In this example there was a unique solution because the objective function was maximized at a single point in the feasible region. This need not be the case: if the objective function were instead parallel to one of the edges of the feasible region, there would be infinitely many optimal solutions. For example, consider the same feasible region with the objective function
>
|
|
In the following plot, the green line shows the contour of the objective function where it is maximized. The two red points are corner points that are optimal solutions to the maximization problem, and every point on the green line between these two points is also optimal. A simplex-based method will return one of the two corner points as the solution to the problem.
>
|
|
In higher dimensions, the contour of the objective function where it is maximized might intersect the feasible region at a single point, along an edge (giving infinitely many optimal solutions), or along a face of the polytope (again giving infinitely many optimal solutions). In any case, a simplex-based method always finds a corner point as an optimal solution if one exists.
In an active-set method, each iteration considers points that are solutions of a subset of the constraints called the active set. The active set of a given iteration is the current estimate of which constraints are active at an optimal solution. In each iteration, a new point x' = x + Δx is computed from the current point x by finding a step direction Δx. The step direction is determined so that the change in objective value from x to x' is optimized while enforcing that the active set stays active at x'; thus, the point x' satisfies the current active set with equality.
|
In each iteration, the active set is also modified until the algorithm finds the active set of an optimal solution.
When the number of constraints is greater than the number of variables in a linear optimization problem, the active-set method will consider active sets of size equal to the number of variables.
In an interior point method, each iteration considers points that are strictly inside the feasible region, with the exception of the initial point, which may be outside the feasible region.
In each iteration, a new point x' = x + α Δx is computed from the current point x by finding a step direction Δx. The step direction is chosen to optimize the change in objective value from x to x'. The step length α is used to ensure that the new point x' lies strictly inside the feasible region.
For example, using the same feasible region and the same objective function as in the simplex method example above, the iterates of an interior point method might look something like the following plot. Here, the initial point is infeasible.
>
|
|
| |
In the example given, the interior point method does not find a corner point as its solution to the problem. Instead, it finds a point in the center of the line of optimal solutions. This behavior is common with interior point methods. In this simple example, an interior point method will most likely require more iterations than a simplex-based method, since the number of corner points that could be visited is very small. In larger problems with higher dimensions, the number of iterations for an interior point method can be much smaller than for a simplex-based method.
|
References
|
|
For more details on the simplex method and interior point methods see:
•
|
Vasek Chvatal. Linear Programming. W. H. Freeman, New York, 1983. [Simplex]
|
•
|
Stephen J. Wright. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics (SIAM), 1997. [Interior Point]
|
For details of the active-set method implemented here, see:
•
|
P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. "Inertia-Controlling Methods for General Quadratic Programming." SIAM Review, Vol. 33 (1991): 1–36. [Active Set]
|
For details of the interior point method implemented here, see:
•
|
M. Gonzalez-Lima, H. Wei, and H. Wolkowicz. "A Stable Primal-Dual Approach for Linear Programming under Nondegeneracy Assumptions." Computational Optimization and Applications, Vol. 44 (2009): 213–247.
|
|
|
|
|
LPSolve: The Optimization Package's Linear Optimization Solver
|
|
Linear optimization problems are solved using the LPSolve function in the Optimization package.
>
|
|
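The input above was stripped from this export; it presumably loads the package, along the lines of the following sketch:

> with(Optimization);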
|
Calling Sequence
|
|
LPSolve(c,lc,bd,opts)
|
|
Parameters
|
|
c -- A Vector containing the coefficients of the objective function of the optimization problem.
•
|
When using the activeset method, the value NoUserValue can be passed for c when the objective is a Vector of zeros (that is, a pure feasibility problem).
|
lc -- An optional list containing the linear constraints of the optimization problem.
•
|
lc = [A,b] if there are only inequality constraints,
|
•
|
lc = [A,b,Aeq,beq] if there are both inequality and equality constraints, or
|
•
|
lc = [NoUserValue,NoUserValue,Aeq,beq] if there are only equality constraints.
|
bd -- An optional list of bounds for the problem.
•
|
bd = [bl, bu], where bl and bu are Vectors of lower and upper bounds on the problem variables.
|
•
|
When bl = 0 and bu = ∞, we can omit the parameter and instead use the option assume = nonnegative. (See opts below.)
|
opts -- Additional options for the function.
|
|
Options
|
|
Options are equations of the form option=value. They are used to specify some additional options for the function. The options for LPSolve include
•
|
assume = nonnegative -- Assume that all variables are non-negative. This can be used in place of the bd parameter when bl = 0 and bu = ∞.
|
•
|
feasibilitytolerance = realcons(positive) -- For the active-set method, set the maximum absolute allowable constraint violation. For the interior point method, set the tolerance for the sum of the relative constraint violation and relative duality gap.
|
•
|
infinitebound = realcons(positive) -- For the active-set method, set any value of a variable greater than the infinitebound value to be equivalent to infinity during the computation. This option is ignored when using the interior point method.
|
•
|
initialpoint = Vector -- Use the provided initial point, which is an n-dimensional Vector of numeric values.
|
•
|
iterationlimit = posint -- Set the maximum number of iterations performed by the active-set or interior point algorithm.
|
•
|
maximize = true -- Maximize the objective function when equal to true and minimize when equal to false. The default is maximize = false.
|
•
|
output = solutionmodule -- Return a module as described in the Optimization/Solution help page. This can be used to reveal more information about the solution.
|
The LPSolve function uses one of two algorithms to solve the given linear optimization problem. The particular method can be specified using the method option.
•
|
method = activeset, interiorpoint -- Specify whether to use the active-set or interior point algorithms. If neither method is requested, a heuristic is used to choose the method. In general, the interior point method will be more efficient for large, sparse problems. See the notes below for further details on each algorithm. If the input is given in Matrix form and the constraint matrices have the storage=sparse option set, the interior point method will be used. Otherwise, the heuristic is based on the number of variables, constraints, and the density of the constraint coefficient matrices.
Note: This worksheet focuses on using the interiorpoint method for LPSolve.
|
|
|
Choosing the activeset or interiorpoint Method
|
|
The specific algorithm used by LPSolve can be specified using the method option as described above. If a method is not specified, a heuristic is used to choose one. The best choice of method depends on the given problem, especially its size, and on the desired output.
The activeset method: The activeset method is a good choice for small- to medium-sized problems. It is also the preferred choice when a corner point optimal solution is needed. If the number of variables in the problem is greater than the number of constraints, the activeset method will actually use the simplex method to solve the problem.
The interiorpoint method: The interior point method should be used for large problems. If the problem is both large and sparse (that is, the constraint matrices have relatively few nonzero entries), the interior point method is much more efficient than the activeset method. Note that the solution obtained by the interior point method will, in general, not be a corner point unless the problem has a unique solution.
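As a sketch with hypothetical data, the method can be forced explicitly; this small dense problem would normally be handled by the active-set method, but here the interior point method is requested:

> # assuming the Optimization package is already loaded
> c := Vector([1, 1, 1]):
> A := Matrix([[1, 2, 3], [2, 1, 0]]):
> b := Vector([6, 4]):
> LPSolve(c, [A, b], assume = nonnegative, method = interiorpoint);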
|
|
|
Example Use of LPSolve
|
|
|
Standard Form Example
|
|
maximize        c^T x              (objective function)

subject to:
                Aeq x = beq        (linear equality constraints)
                x ≥ 0              (nonnegativity bounds)
>
|
|
| (3.1.1) |
When the optimization problem does not have any inequality constraints, the linear constraints parameter must include the value NoUserValue (for both A and b) to indicate that there are no inequality constraints.
>
|
|
| (3.1.2) |
Since this is a maximization problem, the maximize option is included.
>
|
|
| (3.1.3) |
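Since the worksheet's original data are not preserved in this export, here is a hedged sketch of such a call on hypothetical data, combining the NoUserValue placeholders with the maximize option:

> # assuming the Optimization package is already loaded
> c := Vector([1, 2, 3]):
> Aeq := Matrix([[1, 1, 1], [1, -1, 0]]):
> beq := Vector([6, 0]):
> LPSolve(c, [NoUserValue, NoUserValue, Aeq, beq], assume = nonnegative, maximize = true);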
The bounds on x could instead have been added explicitly with the bd parameter.
>
|
|
| (3.1.4) |
>
|
|
| (3.1.5) |
|
|
Symmetric Form
|
|
minimize        c^T x              (objective function)

subject to:
                A x ≤ b            (linear inequality constraints)
                x ≥ 0              (nonnegativity bounds)
>
|
|
| (3.2.1) |
Here there are no equality constraints and so they can be omitted from the linear constraints list.
>
|
|
| (3.2.2) |
|
|
General Constraints
|
|
minimize        c^T x              (objective function)

subject to:
                A x ≤ b            (linear inequality constraints)
                Aeq x = beq        (linear equality constraints)
                x ≥ 0              (nonnegativity bounds)
>
|
|
| (3.3.1) |
>
|
|
| (3.3.2) |
| (3.3.3) |
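A sketch of the full linear-constraints list with both inequality and equality blocks (hypothetical data):

> # assuming the Optimization package is already loaded
> c := Vector([2, 3, 1]):
> A := Matrix([[1, 1, 1]]):    b := Vector([10]):   # x1 + x2 + x3 <= 10
> Aeq := Matrix([[1, -1, 0]]): beq := Vector([2]):  # x1 - x2 = 2
> LPSolve(c, [A, b, Aeq, beq], assume = nonnegative);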
|
|
General Bounds
|
|
minimize        c^T x              (objective function)

subject to:
                A x ≤ b            (linear inequality constraints)
                bl ≤ x ≤ bu        (bounds)
>
|
|
| (3.4.1) |
First, solve the problem with the nonnegative bounds x ≥ 0.
>
|
|
| (3.4.2) |
We can restrict the feasible region by considering only a narrower range of values for x.
>
|
|
| (3.4.3) |
Since the feasible region has been restricted, the optimal value can be no better (that is, no smaller for a minimization) than the one found above, where x was less restricted.
>
|
|
| (3.4.4) |
The bounds can be different for each component of x. For example, we can relax some bounds (allowing a negative lower bound) and restrict others.
>
|
|
| (3.4.5) |
>
|
|
| (3.4.6) |
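A sketch of per-component bounds via the bd parameter (hypothetical data; bd = [bl, bu] holds one lower and one upper bound per variable):

> # assuming the Optimization package is already loaded
> c := Vector([1, -1]):
> A := Matrix([[1, 1]]):   b := Vector([3]):  # x1 + x2 <= 3
> bl := Vector([-1, 0]):                      # relaxed lower bound on x1
> bu := Vector([2, 2]):                       # restricted upper bounds
> LPSolve(c, [A, b], [bl, bu]);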
|
|
|
Obtaining More Information about the Solution: solutionmodule and infolevel
|
|
The output of LPSolve is a list containing the optimal objective value and one solution point (one feasible value of x that achieves the optimal objective value). There are two ways in which more information can be obtained about a solution to a linear optimization problem. We outline these two methods below.
|
Option: output = solutionmodule
|
|
One of the options of LPSolve is output = solutionmodule. With this option set, the return value of LPSolve will be a module instead of a list containing simply the objective value and a solution point.
The returned module has two exports, Settings and Results. Each export is a procedure that accepts a single optional string or name argument. When an argument is provided, the associated value is returned. For example, if the module is assigned to the variable m, then the call m:-Results("objectivevalue") returns the final numeric objective function value. When no argument is provided, all available information is returned as a list of equations.
The Settings procedure accepts any of the following names as an argument: feasibilitytolerance, infinitebound, initialpoint, or iterationlimit. These names correspond to the options of the same name described in the Optimization[Options] help page. If a particular option is requested that was not set in the call to LPSolve then an error is returned.
The Results procedure accepts as an argument one of: iterations, objectivevalue, or solutionpoint. The arguments objectivevalue and solutionpoint correspond to the final objective function value and the solution point that are returned by LPSolve by default. The argument iterations refers to the total number of iterations performed.
Consider the following example of a linear optimization problem with only equality constraints, assuming x ≥ 0.
>
|
|
| (4.1.1) |
A typical call to LPSolve returns the objective value and a solution point
>
|
|
| (4.1.2) |
Using the output = solutionmodule option, the return value is now a module.
>
|
|
| (4.1.3) |
The Results and Settings exports of the module yield
| (4.1.4) |
>
|
|
| (4.1.5) |
Individual values can be accessed as follows.
>
|
|
| |
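A sketch of querying the module directly, assuming the module returned above is assigned to m and the problem data match the stripped input:

> m := LPSolve(c, [NoUserValue, NoUserValue, Aeq, beq], assume = nonnegative, output = solutionmodule):
> m:-Results("objectivevalue");   # final objective value
> m:-Results("solutionpoint");    # Vector achieving that value
> m:-Results("iterations");       # total iterations performed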
The Settings export can be used to keep track of changes to the default options of LPSolve. It also stores a user-specified initial point if one is given.
>
|
|
| (4.1.7) |
>
|
|
| |
| (4.1.8) |
|
|
Increasing infolevel[Optimization]
|
|
The amount of information displayed while LPSolve is computing a solution can be increased by adjusting the infolevel setting for the Optimization package. This is a general technique that can be used with all Maple packages.
Detailed information about the execution of LPSolve will be displayed when infolevel[Optimization] is set to 4 or 5.
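The inputs below were stripped from this export; setting the level is a one-line assignment, along the lines of:

> infolevel[Optimization] := 4: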
>
|
|
>
|
|
| (4.2.1) |
The default infolevel value for the Optimization package is 0, which yields the normal (minimal) output when LPSolve is called.
>
|
|
| (4.2.2) |
Detailed information about each iteration of the interior point method can be displayed by setting the infolevel to 5.
>
|
|
LPSolve: calling LP solver
| |
LPSolve: using method=interiorpoint
| |
LPSolve: number of problem variables 3
| |
LPSolve: number of slack variables 2
| |
LPSolve: Start of solve. Dimensions of A are 2 x 5
| |
LPSolve: Using solver: STABLE_DIRECT
| |
LPSolve: Number of nonzeros in A: 6
| |
LPSolve: relErr at start is: 251.995024875622
| |
LPSolve: Lip constant is: 20.
| |
LPSolve: System Parameters are:
| |
LPSolve: tau = 0.9998, haveCrossover = 0, usingPurify = false, relErrtoler = 1.1e-08
| |
LPSolve: iter mu relErr alphaP alphaD sigma itnTime PInf DInf
| |
LPSolve: 0 3.0e+02 2.5e+02 0.8767 1.0000 1.5e-02 0.00 5.0e+02 6.0e+00
| |
LPSolve: 1 3.7e+01 3.2e+01 1.0000 1.0000 3.3e-06 0.00 6.2e+01 0.0e+00
| |
LPSolve: 2 5.4e-01 1.7e+00 1.0000 0.7997 1.2e-02 0.00 8.9e-16 8.9e-16
| |
LPSolve: 3 9.8e-02 2.6e-01 0.0491 1.0000 3.1e-03 0.00 0.0e+00 0.0e+00
| |
LPSolve: 4 1.3e+00 2.9e+00 1.0000 0.9806 1.2e-05 0.00 1.1e-16 8.9e-16
| |
LPSolve: 5 2.5e-02 5.5e-02 0.9998 0.9996 9.1e-05 0.00 1.1e-16 1.3e-15
| |
LPSolve: 6 1.0e-05 2.1e-05 0.9998 0.9998 1.0e-13 0.00 2.2e-16 0.0e+00
| |
LPSolve: 7 2.0e-09 4.3e-09 0.9998 0.9998 1.0e-13 0.00 ------- -------
| |
LPSolve: CPU seconds: average 0.00 seconds per itns on 7 itns, with relErr := 4.3e-09
| |
LPSolve: primal optimal value is : -1.333333327468603e+00
| |
LPSolve: dual optimal value is : -1.333333337498659e+00
| |
LPSolve: No of iterations : 8
| |
LPSolve: rel. min. x vrble is: 3.10200018424348e-009
| |
LPSolve: rel. min. z vrble is: 9.10817627285263e-010
| |
LPSolve: abs. min. x vrble is: 1.03400006691238e-009
| |
LPSolve: abs. min. z vrble is: 2.12524113219043e-009
| |
LPSolve: rel. norm(A.x-b) is: .951619733694216e-16
| |
LPSolve: rel. norm(A^T.y+z-c) is: .951619733694216e-16
| |
LPSolve: gap is : .1003006e-7
| |
LPSolve: relgap is : .429859713518355e-8
| |
LPSolve: relnormxz is : 1.71377252084992e-009
| |
| (4.2.3) |
Similarly, more information can be obtained about the activeset method for LPSolve.
>
|
|
LPSolve: calling LP solver
| |
LPSolve: matrix size (number of entries) = 6, density = .666666666666667
| |
LPSolve: choosing nonsparse method
| |
LPSolve: using method=activeset
| |
LPSolve: number of problem variables 3
| |
LPSolve: number of general linear constraints 2
| |
LPSolve: feasibility tolerance set to 0.1053671213e-7
| |
LPSolve: iteration limit set to 50
| |
LPSolve: infinite bound set to 0.10e21
| |
LPSolve: number of iterations taken 2
| |
LPSolve: final value of objective -1.33333333333333348
| |
| (4.2.4) |
|
|
|
Netlib Problems
|
|
The Netlib repository, which is a collection of software and test cases for scientific computing, contains a suite of linear optimization test cases. Many of the test cases come from real-world applications and all are difficult to solve in some sense. These test cases are routinely used when testing linear optimization solvers.
One of the smallest Netlib test cases is afiro, which originated from the Systems Optimization Laboratory at Stanford University. The original problem has 28 constraints (some equality and some inequality) and 32 variables. After removing a redundant constraint and converting to standard equality form (by adding slack variables), the problem has 27 constraints and 51 variables. The expected optimal value is -464.75314286.
>
|
|
| (5.1) |
The variable Program holds all the problem data read from the file.
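The worksheet most likely reads the MPS file with the Optimization package's ImportMPS command; a hedged sketch follows (the file name here is hypothetical):

> # assuming the Optimization package is already loaded
> Program := ImportMPS("afiro.mps"):   # hypothetical path to the Netlib file

The LPSolve call on the imported data, whose log appears below, was stripped from this export.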
>
|
|
LPSolve: calling LP solver
| |
LPSolve: using method=interiorpoint
| |
LPSolve: number of problem variables 32
| |
LPSolve: number of slack variables 19
| |
LPSolve: Start of solve. Dimensions of A are 27 x 51
| |
LPSolve: Using solver: STABLE_DIRECT
| |
LPSolve: Number of nonzeros in A: 102
| |
LPSolve: relErr at start is: 5.13893981160143
| |
LPSolve: Lip constant is: 400.750625000000
| |
LPSolve: System Parameters are:
| |
LPSolve: tau = 0.9998, haveCrossover = 0, usingPurify = false, relErrtoler = 1.1e-08
| |
LPSolve: iter mu relErr alphaP alphaD sigma itnTime PInf DInf
| |
LPSolve: 0 5.8e+02 5.1e+00 0.9070 0.8645 6.3e-02 0.00 1.9e+03 4.9e+00
| |
LPSolve: 1 9.4e+01 2.0e+02 1.0000 0.9452 6.4e-02 0.00 1.7e+02 6.6e-01
| |
LPSolve: 2 1.3e+01 5.3e+00 0.7609 0.7982 1.1e-01 0.00 5.7e-14 3.6e-02
| |
LPSolve: 3 4.0e+00 4.5e-01 1.0000 0.7981 1.8e-02 0.00 5.7e-14 7.3e-03
| |
LPSolve: 4 6.3e-01 6.1e-02 0.7551 1.0000 1.1e-01 0.02 2.8e-14 1.5e-03
| |
LPSolve: 5 9.5e-02 1.0e-02 0.9975 0.9998 8.3e-04 0.00 5.7e-14 4.4e-16
| |
LPSolve: 6 2.1e-04 2.3e-05 0.9998 0.9998 1.0e-13 0.00 5.7e-14 4.4e-16
| |
LPSolve: 7 4.1e-08 4.5e-09 0.9998 0.9998 1.0e-13 0.00 ------- -------
| |
LPSolve: CPU seconds: average 0.00 seconds per itns on 7 itns, with relErr := 4.5e-09
| |
LPSolve: primal optimal value is : -4.647531414123108e+02
| |
LPSolve: dual optimal value is : -4.647531435228928e+02
| |
LPSolve: No of iterations : 8
| |
LPSolve: rel. min. x vrble is: 1.02837448398118e-011
| |
LPSolve: rel. min. z vrble is: 3.73541881073328e-012
| |
LPSolve: abs. min. x vrble is: 5.14187241906322e-009
| |
LPSolve: abs. min. z vrble is: 3.73541881078619e-011
| |
LPSolve: rel. norm(A.x-b) is: .442477204527800e-16
| |
LPSolve: rel. norm(A^T.y+z-c) is: .381394492791639e-17
| |
LPSolve: gap is : .2110582e-5
| |
LPSolve: relgap is : .453154644118094e-8
| |
LPSolve: relnormxz is : 4.30684555500199e-010
| |
| (5.2) |
The relative difference between the solution obtained and the expected solution is
>
|
|
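A sketch of that computation; the name val is illustrative and stands for the primal optimal value reported in the log above:

> val := -464.7531414123108:                        # from the log above
> abs(val - (-464.75314286)) / abs(-464.75314286);  # relative difference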
A much larger Netlib problem, 80BAU3B, was supplied by Bob Fourer. When all constraints are converted to equality constraints by adding slack variables, the problem has 2262 constraints and 12061 variables. The non-slack variables also have bounds other than 0 and ∞. The expected optimal value is 987232.16072.
>
|
|
| (5.4) |
The problem data is saved into variable Program.
Note: This problem requires some time to solve! Uncomment the LPSolve command and the following command to compute the relative difference if you wish to proceed.
>
|
|
| (5.5) |
The relative difference between the solution obtained and the expected solution is
>
|
|
|
Return to Index for Example Worksheets
|