LP definition with bounds
Let’s start by defining a linear programming problem in canonical form:

$$\min_{x} \; c^\top x \quad \text{s.t.} \quad Ax = b, \quad l \le x \le u$$

where:
- $A \in \mathbb{R}^{m \times n}$ is the constraint matrix.
- $b \in \mathbb{R}^m$ is the right-hand side vector.
- $c \in \mathbb{R}^n$ is the coefficient vector of the objective function.
- $l, u \in \mathbb{R}^n$ are the lower and upper bounds on the variables.
A feasible solution to this problem is a vector $x$ that satisfies all the constraints. A feasible solution is optimal if, in addition, it minimizes the objective function over the feasible set.
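To make the definitions concrete, here is a minimal NumPy sketch that checks feasibility for the canonical form above. The matrix, vectors, and tolerance are illustrative choices, not data from the text:

```python
import numpy as np

def is_feasible(A, b, x, l, u, tol=1e-9):
    """Check A x = b and l <= x <= u, up to a small numerical tolerance."""
    return bool(np.allclose(A @ x, b, atol=tol)
                and np.all(x >= l - tol)
                and np.all(x <= u + tol))

# Toy data (illustrative): 2 constraints, 3 variables.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
l = np.zeros(3)
u = np.full(3, 5.0)

print(is_feasible(A, b, np.array([1.0, 1.0, 2.0]), l, u))   # True:  Ax = b and bounds hold
print(is_feasible(A, b, np.array([4.0, -2.0, 5.0]), l, u))  # False: Ax = b, but x_2 < l_2
```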
Note
Depending on the source, the definitions of “canonical form” and “standard form” of an LP problem vary. Here, we use the most convenient one for our purposes, but be aware that they are often described as:
- Generic: $\min c^\top x$ with mixed constraints ($\le$, $=$, $\ge$) and free or bounded variables.
- Canonical: $\min c^\top x$ s.t. $Ax \ge b$, $x \ge 0$.
- Standard: $\min c^\top x$ s.t. $Ax = b$, $x \ge 0$.
Basis
A basis $B$ is a set of $m$ indices corresponding to a subset of columns of $A$.
We define $A_B$ as the $m \times m$ matrix formed by the columns of $A$ indexed by $B$, and we assume that $A_B$ is invertible.
If $A$ is full rank, then there exists at least one such basis.
Each column of $A_B$ corresponds to a basic variable.
The remaining variables are called non-basic variables, and the corresponding indices belong to the set $N$, where $A_N$ is the matrix formed by the non-basic columns of $A$.
Note
Let’s consider an example with $m = 2$ and $n = 4$. Then, for a problem with constraint matrix $A = (a_{ij})$ and basis $B = \{1, 3\}$ we have:

$$A_B = \begin{pmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{pmatrix}, \qquad N = \{2, 4\}, \qquad A_N = \begin{pmatrix} a_{12} & a_{14} \\ a_{22} & a_{24} \end{pmatrix}$$
Following the partition of variables into basic and non-basic, we can follow a similar approach for the vectors $x$, $c$, $l$, and $u$:

$$x = \begin{pmatrix} x_B \\ x_N \end{pmatrix}, \quad c = \begin{pmatrix} c_B \\ c_N \end{pmatrix}, \quad l = \begin{pmatrix} l_B \\ l_N \end{pmatrix}, \quad u = \begin{pmatrix} u_B \\ u_N \end{pmatrix}, \qquad \text{so that} \quad Ax = A_B x_B + A_N x_N.$$
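The column partition is easy to reproduce numerically. A small NumPy sketch, where the matrix and the chosen basis are illustrative (note the 0-based indices, unlike the 1-based notation above):

```python
import numpy as np

# Illustrative data: a 2x4 constraint matrix and a chosen basis.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
B = [2, 3]                                   # basic column indices (0-based)
N = [j for j in range(A.shape[1]) if j not in B]

A_B = A[:, B]                                # basis matrix (here the identity)
A_N = A[:, N]                                # non-basic columns

# A_B must be invertible for B to be a valid basis.
assert np.linalg.matrix_rank(A_B) == A_B.shape[0]
print(N)        # [0, 1]
```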
Basic Feasible Solution
Given a basis $B$, we can define a basic solution (BS) as a solution where the basic variables are determined by solving the system $A_B x_B = b - A_N x_N$, while the non-basic variables are usually set to their bounds or to $0$, if unbounded. To be considered a basic feasible solution (BFS), the solution must also satisfy the bounds $l \le x \le u$. This corresponds to the following assignments:

$$x_N = \bar{x}_N, \qquad x_B = A_B^{-1}\left(b - A_N \bar{x}_N\right),$$

where $\bar{x}_N$ is a vector containing an arbitrary combination of lower and upper bounds for the non-basic variables.
| Variable Type | Value in BFS |
|---|---|
| Basic Variables | $x_B = A_B^{-1}(b - A_N \bar{x}_N)$, with $l_B \le x_B \le u_B$ |
| Bounded Non-Basic Variables | Set to $l_j$ or $u_j$ |
| Unbounded Non-Basic Variables | Set to $0$ |
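The assignments above can be sketched in a few lines of NumPy. The helper name, its signature, and the toy data are all illustrative; the second call shows a basic solution that is *not* feasible, highlighting the BS/BFS distinction:

```python
import numpy as np

def basic_solution(A, b, l, u, B, at_upper=()):
    """Basic solution for basis B: each non-basic variable sits at its lower
    bound (or its upper bound if its index is listed in `at_upper`), and the
    basic variables solve A_B x_B = b - A_N x_N."""
    n = A.shape[1]
    N = [j for j in range(n) if j not in B]
    x = np.array([u[j] if j in at_upper else l[j] for j in range(n)], dtype=float)
    x[B] = np.linalg.solve(A[:, B], b - A[:, N] @ x[N])
    return x

# Illustrative toy problem.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
l = np.zeros(4)
u = np.full(4, 10.0)

x = basic_solution(A, b, l, u, B=[2, 3])
print(x)   # [0. 0. 4. 6.] -- within the bounds, hence a BFS

x2 = basic_solution(A, b, l, u, B=[2, 3], at_upper=(0,))
print(x2)  # x_B = [-6, -24] violates l_B: a basic solution, but not feasible
```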
Error analysis
When operating with floating-point arithmetic, numerical errors can accumulate and affect the accuracy of the solution. We now analyse the sources of errors in the Revised Simplex Method and how they impact the solution.
Linear system
An exact solution $x$ to a linear system $Ax = b$ satisfies the equation precisely, while a computed solution $\hat{x}$ solves a perturbed system $(A + \delta A)\hat{x} = b$, where $\delta A$ represents the error we are introducing due to numerical inaccuracies. We are interested in the error $\hat{x} - x$, which represents the difference between the computed solution and the exact one, and in particular in its relative norm:

$$\frac{\lVert \hat{x} - x \rVert}{\lVert x \rVert}$$
Using the definition of the condition number $\kappa(A) = \lVert A \rVert \lVert A^{-1} \rVert$, we can bound the above expression (to first order) as:

$$\frac{\lVert \hat{x} - x \rVert}{\lVert x \rVert} \lesssim \kappa(A)\, \frac{\lVert \delta A \rVert}{\lVert A \rVert}$$
More precisely, we are interested in the residual $r = b - A\hat{x}$, which quantifies how well the computed solution satisfies the original system, or the relative residual $\lVert r \rVert / (\lVert A \rVert \lVert \hat{x} \rVert)$. We can also bound the residual norm as follows:

$$\lVert r \rVert = \lVert b - A\hat{x} \rVert = \lVert \delta A\, \hat{x} \rVert \le \lVert \delta A \rVert\, \lVert \hat{x} \rVert$$

therefore, the relative residual can be bounded as:

$$\frac{\lVert r \rVert}{\lVert A \rVert \lVert \hat{x} \rVert} \le \frac{\lVert \delta A \rVert}{\lVert A \rVert}$$
Some notable results tell us that:
- $\lVert \delta A \rVert \le c_m\, \varepsilon\, \lVert A \rVert$ for a backward-stable solver, where $m$ is the number of rows, $\varepsilon$ is the machine precision, and $c_m$ is a modest constant growing with $m$.
- $\varepsilon \approx 2.2 \times 10^{-16}$ for IEEE double precision.
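These quantities are easy to probe numerically. A NumPy sketch with an arbitrary nearly singular matrix, showing that the relative residual stays near machine precision even when the condition number (and hence the potential forward error) is large:

```python
import numpy as np

# Illustrative nearly singular system: kappa(A) is huge, so the forward-error
# bound kappa(A) * ||dA|| / ||A|| is far weaker than the residual bound.
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-8]])
x_exact = np.array([1.0, 1.0])
b = A @ x_exact

x_hat = np.linalg.solve(A, b)
r = b - A @ x_hat                      # residual of the computed solution
rel_residual = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x_hat))
rel_error = np.linalg.norm(x_hat - x_exact) / np.linalg.norm(x_exact)

print(f"kappa(A)          ~ {np.linalg.cond(A):.1e}")   # very large
print(f"relative residual ~ {rel_residual:.1e}")        # near machine epsilon
print(f"relative error    ~ {rel_error:.1e}")           # may approach kappa * eps
```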
Revised Simplex Method Overview
The Revised Simplex Method is an efficient algorithm for solving linear programming problems. It operates by iteratively improving a basic feasible solution until an optimal solution is found.
Step 1: Initialization
- Choose an initial basis $B$ such that the corresponding basic solution is feasible.
- Compute the simplex multipliers $y^\top = c_B^\top A_B^{-1}$, where $c_B$ are the coefficients of the basic variables in the objective function.
- Compute the reduced costs for the non-basic variables: $\bar{c}_N^\top = c_N^\top - y^\top A_N$.
- We first express the basic variables in terms of the non-basic ones: $x_B = A_B^{-1}(b - A_N x_N)$.
- Then, we consider the objective function: $c^\top x = c_B^\top x_B + c_N^\top x_N = c_B^\top A_B^{-1} b + \left(c_N^\top - c_B^\top A_B^{-1} A_N\right) x_N$.
- We define the reduced costs vector as $\bar{c}_N^\top = c_N^\top - c_B^\top A_B^{-1} A_N = c_N^\top - y^\top A_N$. Changing a non-basic variable affects the objective function value through its reduced cost, e.g., increasing a non-basic variable with a negative reduced cost will decrease the objective function value, moving us closer to the optimum.
- If $\bar{c}_j \ge 0$ for all non-basic variables $x_j$ already at their lower bound and $\bar{c}_j \le 0$ for all non-basic variables already at their upper bound, then the current solution is optimal. Stop.
- If there is a non-basic variable whose reduced cost violates the optimality condition, select one such variable $x_q$ to enter the basis. This is called the entering variable.
- Compute the direction of movement for the basic variables:

$$d = A_B^{-1} a_q$$

where $a_q$ is the column of $A$ corresponding to the entering variable $x_q$.
- Let $t$ be the change in value we want to apply to the entering variable $x_q$, which is currently outside the basis. To keep the system $Ax = b$ satisfied, we need to adjust the basic variables as follows: $A_B x_B(t) = b - A_N x_N - t\, a_q$. From which we get:

$$x_B(t) = x_B - t\, d$$
- While we would like to increase (or decrease) $t$ indefinitely to improve the objective function, we must ensure that $x_B$ and $x_q$ itself remain within their respective bounds: $l_B \le x_B - t\, d \le u_B$ and $l_q \le x_q + t \le u_q$, where the first condition expands componentwise to $l_i \le x_i - t\, d_i \le u_i$ for each basic index $i$.
- In practice, we only care about the strictest bound that limits the increase (or decrease) of $t$.
- Determine the maximum allowable step size $t^*$ for the entering variable without violating any bounds. If no such limit exists (i.e., the entering variable is unbounded in the direction of improvement), then the problem is unbounded. Stop.
- Identify the leaving variable, with index $r$, which is the basic variable that reaches its bound first as we increase (or decrease) $x_q$ by $t^*$.
- Update the basis by replacing the leaving variable with the entering variable.
- Note that, based on how we made the change, the leaving variable will now be at its bound (either lower or upper).
- Update the basic feasible solution accordingly:

$$x_q \leftarrow x_q + t^*, \qquad x_B \leftarrow x_B - t^*\, d,$$

and do the same for the sets $B$ and $N$.
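Putting the steps together, here is a compact NumPy sketch of the method for the simpler case $l = 0$, $u = +\infty$, so that every non-basic variable sits at its lower bound $0$ (the general bounded case adds the upper-bound checks discussed above). The function name, tolerances, and the toy instance at the bottom are illustrative:

```python
import numpy as np

def revised_simplex(A, b, c, B, max_iter=100):
    """Minimal revised simplex for: min c@x  s.t.  A x = b, x >= 0.
    B is a starting basis with A[:, B] invertible and A_B^{-1} b >= 0.
    Returns (x, B) at optimality; raises on unboundedness."""
    m, n = A.shape
    B = list(B)
    for _ in range(max_iter):
        N = [j for j in range(n) if j not in B]
        A_B_inv = np.linalg.inv(A[:, B])
        x_B = A_B_inv @ b                      # basic solution
        y = c[B] @ A_B_inv                     # simplex multipliers
        reduced = c[N] - y @ A[:, N]           # reduced costs
        if np.all(reduced >= -1e-9):           # optimality test (all at lower bound 0)
            x = np.zeros(n)
            x[B] = x_B
            return x, B
        q = N[int(np.argmin(reduced))]         # entering variable
        d = A_B_inv @ A[:, q]                  # direction of movement
        if np.all(d <= 1e-12):
            raise ValueError("problem is unbounded")
        # Ratio test: strictest bound limiting the step size t.
        ratios = np.where(d > 1e-12, x_B / np.where(d > 1e-12, d, 1.0), np.inf)
        r = int(np.argmin(ratios))             # position of the leaving variable
        B[r] = q                               # basis update
    raise RuntimeError("iteration limit reached")

# Toy instance: min -x1 - x2  s.t.  x1 + 2 x2 <= 4, 3 x1 + x2 <= 6 (slacks s1, s2).
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
x, B = revised_simplex(A, b, c, B=[2, 3])
print(x[:2])   # optimal vertex: x1 = 1.6, x2 = 1.2
```

For clarity the sketch recomputes $A_B^{-1}$ from scratch each iteration; production implementations instead maintain a factorization of $A_B$ and update it when the basis changes.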