How much is 1 + 2?
print(1 + 2)  # 3
How much is 1.0 + 2.0?
print(1.0 + 2.0)  # 3.0
How much is 0.1 + 0.2?
print(0.1 + 0.2)  # 0.30000000000000004
How do we represent numbers in a computer?
Computers only store bits, usually in groups of 8, 16, 32, or 64.
It is natural to represent integers in binary.
[Diagram: binary integer representation, e.g. $10111_2 = 23_{10}$; range $0 \le \text{value} \le 2147483647_{10}$.]
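A quick check in Python:

n = 23
print(bin(n))           # '0b10111': 23 in binary
print(int('10111', 2))  # 23: back from binary to decimal
print(2 ** 31 - 1)      # 2147483647: the largest value above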
What about fractions?
A natural way is to use fixed-point numbers.
[Diagram: fixed-point representation, e.g. $10111.101_2 = 23.625_{10}$; range $0 \le \text{value} \le 11\cdots1.1\cdots1_2 = 32767.99998474121_{10}$.]
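A minimal sketch of the idea, storing values as integers scaled by 2^f (the choice f = 3 fractional bits is just for illustration):

f = 3                        # hypothetical number of fractional bits
scale = 2 ** f
x = round(23.625 * scale)    # stored integer: 189, i.e. 10111.101 shifted left by 3 bits
print(x / scale)             # 23.625 recovered
print((x + round(1.5 * scale)) / scale)  # addition is plain integer addition: 25.125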
Pros: arithmetic is essentially integer arithmetic, and the absolute precision is constant.
Cons: the representable range is narrow; very large and very small values cannot coexist.
The IEEE 754 standard defines floating-point numbers.
[Diagram: floating-point representation, e.g. $1.101_2 \times 2^2 = 6.5_{10}$; max $1.1\cdots1_2 \times 2^{127} = (2 - 2^{-23})_{10} \times 2^{127}$.]
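In Python we can inspect this form directly: float.hex() prints the significand in hexadecimal, and math.frexp splits a float into significand and exponent:

import math
x = 6.5
print(x.hex())           # '0x1.a000000000000p+2', i.e. 1.1010...₂ × 2²
m, e = math.frexp(x)     # x = m × 2**e with 0.5 <= m < 1
print(m, e)              # 0.8125 3
print(math.ldexp(m, e))  # 6.5 reassembled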
The number of bits is still finite.
Irrational numbers are not representable in a finite number of bits.
We always have to round them (approximation) or use symbolic computation.
Some fractions create issues too (think of 1/3 in decimal, or 0.1 in binary): fractions whose denominator is not a power of two are called non-dyadic and cannot be represented exactly.
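For instance, Fraction reveals the nearest dyadic rational actually stored for 0.1:

from fractions import Fraction
print(Fraction(0.1))          # 3602879701896397/36028797018963968, the dyadic value actually stored
print(Fraction(1, 3))         # 1/3 is exact only symbolically
print(float(Fraction(1, 3)))  # 0.3333333333333333 once rounded to a float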
The standard dictates that the result of an operation is computed exactly and then rounded.
$$\mathrm{fl}(a+b) = (a+b)(1+\delta), \quad \mathrm{fl}(a-b) = (a-b)(1+\delta), \quad \mathrm{fl}(a\times b) = (a\times b)(1+\delta), \quad \mathrm{fl}(a\div b) = (a\div b)(1+\delta), \quad \text{with } |\delta| \le \epsilon.$$
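A small check of this model, taking ϵ to be the machine epsilon reported by sys.float_info (the bound then holds with room to spare):

import sys
from fractions import Fraction

eps = sys.float_info.epsilon                  # 2**-52 in double precision
a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)             # exact sum of the two stored values
delta = abs(Fraction(a + b) - exact) / exact  # relative rounding error of fl(a + b)
print(float(delta) <= eps)                    # True: |δ| ≤ ϵ as the standard guarantees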
Overflows and underflows are also possible, but they are detectable.
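For example:

import math
x = 1e308 * 10
print(x, math.isinf(x))  # inf True: the overflow is easy to detect
y = 1e-308 / 1e10
print(y)                 # 1e-318: gradual underflow toward zero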
Let the exponent and mantissa be 4 and 7 (+1) bits.
$$252_{10} = 1.1111100_2 \times 2^7, \qquad 1.5_{10} = 1.1_2$$

$$252 + 1.5 = 1.11111011_2 \times 2^7$$

The result would need 8 bits (+1) to be represented.

Stored: $1.1111101_2 \times 2^7 = 253.0 \ne 252 + 1.5 = 253.5$
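A rough sketch of this example: the hypothetical helper below keeps only 8 significand bits (1 implicit + 7 stored) and truncates, as in the slide; real IEEE arithmetic would round to nearest instead.

import math

def truncate_significand(x, bits=8):   # hypothetical helper, for positive x
    m, e = math.frexp(x)                # x = m × 2**e with 0.5 <= m < 1
    return math.ldexp(math.floor(m * 2 ** bits), e - bits)

print(truncate_significand(252.0 + 1.5))   # 253.0, not the exact 253.5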
Errors propagate through operations:

$$\mathrm{fl}(a+b+c) = ((a+b)(1+\delta_1) + c)(1+\delta_2)$$

The exact value depends on the order of operations, making floating-point arithmetic non-associative.
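For instance:

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False
print((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))    # 0.6000000000000001 0.6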
Error analysis is the study of how floating-point computations affect our results.
Forward error analysis vs. backward error analysis: the forward error measures how far the computed result is from the exact one; the backward error measures how small a perturbation of the input would make the computed result exact.
Consider the inner product of two column vectors:

$$\hat{s} = \mathrm{fl}(x^T y) = \mathrm{fl}\left(\sum_{i=1}^{n} x_i y_i\right)$$

$$\hat{s} = x_1 y_1 (1+\delta_1^{\times}) \prod_{i=2}^{n} (1+\delta_i) + \sum_{i=2}^{n} x_i y_i (1+\delta_i^{\times}) \prod_{j=i}^{n} (1+\delta_j)$$

Using backward error analysis,

$$\mathrm{fl}(x^T y) = (x + \Delta x)^T y = x^T (y + \Delta y)$$

with

$$|\Delta x| \le \gamma_n |x|, \qquad |\Delta y| \le \gamma_n |y|$$

(in the usual notation, $\gamma_n = n\epsilon / (1 - n\epsilon)$).
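A minimal numerical check of this bound in single precision, assuming $\gamma_n = n\epsilon/(1-n\epsilon)$ with ϵ the single-precision machine epsilon:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)

s_hat = np.float32(0.0)
for xi, yi in zip(x, y):        # recurring sum: every product and addition is rounded in float32
    s_hat = s_hat + xi * yi

s_exact = np.dot(x.astype(np.float64), y.astype(np.float64))   # reference in double precision
eps = np.finfo(np.float32).eps
gamma_n = n * eps / (1 - n * eps)
bound = gamma_n * np.dot(np.abs(x).astype(np.float64), np.abs(y).astype(np.float64))
print(abs(float(s_hat) - s_exact) <= bound)   # True: |s_hat - x^T y| <= gamma_n * |x|^T |y|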
Compute the LU factorisation of

$$A = \begin{bmatrix} \epsilon & -1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ l_{21} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} \\ 0 & u_{22} \end{bmatrix}$$

$$u_{11} = \epsilon, \quad u_{12} = -1, \quad l_{21} = \epsilon^{-1}, \quad u_{22} = 1 + \epsilon^{-1} \approx \epsilon^{-1}$$

$$\hat{L}\hat{U} = \begin{bmatrix} \epsilon & -1 \\ 1 & 0 \end{bmatrix}$$

Rounding $u_{22}$ turned the (2,2) entry into 0 instead of 1: the small pivot ϵ makes the factorisation unstable without pivoting.
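A quick illustration in double precision, taking ϵ = 1e-20 so that 1 + ϵ⁻¹ rounds to ϵ⁻¹:

import numpy as np

eps = 1e-20
A = np.array([[eps, -1.0],
              [1.0,  1.0]])
l21 = A[1, 0] / A[0, 0]          # 1e20
u22 = A[1, 1] - l21 * A[0, 1]    # fl(1 + 1e20) = 1e20
L_hat = np.array([[1.0, 0.0], [l21, 1.0]])
U_hat = np.array([[eps, -1.0], [0.0, u22]])
print(L_hat @ U_hat)             # the (2, 2) entry is 0.0 instead of 1.0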
The simplex is the most widely used method for solving LP problems:

$$\begin{aligned} \text{maximize} \quad & c^T x \\ \text{subject to} \quad & Ax = b \\ & x \ge 0 \end{aligned}$$

with $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$.
We define a basis B of m columns of A. It must be non-singular.
The revised simplex is the basis for the most common implementations.
The steps of the simplex loop until an optimal solution is found or the problem is detected to be unbounded.
Each iteration solves three linear systems involving B. Why not use an LU decomposition?
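For example, with SciPy a single factorisation of B can serve all three systems (the data below is made up for illustration):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical data for one iteration: basis B, right-hand side b,
# basic costs c_B, and entering column A_e
B   = np.array([[1.0, 1.0],
                [0.0, 3.0]])
b   = np.array([4.0, 6.0])
c_B = np.array([0.0, 2.0])
A_e = np.array([1.0, 1.0])

lu, piv = lu_factor(B)                   # one LU factorisation of B...
x_B = lu_solve((lu, piv), b)             # ...solves B x_B = b,
y_B = lu_solve((lu, piv), c_B, trans=1)  # B^T y_B = c_B,
d   = lu_solve((lu, piv), A_e)           # and B d = A_e
print(x_B, y_B, d)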
When the basis changes by one column, keep L fixed and only update U.
$$H^{(i)} = \begin{bmatrix} U^{(i-1)}_1 & \cdots & U^{(i-1)}_{l_{i-1}-1} & U^{(i-1)}_{l_{i-1}+1} & \cdots & U^{(i-1)}_m & L^{-1} A_e \end{bmatrix}$$

We need to re-triangularise $H^{(i)}$.
Gaussian elimination gives $C^{(i)^{-1}} C^{(i-1)^{-1}} \cdots C^{(1)^{-1}} H^{(i)} = U^{(i)}$, so

$$B^{(i)} = L H^{(i)} = L\, C^{(1)} C^{(2)} \cdots C^{(i)}\, U^{(i)} = L\, G^{(i)} U^{(i)}$$

with $G^{(i)} = C^{(1)} C^{(2)} \cdots C^{(i)}$.

It can be proven that the Bartels–Golub decomposition is backward stable.
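A rough sketch of such an update, with a hypothetical bg_update helper and no row interchanges (a real Bartels–Golub implementation pivots for stability):

import numpy as np

def bg_update(L, U, l, a):
    # Hypothetical helper: replace column l of B = L U by a, keep L, re-triangularise.
    m = U.shape[0]
    spike = np.linalg.solve(L, a)                           # L^{-1} A_e (L is triangular)
    H = np.column_stack([np.delete(U, l, axis=1), spike])   # upper Hessenberg from column l on
    G = np.eye(m)
    for k in range(l, m - 1):
        g = H[k + 1, k] / H[k, k]                           # eliminate the subdiagonal entry
        H[k + 1, :] -= g * H[k, :]
        C = np.eye(m); C[k + 1, k] = g                      # elementary factor C^(k)
        G = G @ C                                           # accumulate G = C^(1) C^(2) ...
    return G, H                                             # H is now the new U

L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.25, 0.5, 1.0]])
U = np.array([[2.0, 1.0, 1.0], [0.0, 3.0, 1.0], [0.0, 0.0, 4.0]])
B = L @ U
a = np.array([1.0, 2.0, 3.0])                               # entering column

G, U_new = bg_update(L, U, 0, a)
B_new = np.column_stack([np.delete(B, 0, axis=1), a])       # old columns shifted left, new one last
print(np.allclose(L @ G @ U_new, B_new))                    # True: B_new = L G U_new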
As long as the updated factors do not grow too much across iterations, the error remains bounded.
At some point it becomes more convenient to recompute the decomposition.
$\exists\, \tau, \epsilon \in \mathbb{R}$, $\tau, \epsilon > 0$, such that
import numpy as np
from math import factorial

def simplex(A, b, c, B_idx, tolerance, epsilon):
    # bartels_golub and solve are assumed available: B = L G U and the corresponding solves
    m, n = A.shape
    for i in range(factorial(n) // factorial(n - m)):  # bound on the number of visited bases
        B = A[:, B_idx]
        L, G, U = bartels_golub(B)
        x_B = solve(L, G, U, b)                        # B x_B = b
        y_B = solve(L.T, G.T, U.T, c[B_idx])           # B^T y_B = c_B
        r = c - A.T @ y_B                              # reduced costs
        if np.all(r <= tolerance): return x_B          # Optimal
        e = np.argmax(r)                               # entering column
        d = solve(L, G, U, A[:, e])                    # B d = A_e
        if np.all(d <= tolerance): return x_B          # Unbounded
        ratios = np.divide(x_B, d, out=np.full(m, np.inf), where=d > tolerance)
        l = np.argmin(ratios)                          # leaving column (ratio test)
        B_idx[l] = e
behaves exactly.
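Assuming the bartels_golub and solve helpers used above are available, a made-up instance could be solved like this:

import numpy as np

# maximize x1 + 2*x2  subject to  x1 + x2 + x3 = 4,  x1 + 3*x2 + x4 = 6,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 2.0, 0.0, 0.0])

x_B = simplex(A, b, c, B_idx=np.array([2, 3]), tolerance=1e-9, epsilon=1e-9)
print(x_B)   # basic variables at the optimum: x1 = 3, x2 = 1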