
Linear programming

Linear programming is a technique for solving optimization problems that involve many variables and constraints in real-world scenarios.

In mathematics, linear programming (LP) problems involve the optimization of a linear objective function, subject to linear equality and inequality constraints.

What are Constraints?

Constraints are equalities or inequalities that describe restrictions involved with the minimization or maximization of the objective function.

Linear programming is a subcategory of mathematical programming. As the name suggests, both the objective function and the constraints are linear.

Put very informally, LP is about trying to get the best outcome (e.g. maximum profit or least effort) given some list of constraints (e.g. only working 30 hours a week, not doing anything illegal), using a linear mathematical model.

More formally, given a polytope (for example, a polygon or a polyhedron) and a real-valued affine function f defined on this polytope, the goal is to find a point in the polytope where this function has the smallest (or largest) value. Such points may not exist, but if they do, searching through the polytope vertices is guaranteed to find at least one of them.

Linear programs are problems that can be expressed in canonical form:

maximize    c^T x
subject to  Ax <= b, x >= 0

where x represents the vector of variables, while c and b are vectors of coefficients and A is a matrix of coefficients. The expression to be maximized or minimized is called the objective function (c^T x in this case). The inequalities Ax <= b are the constraints which specify a convex polyhedron over which the objective function is to be optimized.
Linear programming can be applied to various fields of study. Most
extensively it is used in business and economic situations, but can also be
utilized for some engineering problems. Some industries that use linear
programming models include transportation, energy, telecommunications,
and manufacturing. It has proved useful in modeling diverse types of
problems in planning, routing, scheduling, assignment, and design.

History of linear programming

The problem of solving a system of linear inequalities dates back at least as far as Fourier, after whom the method of Fourier-Motzkin elimination is named. Linear programming arose as a mathematical model developed during the Second World War to plan expenditures and returns in order to reduce costs to the army and increase losses to the enemy. It was kept secret until 1947. Postwar, many industries found its use in their daily planning.

The founders of the subject are George B. Dantzig, who published the simplex method in 1947; John von Neumann, who developed the theory of duality in the same year; and Leonid Kantorovich, a Russian mathematician who used similar techniques in economics before Dantzig and won the Nobel Prize in Economics in 1975. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior point method for solving linear programming problems.

Dantzig's original example of finding the best assignment of 70 people to 70 jobs exemplifies the usefulness of linear programming. The computing
power required to test all the permutations to select the best assignment is
vast; the number of possible configurations exceeds the number of particles
in the universe. However, it takes only a moment to find the optimum
solution by posing the problem as a linear program and applying the simplex
algorithm. The theory behind linear programming drastically reduces the
number of possible optimal solutions that must be checked.

Uses

Linear programming is an important field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much
research on specialized algorithms for their solution. A number of algorithms
for other types of optimization problems work by solving LP problems as sub-
problems. Historically, ideas from linear programming have inspired many of
the central concepts of optimization theory, such as duality, decomposition,
and the importance of convexity and its generalizations. Likewise, linear
programming is heavily used in microeconomics and business management,
either to maximize the income or minimize the costs of a production scheme.
Some examples are food blending, inventory management, portfolio and
finance management, resource allocation for human and machine resources,
planning advertisement campaigns, etc.

Could you list some ways in which linear programming is used in real life?

There is a journal called "The Journal of Operations Research" that contains articles by people who apply linear programming to everyday problems. One of the classic applications is by railroads that own freight cars that have to be sent to all parts of the country: at any given moment, where are the best places to put all the railroad cars so that customers' needs can be met in the shortest time and at the least expense?

And of course in general in economics, linear programming tells you what arrangement of activities will maximize your profits (or whatever you are trying to maximize). Of course things are seldom linear in real life, so extensions of linear programming techniques, rather than linear programming itself, are probably more important these days.

The Simplex Method: A Method of Solving Linear Programs

The simplex method was the first method developed to solve linear programs. In these pages we attempt to explain how the simplex method works. The reader is expected to know some of the basic ideas and terms involved with linear programming and matrix algebra.

Simplex Method Procedure

Step 1: Model the problem.

Step 2: Rewrite the constraint inequalities as equations by introducing slack variables.

Step 3: Rewrite the profit function. Make sure all the variables are on one side.

Step 4: Construct the simplex matrix using the constraint equations (step 2) and the profit equation (step 3).

Step 5: Find the solution to the simplex matrix.

The following steps are necessary to find the maximum solution to the
simplex matrix.

Step 6: Find the pivot column.

Step 7: Find the pivot row.

Step 8: Find the pivot number.

Step 9: Eliminate any other numbers in the pivot column.

Step 10: Find the solution to the matrix.

Step 11: Repeat steps 6 through 10 to find the maximum solution.
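The sketch below implements steps 6 through 10 on a dense tableau, under the simplifying assumptions that the problem is a maximization with all constraints of the form "<=", non-negative right-hand sides (so the initial basic solution is feasible), and no anti-cycling safeguards:

import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Build the tableau: [A | I | b] on top, [-c | 0 | 0] as the bottom row.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))          # slack variables start in the basis
    while True:
        col = int(np.argmin(T[-1, :-1]))   # step 6: pivot column (most negative)
        if T[-1, col] >= 0:
            break                          # optimal: no negative entries remain
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 0 else np.inf
                  for i in range(m)]       # step 7: pivot row (minimum ratio test)
        row = int(np.argmin(ratios))
        if ratios[row] == np.inf:
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]              # step 8: scale the pivot number to 1
        for i in range(m + 1):             # step 9: eliminate rest of pivot column
            if i != row:
                T[i] -= T[i, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)                    # step 10: read off the solution
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[-1, -1]

# e.g. maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18
x, z = simplex(np.array([3.0, 5.0]),
               np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
               np.array([4.0, 12.0, 18.0]))
print(x, z)   # expect x = (2, 6), z = 36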

The Dual Simplex Method

Although we don't go into details here, there is another reason why duality proves valuable, associated with the need, in real life, to modify the constraints of a problem. Suppose we have already solved (say) a maximising problem. We can describe the solution as feasible (every entry in the last column is non-negative) and optimal (every entry in the bottom row is non-negative). Adding an additional constraint is likely to render the "current" solution infeasible, although it will remain optimal in the above sense. Applying the conventional algorithm will first work on the (potentially) bad row, and then re-do the optimisation. In contrast, the dual problem is likely to become non-optimal (a bad row "duals" to a proper sign for improvement) but feasible (an optimal objective row duals to a feasible but non-optimal solution). As such, restoring optimality is likely in practice to be quicker working on the dual.

Standard form

Standard form is the usual and most intuitive form of describing a linear
programming problem. It consists of the following three parts:
 A linear function to be maximized

e.g. maximize c1x1 + c2x2

 Problem constraints of the following form

e.g. a11x1 + a12x2 <= b1
     a21x1 + a22x2 <= b2
     a31x1 + a32x2 <= b3

 Non-negative variables

e.g. x1 >= 0, x2 >= 0

The problem is usually expressed in matrix form, and then becomes:

maximize    c^T x
subject to  Ax <= b, x >= 0

Other forms, such as minimization problems, problems with constraints in alternative forms, and problems involving negative variables, can always be rewritten into an equivalent problem in standard form.

Example

Suppose that a farmer has a piece of farm land, say A square kilometres large, to be planted with either wheat or barley or some combination of the two. The farmer has a limited permissible amount F of fertilizer and P of insecticide which can be used, each of which is required in different amounts per unit area for wheat (F1, P1) and barley (F2, P2). Let S1 be the selling price of wheat, and S2 the price of barley. If we denote the area planted with wheat and barley by x1 and x2 respectively, then the optimal number of square kilometres to plant with wheat vs. barley can be expressed as a linear programming problem:

maximize    S1x1 + S2x2           (maximize the revenue; revenue is the "objective function")
subject to  x1 + x2 <= A          (limit on total area)
            F1x1 + F2x2 <= F      (limit on fertilizer)
            P1x1 + P2x2 <= P      (limit on insecticide)
            x1 >= 0, x2 >= 0      (cannot plant a negative area)

Which in matrix form becomes:

maximize    [S1 S2] x
subject to  [ 1  1 ]       [A]
            [F1 F2] x  <=  [F],   x >= 0,   where x = [x1 x2]^T
            [P1 P2]        [P]
Augmented form (slack form)

Linear programming problems must be converted into augmented form before being solved by the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problem can then be written in the following form:

Maximize Z in:

[1  -c^T  0] [Z ]     [0]
[0   A    I] [x ]  =  [b],   x >= 0, xs >= 0
             [xs]

where xs are the newly introduced slack variables, and Z is the variable to be maximized.

Example

The example above becomes as follows when converted into augmented form:

maximize    S1x1 + S2x2              (objective function)
subject to  x1 + x2 + x3 = A         (augmented constraint)
            F1x1 + F2x2 + x4 = F     (augmented constraint)
            P1x1 + P2x2 + x5 = P     (augmented constraint)
            x1, x2, x3, x4, x5 >= 0

where x3, x4, x5 are (non-negative) slack variables.

Which in matrix form becomes:

Maximize Z in:

[1  -S1  -S2  0  0  0] [Z ]     [0]
[0   1    1   1  0  0] [x1]     [A]
[0   F1   F2  0  1  0] [x2]  =  [F]
[0   P1   P2  0  0  1] [x3]     [P]
                       [x4]
                       [x5]

with x1, x2, x3, x4, x5 >= 0.

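Continuing with the same invented numbers as before, the sketch below solves the augmented form directly: each "<=" row gains a slack column, the constraints become equalities, and the slacks carry zero objective coefficients:

import numpy as np
from scipy.optimize import linprog

# [A | I] x = b with x = (x1, x2, x3, x4, x5); x3..x5 are the slacks
A_eq = np.hstack([np.array([[1, 1], [1, 2], [3, 1]]), np.eye(3)])
b_eq = [10, 15, 24]
c = [-3, -4, 0, 0, 0]        # slacks contribute nothing to the objective
res = linprog(c, A_eq=A_eq, b_eq=b_eq)   # bounds default to x >= 0
print(res.x)                 # (5, 5) plus slack values (0, 0, 4) for this data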
Duality

Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:

maximize    c^T x
subject to  Ax <= b, x >= 0

The corresponding dual problem is:

minimize    b^T y
subject to  A^T y >= c, y >= 0

where y is used instead of x as the variable vector.

There are two ideas fundamental to duality theory. One is the fact that
the dual of a dual linear program is the original primal linear program.
Additionally, every feasible solution for a linear program gives a bound on
the optimal value of the objective function of its dual. The weak duality
theorem states that the objective function value of the dual at any feasible
solution is always greater than or equal to the objective function value of the
primal at any feasible solution. The strong duality theorem states that if the
primal has an optimal solution, x*, then the dual also has an optimal solution,
y*, such that c^T x* = b^T y*.

A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak
duality theorem. Likewise, if the dual is unbounded, then the primal must be
infeasible. However, it is possible for both the dual and the primal to be
infeasible.
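As a numeric sanity check of these theorems (again on the invented farmer data used above), the sketch below solves the primal and the dual separately and confirms that the two optimal objective values agree, as strong duality promises:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 1], [1, 2], [3, 1]])
b = np.array([10, 15, 24])
c = np.array([3, 4])

primal = linprog(-c, A_ub=A, b_ub=b)      # max c^T x s.t. Ax <= b, x >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c)     # A^T y >= c rewritten as -A^T y <= -c
print(-primal.fun, dual.fun)              # both print 35: c^T x* = b^T y*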

Example

Following the above example of the farmer with A land, F fertilizer and P insecticide, the farmer can tell others that he has no way to earn more than a specific amount of profit with the following scheme: to claim that with his available method of earning, each square kilometre of land can give him no more than yA, each unit of fertilizer can earn him no more than yF, and each unit of insecticide can earn him no more than yP. Then he can tell others that the most he can earn is AyA + FyF + PyP. In order to find the best (lowest) claim he can make, he can set yA, yF and yP using the following linear programming problem:

minimize    AyA + FyF + PyP           (minimize the revenue bound; the revenue bound is the "objective function")
subject to  yA + F1yF + P1yP >= S1    (he can earn no more by growing wheat)
            yA + F2yF + P2yP >= S2    (he can earn no more by growing barley)
            yA, yF, yP >= 0           (cannot claim negative revenue on a resource)

Which in matrix form becomes:

minimize    [A F P] y
subject to  [1 F1 P1]        [S1]
            [1 F2 P2] y  >=  [S2],   y >= 0,   where y = [yA yF yP]^T
Note that each variable in the primal problem (amount of wheat/barley to grow) corresponds to an inequality in the dual problem (revenue obtained by wheat/barley), and each variable in the dual problem (revenue bound provided by each resource) corresponds to an inequality in the primal problem (limit on each resource).

Since each inequality can be replaced by an equality and a slack variable, each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak of complementary slackness.

Complementary slackness

It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known, using the complementary slackness theorem. The theorem states:

Suppose that x = (x1, x2, ..., xn) is primal feasible and that y = (y1, y2, ..., ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ..., zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if xjzj = 0 for j = 1, 2, ..., n, and wiyi = 0 for i = 1, 2, ..., m.

So if the ith slack variable of the primal is not zero, then the ith variable of the dual is equal to zero. Likewise, if the jth slack variable of the dual is not zero, then the jth variable of the primal is equal to zero.
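A short numeric check on the same invented farmer data makes the theorem tangible: with the primal optimum x* = (5, 5) and dual optimum y* = (2, 1, 0) computed earlier, every product xjzj and wiyi vanishes:

import numpy as np

A = np.array([[1, 1], [1, 2], [3, 1]])
b = np.array([10, 15, 24])
c = np.array([3, 4])
x = np.array([5.0, 5.0])        # primal optimum for this data
y = np.array([2.0, 1.0, 0.0])   # dual optimum for this data

w = b - A @ x                   # primal slack variables
z = A.T @ y - c                 # dual slack variables
print(x * z, w * y)             # all entries are zero at optimality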

Theory

Geometrically, the linear constraints define a convex polyhedron, which is called the feasible region. Since the objective function is also linear, and hence a convex function, all local optima are automatically global optima (by the KKT theorem). The linearity of the objective function also implies that the set of optimal solutions is the convex hull of a finite set of points, usually a single point.

There are two situations in which no optimal solution can be found. First, if
the constraints contradict each other (for instance, x ≥ 2 and x ≤ 1) then the
feasible region is empty and there can be no optimal solution, since there
are no solutions at all. In this case, the LP is said to be infeasible.

Alternatively, the polyhedron can be unbounded in the direction of the objective function (for example: maximize x1 + 3x2 subject to x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 10), in which case there is no optimal solution since solutions with arbitrarily high values of the objective function can be constructed.
Barring these two pathological conditions (which are often ruled out by
resource constraints integral to the problem being represented, as above),
the optimum is always attained at a vertex of the polyhedron. However, the
optimum is not necessarily unique: it is possible to have a set of optimal
solutions covering an edge or face of the polyhedron, or even the entire
polyhedron (This last situation would occur if the objective function were
constant).

Algorithms
A series of linear constraints on two variables produces a region of
possible values for those variables. Solvable problems will have a feasible
region in the shape of a simple polygon.

The simplex algorithm, developed by George Dantzig, solves LP problems by constructing an admissible solution at a vertex of the polyhedron and then
walking along edges of the polyhedron to vertices with successively higher
values of the objective function until the optimum is reached. Although this
algorithm is quite efficient in practice and can be guaranteed to find the
global optimum if certain precautions against cycling are taken, it has poor
worst-case behavior: it is possible to construct a linear programming problem
for which the simplex method takes a number of steps exponential in the
problem size. In fact, for some time it was not known whether the linear
programming problem was solvable in polynomial time (complexity class P).

This long standing issue was resolved by Leonid Khachiyan in 1979 with the
introduction of the ellipsoid method, the first worst-case polynomial-time
algorithm for linear programming. To solve a problem which has n variables
and can be encoded in L input bits, this algorithm uses O(n^4 L) arithmetic
operations on numbers with O(L) digits. It consists of a specialization of the
nonlinear optimization technique developed by Naum Shor, generalizing the
ellipsoid method for convex optimization proposed by Arkadi Nemirovski, a
2003 John von Neumann Theory Prize winner, and D. Yudin.

Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm had little practical impact, as the simplex method is more efficient for all but specially constructed families of linear programs. However, it inspired new lines of research in linear programming with the development of interior point methods, which can be implemented as a practical tool. In contrast to the simplex algorithm, which finds the optimal solution by progressing along points on the boundary of a polyhedral set, interior point methods move through the interior of the feasible region.

In 1984, N. Karmarkar proposed a new interior point projective method for linear programming. Karmarkar's algorithm not only improved on Khachiyan's theoretical worst-case polynomial bound (giving O(n^3.5 L)), but also promised dramatic practical performance improvements over the simplex method. Since then, many interior point methods have been proposed and analyzed. Early successful implementations were based on affine scaling variants of the method. Owing to both their theoretical and practical properties, barrier-function or path-following methods have been the most common in recent years.

The current opinion is that the efficiency of good implementations of simplex-based methods and interior point methods is similar for routine applications of linear programming.

LP solvers are in widespread use for optimization of various problems in industry, such as optimization of flow in transportation networks, many of which can be transformed into linear programming problems only with some difficulty.

Linear programming formulation examples

A cargo plane has three compartments for storing cargo: front, centre
and rear. These compartments have the following limits on both weight and
space:

Compartment   Weight capacity (tonnes)   Space capacity (cubic metres)
Front         10                         6800
Centre        16                         8700
Rear           8                         5300

Furthermore, the weight of the cargo in the respective compartments must be the same proportion of that compartment's weight capacity to maintain the balance of the plane.

The following four cargoes are available for shipment on the next flight:

Cargo   Weight (tonnes)   Volume (cubic metres/tonne)   Profit (£/tonne)
C1      18                480                           310
C2      15                650                           380
C3      23                580                           350
C4      12                390                           285

Any proportion of these cargoes can be accepted. The objective is to determine how much (if any) of each cargo C1, C2, C3 and C4 should be accepted and how to distribute each among the compartments so that the total profit for the flight is maximised.
 Formulate the above problem as a linear program
 What assumptions are made in formulating this problem as a linear
program?
 Briefly describe the advantages of using a software package to solve
the above linear program, over a judgemental approach to this
problem.

Solution

Variables

We need to decide how much of each of the four cargoes to put in each
of the three compartments. Hence let:

xij be the number of tonnes of cargo i (i=1,2,3,4 for C1, C2, C3 and C4 respectively) that is put into compartment j (j=1 for Front, j=2 for Centre and j=3 for Rear), where xij >= 0 for i=1,2,3,4 and j=1,2,3

Note here that we are explicitly told we can split the cargoes into any
proportions (fractions) that we like.

Constraints
 cannot pack more of each of the four cargoes than we have available

x11 + x12 + x13 <= 18
x21 + x22 + x23 <= 15
x31 + x32 + x33 <= 23
x41 + x42 + x43 <= 12

 the weight capacity of each compartment must be respected

x11 + x21 + x31 + x41 <= 10
x12 + x22 + x32 + x42 <= 16
x13 + x23 + x33 + x43 <= 8

 the volume (space) capacity of each compartment must be respected

480x11 + 650x21 + 580x31 + 390x41 <= 6800
480x12 + 650x22 + 580x32 + 390x42 <= 8700
480x13 + 650x23 + 580x33 + 390x43 <= 5300

 the weight of the cargo in the respective compartments must be the same proportion of that compartment's weight capacity to maintain the balance of the plane

[x11 + x21 + x31 + x41]/10 = [x12 + x22 + x32 + x42]/16 = [x13 + x23 + x33 + x43]/8
Objective

The objective is to maximise total profit, i.e.

maximise 310[x11 + x12 + x13] + 380[x21 + x22 + x23] + 350[x31 + x32 + x33] + 285[x41 + x42 + x43]
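Since all the data are given numerically, this formulation can be solved directly; the following sketch (our own illustration, not part of the original solution) builds the constraint matrices and hands them to SciPy's linprog, with variables ordered x11, x12, x13, x21, ..., x43:

import numpy as np
from scipy.optimize import linprog

profit = [310, 380, 350, 285]        # £/tonne for C1..C4
avail  = [18, 15, 23, 12]            # tonnes available of C1..C4
wcap   = [10, 16, 8]                 # weight capacity: front, centre, rear
vcap   = [6800, 8700, 5300]          # volume capacity per compartment
vol    = [480, 650, 580, 390]        # cubic metres per tonne of C1..C4

c = -np.repeat(profit, 3)            # negated to maximise total profit

A_ub, b_ub = [], []
for i in range(4):                   # cargo availability constraints
    row = np.zeros(12); row[3*i:3*i+3] = 1
    A_ub.append(row); b_ub.append(avail[i])
for j in range(3):                   # weight capacity per compartment
    row = np.zeros(12); row[j::3] = 1
    A_ub.append(row); b_ub.append(wcap[j])
for j in range(3):                   # volume capacity per compartment
    row = np.zeros(12); row[j::3] = vol
    A_ub.append(row); b_ub.append(vcap[j])

A_eq, b_eq = [], []                  # balance: equal fill fractions
for j in (1, 2):                     # compare centre and rear with front
    row = np.zeros(12)
    row[0::3] = 1 / wcap[0]
    row[j::3] = -1 / wcap[j]
    A_eq.append(row); b_eq.append(0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq)   # bounds default to x >= 0
print(res.x.round(2))                # tonnes of each cargo per compartment
print(-res.fun)                      # maximum total profit in £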

The basic assumptions are:

 that each cargo can be split into whatever proportions/fractions we desire
 that each cargo can be split between two or more compartments if we
so desire
 that the cargo can be packed into each compartment (for example if
the cargo was spherical it would not be possible to pack a
compartment to volume capacity, some free space is inevitable in
sphere packing)
 all the data/numbers given are accurate

The advantages of using a software package to solve the above linear program, rather than a judgemental approach, are:

 actually maximise profit, rather than just believing that our judgemental solution maximises profit (we may have bad judgement, even if we have an MBA!)
 makes the cargo loading decision one that we can solve in a
routine operational manner on a computer, rather than having to
exercise judgement each and every time we want to solve it
 problems that can be appropriately formulated as linear programs are
almost always better solved by computers than by people
 can perform sensitivity analysis very easily using a computer

Linear programming example No.2

The production manager of a chemical plant is attempting to devise a shift pattern for his workforce. Each day of every working week is divided into three eight-hour shift periods (00:01-08:00, 08:01-16:00, 16:01-24:00) denoted by night, day and late respectively. The plant must be manned at all times and the minimum number of workers required for each of these shifts over any working week is as below:

        Mon   Tues   Wed   Thur   Fri   Sat   Sun
Night    5     3      2     4      3     2     2
Day      7     8      9     5      7     2     5
Late     9    10     10     7     11     2     2
The union agreement governing acceptable shifts for workers is as follows:

1. Each worker is assigned to work either a night shift or a day shift or a late shift and once a worker has been assigned to a shift they must remain on the same shift every day that they work.
2. Each worker works four consecutive days during any seven day period.

In total there are currently 60 workers.

 Formulate the production manager's problem as a linear program.
 Comment upon the advantages/disadvantages you foresee of formulating and solving this problem as a linear program.

Solution

Variables

The union agreement is such that any worker can only start their four
consecutive work days on one of the seven days (Mon to Sun) and in one of
the three eight-hour shifts (night, day, late).

Let:

Monday be day 1, Tuesday be day 2, ..., Sunday be day 7

Night be shift 1, Day be shift 2, Late be shift 3

then the variables are:

Nij the number of workers starting their four consecutive work days on day i
(i=1,...,7) and shift j (j=1,...,3)

Note here that strictly these variables should be integer but, as we are
explicitly told to formulate the problem as a linear program in part (a) of the
question, we allow them to take fractional values.

Constraints
 upper limit on the total number of workers of 60

SUM{i=1 to 7} SUM{j=1 to 3} Nij <= 60

since each worker can start his working week only once during the seven-day, three-shift week

 lower limit on the total number of workers required for each day/shift
period
let Dij be the (known) number of workers required on day i (i=1,...,7) and
shift period j (j=1,...,3) e.g. D53=11 (Friday, Late)

then the constraints are

Monday:    N1j + N7j + N6j + N5j >= D1j   j=1,...,3
Tuesday:   N2j + N1j + N7j + N6j >= D2j   j=1,...,3
Wednesday: N3j + N2j + N1j + N7j >= D3j   j=1,...,3
Thursday:  N4j + N3j + N2j + N1j >= D4j   j=1,...,3
Friday:    N5j + N4j + N3j + N2j >= D5j   j=1,...,3
Saturday:  N6j + N5j + N4j + N3j >= D6j   j=1,...,3
Sunday:    N7j + N6j + N5j + N4j >= D7j   j=1,...,3

The logic here is straightforward; for example, for Wednesday (day 3) the workers working shift j on day 3 either started on Wednesday (day 3, N3j) or on Tuesday (day 2, N2j) or on Monday (day 1, N1j) or on Sunday (day 7, N7j) - so the sum of these variables is the total number of workers on duty on day 3 in shift j, and this must be at least the minimum number required (D3j).

Objective

It appears from the question that the production manager's objective is simply to find a feasible schedule, so any objective is possible. Logically, however, he might be interested in reducing the size of the workforce, so the objective function could be:

minimise SUM{i=1 to 7} SUM{j=1 to 3} Nij

where all variables Nij>=0 and continuous (i.e. can take fractional values).

This completes the formulation of the problem as a linear program.
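For completeness, here is a runnable sketch (our own illustration) of this formulation using SciPy's linprog, with the 21 variables Nij flattened so that index 3i+j holds day i+1, shift j+1; the ">=" coverage rows are negated because linprog expects "<=" constraints:

import numpy as np
from scipy.optimize import linprog

D = np.array([[5, 7, 9],    # Mon: night, day, late
              [3, 8, 10],   # Tue
              [2, 9, 10],   # Wed
              [4, 5, 7],    # Thu
              [3, 7, 11],   # Fri
              [2, 2, 2],    # Sat
              [2, 5, 2]])   # Sun

A_ub, b_ub = [np.ones(21)], [60]             # at most 60 workers in total
for d in range(7):
    for j in range(3):
        row = np.zeros(21)
        for s in (d, d - 1, d - 2, d - 3):   # start days that cover day d
            row[3 * (s % 7) + j] = -1        # ">= D" becomes "-sum <= -D"
        A_ub.append(row); b_ub.append(-D[d, j])

c = np.ones(21)                              # minimise the workforce size
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub)   # bounds default to N >= 0
print(res.x.reshape(7, 3).round(2))          # workers starting on (day, shift)
print(res.fun)                               # minimum (possibly fractional) total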

Some of the advantages and disadvantages of solving this problem as a linear program are:

 really need variable values which are integer
 some workers will always end up working weekends
 how do we choose the workers to use, e.g. if N43=7, which 7 workers do we choose to begin their work week on day 4 working shift 3
 what happens if workers fail to report in (e.g. if they are sick) - we may
fall below the minimum number required
 the approach above enables us to deal with the problem in a
systematic fashion
 have the potential to reduce the size of the workforce by more
effectively matching the resources to the needs
 able to investigate changes (e.g. in shift patterns, workers needed per day, etc.) very easily.

Linear programming example 1986 UG exam

A company assembles four products (1, 2, 3, 4) from delivered components. The profit per unit for each product (1, 2, 3, 4) is £10, £15, £22 and £17 respectively. The maximum demand in the next week for each product (1, 2, 3, 4) is 50, 60, 85 and 70 units respectively.

There are three stages (A, B, C) in the manual assembly of each product and
the man-hours needed for each stage per unit of product are shown below:

          Product
          1   2   3   4
Stage A   2   2   1   1
      B   2   4   1   2
      C   3   6   1   5

The nominal time available in the next week for assembly at each stage (A,
B, C) is 160, 200 and 80 man-hours respectively.

It is possible to vary the man-hours spent on assembly at each stage such that workers previously employed on stage B assembly could spend up to 20% of their time on stage A assembly and workers previously employed on stage C assembly could spend up to 30% of their time on stage A assembly.

Production constraints also require that the ratio (product 1 units assembled)/(product 4 units assembled) must lie between 0.9 and 1.15.

Formulate the problem of deciding how much to produce next week as a linear program.

Solution

Variables

Let

xi = amount of product i produced (i=1,2,3,4)

tBA be the amount of time transferred from B to A

tCA be the amount of time transferred from C to A


Constraints
 maximum demand

x1 <= 50

x2 <= 60

x3 <= 85

x4 <= 70

 ratio

0.9 <= (x1/x4) <= 1.15

i.e. 0.9x4 <= x1 and x1 <= 1.15x4

 work-time

2x1 + 2x2 + x3 + x4 <= 160 + tBA + tCA

2x1 + 4x2 + x3 + 2x4 <= 200 - tBA

3x1 + 6x2 + x3 + 5x4 <= 80 - tCA

 limit on transferred time

tBA <= 0.2(200)

tCA <= 0.3(80)

 all variables >= 0

Objective

maximise 10x1 + 15x2 + 22x3 + 17x4

Note we neglect the fact that the xi variables should be integer because we
are told to formulate the problem as an LP.
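The complete model above fits in a few lines of SciPy (our own illustrative sketch, not part of the original solution); the variable order is (x1, x2, x3, x4, tBA, tCA), the demand and transfer limits are expressed as variable bounds, and the ratio constraints are rearranged into "<=" rows:

from scipy.optimize import linprog

c = [-10, -15, -22, -17, 0, 0]        # negated profit; transfers earn nothing
A_ub = [
    [2, 2, 1, 1, -1, -1],             # stage A: work <= 160 + tBA + tCA
    [2, 4, 1, 2,  1,  0],             # stage B: work <= 200 - tBA
    [3, 6, 1, 5,  0,  1],             # stage C: work <= 80 - tCA
    [-1, 0, 0, 0.9, 0, 0],            # ratio: 0.9*x4 <= x1
    [1, 0, 0, -1.15, 0, 0],           # ratio: x1 <= 1.15*x4
]
b_ub = [160, 200, 80, 0, 0]
bounds = [(0, 50), (0, 60), (0, 85), (0, 70),   # demand limits on x1..x4
          (0, 0.2 * 200), (0, 0.3 * 80)]        # limits on transferred time
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x.round(2), -res.fun)       # production plan and maximum profit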

Facts

Professor George Dantzig: Linear Programming Founder Turns 80
In spite of impressive developments in computational optimization in
the last 20 years, including the rapid advance of interior point methods, the
simplex method, invented by George B. Dantzig in 1947, has stood the test
of time quite remarkably: It is still the pre-eminent tool for almost all
applications of linear programming.

Dantzig, who turns 80 on November 8, is generally regarded as one of the three founders of linear programming, along with von Neumann and
Kantorovich. Through his research in mathematical theory, computation,
economic analysis, and applications to industrial problems, he has
contributed more than any other researcher to the remarkable development
of linear programming.

Dantzig's work has been recognized by numerous honors, among them the National Medal of Science (1975), the John von Neumann Theory Prize of
the Operations Research Society of America and the Institute of Management
Sciences (1974), and membership in the National Academy of Sciences, the
National Academy of Engineering, and the American Academy of Arts and
Sciences. But he has his own basis for judging his work: "The final test of any
theory," he said in the opening sentence of his 1963 book Linear
Programming and Extensions, "is its capacity to solve the problems which
originated it."

Linear programming and its offspring (such as nonlinear constrained optimization and integer programming) have come of age and have
demonstrably passed this test, and they are fundamentally affecting the
economic practice of organizations and management. Computer scientist
Laszlo Lovasz said in 1980, "If one would take statistics about which
mathematical problem is using up most of the computer time in the world,
then (not including database handling problems like sorting and searching)
the answer would probably be linear programming." That same year, Eugene
Lawler of Berkeley offered the following summary: "It [linear programming] is
used to allocate resources, plan production, schedule workers, plan
investment portfolios and formulate marketing (and military) strategies. The
versatility and economic impact of linear programming in today's industrial
world is truly awesome."

Dantzig's own assessment, set forth in the chapter he contributed to the History of Mathematical Programming: A Collection of Personal
Reminiscences, is characteristically modest. In his words: "The tremendous
power of the simplex method is a constant surprise to me." Citing the simple
example of an assignment problem (70 people to 70 jobs) and the impossibly
vast computing power that would be required to scan all the permutations to
select the one that is best, he observed that "it takes only a moment to find
the optimum solution using a personal computer and standard simplex or
interior point method software."
Dantzig's unassuming nature and complete lack of pretension are the
subject of countless anecdotes recounted by friends and colleagues. One
researcher in optimization contributed his favorite George Dantzig story to
SIAM News: About 15 years ago, having just completed his PhD, he found
himself at a loss for words when, on meeting Dantzig for the first time,
Dantzig enthusiastically responded to the introduction, "I've heard so much
about you!"

"In retrospect," Dantzig wrote in the 1991 history book, "it is


interesting to note that the original problem that started my research is still
outstanding -- namely the problem of planning or scheduling dynamically
over time, particularly planning dynamically under uncertainty. If such a
problem could be successfully solved it could eventually through better
planning contribute to the well-being and stability of the world."

The nature of that original problem is also detailed in the book. Dantzig's contributions, he explained, grew out of his experience in the
Pentagon during World War II, when he had become an expert on
programming -- planning methods done with desk calculators. In 1946, as
mathematical adviser to the U.S. Air Force Comptroller, he was challenged
by his Pentagon colleagues to see what he could do to mechanize the
planning process, "to more rapidly compute a time-staged deployment,
training and logistical supply program." In those pre-electronic computer
days, mechanization meant using analog devices or punch-card machines.
("Program" at that time was a military term that referred not to the
instruction used by a computer to solve problems, which were then called
"codes," but rather to plans or proposed schedules for training, logistical
supply, or deployment of combat units. The somewhat confusing name
"linear programming," Dantzig explained in the book, is based on this
military definition of "program.")

The large-scale "activity analysis" model he developed, Dantzig said, would be described today as a time-staged dynamic linear program with a
staircase matrix structure. In those days, he explained, "There was no
objective function" [italics his]. Lacking the power of electronic computers,
practical planners at the time had no way to implement such a concept. In
fact, summarizing his contributions to linear programming, Dantzig listed the
substitution of an explicit objective function for a set of ad hoc rules, along
with two others -- the recognition that practical planning relations could be
reformulated as a system of linear inequalities and the invention of the
simplex method.

As viewed by his colleagues, the list of Dantzig's professional accomplishments extends beyond linear programming and the simplex
method to decomposition theory, sensitivity analysis, complementary pivot
methods, large-scale optimization, nonlinear programming, and
programming under uncertainty. His research in linear programming (and
the related areas of nonlinear optimization, integer programming, and
optimization under uncertainty) has had a fundamental impact on the
consequential development of operations research as a discipline. Inasmuch
as operations research is defined by the use of analytic tools to improve
decision-making, operations research as a discipline could not exist without
the use of formal optimization models as mental constructs, and the actual
solution of models in a practical setting.

Dantzig is so well known to the optimization community that in 1991, when the editors of the SIAM Journal on Optimization decided to dedicate the
first issue to him, they needed very few words: "The first issue of the SIAM
Journal on Optimization is dedicated to George Dantzig who has been so
influential in the development of optimization."

Since 1979, with the Mathematical Programming Society, SIAM has also honored Dantzig by awarding the George B. Dantzig Prize. The prize,
which recognizes original research that has had a major impact on the field
of mathematical programming, was awarded for the first time in 1982, to
Michael Powell and R. T. Rockafellar. Since then it has been awarded every
three years; the recipients are Ellis Johnson and Manfred Padberg (1985),
Michael J. Todd (1988), Martin Grotschel and Arkady S. Nemirovsky (1991),
and Claude Lemarechal and Roger J. B. Wets (1994). On the occasion of
Dantzig's 80th birthday, many of his friends and colleagues have made
contributions to the prize fund. For others wishing to do so, checks should be
made payable to the SIAM-Dantzig Prize Fund and sent to Richard W. Cottle
(Department of Operations Research, Stanford University, Stanford, CA
94305-4022).
