Optimisation Theory
Dr Damien S. Eldridge
19 May 2017
Scarcity:
This is the defining feature of economics.
It is this feature that distinguishes economics from other social sciences.
The behaviour of an individual who is faced with scarcity:
Often modelled using “constrained optimisation” techniques.
The interaction of individuals that face scarcity:
Economic equilibrium (eg competitive equilibrium and Nash
equilibrium).
When does a system of equations have at least one solution?
How do we find such a solution (if it exists)?
Use techniques from linear algebra and (for nonlinear cases) fixed point
theorems.
f∗(·) ≡ max {f(x) : x ∈ C}.

x∗(·) = arg max {f(x) : x ∈ C}.
We can find the maximum value function from the "arg max"s as follows:
f ∗ (·) = f (x ∗ )
where
x ∗ ∈ arg max {f (x ) : x ∈ C } .
If the "arg max" is unique, then this becomes
f ∗ (·) = f (x ∗ )
where
x ∗ = arg max {f (x ) : x ∈ C } .
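As a concrete sketch, assuming a hypothetical objective f(x) = −(x − 2)² and a finite constraint set C (both chosen purely for illustration), the maximum value and the set of maximisers can be computed directly:

```python
# Hypothetical example: f(x) = -(x - 2)**2 maximised over a finite
# constraint set C. Both f* and the set of maximisers are computed.
def f(x):
    return -(x - 2) ** 2

C = [0, 1, 2, 3, 4]

f_star = max(f(x) for x in C)               # maximum value function f*
arg_max = [x for x in C if f(x) == f_star]  # arg max as a list

print(f_star)   # 0
print(arg_max)  # [2]
```

Here the arg max is unique, so f∗ = f(x∗) with x∗ = 2.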
f∗(·) ≡ min {f(x) : x ∈ C}.

x∗(·) = arg min {f(x) : x ∈ C}.
We can find the minimum value function from the "arg min"s as follows:
f ∗ (·) = f (x ∗ )
where
x ∗ ∈ arg min {f (x ) : x ∈ C } .
If the "arg min" is unique, then this becomes
f ∗ (·) = f (x ∗ )
where
x ∗ = arg min {f (x ) : x ∈ C } .
Note that every global maximum is a local maximum, but not every local maximum is a global maximum.
Illustrate this on the white-board.
Note that every global minimum is a local minimum, but not every local minimum is a global minimum.
Illustrate this on the white-board.
d²Π/dQ² = −d²C/dQ² = −dMC/dQ < 0,

when it is evaluated at the point Q∗.

Note that this requires that dMC/dQ > 0 when it is evaluated at the point Q∗.
In other words, we need marginal cost to be an increasing function of
the output quantity in a non-empty neighbourhood around the point
Q ∗.
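The univariate case can be checked numerically. The cost function C(Q) = Q² below is an assumption made purely for illustration; it gives MC(Q) = 2Q, so the first-order condition P = MC pins down Q∗ = P/2, and dMC/dQ = 2 > 0 so the second-order condition holds:

```python
# Assumed cost function C(Q) = Q**2, so MC(Q) = 2*Q (illustration only).
P = 10.0  # output price taken as given by the firm

def profit(Q):
    return P * Q - Q ** 2  # Pi(Q) = P*Q - C(Q)

Q_star = P / 2.0  # first-order condition P = MC(Q) = 2*Q

# Second-order condition: d2Pi/dQ2 = -dMC/dQ, checked by finite differences.
h = 1e-4
d2Pi = (profit(Q_star + h) - 2 * profit(Q_star) + profit(Q_star - h)) / h ** 2

print(Q_star)          # 5.0
print(round(d2Pi, 3))  # -2.0  (negative, since dMC/dQ = 2 > 0)
```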
∂Π/∂Q1 = P1 − ∂C/∂Q1 = 0

and

∂Π/∂Q2 = P2 − ∂C/∂Q2 = 0.

These two conditions can be rearranged to obtain

P1 = MC1(Q1, Q2)

and

P2 = MC2(Q1, Q2),

where MCi(Q1, Q2) = ∂C/∂Qi for i ∈ {1, 2}.
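A minimal numeric sketch, assuming the hypothetical quadratic cost C(Q1, Q2) = Q1² + Q1Q2 + Q2² (so MC1 = 2Q1 + Q2 and MC2 = Q1 + 2Q2), solves the two first-order conditions as a linear system:

```python
# Assumed quadratic cost C(Q1, Q2) = Q1**2 + Q1*Q2 + Q2**2, so that
# MC1 = 2*Q1 + Q2 and MC2 = Q1 + 2*Q2. The FOCs P1 = MC1 and P2 = MC2
# form the linear system [[2, 1], [1, 2]] @ (Q1, Q2) = (P1, P2).
P1, P2 = 10.0, 8.0

det = 2 * 2 - 1 * 1       # determinant of the coefficient matrix
Q1 = (2 * P1 - P2) / det  # Cramer's rule
Q2 = (2 * P2 - P1) / det

print(Q1, Q2)                     # 4.0 2.0
print(2 * Q1 + Q2, Q1 + 2 * Q2)   # 10.0 8.0 -- both FOCs hold
```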
D. S. Eldridge (ANU) An Introduction to Optimisation 19 May 2017 24 / 48
Bivariate Profit Maximisation by a Price-Taking Firm Part 3
H = D²Π(Q1, Q2)

  = | ∂²Π/∂Q1²     ∂²Π/∂Q2∂Q1 |
    | ∂²Π/∂Q1∂Q2   ∂²Π/∂Q2²   |

  = | −∂²C/∂Q1²     −∂²C/∂Q2∂Q1 |
    | −∂²C/∂Q1∂Q2   −∂²C/∂Q2²   |

  = | −∂MC1/∂Q1   −∂MC1/∂Q2 |
    | −∂MC2/∂Q1   −∂MC2/∂Q2 |.

The second-order conditions for a maximum require that H be negative definite at Q∗ = (Q1∗, Q2∗). This requires that

∂MC1/∂Q1 > 0 and ∂MC2/∂Q2 > 0 at Q∗ = (Q1∗, Q2∗), and

(∂MC1/∂Q1)(∂MC2/∂Q2) > (∂MC1/∂Q2)² at Q∗ = (Q1∗, Q2∗).
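These definiteness conditions can be checked mechanically. The sketch below assumes the hypothetical cost C(Q1, Q2) = Q1² + Q1Q2 + Q2², whose marginal-cost derivatives are constant, and tests the leading principal minors of H:

```python
# Assumed cost C(Q1, Q2) = Q1**2 + Q1*Q2 + Q2**2 for illustration, so
# dMC1/dQ1 = dMC2/dQ2 = 2 and dMC1/dQ2 = dMC2/dQ1 = 1, giving
# H = [[-2, -1], [-1, -2]] at any (Q1, Q2).
H = [[-2.0, -1.0],
     [-1.0, -2.0]]

det_H1 = H[0][0]                                 # first leading principal minor
det_H2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]   # det(H)

# Negative definiteness: det(H1) < 0 and det(H2) > 0.
print(det_H1 < 0 and det_H2 > 0)  # True
# Equivalently: (dMC1/dQ1)*(dMC2/dQ2) = 4 > 1 = (dMC1/dQ2)**2.
```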
Let C be a constraint set.
Consider the problem of maximising f on C .
The Lagrangian function for this constrained maximisation problem is
L(x, λ) = f(x) + ∑_{i=1}^{m} λi [bi − gⁱ(x)].
The optimal value of the Lagrange multiplier for the ith constraint (that is, the value of that Lagrange multiplier when all of the first-order conditions are satisfied) tells us the degree of sensitivity of the optimal value of the objective function over the constraint set, f∗(b) = f(x∗(b)), to a change in the ith constraint value bi, where b = (b1, b2, ···, bm) is the vector of constraint values.

It is, in effect, a "shadow price".
The optimal value of the Lagrange multiplier in a standard utility
maximisation problem can be interpreted as the marginal utility of
income.
The optimal value of the Lagrange multiplier in a cost minimisation
problem can be interpreted as marginal cost.
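A quick numeric illustration of the marginal-utility-of-income interpretation, assuming a hypothetical two-good Cobb-Douglas problem (all parameter values are assumptions for the example):

```python
# Assumed two-good Cobb-Douglas problem (parameters are illustrative):
#   max a*ln(x1) + (1 - a)*ln(x2)  subject to  p1*x1 + p2*x2 = y.
# The demands are x1 = a*y/p1 and x2 = (1 - a)*y/p2, and the FOC
# a/x1 = lam*p1 gives lam* = 1/y at the optimum.
import math

a, p1, p2, y = 0.3, 2.0, 5.0, 100.0

def V(income):
    # Indirect utility: utility evaluated at the optimal demands.
    x1 = a * income / p1
    x2 = (1 - a) * income / p2
    return a * math.log(x1) + (1 - a) * math.log(x2)

lam_star = 1.0 / y                        # optimal Lagrange multiplier
h = 1e-5
dV_dy = (V(y + h) - V(y - h)) / (2 * h)   # numerical dV/dy

print(abs(dV_dy - lam_star) < 1e-6)  # True: lam* = marginal utility of income
```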
g 1 (x1 , x2 , · · · , xn ) = b1 ,
g 2 (x1 , x2 , · · · , xn ) = b2 ,
⋮
g m (x1 , x2 , · · · , xn ) = bm .
The constraint set for this problem is C = {x ∈ Rⁿ : gⁱ(x) = bi for all i ∈ {1, 2, ···, m}}.
Assume that there are fewer constraints than there are choice variables (m < n) and that the constraint set is nonempty (C ≠ ∅).
g 1 (x1 , x2 , · · · , xn ) = b1 ,
g 2 (x1 , x2 , · · · , xn ) = b2 ,
⋮
g m (x1 , x2 , · · · , xn ) = bm .
These solutions would be parametric. They would pin down the value
of m of the choice variables in terms of the other (n − m ) choice
variables and the constraint values. However, each of the remaining
(n − m) choice variables would be free to vary over their entire
domain.
Converting Equality Constrained Optimisation Problems into Unconstrained Optimisation Problems Part 3
Suppose that this is the case even when the constraint equations are
nonlinear. (This may impose some restrictions on the constraint
equations. Think about the implicit function theorem and the inverse
function theorem, for example.)
In this case, we can use the system of constraint equations to express
m of the choice variables as implicit functions of the remaining
(n − m) choice variables and the m constraint values.
These implicit functions might take the form
∂Û/∂x1 = α/x1 − (1 − α − β)(p1/p3) / (y/p3 − (p1/p3)x1 − (p2/p3)x2) = 0,

and

∂Û/∂x2 = β/x2 − (1 − α − β)(p2/p3) / (y/p3 − (p1/p3)x1 − (p2/p3)x2) = 0.
These can be rearranged and simplified to obtain
(1 − α − β)p1 x1 = α(y − p1 x1 − p2 x2 ),
and
(1 − α − β)p2 x2 = β(y − p1 x1 − p2 x2 ).
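These two rearranged conditions are satisfied by the Cobb-Douglas demands x1 = αy/p1 and x2 = βy/p2, which the following sketch verifies at assumed parameter values:

```python
# Numeric check that x1 = alpha*y/p1 and x2 = beta*y/p2 satisfy both
# rearranged first-order conditions (parameter values are assumed).
alpha, beta = 0.3, 0.5
p1, p2, y = 2.0, 4.0, 120.0

x1 = alpha * y / p1
x2 = beta * y / p2

residual = y - p1 * x1 - p2 * x2  # income left for good 3

ok1 = abs((1 - alpha - beta) * p1 * x1 - alpha * residual) < 1e-9
ok2 = abs((1 - alpha - beta) * p2 * x2 - beta * residual) < 1e-9
print(ok1 and ok2)  # True
```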
V(p1, p2, p3, y) = U∗

= U(αy/p1, βy/p2, (1 − α − β)y/p3)

= α ln(αy/p1) + β ln(βy/p2) + (1 − α − β) ln((1 − α − β)y/p3)

= ln[(αy/p1)^α (βy/p2)^β ((1 − α − β)y/p3)^(1−α−β)].
What about second-order conditions?
We need to examine the definiteness (or otherwise) of the Hessian
matrix for the utility function from the “reduced-form” unconstrained
optimisation problem.
Conversion Example Part 5
Recall that
Û1 = α/x1 − (1 − α − β)(p1/p3) / (y/p3 − (p1/p3)x1 − (p2/p3)x2) = 0,

and

Û2 = β/x2 − (1 − α − β)(p2/p3) / (y/p3 − (p1/p3)x1 − (p2/p3)x2) = 0.

Thus we have

Û11 = −α/x1² − (1 − α − β)(p1/p3)² / (y/p3 − (p1/p3)x1 − (p2/p3)x2)² < 0 for all x1 and x2,

Û12 = Û21 = −(1 − α − β)(p1 p2/p3²) / (y/p3 − (p1/p3)x1 − (p2/p3)x2)², and

Û22 = −β/x2² − (1 − α − β)(p2/p3)² / (y/p3 − (p1/p3)x1 − (p2/p3)x2)² < 0 for all x1 and x2.
Recall that
H = | Û11  Û12 |
    | Û21  Û22 |.
Note that
det(H1) = det(Û11) = Û11

= −α/x1² − (1 − α − β)(p1/p3)² / (y/p3 − (p1/p3)x1 − (p2/p3)x2)² < 0 for all x1 and x2, and
Continued on next page.