Convex Conic Wilmott
Optimization has played an important role in quantitative finance ever since Markowitz published his original paper on portfolio selection in 1952 [1].
Most “quants” have some knowledge of linear programming as used in bond duration matching, quadratic programming as used in equity portfolio optimization, and nonlinear optimization as used with portfolios of derivatives. The finance industry and the technical literature are home to many different, competing approaches that apply these standard optimization methods to different models and risk measures, in an effort to achieve better investment results.
But fewer people are aware of a quiet revolution that has taken place in the optimization methods themselves over the last decade. A better understanding of the properties of optimization models, and new algorithms – notably interior point or barrier methods – have led to a changed view of the whole field of optimization. It is not an exaggeration to say that linear and quadratic programming are being replaced – or more properly subsumed – by more powerful and general methods of convex and conic optimization.
18 Wilmott magazine
What this means for quantitative finance is a relaxing of restrictions on optimization models – for example, using quadratic constraints as easily as a quadratic objective – and new ways to deal with important problems such as “unnatural” portfolios from optimization that are due to “noise” in the return and covariance or factor parameters of a portfolio model. This article will introduce the ideas of convex and conic optimization, and their applications in quantitative finance.
Linear and quadratic programming
An optimization problem consists of decision variables, an objective function to be maximized or minimized, and constraints that place limits on other functions of the variables. In linear programming, the objective and constraint functions are all linear – hence the simple form:

max/min  c x
subject to  b_l ≤ A x ≤ b_u
            x_l ≤ x ≤ x_u

where x is a vector of decision variables, cx is the objective function, A is a matrix of coefficients and Ax computes the constraints, b_l and x_l are lower bounds, and b_u and x_u are upper bounds. In quadratic programming, the objective is generalized to a quadratic function of the form:

max/min  x^T Q x + c x
subject to  b_l ≤ A x ≤ b_u
            x_l ≤ x ≤ x_u

where Q is a matrix of coefficients. In the simplest formulation of the classic Markowitz portfolio optimization problem, Q is a covariance matrix, the objective x^T Q x is the portfolio variance to be minimized, and A has just two rows: a budget constraint 1x = 1 and a portfolio return threshold ax ≥ b, where a is the expected return of each security and b is the minimum portfolio return. A factor model that expresses the “beta” or sensitivity of each security to one or more market factors also leads to a quadratic programming model.

Classical quadratic programming still requires that all constraints are linear; quadratic or more general constraints would put the problem in the domain of nonlinear optimization. Quantitative finance professionals have worked hard to create models that “fit” within the domain of linear and quadratic programming, and avoid models that require nonlinear optimization methods. Why?

Convex and non-convex problems
Nonlinear optimization is a well-developed field, with many solution algorithms for problems of the general form:

max/min  f(x)
subject to  b_l ≤ G(x) ≤ b_u
            x_l ≤ x ≤ x_u

where f(x) is a smooth function, and G(x) is a vector of smooth functions of the variables x. Linear and quadratic programming problems are special cases of this form, where f(x) = cx or x^T Q x + cx, and G(x) = Ax. But solution algorithms for this general problem can find only “locally optimal” solutions that may not be “globally optimal,” and they may fail to find a feasible solution even though one exists. Unlike linear and quadratic programming, the time taken to find a globally optimal solution can rise exponentially with the number of variables and constraints. This severely limits the size of problems that can be solved to global optimality. Hence the aversion to nonlinear optimization for practical quantitative finance problems seems well founded. Or is it?

In an observation now famous for its prescience among optimization researchers, mathematician R. Tyrrell Rockafellar wrote in SIAM Review in 1993 [2]:

“. . . in fact, the great watershed in optimization isn’t between linearity and nonlinearity, but convexity and nonconvexity.”

What makes the general optimization problem so hard to solve? It is not the fact that f(x) or G(x) may be nonlinear! It is the fact that f(x) or G(x) may be non-convex. A convex optimization problem is one where all of the constraints are convex functions, and the objective is a convex function if minimizing, or a concave function if maximizing. A non-convex optimization problem is any case where the objective or any of the constraints are non-convex functions. The difference is dramatic: convex optimization problems can be efficiently solved to global optimality with up to tens or even hundreds of thousands of variables. In contrast, the best methods for global optimization on modern computers usually can solve non-convex problems of only a few hundred variables to global optimality. Even a quadratic programming problem may be “impossibly” hard to solve (in mathematical terms, NP-hard) if the objective function x^T Q x is non-convex, which happens when the matrix Q is indefinite [3]. Fortunately for portfolio optimization, the matrix Q will be positive definite if the
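The Markowitz quadratic program described earlier – minimize portfolio variance x^T Q x subject to a budget constraint and a minimum-return threshold – can be sketched with a general-purpose solver. The returns, covariance matrix, and return threshold below are illustrative assumptions, not data from the article:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: expected returns a, covariance matrix Q, return floor b
a = np.array([0.08, 0.12, 0.10])
Q = np.array([[0.10, 0.02, 0.04],
              [0.02, 0.12, 0.01],
              [0.04, 0.01, 0.09]])
b = 0.10

# Objective: portfolio variance x^T Q x
def variance(x):
    return x @ Q @ x

cons = [{"type": "eq",   "fun": lambda x: x.sum() - 1.0},  # budget: 1x = 1
        {"type": "ineq", "fun": lambda x: a @ x - b}]      # return: ax >= b
bounds = [(0.0, 1.0)] * 3                                  # long-only weights

x0 = np.full(3, 1.0 / 3)
res = minimize(variance, x0, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x, variance(res.x))
```

Because Q here is a valid covariance matrix (positive definite), this is a convex QP, and the local solution the solver returns is also globally optimal.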
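The pitfall described above – a local solver returning a “locally optimal” solution that is not globally optimal – shows up even in one dimension. The objective and starting points below are an illustrative assumption, not from the article:

```python
import numpy as np
from scipy.optimize import minimize

# A non-convex objective with two local minima
def f(x):
    return x[0]**4 - 4 * x[0]**2 + x[0]

# The same local solver, started from different points,
# converges to different local minima.
left  = minimize(f, x0=[-2.0])
right = minimize(f, x0=[2.0])
print(left.x, f(left.x))   # negative-side minimum (the global one here)
print(right.x, f(right.x)) # positive-side minimum (local, not global)
```

Nothing in the solver's output flags the second answer as merely local; that is exactly why non-convexity, rather than nonlinearity, is the watershed.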
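Since convexity of the quadratic objective x^T Q x comes down to Q being positive semidefinite, an eigenvalue test distinguishes the tractable convex case from the indefinite (NP-hard) one. This helper and its example matrices are an illustrative sketch, not from the article:

```python
import numpy as np

def is_convex_qp(Q, tol=1e-10):
    # x^T Q x is convex iff the symmetric part of Q is positive
    # semidefinite, i.e. its smallest eigenvalue is >= 0 (up to tolerance).
    S = (Q + Q.T) / 2.0
    return bool(np.linalg.eigvalsh(S).min() >= -tol)

cov   = np.array([[0.10, 0.02], [0.02, 0.12]])  # covariance-like: PSD
indef = np.array([[1.0, 0.0], [0.0, -1.0]])     # indefinite: non-convex QP

print(is_convex_qp(cov))    # True
print(is_convex_qp(indef))  # False
```

A genuine covariance matrix passes this test by construction, which is why Markowitz-style QPs stay on the easy side of the watershed.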
DANIEL FYLSTRA