Nishakova Robust Approximation
ROBUST APPROXIMATION
Seminar Optimierung
Nishakova Irina
CONTENTS
1. STOCHASTIC ROBUST APPROXIMATION ................................................................ 3
SUM-OF-NORMS PROBLEM .......................................................................................... 3
STATISTICAL ROBUST LEAST-SQUARES PROBLEM ............................................... 4
INTERPRETATION OF THE TIKHONOV REGULARIZATION PROBLEM ................ 4
2. WORST-CASE ROBUST APPROXIMATION ............................................................. 5
COMPARISON OF STOCHASTIC AND WORST-CASE ROBUST APPROXIMATION 5
FINITE SET ....................................................................................................................... 8
NORM BOUNDED ERROR .............................................................................................. 8
UNCERTAINTY ELLIPSOIDS ......................................................................................... 8
NORM BOUNDED ERROR WITH LINEAR STRUCTURE ........................................... 8
1. Stochastic robust approximation

We consider an approximation problem with basic objective ‖Ax − b‖, but also wish to take into account some uncertainty or possible variation in the data matrix A. (The same ideas can be extended to handle the case where there is uncertainty in both A and b.) In this section we consider some statistical models for the variation in A.

We assume that A is a random variable taking values in R^(m×n), with mean Ā, so we can describe A as

A = Ā + U,

where U is a random matrix with zero mean. Here, the constant matrix Ā gives the average value of A, and U describes its statistical variation.

It is natural to use the expected value of ‖Ax − b‖ as the objective:

minimize E‖Ax − b‖. (1.1)

We refer to this problem as the stochastic robust approximation problem. It is always a convex optimization problem, but usually not tractable, since in most cases it is very difficult to evaluate the objective or its derivatives.
Sum-of-norms problem

One simple case in which the stochastic robust approximation problem (1.1) can be solved occurs when A assumes only a finite number of values, i.e.,

prob(A = Aᵢ) = pᵢ, i = 1, . . . , k,

where Aᵢ ∈ R^(m×n), 1ᵀp = 1, p ⪰ 0. In this case the problem (1.1) has the form

minimize p₁‖A₁x − b‖ + · · · + p_k‖A_k x − b‖,

which is often called a sum-of-norms problem. It can be expressed as

minimize pᵀt
subject to ‖Aᵢx − b‖ ≤ tᵢ, i = 1, . . . , k,

where the variables are x ∈ Rⁿ and t ∈ Rᵏ. If the norm is the Euclidean norm, this sum-of-norms problem is an SOCP. If the norm is the ℓ₁- or ℓ∞-norm, the sum-of-norms problem can be expressed as an LP.
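As a small numerical sketch (all data below is made up), the sum-of-norms objective Σᵢ pᵢ‖Aᵢx − b‖₂ can be minimized directly with a derivative-free method, since it is convex but nonsmooth; in practice one would instead pass the SOCP formulation above to a conic solver (e.g. through CVXPY):

```python
import numpy as np
from scipy.optimize import minimize

# Made-up instance: A takes k = 3 values A_i with probabilities p_i.
rng = np.random.default_rng(0)
m, n, k = 4, 2, 3
As = [rng.standard_normal((m, n)) for _ in range(k)]
p = np.array([0.5, 0.3, 0.2])          # p >= 0, 1^T p = 1
b = rng.standard_normal(m)

def sum_of_norms(x):
    # E ||Ax - b||_2 = sum_i p_i ||A_i x - b||_2
    return sum(pi * np.linalg.norm(Ai @ x - b) for pi, Ai in zip(p, As))

# Convex but nonsmooth, so we use a derivative-free simplex method here.
res = minimize(sum_of_norms, np.zeros(n), method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
print("x* =", res.x, " objective =", res.fun)
```

The epigraph form with variables (x, t) is what a conic solver would actually receive; the direct minimization above is only convenient at this tiny scale.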
Statistical robust least-squares problem

Some variations on the stochastic robust approximation problem (1.1) are tractable. As an example, consider the statistical robust least-squares problem

minimize E‖Ax − b‖₂²,

where the norm is the Euclidean norm. We can express the objective as

E‖Ax − b‖₂² = E(Āx − b + Ux)ᵀ(Āx − b + Ux) = ‖Āx − b‖₂² + E xᵀUᵀUx = ‖Āx − b‖₂² + xᵀPx,

where P = E UᵀU. Therefore the statistical robust approximation problem has the form of a regularized least-squares problem

minimize ‖Āx − b‖₂² + ‖P^(1/2)x‖₂², (1.2)

with solution

x = (ĀᵀĀ + P)⁻¹Āᵀb.

This makes perfect sense: when the matrix A is subject to variation, the vector Ax will have more variation the larger x is, and Jensen’s inequality tells us that variation in Ax will increase the average value of ‖Ax − b‖₂. So we need to balance making ‖Āx − b‖₂ small with the desire for a small x (to keep the variation in Ax small), which is the essential idea of regularization.
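For a small made-up instance, the closed-form solution x = (ĀᵀĀ + P)⁻¹Āᵀb of (1.2) can be compared with the nominal least-squares solution, and the identity E‖Ax − b‖₂² = ‖Āx − b‖₂² + xᵀPx checked by Monte Carlo sampling (here U is assumed to have i.i.d. zero-mean entries, which makes P a multiple of the identity):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 5
A_bar = rng.standard_normal((m, n))    # mean of A (made-up data)
b = rng.standard_normal(m)

# A = A_bar + U with i.i.d. zero-mean entries of variance s2,
# so P = E[U^T U] = m * s2 * I.
s2 = 0.25
P = m * s2 * np.eye(n)

# Robust least-squares solution of (1.2), and nominal least squares.
x_rob = np.linalg.solve(A_bar.T @ A_bar + P, A_bar.T @ b)
x_ls = np.linalg.lstsq(A_bar, b, rcond=None)[0]

def expected_sq_residual(x):
    # Exact value: ||A_bar x - b||^2 + x^T P x
    return np.linalg.norm(A_bar @ x - b) ** 2 + x @ P @ x

def monte_carlo_sq_residual(x, trials=20000):
    U = np.sqrt(s2) * rng.standard_normal((trials, m, n))
    r = (A_bar + U) @ x - b
    return np.mean(np.sum(r ** 2, axis=-1))

print(expected_sq_residual(x_rob), monte_carlo_sq_residual(x_rob))
print(expected_sq_residual(x_ls), monte_carlo_sq_residual(x_ls))
```

The robust solution always achieves an expected squared residual no larger than the nominal least-squares solution, since it minimizes that objective exactly.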
Interpretation of the Tikhonov regularization problem

The Tikhonov regularized least-squares problem

minimize ‖Ax − b‖₂² + δ‖x‖₂² (1.3)

has the analytical solution

x = (AᵀA + δI)⁻¹Aᵀb.

The statistical robust least-squares problem in the form of a regularized least-squares problem (1.2) gives us another interpretation of the Tikhonov regularization problem (1.3): as a robust least-squares problem, taking into account possible variation in the matrix A.

Considering A + U (where U is a random matrix with zero mean, i.e. E U = 0) gives us:

E‖(A + U)x − b‖₂² = ‖Ax − b‖₂² + xᵀ(E UᵀU)x.

If the variance of the entries of U is equal to δ/m (with uncorrelated entries), i.e.

E UᵀU = δI,

we get:

E‖(A + U)x − b‖₂² = ‖Ax − b‖₂² + δ‖x‖₂².
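A quick numerical check (made-up data) that the Tikhonov objective value coincides with the expected squared residual when the entries of U are i.i.d. with variance δ/m, so that E UᵀU = δI:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 4
A = rng.standard_normal((m, n))   # made-up data
b = rng.standard_normal(m)
delta = 0.7

# Tikhonov solution x = (A^T A + delta I)^{-1} A^T b
x_tik = np.linalg.solve(A.T @ A + delta * np.eye(n), A.T @ b)

# Tikhonov objective value at x_tik
tikhonov = (np.linalg.norm(A @ x_tik - b) ** 2
            + delta * np.linalg.norm(x_tik) ** 2)

def sampled_objective(x, trials=40000):
    # Monte Carlo estimate of E ||(A + U)x - b||^2 with
    # i.i.d. entries of U of variance delta / m, so E[U^T U] = delta * I.
    U = np.sqrt(delta / m) * rng.standard_normal((trials, m, n))
    r = (A + U) @ x - b
    return np.mean(np.sum(r ** 2, axis=-1))

print(sampled_objective(x_tik), tikhonov)
```

The two printed values agree up to Monte Carlo error, which is the robust-least-squares reading of Tikhonov regularization.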
2. Worst-case robust approximation

It is also possible to model the variation in the matrix A using a set-based, worst-case approach. We describe the uncertainty by a set 𝒜 of possible values for A:

A ∈ 𝒜 ⊆ R^(m×n),

which we assume is nonempty and bounded. We define the associated worst-case error of a candidate approximate solution x ∈ Rⁿ as

e_wc(x) = sup {‖Ax − b‖ : A ∈ 𝒜},

which is always a convex function of x. The (worst-case) robust approximation problem is to minimize the worst-case error:

minimize e_wc(x) = sup {‖Ax − b‖ : A ∈ 𝒜}, (2.1)

where the variable is x, and the problem data are b and the set 𝒜. When 𝒜 is the singleton 𝒜 = {A}, the robust approximation problem (2.1) reduces to the basic norm approximation problem

minimize ‖Ax − b‖.

The robust approximation problem is always a convex optimization problem, but its tractability depends on the norm used and the description of the uncertainty set 𝒜.
Comparison of stochastic and worst-case robust approximation

As an example, consider the family of matrices A(u) = A₀ + uA₁, where u ∈ R is an uncertain parameter, with the Euclidean norm.

• Nominal least-squares. The solution x_nom minimizes ‖A₀x − b‖₂², ignoring the uncertainty.

• Stochastic robust approximation. Assuming u is uniformly distributed on [−1, 1] (so E u = 0 and E u² = 1/3), the solution x_stoch minimizes

E‖A(u)x − b‖₂² = ‖A₀x − b‖₂² + (1/3)‖A₁x‖₂².

The solution is:

x_stoch = (A₀ᵀA₀ + (1/3)A₁ᵀA₁)⁻¹A₀ᵀb.

• Worst-case robust approximation. The solution x_wc minimizes

sup_{−1≤u≤1} ‖A(u)x − b‖₂ = max {‖(A₀ − A₁)x − b‖₂, ‖(A₀ + A₁)x − b‖₂}.
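These three solutions can be computed for a small made-up instance; x_nom and x_stoch have closed forms, while the worst-case objective (a maximum of two norms) is minimized here with a simple derivative-free method rather than the SOCP formulation a conic solver would use:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
m, n = 10, 3
A0 = rng.standard_normal((m, n))        # nominal matrix (made-up data)
A1 = 0.5 * rng.standard_normal((m, n))  # direction of variation
b = rng.standard_normal(m)

# Worst-case error over u in [-1, 1]; since ||(A0 + u A1)x - b||_2 is
# convex in u, the sup over the interval is attained at u = +1 or u = -1.
def e_wc(x):
    return max(np.linalg.norm((A0 - A1) @ x - b),
               np.linalg.norm((A0 + A1) @ x - b))

x_nom = np.linalg.lstsq(A0, b, rcond=None)[0]
x_stoch = np.linalg.solve(A0.T @ A0 + A1.T @ A1 / 3, A0.T @ b)
x_wc = minimize(e_wc, np.zeros(n), method="Nelder-Mead",
                options={"maxiter": 5000, "xatol": 1e-10,
                         "fatol": 1e-10}).x

print("e_wc:", e_wc(x_nom), e_wc(x_stoch), e_wc(x_wc))
```

By construction x_wc has the smallest worst-case error of the three, mirroring the behavior shown in Figure 1.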
Figure 1. The residual ‖A(u)x − b‖₂ as a function of the uncertain parameter u for three approximate solutions x:
(1) the nominal least-squares solution x_nom;
(2) the solution x_stoch of the stochastic robust approximation problem (assuming u is uniformly distributed on [−1, 1]);
(3) the solution x_wc of the worst-case robust approximation problem, assuming the parameter u lies in the interval [−1, 1].
The nominal solution achieves the smallest residual when u = 0, but gives much larger residuals as u approaches −1 or 1. The worst-case solution has a larger residual when u = 0, but its residuals do not rise much as the parameter u varies over the interval [−1, 1].
Uncertainty ellipsoids

We can also describe the variation in A by giving an ellipsoid of possible values for each row aᵢ:

𝒜 = {A ∈ R^(m×n) : aᵢ ∈ ℰᵢ, i = 1, . . . , m},

where

ℰᵢ = {āᵢ + Pᵢu : ‖u‖₂ ≤ 1}.

The matrix Pᵢ ∈ R^(n×n) describes the variation in aᵢ.
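For this row-wise ellipsoidal model, the worst case of a single residual term has a simple closed form: sup {|aᵢᵀx − bᵢ| : aᵢ ∈ ℰᵢ} = |āᵢᵀx − bᵢ| + ‖Pᵢᵀx‖₂, attained at u = ±Pᵢᵀx/‖Pᵢᵀx‖₂. A minimal numerical check of this identity, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
a_bar = rng.standard_normal(n)    # nominal row (made-up data)
P = rng.standard_normal((n, n))   # shape of ellipsoid {a_bar + P u : ||u||_2 <= 1}
x = rng.standard_normal(n)
b_i = 0.3

# Closed form: sup_{||u||_2 <= 1} |(a_bar + P u)^T x - b_i|
#            = |a_bar^T x - b_i| + ||P^T x||_2
closed = abs(a_bar @ x - b_i) + np.linalg.norm(P.T @ x)

# The sup is attained at u = +/- P^T x / ||P^T x||_2.
u_star = P.T @ x / np.linalg.norm(P.T @ x)
attained = max(abs((a_bar + P @ u) @ x - b_i) for u in (u_star, -u_star))

# No sampled point of the unit sphere does better.
us = rng.standard_normal((1000, n))
us /= np.linalg.norm(us, axis=1, keepdims=True)
sampled = np.abs((a_bar + us @ P.T) @ x - b_i).max()

print(closed, attained, sampled)
```

This closed form is what makes the worst-case problem with ellipsoidal row uncertainty tractable: each row's supremum is an explicit convex function of x.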