Investigation of Reliability Method Formulations in DAKOTA/UQ
M.S. Eldred‡∗, H. Agarwal§, V.M. Perez§, S.F. Wojtkiewicz, Jr.‡, and J.E. Renaud§
Abstract
Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response
metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability,
and response levels) based on specified input random variable probability distributions. In this paper, a number of
algorithmic variations are explored for both the forward reliability analysis of computing probabilities for specified response
levels (the reliability index approach (RIA)) and the inverse reliability analysis of computing response levels for specified
probabilities (the performance measure approach (PMA)). These variations include limit state linearizations, probability
integrations, warm starting, and optimization algorithm selections. The resulting RIA/PMA reliability algorithms for
uncertainty quantification are then employed within bi-level and sequential reliability-based design optimization approaches.
Relative performance of these uncertainty quantification and reliability-based design optimization algorithms is presented
for a number of computational experiments performed using the DAKOTA/UQ software.
1 Introduction
Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response met-
rics of interest. In particular, they perform uncertainty quantification (UQ) by computing approximate response function
distribution statistics based on specified input random variable probability distributions. These response statistics include re-
sponse mean, response standard deviation, and cumulative or complementary cumulative distribution function (CDF/CCDF)
response level and probability/reliability level pairings. These methods are often more efficient at computing statistics in the
tails of the response distributions (events with low probability) than sampling-based approaches since the number of samples
required to resolve a low probability can be prohibitive. Thus, these methods, as their name implies, are often used in a
reliability context for assessing the probability of failure of a system when confronted with an uncertain environment.
A number of classical reliability analysis methods are discussed in [Haldar and Mahadevan, 2000], including Mean-Value
First-Order Second-Moment (MVFOSM), First-Order Reliability Method (FORM), and Second-Order Reliability Method
(SORM). More recent methods which seek to improve the efficiency of FORM analysis through limit state approximations in-
clude the use of local and multipoint approximations in Advanced Mean Value methods (AMV/AMV+ [Wu et al., 1990]) and
Two-point Adaptive Nonlinear Approximation-based methods (TANA [Wang and Grandhi, 1994, Xu and Grandhi, 1998]),
respectively. Each of the FORM-based methods can be employed for “forward” or “inverse” reliability analysis through the
reliability index approach (RIA) or performance measure approach (PMA), respectively, as described in [Tu et al., 1999].
The capability for assessing reliability is broadly useful within a design optimization context, and reliability-based
design optimization (RBDO) methods are popular approaches for designing systems while accounting for uncertainty.
RBDO approaches may be broadly characterized as bi-level (in which the reliability analysis is nested within the opti-
mization, e.g. [Allen and Maute, 2004]), sequential (in which iteration occurs between optimization and reliability analysis,
e.g. [Wu et al., 2001]), or unilevel (in which the design and reliability searches are combined into a single optimization,
e.g. [Agarwal et al., 2004]). Bi-level RBDO methods are simple and general-purpose, but can be computationally demand-
ing. Sequential and unilevel methods seek to reduce computational expense by breaking the nested relationship through the
use of iterated or simultaneous approaches.
In order to provide access to a variety of uncertainty quantification capabilities for analysis of large-scale engineering
applications on high-performance parallel computers, the DAKOTA project [Eldred et al., 2003] at Sandia National Labo-
ratories has developed a suite of algorithmic capabilities known as DAKOTA/UQ [Wojtkiewicz et al., 2001]. This package
∗ Corresponding author. Email: [email protected]
† Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
contains the reliability analysis and RBDO capabilities described in this paper, and is freely available for download worldwide
through an open source license.
This paper explores a variety of algorithms for performing reliability analysis. In particular, forward and inverse reliability
analyses are performed using multiple linearization, integration, warm starting, and optimization algorithm selections. These
uncertainty quantification capabilities are then used as a foundation for exploring bi-level and sequential RBDO formulations.
Sections 2 and 3 describe these algorithmic components, Section 4 provides computational results for three simple test
problems, and Section 5 provides concluding remarks.
μ_g = g(μ_x)   (1)

σ_g² = Σ_i Σ_j Cov(i,j) (dg/dx_i)(μ_x) (dg/dx_j)(μ_x)   (2)

β_cdf = (μ_g − z̄) / σ_g   (3)

β_ccdf = (z̄ − μ_g) / σ_g   (4)

z = μ_g − σ_g β̄_cdf   (5)

z = μ_g + σ_g β̄_ccdf   (6)
respectively, where x denotes the uncertain variables in the space of the original uncertain variables (“x-space”), g(x) is the limit state function (the response function for which probability-response level pairs are needed), and the CDF reliability index β_cdf, CCDF reliability index β_ccdf, CDF probability p(g ≤ z), and CCDF probability p(g > z) are related to one another through

p(g ≤ z) = Φ(−β_cdf)   (7)

p(g > z) = Φ(−β_ccdf)   (8)

where Φ() is the standard normal cumulative distribution function. A common convention in the literature is to define g in such a way that the CDF probability for a response level z of zero (i.e., p(g ≤ 0)) is the response metric of interest. The formulations in this paper are not restricted to this convention and are designed to support CDF or CCDF mappings for general response, probability, and reliability level sequences.
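The mean-value statistics and the first-order probability mapping above can be sketched compactly; the linear limit state and input moments below are illustrative choices, not values from this paper.

```python
import numpy as np
from scipy.stats import norm

def mvfosm(g, grad_g, mu_x, cov_x, z_bar):
    """Mean-value first-order second-moment statistics for a limit state g."""
    mu_g = g(mu_x)                              # Eq. 1
    grad = grad_g(mu_x)
    sigma_g = np.sqrt(grad @ cov_x @ grad)      # Eq. 2 (double sum as a quadratic form)
    beta_cdf = (mu_g - z_bar) / sigma_g         # Eq. 3
    p_cdf = norm.cdf(-beta_cdf)                 # first-order p(g <= z) = Phi(-beta_cdf)
    return mu_g, sigma_g, beta_cdf, p_cdf

# Illustrative linear limit state, so the mean-value gradient is exact
g = lambda x: x[0] + 2.0 * x[1]
grad_g = lambda x: np.array([1.0, 2.0])
mu_x = np.array([1.0, 0.5])
cov_x = np.array([[0.04, 0.0], [0.0, 0.01]])

mu_g, sigma_g, beta_cdf, p_cdf = mvfosm(g, grad_g, mu_x, cov_x, z_bar=1.0)
```

For a linear limit state with Gaussian inputs these mean-value statistics are exact; for nonlinear g they degrade away from the means, which is the behavior observed in the computational results later in the paper.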
For RIA, the MPP search for achieving the specified response level z̄ is formulated as

minimize    uᵀu
subject to  G(u) = z̄   (13)

and for PMA, the MPP search for achieving the specified reliability/probability level (β̄, p̄) is formulated as

minimize    ±G(u)
subject to  uᵀu = β̄²   (14)

where u is a vector centered at the origin in u-space and g(x) ≡ G(u) by definition. In the RIA case, the optimal MPP solution u* defines the reliability index from β = ±‖u*‖₂, which in turn defines the CDF/CCDF probabilities (using Eqs. 7-8 in the case of first-order integration). The sign of β is determined by the location of z̄ relative to G(0), the median limit state response computed at the origin in u-space (where β_cdf = β_ccdf = 0 and first-order p(g ≤ z) = p(g > z) = 0.5): β_cdf is positive when G(0) > z̄ and negative otherwise. In the PMA case, the sign applied to G(u) (equivalent to minimizing or maximizing G(u)) is similarly defined by the sign of β̄, and the limit state at the MPP (G(u*)) defines the desired response level result.
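The RIA and PMA MPP searches of Eqs. 13-14 can be prototyped directly with a general-purpose SQP solver; the linear limit state here is an illustrative stand-in for G(u), not a model from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative linear limit state in u-space; any smooth G(u) could be substituted
a, b = np.array([3.0, 4.0]), 1.0
G = lambda u: a @ u + b

def ria_mpp(G, z_bar, n=2):
    """RIA (Eq. 13): minimize u'u subject to G(u) = z_bar."""
    res = minimize(lambda u: u @ u, np.ones(n), method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda u: G(u) - z_bar}])
    return res.x

def pma_mpp(G, beta_bar, sign=1.0, n=2):
    """PMA (Eq. 14): minimize ±G(u) subject to u'u = beta_bar**2."""
    res = minimize(lambda u: sign * G(u), np.full(n, beta_bar / np.sqrt(n)),
                   method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda u: u @ u - beta_bar**2}])
    return res.x

u_ria = ria_mpp(G, z_bar=0.0)
beta = np.linalg.norm(u_ria)      # reliability index from the RIA MPP
u_pma = pma_mpp(G, beta_bar=2.0)  # G(u_pma) is the response level for beta_bar
```

For this linear G the RIA solution has the closed form u* = (z̄ − b)a/‖a‖², which provides a convenient check on the optimizer.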
1. a single linearization in x-space at the uncertain variable means (the AMV method):

g(x) ≅ g(μ_x) + ∇_x g(μ_x)ᵀ (x − μ_x)   (19)

2. same as AMV, except that the linearization is performed in u-space (note: μ_u = T(μ_x) and is nonzero in general). This option has been termed the u-space AMV method.

G(u) ≅ G(μ_u) + ∇_u G(μ_u)ᵀ (u − μ_u)   (20)

3. an initial x-space linearization at the uncertain variable means, with iterative relinearizations at each MPP estimate (x*) until the MPP converges (commonly known as the AMV+ method):

g(x) ≅ g(x*) + ∇_x g(x*)ᵀ (x − x*)   (21)

4. same as AMV+, except that the linearizations are performed in u-space. This option has been termed the u-space AMV+ method.

G(u) ≅ G(u*) + ∇_u G(u*)ᵀ (u − u*)   (22)

5. the MPP search on the original response functions without the use of any linearizations.
The selection between x-space or u-space for performing linearizations depends on where the limit state will be more linear,
since an approximation that is accurate over a larger range will result in more accurate MPP estimates (AMV) or faster
convergence (AMV+). Since this relative linearity depends on the forms of the limit state g(x) and the transformation T (x)
and is therefore application dependent in general, DAKOTA/UQ supports both options. A concern with linearization-based
iterative search methods (i.e., x-/u-space AMV+) is the robustness of their convergence to the MPP. It is possible for the
MPP iterates to oscillate or even diverge. However, to date, this occurrence has been relatively rare, and DAKOTA/UQ
contains checks that monitor for this behavior.
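Because the RIA subproblem for a linearized limit state has a closed-form minimum-norm solution, the u-space AMV+ iteration reduces to a relinearize-and-project loop. The sketch below is illustrative, not the DAKOTA/UQ implementation, and uses a toy nonlinear limit state.

```python
import numpy as np

def amv_plus_u(G, grad_G, z_bar, n=2, tol=1e-10, max_iter=100):
    """u-space AMV+ sketch: relinearize G at each MPP estimate (Eq. 22) and
    solve the linearized RIA subproblem in closed form."""
    u = np.zeros(n)                       # start at the means (u-space origin)
    for _ in range(max_iter):
        g0, grad = G(u), grad_G(u)
        # Minimum-norm u satisfying the linearized constraint g0 + grad.(u_new - u) = z_bar
        u_new = (z_bar - g0 + grad @ u) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Mildly nonlinear illustrative limit state
G = lambda u: u[0] + u[1] + 0.1 * u[0] ** 2
grad_G = lambda u: np.array([1.0 + 0.2 * u[0], 1.0])

u_star = amv_plus_u(G, grad_G, z_bar=2.0)
beta = np.linalg.norm(u_star)     # at the fixed point, G(u_star) = z_bar
```

The oscillation/divergence risk noted above shows up in this loop when the curvature term is made large; monitoring ‖u_new − u‖ across iterations is one simple check.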
2.2.2 Integrations
The second algorithmic variation involves the integration approach for computing probabilities at the MPP, which can be
selected to be first-order (Eqs. 7-8) or second-order integration. Second-order integration involves applying a curvature
correction [Breitung, 1984, Hohenbichler and Rackwitz, 1988, Hong, 1999]. The simplest of these corrections is
p = Φ(−β) ∏_{i=1}^{n−1} 1/√(1 + β κ_i)   (23)
where κi are the principal curvatures of the limit state function and β ≥ 0 (select CDF or CCDF correction based on sign of
β). Second-order reliability approaches are discussed in detail in a companion paper [Eldred and Wojtkiewicz, 2005].
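Eq. 23 translates directly into code; a minimal sketch, assuming β ≥ 0 and curvatures satisfying 1 + βκ_i > 0 (the β and κ values below are illustrative):

```python
import numpy as np
from scipy.stats import norm

def breitung(beta, kappas):
    """Second-order probability correction of Eq. 23."""
    kappas = np.asarray(kappas, dtype=float)
    factors = 1.0 + beta * kappas          # one factor per principal curvature
    return norm.cdf(-beta) / np.sqrt(np.prod(factors))

p_first = norm.cdf(-2.0)                   # first-order integration
p_second = breitung(2.0, [0.1, -0.05])     # curvature-corrected probability
```

Positive curvatures reduce the probability relative to the first-order estimate and negative curvatures increase it, so the correction matters most for strongly curved limit states.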
Combining the no-linearization option of the MPP search with first-order and second-order integration approaches results
in the traditional first-order and second-order reliability methods (FORM and SORM). Additional probability integration
approaches can involve importance sampling in the vicinity of the MPP [Hohenbichler and Rackwitz, 1988, Wu, 1994], but
are outside the scope of this paper.
• with design variable increment (across multiple reliability analyses for RBDO)
Within a single reliability analysis, the AMV+ linearization point and associated response data for each new z/p/β level
is warm started using the MPP from the previous level. The initial guess for each new MPP search is warm started differently
for AMV+ iterations, RIA level changes, or PMA level changes. For unconverged AMV+ iterations, a simple copy of the
previous MPP estimate is used. In the case of an advance to the next z/p/β level, the initial guess is determined by projecting
from the current MPP out to the new β radius or response level. This projection is important since premature optimization
termination can occur with some optimizers if the RIA/PMA first-order optimality conditions (u + λ∇_u G = 0 for RIA or ∇_u G + λu = 0 for PMA) remain satisfied for the new level, even though the new equality constraint will be violated. That is, even though the initial guess may not affect overall efficiency in linearization-based searches, it can affect search robustness.
For RIA projections, an approximate u(k+1) is computed using a first-order Taylor series approximation of the next g level:
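These projections can be sketched as follows; the PMA case is a simple radial rescaling, while the RIA case shown is a minimum-norm first-order Taylor step to the next g level (an illustrative reading of the projection described above, with made-up MPP data).

```python
import numpy as np

def warm_start_pma(u_mpp, beta_new):
    """Project a converged MPP out to the new beta radius (PMA level change)."""
    return u_mpp * beta_new / np.linalg.norm(u_mpp)

def warm_start_ria(u_mpp, g_mpp, grad_g_mpp, z_new):
    """Minimum-norm step satisfying the linearized constraint g + grad.du = z_new."""
    return u_mpp + (z_new - g_mpp) * grad_g_mpp / (grad_g_mpp @ grad_g_mpp)

u = np.array([1.2, 1.6])                    # converged MPP with beta = 2.0
u_pma = warm_start_pma(u, beta_new=3.0)     # lies exactly on the beta = 3 sphere
u_ria = warm_start_ria(u, g_mpp=0.5, grad_g_mpp=np.array([1.0, 0.0]), z_new=1.0)
```

Starting the next MPP search from the projected point perturbs the stalled first-order optimality conditions, which mitigates the premature-termination issue noted above.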
minimize    f
subject to  β ≥ β̄  or  p ≤ p̄   (29)

minimize    f
subject to  z ≥ z̄   (30)
where z ≥ z̄ is used as the RBDO constraint for a cumulative failure probability (failure defined as z ≤ z̄) but z ≤ z̄ would
be used as the RBDO constraint for a complementary cumulative failure probability (failure defined as z ≥ z̄). It is worth
noting that DAKOTA is not limited to these types of inequality-constrained RBDO formulations; rather, they are convenient
examples. DAKOTA supports general optimization under uncertainty mappings [Eldred et al., 2002] which allow flexible use
of statistics within multiple objectives, inequality constraints, and equality constraints.
An important performance enhancement for bi-level methods is the use of sensitivity analysis to analytically compute the
design gradients of probability, reliability, and response levels. When design variables are separate from the uncertain variables
(i.e., they are not distribution parameters), then the following expressions may be used [Hohenbichler and Rackwitz, 1986,
Karamchandani and Cornell, 1992, Allen and Maute, 2004]:
∇_d z = ∇_d G   (31)

∇_d β_cdf = ∇_d G / ‖∇_u G‖   (32)

∇_d p_cdf = −φ(−β_cdf) ∇_d β_cdf   (33)

where φ() is the standard normal density function. From Eqs. 11-12, it is evident that the corresponding CCDF gradients differ from Eqs. 32-33 only in sign. PMA trust-region surrogate-based RBDO employs surrogate models of f and z within a trust region Δ_k centered at d_c.
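The analytic design gradients of Eqs. 31-33 are inexpensive to assemble once the MPP quantities are available; the gradient vectors below are illustrative data, not from a real model.

```python
import numpy as np
from scipy.stats import norm

def design_gradients(grad_d_G, grad_u_G, beta_cdf):
    """Design gradients of response level, reliability, and probability
    (Eqs. 31-33), for design variables separate from the uncertain variables."""
    grad_d_z = grad_d_G                                  # Eq. 31
    grad_d_beta = grad_d_G / np.linalg.norm(grad_u_G)    # Eq. 32
    grad_d_p = -norm.pdf(-beta_cdf) * grad_d_beta        # Eq. 33
    return grad_d_z, grad_d_beta, grad_d_p

gz, gbeta, gp = design_gradients(grad_d_G=np.array([0.5, -0.2]),
                                 grad_u_G=np.array([3.0, 4.0]),
                                 beta_cdf=2.0)
```

Supplying these gradients to the design-level optimizer removes the finite differencing over reliability analyses that dominates the cost of the basic bi-level approach.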
4 Computational Experiments
The algorithmic variations of interest in reliability analysis include the linearization approaches (MV, x-/u-space AMV, x-/u-
space AMV+, and FORM), integration approaches (first-/second-order), warm starting approaches, and MPP optimization
algorithm selections (SQP or NIP). RBDO algorithmic variations of interest include use of bi-level, fully-analytic bi-level, or
sequential approaches, use of RIA or PMA formulations for the underlying UQ, and the specific z/p/β mappings that are
employed. Relative performance of these algorithmic variations will be presented in this section for a number of computa-
tional experiments performed using the DAKOTA/UQ software [Wojtkiewicz et al., 2001]. DAKOTA/UQ is the uncertainty
quantification component of DAKOTA [Eldred et al., 2003], an open-source software framework for design and performance
analysis of computational models on high performance computers.
(RIA p error norm of 0.01538 and PMA z error norm of 0.03775) relative to a Latin Hypercube reference solution of 10^6
samples are not included in order to avoid obscuring the relative errors. Figure 1 overlays the computed CDF values for each
of the six method variants as well as the Latin Hypercube reference solution.
It is evident that, relative to the fully-converged AMV+/FORM results, MV accuracy degrades rapidly away from the
means. AMV is reasonably accurate over the full range (x-space AMV has a factor of 4.8 reduction in error norm on
average relative to MV, and u-space AMV has zero error for this problem) but has undesirable offsets from the prescribed
response levels in the RIA case. In terms of computational expense, MV is two orders of magnitude less expensive than
AMV+/FORM and AMV is one order of magnitude less expensive, which makes these techniques attractive when rough
statistics are sufficient. When more accurate statistics are desired, AMV+ has equal accuracy to FORM and is a factor of
9.1 less expensive on average in the case of cold starts using sequential quadratic programming (SQP) for each level, which
decreases to a factor of 3.1 less expensive in the case of warm starts using SQP. That is, FORM benefits more from warm
starting than AMV+. When using a nonlinear interior-point (NIP) optimizer, FORM solutions are generally less expensive
and become directly competitive with AMV+ in some cases (AMV+ is a factor of 2.4 less expensive on average than FORM
in the case of cold starts using NIP, which decreases to a factor of 1.4 in the case of warm starts using NIP). The SQP/NIP
comparison is much less relevant for the AMV/AMV+ methods since the MPP searches are linearized. Another benefit of
NIP relative to SQP has been observed in PMA solutions (Eq. 14). PMA solutions with SQP involve penalties applied to the
equality constraint (e.g., in an augmented Lagrangian merit function) and must have strict u-space bound constraints (e.g.,
10 standard deviations) to avoid excessive u-space excursions in minimizing G(u) prior to enforcement of the uᵀu equality
constraint. These excursions can result in inaccurate Hessian approximations in moderate cases and numerical overflow in
extreme cases. NIP methods are less prone to this difficulty since they proceed toward constraint satisfaction more uniformly.
g(x) = 1 − 4M/(b h² Y) − P²/(b² h² Y²)   (39)
The distributions for P , M , and Y are Normal(500, 100), Normal(2000, 400), and Lognormal(5, 0.5), respectively, with a
correlation coefficient of 0.5 between P and M (uncorrelated otherwise). The nominal values for b and h are 5 and 15,
respectively. In this test problem, analytic gradients of f and g with respect to P , M , and Y are used to reduce function
evaluation counts (note: the evaluation counts reflect data requests from the algorithm and do not separate value and gradient
requests).
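A brute-force check of the short column statistics is straightforward; the sketch below interprets Lognormal(5, 0.5) as a (mean, standard deviation) pair for Y and imposes the P-M correlation via a Cholesky factor, both of which are assumptions of this sketch rather than statements from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b, h = 5.0, 15.0

# Correlated normal samples for P and M (correlation coefficient 0.5)
cov = np.array([[100.0**2, 0.5 * 100.0 * 400.0],
                [0.5 * 100.0 * 400.0, 400.0**2]])
Lc = np.linalg.cholesky(cov)
PM = np.array([500.0, 2000.0]) + rng.standard_normal((n, 2)) @ Lc.T
P, M = PM[:, 0], PM[:, 1]

# Lognormal Y, reading (5, 0.5) as mean/std of Y itself (an assumption)
s2 = np.log(1.0 + (0.5 / 5.0) ** 2)
Y = rng.lognormal(np.log(5.0) - 0.5 * s2, np.sqrt(s2), n)

g = 1.0 - 4.0 * M / (b * h**2 * Y) - P**2 / (b**2 * h**2 * Y**2)  # Eq. 39
p_fail = np.mean(g <= 0.0)
```

At the nominal (b, h) = (5, 15), the mean of g is well below zero, which is consistent with the wide negative response range seen in the CDF results for this problem.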
[Figure: Computed CDF values for MV, x-/u-space AMV, x-/u-space AMV+ & FORM, and a 10^6-sample Latin hypercube reference; four panels of Cumulative Probability versus Response Value (−10 to 2).]
than FORM in the case of cold starts using SQP for each level, which decreases to a factor of 3.5 in the case of warm starts
using SQP. NIP-based FORM solutions are again less expensive than SQP-based FORM solutions and are approaching the
expense of AMV+ solutions (AMV+ is a factor of 2.8 less expensive on average than FORM in the case of cold starts using
NIP, which decreases to a factor of 1.7 in the case of warm starts using NIP).
[Figure: Cantilever beam of length L = 100 in. with cross-sectional width w and thickness t, subject to loads X and Y.]
bi-level case, with expense reduced by another factor of 1.4 on average. Warm starts are even less effective than in the fully-
analytic bi-level case and save only 6% on average. AMV+-based sequential RBDO outperforms FORM-based sequential
RBDO by a factor of 6.2 on average, and solves the problem in as few as 65 function evaluations.
stress = (600/(w t²)) Y + (600/(w² t)) X ≤ R   (41)

displacement = (4L³/(E w t)) √((Y/t²)² + (X/w²)²) ≤ D₀   (42)

or when scaled:

g_S = stress/R − 1 ≤ 0   (43)

g_D = displacement/D₀ − 1 ≤ 0   (44)
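Evaluating the raw and scaled limit states of Eqs. 41-44 is a one-liner each; the material properties, loads, and allowable displacement D0 below are illustrative assumptions for a candidate design, not the paper's distribution parameters.

```python
import numpy as np

L_beam, w, t = 100.0, 2.45, 3.88   # beam length and a candidate design
R, E = 40_000.0, 2.9e7             # assumed yield stress and elastic modulus
X, Y = 500.0, 1000.0               # assumed horizontal and vertical loads
D0 = 2.25                          # assumed allowable tip displacement

stress = 600.0 * Y / (w * t**2) + 600.0 * X / (w**2 * t)             # Eq. 41
disp = (4.0 * L_beam**3 / (E * w * t)) * np.sqrt((Y / t**2) ** 2
                                                 + (X / w**2) ** 2)  # Eq. 42
g_S = stress / R - 1.0                                               # Eq. 43
g_D = disp / D0 - 1.0                                                # Eq. 44
```

Scaling by the allowables R and D0 puts both constraints on a comparable order of magnitude, which helps the conditioning of the MPP searches.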
[Figure: Computed CDF values for MV, x-/u-space AMV, x-/u-space AMV+ & FORM, and a 10^6-sample Latin hypercube reference; four panels of Cumulative Probability versus Response Value (0 to 1).]
Relative to the fully-converged AMV+/FORM results, MV accuracy again degrades rapidly away from the means. AMV is
reasonably accurate over the full range (a factor of 11 reduction in error norm on average relative to MV) but has undesirable
offsets from the prescribed response levels in the RIA case. In terms of computational expense, MV and AMV are again
significantly less expensive. AMV+ has equal accuracy to FORM and is a factor of 5.2 and 2.8 less expensive on average
in the case of cold and warm starts, respectively, using SQP, and is a factor of 3.6 and 2.3 less expensive on average in the
case of cold and warm starts, respectively, using NIP. NIP can be seen to be less robust than SQP for this problem, as “*”
indicates that one or more of the 11 levels failed to converge.
minimize    w t
subject to  β_D ≥ 3.0
            β_S ≥ 3.0
            1.0 ≤ w ≤ 4.0
            1.0 ≤ t ≤ 4.0   (45)
which has the solution (w, t) = (2.45, 3.88) with an objective function of 9.52.
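The bi-level structure of Eq. 45 (a design-level optimizer whose constraints invoke an inner reliability analysis) can be sketched generically; the toy limit state, input standard deviation, and bounds below are illustrative, not the cantilever model.

```python
import numpy as np
from scipy.optimize import minimize

def beta_cdf(d):
    """Inner UQ: mean-value reliability index for a toy linear limit state
    g(x; d) = d0*x0 + d1*x1 - 5 with independent x_i ~ Normal(1, 0.1)."""
    mu_g = d[0] + d[1] - 5.0              # g evaluated at the input means
    sigma_g = 0.1 * np.linalg.norm(d)     # linear g: sigma_g = ||grad g|| * sigma_x
    return mu_g / sigma_g                 # beta for the z = 0 level (Eq. 3)

# Outer design loop: minimize cost subject to beta >= 3 (bi-level RBDO)
res = minimize(lambda d: d[0] * d[1], x0=[8.0, 8.0], method="SLSQP",
               bounds=[(1.0, 20.0), (1.0, 20.0)],
               constraints=[{"type": "ineq", "fun": lambda d: beta_cdf(d) - 3.0}])
d_opt = res.x
```

Every constraint evaluation here triggers a full inner analysis; this nesting is exactly what the fully-analytic gradients and sequential formulations discussed earlier are designed to relieve.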
Table 10 shows the results for bi-level RBDO using 12 variants (the x-space and u-space linearizations are identical for
this problem). Constraint violations are raw norms (not normalized by allowable). Analytic gradients of g_S and g_D with
respect to R, E, X, and Y are used at the uncertainty analysis level, but numerical gradients of f and z/p/β with respect to
w and t are computed using central finite differences at the optimization level. SQP is used for optimization at both levels.
Again, RBDO with MV and AMV is relatively inexpensive, but can only obtain an approximate optimal solution. Reliability
constraints are again preferred to probability constraints in RIA RBDO (expense reduced by a factor of 1.6 on average).
In addition, warm starts are helpful, reducing expense by a factor of 1.5 on average, and AMV+-based RBDO consistently
outperforms FORM-based RBDO by a factor of 2.9 on average. PMA RBDO with AMV+ was the top performer in this
case and solved the problem in 428 function evaluations.
Table 11 shows the results for fully-analytic bi-level RBDO employing the gradient expressions for p, β, and z (Eqs. 31-33).
As for short column, only the AMV+ and FORM variants for RIA/PMA RBDO are allowed, since the sensitivity expressions
require a fully-converged MPP. In comparison with Table 10, avoiding numerical differencing at the design level reduces
computation expense by a factor of 2.3 on average. Again, warm starts are less effective (reduction of only a factor of 1.2 on
average) since the design changes between reliability analyses are larger than when finite differencing. AMV+-based RBDO
outperforms FORM-based RBDO by a factor of 2.8 on average.
Table 12 shows the results for sequential RBDO using a trust-region surrogate-based approach. The surrogates are
first-order Taylor-series using the same analytic gradients of p, β, and z. The sequential case is more efficient than the
Table 11: Analytic bi-level RBDO results, cantilever test problem.

RBDO Approach                         Function Evals      Objective   Constraint
                                      (Cold/Warm Start)   Function    Violation
RIA z → p     x-/u-space AMV+          279/319            9.529       1.1e-5
RIA z → p     FORM                     623/531            9.563       0.0
RIA z → β     x-/u-space AMV+          207/208            9.520       0.0
RIA z → β     FORM                     367/324            9.520       0.0
PMA p, β → z  x-/u-space AMV+          247/232            9.521       0.0
PMA p, β → z  FORM                    1408/843            9.521       0.0
fully-analytic bi-level case, with expense reduced by another factor of 1.3 on average. Warm starts are less effective than
in the fully-analytic bi-level case and save only 2% on average. AMV+-based sequential RBDO outperforms FORM-based
sequential RBDO by a factor of 2.5 on average, and solves the problem in under 200 function evaluations.
5 Conclusions
DAKOTA/UQ provides a flexible, object-oriented implementation of reliability methods that allows plug-and-play experimen-
tation with RIA/PMA formulations and various linearization, integration, warm starting, and MPP optimization algorithm
selections. Linearization approaches have included MV, x-space and u-space AMV, x-space and u-space AMV+, and no lin-
earization (FORM); integration options have included first-order and second-order integrations; warm starting has included
MPP reuse and projections; and MPP search selection has included SQP and NIP optimization algorithms. These reliability
analysis capabilities provide a substantial foundation for RBDO formulations, and bi-level and sequential RBDO approaches
have been explored in this paper. Bi-level RBDO has included basic and fully-analytic approaches, and sequential RBDO
has utilized a trust-region surrogate-based approach.
Reliability method performance comparisons for the three simple test problems presented indicate several trends. MV
and AMV are significantly less expensive than AMV+ and FORM, but come with corresponding reductions in accuracy.
In combination, these methods provide a useful spectrum of accuracy and expense that allow the computational effort to
be balanced with the statistical precision required for particular applications. In addition, support for forward and inverse
mappings (RIA and PMA) provide the flexibility to support different UQ analysis needs. Relative to FORM, AMV+ has
been shown to have equal accuracy, equal robustness (for these test problems), and consistent computational savings (factor
of 3.5 reduction in function evaluations on average). In addition, NIP optimizers have shown promise in being less susceptible
to PMA u-space excursions and in being more efficient than SQP optimizers in most cases (factor of 1.8 less expensive on
average for FORM). Warm starting with projections has been shown to be effective for reliability analyses, with a factor of
1.3 reduction in expense on average. The x-space and u-space linearizations for AMV and AMV+ were both effective, and
the relative performance was strongly problem-dependent (u-space AMV+ was consistently more efficient for lognormal ratio,
x-space AMV+ was consistently more efficient for short column, and x-space and u-space were equivalent for cantilever).
Among all combinations tested, AMV+ with warm starts is the recommended approach.
RBDO results mirror the reliability analysis trends. Basic bi-level RBDO has been evaluated with up to 18 variants
(RIA/PMA with different p/β/z mappings for MV/x-space AMV/u-space AMV/x-space AMV+/u-space AMV+/FORM),
and fully-analytic bi-level and sequential RBDO have been evaluated with up to 9 variants (RIA/PMA with different p/β/z
mappings for x-space AMV+/u-space AMV+/FORM). Bi-level RBDO with MV and AMV are inexpensive but give only
approximate optima. These approaches may be useful for preliminary design or for warm-starting other RBDO methods.
Bi-level RBDO with AMV+ was shown to have equal accuracy and robustness to bi-level FORM-based approaches and be
a factor of 4.2 less expensive on average. In addition, the use of β in RIA RBDO constraints was preferred because β is better behaved and better scaled than p (resulting in a factor of 1.7 reduction in expense), and this approach for RIA RBDO was more efficient (factor of 1.4 reduction in expense on average) than PMA RBDO. Warm starts
in RBDO were most effective when the design changes were small, and basic bi-level RBDO (with numerical differencing at
the design level) showed a factor of 1.4 reduction in expense, which decreased to being marginally effective for fully-analytic
bi-level RBDO (factor of 1.1 reduction) and relatively ineffective for sequential RBDO (only a 5% reduction in expense on
average). However, large design changes were desirable for overall RBDO efficiency and, compared to basic bi-level RBDO,
fully-analytic RBDO was a factor 2.3 less expensive on average and sequential RBDO was a factor of 3.2 less expensive on
average. Among all combinations tested, sequential RBDO using AMV+ is the recommended approach.
The effectiveness of first-order approximations, both in limit state linearization within reliability analysis and in surrogate-
based RBDO, has led to additional work in second-order approximations which hold promise to both improve the accuracy
of probability integrations and improve the computational efficiency through accelerated convergence rates. This work is
presented in a companion paper.
6 Acknowledgments
The authors would like to express their thanks to the Sandia Computer Science Research Institute (CSRI) for support of this
collaborative work between Sandia National Laboratories and the University of Notre Dame.
References
[Agarwal et al., 2004] Agarwal, H., Renaud, J.E., Lee, J.C., and Watson, L.T., A Unilevel Method for Reliability Based De-
sign Optimization, paper AIAA-2004-2029 in Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC Structures, Struc-
tural Dynamics, and Materials Conference, Palm Springs, CA, April 19-22, 2004.
[Allen and Maute, 2004] Allen, M. and Maute, K., Reliability-based design optimization of aeroelastic structures, Struct.
Multidiscip. O., Vol. 27, 2004, pp. 228-242.
[Breitung, 1984] Breitung, K., Asymptotic approximation for multinormal integrals, J. Eng. Mech., ASCE, Vol. 110, No. 3,
1984, pp. 357-366.
[Burton and Hajela, 2004] Burton, S.A. and Hajela, P., Efficient Reliability-Based Structural Optimization Through Most
Probable Failure Point Approximation, paper AIAA-2004-1901 in Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics, and Materials Conference, Palm Springs, CA, April 19-22, 2004.
[Der Kiureghian and Liu, 1986] Der Kiureghian, A. and Liu, P.L., Structural Reliability Under Incomplete Probability In-
formation, J. Eng. Mech., ASCE, Vol. 112, No. 1, 1986, pp. 85-104.
[Eldred et al., 2002] Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F., Jr., and Trucano, T.G., Formulations for Surrogate-
Based Optimization Under Uncertainty, paper AIAA-2002-5585 in Proceedings of the 9th AIAA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Atlanta, GA, Sept. 4-6, 2002.
[Eldred et al., 2003] Eldred, M.S., Giunta, A.A., van Bloemen Waanders, B.G., Wojtkiewicz, S.F., Jr., Hart, W.E., and
Alleva, M.P., DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation,
Uncertainty Quantification, and Sensitivity Analysis. Version 3.1 Users Manual. Sandia Technical Report SAND2001-3796,
Revised April 2003, Sandia National Laboratories, Albuquerque, NM.
[Eldred and Wojtkiewicz, 2005] Eldred, M.S. and Wojtkiewicz, S.F., Jr., Second-Order Reliability Formulations in
DAKOTA/UQ, (in preparation).
[Gill et al., 1998] Gill, P.E., Murray, W., Saunders, M.A., and Wright, M.H., User’s Guide for NPSOL 5.0: A Fortran Package
for Nonlinear Programming, System Optimization Laboratory, Technical Report SOL 86-1, Revised July 1998, Stanford
University, Stanford, CA.
[Giunta and Eldred, 2000] Giunta, A.A. and Eldred, M.S., Implementation of a Trust Region Model Management Strategy
in the DAKOTA Optimization Toolkit, paper AIAA-2000-4935 in Proceedings of the 8th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, September 6-8, 2000.
[Haldar and Mahadevan, 2000] Haldar, A. and Mahadevan, S., Probability, Reliability, and Statistical Methods in Engineer-
ing Design, 2000 (Wiley: New York).
[Hohenbichler and Rackwitz, 1986] Hohenbichler, M. and Rackwitz, R., Sensitivity and importance measures in structural
reliability, Civil Eng. Syst., Vol. 3, 1986, pp. 203-209.
[Hohenbichler and Rackwitz, 1988] Hohenbichler, M. and Rackwitz, R., Improvement of second-order reliability estimates by
importance sampling, J. Eng. Mech., ASCE, Vol. 114, No. 12, 1988, pp. 2195-2199.
[Hong, 1999] Hong, H.P., Simple Approximations for Improving Second-Order Reliability Estimates, J. Eng. Mech., ASCE,
Vol. 125, No. 5, 1999, pp. 592-595.
[Karamchandani and Cornell, 1992] Karamchandani, A. and Cornell, C.A., Sensitivity estimation within first and second
order reliability methods, Struct. Saf., Vol. 11, 1992, pp. 95-107.
[Kuschel and Rackwitz, 1997] Kuschel, N. and Rackwitz, R., Two Basic Problems in Reliability-Based Structural Optimization, Math. Method Oper. Res., Vol. 46, 1997, pp. 309-333.
[Meza, 1994] Meza, J.C., OPT++: An Object-Oriented Class Library for Nonlinear Optimization, Sandia Technical Report
SAND94-8225, Sandia National Laboratories, Livermore, CA, March 1994.
[Rosenblatt, 1952] Rosenblatt, M., Remarks on a Multivariate Transformation, Ann. Math. Stat., Vol. 23, No. 3, 1952, pp.
470-472.
[Sues et al., 2001] Sues, R., Aminpour, M. and Shin, Y., Reliability-Based Multidisciplinary Optimization for Aerospace Systems, paper AIAA-2001-1521 in Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, April 16-19, 2001.
[Tu et al., 1999] Tu, J., Choi, K.K., and Park, Y.H., A New Study on Reliability-Based Design Optimization, J. Mech.
Design, Vol. 121, 1999, pp. 557-564.
[Wang and Grandhi, 1994] Wang, L. and Grandhi, R.V., Efficient Safety Index Calculation for Structural Reliability Analysis,
Comput. Struct., Vol. 52, No. 1, 1994, pp. 103-111.
[Wojtkiewicz et al., 2001] Wojtkiewicz, S.F., Jr., Eldred, M.S., Field, R.V., Jr., Urbina, A., and Red-Horse, J.R., A Toolkit
For Uncertainty Quantification In Large Computational Engineering Models, paper AIAA-2001-1455 in Proceedings of
the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, April
16-19, 2001.
[Wu et al., 1990] Wu, Y.-T., Millwater, H.R., and Cruse, T.A., Advanced Probabilistic Structural Analysis Method for
Implicit Performance Functions, AIAA J., Vol. 28, No. 9, 1990, pp. 1663-1669.
[Wu, 1994] Wu, Y.-T., Computational Methods for Efficient Structural Reliability and Reliability Sensitivity Analysis, AIAA
J., Vol. 32, No. 8, 1994, pp. 1717-1723.
[Wu et al., 2001] Wu, Y.-T., Shin, Y., Sues, R., and Cesare, M., Safety-Factor Based Approach for Probability-Based Design
Optimization, paper AIAA-2001-1522 in Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics, and Materials Conference, Seattle, WA, April 16-19, 2001.
[Xu and Grandhi, 1998] Xu, S., and Grandhi, R.V., Effective Two-Point Function Approximation for Design Optimization,
AIAA J., Vol. 36, No. 12, 1998, pp. 2269-2275.
[Zou et al., 2004] Zou, T., Mahadevan, S., and Rebba, R., Computational Efficiency in Reliability-Based Optimization,
Proceedings of the 9th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability, Albuquerque,
NM, July 26-28, 2004.