Investigation of reliability method formulations in DAKOTA/UQ

Article in Structure and Infrastructure Engineering · September 2007 · DOI: 10.1080/15732470500254618



Investigation of Reliability Method Formulations in DAKOTA/UQ

M.S. Eldred‡∗, H. Agarwal§, V.M. Perez§, S.F. Wojtkiewicz, Jr.‡, and J.E. Renaud§

‡Sandia National Laboratories†, Albuquerque, NM 87185-0370


§The University of Notre Dame, Notre Dame, IN 46556-5637

(Received xx; accepted xx)

Abstract
Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response
metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability,
and response levels) based on specified input random variable probability distributions. In this paper, a number of
algorithmic variations are explored for both the forward reliability analysis of computing probabilities for specified response
levels (the reliability index approach (RIA)) and the inverse reliability analysis of computing response levels for specified
probabilities (the performance measure approach (PMA)). These variations include limit state linearizations, probability
integrations, warm starting, and optimization algorithm selections. The resulting RIA/PMA reliability algorithms for
uncertainty quantification are then employed within bi-level and sequential reliability-based design optimization approaches.
Relative performance of these uncertainty quantification and reliability-based design optimization algorithms is presented
for a number of computational experiments performed using the DAKOTA/UQ software.

Keywords: Uncertainty; Reliability; Design optimization; Software

1 Introduction
Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response met-
rics of interest. In particular, they perform uncertainty quantification (UQ) by computing approximate response function
distribution statistics based on specified input random variable probability distributions. These response statistics include re-
sponse mean, response standard deviation, and cumulative or complementary cumulative distribution function (CDF/CCDF)
response level and probability/reliability level pairings. These methods are often more efficient at computing statistics in the
tails of the response distributions (events with low probability) than sampling-based approaches since the number of samples
required to resolve a low probability can be prohibitive. Thus, these methods, as their name implies, are often used in a
reliability context for assessing the probability of failure of a system when confronted with an uncertain environment.
A number of classical reliability analysis methods are discussed in [Haldar and Mahadevan, 2000], including Mean-Value
First-Order Second-Moment (MVFOSM), First-Order Reliability Method (FORM), and Second-Order Reliability Method
(SORM). More recent methods which seek to improve the efficiency of FORM analysis through limit state approximations in-
clude the use of local and multipoint approximations in Advanced Mean Value methods (AMV/AMV+ [Wu et al., 1990]) and
Two-point Adaptive Nonlinear Approximation-based methods (TANA [Wang and Grandhi, 1994, Xu and Grandhi, 1998]),
respectively. Each of the FORM-based methods can be employed for “forward” or “inverse” reliability analysis through the
reliability index approach (RIA) or performance measure approach (PMA), respectively, as described in [Tu et al., 1999].
The capability for assessing reliability is broadly useful within a design optimization context, and reliability-based
design optimization (RBDO) methods are popular approaches for designing systems while accounting for uncertainty.
RBDO approaches may be broadly characterized as bi-level (in which the reliability analysis is nested within the opti-
mization, e.g. [Allen and Maute, 2004]), sequential (in which iteration occurs between optimization and reliability analysis,
e.g. [Wu et al., 2001]), or unilevel (in which the design and reliability searches are combined into a single optimization,
e.g. [Agarwal et al., 2004]). Bi-level RBDO methods are simple and general-purpose, but can be computationally demand-
ing. Sequential and unilevel methods seek to reduce computational expense by breaking the nested relationship through the
use of iterated or simultaneous approaches.
In order to provide access to a variety of uncertainty quantification capabilities for analysis of large-scale engineering
applications on high-performance parallel computers, the DAKOTA project [Eldred et al., 2003] at Sandia National Labo-
ratories has developed a suite of algorithmic capabilities known as DAKOTA/UQ [Wojtkiewicz et al., 2001]. This package
∗ Corresponding author. Email: [email protected]
† Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

contains the reliability analysis and RBDO capabilities described in this paper, and is freely available for download worldwide
through an open source license.
This paper explores a variety of algorithms for performing reliability analysis. In particular, forward and inverse reliability
analyses are performed using multiple linearization, integration, warm starting, and optimization algorithm selections. These
uncertainty quantification capabilities are then used as a foundation for exploring bi-level and sequential RBDO formulations.
Sections 2 and 3 describe these algorithmic components, Section 4 provides computational results for three simple test
problems, and Section 5 provides concluding remarks.

2 Reliability Method Formulations


2.1 Mean Value
The Mean Value method (MV, also known as MVFOSM in [Haldar and Mahadevan, 2000]) is the simplest, least-expensive
reliability method in that it estimates the response means, response standard deviations, and all CDF/CCDF response-
probability-reliability levels from a single evaluation of response functions and their gradients at the uncertain variable means.
This approximation can have acceptable accuracy when the response functions are nearly linear and their distributions are
approximately Gaussian, but can have poor accuracy in other situations. The expressions for the approximate response mean µg, the approximate response standard deviation σg, the response target to approximate probability/reliability level mapping (z̄ → p, β), and the probability/reliability target to approximate response level mapping (p̄, β̄ → z) are

µg = g(µx)    (1)

σg² = Σi Σj Cov(i, j) (dg/dxi)(µx) (dg/dxj)(µx)    (2)

βcdf = (µg − z̄)/σg    (3)

βccdf = (z̄ − µg)/σg    (4)

z = µg − σg β̄cdf    (5)

z = µg + σg β̄ccdf    (6)

respectively, where x are the uncertain values in the space of the original uncertain variables (“x-space”), g(x) is the limit
state function (the response function for which probability-response level pairs are needed), and the CDF reliability index
βcdf , CCDF reliability index βccdf , CDF probability p(g ≤ z), and CCDF probability p(g > z) are related to one another
through

p(g ≤ z) = Φ(−βcdf)    (7)

p(g > z) = Φ(−βccdf)    (8)

βcdf = −Φ⁻¹(p(g ≤ z))    (9)

βccdf = −Φ⁻¹(p(g > z))    (10)

βcdf = −βccdf    (11)

p(g ≤ z) = 1 − p(g > z)    (12)

where Φ() is the standard normal cumulative distribution function. A common convention in the literature is to define g in
such a way that the CDF probability for a response level z of zero (i.e., p(g ≤ 0)) is the response metric of interest. The
formulations in this paper are not restricted to this convention and are designed to support CDF or CCDF mappings for
general response, probability, and reliability level sequences.
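As a concrete illustration, Eqs. 1-3 and the β-to-p mapping of Eq. 7 can be sketched in a few lines of Python (a minimal sketch, not the DAKOTA/UQ implementation; the limit state g, its gradient grad_g, and the input covariance matrix are user-supplied):

```python
import math

def std_normal_cdf(x):
    # Phi(x), the standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_value(g, grad_g, mu_x, cov_x, z_bar):
    """Mean Value (MVFOSM) statistics from a single evaluation of the
    limit state g and its gradient grad_g at the variable means mu_x."""
    mu_g = g(mu_x)                                    # Eq. (1)
    grad = grad_g(mu_x)
    var_g = sum(cov_x[i][j] * grad[i] * grad[j]       # Eq. (2)
                for i in range(len(mu_x))
                for j in range(len(mu_x)))
    sigma_g = math.sqrt(var_g)
    beta_cdf = (mu_g - z_bar) / sigma_g               # Eq. (3)
    p_cdf = std_normal_cdf(-beta_cdf)                 # Eq. (7)
    return mu_g, sigma_g, beta_cdf, p_cdf
```

For a linear limit state with Gaussian inputs these estimates are exact; otherwise their accuracy degrades away from the means, as the computational experiments below illustrate.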

2.2 MPP Search Methods


All other reliability methods solve a nonlinear optimization problem to compute a most probable point (MPP) and then
integrate about this point to compute probabilities. The MPP search is performed in transformed standard normal space (“u-
space”) since it simplifies the probability integration: the distance of the MPP from the origin has the meaning of the number
of standard deviations separating the mean response from a particular response threshold. The transformation from x-space to
u-space is performed using the transformation u = T (x) with the reverse transformation denoted as x = T −1 (u). Common ap-
proaches for performing these mappings include the Rosenblatt [Rosenblatt, 1952] and Nataf [Der Kiureghian and Liu, 1986]
transformations, where the results in this paper employ the latter.
The forward reliability analysis algorithm of computing CDF/CCDF probability/reliability levels for specified response
levels is called the reliability index approach (RIA), and the inverse reliability analysis algorithm of computing response levels
for specified CDF/CCDF probability/reliability levels is called the performance measure approach (PMA) [Tu et al., 1999].
The differences between the RIA and PMA formulations appear in the objective function and equality constraint formulations
used in the MPP searches. For RIA, the MPP search for achieving the specified response level z̄ is formulated as

minimize uᵀu
subject to G(u) = z̄    (13)

and for PMA, the MPP search for achieving the specified reliability/probability level β̄, p̄ is formulated as

minimize ±G(u)
subject to uᵀu = β̄²    (14)

where u is a vector centered at the origin in u-space and g(x) ≡ G(u) by definition. In the RIA case, the optimal MPP solution u∗ defines the reliability index from β = ±‖u∗‖₂, which in turn defines the CDF/CCDF probabilities (using Eqs. 7-8 in the case of first-order integration). The sign of β is defined by

G(u∗ ) > G(0) : βcdf < 0, βccdf > 0 (15)


G(u∗ ) < G(0) : βcdf > 0, βccdf < 0 (16)

where G(0) is the median limit state response computed at the origin in u-space (where βcdf = βccdf = 0 and first-order p(g ≤ z) = p(g > z) = 0.5). In the PMA case, the sign applied to G(u) (equivalent to minimizing or maximizing G(u)) is similarly defined by β̄:

β̄cdf < 0, β̄ccdf > 0 : maximize G(u) (17)


β̄cdf > 0, β̄ccdf < 0 : minimize G(u) (18)

and the limit state at the MPP (G(u∗ )) defines the desired response level result.

2.2.1 Limit state linearizations


There are a variety of algorithmic variations that can be explored within RIA/PMA reliability analysis. First, one may
select among several different linearization approaches for the limit state function that can be used to reduce computational
expense during the MPP searches. Options include:
1. a single linearization per response/probability level in x-space centered at the uncertain variable means (commonly known as the Advanced Mean Value (AMV) method):

g(x) ≈ g(µx) + ∇x g(µx)ᵀ(x − µx)    (19)

2. the same as AMV, except that the linearization is performed in u-space. This option has been termed the u-space AMV method (note: µu = T(µx) and is nonzero in general):

G(u) ≈ G(µu) + ∇u G(µu)ᵀ(u − µu)    (20)

3. an initial x-space linearization at the uncertain variable means, with iterative relinearizations at each MPP estimate (x∗) until the MPP converges (commonly known as the AMV+ method):

g(x) ≈ g(x∗) + ∇x g(x∗)ᵀ(x − x∗)    (21)

4. the same as AMV+, except that the linearizations are performed in u-space. This option has been termed the u-space AMV+ method:

G(u) ≈ G(u∗) + ∇u G(u∗)ᵀ(u − u∗)    (22)

5. the MPP search on the original response functions without the use of any linearizations.
The selection between x-space or u-space for performing linearizations depends on where the limit state will be more linear,
since an approximation that is accurate over a larger range will result in more accurate MPP estimates (AMV) or faster
convergence (AMV+). Since this relative linearity depends on the forms of the limit state g(x) and the transformation T (x)
and is therefore application dependent in general, DAKOTA/UQ supports both options. A concern with linearization-based
iterative search methods (i.e., x-/u-space AMV+) is the robustness of their convergence to the MPP. It is possible for the
MPP iterates to oscillate or even diverge. However, to date, this occurrence has been relatively rare, and DAKOTA/UQ
contains checks that monitor for this behavior.
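The AMV+ cycle (options 3 and 4) can be sketched compactly if we assume, for illustration only, that the inputs are already standard normal so that the u-space transform drops out. Each cycle relinearizes at the current MPP estimate and uses the closed-form RIA MPP of a linear limit state a·u + b = z̄, namely u = (z̄ − b)a/‖a‖²:

```python
import math

def amv_plus_ria(G, grad_G, u0, z_bar, tol=1e-8, max_iter=50):
    """AMV+ sketch for the RIA problem of Eq. (13), assuming standard
    normal inputs (u = x). Relinearizes at each MPP estimate (Eq. 22)
    until the estimate converges."""
    u = list(u0)
    for _ in range(max_iter):
        g_val = G(u)
        a = grad_G(u)
        # linearized limit state: G(v) ~ g_val + a.(v - u) = a.v + b = z_bar
        b = g_val - sum(ai * ui for ai, ui in zip(a, u))
        norm2 = sum(ai * ai for ai in a)
        u_new = [(z_bar - b) * ai / norm2 for ai in a]
        step = math.sqrt(sum((un - uo) ** 2 for un, uo in zip(u_new, u)))
        u = u_new
        if step < tol:
            break
    beta = math.sqrt(sum(ui * ui for ui in u))
    return u, beta
```

For a limit state that is already linear in u, the loop converges after a single relinearization; the convergence check mirrors the ‖u^(k+1) − u^(k)‖₂ tolerance used in Section 4.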
2.2.2 Integrations
The second algorithmic variation involves the integration approach for computing probabilities at the MPP, which can be
selected to be first-order (Eqs. 7-8) or second-order integration. Second-order integration involves applying a curvature
correction [Breitung, 1984, Hohenbichler and Rackwitz, 1988, Hong, 1999]. The simplest of these corrections is
p = Φ(−β) ∏ᵢ₌₁ⁿ⁻¹ 1/√(1 + βκi)    (23)

where κi are the principal curvatures of the limit state function and β ≥ 0 (select CDF or CCDF correction based on sign of
β). Second-order reliability approaches are discussed in detail in a companion paper [Eldred and Wojtkiewicz, 2005].
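Applying the curvature correction of Eq. 23 is straightforward once β and the principal curvatures κi at the MPP are in hand (a sketch; computing the κi themselves requires limit state second-derivative information, as discussed in the companion paper):

```python
import math

def breitung_probability(beta, kappas):
    """Second-order probability estimate of Eq. (23) for beta >= 0.
    kappas holds the n-1 principal curvatures of the limit state at
    the MPP."""
    p_form = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Phi(-beta)
    correction = 1.0
    for kappa in kappas:
        correction /= math.sqrt(1.0 + beta * kappa)
    return p_form * correction
```

With all κi = 0 the correction is unity and the estimate reduces to the first-order result of Eqs. 7-8.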
Combining the no-linearization option of the MPP search with first-order and second-order integration approaches results
in the traditional first-order and second-order reliability methods (FORM and SORM). Additional probability integration
approaches can involve importance sampling in the vicinity of the MPP [Hohenbichler and Rackwitz, 1988, Wu, 1994], but
are outside the scope of this paper.

2.2.3 Optimization algorithms


The third algorithmic variation involves the optimization algorithm selection for solving Eqs. 13 and 14. The Hasofer-Lind
Rackwitz-Fissler (HL-RF) algorithm [Haldar and Mahadevan, 2000] is a classical approach that has been broadly applied.
It is a Newton-based approach lacking line search/trust region globalization, and is generally regarded as computationally
efficient but occasionally unreliable. DAKOTA/UQ takes the approach of employing robust, general-purpose optimization
algorithms with provable convergence properties. This paper explores the use of sequential quadratic programming (SQP)
and nonlinear interior-point (NIP) optimization algorithms from the NPSOL [Gill et al., 1998] and OPT++ [Meza, 1994]
libraries, respectively.
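For comparison with the general-purpose SQP/NIP optimizers, the classical HL-RF iteration for the RIA problem of Eq. 13 can be sketched as follows (a textbook version without line search or trust region globalization, which is precisely the robustness concern noted above):

```python
import math

def hl_rf(G, grad_G, u0, z_bar=0.0, tol=1e-10, max_iter=100):
    """Classical HL-RF iteration for the RIA MPP search G(u) = z_bar.
    Newton-like and inexpensive per step, but without globalization it
    can oscillate or diverge on strongly nonlinear limit states."""
    u = list(u0)
    for _ in range(max_iter):
        a = grad_G(u)
        norm2 = sum(ai * ai for ai in a)
        # next iterate is the MPP of the linearization at the current u
        scale = (sum(ai * ui for ai, ui in zip(a, u)) - (G(u) - z_bar)) / norm2
        u_new = [scale * ai for ai in a]
        step = math.sqrt(sum((x - y) ** 2 for x, y in zip(u_new, u)))
        u = u_new
        if step < tol:
            break
    beta = math.sqrt(sum(ui * ui for ui in u))
    return u, beta
```

Each iterate is the exact MPP of the current linearization, which is why HL-RF is cheap per step but offers no convergence guarantee when the limit state is strongly curved.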

2.2.4 Warm Starting of MPP Searches


The final algorithmic variation involves the use of warm starting approaches for improving computational efficiency. MPP
searches can be accelerated through three distinct types of warm starting:

• with internal iteration increment (within an AMV+ reliability analysis)

• with z/p/β level increment (within any reliability analysis)

• with design variable increment (across multiple reliability analyses for RBDO)

and involve several different types of data:


• linearization point and associated response values (for AMV+ reliability analysis)

• MPP optimizer initial guess (for any reliability analysis)

Within a single reliability analysis, the AMV+ linearization point and associated response data for each new z/p/β level
is warm started using the MPP from the previous level. The initial guess for each new MPP search is warm started differently
for AMV+ iterations, RIA level changes, or PMA level changes. For unconverged AMV+ iterations, a simple copy of the
previous MPP estimate is used. In the case of an advance to the next z/p/β level, the initial guess is determined by projecting
from the current MPP out to the new β radius or response level. This projection is important since premature optimization
termination can occur with some optimizers if the RIA/PMA first-order optimality conditions (u + λ∇u G = 0 for RIA or ∇u G + λu = 0 for PMA) remain satisfied for the new level, even though the new equality constraint will be violated. That is, even though the initial guess may not affect overall efficiency in linearization-based searches, it can affect search robustness.
For RIA projections, an approximate u^(k+1) is computed using a first-order Taylor series approximation of the next g level:

G^(k+1) = G^(k) + ∇u G(u^(k))ᵀ(u^(k+1) − u^(k))    (24)

where u^(k+1) is defined as a projection along ∇u G from u^(k):

u^(k+1) = u^(k) + α ∇u G(u^(k))    (25)

Substituting Eq. 25 into Eq. 24 defines the step size

α = (G^(k+1) − G^(k)) / ‖∇u G(u^(k))‖²    (26)

This projection could bypass the need for ∇u G with knowledge of the Lagrange multipliers at the current MPP (∇u G = −(1/λ)u for RIA). For PMA projections, an approximate u^(k+1) is computed by scaling u^(k) to match its magnitude to the next β target:

u^(k+1) = (β^(k+1)/β^(k)) u^(k)    (27)
In the case of multiple reliability method invocations within RBDO, the optimizer initial guess for the first level can be
warm started using information from the previous reliability analysis. The simplest approach uses the MPP for the first level
from the previous reliability method invocation. A more advanced approach for the case of RIA-based RBDO corrects the
previous MPP using a projection [Burton and Hajela, 2004], resulting in an expression similar to Eqs. 25-26:

u^(k+1) = u^(k) − [∇d G(d^(k))ᵀ(d^(k+1) − d^(k)) / ‖∇u G(u^(k))‖²] ∇u G(u^(k))    (28)
For AMV+, the linearization point for the first level is also warm started using the previous/projected MPP, although the
response data at the linearization point must be reevaluated to account for design variable changes. Warm starts for all
subsequent levels within the new reliability analysis are performed using Eqs. 25-27.
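The level-to-level warm start projections of Eqs. 25-27 amount to a few lines each (a sketch; vectors are plain Python lists):

```python
import math

def ria_warm_start(u, grad_g, g_current, g_next):
    """Project the current MPP to the next RIA response level,
    Eqs. (25)-(26)."""
    alpha = (g_next - g_current) / sum(gi * gi for gi in grad_g)
    return [ui + alpha * gi for ui, gi in zip(u, grad_g)]

def pma_warm_start(u, beta_next):
    """Rescale the current MPP to the next PMA beta target, Eq. (27)."""
    beta_current = math.sqrt(sum(ui * ui for ui in u))
    return [ui * beta_next / beta_current for ui in u]
```

For a linear limit state the RIA projection lands exactly on the next level; in general it only supplies an initial guess that sidesteps the premature-termination issue described above.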

3 Reliability-Based Design Optimization


Reliability-based design optimization (RBDO) methods are used to perform design optimization accounting for reliability
metrics. The reliability analysis capabilities described in Section 2 provide a rich foundation for exploring a variety of RBDO
formulations. This paper will present first-order methods for bi-level and sequential RBDO.

3.1 Bi-level RBDO


The simplest and most direct RBDO approach is the bi-level approach in which a full reliability analysis is performed for
every optimization function evaluation. This involves a nesting of two distinct levels of optimization within each other, one
at the design level and one at the MPP search level.
Since an RBDO problem will typically specify both the z level and the p/β level, one can use either the RIA or the
PMA formulation for the UQ portion and then constrain the result in the design optimization portion. In particular, RIA
reliability analysis maps z to p/β, so RIA RBDO constrains p/β:

minimize f
subject to β ≥ β̄
or p ≤ p̄ (29)

And PMA reliability analysis maps p/β to z, so PMA RBDO constrains z:

minimize f
subject to z ≥ z̄ (30)

where z ≥ z̄ is used as the RBDO constraint for a cumulative failure probability (failure defined as z ≤ z̄) but z ≤ z̄ would
be used as the RBDO constraint for a complementary cumulative failure probability (failure defined as z ≥ z̄). It is worth
noting that DAKOTA is not limited to these types of inequality-constrained RBDO formulations; rather, they are convenient
examples. DAKOTA supports general optimization under uncertainty mappings [Eldred et al., 2002] which allow flexible use
of statistics within multiple objectives, inequality constraints, and equality constraints.
An important performance enhancement for bi-level methods is the use of sensitivity analysis to analytically compute the
design gradients of probability, reliability, and response levels. When design variables are separate from the uncertain variables
(i.e., they are not distribution parameters), then the following expressions may be used [Hohenbichler and Rackwitz, 1986,
Karamchandani and Cornell, 1992, Allen and Maute, 2004]:

∇d z = ∇d G    (31)

∇d βcdf = ∇d G / ‖∇u G‖    (32)

∇d pcdf = −φ(−βcdf) ∇d βcdf    (33)

where φ() is the standard normal density function. From Eqs. 11-12, it is evident that

∇d βccdf = −∇d βcdf    (34)

∇d pccdf = −∇d pcdf    (35)
Even when ∇d G is estimated numerically, these analytic expressions can be used to avoid numerical differencing across full
reliability analyses. Since these expressions are derived using the KKT conditions for a converged MPP, they are appropriate
for RBDO using AMV+ and FORM, but not for RBDO using MV or AMV.
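Assembling the analytic sensitivities of Eqs. 31-33 from the design-space and u-space limit state gradients at a converged MPP is a small computation (a sketch for the CDF case; note φ(−β) = φ(β) since the standard normal density is symmetric):

```python
import math

def rbdo_sensitivities(grad_d_G, grad_u_G, beta_cdf):
    """Design sensitivities of response, reliability, and probability
    levels at a converged MPP, Eqs. (31)-(33)."""
    grad_d_z = list(grad_d_G)                               # Eq. (31)
    norm_u = math.sqrt(sum(g * g for g in grad_u_G))
    grad_d_beta = [g / norm_u for g in grad_d_G]            # Eq. (32)
    phi = math.exp(-0.5 * beta_cdf ** 2) / math.sqrt(2.0 * math.pi)
    grad_d_p = [-phi * g for g in grad_d_beta]              # Eq. (33)
    return grad_d_z, grad_d_beta, grad_d_p
```

These values feed directly into the β or p constraints of Eqs. 29 and 36 without numerical differencing across full reliability analyses.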

3.2 Sequential/Surrogate-based RBDO


An alternative RBDO approach is the sequential approach, in which additional efficiency is sought through breaking the
nested relationship of the MPP and design searches. The general concept is to iterate between optimization and uncertainty
quantification, updating the optimization goals based on the most recent probabilistic assessment results. This update may
be based on safety factors [Wu et al., 2001] or other approximations.
A particularly effective approach for updating the optimization goals is to use the p/β/z sensitivity analysis of Eqs. 31-33 in
combination with local surrogate models [Zou et al., 2004]. In this paper, first-order Taylor series approximations will be used
for both the objective function and the constraints, although the use of constraint approximations alone is sufficient to remove
the nesting. When surrogate models are employed, a trust-region model management framework [Giunta and Eldred, 2000]
can be used to adaptively manage the extent of the approximations and ensure convergence of the RBDO process.
In particular, RIA trust-region surrogate-based RBDO employs surrogate models of f and p/β within a trust region Δᵏ centered at dc:

minimize f(dc) + ∇d f(dc)ᵀ(d − dc)
subject to β(dc) + ∇d β(dc)ᵀ(d − dc) ≥ β̄
   or p(dc) + ∇d p(dc)ᵀ(d − dc) ≤ p̄
   ‖d − dc‖∞ ≤ Δᵏ    (36)

and PMA trust-region surrogate-based RBDO employs surrogate models of f and z within a trust region Δᵏ centered at dc:

minimize f(dc) + ∇d f(dc)ᵀ(d − dc)
subject to z(dc) + ∇d z(dc)ᵀ(d − dc) ≥ z̄
   ‖d − dc‖∞ ≤ Δᵏ    (37)

where the sense of the z constraint may vary as described previously.
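A single RIA subproblem of Eq. 36 can be sketched by building the two first-order surrogates at dc and minimizing over the box ‖d − dc‖∞ ≤ Δ. A brute-force grid search stands in for the SQP subproblem solver here, purely for illustration:

```python
import itertools

def tr_surrogate_step(f_c, grad_f, beta_c, grad_beta, d_c, delta, beta_bar,
                      n_grid=11):
    """One RIA trust-region surrogate-based RBDO subproblem (Eq. 36),
    solved by grid search over the infinity-norm trust region (a toy
    solver for illustration only)."""
    axes = [[dc - delta + 2.0 * delta * k / (n_grid - 1) for k in range(n_grid)]
            for dc in d_c]
    best_d, best_f = None, float("inf")
    for d in itertools.product(*axes):
        step = [di - dci for di, dci in zip(d, d_c)]
        f_s = f_c + sum(g * s for g, s in zip(grad_f, step))
        beta_s = beta_c + sum(g * s for g, s in zip(grad_beta, step))
        if beta_s >= beta_bar and f_s < best_f:      # surrogate feasibility
            best_d, best_f = list(d), f_s
    return best_d, best_f
```

A real implementation would solve this subproblem with SQP and embed it in the acceptance/shrinkage logic of the trust-region model management framework.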

4 Computational Experiments
The algorithmic variations of interest in reliability analysis include the linearization approaches (MV, x-/u-space AMV, x-/u-
space AMV+, and FORM), integration approaches (first-/second-order), warm starting approaches, and MPP optimization
algorithm selections (SQP or NIP). RBDO algorithmic variations of interest include use of bi-level, fully-analytic bi-level, or
sequential approaches, use of RIA or PMA formulations for the underlying UQ, and the specific z/p/β mappings that are
employed. Relative performance of these algorithmic variations will be presented in this section for a number of computa-
tional experiments performed using the DAKOTA/UQ software [Wojtkiewicz et al., 2001]. DAKOTA/UQ is the uncertainty
quantification component of DAKOTA [Eldred et al., 2003], an open-source software framework for design and performance
analysis of computational models on high performance computers.

4.1 Lognormal ratio


This test problem has a limit state function defined by the ratio of two lognormally-distributed random variables.
g(x) = x1/x2    (38)
The distributions for both x1 and x2 are Lognormal(1, 0.5) with a correlation coefficient between the two variables of 0.3.

4.1.1 Uncertainty quantification


For RIA, 24 response levels (.4, .5, .55, .6, .65, .7, .75, .8, .85, .9, 1, 1.05, 1.15, 1.2, 1.25, 1.3, 1.35, 1.4, 1.5, 1.55, 1.6, 1.65, 1.7,
and 1.75) are mapped into the corresponding cumulative probability levels. For PMA, these 24 probability levels (the fully
converged results from RIA FORM) are mapped back into the original response levels. Tables 1 and 2 show the computational
results for each of the six method variants using numerical gradients computed with central differences. Duplicate function
evaluations (detected by DAKOTA’s evaluation cache) are not included in the totals, and an AMV+ convergence tolerance
of ‖u^(k+1) − u^(k)‖₂ < 10⁻⁴ is used to give comparable accuracy to the FORM SQP/NIP converged results. The RIA p error norms and PMA z error norms are measured relative to the fully-converged FORM results. That is, the FORM errors (RIA p error norm of 0.01538 and PMA z error norm of 0.03775) relative to a Latin Hypercube reference solution of 10⁶ samples are not included in order to avoid obscuring the relative errors. Figure 1 overlays the computed CDF values for each of the six method variants as well as the Latin Hypercube reference solution.

Table 1: Reliability index approach results, lognormal ratio test problem.

RIA Approach    SQP Function Evals (Cold/Warm)   NIP Function Evals (Cold/Warm)   CDF p Error Norm   Target z Offset Norm
MV              5                                5                                0.2312             0.0
x-space AMV     30                               30                               0.05210            0.5100
u-space AMV     30                               30                               0.0                0.6915
x-space AMV+    504/506                          505/506                          0.0                0.0
u-space AMV+    388/381                          389/381                          0.0                0.0
FORM            1416/1371                        546/491                          0.0                0.0

Table 2: Performance measure approach results, lognormal ratio test problem.

PMA Approach    SQP Function Evals (Cold/Warm)   NIP Function Evals (Cold/Warm)   CDF z Error Norm   Target p Offset Norm
MV              5                                5                                0.6072             0.0
x-space AMV     30                               30                               0.1166             0.0
u-space AMV     30                               30                               0.0                0.0
x-space AMV+    390/406                          415/406                          0.0                0.0
u-space AMV+    150/221                          178/231                          0.0                0.0
FORM            3241/861                         879/481                          0.0                0.0
It is evident that, relative to the fully-converged AMV+/FORM results, MV accuracy degrades rapidly away from the
means. AMV is reasonably accurate over the full range (x-space AMV has a factor of 4.8 reduction in error norm on
average relative to MV, and u-space AMV has zero error for this problem) but has undesirable offsets from the prescribed
response levels in the RIA case. In terms of computational expense, MV is two orders of magnitude less expensive than
AMV+/FORM and AMV is one order of magnitude less expensive, which makes these techniques attractive when rough
statistics are sufficient. When more accurate statistics are desired, AMV+ has equal accuracy to FORM and is a factor of
9.1 less expensive on average in the case of cold starts using sequential quadratic programming (SQP) for each level, which
decreases to a factor of 3.1 less expensive in the case of warm starts using SQP. That is, FORM benefits more from warm
starting than AMV+. When using a nonlinear interior-point (NIP) optimizer, FORM solutions are generally less expensive
and become directly competitive with AMV+ in some cases (AMV+ is a factor of 2.4 less expensive on average than FORM
in the case of cold starts using NIP, which decreases to a factor of 1.4 in the case of warm starts using NIP). The SQP/NIP
comparison is much less relevant for the AMV/AMV+ methods since the MPP searches are linearized. Another benefit of
NIP relative to SQP has been observed in PMA solutions (Eq. 14). PMA solutions with SQP involve penalties applied to the
equality constraint (e.g., in an augmented Lagrangian merit function) and must have strict u-space bound constraints (e.g.,
10 standard deviations) to avoid excessive u-space excursions in minimizing G(u) prior to enforcement of the uᵀu equality
constraint. These excursions can result in inaccurate Hessian approximations in moderate cases and numerical overflow in
extreme cases. NIP methods are less prone to this difficulty since they proceed toward constraint satisfaction more uniformly.

4.2 Short column


This test problem involves the plastic analysis and design of a short column with rectangular cross section (width b and depth
h) having uncertain material properties (yield stress Y ) and subject to uncertain loads (bending moment M and axial force
P ) [Kuschel and Rackwitz, 1997]. The limit state function is defined as:

g(x) = 1 − 4M/(bh²Y) − P²/(b²h²Y²)    (39)
The distributions for P , M , and Y are Normal(500, 100), Normal(2000, 400), and Lognormal(5, 0.5), respectively, with a
correlation coefficient of 0.5 between P and M (uncorrelated otherwise). The nominal values for b and h are 5 and 15,
respectively. In this test problem, analytic gradients of f and g with respect to P , M , and Y are used to reduce function
evaluation counts (note: the evaluation counts reflect data requests from the algorithm and do not separate value and gradient
requests).
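The limit state of Eq. 39 and its analytic gradient with respect to the uncertain variables (P, M, Y) are easy to write down and verify by finite differences (a sketch; b and h enter as fixed design values):

```python
def short_column_g(P, M, Y, b=5.0, h=15.0):
    """Short column limit state of Eq. (39)."""
    return 1.0 - 4.0 * M / (b * h**2 * Y) - P**2 / (b**2 * h**2 * Y**2)

def short_column_grad(P, M, Y, b=5.0, h=15.0):
    """Analytic gradient of g with respect to the uncertain (P, M, Y)."""
    dg_dP = -2.0 * P / (b**2 * h**2 * Y**2)
    dg_dM = -4.0 / (b * h**2 * Y)
    dg_dY = 4.0 * M / (b * h**2 * Y**2) + 2.0 * P**2 / (b**2 * h**2 * Y**3)
    return [dg_dP, dg_dM, dg_dY]
```

At the nominal means (P, M, Y) = (500, 2000, 5) with (b, h) = (5, 15), g evaluates to −2.2, consistent with the (mostly negative) response levels used for the CDF below.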
[Figure 1: Lognormal ratio cumulative distribution function. (a) RIA methods; (b) PMA methods. Each panel plots cumulative probability versus response value for MV, x-/u-space AMV, x-/u-space AMV+ & FORM, and the 10⁶-sample Latin Hypercube reference.]

Table 3: Reliability index approach results, short column test problem.

RIA Approach    SQP Function Evals (Cold/Warm)   NIP Function Evals (Cold/Warm)   CDF p Error Norm   Target z Offset Norm
MV              1                                1                                0.1548             0.0
x-space AMV     45                               45                               0.009275           18.28
u-space AMV     45                               45                               0.006408           18.81
x-space AMV+    239/192                          239/192                          0.0                0.0
u-space AMV+    263/207                          263/207                          0.0                0.0
FORM            653/636                          473/351                          0.0                0.0

4.2.1 Uncertainty quantification


For RIA, 43 response levels (-9.0, -8.75, -8.5, -8.0, -7.75, -7.5, -7.25, -7.0, -6.5, -6.0, -5.5, -5.0, -4.5, -4.0, -3.5, -3.0, -2.5, -2.0,
-1.9, -1.8, -1.7, -1.6, -1.5, -1.4, -1.3, -1.2, -1.1, -1.0, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.05, 0.1, 0.15, 0.2, 0.25)
are mapped into the corresponding cumulative probability levels. For PMA, these 43 probability levels (the fully converged
results from RIA FORM) are mapped back into the original response levels. Tables 3 and 4 show the computational results for
each of the six method variants. The RIA p error norms and PMA z error norms are measured relative to the fully-converged
FORM results. That is, the FORM errors (RIA p error norm of 0.01370 and PMA z error norm of 0.2181) relative to a Latin
Hypercube reference solution of 10⁶ samples are omitted in order to avoid obscuring the relative errors. Figure 2 overlays
the computed CDF values for each of the six method variants as well as the Latin Hypercube reference solution.
Relative to the fully-converged AMV+/FORM results, MV accuracy again degrades rapidly away from the means. AMV
is again reasonably accurate over the full range (a factor of 15 reduction in error norm on average relative to MV) but has
undesirable offsets from the prescribed response levels in the RIA case. In terms of computational expense, MV and AMV
are again significantly less expensive. AMV+ has equal accuracy to FORM and is a factor of 3.8 less expensive on average than FORM in the case of cold starts using SQP for each level, which decreases to a factor of 3.5 in the case of warm starts using SQP. NIP-based FORM solutions are again less expensive than SQP-based FORM solutions and are approaching the expense of AMV+ solutions (AMV+ is a factor of 2.8 less expensive on average than FORM in the case of cold starts using NIP, which decreases to a factor of 1.7 in the case of warm starts using NIP).

Table 4: Performance measure approach results, short column test problem.

PMA Approach    SQP Function Evals (Cold/Warm)   NIP Function Evals (Cold/Warm)   CDF z Error Norm   Target p Offset Norm
MV              1                                1                                7.454              0.0
x-space AMV     45                               45                               0.9420             0.0
u-space AMV     45                               45                               0.5828             0.0
x-space AMV+    201/171                          190/179                          0.0                0.0
u-space AMV+    246/205                          242/205                          0.0                0.0
FORM            1123/716                         780/325                          0.0                0.0

[Figure 2: Short column cumulative distribution function. (a) RIA methods; (b) PMA methods. Each panel plots cumulative probability versus response value for MV, x-/u-space AMV, x-/u-space AMV+ & FORM, and the 10⁶-sample Latin Hypercube reference.]

4.2.2 Reliability-based design optimization


The short column example problem is also amenable to RBDO. An objective function of cross-sectional area and a target
reliability index of 2.5 (cumulative failure probability = p(g ≤ 0) ≤ 0.00621) are used in the design problem:
min bh
s.t. β ≥ 2.5
5.0 ≤ b ≤ 15.0
15.0 ≤ h ≤ 25.0 (40)
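The probability target quoted alongside the reliability target in Eq. 40 follows from the standard first-order mapping p = Φ(−β); this can be checked directly with the error function:

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# A first-order reliability index beta maps to a failure probability p = Phi(-beta).
beta_target = 2.5
p_fail = std_normal_cdf(-beta_target)
print(f"beta = {beta_target} -> p = {p_fail:.5f}")  # ~0.00621
```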
It is important to note that only a single response/probability mapping is needed for each uncertainty analysis (instead of
the 43 used previously in generating a full CDF). As is evident from the UQ results shown in Figure 2, the initial design
of (b, h) = (5, 15) is infeasible and the optimization must add material to obtain the target reliability at the optimal design
(b, h) = (8.68, 25.0).
Table 5 shows the results for bi-level RBDO for 18 variants. Constraint violations are raw norms (not normalized by
allowable). Analytic gradients of g with respect to P , M , and Y are used at the uncertainty analysis level, but numerical
gradients of f and z/p/β with respect to b and h are computed using central finite differences at the optimization level. SQP
is used for optimization at both levels. It is evident that RBDO with MV and AMV is relatively inexpensive, but can only
obtain an approximate optimal solution. Applying reliability constraints using β is generally preferred to applying probability
constraints using p in the RIA RBDO formulation of Eq. 29 (expense reduced by a factor of 2.3 on average), since β tends
to be more linear/well-behaved/well-scaled for the top-level optimizer than p. In addition, warm starts are generally helpful,
reducing expense by a factor of 1.2 on average, and AMV+-based RBDO consistently outperforms FORM-based RBDO by
a factor of 4.8 on average. No consistent preference for RIA-based or PMA-based RBDO is evident in this case, although
RIA AMV+ RBDO using warm starts and β constraints was the top performer and solved the problem in fewer than 200
function evaluations.
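The cost of the numerical design-level gradients described above is easy to quantify: central differencing requires two extra reliability analyses per design variable, each a full MPP search. A minimal sketch (beta_of_d below is a hypothetical smooth stand-in, not the short column limit state):

```python
# Each call to the reliability metric stands in for one complete MPP search
# (e.g. an AMV+ or FORM iteration sequence), so a central-difference gradient
# over n design variables costs 2n full reliability analyses.

calls = 0

def beta_of_d(d):
    global calls
    calls += 1                  # one full MPP search per call
    b, h = d
    return 0.1 * b * h - 8.0    # hypothetical smooth reliability metric

def central_diff_grad(f, d, h_step=1e-5):
    grad = []
    for i in range(len(d)):
        dp = list(d); dp[i] += h_step
        dm = list(d); dm[i] -= h_step
        grad.append((f(dp) - f(dm)) / (2.0 * h_step))
    return grad

g = central_diff_grad(beta_of_d, [5.0, 15.0])
print(calls)  # 2 * n_design_vars = 4 MPP searches for one design-level gradient
```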
Table 6 shows the results for fully-analytic bi-level RBDO employing the gradient expressions for p, β, and z (Eqs. 31-33).
In this case, only the AMV+ and FORM variants for RIA/PMA RBDO are allowed, since the sensitivity expressions require
a fully-converged MPP. In comparison with Table 5, it is evident that avoiding numerical differencing of reliability metrics
at the design level results in a significant improvement in efficiency (factor of 2.3 on average). In this case, warm starts are
less effective (only a factor of 1.1 on average) since the design changes between reliability analyses are larger than when finite
differencing. AMV+-based RBDO outperforms FORM-based RBDO by a factor of 5.6 on average.
Table 7 shows the results for sequential RBDO using a trust-region surrogate-based approach. The surrogates are
first-order Taylor-series using the same analytic gradients of p, β, and z. The sequential case is more efficient than the
fully-analytic bi-level case, with expense reduced by another factor of 1.4 on average. Warm starts are even less effective than
in the fully-analytic bi-level case and save only 6% on average. AMV+-based sequential RBDO outperforms FORM-based
sequential RBDO by a factor of 6.2 on average, and solves the problem in as few as 65 function evaluations.

Table 5: Bi-level RBDO results, short column test problem.

RBDO                           Function Evals      Objective   Constraint
Approach                       (Cold/Warm Start)   Function    Violation
RIA z → p MV                   50                  197.8       0.01913
RIA z → p x-space AMV          150                 197.5       0.01962
RIA z → p u-space AMV          147                 198.9       0.01721
RIA z → p x-space AMV+         370/354             217.1       0.0
RIA z → p u-space AMV+         400/371             217.1       0.0
RIA z → p FORM                 1877/1781           217.1       0.0
RIA z → β MV                   16                  197.7       0.5475
RIA z → β x-space AMV          48                  197.4       0.5559
RIA z → β u-space AMV          48                  198.2       0.5326
RIA z → β x-space AMV+         195/185             216.7       0.0
RIA z → β u-space AMV+         211/193             216.7       0.0
RIA z → β FORM                 916/1088            216.7       0.0
PMA p, β → z MV                35                  197.7       0.1547
PMA p, β → z x-space AMV       124                 214.8       0.01367
PMA p, β → z u-space AMV       124                 215.6       0.008390
PMA p, β → z x-space AMV+      268/212             216.8       0.0
PMA p, β → z u-space AMV+      328/214             216.8       0.0
PMA p, β → z FORM              1567/707            216.8       0.0

Table 6: Analytic bi-level RBDO results, short column test problem.

RBDO                           Function Evals      Objective   Constraint
Approach                       (Cold/Warm Start)   Function    Violation
RIA z → p x-space AMV+         161/149             217.1       0.0
RIA z → p u-space AMV+         171/160             217.1       0.0
RIA z → p FORM                 865/911             217.1       0.0
RIA z → β x-space AMV+         76/72               216.7       0.0
RIA z → β u-space AMV+         82/76               216.7       0.0
RIA z → β FORM                 538/612             216.7       0.0
PMA p, β → z x-space AMV+      105/100             216.8       0.0
PMA p, β → z u-space AMV+      125/102             216.8       0.0
PMA p, β → z FORM              508/285             216.8       0.0

Table 7: Surrogate-based RBDO results, short column test problem.

RBDO                           Function Evals      Objective   Constraint
Approach                       (Cold/Warm Start)   Function    Violation
RIA z → p x-space AMV+         77/75               216.9       0.0
RIA z → p u-space AMV+         82/81               216.9       0.0
RIA z → p FORM                 573/577             216.9       0.0
RIA z → β x-space AMV+         67/65               216.7       0.0
RIA z → β u-space AMV+         72/72               216.7       0.0
RIA z → β FORM                 508/561             216.7       0.0
PMA p, β → z x-space AMV+      79/76               216.7       2.1e-4
PMA p, β → z u-space AMV+      87/79               216.7       2.1e-4
PMA p, β → z FORM              333/228             216.7       2.1e-4

Figure 3: Cantilever beam test problem. [Schematic omitted: a cantilever of length L = 100 in. with cross-sectional width
w and thickness t, loaded at the tip by a horizontal force X and a vertical force Y.]
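The trust-region logic used by the sequential approach (build a first-order Taylor surrogate, solve the subproblem inside the trust region, then accept or reject and resize the region based on how well the surrogate predicted the true change) can be sketched generically. Everything here is a hypothetical stand-in: the quadratic objective and all constants are illustrative, not the short column model.

```python
from math import hypot

def f(d):
    """Hypothetical smooth stand-in objective (not the short column model)."""
    return (d[0] - 3.0) ** 2 + (d[1] - 2.0) ** 2

def grad_f(d):
    return [2.0 * (d[0] - 3.0), 2.0 * (d[1] - 2.0)]

def trust_region_sbo(d, radius=1.0, max_iter=200):
    """First-order Taylor surrogate with a trust-region ratio test (sketch)."""
    for _ in range(max_iter):
        g = grad_f(d)
        gnorm = hypot(g[0], g[1])
        if gnorm < 1e-8 or radius < 1e-10:
            break
        # The minimizer of a linear surrogate within the trust region lies on
        # the boundary, along the negative gradient direction.
        step = [-radius * gi / gnorm for gi in g]
        predicted = radius * gnorm  # surrogate-predicted decrease
        actual = f(d) - f([d[0] + step[0], d[1] + step[1]])
        rho = actual / predicted    # trust-region agreement ratio
        if rho > 0.75:
            radius = min(2.0 * radius, 10.0)  # good agreement: expand
        elif rho < 0.25:
            radius *= 0.5                     # poor agreement: contract
        if rho > 0.0:                         # accept only improving steps
            d = [d[0] + step[0], d[1] + step[1]]
    return d

d_star = trust_region_sbo([0.0, 0.0])
```

The ratio test is the mechanism that lets large design changes through when the linear surrogate is trustworthy, which is consistent with the observation that warm starts matter less in the sequential approach.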

4.3 Cantilever beam


The final test problem involves the simple uniform cantilever beam [Sues et al., 2001, Wu et al., 2001] shown in Figure 3.
Random variables in the problem include the yield stress R of the beam material, the Young’s modulus E of the material,
and the horizontal and vertical loads, X and Y , which are modeled with normal distributions using N(40000, 2000), N(2.9E7,
1.45E6), N(500, 100), and N(1000, 100), respectively. Problem constants include L = 100 in. and D0 = 2.2535 in. The
constraints on beam response have the following analytic form:

    stress = (600/(w t^2)) Y + (600/(w^2 t)) X ≤ R                         (41)

    displacement = (4 L^3/(E w t)) sqrt( (Y/t^2)^2 + (X/w^2)^2 ) ≤ D0      (42)

or, when scaled:

    gS = stress/R − 1 ≤ 0                                                  (43)

    gD = displacement/D0 − 1 ≤ 0                                           (44)
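As a quick sanity check on Eqs. 41-44, evaluating the limit states at the means of the random variables for the deterministic optimum (w, t) = (2.35, 3.33) quoted in Section 4.3.2 shows both scaled constraints satisfied and nearly active:

```python
from math import sqrt

# Cantilever limit states of Eqs. 41-44, evaluated at the means of the random
# variables for the deterministic optimal design (w, t) = (2.35, 3.33).
L, D0 = 100.0, 2.2535
R, E, X, Y = 40000.0, 2.9e7, 500.0, 1000.0   # means of the random variables
w, t = 2.35, 3.33

stress = (600.0 / (w * t**2)) * Y + (600.0 / (w**2 * t)) * X
displacement = (4.0 * L**3 / (E * w * t)) * sqrt((Y / t**2) ** 2 + (X / w**2) ** 2)

gS = stress / R - 1.0         # scaled stress constraint, Eq. 43
gD = displacement / D0 - 1.0  # scaled displacement constraint, Eq. 44
print(gS, gD)  # both slightly negative: constraints near-active at the optimum
```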

4.3.1 Uncertainty quantification


For RIA, 11 levels (0.0 to 1.0 in 0.1 increments) are employed for each limit state function (gS and gD) and are mapped into
the corresponding cumulative probability levels. For PMA, these probability levels (the fully converged results from RIA
FORM) are mapped back into the original response levels. In this test problem, analytic gradients of gS and gD with respect
to R, E, X, and Y are used to reduce function evaluation counts. Tables 8 and 9 show the computational results for each of
the method variants. In this case, since all uncertain variables are normally distributed, the x-space and u-space linearization
approaches are equivalent. The RIA p error norms and PMA z error norms are measured relative to the fully-converged
FORM results. That is, the FORM errors (RIA p error norm of 0.02764 and PMA z error norm of 0.04198) relative to a
Latin Hypercube reference solution of 10^6 samples are not included in order to avoid obscuring the relative errors. Figure 4
overlays the computed CDF values for each of the method variants as well as the Latin Hypercube reference solution.
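The mean-value mechanics behind the MV rows of Tables 8 and 9 can be sketched for the gS limit state of Eq. 43: the response mean and standard deviation are estimated from the limit state value and its analytic partials at the input means, and CDF values then follow from the standard normal distribution. The design point (w, t) = (2.5, 2.5) below is purely a hypothetical choice for illustration, since the UQ design point is not restated in this section; the input distributions are those given in the text.

```python
from math import erf, sqrt

# Mean-value (MV) first-order approximation of the gS CDF of Eq. 43.
w, t = 2.5, 2.5                                  # hypothetical design point
mu = {"R": 40000.0, "X": 500.0, "Y": 1000.0}
sigma = {"R": 2000.0, "X": 100.0, "Y": 100.0}    # E does not appear in gS

def gS(R, X, Y):
    stress = (600.0 / (w * t**2)) * Y + (600.0 / (w**2 * t)) * X
    return stress / R - 1.0

# First-order mean and standard deviation from analytic partials at the means.
stress_mu = (600.0 / (w * t**2)) * mu["Y"] + (600.0 / (w**2 * t)) * mu["X"]
g_mu = gS(mu["R"], mu["X"], mu["Y"])
dR = -stress_mu / mu["R"] ** 2
dX = (600.0 / (w**2 * t)) / mu["R"]
dY = (600.0 / (w * t**2)) / mu["R"]
g_sigma = sqrt((dR * sigma["R"]) ** 2 + (dX * sigma["X"]) ** 2
               + (dY * sigma["Y"]) ** 2)

def mv_cdf(z):
    """MV estimate of P(gS <= z) via the standard normal CDF."""
    u = (z - g_mu) / g_sigma
    return 0.5 * (1.0 + erf(u / sqrt(2.0)))

p0 = mv_cdf(0.0)  # MV estimate of P(gS <= 0) at this design
```

Because the MV statistics come from a single expansion at the means, the whole CDF is obtained from one limit-state evaluation plus one gradient, which is why the MV rows report a single function evaluation.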
Table 8: Reliability index approach results, cantilever test problem.
RIA SQP Function Evals NIP Function Evals CDF p Target z
Approach (Cold/Warm Start) (Cold/Warm Start) Error Norm Offset Norm
MV 1 1 0.01889 0.0
x-/u-space AMV 23 23 0.001175 0.1261
x-/u-space AMV+ 92/89 93/94 0.0 0.0
FORM 249/165 235/189 0.0 0.0

Table 9: Performance measure approach results, cantilever test problem.


PMA SQP Function Evals NIP Function Evals CDF z Target p
Approach (Cold/Warm Start) (Cold/Warm Start) Error Norm Offset Norm
MV 1 1 0.1239 0.0
x-/u-space AMV 23 23 0.01946 0.0
x-/u-space AMV+ 81/69 96*/88 0.0 0.0
FORM 625/265 440*/234* 0.0 0.0

Figure 4: Cantilever beam cumulative distribution functions. (a) RIA methods, gS; (b) PMA methods, gS; (c) RIA methods,
gD; (d) PMA methods, gD. [Plots omitted: each panel overlays the CDFs computed by MV, x-/u-space AMV, and x-/u-space
AMV+ & FORM against the 10^6-sample Latin hypercube reference.]


Table 10: Bi-level RBDO results, cantilever test problem.
RBDO Function Evals Objective Constraint
Approach (Cold/Warm Start) Function Violation
RIA z → p MV 95 11.37 0.0
RIA z → p x-/u-space AMV 285 11.37 0.0
RIA z → p x-/u-space AMV+ 597/624 9.563 0.0
RIA z → p FORM 1522/1111 9.563 0.0
RIA z → β MV 44 9.392 0.2958
RIA z → β x-/u-space AMV 132 9.392 0.2958
RIA z → β x-/u-space AMV+ 540/493 9.520 0.0
RIA z → β FORM 1082/865 9.520 0.0
PMA p, β → z MV 53 9.393 0.03216
PMA p, β → z x-/u-space AMV 159 9.504 0.003602
PMA p, β → z x-/u-space AMV+ 547/428 9.521 0.0
PMA p, β → z FORM 3631/1148 9.521 0.0

Relative to the fully-converged AMV+/FORM results, MV accuracy again degrades rapidly away from the means. AMV is
reasonably accurate over the full range (a factor of 11 reduction in error norm on average relative to MV) but has undesirable
offsets from the prescribed response levels in the RIA case. In terms of computational expense, MV and AMV are again
significantly less expensive. AMV+ has equal accuracy to FORM and is a factor of 5.2 and 2.8 less expensive on average
in the case of cold and warm starts, respectively, using SQP, and is a factor of 3.6 and 2.3 less expensive on average in the
case of cold and warm starts, respectively, using NIP. NIP can be seen to be less robust than SQP for this problem, as “*”
indicates that one or more of the 11 levels failed to converge.

4.3.2 Reliability-based design optimization


The design problem is to minimize the weight (or, equivalently, the cross-sectional area) of the beam subject to the
displacement and stress constraints. If the random variables are fixed at their means, the resulting deterministic design problem
(with constraints gS ≤ 0 and gD ≤ 0) has the solution (w, t) = (2.35, 3.33) with an objective function of 7.82. When seeking
reliability levels of 3.0 for these constraints (complementary cumulative failure probabilities p(gS ≥ 0) and p(gD ≥ 0) ≤ 0.00135),
the design problem becomes:

min wt
s.t. βD ≥ 3.0
βS ≥ 3.0
1.0 ≤ w ≤ 4.0
1.0 ≤ t ≤ 4.0 (45)

which has the solution (w, t) = (2.45, 3.88) with an objective function of 9.52.
Table 10 shows the results for bi-level RBDO using 12 variants (the x-space and u-space linearizations are identical for
this problem). Constraint violations are raw norms (not normalized by allowable). Analytic gradients of gS and gD with
respect to R, E, X, and Y are used at the uncertainty analysis level, but numerical gradients of f and z/p/β with respect to
w and t are computed using central finite differences at the optimization level. SQP is used for optimization at both levels.
Again, RBDO with MV and AMV is relatively inexpensive, but can only obtain an approximate optimal solution. Reliability
constraints are again preferred to probability constraints in RIA RBDO (expense reduced by a factor of 1.6 on average).
In addition, warm starts are helpful, reducing expense by a factor of 1.5 on average, and AMV+-based RBDO consistently
outperforms FORM-based RBDO by a factor of 2.9 on average. PMA RBDO with AMV+ was the top performer in this
case and solved the problem in 428 function evaluations.
Table 11 shows the results for fully-analytic bi-level RBDO employing the gradient expressions for p, β, and z (Eqs. 31-33).
As for short column, only the AMV+ and FORM variants for RIA/PMA RBDO are allowed, since the sensitivity expressions
require a fully-converged MPP. In comparison with Table 10, avoiding numerical differencing at the design level reduces
computation expense by a factor of 2.3 on average. Again, warm starts are less effective (reduction of only a factor of 1.2 on
average) since the design changes between reliability analyses are larger than when finite differencing. AMV+-based RBDO
outperforms FORM-based RBDO by a factor of 2.8 on average.
Table 12 shows the results for sequential RBDO using a trust-region surrogate-based approach. The surrogates are
first-order Taylor-series using the same analytic gradients of p, β, and z. The sequential case is more efficient than the
fully-analytic bi-level case, with expense reduced by another factor of 1.3 on average. Warm starts are less effective than
in the fully-analytic bi-level case and save only 2% on average. AMV+-based sequential RBDO outperforms FORM-based
sequential RBDO by a factor of 2.5 on average, and solves the problem in under 200 function evaluations.

Table 11: Analytic bi-level RBDO results, cantilever test problem.

RBDO                             Function Evals      Objective   Constraint
Approach                         (Cold/Warm Start)   Function    Violation
RIA z → p x-/u-space AMV+        279/319             9.529       1.1e-5
RIA z → p FORM                   623/531             9.563       0.0
RIA z → β x-/u-space AMV+        207/208             9.520       0.0
RIA z → β FORM                   367/324             9.520       0.0
PMA p, β → z x-/u-space AMV+     247/232             9.521       0.0
PMA p, β → z FORM                1408/843            9.521       0.0

Table 12: Surrogate-based RBDO results, cantilever test problem.

RBDO                             Function Evals      Objective   Constraint
Approach                         (Cold/Warm Start)   Function    Violation
RIA z → p x-/u-space AMV+        197/186             9.520       1.0e-9
RIA z → p FORM                   342/457             9.520       1.0e-9
RIA z → β x-/u-space AMV+        189/203             9.520       9.5e-5
RIA z → β FORM                   372/442             9.520       9.5e-5
PMA p, β → z x-/u-space AMV+     181/181             9.520       2.7e-9
PMA p, β → z FORM                759/487             9.520       2.7e-9

5 Conclusions
DAKOTA/UQ provides a flexible, object-oriented implementation of reliability methods that allows plug-and-play experimen-
tation with RIA/PMA formulations and various linearization, integration, warm starting, and MPP optimization algorithm
selections. Linearization approaches have included MV, x-space and u-space AMV, x-space and u-space AMV+, and no lin-
earization (FORM); integration options have included first-order and second-order integrations; warm starting has included
MPP reuse and projections; and MPP search selection has included SQP and NIP optimization algorithms. These reliability
analysis capabilities provide a substantial foundation for RBDO formulations, and bi-level and sequential RBDO approaches
have been explored in this paper. Bi-level RBDO has included basic and fully-analytic approaches, and sequential RBDO
has utilized a trust-region surrogate-based approach.
Reliability method performance comparisons for the three simple test problems presented indicate several trends. MV
and AMV are significantly less expensive than AMV+ and FORM, but come with corresponding reductions in accuracy.
In combination, these methods provide a useful spectrum of accuracy and expense that allow the computational effort to
be balanced with the statistical precision required for particular applications. In addition, support for forward and inverse
mappings (RIA and PMA) provide the flexibility to support different UQ analysis needs. Relative to FORM, AMV+ has
been shown to have equal accuracy, equal robustness (for these test problems), and consistent computational savings (factor
of 3.5 reduction in function evaluations on average). In addition, NIP optimizers have shown promise in being less susceptible
to PMA u-space excursions and in being more efficient than SQP optimizers in most cases (factor of 1.8 less expensive on
average for FORM). Warm starting with projections has been shown to be effective for reliability analyses, with a factor of
1.3 reduction in expense on average. The x-space and u-space linearizations for AMV and AMV+ were both effective, and
the relative performance was strongly problem-dependent (u-space AMV+ was consistently more efficient for lognormal ratio,
x-space AMV+ was consistently more efficient for short column, and x-space and u-space were equivalent for cantilever).
Among all combinations tested, AMV+ with warm starts is the recommended approach.
RBDO results mirror the reliability analysis trends. Basic bi-level RBDO has been evaluated with up to 18 variants
(RIA/PMA with different p/β/z mappings for MV/x-space AMV/u-space AMV/x-space AMV+/u-space AMV+/FORM),
and fully-analytic bi-level and sequential RBDO have been evaluated with up to 9 variants (RIA/PMA with different p/β/z
mappings for x-space AMV+/u-space AMV+/FORM). Bi-level RBDO with MV and AMV are inexpensive but give only
approximate optima. These approaches may be useful for preliminary design or for warm-starting other RBDO methods.
Bi-level RBDO with AMV+ was shown to have equal accuracy and robustness to bi-level FORM-based approaches and be
a factor of 4.2 less expensive on average. In addition, usage of β in RIA RBDO constraints was preferred due to it being
more well-behaved and more well-scaled than constraints on p (resulting in a factor of 1.7 reduction in expense), and this
approach for RIA RBDO was more efficient (factor of 1.4 reduction in expense on average) than PMA RBDO. Warm starts
in RBDO were most effective when the design changes were small, and basic bi-level RBDO (with numerical differencing at
the design level) showed a factor of 1.4 reduction in expense, which decreased to being marginally effective for fully-analytic
bi-level RBDO (factor of 1.1 reduction) and relatively ineffective for sequential RBDO (only a 5% reduction in expense on
average). However, large design changes were desirable for overall RBDO efficiency and, compared to basic bi-level RBDO,
fully-analytic RBDO was a factor 2.3 less expensive on average and sequential RBDO was a factor of 3.2 less expensive on
average. Among all combinations tested, sequential RBDO using AMV+ is the recommended approach.
The effectiveness of first-order approximations, both in limit state linearization within reliability analysis and in surrogate-
based RBDO, has led to additional work in second-order approximations which hold promise to both improve the accuracy
of probability integrations and improve the computational efficiency through accelerated convergence rates. This work is
presented in a companion paper.

6 Acknowledgments
The authors would like to express their thanks to the Sandia Computer Science Research Institute (CSRI) for support of this
collaborative work between Sandia National Laboratories and the University of Notre Dame.

References
[Agarwal et al., 2004] Agarwal, H., Renaud, J.E., Lee, J.C., and Watson, L.T., A Unilevel Method for Reliability Based De-
sign Optimization, paper AIAA-2004-2029 in Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC Structures, Struc-
tural Dynamics, and Materials Conference, Palm Springs, CA, April 19-22, 2004.
[Allen and Maute, 2004] Allen, M. and Maute, K., Reliability-based design optimization of aeroelastic structures, Struct.
Multidiscip. O., Vol. 27, 2004, pp. 228-242.
[Breitung, 1984] Breitung, K., Asymptotic approximation for multinormal integrals, J. Eng. Mech., ASCE, Vol. 110, No. 3,
1984, pp. 357-366.
[Burton and Hajela, 2004] Burton, S.A. and Hajela, P., Efficient Reliability-Based Structural Optimization Through Most
Probable Failure Point Approximation, paper AIAA-2004-1901 in Proceedings of the 45th AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics, and Materials Conference, Palm Springs, CA, April 19-22, 2004.
[Der Kiureghian and Liu, 1986] Der Kiureghian, A. and Liu, P.L., Structural Reliability Under Incomplete Probability In-
formation, J. Eng. Mech., ASCE, Vol. 112, No. 1, 1986, pp. 85-104.
[Eldred et al., 2002] Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F., Jr., and Trucano, T.G., Formulations for Surrogate-
Based Optimization Under Uncertainty, paper AIAA-2002-5585 in Proceedings of the 9th AIAA/ISSMO Symposium on
Multidisciplinary Analysis and Optimization, Atlanta, GA, Sept. 4-6, 2002.
[Eldred et al., 2003] Eldred, M.S., Giunta, A.A., van Bloemen Waanders, B.G., Wojtkiewicz, S.F., Jr., Hart, W.E., and
Alleva, M.P., DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation,
Uncertainty Quantification, and Sensitivity Analysis. Version 3.1 Users Manual. Sandia Technical Report SAND2001-3796,
Revised April 2003, Sandia National Laboratories, Albuquerque, NM.
[Eldred and Wojtkiewicz, 2005] Eldred, M.S. and Wojtkiewicz, S.F., Jr., Second-Order Reliability Formulations in
DAKOTA/UQ, (in preparation).
[Gill et al., 1998] Gill, P.E., Murray, W., Saunders, M.A., and Wright, M.H., User’s Guide for NPSOL 5.0: A Fortran Package
for Nonlinear Programming, System Optimization Laboratory, Technical Report SOL 86-1, Revised July 1998, Stanford
University, Stanford, CA.
[Giunta and Eldred, 2000] Giunta, A.A. and Eldred, M.S., Implementation of a Trust Region Model Management Strategy
in the DAKOTA Optimization Toolkit, paper AIAA-2000-4935 in Proceedings of the 8th AIAA/USAF/NASA/ISSMO
Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, September 6-8, 2000.
[Haldar and Mahadevan, 2000] Haldar, A. and Mahadevan, S., Probability, Reliability, and Statistical Methods in Engineer-
ing Design, 2000 (Wiley: New York).
[Hohenbichler and Rackwitz, 1986] Hohenbichler, M. and Rackwitz, R., Sensitivity and importance measures in structural
reliability, Civil Eng. Syst., Vol. 3, 1986, pp. 203-209.
[Hohenbichler and Rackwitz, 1988] Hohenbichler, M. and Rackwitz, R., Improvement of second-order reliability estimates by
importance sampling, J. Eng. Mech., ASCE, Vol. 114, No. 12, 1988, pp. 2195-2199.
[Hong, 1999] Hong, H.P., Simple Approximations for Improving Second-Order Reliability Estimates, J. Eng. Mech., ASCE,
Vol. 125, No. 5, 1999, pp. 592-595.

[Karamchandani and Cornell, 1992] Karamchandani, A. and Cornell, C.A., Sensitivity estimation within first and second
order reliability methods, Struct. Saf., Vol. 11, 1992, pp. 95-107.

[Kuschel and Rackwitz, 1997] Kuschel, N. and Rackwitz, R., Two Basic Problems in Reliability-Based Structural Optimization,
Math. Method Oper. Res., Vol. 46, 1997, pp. 309-333.

[Meza, 1994] Meza, J.C., OPT++: An Object-Oriented Class Library for Nonlinear Optimization, Sandia Technical Report
SAND94-8225, Sandia National Laboratories, Livermore, CA, March 1994.

[Rosenblatt, 1952] Rosenblatt, M., Remarks on a Multivariate Transformation, Ann. Math. Stat., Vol. 23, No. 3, 1952, pp.
470-472.

[Sues et al., 2001] Sues, R., Aminpour, M. and Shin, Y., Reliability-Based Multidisciplinary Optimization for Aerospace Systems,
paper AIAA-2001-1521 in Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics,
and Materials Conference, Seattle, WA, April 16-19, 2001.

[Tu et al., 1999] Tu, J., Choi, K.K., and Park, Y.H., A New Study on Reliability-Based Design Optimization, J. Mech.
Design, Vol. 121, 1999, pp. 557-564.

[Wang and Grandhi, 1994] Wang, L. and Grandhi, R.V., Efficient Safety Index Calculation for Structural Reliability Analysis,
Comput. Struct., Vol. 52, No. 1, 1994, pp. 103-111.

[Wojtkiewicz et al., 2001] Wojtkiewicz, S.F., Jr., Eldred, M.S., Field, R.V., Jr., Urbina, A., and Red-Horse, J.R., A Toolkit
For Uncertainty Quantification In Large Computational Engineering Models, paper AIAA-2001-1455 in Proceedings of
the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, April
16-19, 2001.

[Wu et al., 1990] Wu, Y.-T., Millwater, H.R., and Cruse, T.A., Advanced Probabilistic Structural Analysis Method for
Implicit Performance Functions, AIAA J., Vol. 28, No. 9, 1990, pp. 1663-1669.
[Wu, 1994] Wu, Y.-T., Computational Methods for Efficient Structural Reliability and Reliability Sensitivity Analysis, AIAA
J., Vol. 32, No. 8, 1994, pp. 1717-1723.

[Wu et al., 2001] Wu, Y.-T., Shin, Y., Sues, R., and Cesare, M., Safety-Factor Based Approach for Probability-Based Design
Optimization, paper AIAA-2001-1522 in Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural
Dynamics, and Materials Conference, Seattle, WA, April 16-19, 2001.

[Xu and Grandhi, 1998] Xu, S., and Grandhi, R.V., Effective Two-Point Function Approximation for Design Optimization,
AIAA J., Vol. 36, No. 12, 1998, pp. 2269-2275.

[Zou et al., 2004] Zou, T., Mahadevan, S., and Rebba, R., Computational Efficiency in Reliability-Based Optimization,
Proceedings of the 9th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability, Albuquerque,
NM, July 26-28, 2004.
