Nonlinear Programming


Nonlinear Programming

Newton Method - Ronin Java
Interior Penalty - Jun Baculio
Exterior Penalty - Andre Guian
01 - Introduction to NLP
02 - Newton Method
03 - Interior Penalty
04 - Exterior Penalty
05 - Research
Nonlinear Programming
Introduction

Nonlinear programming (NLP) refers to the process of optimizing (maximizing or minimizing) an objective function, subject to a set of constraints, where the objective function and/or the constraints involve nonlinear relationships. In simpler terms, NLP deals with optimization problems that are not linear in nature.
Nonlinear Programming
Introduction

A nonlinear programming problem is an optimization problem of the form:

\begin{aligned}
\min_{x} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \dots, m, \\
& h_j(x) = 0, \quad j = 1, \dots, p
\end{aligned}

Where:
f(x) is the objective function to be optimized
g_i(x) are the inequality constraints
h_j(x) are the equality constraints
x is the vector of decision variables
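To make the general form concrete, here is a minimal sketch (not from the original slides) that solves a small problem of this type with SciPy's SLSQP solver; the objective, constraints, and starting point are illustrative choices.

```python
# Illustrative only: a small NLP in the standard form
#   minimize f(x)  subject to  g(x) <= 0,  h(x) = 0,
# solved with SciPy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Example nonlinear objective (an assumption for this sketch)
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

# SciPy expects inequality constraints as c(x) >= 0, so we pass -g(x)
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] ** 2 + x[1] - 4)},  # g(x) = x0^2 + x1 - 4 <= 0
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3},          # h(x) = x0 + x1 - 3 = 0
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=cons)
print(res.x, res.fun)   # approximately x = (0.75, 2.25)
```

SLSQP handles both inequality and equality constraints directly; the penalty-based alternatives for constraint handling are covered later in this presentation.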
History of NLP: Early Developments (17th Century - 1950s)
Calculus of Variations (Late 17th - 18th Century): The foundation for optimization theory, including the minimization of functionals, was laid by mathematicians like Johann Bernoulli and Leonhard Euler.

Fermat and Lagrange (17th - 18th Century): Pierre de Fermat and Joseph-Louis Lagrange made significant contributions to optimization theory and the development of necessary conditions for optimality.

Karush-Kuhn-Tucker (KKT) Conditions (1930s - 1950s): The Karush-Kuhn-Tucker conditions (Karush in 1939, Kuhn and Tucker in 1951) were developed as an extension of the Lagrange multiplier method to handle nonlinear constraints, laying the groundwork for modern NLP.
History of NLP: Emergence of Nonlinear Programming (1950s - 1960s)
George Dantzig (1950s): Known as the "father of linear programming," Dantzig also made contributions to the development of algorithms and methods for solving nonlinear optimization problems.

Fiacco and McCormick (1960s): Anthony Fiacco and Garth McCormick developed the Sequential Unconstrained Minimization Technique (SUMT), a penalty- and barrier-based framework that became an important tool in NLP and underlies the interior and exterior penalty methods presented later.

Introduction of Algorithms: The 1950s and 1960s saw the refinement of the simplex method for linear programming, which laid the groundwork for the development of algorithms for NLP.
History of NLP: Advancements in Algorithms and Methods (1970s - 1980s)
Sequential Quadratic Programming (SQP) (1970s): SQP
methods, which combine aspects of Newton's method
and the Lagrange multiplier method, became popular for
solving constrained NLP problems.

Interior Point Methods (1980s): Interior point methods were developed as an alternative to the simplex method for solving linear programming problems and were later adapted for nonlinear programming problems.

Advances in Computational Techniques: The development of powerful computers and computational techniques in the 1980s enabled more efficient and accurate solutions to complex NLP problems.
History of NLP: Modern Developments (1990s - Present)
Global Optimization Methods: Research in global
optimization methods aimed at finding the global optimum
of NLP problems, which can have multiple local optima.

Metaheuristic Algorithms: The development and application of metaheuristic algorithms like genetic algorithms, particle swarm optimization, and simulated annealing for solving complex NLP problems.

Applications in Engineering, Finance, and Data Science: With the increasing complexity of problems in various fields, NLP has found wide-ranging applications in engineering design, financial modeling, data analytics, machine learning, and more.
Applications of NLP: Engineering and Design

Mechanical Engineering: Optimization of mechanical systems, design of machine components, and control systems.
Civil Engineering: Structural optimization, design of infrastructure systems, and urban planning.
Electrical Engineering: Design of electrical circuits, power systems, and control systems.
Aerospace Engineering: Aerodynamic optimization, structural design, and propulsion system optimization.
Applications of NLP: Operations Research and Management Science

Production Planning and Scheduling: Optimization of production processes, inventory management, and supply chain logistics.
Transportation and Logistics: Route optimization, vehicle routing, and scheduling in transportation networks.
Resource Allocation: Optimization of resource utilization in manufacturing, healthcare, and service industries.
Applications of NLP: Data Science and Machine Learning

Parameter Estimation: Optimization of model parameters in statistical and machine learning models.
Feature Selection and Dimensionality Reduction: Optimization of feature subsets and transformation matrices in data analysis.
Model Calibration and Validation: Optimization of model performance metrics and validation criteria.
The Newton Method
Key ideas and principles
Newton Method
Introduction

The Newton method finds roots of the first derivative (gradient) of the objective function and uses the second derivative (Hessian matrix) to determine the direction and step size for the next iteration. In the context of NLP, the Newton method can be used to find a local minimum or maximum of a nonlinear objective function subject to constraints.
Newton Method
Update Rule

The Newton update at iteration k is:

x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k)

Where:
x_k is the current iterate
\nabla f(x_k) is the gradient of the objective function at x_k
\nabla^2 f(x_k) is the Hessian matrix of the objective function at x_k
Newton Method
Introduction

The Newton method iteratively updates the solution until a stopping criterion is met, such as:

The change in the objective function value is below a predefined threshold.
The norm of the gradient is below a predefined threshold.
The maximum number of iterations is reached.
Newton Method
Advantages:
Fast convergence rate, especially for well-conditioned problems.
Efficient utilization of second-order information (Hessian matrix).
Can handle nonlinear constraints using the KKT conditions or other methods.

Limitations:
Convergence is not guaranteed for all problems and initial guesses.
Computation of the Hessian matrix and its inverse can be computationally expensive.
May converge to a local minimum or saddle point instead of the global minimum.
Sample Problem: Newton Method

(Worked example: a goal is stated, an initial guess is chosen, and Newton iterations are carried out step by step; see the sketch below.)
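As an illustration of the iteration and stopping tests described above, the sketch below applies a pure Newton loop to the Rosenbrock function; the test function, tolerances, and starting point are assumptions made for this example, not the problem from the original slides.

```python
# Minimal sketch of Newton's method for unconstrained minimization of
# f(x) = (1 - x0)^2 + 100*(x1 - x0^2)^2  (Rosenbrock, an illustrative choice).
import numpy as np

def grad(x):
    # Gradient of the Rosenbrock function
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    # Hessian of the Rosenbrock function
    return np.array([
        [2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
        [-400 * x[0], 200.0],
    ])

def newton(x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:         # stopping test: gradient norm
            break
        step = np.linalg.solve(hess(x), g)  # solve H*step = g instead of inverting H
        x = x - step                        # Newton update: x_{k+1} = x_k - H^{-1} g
        if np.linalg.norm(step) < tol:      # stopping test: change in the iterate
            break
    return x, k

x_star, iters = newton([-1.2, 1.0])  # classic initial guess for Rosenbrock
print(x_star, iters)                 # converges to (1, 1)
```

A production implementation would add a line search or trust region to safeguard steps when the Hessian is indefinite, which relates directly to the limitations listed above.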
Interior Penalty
Key ideas and principles
Interior Penalty
Introduction

The interior penalty method is a popular approach used in nonlinear programming (NLP) and optimization to handle constraints. It is particularly effective for problems with inequality constraints: it transforms the constrained optimization problem into an unconstrained one by adding a penalty term that grows as iterates approach the boundary of the feasible region from inside.
Interior Penalty
Introduction

The interior penalty method introduces a penalty (barrier) term into the objective function that discourages iterates from leaving the feasible region. By keeping solutions strictly feasible, the method transforms the constrained optimization problem into an unconstrained problem that can be solved using standard optimization algorithms like Newton's method or gradient descent.
Interior Penalty
Penalized Objective

The penalized (barrier) objective can be written as:

\phi(x, \mu) = f(x) + \mu B(x)

Where:
f(x) is the original objective function
B(x) is a barrier term that grows without bound as x approaches the boundary of the feasible region, e.g. B(x) = -\sum_i \ln(-g_i(x))
\mu > 0 is the penalty (barrier) parameter
Interior Penalty
Penalty Parameter μ

The choice of the penalty parameter μ plays a crucial role in the interior penalty method:
Small μ: A smaller penalty parameter gives a softer penalty, letting iterates approach the constraint boundary more closely, but may require more iterations to converge.
Large μ: A larger penalty parameter gives a stronger penalty, keeping iterates well inside the feasible region, but may lead to numerical instability and convergence issues.
Interior Penalty
Algorithms and Solvers

The interior penalty method can be combined with various optimization algorithms and solvers to solve the penalized unconstrained problem, such as:
Gradient-based Methods: Newton's method, gradient descent, and quasi-Newton methods.
Interior Point Methods: Combining the interior penalty method with interior point methods for the efficient solution of large-scale NLP problems.
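A minimal sketch of the logarithmic-barrier variant on a one-dimensional example (the problem, the bisection interval, and the μ schedule are illustrative assumptions); in this barrier form μ is driven toward zero so the penalized minimizers approach the constrained optimum from inside the feasible region.

```python
# Sketch of the interior (logarithmic barrier) penalty method in 1D.
# Example problem (an illustrative choice):
#   minimize f(x) = (x - 3)^2   subject to   g(x) = x - 1 <= 0
# The constrained optimum is x* = 1. The barrier subproblem is
#   phi(x, mu) = f(x) - mu * ln(-g(x)) = (x - 3)^2 - mu * ln(1 - x).
import numpy as np

def phi_prime(x, mu):
    # d/dx [ (x - 3)^2 - mu * ln(1 - x) ] = 2(x - 3) + mu / (1 - x)
    return 2 * (x - 3) + mu / (1 - x)

def minimize_barrier(mu, lo=-10.0, hi=1.0 - 1e-12):
    # phi is strictly convex on (-inf, 1); bisect on its derivative.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_prime(mid, mu) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu, x = 1.0, None
for _ in range(12):              # drive mu -> 0, re-solving the subproblem
    x = minimize_barrier(mu)
    mu *= 0.1
print(x)                         # approaches the constrained optimum x* = 1
```

In practice the previous solution warm-starts the next subproblem and Newton's method replaces bisection; bisection is used here only because the example is one-dimensional and convex.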
Types of Interior Penalty Methods

There are different types of interior penalty methods, each with its own approach to penalizing constraint violations and transforming the constrained optimization problem into an unconstrained one. The three types listed next are followed by a symbolic sketch.
Types of Interior Penalty Methods: Logarithmic Barrier Method
The logarithmic barrier method introduces a logarithmic barrier function into the objective function to penalize violations of the inequality constraints.
Types of Interior Penalty Methods: Quadratic Penalty Method
The quadratic penalty method introduces a quadratic penalty term into the objective function to penalize violations of the inequality constraints.
Types of Interior Penalty Methods: Exponential Penalty Method
The exponential penalty method introduces an exponential penalty term into the objective function to penalize violations of the inequality constraints.
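For reference, the two most standard barrier terms can be written as follows; this is a sketch for constraints g_i(x) <= 0, and exact forms and sign conventions (including the quadratic and exponential variants named above) vary by author.

```latex
% Standard barrier terms for constraints g_i(x) <= 0 (conventions vary):
\begin{align*}
\text{Logarithmic barrier:} \quad
  \phi(x,\mu) &= f(x) - \mu \sum_{i} \ln\!\bigl(-g_i(x)\bigr) \\
\text{Inverse barrier:} \quad
  \phi(x,\mu) &= f(x) + \mu \sum_{i} \frac{1}{-g_i(x)}
\end{align*}
```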
Interior Penalty Method
Advantages:
Can handle a wide range of non-linear inequality constraints and
complex optimization problems.
Transforms constrained optimization problems into unconstrained
problems, allowing the use of standard optimization algorithms and
solvers.
Well-defined convergence properties under certain conditions,
ensuring the convergence to a feasible and optimal solution with
proper tuning of the penalty parameter.
Generally more numerically stable compared to other constraint-handling techniques like the exterior penalty method.
Interior Penalty Method
Limitations:
The choice of the penalty parameter μ is crucial and may require careful tuning
to balance the trade-off between feasibility and optimality, which can be
challenging for large-scale problems.
The introduction of penalty terms may increase the computational complexity of
the optimization problem, requiring more computational resources and time for
solution.
May converge to a local optimum instead of the global optimum, especially for
non-convex and poorly conditioned problems, requiring multiple initial guesses
and solutions to ensure robustness.
May not handle equality constraints and complex constraints effectively,
requiring additional methods and techniques for constraint handling and
enforcement.
Exterior Penalty
Key ideas and principles
Exterior Penalty
Introduction

The exterior penalty method is another approach used in nonlinear programming (NLP) to handle inequality constraints. Unlike the interior penalty method, which penalizes iterates from within the feasible region, the exterior penalty method penalizes violations of the constraints outside the feasible region.
Exterior Penalty
Introduction

The exterior penalty method is a constraint-handling technique used to transform constrained optimization problems into unconstrained problems by penalizing violations of the inequality constraints outside the feasible region. By adding penalty terms to the objective function for violations of the constraints, the method guides the optimization algorithm towards feasible solutions and encourages the exploration of the feasible region.
Exterior Penalty
Penalized Objective

The penalized objective can be written as:

F(x, \mu) = f(x) + \mu \sum_i \bigl[\max(0, g_i(x))\bigr]^{p}

Where:
f(x) is the original objective function
\max(0, g_i(x)) is the violation of the inequality constraint g_i(x) \le 0
\mu > 0 is the penalty parameter
p is the penalty exponent

The choice of the penalty exponent p plays a crucial role in the exterior penalty method: p = 1 gives an exact but non-smooth penalty, while p = 2 gives a smooth penalty that is easier to minimize with gradient-based methods.
Types of Exterior Penalty Methods: Linear Exterior Penalty Method
Introduces a linear penalty term to penalize violations of the inequality constraints.
Types of Exterior Penalty Methods: Quadratic Exterior Penalty Method
Introduces a quadratic penalty term to penalize violations of the inequality constraints.
Types of Exterior Penalty Methods: Exponential Exterior Penalty Method
Introduces an exponential penalty term to penalize violations of the inequality constraints.
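In symbols, the three exterior penalty terms above can be sketched as follows, writing g_i^+(x) = max(0, g_i(x)) for the amount by which constraint i is violated; the exponential form in particular varies across references.

```latex
% Exterior penalty objectives, with g_i^+(x) = max(0, g_i(x)):
\begin{align*}
\text{Linear:}      \quad F(x,\mu) &= f(x) + \mu \sum_{i} g_i^{+}(x) \\
\text{Quadratic:}   \quad F(x,\mu) &= f(x) + \mu \sum_{i} \bigl(g_i^{+}(x)\bigr)^{2} \\
\text{Exponential:} \quad F(x,\mu) &= f(x) + \mu \sum_{i} \Bigl(e^{g_i^{+}(x)} - 1\Bigr)
\end{align*}
```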
Example Problem: Exterior Penalty

Minimize the function f(x) = x^2 - 4x + 4 subject to the constraint g(x) = x - 2 \le 0. We'll use the exterior penalty method to handle the constraint, minimizing the penalized objective

F(x, \mu) = f(x) + \mu \bigl[\max(0, g(x))\bigr]^2

Where:
\mu = penalty parameter
g(x) = constraint function whose violation is penalized
f(x) = original objective function
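A minimal sketch of this example in Python, assuming the quadratic exterior penalty form and SciPy's scalar minimizer. Note that for this particular f the unconstrained minimizer x = 2 already lies on the constraint boundary, so each penalized solve returns x ≈ 2 with f(x) ≈ 0.

```python
# Sketch of the exterior (quadratic) penalty method applied to the
# slide's example: minimize f(x) = x^2 - 4x + 4 s.t. g(x) = x - 2 <= 0.
# Penalized objective: F(x, mu) = f(x) + mu * max(0, g(x))^2.
from scipy.optimize import minimize_scalar

def f(x):
    return x ** 2 - 4 * x + 4        # = (x - 2)^2

def g(x):
    return x - 2                     # constraint: g(x) <= 0

mu = 1.0
for _ in range(6):                   # increase mu between unconstrained solves
    F = lambda x, mu=mu: f(x) + mu * max(0.0, g(x)) ** 2
    res = minimize_scalar(F)
    print(f"mu = {mu:8.1f}  x = {res.x:.6f}  g(x) = {g(res.x):+.2e}")
    mu *= 10.0
```

Increasing μ tightens feasibility for problems whose unconstrained minimizer is infeasible; here it simply confirms the same solution x = 2 at every penalty level.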
Exterior Penalty Method
Advantages:
Can handle a wide range of non-linear inequality constraints and
complex optimization problems.
Transforms constrained optimization problems into unconstrained
problems, allowing the use of standard optimization algorithms and
solvers.
Encourages the convergence to feasible solutions by penalizing
infeasible solutions, ensuring robust and reliable solutions.
Exterior Penalty Method
Limitations:
The choice of the penalty parameter μ is crucial and may require careful tuning
to balance the trade-off between feasibility and optimality, which can be
challenging for large-scale problems.
The introduction of penalty terms may increase the computational complexity
of the optimization problem, requiring more computational resources and time
for solution.
May converge to a local optimum instead of the global optimum, especially for
non-convex and poorly conditioned problems, requiring multiple initial guesses
and solutions to ensure robustness.
May not handle equality constraints and complex constraints effectively,
requiring additional methods and techniques for constraint handling and
enforcement.
Thank you!!
