Numerical Methods Exam


ME 2450 - Numerical Methods

Final Exam Review Notes

• You are allowed 2 sides of an 8 ½ x 11 sheet of paper for notes
• Exam: Friday, April 28, 2006, 1:00 – 3:00 pm
Systems of Linear Algebraic Equations
CH. 10 LU Decomposition
• Best when [A] is fixed but {b} changes

[A]{x} = {b}

1. LU Decomposition – factor [A] into [L] & [U]:

[L] = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix} \qquad
[U] = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}

[A] = [L][U]

2. Substitution:
a. [L]{d} = {b} ⇒ solve for {d} by forward substitution
b. [U]{x} = {d} ⇒ solve for {x} by backward substitution

Note: I have posted a fully worked-out example online.
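The two-step procedure above can be sketched in Python (not from the notes; a minimal Doolittle factorization without pivoting, so it assumes no zero pivots arise):

```python
def lu_decompose(A):
    """Doolittle factorization A = L U, with 1's on the diagonal of L."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U, then column i of L below the diagonal
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve L d = b by forward substitution, then U x = d by backward substitution."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][k] * d[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (d[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Once [A] is factored, each new {b} costs only the two substitution sweeps, which is why LU is best when [A] is fixed but {b} changes.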
CH. 11 Gauss-Seidel: Iterative Methods
Relaxation – acceleration of the solution assuming we know the direction of the solution.

x^{k+1} = (1 - \lambda)x^{k} + \lambda x^{*}

(a weighted average of the old value x^k and the new value x^*; λ is the relaxation coefficient)

Typical range of λ: 0 < λ < 2
λ = 1 → no relaxation
0 < λ < 1 → under-relaxation
1 < λ < 2 → over-relaxation

• The optimum value of λ is problem-specific and usually determined empirically
• For large numbers of equations that are diagonally dominant, GS has less round-off error and avoids unnecessary storage of 0's
CH. 11 Gauss-Seidel: Iterative Methods

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} =
\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix} \quad (1)\text{–}(3)

1. Solve Equation (1) for x_1, Eq. (2) for x_2, Eq. (3) for x_3
2. Start the iterative procedure by guessing x_1^0, x_2^0, x_3^0
3. Calculate x_1^1 from x_2^0, x_3^0
4. Calculate x_2^1 from x_1^1, x_3^0
5. Calculate x_3^1 from x_1^1, x_2^1
6. Repeat for new x's

Convergence check:

\varepsilon_{a,i} = \left| \frac{x_i^k - x_i^{k-1}}{x_i^k} \right| \cdot 100\% < \varepsilon_s

GS convergence criterion:

|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|

Sufficient but not necessary – diagonal dominance!
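The iteration, relaxation step, and convergence check above can be combined into one Python sketch (illustrative only; the function name and defaults are mine):

```python
def gauss_seidel(A, b, lam=1.0, eps_s=1e-8, max_iter=200):
    """Gauss-Seidel iteration with relaxation coefficient lam (0 < lam < 2)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            # New estimate x* uses the freshest available values of the other x's
            x_star = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            x[i] = (1 - lam) * old + lam * x_star  # weighted average (relaxation)
            if x[i] != 0.0:
                max_err = max(max_err, abs((x[i] - old) / x[i]))
        if max_err < eps_s:  # approximate relative error below the stopping criterion
            break
    return x
```

With lam=1.0 this is plain Gauss-Seidel; the diagonally dominant test system below is guaranteed to converge.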
CH. 17 Least Squares Regression
Derive expressions (approximating functions) that fit the shape of the data (or its general trend).

(A) Straight-Line Least-Squares Fit

y = a_0 + a_1 x + e   (e = error or residual)

• Find a fit that minimizes the error

Define the "sum of the squares" of the residuals:

S_r = \sum_{i=1}^{N} e_i^2 = \sum_{i=1}^{N} (y_i - a_0 - a_1 x_i)^2 = \sum_{i=1}^{N} (y_{i,meas} - y_{i,mod})^2

Minimize S_r and solve for the a's → How?

The "linear regression" method produces the best fit to the data with:

a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2} \qquad
a_0 = \bar{y} - a_1 \bar{x}
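The two formulas for a_1 and a_0 translate directly into code (a sketch; the function name is mine):

```python
def linreg(x, y):
    """Least-squares straight-line fit y = a0 + a1*x via the normal-equation formulas."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a0 = sy / n - a1 * sx / n                       # intercept: y-bar - a1*x-bar
    return a0, a1
```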
CH. 17 Least Squares Regression
Goodness-of-fit statistics for linear regression:

1. Standard deviation: What does it measure?

S_y = \sqrt{\frac{\sum (y_i - \bar{y})^2}{n - 1}}

2. Standard error of the estimate: What does it measure?

S_{y/x} = \sqrt{\frac{\sum (y_i - a_0 - a_1 x_i)^2}{n - 2}}

3. Coefficient of determination: represents the error reduction due to using straight-line regression rather than the average.

S_t = \sum (y_i - \bar{y})^2 \qquad S_r = \sum (y_i - a_0 - a_1 x_i)^2

r^2 = \frac{S_t - S_r}{S_t} \qquad r = \sqrt{\frac{S_t - S_r}{S_t}} \quad \text{(correlation coefficient)}
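Computing S_t, S_r, and r² for a fitted line is a few lines of Python (a sketch; the function name is mine):

```python
def r_squared(x, y, a0, a1):
    """Coefficient of determination for the fitted line y = a0 + a1*x."""
    ybar = sum(y) / len(y)
    st = sum((yi - ybar) ** 2 for yi in y)                       # spread about the mean
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))   # spread about the line
    return (st - sr) / st
```

r² = 1 means the line explains all of the variation; r² = 0 means it does no better than the average.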
CH. 17 Least Squares Regression
(B) Non-Linear Relationships: convert to linear

1. Exponential: y = a_1 e^{b_1 x}
2. Power model: y = a_2 x^{b_2}
3. Saturation growth rate: y = \dfrac{a_3 x}{b_3 + x}
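As an example of the conversion, the exponential model linearizes as ln y = ln a₁ + b₁x, so a straight-line fit on (x, ln y) recovers the parameters (a sketch; the function name is mine):

```python
import math

def fit_exponential(x, y):
    """Fit y = a1*exp(b1*x) by a straight-line least-squares fit on (x, ln y)."""
    z = [math.log(yi) for yi in y]  # linearize: z = ln(a1) + b1*x
    n = len(x)
    sx, sz = sum(x), sum(z)
    sxz = sum(xi * zi for xi, zi in zip(x, z))
    sxx = sum(xi * xi for xi in x)
    b1 = (n * sxz - sx * sz) / (n * sxx - sx * sx)  # slope of the linearized fit
    a1 = math.exp(sz / n - b1 * sx / n)             # intercept is ln(a1)
    return a1, b1
```

The power and saturation-growth models linearize the same way, using log-log and reciprocal transformations respectively.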
(C) Polynomial Regression

y = a_0 + a_1 x + a_2 x^2 + ... + a_m x^m + e

• Follow the minimization procedure from linear regression
• Solve a system of m+1 equations, with standard error:

S_{y/x} = \sqrt{\frac{S_r}{n - (m + 1)}}

(D) Multiple Linear Regression: y is a function of 2 or more independent variables

y = a_0 + a_1 x_1 + a_2 x_2 + ... + a_m x_m + e

• Follow the minimization procedure from linear regression
• Solve a system of m+1 equations, with standard error:

S_{y/x} = \sqrt{\frac{S_r}{n - (m + 1)}}
CH. 21 Numerical Integration
Integrate data & functions

(A) Newton-Cotes Integration Formulas:

I = \int_a^b f(x)\,dx \approx \int_a^b f_n(x)\,dx

f_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1} + a_n x^n

Open & closed form methods:

1. Trapezoidal rule & error → multiple application
2. Simpson's rules & error
   1. 1/3 rule for an even number of segments
   2. 3/8 rule for an odd number of segments
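Multiple-application versions of the trapezoidal and Simpson's 1/3 rules can be sketched as (function names are mine):

```python
def trapezoid(f, a, b, n):
    """Multiple-application trapezoidal rule with n equal segments."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return s * h / 2

def simpson13(f, a, b, n):
    """Multiple-application Simpson's 1/3 rule; requires an even number of segments."""
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of segments")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd interior points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior points
    return s * h / 3
```

Simpson's 1/3 rule is exact for cubics, while the trapezoidal rule's error shrinks as O(h²).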
CH. 22 Numerical Integration
(B) Gauss Quadrature → wise positioning of points for integration to reduce error

Two-point Gauss-Legendre formulation:

I = \int_{-1}^{1} f(x_d)\,dx_d \approx c_0 f(x_0) + c_1 f(x_1)

c_0, c_1 are constants (and unknown)
x_0, x_1 are unknown Gauss points

We need 4 equations for our 4 unknowns, so we require the formula to be exact for the polynomials y = 1, y = x, y = x², y = x³:

c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} 1\,dx_d = 2
c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x_d\,dx_d = 0
c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x_d^2\,dx_d = 2/3
c_0 f(x_0) + c_1 f(x_1) = \int_{-1}^{1} x_d^3\,dx_d = 0

Solving, we obtain:

c_0 = c_1 = 1, \qquad x_0 = \frac{-1}{\sqrt{3}}, \qquad x_1 = \frac{1}{\sqrt{3}}

I \approx f\!\left(\frac{-1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right) \quad \text{(2-pt Gauss-Legendre formula)}
CH. 22 Numerical Integration
(B) Gauss Quadrature → apply the 2-pt formula to an integral of the form:

I = \int_a^b f(x)\,dx \approx \frac{b-a}{2}\left[c_0 f(x_0^{trans}) + c_1 f(x_1^{trans})\right]

To transform the x-locations we use:

x = \frac{b+a}{2} + \frac{b-a}{2} x_d \qquad dx = \frac{b-a}{2}\,dx_d

Substituting the Gauss points into the 2-pt formula we obtain:

x_0^{trans} = \frac{b+a}{2} + \frac{b-a}{2}\left(\frac{-1}{\sqrt{3}}\right) \qquad
x_1^{trans} = \frac{b+a}{2} + \frac{b-a}{2}\left(\frac{1}{\sqrt{3}}\right)
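Putting the transformation and the 2-pt formula together (a sketch; the function name is mine):

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b]."""
    xd = 1.0 / math.sqrt(3.0)
    mid, half = (b + a) / 2.0, (b - a) / 2.0
    # c0 = c1 = 1; the factor (b-a)/2 comes from dx = (b-a)/2 dxd
    return half * (f(mid - half * xd) + f(mid + half * xd))
```

With only two function evaluations the result is exact for polynomials up to degree 3, which is the payoff of placing the points wisely.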
CH. 25 Runge-Kutta Methods
• Classification of differential equations
• Solution of ODEs
  • Initial value problems
  • Boundary value problems
Note: I have posted a handout online for classification.

Solve ODEs of the form:

\frac{dy}{dx} = f(x, y)

Numerical solution form:

y_{i+1} = y_i + \phi h

(new estimate = current value + slope × step size)

1. Euler's Method

y_{i+1} = y_i + f(x_i, y_i) h

• Local truncation error – O(h²)
• Global truncation error – O(h)
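Euler's method in Python (a sketch; the function name is mine):

```python
def euler(f, x0, y0, h, n):
    """n Euler steps of y_{i+1} = y_i + f(x_i, y_i)*h, returning y at x0 + n*h."""
    x, y = x0, y0
    for _ in range(n):
        y += f(x, y) * h  # slope at the start of the interval only
        x += h
    return y
```

For dy/dx = y with y(0) = 1, the result approaches e at x = 1 only slowly as h shrinks, reflecting the O(h) global error.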
CH. 25 Runge-Kutta Methods
2. Heun's Method – Predictor/Corrector
• Predictor equation:

y_{i+1}^0 = y_i + f(x_i, y_i) h

• Corrector equation (uses the average slope):

y_{i+1} = y_i + \frac{f(x_i, y_i) + f(x_{i+1}, y_{i+1}^0)}{2} h

• Can be solved iteratively
• Local truncation error – O(h³)
• Global truncation error – O(h²)
• 2nd-order accurate
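One predictor-corrector pass per step looks like this in Python (a sketch; the function name is mine):

```python
def heun(f, x0, y0, h, n):
    """Heun's method with a single corrector pass per step."""
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + f(x, y) * h                  # predictor: an Euler step
        slope = (f(x, y) + f(x + h, y_pred)) / 2  # average of start/end slopes
        y += slope * h                            # corrector
        x += h
    return y
```

Iterating the corrector would reuse the latest y_{i+1} in place of y_pred until it stops changing.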
CH. 25 Runge-Kutta Methods
3. 4th-Order Runge-Kutta
• Can achieve Taylor-series accuracy without evaluating higher-order derivatives.

y_{i+1} = y_i + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) h

Slope estimates:

k_1 = f(x_i, y_i)
k_2 = f(x_i + 0.5h,\; y_i + 0.5 k_1 h)
k_3 = f(x_i + 0.5h,\; y_i + 0.5 k_2 h)
k_4 = f(x_i + h,\; y_i + k_3 h)

• Note the recursive nature of the k's
• Recall where the coefficients come from
• Global truncation error – O(h⁴)
• 4th-order accurate
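The four recursive slope estimates and their weighted average, as a Python sketch (the function name is mine):

```python
def rk4(f, x0, y0, h, n):
    """Classical 4th-order Runge-Kutta: n steps from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)                               # slope at the start
        k2 = f(x + 0.5 * h, y + 0.5 * k1 * h)      # slope at midpoint, via k1
        k3 = f(x + 0.5 * h, y + 0.5 * k2 * h)      # slope at midpoint, via k2
        k4 = f(x + h, y + k3 * h)                  # slope at the end, via k3
        y += (k1 + 2 * k2 + 2 * k3 + k4) * h / 6   # weighted average slope
        x += h
    return y
```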
CH. 25 Runge-Kutta Methods
4. Systems of ODEs
• Higher-order ODEs can be broken down into a system of first-order ODEs that can be solved using Runge-Kutta methods
• Example:

y'' + a y' + c \sin y = 0, \qquad y(0) = 1, \qquad y'(0) = -1

becomes the first-order system:

\frac{dy}{dx} = z, \qquad y(0) = 1
\frac{dz}{dx} = -a z - c \sin y, \qquad z(0) = -1

which is stepped together:

y_{i+1} = y_i + \phi_1 h
z_{i+1} = z_i + \phi_2 h
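Stepping the example system with Euler's method looks like this (a sketch; a and c are left as parameters since the notes do not fix their values, and any RK method could replace the Euler step):

```python
import math

def euler_system(a, c, y0, z0, h, n):
    """Euler steps for dy/dx = z, dz/dx = -a*z - c*sin(y), advanced together."""
    y, z = y0, z0
    for _ in range(n):
        # Tuple assignment ensures both updates use the values from step i
        y, z = y + z * h, z + (-a * z - c * math.sin(y)) * h
    return y, z
```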
CH. 27 Boundary Value Problems
1. Shooting Method
• Convert a boundary value problem into an initial value problem.
• Solve the problem iteratively.

(Figure: T(x) profiles from trial initial slopes z1 and z2 bracketing the exact solution.)

• Linear ODE approach
• Non-linear ODE approach
CH. 27 Boundary Value Problems
2. Finite Difference Equations
• Alternative to the shooting method
• Substitute finite difference equations for the derivatives in the original ODE.
• This gives a set of simultaneous algebraic equations that are solved at the nodes using techniques like Gauss-Seidel, LU decomposition, etc.

• Advantage over the shooting method:
  • The shooting method can become difficult for higher-order equations, where we have to assume two or more conditions.
CH. 29 Partial Differential Equations:
Finite Difference: Elliptic Equations
Poisson's equation:

\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} = R(x, y)

Can be written using central differencing as:

\frac{T_{i+1,j} - 2T_{i,j} + T_{i-1,j}}{\Delta x^2} + \frac{T_{i,j+1} - 2T_{i,j} + T_{i,j-1}}{\Delta y^2} = R_{i,j}

The resulting system of algebraic equations can be solved using the standard techniques developed in Chapters 9-11.
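For example, the discretized equation can be swept with Gauss-Seidel by solving each node's equation for T_{i,j} (a sketch; function name and fixed-boundary assumption are mine):

```python
def solve_poisson(T, R, dx, dy, n_iter=500):
    """Gauss-Seidel sweeps of the central-difference Poisson equation.

    T is a 2-D nested list whose edge entries hold fixed boundary values;
    interior entries are updated in place. R holds the source term R(x, y).
    """
    ny, nx = len(T), len(T[0])
    for _ in range(n_iter):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                # Solve the stencil equation at (i, j) for T[i][j]
                T[i][j] = ((T[i + 1][j] + T[i - 1][j]) / dx ** 2 +
                           (T[i][j + 1] + T[i][j - 1]) / dy ** 2 -
                           R[i][j]) / (2.0 / dx ** 2 + 2.0 / dy ** 2)
    return T
```

With R = 0 this reduces to Laplace's equation, and the interior relaxes toward the boundary values.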
