Submitted by:
Bonjoe M. Amano
BSCE III
Submitted to:
Engr. Irismay Jumawan, Ph., D
Instructor
January 2023
TABLE OF CONTENTS
I. Solving Systems of Linear Algebraic Equations
Matrix Form
Small Scale
a. Graphical Method
b. Cramer’s Rule
c. Elimination of Unknowns
Large Scale
d. Gaussian Elimination with Row Pivoting
e. LU Decomposition
f. Iteration Method
i. Jacobi Method
ii. Gauss-Seidel Method
Case Study
II. Solving Systems of Non-Linear Algebraic Equations
Open Method
a. Fixed-Point Iteration Method
b. Newton- Raphson Method
c. Secant Method
Bracketing Method
a. Bisection Method
b. False-Position Method or Regula-Falsi Method
Case Study
III. Curve Fitting and Interpolation
a. Least Squares Regression
b. Polynomial Regression
Case Study
IV. Numerical Integration
a. Euler’s Method
b. Trapezoidal Method
c. Simpson’s Rule
V. Solution to Ordinary Differential Equations
a. Runge-Kutta Methods
MATRIX FORM
a. GRAPHICAL METHOD
The graphical method is a simple method for finding the root of an equation f(x) = 0 by making a plot of the function. For a system of two linear equations, each equation is plotted and the solution is read off at the intersection of the two lines.
Example:
7x + 2y = 20
3x - y = 3

Tabulating y for each equation:

x        y = (20 - 7x)/2    y = 3x - 3
-3.000   20.500             -12.000
-2.000   17.000             -9.000
-1.000   13.500             -6.000
0.000    10.000             -3.000
1.000    6.500              0.000
2.000    3.000              3.000
3.000    -0.500             6.000
4.000    -4.000             9.000
5.000    -7.500             12.000
6.000    -11.000            15.000

The two y-columns agree at x = 2.000, so the solution is x = 2, y = 3.
[Chart: graphs of 7x+2y=20 and 3x-y=3; the two lines intersect at (2, 3).]
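The Excel tabulation above can also be sketched in code. Below is a small illustrative Python snippet (not part of the original worksheet) that tabulates both lines and picks out the x where the two y-columns coincide:

```python
def tabulate(f1, f2, xs):
    """Tabulate two lines; graphically, the solution is where their y-values agree."""
    return [(x, f1(x), f2(x)) for x in xs]

# y from 7x + 2y = 20  ->  y = (20 - 7x)/2 ;  y from 3x - y = 3  ->  y = 3x - 3
rows = tabulate(lambda x: (20 - 7 * x) / 2, lambda x: 3 * x - 3, range(-3, 7))
crossing = [r for r in rows if r[1] == r[2]]
print(crossing)  # [(2, 3.0, 3)]
```

For non-integer intersections one would instead look for the sign change of the difference of the two columns.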
b. CRAMER’S RULE
Cramer's rule, also known as the determinant method, is a formula for solving a system of linear equations that has as many equations as unknowns; it applies whenever the system has a unique solution.
The solution obtained using Cramer’s rule will be in terms of the determinants of the coefficient
matrix and matrices obtained from it by replacing one column with the column vector of the
right-hand sides of the equations.
To derive the Cramer's rule formula, consider a system of three equations in three variables. Let D be the determinant of the coefficient matrix, and let Dx, Dy, Dz be the determinants obtained by replacing the x, y, or z column of D with the column of right-hand sides. Provided
D ≠ 0,
x = Dx/D
y = Dy/D
z = Dz/D
Example:
Given,
x + y + z = 6
y + 3z = 11
x + z = 2y, or x - 2y + z = 0
Applying the formulas above gives x = 1, y = 2, z = 3.
EXAMPLE 2
x + 3y = 5
2x + 2y = 6

D = (1)(2) - (3)(2) = 2 - 6 = -4
Dx = (5)(2) - (3)(6) = 10 - 18 = -8
Dy = (1)(6) - (5)(2) = 6 - 10 = -4

x = Dx/D = -8/-4 = 2
y = Dy/D = -4/-4 = 1
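The 2x2 case can be sketched in Python (an illustrative snippet, with the determinants expanded by hand exactly as in Example 2):

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    d = a1 * b2 - b1 * a2    # coefficient determinant D
    dx = c1 * b2 - b1 * c2   # replace the x-column with the constants
    dy = a1 * c2 - c1 * a2   # replace the y-column with the constants
    if d == 0:
        raise ValueError("No unique solution: D = 0")
    return dx / d, dy / d

x, y = cramer_2x2(1, 3, 5, 2, 2, 6)
print(x, y)  # 2.0 1.0
```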
c. ELIMINATION OF UNKNOWNS
Another way of solving a linear system is the elimination method: add or subtract the equations to obtain an equation in one variable.
When the coefficients of one variable are opposites, add the equations to eliminate that variable; when the coefficients of one variable are equal, subtract the equations instead.
Example:
3y+2x=6
5y−2x=10
3y + 2x = 6
+ (5y - 2x = 10)
8y = 16
y=2
The value of y can now be substituted into either of the original equations to find the value of x.
3y+2x=6
3(2)+2x=6
6+2x=6
2x=0
x=0
To avoid errors make sure that all like terms and equal signs are in the same columns before
beginning the elimination.
Using Excel:
EXAMPLE 1
3y+2x=6 (eq. 1)
5y−2x=10 (eq. 2)
Solution: x = 0, y = 2
Substituting back: eq. 1 gives 6, eq. 2 gives 10.

EXAMPLE 2
3x+2y=9 (eq. 1)
2x+6y=6 (eq. 2)
Solution: x = 3, y = 0
Substituting back: eq. 1 gives 9, eq. 2 gives 6.
LARGE SCALE
d. GAUSSIAN ELIMINATION WITH ROW PIVOTING
Gaussian elimination, also known as the row reduction algorithm, solves systems of linear equations through a sequence of operations performed on the corresponding matrix of coefficients.
Let us work through a first Gaussian elimination example to build intuition for the whole process:
Example:
x + y + z = 3
2x + 3y + 7z = 0
x + 3y - 2z = 17

Augmented matrix:
[ 1  1  1 |  3 ]
[ 2  3  7 |  0 ]
[ 1  3 -2 | 17 ]

R2 = R2 - 2R1, R3 = R3 - R1:
[ 1  1  1 |  3 ]
[ 0  1  5 | -6 ]
[ 0  2 -3 | 14 ]

R1 = R1 - R2, R3 = R3 - 2R2:
[ 1  0  -4 |  9 ]
[ 0  1   5 | -6 ]
[ 0  0 -13 | 26 ]

R3 = R3/(-13):
[ 1  0 -4 |  9 ]
[ 0  1  5 | -6 ]
[ 0  0  1 | -2 ]

R1 = R1 + 4R3, R2 = R2 - 5R3:
[ 1  0  0 |  1 ]
[ 0  1  0 |  4 ]
[ 0  0  1 | -2 ]

Therefore x = 1, y = 4, z = -2.
Example:
2X1 + X2 - X3 + 2X4 = 5
4X1 + 5X2 - 3X3 + 6X4 = 9
-2X1 + 5X2 - 2X3 + 6X4 = 4
4X1 + 11X2 - 4X3 + 2X4 = 2

Pivot on row 1 (pivot = 2); multipliers m21 = 4/2 = 2, m31 = -2/2 = -1, m41 = 4/2 = 2:

X1  X2  X3  X4   B
 2   1  -1   2    5
 0   3  -1   2   -1
 0   6  -3   8    9
 0   9  -2  -2   -8

Pivot on row 2 (pivot = 3); multipliers m32 = 6/3 = 2, m42 = 9/3 = 3:

 2   1  -1   2    5
 0   3  -1   2   -1
 0   0  -1   4   11
 0   0   1  -8   -5

Pivot on row 3 (pivot = -1); multiplier m43 = 1/(-1) = -1:

 2   1  -1   2    5
 0   3  -1   2   -1
 0   0  -1   4   11
 0   0   0  -4    6

BACKWARD SUBSTITUTION:
X4 = 6/(-4) = -1.5
X3 = (11 - 4X4)/(-1) = -17
X2 = (-1 + X3 - 2X4)/3 = -5
X1 = (5 - X2 + X3 - 2X4)/2 = -2
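The elimination and back-substitution steps can be sketched as a short Python routine (illustrative, not from the original worksheet); it adds partial row pivoting, swapping in the row with the largest pivot at each stage:

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial (row) pivoting, then back substitution."""
    n = len(A)
    # Build the augmented matrix [A | b] as floats.
    M = [[float(v) for v in row] + [float(bv)] for row, bv in zip(A, b)]
    for k in range(n - 1):
        # Row pivoting: bring the row with the largest |pivot| into position k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier m_ik
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[2, 1, -1, 2], [4, 5, -3, 6], [-2, 5, -2, 6], [4, 11, -4, 2]]
b = [5, 9, 4, 2]
x = gauss_solve(A, b)
print(x)  # ≈ [-2.0, -5.0, -17.0, -1.5]
```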
e. LU DECOMPOSITION
LU decomposition of a matrix is the factorization of a given square matrix into two triangular
matrices, one upper triangular matrix and one lower triangular matrix, such that the product of
these two matrices gives the original matrix.
A square matrix A can be decomposed into two square matrices L and U such that A = L U
where U is an upper triangular matrix formed as a result of applying the Gauss Elimination
Method on A, and L is a lower triangular matrix with diagonal elements being equal to 1.
We have,
AX=B
with

A = [ 1  1  1 ]     X = [ x1 ]     B = [ 1 ]
    [ 4  3 -1 ]         [ x2 ]         [ 6 ]
    [ 3  5  3 ]         [ x3 ]         [ 4 ]

So, by doing
R2 = R2 - 4R1
R3 = R3 - 3R1
we get,
[ 1  1  1 | 1 ]
[ 0 -1 -5 | 2 ]
[ 0  2  0 | 1 ]

Now, by doing
R3 = R3 - (-2)R2
we have,
[ 1  1  1 | 1 ]
[ 0 -1 -5 | 2 ]
[ 0  0 -10| 5 ]

Now, we have;

L = [ 1  0  0 ]     U = [ 1  1  1 ]
    [ 4  1  0 ]         [ 0 -1 -5 ]
    [ 3 -2  1 ]         [ 0  0 -10 ]

Therefore, we get
x1 + x2 + x3 = 1
-x2 - 5x3 = 2
-10x3 = 5
and back substitution gives x3 = -0.5, x2 = 0.5, x1 = 1.
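The same factorization can be sketched in Python (a minimal Doolittle-style illustration, not the original solution): L stores the multipliers, U is the result of the elimination, and the system is solved by forward then backward substitution:

```python
def lu_decompose(A):
    """Doolittle LU factorization: L has unit diagonal, U comes from Gauss elimination."""
    n = len(A)
    U = [[float(v) for v in row] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # stored multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution L*y = b, then back substitution U*x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[1, 1, 1], [4, 3, -1], [3, 5, 3]])
x = lu_solve(L, U, [1, 6, 4])
print(x)  # [1.0, 0.5, -0.5]
```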
f. ITERATION METHOD
i. Jacobi Method
The Jacobi method is an iterative approach for finding the numerical solution of a diagonally dominant system of linear equations. With the Jacobi method, the values obtained in the nth iteration remain unchanged until the entire (n+1)th iteration has been calculated.
Example:
x+y+z=7
x+2y+2z=13
x+3y+z=13
First, rearrange the equations so that the largest coefficients lie on the diagonal:
x+y+z=7
x+3y+z=13
x+2y+2z=13
The iteration formulas are:
x = (1/1)[7 - y - z]
y = (1/3)[13 - x - z]
z = (1/2)[13 - x - 2y]
Initial guess (x,y,z)=(0,0,0)
1st Approximation
x1 = (1/1)[7-(0)-(0)] = 7
y1 = (1/3)[13-(0)-(0)] = 4.3333
z1 = (1/2)[13-(0)-2(0)] = 6.5
2nd Approximation
x2 = (1/1)[7-(4.3333)-(6.5)] = -3.8333
y2 = (1/3)[13-(7)-(6.5)] = -0.1667
z2 = (1/2)[13-(7)-2(4.3333)] = -1.3333
3rd Approximation
x3 = (1/1)[7-(-0.1667)-(-1.3333)] = 8.5
y3 = (1/3)[13-(-3.8333)-(-1.3333)] = 6.0556
z3 = (1/2)[13-(-3.8333)-2(-0.1667)] = 8.5833
4th Approximation
x4 = (1/1)[7-(6.0556)-(8.5833)] = -7.6389
y4 = (1/3)[13-(8.5)-(8.5833)] = -1.3611
z4 = (1/2)[13-(8.5)-2(6.0556)] = -3.8056
5th Approximation
x5 = (1/1)[7-(-1.3611)-(-3.8056)] = 12.1667
y5 = (1/3)[13-(-7.6389)-(-3.8056)] = 8.1481
z5 = (1/2)[13-(-7.6389)-2(-1.3611)] = 11.6806
6th Approximation
x6 = (1/1)[7-(8.1481)-(11.6806)] = -12.8287
y6 = (1/3)[13-(12.1667)-(11.6806)] = -3.6157
z6 = (1/2)[13-(12.1667)-2(8.1481)] = -7.7315
K   X1          X2          X3
0   0.00000     0.00000     0.00000
1   7.00000     4.33333     6.50000
2   -3.83333    -0.16667    -1.33333
3   8.50000     6.05556     8.58333
4   -7.63889    -1.36111    -3.80556
5   12.16667    8.14815     11.68056
6   -12.82870   -3.61574    -7.73148

Note that the iterates oscillate with growing magnitude: the coefficient matrix is not strictly diagonally dominant, so the Jacobi iteration does not converge for this system.
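The iteration can be sketched in Python (an illustrative snippet, not the original Excel work); note how every component of the new iterate is computed from the old iterate only:

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: the whole new iterate is built from the old one."""
    n = len(A)
    x = list(x0)
    for _ in range(iters):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# The rearranged system from the example: x+y+z=7, x+3y+z=13, x+2y+2z=13.
A = [[1, 1, 1], [1, 3, 1], [1, 2, 2]]
b = [7, 13, 13]
r = jacobi(A, b, [0, 0, 0], 2)
print(r)  # ≈ [-3.83333, -0.16667, -1.33333], matching the 2nd approximation
```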
ii. Gauss-Seidel Method
The Gauss-Seidel method gives the user control of the round-off error. Also, if the physics of the problem are well known, the initial guesses needed in iterative methods can be made more judiciously, leading to faster convergence.
With the Gauss-Seidel method, we use the new values as soon as they are known. For example, once we have computed x from the first equation, its value is used immediately in the second equation to obtain the new y, and so on.
Example:
x+y+z=7
x+2y+2z=13
x+3y+z=13
We re-arrange the equations as follows, such that the elements in the coefficient matrix are
diagonally dominant.
x+y+z=7
x+3y+z=13
x+2y+2z=13
X1 = 7 - X2 - X3
X2 = 13/3 - X1/3 - X3/3
X3 = 13/2 - X1/2 - X2
Initial guess (x,y,z)=(0,0,0)
3rd Approximation
x3 = (1/1)[7-(2.6667)-(1.8333)] = 2.5
y3 = (1/3)[13-(2.5)-(1.8333)] = 2.8889
z3 = (1/2)[13-(2.5)-2(2.8889)] = 2.3611
K X1 X2 X3
0 0.000 0.000 0.000
1 7.000 2.000 1.000
2 4.000 2.667 1.833
3 2.500 2.889 2.361
4 1.750 2.963 2.662
5 1.375 2.988 2.825
6 1.188 2.996 2.910
7 1.094 2.999 2.954
8 1.047 3.000 2.977
9 1.023 3.000 2.988
10 1.012 3.000 2.994
11 1.006 3.000 2.997
12 1.003 3.000 2.999
13 1.001 3.000 2.999
14 1.001 3.000 3.000
Converges at 14 iterations.
Therefore;
x ≅ 1
y ≅ 3
z ≅ 3
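A minimal Python sketch of the method (illustrative, not from the original worksheet) shows the one change from Jacobi: each new component is used immediately within the same sweep:

```python
def gauss_seidel(A, b, x0, tol=1e-4, max_iter=100):
    """Gauss-Seidel: update components in place so new values are used at once."""
    n = len(A)
    x = list(x0)
    for k in range(1, max_iter + 1):
        old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - old[i]) for i in range(n)) < tol:
            return x, k
    return x, max_iter

A = [[1, 1, 1], [1, 3, 1], [1, 2, 2]]
b = [7, 13, 13]
x, k = gauss_seidel(A, b, [0, 0, 0])
print(x)  # ≈ [1, 3, 3]
```

Unlike the Jacobi run above, this converges for the same system, as the table shows.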
Case Study
[Figure: a truss carrying a 20 kN load at joint B, with supports at A and C; member AB makes 60° and member BC makes 30° with the horizontal, over spans of 5 m and 10 m.]
ƩFv=0
Ay+Cy=-20 eq. 1
@joint a:
ƩFv=0
Ay-AB(sin60)=0 eq. 2
ƩFh=0
AC=0
@joint c:
ƩFv=0
Cy-AC(sin30)=0 eq. 3
@joint b:
ƩFv=0
AB(sin60)+BC(sin30)=-20 eq. 4
Equations (letting X1 = Ay, X2 = Cy, X3 = AB, X4 = BC):
Ay + Cy = 20
Ay - AB(sin60) = 0
Cy - BC(sin30) = 0
AB(sin60) + BC(sin30) = 20
Final Equations:
X1+X2=-20
X2-X4(sin30)=0
X1-X3(sin60)=0
X3(sin60)+X4(sin30)=-20
K X1 X2 X3 X4
0 0 0 0 0
1 -20.000 0.000 20.242 -20
2 -20.000 19.761 20.242 -20
3 -39.761 19.761 40.242 -20
4 -39.761 19.761 40.242 -20
Therefore:
X1= -40
X2= 20
X3= 40
X4= -20
II. Solving Systems of Non-Linear Algebraic Equations
Open Methods
a. Fixed-Point Iteration Method
The equation f(x) = 0 is rewritten in the form x = g(x), and the iteration xn+1 = g(xn) is repeated from an initial guess until successive values agree.
Example:
Find the root of;
x2-x-1 = 0
One convergent rearrangement is x = √(x+1). Let x1=2 and iterate.
Therefore, x=1.618
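The iteration can be sketched in Python (an illustrative snippet; the rearrangement g(x) = √(x+1) is one convergent choice, not the only one):

```python
import math

def fixed_point(g, x, tol=1e-6, max_iter=100):
    """Iterate x = g(x) until successive values agree within tol."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# One convergent rearrangement of x^2 - x - 1 = 0 is x = sqrt(x + 1).
root = fixed_point(lambda x: math.sqrt(x + 1), 2.0)
print(round(root, 3))  # 1.618
```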
b. Newton-Raphson Method
Also known as Newton's method, it is a way to quickly find a good approximation for the root of a real-valued function f(x) = 0. It uses the idea that a continuous and differentiable function can be approximated by a straight line tangent to it; each iterate is
xn+1 = xn - f(xn)/f′(xn)
Example:
f(x) = x^2 - 4x - 7
then f′(x) = 2x - 4, starting from x = 5:
i x f(x) f'(x) e
1 5.000 -2.000 6.000
2 5.333 0.111 6.667 0.062
3 5.317 0.000 6.633 -0.003
4 5.317 0.000 6.633 0.000
5 5.317 0.000 6.633 -5E-11
6 5.317 0.000 6.633 0E+00
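The table above can be reproduced with a short Python sketch (illustrative, not the original Excel sheet):

```python
def newton(f, df, x, tol=1e-10, max_iter=50):
    """Newton-Raphson: follow the tangent line to its x-intercept each step."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**2 - 4*x - 7, lambda x: 2*x - 4, 5.0)
print(round(root, 3))  # 5.317
```

The exact root is 2 + √11 ≈ 5.3166, in agreement with the tabulated iterates.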
c. Secant Method
The secant method replaces the derivative in Newton's method with a finite-difference approximation through the two most recent points:
xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))   (Eq. 1)
Step 1: Initialization
Two initial guesses x0 and x1 for the root α are taken.
Step 2: Iteration
For n = 1, 2, 3, …, use equation 1 until a specific criterion for termination has been met (i.e., the desired accuracy of the answer or the maximum number of iterations has been attained).
Example:
Iterate on the function f(x) = x3 – 5x + 1 = 0, in which the real root of the equation f(x) lies in the interval (0, 1).
Therefore, x ≈ 0.202
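A minimal Python sketch of the iteration (illustrative; it carries the iteration to full convergence rather than stopping after two steps):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: Newton's method with the derivative replaced by a secant slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**3 - 5*x + 1, 0.0, 1.0)
print(round(root, 3))  # 0.202
```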
Bracketing Method
a) Bisection Method
The bisection method finds a root of a given equation by repeatedly halving an interval known to contain the root. Starting from an interval whose endpoints give function values of opposite sign, it takes the midpoint, keeps the half-interval in which the sign change occurs, and repeats until the remaining interval is extremely small. By taking the average of the endpoints it steadily narrows the gap between the positive and negative sides until it closes in on the answer. It is a simple and robust method, but it is relatively slow.
Example:
Determine the root of the given equation x2-3 = 0 for x ∈ [1, 2]
c=1/2(a+b)
[Chart: f(x) = x^2 - 3 plotted for several values of x; the root lies between x = 1 and x = 2.]
k   a        b        f(a)      f(b)     c        f(c)      f(a)*f(c)    f(b)*f(c)    e
0   1.00000  2.00000  -2.00000  1.00000  1.50000  -0.75000  1.50000      -0.75000     1.00000
1   1.50000  2.00000  -0.75000  1.00000  1.75000   0.06250  -0.04688      0.06250     0.50000
2   1.50000  1.75000  -0.75000  0.06250  1.62500  -0.35938   0.26953     -0.02246     0.25000
3   1.62500  1.75000  -0.35938  0.06250  1.68750  -0.15234   0.05475     -0.00952     0.12500
4   1.68750  1.75000  -0.15234  0.06250  1.71875  -0.04590   0.00699     -0.00287     0.06250
5   1.71875  1.75000  -0.04590  0.06250  1.73438   0.00806  -0.00037      0.00050     0.03125
6   1.71875  1.73438  -0.04590  0.00806  1.72656  -0.01898   0.00087     -0.00015     0.01563
7   1.72656  1.73438  -0.01898  0.00806  1.73047  -0.00548   0.00010     -4.41E-05    0.00781
8   1.73047  1.73438  -0.00548  0.00806  1.73242   0.00129  -7.04E-06     1.04E-05    0.00391
9   1.73047  1.73242  -0.00548  0.00129  1.73145  -0.00210   1.15E-05    -2.70E-06    0.00195
10  1.73145  1.73242  -0.00210  0.00129  1.73193  -0.00041   8.51E-07    -5.22E-07    0.00098
11  1.73193  1.73242  -0.00041  0.00129  1.73218   0.00044  -1.79E-07     5.65E-07    0.00049
12  1.73193  1.73218  -0.00041  0.00044  1.73206   0.00002  -6.83E-09     7.40E-09    0.00024
13  1.73193  1.73206  -0.00041  0.00002  1.73199  -0.00019   7.90E-08    -3.27E-09    0.00012
14  1.73199  1.73206  -0.00019  0.00002  1.73203  -0.00009   1.73E-08    -1.50E-09    0.00006
15  1.73203  1.73206  -0.00009  0.00002  1.73204  -0.00004   3.20E-09    -6.06E-10    0.00003
16  1.73204  1.73206  -0.00004  0.00002  1.73205  -0.00001   3.46E-10    -1.62E-10    0.00002
17  1.73205  1.73206  -0.00001  0.00002  1.73205   0.00000  -3.47E-11     6.07E-11    0.00001
18  1.73205  1.73205  -0.00001  0.00000  1.73205   0.00000   2.88E-11    -1.08E-11    0.00000

The midpoint converges to c ≈ 1.73205 ≈ √3.
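The same halving procedure can be sketched in Python (illustrative, not the original spreadsheet):

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: keep the half-interval where f changes sign."""
    fa = f(a)
    c = a
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if abs(b - a) / 2 < tol or fc == 0:
            return c
        if fa * fc < 0:
            b = c            # root lies in [a, c]
        else:
            a, fa = c, fc    # root lies in [c, b]
    return c

root = bisect(lambda x: x**2 - 3, 1.0, 2.0)
print(round(root, 5))  # 1.73205
```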
b) False-Position Method or Regula-Falsi Method
It is described as the trial-and-error approach of using "false" or "test" values for the variable and then altering the test value according to the result.
Its only difference from the bisection method is how the new estimate is obtained. It uses the formula
c = a - f(a)(a - b) / (f(a) - f(b))
Example:
f(x) = x^3 - x - 1
a = 1, b = 2

k   a        b        f(a)      f(b)     c        f(c)      f(a)*f(c)    f(b)*f(c)    e
0   1.00000  2.00000  -1.00000  5.00000  1.16667  -0.57870   0.57870     -2.89352     1.00000
1   1.16667  2.00000  -0.57870  5.00000  1.25311  -0.28536   0.16514     -1.42682     0.83333
2   1.25311  2.00000  -0.28536  5.00000  1.29344  -0.12954   0.03697     -0.64771     0.74689
3   1.29344  2.00000  -0.12954  5.00000  1.31128  -0.05659   0.00733     -0.28294     0.70656
4   1.31128  2.00000  -0.05659  5.00000  1.31899  -0.02430   0.00138     -0.12152     0.68872
5   1.31899  2.00000  -0.02430  5.00000  1.32228  -0.01036   0.00025     -0.05181     0.68101
6   1.32228  2.00000  -0.01036  5.00000  1.32368  -0.00440   4.56E-05    -0.02202     0.67772
7   1.32368  2.00000  -0.00440  5.00000  1.32428  -0.00187   8.23E-06    -0.00935     0.67632
8   1.32428  2.00000  -0.00187  5.00000  1.32453  -0.00079   1.48E-06    -0.00396     0.67572
9   1.32453  2.00000  -0.00079  5.00000  1.32464  -0.00034   2.67E-07    -0.00168     0.67547
10  1.32464  2.00000  -0.00034  5.00000  1.32468  -0.00014   4.80E-08    -0.00071     0.67536
11  1.32468  2.00000  -0.00014  5.00000  1.32470  -0.00006   8.62E-09    -0.00030     0.67532
12  1.32470  2.00000  -0.00006  5.00000  1.32471  -0.00003   1.55E-09    -0.00013     0.67530
13  1.32471  2.00000  -0.00003  5.00000  1.32472  -0.00001   2.79E-10    -5.44E-05    0.67529
14  1.32472  2.00000  -0.00001  5.00000  1.32472   0.00000   5.01E-11    -2.31E-05    0.67528
15  1.32472  2.00000   0.00000  5.00000  1.32472   0.00000   9.02E-12    -9.78E-06    0.67528

The estimate converges to c ≈ 1.32472.
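The chord-based update can be sketched in Python (an illustrative snippet; the bracketing logic mirrors bisection, only the choice of c differs):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Regula-falsi: the new estimate is where the chord through (a,f(a)),(b,f(b)) crosses zero."""
    fa, fb = f(a), f(b)
    c = a
    for _ in range(max_iter):
        c_old = c
        c = a - fa * (a - b) / (fa - fb)   # chord intersection with the x-axis
        fc = f(c)
        if abs(c - c_old) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc   # root lies in [a, c]
        else:
            a, fa = c, fc   # root lies in [c, b]
    return c

root = false_position(lambda x: x**3 - x - 1, 1.0, 2.0)
print(round(root, 5))  # 1.32472
```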
Let’s make a table and a chart to check.
x f(x)
1 -1
1.1 -0.769
1.2 -0.472
1.3 -0.103
1.4 0.344
1.5 0.875
1.6 1.496
1.7 2.213
1.8 3.032
1.9 3.959
2 5
[Chart: f(x) = x^3 - x - 1 rises from -1 at x = 1 to 5 at x = 2, crossing zero between x = 1.3 and x = 1.4.]
Case Study
It is well documented that the atmospheric levels of several greenhouse gases have been increasing over the past 20 years. For example, the figure below shows data for the partial pressure of carbon dioxide (CO2) collected at Hempstead town, New York from 2001 through 2020. The trend in the data can be nicely fit with a cubic polynomial (curve fitting is taken up in a later section),
f(x) = x^3 - 4x - 9
Use one open method and one bracketing method to find the root.
Let’s graph:
x    Pcco2
1    -12
2    -9
3    6
4    39
5    96
6    183
7    306
8    471
9    684
10   951
11   1278
12   1671
13   2136
14   2679
15   3306
16   4023
17   4836
18   5751
19   6774
20   7911
[Chart: Pcco2 versus x; the curve crosses zero between x = 2 and x = 3.]
Using the false position method:
f(x) = x^3 - 4x - 9
a = 2, b = 3
c = a - f(a)(a - b)/(f(a) - f(b))
k   a        b        f(a)      f(b)     c        f(c)
0 2.00000 3.00000 -9.00000 6.00000 2.60000 -1.82400
1 2.60000 3.00000 -1.82400 6.00000 2.69325 -0.23723
2 2.69325 3.00000 -0.23723 6.00000 2.70492 -0.02891
3 2.70492 3.00000 -0.02891 6.00000 2.70633 -0.00350
4 2.70633 3.00000 -0.00350 6.00000 2.70650 -0.00042
5 2.70650 3.00000 -0.00042 6.00000 2.70653 -0.00005
6 2.70653 3.00000 -0.00005 6.00000 2.70653 -0.00001
7 2.70653 3.00000 -0.00001 6.00000 2.70653 0.00000
Therefore, the root is 2.70653
III. Curve Fitting and Interpolation
a. Least Squares Regression
This method of regression analysis begins with a set of data points plotted on an x-y graph. An analyst using the least squares method generates a line of best fit, y = a + bx, that explains the potential relationship between the independent and dependent variables.
The coefficients follow from the normal equations:
∑y = n·a + b·∑x
∑xy = a·∑x + b·∑x^2
Here, n is the number of data points.
Example:
Consider the set of points: (1, 1), (-2, 1), and (3, 2). Plot these points and the least-squares regression line in the same graph.

x      y    xy    x^2
1      1    1     1
-2     1    -2    4
3      2    6     9
total  4    5     14   (∑x = 2)

Normal equations:
4 = 3a + 2b
5 = 2a + 14b

a = 1.211
b = 0.184
so the fitted line is y = 1.211 + 0.184x.
[Chart: the three data points with the fitted line.]
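The normal equations can be solved directly in a short Python sketch (illustrative, not the original worksheet), using the closed-form solution of the two-equation system:

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by solving the two normal equations in closed form."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

a, b = least_squares_line([1, -2, 3], [1, 1, 2])
print(round(a, 3), round(b, 3))  # 1.211 0.184
```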
b. Polynomial Regression
Polynomial regression is a form of regression analysis in which the relationship between the independent variable and the dependent variable is modeled as an nth-degree polynomial.
Polynomial regression does not require the relationship between the independent and dependent variables to be linear in the data set; this is one of the main differences between linear and polynomial regression.
Polynomial regression is generally used when the points in the data are not captured by the linear regression model, and linear regression fails to describe the result well.
Formulas to be considered:
y = a0 + a1x + a2x^2 + a3x^3 + … + anx^n
y = a0 + a1x + a2x^2 -> equation of a parabola
Example:
Fit a second order polynomial to the data given below:
x y
0 2.1
1 7.7
2 13.6
3 27.2
4 40.9
5 61.1
[Chart: the data points rise from y = 2.1 at x = 0 to y = 61.1 at x = 5.]

MATRIX FORM (normal equations):
[  6   15   55 ] [a0]   [  152.6 ]
[ 15   55  225 ] [a1] = [  585.6 ]
[ 55  225  979 ] [a2]   [ 2488.8 ]

a0 = 2.4786
a1 = 2.3593
a2 = 1.8607

The fitted parabola is y = 2.4786 + 2.3593x + 1.8607x^2.
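Forming and solving the normal equations can be sketched in Python (an illustrative snippet, not the original Excel computation):

```python
def polyfit2(xs, ys):
    """Fit y = a0 + a1*x + a2*x^2 by forming and solving the 3x3 normal equations."""
    # Power sums S_k = sum(x^k) and moment sums T_k = sum(y * x^k).
    S = [sum(x**k for x in xs) for k in range(5)]
    T = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    # Augmented 3x3 system: row i is [S_i, S_{i+1}, S_{i+2} | T_i].
    M = [[float(S[i + j]) for j in range(3)] + [float(T[i])] for i in range(3)]
    # Gaussian elimination with back substitution.
    for k in range(2):
        for i in range(k + 1, 3):
            m = M[i][k] / M[k][k]
            M[i] = [M[i][j] - m * M[k][j] for j in range(4)]
    a = [0.0] * 3
    for i in (2, 1, 0):
        a[i] = (M[i][3] - sum(M[i][j] * a[j] for j in range(i + 1, 3))) / M[i][i]
    return a

a = polyfit2([0, 1, 2, 3, 4, 5], [2.1, 7.7, 13.6, 27.2, 40.9, 61.1])
print([round(v, 4) for v in a])  # ≈ [2.4786, 2.3593, 1.8607]
```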
IV. Numerical Integration
a. Euler’s Method
Euler's method is based on approximating the graph of a solution y(x) with a sequence of tangent
line approximations computed sequentially, in “steps”.
In order to use Euler's Method to generate a numerical solution to an initial value problem of the
form:
y′ = f(x, y)
y(xo) = yo
we decide upon what interval, starting at the initial condition, we desire to find the solution. We
chop this interval into small subdivisions of length h. Then, using the initial condition as our
starting point, we generate the rest of the solution by using the iterative formulas:
xn+1 = xn + h
yn+1 = yn + h·f(xn, yn)
to find the coordinates of the points in our numerical solution. We terminate this process when
we have reached the right end of the desired interval.
Example:
y′ = x + 2y
y(0) = 0
numerically, finding a value for the solution at x = 1, and using steps of size h = 0.25.
Clearly, the description of the problem implies that the interval we'll be finding a solution
on is [0,1]. The differential equation given tells us the formula for f(x, y) required by the Euler
Method, namely:
f(x, y) = x + 2y
and the initial condition tells us the values of the coordinates of our starting point:
x0 = 0
y0 = 0
f(x, y) = x + 2y, h = 0.25
formula:
Yn+1 = Yn + h(Xn + 2Yn)
n   Xn     Yn
0   0.00   0.000000
1   0.25   0.000000
2   0.50   0.062500
3   0.75   0.218750
4   1.00   0.515625
[Chart: the computed Yn values plotted against Xn.]
x4 = 1
y4 = 0.515625
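The stepping loop can be sketched in Python (illustrative, not from the original worksheet):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: advance along the tangent line, y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

y1 = euler(lambda x, y: x + 2 * y, 0.0, 0.0, 0.25, 4)
print(y1)  # 0.515625
```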
b. Trapezoidal Method
Trapezoidal Rule is a rule that evaluates the area under the curves by dividing the total
area into smaller trapezoids rather than using rectangles. This integration works by
approximating the region under the graph of a function as a trapezoid, and it calculates
the area.
Suppose we have a function f(x) and we are interested in the integration over the linear range x = a to x = b. Dividing the range into n subintervals of width h = (b - a)/n, the composite rule is:
∫[a,b] f(x) dx ≈ (h/2)[f(x0) + 2f(x1) + 2f(x2) + … + 2f(xn-1) + f(xn)]
plus an error term of order h^2.
Example:
f(x) = 4x^2 + 6
∫[2,3] (4x^2 + 6) dx
n = 4; a = 2, b = 3; h = (b - a)/n = (3 - 2)/4 = 1/4

= (h/2)[f(x0) + 2f(x1) + 2f(x2) + 2f(x3) + f(x4)]
= (1/8)[f(2) + 2f(2.25) + 2f(2.5) + 2f(2.75) + f(3)]
= (1/8)[22 + 52.5 + 62 + 72.5 + 42]
= (1/8)[251]
= 31.375
[Chart: f(x) = 4x^2 + 6 over [2, 3] with the four trapezoids at x = 2, 2.25, 2.5, 2.75, 3.]
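The composite rule can be sketched in Python (an illustrative snippet reproducing the hand computation above):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return total * h / 2

area = trapezoid(lambda x: 4 * x**2 + 6, 2.0, 3.0, 4)
print(area)  # 31.375
```

For comparison, the exact integral is 94/3 ≈ 31.333, so four trapezoids already come close.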
c. Simpson’s Rule
Is one of the numerical methods which is used to evaluate the definite integral.
Usually, to find the definite integral, we use the fundamental theorem of calculus,
where we have to apply the antiderivative techniques of integration. However,
sometimes, it isn’t easy to find the antiderivative of an integral, like in Scientific
Experiments, where the function has to be determined from the observed readings.
Therefore, numerical methods are used to approximate the integral in such conditions.
Simpson’s rule methods are more accurate than the other numerical approximations, and for n equally spaced subdivisions (n even) the composite formula is:
∫[a,b] f(x) dx ≈ (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + … + 4f(xn-1) + f(xn)], where h = (b - a)/n.
Example:
f(x) = 1/(1 + x^5); n = 6, a = 0, b = 3

∫[0,3] 1/(1 + x^5) dx, Δx = (3 - 0)/6 = 1/2

= (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + 2f(x4) + 4f(x5) + f(x6)]
= (1/6)[f(0) + 4f(0.5) + 2f(1) + 4f(1.5) + 2f(2) + 4f(2.5) + f(3)]
Since it will take much time solving this by hand, we will proceed to excel.
x f(x) coeff. c*f(x)
0 1 1 1
0.5 0.969697 4 3.878788
1 0.5 2 1
1.5 0.116364 4 0.465455
2 0.030303 2 0.060606
2.5 0.010136 4 0.040545
3 0.004098 1 0.004098
Sum of c·f(x) = 6.449492
(1/6)(6.449492) = 1.074915
Therefore, it is approximately,
1.074915
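The weighted sum can be sketched in Python (illustrative, not the original Excel sheet); the coefficients alternate 4, 2, 4, … between the endpoints:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

approx = simpson(lambda x: 1 / (1 + x**5), 0.0, 3.0, 6)
print(round(approx, 6))  # 1.074915
```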
Now, let’s do its graph.
[Chart: f(x) = 1/(1 + x^5) decreasing from 1 at x = 0 toward 0 at x = 3.]
V. Solution to Ordinary Differential Equations
a. Runge-Kutta Methods
The Runge–Kutta method is an effective and widely used method for solving the initial-value problems of differential equations. It can be used to construct high-order accurate numerical methods using only evaluations of the function itself, without needing its high-order derivatives.
The second-order formula (sometimes known as RK2) uses a midpoint slope; the classical fourth-order formula is
k1 = f(xn, yn)
k2 = f(xn + h/2, yn + (h/2)k1)
k3 = f(xn + h/2, yn + (h/2)k2)
k4 = f(xn + h, yn + h·k3)
yn+1 = yn + (h/6)(k1 + 2k2 + 2k3 + k4)
Example:
dy/dx = 3x^2·y, where x0 = 1, y0 = 2, h = 0.1
y1 = y0 + h·T4(x0, y0, h), where T4 = (1/6)(k1 + 2k2 + 2k3 + k4)
Graph:
[Graph: the computed y values grow rapidly from y = 2 at x = 1 as x approaches 2.]
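A single RK4 step can be sketched in Python. This is an illustrative snippet, using the example's ODE as reconstructed above (dy/dx = 3x²y, whose exact solution through (1, 2) is y = 2·exp(x³ − 1)):

```python
import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dx = 3x^2 * y, y(1) = 2, h = 0.1 (ODE as reconstructed from the example).
f = lambda x, y: 3 * x**2 * y
y = rk4_step(f, 1.0, 2.0, 0.1)
print(y)  # close to the exact value 2*exp(1.1**3 - 1) ≈ 2.7847
```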