Numerical Methods On Finding The Roots


Ching, Robby Andre July 3, 2020

Numerical Methods on Root Finding

I. Linear Equations

Introduction and Overview

The quadratic formula provides the roots of quadratic equations. Roots are the x values that yield
y = 0, and so they may also be referred to as zeroes. There are, however, functions, polynomial or
non-polynomial, whose roots are harder to find analytically. In such cases, numerical methods can be used,
which give approximate values of the roots. A root lies where the value of f(x) changes sign (from + to −
or vice versa), and these sign changes are detected at the boundaries of an interval chosen within the
function [1]. As a generalization:

• When the two boundaries of a given interval have opposite signs, the number of roots within that
interval is odd; that is, at least one root can be found.
• When they have the same sign, the number of roots is even, and this includes zero (no roots in that
interval).
However, there are cases in which this generalization cannot be applied, such as when the function is
tangent to the x-axis or discontinuous on the given interval [1].

-(Left figure) The function is tangential to the x-axis, such that even though the boundaries (blue dots) of the interval have opposite signs, there are two roots in between them [1].
-(Right figure) The function is discontinuous, such that even though the boundaries (blue dots) have opposite signs, there are two roots in the interval [1].

A. Bracket Methods

Bisection Method

This method utilizes the first generalization: at least one root exists within an interval whose two
boundaries (upper bound xu and lower bound xl) have f(x) values of different signs, that is, at least one
root exists when f(xu) · f(xl) < 0. Incremental search methods locate an interval where f(x) changes sign;
the location of the sign change is then identified more precisely by dividing that interval into a number of
subintervals. The process is repeated until a refined root estimate is obtained [1].

In the bisection method, which is also called binary chopping, interval halving, or Bolzano's
method, the interval is always divided in half. A midpoint (xr) is found within the interval, and the two new
sub-intervals ([xl, xr] and [xr, xu]) are searched for the sign change. The midpoint is found with the formula
given below [1]. The following rules are applied:
• When f(xr) · f(xl) < 0, that is, a sign change occurred between xr and xl, then the root must be in
that sub-interval. Set xu = xr and continue the iteration using the interval [xl, xr].
• When f(xr) · f(xl) > 0, that is, a sign change did NOT occur between xr and xl, then the root must
NOT be in that sub-interval. Set xl = xr and continue the iteration using the interval [xr, xu].
• When f(xr) · f(xl) = 0, that is, f(xr) = 0 since f(xl) ≠ 0, then xr is already the root. Terminate
the iterations.
x_r = x_r^new = (x_u + x_l) / 2
The superscript "new" signifies that a new midpoint, as a candidate for a more refined root, is obtained
at each iteration. Because the new sub-interval is half of the interval of the previous iteration:

x_r^new − x_r^old = (x_u − x_l) / 2
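
A minimal Python sketch of one way to implement the loop described above; the stopping tolerance and iteration cap are illustrative assumptions, not values from the text:

def bisection(f, xl, xu, es=0.5, max_iter=50):
    """Bisection root finding; es is the desired approximate relative error in %."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = (xl + xu) / 2.0                      # midpoint of the current bracket
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        test = f(xl) * f(xr)
        if test < 0:
            xu = xr                               # root lies in [xl, xr]
        elif test > 0:
            xl = xr                               # root lies in [xr, xu]
        else:
            return xr, 0.0                        # f(xr) = 0: exact root found
        if ea < es:
            break
    return xr, ea

# Example usage with the quadratic of Sample #1 below (bracket [5, 10]):
root, err = bisection(lambda x: -0.6 * x**2 + 2.4 * x + 5.5, 5, 10)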

Termination Criteria and Error Estimates

The iterations are terminated once the root has been obtained or once the desired absolute
error has been reached. The true relative error (εt) can only be computed if the actual root is known. For
practical reasons this is rarely available, since the purpose of numerical methods is precisely to give
approximate roots of functions whose roots cannot be determined exactly. In such cases, the approximate
relative error (εa) is used for the evaluation instead [1].
ε_a = |x_r^new − x_r^old| / x_r^new × 100  →  ε_a = |(x_u − x_l)/2| / ((x_u + x_l)/2) × 100  →  ε_a = |x_u − x_l| / (x_u + x_l) × 100
The absolute error (E) at the beginning (0th iteration) is E_a^0 = x_u^0 − x_l^0 = Δx^0. Since, for the
bisection method, the sub-interval used for the next (1st) iteration has half of that absolute error,
E_a^1 = x_u^1 − x_l^1 = (x_u^0 − x_l^0)/2 = Δx^0/2. Each iteration uses half of the interval of the previous
iteration, so for the nth iteration: E_a^n = x_u^n − x_l^n = (x_u^0 − x_l^0)/2^n = Δx^0/2^n. This last equation
is useful because it gives the number of iterations (n + 1, counting the 0th iteration), i.e., the iteration at
which the computation must stop to achieve the desired absolute error [1].
E_a^n = Δx^0 / 2^n  →  2^n = Δx^0 / E_a^n  →  n·log(2) = log(Δx^0 / E_a^n)  →  n = log(Δx^0 / E_a^n) / log(2)

For bisection, the approximate relative error (εa) is always greater than the true relative error (εt). The
approximate relative error is computed using the upper and lower bounds of the interval of interest for a
given iteration, whereas the true relative error is computed using the true root (x_r,t) and the
approximated root (the computed estimate). The true root always lies within the interval of interest, and
since the approximate root is the midpoint of the two boundaries, then [1]:



x_r,t = x_r ± 0.5·Δx

-In (a), the true root is near the new approximate root (the midpoint). In (b), the true root is near the old approximate root (one of the boundaries). In either case, the true root never goes outside of this interval [1].

Sample #1: Determine the real roots of 𝑓 (𝑥) = −0.6 𝑥 2 + 2.4𝑥 + 5.5

a. Using the quadratic formula


x_r,t = (−b ± √(b² − 4ac)) / (2a) = (−2.4 ± √(2.4² − 4(−0.6)(5.5))) / (2(−0.6)) = −1.6286, 5.6286
b. Using three iterations of the bisection method to determine the highest root. Employ initial
guesses of xl = 5 and xu = 10. Compute the estimated error εa and the true error εt after each
iteration.
n = 0:

f(5) = −0.6(5)² + 2.4(5) + 5.5 = 2.50

f(10) = −0.6(10)² + 2.4(10) + 5.5 = −30.50

x_r^0 = (10 + 5)/2 = 7.5;  f(7.5) = −0.6(7.5)² + 2.4(7.5) + 5.5 = −10.25

- Because f(7.5) · f(5) < 0, set xu = 7.50.

n = 1:

x_r^1 = (7.5 + 5)/2 = 6.25;  f(6.25) = −0.6(6.25)² + 2.4(6.25) + 5.5 = −2.9375

- Because f(6.25) · f(5) < 0, set xu = 6.25.

n = 2:

x_r^2 = (6.25 + 5)/2 = 5.625;  f(5.625) = −0.6(5.625)² + 2.4(5.625) + 5.5 = 0.015625

- Because f(5.625) · f(5) > 0, set xl = 5.625 and keep xu = 6.25. This bracket is for the next iteration (n = 3).



ε_t = |x_r,t − x_r^new| / x_r,t × 100 = |5.6286 − 5.625| / 5.6286 × 100 = 0.064%

ε_a = |x_r^new − x_r^old| / x_r^new × 100 = |5.625 − 6.25| / 5.625 × 100 = |6.25 − 5| / (6.25 + 5) × 100 = 11.11%

Sample #2: Determine the real root of f(x) = 4x³ − 6x² + 7x − 2.3 using the bisection method. Employ
initial guesses of xl = 0 and xu = 1 and iterate until the estimated error εa falls below a level of εs = 10%.

n xl xu xr ea yl yr yr·yl
0 0 1 0.5 100 -2.3 0.2 -0.46
1 0 0.5 0.25 100 -2.3 -0.8625 1.98375
2 0.25 0.5 0.375 33.33333 -0.8625 -0.30781 0.265488
3 0.375 0.5 0.4375 14.28571 -0.30781 -0.05098 0.015691
4 0.4375 0.5 0.46875 6.666667 -0.05098 0.074878 -0.00382

-The data above were calculated using Excel software. The computation runs up to n = 4 (5 iterations),
and the approximate root is x = 0.46875.

Sample #3: The saturation concentration of dissolved oxygen in freshwater can be calculated with the
equation (APHA, 1992) below, where osf = the saturation concentration of dissolved oxygen in freshwater
at 1 atm (mg/L) and Ta = absolute temperature (K). Remember that Ta = T + 273.15, where T = temperature
(◦C). According to this equation, saturation decreases with increasing temperature. For typical natural
waters in temperate climates, the equation can be used to determine that oxygen concentration ranges
from 14.621 mg/L at 0◦C to 6.413 mg/L at 40◦C. Given a value of oxygen concentration, this formula and
the bisection method can be used to solve for temperature in ◦C.

ln o_sf = −139.3441 + (1.575701 × 10⁵)/T_a − (6.642308 × 10⁷)/T_a² + (1.243800 × 10¹⁰)/T_a³ − (8.621949 × 10¹¹)/T_a⁴
a. If the initial guesses are set as 0 and 40◦C, how many bisection iterations would be required to
determine temperature to an absolute error of 0.05◦C?
n = log((40 − 0)/0.05) / log(2) = 9.64 → 10 iterations

b. What temperature of the freshwater would have a zero concentration of oxygen if the desired
absolute error is that in part a?



n xl xu xr ea yl yr yr·yl
0 273.15 313.15 293.15 6.822446 14.62098 9.092517 132.9415
1 293.15 313.15 303.15 3.298697 9.092517 7.558872 68.72917
2 303.15 313.15 308.15 1.622586 7.558872 6.949387 52.52953
3 308.15 313.15 310.65 0.804764 6.949387 6.673151 46.37431
4 310.65 313.15 311.9 0.400769 6.673151 6.541114 43.64984
5 311.9 313.15 312.525 0.199984 6.541114 6.476502 42.36354
6 312.525 313.15 312.8375 0.099892 6.476502 6.444534 41.73804
7 312.8375 313.15 312.9938 0.049921 6.444534 6.428633 41.42954
8 312.9938 313.15 313.0719 0.024954 6.428633 6.420702 41.27634
9 313.0719 313.15 313.1109 0.012476 6.420702 6.416742 41.19999

-The data above were computed using Excel software. The computation runs up to n = 9 (10 iterations),
and the approximate temperature for zero oxygen concentration is 313.1109 K, with an absolute error of
E_a^9 = |313.1109 − 313.0719| = 0.03919.
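
As a side illustration of how the statement's formula can be inverted with bisection, a minimal Python sketch follows; the target concentration of 8 mg/L is an assumed illustrative value, not part of the problem:

import math

def osf(Ta):
    """Saturation concentration of dissolved O2 in freshwater (mg/L), Ta in kelvin."""
    ln_osf = (-139.3441 + 1.575701e5 / Ta - 6.642308e7 / Ta**2
              + 1.243800e10 / Ta**3 - 8.621949e11 / Ta**4)
    return math.exp(ln_osf)

def bisect(f, xl, xu, n_iter):
    for _ in range(n_iter):
        xr = (xl + xu) / 2.0
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
    return (xl + xu) / 2.0

target = 8.0                                  # assumed target concentration, mg/L
f = lambda Ta: osf(Ta) - target               # root of f gives the matching temperature
Ta_root = bisect(f, 273.15, 313.15, 10)       # 10 iterations, per part (a)
print(Ta_root - 273.15)                       # temperature in degrees Celsius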

Regula-Falsi Method

The problem with the bisection method is that it is relatively inefficient: the approximate relative
errors go down slowly as the iterations progress. Also, bisection only uses the positions of the boundaries
(xl and xu) relative to x_r,t, ignoring the magnitudes of their f(x) values (yl and yu). A generalization can be
made that when the f(x) value of either the lower bound or the upper bound is nearer to the f(x) value of
the true root (i.e., nearer to zero), then its x value should also be nearer to x_r,t [1].

The Regula-Falsi method (Latin for "false position"), also called the false-position method or
linear interpolation, joins f(xl) and f(xu) by a straight line. It replaces the curve with a straight line, hence
the name false position. This method thus adds a graphical intuition about the location of the true
root [1].

Referring to the diagram, two right triangles are formed. The smaller one has a height of f(xl) and
a base of (xr − xl), while the larger one has a height of f(xu) and a base of (xu − xr). To determine an
equation for an approximate value of xr, the similar-triangles approach is used for the derivation. The
derivation is as follows [1].



y_l/(x_r − x_l) = y_u/(x_r − x_u)  →  y_l(x_r − x_u) = y_u(x_r − x_l)  →  x_r(y_l − y_u) = x_u·y_l − x_l·y_u

x_r = (x_u·y_l − x_l·y_u)/(y_l − y_u) = x_u + (x_u·y_l − x_l·y_u − x_u(y_l − y_u))/(y_l − y_u) = x_u + (x_u·y_u − x_l·y_u)/(y_l − y_u)

→  x_r = x_u − y_u(x_l − x_u)/(y_l − y_u)

The procedure is the same as that of the bisection method, but the midpoint formula is replaced by this
derived equation. It also follows the same termination criteria and error estimate.
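
A minimal Python sketch of the false-position loop just described (the stopping tolerance and iteration cap are illustrative assumptions):

def false_position(f, xl, xu, es=0.5, max_iter=50):
    """Regula-falsi root finding; es is the desired approximate relative error in %."""
    yl, yu = f(xl), f(xu)
    xr = xu
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - yu * (xl - xu) / (yl - yu)     # intersection of the chord with the x-axis
        yr = f(xr)
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        if yr * yl < 0:
            xu, yu = xr, yr                      # root lies in [xl, xr]
        elif yr * yl > 0:
            xl, yl = xr, yr                      # root lies in [xr, xu]
        else:
            return xr, 0.0
        if ea < es:
            break
    return xr, ea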

Downside and Weakness

The upside of this method is that the approximate relative error goes down faster as the iterations
proceed compared with the bisection method, as can be seen in the diagram. However, the approximate
relative error (εa) is NOT ALWAYS greater than the true relative error (εt) [1].

ε_a = |x_r^new − x_r^old| / x_r^new × 100

Another weakness of this method is that it can become too anchored to one of the boundaries. In the
bisection method, the interval is always halved until the iteration stops; that is, the bracket always
shrinks, since its width equals |x_u − x_l|/2 at each step. Here, however, the bracket width approaches a
constant value: one of the original boundaries tends to remain fixed while the other keeps converging
toward the true root. This reflects the generalization that the boundary whose f(x) value is closer to that
of the true root (closer to zero) has an x value closer to the true root. But this is NOT always true,
especially when the function has strong curvature. It can lead to poorer convergence, an approximate
relative error that is lower than the true relative error, and a much slower decrease in the approximate
relative error. The diagram shows a function where f(xl) is closer to zero than f(xu), yet xu is much closer
to x_r,t. One way to address this is to use the modified regula-falsi method, but


this is typically implemented in a program: the algorithm detects when one of the original boundaries is
stuck (keeps being retained) and, if so, halves the f(x) value used for that boundary [1].
Generally, the regula-falsi method is superior to the bisection method, but as shown above, this
is not always the case.
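
As a rough sketch of the stuck-boundary modification described above (this halving rule is one common way such programs are written, often called the Illinois variant; treat the details as assumptions rather than the exact program referenced):

def modified_false_position(f, xl, xu, es=0.5, max_iter=50):
    """False position that halves the retained endpoint's f value when it repeats."""
    yl, yu = f(xl), f(xu)
    xr, stuck_low, stuck_high = xu, 0, 0
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - yu * (xl - xu) / (yl - yu)
        yr = f(xr)
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        if yr * yl < 0:
            xu, yu = xr, yr
            stuck_low += 1                  # xl was retained again
            stuck_high = 0
            if stuck_low >= 2:
                yl /= 2                     # halve the stagnant endpoint's y value
        elif yr * yl > 0:
            xl, yl = xr, yr
            stuck_high += 1
            stuck_low = 0
            if stuck_high >= 2:
                yu /= 2
        else:
            return xr, 0.0
        if ea < es:
            break
    return xr, ea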

Sample #1: Using the regula-falsi method, determine the real root of f(x) = −26 + 85x − 91x² + 44x³ − 8x⁴ + x⁵
until the approximate relative error falls below ε = 0.20%, with xl = 0.5 and xu = 1.0.

n=0:
𝑦𝑙 = 𝑓(0.5) = −26 + (85 ∙ 0.5) − (91 ∙ 0.52 ) + (44 ∙ 0.53 ) − (8 ∙ 0.54 ) + 0.55 = −1.21875
𝑦𝑢 = 𝑓 (1.0) = −26 + (85 ∙ 1.0) − (91 ∙ 1.02 ) + (44 ∙ 1.03 ) − (8 ∙ 1.04 ) + 1.05 = 5

x_r = x_u − y_u(x_l − x_u)/(y_l − y_u) = 1 − 5(0.5 − 1)/(−1.21875 − 5) = 0.59799;  f(0.59799) = 0.75057

- Because f(0.59799) · f(0.5) < 0, set xu = 0.59799 and xl = 0.5.

n=1:

𝑦𝑙 = 𝑓(0.5) = −1.21875; 𝑦𝑢 = 𝑓 (0.59799) = 0.75057


x_r = 0.59799 − 0.75057(0.5 − 0.59799)/(−1.21875 − 0.75057) = 0.56064;  f(0.56064) = 0.07024

- Because f(0.56064) · f(0.5) < 0, set xu = 0.56064 and xl = 0.5.

ε_a = |x_r^new − x_r^old| / x_r^new × 100 = |0.56064 − 0.59799| / 0.56064 × 100 = 6.66146%

n=2:

y_l = f(0.5) = −1.21875;  y_u = f(0.56064) = 0.07024


x_r = 0.56064 − 0.07024(0.5 − 0.56064)/(−1.21875 − 0.07024) = 0.55734;  f(0.55734) = 0.00610

- Because f(0.55734) · f(0.5) < 0, set xu = 0.55734 and xl = 0.5.


ε_a = |0.55734 − 0.56064| / 0.55734 × 100 = 0.59296%

n=3:

𝑦𝑙 = 𝑓(0.5) = −1.21875; 𝑦𝑢 = 𝑓 (0.55734) = 0.00610



x_r = 0.55734 − 0.00610(0.5 − 0.55734)/(−1.21875 − 0.00610) = 0.55705;  f(0.55705) = 0.00053

ε_a = |0.55705 − 0.55734| / 0.55705 × 100 = 0.05129%

Sample #2: Determine the positive real root of ln(x⁴) = 0.70 using ten iterations of the bisection method
as well as of the regula-falsi method, with xl = 0.5 and xu = 2.0 as the initial guesses.

ln(𝑥 4 ) = 0.70 → 𝑓 (𝑥) = ln(𝑥 4 ) − 0.70

Bisection method
n xl xu xr ea yl yr yr·yl
0 0.5 2 1.25 60 -3.472588722 0.192574 -0.66873
1 0.5 1.25 0.875 42.85714 -3.472588722 -1.23413 4.285611
2 0.875 1.25 1.0625 17.64706 -1.23412557 -0.4575 0.564614
3 1.0625 1.25 1.15625 8.108108 -0.457501513 -0.11927 0.054567
4 1.15625 1.25 1.203125 3.896104 -0.119271961 0.039689 -0.00473
5 1.15625 1.203125 1.179688 1.986755 -0.119271961 -0.039 0.004652
6 1.179688 1.203125 1.191406 0.983607 -0.039001708 0.000537 -2.1E-05
7 1.179688 1.191406 1.185547 0.494234 -0.039001708 -0.01918 0.000748
8 1.185547 1.191406 1.188477 0.246508 -0.019183336 -0.00931 0.000179
9 1.188477 1.191406 1.189941 0.123102 -0.00931085 -0.00438 4.08E-05
10 1.189941 1.191406 1.190674 0.061513 -0.00438373 -0.00192 8.43E-06

Regula Falsi method


n xl xu xr ea yl yr yu yr·yl
0 0.5 2.00000 1.43935 NaN -3.47259 0.75678 2.07259 -2.62797
1 0.5 1.43935 1.27127 13.22158 -3.47259 0.26007 0.75678 -0.90312
2 0.5 1.27127 1.21753 4.41366 -3.47259 0.08731 0.26007 -0.30319
3 0.5 1.21753 1.19994 1.46658 -3.47259 0.02907 0.08731 -0.10095
4 0.5 1.19994 1.19412 0.48664 -3.47259 0.00965 0.02907 -0.03352
5 0.5 1.19412 1.19220 0.16140 -3.47259 0.00320 0.00965 -0.01112
6 0.5 1.19220 1.19156 0.05352 -3.47259 0.00106 0.00320 -0.00369
7 0.5 1.19156 1.19135 0.01775 -3.47259 0.00035 0.00106 -0.00122
8 0.5 1.19135 1.19128 0.00589 -3.47259 0.00012 0.00035 -0.00041
9 0.5 1.19128 1.19126 0.00195 -3.47259 0.00004 0.00012 -0.00013
10 0.5 1.19126 1.19125 0.00065 -3.47259 0.00001 0.00004 -0.00004



-The true root, as plotted using Desmos, is xr,t =
1.191. The regula-falsi method is closer compared
to the bisection method. In this problem it also
provided a smaller approximate relative error.
The true relative errors are:

ε_t = |x_r,t − x_r^10| / x_r,t × 100

ε_t,bisection = |1.191 − 1.190674| / 1.191 × 100 = 0.027%

ε_t,regula-falsi = |1.191 − 1.19125| / 1.191 × 100 = 0.021%

Sample #3: The velocity v of a falling parachutist is given by the equation below, where g = 9.8 m/s².
For a parachutist with a drag coefficient c = 15 kg/s, compute the mass m so that the velocity is v = 35 m/s
at t = 9 s. Use the false-position method to determine m to a level of εa = 0.001%.
v = (gm/c)(1 − e^(−(c/m)t))  →  f(m) = (gm/c)(1 − e^(−(c/m)t)) − v  →  f(m) = (9.8m/15)(1 − e^(−(15/m)·9)) − 35

m        f(m)
5        -31.7333
10       -28.4667
15       -25.2012
20       -21.9486
25       -18.7404
30       -15.6177
35       -12.6165
40       -9.7609
45       -7.06374
50       -4.52871
ml = 55  -2.15342
mu = 60  0.06835
65       2.144948
70       4.085733
75       5.900354
80       7.598305

-We first need to determine the initial guesses. Since m is a mass, only positive values are allowed; and
since it is the mass of a person, relatively large values are expected. The sign of f(m) changes between
m = 55 and m = 60, so these are used as the initial guess boundaries.

-The table below shows the data computed using Excel software. The approximate root is
mr = 55.0000040, with a very small relative error of ea = 0.0001111%. The value is very close to the lower
boundary. This example highlights the benefit of the regula-falsi method of exploiting one of the
boundaries being close to the true root, although of course not all equations or problems will yield the
same result.



n ml mu mr ea f(ml) f(mr) f(mu) f(mr)·f(ml)
0 55 60.0000000 59.8461803 -2.1534202 0.0022106 0.0683504 -0.0047603
1 55 59.8461803 59.8412106 0.0083049 -2.1534202 35.0000714 0.0022106 -75.3698622
2 55 59.8412106 55.2805971 8.2499354 -2.1534202 32.9752596 35.0000714 -71.0095914
3 55 55.2805971 55.0172009 0.4787525 -2.1534202 32.8544819 32.9752596 -70.7495062
4 55 55.0172009 55.0010581 0.0293500 -2.1534202 32.8470659 32.8544819 -70.7335365
5 55 55.0010581 55.0000651 0.0018054 -2.1534202 32.8466097 32.8470659 -70.7325541
6 55 55.0000651 55.0000040 0.0001111 -2.1534202 32.8465816 32.8466097 -70.7324936
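
For reference, a small Python sketch of how this f(m) and a false-position loop might be set up (an independent illustrative implementation, not the spreadsheet used above):

import math

g, c, t, v = 9.8, 15.0, 9.0, 35.0

def f(m):
    """Residual of the falling-parachutist velocity equation for a trial mass m (kg)."""
    return g * m / c * (1 - math.exp(-c * t / m)) - v

def false_position(f, xl, xu, es=0.001, max_iter=100):
    yl, yu = f(xl), f(xu)
    xr = xu
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - yu * (xl - xu) / (yl - yu)
        yr = f(xr)
        ea = abs((xr - xr_old) / xr) * 100
        if yr * yl < 0:
            xu, yu = xr, yr
        elif yr * yl > 0:
            xl, yl = xr, yr
        else:
            return xr, 0.0
        if ea < es:
            break
    return xr, ea

m_root, err = false_position(f, 55.0, 60.0)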

B. Open Methods

Newton-Raphson method

In the previous bracketing methods, the true root is enclosed within an interval between two
bounds. In open methods, only one initial guess is required, and subsequent x values are computed
iteratively from that guess. Bracketing methods always converge toward the true root; open methods,
however, may diverge, wandering away from it. When they do converge, open methods are more
efficient, often converging faster than the bracketing methods [1].

In the Newton-Raphson method, a tangent line is extended from the point (x_i, f(x_i)), where x_i
is the initial guess, down to the x-axis. The point where this tangent crosses the x-axis usually represents
an improved approximation of the root. That is, the next approximate root is computed from the y value
(and the y′ value) of the root estimate of the preceding iteration. The graphical derivation is as
follows [1].
f′(x_i) = (f(x_i) − 0)/(x_i − x_{i+1})  →  f′(x_i)(x_i − x_{i+1}) = f(x_i)  →  x_{i+1} = x_i − f(x_i)/f′(x_i)
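
A minimal Python sketch of this update rule (the stopping tolerance, iteration cap, and the example guess are illustrative assumptions):

def newton_raphson(f, fprime, x0, es=1e-4, max_iter=50):
    """Newton-Raphson iteration; es is the desired approximate relative error in %."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)          # tangent-line update
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        x = x_new
        if ea < es:
            break
    return x, ea

# Example: highest root of the quadratic from the bisection section (x0 = 10 is an arbitrary guess)
root, err = newton_raphson(lambda x: -0.6 * x**2 + 2.4 * x + 5.5,
                           lambda x: -1.2 * x + 2.4, x0=10.0)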
Termination Criteria and Error Estimates

This Newton-Raphson formula can also be derived using the Taylor series expansion. Derived this
way, it can also be shown that the true absolute error of an iteration (E_t,i+1) is roughly proportional to
the square of the true absolute error of the previous iteration (E_t,i). As a result, the number of correct
significant figures (or correct decimal places) approximately doubles with each iteration, and the absolute
error values rapidly progress downward. This is called quadratic convergence [1].
E_t,i+1 = [−f″(x_r) / (2 f′(x_r))] · E_t,i²        ε_a = |x_{i+1} − x_i| / x_{i+1} × 100

Here x_r could be the true root or the last approximate root from the final iteration made, and so E_t,i
for i = 0 (the first iteration) is E_t,0 = |x_r − x_0|, x_0 being the initial guess. The iterations can be
terminated at the desired approximate relative error, computed with the equation above [1].

Downside and Weakness

In some cases, the convergence to the true root is very slow for a given initial guess. This may
occur when the approximate root keeps wandering away from the true root, that is, when there is a
tendency for divergence. This is shown by the four cases in the diagram. Case (a) shows that when an
inflection point (y″ = 0) is in the vicinity of the true root, the approximate roots progressively diverge
from it. Case (b) shows a persisting oscillation of the approximate root near a maximum or a minimum
(y′ = 0). Case (c) shows that when a region of near-zero slope is encountered, the approximate root
quickly shoots away from the area of the true root. Lastly, in case (d), when the approximate root lands on
a point of zero slope, the tangent never crosses the x-axis again; this makes sense since the formula has
the first derivative (y′) as its denominator [1].

Sample #1: You are designing a spherical tank (Fig. P6.30) to hold water for a small village in a developing
country. The volume of liquid it can hold can be computed as follows. If R = 3 m, what depth must the
tank be filled to so that it holds 30 m3? Use three iterations of the Newton-Raphson method to determine
your answer. Determine the approximate relative error after each iteration. Note that an initial guess of
R will always converge.
V = πh²(3R − h)/3  →  f(h) = πh²(3R − h)/3 − V

→ f(h) = πh²(9 − h)/3 − 30  →  f(h) = 3πh² − πh³/3 − 30

→ f′(h) = 6πh − πh²



h_{i+1} = h_i − f(h_i)/f′(h_i)  →  h_{i+1} = h_i − (3πh_i² − πh_i³/3 − 30) / (6πh_i − πh_i²)        ε_a = |h_{i+1} − h_i| / h_{i+1} × 100

i=0:

-Since the value of R would always converge, we take h0 = 3 as our initial guess.
h_1 = 3 − (3π(3)² − π(3)³/3 − 30) / (6π(3) − π(3)²) = 2.06103

ε_a = |2.06103 − 3| / 2.06103 × 100 = 45.56%

i=1:
h_2 = 2.06103 − (3π(2.06103)² − π(2.06103)³/3 − 30) / (6π(2.06103) − π(2.06103)²) = 2.02704

ε_a = |2.02704 − 2.06103| / 2.02704 × 100 = 1.68%

i=2:
h_3 = 2.02704 − (3π(2.02704)² − π(2.02704)³/3 − 30) / (6π(2.02704) − π(2.02704)²) = 2.02691

ε_a = |2.02691 − 2.02704| / 2.02691 × 100 = 0.0064%

Sample #2: Given the following function, use the Newton-Raphson method to determine its maximum.
Use an initial guess of xi = 1 and perform the iterations until the true root has been obtained or the relative
error reaches 0%.

𝑓 (𝑥) = −2𝑥 6 − 1.5𝑥 4 + 10𝑥 + 2 → 𝑓 ′ (𝑥) = −12𝑥 5 − 6𝑥 3 + 10 → 𝑓 ′′ (𝑥) = −60𝑥 4 − 18𝑥 2

x_{i+1} = x_i − f′(x_i)/f″(x_i)  →  x_{i+1} = x_i − (−12x_i⁵ − 6x_i³ + 10) / (−60x_i⁴ − 18x_i²)



i x ea
0 1 11.428571428571400
1 0.897435897435897 2.836474423963110
2 0.872682482030690 0.145505470880312
3 0.871414526220993 0.000364424259489
4 0.871411350586632 0.000000002280017
5 0.871411350566763 0.000000000000000
6 0.871411350566763 0.000000000000000

-Because we are finding the root where f′(x) = 0 (a maximum), the primary function is f′(x) and its first
derivative is f″(x). The data above were computed using Excel.

Evaluating f(x) = −2x⁶ − 1.5x⁴ + 10x + 2 at this root gives f(0.871411350566763) = 8.973449165

-The maximum is therefore at (0.871411350566763, 8.973449165).
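
A short Python sketch of this optimization-by-root-finding approach (an illustrative reimplementation of the spreadsheet computation; the stopping tolerance is an assumption):

def newton_raphson(g, gprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - g(x) / gprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: -2 * x**6 - 1.5 * x**4 + 10 * x + 2
fp = lambda x: -12 * x**5 - 6 * x**3 + 10       # set f'(x) = 0 to locate the maximum
fpp = lambda x: -60 * x**4 - 18 * x**2

x_max = newton_raphson(fp, fpp, x0=1.0)
print(x_max, f(x_max))                           # approximately (0.87141, 8.97345)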

Sample #3: Apply the Newton-Raphson method to the function shown below to evaluate its known real
root at x = 3. Use an initial guess of x0 = 3.2 and take a minimum of four iterations. (b) Did the method
exhibit convergence onto its real root? Sketch the plot with the results for each iteration shown.

f(x) = tanh(x² − 9)  →  f′(x) = 2x·sech²(x² − 9)  →  x_{i+1} = x_i − tanh(x_i² − 9) / (2x_i·sech²(x_i² − 9))

i x ea et
3.2000 6.6667
0 2.7368 16.9242 8.7728
1 2.1783 25.6425 27.3915
2 1.7323 25.7438 42.2568
3 1.3817 25.3769 53.9443
4 1.1198 23.3811 62.6720
5 0.9526 17.5600 68.2477
6 0.8826 7.9246 70.5792
7 0.8717 1.2587 70.9449
8 0.8714 0.0278 70.9530
9 0.8714 0.0000 70.9530

-The approximate root clearly diverges. As seen in the plot, the true root lies at an inflection point, so the
approximate roots tend to diverge from it. Even though the approximate relative error progresses
downward, the true relative error progresses upward.



Secant method

In the Newton-Raphson method, the function must be (readily) differentiable. For functions
whose first derivative is inconvenient to find, the secant method can be used: the slope is approximated
with a finite difference, as shown below. The method uses the same principle as the Newton-Raphson
method, but two initial guesses are required (x_{i−1} and x_i). It also follows the same termination
criterion and error estimate as the Newton-Raphson method [1].

f′(x_i) ≅ (f(x_{i−1}) − f(x_i)) / (x_{i−1} − x_i)

x_{i+1} = x_i − f(x_i)/f′(x_i)  →  x_{i+1} = x_i − f(x_i) / [(f(x_{i−1}) − f(x_i)) / (x_{i−1} − x_i)]  →  x_{i+1} = x_i − f(x_i)(x_{i−1} − x_i) / (f(x_{i−1}) − f(x_i))
The secant formula looks analogous to that of the regula-falsi method. Their difference lies
primarily in how the next iterations proceed. In the diagram, the first iteration ((a) and (b)) is exactly the
same for both, but the next (and succeeding) iterations become different. In the regula-falsi method, the
latest approximate root replaces whichever bound has an f(x) value of the same sign, so the method
always keeps the true root bracketed and always converges. In the secant method, however, the values
are replaced in a strict sequence (x_{i−1} ← x_i, x_i ← x_{i+1}). This may leave the two points on the
same side of the true root (unable to bracket it), and the iteration may diverge. Its upside, however, is
that the approximate error values may progress downward at a much faster rate [1].
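
A minimal Python sketch of the secant update described above (tolerance and iteration cap are illustrative assumptions):

def secant(f, x_prev, x_curr, es=1e-4, max_iter=50):
    """Secant method; x_prev and x_curr are the two initial guesses."""
    for _ in range(max_iter):
        # finite-difference slope replaces f'(x) in the Newton-Raphson update
        x_next = x_curr - f(x_curr) * (x_prev - x_curr) / (f(x_prev) - f(x_curr))
        ea = abs((x_next - x_curr) / x_next) * 100 if x_next != 0 else float("inf")
        x_prev, x_curr = x_curr, x_next       # strict-sequence replacement (no bracketing)
        if ea < es:
            break
    return x_curr, ea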

Modified Secant method

Rather than using two initial guesses, a modified approach uses a fractional perturbation (δ) of
the independent variable x to estimate f′(x). The value of this fractional perturbation must be chosen
appropriately to provide efficient convergence. If it is too small, the estimate can be swamped by
round-off error from the subtraction in the denominator; if it is too big, the method can become
inefficiently slow and may even diverge [1].
f′(x_i) ≅ (f(x_i + δx_i) − f(x_i)) / (δx_i)  →  x_{i+1} = x_i − f(x_i) / [(f(x_i + δx_i) − f(x_i)) / (δx_i)]  →  x_{i+1} = x_i − δx_i·f(x_i) / (f(x_i + δx_i) − f(x_i))
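
And a corresponding sketch of the modified secant update with the fractional perturbation δ (again with assumed default values):

def modified_secant(f, x0, delta=0.01, es=1e-4, max_iter=50):
    """Modified secant method with fractional perturbation delta of x."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x                                   # perturbation step
        x_new = x - dx * f(x) / (f(x + dx) - f(x))
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        x = x_new
        if ea < es:
            break
    return x, ea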



Sample #1: Locate the first positive root of the following function, where x is in radians. Use five iterations
of the secant method with initial guesses of (a) xi-1 = 1.0 and xi = 3.0; (b) xi-1 = 1.5 and xi = 2.5, and (c) xi-1 =
1.5 and xi = 2.25 to locate the root.

𝑓(𝑥) = 𝑠𝑖𝑛𝑥 + cos(1 + 𝑥 2 ) − 1

(a) xi-1 = 1.0 and xi = 3.0

Sample calculations:

i=0:

𝑓 (𝑥𝑖−1 ) = sin(𝑥𝑖−1 ) + cos(1 + 𝑥𝑖−1 2 ) − 1 → 𝑓(1) = sin(1) + cos(1 + 12 ) − 1 = −0.5747

𝑓 (𝑥𝑖 ) = sin(𝑥𝑖 ) + cos(1 + 𝑥𝑖 2 ) − 1 → 𝑓 (3) = sin(3) + cos(1 + 32 ) − 1 = −1.6980


x_{i+1} = x_i − f(x_i)(x_{i−1} − x_i) / (f(x_{i−1}) − f(x_i))  →  x_1 = 3 − (−1.6980)(1 − 3) / (−0.5747 − (−1.6980)) = −0.02324
i=1:

𝑓 (𝑥𝑖−1 ) = sin(𝑥𝑖−1 ) + cos(1 + 𝑥𝑖−1 2 ) − 1 → 𝑓(3) = sin(3) + cos(1 + 32 ) − 1 = −1.6980

𝑓 (𝑥𝑖 ) = sin(𝑥𝑖 ) + cos(1 + 𝑥𝑖 2 ) − 1 → 𝑓 (−0.02324) = sin(−0.02324) + cos(1 + (−0.02324)2 ) − 1


= −0.4834
x_2 = −0.02324 − (−0.4834)(3 − (−0.02324)) / (−1.6980 − (−0.4834)) = −1.2265
i=2:

𝑓 (𝑥𝑖−1 ) = 𝑓(−0.02324) = sin(−0.02324) + cos(1 + (−0.02324)2 ) − 1 = −0.4834

𝑓 (𝑥𝑖 ) = 𝑓 (−1.2265) = sin(−1.2265) + cos(1 + (−1.2265)2 ) − 1 = −2.7450

x_3 = −1.2265 − (−2.7450)(−0.02324 − (−1.2265)) / (−0.4834 − (−2.7450)) = 0.2339

Secant method for (1, 3)
i    x             y             ea
     1             -0.574675852
     3             -1.697951521  -13023.08095
0    -0.023214278  -0.483363437  -98.10703908
1    -1.226347476  -2.744750012  624.1893994
2    0.233951216   -0.274717273  40.97592885
3    0.396365774   -0.211940326  58.04281991
4    0.944691166   -0.505811697  103376.1825

-The approximate x value does not tend to converge on the real root; it oscillates. The graphical intuition
behind this is that there are too many extremum points surrounding the chosen range.



(b) xi-1 = 1.5 and xi = 2.5
Secant method for (1.5,2.5)
i x y ea
1.5 -0.99663
0 2.5 0.166396 6.070241
1 2.356928735 0.669842 7.472986
2 2.54728716 -0.08283 0.829187
3 2.526339088 0.031471 0.227788
4 2.532106932 0.00057 0.004202
5 2.532213338 -4.2E-06 3.1E-05

(c) xi-1 = 1.5 and xi = 2.25

Secant method for (1.5,2.25) -A different root has been converged


i x y ea in part (b) and in part (c). This shows
1.5 -0.99663 that when multiple roots appear
0 2.25 0.753821 16.76071567 within a close vicinity, the chosen
1 1.927017993 -0.06177 1.253476722 rang of the initial guess becomes
2 1.951479332 0.024147 0.353535878 important.
3 1.944604458 -1.4E-05 0.000204032
4 1.944608426 1.61E-09 2.35567E-08
5 1.944608425 0 0

Sample #2: A mass balance for a pollutant in a well-mixed lake can be written by the given equation below.
Given the parameter values V = 1 × 106 m3, Q = 1 × 105 m3/yr, W = 1 × 106 g/yr, and k = 0.25 m0.5/g0.5/yr,
use the modified secant method to solve for the steady-state concentration. Employ an initial guess of
c = 4 g/m3 and δ = 0.5. Perform three iterations and determine the percent relative error after the third
iteration.
V dc/dt = W − Qc − kV√c  →  at steady state: W − Qc − kV√c = 0  →  f(c) = 1×10⁶ − 1×10⁵·c − (0.25 · 1×10⁶)·√c
i=0:

𝑓 (4) = 1𝑥106 − 1𝑥105 (4) − (0.25 ∙ 1𝑥106 )√4 = 100,000

𝑓 (4 + (4 ∙ 0.5)) = 1𝑥106 − 1𝑥105 (4 + (4 ∙ 0.5)) − (0.25 ∙ 1𝑥106 )√4 + (4 ∙ 0.5) = −212372.4357

x_{i+1} = x_i − δx_i·f(x_i) / (f(x_i + δx_i) − f(x_i))  →  x_1 = 4 − (0.5)(4)(100,000) / (−212,372.4357 − 100,000) = 4.64026
i=1:

𝑓 (4.64026) = 1𝑥106 − 1𝑥105 (4.64026) − (0.25 ∙ 1𝑥106 )√4.64026 = −2557.5682



f(4.64026 + (4.64026 · 0.5)) = 1×10⁶ − 1×10⁵(4.64026 + (4.64026 · 0.5)) − (0.25 · 1×10⁶)√(4.64026 + (4.64026 · 0.5)) = −355,602.7763

x_2 = 4.64026 − (0.5)(4.64026)(−2557.5682) / (−355,602.7763 − (−2557.5682)) = 4.6235
i=2:

𝑓 (4.6235) = 1𝑥106 − 1𝑥105 (4.6235) − (0.25 ∙ 1𝑥106 )√4.6235 = 91.8636

f(4.6235 + (4.6235 · 0.5)) = 1×10⁶ − 1×10⁵(4.6235 + (4.6235 · 0.5)) − (0.25 · 1×10⁶)√(4.6235 + (4.6235 · 0.5)) = −351,896.5706

x_3 = 4.6235 − (0.5)(4.6235)(91.8636) / (−351,896.5706 − 91.8636) = 4.6241

ε_a = |4.6241 − 4.6235| / 4.6241 × 100 = 0.01298%
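
A short Python sketch of this steady-state calculation using the modified secant update (an independent illustrative reimplementation; the closed-form check via the quadratic in √c is added only for comparison):

import math

W, Q, k, V = 1e6, 1e5, 0.25, 1e6

def f(c):
    """Steady-state mass-balance residual for concentration c (g/m^3)."""
    return W - Q * c - k * V * math.sqrt(c)

c, delta = 4.0, 0.5
for _ in range(3):                                   # three modified-secant iterations
    dc = delta * c
    c = c - dc * f(c) / (f(c + dc) - f(c))
print(c)                                             # approximately 4.624 g/m^3

# Analytic check: with s = sqrt(c), the balance reduces to s^2 + 2.5*s - 10 = 0
s = (-2.5 + math.sqrt(2.5**2 + 4 * 10)) / 2
print(s**2)                                          # same root, about 4.624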
Sample #3: Determine the highest real root of the given function using the Secant method. Compare the
results using Newton-Raphson method (xi = 3.5) and the Modified Secant method (xi = 3.5 and 𝛿 = 0.01).
Use xi-1 = 2.5 and xi = 3.5 and perform iterations until the true root has been obtained or ea = 0 (max of 10
iterations).

𝑓 (𝑥) = 0.95𝑥 3 − 5.9𝑥 2 + 10.9𝑥 − 6 → 𝑓 ′ (𝑥) = 2.85𝑥 2 − 11.8𝑥 + 10.9


Secant method:
Secant method
i x y ea
2.5 -0.78125
0 3.5 0.60625 14.26470588
1 3.063063063 -0.666700301 6.951691746
2 3.291906237 -0.164873855 2.232961387
3 3.367092104 0.076255627 0.711181561
4 3.343315064 -0.004403145 0.038808185
5 3.344613047 -0.000106564 0.000962512
6 3.34464524 1.55484E-07 1.40232E-06
7 3.344645193 -5.4996E-12 4.96052E-11
8 3.344645193 1.42109E-14 1.32776E-13
9 3.344645193 0 0
10 3.344645193 0 0



(0.95𝑥𝑖 3 − 5.9𝑥𝑖 2 + 10.9𝑥𝑖 − 6) ∙ (𝑥𝑖−1 − 𝑥𝑖 )
𝑥𝑖+1 = 𝑥𝑖 −
(0.95𝑥𝑖−1 3 − 5.9𝑥𝑖−1 2 + 10.9𝑥𝑖−1 − 6) − (0.95𝑥𝑖 3 − 5.9𝑥𝑖 2 + 10.9𝑥𝑖 − 6)

Newton-Raphson method:

x_{i+1} = x_i − (0.95x_i³ − 5.9x_i² + 10.9x_i − 6) / (2.85x_i² − 11.8x_i + 10.9)

Newton-Raphson method
i    x            ea
0    3.5000       3.9918
1    3.365650970  0.6140
2    3.345112276  0.0140
3    3.344645432  0.0000
4    3.344645193  0.0000
5    3.344645193  0.0000
6    3.344645193  0.0000
7    3.344645193  0.0000
8    3.344645193  0.0000
9    3.344645193  0.0000
10   3.344645193  0.0000

-In this example, the Newton-Raphson method proves superior to the secant (and even the modified
secant) method. This is because Newton-Raphson uses the actual first-derivative function, whereas in the
secant (and modified secant) methods the slope is only an approximation. Here, the true root is reached
much faster than with the secant (and modified secant) method.
Modified Secant method:

x_{i+1} = x_i − δx_i·(0.95x_i³ − 5.9x_i² + 10.9x_i − 6) / [(0.95(x_i + δx_i)³ − 5.9(x_i + δx_i)² + 10.9(x_i + δx_i) − 6) − (0.95x_i³ − 5.9x_i² + 10.9x_i − 6)]

Modified Secant method


i x x + δx f(x) f(x + δx) ea
0 3.5 3.535 0.60625 0.769220106 3.86373876
1 3.369799741 3.403497739 0.085704 0.207878982 0.706446339
2 3.34616091 3.379622519 0.005033 0.120439234 0.043631707
3 3.34470156 3.378148575 0.000187 0.115181346 0.001625108
4 3.344647206 3.378093678 6.67E-06 0.114985827 5.80291E-05
5 3.344645265 3.378091717 2.38E-07 0.114978846 2.06877E-06
6 3.344645195 3.378091647 8.48E-09 0.114978597 7.37485E-08
7 3.344645193 3.378091645 3.02E-10 0.114978588 2.62913E-09
8 3.344645193 3.378091645 1.08E-11 0.114978588 9.37533E-11
9 3.344645193 3.378091645 3.84E-13 0.114978588 3.33268E-12
10 3.344645193 3.378091645 1.42E-14 0.114978588 1.19499E-13

-In this method, the true root is reached at about the same rate as with the secant method but with much
smaller relative errors, though still more slowly than with the Newton-Raphson method.



II. Non-linear Equations

Introduction and Overview

Equations that cannot be expressed in the form given below are considered non-linear
equations. Suppose we have two non-linear equations u(x, y) and v(x, y) in two unknowns, x and y.
The solution of such a system of two non-linear equations is the pair of x and y values that makes both
u and v equal to 0 [1].

f(x) = a₁x₁ + a₂x₂ + ⋯ + aₙxₙ − b = 0

Newton-Raphson Method

The equation developed for the single-variable case is based on a first-order Taylor series
expansion, f(x_{i+1}) = f(x_i) + (x_{i+1} − x_i)f′(x_i). If the solved root x_{i+1} is where the tangent line
intercepts the x-axis, then f(x_{i+1}) = 0, and the expansion becomes f(x_i) + (x_{i+1} − x_i)f′(x_i) = 0. For
a two-variable case, a first-order Taylor series can be written for each of the two equations (f₁ = u and
f₂ = v), accounting for both variables (x and y) in each equation [1].

f₁(x_{i+1}, y_{i+1}) = f₁(x_i, y_i) + (x_{i+1} − x_i) ∂f₁/∂x + (y_{i+1} − y_i) ∂f₁/∂y

→ u_i + (x_{i+1} − x_i) ∂u_i/∂x + (y_{i+1} − y_i) ∂u_i/∂y = 0

→ (∂u_i/∂x) x_{i+1} + (∂u_i/∂y) y_{i+1} = −u_i + x_i ∂u_i/∂x + y_i ∂u_i/∂y

and

f₂(x_{i+1}, y_{i+1}) = f₂(x_i, y_i) + (x_{i+1} − x_i) ∂f₂/∂x + (y_{i+1} − y_i) ∂f₂/∂y

→ v_i + (x_{i+1} − x_i) ∂v_i/∂x + (y_{i+1} − y_i) ∂v_i/∂y = 0

→ (∂v_i/∂x) x_{i+1} + (∂v_i/∂y) y_{i+1} = −v_i + x_i ∂v_i/∂x + y_i ∂v_i/∂y

The right-hand sides of both equations are constants. Thus, Cramer's rule can be used to find the
unknowns x_{i+1} and y_{i+1} [1]:

(∂u_i/∂x) x_{i+1} + (∂u_i/∂y) y_{i+1} = −u_i + x_i ∂u_i/∂x + y_i ∂u_i/∂y        x_{i+1} = D_x,i+1 / D

(∂v_i/∂x) x_{i+1} + (∂v_i/∂y) y_{i+1} = −v_i + x_i ∂v_i/∂x + y_i ∂v_i/∂y        y_{i+1} = D_y,i+1 / D



D = | ∂u_i/∂x   ∂u_i/∂y |
    | ∂v_i/∂x   ∂v_i/∂y |  = (∂u_i/∂x)(∂v_i/∂y) − (∂u_i/∂y)(∂v_i/∂x)

D_x,i+1 = | −u_i + x_i ∂u_i/∂x + y_i ∂u_i/∂y   ∂u_i/∂y |
          | −v_i + x_i ∂v_i/∂x + y_i ∂v_i/∂y   ∂v_i/∂y |
        = −u_i ∂v_i/∂y + v_i ∂u_i/∂y + x_i (∂u_i/∂x ∂v_i/∂y − ∂v_i/∂x ∂u_i/∂y)

D_y,i+1 = | ∂u_i/∂x   −u_i + x_i ∂u_i/∂x + y_i ∂u_i/∂y |
          | ∂v_i/∂x   −v_i + x_i ∂v_i/∂x + y_i ∂v_i/∂y |
        = −v_i ∂u_i/∂x + u_i ∂v_i/∂x + y_i (∂u_i/∂x ∂v_i/∂y − ∂u_i/∂y ∂v_i/∂x)

x_{i+1} = D_x,i+1 / D  →  x_{i+1} = x_i − (u_i ∂v_i/∂y − v_i ∂u_i/∂y) / (∂u_i/∂x ∂v_i/∂y − ∂u_i/∂y ∂v_i/∂x)

y_{i+1} = D_y,i+1 / D  →  y_{i+1} = y_i − (v_i ∂u_i/∂x − u_i ∂v_i/∂x) / (∂u_i/∂x ∂v_i/∂y − ∂u_i/∂y ∂v_i/∂x)
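
A compact Python sketch of these two-equation update formulas; the u, v, partial derivatives, and initial guesses in the usage lines are those of Sample #1 below, so it serves only as an illustrative cross-check:

def newton_raphson_2d(u, v, ux, uy, vx, vy, x, y, n_iter=5):
    """Two-equation Newton-Raphson using the Cramer's-rule update formulas above."""
    for _ in range(n_iter):
        det = ux(x, y) * vy(x, y) - uy(x, y) * vx(x, y)       # Jacobian determinant
        x_new = x - (u(x, y) * vy(x, y) - v(x, y) * uy(x, y)) / det
        y_new = y - (v(x, y) * ux(x, y) - u(x, y) * vx(x, y)) / det
        x, y = x_new, y_new
    return x, y

# Sample #1 below: u = (x-4)^2 + (y-4)^2 - 5, v = x^2 + y^2 - 16, guesses x = 2, y = 4
u  = lambda x, y: (x - 4) ** 2 + (y - 4) ** 2 - 5
v  = lambda x, y: x ** 2 + y ** 2 - 16
ux = lambda x, y: 2 * (x - 4)
uy = lambda x, y: 2 * (y - 4)
vx = lambda x, y: 2 * x
vy = lambda x, y: 2 * y
print(newton_raphson_2d(u, v, ux, uy, vx, vy, 2.0, 4.0))      # approaches (1.8058, 3.5692)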

Sample #1: Solve the given system of nonlinear equations with an initial guess of x=2 and y=4 using the
Newton-Raphson method.

(𝑥 − 4)2 + (𝑦 − 4)2 = 5 → 𝑢 (𝑥, 𝑦) = (𝑥 − 4)2 + (𝑦 − 4)2 − 5 = 0

𝑥 2 + 𝑦 2 = 16 → 𝑣(𝑥, 𝑦) = 𝑥 2 + 𝑦 2 − 16 = 0
𝜕𝑢 𝜕𝑢 𝜕𝑣 𝜕𝑣
= 2(𝑥 − 4) = 2(𝑦 − 4) = 2𝑥 = 2𝑦
𝜕𝑥 𝜕𝑦 𝜕𝑥 𝜕𝑦



1st iteration (i=0):
𝜕𝑢 𝜕𝑢 𝜕𝑣 𝜕𝑣
= 2(2 − 4) = −4 = 2(4 − 4) = 0 = 2(2) = 2(4) = 8
𝜕𝑥0 𝜕𝑦0 𝜕𝑥0 𝜕𝑦0
𝑢(𝑥0 , 𝑦0 ) = (2 − 4)2 + (4 − 4)2 − 5 = −1

𝑣(𝑥0 , 𝑦0 ) = (2)2 + (4)2 − 16 = 4

(∂u_i/∂x) x_{i+1} + (∂u_i/∂y) y_{i+1} = −u_i + x_i ∂u_i/∂x + y_i ∂u_i/∂y  →  −4x₁ + (0)y₁ = 1 + (2)(−4) + (4)(0)  →  x₁ = 7/4 = 1.75

(∂v_i/∂x) x_{i+1} + (∂v_i/∂y) y_{i+1} = −v_i + x_i ∂v_i/∂x + y_i ∂v_i/∂y  →  4x₁ + 8y₁ = −4 + (2)(4) + (4)(8)  →  y₁ = 3.625

|1.75 − 2|
𝜀𝑎,𝑥,0 = ∙ 100 = 𝟏𝟒. 𝟐𝟗%
1.75
|3.625 − 4|
𝜀𝑎,𝑦,0 = ∙ 100 = 𝟏𝟎. 𝟑𝟒%
3.625

2nd iteration (i=1):


𝜕𝑢 𝜕𝑢
= 2(1.75 − 4) = −4.5 = 2(3.625 − 4) = −0.75
𝜕𝑥1 𝜕𝑦1
𝜕𝑣 𝜕𝑣
= 2(1.75) = 3.5 = 2(3.625) = 7.25
𝜕𝑥1 𝜕𝑦1
u(x₁, y₁) = (1.75 − 4)² + (3.625 − 4)² − 5 = 0.2031
v(x₁, y₁) = (1.75)² + (3.625)² − 16 = 0.2031

𝜕𝑢𝑖 𝜕𝑢𝑖 𝜕𝑢𝑖 𝜕𝑢𝑖


𝑥 + 𝑦 = −𝑢𝑖 + 𝑥𝑖 + 𝑦𝑖
𝜕𝑥 𝑖+1 𝜕𝑦 𝑖+1 𝜕𝑥 𝜕𝑦
→ −4.5𝑥2 − 0.75𝑦2 = −0.2031 + (1.75)(−4.5) + (3.625)(−0.75)
→ 4.5𝑥2 + 0.75𝑦2 = 10.7969
𝜕𝑣𝑖 𝜕𝑣𝑖 𝜕𝑣𝑖 𝜕𝑣𝑖
𝑥𝑖+1 + 𝑦𝑖+1 = −𝑣𝑖 + 𝑥𝑖 + 𝑦𝑖
𝜕𝑥 𝜕𝑦 𝜕𝑥 𝜕𝑦
→ 3.5𝑥2 + 7.25𝑦2 = −0.2031 + (1.75)(3.5) + (3.625)(7.25)
→ 3.5𝑥2 + 7.25𝑦2 = 32.2032



Solving the two equations simultaneously:

𝒙𝟐 = 𝟏. 𝟖𝟎𝟒𝟐
𝒚𝟐 = 𝟑. 𝟓𝟕𝟎𝟖
|1.8042 − 1.75|
𝜀𝑎,𝑥,0 = ∙ 100 = 𝟑. 𝟎𝟎%
1.8042
|3.5708 − 3.625|
𝜀𝑎,𝑦,0 = ∙ 100 = 𝟏. 𝟓𝟐%
3.5708

3rd iteration (i=2):


𝜕𝑢 𝜕𝑢
= 2(1.8042 − 4) = −4.3916 = 2(3.5708 − 4) = −0.8584
𝜕𝑥1 𝜕𝑦1
𝜕𝑣 𝜕𝑣
= 2(1.8042) = 3.6084 = 2(3.5708) = 7.1416
𝜕𝑥1 𝜕𝑦1
u(x₂, y₂) = (1.8042 − 4)² + (3.5708 − 4)² − 5 = 5.7503×10⁻³
v(x₂, y₂) = (1.8042)² + (3.5708)² − 16 = 5.7503×10⁻³

x_{i+1} = x_i − (u_i ∂v_i/∂y − v_i ∂u_i/∂y) / (∂u_i/∂x ∂v_i/∂y − ∂u_i/∂y ∂v_i/∂x)

→ x₃ = 1.8042 − [(5.7503×10⁻³)(7.1416) − (5.7503×10⁻³)(−0.8584)] / [(−4.3916)(7.1416) − (−0.8584)(3.6084)]

→ x₃ = 1.8058

y_{i+1} = y_i − (v_i ∂u_i/∂x − u_i ∂v_i/∂x) / (∂u_i/∂x ∂v_i/∂y − ∂u_i/∂y ∂v_i/∂x)

→ y₃ = 3.5708 − [(5.7503×10⁻³)(−4.3916) − (5.7503×10⁻³)(3.6084)] / [(−4.3916)(7.1416) − (−0.8584)(3.6084)]

→ y₃ = 3.5692

|1.8058 − 1.8042|
𝜀𝑎,𝑥,0 = ∙ 100 = 𝟎. 𝟎𝟖𝟗%
1.8042
|3.5692 − 3.5708|
𝜀𝑎,𝑦,0 = ∙ 100 = 𝟎. 𝟎𝟒𝟓%
3.5708



Sample #2: Solve the given system of nonlinear equations using Newton-Raphson and Fixed-point
iteration methods with initial values of x=y=0. Set the tolerance to 0.0001%.

Newton-Raphson:

y = −x² + x + 0.75  →  u(x, y) = −x² + x − y + 0.75 = 0

y + 5xy = x²  →  v(x, y) = y + 5xy − x² = 0

*plotted using Matlab. The red curve is 𝑢(𝑥, 𝑦), while the blue curve is 𝑣(𝑥, 𝑦)



*solved using Matlab

Fixed-point iteration:

y = −x² + x + 0.75  →  x = √(x − y + 0.75)

y + 5xy = x²  →  y = (x² − y) / (5x)

A different root was obtained using MOSS (the method of successive substitution, i.e., fixed-point
iteration) [2]. This example shows the general superiority of the Newton-Raphson method over MOSS in
solving systems of nonlinear equations. The problem with MOSS is that its convergence criterion is very
restrictive, so it is not applicable to every system of nonlinear equations. Its approach becomes quite
useful, however, in solving systems of linear equations, where it takes the form of the Gauss-Seidel
method.
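
For illustration, a minimal Python sketch of successive substitution using the rearrangements above; the starting point (0.5, 0.5) is an assumed illustrative value (the text's x = y = 0 cannot be used directly here because of the division by x), and convergence depends strongly on the rearrangement chosen:

import math

def fixed_point(x, y, n_iter=50):
    """Successive substitution with x = sqrt(x - y + 0.75) and y = (x^2 - y)/(5x)."""
    for _ in range(n_iter):
        x = math.sqrt(x - y + 0.75)     # from u(x, y) = 0
        y = (x * x - y) / (5 * x)       # from v(x, y) = 0, using the updated x
        # note: may diverge or land on a different root than Newton-Raphson
    return x, y

print(fixed_point(0.5, 0.5))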

Sample #3: A liquid is said to be stable when it is below its boiling point at a given pressure, while it is said
to be metastable and superheated when it is above its boiling point. The condition for metastability is
(∂P/∂V)_T < 0, such that a liquid is said to be unstable when (∂P/∂V)_T > 0. The boundary between metastability
and instability for a liquid is the spinodal state, where (∂P/∂V)_T = 0. The constant temperature of the
spinodal state is called the spinodal temperature (T_s), and it depends on the pressure (each pressure has a
designated spinodal temperature) [2]. Using the Redlich-Kwong equation of state, what is the spinodal
temperature and its corresponding molar volume of H2O at 1 atm? Use initial guesses of 𝑉𝑚 = 0.30 𝐿 and



𝑇𝑠 = 300 𝐾. The critical temperature and pressure of H2O are 647.4 K and 218.3 atm [3]. (This problem
was adapted and modified from [2]).

Starting equations:
a = R² T_c^(5/2) / [9(2^(1/3) − 1) P_c]   [2]

b = (2^(1/3) − 1) R T_c / (3 P_c)   [2]

P = RT/(V_m − b) − a/(√T · V_m(V_m + b))   [2]

(∂P/∂V)_T = −RT/(V_m − b)² + a(2V_m + b)/(√T · V_m²(V_m + b)²)  →  (∂P/∂V)_{T_s} = −RT_s/(V_m − b)² + a(2V_m + b)/(√T_s · V_m²(V_m + b)²) = 0

Solution:

Evaluate the constants a and b:


a = R² T_c^(5/2) / [9(2^(1/3) − 1) P_c] = (0.08205)²(647.4)^(5/2) / [9(2^(1/3) − 1)(218.3)] = 140.59 atm·K^(1/2)·L²·mol⁻²

b = (2^(1/3) − 1) R T_c / (3 P_c) = (2^(1/3) − 1)(0.08205)(647.4) / (3 · 218.3) = 0.02108 L·mol⁻¹
Develop the two equations that will be used for the Newton-Raphson iteration process:
P = RT/(V_m − b) − a/(√T · V_m(V_m + b))  →  u(V_m, T_s) = 0.08205·T_s/(V_m − 0.02108) − 140.59/(√T_s · V_m(V_m + 0.02108)) − 1 = 0

−RT_s/(V_m − b)² + a(2V_m + b)/(√T_s · V_m²(V_m + b)²) = 0  →  v(V_m, T_s) = −0.08205·T_s/(V_m − 0.02108)² + 140.59(2V_m + 0.02108)/(√T_s · V_m²(V_m + 0.02108)²) = 0

The functions u(V_m, T_s) and v(V_m, T_s) are used to solve for V_m (treated as x) and T_s (treated as y)
with the Newton-Raphson method for nonlinear systems of equations, using initial guesses of V_m = 0.30 L
and T_s = 300 K. The numerical computations were done in Matlab with the tolerance set to 0.00001%,
and they gave the following results after 14 iterations:

T_s = 92.4373 K
V_m = 3.7815 L



References:

[1] Chapra, S. C., & Canale, R. P. (2010). Numerical methods for engineers: with software and
programming applications (6th ed.). Boston: McGraw-Hill.
[2] Eberhart, J. G. (1994). Solving Nonlinear Simultaneous Equations by the Method of Successive
Substitution: Applications to Equations of State. Journal of Chemical Education, 71(12), 1038. doi:
10.1021/ed071p1038
[3] Atkins, P., & de Paula, J. (2014). Physical Chemistry: Thermodynamics, Structure, and Change (10th ed.).
New York, NY: Oxford University Press.

The Matlab codes used in the examples are available online:

Robby Ching (2020). Numerical methods on finding the roots


(https://www.mathworks.com/matlabcentral/fileexchange/77702-numerical-methods-on-finding-the-
roots), MATLAB Central File Exchange. Retrieved July 3, 2020.

Robby Ching (2020). Numerical methods on finding the roots-nonlinear


(https://www.mathworks.com/matlabcentral/fileexchange/77708-numerical-methods-on-finding-the-
roots-nonlinear), MATLAB Central File Exchange. Retrieved July 3, 2020.

All problems were adapted from [1], unless otherwise stated.
