Scholars' Mine, Missouri University of Science and Technology
Chemical and Biochemical Engineering Faculty Research & Creative Works (Linda and Bipin Doshi Department of Chemical and Biochemical Engineering)

01 Aug 1967

Jacques L. Zakin
Missouri University of Science and Technology
Robert Simha

Recommended Citation
H. C. Hershey et al., "Numerical Differentiation of Equally Spaced and Not Equally Spaced Experimental Data," Industrial and Engineering Chemistry Fundamentals, vol. 6, no. 3, pp. 413-421, American Chemical Society, Aug 1967.
The definitive version is available at https://doi.org/10.1021/i160023a015
NUMERICAL DIFFERENTIATION OF
EQUALLY SPACED AND NOT EQUALLY
SPACED EXPERIMENTAL DATA
HARRY C. HERSHEY, JACQUES L. ZAKIN, AND ROBERT SIMHA
Department of Chemical Engineering, University of Missouri at Rolla, Rolla, Mo.
Procedures are given for smoothing and differentiating experimental data with both equal and nonequal
spacing in the independent variable. Selection of the number of points to be included in the "movable
strip" technique and of the degree of the polynomial is discussed. Equations are given to estimate the
error by calculating a confidence interval on each slope. A technique for handling certain types of nonrandom errors is presented.
The scientist is often called upon to obtain derivatives of functions representing his experimental data. With the advent of the digital computer there is no longer any excuse for plotting the data by hand and then using "optical" methods (Simons, 1941), which are often unintentionally biased as well as tedious. On the other hand, the digital computer can generate such a quantity of numbers that the correct interpretation of the results is obscured or overlooked. This article emphasizes procedures for numerical differentiation of experimental data and estimation of the error in the slope, reviews the assumptions behind the methods, and gives some techniques for numerical differentiation of data with nonrandom errors.

Experimental data are always subject to errors. These errors prevent a simple difference technique from being used to obtain derivatives, as is graphically shown by Ralston (1965). Such a scheme of approximating dY/dX by ΔY/ΔX at the point of interest results in the loss of significant figures in both numerator and denominator. Thus even the sign of the derivative is often incorrect, and the simple difference technique is completely unsatisfactory.

In numerical differentiation no attempt is made to represent all the data by a single function of high order. Although this technique is often satisfactory for integration, it usually leads to results as erroneous as those from the simple difference technique. The best estimate of the derivative at a point is to fit a function to several data points on both sides of the particular point and then differentiate the resulting function.

1 Present address, Department of Chemical Engineering, Ohio
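To make the loss of significance concrete, here is a small sketch (data and numbers invented, not from the paper) comparing ΔY/ΔX with a five-point second-degree least squares slope evaluated at the center of the strip:

```python
# Illustrative sketch: simple difference vs. least squares "movable strip"
# slope on noisy points taken from Y = X**2, whose true slope at X = 0.5
# is 1.0.  All data below are invented for the demonstration.
xs = [0.30, 0.40, 0.50, 0.60, 0.70]
noise = [0.02, -0.02, 0.03, -0.02, 0.01]           # "experimental" errors
ys = [x * x + e for x, e in zip(xs, noise)]

# Simple difference dY/dX ~ (Y[i+1] - Y[i]) / (X[i+1] - X[i]) at X = 0.5:
simple = (ys[3] - ys[2]) / (xs[3] - xs[2])

# Second-degree five-point least squares slope at the center point; for an
# equally spaced strip with spacing h this reduces to sum(s * f_s) / (10 h).
h = 0.10
strip = (-2 * ys[0] - ys[1] + ys[3] + 2 * ys[4]) / (10 * h)

print(round(simple, 3), round(strip, 3))           # strip is far closer to 1.0
```

On this strip the simple difference misses the true slope by about 40%, while the least squares slope recovers it to within a few percent, because the fit averages the errors over all five points.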
is simplified into a power series by equating P_j(X) to X^j:

    f(X) = Σ_{j=0}^{m} β_j X^j + ε    (2)

where β_j are the power coefficients. It is easily shown that minimization of the sum of the squares of the residuals leads to the following set of simultaneous equations written in matrix form (Graybill, 1961):

    (X′X)(B) = (X′Y)    (3)

The word length needed to solve Equation 3 accurately grows with the number of digits in the largest element of (X′X)—i.e., ΣX_i^{2m}. However, for any given set of data a little experimentation with the highest degree polynomial to be selected, using a variety of word lengths in the calculation, will show exactly how many digits are needed.

Table I. Least Squares Cubic Coefficients for Density of Water vs. Temperature
(columns: word length, scaling, coefficients β × 10³, β × 10⁵, β × 10⁷, and standard deviation; table body not recovered)

Orthogonal Least Squares Polynomials. The definition of a set of orthogonal polynomials Q_j(X) is:

    Σ_{i=1}^{N′} Q_j(X_i) Q_k(X_i) = 0   if j ≠ k    (6)
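As a sketch of Equations 2 and 3 (illustrative only, not the authors' program), the normal equations for a second-degree fit can be assembled from the moments ΣX^j and ΣX^j f and solved by Gauss-Jordan reduction:

```python
# A minimal sketch of solving (X'X)(B) = (X'Y) for a second-degree
# power-series fit (Equation 2 with m = 2) by Gauss-Jordan reduction.
# Function and variable names are illustrative, not from the paper.

def fit_quadratic(xs, fs):
    # (X'X) holds the moments sum(X^(r+c)); (X'Y) holds sum(X^r * f).
    mom = [sum(x ** j for x in xs) for j in range(5)]          # sum X^0..X^4
    rhs = [sum((x ** j) * f for x, f in zip(xs, fs)) for j in range(3)]
    a = [[mom[r + c] for c in range(3)] + [rhs[r]] for r in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        scale = a[col][col]
        a[col] = [v / scale for v in a[col]]
        for r in range(3):
            if r != col:
                fac = a[r][col]
                a[r] = [v - fac * w for v, w in zip(a[r], a[col])]
    return [a[r][3] for r in range(3)]                         # beta_0..beta_2

# A fit through exact quadratic data recovers its own coefficients:
beta = fit_quadratic([0.0, 0.5, 1.0, 1.5, 2.0],
                     [1.0, 1.75, 3.0, 4.75, 7.0])              # f = 1 + X + X^2
print([round(b, 6) for b in beta])
```

With double precision this small system is benign; the word-length warning above becomes important when m grows and ΣX^{2m} spans many orders of magnitude.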
    f(X) = Σ_{j=0}^{m} γ_j Q_j(X) + ε    (7)

where γ_j become the least squares orthogonal coefficients. Perhaps the easiest set of orthogonal polynomials for use on a digital computer is that proposed by Forsythe (1957), which is defined as:

    Q_{−1}(X) = 0
    Q_0(X) = 1
    Q_1(X) = (X − a_1)Q_0(X)
    Q_2(X) = (X − a_2)Q_1(X) − b_1Q_0(X)    (8)
    Q_3(X) = (X − a_3)Q_2(X) − b_2Q_1(X)
    . . .
    Q_j(X) = (X − a_j)Q_{j−1}(X) − b_{j−1}Q_{j−2}(X)

where a_j, b_j, and γ̂_j (the best linear unbiased estimate of γ_j) are calculated from:

    a_j = Σ_{i=1}^{N′} X_i[Q_{j−1}(X_i)]² / Σ_{i=1}^{N′} [Q_{j−1}(X_i)]²    (9)

    b_j = Σ_{i=1}^{N′} [Q_j(X_i)]² / Σ_{i=1}^{N′} [Q_{j−1}(X_i)]²    (10)

    γ̂_j = Σ_{i=1}^{N′} Q_j(X_i) f(X_i) / Σ_{i=1}^{N′} [Q_j(X_i)]²    (11)

For an odd number, N′, of equally spaced points, the substitutions

    s = i − L    (12)

and

    L = (N′ − 1)/2    (13)

let s go from −L, −L + 1, . . ., 0, . . ., L − 1, L as i goes from 0 to 2L. Since a dummy variable with integer values has replaced X, the least squares orthogonal polynomial becomes (Ralston, 1965):

    f = Σ_{j=0}^{m} γ_j Q_j(s, 2L) + ε    (14)

where

    Q_j(s, 2L) = Σ_{k=0}^{j} (−1)^{k+j} (j + k)^(2k) (L + s)^(k) / [(k!)² (2L)^(k)]    (15)

with the factorial functions (j + k)^(2k), (L + s)^(k), and (2L)^(k) defined according to the following example of Z^(k):

    Z^(k) = Z(Z − 1)(Z − 2)(Z − 3) . . . (Z − k + 1)    (16)

The first few of these polynomials are

    Q_0 = 1
    Q_1 = s/L
    Q_2 = [3s² − L(L + 1)] / [L(2L − 1)]

Substituting

    s = (X − X_0)/h − L    (19)

where X_0 is the value of X at i = 0 and h is X_{i+1} − X_i, yields the desired equation.
These polynomials are treated in depth elsewhere (Bright and Dawkins, 1965; Forsythe, 1957; Hildebrand, 1956; Lanczos, 1956; Lapidus, 1962; Ralston, 1965; Sasuly, 1934). But it should be emphasized that the least squares orthogonal polynomials, when fitted to a particular set of data, will yield the same coefficients in Equation 2 as the solution to Equation 3, when all the constants in Equation 7 have been evaluated and factored into coefficients of simple powers of X. This is so because the same sum of squared residuals was minimized in the derivation, and only one equation of degree m will fit a given set of data in the least squares sense.

The principal advantages of orthogonal polynomials in numerical differentiation are for data equally spaced in the independent variable. [The case of data equally spaced in f(X) can also be handled.] For unevenly spaced data, computation is as long for the low order orthogonal polynomials as for the power function least squares (Bright and Dawkins, 1965), and in computing centers where library routines are available for the Gauss-Jordan reduction the programming of orthogonal polynomials is somewhat more tedious.

As can be seen from Equations 14 through 18, much of the computation involved is done only once after the number of points and the degree of fit have been chosen. For example, the third-degree seven-point formula for Y evaluated at s = 0 is

    Y_0 = (1/21)(−2f_{−3} + 3f_{−2} + 6f_{−1} + 7f_0 + 6f_1 + 3f_2 − 2f_3)    (20)

The computing time to find Y_0 in Equation 20 is several orders of magnitude less than to find Y_0 by calculating (X′X) and (X′Y) from Equations 4 and 5, then B in Equation 3, and lastly the generated function, Y, from Equation 2. The final formulas for the derivative of Equation 14 are equally simple.

Smoothing

Practically all experimental data require smoothing before differentiation is performed. Smoothing (Hildebrand, 1956; Lanczos, 1956; Ralston, 1965) consists of passing a least squares polynomial through the points of a strip and replacing the value at a point by the value of the polynomial there.
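A minimal sketch of repeated smoothing with the third-degree seven-point center formula of Equation 20 (end points are left unsmoothed here for brevity; the paper uses off-center formulas, Table II, for them):

```python
# Movable-strip smoothing with the third-degree seven-point center formula
# (Equation 20):  Y_0 = (1/21)(-2f-3 + 3f-2 + 6f-1 + 7f0 + 6f1 + 3f2 - 2f3).
# Illustrative sketch; end points of the table are simply left unsmoothed.

C = (-2, 3, 6, 7, 6, 3, -2)

def smooth_once(f):
    out = list(f)
    for i in range(3, len(f) - 3):
        out[i] = sum(c * f[i + k] for c, k in zip(C, range(-3, 4))) / 21.0
    return out

def smooth(f, ns):
    # ns = NS, the number of smoothings; the authors report little further
    # improvement beyond about four applications.
    for _ in range(ns):
        f = smooth_once(f)
    return f

# A third-degree formula reproduces a cubic exactly, so the basic shape of
# the data is preserved while random error is averaged down:
data = [x ** 3 for x in range(10)]
assert all(abs(a - b) < 1e-9 for a, b in zip(smooth(data, 4), data))
print("cubic preserved")
```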
As discussed for the general case in numerical differentiation, the best method is to select a small, odd number of points, evaluate Y at the center, and proceed using the "movable strip" technique. At the top and bottom of the table it will be necessary to evaluate Y at each extremity from off-center formulas (Hildebrand, 1956).

Hildebrand (1956) points out that the amount of smoothing increases with the number of points in the movable strip and decreases with increasing m, the degree of the polynomial. Each set of data must be judged as to the proper N′ and m, and so generalization is difficult. However, selection of N′ and m is not so critical in smoothing as in the actual differentiation, because the number of times that smoothing is applied is also controlled. The authors have had good results with m = 3 and N′ = 5 or N′ = 7 formulas. Often no significant improvement is found after smoothing has been applied more than four times. However, the smoothing has to be terminated at the stage where the random error is eliminated and yet the basic shape and character of the data have been unaltered.

Selection of Degree of Fit in Numerical Differentiation

It is advantageous to perform numerical differentiation with m as low as possible, because as m approaches N′ − 1 the polynomial approaches an exact fit of the strip and may have as many as N′ − 2 points at which the slope changes sign. Hence its derivative often fluctuates widely and in general is a poor representation of the slope of the points.

A second-degree least squares polynomial will give excellent results on almost all sets of data, unless the points are unusually far apart or sparse. However, a statistical test can be made if there is reason to suspect that a higher degree would produce better results. If ε in Equation 1 can be assumed to be distributed normally as well as independently with mean zero and variance σ², an analysis of variance (AOV) can be performed to test the hypothesis a_{m+1} = 0, given two sets of least squares polynomials of degree m and m + 1 (Graybill, 1961), and thereby determine if the polynomial of degree m adequately represents the data.

Since the AOV requires that both the m + 1 degree series of least squares coefficients and the m degree series of coefficients be determined in order to test the hypothesis a_{m+1} = 0, the m degree coefficients will be called δ_0, δ_1, . . ., δ_m and the m + 1 coefficients η_0, η_1, . . ., η_{m+1}. The first step is to find the least squares coefficients in the following equations:

    Y = Σ_{j=0}^{m} δ_j X^j    (21)

    Ŷ = Σ_{j=0}^{m+1} η_j X^j    (22)

The equations for the AOV are:

    F_{m+1} = A_{m+1} / E_{m+1}    (23)

where A_{m+1} is the reduction in the residual sum of squares due to the added term,

    A_{m+1} = [Σ_{j=0}^{m+1} η_j Σ_{i=1}^{N} X_i^j f_i] − [Σ_{j=0}^{m} δ_j Σ_{i=1}^{N} X_i^j f_i]    (24)

and E_{m+1} is proportional to the error in the fit of Equation 22:

    E_{m+1} = [Σ_{i=1}^{N} f_i² − Σ_{j=0}^{m+1} η_j Σ_{i=1}^{N} X_i^j f_i] / (N − m − 2)    (25)

If F_{m+1} is less than the value of the central F statistic with 1 degree of freedom in the numerator and (N − m − 2) degrees in the denominator at the desired confidence level (p = 0.05 is convenient), the data are adequately represented by Equation 21, since the a_{m+1}—i.e., η_{m+1}—coefficient makes no significant improvement in the fit.

The AOV to test the hypothesis a_{m+1} = 0 frequently must be interpreted carefully, because sometimes ε is not normally distributed. For example, data taken on electrical instruments throughout several ranges, each range differing as a result of different size resistors in series at the time of the measurements, will not always have a common normal distribution. Often best results will be obtained by assuming a_{m+1} = 0 if F_{m+1} is reasonably close to the F statistic, even when the F statistic is determined at p = 0.05 or p = 0.01. Some statisticians recommend that if a_{m+1} is found not to be significant, a_{m+2} be checked also, just in case the true function of the data were odd or even. However, this is usually unnecessary.

The AOV is even more sensitive to roundoff error in the computation than the solution of the least squares matrix. The subtractions in both Equations 24 and 25 are almost always between nearly equal numbers. After the subtraction, Equation 23 requires a division. Thus the AOV will require a careful choice of word length (discussed in conjunction with the solution of Equation 3).

Selection of Number of Points in Each Strip

There is no known test to assist in the selection of N′, the number of data points to be included in each strip. The larger N′, the less the error in any given data point affects the slope of the curve at that and adjacent points. A three-point formula is unsatisfactory for experimental data, because a second-degree polynomial fits all three points exactly. A seven-point polynomial often gives good results (Zakin et al., 1966). A five-point formula works well when the data are not too "bumpy," whereas a nine-point or higher polynomial may be required for data with large errors, even after smoothing.

Numerical Differentiation of Equally Spaced Data

Smoothing Formulas. Orthogonal polynomials are the most convenient for evenly spaced data. The most useful smoothing formulas are the third-degree five-point and the third-degree seven-point. Table II supplies the coefficients for smoothing formulas to be used in Equation 26:

    Y_s = (1/D) Σ_{i=−L}^{L} c_i f_i    (26)
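Looking back at the degree-of-fit test, Equations 21 through 25 can be sketched as follows (illustrative pure-Python code; the comparison against the tabulated F statistic at p = 0.05 is left as a table lookup):

```python
# A sketch of the AOV degree-of-fit test (Equations 21-25).  Names are
# illustrative; F_{m+1} is referred to an F table with 1 and (N - m - 2)
# degrees of freedom, which is not reproduced here.

def polyfit(xs, fs, m):
    # Least squares coefficients via the normal equations (Gauss-Jordan).
    q = m + 1
    a = [[sum(x ** (r + c) for x in xs) for c in range(q)]
         + [sum(x ** r * f for x, f in zip(xs, fs))] for r in range(q)]
    for col in range(q):
        piv = max(range(col, q), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        a[col] = [v / a[col][col] for v in a[col]]
        for r in range(q):
            if r != col:
                fac = a[r][col]
                a[r] = [v - fac * w for v, w in zip(a[r], a[col])]
    return [row[q] for row in a]

def regression_sum(xs, fs, coef):
    # sum_j coef_j * sum_i X_i^j f_i  -- the bracketed sums of Equation 24
    return sum(c * sum(x ** j * f for x, f in zip(xs, fs))
               for j, c in enumerate(coef))

def f_statistic(xs, fs, m):
    lo, hi = polyfit(xs, fs, m), polyfit(xs, fs, m + 1)
    a_next = regression_sum(xs, fs, hi) - regression_sum(xs, fs, lo)   # Eq 24
    e_next = (sum(f * f for f in fs) - regression_sum(xs, fs, hi)) \
             / (len(xs) - m - 2)                                       # Eq 25
    return a_next / e_next                                             # Eq 23

# Nearly linear data: the quadratic term should buy little.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
fs = [0.02, 1.01, 1.98, 3.03, 3.99, 5.01, 6.02, 6.98]
print(round(f_statistic(xs, fs, 1), 3))
```

As the text warns, both subtractions above are between nearly equal numbers, so this computation is the first place extended precision pays off.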
Table II. Coefficients c_i for Third-Degree Smoothing Formulas (Equation 26)

    Five-point (N′ = 5):
    s = 0:   D = 35;   −3, 12, 17, 12, −3
    s = 1:   D = 35;   2, −8, 12, 27, 2

    Seven-point (N′ = 7):
    s = −3:  D = 42;   39, 8, −4, −4, 1, 4, −2
    s = −2:  D = 42;   8, 19, 16, 6, −4, −7, 4
    s = −1:  D = 42;   −4, 16, 19, 12, 2, −4, 1
    s = 0:   D = 21;   −2, 3, 6, 7, 6, 3, −2

The formulas for Y_1, Y_2, and Y_3 are obtained from symmetry by multiplying the subscript of every f_i by (−1), as shown in Table II for the five-point Y_1. Y_0 is the smoothed value at the center of the strip (s = 0), etc. The off-center equations are used only at the beginning and end of the data table. Equation 20 serves as an example of how to use Table II to construct the desired equations.
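For an equally spaced strip, the second-degree slope at the center point reduces to Σ s·f_s / (h Σ s²), i.e., divisors of 10h for five points and 28h for seven points. A sketch (illustrative names):

```python
# Second-degree central slope for an equally spaced strip of N' = 2L + 1
# points with spacing h:  dY/dX = sum(s * f_s) / (h * sum(s^2)).
# For L = 2 the divisor is 10h; for L = 3 it is 28h.  Illustrative sketch.

def central_slope(strip, h):
    L = (len(strip) - 1) // 2
    num = sum(s * strip[s + L] for s in range(-L, L + 1))
    den = h * sum(s * s for s in range(-L, L + 1))
    return num / den

# On a straight line Y = 2X the slope comes back exactly:
h = 0.25
line = [2.0 * (i * h) for i in range(7)]
print(central_slope(line, h))          # -> 2.0 for the seven-point strip
```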
Sloping Formulas. After the data have been smoothed, N′ and m are selected and the suitable formulas applied. Table III supplies the coefficients for the more commonly used slope formulas. The formulas for dY_1/dX, dY_2/dX, etc., are obtained by multiplying the subscript of every f_i by (−1) and then changing the sign of the coefficient of f_i, as shown in Table III for the dY_1/dX second-degree, five-point formula. Additional smoothing and differentiation formulas may be derived if needed from Equations 14 through 19.

Numerical Differentiation of Data Not Equally Spaced

For data not equally spaced the power function least squares are recommended for both smoothing and differentiation. Although the computing time is roughly the same for both the power function and the orthogonal cases, the power functions are easier to program. Furthermore, only a trivial change in the program is required to change N′, and a few extra statements are required to change m in the power function case, whereas the orthogonal sets require more programming in addition to calculating the new polynomials from Equations 8 through 11.

For m = 2, the normal equations to be solved for each strip are

    ( N       ΣX_i     ΣX_i²  ) (β_0)     ( Σf_i      )
    ( ΣX_i    ΣX_i²    ΣX_i³  ) (β_1)  =  ( ΣX_i f_i  )    (28)
    ( ΣX_i²   ΣX_i³    ΣX_i⁴  ) (β_2)     ( ΣX_i² f_i )

and so on for higher degrees. For smoothing, coefficients β_j are found as in the least squares power functions, and Y(X_i) are calculated from Σ_{j=0}^{m} β_j X_i^j for the center or off-center points, as the case may be. For differentiation, Equation 28 is solved as above and the estimated derivative dY(X_i)/dX is calculated from the derivative of Y(X_i):

    dY(X_i)/dX = Σ_{j=1}^{m} j β_j X_i^{j−1}    (29)

Estimate of Error in Slope

A convenient estimate of the error in the slope as calculated by the above methods is provided by a confidence set on the slope. Two-sided confidence limits are defined as ±t(p/2, N − m − 1)ŝ, where t(p/2, N − m − 1) has the Student's t distribution. The probability that

    dY/dX − t(p/2, N − m − 1)ŝ  ≤  (dY/dX)_true  ≤  dY/dX + t(p/2, N − m − 1)ŝ    (30)

is 1 − p. This statement is properly interpreted as follows: If points X_i are measured repeatedly in proper statistical replications, slopes dY(X_i)/dX are expected to be in the confidence interval expressed by Equation 30 in the fraction of experiments equal to 1 − p.

The estimation of the confidence interval of a slope dY(X_i)/dX is really a special case of finding confidence intervals of any linear combination of β and is treated in the general case elsewhere (Graybill, 1961). First the following vector is defined:

    g(X) = (0, 1, 2X, 3X², . . ., mX^{m−1})′    (31)
The confidence interval on the slope then becomes

    dY/dX − t(p/2, N − m − 1) √[g′(X)(X′X)⁻¹ g(X) σ̂²]  ≤  (dY/dX)_true  ≤  dY/dX + t(p/2, N − m − 1) √[g′(X)(X′X)⁻¹ g(X) σ̂²]    (32)

where g′(X) is the transpose of g(X), (X′X)⁻¹ the inverse of the matrix (X′X) in Equation 4, and σ̂² is given by:

    σ̂² = [(Y′Y) − (B̂)′(X′Y)] / (N − m − 1)    (33)

with

    (Y′Y) = Σ_{i=1}^{N} f_i²    (34)

and (B̂) and (X′Y) defined in Equation 3.

The confidence interval calculation in the case of the slope at the center point of an orthogonal polynomial with equal X spacing is considerably shortened, because g(X) simplifies to (0, 1, 0, . . ., 0)′.

Example. Table IV presents a set of experimentally determined length-temperature measurements (coded to save space) and typical results of smoothing and differentiating according to the techniques described previously. For the smoothing operations a third-degree seven-point orthogonal polynomial from Table II and Equation 26 was chosen. A second-degree seven-point formula (Table III and Equation 28) was used to differentiate the data. An example calculation is shown for the case of no smoothing and X = 0.5:

    dY/dX (X = 0.5) = [1/((28)(0.1))] [(3)(0.935 − 0.610) + (2)(0.880 − 0.663) + (0.875 − 0.717)] = 0.560

The 16 points in Table IV were chosen from the middle of a data set of 125 points in order to illustrate the advantages of smoothing data before differentiation. The 16 points are in a region of relatively constant slope, yet at point X = 0.6 there is a glaring inconsistency in the data for which the scientists who took the data had no positive explanation. The inconsistent point affects 2N′ − 1 (or in this case 13) slopes in the moving strip technique. The resultant discontinuity in slopes for the no smoothing case (NS = 0) is clearly shown in Figure 1, whereas a much better estimate of the slope is obtained after four smoothings (NS = 4).

Table IV. Coded Length-Temperature Data (columns: X, Y(NS = 0), Y(NS = 4), Y(NS = 20); body not recovered)

Method for Sloping Data with Nonrandom Errors and for Improving the Estimate of Transitions

In considering nonrandom errors in experimental data, no general statement can be made because each case must be scrutinized individually. The effects on sloping of some nonrandom errors can be at least minimized by a technique called "breaking."

Sometimes data taken on electric instruments are subject to unavoidable jumps or shifts. An example is the length-temperature (LVDT output) data of Figure 2. Slopes obtained without a break are compared with those obtained by "breaking" the data at −137° C., smoothing four times, and differentiating (using the same equations). The "break" consists of terminating the moving strip on one side of the discontinuity and starting the strip again on the other side. There is considerable improvement when the break is used, as can be seen in Figure 3. Figure 5 shows the results of smoothing four times and sloping with breaks at −137°, −60°, and −39° C. Using a graphical method in Figure 4, the transition at the lowest temperature appears to occur at −132° C. However, the numerical differentiation techniques with a break show that −137° C. is the better value.
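The "breaking" technique can be sketched as follows (illustrative code; the five-point second-degree central slope is used, and the ends of each segment are simply skipped rather than handled with off-center formulas):

```python
# A sketch of "breaking": the moving strip is terminated on one side of a
# known discontinuity and restarted on the other side, so that no strip
# ever straddles the break.  All names and data are illustrative.

def strip_slopes(ys, h):
    out = {}
    for i in range(2, len(ys) - 2):
        out[i] = (-2 * ys[i - 2] - ys[i - 1] + ys[i + 1] + 2 * ys[i + 2]) / (10 * h)
    return out

def slopes_with_break(ys, h, break_index):
    # Differentiate each side of the break independently.
    left = strip_slopes(ys[:break_index], h)
    right = strip_slopes(ys[break_index:], h)
    merged = dict(left)
    merged.update({i + break_index: v for i, v in right.items()})
    return merged

# Two branches of slope 1 separated by a jump of +10 between indices 5 and 6:
# without a break, strips near the jump are badly distorted.
h = 1.0
ys = [float(i) for i in range(6)] + [float(i) + 10.0 for i in range(6, 12)]
plain = strip_slopes(ys, h)
broken = slopes_with_break(ys, h, 6)
print(plain[5], broken[3], broken[8])   # -> 4.0 1.0 1.0
```

With the break, both segments recover the true slope of 1 exactly; without it, the strip that straddles the jump reports a slope of 4, the numerical analog of the "smearing out" described above.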
Figure 2. LVDT output vs. temperature data for mixture of 52% polypropylene and 48% polyethylene

Figure 3. (caption lost; legend: no break vs. break at −137° C.)

Figure 4. Graphical method for determining transition temperatures on LVDT output vs. temperature data for a polyurethane

Figure 5. Differentiation with smoothing and breaks for determining transition temperatures from data of Figure 4
Breaking at the higher temperatures prevented the smoothing from "smearing out" the major transition and permitted a better estimate of the change in the magnitude of the expansion coefficient.

Care must be taken to ensure a break at the proper location. The authors' experience has been that if the wrong point were chosen initially it was obvious from the resulting plot.

Acknowledgment

The authors are indebted to the Computer Science Center, Ohio State University, for making computer time available. Part of the work was done under NASA Grant NSG-343 to the University of Southern California. The helpful comments of Charles E. Antle are acknowledged.

Nomenclature

a_j = defined by Equation 9
A_{m+1} = defined by Equation 24
AOV = analysis of variance
(B) = N × 1 matrix containing β_j
i, j, k = summation indices
(k) = factorial notation, defined by Equations 16 and 17
L = defined by Equation 13
m = degree of polynomial
N = number of data points included in polynomial
N′ = odd N
NS = number of smoothings
P_j(X) = least squares polynomial of degree j, general case, in Equation 1
p = level of significance
Q_j = orthogonal least squares polynomial of degree j, defined for the general case by Equation 8, for the equal-spaced independent variable case by Equation 15
s = translated independent variable, of Equation 12
ŝ = √[g′(X)(X′X)⁻¹ g(X) σ̂²]
t(p/2, n − m − 1) = Student's t statistic, at p/2 level of significance with n − m − 1 degrees of freedom
X, X_i, X_0 = independent variable
(X′X) = N × N matrix defined by Equation 4
(X′X)⁻¹ = inverse of (X′X)
(X′Y) = N × 1 matrix, defined by Equation 5
Y = approximation of f, calculated from Σ_{j=0}^{m} γ̂_j Q_j(X) in the case of equally spaced data and from Σ_{j=0}^{m} β_j X^j for nonequally spaced data
(Y′Y) = defined by Equation 34
α_j = jth coefficient of generalized least squares polynomial model in Equation 1
δ_j = mth degree least squares coefficients in Equation 21
η_j = (m + 1)th degree least squares coefficients in Equation 22
ε = random error
σ² = variance
σ̂² = estimated variance, defined by Equation 33

Literature Cited

Bright, J. W., Dawkins, G. S., Ind. Eng. Chem. Fundamentals 4, 93 (1965).
Forsythe, G. E., J. Soc. Ind. Appl. Math. 5, 74 (1957).
Graybill, F. A., "Introduction to Linear Statistical Models," Vol. I, McGraw-Hill, New York, 1961.
"Handbook of Chemistry and Physics," 37th ed., p. 1972, Chemical Rubber Publishing Co., Cleveland, Ohio, 1955.
Hildebrand, F. B., "Introduction to Numerical Analysis," McGraw-Hill, New York, 1956.
Hildebrand, F. B., "Methods of Applied Mathematics," Prentice-Hall, Englewood Cliffs, N. J., 1963.
Lanczos, C., "Applied Analysis," Prentice-Hall, Englewood Cliffs, N. J., 1956.
Lapidus, L., "Digital Computation for Chemical Engineers," McGraw-Hill, New York, 1962.
Orden, A., in "Mathematical Methods for Digital Computers," Ralston and Wilf, eds., Wiley, New York, 1962.
Ralston, A., "First Course in Numerical Analysis," McGraw-Hill, New York, 1965.
Sasuly, Max, "Trend Analysis in Statistics," Brookings Institute, Washington, D. C., 1934.
Simons, . P., Ind. Eng. Chem., Anal. Ed. 13, 536 (1941).
Zakin, J. L., Simha, R., Hershey, H. C., J. Appl. Polymer Sci. 10, 1455 (1966).

Received for review August 22, 1966
Accepted April 14, 1967
The design and simulation of feedforward and feedback control systems for a five-tray, single-feed dis-
tillation column have been carried out for the multicomponent system benzene-toluene-xylene. For changes
in feed composition, linear feedforward compensation may range from nearly perfect to detrimental, de-
pending on column operating conditions.
Within the past few decades, the increasing interest in higher product quality, safety of operation, and optimum economic performance has been the motivation for a revolution in the theory of process dynamics and control. Unfortunately, the complexity of chemical and petroleum processing has severely hampered the parallel development of potential applications. Such is particularly the case for one of the most common of the unit operations, multicomponent distillation. In this paper, emphasis is placed on the evaluation of the potentials and application of the linear theory of feedforward and feedback control design to multicomponent distillation.

1 Present address, University of Kentucky, Lexington, Ky. 40506