Abstract--In this paper, we propose an architecture with two different kinds of neural networks for on-line determination of optimal cutting conditions. A back-propagation network with three inputs and four outputs is used to model the cutting process. A second network, which parallelizes the augmented Lagrange multiplier algorithm, determines the corresponding optimal cutting parameters by maximizing the material removal rate subject to appropriate operating constraints. Due to its parallelism, this architecture can greatly reduce processing time and make real-time control possible. Numerical simulations and a series of experiments are conducted on end milling to confirm the feasibility of this architecture.
1. INTRODUCTION
perceptron neural network to model the turning process and an augmented Lagrange
multiplier (ALM) method to optimize the material removal rate. They presented only
a computer simulation. Later, Choi et al. [6] experimented with this scheme on a
turning process, but employed a different optimum strategy, the barrier function (one
of the sequential unconstrained minimization techniques). Since the calculation for
optimization takes a great deal of time, their scheme was unable to reduce the error between the neural network and the actual lathe quickly enough. Thus this method may converge to a false optimum.
Although neural networks can represent more I/O relationships without increasing
computing time, the networks employed in all of the previous work in this area require
a great deal of time to find the optimal cutting conditions. Thus the calculated optimal
conditions are far from the real optimal conditions. More recently, a special type of
neural network that parallelizes the optimal algorithm has been used to solve on-line
optimization problems [7].
Tank and Hopfield [8] first found that a neural network can be made to minimize an energy function, and designed a neural network for finding the minimum of a function. Chua
and Lin [9] used integrator cells to model neurons and mapped the cost function and
constraints into a canonical nonlinear circuit based on the Kuhn-Tucker conditions.
Kennedy and Chua [10] showed that the linear programming circuit of Tank and
Hopfield [8], after some modification, can be reduced to the circuit of Chua and Lin
[9]. Rodriguez-Vazquez et al. [11] replaced the RC-active technique by an SC-reactive
technique, which is more suitable for VLSI implementation. Most of the above networks
use the penalty method to solve optimization problems. However, Cichocki and Unbehauen [12] proposed a structure similar to that of Rodriguez-Vazquez et al. [11] which, unlike theirs, employed an ALM method. Their structure parallelizes existing optimization algorithms
and constitutes a parallel network. Because of their parallelism, these networks make
on-line optimization possible.
In this paper, a neural network based adaptive control with optimization (NNBACO)
system that includes two different kinds of neural networks is proposed for on-line
selection of optimal cutting parameters and control of the machining process. A back-
propagation network with three inputs and four outputs is used as a general-purpose
model to learn the end milling process. A second network, which parallelizes the ALM method, finds the corresponding optimal cutting conditions by maximizing the material removal rate (MRR), which serves as the cost function, subject to certain constraints. Owing to the
parallel processing ability of this architecture, the processing time will not increase
when more constraints are added. Numerical simulations and a set of experiments that
apply this NNBACO system on end milling are presented to demonstrate its capabilities.
These results show that this system is valid within the cutting conditions examined.
This paper is organized as follows. Section 2 gives an overview of neural networks.
Section 3 first presents the circuit used for solving on-line optimization of a cutting
process and then describes the architecture of the proposed NNBACO system. Section
4 presents a simulation that illustrates the application of the proposed structure in end
milling. Section 5 describes the experimental procedure and results. Conclusions are
given in the last section.
2. NEURAL NETWORKS
In an artificial neural network, the unit analogous to the biological neuron is referred
to as a "processing element" (PE) (see Fig. 1). Like a biological nervous system, an
artificial neural network consists of a large number of interconnected PEs. A PE has many inputs o_{j,k-1}, but only a single output o_{i,k}, which can fan out to many other PEs in the network. The basic function of a neuron is to sum its inputs and to produce an output. Let w_{i,j,k} be the weight on the ith neuron's input branch in the kth layer connected to the jth neuron in the (k-1)th layer. Then the output of the ith neuron, o_{i,k}, is given by
FIG. 1. Schematic diagram of the basic model (inputs o_{j,k-1}, weights w_{i,j,k}, output o_{i,k}).
net_{i,k} = Σ_j w_{i,j,k} · o_{j,k-1}   (1)
o_{i,k} = f(net_{i,k})   (2)
where f is the sigmoid activation function, f(net_{i,k}) = 1 / (1 + e^{-net_{i,k}}) (equation (A3) in the Appendix).
[Fig. 2: three-layered network structure with input layer, hidden layer and output layer.]
STEP 4 Find the error term δ_i for an output node using the equation δ_i = (d_i - y_i) · y_i · (1 - y_i). For a hidden-layer node, find the error term using the equation δ_{i,k} = o_{i,k} · (1 - o_{i,k}) · Σ_j δ_{j,k+1} · w_{j,i,k+1} (the standard back-propagation error terms for the sigmoid activation used here).
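As an illustration of how these pieces fit together, the following minimal Python sketch performs one back-propagation update for a three-layer sigmoid network. It is a hedged rendering of equations (1), (2) and STEP 4 with a plain gradient-descent weight update; the two-weight-matrix layout, the function names and the learning rate eta are our assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def train_step(x, d, W2, W3, eta=0.1):
    """One back-propagation update for a three-layer sigmoid network.

    x  -- input vector (cutting conditions)
    d  -- desired output vector (process outputs used as targets)
    W2 -- weights from the input layer to the hidden layer
    W3 -- weights from the hidden layer to the output layer
    """
    # Forward pass, equations (1) and (2).
    o2 = sigmoid(W2 @ x)
    y = sigmoid(W3 @ o2)
    # Error terms of STEP 4: output layer, then hidden layer.
    delta_out = (d - y) * y * (1.0 - y)
    delta_hid = o2 * (1.0 - o2) * (W3.T @ delta_out)
    # Weight updates (gradient descent on the squared output error).
    W3 += eta * np.outer(delta_out, o2)
    W2 += eta * np.outer(delta_hid, x)
    return W2, W3, y
```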
F_x - Allowable F_x ≤ 0
F_y - Allowable F_y ≤ 0
P - Allowable P ≤ 0
R_a - Allowable R_a ≤ 0.   (9)
Since the above optimization problem must be solved on-line during control of a
machining process, a special type of neural network is introduced to solve this problem
in the next subsection.
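To make the constraint set concrete, equation (9) can be expressed as a single vector-valued function of the cutting conditions. The sketch below is illustrative only; the name output_constraints and the callable interface are ours, and the allowable values are the ones quoted later in the simulation section.

```python
import numpy as np

# Allowable output values used later in the simulation section:
# Fx (N), Fy (N), P (kW), Ra (mm).
ALLOWABLE = np.array([800.0, 1600.0, 1.8, 0.0015])

def output_constraints(x, process_model):
    """Inequality constraints of equation (9); every entry must be <= 0.

    x             -- vector of cutting conditions (design variables)
    process_model -- callable returning the predicted (Fx, Fy, P, Ra),
                     e.g. the back-propagation model of the cutting process
    """
    predicted = np.asarray(process_model(x), dtype=float)
    return predicted - ALLOWABLE
```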
where the vector X is referred to as the vector of design variables. The sequential
unconstrained minimization technique (SUMT) is one method used to solve constrained
optimization problems. It turns a constrained problem into an unconstrained one. After
that, an unconstrained optimization method can be applied.
The SUMT creates a pseudo-objective function of the form
Φ(X, r_p) = F(X) + r_p · P(X),
where F(X) is the original cost function and P(X) is an imposed penalty function, the form of which depends on the SUMT employed. The scalar r_p is a multiplier that
determines the magnitude of the penalty. Since the augmented Lagrange multiplier (ALM) method is an efficient and reliable SUMT, it is adopted in the following.
In the ALM method, the pseudo-objective function is
A(X, λ, Z, r_p) = F(X) + Σ_{j=1}^{m} { λ_j [G_j(X) + z_j^2] + r_p [G_j(X) + z_j^2]^2 } + Σ_{k=1}^{l} { λ_{k+m} H_k(X) + r_p [H_k(X)]^2 },   (14)
where λ_j is the Lagrange multiplier, z_j is a slack variable, G_j(X) ≤ 0 are the inequality constraints and H_k(X) = 0 are the equality constraints. Because the new variables z_j have been added in equation (14), the number of design variables increases greatly. According to Rockafellar [16], equation (14) is equivalent to
A(X, λ, r_p) = F(X) + Σ_{j=1}^{m} { λ_j ψ_j + r_p ψ_j^2 } + Σ_{k=1}^{l} { λ_{k+m} H_k(X) + r_p [H_k(X)]^2 },   (15)
where
ψ_j = max[ G_j(X), -λ_j / (2 r_p) ].   (16)
The minimization is carried out iteratively: an unconstrained minimum X* of A is found, the Lagrange multipliers are updated by
λ_j = λ_j + 2 r_p ψ_j(X*),   j = 1, ..., m,
λ_{k+m} = λ_{k+m} + 2 r_p H_k(X*),   k = 1, ..., l,
and the penalty parameter is multiplied by a constant factor, r_p = γ r_p, until r_p reaches r_p,max. The gradient of the pseudo-objective function with respect to the design variables is
∂A/∂x_i = ∂F/∂x_i + Σ_{j=1}^{m} s_j (λ_j + 2 r_p ψ_j) ∂G_j/∂x_i + Σ_{k=1}^{l} (λ_{k+m} + 2 r_p H_k) ∂H_k/∂x_i,   (21)
in which
s_j = 1 if G_j(X) ≥ -λ_j / (2 r_p), and s_j = 0 otherwise.
For the present problem the constraints are
G_j(X) = x_j - x_j^u,   G_{j+3}(X) = x_j^l - x_j,   j = 1, 2, 3,   (22)
G_{j+6}(X) = y_j - y_j^a,   j = 1, 2, 3, 4,   (23)
where x_j^u and x_j^l are the upper and lower limits on the input variable x_j and y_j^a is the allowable value of the output y_j. Since there are no equality constraints, the partial differential term of H_k(X) with respect to x_i in equation (21) can be removed. Substituting equations (22) and (23) into (21), we obtain
∂A/∂x_i = -x_j · x_k + s_i (λ_i + 2 r_p ψ_i) - s_{i+3} (λ_{i+3} + 2 r_p ψ_{i+3}) + Σ_{j=1}^{4} s_{j+6} (λ_{j+6} + 2 r_p ψ_{j+6}) ∂G_{j+6}/∂x_i,   (24)
where x_j and x_k are the other two design variables and ψ_i, ψ_{i+3} and ψ_{j+6} depend on the subscript as in equation (16), i.e. ψ_j = max[ G_j(X), -λ_j / (2 r_p) ] for the corresponding constraint G_j.
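To show how equations (16), (21) and (24) combine in practice, the sketch below carries out one outer ALM iteration with a plain gradient-descent inner loop. It is a minimal serial reading under our own interface assumptions (callables for the cost gradient and the constraints, an illustrative step size and penalty growth factor); it is not the parallel circuit of Fig. 4.

```python
import numpy as np

def alm_step(x, lam, rp, cost_grad, G, G_grad,
             gamma=3.0, rp_max=1.0e4, inner_iters=200, lr=1.0e-3):
    """One outer iteration of the augmented Lagrange multiplier method
    for the inequality-constrained problem min F(X) s.t. G_j(X) <= 0.

    x         -- current design variables (cutting conditions)
    lam       -- Lagrange multipliers, one per constraint
    rp        -- penalty parameter
    cost_grad -- callable returning dF/dX (here F = -MRR)
    G, G_grad -- callables returning G_j(X) and the Jacobian dG_j/dx_i
    """
    # Inner loop: unconstrained minimization of the pseudo-objective A,
    # using the gradient of equation (24) (no equality constraints).
    for _ in range(inner_iters):
        g = G(x)
        psi = np.maximum(g, -lam / (2.0 * rp))          # equation (16)
        s = (g >= -lam / (2.0 * rp)).astype(float)      # switching term s_j
        grad = cost_grad(x) + G_grad(x).T @ (s * (lam + 2.0 * rp * psi))
        x = x - lr * grad
    # Multiplier and penalty updates at the approximate minimizer X*.
    psi_star = np.maximum(G(x), -lam / (2.0 * rp))
    lam = lam + 2.0 * rp * psi_star                     # lambda update
    rp = min(gamma * rp, rp_max)                        # rp grows, capped
    return x, lam, rp
```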
FIG. 4. Circuit of a constrained nonlinear optimization solver based on the ALM method.
[Block diagram of the proposed system: the current state and the measured outputs feed neural network (I) for modeling, and neural network (II) for optimization uses this model to generate the optimal inputs.]
STEP 1 Under the initial constraints, neural network (II) determines a set of optimal
inputs and then sends them into the milling machine and neural network (I).
STEP 2 The measured outputs of the milling machine, corresponding to the optimal
input, are used as the desired output to train neural network (I).
STEP 3 Neural network (II) uses the newly upgraded neural network (I) as the model
of the end milling process to find the optimal inputs and sends them into the
milling machine and neural network (I).
STEP 4 STEP 2 and STEP 3 are repeated until the termination of the cutting process (see the sketch below).
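A compact sketch of this control loop follows. The object names (model_nn, optimizer_nn, machine) and their methods are placeholders of our choosing for the hardware and networks described above; the sketch only mirrors the four steps.

```python
def nnbaco_loop(model_nn, optimizer_nn, machine, constraints, n_cycles):
    """Sketch of the on-line NNBACO loop described in STEP 1 to STEP 4.

    model_nn     -- neural network (I), the trainable process model
    optimizer_nn -- neural network (II), returning optimal inputs for a model
    machine      -- interface to the milling machine (apply inputs, read outputs)
    """
    # STEP 1: optimal inputs under the initial constraints.
    inputs = optimizer_nn.solve(model_nn, constraints)
    for _ in range(n_cycles):
        machine.apply(inputs)
        measured = machine.read_outputs()
        # STEP 2: the measured outputs serve as the desired outputs for
        # training neural network (I).
        model_nn.train_step(inputs, measured)
        # STEP 3: re-optimize using the newly updated model.
        inputs = optimizer_nn.solve(model_nn, constraints)
    # STEP 4: the loop repeats until the cut is finished
    # (here, for n_cycles control cycles).
    return inputs
```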
4. SIMULATION
In the following, the theoretical model of the end milling process is described. A simulation employing this theoretical model is then presented, and its results confirm the feasibility of the NNBACO system.
where β_n is the tooth entry angle, N_f is the tooth number of the end mill, K_t indicates the ratio of tangential cutting force to the chip load, and K_r indicates the ratio of radial to tangential cutting force (values of K_t and K_r for the aluminum with hardness 55 HRB are used). The cutting power and surface finish are
P = 1.6 + 4.26 · (N_f · A_d · R_d · f_t)^0.66 · rpm / 97422   (kW),   (28)
R_a = f_t^2 / (8 · R)   (mm),   (29)
in which R_a is the average peak-to-valley height on the workpiece surface and R is the radius of the end mill. The deformation of the cutting tool is ignored here.
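Equations (28) and (29) can be evaluated directly, as in the short sketch below. The grouping of terms in (28) and the f_t^2/(8R) form of (29) are our reading of the garbled originals, so the functions should be treated as illustrative.

```python
def cutting_power_kw(Nf, Ad, Rd, ft, rpm):
    """Cutting power of equation (28), in kW; Ad, Rd in mm, ft in mm/tooth."""
    return 1.6 + 4.26 * (Nf * Ad * Rd * ft) ** 0.66 * rpm / 97422.0

def surface_finish_mm(ft, R):
    """Surface finish of equation (29): average peak-to-valley height in mm,
    with ft the feed per tooth (mm/tooth) and R the end-mill radius (mm)."""
    return ft ** 2 / (8.0 * R)
```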
output quantities, and they composed the set of I/O samples which were used for
initial training and testing.
The three-layered back-propagation network with three inputs and four outputs
described in Section 3.1 was employed for neural network (I). In the initial training, we started with three hidden nodes and increased their number until a low error rate was achieved. The error rate is the ratio of incorrect samples, i.e. samples
in which the absolute difference between the theoretical output and the output of the
neural network exceeds 10%, to all 420 samples. The simulation results for the initial
training, with ten hidden-layer nodes, are shown in Fig. 6; the results indicate that
after 150 iterations the error rate approached 10%.
If the number of iterations used in training the neural network is increased, the
error rate may be reduced slightly. But, from a macroscopic point of view, the error
rate remains in the vicinity of 10%. Increasing the number of nodes in the hidden
layer may also decrease the error rate, but it may make it more difficult to realize the
proposed structure physically and may cause more time to be consumed in calculating
the optimal inputs. Hence ten hidden-layer nodes were chosen in this stage.
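The error-rate criterion used above can be stated compactly in code. The sketch interprets the 10% threshold as a relative difference and treats a sample as incorrect if any of its four outputs violates it; both choices are our reading of the text.

```python
import numpy as np

def error_rate(theoretical, predicted, tol=0.10):
    """Fraction of samples classified as incorrect.

    theoretical, predicted -- arrays of shape (n_samples, n_outputs)
    A sample counts as incorrect if any of its outputs deviates from the
    theoretical value by more than the tolerance (here 10 per cent,
    interpreted as a relative difference).
    """
    rel_err = np.abs(predicted - theoretical) / np.abs(theoretical)
    incorrect = np.any(rel_err > tol, axis=1)
    return float(incorrect.mean())
```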
According to the experiment set-up and workpiece used in our laboratory, the limits
on the input and output variables that were used in the simulation were as follows:
The allowable (upper limit) values of the output variables:
F_x: 800 (N)
F_y: 1600 (N)
P: 1.8 (kW)
R_a: 0.0015 (mm).
In order to examine the performance of the NNBACO system under specific cutting
conditions, one of the input variables (radial depth of cut) was specified as varying
between three specific values, and it changed from one value to another every one
hundred iterations. Through this simulation, not only the static (constant R_d) but also the dynamic (varying R_d) performance of this system was investigated.
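For reference, the radial-depth schedule described above can be written as a one-line lookup; the cycling order is our assumption, while the three levels and the 100-iteration hold come from the text and Fig. 7(b).

```python
def radial_depth_mm(iteration, levels=(2.0, 6.0, 10.0), hold=100):
    """Radial depth of cut used in the simulation: each level is held for
    `hold` iterations and then switched to the next one (levels of 2, 6
    and 10 mm as in Fig. 7(b)); cycling through them is our assumption."""
    return levels[(iteration // hold) % len(levels)]
```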
[Fig. 6. Error rate (%) versus number of training iterations (0-300) for the initial training.]
Figure 7(b) shows the three values (2, 6 and 10 mm) for the radial depth of cut, and Figs 7(a) and (c) show the corresponding optimal inputs (A_d and f_t). Because the neural network cannot "learn" as fast as the change in the radial depth of cut, there were some transient states at the beginning of each state. When the radial depth of cut is 2 mm, the inequality constraint on the cutting power is satisfied more easily than when the depth is 6 or 10 mm. Hence the product of f_t and A_d, shown in Fig. 7(d), has the greatest value when R_d = 2 mm. These figures also show that as the system state varies, the steady-state optimal inputs vary as well.
The neural network's outputs corresponding to the optimal inputs are shown in Fig. 8.
[Fig. 7. Simulation results versus iterations: (a) optimal feedrate per tooth f_t (mm/tooth); (b) radial depth of cut R_d (mm); (c) axial depth of cut A_d (mm); (d) cost function f_t·A_d.]
Because the calculation of the optimizer is based on the neural network, all the inequality constraints, i.e. equation (23), are satisfied. However, in order to satisfy the inequality constraint on the cutting power, all the other outputs of the neural network are far from their upper limits.
Most of the outputs of the theoretical model, shown in Fig. 9, are below the maximum
limitations. But some overshoots can be detected in the cutting power during the
changes between states. We also find that in these figures the maximum MRR is mainly
limited by the cutting power.
The learning errors in the simulation are shown in Fig. 10, in which the error is defined (equation (30)) as the discrepancy between the theoretical model's outputs and the neural network's outputs.
[Fig. 8. Neural network outputs corresponding to the optimal inputs, versus iterations: (a) force F_x; (b) force F_y; (c) power P; (d) surface finish R_a.]
[Fig. 9. Outputs of the theoretical milling model, versus iterations: (a) force F_x; (b) force F_y; (c) power P; (d) surface finish R_a.]
Because of the large variation of the weights at the beginning of each step change in R_d, the optimizer takes more iterations to find a set of optimal inputs. As the neural network approaches the steady state, the optimal inputs can be obtained in fewer iterations. This means that the neural network describes the system more and more accurately in the static state. Compared with the results of the initial training, the error diminished very quickly.
[Fig. 10. Learning errors in the simulation (×100%), versus iterations: (a) F_x error; (b) F_y error; (c) power error; (d) surface finish error.]
After the simulations in this section, we adopted some patterns from the initial
training set to test the underlying neural network and found that for patterns located
near the optimal inputs the neural network generated outputs with a very small error.
But for patterns far from the optimal values, the network generated outputs with a
large error. This result implies that the neural network has only partial knowledge
after training.
5. EXPERIMENTS
5.1. Experimental set-up
We conducted a series of experiments with end milling to confirm the feasibility of
the proposed NNBACO system.
The experimental set-up for the NNBACO system is shown in Fig. 11, which also
defines the positive directions of the three axes. The feedrate command to the servo-
driver was generated by a hardware interpolator. Each NC servo loop (per axis)
included a DC servo motor and a P.I.D. controller. Velocity and position feedback were provided by a tachometer and encoders.
A vertical knee-type milling machine (3 horse power spindle motor) was used for
the experiments. The cutting forces were measured by a dynamometer (Kistler 9257B)
and a charge amplifier (Kistler 5007). The A/D converter used was a 12-bit one; the
collected data were saved to the memory of a 486 personal computer by DMA (direct
memory access) transfer. The sampling time of the A/D converter was 1 ms and the NNBACO system updated the feedrate parameter every 100 ms.
The data processing and the control algorithm were implemented using Borland C++ V3.0 on a 32-bit 486 PC. The other machining conditions and parameters of
the workpiece are listed below:
Workpiece: aluminum alloy (T6061).
Cutting tool: end mill,
25 mm diameter,
four teeth,
160 mm total length.
Cutting conditions:
spindle speed: 300 rpm,
axial depth of cut: 25 mm.
No coolant.
5.2. Experiments
We conducted three main series of experiments, in which two differently shaped
workpieces were machined. Details of the experimental conditions and the dimensions
of the workpiece are shown in Figs 12(a) and (b).
The first experiment was conventional cutting, in which the feedrate per tooth was set to a constant value satisfying the constraints on the maximum allowable cutting forces.
In the second experiment, the proposed NNBACO system was applied in the end
milling to demonstrate its performance. Because of the limitations of the laboratory
equipment used, the neural network for modeling was set up with only two outputs,
Fx and Fy, and the axial depth of cut was kept at 25 mm. There were ten nodes in
the hidden layer. The radial depth of cut and other cutting conditions were set up as
described in Figs 12(a) and (b).
In the last experiment on each workpiece, regardless of the variation in radial depth
[Fig. 11. Experimental set-up: workpiece and dynamometer mounted on the X-axis table, charge amplifier, hardware interpolator, NC servo and 486 PC; the feed direction and the X, Y axes are indicated.]
FIG. 12. (a) Cutting conditions for workpiece A (down milling, dimensions in mm). (b) Cutting conditions for workpiece B (down milling, dimensions in mm).
of cut, the input (radial depth of cut) to the neural network was fixed at specific
"false" values as defined in Figs 12(a) and (b). The difference between the "real" and
"false" input is like a noise value. Thus, the error tolerance of the NNBACO system
can be investigated from this experiment.
The geometry of the first workpiece, shown in Fig. 12(a), consisted of two steps.
In order to satisfy the force limitation (the maximum allowable forces in the X, Y
directions were 400 and 450 N, respectively), the feedrate was kept at 0.12 (mm/tooth)
in case a-1. The measured force is shown in Fig. 13.
The NNBACO system was applied in case a-2, in which the feedrate decreased to
a lower level as the cutting tool stepped up to a higher stair. The results are shown
in Fig. 14. The error is defined similarly to equation (30), but with the theoretical outputs replaced by the measured signals. Since the neural network cannot learn very
accurately when the radial depth of cut changes, there are two peaks in the measured
force. After a while, however, the measured force in the X direction stabilized around
the preset value, which means that the optimal cutting conditions are constrained by
the force in the X direction in this case. Due to the sudden change in the input to the
neural network, there is a peak in the neural network's output shown in Fig. 14(c).
Although the radial depth of cut varied from 2 to 3 mm, the input to the neural
network was kept at 2 mm throughout the tool path in case a-3. The results, as depicted
in Fig. 15, show that the peak in the neural network's output vanished, because there
was no change in the input to the network. However, the two peaks are still present
in the measured force and the error diagram, i.e. Figs 15(b) and (d).
For the remaining experimental cases, the second workpiece had three steps, and the distance between steps was shorter. This workpiece, as shown in Fig. 12(b),
includes ascending and descending parts. The maximum allowable forces in the X, Y
directions were again set as 400 and 450 N, respectively.
In case b-l, feedrate was kept at 0.12 (mm/tooth). The measured forces are shown
in Fig. 16.
The results of case b-2 obtained using the NNBACO system are shown in Fig. 17.
For the same reasons described in case a-2, there are four peaks in the neural network's
output. In Fig. 17(c), we see that the optimal cutting conditions are limited sometimes
by the force in the X direction and sometimes by the force in the Y direction. From
Fig. 17(d), we see that the error in the third step is not less than that in the first step. This means that the system only possesses knowledge near the optimum region: as the state (R_d) returns to a previous value, the system still needs to re-optimize the cutting conditions as before.
FIG. 13. The results of case a-1: measured forces F_x and F_y (N) versus time (s).
FIG. 14. The results of case a-2: (a) optimal feedrate per tooth f_t (mm/tooth); (b) measured forces F_x, F_y (N); (c) neural network's output forces F_x, F_y (N); (d) X and Y errors (×100%); all versus time (s).
For the results of the last case b-3, Fig. 18, as in the previous case, we see that the
optimal conditions are always constrained by the maximum allowable force in the X
direction. When the results of Figs 17(d) and 18(d) are compared, they seem to be
different. In the case of Fig. 17(d), as the radial depth of cut changes, the input of the neural network changes suddenly, which causes large weight updates and generates larger force errors. In the case of Fig. 18(d), this situation is avoided
[Fig. 15. The results of case a-3: (a) optimal feedrate per tooth f_t (mm/tooth); (b) measured forces F_x, F_y (N); (c) neural network's output forces F_x, F_y (N); (d) X and Y errors (×100%); all versus time (s).]
and smooth transitions of the errors occur as the true radial depth of cut changes. However, for large variations in the depth of cut, the NNBACO system still needs to be given the correct input to reduce the convergence time.
The results of the real-time control experiments presented above show that when applied in end milling the NNBACO system is stable within the range of the cutting conditions examined. Since the cost function is MRR and the radial and axial depth
FIG. 16. The results of case b-1: measured forces F_x and F_y (N) versus time (s).
of cut are kept at specific values in each case studied, we can use the time needed to cut through the workpiece as an index of efficiency: the less time spent, the higher the MRR. The time spent in cutting through each workpiece is given in Table 1. From the table, it is clear that when the NNBACO system is applied the MRR is increased greatly.
Although a "false" input was applied in the last experiment on each workpiece, the
MRR for these experiments is still high. Moreover, in some cases, because of less
variation in the neural network's input (Ra), these cases perform better than the "true"
input cases. This shows that the NNBACO system is an architecture with high fault
tolerance.
6. CONCLUSIONS
[Table 1. Time spent in cutting through workpiece A and workpiece B for each case.]
[Fig. 17. The results of case b-2: (a) optimal feedrate per tooth f_t (mm/tooth); (b) measured forces F_x, F_y (N); (c) neural network's output forces F_x, F_y (N); (d) X and Y errors; all versus time (s).]
there were some errors in the beginning, the NNBACO system gradually achieved the optimal cutting conditions. Since the NNBACO system itself determines the knowledge to be acquired, i.e. through the neural network for optimization, it behaves like a self-organizing system.
The NNBACO system is applied to end milling in this paper, but it is clearly a general-purpose architecture, i.e. it can be extended to other machines to improve cutting efficiency.
[Fig. 18. The results of case b-3: (a) optimal feedrate per tooth f_t (mm/tooth); (b) measured forces F_x, F_y (N); (c) neural network's output forces F_x, F_y (N); (d) X and Y errors; all versus time (s).]
REFERENCES
[1] Y. KOREN, Computer Control of Manufacturing Systems. McGraw-Hill, New York (1983).
[2] M. A. ELBESTAWI, ASME J. Dyn. Syst. Meas. Control 112, 611 (1990).
[3] G. CHRYSSOLOURIS and M. GUILLOT, ASME J. Engng Ind. 112, 122 (1990).
[4] G. R. MADEY, J. WEINROTH and V. SHAH, J. Intell. Manufact. 3, 193 (1992).
[5] S. R. RANGWALA and D. A. DORNFELD, IEEE Trans. Syst. Man Cyber. 19, 299 (1989).
[6] G. S. CHOI, Z. WANG and D. A. DORNFELD, Proc. IEEE Int. Conf. Robotics and Automation, 1567 (1991).
[7] J. CHEN, M. A. SHANBLATT and C. MAA, Int. J. Neural Syst. 2, 331 (1992).
[8] D. W. TANK and J. J. HOPFIELD, IEEE Trans. Circuits Syst. CAS-33, 533 (1986).
[9] L. O. CHUA and G. N. LIN, IEEE Trans. Circuits Syst. CAS-31, 182 (1984).
[10] M. P. KENNEDY and L. O. CHUA, IEEE Trans. Circuits Syst. CAS-34, 210 (1987).
APPENDIX
In this Appendix, the method used to calculate the derivatives of equation (24) by a forward pass through
the network is described. With regard to Fig. 2, we define the following nomenclature:
net_{i,k}: the sum of the ith neuron's inputs in the kth layer, i.e. net_{i,k} = Σ_j w_{i,j,k} · o_{j,k-1}.
f(net_{i,k}) = 1 / (1 + e^{-net_{i,k}})   (A3)
y_i: the output of the ith neuron in the output layer. If the neural network has three layers then y_i = o_{i,3}.
d_i: the desired output of the ith output node.
w_{i,j,k}: the weight between the ith neuron in the kth layer and the jth neuron in the (k-1)th layer.
E: the global error.
Since the neural network is used for learning the cutting process, y_j is the jth output and x_i is the ith input. Then, in equation (24), the constraint derivative ∂G_{j+6}/∂x_i reduces to ∂y_j/∂x_i (A5). With respect to distinct layers, (A5) takes a different form; the following derivation is divided into "the output layer" and "the hidden and input layers". For the output layer the corresponding derivative is given by equation (A8).
Substituting (A8) into (A5), we can calculate the derivatives of equation (24) by a forward pass through
the neural network.
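The forward-pass derivative computation outlined in this Appendix can be sketched as follows for the three-layer sigmoid network. The two-weight-matrix layout and the assumption that the input layer passes the inputs through unchanged are ours; the sketch simply applies the chain rule layer by layer.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))               # equation (A3)

def outputs_and_jacobian(x, W2, W3):
    """Forward pass of the three-layer network that also propagates the
    derivatives of every activation with respect to the inputs, so that
    dy_j/dx_i (the term needed for dG_{j+6}/dx_i in equation (24)) is
    obtained in the same sweep, without a backward pass.

    W2 -- weights from the input layer to the hidden layer
    W3 -- weights from the hidden layer to the output layer
    """
    x = np.asarray(x, dtype=float)
    o1, d1 = x, np.eye(len(x))                      # o_{i,1} = x_i, do_1/dx = I
    o2 = sigmoid(W2 @ o1)
    d2 = (o2 * (1.0 - o2))[:, None] * (W2 @ d1)     # hidden layer, chain rule
    y = sigmoid(W3 @ o2)
    dy = (y * (1.0 - y))[:, None] * (W3 @ d2)       # output layer, chain rule
    return y, dy                                    # dy has shape (outputs, inputs)
```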