
Pergamon

Int. J. Mach. Tools Manufact. Vol. 35, No. 4, pp. 637-660, 1995


Copyright © 1994 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0890-6955/95 $9.50 + .00

ADAPTIVE CONTROL OPTIMIZATION IN END MILLING


USING NEURAL NETWORKS

SHIUH-TARNG CHIANG,† DING-I LIU,† AN-CHEN LEE† and WEI-HUA CHIENG†


(Received 6 August 1993)

Abstract--In this paper, we propose an architecture with two different kinds of neural networks for on-line determination of optimal cutting conditions. A back-propagation network with three inputs and four outputs is used to model the cutting process. A second network, which parallelizes the augmented Lagrange multiplier algorithm, determines the corresponding optimal cutting parameters by maximizing the material removal rate subject to appropriate operating constraints. Due to its parallelism, this architecture can greatly reduce processing time and make real-time control possible. Numerical simulations and a series of experiments are conducted on end milling to confirm the feasibility of this architecture.

1. INTRODUCTION

THE USE of computer numerical control (CNC) systems has grown tremendously in recent decades. However, a remaining drawback of these systems is that the operating parameters, such as feedrate, speed, and depth of cut, are programmed off-line, and the selection of these parameters is based on the part programmer's experience and knowledge. To prevent damage to the cutting tool, the operating conditions are usually set extremely conservatively. As a result, many CNC systems are inefficient and run under operating conditions that are far from optimal.
For this reason, adaptive control, which provides on-line adjustment of the operating
parameters, is being studied with interest. Adaptive control systems can be classified
into two types [1]: (1) adaptive control with optimization (ACO) and (2) adaptive control with constraints (ACC). In ACO systems, the controller adjusts the operating
parameters to maximize a given performance index under various constraints. In ACC
systems, on the other hand, the operating parameters are adjusted to regulate one or
more output parameters (typically cutting force or cutting power) to their limit values.
In fact, the objective of most ACC systems is also to increase a given performance
index by assuming that the optimal solution occurs on a constraint boundary.
Since machining is a time-varying process, most adaptive control methods [2] apply the recursive least squares method to estimate, in-process, the parameters of a special empirical formula or a linearized model. However, in many cases, no reliable model is available, or the reduced linearized model is not accurate enough to depict the input/output (I/O) relationship of the cutting process. Furthermore, as more cutting constraints are taken into account, the computing time for these methods increases, because more parameters must be estimated. Therefore, neural networks, which can map the I/O relationship and possess massive parallel computing capability, have attracted much attention in research on machining processes.
Chryssolouris and Guillot [3] modeled the machining process by a multiple regression
method and a neural network and concluded that neural networks are superior to
conventional multiple regression methods. Madey et al. [4] had a neural network learn
the I/O relationship of a human operator's actions. The neural network could then
work like the operator. However, this method is limited to the trained cutting con-
ditions. Rangwala and Dornfeld [5] presented a scheme that used a multilayered

† Department of Mechanical Engineering, National Chiao Tung University, 1001 Ta Hsueh Road, Hsinchu 300, Taiwan, R.O.C.

perceptron neural network to model the turning process and an augmented Lagrange
multiplier (ALM) method to optimize the material removal rate. They presented only
a computer simulation. Later, Choi et al. [6] experimented with this scheme on a
turning process, but employed a different optimum strategy, the barrier function (one
of the sequential unconstrained minimization techniques). Since the calculation for
optimization takes a great deal of time, their scheme was unable to reduce the error
level between the neural network and the actual lathe immediately. Thus this method may produce a false optimum.
Although neural networks can represent more I/O relationships without increasing
computing time, the networks employed in all of the previous work in this area require
a great deal of time to find the optimal cutting conditions. Thus the calculated optimal
conditions are far from the real optimal conditions. More recently, a special type of
neural network that parallelizes the optimal algorithm has been used to solve on-line
optimization problems [7].
Tank and Hopfield [8] first found that a neural network can seek to minimize the
energy function and designed a neural network for finding a function minimum. Chua
and Lin [9] used integrator cells to model neurons and mapped the cost function and
constraints into a canonical nonlinear circuit based on the Kuhn-Tucker conditions.
Kennedy and Chua [10] showed that the linear programming circuit of Tank and
Hopfield [8], after some modification, can be reduced to the circuit of Chua and Lin
[9]. Rodriguez-Vazquez et al. [11] replaced the RC-active technique by an SC-reactive
technique, which is more suitable for VLSI implementation. Most of the above networks
use the penalty method to solve optimization problems. However, Cichocki and Unbehauen [12] proposed a structure similar to that of Rodriguez-Vazquez et al. [11] which, unlike theirs, employed an ALM method. Their structure parallelizes existing optimal algorithms
and constitutes a parallel network. Because of their parallelism, these networks make
on-line optimization possible.
In this paper, a neural network based adaptive control with optimization (NNBACO)
system that includes two different kinds of neural networks is proposed for on-line
selection of optimal cutting parameters and control of the machining process. A back-
propagation network with three inputs and four outputs is used as a general-purpose
model to learn the end milling process. A second network, which parallelizes the ALM method, finds the corresponding optimal cutting conditions by maximizing the material removal rate (MRR) as the cost function, subject to certain constraints. Owing to the
parallel processing ability of this architecture, the processing time will not increase
when more constraints are added. Numerical simulations and a set of experiments that
apply this NNBACO system on end milling are presented to demonstrate its capabilities.
These results show that this system is valid within the cutting conditions examined.
This paper is organized as follows. Section 2 gives an overview of neural networks.
Section 3 first presents the circuit used for solving on-line optimization of a cutting
process and then describes the architecture of the proposed NNBACO system. Section
4 presents a simulation that illustrates the application of the proposed structure in end
milling. Section 5 describes the experimental procedure and results. Conclusions are
given in the last section.

2. NEURAL NETWORKS

In an artificial neural network, the unit analogous to the biological neuron is referred to as a "processing element" (PE) (see Fig. 1). Like a biological nervous system, an artificial neural network consists of a large number of interconnected PEs. A PE has many inputs o_{j,k−1}, but only a single output o_{i,k}, which can fan out to many other PEs in the network. The basic function of a neuron is to sum its inputs and to produce an output. Let w_{i,j,k} be the weight connecting the ith neuron in the kth layer to the jth neuron in the (k−1)th layer. Then the output of the ith neuron, o_{i,k}, is given by
o_{i,k} = f(Σ_j [w_{i,j,k} · o_{j,k−1}]) ,     (1)

where f(·) is called the activation function.

Fig. 1. Schematic diagram of the basic model.


The operation of neural networks can be divided into two main phases: learning and
recall. Learning is the process of adapting the weights in response to stimuli at the
inputs. Once the weights have been adapted, the neural network has learned the I/O
mapping. Recall refers to how the network processes a stimulus presented at the input
and creates a response at the output.
Many kinds of neural network models have been proposed in recent years. A supervised neural network, the back-propagation network, is introduced in the following.
The back-propagation network, shown in Fig. 2, was introduced by Rumelhart and
McClelland [13] in 1986. A typical back-propagation network is a supervised multilayer
network that includes an input layer, an output layer, and at least one hidden layer
and in which each layer is fully connected to the succeeding layer.
As illustrated by the solid lines in Fig. 2, the stimuli are fed into the input layer and
propagated forward to the output layer. The output is compared with the desired one
and then the error signal is propagated backward through the network, as shown by
the dashed lines, to upgrade the weights of each layer. The name back-propagation is
derived from the backward propagation of the error signal.
The main steps in the back-propagation algorithm are summarized as follows:
STEP 1 Initialize all weights w_{i,j,k} with small random values.
STEP 2 Present input patterns and specify the corresponding desired outputs.
STEP 3 Calculate the actual outputs of all the nodes, using the present value of the
weights, by

o_{i,k} = f(net_{i,k})     (2)

where

net_{i,k} = Σ_j [w_{i,j,k} · o_{j,k−1}] .     (3)

Fig. 2. Three-layered back-propagation neural network.



STEP 4 Find the error term δ_{i,k} for an output node using the equation

δ_{i,k} = (d_i − o_{i,k}) · f′(net_{i,k}) .     (4)

For a hidden-layer node, find the error term using the equation

δ_{i,k} = f′(net_{i,k}) · Σ_l [δ_{l,k+1} · w_{l,i,k+1}] ,     (5)

where the sum runs over the l neurons in the layer above node i.


STEP 5 Adjust the weights by

w_{i,j,k}(t+1) = w_{i,j,k}(t) + η · δ_{i,k} · o_{j,k−1} + β · (w_{i,j,k}(t) − w_{i,j,k}(t−1)) ,     (6)

where η is the learning rate and β is a constant, between 0 and 1, which determines the effect of the past weight change on the current direction of movement in the weight space.
STEP 6 Present another input and go back to STEP 2 cyclically until all the weights
converge.
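To make the procedure concrete, here is a minimal Python sketch (our illustration, not the authors' implementation) of STEPS 1-6 for a three-layered sigmoid network with the momentum update of equation (6); the layer sizes, learning rate η and momentum constant β are arbitrary example values.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_backprop(X, D, n_hidden=10, eta=0.5, beta=0.9, epochs=1000):
    """Three-layered back-propagation with momentum, cf. equations (2)-(6)."""
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], D.shape[1]
    # STEP 1: initialize all weights with small random values
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_in))
    W2 = rng.uniform(-0.5, 0.5, (n_out, n_hidden))
    dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        for x, d in zip(X, D):               # STEP 2: present pattern and desired output
            o1 = sigmoid(W1 @ x)             # STEP 3: forward pass, eqs (2)-(3)
            y = sigmoid(W2 @ o1)
            # STEP 4: error terms, eqs (4)-(5); for a sigmoid, f' = o(1 - o)
            delta2 = (d - y) * y * (1 - y)
            delta1 = o1 * (1 - o1) * (W2.T @ delta2)
            # STEP 5: momentum update, eq (6)
            dW2 = eta * np.outer(delta2, o1) + beta * dW2_prev
            dW1 = eta * np.outer(delta1, x) + beta * dW1_prev
            W2 += dW2; W1 += dW1
            dW2_prev, dW1_prev = dW2, dW1    # STEP 6: cycle until the weights converge
    return W1, W2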
Since neural networks have a highly parallel structure, they are well suited to parallel
implementation. Such an implementation can result in very fast processing and can
achieve a very high degree of fault tolerance. In addition, since neural networks can
naturally process many inputs and have many outputs, they are readily applicable to
multivariable systems. Because of these promising features, recently neural networks
have been widely applied in fields such as image compression, character recognition,
and automatic control [14]. Moreover, specially designed chips for neural networks
have also been developed.
3. NEURAL NETWORK BASED ADAPTIVE CONTROL WITH OPTIMIZATION IN END MILLING

In what follows, the mathematical formulation for optimization of cutting conditions


is described and the parallel structure used for solving the on-line optimization problem
of a cutting process is developed. The neural network based adaptive control with
optimization (NNBACO) system applied to the end milling process is proposed in the
last subsection.
3.1. Mathematical formulation for optimization of cutting conditions
In a cutting process, in order to prevent damage to the cutting tool and maintain the minimum acceptable workpiece surface finish, there are upper bounds on the cutting forces in the X and Y directions (F_x, F_y), the power (P), and the surface finish (R_a). Similarly, because of the variety of cutting tools used and variations in machine capacity, the input variables (feedrate per tooth (f_t), axial depth of cut (A_d) and radial depth of cut (R_d)) also have their own operating ranges. Although many factors restrict the operating conditions, a high material removal rate is also required. Hence, the above description can be transformed into an optimization problem: under the limitations on the inputs and outputs, find a set of optimal inputs that will maximize the MRR, i.e.
maximize the performance index

F = f_t · A_d · R_d     (7)

subject to the following constraints on the input variables:

Min. of f_t ≤ f_t ≤ Max. of f_t
Min. of R_d ≤ R_d ≤ Max. of R_d
Min. of A_d ≤ A_d ≤ Max. of A_d     (8)

and the following constraints on the output variables:

F_x − Allowable F_x ≤ 0
F_y − Allowable F_y ≤ 0
P − Allowable P ≤ 0
R_a − Allowable R_a ≤ 0 .     (9)

Since the above optimization problem must be solved on-line during control of a
machining process, a special type of neural network is introduced to solve this problem
in the next subsection.

3.2. Optimization using neural networks


Nonlinear constrained programming is a basic tool in systems where a set of design
parameters are optimized subject to inequality constraints. Many numerical algorithms
have been developed for solving such problems [15]. The main disadvantage in applying
these conventional optimal algorithms to many industrial applications is that they
generally converge slowly. However, in many engineering and scientific problems, e.g.
automatic control, on-line optimization is required.
The goal of this section is to propose a parallel structure, one that parallelizes a conventional optimal algorithm, to solve this problem. The adopted optimization theory is described first.
Consider the following nonlinear constrained optimization problem: minimize a
scalar cost function

F(X) = F(x_1, x_2, ..., x_n)     (10)

subject to the inequality constraints

G_j(X) ≤ 0 ,  j = 1, ..., m     (11)

and the equality constraints

H_k(X) = 0 ,  k = 1, ..., q     (12)

where the vector X is referred to as the vector of design variables. The sequential
unconstrained minimization technique (SUMT) is one method used to solve constrained
optimization problems. It turns a constrained problem into an unconstrained one. After
that, an unconstrained optimization method can be applied.
The SUMT creates a pseudo-objective function of the form

A(X, r_p) = F(X) + r_p · P(X) ,     (13)

where F(X) is the original cost function and P(X) is an imposed penalty function, the form of which depends on the SUMT employed. The scalar r_p is a multiplier that determines the magnitude of the penalty. Since the augmented Lagrange multiplier (ALM) method is an efficient and reliable SUMT, it will be adopted in the following.
In the ALM method, the pseudo-objective function is

A(X, λ, r_p) = F(X) + Σ_{j=1}^{m} {λ_j · (G_j(X) + z_j²) + r_p · (G_j(X) + z_j²)²}
             + Σ_{k=1}^{q} {λ_{k+m} · H_k(X) + r_p · (H_k(X))²} ,     (14)

where λ_j is the Lagrange multiplier and z_j is a slack variable. Because the new variable z_j has been added in equation (14), it greatly increases the number of design variables. According to Rockafellar [16], equation (14) is equivalent to

A(X, λ, r_p) = F(X) + Σ_{j=1}^{m} {λ_j · φ_j + r_p · φ_j²}
             + Σ_{k=1}^{q} {λ_{k+m} · H_k(X) + r_p · (H_k(X))²} ,     (15)

where

φ_j = Max[G_j(X), −λ_j/(2 · r_p)]     (16)

and the update formulas for λ_j are

λ_j^{p+1} = λ_j^p + 2 · r_p · φ_j ,  j = 1, ..., m     (17)

λ_{k+m}^{p+1} = λ_{k+m}^p + 2 · r_p · H_k(X) ,  k = 1, ..., q .     (18)
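Before turning to the flow chart, the ALM iteration of equations (15)-(18) can be stated compactly in code. The following Python sketch is ours; the use of scipy.optimize.minimize for the inner unconstrained minimization, and all tolerances and penalty-growth constants, are illustrative choices for the inequality-constrained case.

import numpy as np
from scipy.optimize import minimize   # stand-in for any unconstrained minimizer

def alm_solve(F, G, x0, m, r_p=1.0, gamma=10.0, r_max=1e6, tol=1e-6, max_outer=50):
    """ALM loop for: minimize F(x) subject to G_j(x) <= 0, j = 1..m, eqs (15)-(17)."""
    lam = np.zeros(m)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        def A(z):                                        # pseudo-objective, eq (15)
            phi = np.maximum(G(z), -lam / (2.0 * r_p))   # eq (16)
            return F(z) + np.sum(lam * phi + r_p * phi**2)
        x_new = minimize(A, x).x                         # inner unconstrained minimization
        phi = np.maximum(G(x_new), -lam / (2.0 * r_p))
        lam = lam + 2.0 * r_p * phi                      # multiplier update, eq (17)
        if np.linalg.norm(x_new - x) < tol:              # converged
            return x_new
        x, r_p = x_new, min(gamma * r_p, r_max)          # increase penalty parameter
    return x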
A detailed flow chart of this algorithm is shown in Fig. 3.
Fig. 3. Algorithm for the ALM method [15].

Now, applying a general gradient strategy in the unconstrained part, we develop the recursive discrete-time algorithm

X(t+1) = X(t) − μ_t · ∇A(X, λ, r_p) ,     (19)

where μ_t is the step size in the tth iteration and ∇A(X, λ, r_p) is the gradient of A(X, λ, r_p), which is defined as

∇A(X, λ, r_p) = [∂A/∂x_1, ∂A/∂x_2, ..., ∂A/∂x_n]^T     (20)

in which

∂A/∂x_i = ∂F(X)/∂x_i + Σ_{j=1}^{m} s_j · (λ_j + 2 · r_p · φ_j) · ∂G_j/∂x_i
        + Σ_{k=1}^{q} (λ_{k+m} + 2 · r_p · H_k) · ∂H_k/∂x_i     (21)

and

s_j = 0 if G_j(X) ≤ −λ_j/(2 · r_p) ;  s_j = 1 otherwise.

The optimization problem to be solved, as given in equations (7)-(9), can be presented in mathematical form as follows: maximize the performance index (MRR)

F(X) = x_1 · x_2 · x_3     (22)

subject to the inequality constraints

G_j(X) = x_j − x_j′ ,  j = 1, 2, 3
G_j(X) = x″_{j−3} − x_{j−3} ,  j = 4, 5, 6
G_j(X) = y_{j−6}(x_1, x_2, x_3) − y′_{j−6} ,  j = 7, 8, 9, 10 ,     (23)

where x_j′ and x_j″ are the upper and lower limits on the input variable x_j, and y_j′ is the allowable value of the output y_j. Since there are no equality constraints, the partial derivative of H_k(X) with respect to x_i in equation (21) can be removed. Substituting equations (22) and (23) into (21), we obtain
∂A/∂x_i = −x_j · x_k + s_i · (λ_i + 2 · r_p · φ_i) − s_{i+3} · (λ_{i+3} + 2 · r_p · φ_{i+3})
        + Σ_{j=1}^{4} [s_{j+6} · (λ_{j+6} + 2 · r_p · φ_{j+6}) · ∂G_{j+6}/∂x_i] ,     (24)

where x_j and x_k are the two input variables other than x_i (maximizing the MRR corresponds to minimizing −F(X)), and φ_i, φ_{i+3} and φ_{j+6} depend on the subscript as follows:

φ_j = Max[G_j(X), −λ_j/(2·r_p)] = Max[(x_j − x_j′), −λ_j/(2·r_p)] ,  if j = 1, 2, 3
φ_j = Max[G_j(X), −λ_j/(2·r_p)] = Max[(x″_{j−3} − x_{j−3}), −λ_j/(2·r_p)] ,  if j = 4, 5, 6
φ_j = Max[G_j(X), −λ_j/(2·r_p)] = Max[(y_{j−6} − y′_{j−6}), −λ_j/(2·r_p)] ,  if j = 7, 8, 9, 10 .     (25)


Fig. 4. Circuit of a constrained nonlinear optimization solver based on the ALM method.

The parallel structure corresponding to equation (24) is proposed and shown in Fig. 4, where the step size μ_t is taken to be a constant. The main feature of this architecture is that it uses the ALM method, which converges more quickly than other
architecture is that it uses the ALM method, which converges more quickly than other
penalty methods. This optimization circuit is not like the neural networks described in
Section 2, but it has many of the features of neural networks. For example, this circuit
performs massive parallel processing of analog signals and is capable of adapting the
connection weights to accelerate convergence during the optimization process.
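As an illustration of how the behavior of such a circuit can be emulated in discrete time, the following Python sketch (ours, not part of the original work) iterates equation (19) with the gradient of equations (24)-(25) for the milling problem of equations (22)-(23). Here y(x) and dy_dx(x) are placeholders for the forward pass through the modeling network and its input-output Jacobian, and, mimicking the analog circuit rather than the strict two-level loop of Fig. 3, the multipliers are adapted concurrently with the design variables; the step size and iteration count are illustrative.

import numpy as np

def optimize_cutting(y, dy_dx, x, x_lo, x_hi, y_allow, r_p=1.0, mu=1e-3, iters=500):
    """Discrete-time emulation of the ALM optimization circuit, eqs (19), (24)-(25):
    maximize x1*x2*x3 subject to input bounds and output limits y_j(x) <= y_allow_j."""
    lam = np.zeros(10)                     # 3 upper bounds + 3 lower bounds + 4 output limits
    for _ in range(iters):
        G = np.concatenate([x - x_hi, x_lo - x, y(x) - y_allow])   # constraints, eq (23)
        phi = np.maximum(G, -lam / (2.0 * r_p))                    # eq (25)
        s = (G > -lam / (2.0 * r_p)).astype(float)                 # switch s_j of eq (21)
        J = dy_dx(x)                                               # shape (4, 3): dy_j/dx_i
        grad = np.empty(3)
        for i in range(3):
            grad[i] = (-np.prod(np.delete(x, i))                    # d(-MRR)/dx_i
                       + s[i]     * (lam[i]     + 2*r_p*phi[i])     # upper-bound term
                       - s[i + 3] * (lam[i + 3] + 2*r_p*phi[i + 3]) # lower-bound term
                       + np.sum(s[6:] * (lam[6:] + 2*r_p*phi[6:]) * J[:, i]))
        x = x - mu * grad                                          # gradient step, eq (19)
        lam = lam + 2.0 * r_p * phi                                # multiplier adaptation, eq (17)
    return x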

3.3. Procedure for NNBACO system in end milling


The proposed NNBACO system is shown in Fig. 5, in which two different kinds of neural networks are employed. Neural network (I) is used to learn the appropriate mappings between the input and output variables of the machining process, so we shall refer to it as the neural network for modeling. This neural network is a three-layered back-propagation network with three input nodes, representing feedrate per tooth, axial depth of cut, and radial depth of cut, and four output nodes, representing the forces in the X and Y directions, the cutting power, and the surface finish of the workpiece.
Neural network (II), which is described in Section 3.2, is used to determine the
optimal inputs, so we shall refer to this network as the neural network for optimization.
Since neural network (I) is used to describe the cutting process, the derivatives in
equation (24) can be calculated by a forward pass through the back-propagation
network. The detailed derivation is shown in the Appendix. This can greatly reduce
computing time in solving the optimization problem.
The procedure of this NNBACO system can be summarized as follows:

Fig. 5. The neural network based ACO (NNBACO) system.



STEP 1 Under the initial constraints, neural network (II) determines a set of optimal inputs and sends them to the milling machine and to neural network (I).
STEP 2 The measured outputs of the milling machine, corresponding to the optimal inputs, are used as the desired outputs to train neural network (I).
STEP 3 Neural network (II) uses the newly updated neural network (I) as the model of the end milling process to find the optimal inputs and sends them to the milling machine and to neural network (I).
STEP 4 STEP 2 and STEP 3 are repeated until the termination of the cutting process.
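In code form, STEPS 1-4 reduce to a simple loop. The sketch below is our paraphrase; machine_run, nn_model and nn_optimizer are placeholder names for the milling machine interface, neural network (I) and neural network (II), none of which is specified at this level of detail in the text.

def nnbaco_loop(nn_model, nn_optimizer, machine_run, constraints, x_init, cutting_done):
    """One possible rendering of STEPS 1-4 of the NNBACO procedure."""
    # STEP 1: determine initial optimal inputs from the (initially trained) model
    x = nn_optimizer(nn_model, constraints, x_init)
    while not cutting_done():
        y_measured = machine_run(x)                  # apply inputs; measure Fx, Fy, P, Ra
        nn_model.train(x, y_measured)                # STEP 2: on-line update of network (I)
        x = nn_optimizer(nn_model, constraints, x)   # STEP 3: re-optimize on updated model
    # STEP 4: the loop repeats until the cutting process terminates
    return x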
4. SIMULATION

In the following, the theoretical model of the end milling process is described. A
simulation employing this theoretical model and the results are presented to confirm
the feasibility of the NNBACO system.

4.1. Theoretical models of end milling


Since four cutting constraints are taken into account in this simulation (the average cutting forces in the X and Y directions, the cutting power and the surface finish), we use four corresponding theoretical models to describe the end milling process. These models are presented below.
The theoretical models for the average cutting forces in the X and Y directions presented by Lee et al. [17] are expressed as

F_x = K_t · N_f · f_t · A_d · {K_r · [sin(2·β_en) − 2·β_en] + [1 − cos(2·β_en)]}/(8·π)  (N)

F_y = K_t · N_f · f_t · A_d · {K_r · [1 − cos(2·β_en)] + [2·β_en − sin(2·β_en)]}/(8·π)  (N) ,     (26)

where β_en is the tooth entry angle, N_f is the number of teeth of the end mill, K_t is the ratio of the tangential cutting force to the chip load, and K_r is the ratio of the radial to the tangential cutting force. For the aluminum with hardness 55 HRB, K_t and K_r are

K_t = 757.7045 · f_t^(−0.558)  (N/mm²)

K_r = 0.2627 · f_t^(−0.2279) .     (27)

The theoretical model for the cutting power can be expressed as

P = 1.6 + 4.26- [(N,. A,~ Rd ft) 0"66] rpm/97422 " (kW), (28)

where rpm is the spindle speed, expressed in revolutions per minute.


According to [18], the surface finish of the workpiece can be modeled by

R_a = f_t²/(8·R)  (mm) ,     (29)

in which R_a is the average peak-to-valley height on the workpiece surface and R is the radius of the end mill. The deformation of the cutting tool is ignored here.
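For reference, the models of equations (26)-(29) translate directly into code. In the Python sketch below (ours), the material coefficients K_t and K_r are passed as arguments rather than hard-coding the fitted values of equation (27).

import numpy as np

def milling_forces(Kt, Kr, Nf, ft, Ad, beta_en):
    """Average cutting forces of eq (26); beta_en is the tooth entry angle in radians."""
    Fx = Kt*Nf*ft*Ad * (Kr*(np.sin(2*beta_en) - 2*beta_en) + (1 - np.cos(2*beta_en))) / (8*np.pi)
    Fy = Kt*Nf*ft*Ad * (Kr*(1 - np.cos(2*beta_en)) + (2*beta_en - np.sin(2*beta_en))) / (8*np.pi)
    return Fx, Fy             # (N)

def cutting_power(Nf, Ad, Rd, ft, rpm):
    """Cutting power of eq (28), in kW."""
    return 1.6 + 4.26 * (Nf*Ad*Rd*ft)**0.66 * rpm / 97422.0

def surface_finish(ft, R):
    """Ideal surface finish of eq (29): Ra = ft^2/(8 R); tool deflection ignored."""
    return ft**2 / (8.0 * R)  # (mm)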

4.2. Computer simulation and results


In order to investigate the stability and adaptation of the NNBACO system, we simulated a procedure similar to that described in Section 3.3, with the real end milling process replaced by the theoretical model, i.e. equations (26)-(29).
Since an untrained neural network has no knowledge about the system, unpredictable
results may be obtained. Therefore, initial training was applied to neural network (I).
In initial training, the first step is to generate the I/O samples from the theoretical
model. Different values spanning the allowable range of each input variable yield a
total of 420 input combinations. These input variables were used to determine the

output quantities, and they composed the set of I/O samples which were used for
initial training and testing.
The three-layered back-propagation network with three inputs and four outputs
described in Section 3.1 was employed for neural network (I). In the initial training, we started with three hidden nodes and increased the number of nodes until a low error rate was achieved. The error rate is the ratio of incorrect samples, i.e. samples
in which the absolute difference between the theoretical output and the output of the
neural network exceeds 10%, to all 420 samples. The simulation results for the initial
training, with ten hidden-layer nodes, are shown in Fig. 6; the results indicate that
after 150 iterations the error rate approached 10%.
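In other words, assuming the 10% criterion is taken relative to the theoretical output (consistent with the error definition of equation (30) below), the error rate can be computed as in this short sketch (ours):

import numpy as np

def error_rate(y_theory, y_nn, threshold=0.10):
    """Fraction of the samples whose relative error exceeds the threshold on any output."""
    rel_err = np.abs(y_theory - y_nn) / np.abs(y_theory)
    return np.any(rel_err > threshold, axis=1).mean()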
If the number of iterations used in training the neural network is increased, the
error rate may be reduced slightly. But, from a macroscopic point of view, the error
rate remains in the vicinity of 10%. Increasing the number of nodes in the hidden
layer may also decrease the error rate, but it may make it more difficult to realize the
proposed structure physically and may cause more time to be consumed in calculating
the optimal inputs. Hence ten hidden-layer nodes were chosen in this stage.
According to the experiment set-up and workpiece used in our laboratory, the limits
on the input and output variables that were used in the simulation were as follows:
The upper limits on the input variables:

f_t: 0.33 (mm/tooth)
R_d: 12.5 (mm)
A_d: 24.5 (mm).

The allowable outputs:

F_x: 800 (N)
F_y: 1600 (N)
P: 1.8 (kW)
R_a: 0.0015 (mm).

In order to examine the performance of the NNBACO system under specific cutting
conditions, one of the input variables (radial depth of cut) was specified as varying
between three specific values, changing from one value to another every one hundred iterations. Through this simulation, not only the static (constant R_d) but also the dynamic (varying R_d) performance of this system was investigated.

Fig. 6. Learning error in the initial training.



Figure 7(b) shows the three values (2, 6 and 10 mm) used for the radial depth of cut, and Figs 7(a) and (c) show the corresponding optimal inputs (A_d and f_t). Because the neural network cannot "learn" as fast as the change in the radial depth of cut, there were some transient states at the beginning of each state. When the radial depth of cut is 2 mm, the inequality constraint on the cutting power is satisfied more easily than when the depth is 6 or 10 mm. Hence the product of f_t and A_d, shown in Fig. 7(d), has the greatest value when R_d = 2 mm. In these figures, we also find that as the system varies, the steady optimal inputs do as well.
The neural network's outputs corresponding to the optimal inputs are shown in Fig. 8.
Fig. 7. Optimal inputs in simulation: (a) optimal feedrate per tooth f_t; (b) radial depth of cut R_d; (c) axial depth of cut A_d; (d) cost function f_t·A_d.

Because the calculation of the optimizer is based on the neural network, all the inequality constraints, i.e. equation (23), are satisfied. However, in order to satisfy the inequality constraint on the cutting power, all the other outputs of the neural network are far from their upper limits.
Most of the outputs of the theoretical model, shown in Fig. 9, are below the maximum limits, but some overshoots can be detected in the cutting power during the changes between states. We also find in these figures that the maximum MRR is mainly limited by the cutting power.
The learning errors in the simulation are shown in Fig. 10, in which the error is defined as follows:

Fig. 8. Neural network outputs in simulation: (a) force F_x; (b) force F_y; (c) power P; (d) surface finish R_a.



Fig. 9. Outputs from the theoretical model in simulation: (a) force F_x; (b) force F_y; (c) power P; (d) surface finish R_a.

error = |theoretical output − neural network output| / theoretical output .     (30)

Because of the large variation of the weights at the beginning of each step change in R_d, the optimizer takes more iterations to find a set of optimal inputs. As the neural network approaches the steady state, the optimal inputs can be obtained in fewer iterations. This means that the neural network describes the system more and more accurately in the static state. Compared with the results of initial training, the error diminished very quickly.

Fig. 10. The learning errors in the simulation: (a) F_x error; (b) F_y error; (c) power error; (d) surface finish error.

After the simulations in this section, we adopted some patterns from the initial
training set to test the underlying neural network and found that for patterns located
near the optimal inputs the neural network generated outputs with a very small error.
But for patterns far from the optimal values, the network generated outputs with a
large error. This result implies that the neural network has only partial knowledge
after training.
5. EXPERIMENTS
5.1. Experimental set-up
We conducted a series of experiments with end milling to confirm the feasibility of
the proposed NNBACO system.
The experimental set-up for the NNBACO system is shown in Fig. 11, which also defines the positive directions of the three axes. The feedrate command to the servo-driver was generated by a hardware interpolator. Each NC servo loop (per axis) included a DC servo motor and a P.I.D. controller. Velocity and position feedback were provided by a tachometer and encoders.
A vertical knee-type milling machine (3 horsepower spindle motor) was used for the experiments. The cutting forces were measured by a dynamometer (Kistler 9257B) and a charge amplifier (Kistler 5007). The A/D converter used was a 12-bit one; the collected data were saved to the memory of a 486 personal computer by DMA (direct memory access) transfer. The sampling time of the A/D converter was 1 ms and the NNBACO system updated the feedrate parameter every 100 ms.
The data processing and the control algorithm were implemented using Borland C++ V3.0 on a 32-bit 486 PC. The other machining conditions and parameters of the workpiece are listed below:

Workpiece: aluminum alloy (T6061).
Cutting tool: end mill, 25 mm diameter, four teeth, 160 mm total length.
Cutting conditions: spindle speed 300 rpm; axial depth of cut 25 mm; no coolant.

5.2. Experiments
We conducted three main series of experiments, in which two differently shaped workpieces were machined. Details of the experimental conditions and the dimensions of the workpieces are shown in Figs 12(a) and (b).
The first experiment was conventional cutting, in which the feedrate per tooth was set constant under the constraints on the maximum allowable cutting forces.
In the second experiment, the proposed NNBACO system was applied in end milling to demonstrate its performance. Because of the limitations of the laboratory equipment used, the neural network for modeling was set up with only two outputs, F_x and F_y, and the axial depth of cut was kept at 25 mm. There were ten nodes in the hidden layer. The radial depth of cut and the other cutting conditions were set up as described in Figs 12(a) and (b).
Fig. 11. The experimental set-up.



(a) Workpiece A (down milling; unit: mm):

Experiment 1 (constant feedrate): case a-1; cutting condition f_t: 0.12 mm/tooth; result: Fig. 13.
Experiment 2 (NNBACO system applied): case a-2; cutting condition f_t: 0.075-0.25 mm/tooth; result: Fig. 14.
Experiment 3 (R_d input to the NN kept constant = 2 mm): case a-3; cutting condition f_t: 0.075-0.25 mm/tooth; result: Fig. 15.
Maximum allowable force: F_x: 400 N, F_y: 450 N.

(b) Workpiece B (down milling; unit: mm):

Experiment 1 (constant feedrate): case b-1; cutting condition f_t: 0.12 mm/tooth; result: Fig. 16.
Experiment 2 (NNBACO system applied): case b-2; cutting condition f_t: 0.075-0.25 mm/tooth; result: Fig. 17.
Experiment 3 (R_d input to the NN kept constant = 2 mm): case b-3; cutting condition f_t: 0.075-0.25 mm/tooth; result: Fig. 18.
Maximum allowable force: F_x: 400 N, F_y: 450 N.

Fig. 12. (a) Cutting conditions for workpiece A. (b) Cutting conditions for workpiece B.

In the last experiment on each workpiece, regardless of the variation in the radial depth of cut, the input (radial depth of cut) to the neural network was fixed at specific "false" values as defined in Figs 12(a) and (b). The difference between the "real" and "false" input acts like noise. Thus, the error tolerance of the NNBACO system can be investigated in this experiment.

5.3. Results and discussion

The geometry of the first workpiece, shown in Fig. 12(a), consisted of two steps.
In order to satisfy the force limitation (the maximum allowable forces in the X, Y
directions were 400 and 450 N, respectively), the feedrate was kept at 0.12 (mm/tooth)
in case a-1. The measured force is shown in Fig. 13.
The NNBACO system was applied in case a-2, in which the feedrate decreased to a lower level as the cutting tool stepped up to a higher step. The results are shown in Fig. 14. The error defined is similar to equation (30), but the theoretical outputs
are replaced by the measured signals. Since the neural network cannot learn very
accurately when the radial depth of cut changes, there are two peaks in the measured
force. After a while, however, the measured force in the X direction stabilized around
the preset value, which means that the optimal cutting conditions are constrained by
the force in the X direction in this case. Due to the sudden change in the input to the
neural network, there is a peak in the neural network's output shown in Fig. 14(c).
Although the radial depth of cut varied from 2 to 3 mm, the input to the neural
network was kept at 2 mm throughout the tool path in case a-3. The results, as depicted
in Fig. 15, show that the peak in the neural network's output vanished, because there
was no change in the input to the network. However, the two peaks are still present
in the measured force and the error diagram, i.e. Figs 15(b) and (d).
In the remaining cases, the second workpiece had three steps, and the distance between steps was shorter. This workpiece, as shown in Fig. 12(b), includes ascending and descending parts. The maximum allowable forces in the X, Y
directions were again set as 400 and 450 N, respectively.
In case b-1, the feedrate was kept at 0.12 (mm/tooth). The measured forces are shown
in Fig. 16.
The results of case b-2 obtained using the NNBACO system are shown in Fig. 17.
For the same reasons described in case a-2, there are four peaks in the neural network's
output. In Fig. 17(c), we see that the optimal cutting conditions are limited sometimes
by the force in the X direction and sometimes by the force in the Y direction. From Fig. 17(d), we see that the error in the third step is not less than that in the first step. This means that the system only possesses knowledge near the optimum region. As the state (R_d) returns to a previous state, the system again needs to reoptimize the cutting conditions as before.

Fig. 13. The results of case a-1.


Fig. 14. The results of case a-2: (a) optimal feedrate per tooth f_t; (b) measured forces F_x, F_y; (c) NN's output: forces F_x, F_y; (d) error.

For the last case, b-3 (Fig. 18), as in the previous case, we see that the optimal conditions are always constrained by the maximum allowable force in the X direction. When the results of Figs 17(d) and 18(d) are compared, they seem to be different. In the case of Fig. 17(d), as the radial depth of cut changes, the input of the neural network changes suddenly, which causes the weights to update drastically and generates larger force errors. In the case of Fig. 18(d), this situation is avoided

Fig. 15. The results of case a-3: (a) optimal feedrate per tooth f_t; (b) measured forces F_x, F_y; (c) NN's output: forces F_x, F_y; (d) error.

and smooth transitions of the errors occur as the true radial depth of cut changes. However, for large variations in the depth of cut, the NNBACO system still needs to be given the correct input to reduce the convergence time.
The results of the real-time control experiments presented above show that, when applied in end milling, the NNBACO system is stable within the range of the cutting conditions examined.
Fig. 16. The results of case b-1.

Since the cost function is the MRR and the radial and axial depths of cut are kept at specific values in each case studied, we can use the time needed to cut through the workpiece as an index of efficiency; that is, the less time spent, the higher the MRR. The time spent in cutting through each workpiece is given in Table 1. From the table, it is clear that when the NNBACO system is applied the MRR is increased greatly.
Although a "false" input was applied in the last experiment on each workpiece, the MRR for these experiments is still high. Moreover, in some cases, because of less variation in the neural network's input (R_d), these cases perform better than the "true" input cases. This shows that the NNBACO system is an architecture with high fault tolerance.
6. CONCLUSIONS

In this paper, an architecture is presented for on-line determination of optimal cutting conditions in an end milling process. The proposed NNBACO system, which includes two different neural networks, differs from conventional adaptive control with optimization (ACO) systems in that (1) multiple constraints are handled simultaneously without increasing the processing time; (2) no specific process model is assumed, but rather a back-propagation network is employed; and (3) a special optimal mechanism is adopted to determine optimal cutting conditions.
Although the two neural networks in the NNBACO system are not realized by chip technology, the simulations and experiments presented here show that this architecture can effectively describe the behavior of end milling and increase the cutting efficiency. Because neural network (II) performs the optimization in the NNBACO system, neural network (I), the modeling network, only possesses knowledge in the vicinity of a local optimum. In the simulations and the experiments described here,
TABLE 1. TIME SPENT IN END MILLING WITH RESPECT TO DIFFERENT WORKPIECE SHAPES

                                              Workpiece A         Workpiece B
Experiment 1 (feedrate kept constant)         case a-1: 54.2 s    case b-1: 54.2 s
Experiment 2 (NNBACO applied)                 case a-2: 40.0 s    case b-2: 34.9 s
Experiment 3 (NNBACO applied, R_d neglected)  case a-3: 39.1 s    case b-3: 34.9 s
Fig. 17. The results of case b-2: (a) optimal feedrate per tooth f_t; (b) measured forces F_x, F_y; (c) NN's output: forces F_x, F_y; (d) error.

although there were some errors in the beginning, the NNBACO system gradually achieved the optimal cutting conditions. And since the NNBACO system itself determines the knowledge to be acquired, i.e. through the neural network for optimization, it behaves like a self-organizing system.
The NNBACO system is applied to end milling in this paper, but this system is clearly a general-purpose architecture, i.e. it can be extended to other machines to improve cutting efficiency.

Fig. 18. The results of case b-3: (a) optimal feedrate per tooth f_t; (b) measured forces F_x, F_y; (c) NN's output: forces F_x, F_y; (d) error.

REFERENCES
[1] Y. Koren, Computer Control of Manufacturing Systems. McGraw-Hill, New York (1983).
[2] M. A. Elbestawi, ASME J. Dyn. Syst. Meas. Control 112, 611 (1990).
[3] G. Chryssolouris and M. Guillot, ASME J. Engng Ind. 112, 122 (1990).
[4] G. R. Madey, J. Weinroth and V. Shah, J. Intell. Manufact. 3, 193 (1992).
[5] S. R. Rangwala and D. A. Dornfeld, IEEE Trans. Syst. Man Cybern. 19, 299 (1989).
[6] G. S. Choi, Z. Wang and D. A. Dornfeld, Proc. IEEE Int. Conf. Robotics and Automation, 1567 (1991).
[7] J. Chen, M. A. Shanblatt and C. Maa, Int. J. Neural Syst. 2, 331 (1992).
[8] D. W. Tank and J. J. Hopfield, IEEE Trans. Circuits Syst. CAS-33, 533 (1986).
[9] L. O. Chua and G. N. Lin, IEEE Trans. Circuits Syst. CAS-31, 182 (1984).
[10] M. P. Kennedy and L. O. Chua, IEEE Trans. Circuits Syst. CAS-34, 210 (1987).
[11] A. Rodriguez-Vazquez, R. Dominguez, A. Rueda, J. L. Huertas and E. Sanchez, IEEE Trans. Circuits Syst. 37, 384 (1990).
[12] A. Cichocki and R. Unbehauen, Int. J. Circuit Theory Appl. 19, 161 (1991).
[13] D. Rumelhart and J. L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols 1 and 2. MIT Press, Cambridge, MA (1986).
[14] K. S. Narendra and K. Parthasarathy, IEEE Trans. Neural Networks 1(4) (1990).
[15] G. N. Vanderplaats, Numerical Optimization Techniques for Engineering Design: with Applications. McGraw-Hill, New York (1984).
[16] R. T. Rockafellar, J. Opt. Theory Appl. 12, 555 (1973).
[17] A. C. Lee, S. T. Chiang and C. S. Liu, J. Chinese Soc. Mech. Engrs 12, 412 (1991).
[18] M. C. Shaw, Metal Cutting Principles. Oxford University Press, New York (1984).

APPENDIX

In this Appendix, the method used to calculate the derivatives of equation (24) by a forward pass through the network is described. With regard to Fig. 2, we define the following nomenclature:

net_{i,k}   the sum of the ith neuron's inputs in the kth layer, i.e.

net_{i,k} = Σ_j [w_{i,j,k} · o_{j,k−1}] ;     (A1)

o_{i,k}   the ith neuron's output in the kth layer, i.e.

o_{i,k} = f(net_{i,k}) .     (A2)

If f(·) is a sigmoid function, then equation (A2) becomes

f(net_{i,k}) = 1/(1 + e^{−net_{i,k}}) ;     (A3)

y_i   the output of the ith neuron in the output layer; if the neural network has three layers, then y_i = o_{i,3};
d_i   the desired output of the ith output node;
w_{i,j,k}   the weight between the ith neuron in the kth layer and the jth neuron in the (k−1)th layer;
E   the global error, defined as

E = 0.5 · Σ_i (d_i − y_i)² .     (A4)

Since the neural network is used for learning the cutting process, y_j is the jth output and x_i is the ith input. Then, in equation (24), the required derivatives are

∂G_{j+6}/∂x_i = ∂y_j/∂x_i = ∂y_j/∂net_{i,1} ≡ r_{j,i,1} .     (A5)

Evaluated with respect to distinct layers, (A5) takes different forms. The following derivation is divided into the output layer and the hidden and input layers. For the output layer,

r_{j,j,n} = ∂y_j/∂net_{j,n} = y_j · (1 − y_j) ;     (A6)

for the (n−1)th layer,

r_{j,i,n−1} = ∂y_j/∂net_{i,n−1} = (∂y_j/∂o_{i,n−1}) · (∂o_{i,n−1}/∂net_{i,n−1})
            = (∂y_j/∂net_{j,n}) · (∂net_{j,n}/∂o_{i,n−1}) · (∂o_{i,n−1}/∂net_{i,n−1}) ;     (A7)

for the (n−2)th layer,

r_{j,i,n−2} = ∂y_j/∂net_{i,n−2} = (∂y_j/∂o_{i,n−2}) · (∂o_{i,n−2}/∂net_{i,n−2})
            = {Σ_l [(∂y_j/∂net_{l,n−1}) · (∂net_{l,n−1}/∂o_{i,n−2})]} · (∂o_{i,n−2}/∂net_{i,n−2}) .     (A8)

Substituting (A8) into (A5), we can calculate the derivatives of equation (24) by a forward pass through
the neural network.
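In a three-layered network the recursion (A5)-(A8) collapses to a chain of matrix products scaled by sigmoid derivatives, so the whole Jacobian ∂y_j/∂x_i is available from one forward pass. A minimal Python sketch (ours) is:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def output_jacobian(W1, W2, x):
    """dy_j/dx_i for a three-layer sigmoid network via (A5)-(A8).
    W1: hidden-by-input weights; W2: output-by-hidden weights."""
    o1 = sigmoid(W1 @ x)                  # hidden-layer outputs
    y = sigmoid(W2 @ o1)                  # network outputs
    r_out = y * (1 - y)                   # (A6): dy_j/dnet_{j,n}
    # (A7): dy_j/dnet_{i,n-1} = r_out_j * w_{j,i,n} * o1_i * (1 - o1_i)
    r_hidden = (r_out[:, None] * W2) * (o1 * (1 - o1))[None, :]
    # (A5)/(A8): chain through the input weights to obtain dy_j/dx_i
    return r_hidden @ W1                  # shape: (n_outputs, n_inputs)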
