18-Cellular Layout, Group Technology, Heuristic Approaches-08-11-2021 (08-Nov-2021) Material - II - 08-11-2021 - G.T - PDF
This chapter is based on Chapter 6 of Askin & Standridge (1993). It is recommended to read that chapter in parallel with these course notes.
In this GT context a typical approach is the use of composite part families. Consider e.g. the part family shown in Figure 3.1.
GT offers numerous benefits w.r.t. throughput time, WIP inventory, materials handling, job
satisfaction, fixtures, setup time, space needs, quality, finished goods, and labor cost; read also
Chapter 6.1 of Askin & Standridge, 1993.
In general, GT simplifies and standardizes. The approach to simplify, standardize, and internalize
through repetition produces efficiency.
Since a workcenter will work only on a family of similar parts, generic fixtures can be developed and used. Tooling can be stored locally since parts will always be processed through the same machines. Tool changes may be required due to tool wear only, not part changeovers (e.g. a press may have a generic fixture that can hold all the parts in a family without any change, or simply by changing a part-specific insert secured by a single screw). Hence setup time is reduced, and tooling cost is reduced.
Using queuing theory (the M/M/1 model) it is possible to show that if the setup time is reduced, the throughput time of the system is also reduced by the same percentage.
Clearly, also the organization should be structured around groups. Each group performs functions that
in many cases were previously attributed to different functional departments. For instance, in most
situations employee bonuses should be based on group performance.
Worker empowerment is an important aspect of manned cells. Exchanging ideas and workload is necessary. Many groups are allocated the responsibility for individual work assignments. By cross-training of technical skills, at least two workers can perform each task and all workers can perform multiple tasks. Hence there is some flexibility in work assignments.
The group should be an independent profit center in some sense. It should also retain the responsibility
for its performance and authority to affect that performance. The group is a single entity and must act
together to resolve problems.
Part coding is helpful for design and group formation. But, the time and cost involved in collecting
data, determining part families, and rearranging facilities can be seen as the major disadvantage of GT.
For designing new facilities and product lines, this is not so problematic: Parts must be identified and
designed, and facilities must be constructed anyway. The extra effort to plan under a GT framework is
marginal, and the framework facilitates standardization and operation thereafter. Hence, GT is a logical
approach to product and facility planning.
This gives a new ordering of the machines: B – D – C – A – E. Next, we sort columns w.r.t. decreasing binary numbers (note the new order of rows here):

                 part
machine    1   2   3   4   5   6   2^x value
   B       1   -   1   -   1   1      16
   D       1   -   -   -   1   1       8
   C       -   1   1   1   -   1       4
   A       -   1   -   1   -   -       2
   E       -   -   -   1   1   -       1
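The two sorting steps can be sketched in a few lines (a minimal Python sketch; the incidence matrix is taken from the example above, and the assumption that the machines initially stand in alphabetical order is mine, since the starting order is not shown):

```python
# Machine-part incidence matrix of the example (1 = part visits machine);
# machines assumed to start in alphabetical order.
inc = {
    "A": [0, 1, 0, 1, 0, 0],
    "B": [1, 0, 1, 0, 1, 1],
    "C": [0, 1, 1, 1, 0, 1],
    "D": [1, 0, 0, 0, 1, 1],
    "E": [0, 0, 0, 1, 1, 0],
}
parts = [1, 2, 3, 4, 5, 6]

def binary_value(bits):
    # read a 0/1 vector as a binary number, leftmost bit most significant
    return sum(b << (len(bits) - 1 - i) for i, b in enumerate(bits))

# Step 1: sort rows (machines) by decreasing binary value of their part vector.
machines = sorted(inc, key=lambda m: binary_value(inc[m]), reverse=True)

# Step 2: sort columns (parts) by decreasing binary value, reading the
# re-ordered rows top-down as bits (row weights 2^4, ..., 2^0).
def col_value(j):
    return binary_value([inc[m][j] for m in machines])

part_order = [parts[j] for j in
              sorted(range(len(parts)), key=col_value, reverse=True)]

print(machines)    # ['B', 'D', 'C', 'A', 'E']
print(part_order)  # [6, 5, 1, 3, 4, 2]
```

This reproduces the row order B – D – C – A – E; in general the two steps are repeated until neither ordering changes any more.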
By summing up all entries in a row we obtain total machine utilization. If this value exceeds one, at
least two machines are needed. More generally, this number must be rounded up to the next integer to
give the minimum number of machines needed. It should be noted that this minimum number of machines is a lower bound: it may be necessary to use more copies of some machines than this minimum number suggests.
Summing up the minimum number of machines over all machine types, we obtain that at least 9 machines are needed. Since not more than 4 machines are permitted in a group, we know that at least 9/4 = 2.25 groups are needed. Since only integer numbers of groups make sense, this must be rounded up to obtain the lower bound on the number of groups: at least 3 groups.
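This lower-bound computation is easy to verify in a few lines (Python; the utilization figures are the row sums from the machine-part matrix of the example):

```python
import math

# Row sums (total workload) per machine type, from the example table.
utilization = {"A": 0.9, "B": 0.7, "C": 1.2, "D": 1.4, "E": 0.9, "F": 1.1}
M = 4  # at most 4 machines per group

# Minimum number of copies of each machine type: round the workload up.
min_machines = {m: math.ceil(u) for m, u in utilization.items()}
total = sum(min_machines.values())

# Lower bound on the number of groups: round total/M up.
min_groups = math.ceil(total / M)

print(total, min_groups)  # 9 3
```

Both figures are lower bounds only; a feasible solution may need more machines and more groups.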
The Single-Pass Heuristic by Askin and Standridge consists of two steps:
1. obtain (nearly) block diagonal structure (e.g. using Binary Ordering)
2. form cells/groups one after another:
• Assign parts to groups (in sorting order)
• Also include necessary machines in group
• Add parts to group until either
o the capacity of some machine would be exceeded, or
o the maximum number of machines would be exceeded
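The two steps above can be sketched as follows (a minimal Python sketch; the data structures and names are my own, and I assume each machine copy has capacity 1, so that a workload entry is the fraction of one copy a part consumes):

```python
M = 4  # maximum number of machines per group

# Workloads from the binary-sorted example matrix:
# machine -> {part: fraction of one machine copy needed}.
load = {
    "D": {1: 0.2, 5: 0.3, 7: 0.5, 3: 0.4},
    "C": {1: 0.4, 4: 0.5, 6: 0.3},
    "A": {1: 0.3, 5: 0.6},
    "F": {7: 0.2, 3: 0.3, 4: 0.4, 2: 0.2},
    "B": {7: 0.1, 4: 0.3, 2: 0.3},
    "E": {6: 0.5, 2: 0.4},
}
part_order = [1, 5, 7, 3, 4, 6, 2]   # parts in column (binary) sorting order

groups = []                  # finished groups: (remaining capacities, #machines)
current, n_machines = {}, 0  # remaining capacity per machine type, machine count

for part in part_order:
    req = {m: w[part] for m, w in load.items() if part in w}
    # machine copies that would have to be added to fit this part
    extra = sum(1 for m, r in req.items() if current.get(m, 0.0) < r)
    if n_machines + extra > M:       # group would exceed M machines -> close it
        groups.append((current, n_machines))
        current, n_machines = {}, 0
    for m, r in req.items():
        if current.get(m, 0.0) < r:  # add one more copy of machine m
            current[m] = current.get(m, 0.0) + 1.0
            n_machines += 1
        current[m] -= r              # consume capacity for this part
groups.append((current, n_machines))

total_machines = sum(n for _, n in groups)
print(len(groups), total_machines)  # 3 11
```

On the example data this forms 3 groups with 11 machines in total.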
Example continued:
For binary sorting treat all entries as 1s. The result is the matrix
                 part
machine    1     5     7     3     4     6     2
  D       0.2   0.3   0.5   0.4    -     -     -
  C       0.4    -     -     -    0.5   0.3    -
  A       0.3   0.6    -     -     -     -     -
  F        -     -    0.2   0.3   0.4    -    0.2
  B        -     -    0.1    -    0.3    -    0.3
  E        -     -     -     -     -    0.5   0.4
Hence, the parts are considered in the following order: 1 – 5 – 7 – 3 – 4 – 6 – 2.
Iteration   part chosen   group   assigned machines     remaining capacity
    1            1          1     D, C, A               D 0.8, C 0.6, A 0.7
    2            5          1     –                     D 0.5, C 0.6, A 0.1
    3            7          2     D, F, B (new group)   D 0.5, F 0.8, B 0.9
    4            3          2     –                     D 0.1, F 0.5, B 0.9
    5            4          2     C                     C 0.5, F 0.1, B 0.6
    6            6          3     C, E (new group)      C 0.7, E 0.5
    7            2          3     F, B                  C 0.7, E 0.1, F 0.8, B 0.7
The underlying grouping problem can be stated as the following integer program (x_ij = 1 if part j is assigned to group i, y_ik = number of machines of type k in group i, a_jk = workload of part j on machine type k):

  min  Σ_{i∈I} Σ_{k∈K} y_ik

subject to

  Σ_{i∈I} x_ij = 1              for all j∈J        (each part j in exactly one group)
  Σ_{j∈J} a_jk · x_ij ≤ y_ik    for all i∈I, k∈K   (capacity of machine k in group i)
  Σ_{k∈K} y_ik ≤ M              for all i∈I        (not more than M machines in group i)
Hence, the simple single-pass heuristic did not find the optimal solution:
                     part
machine    1     2     3     4     5     6     7    sum   min. #   Single-Pass Heur.   opt.
  A       0.3    -     -     -    0.6    -     -    0.9     1             1              1
  B        -    0.3    -    0.3    -     -    0.1   0.7     1             2              2
  C       0.4    -     -    0.5    -    0.3    -    1.2     2             3              2
  D       0.2    -    0.4    -    0.3    -    0.5   1.4     2             2              2
  E        -    0.4    -     -     -    0.5    -    0.9     1             1              1
  F        -    0.2   0.3   0.4    -     -    0.2   1.1     2             2              2
  Sum                                               9            11             10
This gives the similarity coefficients:
s_ij     machines
machine   A      B      C      D      E      F
  A       1      1     0.33    0      0      0
  B       1      1     0.33    0      0      0
  C      0.33   0.33    1     0.75    0      0
  D       0      0     0.75    1     0.5    0.5
  E       0      0      0     0.5     1      1
  F       0      0      0     0.5     1      1
These have a similar function as the savings values known from transportation logistics. The following
hierarchical clustering heuristic is very similar to the savings algorithm known from VRP.
Before proceeding, one can eliminate all entries with s_ij ≤ T, where T is some parameter between 0 and 1. By omitting the “weak” links the structure becomes clearer. Here, we do not eliminate any links for the moment.
Hierarchical clustering heuristic:
1. Form N initial clusters (one for each machine). Compute similarity coefficients sij for all
machine pairs.
2. Merge clusters: Let i and j range over all clusters. Choose the pair of clusters (i*, j*) that has the highest similarity coefficient s_ij. Merge clusters i* and j* if possible.
If more than one cluster remains, go to 3, otherwise stop.
3. Update coefficients: Remove rows and columns i*, j* from the similarity coefficient matrix.
Replace them with a new row k and a new column k. For all remaining clusters r, the updated
similarity coefficients of this new cluster k are computed as:
srk = max {sri*, srj*}
In step 3, when clusters i* and j* are joined to become the new cluster k, the new similarity coefficient to some other cluster r is computed as the maximum of the corresponding similarity coefficients of clusters i* and j*. This is one possible setting.
• Other updating rules are possible, such as e.g. the average of the corresponding similarity coefficients.
In the first iteration, groups i* = A and j* = B are joined to become new group k = AB. The updated
similarity coefficients are
s_ij     machines
machine   AB     C      D      E      F
  AB       1    0.33    0      0      0
  C      0.33    1     0.75    0      0
  D        0    0.75    1     0.5    0.5
  E        0     0     0.5     1      1
  F        0     0     0.5     1      1
In the next iteration, clusters i* = E and j* = F are joined to become new group k = EF. The updated
similarity coefficients are:
s_ij     machines
machine   AB     C      D      EF
  AB       1    0.33    0      0
  C      0.33    1     0.75    0
  D        0    0.75    1     0.5
  EF       0     0     0.5     1
Next, clusters i* = C and j* = D are joined to become new group k = CD. The updated similarity
coefficients are:
s_ij     machines
machine   AB     CD     EF
  AB       1    0.33    0
  CD     0.33    1     0.5
  EF       0    0.5     1
If groups should be joined further (because the constraints permit this), clusters i* = CD and j* = EF
are joined to become new group k = CDEF.
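The merge sequence above can be reproduced with a short sketch (Python; names are my own, the updating rule is the max-linkage rule from step 3, and for illustration the clusters are merged all the way down to a single group, ignoring the size constraints):

```python
import itertools

# Pairwise similarity coefficients from the notes (symmetric);
# all pairs not listed have s_ij = 0.
sim = {("A", "B"): 1.0, ("A", "C"): 0.33, ("B", "C"): 0.33,
       ("C", "D"): 0.75, ("D", "E"): 0.5, ("D", "F"): 0.5,
       ("E", "F"): 1.0}

def s(i, j):
    return sim.get((i, j), sim.get((j, i), 0.0))

clusters = ["A", "B", "C", "D", "E", "F"]
merges = []
while len(clusters) > 1:
    # choose the pair of clusters with the highest similarity coefficient
    i, j = max(itertools.combinations(clusters, 2), key=lambda p: s(*p))
    k = i + j                         # name of the merged cluster, e.g. "AB"
    for r in clusters:                # max-linkage update: s_rk = max(s_ri, s_rj)
        if r not in (i, j):
            sim[(r, k)] = max(s(r, i), s(r, j))
    clusters = [c for c in clusters if c not in (i, j)] + [k]
    merges.append(k)

print(merges[:3])  # ['AB', 'EF', 'CD'] -- the three merges performed above
```

In practice the while-loop would stop as soon as a merge violates the group-size constraints.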
[Figure: weighted graph on nodes a–f, partitioned into the sets {a, c, e} and {b, d, f}; edge weights c_ab = 3, c_ac = 4, c_ae = 2, c_bc = 1, c_bd = 2, c_cd = 2, c_ce = 3, c_cf = 4, c_df = 1, c_ef = 6]
Cut-size = 3 + 1 + 2 + 4 + 6 = 16.
Try to improve this partitioning (i.e. reduce the cut-size) using KL.
For each node x ∈ { a, b, c, d, e, f } compute the gain value of moving node x to the other set:
Gx = Ex – Ix
where
Ex = cost of edges connecting node x with the other group (extra)
Ix = cost of edges connecting node x within its own group (intra)
This gives:
Ga = Ea – Ia = 3 – 4 – 2 = –3
Gb = Eb – Ib = 3 + 1 – 2 = +2
Gc = Ec – Ic = 1 + 2 + 4 – 4 – 3 = 0
Gd = Ed – Id = 2 – 2 – 1 = –1
Ge = Ee – Ie = 6 – 2 – 3 = +1
Gf = Ef – If = 4 + 6 – 1 = +9
The cost saving when exchanging a and b is essentially Ga + Gb. However, the cost 3 of the direct edge (a, b) was counted twice, although this edge still connects the two different groups after the exchange ⇒ its weight must be added back twice. Hence, the real “gain” (cost saving) of this exchange is
gab = Ga + Gb – 2cab
This must be computed for all possible combinations (pairs):
gab = Ga + Gb – 2cab = –3 + 2 – 2⋅3 = –7
gad = Ga + Gd – 2cad = –3 – 1 – 2⋅0 = –4
gaf = Ga + Gf – 2caf = –3 + 9 – 2⋅0 = +6
gcb = Gc + Gb – 2ccb = 0 + 2 – 2⋅1 = 0
gcd = Gc + Gd – 2ccd = 0 – 1 – 2⋅2 = –5
gcf = Gc + Gf – 2ccf = 0 + 9 – 2⋅4 = +1
geb = Ge + Gb – 2ceb = +1 + 2 – 2⋅0 = +3
ged = Ge + Gd – 2ced = +1 – 1 – 2⋅0 = 0
gef = Ge + Gf – 2cef = +1 + 9 – 2⋅6 = –2
The maximum gain is obtained by exchanging nodes a and f ⇒ new cut-size = 16 – 6 = 10.
Perform this exchange
Verify: new cut-size = 1 + 1 + 2 + 4 + 2 = 10
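The gain computation and the choice of the best exchange can be sketched as follows (Python; the edge weights are those of the example graph, and the function and variable names are my own):

```python
# Edge weights of the example graph (symmetric; missing pairs have weight 0).
w = {("a", "b"): 3, ("a", "c"): 4, ("a", "e"): 2, ("b", "c"): 1,
     ("b", "d"): 2, ("c", "d"): 2, ("c", "e"): 3, ("c", "f"): 4,
     ("d", "f"): 1, ("e", "f"): 6}

def c(x, y):
    return w.get((x, y), w.get((y, x), 0))

X, Y = {"a", "c", "e"}, {"b", "d", "f"}   # current partition

def cut_size(X, Y):
    return sum(c(x, y) for x in X for y in Y)

def gain(x, own, other):
    E = sum(c(x, y) for y in other)           # external cost E_x
    I = sum(c(x, y) for y in own if y != x)   # internal cost I_x
    return E - I                              # G_x = E_x - I_x

G = {x: gain(x, X, Y) for x in X}
G.update({y: gain(y, Y, X) for y in Y})

# exchange gains g_xy = G_x + G_y - 2*c_xy for all cross pairs
g = {(x, y): G[x] + G[y] - 2 * c(x, y) for x in X for y in Y}
best = max(g, key=g.get)

print(cut_size(X, Y), best, g[best])  # 16 ('a', 'f') 6
```

Exchanging the best pair (a, f) reduces the cut-size from 16 to 10, as computed above.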
[Figure: the graph after exchanging nodes a and f; new sets {f, c, e} and {b, d, a}]
Lock all exchanged nodes (a and f). New sets of unlocked nodes:
X’ = { c, e }
Y’ = { b, d }
Update the G-values of the unlocked nodes:
G’c = Gc + 2cca – 2ccf = 0 + 2(4 – 4) = 0
G’e = Ge + 2cea – 2cef = 1 + 2(2 – 6) = –7
G’b = Gb + 2cbf – 2cba = 2 + 2(0 – 3) = –4
G’d = Gd + 2cdf – 2cda = –1 + 2(1 – 0) = +1
Let us assume that we need 3 clusters with at least 2 and at most 4 machines each. We start with an
initial clustering with 3 machines each. For this, we simply use the rows of the above matrix
(apparently this is not the best clustering, but we want to demonstrate the improvement step).
Note that we have also added dummy machines (with zero-cost connections) to represent empty spaces that could be occupied by real machines (note that up to 4 machines are permitted).
We start with optimizing the partition Group 1 = {A1, A2, B1, Dummy1} and Group 2 = {B2, C1, C2,
Dummy2} while we keep Group 3 = {D, E, F, Dummy3} unchanged for the moment.
Next, we apply the KL heuristic to Group 1 and Group 2:
For all nodes in these groups, we compute Ex, Ix, and Gx.
Group Node i Ei Ii Gi
1 A1 0 2 -2
B1 0 2 -2
A2 1 0 1
Dummy1 0 0 0
2 B2 1 1 0
C1 0 1 -1
C2 0 0 0
Dummy2 0 0 0
Next we compute the pairwise exchange gains Gij:
Node i Node j Gij Gij’ Gij”
A1 B2 -2 -4
C1 -3 -3
C2 -2 -2
Dummy2 -2
B1 B2 -2 -4
C1 -3 -3
C2 -2 -2
Dummy2 -2
A2 B2 -1
C1 0
C2 1
Dummy2 1←
Dummy1 B2 0 -2
C1 -1 -1
C2 0 0←
Dummy2 0
We could choose the pairs (A2, C2), (A2, Dummy2), or (Dummy1, C1). We arbitrarily choose (A2,
Dummy2) and fix these two machines (nodes). Then we update Gi:
Gi’ = Gi + 2ciA2 – 2ciDummy2 in Group 1 and Gj’ = Gj + 2cjDummy2 – 2cjA2 in Group 2.
Group Node i G i’ G i”
1 A1 -2+0-0 = -2
B1 -2+0-0 = -2
Dummy2
Dummy1 0
2 B2 0+0-2 = -2
C1 -1+0-0 = -1
C2 0+0-0 = 0
A2
Then we update Gij. We can do this in the above table in a new column. No improvement is possible, but the switch (Dummy1, C2) is the best one (no change in cost). This change is performed and the machines Dummy1 and C2 are fixed. New group 1 = {A1, B1, Dummy2, C2} and group 2 = {B2, C1, Dummy1, A2}, where the fixed machines are crossed out. The next step uses Gi” and Gij”.
We see that only the first step brought an improvement and obtain the new partition:
group 1 = {A1, B1, Dummy2, Dummy1}, and group 2 = {B2, C1, C2, A2}.
We could repeat this pass of the KL heuristic, but since the cut-value of this partition is zero, we know
that this is already the optimal partition of these 8 machines (including 2 dummies).
In a similar way, the KL heuristic can be applied to groups 2 and 3 to exchange C2 and Dummy3. Then
the optimal partition with cut-value zero is obtained in this example.
In general, this procedure is a heuristic and it is not guaranteed that an optimal partition is found.
3.6 Metaheuristics
We have briefly discussed some of the classical constructive heuristics and improvement heuristics
from the literature.
Since we are dealing with a tactical problem (that is not solved every day) where long computation
times are acceptable, it makes sense to invest more time. This can be done by applying metaheuristics,
exact methods (up to a certain problem size) and combined methods (matheuristics).
There is a large literature on applying metaheuristics to grouping or clustering problems (mainly genetic algorithms or tabu search). Nevertheless, various possibilities exist to come up with new metaheuristic approaches.
Examples:
• Since the similarity coefficients are rather similar to the savings values of transportation
logistics (VRP), the idea of a savings based ant system for VRP could be transferred to
grouping problems.
• The KL algorithm could be considered a local search (maybe in a simplified faster version), and
could be combined with some larger shaking steps to a VNS. Other fast local searches
(exchange and move) could be considered.
• A matheuristic could easily be constructed by applying e.g. the principle of destroy and
reconstruct: for a large problem, a subset of groups could be “destroyed” and all their machines
and parts could be freed. Then this smaller problem (considering only these parts and machines)
could be solved using some exact algorithm (e.g. applying CPLEX to a MIP formulation).
When designing metaheuristics or matheuristics for grouping problems, there are also two possibilities:
• Work directly on the model formulation (e.g. the above examples)
• Use a more aggregated representation and then apply some constructive algorithm to compute
the solution out of it. For example, the metaheuristic could just work on the ordering of parts
and machines (to give a better block diagonal structure than binary ordering) and then the single
pass heuristic by Askin and Standridge could be used to construct a solution.
It should also be noted that there are various classes of grouping problems that differ w.r.t. objectives and constraints. This concerns e.g. duplication of machines and/or inter-group transport, etc.
References
Askin, R.G., Standridge, C.R.: Modeling & Analysis Of Manufacturing Systems, John Wiley & Sons,
1993.
B. Kernighan, S. Lin (1970): An Efficient Heuristic Procedure for Partitioning Graphs, Bell System Technical Journal 49 (2), 291-307.
Internet sources on Opitz, KK3 and some methods, e.g.:
• http://www.ielm.ust.hk/dfaculty/ajay/courses/ieem513/GT/GT.html
Examples for metaheuristics:
• T.L. James, E.C. Brown, K.B. Keeling (2007): A hybrid grouping genetic algorithm for the cell formation problem, Computers and Operations Research, Volume 34 (7), 2059-2079.
• I. Mahdavi, M.M. Paydar, M. Solimanpur, A. Heidarzade (2009): Genetic algorithm approach for solving a cell formation problem in cellular manufacturing, Expert Systems with Applications, Volume 36 (3), 6598-6604.
• T. Tunnukij, C. Hicks (2009): An Enhanced Grouping Genetic Algorithm for solving the cell formation problem, International Journal of Production Research, Volume 47 (7), 1989-2007.
• D. Cao, M. Chen (2004): Using penalty function and Tabu search to solve cell formation problems with fixed cell cost, Computers & Operations Research, Volume 31 (1), 21-37.
• J. Schaller (2005): Tabu search procedures for the cell formation problem with intra-cell transfer costs as a function of cell size, Computers and Industrial Engineering, Volume 49 (3), 449-462.