02 Data Mining-Partitioning Method
a road network, a vector space, or any other space. In other methods, the similarity
may be defined by connectivity based on density or contiguity, and may not rely on
the absolute distance between two objects. Similarity measures play a fundamental
role in the design of clustering methods. While distance-based methods can often
take advantage of optimization techniques, density- and contiguity-based methods
can often find clusters of arbitrary shape.
Clustering space: Many clustering methods search for clusters within the entire given
data space. These methods are useful for low-dimensional data sets. With high-
dimensional data, however, there can be many irrelevant attributes, which can make
similarity measurements unreliable. Consequently, clusters found in the full space
are often meaningless. It’s often better to instead search for clusters within different
subspaces of the same data set. Subspace clustering discovers clusters and subspaces
(often of low dimensionality) that manifest object similarity.
For each object in each cluster, the distance from the object to its cluster center is squared, and the distances are summed. This objective
function tries to make the resulting k clusters as compact and as separate as possible.
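Written out from this description, the within-cluster variation (referred to later, in Example 10.2, as Eq. 10.1) is

E = \sum_{i=1}^{k} \sum_{p \in C_i} \mathrm{dist}(p, c_i)^2, \qquad (10.1)

where C_i is the ith cluster, c_i is its centroid (the mean of the objects in C_i), and dist is the Euclidean distance.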
Optimizing the within-cluster variation is computationally challenging. In the worst
case, we would have to enumerate a number of possible partitionings that are exponen-
tial to the number of clusters, and check the within-cluster variation values. It has been
shown that the problem is NP-hard in general Euclidean space even for two clusters (i.e.,
k = 2). Moreover, the problem is NP-hard for a general number of clusters k even in the
2-D Euclidean space. If the number of clusters k and the dimensionality of the space d
are fixed, the problem can be solved in time O(n^{dk+1} log n), where n is the number of
objects. To overcome the prohibitive computational cost for the exact solution, greedy
approaches are often used in practice. A prime example is the k-means algorithm, which
is simple and commonly used.
“How does the k-means algorithm work?” The k-means algorithm defines the centroid
of a cluster as the mean value of the points within the cluster. It proceeds as follows. First,
it randomly selects k of the objects in D, each of which initially represents a cluster mean
or center. For each of the remaining objects, an object is assigned to the cluster to which
it is the most similar, based on the Euclidean distance between the object and the cluster
mean. The k-means algorithm then iteratively improves the within-cluster variation.
For each cluster, it computes the new mean using the objects assigned to the cluster in
the previous iteration. All the objects are then reassigned using the updated means as
the new cluster centers. The iterations continue until the assignment is stable, that is,
the clusters formed in the current round are the same as those formed in the previous
round. The k-means procedure is summarized in Figure 10.2.
Algorithm: k-means. The k-means algorithm for partitioning, where each cluster’s center
is represented by the mean value of the objects in the cluster.
Input: k, the number of clusters, and D, a data set containing n objects. Output: a set of k clusters.
[The step-by-step pseudocode of Figure 10.2 is not reproduced here.]
Figure 10.3 Clustering of a set of objects using the k-means method; for (b) update cluster centers and
reassign objects accordingly (the mean of each cluster is marked by a +).
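As a concrete illustration of this procedure, here is a minimal k-means sketch in Python with NumPy; the function and variable names (kmeans, labels, centers) are illustrative rather than taken from the text.

```python
import numpy as np

def kmeans(D, k, max_iter=100, seed=0):
    """Minimal k-means sketch: D is an (n, d) array of objects, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # Arbitrarily choose k objects of D as the initial cluster centers.
    centers = D[rng.choice(len(D), size=k, replace=False)]
    labels = np.zeros(len(D), dtype=int)
    for _ in range(max_iter):
        # Assign each object to the cluster whose center is nearest (Euclidean distance).
        dists = np.linalg.norm(D[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of the objects assigned to it.
        new_centers = np.array([
            D[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        # Terminate when the centers (and hence the assignment) no longer change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

For instance, calling kmeans(points, k=3) on a 2-D array of points like those in Example 10.1 below would carry out the iterative relocation illustrated in Figure 10.3.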
Example 10.1 Clustering by k-means partitioning. Consider a set of objects located in 2-D space,
as depicted in Figure 10.3(a). Let k = 3, that is, the user would like the objects to be
partitioned into three clusters.
According to the algorithm in Figure 10.2, we arbitrarily choose three objects as
the three initial cluster centers, where cluster centers are marked by a +. Each object
is assigned to a cluster based on the cluster center to which it is the nearest. Such a
distribution forms silhouettes encircled by dotted curves, as shown in Figure 10.3(a).
Next, the cluster centers are updated. That is, the mean value of each cluster is recal-
culated based on the current objects in the cluster. Using the new cluster centers, the
objects are redistributed to the clusters based on which cluster center is the nearest.
Such a redistribution forms new silhouettes encircled by dashed curves, as shown in
Figure 10.3(b).
This process iterates, leading to Figure 10.3(c). The process of iteratively reassigning
objects to clusters to improve the partitioning is referred to as iterative relocation. Even-
tually, no reassignment of the objects in any cluster occurs and so the process terminates.
The resulting clusters are returned by the clustering process.
The k-means method is not guaranteed to converge to the global optimum and often
terminates at a local optimum. The results may depend on the initial random selection
of cluster centers. (You will be asked to give an example to show this as an exercise.)
To obtain good results in practice, it is common to run the k-means algorithm multiple
times with different initial cluster centers.
The time complexity of the k-means algorithm is O(nkt), where n is the total number
of objects, k is the number of clusters, and t is the number of iterations. Normally, k ≪ n
and t ≪ n. Therefore, the method is relatively scalable and efficient in processing large
data sets.
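In code, such a multiple-restart strategy can be sketched on top of the kmeans function above (within_cluster_variation and best_of_restarts are illustrative helper names, not from the text):

```python
import numpy as np

def within_cluster_variation(D, labels, centers):
    # Sum of squared Euclidean distances from each object to its cluster center.
    return float(((D - centers[labels]) ** 2).sum())

def best_of_restarts(D, k, restarts=10):
    best = None
    for seed in range(restarts):                    # different initial centers per run
        labels, centers = kmeans(D, k, seed=seed)   # kmeans sketch from above
        e = within_cluster_variation(D, labels, centers)
        if best is None or e < best[0]:             # keep the run with the smallest variation
            best = (e, labels, centers)
    return best
```

Each run costs O(nkt), so the total cost grows linearly with the number of restarts.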
There are several variants of the k-means method. These can differ in the selection
of the initial k-means, the calculation of dissimilarity, and the strategies for calculating
cluster means.
The k-means method can be applied only when the mean of a set of objects is defined.
This may not be the case in some applications such as when data with nominal attributes
are involved. The k-modes method is a variant of k-means, which extends the k-means
paradigm to cluster nominal data by replacing the means of clusters with modes. It uses
new dissimilarity measures to deal with nominal objects and a frequency-based method
to update modes of clusters. The k-means and the k-modes methods can be integrated
to cluster data with mixed numeric and nominal values.
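As a rough sketch of these two ingredients, a simple-matching dissimilarity and a frequency-based mode update might look as follows in Python (illustrative only; a full k-modes implementation wraps these in the same iterative loop as k-means):

```python
from collections import Counter

def matching_dissimilarity(obj, mode):
    # Count the attributes on which the object and the cluster mode disagree.
    return sum(1 for a, b in zip(obj, mode) if a != b)

def update_mode(cluster):
    # For each attribute, take the most frequent value among the cluster's objects.
    return tuple(Counter(column).most_common(1)[0][0] for column in zip(*cluster))

# Example: the mode of three nominal objects described by (color, size).
print(update_mode([("red", "S"), ("red", "M"), ("blue", "M")]))  # ('red', 'M')
```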
The necessity for users to specify k, the number of clusters, in advance can be seen as a
disadvantage. There have been studies on how to overcome this difficulty, however, such
as by providing an approximate range of k values, and then using an analytical technique
to determine the best k by comparing the clustering results obtained for the different k
values. The k-means method is not suitable for discovering clusters with nonconvex
shapes or clusters of very different size. Moreover, it is sensitive to noise and outlier data
points because a small number of such data can substantially influence the mean value.
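One simple way to compare clustering results over a range of k values, in the spirit of the approach mentioned above (a sketch only; many other analytical techniques exist), is to track how the within-cluster variation drops as k grows and look for the point of diminishing returns:

```python
def variation_by_k(D, k_values):
    # Reuses the kmeans and within_cluster_variation sketches from earlier.
    results = {}
    for k in k_values:
        labels, centers = kmeans(D, k)
        results[k] = within_cluster_variation(D, labels, centers)
    return results   # inspect the values for a point where improvement levels off
```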
“How can we make the k-means algorithm more scalable?” One approach to mak-
ing the k-means method more efficient on large data sets is to use a good-sized set of
samples in clustering. Another is to employ a filtering approach that uses a spatial hier-
archical data index to save costs when computing means. A third approach explores the
microclustering idea, which first groups nearby objects into “microclusters” and then
performs k-means clustering on the microclusters. Microclustering is further discussed
in Section 10.3.
Example 10.2 A drawback of k-means. Consider seven points in 1-D space having the values
1, 2, 3, 8, 9, 10, and 25, respectively. Intuitively, by visual inspection we may imagine the
points partitioned into the clusters {1, 2, 3} and {8, 9, 10}, where point 25 is excluded
because it appears to be an outlier. How would k-means partition the values? If we
apply k-means using k = 2 and Eq. (10.1), the partitioning {{1, 2, 3}, {8, 9, 10, 25}} has
the within-cluster variation
(1 − 2)^2 + (2 − 2)^2 + (3 − 2)^2 + (8 − 13)^2 + (9 − 13)^2 + (10 − 13)^2 + (25 − 13)^2 = 196,
given that the mean of cluster {1, 2, 3} is 2 and the mean of {8, 9, 10, 25} is 13. Compare
this to the partitioning {{1, 2, 3, 8}, {9, 10, 25}}, for which k-means computes the within-
cluster variation as
(1 − 3.5)^2 + (2 − 3.5)^2 + (3 − 3.5)^2 + (8 − 3.5)^2 + (9 − 14.67)^2 + (10 − 14.67)^2 + (25 − 14.67)^2 = 189.67,
given that 3.5 is the mean of cluster {1, 2, 3, 8} and 14.67 is the mean of cluster {9, 10, 25}.
The latter partitioning has the lower within-cluster variation; therefore, the k-means
method assigns the value 8 to a cluster different from the one containing 9 and 10 due to
the outlier point 25. Moreover, the center of the second cluster, 14.67, is substantially far
from all the members in the cluster.
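The two variation values in this example are easy to verify with a few lines of Python:

```python
def variation(partitioning):
    # Within-cluster variation of a partitioning given as a list of 1-D clusters.
    total = 0.0
    for cluster in partitioning:
        mean = sum(cluster) / len(cluster)
        total += sum((x - mean) ** 2 for x in cluster)
    return total

print(variation([[1, 2, 3], [8, 9, 10, 25]]))   # 196.0
print(variation([[1, 2, 3, 8], [9, 10, 25]]))   # 189.66...
```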
“How can we modify the k-means algorithm to diminish such sensitivity to outliers?”
Instead of taking the mean value of the objects in a cluster as a reference point, we can
pick actual objects to represent the clusters, using one representative object per cluster.
Each remaining object is assigned to the cluster whose representative object is
the most similar to it. The partitioning method is then performed based on the principle of
minimizing the sum of the dissimilarities between each object p and its corresponding
representative object. That is, an absolute-error criterion is used, defined as
E = \sum_{i=1}^{k} \sum_{p \in C_i} \mathrm{dist}(p, o_i), \qquad (10.2)
where E is the sum of the absolute error for all objects p in the data set, and o_i is the
representative object of C_i. This is the basis for the k-medoids method, which groups n
objects into k clusters by minimizing the absolute error (Eq. 10.2).
When k = 1, we can find the exact median in O(n^2) time. However, when k is a
general positive number, the k-medoids problem is NP-hard.
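For concreteness, the absolute-error criterion of Eq. (10.2) can be computed as follows, assuming Euclidean distance and NumPy (the helper name absolute_error is illustrative):

```python
import numpy as np

def absolute_error(D, medoid_indices):
    """Eq. (10.2): sum over all objects of the distance to the nearest medoid."""
    medoids = D[np.asarray(medoid_indices)]
    dists = np.linalg.norm(D[:, None, :] - medoids[None, :, :], axis=2)
    # Each object belongs to the cluster of its closest medoid.
    return float(dists.min(axis=1).sum())
```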
The Partitioning Around Medoids (PAM) algorithm (see Figure 10.5 later) is a pop-
ular realization of k-medoids clustering. It tackles the problem in an iterative, greedy
way. As in the k-means algorithm, the initial representative objects (called seeds) are
chosen arbitrarily. We consider whether replacing a representative object by a nonrep-
resentative object would improve the clustering quality. All the possible replacements
are tried out. The iterative process of replacing representative objects by other objects
continues until the quality of the resulting clustering cannot be improved by any replace-
ment. This quality is measured by a cost function of the average dissimilarity between
an object and the representative object of its cluster.
Specifically, let o_1, ..., o_k be the current set of representative objects (i.e., medoids).
To determine whether a nonrepresentative object, denoted by o_random, is a good replacement
for a current medoid o_j (1 ≤ j ≤ k), we calculate the distance from every
object p to the closest object in the set {o_1, ..., o_{j−1}, o_random, o_{j+1}, ..., o_k}, and
use the distance to update the cost function. The reassignments of objects to
{o_1, ..., o_{j−1}, o_random, o_{j+1}, ..., o_k} are simple. Suppose object p is currently assigned to
a cluster represented by medoid o_j (Figure 10.4a or b). Do we need to reassign p to a
different cluster if o_j is being replaced by o_random? Object p needs to be reassigned to
either o_random or some other cluster represented by o_i (i ≠ j), whichever is the closest.
For example, in Figure 10.4(a), p is closest to o_i and therefore is reassigned to o_i. In
Figure 10.4(b), however, p is closest to o_random and so is reassigned to o_random. What if,
instead, p is currently assigned to a cluster represented by some other object o_i, i ≠ j?
Figure 10.4 Four cases of the cost function for k-medoids clustering: (a) p is reassigned to o_i; (b) p is reassigned to o_random; (c) no change; (d) p is reassigned to o_random. (Each panel shows the data objects, the cluster centers, and the assignment before and after swapping o_j for o_random.)
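Putting these pieces together, the greedy replacement loop of a PAM-style search can be sketched as follows (reusing the absolute_error helper above; this is a simplified sketch, not the exact pseudocode of Figure 10.5):

```python
import numpy as np

def pam(D, k, seed=0):
    rng = np.random.default_rng(seed)
    # Arbitrarily choose k objects as the initial medoids (seeds).
    medoids = list(rng.choice(len(D), size=k, replace=False))
    cost = absolute_error(D, medoids)
    improved = True
    while improved:
        improved = False
        # Try replacing each medoid by each nonrepresentative object.
        for j in range(k):
            for candidate in range(len(D)):
                if candidate in medoids:
                    continue
                trial = medoids[:j] + [candidate] + medoids[j + 1:]
                trial_cost = absolute_error(D, trial)
                if trial_cost < cost:          # keep the swap only if it lowers the error
                    medoids, cost = trial, trial_cost
                    improved = True
    return medoids, cost
```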
If an object that is one of the best k-medoids is not selected during sampling, CLARA will never find
the best clustering. (You will be asked to provide an example demonstrating this as an
exercise.)
“How might we improve the quality and scalability of CLARA?” Recall that when
searching for better medoids, PAM examines every object in the data set against every
current medoid, whereas CLARA confines the candidate medoids to only a random
sample of the data set. A randomized algorithm called CLARANS (Clustering Large
Applications based upon RANdomized Search) presents a trade-off between the cost
and the effectiveness of using samples to obtain clustering.
First, it randomly selects k objects in the data set as the current medoids. It then
randomly selects a current medoid x and an object y that is not one of the current
medoids. Can replacing x by y improve the absolute-error criterion? If yes, the replace-
ment is made. CLARANS conducts such a randomized search l times. The set of the
current medoids after the l steps is considered a local optimum. CLARANS repeats this
randomized process m times and returns the best local optimum as the final result.
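A sketch of this randomized search in Python (again reusing the absolute_error helper; the parameters l and m are the search-length and repetition counts described above):

```python
import numpy as np

def clarans(D, k, l=100, m=5, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(m):                           # m independent randomized searches
        medoids = list(rng.choice(len(D), size=k, replace=False))
        cost = absolute_error(D, medoids)
        for _ in range(l):                       # l randomized replacement attempts
            j = int(rng.integers(k))             # a current medoid x = medoids[j]
            y = int(rng.integers(len(D)))        # a candidate nonmedoid object y
            if y in medoids:
                continue
            trial = medoids[:j] + [y] + medoids[j + 1:]
            trial_cost = absolute_error(D, trial)
            if trial_cost < cost:                # replace x by y if it improves the error
                medoids, cost = trial, trial_cost
        if best is None or cost < best[1]:       # keep the best local optimum found
            best = (medoids, cost)
    return best
```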
While partitioning methods meet the basic clustering requirement of organizing a set of
objects into a number of exclusive groups, in some situations we may want to partition
our data into groups at different levels such as in a hierarchy. A hierarchical clustering
method works by grouping data objects into a hierarchy or “tree” of clusters.
Representing data objects in the form of a hierarchy is useful for data summarization
and visualization. For example, as the manager of human resources at AllElectronics,
you may organize your employees into major groups such as executives, managers, and
staff. You can further partition these groups into smaller subgroups. For instance, the
general group of staff can be further divided into subgroups of senior officers, officers,
and trainees. All these groups form a hierarchy. We can easily summarize or characterize
the data that are organized into a hierarchy, which can be used to find, say, the average
salary of managers and of officers.
Consider handwritten character recognition as another example. A set of handwrit-
ing samples may be first partitioned into general groups where each group corresponds
to a unique character. Some groups can be further partitioned into subgroups since
a character may be written in multiple substantially different ways. If necessary, the
hierarchical partitioning can be continued recursively until a desired granularity is
reached.
In the previous examples, although we partitioned the data hierarchically, we did not
assume that the data have a hierarchical structure (e.g., managers are at the same level
in our AllElectronics hierarchy as staff). Our use of a hierarchy here is just to summarize
and represent the underlying data in a compressed way. Such a hierarchy is particularly
useful for data visualization.
Alternatively, in some applications we may believe that the data bear an underly-
ing hierarchical structure that we want to discover. For example, hierarchical clustering
may uncover a hierarchy for AllElectronics employees structured on, say, salary. In the
study of evolution, hierarchical clustering may group animals according to their bio-
logical features to uncover evolutionary paths, which are a hierarchy of species. As
another example, grouping configurations of a strategic game (e.g., chess or checkers) in
a hierarchical way may help to develop game strategies that can be used to train players.
In this section, you will study hierarchical clustering methods. Section 10.3.1 begins
with a discussion of agglomerative versus divisive hierarchical clustering, which organize
objects into a hierarchy using a bottom-up or top-down strategy, respectively. Agglo-
merative methods start with individual objects as clusters, which are iteratively merged
to form larger clusters. Conversely, divisive methods initially let all the given objects
form one cluster, which they iteratively split into smaller clusters.
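As a minimal illustration of the bottom-up strategy (a single-linkage sketch chosen for brevity, not a specific method from this chapter), an agglomerative procedure repeatedly merges the two closest clusters until the desired number remains:

```python
import numpy as np

def agglomerative_single_linkage(D, target_k):
    # Start with each object in its own cluster.
    clusters = [[i] for i in range(len(D))]
    while len(clusters) > target_k:
        best_pair, best_dist = None, float("inf")
        # Find the two clusters whose closest members are nearest to each other.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(D[i] - D[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best_dist:
                    best_pair, best_dist = (a, b), d
        a, b = best_pair
        clusters[a].extend(clusters[b])   # merge; this decision is never undone
        del clusters[b]
    return clusters
```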
Hierarchical clustering methods can encounter difficulties regarding the selection
of merge or split points. Such a decision is critical, because once a group of objects is
merged or split, the process at the next step will operate on the newly generated clusters.
It will neither undo what was done previously, nor perform object swapping between
clusters. Thus, merge or split decisions, if not well chosen, may lead to low-quality
clusters. Moreover, the methods do not scale well because each decision of merge or
split needs to examine and evaluate many objects or clusters.
A promising direction for improving the clustering quality of hierarchical meth-
ods is to integrate hierarchical clustering with other clustering techniques, resulting in
multiple-phase (or multiphase) clustering. We introduce two such methods, namely
BIRCH and Chameleon. BIRCH (Section 10.3.3) begins by partitioning objects hierar-
chically using tree structures, where the leaf or low-level nonleaf nodes can be
viewed as “microclusters” depending on the resolution scale. It then applies other